Notebooks/Weather-RNN.ipynb | ###Markdown
Weather prediction using a Recurrent Neural Network

Dr. Tirthajyoti Sarkar, Fremont, CA ([LinkedIn](https://www.linkedin.com/in/tirthajyoti-sarkar-2127aa7/), [Github](https://tirthajyoti.github.io))

For more tutorial-style notebooks on deep learning, **[here is my Github repo](https://github.com/tirthajyoti/Deep-learning-with-Python)**.

For more tutorial-style notebooks on general machine learning, **[here is my Github repo](https://github.com/tirthajyoti/Machine-Learning-with-Python)**.

---

In this notebook, we show how the long-term trend of key weather parameters (humidity, temperature, atmospheric pressure, etc.) can be predicted with decent accuracy using a simple recurrent neural network (RNN). We don't even need a sophisticated memory module such as a GRU or LSTM for this. A simple one-layer RNN-based model turns out to be sufficient to predict long-term trends from limited training data surprisingly well.

This is almost a proof of what Andrej Karpathy famously called **["The Unreasonable Effectiveness of Recurrent Neural Networks"](http://karpathy.github.io/2015/05/21/rnn-effectiveness/)**.

The dataset

The dataset consists of historical weather parameters (temperature, pressure, relative humidity) for major North American and other cities around the world over the period 2012 to 2017. Data points are recorded hourly, giving over 45,000 data points in total. By attempting a time-series prediction, we are implicitly assuming that the past weather pattern is a good indicator of the future. For this analysis, we focus only on the data for the city of San Francisco.

The full dataset can be found here: https://www.kaggle.com/selfishgene/historical-hourly-weather-data

Data loading and pre-processing
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
pd.set_option('mode.chained_assignment', None)
from keras.models import Sequential
from keras.layers import Dense, SimpleRNN
from keras.optimizers import RMSprop
from keras.callbacks import Callback
humidity = pd.read_csv("../Data/historical-hourly-weather-data/humidity.csv")
temp = pd.read_csv("../Data/historical-hourly-weather-data/temperature.csv")
pressure = pd.read_csv("../Data/historical-hourly-weather-data/pressure.csv")
humidity_SF = humidity[['datetime','San Francisco']]
temp_SF = temp[['datetime','San Francisco']]
pressure_SF = pressure[['datetime','San Francisco']]
humidity_SF.head(10)
humidity_SF.tail(10)
print(humidity_SF.shape)
print(temp_SF.shape)
print(pressure_SF.shape)
###Output
(45253, 2)
(45253, 2)
(45253, 2)
###Markdown
There are many `NaN` (blank) values in the dataset
###Code
print("How many NaN are there in the humidity dataset?",humidity_SF.isna().sum()['San Francisco'])
print("How many NaN are there in the temperature dataset?",temp_SF.isna().sum()['San Francisco'])
print("How many NaN are there in the pressure dataset?",pressure_SF.isna().sum()['San Francisco'])
###Output
How many NaN are there in the humidity dataset? 942
How many NaN are there in the temperature dataset? 793
How many NaN are there in the pressure dataset? 815
###Markdown
Choosing a point in the time-series for training data

We choose `Tp=7000` here, which means we will train the RNN with only the first 7000 data points and then let it predict the long-term trend (for the next 38,000+ data points). That is not a lot of training data compared to the number of test points, is it?
###Code
Tp = 7000
def plot_train_points(quantity='humidity',Tp=7000):
    plt.figure(figsize=(15,4))
    if quantity=='humidity':
        plt.title("Humidity of first {} data points".format(Tp),fontsize=16)
        plt.plot(humidity_SF['San Francisco'][:Tp],c='k',lw=1)
    if quantity=='temperature':
        plt.title("Temperature of first {} data points".format(Tp),fontsize=16)
        plt.plot(temp_SF['San Francisco'][:Tp],c='k',lw=1)
    if quantity=='pressure':
        plt.title("Pressure of first {} data points".format(Tp),fontsize=16)
        plt.plot(pressure_SF['San Francisco'][:Tp],c='k',lw=1)
    plt.grid(True)
    plt.xticks(fontsize=14)
    plt.yticks(fontsize=14)
    plt.show()
plot_train_points('humidity')
plot_train_points('temperature')
plot_train_points('pressure')
###Output
_____no_output_____
###Markdown
Interpolate data points to fill up `NaN` values

We observed some `NaN` values in the dataset. We could simply drop these points, but assuming that the changes in the parameters are not extremely abrupt, we can instead fill them in using simple linear interpolation.
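As a minimal illustration of what linear interpolation does (a toy series, not the actual weather data):

```
import numpy as np
import pandas as pd

# Toy series with gaps: one NaN between 10 and 20, two NaNs between 20 and 50
s = pd.Series([10.0, np.nan, 20.0, np.nan, np.nan, 50.0])

# Linear interpolation fills each gap with equally spaced values
print(s.interpolate())   # -> 10, 15, 20, 30, 40, 50
```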
###Code
humidity_SF.interpolate(inplace=True)
humidity_SF.dropna(inplace=True)
temp_SF.interpolate(inplace=True)
temp_SF.dropna(inplace=True)
pressure_SF.interpolate(inplace=True)
pressure_SF.dropna(inplace=True)
print(humidity_SF.shape)
print(temp_SF.shape)
print(pressure_SF.shape)
###Output
(45252, 2)
(45252, 2)
(45252, 2)
###Markdown
Train and test splits at `Tp=7000`
###Code
train = np.array(humidity_SF['San Francisco'][:Tp])
test = np.array(humidity_SF['San Francisco'][Tp:])
print("Train data length:", train.shape)
print("Test data length:", test.shape)
train=train.reshape(-1,1)
test=test.reshape(-1,1)
plt.figure(figsize=(15,4))
plt.title("Train and test data plotted together",fontsize=16)
plt.plot(np.arange(Tp),train,c='blue')
plt.plot(np.arange(Tp,45252),test,c='orange',alpha=0.7)
plt.legend(['Train','Test'])
plt.grid(True)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
Choose the embedding or step size

The RNN model requires a step value that contains n elements as an input sequence.

Suppose x = {1,2,3,4,5,6,7,8,9,10}

For step=1, x and its y prediction become:

| x | y |
|---|---|
| 1 | 2 |
| 2 | 3 |
| 3 | 4 |
| ... | ... |
| 9 | 10 |

For step=3, x and y contain:

| x | y |
|---|---|
| 1,2,3 | 4 |
| 2,3,4 | 5 |
| 3,4,5 | 6 |
| ... | ... |
| 7,8,9 | 10 |

Here, we choose `step=8`. In more complex RNNs, and in particular for text processing, this is also called the _embedding size_. The idea is that **we are assuming 8 hours of weather data can effectively predict the 9th hour's data, and so on.**
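A quick sketch of this windowing idea on a toy sequence (the variable names below are illustrative, not part of the notebook):

```
import numpy as np

x = np.arange(1, 11)   # 1..10
step = 3

windows = [x[i:i+step] for i in range(len(x) - step)]
targets = [x[i+step] for i in range(len(x) - step)]

print(windows[0], '->', targets[0])     # [1 2 3] -> 4
print(windows[-1], '->', targets[-1])   # [7 8 9] -> 10
```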
###Code
step = 8
# add step elements into train and test
test = np.append(test,np.repeat(test[-1,],step))
train = np.append(train,np.repeat(train[-1,],step))
print("Train data length:", train.shape)
print("Test data length:", test.shape)
###Output
Train data length: (7008,)
Test data length: (38260,)
###Markdown
Converting to a multi-dimensional array

Next, we'll convert the test and train data into matrices using the step value, as shown in the example above.
###Code
def convertToMatrix(data, step):
    X, Y = [], []
    for i in range(len(data)-step):
        d = i+step
        X.append(data[i:d,])
        Y.append(data[d,])
    return np.array(X), np.array(Y)
trainX,trainY =convertToMatrix(train,step)
testX,testY =convertToMatrix(test,step)
trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))
testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1]))
print("Training data shape:", trainX.shape,', ',trainY.shape)
print("Test data shape:", testX.shape,', ',testY.shape)
###Output
Training data shape: (7000, 1, 8) , (7000,)
Test data shape: (38252, 1, 8) , (38252,)
###Markdown
Modeling

Keras model with `SimpleRNN` layer

We build a simple function to define the RNN model. It uses a single neuron for the output layer because we are predicting a real-valued number here. The hidden layers use the ReLU activation function. The following arguments are supported:

- neurons in the RNN layer
- embedding length (i.e. the step length we chose)
- neurons in the densely connected layer
- learning rate
###Code
def build_simple_rnn(num_units=128, embedding=4, num_dense=32, lr=0.001):
    """
    Builds and compiles a simple RNN model
    Arguments:
    num_units: Number of units of the simple RNN layer
    embedding: Embedding length
    num_dense: Number of neurons in the dense layer following the RNN layer
    lr: Learning rate (uses RMSprop optimizer)
    Returns:
    A compiled Keras model.
    """
    model = Sequential()
    model.add(SimpleRNN(units=num_units, input_shape=(1,embedding), activation="relu"))
    model.add(Dense(num_dense, activation="relu"))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer=RMSprop(lr=lr), metrics=['mse'])
    return model
model_humidity = build_simple_rnn(num_units=128,num_dense=32,embedding=8,lr=0.0005)
model_humidity.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
simple_rnn_1 (SimpleRNN) (None, 128) 17536
_________________________________________________________________
dense_1 (Dense) (None, 32) 4128
_________________________________________________________________
dense_2 (Dense) (None, 1) 33
=================================================================
Total params: 21,697
Trainable params: 21,697
Non-trainable params: 0
_________________________________________________________________
###Markdown
A simple Keras `Callback` class to print progress of the training at regular epoch intervals

Since the RNN training is usually long, we want to see regular updates as epochs finish. However, we may not want this update every epoch, as that would flood the output stream. Therefore, we write a simple custom `Callback` class to print a finishing update every 50th epoch. You can add other bells and whistles to this class to print the error and other metrics dynamically.
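For instance, a variant of the callback defined in the next cell could also report the running loss (a small illustrative sketch, not part of the original notebook):

```
from keras.callbacks import Callback

class MyLossCallback(Callback):
    """Illustrative variant: also print the current training loss every 50 epochs."""
    def on_epoch_end(self, epoch, logs=None):
        if (epoch + 1) % 50 == 0:
            loss = (logs or {}).get('loss')
            print("Epoch number {} done, loss: {}".format(epoch + 1, loss))
```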
###Code
class MyCallback(Callback):
    def on_epoch_end(self, epoch, logs=None):
        if (epoch+1) % 50 == 0 and epoch>0:
            print("Epoch number {} done".format(epoch+1))
###Output
_____no_output_____
###Markdown
Batch size and number of epochs
###Code
batch_size=8
num_epochs = 1000
###Output
_____no_output_____
###Markdown
Training the model
###Code
model_humidity.fit(trainX,trainY,
epochs=num_epochs,
batch_size=batch_size,
callbacks=[MyCallback()],verbose=0)
###Output
WARNING:tensorflow:From c:\users\tirth\docume~1\personal\datasc~2\python~1\tf-gpu\lib\site-packages\tensorflow\python\ops\math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Epoch number 50 done
Epoch number 100 done
Epoch number 150 done
Epoch number 200 done
Epoch number 250 done
Epoch number 300 done
Epoch number 350 done
Epoch number 400 done
Epoch number 450 done
Epoch number 500 done
Epoch number 550 done
Epoch number 600 done
Epoch number 650 done
Epoch number 700 done
Epoch number 750 done
Epoch number 800 done
Epoch number 850 done
Epoch number 900 done
Epoch number 950 done
Epoch number 1000 done
###Markdown
Plot RMSE loss over epochs

Note that the `loss` metric available in the `history` attribute of the model is the MSE loss, and you have to take its square root to compute the RMSE.
###Code
plt.figure(figsize=(7,5))
plt.title("RMSE loss over epochs",fontsize=16)
plt.plot(np.sqrt(model_humidity.history.history['loss']),c='k',lw=2)
plt.grid(True)
plt.xlabel("Epochs",fontsize=14)
plt.ylabel("Root-mean-squared error",fontsize=14)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
Result and analysis

What did the model see while training?

We are emphasizing and showing again what exactly the model saw during training. If you look above, the model fitting code is,

```
model_humidity.fit(trainX,trainY,
                   epochs=num_epochs,
                   batch_size=batch_size,
                   callbacks=[MyCallback()],verbose=0)
```

So, the model was fitted with `trainX`, which is plotted below, and `trainY`, which is just the 8-step shifted and reshaped vector.
###Code
plt.figure(figsize=(15,4))
plt.title("This is what the model saw",fontsize=18)
plt.plot(trainX[:,0][:,0],c='blue')
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Now predict the future points

We can now generate predictions for the future by passing `testX` to the trained model.
###Code
trainPredict = model_humidity.predict(trainX)
testPredict= model_humidity.predict(testX)
predicted=np.concatenate((trainPredict,testPredict),axis=0)
###Output
_____no_output_____
###Markdown
See the magic!

When we plot the predicted vector, we see that it closely matches the true values, which is remarkable given how little training data was used and how far into the _future_ it had to predict. Classical time-series techniques such as ARIMA and exponential smoothing cannot predict very far into the future, and their confidence intervals quickly grow beyond being useful.

**Note carefully how the model is able to predict the sudden increase in humidity around time-point 12000. There was no indication of such a shape or pattern in the training set, yet it predicts the general shape pretty well from the first 7000 data points alone!**
###Code
plt.figure(figsize=(10,4))
plt.title("This is what the model predicted",fontsize=18)
plt.plot(testPredict,c='orange')
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Plotting the ground truth and model predictions together

We plot the ground truth and the model predictions together to show that the model follows the general trends in the ground-truth data pretty well. Considering that only about 15% of the data was used for training, this is quite remarkable. The boundary between the train and test splits is denoted by the vertical red line.

There are, of course, some obvious mistakes in the model predictions, such as humidity values going above 100 and some very low values. These can be pruned with post-processing, or a better model can be built with proper hyperparameter tuning.
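As a minimal post-processing sketch (illustrative only, not applied in this notebook), physically impossible humidity values could be clipped to the valid 0-100 range; this assumes the `predicted` array from the earlier cell:

```
import numpy as np

# Hypothetical cleanup: relative humidity is bounded by [0, 100]
predicted_clipped = np.clip(predicted, 0, 100)
```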
###Code
index = humidity_SF.index.values
plt.figure(figsize=(15,5))
plt.title("Humidity: Ground truth and prediction together",fontsize=18)
plt.plot(index,humidity_SF['San Francisco'],c='blue')
plt.plot(index,predicted,c='orange',alpha=0.75)
plt.legend(['True data','Predicted'],fontsize=15)
plt.axvline(x=Tp, c="r")
plt.grid(True)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.ylim(-20,120)
plt.show()
###Output
_____no_output_____
###Markdown
Modeling the temperature data

Since we have covered modeling the humidity data step-by-step in detail, we will model the other two parameters - temperature and pressure - quickly with similar code but without detailed commentary.
###Code
train = np.array(temp_SF['San Francisco'][:Tp])
test = np.array(temp_SF['San Francisco'][Tp:])
train=train.reshape(-1,1)
test=test.reshape(-1,1)
step = 8
# add step elements into train and test
test = np.append(test,np.repeat(test[-1,],step))
train = np.append(train,np.repeat(train[-1,],step))
trainX,trainY =convertToMatrix(train,step)
testX,testY =convertToMatrix(test,step)
trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))
testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1]))
model_temp = build_simple_rnn(num_units=128,num_dense=32,embedding=8,lr=0.0005)
batch_size=8
num_epochs = 2000
model_temp.fit(trainX,trainY,
epochs=num_epochs,
batch_size=batch_size,
callbacks=[MyCallback()],verbose=0)
plt.figure(figsize=(7,5))
plt.title("RMSE loss over epochs",fontsize=16)
plt.plot(np.sqrt(model_temp.history.history['loss']),c='k',lw=2)
plt.grid(True)
plt.xlabel("Epochs",fontsize=14)
plt.ylabel("Root-mean-squared error",fontsize=14)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.show()
trainPredict = model_temp.predict(trainX)
testPredict= model_temp.predict(testX)
predicted=np.concatenate((trainPredict,testPredict),axis=0)
index = temp_SF.index.values
plt.figure(figsize=(15,5))
plt.title("Temperature: Ground truth and prediction together",fontsize=18)
plt.plot(index,temp_SF['San Francisco'],c='blue')
plt.plot(index,predicted,c='orange',alpha=0.75)
plt.legend(['True data','Predicted'],fontsize=15)
plt.axvline(x=Tp, c="r")
plt.grid(True)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
Modeling the atmospheric pressure data
###Code
train = np.array(pressure_SF['San Francisco'][:Tp])
test = np.array(pressure_SF['San Francisco'][Tp:])
train=train.reshape(-1,1)
test=test.reshape(-1,1)
step = 8
# add step elements into train and test
test = np.append(test,np.repeat(test[-1,],step))
train = np.append(train,np.repeat(train[-1,],step))
trainX,trainY =convertToMatrix(train,step)
testX,testY =convertToMatrix(test,step)
trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))
testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1]))
model_pressure = build_simple_rnn(num_units=128,num_dense=32,embedding=8,lr=0.0005)
batch_size=8
num_epochs = 500
model_pressure.fit(trainX,trainY,
epochs=num_epochs,
batch_size=batch_size,
callbacks=[MyCallback()],verbose=0)
plt.figure(figsize=(7,5))
plt.title("RMSE loss over epochs",fontsize=16)
plt.plot(np.sqrt(model_pressure.history.history['loss']),c='k',lw=2)
plt.grid(True)
plt.xlabel("Epochs",fontsize=14)
plt.ylabel("Root-mean-squared error",fontsize=14)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.show()
trainPredict = model_pressure.predict(trainX)
testPredict= model_pressure.predict(testX)
predicted=np.concatenate((trainPredict,testPredict),axis=0)
index = pressure_SF.index.values
plt.figure(figsize=(15,5))
plt.title("Pressure: Ground truth and prediction together",fontsize=18)
plt.plot(index,pressure_SF['San Francisco'],c='blue')
plt.plot(index,predicted,c='orange',alpha=0.75)
plt.legend(['True data','Predicted'],fontsize=15)
plt.axvline(x=Tp, c="r")
plt.grid(True)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.show()
###Output
_____no_output_____ |
Projects/Report/Instabot/Instabot.ipynb | ###Markdown
InstaBot

Introduction - Part 1

Your friend has opened a new food-blogging handle on Instagram and wants to get famous. He wants to follow a lot of people so that he can get noticed quickly, but that is a tedious task, so he asks you to help him. As you have just learned automation using Selenium, you decide to help him by creating an Instagram bot. You need to create different functions for each task.
###Code
from selenium import webdriver
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.common.exceptions import TimeoutException
from bs4 import BeautifulSoup
import time
#opening the browser, change the path as per location of chromedriver in your system
driver = webdriver.Chrome(executable_path = 'C:/Users/admin/Downloads/Chromedriver/chromedriver.exe')
driver.maximize_window()
#opening instagram
driver.get('https://www.instagram.com/')
#update your username and password here
username = 'SAMPLE USERNAME'
password = 'SAMPLE PASSWORD'
#initializing wait object
wait = WebDriverWait(driver, 10)
###Output
_____no_output_____
###Markdown
Problem 1 : Login to your Instagram

Log in to your Instagram handle. Submit with the sample username and password.
###Code
def LogIn(username, password):
    try :
        #locating username textbox and sending username
        user_name = wait.until(EC.presence_of_element_located((By.NAME,'username')))
        user_name.send_keys(username)
        #locating password box and sending password
        pwd = driver.find_element_by_name('password')
        pwd.send_keys(password)
        #locating login button
        button = wait.until(EC.presence_of_element_located((By.XPATH,'//*[@id="loginForm"]/div[1]/div[3]/button/div')))
        button.submit()
        #Save Your Login Info? : Not Now
        pop = wait.until(EC.presence_of_element_located((By.XPATH,'//*[@id="react-root"]/section/main/div/div/div/div/button')))
        pop.click()
    except TimeoutException :
        print ("Something went wrong! Try Again")
#Login to your Instagram Handle
LogIn(username, password)
###Output
_____no_output_____
###Markdown
Problem 2 : Type “food” in the search bar

Type “food” in the search bar and print all the names of the Instagram handles that are displayed in the list after typing “food”. Note: make sure to avoid printing hashtags.
###Code
def search(s):
    try:
        #locating the search bar and sending text
        search_box = wait.until(EC.presence_of_element_located((By.CLASS_NAME,'XTCLo')))
        search_box.send_keys(s)
        #waiting till the search results are located
        wait.until(EC.presence_of_element_located((By.CLASS_NAME,'yCE8d')))
        #extracting all searched handles
        handle_names = driver.find_elements_by_class_name('yCE8d')
        names = []
        #extracting usernames, skipping hashtags
        for i in handle_names :
            if i.text[0] != '#' :
                names.append(i.text.split('\n')[0])
        time.sleep(5)
        #clearing the search bar
        driver.find_element_by_class_name('coreSpriteSearchClear').click()
        return names
    except TimeoutException :
        print('No Search Found!')
#extracting all the names of the Instagram handles that are displayed in the list after typing “food”, using search('food')
name_list = search('food')
for i in name_list :
    print(i)
###Output
dilsefoodie
foodtalkindia
foodmaniacinthehouse
food.darzee
yourfoodlab
dilsefoodie_
food
foodnetwork
foodinsider
foodiesfeature
foodplanet001
delhifoodguide
food_belly11
food_lunatic
delhifoodie
bangalore_foodjunkies
food_and_makeup_lover
foodgastic_amdavadi
street_food_chandigarh
buzzfeedfood
thefoodranger
hmm_nikhil
pune_food_blogger
food_junc
sattorifoodlab
foodie_girl_sneha
hyderabad.food.diaries
indianfood_lovers
foodofchennai
ndtv_food
foodelhi
fityetfoodie
foodiesdelhite
foodys
food_affair
Food Street / Thindi Bheedi
_foodpaths_
mumbaifoodie
chandigarhfoodguide
food_travel_etc
delhifoodwalks
foodofbengaluru
Shivaji Nagar
indian_food_freak
gastronome101
yum_crunch
thisisdelhi
VijayNagar Food Street
foodrush.recipe
foodtalkprivilege
ruchika_asatkar
###Markdown
Problem 3 : Searching and opening a profile

Search for and open the profile of “So Delhi”.
###Code
def search_open_profile(s):
    try:
        #locating the search box and sending text
        search_box = wait.until(EC.presence_of_element_located((By.CLASS_NAME,'XTCLo')))
        search_box.send_keys(s)
        #locating the top search result and clicking it
        res = wait.until(EC.presence_of_element_located((By.CLASS_NAME,'yCE8d')))
        res.click()
        time.sleep(5)
        #driver.back()
    except TimeoutException :
        print('No Search Found!')
search_open_profile('So Delhi')
###Output
_____no_output_____
###Markdown
Problem 4 : Follow/Unfollow a given handle

1. Open the Instagram handle of “So Delhi”.
2. Start following it. Print a message if you are already following it.
3. After following, unfollow the Instagram handle. Print a message if you have already unfollowed it.
###Code
def follow():
    try :
        #locating follow button
        btn = wait.until(EC.presence_of_element_located((By.CLASS_NAME,'_5f5mN')))
        #checking for text
        if btn.text == 'Follow' :
            btn.click()
            time.sleep(3)
        else :
            print('Already Following')
    except TimeoutException :
        print("Something Went Wrong!")
def unfollow():
    try :
        #locating follow button
        btn = wait.until(EC.presence_of_element_located((By.CLASS_NAME,'_5f5mN')))
        #checking for text
        if btn.text !='Follow' :
            btn.click()
            time.sleep(2)
            #locating popup window (when you click on follow button)
            pop_up = wait.until(EC.presence_of_element_located((By.CLASS_NAME,'aOOlW')))
            pop_up.click()
            time.sleep(3)
        else :
            print('Already Unfollowed')
    except TimeoutException :
        print("Something Went Wrong!")
#search for and open the 'So Delhi' instagram handle
search_open_profile('So Delhi')
#follow this instagram handle
follow()
#unfollow this instagram handle
unfollow()
###Output
_____no_output_____
###Markdown
Problem 5 : Like/Unlike posts

1. Like the top 30 posts of ‘dilsefoodie’. Print a message if you have already liked a post.
2. Unlike the top 30 posts of ‘dilsefoodie’. Print a message if you have already unliked a post.
###Code
def Like_Post():
    try :
        #scrolling to load the posts
        driver.execute_script('window.scrollTo(0, 6000);')
        time.sleep(3)
        driver.execute_script('window.scrollTo(0, -6000);')
        time.sleep(3)
        #locating posts
        posts = driver.find_elements_by_class_name('v1Nh3')
        for i in range(30):
            posts[i].click()
            time.sleep(2)
            #locating the like/unlike button
            like = wait.until(EC.presence_of_element_located((By.CLASS_NAME,'fr66n')))
            st = BeautifulSoup(like.get_attribute('innerHTML'),"html.parser").svg['aria-label']
            if st == 'Like' :
                like.click()
                time.sleep(2)
            else :
                print('You have already LIKED Post Number :', i+1)
                time.sleep(2)
            #locating the cross button for closing the post
            driver.find_element_by_class_name('yiMZG').click()
            time.sleep(2)
    except TimeoutException :
        print("Something Went Wrong!")
def Unlike_Post():
    try :
        #scrolling to load the posts
        driver.execute_script('window.scrollTo(0, 6000);')
        time.sleep(3)
        driver.execute_script('window.scrollTo(0, -6000);')
        time.sleep(3)
        #locating posts
        posts = driver.find_elements_by_class_name('v1Nh3')
        for i in range(30):
            posts[i].click()
            time.sleep(2)
            #locating the like/unlike button
            like = wait.until(EC.presence_of_element_located((By.CLASS_NAME,'fr66n')))
            st = BeautifulSoup(like.get_attribute('innerHTML'),"html.parser").svg['aria-label']
            if st == 'Unlike' :
                like.click()
                time.sleep(2)
            else :
                print('You have already UNLIKED Post Number', i+1)
                time.sleep(2)
            #locating the cross button for closing the post
            driver.find_element_by_class_name('yiMZG').click()
            time.sleep(2)
    except TimeoutException :
        print("Something Went Wrong!")
#search for and open the 'dilsefoodie' instagram handle
search_open_profile('dilsefoodie')
#Liking the top 30 posts
Like_Post()
#Unliking the top 30 posts
Unlike_Post()
###Output
You have already UNLIKED Post Number 2
You have already UNLIKED Post Number 5
###Markdown
Problem 6 : Extract list of followers

1. Extract the usernames of the first 500 followers of ‘foodtalkindia’ and ‘sodelhi’.
2. Now print all the followers of “foodtalkindia” that you are following but who don’t follow you back.
###Code
def Extract_Followers():
    try :
        # locating the followers button and clicking on it
        followers_btn = wait.until(EC.presence_of_all_elements_located((By.CLASS_NAME,'g47SY')))
        followers_btn[1].click()
        #locating the followers list
        frame = driver.find_element_by_class_name('isgrP')
        #scrolling until the first 500 users are loaded
        for i in range(50):
            time.sleep(1)
            driver.execute_script("arguments[0].scrollTop=arguments[0].scrollHeight",frame)
        names = []
        #extracting user data
        followers = driver.find_elements_by_class_name('d7ByH')
        #extracting usernames
        for i in followers[:500] :
            names.append(i.text.split('\n')[0])
        return names
    except TimeoutException :
        print("Something Went Wrong!")
###Output
_____no_output_____
###Markdown
First 500 followers of ‘foodtalkindia’
###Code
#search for and open the 'foodtalkindia' instagram handle
search_open_profile('foodtalkindia')
# Extract followers using the Extract_Followers() function
users = Extract_Followers()
ind = 1
for username in users:
    print(ind, username)
    ind += 1
###Output
1 farhinvox
2 baby_mawu
3 jaison_1934
4 anujdigital
5 chef_tamorady
6 badboyharsh192
7 tandooriflames
8 starlit.suites
9 cookbookbylele
10 shree___kashyap
11 secret7society
12 value_investor_15
13 jikky_94
14 srj_universe
15 clicksbyrg
16 thesumptuoussaga
17 divyankarya3467
18 royal_bhamani_11
19 wallacehajimuken
20 _letters_to_self
21 meeta4632
22 leobite9927
23 chefzillabyrimalakhani
24 hiyaguptaa
25 rasa__delights
26 intercuisine_food
27 mayu_makeovers_
28 im.vishal.gupta_
29 anii._.10
30 orbitluxuryvillas
31 daniiyaall1
32 shagorika_chronicles
33 t.thanu_san
34 peerohotsauce
35 _badasstaurus_
36 nandhaprem
37 mr_notty_saif
38 ashmi_raval18
39 _freedom_wings
40 naman4550
41 shamlibarvekar
42 aumagronica
43 vanashreeyoga
44 averiesen
45 sgkalidas
46 manpreet__lohat
47 __shreyanigam__
48 the_amrit7
49 khkhansa345
50 craftsimpex
51 i_am_jimesh_raval_
52 sonwane4745
53 culinary_delights_punjab
54 sanrinzbinge
55 jasmine_2902_
56 itz_rg_ranu
57 mahi_madhu27
58 nikita_ramteke
59 story_danish1
60 curvy_girl_lookbook
61 umeisone
62 thefoodfeelingconnection
63 foodiekomal
64 foodtechmantras
65 mr.abdul.wahab_12
66 meher.bakery
67 krsnaruby
68 dj.rhett
69 estherjcolaco
70 makka.97
71 mansy.me
72 _khaadya_padarth
73 explore_world_universe
74 local_ka_traveller
75 foodkitchenzhindi_official
76 karnika.patnayak
77 deepesh7052
78 ind_healthy_food
79 whats_cookinng
80 samratsethi1
81 the_alluring_cutie
82 jainvinny
83 camerainmyshoes
84 kanhaiya_shandilya
85 anshuman_1996_
86 foodstationbylalli
87 _priyankabohara_
88 indianfood7074
89 shri_krishna_mandir_
90 eyerajdamania
91 could_land
92 way2burger
93 foodhub_n_love
94 suganyadevi2112
95 11.37270
96 muriel_vally_ferns
97 __fatima.amoudi
98 kitetsu7
99 swikritisharma
100 kashish_cute
101 theshahikitchen
102 rishab0077
103 _imainak_roy_
104 oasiaan
105 samazing.patil
106 playxtpro
107 aloowaala
108 admagnetomedia
109 jaaaanlove12gma
110 simply_unique_1
111 my_own_backpack
112 yashikatayal_94
113 the_silentpoison
114 cafemommyjoon
115 quarantine_foodzilla
116 justlookingforheaven
117 abhiramyenuga
118 _shotsoflife__
119 suzanepattanayak
120 rupesh_prajapati_919
121 the_classy_bloke
122 ww123c0m
123 sharanpreet491
124 mahen_lams
125 al_bismillah_biriyani
126 alchemy_design_studio
127 nishant.neeraj.549
128 machhindramandlik
129 pragyasharmaofficial
130 thatinsomniackhan
131 subhashree_97512
132 snigdhaberamaity
133 sagarchowdhury1
134 crust_n_creams_
135 theginbrigade
136 best_famous_tantrik_baba_india
137 foodie.want.food
138 world_famous_best_astrologer
139 ms._bhatia
140 rapiddart
141 thebig_foodie
142 suravie_pb
143 hauzkhassocial
144 teddy_de_bakker_2
145 vk87792
146 sufitravelspvtltd
147 adityamane200
148 rahul_rachakatla
149 foodforuindia
150 pritidubey1993
151 _iam_reshu
152 love__maan0007
153 aaro.hi9766
154 the_sarash
155 pawan628sharma
156 poojaparam04
157 adh_arch
158 gittushah
159 sweetsagittarius.93
160 vaishnav_mal
161 chef_mesutcanturk
162 colors_valley
163 qrable
164 tarungm
165 charubijlani
166 food_stories_101
167 faizan0000007
168 food_frolic_30
169 madhavi.jedhe
170 thisguytarun
171 ashmeeetsethi
172 culinary_hub_
173 akshay_foodandtravel
174 classicgravyy
175 sohini_clicks
176 bhavyaadhingra
177 shrees_shetty
178 armankhanaf
179 official_salmakhan
180 bhukkad__billi
181 sbrmsthofficial01
182 foodieclicks6
183 theimpeccablestyle
184 tricky_cat_09
185 khopdi__lover__01
186 subhrangshuslookbook
187 fodie_junction
188 instapromotion_90
189 someshgurav
190 _cottoncandyandshandy_
191 aartisingh1203
192 kangscookingcorner
193 s_ou_raa_vr46
194 ___naqqu__xx_01
195 the_pleasure_of_loving_foods_
196 the_bartender_cd
197 desi__sperm_
198 muksa.ajbani
199 _foodiemedico_
200 djschocolate
201 somabhambure
202 junglee_jalebi
203 desiigner_subrat
204 mahnfrank2
205 asifmobeen
206 dudaflormendes
207 norbertportka
208 bhatiyare
209 matlubk323
210 his_pureevil.devil
211 gangwar.rinul
212 foodie__ashutosh
213 takeeateasy2020
214 bloglife41
215 djpdaminijparmar
216 deepthi_pullela
217 foodies4forever
218 meeansh_k
219 virdiandsons
220 krishaprivate
221 refectionsthejuicebar
222 foodographybyvipin
223 altafmalik725
224 shivanya3278
225 anishaa.9
226 _ghar_ka__khana
227 pushpendra.khandelwal
228 sumittapadiya
229 foodpact_2020
230 foodonshot
231 jil_247
232 ayanmars
233 advjasbirkaur
234 nehaa7860
235 status_lover_td
236 sa__ha___d
237 er_sandeepyadav
238 jamungoa
239 nutsnspreads
240 danish.q143
241 lohia.shalini
242 _miss_foodiee
243 rehman_blog
244 __adityareddy__
245 duyguaaltug
246 kunaka_munashe
247 flavorsbychefdeparth
248 dalai_seafoods
249 patelqueen167
250 grafikmanish
251 sonam_r_sachdev_09
252 occupiedwith_myself
253 u.panigrahi
254 food_fun_adventure
255 buzz__twin_kle
256 thefoodiezpage
257 garimarishisain
258 arajurkar
259 prsharma4382
260 chef_siblings
261 shubham_522
262 wellness_withmona
263 _chf_dalai
264 diyakatiyarr
265 chefrahilaga
266 mai_hoon_doctor__strange
267 sowrav.sowrav.792
268 durgabaisa2
269 a27_shutterbug
270 prince_chetanya_pradhan
271 vicentefelixrodriguez
272 ritubajaj13
273 samarth_and_kushal
274 poojamadaan3
275 ashwinsins
276 altitude_ace
277 o0o_boy_yash_o0o
278 ayushi_ajit23
279 anuyadav0707
280 its_shahnawaz_sta
281 hair.blessed06
282 balalalalab
283 rohitgupta6773
284 vipul_uniyal105
285 nehaljoshi1
286 paridhibhatiya
287 ansumanbisoi2003
288 subhan_pathan07
289 tripsntreats01
290 newbombaymart
291 premfulwani2005
292 pleuvine
293 sundarii_kumar
294 a_p_megavar__
295 notyourpringles_
296 bindaascookingco
297 foodie_goodie_by_sushi
298 shagunakanwar
299 akhand.pratapgur
300 hameed_kohistan
301 veshalis74
302 nehearttrusha
303 mycoconutstories
304 kaushal_prajapati24
305 mungase_agro
306 headovermeals_7
307 shalis_cooking_window
308 shreesidheshwartraders
309 foodtalkdel
310 ram_sir_karan_n_group
311 the_food_philosophy25
312 broker_chefjulian
313 joyd33p.ghosh
314 i_am_nehapant
315 black_tick_7a3
316 jwmarriottchd
317 mr._right_naga
318 rendi_0649
319 royalsu73
320 laziz_darwaze
321 gurbani949
322 pyramid_atmbar
323 shreya_kapoor925
324 tarashidresses
325 deepu_manugula
326 niladri.s.d
327 atri.ashwin
328 manjitvdeshmukh
329 ahmedazruff
330 megha._1
331 storytellershive
332 cafe_r19
333 abidevroy
334 tanhadiltraveller
335 vandupai
336 lwd7152
337 _._._a.l.i.z.e.h.h.h.h._._._._
338 venkzzy
339 food_universe143
340 svs305
341 trend_shoppers_crew
342 maison_cuisine
343 foodprism70
344 foodieesince88
345 maskaonwheels
346 lokesh.sendhav
347 themohammedg
348 sweeterabyminal
349 zoshorganic
350 traces_of_flavours
351 _night_boy________
352 vishi_001
353 bachelors_kitchen_1min
354 sushmaas_kitchen
355 vaishnavi_raghav_makeup_artist
356 cakeland523
357 dilliiblog
358 shaikhtaj917
359 itzzz_adarsh21
360 firkush09
361 rajuthapa1983
362 khaled.abou.186
363 kingroger123
364 theeverydaycooking
365 a2_unisex_fashionnn
366 loveforfood42
367 lavenderlimecollective
368 headover_meal
369 spiceofpune
370 mysticfoodindia
371 dakshesh.12ka4
372 zarak.han1010
373 amitsharma5914
374 fittness_boy1
375 froshtafoods
376 irenesgroup
377 delhiadvertisements
378 shaileejauhari
379 aggrenu1101
380 nehaaggarwal3
381 klassykunal
382 ruby_arora_87
383 svk.92
384 muskan_virmani
385 yash.ntpl450
386 the_great_indian_kitchen
387 culi.naryarts
388 siva.murugesan.9615
389 dhanraj97
390 solotraveller01
391 chandigarhfoodblog
392 thechocolatepuddin
393 mohit.rathour_
394 fernweh234
395 itsreallydelicious
396 the_lavender_kitchen
397 thegoanloadedplate
398 susmitabaishya1133
399 mahdimaj1374
400 harleens_kitchen_youtube
401 myrecipesblog
402 anis_turani
403 yadavdharmendra__17
404 avdhoot_247
405 fouzia1222
406 sundar_sethuraman
407 revatir03
408 delhi_hai
409 olive.brew
410 luc_ky7779
411 kiranvyas77
412 soul_kill_sort
413 duttranjan
414 housoflife
415 chefjoshii
416 aamy_ks
417 shahidchoudhary8654
418 emre49000
419 foodwrit
420 hungry._.again
421 sonal.k_0650
422 panchaleagle
423 abhi_guitar143
424 bakearoma_neha
425 latte_s_bakeanddough
426 foodstoriesbyrutuja
427 rishikk_mishra
428 dhrus29
429 fashionbloggershubofficial
430 sweetycookingstories
431 taj_natural_dry_fruit_halwa
432 amirsohel47
433 malaydinerol
434 kartikgupta542
435 batterbutterbakery
436 _rishugarg_
437 prabhaprabha954
438 ajay_rajwade
439 priyankagohel
440 aug_os_r
441 soyab.khilji.1
442 azhar.shaikh.0220
443 mital.312
444 cookingstudio54
445 armankhan074
446 foodie_mohapatras
447 anand4506kumar
448 manjurkhan1936
449 barnaliattachment
450 anupam_viya
451 __jaykoli60__
452 salonit16
453 nuskha_e_biryani
454 foodies_court9
455 sethiavish28
456 anjalithakur9516
457 amitatrehan
458 rk.gamer__
459 m_a_j_e_e_d_madboi
460 foodforfoodie95
461 t_h_o_u_g_h_t_s_ofme
462 zarpashkhan9
463 travelandeatz
464 i_am_not_fake_banana
465 junurajan
466 koulickbiswas
467 souravrana411
468 _.p.r.i.n.c.e._.s.h.a.r.m.a._
469 theripalrao
470 daddieskichen
471 mohammadibrahim008
472 nikitavarma244
473 thillaiashok
474 la_vedlysevents
475 theboy_who_lives
476 shrutik1012
477 patel.kamlesh.7140497
478 bonappetitko
479 arunkushwah4321
480 mairajkhan63323
481 vinod4891
482 chefharshitagarwal
483 dev__unstoppable
484 arpita_garnaik
485 _._shadow_of_heart._._
486 flavorfeast_piku
487 rachnakarproductionstudio
488 ni3.kumar
489 sanyasinghal91
490 satakshi.sharma.7thd.sdpsmzn
491 soulhunt_saiash
492 porshiya_s_bose
493 nutri_pickles
494 mr.prashu.4141
495 timetravel_turtle_
496 _zeeshanali786
497 kumarhariombca
498 bhumikachheda
499 _foodiexplorer_
500 anarsa_flavours_of_bihar
###Markdown
First 500 followers of ‘sodelhi’
###Code
#search for and open the 'sodelhi' instagram handle
search_open_profile('sodelhi')
# Extract followers using the Extract_Followers() function
users = Extract_Followers()
ind = 1
for username in users:
    print(ind, username)
    ind += 1
###Output
1 bambayallaofficiel
2 kanishq_basoya_dellhii0001
3 ashoka2906
4 choudhaarysagar
5 twenty4_into_seven
6 _____mahesh_panchabhai_____
7 the_positive_rays
8 21shrutikumari
9 vaani.n
10 chahatmalhotra_
11 _soflattering_
12 visualsbyfocus
13 shreyaa.xx__
14 ojasvikori
15 nitantgoradia
16 vaishaligupta6
17 vanshikhanagal
18 ayuroy_20
19 isagarkaushik
20 nau_tan_ki_nik
21 _kehkasha_
22 kaira_singh16
23 apala.03
24 dagar3905
25 romeopandit_0577
26 shrutika_94
27 monikabanerjee517
28 shubham.sr4
29 rizwan__rk__77
30 quotes_and_facts_0
31 _elenasingh
32 geeta_mehra_12
33 we_share_hearts
34 thestaplework
35 s.m.6662
36 jikky_94
37 sonammverma
38 llamanll_
39 amsterrgram
40 anushi.singla
41 jackyjwjwb
42 gallivanter.diaries
43 nishavikasbaliyan
44 fungusmaybe
45 nikhilkhetan
46 soodshaab1
47 asmitaarora22
48 goyalsaloni1
49 jollywood_daily
50 makeupnhairbyz
51 buddhism_teaching_page
52 bwithvanshika
53 kritijalan_
54 tanishkasandhu
55 sumanyudutta
56 printsolutionsdesign
57 lizanedsouza
58 kapurjiya_
59 adnan_00762
60 rocksaltlampindia
61 jeewandisha
62 manabi_12
63 abushaikh5708
64 vinthemoon
65 kritikapoor79
66 garry_7621
67 arpna_bharti
68 arshi28_
69 dikit_tsering
70 _delhistreet
71 foodiereviews45
72 asharaf.alam.5205
73 aarzoo_aroraa_
74 madhavgupta08
75 vaibhav_kain
76 mechanicalengineering_19
77 sid._.mahajan
78 mohi_t3600
79 dimariamit
80 naveenmg1818
81 luxuriousarenaofficial
82 manupriyakaplesh
83 _nambooo
84 aggarwaljii
85 nakshtra_003
86 art_and_craft_with_love
87 nishant3030
88 sgkalidas
89 shivangiamba
90 mes_sysoul01
91 mr_ayanofficial110
92 anandshivani
93 radhikka_khanna
94 indersaheb123
95 riyas._ym
96 sakshams21
97 jass_singh_kaler
98 bindia363
99 rahm_an7860
100 anurag_069
101 chicstyle_etmode_
102 sakshi__dhiman
103 accrete_designs
104 notror404erfound
105 pranavi.999
106 sanrinzbinge
107 su_naina_7
108 _cherr_david_
109 yogeshsharma0933
110 itz_rg_ranu
111 rakxw62376
112 iammissy6
113 ammuu8762
114 _the_untold_tales__
115 khana261549
116 kunaljoshi9555
117 chinmaiverma
118 mahek.singla
119 mohammad.sojib.73113
120 _nimmi.nimmi_
121 rasoolattia
122 food.overdudes
123 mrs.dr.rayadivyangsharma
124 zanuss.home
125 vishnu_shastri_ji_35
126 anythingbutsugar
127 anmol.rawlani
128 shiivaniyadav
129 bhukkad_at_the_nukkad
130 _thebrowndaughter
131 ishaatyagii
132 delhiboy8691
133 chitta_official_001
134 angerdoll_pg
135 meher.bakery
136 r_rahman567
137 army.adda
138 arche_rai
139 desi_traveler
140 mansy.me
141 anushreyabedi
142 varunsharma6003
143 mahipwadhawan96
144 aagamshah88
145 mohit.gupta.official
146 puniaanjali24
147 _sheena.bhatia_
148 khaopiyo_official
149 jerrry_sharma
150 photo_phactory_2244
151 ritz6196
152 hitmanpanwar.s
153 _nikhilkotecha_
154 arorasuksham
155 samratsethi1
156 chodogyaan11
157 namrata8.arora
158 shrreya__
159 safetogo.life
160 fuck.you8255
161 mcbc_memez
162 raghav2002agg
163 sarab_sainbhee
164 ashishrawat2911
165 realmirmasroor
166 gudduskhan143
167 rebha.wadhwa
168 bpeotiques
169 sachin4459singh
170 _tuhada_apna_chauhan_
171 rajatbhallaofficial
172 manankarangiya7
173 _simranjon_
174 rajamahendravaram_raju24
175 angadsngh
176 _.hell_o_leo._
177 arushisaini_
178 depesh.photography
179 himmilicious
180 shweta.saigal
181 petals.punjabi
182 abhishek.chauhan___
183 aksh_kg
184 thefaridhasan
185 islamic_facts786
186 afryn31
187 _imishan_
188 aasthasinghal786
189 sudeshmahala878
190 mayankrao00
191 cute_luv_mine11
192 earning_trick_04
193 heena.iftekhar
194 himanshidz
195 _preeti_honey_
196 kunal_ghatara
197 justperfect.content
198 justyatin
199 ajmeriet
200 shivendrasinghh
201 a__dev__
202 aryamanmehlawat03
203 saga.rjata
204 aadichaudary
205 _shaikh_ifra_
206 chianti_over_roses
207 cherrry.89
208 bikashdas9605
209 aniishachandel
210 emekannabude
211 mehar_sehgal_
212 thenotoriousbabaas
213 bao.the.frenchie
214 theprableenkaur
215 aksweets___
216 mridul_chillana
217 abhipahwa
218 designerkhusboofficial
219 tharwaniapoorva.ta
220 muskan_2212
221 punitchauhan16
222 i.akshaysharma
223 nath_sachindra
224 saumya_19
225 aslilava
226 anwar_irshad59
227 rachna_dutta
228 asmailaziz312
229 whenwritikawrites
230 _endlesswords__
231 notkashish
232 kunaaaalb__
233 oneimperfectguy
234 nikki19valya
235 akashgoel061089
236 _dixitjain_
237 ash_warriorr
238 pure_passionomist
239 aloowaala
240 _sachintichkule_
241 notsogregariousgirl
242 ashmeet_reyat
243 rs_fashionstyle
244 agarwalmayuka
245 ocherlybright
246 swapniil____
247 ncrblog
248 _miss_foodiee
249 jaspreet_mishukaur
250 _sumit_kaushik_
251 bharatpulyani
252 taannuu.u
253 milind_makeupartist
254 ansh_chadha3112
255 nanci_gayle
256 catchy_captures21
257 vijay_prd
258 shivendu.kumar
259 vishwastanwar
260 _itsprashant_ranjan
261 cci.chefstory
262 kartikey.1996
263 riyasharma13049
264 lexquestfoundation
265 anuarora68
266 _.itsmanya
267 tyagi_swati_08
268 jitenderkumar5336
269 _ayazmusic
270 potty6390
271 ayush_jhambar24
272 ch_pran.tiger_bsntpurya
273 nitish.gupta.08
274 scoresomestyle
275 thehappysoulss
276 jatinxtx
277 kanikasharma22.08
278 its_ankita15
279 chop.stick15
280 sp_chugh
281 sa.lmon6526
282 awsmvibe18
283 anjuchawla24gmail
284 _me_happiest_soul_
285 sheldonandmissy2020
286 anantyadavvv
287 nehaagg7
288 kshitij_bansal07
289 shivankitbagla
290 surmabhopali69
291 akashgoyal2296
292 high_ratedgabru0
293 gabbardhaliwal
294 tarot_palmist_sushmitah
295 gundeep_mann
296 lesliecuellar_
297 aditi_1011
298 a__nigma
299 must.aee
300 _fcukall_
301 mahesh.123111
302 x___sammmy___x
303 imkkmr
304 akashmandal621
305 meghna9962
306 justacoffeeperson
307 007_zayn_shuaib
308 eshanaoberoi
309 mannpesticides
310 mahalaxmi.industries
311 sleepyworld_14
312 anjalipvt.1995
313 sabhyapvt_
314 handa3131
315 social__distansingh
316 sattu_aulakh_3600
317 thetwobhukkads
318 x_poo_
319 ig_chronos_
320 harsh_sahni
321 __chaudhary__monu
322 riyaaa_097
323 reformedfinny
324 leena_500
325 anshika0044
326 manjyot2050
327 ritisharmaaa
328 kishh_shh
329 pratham_pb03
330 vipul_kumar001
331 the_classy_bloke
332 alishba7b3
333 antony____paul
334 shivi4frnds
335 peachorange.in
336 sonasolanki07
337 grisha.ggarwal
338 hanshul18
339 foodie.want.food
340 sameerupadhyay59
341 sachin.499
342 countryside_livingg
343 parineet09
344 firozansari257862
345 vro0om
346 _s.w.e.r.i.l_11_
347 izuu___2610
348 dharmesh.bunker
349 iam_divay
350 harsh_gupta_06
351 viidiishaaa
352 vipin9494
353 shubhambhandariiiii
354 silvershades_258
355 vanshika.thakkar21
356 _irl_inspired
357 robinsingh105
358 shikhar0104
359 rakhi_sharma_568
360 naveen_baneta
361 nitimasinghal
362 aulakh8602
363 yashansh10
364 tootoo983
365 avi._.meena
366 meenarm24
367 tendol_namtsang
368 sonam_verma0808
369 theartcart_30
370 amoreapparels
371 jsinha27
372 abidsheikh77.official
373 hauzkhassocial
374 dineshkumark27017
375 iam.sakshi.chaudhary
376 aaayusush
377 jayesh_khattar
378 poonamnegi_
379 teddy_de_bakker_2
380 manmeet_arora83
381 thisisramneekkaur
382 meenalaswal
383 akhilpal9999
384 puii_lalrem
385 _mani.aulakh_
386 ams.aman
387 tannukuswaha
388 uwyc_1409
389 himanshi.sejwal
390 ankushgabaa
391 ac_233484
392 suranshchopra
393 khanshazfa786
394 mireya_bymamta
395 sohilkhan2001sohil
396 foodforuindia
397 shrutii.sehgal
398 anup_misal15
399 aru.jha
400 idhashaha10
401 shubhijain.27
402 makeupstoriesbyrajnivirmani
403 abhinavshawarma
404 the_weddings451
405 yuvi.at.war
406 sriyansh_jain
407 _so.cha_
408 gmfedrina
409 navpreecollection
410 theb_righterside
411 _shaifali._.gaba_
412 kitchenandkulture
413 harbir2
414 chetanchetan16
415 love__maan0007
416 saintgraam
417 kaurparamveer
418 akshaygarg118
419 sona.sapehia
420 arpitvishnoi89
421 hxxetrii
422 fmlftsanika
423 _sargam_0608_
424 poorva.16
425 simrndhimn
426 sahil_shadow
427 tanwar_kanika
428 iamnbhullar
429 ias_paradise
430 _vanisha_m
431 barsha._.h
432 wanderlustraveler43
433 __.azaani
434 sylvanaesatpathy
435 hoodadeepika
436 k_saluja1608
437 khwaabonkamusafir
438 nishtha_kharbanda
439 neerumehta28
440 kinnikaur
441 myfoodstory4u
442 ga_education_india
443 priyachristina
444 kukrejasimran
445 thememeion
446 yukti_agarwal14
447 fierce143
448 jolly_sangeeta
449 sheetalbelwal
450 sanyamm_007
451 vikas.s_98
452 justanned.in
453 jamal_cooking
454 anmoll.jainn
455 dr.rashmi21
456 santoshkumarthukral
457 theclassiccollectionskn
458 lokesh.sharma__
459 _naveen_sheoran
460 _that_rainbowgirl
461 safarbhramantee
462 ketogenic_with_neha
463 alexandermediationgroup
464 tarun.1406
465 chavat_shakha_1
466 jass_sraw_
467 ivipinjhamb_
468 dev.editography
469 bhumika_143
470 _jagriti_26
471 anayashodwani
472 arman.sharma.7796
473 sanasiddiqui001
474 yogendraagarwal9
475 akieshabyakanksha
476 mehmakaurkohli
477 travelagain27
478 _muskan2003_
479 gourmetsfromindia
480 lalit_kumar_sambhariya
481 sumit3806singh
482 thakur_prashant_gaur
483 aryankashyap9563
484 anzeerupp
485 plantohub
###Markdown
Print all the followers of “foodtalkindia” that you are following but who don’t follow you back.
###Code
def Following():
    try :
        # locating the following button and clicking on it
        followers_btn = wait.until(EC.presence_of_all_elements_located((By.CLASS_NAME,'-nal3')))
        followers_btn[2].click()
        #locating the following list
        frame = driver.find_element_by_class_name('isgrP')
        #scrolling until all users are loaded
        for i in range(20):
            time.sleep(1)
            driver.execute_script("arguments[0].scrollTop=arguments[0].scrollHeight",frame)
        names = []
        #extracting user data
        following = driver.find_elements_by_class_name('d7ByH')
        #extracting usernames
        for i in following :
            names.append(i.text.split('\n')[0])
        return names
    except TimeoutException :
        print("Something Went Wrong!")
#search for and open the 'foodtalkindia' instagram handle
search_open_profile('foodtalkindia')
# Extract its followers using the Extract_Followers() function
followers_of_foodind = Extract_Followers()
#casting into a set
followers_of_foodind = set(followers_of_foodind)
#now find all users I follow: open my own profile with search_open_profile(), then extract the
#accounts I follow using Following()
search_open_profile(username)
followed_by_me = Following()
followed_by_me = set(followed_by_me)
#taking the intersection, so s1 contains only followers of 'foodtalkindia' whom I follow
s1 = followers_of_foodind.intersection(followed_by_me)
if len(s1) == 0:
    print('No such users found')
else:
    #now extracting my own followers
    my_follower = Extract_Followers()
    my_follower = set(my_follower)
    #removing my followers from s1, so s2 contains only users I follow who don't follow me back
    s2 = s1 - my_follower
    if len(s2) == 0:
        print('No such users found')
    else:
        for user in s2:
            print(user)
###Output
No such users found
###Markdown
Problem 7 : Check the story of ‘coding.ninjas’

Check the story of ‘coding.ninjas’. Consider the following scenarios and print messages accordingly:
1. You have already seen the story.
2. The user has no story.
3. View the story if it has not been seen yet.
###Code
def Check_Story():
    try:
        #locating the story / profile picture element
        story = wait.until(EC.presence_of_element_located((By.CLASS_NAME,"RR-M-.h5uC0")))
        #check the profile photo size to decide whether the story has already been seen
        height = driver.find_element_by_class_name('CfWVH').get_attribute('height')
        if int(height) == 166:
            print("Already seen the story")
        else:
            print("Viewing the story")
            driver.execute_script("arguments[0].click();",story)
    except:
        print("No story is available to view")
#searching 'coding.ninjas' using search_open_profile() function
search_open_profile('coding.ninjas')
#for checking story
Check_Story()
###Output
Already seen the story
|
examples/Genetic_algorithm_helper_functions/GA_final_version .ipynb | ###Markdown
Initialize and Load Data

Initialize the first iteration
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

np.random.seed(2)
conc_array = np.random.dirichlet((1,1,1,1,1), 7)
conc_array_actual = conc_array
def perform_UV_vis(next_gen_conc, conc_array_actual, spectra_array_actual):
    current_gen_spectra = conc_to_spectra(next_gen_conc, sample_spectra[:,1:sample_conc.shape[1]+1])
    conc_array_actual = np.vstack((conc_array_actual, next_gen_conc))
    spectra_array_actual = np.vstack((spectra_array_actual, current_gen_spectra))
    return current_gen_spectra, conc_array_actual, spectra_array_actual
def export_to_csv(conc_array):
    sample_volume = 300 #uL
    conc_array = conc_array*sample_volume
    for i in range(conc_array.shape[0]):
        for j in range(conc_array.shape[1]):
            if conc_array[i,j] < 5:
                conc_array[i,j] = 0
    conc_array = np.round(conc_array)
    df = pd.DataFrame(conc_array, columns =['red-stock', 'green-stock', 'blue-stock', 'yellow-stock', 'water-stock'])
    df.to_csv("concentration_array.csv", index = False)
def import_from_excel(filename, conc_array_actual, spectra_array_actual):
    sample_spectra = pd.read_excel(filename)
    current_gen_spectra = np.asarray(sample_spectra)
    # note: next_gen_conc is taken from the enclosing (global) scope here
    conc_array_actual = np.vstack((conc_array_actual, next_gen_conc))
    spectra_array_actual = np.vstack((spectra_array_actual, current_gen_spectra))
    return current_gen_spectra, conc_array_actual, spectra_array_actual
###Output
_____no_output_____
###Markdown
Export Concentrations as CSV
###Code
conc_array
export_to_csv(conc_array)
###Output
_____no_output_____
###Markdown
Import UV-Vis Spectra from Excel
###Code
df = load_df(r'Spectra_iteration_0.xlsx')
df = subtract_baseline(df, 'A8')
df = normalize_df(df)
df = df.drop(['A8'], axis = 1)
current_gen_spectra = np.asarray(df)
wavelength = current_gen_spectra[:,0]
current_gen_spectra = current_gen_spectra[:,1:].T
###Output
_____no_output_____
###Markdown
Load Desired Spectra
###Code
df_desired = load_df(r'Target_spectra.xlsx')
df_desired = subtract_baseline(df_desired, 'C2')
df_desired = normalize_df(df_desired)
df_desired = df_desired.drop(['C2'], axis = 1)
x_test = df_desired['C1'].values.reshape(-1,1)
###Output
_____no_output_____
###Markdown
Additional Steps for the Zeroth Iteration
###Code
spectra_array = current_gen_spectra
conc_array_actual = conc_array
spectra_array_actual = spectra_array
###Output
_____no_output_____
###Markdown
Analyze Fitness of Zeroth Iteration
###Code
next_gen_conc, current_gen_spectra, median_fitness_list, max_fitness_list, iteration, mutation_rate_list, fitness_multiplier_list = zeroth_iteration(conc_array, spectra_array, x_test)
plot_spectra(current_gen_spectra, x_test, wavelength, iteration)
###Output
_____no_output_____
###Markdown
Nth Iteration
###Code
Iterations = 25
Moves_ahead = 3
GA_iterations = 6
n_samples = 7
seed = np.random.randint(1,100,1)[0]
mutation_rate, fitness_multiplier, mutation_rate_list_1, fitness_multiplier_list_1, best_move, best_move_turn, max_fitness, surrogate_score, next_gen_conc_1 = nth_iteration(Iterations, Moves_ahead, GA_iterations, n_samples, current_gen_spectra, next_gen_conc, x_test, conc_array_actual, spectra_array_actual, seed, median_fitness_list, max_fitness_list, iteration, mutation_rate_list, fitness_multiplier_list)
best_move
###Output
_____no_output_____
###Markdown
Run if satisfied with the best moves taken:
###Code
next_gen_conc = next_gen_conc_1
mutation_rate_list = mutation_rate_list_1
fitness_multiplier_list = fitness_multiplier_list_1
###Output
_____no_output_____
###Markdown
Export Concentrations to CSV
###Code
export_to_csv(next_gen_conc)
###Output
_____no_output_____
###Markdown
Create those samples using the OT2 and perform UV-Vis on them

Import Spectra from Excel
###Code
df = load_df(r'Spectra_iteration_2.xlsx')
df = subtract_baseline(df, 'B8')
df = normalize_df(df)
df = df.drop(['B8'], axis = 1)
current_gen_spectra = np.asarray(df)
wavelength = current_gen_spectra[:,0]
current_gen_spectra = current_gen_spectra[:,1:].T
conc_array_actual = np.vstack((conc_array_actual, next_gen_conc))
spectra_array_actual = np.vstack((spectra_array_actual, current_gen_spectra))
###Output
_____no_output_____
###Markdown
Plot the maximum and median fitness of the spectra of the next batch of samples.
###Code
median_fitness_list, max_fitness_list, iteration = plot_fitness(next_gen_conc, current_gen_spectra, x_test, median_fitness_list, max_fitness_list, iteration)
plot_spectra(current_gen_spectra, x_test, wavelength, iteration)
a = np.asarray([1,2,3])
b = np.asarray([4,5,6])
fig, ax = plt.subplots()
ax.plot(a,b)
ax.set_xticks(a)
###Output
_____no_output_____ |
jupyter_notebooks/machine_learning/mastering_machine_learning/04. Model Evaluation/02. Evaluation Metrics.ipynb | ###Markdown
Evaluation Metrics

So far, we have mainly used the $R^2$ metric to evaluate our models. There are many other evaluation metrics provided by scikit-learn, and they are all found in the metrics module.

Score vs Error/Loss metrics

Take a look at the [metrics module][1] in the API. You will see a number of different evaluation metrics for classification, clustering, and regression. Most of these end with either the word 'score' or 'error'/'loss'. The functions that end in 'score' return a metric where **greater is better**. For example, the `r2_score` function returns $R^2$, for which a greater value corresponds to a better model. The metrics that end in 'error' or 'loss' return a metric where **lesser is better**. That should make sense intuitively, as minimizing error or loss is what we naturally desire for our models.

Regression Metrics

Take a look at the regression metrics section of the scikit-learn API. These are all functions that accept the ground-truth y values along with the predicted y values and return a metric. Let's see a few of these in action. We will read in the data, build a model with a few variables using one of the supervised regression models we've covered, and then use one of the metric functions.

[1]: https://scikit-learn.org/stable/modules/classes.html#sklearn-metrics-metrics
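A tiny toy illustration of this convention (the numbers are arbitrary):

```
from sklearn.metrics import r2_score, mean_squared_error

y_true = [3.0, 5.0, 7.0]
y_pred = [2.5, 5.0, 7.5]

r2_score(y_true, y_pred)            # a score: closer to 1 is better
mean_squared_error(y_true, y_pred)  # an error: closer to 0 is better
```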
###Code
import pandas as pd
import numpy as np
housing = pd.read_csv('../data/housing_sample.csv')
X = housing[['GrLivArea', 'GarageArea', 'FullBath']]
y = housing['SalePrice']
X.head()
###Output
_____no_output_____
###Markdown
Let's use a random forest to model the relationship between the input and sale price and complete our standard three-step process.
###Code
from sklearn.ensemble import RandomForestRegressor
rfr = RandomForestRegressor(n_estimators=50)
rfr.fit(X, y);
###Output
_____no_output_____
###Markdown
First, use the built-in `score` method which always returns the $R^2$ for every regression estimator.
###Code
rfr.score(X, y)
###Output
_____no_output_____
###Markdown
Let's verify that we can get the same result with the corresponding `r2_score` function from the metrics module. We need to get the predicted y-values and pass them along with the ground truth to the function.
###Code
from sklearn.metrics import r2_score
y_pred = rfr.predict(X)
r2_score(y, y_pred)
###Output
_____no_output_____
###Markdown
Let's use a different metric such as mean squared error (MSE).
###Code
from sklearn.metrics import mean_squared_error
mean_squared_error(y, y_pred)
###Output
_____no_output_____
###Markdown
Easy to construct our own function

Most of these metrics are easy to compute on your own. The function below computes the same result as above.
###Code
def mse(y_true, y_pred):
    error = y_true - y_pred
    return np.mean(error ** 2)
mse(y, y_pred)
###Output
_____no_output_____
###Markdown
Taking the square root of the MSE gives the root mean squared error (RMSE), which provides insight into what the average error is, though it will theoretically be slightly larger than the average error. There is no function in scikit-learn to compute the RMSE. We can use the numpy `sqrt` function to calculate it.
###Code
rmse = np.sqrt(mean_squared_error(y, y_pred))
rmse
###Output
_____no_output_____
###Markdown
The units of this metric are the same as the target variable, so we can think of our model as "averaging" about $18,000 of error. The word averaging is in quotes because this isn't the actual average error, but it will be somewhat near it. Use the `mean_absolute_error` function to calculate the actual average error per observation.
###Code
from sklearn.metrics import mean_absolute_error
mean_absolute_error(y, y_pred)
###Output
_____no_output_____
###Markdown
We can compute this manually as well.
###Code
(y - y_pred).abs().mean()
###Output
_____no_output_____
###Markdown
Different metrics with cross validation

It is possible to use these scores when doing cross validation with the `cross_val_score` function. It has a `scoring` parameter that accepts a string representing the type of score you want returned. Let's see an example with the default $R^2$ and then with other metrics. We use a linear regression here and continue to keep the data shuffled as before by setting the `random_state` parameter to 123.
###Code
from sklearn.model_selection import cross_val_score, KFold
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
kf = KFold(n_splits=5, shuffle=True, random_state=123)
###Output
_____no_output_____
###Markdown
By default, if no scoring method is given, `cross_val_score` uses the same metric as what the `score` method of the estimator uses.
###Code
cross_val_score(lr, X, y, cv=kf).round(2)
###Output
_____no_output_____
###Markdown
Use the string 'r2' to return $R^2$ values, which is the default and will be the same as above.
###Code
cross_val_score(lr, X, y, cv=kf, scoring='r2').round(2)
###Output
_____no_output_____
###Markdown
Use the documentation to find the string namesThe possible strings for each metric are found in the [user guide section of the official documentation][1]. The string 'neg_mean_squared_error' is used to return the negative of the mean squared error.[1]: https://scikit-learn.org/stable/modules/model_evaluation.htmlcommon-cases-predefined-values
###Code
cross_val_score(lr, X, y, cv=kf, scoring='neg_mean_squared_error')
###Output
_____no_output_____
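###Markdown
To report these cross-validated errors in the target's units, you can negate the scores and take the square root, which gives per-fold RMSE values (a small illustrative sketch, not part of the original notebook).
###Code
# negate the negative MSE scores and take the square root to get per-fold RMSE
neg_mse_scores = cross_val_score(lr, X, y, cv=kf, scoring='neg_mean_squared_error')
np.sqrt(-neg_mse_scores).round(-3)
###Output
_____no_output_____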
###Markdown
Why are negative values returned?In an upcoming chapter, we cover model selection. scikit-learn selects models based on their scores and treats higher scores as better. But, with mean squared error, lower scores are better. In order to make this score work with model selection, scikit-learn negates this value when doing cross validation so that higher scores are indeed better. For instance, a score of -9 is better than -10. Mean squared log errorAnother popular regression scoring metric is the mean squared log error. This works by computing the natural logarithm of one plus the predicted value and one plus the ground truth (`np.log1p`), then calculates the error, squares it and takes the mean. Let's import the function from the metrics module and use it.
###Code
from sklearn.metrics import mean_squared_log_error
mean_squared_log_error(y, y_pred)
###Output
_____no_output_____
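###Markdown
As with MSE, this metric is straightforward to reproduce manually, which also makes the log1p transformation explicit (a verification sketch, not part of the original notebook).
###Code
# manual mean squared log error: mean squared difference of log(1 + y)
np.mean((np.log1p(y) - np.log1p(y_pred)) ** 2)
###Output
_____no_output_____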
###Markdown
We can use the metric with `cross_val_score` by passing it the string 'neg_mean_squared_log_error'. Again, greater scores here are better.
###Code
cross_val_score(lr, X, y, cv=kf, scoring='neg_mean_squared_log_error')
###Output
_____no_output_____
###Markdown
Finding the error metricsYou can find all the error metrics by navigating to the scikit-learn API or the user guide, but you can also find them directly in the `SCORERS` dictionary in the `metrics` module. The keys of the dictionary are the string names of the metrics. If you are on Python 3.7 or later, the dictionary will be ordered. There are eight (as of now) regression metrics and they are listed first. Let's take a look at their names.
###Code
from sklearn.metrics import SCORERS
list(SCORERS)[:8]
###Output
_____no_output_____
###Markdown
Let's use the maximum error as our cross validation metric, which simply returns the maximum error of all the predictions (negated by the scorer, like the other error metrics, so that greater is still better).
###Code
cross_val_score(lr, X, y, cv=kf, scoring='max_error').round(-3)
###Output
_____no_output_____
###Markdown
Most of the built-in scoring metrics are for classification or clustering and not for regression. Let's find the total number of scoring metrics.
###Code
len(SCORERS)
###Output
_____no_output_____ |
courses/2020-LMU-RDM/NDA2015_nix_excercise/module2nix.ipynb | ###Markdown
Metadata
###Code
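# Note: this excerpt assumes that `nixf` (a NIX file opened for writing, e.g. via
# nix.File.open("module2x.h5", nix.FileMode.Overwrite), matching the `nix` module used later to read it)
# and the per-session blocks b097, b108, b147 and b151 were created in earlier cells not shown here.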
m_gen = nixf.create_section("General","odml.general")
m_gen.create_property("Experimenter", values_or_dtype='Alexa Riehle')
m_gen.create_property("Institution", values_or_dtype='CNRS, Marseille, France')
m_gen.create_property("RelatedPublications", values_or_dtype='doi: 10.1523/JNEUROSCI.5441-08.2009')
m_exp = nixf.create_section("Experiment","odml.experiment")
m_exp.create_property("Task", values_or_dtype='DelayedCenterOut')
m_subj = m_exp.create_section("Subject","odml.subject")
m_subj.create_property("Name", values_or_dtype='Joe')
m_subj.create_property("Species", values_or_dtype='Macaca mulatta')
m_subj.create_property("Sex", values_or_dtype='male')
m_rec = m_exp.create_section("Recording","odml.recording")
m_rec.create_property("BrainArea", values_or_dtype='M1')
m_rec.create_property("RecordingType", values_or_dtype='extracellular')
m_rec.create_property("SpikeSortingMethod", values_or_dtype='WindowDiscriminator')
# trial conditions:
condnames = {1 : "full", 2 : "2 of 6", 3 : "3 of 6"}
m_cond = m_exp.create_section("TrialConditions","odml.conditions")
def mkcond(cond, target):
condname = "condition %d target %d" % (cond,target)
sec = m_cond.create_section(condname, "odml.section")
sec.create_property("BehavioralCondition", values_or_dtype=cond)
sec.create_property("BehavioralConditionName", values_or_dtype=condnames[cond])
sec.create_property("Target", values_or_dtype=target)
return sec
m_infiles = {"joe097":"joe097-23457.gdf", "joe108":"joe108-124567.gdf", "joe147":"joe147-12467.gdf", "joe151":"joe151-12346.gdf"}
#m_mlfiles = {"joe097":"joe097-5-C3-MO.mat", "joe108":"joe108-4-C3-MO.mat", "joe147":"", "joe151":""}
m_conds = [[mkcond(c,t) for t in range(1,7) ] for c in range(1,4) ]
m_sess = nixf.create_section("Sessions","odml.section")
for sess in ["joe097", "joe108", "joe147", "joe151" ]:
m_s1 = m_sess.create_section(sess,"odml.session")
m_s1id = m_s1.create_property("SessionID", values_or_dtype=sess)
m_s1infile = m_s1.create_property("InputFile", values_or_dtype=m_infiles[sess])
m_s1subject = m_s1.create_section("Subject","odml.subject")
m_s1subject.link = m_s1subject
m_s1conds = m_s1.create_section("TrialConditions","odml.conditions")
m_s1conds.link = m_cond
def mkunit(sblock,unitid):
print(sblock.name)
# create single unit
su = sblock.create_source("Unit %d" % (unitid), "nix.ephys.unit")
su.definition = "Single unit"
# count trials as they appear
trialcnt = 0
# read all data
for target in range(1, 7):
# load spike matrix
smxdata = np.loadtxt('asciidata/%s-%d-C3-MO_spikematrix_%02d.dat' % (sblock.name,unitid,target), dtype=int)
# load motion end times
medata = np.loadtxt('asciidata/%s-%d-C3-MO_MEevents_%02d.dat' % (sblock.name,unitid,target), dtype=float)
medata = medata - 1000. # motion end time is stored as array index, so subtract the time offset
# load trial start times
tsdata = np.loadtxt('asciidata/%s-%d-C3-TS_MOevents_%02d.dat' % (sblock.name,unitid,target), dtype=float)
tsdata = -tsdata # calculate trial start relative to motion from MO time in TS-aligned data
# load spike times
stf = open('asciidata/%s-%d-C3-MO_spiketrains_%02d.dat' % (sblock.name,unitid,target), 'r')
stdata = []
for line in stf:
st = [int(i)-1000 for i in line.split()] # shift by -1000 ms for alignment to MO
stdata.append(st)
stf.close()
#_for line
#
# data array of all trials with this target; time dim first, as in Jans data
spikeactivity = sblock.create_data_array("SpikeActivity Unit %d Target %d" % (unitid,target),"nix.timeseries.binary",data=smxdata.T)
spikeactivity.definition = "Array of spike occurrences aligned to movement onset (MO)"
sa_dim1 = spikeactivity.append_sampled_dimension(1.) # 1 ms sampling interval
sa_dim1.offset = -1000. # data aligned to MO
sa_dim1.unit = "ms"
sa_dim1.label = "time"
sa_dim2 = spikeactivity.append_set_dimension() # trials
sa_dim1.label = "trial"
spikeactivity.sources.append(su)
# mov tag
        # this is not so great because of the need to define the positions and extents as dataarrays,
# so stick to the movement epochs as tags for each trial
#movtag = filter(lambda x: x.name == "Arm movement epochs for Target %d" % (target), sblock.multi_tags )
#if not movtag:
# MOlst = sblock.create_data_array("MO times for Target %d" % (target), "nix.positions", data=[[0.0,tr] for tr in range(0,smxdata.shape[0])])
# MOdim1 = MOlst.append_sampled_dimension(1.)
# MOdim1.unit = "ms"
# MOdim1.label = "time"
# MOdim2 = MOlst.append_set_dimension()
# MOdim2.label = "trial"
# MElst = sblock.create_data_array("Movement durations for Target %d" % (target), "nix.positions", data=[[medata[tr],0] for tr in range(0,smxdata.shape[0])])
# MEdim1 = MElst.append_sampled_dimension(1.)
# MEdim1.unit = "ms"
# MEdim1.label = "time"
# MEdim2 = MElst.append_set_dimension()
# MEdim2.label = "trial"
# mov = sblock.create_multi_tag("Arm movement epochs for Target %d" % (target), "nix.epoch", MOlst)
# mov.definition = "Epochs between detected movement onset (MO) and movement end (ME)"
# mov.extents = MElst
# mov.units = ["ms",]
#else:
# mov = movtag[0]
#mov.references.append(spikeactivity)
#~~~~
# loop over all trials for this target
for tr in range(0,smxdata.shape[0]):
trialcnt += 1
#spikeactivity = sblock.create_data_array("SpikeActivity Unit %d Trial %03d" % (unitid,trialcnt),"nix.timeseries.binary",data=smxdata[tr])
#spikeactivity.definition = "Array of spike occurrences aligned to movement onset (MO)"
#sa_dim.offset = -1000. # data aligned to MO
#sa_dim.unit = "ms"
#sa_dim.label = "time"
# array of spike times
spiketimes = sblock.create_data_array("SpikeTimes Unit %d Trial %03d" % (unitid,trialcnt),"nix.spiketimes",data=stdata[tr])
spiketimes.definition = "Spike times aligned to movement onset (MO)"
spiketimes.append_set_dimension()
spiketimes.unit = "ms"
spiketimes.label = "spikes"
# spike train as multitag
#spikepos = [[x,tr] for x in stdata[tr]]
#spiketrain = sblock.create_multi_tag("Spiketrain Unit %d Trial %03d" % (unitid,trialcnt), "nix.spiketrain",spikepos)
#spiketrain.definition = "Spike times aligned to movement onset (MO)"
#spiketrain.references.append(spikeactivity)
# assign sources
spiketimes.sources.append(su)
#spiketrain.sources.append(su)
# assign metadata
spikeactivity.metadata = m_conds[2][target-1] # so far all data are 'C3' -> index 2 in conds
spiketimes.metadata = m_conds[2][target-1]
# trial as tag
trialtag = list(filter(lambda x: x.name == "Trial %03d" % (trialcnt), sblock.tags ))
if not trialtag:
trial = sblock.create_tag("Trial %03d" % (trialcnt), "nix.trial",[tsdata[tr],tr])
trial.definition = "Trial start (TS) relative to motion onset (MO)"
trial.extent = [3000.,0] # trial length of 3000ms is arbitrary
trial.units = ["ms"]
trial.metadata = m_conds[2][target-1]
else:
trial = trialtag[0]
trial.references.append(spikeactivity)
# arm movement period as tag
movtag = list(filter(lambda x: x.name == "Arm movement epoch Trial %03d" % (trialcnt), sblock.tags ))
if not movtag:
mov = sblock.create_tag("Arm movement epoch Trial %03d" % (trialcnt), "nix.epoch",[0.0,tr])
mov.definition = "Epoch between detected movement onset (MO) and movement end (ME)"
mov.extent = [medata[tr],0] # because motion onset is at 0.0, duration is equal to end time
mov.units = ["ms",]
else:
mov = movtag[0]
mov.references.append(spikeactivity)
#_for tr
#_for target
#_def mkunit
mkunit(b097, 5)
mkunit(b108, 4)
mkunit(b108, 7)
mkunit(b147, 1)
mkunit(b151, 1)
nixf.close()
###Output
_____no_output_____
###Markdown
read data from file
###Code
nixf = nix.File.open("module2x.h5", nix.FileMode.ReadOnly)
nixf.blocks
###Output
_____no_output_____
###Markdown
get overview of file contents
###Code
for b in nixf.blocks:
tlst = list(filter( lambda x : x.type == "nix.trial", b.tags))
print('%s: %d trials' % (b.name,len(tlst)))
for s in b.sources:
print('\t%s ' % s.name)
###Output
_____no_output_____
###Markdown
select some data from one of the blocks
###Code
b108 = nixf.blocks["joe108"]
b108.sources
# keep spike-activity arrays from Unit 7, target 2, behavioral condition 3
# (use any() for the source check; in Python 3 a filter object compared to [] is always unequal)
dalst = filter(lambda x:
               ("SpikeActivity" in x.name) &
               any(s.name == "Unit 7" for s in x.sources) &
               (x.metadata['Target'] == 2) &
               (x.metadata['BehavioralCondition'] == 3),
               b108.data_arrays)
dalst = list(dalst)
print(dalst)
len(dalst)
dalst[0].shape
spikedata = dalst[0]
spikedata.shape
yyy = np.nonzero(spikedata)
[tind,jind] = np.nonzero(spikedata)
plot.scatter(tind, jind)
[jbla, tbla] = np.nonzero(np.array(spikedata).T)
plot.scatter(tbla, jbla)
nixf.close()
###Output
_____no_output_____ |
preprocessing_split_train_test.ipynb | ###Markdown
Split datasets into train, validation, and test This module can be used to split datasets. You need to modify the ratio of train, validation, and test, and you can modify the output directory you want and the input directory you have.
###Code
# -*- coding: utf-8 -*-
""" Split datasets into train, validation, and test
This module can be used to split datasets. You need to modify the ratio of the
train, validation, and test datasets, and you can modify the output directory you
want and the input directory you have.
################################################################################
# Author: Weikun Han <[email protected]>
# Create Date: 03/6/2018
# Update:
# Reference: https://github.com/jhetherly/EnglishSpeechUpsampler
################################################################################
"""
import os
import csv
import numpy as np
def write_csv(filename, pairs):
"""The function to wirte
Args:
param1 (str): filename
param2 (list): pairs
"""
with open(filename, 'w') as csvfile:
writer = csv.writer(csvfile)
for n in pairs:
writer.writerow(n)
if __name__ == '__main__':
    # Please modify the input path to locate your files
DATASETS_ROOT_DIR = './datasets'
OUTPUT_DIR = os.path.join(DATASETS_ROOT_DIR, 'final_dataset')
    # Define ratios for the train, validation, and test datasets
train_fraction = 0.6
validation_fraction = 0.2
test_fraction = 0.2
# Reset random generator
np.random.seed(0)
# Check location to save datasets
if not os.path.exists(OUTPUT_DIR):
os.makedirs(OUTPUT_DIR)
print('Will send .csv dataset to {}'.format(OUTPUT_DIR))
# Create list to store each original and noise file name pair
original_noise_pairs = []
input_original_path = os.path.join(DATASETS_ROOT_DIR, 'TEDLIUM_5S')
input_noise_path = os.path.join(DATASETS_ROOT_DIR,
'TEDLIUM_noise_sample_5S')
for filename in os.listdir(input_original_path):
# Link same filename in noise path
filename_component = filename.split('_')
filename_noise = (filename_component[0] +
'_' +
filename_component[1] +
'_' +
'noise_sample' +
'_' +
filename_component[2])
input_original_filename = os.path.join(input_original_path,
filename)
input_noise_filename = os.path.join(input_noise_path, filename_noise)
if not os.path.isfile(input_original_filename):
continue
original_noise_pairs.append(
[input_original_filename, input_noise_filename])
# Shuffle the datasets
np.random.shuffle(original_noise_pairs)
datasets_size = len(original_noise_pairs)
    # Create indices
validation_start_index = 0
validation_end_index = (validation_start_index +
int(datasets_size * validation_fraction))
test_start_index = validation_end_index
test_end_index = (test_start_index +
int(datasets_size * test_fraction))
train_start_index = test_end_index
# Save pairs into .csv
validation_original_noise_pairs = original_noise_pairs[
validation_start_index:validation_end_index]
write_csv(os.path.join(OUTPUT_DIR, 'validation_files.csv'),
validation_original_noise_pairs)
test_original_noise_pairs = original_noise_pairs[
test_start_index : test_end_index]
write_csv(os.path.join(OUTPUT_DIR, 'test_files.csv'),
test_original_noise_pairs)
train_original_noise_pairs = original_noise_pairs[
train_start_index :]
    write_csv(os.path.join(OUTPUT_DIR, 'train_files.csv'),
              train_original_noise_pairs)
###Output
Will send .csv dataset to ./datasets/final_dataset
|
Benchmarking Sorting Algorithms.ipynb | ###Markdown
Introduction Sorting is organising data in ascending or descending order. This project will take a comparative look at 6 sorting algorithms (Bubble Sort, Merge Sort, Counting Sort, Quick Sort, Insertion Sort, BogoSort). It is in two parts: firstly an overview of each algorithm and lastly the benchmarking application of the sorting algorithms. Sorting Algorithms Overview:1. How it works 2. Performance or Time complexity. Time Complexity is the computational complexity that describes the amount of time it takes to run an algorithm. (source: https://en.wikipedia.org/wiki/Time_complexity)3. An example diagram of how it works4. A Python example of the selected algorithm (with comments) This project will also highlight the different sorting methods used in each algorithm, whether they are comparison based or non-comparison based. The Benchmarking ApplicationUsing Python (https://www.python.org/), random number arrays were created ranging in size from 100 to 50,000. The sorting algorithms will each run through the arrays and the timings will be captured using Python's time module (https://docs.python.org/3/library/time.html). These timings are collected into a table using the pandas library https://pandas.pydata.org/. The timings will be benchmarked against one another in a plot using Seaborn https://seaborn.pydata.org/ and Matplotlib https://matplotlib.org/. The results of the benchmarking application are discussed to see if they match the expected output. This project is written using a Jupyter notebook (https://jupyter.org/) together with external Python files. Sorting Algorithms 1. Bubble Sort (A simple comparison-based sort)Bubble sort is a simple sorting algorithm (source: https://en.wikipedia.org/wiki/Bubble_sort), named for the way larger values bubble up to the top. It is a comparison based sorting algorithm as it steps through the list, comparing adjacent pairs and swapping them if they are in the wrong order.How it works:1. It starts at the beginning of the dataset and compares the first two elements; if the first is greater, it will swap them. 2. It will continue doing this until no swaps are needed. PerformanceBubble sort has a worst-case and average time complexity of O(n²), where n is the number of items being sorted. When the list is already sorted (best-case), the complexity of bubble sort is only O(n). In the case of a large dataset, Bubble sort should be avoided. It is not very practical or efficient and is rarely used in the real world.Bubble sort in action https://www.youtube.com/watch?v=lyZQPjUT5B4&feature=youtu.be Bubble Sort Diagram
###Code
# code sourced from http://interactivepython.org/runestone/static/pythonds/SortSearch/TheBubbleSort.html
# calls a function bubblesort
def bubbleSort(alist):
for passnum in range(len(alist)-1,0,-1):
for i in range(passnum):
if alist[i]>alist[i+1]:
temp = alist[i]
alist[i] = alist[i+1]
alist[i+1] = temp
alist = [54,26,93,17,77]
bubbleSort(alist)
print(alist)
###Output
[17, 26, 54, 77, 93]
###Markdown
2. Merge Sort (An efficient comparison-based sort)Merge sort is a recursive divide and conquer algorithm that was invented by John von Neumann in 1945. This algorithm breaks the array down into sublists until only single elements are left, then merges them back together until they are sorted. (https://en.wikipedia.org/wiki/Merge_sort)How it works:1. It starts by breaking down the list into sublists until each sublist contains just one element. 2. It then repeatedly merges the sublists to produce new sorted sublists until there is only one sublist remaining. PerformanceIn sorting n objects, merge sort has an average and worst-case performance of O(n log n). Its best, worst and average cases are very similar, making it a good choice for predictable running behaviour. (Source: P.Mannion (2019)Week 10: Sorting Algorithms Part 3, Galway-Mayo Institute of Technology )Merge sort in action:https://www.youtube.com/watch?v=XaqR3G_NVoo Merge Sort Diagram
###Code
# code sourced from http://interactivepython.org/runestone/static/pythonds/SortSearch/TheMergeSort.html
def mergeSort(alist):
# print("Splitting ",alist)
# if the array is greater than 1 then
if len(alist)>1:
# mid is length of array divided 2
mid = len(alist)//2
# left half is equal to the first slice
lefthalf = alist[:mid]
# left half is equal to the second slice
righthalf = alist[mid:]
# call merge sort again for the left half
mergeSort(lefthalf)
# call merge sort again for the right half
mergeSort(righthalf)
i=0
j=0
k=0
        # merge the data from the temp arrays lefthalf and righthalf back into alist
while i < len(lefthalf) and j < len(righthalf):
if lefthalf[i] < righthalf[j]:
alist[k]=lefthalf[i]
i=i+1
else:
alist[k]=righthalf[j]
j=j+1
k=k+1
#
while i < len(lefthalf):
alist[k]=lefthalf[i]
i=i+1
k=k+1
while j < len(righthalf):
alist[k]=righthalf[j]
j=j+1
k=k+1
#print("Merging ",alist)
alist = [54,26,93,17,77,31,44,55,20]
mergeSort(alist)
print(alist)
###Output
[17, 20, 26, 31, 44, 54, 55, 77, 93]
###Markdown
3. Counting Sort (A non-comparison sort)Invented by Harold H. Seward in 1954 (source: https://en.wikipedia.org/wiki/Counting_sort.) Counting sort is a technique based on key values (a kind of hashing), which then does some arithmetic to calculate the position of each object in the output sequence. (https://www.geeksforgeeks.org/counting-sort/) How it works (Source: P.Mannion (2019)Week 10: Sorting Algorithms Part 3, Galway-Mayo Institute of Technology ):1. Determine the key range k in the input array (if not already known)2. Initialise an array count of size k, which will be used to count the number of times that each key value appears in the input instance.3. Initialise an array result of size n, which will be used to store the sorted output.4. Iterate through the input array, and record the number of times each distinct key value occurs in the input instance.5. Construct the sorted result array, based on the histogram of key frequencies stored in count. Refer to the ordering of keys in input to ensure that stability is preserved. PerformanceBest-, worst- and average-case time complexity of O(n + k); space complexity is also O(n + k) (Source: P.Mannion (2019)Week 10: Sorting Algorithms Part 3, Galway-Mayo Institute of Technology ) Counting Sort Diagram
###Code
# code sourced http://www.learntosolveit.com/python/algorithm_countingsort.html
def counting_sort(array, maxval):
"""in-place counting sort"""
n = len(array)
m = maxval + 1
count = [0] * m # init with zeros
for a in array:
count[a] += 1 # count occurences
i = 0
for a in range(m): # emit
for c in range(count[a]): # - emit 'count[a]' copies of 'a'
array[i] = a
i += 1
return array
print(counting_sort( alist, 93 ))
###Output
[17, 20, 26, 31, 44, 54, 55, 77, 93]
###Markdown
4. Quick SortQuicksort was developed by British computer scientist Tony Hoare in 1959. It is a recursive divide and conquer algorithm. Due to its efficiency, it is still a commonly used algorithm for sorting.(https://en.wikipedia.org/wiki/Quicksort)How it works (Source: P.Mannion (2019) Week 10: Sorting Algorithms Part 3, Galway-Mayo Institute of Technology):1. Pivot selection: Pick an element, called a “pivot”, from the array2. Partitioning: reorder the array so that elements with values < the pivot come before it, while all elements with values ≥ the pivot come after it. After this partitioning, the pivot is in its final position.3. Recursion: apply steps 1 and 2 above recursively to each of the two subarrays PerformanceIt has a worst case of O(n²) (rare), an average case of O(n log n) and a best case of O(n log n). Memory usage: O(n) (variants exist with O(n log n)) (Source: P.Mannion (2019) Week 10: Sorting Algorithms Part 3, Galway-Mayo Institute of Technology): Quick Sort Diagram
###Code
# http://interactivepython.org/runestone/static/pythonds/SortSearch/TheQuickSort.html
def quickSort(alist):
quickSortHelper(alist,0,len(alist)-1)
def quickSortHelper(alist,first,last):
if first<last:
splitpoint = partition(alist,first,last)
quickSortHelper(alist,first,splitpoint-1)
quickSortHelper(alist,splitpoint+1,last)
def partition(alist,first,last):
pivotvalue = alist[first]
leftmark = first+1
rightmark = last
done = False
while not done:
while leftmark <= rightmark and alist[leftmark] <= pivotvalue:
leftmark = leftmark + 1
while alist[rightmark] >= pivotvalue and rightmark >= leftmark:
rightmark = rightmark -1
if rightmark < leftmark:
done = True
else:
temp = alist[leftmark]
alist[leftmark] = alist[rightmark]
alist[rightmark] = temp
temp = alist[first]
alist[first] = alist[rightmark]
alist[rightmark] = temp
return rightmark
# alist = [54,26,93,17,77,31,44,55,20]
quickSort(alist)
print(alist)
###Output
[17, 20, 26, 31, 44, 54, 55, 77, 93]
###Markdown
5. Insertion SortInsertion sort is a simple sorting algorithm that builds the final sorted array (or list) one item at a time. It is much less efficient on large lists than more advanced algorithms such as quicksort, heapsort, or merge sort. (https://en.wikipedia.org/wiki/Insertion_sort) It sorts similarly to the way card players sort their cards in their hand (Source: P.Mannion (2019) Week 10: Sorting Algorithms Part 2, Galway-Mayo Institute of Technology)How it works (Source: P.Mannion (2019) Week 10: Sorting Algorithms Part 2, Galway-Mayo Institute of Technology):1. Start from the left of the array, and set the “key” as the element at index 1. Move any elements to the left which are > the “key” right by one position, and insert the “key”.2. Set the “key” as the element at index 2. Move any elements to the left which are > the key right by one position and insert the key.3. Set the “key” as the element at index 3. Move any elements to the left which are > the key right by one position and insert the key.4. …5. Set the “key” as the element at index n-1. Move any elements to the left which are > the key right by one position and insert the key.https://www.youtube.com/watch?v=ROalU379l3U PerformanceThis algorithm works well on small lists and lists that are close to sorted. The best case is an array that is already sorted. In this case, insertion sort would have a run time of O(n). The worst case would be that no numbers are sorted, giving a run time of O(n²). The average is also O(n²). https://en.wikipedia.org/wiki/Insertion_sort Insertion Sort Diagram
###Code
def insertionSort(alist):
for index in range(1,len(alist)):
currentvalue = alist[index]
position = index
while position>0 and alist[position-1]>currentvalue:
alist[position]=alist[position-1]
position = position-1
alist[position]=currentvalue
alist = [54,26,93,17,77,31,44,55,20]
insertionSort(alist)
print(alist)
###Output
[17, 20, 26, 31, 44, 54, 55, 77, 93]
###Markdown
6. BogoSortBogosort is a highly inefficient sorting algorithm, also known as permutation sort or stupid sort (https://en.wikipedia.org/wiki/Bogosort).How it works:1. It randomly shuffles the array until it is sorted. PerformanceThe best case occurs if the list as given is already sorted; on average the expected number of shuffles grows factorially with the list length, and the randomised version has no guaranteed upper bound on its running time. https://www.youtube.com/watch?v=CSe0MWDLevA BogoSort Diagram
###Code
# Python program for implementation of Bogo Sort
import random
# Sorts array a[0..n-1] using Bogo sort
def bogoSort(alist):
n = len(alist)
while (is_sorted(alist)== False):
shuffle(alist)
# To check if array is sorted or not
def is_sorted(alist):
n = len(alist)
for i in range(0, n-1):
if (alist[i] > alist[i+1] ):
return False
return True
# To generate permuatation of the array
def shuffle(alist):
n = len(alist)
for i in range (0,n):
r = random.randint(0,n-1)
alist[i], alist[r] = alist[r], alist[i]
alist = [54,26,93,17,77,31,44,55,20]
bogoSort(alist)
print(alist)
###Output
[17, 20, 26, 31, 44, 54, 55, 77, 93]
###Markdown
Implementation & Benchmarking For this section, a function will be defined to call each sorting function defined above1. Bubble Sort2. Merge Sort3. Counting Sort4. Quick Sort5. Insertion Sort6. BogosortFirstly, arrays are generated with random numbers using randint from Python's random library (https://docs.python.org/2/library/random.html). These will be used to test the speed and efficiency of the algorithms.
###Code
# code sourced from project example
# Creating an array using randint
from random import *
# creating a random array, function takes in n numbers
def random_array(n):
# create an array variable
array = []
# if n = 5, 0,1,2,3,4
for i in range(0, n, 1):
# add to the array random integers between 0 and 100
array.append(randint(0,100))
return array
# assign the random array to alist
alist = random_array(100)
alist1 = random_array(250)
alist2 = random_array(500)
alist3 = random_array(750)
alist4 = random_array(1000)
alist5 = random_array(1250)
alist6 = random_array(2500)
alist7 = random_array(3570)
alist8 = random_array(5000)
alist9 = random_array(6250)
alist10 = random_array(7500)
alist11 = random_array(8750)
alist12 = random_array(10000)
###Output
_____no_output_____
###Markdown
Benchmarking Multiple Statistical RunsReference: Week 12 - 08 to 12 April 2019 lecture notes. Using the time module (https://docs.python.org/3/library/time.html), a start time and an end time are recorded around each sorting function and the elapsed time is kept as the measurement. The random arrays defined above are used to test the performance of each sorting algorithm. 1. Benchmarking Bubble Sort
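The timing scripts themselves live in the external benchmark_*.py files, which are not shown in this notebook; the sketch below illustrates the assumed pattern (start time, end time, elapsed seconds averaged over several runs and appended to a list such as bubble_avglist).
###Code
# Hedged sketch of the assumed contents of a benchmark_*.py helper (not the actual file):
# time each sort several times on every array and keep the average elapsed seconds.
import time
def benchmark(sort_function, arrays, runs=10):
    averages = []
    for arr in arrays:
        elapsed = []
        for _ in range(runs):
            copy = list(arr)                      # sort a fresh copy each run
            start = time.time()                   # start time
            sort_function(copy)                   # run the sort in place
            elapsed.append(time.time() - start)   # elapsed time for this run
        averages.append(round(sum(elapsed) / runs, 3))
    return averages
# e.g. bubble_avglist = benchmark(bubbleSort, [alist, alist1, alist2])
###Output
_____no_output_____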
###Code
#benchmark_bubblesort.py
from benchmark_bubblesort import *
###Output
_____no_output_____
###Markdown
2. Benchmarking Merge Sort
###Code
#benchmark_mergesort.py
from benchmark_mergesort import *
###Output
_____no_output_____
###Markdown
3. Benchmarking Counting Sort
###Code
#benchmark_countingsort.py
from benchmark_countingsort import *
###Output
_____no_output_____
###Markdown
4. Benchmarking Quick sort
###Code
#benchmark_quicksort.py
from benchmark_quicksort import *
###Output
[0.0, 0.002, 0.005, 0.009, 0.015, 0.024, 0.042, 0.069, 0.111, 0.166, 0.237, 0.326, 0.432, 1.97]
###Markdown
5. Benchmarking Insertion sort
###Code
#benchmark_insertionsort.py
from benchmark_insertionsort import *
###Output
_____no_output_____
###Markdown
6. Benchmarking bogosort
###Code
#benchmark_bogosort.py
from benchmark_bogosort import *
###Output
_____no_output_____
###Markdown
Create a table for the results Using the data from the benchmarking timings for each sorting algorithm, a table was created. The table was created using the pandas library https://pandas.pydata.org/
###Code
import pandas as pd
import numpy as np
df = pd.DataFrame(columns = ['Size','Bubble Sort', 'Merge Sort', 'Counting sort', 'Quick sort', 'Insertion sort', 'BogoSort'])
df['Size'] = [100, 250, 500, 750, 1000, 1250, 2500, 3570, 5000, 6250, 7500, 8750, 10000, 50000]
df['Bubble Sort'] = bubble_avglist
df['Merge Sort'] = mergesort_avglist
df['Counting sort'] = countsort_avglist
df['Quick sort'] = quicksort_avglist
df['Insertion sort'] = insertsort_avglist
df['BogoSort'] = bogosort_avg
df
###Output
_____no_output_____
###Markdown
Summary StatisticsSummary statistics can give an elegant overview of your data. You can clearly see that the slowest algorithm was Bubble Sort.
###Code
summary = df.describe()
summary = summary.transpose()
summary
###Output
_____no_output_____
###Markdown
Plotting the sorting algorithms' timings in a graphSeaborn https://seaborn.pydata.org/ and Matplotlib https://matplotlib.org/ were used to generate a data visualisation of the algorithms
###Code
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style="whitegrid", palette="husl", rc={'figure.figsize':(14,16)})
title="Benchmarking Sorting Algorithms"
# Bubble Sort
bubble = sns.lineplot( x="Size", y="Bubble Sort", data=df, marker='o', label='Bubble Sort')
# Merge sort
merge = sns.lineplot( x="Size", y="Merge Sort", data=df, marker='o', label='Merge Sort')
# Counting sort
counting = sns.lineplot( x="Size", y="Counting sort", marker='o', data=df, label="Counting Sort")
# Quick sort
quick = sns.lineplot( x="Size", y="Quick sort", data=df, marker='o',label="Quick Sort")
# Insertion sort
insert = sns.lineplot( x="Size", y="Insertion sort", data=df, marker='o', label="Insertion Sort")
# BogoSort
bogo = sns.lineplot( x="Size", y="BogoSort", data=df, marker='o', label="BogoSort")
plt.xlabel('Input size n', fontsize=16)
plt.ylabel('Running Time in seconds',fontsize=16)
# Increasing font size
plt.title(title, fontsize=26)
# Show the plot
plt.show()
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style="whitegrid", palette="husl", rc={'figure.figsize':(14,14)})
title="Benchmarking Sorting Algorithms (closer look)"
# Bubble Sort
#bubble = sns.lineplot( x="Size", y="Bubble Sort", data=df, marker='o', label='Bubble Sort')
# Merge sort
merge = sns.lineplot( x="Size", y="Merge Sort", data=df, marker='o', label='Merge Sort')
# Counting sort
counting = sns.lineplot( x="Size", y="Counting sort", marker='o', data=df, label="Counting Sort")
# Quick sort
quick = sns.lineplot( x="Size", y="Quick sort", data=df, marker='o',label="Quick Sort")
# Insertion sort
insert = sns.lineplot( x="Size", y="Insertion sort", data=df, marker='o', label="Insertion Sort")
# BogoSort
bogo = sns.lineplot( x="Size", y="BogoSort", data=df, marker='o', label="BogoSort")
plt.xlabel('Input size n', fontsize=16)
plt.ylabel('Running Time in seconds',fontsize=16)
# Increasing font size
plt.title(title, fontsize=26)
# Show the plot
plt.show()
###Output
_____no_output_____ |
jupyterNotebooks/.ipynb_checkpoints/A1 core module notebook-checkpoint.ipynb | ###Markdown
A1: core module notebook Introduction This notebook includes a description of the 'core' python module in the JBEI Quantitative Metabolic Modeling (QMM) library. A description and demonstration of the different classes can be found below. Setup First, we need to set the path and environment variable properly:
###Code
%matplotlib inline
import sys, os
pythonPath = "/scratch/david.ando/quantmodel/code/core"
if pythonPath not in sys.path:
sys.path.append('/scratch/david.ando/quantmodel/code/core')
os.environ["QUANTMODELPATH"] = '/scratch/david.ando/quantmodel'
###Output
_____no_output_____
###Markdown
Importing the required modules for the demo:
###Code
from IPython.display import Image
import core, FluxModels
import os
###Output
_____no_output_____
###Markdown
Classes description Metabolite related classes metabolite class The *metabolite* class is used to store all information related to a metabolite. For example, the following instantiation:
###Code
ala = core.Metabolite('ala-L', ncarbons=3, source=True, feed='100% 1-C', destination=False, formula='C3H7NO2')
###Output
_____no_output_____
###Markdown
creates a metabolite with name 'ala-L', 3 carbon atoms, which is the source of labeling, is labeled in the first carbon, is not a destination (measured) metabolite and has a composition formula equal to 'C3H7NO2'. The **generateEMU** function creates the corresponding Elementary Metabolite Unit (EMU):
###Code
ala.generateEMU([2])
###Output
_____no_output_____
###Markdown
In this case the EMU contains the first and last carbon in alanine. The input ([2]) specifies which carbons to exclude:
###Code
ala.generateEMU([2,3])
###Output
_____no_output_____
###Markdown
reactant and product classes *Reactant* and *product* are classes derived from metabolite and the only difference is that they represent metabolites in the context of a reaction. Hence, the stoichiometry of the metabolite and the labeling pattern in that reaction are included:
###Code
R_ala = core.Reactant(ala, 1, 'abc')
###Output
_____no_output_____
###Markdown
Notice that the stoichiometry information (1, meaning that only 1 molecule participates in the reaction) and the labeling data ('abc', one part of the labeling pattern, see below) only make sense in the context of a reaction, so they are not included in the metabolite class.Both classes are derived from metabolites, so they inherit their methods:
###Code
R_ala.generateEMU([2,3])
###Output
_____no_output_____
###Markdown
Reaction related classes reaction class The *reaction* class produces a reaction instance:
###Code
# Create reactant metabolites
coa_c = core.Metabolite('coa_c')
nad_c = core.Metabolite('nad_c')
pyr_c = core.Metabolite('pyr_c')
# Convert into reactants
Rcoa_c = core.Reactant(coa_c, 1.0)
Rnad_c = core.Reactant(nad_c, 1.0)
Rpyr_c = core.Reactant(pyr_c, 1.0)
# Create product metabolites
accoa_c = core.Metabolite('accoa_c')
co2_c = core.Metabolite('co2_c')
nadh_c = core.Metabolite('nadh_c')
# Convert into products
Raccoa_c = core.Product(accoa_c, 1.0)
Rco2_c = core.Product(co2_c, 1.0)
Rnadh_c = core.Product(nadh_c, 1.0)
# Create reaction
PDH = core.Reaction('PDH',reactants=[Rcoa_c,Rnad_c,Rpyr_c] , products=[Raccoa_c,Rco2_c,Rnadh_c]
,subsystem='S_GlycolysisGluconeogenesis')
###Output
_____no_output_____
###Markdown
Reactions can also be initialized from a string:
###Code
PDH2 = core.Reaction.from_string('PDH : coa_c + nad_c + pyr_c --> accoa_c + co2_c + nadh_c ')
###Output
_____no_output_____
###Markdown
The *reaction* class contains some useful functions such as: **stoichLine** to obtain the stoichiometric line for the reaction:
###Code
print PDH.stoichLine()
print PDH2.stoichLine()
###Output
PDH : coa_c + nad_c + pyr_c --> accoa_c + co2_c + nadh_c
PDH : coa_c + nad_c + pyr_c --> accoa_c + co2_c + nadh_c
###Markdown
**getReactDict** produces a dictionary of reactants:
###Code
PDH.getReactDict()
###Output
_____no_output_____
###Markdown
**getProdDict** produces a dictionary of products:
###Code
PDH.getProdDict()
###Output
_____no_output_____
###Markdown
Elementary Metabolite Unit (EMU) related classes Elementary Metabolite Units (or EMUs) of a compound are the molecule parts (moieties) comprising any distinct subset of the compound’s atoms (Antoniewicz MR, Kelleher JK, Stephanopoulos G: Elementary metabolite units (EMU): a novel framework for modeling isotopic distributions. Metab Eng 2007, 9:68-86.). For example, cit$_{123}$ represents the first 3 carbon atoms in the citrate molecule. EMU class The EMU class provides a class to hold and manipulate EMUs:
###Code
cit321= core.EMU('cit_3_2_1')
###Output
_____no_output_____
###Markdown
The method **findnCarbons** produces the number of carbons in the EMU:
###Code
print cit321.findnCarbons()
###Output
3.0
###Markdown
The method **getMetName** produces the name of the corresponding metabolite:
###Code
print cit321.getMetName()
str(cit321.getMetName()) == 'cit'
###Output
_____no_output_____
###Markdown
The method **getIndices** produces the indices:
###Code
print cit321.getIndices()
###Output
[3, 2, 1]
###Markdown
**getSortedName** sorts the indices in the EMU name:
###Code
print cit321.getSortedName()
###Output
cit_1_2_3
###Markdown
**getEmuInSBML** produces the name of the EMU in SBML format:
###Code
print cit321.getEmuInSBML()
###Output
cit_c_3_2_1
###Markdown
Transitions related classes Transitions contain the information on how carbon (or other) atoms are passed in each reaction. Atom transitions describe, for example, the fate of each carbon in a reaction, whereas EMU transitions describe this information by using EMUs, as described below. AtomTransition class Atom transitions represent the fate of each carbon in a reaction (Wiechert W. (2001) 13C metabolic flux analysis. Metabolic engineering 3: 195-206). For example, in:AKGDH akg --> succoa + co2 abcde : bcde + aakg gets split into succoa and co2, with the first 4 carbons going to succoa and the remaining carbon going to co2.
###Code
AT = core.AtomTransition('AKGDH akg --> succoa + co2 abcde : bcde + a')
print AT
###Output
AKGDH akg --> succoa + co2 abcde : bcde + a
###Markdown
The method **findEMUtransition** provides for a given input EMU (e.g. succoa_1_2_3_4), which EMU it comes from in the form of a EMU transition:
###Code
emu1 = core.EMU('co2_1')
print AT.findEMUtransition(emu1)
emu2 = core.EMU('succoa_1_2_3_4')
print AT.findEMUtransition(emu2)
###Output
['AKGDH, akg_1 --> co2_1']
['AKGDH, akg_2_3_4_5 --> succoa_1_2_3_4']
###Markdown
This is done through the method **findEMUs**, which finds the emus from which the input emanates in the given atom transition:
###Code
print emu2.name
print AT.findEMUs(emu2)
for emus in AT.findEMUs(emu2):
for emu_ in emus:
print emu_.name
###Output
succoa_1_2_3_4
[[<core.EMU instance at 0x7fe7d68e1320>]]
akg_2_3_4_5
###Markdown
which, in turn, uses the method **getOriginDictionary**, which provides for a given input EMU the originating metabolite and the correspondence in indices:
###Code
AT.getOriginDictionary(emu2)
###Output
_____no_output_____
###Markdown
EMUTransition class Class for EMU transitions that contain information on how different EMUs transform into each other. For example: TA1_b, TAC3_c_1_2_3 + g3p_c_1_2_3 --> f6p_c_1_2_3_4_5_6 indicating that TAC3_c_1_2_3 and g3p_c_1_2_3 combine to produce f6p_c_1_2_3_4_5_6 in reaction TA1_b (backward reaction of TA1), or: SSALy, (0.5) sucsal_c_4 --> (0.5) succ_c_4 which indicates that the fourth atom of sucsal_c becomes the fourth atom of succ_c. The (0.5) contribution coefficient indicates that reaction SSALy contains a symmetric molecule and two labeling correspondences are equally likely. Hence this transition only contributes half the flux to the final labeling.
###Code
emuTrans = core.EMUTransition('TA1_b, TAC3_c_1_2_3 + g3p_c_1_2_3 --> f6p_c_1_2_3_4_5_6')
print emuTrans
str(emuTrans) == 'TA1_b, TAC3_c_1_2_3 + g3p_c_1_2_3 --> f6p_c_1_2_3_4_5_6'
###Output
_____no_output_____
###Markdown
Ranged number class The *rangedNumber* class describes floating point numbers for which a confidence interval is available. For example, fluxes obtained through 2S-$^{13}$C MFA are described through the flux that best fits the data and the highest and lowest values that are found to be compatible with labeling data (see equations 16-23 in Garcia Martin *et al* 2015). However, this class has been abstracted out so it can be used with other ranged intervals. Ranged numbers can be used as follows:
###Code
number = core.rangedNumber(0.3,0.6,0.9) # 0.3 lowest, 0.6 best fit, 0.9 highest
###Output
_____no_output_____
###Markdown
Ranged numbers can be printed:
###Code
print number
###Output
[0.3 : 0.6 : 0.9]
###Markdown
and added, subtracted, multiplied and divided following the standard error propagation rules (https://en.wikipedia.org/wiki/Propagation_of_uncertainty):
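For reference, the standard first-order propagation rules linked above give, for independent uncertainties $\sigma_A$ and $\sigma_B$: $\sigma_{A \pm B} = \sqrt{\sigma_A^2 + \sigma_B^2}$ and $\left(\frac{\sigma_{AB}}{AB}\right)^2 = \left(\frac{\sigma_A}{A}\right)^2 + \left(\frac{\sigma_B}{B}\right)^2$. How exactly the class maps its low/best/high range onto these rules is not spelled out here, so treat this as background rather than a specification of the implementation.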
###Code
A = core.rangedNumber(0.3,0.6,0.9)
B = core.rangedNumber(0.1,0.15,0.18)
print A+B
print A-B
print 2*A
print B/3
###Output
[0.0333333333333 : 0.05 : 0.06]
###Markdown
Flux class The flux class describes fluxes attached to a reaction. For example, if the net flux is described by the ranged number A and the exchange flux by the ranged number B, the corresponding flux would be:
###Code
netFlux = A
exchangeFlux = B
flux1 = core.flux(net_exc_tup=(netFlux,exchangeFlux))
print flux1
###Output
Forward: [0.445861873485 : 0.75 : 1.05149626863]
Backward: [0.1 : 0.15 : 0.18]
Net: [0.3 : 0.6 : 0.9]
Exchange: [0.1 : 0.15 : 0.18]
###Markdown
Fluxes can easily be multiplied:
###Code
print 3*flux1
###Output
Forward: [1.33758562046 : 2.25 : 3.1544888059]
Backward: [0.3 : 0.45 : 0.54]
Net: [0.9 : 1.8 : 2.7]
Exchange: [0.3 : 0.45 : 0.54]
|
ipython-7.29.0/examples/IPython Kernel/Beyond Plain Python.ipynb | ###Markdown
IPython: beyond plain Python When executing code in IPython, all valid Python syntax works as-is, but IPython provides a number of features designed to make the interactive experience more fluid and efficient. First things first: running code, getting help In the notebook, to run a cell of code, hit `Shift-Enter`. This executes the cell and puts the cursor in the next cell below, or makes a new one if you are at the end. Alternately, you can use: - `Alt-Enter` to force the creation of a new cell unconditionally (useful when inserting new content in the middle of an existing notebook).- `Control-Enter` executes the cell and keeps the cursor in the same cell, useful for quick experimentation of snippets that you don't need to keep permanently.
###Code
print("Hi")
###Output
Hi
###Markdown
Getting help:
###Code
?
###Output
_____no_output_____
###Markdown
Typing `object_name?` will print all sorts of details about any object, including docstrings, function definition lines (for call arguments) and constructor details for classes.
###Code
import collections
collections.namedtuple?
collections.Counter??
*int*?
###Output
_____no_output_____
###Markdown
An IPython quick reference card:
###Code
%quickref
###Output
_____no_output_____
###Markdown
Tab completion Tab completion, especially for attributes, is a convenient way to explore the structure of any object you're dealing with. Simply type `object_name.<TAB>` to view the object's attributes. Besides Python objects and keywords, tab completion also works on file and directory names.
###Code
collections.
###Output
_____no_output_____
###Markdown
The interactive workflow: input, output, history
###Code
2+10
_+10
###Output
_____no_output_____
###Markdown
You can suppress the storage and rendering of output if you append `;` to the last cell (this comes in handy when plotting with matplotlib, for example):
###Code
10+20;
_
###Output
_____no_output_____
###Markdown
The output is stored in `_N` and `Out[N]` variables:
###Code
_10 == Out[10]
###Output
_____no_output_____
###Markdown
And the last three have shorthands for convenience:
###Code
from __future__ import print_function
print('last output:', _)
print('next one :', __)
print('and next :', ___)
In[11]
_i
_ii
print('last input:', _i)
print('next one :', _ii)
print('and next :', _iii)
%history -n 1-5
###Output
1: print("Hi")
2: ?
3:
import collections
collections.namedtuple?
4: collections.Counter??
5: *int*?
###Markdown
**Exercise**Write the last 10 lines of history to a file named `log.py`. Accessing the underlying operating system
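One possible solution to the exercise above, assuming the -l (last n lines) and -f (write to file) options of %history apply in your version (a sketch, not the notebook's own answer; run %history? to confirm the flags):
###Code
# write the last 10 history lines to log.py
%history -l 10 -f log.py
###Output
_____no_output_____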
###Code
!pwd
files = !ls
print("My current directory's files:")
print(files)
!echo $files
!echo {files[0].upper()}
###Output
ANIMATIONS USING CLEAR_OUTPUT.IPYNB
###Markdown
Note that all this is available even in multiline blocks:
###Code
import os
for i,f in enumerate(files):
if f.endswith('ipynb'):
!echo {"%02d" % i} - "{os.path.splitext(f)[0]}"
else:
print('--')
###Output
00 - Animations Using clear_output
01 - Background Jobs
02 - Beyond Plain Python
03 - Capturing Output
04 - Cell Magics
05 - Custom Display Logic
06 - Index
07 - Old Custom Display Logic
08 - Plotting in the Notebook
09 - Raw Input in the Notebook
10 - Rich Output
11 - Script Magics
12 - SymPy
13 - Terminal Usage
14 - Third Party Rich Output
15 - Trapezoid Rule
16 - Working With External Code
--
--
--
--
--
--
--
--
--
--
###Markdown
Beyond Python: magic functions The IPython 'magic' functions are a set of commands, invoked by prepending one or two `%` signs to their name, that live in a namespace separate from your normal Python variables and provide a more command-like interface. They take flags with `--` and arguments without quotes, parentheses or commas. The motivation behind this system is two-fold: - To provide an orthogonal namespace for controlling IPython itself and exposing other system-oriented functionality.- To expose a calling mode that requires minimal verbosity and typing while working interactively. Thus the inspiration taken from the classic Unix shell style for commands.
###Code
%magic
###Output
_____no_output_____
###Markdown
Line vs cell magics:
###Code
%timeit list(range(1000))
%%timeit
list(range(10))
list(range(100))
###Output
100000 loops, best of 3: 2.78 µs per loop
###Markdown
Line magics can be used even inside code blocks:
###Code
for i in range(1, 5):
size = i*100
print('size:', size, end=' ')
%timeit list(range(size))
###Output
size: 100 100000 loops, best of 3: 1.86 µs per loop
size: 200 100000 loops, best of 3: 2.49 µs per loop
size: 300 100000 loops, best of 3: 4.04 µs per loop
size: 400 100000 loops, best of 3: 6.21 µs per loop
###Markdown
Magics can do anything they want with their input, so it doesn't have to be valid Python:
###Code
%%bash
echo "My shell is:" $SHELL
echo "My disk usage is:"
df -h
###Output
My shell is: /usr/local/bin/bash
My disk usage is:
Filesystem Size Used Avail Capacity iused ifree %iused Mounted on
/dev/disk1 233Gi 216Gi 16Gi 94% 56788108 4190706 93% /
devfs 190Ki 190Ki 0Bi 100% 656 0 100% /dev
map -hosts 0Bi 0Bi 0Bi 100% 0 0 100% /net
map auto_home 0Bi 0Bi 0Bi 100% 0 0 100% /home
###Markdown
Another interesting cell magic: create any file you want locally from the notebook:
###Code
%%writefile test.txt
This is a test file!
It can contain anything I want...
And more...
!cat test.txt
###Output
This is a test file!
It can contain anything I want...
And more...
###Markdown
Let's see what other magics are currently defined in the system:
###Code
%lsmagic
###Output
_____no_output_____
###Markdown
Running normal Python code: execution and errors Not only can you input normal Python code, you can even paste straight from a Python or IPython shell session:
###Code
>>> # Fibonacci series:
... # the sum of two elements defines the next
... a, b = 0, 1
>>> while b < 10:
... print(b)
... a, b = b, a+b
In [1]: for i in range(10):
...: print(i, end=' ')
...:
###Output
0 1 2 3 4 5 6 7 8 9
###Markdown
And when your code produces errors, you can control how they are displayed with the `%xmode` magic:
###Code
%%writefile mod.py
def f(x):
return 1.0/(x-1)
def g(y):
return f(y+1)
###Output
Overwriting mod.py
###Markdown
Now let's call the function `g` with an argument that would produce an error:
###Code
import mod
mod.g(0)
%xmode plain
mod.g(0)
%xmode verbose
mod.g(0)
###Output
Exception reporting mode: Verbose
###Markdown
The default `%xmode` is "context", which shows additional context but not all local variables. Let's restore that one for the rest of our session.
###Code
%xmode context
###Output
Exception reporting mode: Context
###Markdown
Running code in other languages with special `%%` magics
###Code
%%perl
@months = ("July", "August", "September");
print $months[0];
%%ruby
name = "world"
puts "Hello #{name.capitalize}!"
###Output
Hello World!
###Markdown
Raw Input in the notebook Since 1.0 the IPython notebook web application supports `raw_input`, which for example allows us to invoke the `%debug` magic in the notebook:
###Code
mod.g(0)
%debug
###Output
> [1;32m/Users/minrk/dev/ip/mine/examples/IPython Kernel/mod.py[0m(3)[0;36mf[1;34m()[0m
[1;32m 2 [1;33m[1;32mdef[0m [0mf[0m[1;33m([0m[0mx[0m[1;33m)[0m[1;33m:[0m[1;33m[0m[0m
[0m[1;32m----> 3 [1;33m [1;32mreturn[0m [1;36m1.0[0m[1;33m/[0m[1;33m([0m[0mx[0m[1;33m-[0m[1;36m1[0m[1;33m)[0m[1;33m[0m[0m
[0m[1;32m 4 [1;33m[1;33m[0m[0m
[0m
ipdb> up
> [1;32m/Users/minrk/dev/ip/mine/examples/IPython Kernel/mod.py[0m(6)[0;36mg[1;34m()[0m
[1;32m 4 [1;33m[1;33m[0m[0m
[0m[1;32m 5 [1;33m[1;32mdef[0m [0mg[0m[1;33m([0m[0my[0m[1;33m)[0m[1;33m:[0m[1;33m[0m[0m
[0m[1;32m----> 6 [1;33m [1;32mreturn[0m [0mf[0m[1;33m([0m[0my[0m[1;33m+[0m[1;36m1[0m[1;33m)[0m[1;33m[0m[0m
[0m
ipdb> down
> [1;32m/Users/minrk/dev/ip/mine/examples/IPython Kernel/mod.py[0m(3)[0;36mf[1;34m()[0m
[1;32m 2 [1;33m[1;32mdef[0m [0mf[0m[1;33m([0m[0mx[0m[1;33m)[0m[1;33m:[0m[1;33m[0m[0m
[0m[1;32m----> 3 [1;33m [1;32mreturn[0m [1;36m1.0[0m[1;33m/[0m[1;33m([0m[0mx[0m[1;33m-[0m[1;36m1[0m[1;33m)[0m[1;33m[0m[0m
[0m[1;32m 4 [1;33m[1;33m[0m[0m
[0m
ipdb> bt
[1;32m<ipython-input-46-5e708f13c839>[0m(1)[0;36m<module>[1;34m()[0m
[1;32m----> 1 [1;33m[0mmod[0m[1;33m.[0m[0mg[0m[1;33m([0m[1;36m0[0m[1;33m)[0m[1;33m[0m[0m
[0m
[1;32m/Users/minrk/dev/ip/mine/examples/IPython Kernel/mod.py[0m(6)[0;36mg[1;34m()[0m
[0;32m 2 [0m[1;32mdef[0m [0mf[0m[1;33m([0m[0mx[0m[1;33m)[0m[1;33m:[0m[1;33m[0m[0m
[0;32m 3 [0m [1;32mreturn[0m [1;36m1.0[0m[1;33m/[0m[1;33m([0m[0mx[0m[1;33m-[0m[1;36m1[0m[1;33m)[0m[1;33m[0m[0m
[0;32m 4 [0m[1;33m[0m[0m
[0;32m 5 [0m[1;32mdef[0m [0mg[0m[1;33m([0m[0my[0m[1;33m)[0m[1;33m:[0m[1;33m[0m[0m
[1;32m----> 6 [1;33m [1;32mreturn[0m [0mf[0m[1;33m([0m[0my[0m[1;33m+[0m[1;36m1[0m[1;33m)[0m[1;33m[0m[0m
[0m
> [1;32m/Users/minrk/dev/ip/mine/examples/IPython Kernel/mod.py[0m(3)[0;36mf[1;34m()[0m
[1;32m 1 [1;33m[1;33m[0m[0m
[0m[1;32m 2 [1;33m[1;32mdef[0m [0mf[0m[1;33m([0m[0mx[0m[1;33m)[0m[1;33m:[0m[1;33m[0m[0m
[0m[1;32m----> 3 [1;33m [1;32mreturn[0m [1;36m1.0[0m[1;33m/[0m[1;33m([0m[0mx[0m[1;33m-[0m[1;36m1[0m[1;33m)[0m[1;33m[0m[0m
[0m[1;32m 4 [1;33m[1;33m[0m[0m
[0m[1;32m 5 [1;33m[1;32mdef[0m [0mg[0m[1;33m([0m[0my[0m[1;33m)[0m[1;33m:[0m[1;33m[0m[0m
[0m
ipdb> exit
###Markdown
Don't forget to exit your debugging session. Raw input can of course be used to ask for user input:
###Code
enjoy = input('Are you enjoying this tutorial? ')
print('enjoy is:', enjoy)
###Output
Are you enjoying this tutorial? yes
enjoy is: yes
###Markdown
Plotting in the notebook This magic configures matplotlib to render its figures inline:
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 2*np.pi, 300)
y = np.sin(x**2)
plt.plot(x, y)
plt.title("A little chirp")
fig = plt.gcf() # let's keep the figure object around for later...
###Output
_____no_output_____
###Markdown
The IPython kernel/client model
###Code
%connect_info
###Output
{
"stdin_port": 62401,
"key": "64c935a7-64e8-4ab7-ab22-6e0f3ff84e02",
"hb_port": 62403,
"transport": "tcp",
"signature_scheme": "hmac-sha256",
"shell_port": 62399,
"control_port": 62402,
"ip": "127.0.0.1",
"iopub_port": 62400
}
Paste the above JSON into a file, and connect with:
$> ipython <app> --existing <file>
or, if you are local, you can connect with just:
$> ipython <app> --existing kernel-25383540-ce7f-4529-900a-ded0e510d5d8.json
or even just:
$> ipython <app> --existing
if this is the most recent IPython session you have started.
###Markdown
We can automatically connect a Qt Console to the currently running kernel with the `%qtconsole` magic, or by typing `ipython console --existing ` in any terminal:
###Code
%qtconsole
###Output
_____no_output_____ |
2. Machine_Learning_Regression/K-Nearest Neighborhood Regression.ipynb | ###Markdown
Predicting house prices using k-nearest neighbors regression In this notebook, you will implement k-nearest neighbors regression. You will: find the k-nearest neighbors of a given query input, predict the output for the query input using the k-nearest neighbors, and choose the best value of k using a validation set.
###Code
import numpy as np
import pandas as pd
dtype_dict = {'bathrooms':float, 'waterfront':int, 'sqft_above':int, 'sqft_living15':float, 'grade':int, 'yr_renovated':int, 'price':float, 'bedrooms':float, 'zipcode':str, 'long':float, 'sqft_lot15':float, 'sqft_living':float, 'floors':float, 'condition':int, 'lat':float, 'date':str, 'sqft_basement':int, 'yr_built':int, 'id':str, 'sqft_lot':int, 'view':int}
sales = pd.read_csv('kc_house_data_small.csv', dtype = dtype_dict)
train = pd.read_csv('kc_house_data_small_train.csv', dtype = dtype_dict)
test = pd.read_csv('kc_house_data_small_test.csv', dtype = dtype_dict)
validate = pd.read_csv('kc_house_data_validation 2.csv', dtype = dtype_dict)
###Output
_____no_output_____
###Markdown
3. To efficiently compute pairwise distances among data points, we will convert the SFrame (or dataframe) into a 2D Numpy array. First import the numpy library and then copy and paste get_numpy_data() (or equivalent). The function takes a dataset, a list of features (e.g. [‘sqft_living’, ‘bedrooms’]) to be used as inputs, and a name of the output (e.g. ‘price’). It returns a ‘features_matrix’ (2D array) consisting of a column of ones followed by columns containing the values of the input features in the data set in the same order as the input list. It also returns an ‘output_array’, which is an array of the values of the output in the dataset (e.g. ‘price’).
###Code
def get_numpy_data(data, features, output):
data['constant'] = 1 # add a constant column to a dataframe
# prepend variable 'constant' to the features list
features = ['constant'] + features
    # select the columns of the dataframe given by the 'features' list
    # and convert them into a 2D numpy array (`.values`; `as_matrix` was removed in newer pandas)
    features_matrix = data[features].values
    # select the output column and convert it into a 1D numpy array
    output_array = data[output].values
return(features_matrix, output_array)
###Output
_____no_output_____
###Markdown
Similarly, copy and paste the normalize_features function (or equivalent) from Module 5 (Ridge Regression). Given a feature matrix, each column is divided (element-wise) by its 2-norm. The function returns two items: (i) a feature matrix with normalized columns and (ii) the norms of the original columns.
###Code
def normalize_features(features):
norms = np.sqrt(np.sum(features**2,axis=0))
normlized_features = features/norms
return (normlized_features, norms)
###Output
_____no_output_____
###Markdown
Using get_numpy_data (or equivalent), extract numpy arrays of the training, test, and validation sets.
###Code
features = [m for m,n in dtype_dict.items() if train[m].dtypes != object]
features
features.remove('price')
training_feature_matrix, training_output = get_numpy_data(train, features, 'price')
testing_feature_matrix, testing_output = get_numpy_data(test, features, 'price')
validating_feature_matrix, validating_output = get_numpy_data(validate, features, 'price')
###Output
_____no_output_____
###Markdown
In computing distances, it is crucial to normalize features. Otherwise, for example, the ‘sqft_living’ feature (typically on the order of thousands) would exert a much larger influence on distance than the ‘bedrooms’ feature (typically on the order of ones). We divide each column of the training feature matrix by its 2-norm, so that the transformed column has unit norm.IMPORTANT: Make sure to store the norms of the features in the training set. The features in the test and validation sets must be divided by these same norms, so that the training, test, and validation sets are normalized consistently.e.g. in Python:
###Code
features_train, norms = normalize_features(training_feature_matrix)
features_test = testing_feature_matrix / norms
features_valid = validating_feature_matrix / norms
###Output
_____no_output_____
###Markdown
Compute a single distance To start, let's just explore computing the “distance” between two given houses. We will take our query house to be the first house of the test set and look at the distance between this house and the 10th house of the training set.To see the features associated with the query house, print the first row (index 0) of the test feature matrix. You should get an 18-dimensional vector whose components are between 0 and 1. Similarly, print the 10th row (index 9) of the training feature matrix.
###Code
print(features_test[0])
print(features_train[9])
###Output
[ 0.01345102 0.01807473 0.01375926 0.01362084 0.01564352 0.01350306
0.01551285 -0.01346922 0.0016225 0.01759212 0.017059 0.00160518
0. 0.02481682 0. 0.01345387 0.0116321 0.05102365]
[ 0.01345102 0.00602491 0.01195898 0.0096309 0.01390535 0.01302544
0.01163464 -0.01346251 0.00156612 0.0083488 0.01279425 0.00050756
0. 0. 0. 0.01346821 0.01938684 0. ]
###Markdown
Quiz Question: What is the Euclidean distance between the query house and the 10th house of the training set?
###Code
np.sqrt(np.sum((features_train[9] - features_test[0])**2))
###Output
_____no_output_____
###Markdown
Of course, to do nearest neighbor regression, we need to compute the distance between our query house and all houses in the training set.To visualize this nearest-neighbor search, let's first compute the distance from our query house (features_test[0]) to the first 10 houses of the training set (features_train[0:10]) and then search for the nearest neighbor within this small set of houses. Through restricting ourselves to a small set of houses to begin with, we can visually scan the list of 10 distances to verify that our code for finding the nearest neighbor is working.Write a loop to compute the Euclidean distance from the query house to each of the first 10 houses in the training set. Quiz Question: Among the first 10 training houses, which house is the closest to the query house?
###Code
distance = {}
for i in range(10):
distance[i] = np.sqrt(np.sum((features_train[i] - features_test[0])**2))
print(distance)
distance_2 = []
for x,y in distance.items():
distance_2.append((y,x))
distance_2.sort()
distance_2
###Output
_____no_output_____
###Markdown
It is computationally inefficient to loop over computing distances to all houses in our training dataset. Fortunately, many of the numpy functions can be vectorized, applying the same operation over multiple values or vectors. We now walk through this process. (The material up to 13 is specific to numpy; if you are using other languages such as R or Matlab, consult relevant manuals on vectorization.)Consider the following loop that computes the element-wise difference between the features of the query house (features_test[0]) and the first 3 training houses (features_train[0:3]):
###Code
for i in range(3):
    print(features_train[i]-features_test[0])
# should print 3 vectors of length 18
print(features_train[0:3] - features_test[0])
###Output
[[ 0.00000000e+00 -1.20498190e-02 -5.14364795e-03 -5.50336860e-03
-3.47633726e-03 -1.63756198e-04 -3.87821276e-03 1.29876855e-05
6.69281453e-04 -1.05552733e-02 -8.52950206e-03 2.08673616e-04
0.00000000e+00 -2.48168183e-02 0.00000000e+00 -1.70254220e-05
0.00000000e+00 -5.10236549e-02]
[ 0.00000000e+00 -4.51868214e-03 -2.89330197e-03 1.30705004e-03
-3.47633726e-03 -1.91048898e-04 -3.87821276e-03 6.16364736e-06
1.47606982e-03 -2.26610387e-03 0.00000000e+00 7.19763456e-04
0.00000000e+00 -1.45830788e-02 6.65082271e-02 4.23090220e-05
0.00000000e+00 -5.10236549e-02]
[ 0.00000000e+00 -1.20498190e-02 3.72914476e-03 -8.32384500e-03
-5.21450589e-03 -3.13866046e-04 -7.75642553e-03 1.56292487e-05
1.64764925e-03 -1.30002801e-02 -8.52950206e-03 1.60518166e-03
0.00000000e+00 -2.48168183e-02 0.00000000e+00 4.70885840e-05
0.00000000e+00 -5.10236549e-02]]
###Markdown
Note that the output of this vectorized operation is identical to that of the loop above, which can be verified below:
###Code
# verify that vectorization works
results = features_train[0:3] - features_test[0]
print(results[0] - (features_train[0]-features_test[0]))
# should print all 0's if results[0] == (features_train[0]-features_test[0])
print(results[1] - (features_train[1]-features_test[0]))
# should print all 0's if results[1] == (features_train[1]-features_test[0])
print(results[2] - (features_train[2]-features_test[0]))
# should print all 0's if results[2] == (features_train[2]-features_test[0])
###Output
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
###Markdown
Perform 1-nearest neighbor regression Now that we have the element-wise differences, it is not too hard to compute the Euclidean distances between our query house and all of the training houses. First, write a single-line expression to define a variable ‘diff’ such that ‘diff[i]’ gives the element-wise difference between the features of the query house and the i-th training house.To test your code, print diff[-1].sum(), which should be -0.0934339605842.
###Code
diff = features_train[:] - features_test[0]
diff
features_train - features_test[0]
diff[-1].sum()
###Output
_____no_output_____
###Markdown
The next step in computing the Euclidean distances is to take these feature-by-feature differences in ‘diff’, square each, and take the sum over feature indices. That is, compute the sum of squared feature differences for each training house (row in ‘diff’). By default, ‘np.sum’ sums up everything in the matrix and returns a single number. To instead sum only over a row or column, we need to specify the ‘axis’ parameter described in the np.sum documentation. In particular, ‘axis=1’ computes the sum across each row.
###Code
total_row = np.sum(diff**2, axis=1)
total_row.shape
diff.shape
###Output
_____no_output_____
###Markdown
The expression `np.sum(diff**2, axis=1)` computes this sum of squared feature differences for all training houses. Verify that the two expressions below give the same result for the 16th training house:
###Code
np.sum(diff**2, axis=1)[15]
np.sum(diff[15]**2)
###Output
_____no_output_____
###Markdown
With this result in mind, write a single-line expression to compute the Euclidean distances from the query to all the instances. Assign the result to variable distances.Hint: don't forget to take the square root of the sum of squares.Hint: distances[100] should contain 0.0237082324496.
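One way to write that single line, reusing the `diff` array defined above (the variable name follows the assignment text):
```python
# Euclidean distance from the query house to every training house
distances = np.sqrt(np.sum(diff**2, axis=1))
print(distances[100])  # expected ~0.0237082324496
```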
###Code
np.sqrt(sum(diff[100]**2))
###Output
_____no_output_____
###Markdown
Now you are ready to write a function that computes the distances from a query house to all training houses. The function should take two parameters: (i) the matrix of training features and (ii) the single feature vector associated with the query.
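For comparison with that description, a version taking the two stated parameters explicitly might look like the sketch below (illustrative name `compute_distances_between`; the cell that follows instead looks the query up by position in the global `features_test`):
```python
def compute_distances_between(features_instances, features_query):
    # features_instances: 2D array of training features
    # features_query: 1D feature vector of the query house
    diff = features_instances - features_query
    return np.sqrt(np.sum(diff**2, axis=1))
```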
###Code
def compute_distances(features_query):
diff = features_train - features_test[features_query]
distances = np.sqrt(np.sum(diff**2, axis=1))
return distances
dist = compute_distances(2)
dist
min(dist)
np.argmin(dist)
###Output
_____no_output_____
###Markdown
Quiz Question: What is the predicted value of the query house based on 1-nearest neighbor regression?
###Code
training_output[382]
###Output
_____no_output_____
###Markdown
Perform k-nearest neighbor regression

Using the functions above, implement a function that takes in:
- the value of k;
- the feature matrix for the instances; and
- the feature of the query

and returns the indices of the k closest training houses. For instance, with 2-nearest neighbor, a return value of [5, 10] would indicate that the 6th and 11th training houses are closest to the query house.
###Code
def k_nearest_neighbors(k, feat_query):
distance = compute_distances(feat_query)
# print np.sort(distance)[:k]
return np.argsort(distance)[0:k]
###Output
_____no_output_____
###Markdown
Quiz Question: Take the query house to be third house of the test set (features_test[2]). What are the indices of the 4 training houses closest to the query house?
###Code
k_nearest_neighbors(4,2)
###Output
_____no_output_____
###Markdown
Now that we know how to find the k-nearest neighbors, write a function that predicts the value of a given query house. For simplicity, take the average of the prices of the k nearest neighbors in the training set. The function should have the following parameters:
- the value of k;
- the feature matrix for the instances;
- the output values (prices) of the instances; and
- the feature of the query, whose price we're predicting.

The function should return a predicted value of the query house.
###Code
def predict_output_of_query(k, features_train, output_train, features_query):
prediction = np.sum(output_train[k_nearest_neighbors(k,features_query)])/k
return prediction
###Output
_____no_output_____
###Markdown
Quiz Question: Make predictions for the first 10 houses in the test set, using k=10. What is the index of the house in this query set that has the lowest predicted value? What is the predicted value of this house?
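The quiz answer can also be picked out programmatically; a small sketch using the helper defined above (`predictions_10` is just an illustrative name):
```python
predictions_10 = [predict_output_of_query(10, features_train, training_output, m) for m in range(10)]
print(np.argmin(predictions_10), min(predictions_10))  # index and value of the lowest prediction
```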
###Code
for m in range(10):
    print(m, predict_output_of_query(10, features_train, training_output, m))
###Output
0 881300.0
1 431860.0
2 460595.0
3 430200.0
4 766750.0
5 667420.0
6 350032.0
7 512800.7
8 484000.0
9 457235.0
###Markdown
Choosing the best value of k using a validation set

There remains a question of choosing the value of k to use in making predictions. Here, we use a validation set to choose this value. Write a loop that does the following:

For k in [1, 2, … 15]:
- Make predictions for the VALIDATION data using the k-nearest neighbors from the TRAINING data.
- Compute the RSS on VALIDATION data

Report which k produced the lowest RSS on validation data.
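A sketch of the reporting step, meant to run after the loop in the next cell has filled `rss_all` (`best_k` is an illustrative name):
```python
best_k = np.argmin(rss_all) + 1  # +1 because rss_all[0] corresponds to k=1
print("Lowest validation RSS at k =", best_k)
```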
###Code
rss_all = np.zeros(15)
for k in range(1,16):
    # predict each validation house as the average price of its k nearest training houses;
    # the original call passed the whole validation matrix to a function expecting a single query index
    predictions_k = np.array([np.mean(training_output[np.argsort(np.sqrt(np.sum((features_train - query)**2, axis=1)))[:k]])
                              for query in features_valid])
    rss_all[k-1] = np.sum((predictions_k - validating_output)**2)
###Output
_____no_output_____ |
Argentina - Mondiola Rock - 90 pts/Practica/TP1/ejercicio 4/.ipynb_checkpoints/Ejercicio 4-checkpoint.ipynb | ###Markdown
EXERCISE 4

The "Iris" dataset has been used as a test case for a large number of classifiers and is perhaps the best-known dataset in the literature. Iris is a variety of plant that we want to classify according to its type. Three distinct types are recognized: 'Iris setosa', 'Iris versicolor' and 'Iris virginica'. The goal is to classify a plant of the Iris variety from the length and width of the petal and the length and width of the sepal. The Iris dataset is made up of 150 samples in total, 50 of each of the three plant types. Each sample is composed of the plant type, the petal length and width, and the sepal length and width. All attributes are continuous numeric values. $$\begin{array}{|c|c|c|c|c|}\hline X & Setosa & Versicolor & Virginica & Invalid \\\hline Setosa & 50 & 0 & 0 & 0 \\\hline Versicolor & 0 & 50 & 0 & 0 \\\hline Virginica & 0 & 0 & 50 & 0 \\\hline \end{array}$$
###Code
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import mpld3
%matplotlib inline
mpld3.enable_notebook()
from cperceptron import Perceptron
from cbackpropagation import ANN #, Identidad, Sigmoide
import patrones as magia
def progreso(ann, X, T, y=None, n=-1, E=None):
if n % 20 == 0:
print("Pasos: {0} - Error: {1:.32f}".format(n, E))
def progresoPerceptron(perceptron, X, T, n):
y = perceptron.evaluar(X)
incorrectas = (T != y).sum()
print("Pasos: {0}\tIncorrectas: {1}\n".format(n, incorrectas))
iris = np.load('iris.npy')
# Build the training and test patterns
clases, patronesEnt, patronesTest = magia.generar_patrones(
magia.escalar(iris[:,1:]).round(4),iris[:,:1],80)
X, T = magia.armar_patrones_y_salida_esperada(clases,patronesEnt)
clases, patronesEnt, noImporta = magia.generar_patrones(
magia.escalar(iris[:,1:]),iris[:,:1],100)
Xtest, Ttest = magia.armar_patrones_y_salida_esperada(clases,patronesEnt)
###Output
_____no_output_____
###Markdown
a) Train perceptrons so that each one learns to recognize one of the different types of Iris plants. Report the parameters used for training and the performance obtained. Use all the patterns for training. Show the confusion matrix for the best classification obtained after training, reporting the correctly and incorrectly classified patterns.
###Code
print("Entrenando P1:")
p1 = Perceptron(X.shape[1])
I1 = p1.entrenar_numpy(X, T[:,0], max_pasos=5000, callback=progresoPerceptron, frecuencia_callback=2500)
print("Pasos:{0}".format(I1))
print("\nEntrenando P2:")
p2 = Perceptron(X.shape[1])
I2 = p2.entrenar_numpy(X, T[:,1], max_pasos=5000, callback=progresoPerceptron, frecuencia_callback=2500)
print("Pasos:{0}".format(I2))
print("\nEntrenando P3:")
p3 = Perceptron(X.shape[1])
I3 = p3.entrenar_numpy(X, T[:,2], max_pasos=5000, callback=progresoPerceptron, frecuencia_callback=2500)
print("Pasos:{0}".format(I3))
Y = np.vstack((p1.evaluar(Xtest),p2.evaluar(Xtest),p3.evaluar(Xtest))).T
magia.matriz_de_confusion(Ttest,Y)
###Output
_____no_output_____
###Markdown
b) Train an artificial neural network using backpropagation as the learning algorithm in order to achieve the requested classification. Use all the patterns for training. Detail the parameters used for training as well as the architecture of the neural network. Repeat the procedure more than once to confirm the results obtained, and report the confusion matrix for the best classification obtained.
###Code
# Create the neural network
ocultas = 10
entradas = X.shape[1]
salidas = T.shape[1]
ann = ANN(entradas, ocultas, salidas)
ann.reiniciar()
# Train the network
E, n = ann.entrenar_rprop(X, T, min_error=0, max_pasos=100000, callback=progreso, frecuencia_callback=10000)
print("\nRed entrenada en {0} pasos con un error de {1:.32f}".format(n, E))
# Evaluate on the test patterns
Y = (ann.evaluar(Xtest) >= 0.97)
magia.matriz_de_confusion(Ttest,Y)
(ann.evaluar(Xtest)[90])
###Output
_____no_output_____ |
colloidal_chemotaxis.ipynb | ###Markdown
Passive and active colloidal chemotaxis in a microfluidic channel: mesoscopic and stochastic models**Author:** Pierre de Buyl and Laurens Deprez *Supplemental information to the article by L. Deprez and P. de Buyl*The data originates from the RMPCDMD simulation program. Please read its documentation and thepublished paper for meaningful use of this notebook.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.special import erfc, erf
from scipy.integrate import quad, nquad
from collections import namedtuple
import h5py
import os.path
from glob import glob
plt.rcParams['figure.figsize'] = (8, 6)
plt.rcParams['figure.subplot.hspace'] = 0.25
plt.rcParams['figure.subplot.wspace'] = 0.
plt.rcParams['figure.subplot.left'] = 0.05
plt.rcParams['figure.subplot.right'] = 0.95
plt.rcParams['figure.subplot.bottom'] = 0.19
plt.rcParams['figure.subplot.top'] = 0.91
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['font.size'] = 14
plt.rcParams['xtick.labelsize'] = 11
plt.rcParams['ytick.labelsize'] = 11
colors = ['#1f77b4', '#ff7f0e']
%load_ext Cython
# Set the following variable to point to the location of the chemotaxis mesoscopic simulations
mesoscopic_directory = '.'
# compute D, v_flow
fluid = namedtuple('fluid', ['tau', 'T', 'rho', 'alpha', 'm', 'a', 'eta'])
cell = namedtuple('cell', ['Ly', 'Lz', 'g'])
colloid = namedtuple('colloid', ['sigma', 'R'])
fluid.tau = 0.5
fluid.T = 0.33
fluid.rho = 10
fluid.alpha = 2.6
fluid.m = 1
fluid.a = 1
buffer_length = 20
# Kapral review Eq. 55
eta_kin = fluid.T * fluid.tau * fluid.rho / (2*fluid.m) * \
(5*fluid.rho-(fluid.rho - 1 + np.exp(-fluid.rho))*(2 - np.cos(fluid.alpha)-np.cos(2*fluid.alpha)))/ \
((fluid.rho - 1 + np.exp(-fluid.rho))*(2 - np.cos(fluid.alpha)-np.cos(2*fluid.alpha)))
# Kapral review Eq. 56
eta_coll = fluid.m / (18 * fluid.a * fluid.tau) * (fluid.rho - 1 + np.exp(-fluid.rho))*(1-np.cos(fluid.alpha))
fluid.eta = eta_kin + eta_coll
print("Viscosity", fluid.eta)
fluid.D = fluid.T*fluid.tau/(2*fluid.m) * (3*fluid.rho/((fluid.rho - 1 + np.exp(-fluid.rho))*(1-np.cos(fluid.alpha))) - 1)
print("Self-diffusion D", fluid.D)
cell.Ly = 60
cell.Lz = 15
cell.g = 1/1000
def v_of_eta(fluid, cell):
return fluid.rho*cell.g*cell.Lz**2/(8*fluid.eta)
v_max = v_of_eta(fluid, cell)
v_av = 2/3*v_max
print("Flow maximum ", v_max)
print("Flow average ", v_av)
print("Poiseuille flow Peclet number", v_av*cell.Lz/fluid.D)
colloid.sigma = 3
colloid.R = colloid.sigma*2**(1/6)
all_EPS = ['0.25', '0.50', '1.00', '2.00', '4.00']
# Quantities for the catalytic reaction on the surface of the colloid
probability = 1
k0 = probability*colloid.R**2*np.sqrt(8*np.pi*fluid.T/fluid.m)
kD = 4*np.pi*colloid.R*fluid.D
# define c_A(x,y) and lambda (derivative)
def c_A(x,y):
return fluid.rho * 0.5*(1+erf(-(y-cell.Ly/2)/np.sqrt(4*fluid.D*x/v_max)))
def lam(x,y):
return -fluid.rho*np.exp(-(y-cell.Ly/2)**2/(4*fluid.D*x/v_max))/np.sqrt(4*np.pi*fluid.D*x/v_max)
# define Lambda(R, eps)
def V(r, sigma, eps):
return 4*eps*((sigma/r)**12-(sigma/r)**6) + eps
def integrand(r, sigma, eps):
return r*np.exp(-V(r, sigma, eps)/fluid.T)
def Lambda(R, eps):
result, error = quad(integrand, colloid.R/2, colloid.R, args=(colloid.sigma, eps))
return result - colloid.R**2/2
# define placeholder dicts for the numerical data
passive_sphere_meso = {}
passive_sphere_stoc = {}
active_sphere_meso = {}
active_sphere_stoc = {}
nanomotor_meso = {}
nanomotor_stoc = {}
###Output
_____no_output_____
###Markdown
Single passive colloid

Here, the setup is a single passive colloid in the channel: it is advected by the flow along $x$ and feels a chemotactic force along $y$ set by the local concentration gradient, plus thermal noise. The stochastic trajectories generated below are then compared with the mesoscopic simulation data.
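As implemented in `run_single_passive` below (this is a reading of the code, stated here for reference), each time step applies the overdamped Langevin update
$$ x_{t+\mathrm{d}t} = x_t + v_\mathrm{max}\,\mathrm{d}t + \sqrt{2D\,\mathrm{d}t}\,\xi_x, \qquad y_{t+\mathrm{d}t} = y_t + \frac{F_y}{\gamma}\,\mathrm{d}t + \sqrt{2D\,\mathrm{d}t}\,\xi_y $$
with $\gamma = 4\pi\eta\sigma$, $D = T/\gamma$, unit-variance Gaussian numbers $\xi_x, \xi_y$, and a passive chemotactic force $F_y = \frac{8\pi T}{3} R\,\left[\Lambda(R,1)-\Lambda(R,\epsilon_F)\right]\lambda(x,y)$, where $\lambda$ is the transverse concentration gradient defined above.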
###Code
# Single passive colloid
# Lambda lambda
sigma = colloid.sigma
R = colloid.R
y_shift = 3.4
dt = 0.01
gamma = 4*np.pi*fluid.eta*sigma
D = fluid.T/gamma
x_factor = np.sqrt(2*D*dt)
y_factor = np.sqrt(2*D*dt)
def run_single_passive(passive_EPS):
F_factor = 8*np.pi*fluid.T/3 * R * (Lambda(R, 1)-Lambda(R, float(passive_EPS)))/gamma
x, y = sigma, cell.Ly/2 + y_shift
xy_data = []
for t in range(1000):
for tt in range(50):
F_y = F_factor * lam(x, y)
xi_x, xi_y = np.random.normal(size=(2,))
x += v_max*dt + x_factor*xi_x
y += F_y*dt + y_factor*xi_y
xy_data.append((x,y))
return np.array(xy_data)
# Collect mesoscopic simulation data
for passive_EPS in ['0.25', '0.50', '1.00', '2.00', '4.00']:
runs = glob(os.path.join(mesoscopic_directory, 'passive_sphere_EPS{}_*/passive_sphere_no_solvent.h5'.format(passive_EPS)))
runs.sort()
xy_data = []
for r in runs:
with h5py.File(r, 'r') as a:
xy_data.append(a['/particles/dimer/position/value'][:,0,:2])
passive_sphere_meso[passive_EPS] = np.array(xy_data)
# Generate stochastic simulation data
for passive_EPS in all_EPS:
passive_sphere_stoc[passive_EPS] = np.array([run_single_passive(passive_EPS) for i in range(16)])
plt.figure(figsize=(12,6))
for i, passive_EPS in enumerate(all_EPS):
plt.subplot(2, 3, i+1)
m = passive_sphere_stoc[passive_EPS].mean(axis=0).T
s = passive_sphere_stoc[passive_EPS].std(axis=0).T
color = colors[0]
plt.fill_between(m[0,:], m[1,:]-s[1,:], m[1,:]+s[1,:], color=color, alpha=0.5)
plt.plot(*m, color=color, lw=2)
m = passive_sphere_meso[passive_EPS][:,450:].mean(axis=0).T
s = passive_sphere_meso[passive_EPS][:,450:].std(axis=0).T
m[0,:] -= 20
color = colors[1]
plt.fill_between(m[0,:], m[1,:]-s[1,:], m[1,:]+s[1,:], color=color, alpha=0.5)
plt.plot(*m, color=color, lw=2)
plt.xlim(0, 26)
plt.ylim(25, 40)
plt.text(1, 26, r'$\epsilon_F='+passive_EPS+'$')
if i//3==1: plt.xlabel(r'$x$')
if i%3==0: plt.ylabel(r'$y$')
###Output
_____no_output_____
###Markdown
Single active colloid
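The only change with respect to the passive sphere is the amplitude and sign of the chemotactic force; following the expression used in `run_single_active` below,
$$ F_y = -\frac{8\pi T}{3}\,R\,\frac{k_0}{k_0+2k_D}\,\left[\Lambda(R,1)-\Lambda(R,\epsilon_B)\right]\lambda(x,y) , $$
where $k_0$ and $k_D$ are the surface-reaction and diffusion-limited rate coefficients computed at the top of the notebook.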
###Code
# Single active colloid
# Lambda c_2
sigma = colloid.sigma
R = colloid.R
y_shift = 3.4
dt = 0.01
gamma = 4*np.pi*fluid.eta*sigma
D = fluid.T/gamma
x_factor = np.sqrt(2*D*dt)
y_factor = np.sqrt(2*D*dt)
def run_single_active(active_EPS):
F_factor = -8*np.pi*fluid.T/3 * R * k0/(k0+2*kD) * (Lambda(R, 1)-Lambda(R, float(active_EPS)))/gamma
x, y = sigma, cell.Ly/2 + y_shift
xy_data = []
for t in range(1000):
for tt in range(50):
F_y = F_factor * lam(x, y)
xi_x, xi_y = np.random.normal(size=(2,))
x += v_max*dt + x_factor*xi_x
y += F_y*dt + y_factor*xi_y
xy_data.append((x,y))
return np.array(xy_data)
# Collect simulation data
for active_EPS in all_EPS:
runs = glob(os.path.join(mesoscopic_directory,'active_sphere_EPS{}_*/active_sphere_no_solvent.h5'.format(active_EPS)))
runs.sort()
active_simulation = []
for r in runs:
with h5py.File(r, 'r') as a:
active_simulation.append(a['/particles/dimer/position/value'][:,0,:2])
active_sphere_meso[active_EPS] = np.array(active_simulation)
# Generate stochastic simulation data
for active_EPS in all_EPS:
active_sphere_stoc[active_EPS] = np.array([run_single_active(active_EPS) for i in range(16)])
plt.figure(figsize=(12,6))
for i, active_EPS in enumerate(all_EPS):
plt.subplot(2, 3, i+1)
m = active_sphere_stoc[active_EPS].mean(axis=0).T
s = active_sphere_stoc[active_EPS].std(axis=0).T
color = colors[0]
plt.fill_between(m[0,:], m[1,:]-s[1,:], m[1,:]+s[1,:], color=color, alpha=0.5)
plt.plot(*m, color=color, lw=2)
m = active_sphere_meso[active_EPS][:,400:].mean(axis=0).T
s = active_sphere_meso[active_EPS][:,400:].std(axis=0).T
m[0,:] -= 20
color = colors[1]
plt.fill_between(m[0,:], m[1,:]-s[1,:], m[1,:]+s[1,:], color=color, alpha=0.5)
plt.plot(*m, color=color, lw=2)
plt.xlim(0, 26)
plt.ylim(25, 40)
plt.text(1, 26, r'$\epsilon_B='+active_EPS+'$')
if i//3==1: plt.xlabel(r'$x$')
if i%3==0: plt.ylabel(r'$y$')
###Output
_____no_output_____
###Markdown
Dimer nanomotor
###Code
d = 6.7
def F_C_y(x, y, phi):
return 8*np.pi*fluid.T/3 * colloid.R * k0/(k0+2*kD) * lam(x-d*np.cos(phi)/2, y-d*np.sin(phi)/2)
def torque(f_c_y, f_n_x, f_n_y, phi):
return (np.cos(phi) * (f_c_y - f_n_y) + np.sin(phi) * f_n_x)*d/2
def rotate_xy(x, y, phi):
rot = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
return np.dot(rot, (x,y))
print('cdef double RHO =', fluid.rho)
print('cdef double FLUID_D =', fluid.D)
print('cdef double V_MAX =', v_max)
print('cdef double R =', colloid.R)
print('cdef double T =', fluid.T)
print('cdef double k0 =', k0)
print('cdef double kD =', kD)
%%cython
import cython
cimport cython
import numpy as np
cimport numpy as np
from libc.math cimport exp, abs, cos, sin, sqrt, acos, erf
from scipy.integrate import nquad
cdef double d = 6.7
cdef double RHO = 10
cdef double LY = 60
cdef double FLUID_D = 0.06559643942750612
cdef double V_MAX = 0.095309639068441587
cdef double R = 3.367386144928119
cdef double T = 0.33
cdef double k0 = 32.6559814827
cdef double kD = 2.77576727425
cdef double PI = np.pi
@cython.cdivision(True)
cdef double c_A(double x,double y):
return RHO * 0.5*(1+erf(-(y-LY/2)/sqrt(4*FLUID_D*x/V_MAX)))
@cython.cdivision(True)
cdef double lam(double x, double y):
return -RHO*exp(-(y-LY/2)**2/(4*FLUID_D*x/V_MAX))/sqrt(4*PI*FLUID_D*x/V_MAX)
@cython.cdivision(True)
cdef double polar_c_B(double theta, double varphi, double r, double x, double y, double phi):
"""Concentration of B at location theta, varphi, r from the N bead.
x, y are the c.o.m. coordinates and phi is the orientation of the dimer."""
cdef double x_C, y_C, x_N, y_N, c0, c1, c2, x_p, y_p, z_p, r_0
x_C = x + d*cos(phi)/2
y_C = y + d*sin(phi)/2
x_N = x - d*cos(phi)/2
y_N = y - d*sin(phi)/2
c0 = c_A(x_C, y_C)
c1 = -k0/(k0+kD)*c0
c2 = -k0/(k0+2*kD)*lam(x_C, y_C)
x_p = x_N + r*cos(varphi)*sin(theta)
y_p = y_N + r*cos(theta)
z_p = r*sin(varphi)*sin(theta)
r_0 = sqrt((x_p-x_C)**2+(y_p-y_C)**2+z_p**2)
theta_0 = acos((r*cos(theta)-d*sin(phi))/r_0)
return -c1*(R/r_0) - c2*(R/r_0)**2*cos(theta_0)
@cython.cdivision(True)
cdef double F_C_y(double x, double y, double phi):
return 8*PI*T/3 * R * k0/(k0+2*kD) * lam(x-d*cos(phi)/2, y-d*sin(phi)/2)
@cython.boundscheck(False)
@cython.cdivision(True)
@cython.wraparound(False)
def F_N(double x, double y, double phi):
cdef double fx = 0
cdef double fy = 0
cdef int i_theta, i_varphi, N_theta, N_varphi
cdef double c, th, vphi
N_theta = 32
N_varphi = 32
cdef double inv_N_theta = 1.0/N_theta
cdef double inv_N_varphi = 1.0/N_varphi
for i_theta in range(N_theta):
th = (i_theta+0.5)*PI*inv_N_theta
for i_varphi in range(N_varphi):
vphi = (i_varphi+0.5)*2*PI*inv_N_varphi
c = polar_c_B(th, vphi, R, x, y, phi)
fx = fx + c*sin(th)*sin(th)*cos(vphi)
fy = fy + c*sin(th)*cos(th)
factor = 2*T*PI*inv_N_theta*2*PI*inv_N_varphi
return fx*factor, fy*factor
cdef double torque(double f_c_y, double f_n_x, double f_n_y, double phi):
return (cos(phi) * (f_c_y - f_n_y) + sin(phi) * f_n_x)*d/2
def run_nm(nanomotor_EPS):
Lambda_NM = Lambda(colloid.R, float(nanomotor_EPS)) - Lambda(colloid.R, 1)
y_shift = 3.4
x, y = 5, cell.Ly/2 + y_shift
phi = 0
D_para = 0.002
gamma_para = fluid.T/D_para
D_perp = 0.0015
gamma_perp = fluid.T/D_perp
D_r = 1.4e-4
gamma_r = fluid.T/D_r
dt = 0.025
x_para_factor = np.sqrt(2*D_para*dt)
x_perp_factor = np.sqrt(2*D_perp*dt)
phi_factor = np.sqrt(2*D_r*dt)
dimer_data = []
for t in range(500):
for i in range(20):
F_y = Lambda_NM*F_C_y(x, y, phi)
F_N_x, F_N_y = F_N(x, y, phi)
F_N_x, F_N_y = Lambda_NM*F_N_x, Lambda_NM*F_N_y
F_com_x = F_N_x
F_com_y = F_N_y + F_y
xi_para, xi_perp, xi_phi = np.random.normal(size=(3,))
F_para, F_perp = rotate_xy(F_com_x, F_com_y, -phi)
F_para = F_para*dt/gamma_para + x_para_factor*xi_para
F_perp = F_perp*dt/gamma_perp + x_perp_factor*xi_perp
F_com = rotate_xy(F_para, F_perp, phi)
x += v_max*dt + F_com[0]
y += F_com[1]
phi += torque(F_y, F_N_x, F_N_y, phi)*dt / gamma_r + phi_factor*xi_phi
dimer_data.append((x,y,phi))
return np.array(dimer_data)
for nanomotor_EPS in all_EPS:
nanomotor_stoc[nanomotor_EPS] = np.array([run_nm(nanomotor_EPS) for i in range(12)])
# Collect simulation data
for nanomotor_EPS in all_EPS:
runs = glob(os.path.join(mesoscopic_directory,'nanomotor_EPS{}_*/nanomotor_no_solvent.h5'.format(nanomotor_EPS)))
runs.sort()
nanomotor_simulation = []
for r in runs:
with h5py.File(r, 'r') as a:
r = a['/particles/dimer/position/value'][:,:,:]
orientation = r[:,0,:] - r[:,1,:]
r = r.mean(axis=1)
r[:,2] = np.arctan2(orientation[:,1], orientation[:,0])
nanomotor_simulation.append(r.copy())
nanomotor_meso[nanomotor_EPS] = np.array(nanomotor_simulation)
nanomotor_plot = {
'name': 'nanomotor',
'stoc': nanomotor_stoc,
'meso': nanomotor_meso,
'xlim': (0, 25),
'ylim': (29.5, 35.5),
'xticks': np.linspace(0, 20, 5),
'yticks': np.linspace(30, 35, 6),
'label': r'{\kappa,B}',
'ylabel': r'$y$',
'idx': 1,
}
nanomotor_phi_plot = {
'name': 'nanomotor_phi',
'stoc': nanomotor_stoc,
'meso': nanomotor_meso,
'xlim': (0, 25),
'ylim': (-np.pi/2, np.pi/2),
'xticks': np.linspace(0, 20, 5),
'yticks': np.linspace(-1.5, 1.5, 7),
'label': r'{\kappa,B}',
'ylabel': r'$\phi$',
'idx': 2,
}
active_sphere_plot = {
'name': 'active_sphere',
'stoc': active_sphere_stoc,
'meso': active_sphere_meso,
'xlim': (0, 25),
'ylim': (26, 39),
'xticks': np.linspace(0, 20, 5),
'yticks': np.linspace(27, 39, 7),
'label': '{C,B}',
'ylabel': r'$y$',
'idx': 1,
}
passive_sphere_plot = {
'name': 'passive_sphere',
'stoc': passive_sphere_stoc,
'meso': passive_sphere_meso,
'xlim': (0, 25),
'ylim': (26, 39),
'xticks': np.linspace(0, 20, 5),
'yticks': np.linspace(27, 39, 7),
'label': '{N,F}',
'ylabel': r'$y$',
'idx': 1,
}
for data_plot in [passive_sphere_plot, active_sphere_plot, nanomotor_plot, nanomotor_phi_plot]:
fig = plt.figure(figsize=(529*0.9/36,2.8))
idx = data_plot['idx']
for i, EPS in enumerate(all_EPS):
ax1 = plt.subplot(1, 5, i+1)
m = data_plot['stoc'][EPS][:,:,:].mean(axis=0)
s = data_plot['stoc'][EPS][:,:,:].std(axis=0)
color = colors[0]
ax1.fill_between(m[:,0], m[:,idx]-s[:,idx], m[:,idx]+s[:,idx], color=color, alpha=0.5)
ax1.plot(m[:,0], m[:,idx], color=color, lw=2)
m = data_plot['meso'][EPS][:,400:].mean(axis=0)
s = data_plot['meso'][EPS][:,400:].std(axis=0)
m[:,0] -= buffer_length
color = colors[1]
ax1.fill_between(m[:,0], m[:,idx]-s[:,idx], m[:,idx]+s[:,idx], color=color, alpha=0.5)
ax1.plot(m[:,0], m[:,idx], color=color, lw=2)
ax1.set_xlim(*data_plot['xlim'])
ax1.set_xticks(data_plot['xticks'])
ax1.set_ylim(*data_plot['ylim'])
if i==0:
ax1.set_yticks(data_plot['yticks'])
ax1.set_ylabel(data_plot['ylabel'])
elif i==4:
ax1.yaxis.tick_right()
ax1.yaxis.set_label_position("right")
ax1.set_yticks(data_plot['yticks'])
ax1.set_ylabel(data_plot['ylabel'])
else:
ax1.set_yticks([])
ax1.set_xlabel(r'$x$')
plt.text(0.05, 0.07, r'$\epsilon_'+data_plot['label']+'='+EPS+'$', transform=ax1.transAxes)
plt.savefig(data_plot['name']+'_panel.pdf')
###Output
_____no_output_____
###Markdown
Extra slides
###Code
X, Y = np.meshgrid(np.linspace(0.1, 20, 180),np.linspace(0, cell.Ly, 150))
plt.pcolormesh(X, Y, c_A(X, Y), cmap=plt.cm.viridis)
plt.colorbar()
plt.axis([X.min(), X.max(), 0, cell.Ly])
plt.pcolormesh(X, Y, lam(X, Y), cmap=plt.cm.viridis)
plt.colorbar()
plt.axis([X.min(), X.max(), 0, cell.Ly])
from matplotlib.figure import SubplotParams
params = SubplotParams(left=0.2)
plt.figure(figsize=(150/36,2.8), subplotpars=params)
EPS='1.00'
X, Y = np.meshgrid(np.linspace(0.1, 20, 180),np.linspace(0, cell.Ly, 300))
plt.pcolormesh(X, Y, lam(X, Y), cmap=plt.cm.viridis, rasterized=True)
plt.colorbar()
xy = passive_sphere_meso[EPS][0,:,:].copy()
idx = np.searchsorted(xy[:,0], buffer_length)
xy = xy[idx:,:] - np.array([buffer_length,0])
color = colors[1]
plt.plot(xy[:,0], xy[:,1], color='k', lw=2)
x_track = [colloid.R]
y_track = [cell.Ly/2 + y_shift]
plt.plot(x_track, y_track, color='k', marker='o', ms=7.5)
plt.xlim(0, 20)
plt.ylim(cell.Ly/2-10, cell.Ly/2+10)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.savefig('trajectory_and_gradient_'+EPS+'.pdf')
###Output
_____no_output_____ |
simple-exercises/basic-cryptography/7-basic-rsa-decryption-play.ipynb | ###Markdown
RSA Decryption
- ASCII plaintext encoded using PKCS1.5

PKCS1.5 block layout:
```
RSA Modulus Size: e.g. 2048 bits or 256 bytes
+------+------------------------------+------+--------------------+
| 0x02 | RANDOM NONZERO DIGITS        | 0x00 | MESSAGE IN ASCII   |
+------+------------------------------+------+--------------------+
```
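The decryption below relies on the standard RSA identities, which the code implements with `gmpy2`:
$$ d = e^{-1} \bmod \varphi(N), \qquad \varphi(N) = (p-1)(q-1) = N - p - q + 1, \qquad m = c^{\,d} \bmod N . $$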
###Code
# Given
message = "Factoring lets us break RSA."
ct_string = "22096451867410381776306561134883418017410069787892831071731839143676135600120538004282329650473509424343946219751512256465839967942889460764542040581564748988013734864120452325229320176487916666402997509188729971690526083222067771600019329260870009579993724077458967773697817571267229951148662959627934791540"
E = 65537
N_string = "179769313486231590772930519078902473361797697894230657273430081157732675805505620686985379449212982959585501387537164015710139858647833778606925583497541085196591615128057575940752635007475935288710823649949940771895617054361149474865046711015101563940680527540071584560878577663743040086340742855278549092581"
p_string = "13407807929942597099574024998205846127479365820592393377723561443721764030073662768891111614362326998675040546094339320838419523375986027530441562135724301"
q_string = "13407807929942597099574024998205846127479365820592393377723561443721764030073778560980348930557750569660049234002192590823085163940025485114449475265364281"
from os import urandom
from gmpy2 import mpz
from gmpy2 import invert, t_mod, mul, powmod
def decrypt(y, d, N):
return powmod(y, d, N)
def encrypt(x, e, N):
return powmod(x, e, N)
def decrypt_pipeline(c_string, d, N):
m_decimal = decrypt(mpz(c_string), d, N)
m_hex = hex(m_decimal)[2:]
m = m_hex.split('00') #assumes correct format
return bytes.fromhex(m[1]).decode('utf8')
def encrypt_pipeline(message, e, N):
raw_message = bytes(message, 'utf8')
TOTAL_LENGTH = 128
APPENDLENGTH = TOTAL_LENGTH - len(raw_message) - 2
randomhexstring = urandom(APPENDLENGTH).hex()
final_bytes = bytes.fromhex('02' + randomhexstring + '00') + raw_message
final_decimal = mpz(int.from_bytes(final_bytes, 'big'))
return str(encrypt(final_decimal, e, N))
N = mpz(N_string)
p = mpz(p_string)
q = mpz(q_string)
c = mpz(ct_string)
e = mpz(E)
# compute d
phiN = N - p - q + 1
D = invert(e, phiN)
d = mpz(D)
# d * e mod phi(N) = 1
# where phi(N) = N - p - q + 1
assert t_mod(mul(d, e), phiN) == 1
print(decrypt_pipeline(ct_string, d, N))
c = encrypt_pipeline(message, e, N)
m = decrypt_pipeline(c, d, N)
print(m)
###Output
Factoring lets us break RSA.
|
something-learned/Mathematics/hackermath/Module_3a_linear_algebra_eigenvectors.ipynb | ###Markdown
Intermediate Linear Algebra - Eigenvalues & Eigenvectors

Key Equation: $Ax = \lambda x ~~ \text{for} ~~ n \times n$ matrices

Transformations

So what really happens when we multiply the matrix $A$ with a vector $x$? Let's say we have a vector $x$
$$ x = \begin{bmatrix} -1 \\ 1 \end{bmatrix} $$
What happens when we multiply it by a matrix $A$?
$$ A = \begin{bmatrix} 6 & 2 \\ 2 & 6 \end{bmatrix} $$
$$ Ax = \begin{bmatrix} 6 & 2 \\ 2 & 6 \end{bmatrix} \begin{bmatrix} -1 \\ 1 \end{bmatrix} = \begin{bmatrix} -4 \\ 4 \end{bmatrix} $$
$$ Ax = 4Ix $$
$$ Ax = 4x $$
So this particular matrix has just scaled our original vector: it is a scaling transformation. Other matrices can perform reflection, rotation and arbitrary transformations in the same 2d space for n = 2. Let's see what has happened through code.
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('fivethirtyeight')
plt.rcParams['figure.figsize'] = (10, 6)
def vector_plot (vector):
X,Y,U,V = zip(*vector)
C = [1,1,2,2]
plt.figure()
ax = plt.gca()
ax.quiver(X,Y,U,V,C, angles='xy',scale_units='xy',scale=1)
ax.set_xlim([-6,6])
ax.set_ylim([-6,6])
plt.axhline(0, color='grey', linewidth=1)
plt.axvline(0, color='grey', linewidth=1)
plt.axes().set_aspect('equal')
plt.draw()
A = np.array([[ 6 , 2],
[ 2 , 6]])
x = np.array([[-1],
[1]])
v = A.dot(x)
# All the vectors start at 0, 0
vAX = np.r_[[0,0],A[:,0]]
vAY = np.r_[[0,0],A[:,1]]
vx = np.r_[[0,0],x[:,0]]
vv = np.r_[[0,0],v[:,0]]
vector_plot([vAX, vAY, vx, vv])
###Output
_____no_output_____
###Markdown
Solving Equation $Ax=\lambda x$

Special Case: $Ax = 0$

So far we have been solving the equation $Ax = b$. Let us just look at the special case when $b=0$.
$$ Ax = 0 $$
If $A^{-1}$ exists (the matrix is non-singular and invertible), then the solution is trivial
$$ A^{-1}Ax = 0 $$
$$ x = 0 $$
If $A^{-1}$ does not exist, then there may be infinitely many other solutions $x$, and since $A$ is then a singular matrix
$$ ||A|| = 0 $$

General Case

The second part of linear algebra is solving the equation, for a given $A$ -
$$ Ax = \lambda x $$
Note that both $x$ and $\lambda$ are unknown in this equation. For all solutions of them:
$$ \text{eigenvalues} = \lambda $$
$$ \text{eigenvectors} = x $$

Calculating Eigenvalues

So let us first solve this for $\lambda$:
$$ Ax = \lambda Ix $$
$$ (A-\lambda I)x = 0 $$
So for a non-trivial solution of $x$, the matrix $A - \lambda I$ should be singular:
$$ ||A - \lambda I|| = 0 $$

For 2 x 2 Matrix

Let us use the sample matrix $A$:
$$ A = \begin{bmatrix}3 & 1\\ 1 & 3\end{bmatrix} $$
So our equation becomes:
$$ \begin{bmatrix}3 & 1\\ 1 & 3\end{bmatrix} \begin{bmatrix}x \\ y\end{bmatrix} = \begin{bmatrix}\lambda & 0\\ 0 & \lambda \end{bmatrix} \begin{bmatrix}x \\ y\end{bmatrix} $$
$$ \begin{bmatrix}3 - \lambda & 1\\ 1 & 3 - \lambda \end{bmatrix} \begin{bmatrix}x \\ y\end{bmatrix} = 0 $$
So for a singular matrix:
$$ \begin{Vmatrix}3 - \lambda & 1\\ 1 & 3 - \lambda \end{Vmatrix} = 0 $$
$$ (3 - \lambda)^2 - 1 = 0 $$
$$ \lambda^2 - 6\lambda + 8 = 0 $$
$$ (\lambda - 4)(\lambda - 2) = 0 $$
$$ \lambda_1 = 2, \lambda_2 = 4 $$
$$ ||A|| = \lambda_{1} \lambda_{2} $$

Calculating Eigenvectors

For $\lambda = 2$,
$$ \begin{bmatrix}3 - \lambda & 1\\ 1 & 3 - \lambda \end{bmatrix} \begin{bmatrix}x \\ y\end{bmatrix} = \begin{bmatrix}1 & 1\\ 1 & 1 \end{bmatrix} \begin{bmatrix}x \\ y\end{bmatrix} = 0 $$
So one simple solution is:
$$ \begin{bmatrix}x \\ y\end{bmatrix} = \begin{bmatrix}-1 \\ 1\end{bmatrix} $$
For $\lambda = 4$,
$$ \begin{bmatrix}3 - \lambda & 1\\ 1 & 3 - \lambda \end{bmatrix} \begin{bmatrix}x \\ y\end{bmatrix} = \begin{bmatrix}-1 & 1\\ 1 & -1 \end{bmatrix} \begin{bmatrix}x \\ y\end{bmatrix} = 0 $$
So one simple solution is:
$$ \begin{bmatrix}x \\ y\end{bmatrix} = \begin{bmatrix}1 \\ 1\end{bmatrix} $$
The eigenvectors are orthogonal to each other in this case.

Vector Representation (2x2)

A vector representation for this is now:
$$ \begin{bmatrix}3 \\ 1\end{bmatrix} x + \begin{bmatrix}1 \\ 3\end{bmatrix} y = \begin{bmatrix} \lambda \\ 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ \lambda \end{bmatrix} y $$
Now we need to draw these vectors and see the result
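More generally, for any $2 \times 2$ matrix the same determinant condition expands to the characteristic polynomial
$$ ||A - \lambda I|| = \lambda^2 - \mathrm{tr}(A)\,\lambda + ||A|| = 0 , $$
which for $A = \begin{bmatrix}3 & 1\\ 1 & 3\end{bmatrix}$ reproduces $\lambda^2 - 6\lambda + 8 = 0$ above.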
###Code
A = np.array([[ 3 , 1],
[ 1 , 3]])
eigen_val, eigen_vec = np.linalg.eig(A)
eigen_val
eigen_vec
eigen_vec[:,0]
# All the vectors start at 0, 0
vX1 = np.r_[[0,0],A[:,0]]
vY1 = np.r_[[0,0],A[:,1]]
vE1 = np.r_[[0,0],eigen_vec[:,0]] * 2
vE2 = np.r_[[0,0],eigen_vec[:,1]] * 2
vector_plot([vX1, vY1, vE1, vE2])
###Output
_____no_output_____
###Markdown
3 x 3 Matrix

Let us write it in the form $$ Ax = \lambda x $$$$ \begin{bmatrix}1 & 1 & 1 \\ 3 & 8 & 1 \\ 5 & -4 & 3\end{bmatrix}\begin{bmatrix} x \\y \\ z\end{bmatrix}= \lambda \begin{bmatrix} x\\ y \\ z \end{bmatrix} $$
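Any eigenpair returned by the library call below can be checked directly against the defining equation; a quick sketch (`A3`, `w`, `V` are local illustrative names):
```python
# verify A v = lambda v for each computed eigenpair
A3 = np.asarray(f, dtype=float)
w, V = np.linalg.eig(A3)
for i in range(3):
    assert np.allclose(A3 @ V[:, i], w[i] * V[:, i])
```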
###Code
f = np.matrix([[1,1,1],
[3,8,1],
[5,-4,3]])
np.linalg.eig(f)
###Output
_____no_output_____ |
examples/demo/analysis_demo.ipynb | ###Markdown
*grama* Analysis Demo---*grama* is a *grammar of model analysis*---a language for describing and analyzing mathematical models. Heavily inspired by [ggplot](https://ggplot2.tidyverse.org/index.html), `py_grama` is a Python package that implements *grama* by providing tools for defining and exploring models. This notebook illustrates how one can use *grama* to ***analyze a fully-defined model***.Note that you will need to install `py_grama`, a fork of `dfply`, and dependencies in order to run this notebook. See the [installation instructions](https://github.com/zdelrosario/py_grama) for details.
###Code
### Setup
import grama as gr
import numpy as np
import pandas as pd
import seaborn as sns
X = gr.Intention()
###Output
_____no_output_____
###Markdown
Quick Tour: Analyzing a model---*grama* separates the model *definition* from model *analysis*; once the model is fully defined, only minimal information is necessary for further analysis.As a quick demonstration, we import a fully-defined model provided with *grama*, and carry out a few analyses.
###Code
from grama.models import make_cantilever_beam
md_beam = make_cantilever_beam()
md_beam.printpretty()
###Output
model: Cantilever Beam
inputs:
var_det:
t: [2, 4]
w: [2, 4]
var_rand:
H: (+1) norm, {'loc': 500.0, 'scale': 100.0}
V: (+1) norm, {'loc': 1000.0, 'scale': 100.0}
E: (+0) norm, {'loc': 29000000.0, 'scale': 1450000.0}
Y: (-1) norm, {'loc': 40000.0, 'scale': 2000.0}
copula:
Independence copula
functions:
cross-sectional area: ['w', 't'] -> ['c_area']
limit state: stress: ['w', 't', 'H', 'V', 'E', 'Y'] -> ['g_stress']
limit state: displacement: ['w', 't', 'H', 'V', 'E', 'Y'] -> ['g_disp']
###Markdown
The method `printpretty()` gives us a quick summary of the model; we can see this model has two deterministic variables `w,t` and four random variables `H,V,E,Y`. All of the variables affect the outputs `g_stress, g_displacement`, while only `w,t` affect `c_area`. Since there are random variables, there is a source of *uncertainty* which we must consider when studying this model. Studying model behavior with uncertaintySince the model has sources of randomness (`var_rand`), we must account for this when studying its behavior. We can do so through a Monte Carlo analysis. We make decisions about the deterministic inputs by specifying `df_det`, and the `py_grama` function `gr.ev_monte_carlo` automatically handles the random inputs. Below we fix a nominal value `w = 0.5 * (2 + 4)`, sweep over values for `t`, and account for the randomness via Monte Carlo.
###Code
## Carry out a Monte Carlo analysis of the random variables
df_beam_mc = \
md_beam >> \
gr.ev_monte_carlo(
n=1e2,
df_det=gr.df_make( # Define deterministic levels
w=0.5*(2 + 4), # Single value
t=np.linspace(2.5, 3, num=10) # Sweep
)
)
###Output
eval_monte_carlo() is rounding n...
###Markdown
To help plot the data, we use `gr.tf_gather` to reshape the data, and `seaborn` to quickly visualize results.
###Code
df_beam_wrangled = \
df_beam_mc >> \
gr.tf_gather("output", "y", ["c_area", "g_stress", "g_disp"])
g = sns.FacetGrid(df_beam_wrangled, col="output", sharey=False)
g.map(sns.lineplot, "t", "y")
###Output
_____no_output_____
###Markdown
The mean behavior of the model is shown as a solid line, while the band visualizes the standard deviation of the model output. From this plot, we can see:- The random variables have no effect on `c_area` (there is no band)- Comparing `g_stress` and `g_displacement`, the former is more strongly affected by the random inputs, as illustrated by its wider uncertainty band.While this provides a visual description of how uncertainty affects our outputs, we might be interested in *how* the different random variables affect our outputs. Probing random variable effectsOne way to quantify the effects of random variables is through *Sobol' indices*, which quantify variable importance by the fraction of output variance "explained" by each random variable. Since distribution information is included in the model, we can carry out a *hybrid-point Monte Carlo* and analyze the results with two calls to `py_grama`.
###Code
df_sobol = \
md_beam >> \
gr.ev_hybrid(n=1e3, df_det="nom", seed=101) >> \
gr.tf_sobol()
df_sobol
###Output
eval_hybrid() is rounding n...
###Markdown
The indices should lie between `[0, 1]`, but estimation error can lead to violations. These results suggest that `g_stress` is largely insensitive to `E`, while `g_disp` is insensitive to `Y`. For `g_disp`, the input `V` contributes about twice the variance as variables `H,E`.To get a *qualitative* sense of how the random variables affect our model, we can perform a set of sweeps over random variable space with a *sinew* design. First, we visualize the design in the six-dimensional full input space.
###Code
md_beam >> \
gr.ev_sinews(n_density=50, n_sweeps=10, df_det="swp", skip=True) >> \
gr.pt_auto()
###Output
Estimated runtime for design with model (Cantilever Beam):
0.0151 sec
###Markdown
The `skip` keyword argument allows us to delay evaluating a model; this is useful for inspecting a design before running a potentially expensive calculation. The `pt_auto()` function automatically detects DataFrames generated by `py_grama` functions and constructs an appropriate visualization. This is provided for convenience; you are of course welcome (and encouraged!) to create your own visualizations of the data.Here we can see the sweeps cross the domain in straight lines at random starting locations. Each of these sweeps gives us a "straight shot" within a single variable. Visualizing the outputs for these sweeps will give us a sense of a single variable's influence, contextualized by the effects of the other variables.
###Code
df_beam_sweeps = \
md_beam >> \
gr.ev_sinews(n_density=50, n_sweeps=10, df_det="swp")
df_beam_sweeps >> gr.pt_auto()
###Output
_____no_output_____
###Markdown
Removing the keyword argument `skip` falls back on the default behavior; the model functions are evaluated at each sample, and `pt_auto()` adjusts to use this new data.

Based on this plot, we can see:
- The output `c_area` is insensitive to all the random variables; it changes only with `t, w`
- As the Sobol' analysis above suggested, `g_stress` is insensitive to `E`, and `g_displacement` is insensitive to `Y`
- Visualizing the results shows that inputs `H,E` tend to 'saturate' in their effects on `g_displacement`, while `V` is linear over its domain. This may explain the difference in contributed variance
- Furthermore both `t, w` seem to saturate in their effects on the two limit states---there are diminishing returns on making the beam taller or wider

Theory: The *grama* language
---
As a language, *grama* has both *objects* and *verbs*.

Objects
---
*grama* as a language considers two categories of objects:
- **data** (`df`): observations on various quantities, implemented by the Python package `Pandas`
- **models** (`md`): a function and complete description of its inputs, implemented by `py_grama`

For readability, we suggest using prefixes `df_` and `md_` when naming DataFrames and models. Since data is already well-handled by Pandas, `py_grama` focuses on providing tools to handle models. A `py_grama` model has **functions** and **inputs**: The method `printpretty()` gives a quick summary of the model's inputs and function outputs. Model inputs are organized into:

| | Deterministic | Random |
| ---------- | ---------------------------------------- | ---------- |
| Variables | `model.var_det` | `model.var_rand` |
| Parameters | `model.density.marginals[i].d_param` | (Future*) |

- **Variables** are inputs to the model's functions
  + **Deterministic** variables are chosen by the user; the model above has `w, t`
  + **Random** variables are not controlled; the model above has `H, V, E, Y`
- **Parameters** define random variables
  + **Deterministic** parameters are currently implemented; these are listed under `var_rand` with their associated random variable
  + **Random** parameters* are not yet implemented

The `outputs` section lists the various model outputs. The model above has `c_area, g_stress, g_displacement`.

Verbs
---
Verbs are used to take action on different *grama* objects. We use verbs to generate data from models, build new models from data, and ultimately make sense of the two. The following table summarizes the categories of `py_grama` verbs. Verbs take either data (`df`) or a model (`md`), and may return either object type. The prefix of a verb immediately tells one both the input and output types. The short prefix is used to denote the *pipe-enabled version* of a verb.

| Verb Type | Prefix (Short) | In | Out |
| --------- | --------------- | ---- | ----- |
| Evaluate | `eval_` (`ev_`) | `md` | `df` |
| Fit | `fit_` (`ft_`) | `df` | `md` |
| Transform | `tran_` (`tf_`) | `df` | `df` |
| Compose | `comp_` (`cp_`) | `md` | `md` |

Functional programming (Pipes)
---
`py_grama` provides tools to use functional programming patterns. Short-stem versions of `py_grama` functions are *pipe-enabled*, meaning they can be used in functional programming form with the pipe operator `>>`. These pipe-enabled functions are simply aliases for the base functions, as demonstrated below:
###Code
df_base = gr.eval_nominal(md_beam, df_det="nom")
df_functional = md_beam >> gr.ev_nominal(df_det="nom")
df_base.equals(df_functional)
###Output
_____no_output_____ |
07_Visualization/Tips/Exercises_seaborn.ipynb | ###Markdown
Tips Introduction:This exercise was created based on the tutorial and documentation from [Seaborn](https://stanford.edu/~mwaskom/software/seaborn/index.html) The dataset being used is tips from Seaborn. Step 1. Import the necessary libraries:
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv). Step 3. Assign it to a variable called tips
###Code
tips = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv')
tips.head(2)
###Output
_____no_output_____
###Markdown
Step 4. Delete the Unnamed 0 column
###Code
del tips['Unnamed: 0']
###Output
_____no_output_____
###Markdown
Step 5. Plot the total_bill column histogram
###Code
ttbill = sns.displot(tips.total_bill)
ttbill.set(xlabel='Value', ylabel='Frequency', title='Total Bill')
###Output
_____no_output_____
###Markdown
Step 6. Create a scatter plot presenting the relationship between total_bill and tip
###Code
# sns.relplot(x=tips.total_bill,y=tips.tip,kind='scatter')
sns.jointplot(x='total_bill',y='tip',data=tips)
###Output
_____no_output_____
###Markdown
Step 7. Create one image with the relationship of total_bill, tip and size. Hint: It is just one function.
###Code
sns.relplot(x=tips.total_bill,y=tips.tip,kind='scatter',hue=tips['size'],size=tips['size'])
sns.pairplot(tips)
###Output
_____no_output_____
###Markdown
Step 8. Present the relationship between days and total_bill value
###Code
sns.relplot(y=tips.total_bill,x=tips.day,kind='line')
sns.stripplot(x='day',y='total_bill',data=tips,hue='sex')
###Output
_____no_output_____
###Markdown
Step 9. Create a scatter plot with the day as the y-axis and tip as the x-axis, differ the dots by sex
###Code
sns.relplot(x=tips.tip,y=tips.day,kind='scatter',hue=tips.sex)
###Output
_____no_output_____
###Markdown
Step 10. Create a box plot presenting the total_bill per day differetiation the time (Dinner or Lunch)
###Code
sns.boxplot(y=tips.total_bill,x=tips.day,hue=tips['time'])
###Output
_____no_output_____
###Markdown
Step 11. Create two histograms of the tip value based for Dinner and Lunch. They must be side by side.
###Code
fig, axs = plt.subplots(ncols=2)
sns.histplot(tips[tips['time']=='Dinner'].tip,ax=axs[0])
sns.histplot(tips[tips['time']=='Lunch'].tip,ax=axs[1])
sns.set(style='ticks')
g=sns.FacetGrid(tips,col='time')
g.map(plt.hist,'tip')
sns.catplot(x='time', y='total_bill', data = tips,kind='violin')
g = sns.FacetGrid(tips, col='time')
g.map(sns.boxplot,'tip',orient='v')
###Output
E:\python3.6\lib\site-packages\seaborn\axisgrid.py:670: UserWarning: Using the boxplot function without specifying `order` is likely to produce an incorrect plot.
warnings.warn(warning)
E:\python3.6\lib\site-packages\seaborn\_core.py:1326: UserWarning: Vertical orientation ignored with only `x` specified.
warnings.warn(single_var_warning.format("Vertical", "x"))
E:\python3.6\lib\site-packages\seaborn\_core.py:1326: UserWarning: Vertical orientation ignored with only `x` specified.
warnings.warn(single_var_warning.format("Vertical", "x"))
###Markdown
Step 12. Create two scatterplots graphs, one for Male and another for Female, presenting the total_bill value and tip relationship, differing by smoker or no smoker They must be side by side.
###Code
g = sns.FacetGrid(tips,col='sex',hue='smoker')
g.map(plt.scatter,'total_bill','tip',alpha=.7)
g.add_legend()
###Output
_____no_output_____ |
Leading Causes of Death in Egypt.ipynb | ###Markdown
Data Sources: WHO, CDC, World Bank and UN.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from catboost import CatBoostClassifier
#importing plotly and cufflinks in offline mode
import cufflinks as cf
import plotly.offline
cf.go_offline()
cf.set_config_file(offline=False, world_readable=True)
import plotly
import plotly.express as px
import plotly.graph_objs as go
import plotly.offline as py
from plotly.offline import iplot
from plotly.subplots import make_subplots
import plotly.figure_factory as ff
import matplotlib.pyplot as plt
import matplotlib as mpl
import missingno as msno
from p5 import *
import datetime as dt
from datetime import timedelta
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import LabelEncoder
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression,Ridge,Lasso
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
%matplotlib inline
from matplotlib.pylab import rcParams
rcParams['figure.figsize']=20,10
from sklearn.preprocessing import MinMaxScaler
std=StandardScaler()
import warnings
warnings.filterwarnings("ignore")
df = pd.read_csv('Leading Causes of Death in Egypt.csv')
df.head()
df = df.replace({'Cause Effect': {'YES':1, 'NO':0}})
df.head()
df.info()
df.duplicated().sum()
def missing (df):
missing_number = df.isnull().sum().sort_values(ascending=False)
missing_percent = (df.isnull().sum()/df.isnull().count()).sort_values(ascending=False)
missing_values = pd.concat([missing_number, missing_percent], axis=1, keys=['Missing_Number', 'Missing_Percent'])
return missing_values
missing(df)
msno.matrix(df)
df.describe().T
df.describe().plot()
sns.pairplot(df, hue='Rate per 100,000')
sns.pairplot(df, kind='kde')
df.hist(figsize=(15,8))
plt.show()
px.treemap(df, path=['Percentage%','Deaths','Rate per 100,000'], values='Rate per 100,000')
y = df['Cause Effect']
print(f"There is: {round(y.value_counts(normalize=True)[1]*100,2)}% --> ({y.value_counts()[1]} of the death Leading Causes affect others)\nًِWhile: {round(y.value_counts(normalize=True)[0]*100,2)}% --> ({y.value_counts()[0]} doesn't affect)")
df['Cause Effect'].iplot(kind='hist')
numerical= df.select_dtypes('number').columns
categorical = df.select_dtypes('object').columns
print(f'Numerical Columns: {df[numerical].columns}')
print('\n')
print(f'Categorical Columns: {df[categorical].columns}')
###Output
Numerical Columns: Index(['Deaths', 'Percentage%', 'Rate per 100,000', 'World Rank/183',
'Cause Effect'],
dtype='object')
Categorical Columns: Index(['Death Causes'], dtype='object')
###Markdown
Target Variable Numerical Features
###Code
df[numerical].describe()
df[numerical].iplot(kind='hist');
df[numerical].iplot(kind='histogram',subplots=True,bins=50)
skew_limit = 0.2 # This is our threshold-limit to evaluate skewness. Overall below abs(5) seems acceptable for the linear models.
skew_vals = df[numerical].skew()
skew_cols= skew_vals[abs(skew_vals)> skew_limit].sort_values(ascending=False)
skew_cols
numerical1= df.select_dtypes('number').columns
matrix = np.triu(df[numerical1].corr())
fig, ax = plt.subplots(figsize=(14,10))
sns.heatmap (df[numerical1].corr(), annot=True, fmt= '.2f', vmin=-1, vmax=1, center=0, cmap='coolwarm',mask=matrix, ax=ax);
###Output
_____no_output_____
###Markdown
Categorical Features
###Code
df[categorical].head()
df[categorical].describe()
df[categorical].nunique()
###Output
_____no_output_____
###Markdown
So far so good: no zero-variance features and no extremely high cardinality relative to the size of the data.

MODEL SELECTION

Prediction using Different Machine Learning Models. First, we fit a CatBoost classifier using a set of tuned hyperparameters.

CATBOOST
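For reference, a minimal sketch of how parameters like the ones hard-coded in the next cell could be searched, assuming CatBoost's scikit-learn-compatible interface (the grid values here are illustrative, not the ones actually used):
```python
from sklearn.model_selection import GridSearchCV

param_grid = {'depth': [6, 8, 10], 'colsample_bylevel': [0.05, 0.1, 0.5]}
search = GridSearchCV(CatBoostClassifier(verbose=False), param_grid, cv=3, scoring='accuracy')
# search.fit(X_train, y_train, cat_features=categorical_features_indices)
# print(search.best_params_)
```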
###Code
accuracy =[]
model_names =[]
X= df.drop('Cause Effect', axis=1)
y= df['Cause Effect']
categorical_features_indices = np.where(X.dtypes != float)[0]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
model = CatBoostClassifier(verbose=False,random_state=0,
objective= 'CrossEntropy',
colsample_bylevel= 0.04292240490294766,
depth= 10,
boosting_type= 'Plain',
bootstrap_type= 'MVS')
model.fit(X_train, y_train,cat_features=categorical_features_indices,eval_set=(X_test, y_test))
y_pred = model.predict(X_test)
accuracy.append(round(accuracy_score(y_test, y_pred),4))
print(classification_report(y_test, y_pred))
model_names = ['Catboost_tuned']
result_df6 = pd.DataFrame({'Accuracy':accuracy}, index=model_names)
result_df6
###Output
precision recall f1-score support
0 0.88 1.00 0.93 14
1 0.00 0.00 0.00 2
accuracy 0.88 16
macro avg 0.44 0.50 0.47 16
weighted avg 0.77 0.88 0.82 16
###Markdown
second we will use sklearn.model_selection to get the Label Distributions:
###Code
Rate = df['Cause Effect']
Percentage = df['Percentage%']
df.drop(['Cause Effect', 'Percentage%'], axis=1, inplace=True)
df.insert(0, 'Cause Effect', Rate)
df.insert(1, 'Percentage%', Percentage)
# Rate and Percentage are Scaled!
df.head()
###Output
_____no_output_____
###Markdown
Splitting the Data (Original DataFrame)
###Code
from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold, StratifiedKFold
print(round(df['Cause Effect'].value_counts()[0]/len(df) * 100,2), '% of the dataset are not contagious')
print(round(df['Cause Effect'].value_counts()[1]/len(df) * 100,2), '% of the dataset are contagious')
X = df.drop('Cause Effect', axis=1)
y = df['Cause Effect']
sss = StratifiedKFold(n_splits=5, random_state=None, shuffle=False)
for train_index, test_index in sss.split(X, y):
print("Train:", train_index, "Test:", test_index)
original_Xtrain, original_Xtest = X.iloc[train_index], X.iloc[test_index]
original_ytrain, original_ytest = y.iloc[train_index], y.iloc[test_index]
# We already have X_train and y_train for undersample data thats why I am using original to distinguish and to not overwrite these variables.
# original_Xtrain, original_Xtest, original_ytrain, original_ytest = train_test_split(X, y, test_size=0.2, random_state=42)
# Check the Distribution of the labels
# Turn into an array
original_Xtrain = original_Xtrain.values
original_Xtest = original_Xtest.values
original_ytrain = original_ytrain.values
original_ytest = original_ytest.values
# See if both the train and test label distribution are similarly distributed
train_unique_label, train_counts_label = np.unique(original_ytrain, return_counts=True)
test_unique_label, test_counts_label = np.unique(original_ytest, return_counts=True)
print('-' * 100)
print('Label Distributions: \n')
print(train_counts_label/ len(original_ytrain))
print(test_counts_label/ len(original_ytest))
###Output
86.27 % of the dataset are not contagious
13.73 % of the dataset are contagious
Train: [10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 26 27 28 29 30 31 32 33 34
35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50] Test: [ 0 1 2 3 4 5 6 7 8 9 25]
Train: [ 0 1 2 3 4 5 6 7 8 9 19 20 21 22 23 24 25 26 27 28 29 31 32 33
34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50] Test: [10 11 12 13 14 15 16 17 18 30]
Train: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 25 29 30 31 32
33 34 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50] Test: [19 20 21 22 23 24 26 27 28 35]
Train: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
24 25 26 27 28 30 35 40 42 43 44 45 46 47 48 49 50] Test: [29 31 32 33 34 36 37 38 39 41]
Train: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 41] Test: [40 42 43 44 45 46 47 48 49 50]
----------------------------------------------------------------------------------------------------
Label Distributions:
[0.87804878 0.12195122]
[0.8 0.2]
###Markdown
Let's predict the increase of average rate using KNeighborsRegressor model
###Code
# example of evaluating a knn regression model on this dataset
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
from sklearn.neighbors import KNeighborsRegressor
# load the dataset
df = read_csv('Leading Causes of Death in Egypt.csv', header=None)
data = df.values
X, y = data[:, :-1], data[:, -1]
print(X.shape, y.shape)
# split dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
try:
    # define model
    model = KNeighborsRegressor()
    # fit model
    model.fit(X_train, y_train)
    # make predictions
    yhat = model.predict(X_test)
    # evaluate predictions (mean absolute error, mae)
    mae = mean_absolute_error(y_test, yhat)
    print('Rate Of Increase: %.3f' % mae, '%')
except ValueError as err:
    # the model cannot be fit if non-numeric columns slip through
    print('Could not fit or evaluate the model:', err)
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn import datasets
Accuracy = datasets.load_wine()
X = Accuracy.data
y = Accuracy.target
dtree = DecisionTreeClassifier()
Model_Accuracy = cross_val_score(dtree, X, y, scoring="accuracy").mean()
print('Model Accuracy:',Model_Accuracy *100, '%')
###Output
Model Accuracy: 86.52380952380952 %
###Markdown
Important note These statistics were recorded in 2018 from the WORLD HEALTH RANKINGS of world life expectancy, with details such as world rank, percentage, rate and deaths, as seen in the dataset above. The latest statistics, from 2020, list only the total case counts without those details, as shown in the following cell. Using the 2018 and 2020 figures we can therefore check the model's accuracy on a single row as an example: Coronary Heart Disease in Egypt in 2018: 271,690 cases. As seen earlier, the model estimated a Rate of Increase of 4.433%, and 4.433% of 271,690 = 12,044.02 cases, so 12,044.02 + 271,690 (the last recorded figure) = 283,734.02 cases. The last recorded number for Coronary Heart Disease in 2020 was 288,790 cases, so the model's projection is close to the actually recorded value, consistent with the model's 86.52% accuracy.
###Code
# Cases recorded in 2020 (totals only), for comparison with the 2018 figures above
recorded_cases_2020 = {
    'Coronary Heart Disease': 288790,
    'Liver Disease': 121883,
    'Stroke': 105209,
    'Influenza and Pneumonia': 39130,
    'Kidney Disease': 32350,
    'Liver Cancer': 31873,
    'Alzheimers & Dementia': 31781,
    'Diabetes Mellitus': 31593,
    'Lung Disease': 30520,
    'Low Birth Weight': 24134,
    'Road Traffic Accidents': 23848,
    'COVID-19': 17545,
    'Breast Cancer': 13086,
    'Hypertension': 11944,
    'Birth Trauma': 10974,
    'Lymphomas': 9144,
    'Lung Cancers': 8936,
    'Endocrine Disorders': 8719,
    'Bladder Cancer': 8375,
    'Violence': 8177,
    'Diarrhoeal diseases': 7917,
    'Leukemia': 7159,
    'Suicide': 6724,
    'Inflammatory/Heart': 6501,
    'Other Neoplasms': 6355,
    'Colon-Rectum Cancers': 5042,
}
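# Rough sanity check of the projection described above (a sketch; all figures are taken from the text)
last_recorded_2018 = 271690      # Coronary Heart Disease cases recorded in 2018
rate_of_increase = 4.433         # % increase estimated by the model above
projected_2020 = last_recorded_2018 * (1 + rate_of_increase / 100)
print(round(projected_2020), 'projected cases vs 288,790 recorded in 2020')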
###Output
_____no_output_____ |
docs/python_basics/03_matplotlib.ipynb | ###Markdown
Python basics 3: Matplotlib This tutorial introduces matplotlib, a Python library for plotting numpy arrays as images. We will learn how to plot and manipulate image arrays with matplotlib. Follow the instructions below to download the tutorial and open it in the Sandbox. Download the tutorial notebook [Download the Python basics 3 tutorial notebook](../_static/python_basics/03_download-matplotlib.ipynb)[Download the exercise image file](../_static/python_basics/Guinea_Bissau.JPG)To view this notebook on the Sandbox, you will need to first download the notebook and the image to your computer, then upload both of them to the Sandbox. Ensure you have followed the set-up prerequisites listed in [Python basics 1: Jupyter](./01_jupyter.ipynb), and then follow these instructions:
1. Download the notebook by clicking the first link above. Download the image by clicking the second link above.
2. On the Sandbox, open the **Training** folder.
3. Click the **Upload Files** button as shown below.
4. Select the downloaded notebook using the file browser. Click **OK**.
5. Repeat to upload the image file to the **Training** folder. It may take a while for the upload to complete.
6. Both files will appear in the **Training** folder. Double-click the tutorial notebook to open it and begin the tutorial.
You can now use the tutorial notebook as an interactive version of this webpage.
###Code
.. note::
The tutorial notebook should look like the text and code below. However, the tutorial notebook outputs are blank (i.e. no results showing after code cells). Follow the instructions in the notebook to run the cells in the tutorial notebook. Refer to this page to check your outputs look similar.
###Output
_____no_output_____
###Markdown
Introduction to matplotlib's pyplot We are going to use part of matplotlib called `pyplot`. We can import pyplot by specifying it comes from matplotlib. We will abbreviate `pyplot` to `plt`.
###Code
%matplotlib inline
# Generates plots in the same page instead of opening a new window
import numpy as np
from matplotlib import pyplot as plt
###Output
_____no_output_____
###Markdown
Images are 2-dimensional arrays containing pixels. Therefore, we can use 2-dimensional arrays to represent image data and visualise with matplotlib. In the example below, we will use the numpy `arange` function to generate a 1-dimensional array filled with elements from `0` to `99`, and then reshape it into a 2-dimensional array using `reshape`.
###Code
arr = np.arange(100).reshape(10,10)
print(arr)
plt.imshow(arr)
###Output
[[ 0 1 2 3 4 5 6 7 8 9]
[10 11 12 13 14 15 16 17 18 19]
[20 21 22 23 24 25 26 27 28 29]
[30 31 32 33 34 35 36 37 38 39]
[40 41 42 43 44 45 46 47 48 49]
[50 51 52 53 54 55 56 57 58 59]
[60 61 62 63 64 65 66 67 68 69]
[70 71 72 73 74 75 76 77 78 79]
[80 81 82 83 84 85 86 87 88 89]
[90 91 92 93 94 95 96 97 98 99]]
###Markdown
If you remember from the [last tutorial](./02_numpy.ipynb), we were able to address regions of a numpy array using the square bracket `[ ]` index notation. For multi-dimensional arrays we can use a comma `,` to distinguish between axes. ```python[ first dimension, second dimension, third dimension, etc. ]```As before, we use colons `:` to denote `[ start : end : stride ]`. We can do this for each dimension.For example, we can update the values on the left part of this array to be equal to `1`.
###Code
arr = np.arange(100).reshape(10,10)
arr[:, :5] = 1
plt.imshow(arr)
###Output
_____no_output_____
###Markdown
The indexes in the square brackets of `arr[:, :5]` can be broken down like this:```python[ 1st dimension start : 1st dimension end, 2nd dimension start : 2nd dimension end ]```Dimensions are separated by the comma `,`. Our first dimension is the vertical axis, and the second dimension is the horizontal axis. Their spans are marked by the colon `:`. Therefore:```python[ Vertical start : Vertical end, Horizontal start : Horizontal end ]```If there are no indexes entered, then the array will take all values. This means `[:, :5]` gives:```python[ Vertical start : Vertical end, Horizontal start : Horizontal start + 5 ]```Therefore the array index selected the first 5 pixels along the width, at all vertical values. Now let's see what that looks like on an actual image. > **Tip**: Ensure you uploaded the file `Guinea_Bissau.JPG` to your **Training** folder along with the tutorial notebook. We will be using this file in the next few steps and exercises. We can use the pyplot library to load an image using the matplotlib function `imread`. `imread` reads in an image file as a 3-dimensional numpy array. This makes it easy to manipulate the array. By convention, the first dimension corresponds to the vertical axis, the second to the horizontal axis and the third are the Red, Green and Blue channels of the image. Red-green-blue channels conventionally take on values from 0 to 255.
###Code
im = np.copy(plt.imread('Guinea_Bissau.JPG'))
# This file path (red text) indicates 'Guinea_Bissau.JPG' is in the
# same folder as the tutorial notebook. If you have moved or
# renamed the file, the file path must be edited to match.
im.shape
###Output
_____no_output_____
###Markdown
`Guinea_Bissau.JPG` is an image of Rio Baboque in Guinea-Bissau in 2018. It has been generated from Landsat 8 satellite data. The results of the above cell show that the image is 590 pixels tall, 602 pixels wide, and has 3 channels. The three channels are red, green, and blue (in that order). Let's display this image using the pyplot `imshow` function.
###Code
plt.imshow(im)
###Output
_____no_output_____
###Markdown
Exercises 3.1 Let's use the indexing functionality of numpy to select a portion of this image. Select the top-right corner of this image with shape `(200,200)`.> **Hint:** Remember there are three dimensions in this image. Colons separate spans, and commas separate dimensions.
###Code
# We already defined im above, but if you have not,
# you can un-comment and run the next line
# im = np.copy(plt.imread('Guinea_Bissau.JPG'))
# Fill in the question marks with the correct indexes
topright = im[?,?,?]
# Plot your result using imshow
plt.imshow(topright)
###Output
_____no_output_____
###Markdown
If you have selected the correct corner, there should not be much water in it! 3.2 Let's have a look at one of the pixels in this image. We choose the top-left corner with position `(0,0)` and show the values of its RGB channels.
###Code
# Run this cell to see the colour channel values
im[0,0]
###Output
_____no_output_____
###Markdown
The first value corresponds to the red component, the second to the green and the third to the blue. `uint8` can contain values in the range `[0-255]` so the pixel has a lot of red, some green, and not much blue. This pixel is an orange-yellow sandy colour. Now let's modify the image. What happens if we set all the values representing the blue channel to the maximum value?
###Code
# Run this cell to set all blue channel values to 255
# We first make a copy to avoid modifying the original image
im2 = np.copy(im)
im2[:,:,2] = 255
plt.imshow(im2)
###Output
_____no_output_____ |
problems/0059/solution.ipynb | ###Markdown
Problem 59 XOR decryptionEach character on a computer is assigned a unique code and the preferred standard is ASCII (American Standard Code for Information Interchange). For example, uppercase $A = 65$, asterisk $(*) = 42$, and lowercase $k = 107$.A modern encryption method is to take a text file, convert the bytes to ASCII, then XOR each byte with a given value, taken from a secret key. The advantage with the XOR function is that using the same encryption key on the cipher text, restores the plain text; for example, $65 XOR 42 = 107$, then $107 XOR 42 = 65$.For unbreakable encryption, the key is the same length as the plain text message, and the key is made up of random bytes. The user would keep the encrypted message and the encryption key in different locations, and without both "halves", it is impossible to decrypt the message.Unfortunately, this method is impractical for most users, so the modified method is to use a password as a key. If the password is shorter than the message, which is likely, the key is repeated cyclically throughout the message. The balance for this method is using a sufficiently long password key for security, but short enough to be memorable.Your task has been made easy, as the encryption key consists of three lower case characters. Using [p059_cipher.txt](https://projecteuler.net/project/resources/p059_cipher.txt) (right click and 'Save Link/Target As...'), a file containing the encrypted ASCII codes, and the knowledge that the plain text must contain common English words, decrypt the message and find the sum of the ASCII values in the original text. Solution
###Code
from itertools import product
def compute(path: str, n: int) -> int:
    # read the encrypted ASCII codes from the comma-separated file
    text = list(map(int, open(path).read().split(',')))
    # candidate key letters for each of the n key positions
    keys = {i: set() for i in range(n)}
    letters = range(97, 123)  # ASCII codes for lower case 'a' to 'z'
    for i in range(n):
        for j in letters:
            # keep letter j for position i only while every character it decrypts is printable
            for k in range(i, len(text), n):
                keys[i].add(j)
                if not 32 <= text[k] ^ j <= 122:
                    keys[i].remove(j)
                    break
    # try every combination of surviving candidates until the plain text looks like English
    for key in product(*list(keys.values())):
        decrypted_text = ''
        result = 0
        for i, j in enumerate(text):
            xor = j ^ key[i % n]
            decrypted_text += chr(xor)
            result += xor
        # a text containing ' the ' is assumed to be the correctly decrypted message
        if ' the ' in decrypted_text:
            return result
compute('p059_cipher.txt', 3)
%timeit -n 100 -r 1 -p 6 compute('p059_cipher.txt', 3)
###Output
3.84701 ms ± 0 ns per loop (mean ± std. dev. of 1 run, 100 loops each)
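###Markdown
A small sanity check of the XOR property the statement relies on (a sketch, not part of the original solution): applying the same repeated key twice restores the plain text.
###Code
key = [ord(c) for c in 'abc']     # an arbitrary three-letter key, just for illustration
plain = [ord(c) for c in 'hello world']
cipher = [p ^ key[i % len(key)] for i, p in enumerate(plain)]
restored = [c ^ key[i % len(key)] for i, c in enumerate(cipher)]
''.join(map(chr, restored))
###Output
_____no_output_____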
|
src/notebooks/PICO.ipynb | ###Markdown
The PICO model based on Reese et al (2018): "Antarctic sub-shelf melt rates via PICO"
In part (a) we test a few idealized geometries; in part (b) realistic geometries are presented.
There are a few differences to the original implementation with respect to the real geometries:
- underlying datasets: we use the BedMachine2 data
- model resolution: we use the BedMachine native grid at 500 m grid spacing, whereas PICO uses 5 km
Favier's implementation compares the PICO box model (BM) to a simple parametrization (M) and a plume model (PME):
- use two constant depths for "ambient" temperatures: 500 m or 700 m
- use 2, 5, or 10 boxes
- avoid pressure dependence of melting because it introduces an energetic inconsistency -> uniform melting in boxes
###Code
import sys
import numpy as np
import xarray as xr
import pandas as pd
import warnings
import geopandas
import matplotlib
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
sys.path.append("..")
# matplotlib.rc_file('../rc_file')
%matplotlib inline
%config InlineBackend.print_figure_kwargs={'bbox_inches':None}
%load_ext autoreload
%autoreload 2
warnings.filterwarnings("ignore", category=matplotlib.MatplotlibDeprecationWarning)
from real_geometry import RealGeometry, glaciers
from PICO import PicoModel, table2
from compare_models import compare_PICO
###Output
_____no_output_____
###Markdown
(a) idealized geometries
###Code
f, ax = plt.subplots(5,3, figsize=(12,12), sharey='row', constrained_layout=True)
for i, testcase in enumerate(['test1', 'test2', 'test3']):
geo, ds = PicoModel(name=testcase).compute_pico()
geo.draft.plot(ax=ax[0,i])
ax[0,i].set_title(testcase)
ds.melt.plot(ax=ax[1,i])
ds.mk.plot(ax=ax[2,i])
ds.Tk.plot(ax=ax[3,i])
ds.Sk.plot(ax=ax[4,i])
###Output
_____no_output_____
###Markdown
These are test geometries, the `test1` is a quasi-1D iceshelf of 100 km length with a grounding line depth of 1000 m and an ice shelf front depth of 500 m. `test2` is simply a rotated version of `test1`. `test3` has a sinusoidal grounding line profile and a flat ice shelf front profile. The geometries (arbitrarily) have 3 boxes. `boxnr=0` represents either the average (for melt) or the ambient conditions (temperature and salinity).The melt is highest near the grounding line in part because in-situ temperatures are highest there. Both temperature and salinity decrease as the plume ascends towards the ice shelf front. (b) real geometriesAt first execution, the code creates the real geometries from the BedMachine data and IceVelocity data (these files are too big for version control on Github, but see lines 26f in `real_geometries.py` for their location). example: Thwaites glacier
###Code
geo, ds = PicoModel('Thwaites').compute_pico()
f, ax = plt.subplots(1,4, figsize=(20,4), sharey=True)
geo.draft.plot(ax=ax[0])
geo.rd.plot(ax=ax[1])
geo.box.plot(ax=ax[2])
ds.melt.plot(ax=ax[3])
###Output
_____no_output_____
###Markdown
comparing the 6 currently implemented ice shelves
###Code
for i, glacier in enumerate(glaciers):
if glacier in ['Ross', 'FilchnerRonne']: # at the BedMachine resolution, these datasets are too big for laptop memory
continue
PicoModel(glacier).compute_pico()
compare_PICO()
###Output
_____no_output_____
###Markdown
maps of Amundsen Sea and East Antarctica
###Code
proj = ccrs.SouthPolarStereo(true_scale_latitude=-71)
def fn_poly(glacier): return f'../../data/mask_polygons/{glacier}_polygon.geojson'
x5, y5, _, _ = geopandas.read_file(fn_poly('MoscowUniversity'), crs='espg:3031').total_bounds
_, _, x6, y6 = geopandas.read_file(fn_poly('Totten') , crs='espg:3031').total_bounds
x3, _, _, y4 = geopandas.read_file(fn_poly('PineIsland') , crs='espg:3031').total_bounds
_, y3, x4, _ = geopandas.read_file(fn_poly('Dotson') , crs='espg:3031').total_bounds
import matplotlib.ticker as mticker
f = plt.figure(figsize=(8,12))
for i in range(2): # Amundsen Sea, Totten+MoscowUniversity
(x1,x2,y1,y2) = [(x3,x4,y3-1e4,y4+2e4),(x5-1e4,x6,y5,y6+1e4)][i]
shelves = [['PineIsland','Thwaites','Dotson'], ['Totten','MoscowUniversity']][i]
for s, shelf in enumerate(shelves):
(x,y) = [[(.65,.88),(.05,.55),(.05,.2)],[(.3,.8),(.4,.1)]][i][s]
name = [['Pine\nIsland','Thwaites','Dotson/\nCrosson'], ['Totten','Moscow\nUniversity']][i][s]
dsg = xr.open_dataset(RealGeometry(shelf).fn_PICO)
dsP = xr.open_dataset(PicoModel(shelf).fn_PICO_output)
lon, lat = dsg.lon, dsg.lat
for j in range(3):
q = [dsg.draft, dsg.box.where(dsg.mask), dsP.melt.where(dsg.mask)][j]
cmap = ['viridis', 'Spectral','inferno_r'][j]
(vmin,vmax) = [(-2000,0),(1,2),(0,25)][j]
ax = f.add_axes([j/3,.545-.54*i,.33,.45], projection=proj)
ax.set_frame_on(False)
ax.set_extent([x1,x2,y1,y2], crs=proj)
ax.coastlines()
gl = ax.gridlines()
gl.xlocator = mticker.FixedLocator(np.arange(-180,179,5))
gl.ylocator = mticker.FixedLocator(np.arange(-89,89))
im = ax.pcolormesh(lon, lat, q, transform=ccrs.PlateCarree(),
cmap=cmap, vmin=vmin, vmax=vmax)
if i==0: # colorbars
cax = f.add_axes([j/3+.02,.5,.29,.02])
label = ['draft [m]', 'box nr.', 'melt rate [m/yr]'][j]
plt.colorbar(im, cax=cax, orientation='horizontal', label=label)
if j==0: ax.text(x, y, name, weight='bold', transform=ax.transAxes)
if j==2: ax.text(x, y, f'{dsP.mk[0].values:.2f} m/yr', transform=ax.transAxes)
# f, ax = plt.subplots(4, 3, figsize=(15,15))
# for i, key in enumerate(list(ds.keys())[:-1]):
# if i<9: kwargs = {'cbar_kwargs':{'orientation':'horizontal'}}
# else: kwargs = {}
# ds[key].plot(ax=ax[int(i/3), i%3], **kwargs)
###Output
_____no_output_____ |
notes/book_ap/CSVShape.ipynb | ###Markdown
Import
###Code
from dataclasses import dataclass, field, asdict
from typing import List
from csv2shex.csvreader import (
csvreader,
_get_csvrow_dicts_list,
_get_corrected_csvrows_list,
_get_csvshape_dicts_list,
)
from csv2shex.csvrow import CSVRow
from csv2shex.utils import pprint_df
import pandas as pd
###Output
_____no_output_____
###Markdown
Declare
###Code
@dataclass
class CSVTripleConstraint:
"""Instances hold TAP/CSV row elements that form a triple constraint."""
propertyID: str = None
valueConstraint: str = None
valueShape: str = None
extras: field(default_factory=dict) = None
# propertyLabel: str = None
# mandatory: str = None
# repeatable: str = None
# valueNodeType: str = None
# valueDataType: str = None
# valueConstraintType: str = None
# note: str = None
@dataclass
class CSVShape:
"""Instances hold TAP/CSV row elements that form a shape."""
shapeID: str = None
    shapeLabel: str = None
    shapeClosed: str = None
    start: bool = False
tripleconstraints_list: List[CSVTripleConstraint] = field(default_factory=list)
@dataclass
class CSVSchema:
"""Set of shapes."""
csvrow_dicts_list = [{'shapeID': ':book',
'propertyID': 'dc:creator',
'valueConstraint': '',
'valueShape': ':author'},
{'shapeID': '',
'propertyID': 'dc:type',
'valueConstraint': 'so:Book',
'valueShape': ''},
{'shapeID': ':author',
'propertyID': 'foaf:name',
'valueConstraint': '',
'valueShape': ''}]
###Output
_____no_output_____
###Markdown
For each row 1. Initialize instance of CSVShape
###Code
dict_of_shape_objs = dict()
for row in csvrow_dicts_list:
    shape = CSVShape()
    shape.shapeID = row["shapeID"]
    shape.tripleconstraints_list = list()
    dict_of_shape_objs[shape.shapeID] = shape
dict_of_shape_objs
###Output
_____no_output_____
###Markdown
2. On finding new shapeID, capture shape-related elements in a shape_dict.
###Code
shape_dict = dict()
shape_dict["shapeID"] = "b"
shape_dict["shapeLabel"] = "label"
shape_dict["shapeClosed"] = False
shape_dict["start"] = True
shape_dict["tripleconstraints_list"] = list()
shape_dict
###Output
_____no_output_____
###Markdown
3. Assign CSVShape instance as value to key "shapeID" in dict_of_shape_objs
###Code
dict_of_shape_objs = dict()
dict_of_shape_objs[shape_dict["shapeID"]] = CSVShape(**shape_dict)
dict_of_shape_objs
"b" in dict_of_shape_objs
# Triple constraints list for shape "b"
dict_of_shape_objs["b"].tripleconstraints_list
###Output
_____no_output_____
###Markdown
4. Each new shape is added to dict_of_shape_objs.
###Code
shape_dict = dict()
shape_dict["shapeID"] = "c"
shape_dict["shapeLabel"] = "clabel"
shape_dict["shapeClosed"] = False
shape_dict["start"] = False
shape_dict["tripleconstraints_list"] = list()
dict_of_shape_objs[shape_dict["shapeID"]] = CSVShape(**shape_dict)
dict_of_shape_objs
dict_of_shape_objs.keys()
# After first row, for rows that lack shapeIDs, get most-recently-inserted key from dict_of_shape_dicts
list(dict_of_shape_objs.keys())[-1]
###Output
_____no_output_____
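###Markdown
Putting the pieces together: a minimal sketch (using only the classes defined above; not part of the original cells) that walks csvrow_dicts_list once, starts a new CSVShape whenever a row carries a shapeID, falls back to the most recently inserted shape otherwise, and appends one triple constraint dict per row.
###Code
def build_shapes(rows):
    shapes = dict()
    for row in rows:
        # blank shapeID means: reuse the most recently inserted shape
        shape_id = row["shapeID"] or list(shapes.keys())[-1]
        if shape_id not in shapes:
            shapes[shape_id] = CSVShape(shapeID=shape_id, tripleconstraints_list=list())
        tc = {"propertyID": row["propertyID"],
              "valueConstraint": row["valueConstraint"],
              "valueShape": row["valueShape"]}
        shapes[shape_id].tripleconstraints_list.append(tc)
    return shapes
build_shapes(csvrow_dicts_list)
###Output
_____no_output_____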
###Markdown
4.
###Code
# Problem: append multiple triple constraint dicts to tripleconstraints_list
tc_dict = dict()
tc_dict["propertyID"] = "dc:type"
tc_dict["valueConstraint"] = "foaf:Person"
dict_of_shape_objs["b"].tripleconstraints_list.append(tc_dict)
dict_of_shape_objs
# Problem: append multiple triple constraint dicts to tripleconstraints_list
tc_dict = dict()
tc_dict["propertyID"] = "dc:creator"
tc_dict["valueConstraint"] = "http://example.org/person1"
tc_obj = CSVTripleConstraint(**tc_dict)
tc_obj
CSVTripleConstraint(**tc_dict)
dict_of_shape_objs
# This is to pretty-print the entire CSVShape
vars(CSVShape(shapeID='b', shapeLabel='label', shapeClosed=False, start=True, tripleconstraints_list=[
{'propertyID': 'dc:type', 'valueConstraint': 'foaf:Person'},
{'propertyID': 'dc:creator', 'valueConstraint': 'http://example.org/person1'}]))
###Output
_____no_output_____ |
Teaching Session1_part5.ipynb | ###Markdown
Operations on NumPy Arrays
The learning objectives of this section are:
* Manipulate arrays
    * Reshape arrays
    * Stack arrays
* Perform operations on arrays
    * Perform basic mathematical operations
    * Apply built-in functions
    * Apply your own functions
    * Apply basic linear algebra operations
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Example - 1 (Arithmetic Operations)
###Code
array1 = np.array([10,20,30,40,50])
array2 = np.arange(5)
array1
array2
# Add array1 and array2.
array3 = array1 + array2
array3
###Output
_____no_output_____
###Markdown
Example - 2
###Code
array4 = np.array([1,2,3,4])
array4 + array1
print (array1.shape)
print (array4.shape)
###Output
(4,)
###Markdown
Example - 3
###Code
array = np.linspace(1, 10, 5)
array
array*2
array**2
###Output
_____no_output_____
###Markdown
Stacking Arrays ```np.hstack()``` and ```np.vstack()```
Stacking is done using the ```np.hstack()``` and ```np.vstack()``` methods. For horizontal stacking, the number of rows should be the same, while for vertical stacking, the number of columns should be the same.
###Code
# Note that np.hstack(a, b) throws an error - you need to pass the arrays as a list
a = np.array([1, 2, 3])
b = np.array([2, 3, 4])
np.hstack((a,b))
np.vstack((a,b))
np.arange(12)
np.arange(12).reshape(3,4)
array1 = np.arange(12).reshape(3,4) #3x4
array2 = np.arange(20).reshape(5,4) #5x4
print (array1, '\n', array2)
np.vstack((array1,array2))
###Output
_____no_output_____
###Markdown
Example - 4 (Numpy Built-in functions)
###Code
array1
np.power(array1, 3)
np.arange(9).reshape(3,3)
x = np.array([-2,-1, 0, 1,2])
x
abs(x)
np.absolute(x)
###Output
_____no_output_____
###Markdown
Example - 5 (Trigonometric functions)
###Code
np.pi
theta = np.linspace(0, np.pi, 5)
theta
np.sin(theta)
np.cos(theta)
np.tan(theta)
###Output
_____no_output_____
###Markdown
Example - 6 (Exponential and logarithmic functions)
###Code
x = [1, 2, 3, 10]
x = np.array(x)
np.exp(x) # e=2.718...
# 2^1, 2^2, 2^3, 2^10
np.exp2(x)
np.power(x,3)
np.log(x)
np.log2(x)
np.log10(x)
np.log
###Output
_____no_output_____
###Markdown
Example - 7
###Code
x = np.arange(5)
x
y = x * 10
y
y = np.empty(5)
y
np.multiply(x, 12, out=y)
y
y = np.zeros(10)
y
np.power(2, x, out=y[::2])
y
###Output
_____no_output_____
###Markdown
Example - 8 (Aggregates)
###Code
x = np.arange(1,6)
x
sum(x)
np.add.reduce(x)
np.add.accumulate(x)
np.multiply.accumulate(x)
###Output
_____no_output_____
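###Markdown
Example - 9 (Apply your own functions) The learning objectives above also mention applying your own functions to arrays; one common way (a sketch, not in the original cells) is ```np.vectorize```.
###Code
def label(n):
    return 'even' if n % 2 == 0 else 'odd'
vectorized_label = np.vectorize(label)
vectorized_label(np.arange(6))
###Output
_____no_output_____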
###Markdown
Apply Basic Linear Algebra Operations
NumPy provides the ```np.linalg``` package to apply common linear algebra operations, such as:
* ```np.linalg.inv```: Inverse of a matrix
* ```np.linalg.det```: Determinant of a matrix
* ```np.linalg.eig```: Eigenvalues and eigenvectors of a matrix
Also, you can multiply matrices using ```np.dot(a, b)```.
###Code
# np.linalg documentation
help(np.linalg)
A = np.array([[6, 1, 1],
[4, -2, 5],
[2, 8, 7]])
A
###Output
_____no_output_____
###Markdown
Rank of a matrix
###Code
np.linalg.matrix_rank(A)
###Output
_____no_output_____
###Markdown
Trace of matrix A
###Code
np.trace(A)
###Output
_____no_output_____
###Markdown
Determinant of a matrix
###Code
np.linalg.det(A)
###Output
_____no_output_____
###Markdown
Inverse of matrix A
###Code
A
np.linalg.inv(A)
B = np.linalg.inv(A)
np.matmul(A,B) #actual matrix multiplication
A * B
###Output
_____no_output_____
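###Markdown
The ```np.linalg``` list above also mentions eigenvalues and matrix multiplication with ```np.dot```; a short illustration (not in the original cells) using the matrices A and B defined earlier.
###Code
eigenvalues, eigenvectors = np.linalg.eig(A)
eigenvalues
np.dot(A, B)   # equivalent to np.matmul(A, B) for 2-D arrays
###Output
_____no_output_____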
###Markdown
Matrix A raised to power 3
###Code
np.linalg.matrix_power(A,3) # matrix multiplication A A A
###Output
_____no_output_____ |
query_language.ipynb | ###Markdown
pandas temporal query language
###Code
#export
import re
import numpy as np
import pandas as pd
import ast
import glob
import ntpath
import os
from itertools import zip_longest, chain
from itertools import product
from functools import lru_cache
from functools import singledispatch
###Output
_____no_output_____
###Markdown
General helper functions Info
###Code
#export
class Info():
"""
A class to store information about the data and results from analysis
"""
def __init__(self):
self.evaluated = {}
###Output
_____no_output_____
###Markdown
memory
###Code
#export
def memory(info, func, expr):
"""
checks if the function has been called with the same argument previously and
if so, returns the same results instead of running the function again
args:
-
"""
rows=None
if info:
if func in info.evaluated:
if expr in info.evaluated[func]:
rows = info.evaluated[func][expr]
else:
info.evaluated[func] = {}
else:
info = Info()
info.evaluated[func] = {}
return info, rows
###Output
_____no_output_____
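###Markdown
A quick illustration of the caching pattern (a sketch, not part of the library code): the first call finds nothing cached, and a stored result is returned on the second call.
###Code
info = Info()
info, cached = memory(info, 'get_rows', 'K50')
print(cached)                                    # None: nothing cached yet
info.evaluated['get_rows']['K50'] = 'stored result'
info, cached = memory(info, 'get_rows', 'K50')
print(cached)                                    # 'stored result'
###Output
_____no_output_____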
###Markdown
listify
###Code
#export
def listify(string_or_list):
"""
return a list if the input is a string, if not: returns the input as it was
Args:
string_or_list (str or any):
Returns:
A list if the input is a string, if not: returns the input as it was
Note:
- allows user to use a string as an argument instead of single lists
- cols='icd10' is allowed instead of cols=['icd10']
- cols='icd10' is transformed to cols=['icd10'] by this function
"""
if isinstance(string_or_list, str):
string_or_list = [string_or_list]
return string_or_list
###Output
_____no_output_____
###Markdown
unique
###Code
#export
# A function to identify all unique values in one or more columns
# with one or multiple codes in each cell
def unique(df, cols=None, sep=None, all_str=True):
"""
Lists unique values from one or more columns
sep (str): separator if cells have multiple values
all_str (bool): converts all values to strings
unique(df=df, cols='inpatient', sep=',')
"""
# if no column(s) are specified, find unique values in whole dataframe
if cols==None:
cols=list(df.columns)
cols = listify(cols)
# multiple values with separator in cells
if sep:
all_unique=set()
for col in cols:
new_unique = set(df[col].str.cat(sep=',').split(','))
all_unique.update(new_unique)
# single valued cells
else:
all_unique = pd.unique(df[cols].values.ravel('K'))
# if need to make sure all elements are strings without surrounding spaces
if all_str:
all_unique=[str(value).strip() for value in all_unique]
return all_unique
#unique(df=df, cols='codes', sep=',')
###Output
_____no_output_____
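###Markdown
A small check of unique() on a toy frame with multi-valued cells (a sketch, not part of the library code).
###Code
toy = pd.DataFrame({'codes': ['K50,K51', 'K51', 'C18,K50']})
unique(df=toy, cols='codes', sep=',')
# expected (order may vary): ['C18', 'K50', 'K51']
###Output
_____no_output_____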
###Markdown
del dot and zero
###Code
#export
def del_dot(code):
if isinstance(code, str):
return code.replace('.','')
else:
codes = [c.replace('.','') for c in code]
return codes
def del_zero(code, left=True, right=False):
    # accept a single code (str) or a list of codes
    codes = [code] if isinstance(code, str) else list(code)
    if left:
        codes = [c.lstrip('0') for c in codes]
    if right:
        codes = [c.rstrip('0') for c in codes]
    if isinstance(code, str):
        return codes[0]
    return codes
###Output
_____no_output_____
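###Markdown
Quick examples of the two cleaning helpers (a sketch, not part of the library code).
###Code
del_dot('K50.1')                  # -> 'K501'
del_dot(['K50.1', 'M05.32'])      # -> ['K501', 'M0532']
del_zero(['007', '070'])          # -> ['7', '70'] with the default left=True
###Output
_____no_output_____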
###Markdown
Notations expand hyphen
###Code
#export
# function to expand a string like 'K51.2-K53.8' to a list of codes
# Need regex to extract the number component of the input string
# The singledispach decorator enables us to have the same name, but use
# different functions depending on the datatype of the first argument.
#
# In our case we want one function to deal with a single string input, and
# another to handle a list of strings. It could all be handled in a single
# function using nested if, but singledispatch makes it less messy and more fun!
# Here is the main function, it is just the name and an error message if the
# argument does not fit any of the inputs that wil be allowed
@singledispatch
def expand_hyphen(expr):
"""
Expands codes expression(s) that have hyphens to list of all codes
Args:
code (str or list of str): String or list of strings to be expanded
Returns:
List of strings
Examples:
expand_hyphen('C00*-C26*')
expand_hyphen('b01.1*-b09.9*')
expand_hyphen('n02.2-n02.7')
expand_hyphen('c00*-c260')
expand_hyphen('b01-b09')
expand_hyphen('b001.1*-b009.9*')
expand_hyphen(['b001.1*-b009.9*', 'c11-c15'])
Note:
Unequal number of decimals in start and end code is problematic.
Example: C26.0-C27.11 will not work since the meaning is not obvious:
Is the step size 0.01? In which case C27.1 will not be included, while
C27.10 will be (and traing zeros can be important in codes)
"""
raise ValueError('The argument must be a string or a list')
# register the function to be used if the input is a string
@expand_hyphen.register(str)
def _(expr):
# return immediately if nothing to expand
if '-' not in expr:
return [expr]
lower, upper = expr.split('-')
lower=lower.strip()
# identify the numeric component of the code
lower_str = re.search("\d*\.\d+|\d+", lower).group()
upper_str = re.search("\d*\.\d+|\d+", upper).group()
# note: what about european decimal notation?
# also note: what if multiple groups K50.1J8.4-etc
lower_num = int(lower_str.replace('.',''))
upper_num = int(upper_str.replace('.','')) +1
if upper_num<lower_num:
raise ValueError('The start code cannot have a higher number than the end code')
# remember length in case of leading zeros
length = len(lower_str)
nums = range(lower_num, upper_num)
# must use integers in a loop, not floats
# which also means that we must multiply and divide to get decimal back
# and take care of leading and trailing zeros that may disappear
if '.' in lower_str:
lower_decimals = len(lower_str.split('.')[1])
upper_decimals = len(upper_str.split('.')[1])
if lower_decimals==upper_decimals:
multiplier = 10**lower_decimals
codes = [lower.replace(lower_str, format(num /multiplier, f'.{lower_decimals}f').zfill(length)) for num in nums]
# special case: allow k1.1-k1.123, but not k.1-k2.123 the last is ambigious: should it list k2.0 only 2.00?
elif (lower_decimals<upper_decimals) & (upper_str.split('.')[0]==lower_str.split('.')[0]):
from_decimal = int(lower_str.split('.')[1])
to_decimal = int(upper_str.split('.')[1]) +1
nums = range(from_decimal, to_decimal)
decimal_str = '.'+lower.split('.')[1]
codes = [lower.replace(decimal_str, '.'+str(num)) for num in nums]
else:
raise ValueError('The start code and the end code do not have the same number of decimals')
else:
codes = [lower.replace(lower_str, str(num).zfill(length)) for num in nums]
return codes
# register the function to be used if if the input is a list of strings
@expand_hyphen.register(list)
def _(expr):
extended = []
for word in expr:
extended.extend(expand_hyphen(word))
return extended
###Output
_____no_output_____
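###Markdown
A couple of quick checks of the hyphen expansion (a sketch, not part of the library code).
###Code
expand_hyphen('K50-K53')          # -> ['K50', 'K51', 'K52', 'K53']
expand_hyphen('b01.1-b01.3')      # -> ['b01.1', 'b01.2', 'b01.3']
###Output
_____no_output_____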
###Markdown
expand star
###Code
#export
# A function to expand a string with star notation (K50*)
# to list of all codes starting with K50
@singledispatch
def expand_star(code, all_codes=None):
"""
Expand expressions with star notation to a list of all values with the specified pattern
Args:
expr (str or list): Expression (or list of expressions) to be expanded
all_codes (list) : A list of all codes
Examples:
expand_star('K50*', all_codes=icd9)
expand_star('K*5', all_codes=icd9)
expand_star('*5', all_codes=icd9)
"""
raise ValueError('The argument must be a string or a list')
@expand_star.register(str)
def _(code, all_codes=None):
# return immediately if there is nothing to expand
if '*' not in code:
return [code]
start_str, end_str = code.split('*')
    if start_str and end_str:
        codes = {c for c in all_codes if c.startswith(start_str) and c.endswith(end_str)}
    elif start_str:
        codes = {c for c in all_codes if c.startswith(start_str)}
    elif end_str:
        codes = {c for c in all_codes if c.endswith(end_str)}
    else:
        codes = set(all_codes)
    return sorted(list(codes))
@expand_star.register(list)
def _(code, all_codes=None):
expanded=[]
for star_code in code:
new_codes = expand_star(star_code, all_codes=all_codes)
expanded.extend(new_codes)
# uniqify in case some overlap
expanded = list(set(expanded))
return sorted(expanded)
###Output
_____no_output_____
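###Markdown
A quick check of the star expansion against a small code list (a sketch, not part of the library code).
###Code
icd_demo = ['K50', 'K501', 'K51', 'C18']
expand_star('K50*', all_codes=icd_demo)   # -> ['K50', 'K501']
expand_star('*8', all_codes=icd_demo)     # -> ['C18']
###Output
_____no_output_____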
###Markdown
expand colon
###Code
#export
# function to get all codes in a list between the specified start and end code
# Example: Get all codes between K40:L52
@singledispatch
def expand_colon(code, all_codes=None):
raise ValueError('The argument must be a string or a list')
@expand_colon.register(str)
def _(code, all_codes=None):
"""
Expand expressions with colon notation to a list of complete code names
code (str or list): Expression (or list of expressions) to be expanded
all_codes (list or array) : The list to slice from
Examples
K50:K52
K50.5:K52.19
A3.0:A9.3
Note: This is different from hyphen and star notation because it can handle
different code lengths and different number of decimals
"""
if ':' not in code:
return [code]
startstr, endstr = code.split(':')
# remove spaces
startstr = startstr.strip()
endstr =endstr.strip()
# find start and end position
startpos = all_codes.index(startstr)
    endpos = all_codes.index(endstr)
    # slice list (inclusive of both the start and the end code)
    expanded = all_codes[startpos:endpos+1]
return expanded
@expand_colon.register(list)
def _(code, all_codes=None, regex=False):
expanded=[]
for cod in code:
new_codes = expand_colon(cod, all_codes=all_codes)
expanded.extend(new_codes)
return expanded
###Output
_____no_output_____
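###Markdown
A quick check of the colon expansion (a sketch, not part of the library code): both endpoints are included.
###Code
codelist = ['K49', 'K50', 'K51', 'K52', 'K53']
expand_colon('K50:K52', all_codes=codelist)   # -> ['K50', 'K51', 'K52']
###Output
_____no_output_____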
###Markdown
expand regex
###Code
#export
# Return all elements in a list that fits a regex pattern
@singledispatch
def expand_regex(code, all_codes):
raise ValueError('The argument must be a string or a list of strings')
@expand_regex.register(str)
def _(code, all_codes=None):
code_regex = re.compile(code)
expanded = {code for code in all_codes if code_regex.match(code)}
# uniqify
expanded = list(set(expanded))
return expanded
@expand_regex.register(list)
def _(code, all_codes):
expanded=[]
for cod in code:
new_codes = expand_regex(cod, all_codes=all_codes)
expanded.extend(new_codes)
# uniqify in case some overlap
expanded = sorted(list(set(expanded)))
return expanded
###Output
_____no_output_____
###Markdown
expand code
###Code
#export
@singledispatch
def expand_code(code, all_codes=None,
hyphen=True, star=True, colon=True, regex=False,
drop_dot=False, drop_leading_zero=False,
sort_unique=True):
raise ValueError('The argument must be a string or a list of strings')
@expand_code.register(str)
def _(code, all_codes=None,
hyphen=True, star=True, colon=True, regex=False,
drop_dot=False, drop_leading_zero=False,
sort_unique=True):
#validating input
if (not regex) and (':' in code) and (('-' in code) or ('*' in code)):
raise ValueError('Notation using colon must start from and end in specific codes, not codes using star or hyphen')
if regex:
codes = expand_regex(code, all_codes=all_codes)
return codes
if drop_dot:
code = del_dot(code)
codes=[code]
if hyphen:
codes=expand_hyphen(code)
if star:
codes=expand_star(codes, all_codes=all_codes)
if colon:
codes=expand_colon(codes, all_codes=all_codes)
if sort_unique:
codes = sorted(list(set(codes)))
return codes
@expand_code.register(list)
def _(code, all_codes=None, hyphen=True, star=True, colon=True, regex=False,
drop_dot=False, drop_leading_zero=False,
sort_unique=True):
expanded=[]
for cod in code:
new_codes = expand_code(cod, all_codes=all_codes, hyphen=hyphen, star=star, colon=colon, regex=regex, drop_dot=drop_dot, drop_leading_zero=drop_leading_zero)
expanded.extend(new_codes)
# uniqify in case some overlap
expanded = list(set(expanded))
return sorted(expanded)
###Output
_____no_output_____
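###Markdown
The combined expander accepts mixed notations (a sketch, not part of the library code).
###Code
codelist = ['C18', 'C19', 'C20', 'K50', 'K501']
expand_code(['K50*', 'C18:C19'], all_codes=codelist)   # -> ['C18', 'C19', 'K50', 'K501']
###Output
_____no_output_____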
###Markdown
expand columns
###Code
#export
def expand_columns(expr, all_columns=None, df=None, star=True,
hyphen=True, colon=True, regex=None, info=None):
"""
Expand columns with special notation to their full column names
"""
notations = '* - :'.split()
# return immediately if not needed
if not any(symbol in expr for symbol in notations):
return [expr]
    # get a list of columns if it is only implicitly defined by the df
    # warning: may deprecate this, require explicit all_columns
    if (df is not None) and (not all_columns):
        all_columns = list(df.columns)
    if regex:
        cols = [col for col in all_columns if re.match(regex, col)]
    else:
        # chain the expansions so each step works on the result of the previous one
        cols = [expr]
        if hyphen:
            cols = expand_hyphen(cols)
        if star:
            cols = expand_star(cols, all_codes=all_columns)
        if colon:
            cols = expand_colon(cols, all_codes=all_columns)
    return cols
###Output
_____no_output_____
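###Markdown
A quick check of the column expansion (a sketch, not part of the library code).
###Code
expand_columns('icd*', all_columns=['icd1', 'icd2', 'atc'])   # -> ['icd1', 'icd2']
###Output
_____no_output_____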
###Markdown
More helper functions get rows
###Code
#export
# mark rows that contain certain codes in one or more colums
def get_rows(df, codes, cols=None, sep=None, pid='pid', info=None, fix=True):
"""
Make a boolean series that is true for all rows that contain the codes
Args
df (dataframe or series): The dataframe with codes
codes (str, list, set, dict): codes to be counted
cols (str or list): list of columns to search in
sep (str): The symbol that seperates the codes if there are multiple codes in a cell
pid (str): The name of the column with the personal identifier
"""
# check if evaluated previously
    info, rows = memory(info=info, func='get_rows', expr=str(codes))
    if rows is not None:
return rows
# check if codes and columns need to be expanded (needed if they use notation)
if fix:
# do this when if cols exist, but if it does not ...
cols = expand_columns(expr=cols, all_columns=list(df.columns), info=info)
all_codes = sorted(unique(df=df, cols=cols, sep=sep))
codes = expand_code(codes, all_codes=all_codes)
# codes and cols should be lists
codes = listify(codes)
cols = listify(cols)
# approach depends on whether we have multi-value cells or not
# if sep exist, then have multi-value cells
if sep:
# have multi-valued cells
# note: this assumes the sep is a regex word delimiter
codes = [rf'\b{code}\b' for code in codes]
codes_regex = '|'.join(codes)
# starting point: no codes have been found
# needed since otherwise the function might return None if no codes exist
        rows = pd.Series([False]*len(df), index=df.index)
# loop over all columns and mark when a code exist
for col in cols:
rows=rows | df[col].str.contains(codes_regex, na=False)
# if not multi valued cells
else:
mask = df[cols].isin(codes)
rows = mask.any(axis=1)
return rows
###Output
_____no_output_____
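###Markdown
A small end-to-end check of get_rows on a toy frame (a sketch, not part of the library code).
###Code
toy = pd.DataFrame({'pid': [1, 1, 2], 'codes': ['K50,C18', 'K51', 'C18']})
get_rows(df=toy, codes='K5*', cols='codes', sep=',')
# expected: rows 0 and 1 contain a K5x code, row 2 does not
###Output
_____no_output_____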
###Markdown
make codes
###Code
#export
def make_codes(n, letters=26, numbers=100, seed=False):
"""
Generate a dataframe with a column of random codes
Args:
letters (int): The number of different letters to use
numbers (int): The number of different numbers to use
Returns
A dataframe with a column with one or more codes in the rows
"""
# each code is assumed to consist of a letter and a number
    alphabet = list('abcdefghijklmnopqrstuvwxyz')
    letters = alphabet[:letters]
# make random numbers same if seed is specified
if seed:
np.random.seed(0)
# determine the number of codes to be drawn for each event
n_codes=np.random.negative_binomial(1, p=0.3, size=n)
# avoid zero (all events have to have at least one code)
n_codes=n_codes+1
# for each event, randomly generate a the number of codes specified by n_codes
codes=[]
for i in n_codes:
diag = [np.random.choice(letters).upper()+
str(int(np.random.uniform(low=1, high=numbers)))
for num in range(i)]
code_string=','.join(diag)
codes.append(code_string)
# create a dataframe based on the list
df=pd.DataFrame(codes)
df.columns=['code']
return df
make_codes(10)
###Output
_____no_output_____
###Markdown
make data
###Code
#export
def make_data(n, letters=26, numbers=100, seed=False):
"""
Generate a dataframe with a column of random codes
Args:
letters (int): The number of different letters to use
numbers (int): The number of different numbers to use
Returns
A dataframe with a column with one or more codes in the rows
"""
pid = range(n)
df_person=pd.DataFrame(index = pid)
#female = np.random.binomial(1, 0.5, size =n)
gender = np.random.choice(['male', 'female'], size=n)
region = np.random.choice(['north', 'south', 'east', 'west'], size=n)
birth_year = np.random.randint(1920, 1980, size=n)
birth_month = np.random.randint(1,12, size=n)
birth_day = np.random.randint(1,28, size=n) # ok, I know!
events_per_year = np.random.poisson(1, size=n)
years = 2020 - birth_year
events = years * events_per_year
events = np.where(events==0,1,events)
events = events.astype(int)
all_codes=[]
codes = [all_codes.extend(make_codes(n=n, letters=letters,
numbers=numbers,
seed=seed)['code'].tolist())
for n in events]
days_alive = (2020 - birth_year) *365
days_and_events = zip(days_alive.tolist(), events.tolist())
all_days=[]
days_after_birth = [all_days.extend(np.random.randint(0, max_day, size=n)) for max_day, n in days_and_events]
pid_and_events = zip(list(pid), events.tolist())
all_pids=[]
    pids = [all_pids.extend([p]*e) for p, e in pid_and_events]
df_events = pd.DataFrame(index=all_pids)
df_events['codes'] = all_codes
df_events['days_after'] = all_days
#df_person['female'] = female
df_person['gender'] = gender
df_person['region'] = region
df_person['year'] = birth_year
df_person['month'] = birth_month
df_person['day'] = birth_day
df = df_events.merge(df_person, left_index=True, right_index=True)
df['birth_date'] = pd.to_datetime(df[['year', 'month', 'day']])
df['event_date'] = df['birth_date'] + pd.to_timedelta(df.days_after, unit='d')
del df['month']
del df['day']
del df['days_after']
df['pid'] = df.index
    df.index.name = 'pid_index'
df=df[['pid', 'gender', 'birth_date', 'event_date', 'region', 'codes']]
# include deaths too?
return df
#df = make_data(n=1000)
#df
#count_person('max 2 L35')
#count_person('x before y')
###Output
_____no_output_____
###Markdown
formatting an expression
###Code
#export
def format_expression(expr):
    """
    Formats an expression so it can be evaluated
    """
    original = expr
    # easier to parse and split when space only exists between significant words
    expr = remove_space(expr)
    # insert external variables (maybe unnecessary?)
    expr = insert_external(expr)
    # if multiple options are specified in the expression,
    # make one expression for each alternative specification
    exprs = get_expressions(expr)
    return exprs
###Output
_____no_output_____
###Markdown
remove_space
###Code
#export
def remove_space(expr):
no_space_before = r'(\s)([<=>,])'
no_space_after = r'([<=>,])(\s)'
expr = re.sub(no_space_before, r'\2', expr)
expr = re.sub(no_space_after, r'\1', expr)
return expr
###Output
_____no_output_____
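###Markdown
A quick check of the space normalisation (a sketch, not part of the library code).
###Code
remove_space('age > 20 and K50, K51 in icd')   # -> 'age>20 and K50,K51 in icd'
###Output
_____no_output_____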
###Markdown
get_expressions
###Code
#export
def get_expressions(expr):
"""
Makes a list of all possible statements from an expression, all possible combination of expressions involving ?[x, y, z] that are in the expressions
>>>expr = 'min ?[2,3,4] of (K50, K51) in icd inside ?[10, 20, 30] days before 4AB02 in ncmp'
    >>>get_expressions(expr)
"""
original = expr
alternatives = re.findall('\?(\[.*?\])', expr)
alt_list = [ast.literal_eval(alternative) for alternative in alternatives]
combinations = product(*alt_list)
all_expressions = []
for n, combination in enumerate(combinations):
new_expr = original
for i, value in enumerate(combination):
new_expr = new_expr.replace('?' + alternatives[i], str(value), 1)
all_expressions.extend([new_expr])
return all_expressions
expr = 'min ?[2,3,4] of (K50, K51) in icd inside ?[10, 20, 30] days before 4AB02 in ncmp'
get_expressions(expr)
###Output
_____no_output_____
###Markdown
insert_external
###Code
#export
def insert_external(expr):
"""
Replaces variables prefixed with @ in the expression with the
value of the variable from the global namespace
Example:
x=['4AB02', '4AB04', '4AB06']
expr = '@x before 4AB02'
insert_external(expr)
"""
externals = [word.strip('@') for word in expr.split() if word.startswith('@')]
for external in externals:
tmp = globals()[external]
expr = expr.replace(f'@{external} ', f'{tmp} ')
return expr
x_1=['4AB02', '4AB04', '4AB06']
expr = '@x_1 before 4AB02'
insert_external(expr)
###Output
_____no_output_____
###Markdown
insert columns
###Code
#export
def insert_columns(expr, cols=None, all_cols=None, code2col_rules = None, info=None):
"""
insert column names in expressions (in col ...)
logic: all conditions that do not contain column names, should end with an in statement
general approach:
- split on keywords to get each condition separately
- next: if the condition is about a column (age>20) no need to do anything
- if, not, check if it has an 'in', if not: insert one!
todo/problem: row selectors? code2col rule (a function that you can pass in?), sniffing, dict option, or in info?
expr = 'max 2 of 4AB02 before 4AB04'
expr = 'max 2 of 4AB02 in x before 4AB04'
expr = '5th of 5th' # the code is here also a keyword ... problem - maybe of as long as we keep the of keyword ... but more difficult when we do not, at least for automatic column labeling!
expr = 'max 2 of 4AB0552 before 4AB04'
expr = 'max 2 of 4AB02 in ncmp' # should include zero ?
expr = 'min ?[1,2,3,4] of 4AB02 in ncmp'
expr = 'max ?[1,2,3,4] of 4AB02 in ncmp' # should include zero ?
expr = 'min 2 of days>4'
expr = 'min 8 of days>6'
expr = 'min 3 of 4AB02 in ncmp within 200 days'
insert_cols(expr, rule=col_rules)
"""
split_words = [' and ', ' or ', ' before ', ' after ', ' within ']
for word in split_words:
expr = expr.replace(word, f'@split@{word}')
conditions = expr.split('@split@')
all_conditions=[]
for condition in conditions:
words = condition.split()
        # conditions that already compare a column (e.g. age>20) need no 'in' clause
        if re.search('[><=]', condition):
            pass
        elif len(words)==1:
            condition=condition + f' in {cols}'
        elif ' in ' not in condition:
#alternative: words[-2] != 'in' but what if multiple cols already with spaces, then problem
condition=condition + f' in {cols}'
all_conditions.append(condition)
new_expr = " ".join(all_conditions)
new_expr = new_expr.replace('@split@','')
return new_expr
###Output
_____no_output_____
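###Markdown
A quick check of the column insertion (a sketch, not part of the library code): conditions without an 'in' clause get the default column appended.
###Code
insert_columns('max 2 of 4AB02 before 4AB04', cols='ncmp')
# -> 'max 2 of 4AB02 in ncmp before 4AB04 in ncmp' (up to spacing)
###Output
_____no_output_____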
###Markdown
break up nested expressions outer parenthesis
first whole parenthesis expression, from start to finish, including nested
###Code
#export
def outer_parenthesis(expr):
"""
identifies the first parenthesis expression
(may have nested parentheses inside it)
    returns the content, or just the string if there are no parentheses
may consider returning NONE then (if no?)
"""
start_pos=expr.find('(')
if start_pos == -1:
between = expr
else:
n=1
pos=start_pos
while n>0:
pos=pos+1
if expr[pos]=='(':
n=n+1
elif expr[pos]==')':
n=n-1
elif pos>len(expr):
print('Error: Not same number of opening and closing parenthesis')
between = expr[start_pos+1: pos]
return between
expr="nothing"
expr="more(than nothing)"
expr="outside((nested) and (nested))"
expr="(x) and (y)"
expr = "(x or q) before (y and z)"
expr = "c1 before (y and z)"
outer_parenthesis(expr)
###Output
_____no_output_____
###Markdown
first inner parenthesis expression
###Code
#export
def first_inner_parenthesis(expr):
"""
iterates until it finds the (first) innermost expression surrounded by
parentheses and returns this expression
"""
new_expr = expr
while '(' in new_expr:
outer= outer_parenthesis(new_expr)
new_expr = outer
#first_inner_parenthesis(new_expr) # hmm check why this is here
return new_expr
expr="nothing"
expr="more(than nothing)"
expr="outside((nested) and (nested))"
expr="(x) and (y)"
expr = "(x or q) before (y and z)"
expr = "c1 before (y and z)"
expr = "(x and (a or b)) before (y and z)"
expr = "(x and (a or (b after c))) before (y and z)"
expr = "x and b and c"
first_inner_parenthesis(expr)
###Output
_____no_output_____
###Markdown
break_up
breaks a nested expression into its component parts. Returns a dictionary with the names and the expressions to be evaluated. The dictionary is ordered: evaluate in order (since the next expression may depend on the calculation of a previous expression).
logic: recursively find the innermost expression, store it, substitute, repeat.
possible todo (well, never! explicit is better than implicit here): implicit breakup based on priority rules (and, or, before etc., like multiplication and addition have priority rules that allow implicit breakup)
###Code
#export
def break_up(expr):
"""
breaks up an expression with nested parenthesis into sub expressions with
no parenthesis, with the innermost expressions first, that needs to be
calculated before doing the outer expression. also replaces the statement with
a single symbol in the larger expression
"""
p=0
new_expr = expr
expr_store={}
while "(" in new_expr:
inner = first_inner_parenthesis(new_expr)
new_expr = new_expr.replace(f'({inner})', f'p{p}')
expr_store[f'p{p}'] = inner
p=p+1
return new_expr, expr_store
expr="nothing"
expr="more(than nothing)"
expr="outside((nested) and (nested))"
expr="(x) and (y)"
expr = "(x or q) before (y and z)"
expr = "c1 before (y and z)"
expr = "(x and (a or b)) before (y and z)"
expr = "(x and (a or (b after c))) before (y and z)"
expr = "x and b and c"
expr="outside and ((nested) and (nested))"
break_up(expr)
###Output
_____no_output_____
###Markdown
eval stuff
After formatting and fixing a query, we end up with a list of expressions to be evaluated (eval_expr).
After breaking up a standard expression, we have a dictionary of sub-expressions (X before Y, min 2 of X and max 3 of Y) (eval_sub_expr).
To evaluate a sub-expression, we break it up into atomic statements (min 2 of Y), evaluate these separately and then apply the transformations that are appropriate for the type of sub-expression it is (and, before/after etc.) (eval_atom).
To evaluate an atom we use eval_row_selection, get_rows and eval_prefix.
Alternative language? Paragraph, sentence, statement, condition, compound, molecule eval_expr eval expr
###Code
#export
def eval_expr(df, expr, cols=None, sep=None, out='series', info=None, fix=True):
    # check if evaluated previously
    if cols:
        key = expr + out + str(cols)
    else:
        key = expr + out
    info, rows = memory(info, 'eval_expr', key)
    if rows is not None:
        return rows
    if fix:
        expr = remove_space(expr)
        expr = insert_external(expr)
        expr = insert_columns(expr=expr, cols=cols, all_cols=list(df.columns))
    #print(expr)
    final_expr, sub_expressions = break_up(expr)
    # evaluate the innermost sub-expressions first and store them as helper columns
    for sub_name, sub_expr in sub_expressions.items():
        df[sub_name] = eval_sub(df=df, expr=sub_expr, cols=cols, sep=sep, info=info)
    result = eval_sub(df=df, expr=final_expr, cols=cols, sep=sep, info=info)
    # return boolean series per person (default), or rows, or pids
    if out == 'series':
        result = result.any(level=0)
    elif out == 'pids':
        result = set(result.index[result])
    info.evaluated['eval_expr'][key] = result
    return result
###Output
_____no_output_____
###Markdown
eval sub (expression)
###Code
#export
def eval_sub(df, expr, cols=None, sep=None, info=None, fix=True):
# check if evaluated previously
if cols:
name = expr + str(cols)
else:
name = expr
info, rows = memory(info, 'eval_sub', name)
    if rows is not None:
return rows
splitwords = 'and or not within after before inside outside'.split()
operators = '= > <'.split()
# a simple existence expression
if not any(word in expr for word in splitwords):
print('simple')
rows = eval_atom(df=df, expr=expr, sep=sep, info=info)
# and/or epression (to do:allow multiple and or without parenthesis?)
# shortcut before splitting in sub_expressions: if only and or and not before etc: then can simplify: just split on and or, substitute, evaluate separate, keep parenthesis and do an eval
elif (' and ' in expr) or (' or ' in expr):
print('and or')
rows = eval_and_or(df=df, expr=expr, sep=sep, info = info)
# a before/after expression
elif any(word in expr for word in [' before ', ' after ', ' simultaneous ']):
print('before after')
rows = eval_before_after(df=df, condition=expr, sep=sep, info=info, fix=fix)
# within, not within expression
elif ' within ' in expr:
rows = eval_within(df=df, expr=expr, sep=sep, info=info)
# an inside expression
elif (' inside ' in expr) or (' outside ' in expr):
rows = eval_inside_outside(df=df, expr=expr, sep=sep, info=info)
# store result for future
info.evaluated['eval_sub'][name] = rows
return rows
###Output
_____no_output_____
###Markdown
(1st g after s) > 20
1st g after s > 20
1st g after s
g after s
eval atom
###Code
#export
# atoms: [prefix] condition[row_selector] [in columns]
def eval_atom(df, expr, cols=None, sep=None, info=None):
# precautionary move!
expr=expr.strip()
# check if evaluated previously
if cols:
name = expr + str(cols)
else:
name = expr
info, rows = memory(info, 'eval_atom', name)
if rows:
return rows
# check if it has a row selector, execute if so
if '[rows:' in expr:
row_selection = eval_row_selection(df=df, expr=expr)
df=df[row_selection]
# delete the row selector after applying it
before, after = expr.split('[rows:')
expr = before + after.split(']', 1)[1]
# starting point
prefix = None
words = expr.split()
operator = any(operator in expr for operator in list('=><'))
function_call = '(' in expr
# is the atom a code based atom? example K50.1
# code based atoms must have in or cols
if (cols) or ('in' in words):
if 'in' in words:
codes, cols = expr.split(' in ')
cols=cols.strip()
else:
codes = expr
if len(words)>3:
prefix, codes = codes.rsplit(' ',1)
# handle multiple cols
# in icd0, icd1, icd3 or in [icd0, icd1, icd3]
if ',' in cols:
if cols.startswith('['):
cols = cols[1:-1].split(',')
cols=[col.strip() for col in cols]
# deal with list of codes [K50, K51]
if ',' in codes:
codes=codes[1:-1].split(',')
codes=[code.strip() for code in codes]
#expand codes and cols?
rows = get_rows(df=df, cols=cols, codes=codes, sep=sep, info=info)
# a simple column based atom: example glucose>8
elif (operator) and (not function_call):
prefix, expr = expr.rsplit(' ',1)
rows=df.eval(expr)
# a function based atom: example glucose.cumsum()>100
elif (operator) and (function_call):
prefix, expr = expr.rsplit(' ',1)
rows = pd.eval(f"df.groupby('pid').{expr}")
#alternative? rows = df.groupby(pid).apply(eval, expr)?
if prefix:
rows = eval_prefix(prefix, rows)
# store results for future
info.evaluated['eval_atom'][name] = rows
return rows
df=make_data(1000,letters=10, numbers=5)
df.head()
#df['prescription_code'] =df.codes.str.split(',', expand=True)[0]
#df['ddd']=np.random.randint(1,99, size=len(df))
#df.to_csv('/content/sample_prescriptions.csv')
#from google.colab import drive
#drive.mount('/content/drive')
eval_atom(df=df, expr='E2 in codes', sep=',')
###Output
_____no_output_____
###Markdown
eval row selection
###Code
#export
def eval_row_selection(df, expr, cols=None, sep=None, info=None):
"""
example K50[rows:age>20]
example min 2 K51[rows: after hip_surgery]
df[surgery_rows]....
min 2 K51 inside after hip surgery
min 2 K51 after hip surgery
after age>20?
"""
# check if evaluated previously
info, rows = memory(info, 'eval_row_selection', expr)
if rows:
return rows
row_query = expr.split('[rows:')[1].split(']')[0]
statement = any(operator in expr for operator in list('=><'))
temporal = (' days ' in expr)
positional = (' events ' in expr) or (' event ' in expr)
#positional = (expr.startswith('inside ')) or (expr.startswith('outside '))
# statement:'age>20'
if statement:
rows=df.eval(row_query)
# positional: 'inside/outside 5 events before/after/around X'
elif positional:
rows = create_time_intervals(df=df, expr=expr, cols=cols, sep=sep, info=info)
# temporal: 'after 1st S4'
else:
rows = create_time_intervals(df=df, expr=expr, cols=cols, sep=sep, info=info)
info.evaluated['eval_row_selection'][expr] = rows
return rows
###Output
_____no_output_____
###Markdown
eval prefix
###Code
#export
def eval_prefix(prefix, rows):
interval=True
freq = ['min ', 'max ', 'exactly ']
first_last = [' first ', ' last']
ordinal = r'(-?\d+)(st|nd|rd|th)' # re to find and split 3rd into 3 and rd etc
rowscum = rows.groupby(level=0).cumsum()
# freq condition: min 5 of 4A
if any(word in prefix for word in freq):
word, num = prefix.split()
num = int(num)
if 'min' in word:
select = (rowscum >= num)
elif 'max' in word: # double check!
n_max = rowscum.max(level=0)
select = (n_max <= num)
elif 'exactly' in word:
select = (rowscum == num)
# beteween frequency or between ordinals (1st and 3rd)
# note: inclusive between
elif 'between ' in prefix:
word, lower,_, upper = prefix.split()
# interval/positional between 4th and 8th
if re.search(ordinal, prefix):
lower=int(lower[:-2])
upper=int(upper[:-2])
# reversed cumulative count per person, needed for negative positions
n_max = rowscum.groupby(level=0).max().add(1)
lastrowscum = (rowscum - n_max).abs()
if lower > 0:
aboverows = (rowscum >= lower)
else:
aboverows = (lastrowscum >= abs(lower))
if upper > 0:
belowrows = (rowscum <= upper)
else:
belowrows = (lastrowscum <= abs(upper))
select = (aboverows & belowrows)
# frequency between: between 4 and 8 of 4AB02
else:
lower=int(lower)
upper=int(upper)
select = rowscum.between(lower, upper, inclusive=True)
# first, last range conditions: first 5 of 4A
elif any(word.strip() in prefix for word in first_last): # regex is better
word, num = prefix.split()
if '%' not in num:
num = int(num)
if 'first' in word:
select = (rowscum <= num)
if 'last' in word:
select = (rowscum >= num)
# pct condition: first 10% of 4A
elif '%' in prefix:
n_max = rowscum.groupby(level=0).max()
pct = float(num.split(r'%')[0]) / 100
pid_num = n_max * pct
# first 1% of two observations includes 1st obs
pid_num[pid_num < 1] = 1
if word == 'first':
# hmm, generalproblem: drop if pid is missing ...
select = (rowscum < pid_num)
if word == 'last':
select = (rowscum > pid_num)
# percentile condition
elif ' percentile ' in prefix:
event_num = rows.groupby(level=0).cumcount()
n_count = rowscum.groupby(level=0).size()
num = float(num.split(r'%')[0]) / 100
pid_num = n_count * num
if word == 'first':
rows = (pid_num < event_num)
if word == 'last':
rows = (pid_num > event_num)
# positional condition: 5th of 4a, 3rd to 8th of 4A, (3rd, 4th, 5th) of 4A
# also allows: 2nd last (or even -5th last)
elif re.match(ordinal, prefix):
pos_str = prefix.rsplit(' ',1)[0].strip('(').strip(')')
pos_nums = re.findall(ordinal, pos_str)
pos_nums = tuple([int(pos[0]) for pos in pos_nums])
# if the conditions includes last, need reversed cumsum
# example 2nd last
if ' last ' in pos_str or '-' in pos_str:
n_max = rowscum.groupby(level=0).max().add(1)
# reversed event number (by id)
lastrowscum = (rowscum - n_max).abs()
last_flag = 1
else:
last_flag = 0
# single position: 5th of 4A
if len(pos_nums) == 1:
interval = False
print('single position')
if last_flag:
select = (lastrowscum == pos_nums[0])
else:
select = (rowscum == pos_nums[0])
# from-to positions: 3rd to 8th of 4A, 1st to -3rd
elif ' to ' in pos_str:
lower, upper = pos_nums
if lower > 0:
aboverows = (rowscum >= lower)
else:
aboverows = (lastrowscum >= abs(lower))
if upper > 0:
belowrows = (rowscum <= upper)
else:
belowrows = (lastrowscum <= abs(upper))
select = (aboverows & belowrows)
# list of positions (3rd, 5th, 7th)
elif prefix.startswith('('):
pos_str = prefix.rsplit(' ',1)[0].strip().strip('(').strip(')')
pos_re = ordinal.replace(' ', '') # last condition may have ) i.e. 25th)
pos_nums = re.findall(pos_re, pos_str)
pos_nums = tuple([int(pos[0]) for pos in pos_nums])
pos_num = [num for num in pos_nums if num > 0]
neg_num = [num for num in pos_nums if num < 0]
select = rowscum.isin(pos_num)
if neg_num:
select = select | lastrowscum.isin([abs(num) for num in neg_num])
if interval==True:
return select
else:
return rows & select
# so far, have marked interval of events for expressions with qualifications
# (existence conditions are not intervals). example: First 5 of 4A, markes
# all events in the interval between the 1st and 5th of 4A
# if we only want to pick the 4A events in this intereval, we and it with
# the boolena for 4A existence (row). But sometimes we want to keep and use
# the interval. For instance when the qualifiers are used in before/after
# statements if the evaluated expression should be returned as 'exact row',
# 'interval row' or pid existence
###Output
_____no_output_____
###Markdown
testing prefix
###Code
index = np.random.randint(100, size=1000)
code = np.random.binomial(1, p=0.4, size=1000)
rows=pd.Series(code, index=index).sort_index()
rows
df=rows.to_frame()
df['evaluated_prefix'] =eval_prefix('2nd A', rows=rows)
df.head(30)
###Output
_____no_output_____
###Markdown
eval and or not
###Code
#export
def eval_and_or(df, expr, sep=None, info=None):
split_words = [' and ', ' or ']
for word in split_words:
expr = expr.replace(word, f'@split@{word}')
conditions = expr.split('@split@')
final_expr = ''
for n, condition in enumerate(conditions):
atom = condition.replace(' and ', '').replace(' or ', '').strip()
name = 'c' + str(n)
df[name] = eval_atom(df=df, expr=atom, sep=sep, info=info)
final_expr = final_expr + condition.replace(atom, name)
rows = df.eval(final_expr)
return rows
###Output
_____no_output_____
###Markdown
eval before after
###Code
#export
def eval_before_after(df, condition, cols=None, sep=None,
info=None, out='pid', fix=True):
# check if evaluated previously
if cols:
name = condition + out + str(cols)
else:
name = condition + out
info, rows = memory(info, 'eval_before_after', name)
if rows:
return rows
# replace conditions so intervals/multiples becomes positional
# example: before first 5 of 4A --> before 5th of 4A
# background: first 5 is not satisfied until ALL five have passed, while some other conditions are
# may introduce shortcuts for some easy/common evaluations later (like 4A before 4B, easier than 4st of 4A before 1st of 4B?)
# before and after are also different, may exploit this to create shortcuts
condition = re.sub(r'last (-?\d+)', r'-\1st', condition)
condition = re.sub(r'first (-?\d+)', r'\1st', condition)
# todo: also fix percentile --> find position, not first 5 percent
before_expr, after_expr = re.split(' before | after | simultaneous | simultaneously ', condition)
print(f'{before_expr},{after_expr}')
# shortcut if ' simultaneous ' in condition: ...just a row and
# check if the left side of the before expression has been calculated
#if before in info.before_has_happened:
# before_has_happened = info.before_has_happened[before]
#else:
before_has_happened = eval_atom(df=df, expr=before_expr, cols=cols, sep=sep, info=info).groupby(level=0).cumsum().astype(bool).astype(int)
# info.before_has_happened[before] = before_has_happened
# check if the right side of the before expression have been calculated
#if after in info.after_has_happened:
# after_has_happened = info.after_has_happened[after]
#else:
after_has_happened = eval_atom(df=df, expr=after_expr, cols=cols, sep=sep, info=info).groupby(level=0).cumsum().astype(bool).astype(int)
# info.after_has_happened[after] = after_has_happened
both_exist = (before_has_happened.any(level=0)) & (after_has_happened.any(level=0))
if ' before ' in condition:
is_it_before = (before_has_happened - after_has_happened).groupby(level=0).sum()
endrows = both_exist & (is_it_before > 0)
elif ' after ' in condition:
is_it_after = (after_has_happened - before_has_happened).groupby(level=0).sum()
endrows = both_exist & (is_it_after > 0)
elif ' simultaneous ' in condition:
difference = (before_has_happened - after_has_happened).groupby(level=0).sum()
endrows = both_exist & (difference ==0)
#info.evaluated['eval_before_after'][name] = endrows
return endrows
###Output
_____no_output_____
###Markdown
eval within
Examples:
expr= '4AB02 within 100 days after 4AB04'
expr= 'min 2 of 4AB02 within 100 days'
expr= '4AB02 within 50 to 100 days before 4AB04'
expr= '4AB02 within 50 to 100 days before 4AB04' maybe use inside on some?
expr= 'min 4 of 4AB02 in ncmp within 100 days'
expr= 'min 2 of 4AB02 within last 100 days'
expr= 'min 2 of 4AB02 within 100 days from end'
expr= 'min 2 of 4AB02 within first 100 days'
expr= 'between 2 and 5 of 4AB02 within first 100 days' avoid and? well, just use format and replace it with to?
expr= 'min 2 of 4AB02 within 100 days from beginning'
expr= 'min 2 of 4AB02 within 1st of 4AB04 to 5th of 4AB04'
expr= 'min 2 of 4AB02 within 1st of 4AB06 to 3rd of 4AB04'
expr= 'min 2 of 4AB02 within first 20 of 4AB04'
expr= '3rd of 4AB02 within first 20 of 4AB04'
expr= 'min 2 of 4AB02 within 100 days from 5th of 4AB04'
expr = '3 or more of 4ab within 100 days' wstart, wend
expr= 'min 4 of 4AB02 in ncmp within 100 days'
expr= "min 4 of ncmp=='4AB02' within 100 days"
expr= "at least 4 of ncmp=='4AB02' within 100 days"
expr= "more than 4 of ncmp=='4AB02' within 100 days" best language
expr= "less than 4 of ncmp=='4AB02' within 100 days" best language
expr= "between 4 and 7 of ncmp=='4AB02' within 100 days" best language inclusive or exclusive between
expr= "5 or more of ncmp=='4AB02' within 100 days" best language
expr= "from 4 to 7 of ncmp=='4AB02' within 100 days" best language
expr= " 4 to 7 events with 4AB02 within 100 days" best language events problem ... again format?
expr= " from 4 to 7 events with 4AB02 within 100 days" best language events problem ... again format?
expr= " at least 5 events with 4AB02 within 100 days" best language events problem ... again format?
expr= " no more than 5 events with 4AB02 in ncmp within 100 days" best language events problem ... again format?
expr= 'min 3 of days>3 within 100 days'
expr= 'ddd[pharma=='pharmaz'].sum()>97 within 100 days'
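A minimal usage sketch (assuming the synthetic `make_data` frame used in the tests below; the codes, the column name and the 100 day window are arbitrary):

```python
df = make_data(1000, letters=10, numbers=5)
df = df.sort_values(['pid', 'event_date'])

# persons with at least two G2 events within 100 days of each other
close_pairs = eval_within(df=df, condition='min 2 G2 within 100 days', cols='codes', sep=',')

# persons with a G4 event between 0 and 100 days after a G2 event
g4_after_g2 = eval_within(df=df, condition='G4 in codes within 100 days after G2 in codes', sep=',')
```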
###Code
#export
def eval_within(df, condition, cols=None, sep=None, date='event_date', info=None, out='pid', pid='pid'):
"""
mark observations that satisfy the conditions in a within statement
examples:
- expr = 'min 3 4AB within 100 days'
- expr = '4AB02 in ncmp within 50 to 100 days before 4AB04 in ncmp'
"""
# within 100 days after 4AB --> within 0 to 100 days after 4AB
if ((' after ' in condition) or (' before ' in condition)) and not (' to ' in condition):
condition=condition.replace(' within ', ' within 0 to ')
left, right = condition.split(' within ')
print(f'within {left}, {right}')
# within x to y days (better: between x to y days)
# expr='4AB02 in ncmp within 50 to 100 days before 4AB04 in ncmp'
if re.match(r'\d+ to \d+ ', right):
print(f'within x to y')
lower, _, upper, unit, direction, *rightsingle = right.split()
if direction == 'around':
condition = condition.replace(' around ', ' after ')
after_prows = eval_within(df=df, condition=condition, cols=cols, sep=sep, info=info)
condition = condition.replace(' after ', ' before ')
before_prows = eval_within(df=df, condition=condition, cols=cols, sep=sep, info=info)
endprows = (after_prows) | (before_prows)
return endprows
rightsingle = " ".join(rightsingle)
lower = int(lower)
upper = int(upper)
lrows = eval_atom(df=df, expr=left, cols=cols, sep=sep, info=info)
rrows = eval_atom(df=df, expr=rightsingle, cols=cols, sep=sep, info=info)
pid_change = ((df[pid] - df[pid].shift()) != 0)
rdates = df[date].where(rrows == 1, np.datetime64('NaT'))
# if not have a date assign one to avoid ffill from person above
# risky?
rdates[(pid_change & ~rrows)] = np.datetime64('2100-09-09')
if direction == 'after':
rdates = rdates.fillna(method='ffill') # hmmm must use groupby here? or can it be avoided? inseret a 999 when pid change and fill it with nan after ffill?
elif direction == 'before':
rdates = rdates.fillna(method='bfill')
rdates = rdates.where(rdates != np.datetime64('2100-09-09'), np.datetime64('NaT'))
# allow other time units, within 5 seconds etc
if unit == 'days':
delta = (df[date] - rdates) / np.timedelta64(1, 'D')
else:
# add s if it is not there? like 1 day, 1 second?
delta = (df[date] - rdates)
delta = getattr(delta.dt, unit)
if direction == 'before':
delta = delta.abs()
within = (delta >= lower) & (delta <= upper)
endrows = (lrows & within) # nb, frequency conditions not work here I think: min 3 x within 10 to 100 days before S
cpid = endrows.any(level=0)
# pure within statements have few elements to the right
# example min 2 4AB within 100 days
elif len(right.split()) < 3:
print(f'within x days')
if ' in ' in left:
word, num, codes, _, cols = left.split()
rows = get_rows(df=df, codes=codes, cols=cols, sep=sep, info=info)
# 'sum(days)>15 within 100 days' or 'min 5 of ddd>200 within 100 days'
# expr='sum(days)>15 within 100 days'
elif re.search('[>=<]', left):
if 'sum(' in left:
# may want to create smaller dataframe first, if possible? focus on only some variable, columns, rows?
sub = df.set_index(date) # assume column date exist, should also drop rows with no time
col, operator = left.split(')')
col = col.replace('sum(', '').strip(')')
threshold, unit = right.split()
if unit == 'days': unit = 'D'
eval_text = f"(sub.groupby('pid')['{col}'].rolling('{threshold}{unit}').sum()){operator}"
rows = pd.eval(eval_text, engine='python')
cpid = rows.any(level=0)
return cpid
# 'min 5 ddd>200 within 100 days'
else:
word, num, codes = left.split()
rows = df.eval(codes) # so far no sumsum etc, only 4 events with sugar_level>10 within 100 days
# code expression not quantity expression
# example: min 3 G2 within 100 days
else:
word, num, codes = left.split()
cols = cols
rows = get_rows(df=df, codes=codes, cols=cols, sep=sep, info=info)
threshold, unit = right.split()
threshold = int(threshold)
num = int(num)
if word == 'max': num = num + 1
# may need to use expand cols to get the cols (not use cols expression here if it starred)
sub = df[date][rows].dropna().to_frame()
sub.columns = ['date']
sub['pid'] = sub.index
sub['shifted_date'] = sub['date'].shift(-num)
sub['shifted_pid'] = sub['pid'].shift(-num)
sub['diff_pid'] = (sub['pid'] - sub['shifted_pid'])
sub['same_pid'] = np.where(sub.diff_pid == 0, 1, 0)
sub = sub[sub.same_pid == 1]
# sub['shifted_date'] = sub['date'].groupby(level=0).shift(int(num))
# sub['shifted_pid'] = sub['pid'].groupby(level=0).shift(int(num))
# todo: allow for different units here, months, weeks, seconds etc
sub['diff_days'] = (sub['shifted_date'] - sub['date']) / np.timedelta64(1, 'D')
# sub[sub.same_pid == 1]['diff_days'].dropna()/np.datetime64(1, 'D')
if word == 'min':
endrows = (sub['diff_days'] <= threshold)
cpid = endrows.any(level=0)
elif word == 'max':
# n = df.index.nunique()
endrows = (sub['diff_days'] <= threshold)
cpid = ~endrows.any(level=0)
# revise max and exactly
elif word == 'exactly':
endrows = (sub['diff_days'] <= threshold)
n_max = endrows.groupby(level=0).sum()
endrows = n_max == threshold
cpid = endrows.any(level=0)
# #todo (also need to change parsing then ...)
# elif word=='between':
# endrows=(sub['diff_days']<=threshold)
# n_max = endrows.groupby(level=0).sum()
# endrows = n_max == threshold
# cpid = endrows.any(level=0)
return cpid
###Output
_____no_output_____
###Markdown
eval inside outside
###Code
#export
def eval_inside_outside(df, expr, cols=None, sep=None, pid='pid', info=None):
"""
mark observations that satisfy the conditions in an inside/outside statement
'inside/within/outside 5 events days before/after/around x'
examples:
- expr = 'X inside 4 events after Y'
- expr = 'X inside 4 events after each Y'
- expr = 'always X inside 4 events after each Y'
- expr = 'always X inside 4 events after a Y'
- expr = 'X inside 3 events around Y'
- expr = 'X outside 5 events after Y'
- expr = 'no X before last 5 events' - hmm this is before after, not inside?
- expr = 'no X inside 5 events before Y'
- expr = 'min 3 4AB inside last 5 events' - special
- expr = 'X inside 1st and 5th Y' (between is better?) -special
- expr = 'X inside 2 events before and 8 events after Y' - special
- expr = 'min 2 X inside 4 events after Y'
- expr = 'X inside 4 events after min 3 Y'
- expr = 'X inside 4 to 7 events after min 3 Y'
"""
# some horrible parsing
if ' not ' in expr:
pre, negate, post = expr.partition(' not ')
post='not ' + post
else:
pre, post = re.split(' inside | outside ', expr)
if ' inside ' in expr:
post = 'inside ' + post
else:
post = 'outside ' + post
# mark relevant rows
inside_rows = create_position_interval(df=df, expr=post, sep=sep, info=info)
first_atom = eval_atom(df=df, expr=pre, sep=sep, info=info)
endrows = inside_rows & first_atom
# todo: warning, frequency conditions not work (at least not as expected)
# todo: make a frequency version work? different keywords/sytax? groupby?
return endrows
#expr='G4 in codes within 1 to 800 days around 3rd G2 in codes'
#a=eval_within(df=df, condition=expr, cols='codes', sep=',')
#df['a'] = a
#a.sum()
df.head()
###Output
_____no_output_____
###Markdown
row selectors
Examples
- after/before s
- after 3rd 3s
- between 5th and 6th s
- within last 5 events
- within 100 days after s
- within 100 days after 2nd s
- within 50 days around 3rd s
- after min 3 s
- after glucose
- (pharma x after s) and (pharma y before pharma z)
- g after 1st s >20
- x before y after 2nd s
- x before (y after 2nd s)
- (x before y) after 2nd s
- after 2nd s: x before y
- x before (y before z)
- x before (y and z) and (y before z)
- (x before y) and (y before z)
- (x before y) after z
- x before y before z before q

before after statements have to be solved from right to left?
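A small sketch of turning a row selector into a boolean mask over events (it uses the synthetic data and the `create_time_intervals` helper defined further down; the codes mirror the test calls below):

```python
df = make_data(1000, letters=10, numbers=5)
df = df.sort_values(['pid', 'event_date'])
df['date'] = df.event_date          # the interval helpers below default to a 'date' column

# mark every event that falls within 100 days before a D3 event (per person)
mask = create_time_intervals(df=df, expr='100 days before D3 in codes', sep=',')

# such a mask can then restrict another condition,
# e.g. 'G2 in codes[rows: 100 days before D3 in codes]'
df[mask].head()
```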
###Code
#def select_before_after(df, expr, sep=None, info=None):
# word, atom = expr.split(' '.1)
# rows = eval_atom(df=df, expr=expr. sep=sep, info=info)
#export
def eval_before_after(df, condition, cols=None, sep=None,
info=None, out='pid', fix=True):
# check if evaluated previously
if cols:
name = condition + out + str(cols)
else:
name = condition + out
info, rows = memory(info, 'eval_before_after', name)
if rows:
return rows
# replace conditions so intervals/multiples becomes positional
# example: before first 5 of 4A --> before 5th of 4A
# background: first 5 is not satisfied until ALL five have passed, while some other conditions are
# may introduce shortcuts for some easy/common evaluations later (like 4A before 4B, easier than 4st of 4A before 1st of 4B?)
# before and after are also different, may exploit this to create shortcuts
condition = re.sub(r'last (-?\d+)', r'-\1st', condition)
condition = re.sub(r'first (-?\d+)', r'\1st', condition)
# todo: also fix percentile --> find position, not first 5 percent
before_expr, after_expr = re.split(' before | after | simultaneous | simultaneously ', condition)
print(f'{before_expr},{after_expr}')
# shortcut if ' simultaneous ' in condition: ...just a row and
# check if the left side of the before expression has been calculated
#if before in info.before_has_happened:
# before_has_happened = info.before_has_happened[before]
#else:
before_has_happened = eval_atom(df=df, expr=before_expr, cols=cols, sep=sep, info=info).groupby(level=0).cumsum().astype(bool).astype(int)
# info.before_has_happened[before] = before_has_happened
# check if the right side of the before expression have been calculated
#if after in info.after_has_happened:
# after_has_happened = info.after_has_happened[after]
#else:
after_has_happened = eval_atom(df=df, expr=after_expr, cols=cols, sep=sep, info=info).groupby(level=0).cumsum().astype(bool).astype(int)
# info.after_has_happened[after] = after_has_happened
both_exist = (before_has_happened.any(level=0)) & (after_has_happened.any(level=0))
if ' before ' in condition:
is_it_before = (before_has_happened - after_has_happened).groupby(level=0).sum()
endrows = both_exist & (is_it_before > 0)
elif ' after ' in condition:
is_it_after = (after_has_happened - before_has_happened).groupby(level=0).sum()
endrows = both_exist & (is_it_after > 0)
elif ' simultaneous ' in condition:
difference = (before_has_happened - after_has_happened).groupby(level=0).sum()
endrows = both_exist & (difference ==0)
info.evaluated['eval_before_after'][name] = endrows
return endrows
###Output
_____no_output_____
###Markdown
eval selector prefix
###Code
#export
def eval_selector_prefix(df, prefix, sep=None, cols=None, date='date', info=None):
prefix=prefix.strip()
words = prefix.split()
# before last 5 events
if ' event' in prefix:
prefix = prefix.replace('events', 'event')
#ad hoc, works with little code, but slow, optimize later
df['event'] = 'e'
prefix = prefix.replace('event', 'e in event')
# just before
if prefix=='before':
print(prefix)
rows=~(df['rowscum']>0)
# just after
elif prefix=='after':
print(prefix)
rows=df['rowscum']>0
# 100 days before
elif words[1]=='days':
days, _, direction = prefix.split()
rows=mark_days(df=df, max_days=days, direction=direction, date=date, info=info)
# 3 events after
elif (words[1]=='events') or (words[1]=='event'):
n_events, _, direction = words
rows=mark_events(rows=df['atom_rows'], max_events=n_events, direction=direction, info=info)
# 50 to 100 days after
elif ('to' in words) and ('days' in words):
min_days, _, max_days, unit, direction = prefix.split()
rows = mark_days(df=df, min_days=min_days, max_days=max_days, direction=direction)
# 2 to 5 events after
elif ('to' in words) and (('events' in words) or ('event' in words)):
min_events, _, max_events, unit, direction = words
rows = mark_events(rows=df['atom_rows'], min_events=min_events, max_events=max_events, direction=direction)
return rows
###Output
_____no_output_____
###Markdown
create time interval
###Code
#export
def create_time_intervals(df, expr, cols=None, sep=None, date='date', info=None):
"""
expr='before 4AB04 in ncmp'
expr='100 days before 4AB04 in ncmp'
expr='50 to 100 days after 4AB04 in ncmp'
expr='5 to 10 events after 4AB04 in ncmp'
expr='50 days around 4AB04 in ncmp'
expr='5 events around 4AB04 in ncmp' #next x events, inside 1 event after
expr='before 3rd 4AB04 in ncmp'
expr='100 days before last event'
expr='between 3rd s in cod and 7th b in cod'
expr='before age>20'
expr='a pd statement' age >20
expr='100 days before last event'
expr='inside last 5 events'
create_time_intervals(df=df, expr=expr)
"""
original = expr
words = expr.split()
expr = re.sub(r'last (-?\d+)', r'-\1st', expr)
expr = re.sub(r'first (-?\d+)', r'\1st', expr)
# todo: also fix percentile --> find position, not first 5 percent
if any(word in words for word in 'before after around'.split()):
splitted = re.split(r'\bbefore\b|\bafter\b|\bsimultaneously\b|\baround\b', expr)
atom = splitted[-1]
prefix=''
for word in words:
prefix = prefix + ' ' + word
if word in 'before after sametime around'.split():
break
print(atom, prefix)
atom_rows = eval_atom(df=df, expr=atom, cols=cols, sep=sep, info=info)
rowscum = atom_rows.groupby(level=0).cumsum()
df['atom_rows'] = atom_rows
df['rowscum'] = rowscum
rows = eval_selector_prefix(df=df, prefix=prefix, sep=sep, cols=cols, date=date, info=info)
return rows
###Output
_____no_output_____
###Markdown
mark days
###Code
#export
# mark days (ex 'within 22 days before/after x')
def mark_days(df, direction, max_days, min_days=0,inside=True, date='date', info=None):
"""
mark days (ex 'inside/within/outside 22 days before/after/around x')
"""
df['reference_event'] = np.where(df['atom_rows'] == 1, df[date], np.datetime64('NaT'))
min_days=float(min_days)
max_days=float(max_days)
if direction=='around':
rows_before = mark_days(df=df, max_days=max_days, direction='before', date=date, inside=True)
rows_after = mark_days(df=df, max_days=max_days, direction='after', date=date, inside=True)
rows = rows_before | rows_after
if not inside: rows = ~rows
return rows
elif direction=='before':
df['reference_event'] = df['reference_event'].groupby(level=0).fillna(method='bfill')
elif direction =='after':
df['reference_event'] = df['reference_event'].groupby(level=0).fillna(method='ffill')
event_diff = (df[date] - df['reference_event']).dt.days.abs()
# inside 50 to 100 days before x
if inside:
rows = event_diff.between(min_days, max_days)
# outside 50 to 100 days before x
else:
rows = ~(event_diff.between(min_days, max_days))
return rows
###Output
_____no_output_____
###Markdown
create position interval
###Code
#export
# mark rows (ex 'within 5 events before/after x')
def create_position_interval(df, expr, cols=None, sep=None, info=None):
"""
mark events (ex '(not,always, never) inside/within/outside 5 events before/after/around x')
"""
# work on a copy and add a running event number per person
df = df.copy()
df['hard_way'] = 1
df['event_num'] = df.groupby(level=0)['hard_way'].cumsum()
pre, post = re.split(' inside | outside ', expr)
# to do: handle pre if it exists ... maybe only handle not (never and always is higher level)
last_atom = re.split(' before | after | around', post)[-1]
inside_statement = expr.replace(pre,'').replace(last_atom,'')
last_rows = eval_atom(df=df, expr=last_atom, cols=cols, sep=sep, info=info)
inside_statement = post.replace(last_atom,'')
# inside 2 to 5 events before x
# to do: validate since this should not work if the direction is around
if ' to ' in inside_statement:
in_or_out, min_events, _, max_events, unit, direction = inside_statement.split()
# inside 5 events before x
else:
in_or_out, max_events, unit, direction = inside_statement.split()
min_events = 0
if in_or_out in ('inside', 'between', 'not outside'):
inside = True
else:
inside = False
min_events=int(min_events)
max_events=int(max_events)
rows = mark_events(rows=last_rows,
direction=direction,
max_events=max_events,
min_events=min_events,
inside=inside,
info=info)
return rows
###Output
_____no_output_____
###Markdown
mark events
###Code
#export
# mark rows (ex 'within 5 events before/after')
def mark_events(rows, direction, max_events, min_events=0, inside=True, info=None):
"""
mark events (ex '(not,always, never) inside/within/outside 5 events before/after/around x')
"""
df = rows.to_frame()
df.columns = ['atom_rows']
df['hard_way'] = 1
df['event_num'] = df.groupby(level=0)['hard_way'].cumsum()
# event number of the reference events, missing for all other rows
df['reference_event'] = np.where(df['atom_rows'] == 1, df['event_num'], np.nan)
min_events = int(min_events)
max_events = int(max_events)
if direction == 'around':
rows_before = mark_events(rows=rows, max_events=max_events, min_events=min_events, direction='before')
rows_after = mark_events(rows=rows, max_events=max_events, min_events=min_events, direction='after')
rows = rows_before | rows_after
if not inside: rows = ~rows
return rows
elif direction == 'before':
df['reference_event'] = df['reference_event'].groupby(level=0).fillna(method='bfill')
elif direction == 'after':
df['reference_event'] = df['reference_event'].groupby(level=0).fillna(method='ffill')
# distance (in number of events) to the nearest reference event
event_diff = (df['event_num'] - df['reference_event']).abs()
# inside 50 to 100 events before x
if inside:
rows = event_diff.between(min_events, max_events)
# outside 50 to 100 events before x
else:
rows = ~(event_diff.between(min_events, max_events))
return rows
df=make_data(1000,letters=10, numbers=5)
df.head()
df['date'] = df.event_date
df = df.sort_values(['pid', 'date'])
df.head()
row = eval_atom(df=df, expr='B2 in codes', sep=',')
row.head()
eval_row_selection(df=df, expr='2 events before I2 in codes', sep=',')
expr='before 4AB04 in ncmp'
expr='100 days before 4AB04 in ncmp'
expr='50 to 100 days after 4AB04 in ncmp'
expr='before 3rd 4AB04 in ncmp'
expr='100 days before last event'
expr='after 1st D3 in codes'
expr='100 days before D3 in codes'
expr='500 days before H3 in codes'
expr='900 days around H3 in codes'
expr='100 to 600 days after H3 in codes' # interesting problem. is something is 150 days from a h3, but also 50 days from another h3?
df['around']=create_time_intervals(df=df, expr=expr, sep=',')
df.head(50)
df
df['atom_rows']=eval_atom(df=df, sep=',', expr='H4 in codes')
df['atom_rows']
df.groupby(level=0).atom_rows.cumsum()<1
create_time_intervals(df=df, expr=expr, sep=',')
expr='before D3 in codes'
x = re.split(r'\bbefore\b|\bafter\b|\bsimultaneously\b|\baround\b', expr)
x
df.head()
df=df.sort_values(['pid', 'event_date'])
df['date'] = df.event_date
df.head()
count_persons(df=df, expr='min ?[10, 20] G2 in codes', sep=',')
df.groupby('pid').size().value_counts().sort_index()
#export
def before_after(before, after, condition, info=None):
before_has_happened = before.groupby(level=0).cumsum().astype(bool).astype(int)
after_has_happened = after.groupby(level=0).cumsum().astype(bool).astype(int)
both_exist = (before_has_happened.any(level=0)) & (after_has_happened.any(level=0))
if condition=='before':
is_it_before = (before_has_happened - after_has_happened).groupby(level=0).sum()
endrows = both_exist & (is_it_before > 0)
elif condition=='after':
is_it_after = (after_has_happened - before_has_happened).groupby(level=0).sum()
endrows = both_exist & (is_it_after > 0)
elif condition == 'same time':
difference = (before_has_happened - after_has_happened).groupby(level=0).sum()
endrows = both_exist & (difference ==0)
#expand to fit df length?
endrows =endrows.reindex(index=before.index)
#expand_endrows['result'] = endrows
#endrows = expand_endrows['result']
return endrows
index = np.random.randint(100, size=1000)
code1 = np.random.binomial(1, p=0.8, size=1000)
code2 = np.random.binomial(1, p=0.1, size=1000)
df=pd.DataFrame(code1, index=index).sort_index()
df['col2'] = code2
df.columns =['col1', 'col2']
df.head()
col1=df.col1
col2=df.col2
condition='before'
df['before'] = before_after(col1, col2, condition)
condition='after'
df['after'] = before_after(col1, col2, condition)
df.head(50)
df=make_data(1000,letters=10, numbers=5)
df.head()
###Output
_____no_output_____
###Markdown
count persons
###Code
#export
def count_persons(df, expr, cols=None, sep=None, codebook=None, info=None, use_caching=True, insert_cols=True, fix=True):
"""
count persons who satisfy the conditions in an expression
examples
expr = 'K50* in icd'
expr = 'K50* before K51*'
expr = '(K50 or K51) and K52'
expr = 'min 3 of glucose>8 within 100 days'
expr = '3rd of 4AB04 in ncmp before 3th of 4AB02 in ncmp'
"""
expr = remove_space(expr)
expr = insert_external(expr)
exprs = get_expressions(expr)
count = {}
for expr in exprs:
rows = eval_expr(df=df, expr=expr, cols=cols, sep=sep, info=info)
count[expr] = rows.any(level=0).sum()
# return only number if only one expression
if len(count) == 1:
return count[expr]
return count
count_persons(df=df, expr='G3 in codes before I2 in codes', sep=',')
count_persons(df=df, expr='first 5 G3 before 3th G2', cols='codes', sep=',')
count_persons(df=df, expr='min 5 G3 before 3th G2', cols='codes', sep=',')
eval_atom(df=df, expr='1st G2 in codes', sep=',')
insert_columns('G2', cols='codes')
df.head()
count_persons(df=df, cols='codes', expr='B2 before H2', sep=',')
df.codes.str.contains('G2', na=False).any(level=0)
get_rows(df=df, codes='G2', cols='codes').any(level=0).sum()
expr = "?['4AB02', '4AB04'] in ncmp"
expr = '4AB02 in ncmp and 4AB04 in ncmp'
expr = 'min 10 of 4AB02 in ncmp'
expr = 'min ?[4,5,6] of 4AB02 in ncmp'
expr = 'min 6 of 4AB02 in ncmp'
expr = 'min 10 of 4AB02 in ncmp'
expr = 'min ?[6,8] of 4AB02 in ncmp'
expr = '1st of 4AB02 in ncmp'
expr = '2nd of 4AB02 in ncmp'
expr = '4AB02 in ncmp before 4AB04 in ncmp'
expr = '4AB04 in ncmp before 4AB02 in ncmp'
expr = '4AA23 in ncmp before 4AB02 in ncmp'
expr = 'max 2 of 4AB02 in ncmp before 4AB04 in ncmp'
expr = 'max 2 of 4AB02 in ncmp' # should include zero ?
expr = 'min ?[1,2,3,4] of 4AB02 in ncmp'
expr = 'max ?[1,2,3,4] of 4AB02 in ncmp' # should include zero ?
expr = 'min 2 of days>4'
expr = 'min 8 of days>6'
expr = 'min 3 of 4AB02 in ncmp within 200 days'
%time count_p(df=df, expr=expr, cols=None, codebook=None, info=None, sep=',')
%time count_p(df=df, expr=expr, cols=None, codebook=None, info=info)
def count_persons(df, codes=None, cols=None, pid='pid', sep=None,
normalize=False, dropna=True, group=False, merge=False,
length=None, groupby=None, codebook=None, fix=True):
"""
Counts number of individuals who are registered with given codes
Allows counting across multiple columns and multiple codes in the same
cells. For instance, there may be 10 diagnostic codes for one event (in
separate columns) and in some of the columns there may be more than one
diagnostic code (comma separated) and patient may have several such events
in the dataframe.
args:
codes (str, list or dict): Codes to be counted. Star and hyphen
notations are allowed. A dict can be used as input to merge codes
into larger categories before counting. The key is the name of
the category ('diabetes') and the value is a list of codes.
Examples:
codes="4ABA2"
codes="4AB*"
codes=['4AB2A', '4AB4A']
codes = {'diabetes' = ['34r32f', '3a*']}
cols (str or list): The column(s) with the codes. Star and colon
notation allowed.
Examples:
cols = 'icdmain'
cols = ['icdmain', 'icdside']
# all columns starting with 'icd'
cols = ['icd*'] # all columns starting with 'icd'
# all columns including and between icd1 and icd10
cols = ['icd1:icd10']
pid (str): Column name of the personal identifier
sep (str): The code seperator (if multiple codes in the same cells)
normalize (bool, default: False): If True, converts to pct
dropna (bool, default True): Include counts of how many did not get
any of the specified codes
length (int): If specified, will only use the number of characters
from each code as specified by the length parameter (useful to
count codes at different levels of granularity. For example,
sometimes one wants to look at how many people get detailed codes,
other times the researcher wants to know only how many get general
atc codes, say the first four characters of the atc)
Examples
>>> df.atc.count_persons(codes='4AB04')
>>> df.atc.count_persons(codes='4AB04', dropna=False, normalize=True)
>>> df.atc.count_persons(codes=['4AB*', '4AC*'])
>>> df.atc.count_persons(codes=['4AB*', '4AC*'], group=True)
>>> df.atc.count_persons(codes=['4AB*', '4AC*'], group=True, merge=True)
>>> df.count_persons(codes={'adaliamumab':'4AB04'}, cols='ncmp', sep=',', pid='pid')
>>> df.count_persons(codes='4AB04', cols='ncmp', groupby=['disease', 'cohort'])
>>> df.groupby(['disease', 'cohort']).apply(count_persons, cols='ncmp', codes='4AB04', sep=',')
"""
sub = df
sub, cols = to_df(df=sub, cols=cols)
cols = expand_cols(df=sub, cols=cols)
if normalize:
sum_persons = sub[pid].nunique()
# if an expression instead of a codelist is used as input
if isinstance(codes, str) and codes.count(' ') > 1:
persons = use_expression(df=sub, expr=codes, cols=cols, sep=sep, out='persons', codebook=codebook, pid=pid)
if normalize:
counted = persons.sum() / len(persons)
else:
counted = persons.sum()
# if codes is a codelist (not an expression)
else:
if fix:
if not codes:
counted = count_persons_all_codes(df=sub, cols=cols, pid=pid, sep=sep,
normalize=normalize, dropna=dropna, length=length, groupby=groupby)
return counted
# if some codes are specified, expand and format these, and reduce the df to the relevant codes
else:
# expands and formats columns and codes input
codes, cols, allcodes, sep = fix_args(df=sub, codes=codes, cols=cols, sep=sep, group=group,
merge=merge)
rows = get_rows(df=sub, codes=allcodes, cols=cols, sep=sep, fix=False)
if not dropna:
sum_persons = df[pid].nunique()
sub = sub[rows].set_index(pid,
drop=False) # unsure if this is necessary, may drop it. Required if method on a series? well not as long as we add pid column and recreate a series as a df
# make a df with the extracted codes
code_df = extract_codes(df=sub, codes=codes, cols=cols, sep=sep, fix=False, series=False)
labels = list(code_df.columns)
counted = pd.Series(index=labels)
# maybe delete groupby option, can be done outside df.groupby. apply ...
if groupby:
code_df = code_df.any(level=0)
sub_plevel = sub.groupby(pid)[groupby].first()
code_df = pd.concat([code_df, sub_plevel], axis=1) # outer vs inner problem?
code_df = code_df.set_index(groupby)
counted = code_df.groupby(groupby).sum()
else:
for label in labels:
counted[label] = code_df[code_df[label]].index.nunique()
if not dropna:
with_codes = code_df.any(axis=1).any(level=0).sum() # surprisingly time consuming?
nan_persons = sum_persons - with_codes
counted['NaN'] = nan_persons
if normalize:
counted = counted / sum_persons
else:
counted = counted.astype(int)
if len(counted) == 1:
counted = counted.values[0]
return counted
eval_atom(df=df, expr='B1-B5 in codes', sep=',')
from functools import lru_cache
def eval_condition(expr):
@lru_cache()
def get_conditions(expr):
split_on = [' or ', ' and ']
split_rep = ' @split@ '
for split_word in split_on:
expr = expr.replace(split_word, split_rep)
conditions = expr.split(split_rep)
conditions = [condition.strip('(').strip(')') for condition in conditions]
return conditions
return get_conditions(expr)
expr = 'max 2 of 4AB0552 before 4AB04'
insert_columns(expr, cols='icd')
###Output
_____no_output_____
###Markdown
evaluating atomic expressions
###Code
def eval_single(df, condition, cols=None, sep=None, codebook=None,
out='pid', info=None):
"""
evaluates a single expressions (1st 4A),
not relational conditions (A before B, within 100 days after etc)
condition ='first 5 of 4AB02 in ncmp'
condition ='min 2 of days>10'
condition ='ddd>10'
condition ='ddd[4AB02 in codes]>10'
condition ='ddd[4AB02 in codes].cumsum()>50'
condition ='sum(ddd[4AB02 in codes])>50'
a=eval_single(df=npr, condition=condition, sep=',')
todo: info bank problems after allowing code selections?
"""
# create temporary storage to avoid recalculations
if not info: info = Info()
original_condition = condition
# no re-evaluation necessary if it it has been evaluated before
if out == 'pid' and (condition in info.single_pid):
return info.single_pid[condition]
elif out == 'rows' and (condition in info.single_rows):
return info.single_rows[condition]
elif out == 'interval' and (condition in info.single_interval):
return info.single_interval[condition]
quantity = r'[>=<]' # better to use term comparison
freq = ['min ', 'max ', 'exactly ']
first_last_between = [' first ', ' last ', ' between ']
ordinal = r'(-?\d+)(st |nd |rd |th )' # re to find and split 3rd into 3 and rd etc
row_selection = ''
# select sub df if specified by [] after a code
if ('[' in condition) and (']' in condition):
row_query = condition.split('[')[-1].split(']')[0]
row_selection = row_query
# check if evaluated before
if row_query in info.single_rows:
rows = info.single_rows[row_query]
else:
condition = condition.replace(f'[{row_query}]', '')
if ' in ' in row_query:
row_query = row_query.replace(' in ', ' in: ') # using old use_expresssion wich requires in with colon
relevant_rows = use_expression(df=df, cols=cols, expr=row_query, sep=sep)
info.single_rows[row_query] = relevant_rows
df = df[relevant_rows]
# is it a functional expression? ddd.cumsum()>10
# expr="ddd.cumsum()>10"
# condition=expr
# expr='gender.nunique()==1'
# hmm what about properties like .is_monotonic? (no parenthesis!)
# if ('.' in condition) and ('(' in condition) and (')' in condition):
# still imperfect ... a code could also be a column name ... ok usually not also with a period mark in column name so ok
if ('.' in condition) and (condition.split('.')[0] in df.columns):
codetext = condition
codes = re.split('[<=>]', condition)[0]
if codes in info.single_rows:
rows = info.single_rows[codes]
# not evaluated before, so calc
else:
cols, funcexpr = condition.split('.')
# a method
if '(' in funcexpr:
func, threshold = funcexpr.split(')')
func, args = func.split('(')
rows = pd.eval(f"tmpdf.groupby(['pid'])['{cols}'].transform('{func}', {args}) {threshold}",
engine='python')
# an attribute (like is_monotonic)
else:
rows = pd.eval(f"tmpdf.groupby(['pid'])['{cols}'].transform(lambda x: x.{funcexpr})", engine='python')
info.single_rows[codes] = rows
# if it is a simple quantiative conditions (oxygen_level>20)
elif re.search(quantity, condition):
codetext = condition
codes = condition.split()[-1] # code condition always last hmm unnecessary
# check if evaluated before
if codes in info.single_rows:
rows = info.single_rows[codes]
# not evaluated before, so calc
else:
# sum(glucose_level)>10
# if this, then may skip further processing?
# well: 1st sum(glucose)>20 ok makes sense, maybe
# but not: max 5 of sum(glucose)>20 ... well maybe
# first 5 of sum(glucose)>20
# if the modifiers does not make sense, the sum might be in the
# list of other modifiers i.e. first 5, 3rd etc and not a
# pre-modifier when finding rows (which allows skipping)
# complex quantitative expression: sum(glucose_level)>10
# better, more flexible ...: glucose.sum()>10 ... can make any function work, and can pass arguments
if 'sum(' in codes: # can use ddd.cumsum() now, keep this to double check
col, operator = codes.split(')')
col = col.replace('sum(', '').strip(')')
eval_text = f"df.groupby(df.index)['{col}'].cumsum(){operator}"
rows = pd.eval(eval_text, engine='python').fillna(False) # is fillna false better than dropna here?
# simple quantitative expression: glucose_level)>10
else:
rows = df.eval(codes).fillna(False)
codecols = codes
info.single_rows[codecols] = rows
# code expression (involving a code, not a quantitative expressions
else:
codetext, incols = condition.split(' in ')
codes = codetext.split()[-1].strip() # codes always last in a simple string after cutting 'in cols'
if incols.strip() == '':
cols = cols
else:
cols = incols
codecols = codes + ' in ' + cols + ' row ' + row_selection # cannot use just codes to store rows since same code may be in different columns, so need to include col in name when storing
# If conditions is about events in general, create an events column
if (' event ' in codes) or (' events ' in codes):
rows = pd.Series(True, index=df.index).fillna(False)
codecols = ' event '
# not a quantitative condition or an event conditions, so it is a code condition
else:
if codecols in info.rows:
rows = info.rows[codecols]
else:
# cols = expand_cols(df=df, cols=cols)
# expanded_codes = expand_codes(df=df, codes=codes, cols=cols, sep=sep)
# allcodes=_get_allcodes(expanded_codes)
# rows = get_rows(df=df, codes=allcodes, cols=cols, sep=sep, fix=False)
rows = use_expression(df=df, expr=codes + ' in:' + cols, sep=sep)
info.rows[codecols] = rows
# is there a prefix to the conditions? if not, isolated condition, just return rows
# if not, start preparing for calculating conditions with qualifiers
# todo: quite messy! refactor: one function to evluate the code/expression itself, another to evalute the qualifier?
if ' ' not in codetext.strip():
# remember answer
info.single_rows[codecols] = rows
info.rows[codecols] = rows
if out == 'pid':
endrows = rows.groupby(level=0).any()
info.single_pid[codecols] = endrows
info.pid[codecols] = endrows
else:
endrows = rows
return endrows
# calculate and remember cumsum per person
# use previous calculation if exist
if codes in info.cumsum:
rowscum = info.cumsum[codes]
else:
rowscum = rows.groupby(level=0).cumsum()
info.cumsum[codecols] = rowscum
## if not a simple existence condition, it must be one of the conditions below
# positional condition: 5th of 4a, 3rd to 8th of 4A, (3rd, 4th, 5th) of 4A
# also allows: 2nd last (or even -5th last)
if re.match(ordinal, codetext):
pos_str = condition.split('of ')[0].strip().strip('(').strip(')')
# pos_re = ordinal.replace(' ', '[ )]|') # last condition may have ) i.e. 25th)
pos_re = ordinal.replace(' ', '') # last condition may have ) i.e. 25th)
pos_nums = re.findall(pos_re, pos_str)
pos_nums = tuple([int(pos[0]) for pos in pos_nums])
# if the conditions includes last, need reversed cumsum
if ' last ' in pos_str or '-' in pos_str:
n_max = rowscum.groupby(level=0).max().add(1)
# reversed event number (by id)
lastrowscum = (rowscum - n_max).abs()
last_flag = 1
else:
last_flag = 0
# single position: 5th of 4A
if len(pos_nums) == 1:
if last_flag:
select = (lastrowscum == pos_nums[0])
else:
select = (rowscum == pos_nums[0])
# from-to positions: 3rd to 8th of 4A, 1st to -3rd
elif ' to ' in pos_str:
lower, upper = pos_nums
if lower > 0:
aboverows = (rowscum >= lower)
else:
aboverows = (lastrowscum >= abs(lower))
if upper > 0:
belowrows = (rowscum <= upper)
else:
belowrows = (lastrowscum <= abs(upper))
select = (aboverows & belowrows)
# list of positions (3rd, 5th, 7th)
elif pos_str.strip().startswith('('):
pos_num = [num for num in pos_nums if num > 0]
neg_num = [num for num in pos_nums if num < 0]
select = rowscum.isin(pos_num)
if neg_num:
select = select | lastrowscum.isin([abs(num) for num in neg_num])
# freq condition: min 5 of 4A
elif any(word in codetext for word in freq):
word, num, _, codes = codetext.split()
num = int(num)
if 'min' in word:
select = (rowscum >= num)
elif 'max' in word: # doublecheck!
n_max = rowscum.max(level=0)
select = (n_max <= num)
elif 'exactly' in word:
select = (rowscum == num)
# first, last range conditions: first 5 of 4A
elif any(word.strip() in condition for word in first_last_between): # regex is better
word, num, _, codes = codetext.split()
if '%' not in num:
num = int(num)
if 'first' in word:
select = (rowscum <= num)
if 'last' in word:
select = (rowscum >= num)
# if pct condition: first 10% of 4A
elif '%' in codetext:
n_max = rowscum.groupby(level=0).max()
pct = float(num.split(r'%')[0]) / 100
pid_num = n_max * pct
# first 1% of two observations includes 1st obs
pid_num[pid_num < 1] = 1
if word == 'first':
# hmm, generalproblem: drop if pid is missing ...
select = (rowscum < pid_num)
if word == 'last':
select = (rowscum > pid_num)
# percentile condition
elif ' percentile ' in codetext:
event_num = rows.groupby(level=0).cumcount()
n_count = rowscum.groupby(level=0).size()
num = float(num.split(r'%')[0]) / 100
pid_num = n_count * num
if word == 'first':
rows = (pid_num < event_num)
if word == 'last':
rows = (pid_num > event_num)
# so far, have marked interval of events for expressions with qualifications
# (existence conditions are not intervals). example: First 5 of 4A, markes
# all events in the interval between the 1st and 5th of 4A
# if we only want to pick the 4A events in this intereval, we and it with
# the boolena for 4A existence (row). But sometimes we want to keep and use
# the interval. For instance when the qualifiers are used in before/after
# statements if the evaluated expression should be returned as 'exact row',
# 'interval row' or pid existence
# store and return results
if out == 'pid':
endrows = (rows & select)
endrows = endrows.any(level=0)
info.single_pid[original_condition] = endrows
info.single_rows[original_condition] = rows
elif out == 'interval':
endrows = select
info.interval[original_condition] = endrows
elif out == 'rows':
endrows = (rows & select)
info.single_rows[original_condition] = endrows
return endrows
###Output
_____no_output_____
###Markdown
get inpatient data
To test the functions and to calculate the Charlson index we need some data. Here we will use data on hospital visits from Medicare:
###Code
# Use pandas
import pandas as pd
# Read synthetic medicare sample data on inpatient hospital stays
path = 'https://www.cms.gov/Research-Statistics-Data-and-Systems/Downloadable-Public-Use-Files/SynPUFs/Downloads/'
inpatient_file = 'DE1_0_2008_to_2010_Inpatient_Claims_Sample_1.zip'
inpatient = pd.read_csv(path+inpatient_file)
inpatient.columns = inpatient.columns.str.lower()
# easier to use a column called 'pid' than 'desynpuf_id'
inpatient['pid']=inpatient['desynpuf_id']
#set index to the personal id, but also keep id as a column
inpatient = inpatient.set_index('pid', drop=False)
inpatient.index.name='pid_index'
# Have a look
inpatient.head()
# make a list of columns with information about diagnostic codes
icd_cols = list(inpatient.columns[inpatient.columns.str.startswith('icd9_dgns_cd')])
icd_cols
###Output
_____no_output_____
###Markdown
Make a list of all the unique ICD-9 codes that exist in the data (an all_codes list):
###Code
# Codes to calculate CCI using ICD-9 (CM, US, Enhanced)
# Source: http://mchp-appserv.cpe.umanitoba.ca/concept/Charlson%20Comorbidities%20-%20Coding%20Algorithms%20for%20ICD-9-CM%20and%20ICD-10.pdf
infarction = '''
410*
412*
'''
heart_failure = '''
390.91
402.21 402.11 402.91
404.01 404.03 404.11 404.13 404.91 404.93
425.4-425.9
428*
'''
peripheral_vascular = '''
093.0
437.3
440*
441*
443.1-443.9
447.1
557.1 557.9
V43.4
'''
cerebrovascular = '''
362.34
430*-438*
'''
dementia = '''
290*
294.1
331.2
'''
pulmonary ='''
416.8 416.9
490*-505*
506.4
508.1 508.8
'''
rheumatic = '''
446.5
710.0-710.4
714.0-714.2 714.8
725*
'''
peptic_ulcer = '531*-534*'
liver_mild ='''
070.22
070.23
070.32
070.33
070.44
070.54
070.6
070.9
570.*
571.*
573.3 573.4 573.8 573.9
V42.7
'''
# Interesting, diabetes seems to be 5 digits long in the data, but not the specified codes
diabetes_without_complication = '250.0*-250.3* 250.8* 250.9*'
diabetes_with_complication = '250.4*-250.7*'
plegia = '''
334.1
342.*
343.*
344.0-344.6
344.9
'''
renal = '''
403.01 403.11,403.91
404.02 404.03 404.12 404.13 404.92 404.93
582.*
583.0-583.7
585*
586*
588.0
V42.0
V45.1
V56*
'''
malignancy = '''
140*-172*
174.0-195.8
200*-208*
238.6
'''
liver_not_mild = '''
456.0-456.2
572.2-572.8
'''
tumor = '196*-199*'
hiv = '042*-044*'
###Output
_____no_output_____
###Markdown
Put all the strings that describe the codes for the comorbidities in a single data structure:
###Code
icd9 = unique(df=inpatient, cols = icd_cols, all_str=True)
# A dictionary with names of cormobitities and the associated medical codes
code_string = { 'infarction' : infarction,
'heart_failure' : heart_failure,
'peripheral_vascular' : peripheral_vascular,
'cerebrovascular' : cerebrovascular,
'dementia' : dementia,
'pulmonary' : pulmonary,
'rheumatic' : rheumatic,
'peptic_ulcer' : peptic_ulcer,
'liver_mild' : liver_mild,
'diabetes_without_complication' : diabetes_without_complication,
'diabetes_with_complication' : diabetes_with_complication,
'plegia' : plegia,
'renal' : renal,
'malignancy' : malignancy,
'liver_not_mild' : liver_not_mild,
'tumor' : tumor,
'hiv' : hiv}
###Output
_____no_output_____
###Markdown
Having created the all_codes list, we can use the functions we have created to expand the description for all the different comorbidities to include all the specific codes:
###Code
codes = {disease : expand_code(codes.split(),
all_codes=icd9,
drop_dot=True,
drop_leading_zero=True)
for disease, codes in code_string.items()}
###Output
_____no_output_____
###Markdown
And we can check if it really expanded the codes, for instance by examining the codes for mild liver disease:
###Code
codes['liver_mild']
###Output
_____no_output_____
###Markdown
In order to do the calculations, we need the weights associated with each comorbidity. These weights are related to the predictive power of the comorbidity for the probability of dying in a given time period. There are a few different standards, but with relatively minor variations. Here we use the following:
###Code
charlson_points = { 'infarction': 1,
'heart_failure': 1,
'peripheral_vascular': 1,
'cerebrovascular': 1,
'dementia': 1,
'pulmonary': 1,
'rheumatic': 1,
'peptic_ulcer': 1,
'liver_mild': 1,
'diabetes_without_complication': 1,
'diabetes_with_complication': 2,
'plegia': 2,
'renal': 2,
'malignancy': 2,
'liver_not_mild': 3,
'tumor': 6,
'hiv': 6}
###Output
_____no_output_____
###Markdown
We also need the function that takes a set of codes and identifies the rows and persons who have the codes (a function we developed in a previous notebook):
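A rough sketch of how the expanded codes and the weights could be combined into a per-person score is shown below (the function name `charlson_index` and the loop structure are illustrative assumptions, not the final implementation; `get_rows` is the helper from this library):

```python
import pandas as pd

def charlson_index(df, codes, points, cols, sep=None, pid='pid'):
    """Sum the Charlson weights of the comorbidities each person has at least once."""
    score = pd.Series(0, index=df[pid].unique())
    for disease, disease_codes in codes.items():
        # rows with any of the expanded codes for this comorbidity
        rows = get_rows(df=df, codes=disease_codes, cols=cols, sep=sep)
        has_disease = rows.groupby(df[pid]).any()
        score = score.add(has_disease * points[disease], fill_value=0)
    return score.astype(int)

# cci = charlson_index(inpatient, codes, charlson_points, cols=icd_cols)
# cci.describe()
```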
###Code
#hide
from nbdev.showdoc import *
from nbdev.export import *
notebook2script()
###Output
Converted 00_core.ipynb.
Converted index.ipynb.
Converted query_language.ipynb.
convolutional_networks/week2/KerasTutorial/Keras_Tutorial_v2a.ipynb | ###Markdown
Keras tutorial - Emotion Detection in Images of FacesWelcome to the first assignment of week 2. In this assignment, you will:1. Learn to use Keras, a high-level neural networks API (programming framework), written in Python and capable of running on top of several lower-level frameworks including TensorFlow and CNTK. 2. See how you can in a couple of hours build a deep learning algorithm. Why are we using Keras? * Keras was developed to enable deep learning engineers to build and experiment with different models very quickly. * Just as TensorFlow is a higher-level framework than Python, Keras is an even higher-level framework and provides additional abstractions. * Being able to go from idea to result with the least possible delay is key to finding good models. * However, Keras is more restrictive than the lower-level frameworks, so there are some very complex models that you would still implement in TensorFlow rather than in Keras. * That being said, Keras will work fine for many common models. Updates If you were working on the notebook before this update...* The current notebook is version "v2a".* You can find your original work saved in the notebook with the previous version name ("v2").* To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory. List of updates* Changed back-story of model to "emotion detection" from "happy house."* Cleaned/organized wording of instructions and commentary.* Added instructions on how to set `input_shape`* Added explanation of "objects as functions" syntax.* Clarified explanation of variable naming convention.* Added hints for steps 1,2,3,4 Load packages* In this exercise, you'll work on the "Emotion detection" model, which we'll explain below. * Let's load the required packages.
###Code
import numpy as np
from keras import layers
from keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D
from keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D
from keras.models import Model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from kt_utils import *
import keras.backend as K
K.set_image_data_format('channels_last')
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
%matplotlib inline
###Output
Using TensorFlow backend.
###Markdown
**Note**: As you can see, we've imported a lot of functions from Keras. You can use them by calling them directly in your code. Ex: `X = Input(...)` or `X = ZeroPadding2D(...)`. In other words, unlike TensorFlow, you don't have to create the graph and then make a separate `sess.run()` call to evaluate those variables. 1 - Emotion Tracking* A nearby community health clinic is helping the local residents monitor their mental health. * As part of their study, they are asking volunteers to record their emotions throughout the day.* To help the participants more easily track their emotions, you are asked to create an app that will classify their emotions based on some pictures that the volunteers will take of their facial expressions.* As a proof-of-concept, you first train your model to detect if someone's emotion is classified as "happy" or "not happy."To build and train this model, you have gathered pictures of some volunteers in a nearby neighborhood. The dataset is labeled.Run the following code to normalize the dataset and learn about its shapes.
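For instance, a toy model can be defined and inspected in a couple of lines (the layer sizes here are arbitrary and only meant to show the calling style):

```python
X_input = Input((64, 64, 3))              # placeholder for 64x64 RGB images
X = Flatten()(X_input)                    # layer objects are called like functions on tensors
X = Dense(1, activation='sigmoid')(X)
toy_model = Model(inputs=X_input, outputs=X)
toy_model.summary()                       # no separate graph/session step is needed
```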
###Code
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.
# Reshape
Y_train = Y_train_orig.T
Y_test = Y_test_orig.T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
###Output
number of training examples = 600
number of test examples = 150
X_train shape: (600, 64, 64, 3)
Y_train shape: (600, 1)
X_test shape: (150, 64, 64, 3)
Y_test shape: (150, 1)
###Markdown
**Details of the "Face" dataset**:- Images are of shape (64,64,3)- Training: 600 pictures- Test: 150 pictures 2 - Building a model in KerasKeras is very good for rapid prototyping. In just a short time you will be able to build a model that achieves outstanding results.Here is an example of a model in Keras:```pythondef model(input_shape): """ input_shape: The height, width and channels as a tuple. Note that this does not include the 'batch' as a dimension. If you have a batch like 'X_train', then you can provide the input_shape using X_train.shape[1:] """ Define the input placeholder as a tensor with shape input_shape. Think of this as your input image! X_input = Input(input_shape) Zero-Padding: pads the border of X_input with zeroes X = ZeroPadding2D((3, 3))(X_input) CONV -> BN -> RELU Block applied to X X = Conv2D(32, (7, 7), strides = (1, 1), name = 'conv0')(X) X = BatchNormalization(axis = 3, name = 'bn0')(X) X = Activation('relu')(X) MAXPOOL X = MaxPooling2D((2, 2), name='max_pool')(X) FLATTEN X (means convert it to a vector) + FULLYCONNECTED X = Flatten()(X) X = Dense(1, activation='sigmoid', name='fc')(X) Create model. This creates your Keras model instance, you'll use this instance to train/test the model. model = Model(inputs = X_input, outputs = X, name='HappyModel') return model``` Variable naming convention* Note that Keras uses a different convention with variable names than we've previously used with numpy and TensorFlow. * Instead of creating unique variable names for each step and each layer, such as ```X = ...Z1 = ...A1 = ...```* Keras re-uses and overwrites the same variable at each step:```X = ...X = ...X = ...```* The exception is `X_input`, which we kept separate since it's needed later. Objects as functions* Notice how there are two pairs of parentheses in each statement. For example:```X = ZeroPadding2D((3, 3))(X_input)```* The first is a constructor call which creates an object (ZeroPadding2D).* In Python, objects can be called as functions. Search for 'python object as function and you can read this blog post [Python Pandemonium](https://medium.com/python-pandemonium/function-as-objects-in-python-d5215e6d1b0d). See the section titled "Objects as functions."* The single line is equivalent to this:```ZP = ZeroPadding2D((3, 3)) ZP is an object that can be called as a functionX = ZP(X_input) ``` **Exercise**: Implement a `HappyModel()`. * This assignment is more open-ended than most. * Start by implementing a model using the architecture we suggest, and run through the rest of this assignment using that as your initial model. * Later, come back and try out other model architectures. * For example, you might take inspiration from the model above, but then vary the network architecture and hyperparameters however you wish. * You can also use other functions such as `AveragePooling2D()`, `GlobalMaxPooling2D()`, `Dropout()`. **Note**: Be careful with your data's shapes. Use what you've learned in the videos to make sure your convolutional, pooling and fully-connected layers are adapted to the volumes you're applying it to.
###Code
# GRADED FUNCTION: HappyModel
def HappyModel(input_shape):
"""
Implementation of the HappyModel.
Arguments:
input_shape -- shape of the images of the dataset
(height, width, channels) as a tuple.
Note that this does not include the 'batch' as a dimension.
If you have a batch like 'X_train',
then you can provide the input_shape using
X_train.shape[1:]
Returns:
model -- a Model() instance in Keras
"""
### START CODE HERE ###
# Feel free to use the suggested outline in the text above to get started, and run through the whole
# exercise (including the later portions of this notebook) once. Then come back and try out other
# network architectures as well.
X_input = Input(input_shape)
X = ZeroPadding2D((3, 3))(X_input)
X = Conv2D(32, (3, 3), strides = (1, 1), name = 'conv0')(X)
X = BatchNormalization(axis = 3, name = 'bn0')(X)
X = Activation('relu')(X)
# MAXPOOL
X = MaxPooling2D((2, 2), name='max_pool')(X)
# FLATTEN X (means convert it to a vector) + FULLYCONNECTED
X = Flatten()(X)
X = Dense(1, activation='sigmoid', name='fc')(X)
# Create model. This creates your Keras model instance, you'll use this instance to train/test the model.
model = Model(inputs = X_input, outputs = X, name='HappyModel')
### END CODE HERE ###
return model
###Output
_____no_output_____
###Markdown
You have now built a function to describe your model. To train and test this model, there are four steps in Keras:1. Create the model by calling the function above 2. Compile the model by calling `model.compile(optimizer = "...", loss = "...", metrics = ["accuracy"])` 3. Train the model on train data by calling `model.fit(x = ..., y = ..., epochs = ..., batch_size = ...)` 4. Test the model on test data by calling `model.evaluate(x = ..., y = ...)` If you want to know more about `model.compile()`, `model.fit()`, `model.evaluate()` and their arguments, refer to the official [Keras documentation](https://keras.io/models/model/). Step 1: create the model. **Hint**: The `input_shape` parameter is a tuple (height, width, channels). It excludes the batch number. Try `X_train.shape[1:]` as the `input_shape`.
###Code
### START CODE HERE ### (1 line)
happyModel = HappyModel(X_train.shape[1:])
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
Step 2: compile the model**Hint**: Optimizers you can try include `'adam'`, `'sgd'` or others. See the documentation for [optimizers](https://keras.io/optimizers/) The "happiness detection" task is a binary classification problem. The loss function that you can use is `'binary_crossentropy'`. Note that `'categorical_crossentropy'` won't work with your data set as it's formatted, because the data is an array of 0s and 1s rather than two arrays (one for each category). Documentation for [losses](https://keras.io/losses/)
###Code
### START CODE HERE ### (1 line)
happyModel.compile(optimizer = "adam", loss = "binary_crossentropy", metrics = ["accuracy"])
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
Step 3: train the model**Hint**: Use the `'X_train'`, `'Y_train'` variables. Use integers for the epochs and batch_size**Note**: If you run `fit()` again, the `model` will continue to train with the parameters it has already learned instead of reinitializing them.
###Code
### START CODE HERE ### (1 line)
happyModel.fit(x =X_train, y = Y_train, epochs = 30, batch_size = 50)
### END CODE HERE ###
###Output
Epoch 1/30
600/600 [==============================] - 9s - loss: 2.2086 - acc: 0.5483
Epoch 2/30
600/600 [==============================] - 9s - loss: 0.4844 - acc: 0.8217
Epoch 3/30
600/600 [==============================] - 9s - loss: 0.1732 - acc: 0.9267
Epoch 4/30
600/600 [==============================] - 9s - loss: 0.1774 - acc: 0.9300
Epoch 5/30
600/600 [==============================] - 9s - loss: 0.0983 - acc: 0.9650
Epoch 6/30
600/600 [==============================] - 9s - loss: 0.0840 - acc: 0.9717
Epoch 7/30
600/600 [==============================] - 9s - loss: 0.0926 - acc: 0.9683
Epoch 8/30
600/600 [==============================] - 9s - loss: 0.0754 - acc: 0.9767
Epoch 9/30
600/600 [==============================] - 9s - loss: 0.0618 - acc: 0.9783
Epoch 10/30
600/600 [==============================] - 9s - loss: 0.0654 - acc: 0.9850
Epoch 11/30
600/600 [==============================] - 9s - loss: 0.0612 - acc: 0.9767
Epoch 12/30
600/600 [==============================] - 9s - loss: 0.0480 - acc: 0.9867
Epoch 13/30
600/600 [==============================] - 10s - loss: 0.0530 - acc: 0.9800
Epoch 14/30
600/600 [==============================] - 9s - loss: 0.0458 - acc: 0.9883
Epoch 15/30
600/600 [==============================] - 9s - loss: 0.0306 - acc: 0.9933
Epoch 16/30
600/600 [==============================] - 9s - loss: 0.0327 - acc: 0.9950
Epoch 17/30
600/600 [==============================] - 9s - loss: 0.0271 - acc: 0.9917
Epoch 18/30
600/600 [==============================] - 9s - loss: 0.0283 - acc: 0.9933
Epoch 19/30
600/600 [==============================] - 9s - loss: 0.0226 - acc: 0.9950
Epoch 20/30
600/600 [==============================] - 9s - loss: 0.0228 - acc: 0.9983
Epoch 21/30
600/600 [==============================] - 9s - loss: 0.0233 - acc: 0.9950
Epoch 22/30
600/600 [==============================] - 10s - loss: 0.0172 - acc: 0.9967
Epoch 23/30
600/600 [==============================] - 10s - loss: 0.0192 - acc: 0.9950
Epoch 24/30
600/600 [==============================] - 11s - loss: 0.0206 - acc: 0.9950
Epoch 25/30
600/600 [==============================] - 11s - loss: 0.0254 - acc: 0.9950
Epoch 26/30
600/600 [==============================] - 11s - loss: 0.0175 - acc: 0.9950
Epoch 27/30
600/600 [==============================] - 11s - loss: 0.0178 - acc: 0.9950
Epoch 28/30
600/600 [==============================] - 11s - loss: 0.0238 - acc: 0.9950
Epoch 29/30
600/600 [==============================] - 11s - loss: 0.0207 - acc: 0.9917
Epoch 30/30
600/600 [==============================] - 10s - loss: 0.0116 - acc: 1.0000
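###Markdown
(Optional aside, not part of the graded steps.) As noted above, calling `fit()` again continues training from the weights the model has already learned. A minimal sketch of how to start over instead — the name `happyModel_fresh` is used here only for illustration — is to re-create and re-compile the model, which resets its weights:
###Code
# Illustration only: re-creating the model gives freshly initialized weights,
# whereas calling happyModel.fit(...) again would resume from the trained weights above.
happyModel_fresh = HappyModel(X_train.shape[1:])
happyModel_fresh.compile(optimizer = "adam", loss = "binary_crossentropy", metrics = ["accuracy"])
# happyModel_fresh.fit(x = X_train, y = Y_train, epochs = 1, batch_size = 50)  # uncomment to re-train from scratch
###Output
_____no_output_____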
###Markdown
Step 4: evaluate model **Hint**: Use the `'X_test'` and `'Y_test'` variables to evaluate the model's performance.
###Code
### START CODE HERE ### (1 line)
preds = happyModel.evaluate(x = X_test, y = Y_test)
### END CODE HERE ###
print()
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
###Output
150/150 [==============================] - 1s
Loss = 0.151273331046
Test Accuracy = 0.966666664282
###Markdown
Expected performance If your `happyModel()` function worked, its accuracy should be better than random guessing (50% accuracy).To give you a point of comparison, our model gets around **95% test accuracy in 40 epochs** (and 99% train accuracy) with a mini batch size of 16 and "adam" optimizer. Tips for improving your modelIf you have not yet achieved a very good accuracy (>= 80%), here are some tips:- Use blocks of CONV->BATCHNORM->RELU such as:```pythonX = Conv2D(32, (3, 3), strides = (1, 1), name = 'conv0')(X)X = BatchNormalization(axis = 3, name = 'bn0')(X)X = Activation('relu')(X)```until your height and width dimensions are quite low and your number of channels quite large (≈32 for example). You can then flatten the volume and use a fully-connected layer.- Use MAXPOOL after such blocks. It will help you lower the dimension in height and width.- Change your optimizer. We find 'adam' works well. - If you get memory issues, lower your batch_size (e.g. 12)- Run more epochs until you see the train accuracy no longer improves. **Note**: If you perform hyperparameter tuning on your model, the test set actually becomes a dev set, and your model might end up overfitting to the test (dev) set. Normally, you'll want separate dev and test sets. The dev set is used for parameter tuning, and the test set is used once to estimate the model's performance in production. 3 - ConclusionCongratulations, you have created a proof of concept for "happiness detection"! Key Points to remember- Keras is a tool we recommend for rapid prototyping. It allows you to quickly try out different model architectures.- Remember the four steps in Keras: 1. Create 2. Compile 3. Fit/Train 4. Evaluate/Test 4 - Test with your own image (Optional)Congratulations on finishing this assignment. You can now take a picture of your face and see if it can classify whether your expression is "happy" or "not happy". To do that:1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.2. Add your image to this Jupyter Notebook's directory, in the "images" folder3. Write your image's name in the following code4. Run the code and check if the algorithm is right (0 is not happy, 1 is happy)! The training/test sets were quite similar; for example, all the pictures were taken against the same background (since a front door camera is always mounted in the same position). This makes the problem easier, but a model trained on this data may or may not work on your own data. But feel free to give it a try!
###Code
### START CODE HERE ###
img_path = 'images/my_image.jpg'
### END CODE HERE ###
img = image.load_img(img_path, target_size=(64, 64))
imshow(img)
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print(happyModel.predict(x))
###Output
[[ 1.]]
###Markdown
5 - Other useful functions in Keras (Optional)Two other basic features of Keras that you'll find useful are:- `model.summary()`: prints the details of your layers in a table with the sizes of its inputs/outputs- `plot_model()`: plots your graph in a nice layout. You can even save it as ".png" using SVG() if you'd like to share it on social media ;). It is saved in "File" then "Open..." in the upper bar of the notebook.Run the following code.
###Code
happyModel.summary()
plot_model(happyModel, to_file='HappyModel.png')
SVG(model_to_dot(happyModel).create(prog='dot', format='svg'))
###Output
_____no_output_____ |
instagram_analysis/insta_agds_full_dataset.ipynb | ###Markdown
Softmax version
###Code
from scipy.special import softmax
def get_trait_dot_product(post_text: str, word_map: list, word_dataframe: pd.DataFrame) -> list:
# Filter out the text
filtered_post = remove_stopwords(clean_up_text(post_text))
filtered_post += extract_hashtags(post_text)
# Create a vector for dot product vector
post_vector = [0] * len(word_map)
# Calculate word occurrences
word_ctr = Counter(filtered_post)
for word, freq in word_ctr.items():
if word in word_map:
post_vector[word_map.index(word)] = freq
# Calculate dot product for a given text
word_dot = word_dataframe.dot(post_vector)
out_vec = pd.Series()
for trait in trait_list:
out_vec = out_vec.append(pd.Series([np.argmax(softmax(word_dot.loc[trait]))], index=[trait]))
return out_vec
# Trait accuracy - round the results
def natural_round(x: float) -> int:
out = int(x // 1)
return out + 1 if (x - out) >= 0.5 else out
def accuracy_per_trait(input_vector: pd.Series, annotated_vector: pd.Series) -> np.array:
out_array = np.array([0] * 37, dtype=np.int)
for i in range(len(out_array)):
if input_vector[i] == annotated_vector[i]:
out_array[i] = 1
return out_array
pbar = tqdm(arch_df.iterrows())
accuracy = 0
# Out accuracy vector
total_accuracy = np.array([0] * 37, dtype=np.int)
for idx, row in pbar:
user_text = list(itertools.chain.from_iterable(posts[users.index(idx)]))
user_text = " ".join(user_text)
sim_output = get_trait_dot_product(user_text, softmax_word_map, softmax_word_df)
user_accuracy = accuracy_per_trait(sim_output, row)
total_accuracy += user_accuracy
pbar.set_description(f"Average accuracy: {round(np.mean(np.divide(total_accuracy, users.index(idx)+1))*100, 2)}")
# Test dataset
# Load the .csv with archetypes
arch_df = pd.read_csv('test_archetypes_pl.csv', index_col=0)
# Save the order of columns
trait_list = arch_df.columns.tolist()
# Show the table header and column list
print(trait_list)
arch_df.head()
# Table preprocessing - replace all NaN with 2 (the Unrelated/Don't know class)
arch_df = arch_df.fillna(2)
# Remove duplicated annotations, to exclude conflicting entries
arch_df = arch_df[~arch_df.index.duplicated(keep='first')]
# Print the head of the dataset after modification
arch_df.head()
# Check if a user has a non-empty directory in the dataset, otherwise delete the user from the list
available_arch_df = copy.deepcopy(arch_df)
posts = []
BASE_DIR = "instagram_cleared"
# Iterate over whole DataFrame
for i, row in tqdm(arch_df.iterrows()):
profile_posts = []
profile_hashtags = []
# Get all posts per profile
profile_path = os.path.join(BASE_DIR, i)
for file in os.listdir(profile_path):
if not file.endswith(".toml"):
with open(os.path.join(profile_path, file), "r") as post_f:
read_text = post_f.read()
profile_posts.append(remove_stopwords(clean_up_text(read_text)))
profile_hashtags.append(extract_hashtags(read_text))
# Merge lists - a single list for a single influencer
profile_hashtags = list(itertools.chain.from_iterable(profile_hashtags))
posts.append(list(itertools.chain.from_iterable([profile_posts, [profile_hashtags]])))
# Map usernames to indices
users = list(arch_df.index.values)
user_indices = {k: users.index(k) for k in users}
pbar = tqdm(arch_df.iterrows())
# Out accuracy vector
test_total_accuracy = np.array([0] * 37, dtype=np.int)
for idx, row in pbar:
profile_path = os.path.join(BASE_DIR, idx)
user_text = ""
for file in os.listdir(profile_path):
if not file.endswith(".toml"):
with open(os.path.join(profile_path, file), "r") as post_f:
read_text = post_f.read()
user_text += read_text
sim_output = get_trait_dot_product(user_text, softmax_word_map, softmax_word_df)
user_accuracy = accuracy_per_trait(sim_output, row)
test_total_accuracy += user_accuracy
pbar.set_description(f"Average test dataset accuracy: {round(np.mean(np.divide(test_total_accuracy, users.index(idx)+1))*100, 2)}")
# Show total accuracy
scaled_test_accuracy = np.divide(test_total_accuracy, len(arch_df))
avg_test_accuracy = np.mean(scaled_test_accuracy)
print("--- ACCURACY ON TESTING DATASET ---")
print(f"Average test dataset accuracy: {round(avg_test_accuracy*100, 2)}%")
print("Accuracy per trait:")
for i in range(len(trait_list)):
print(f"{trait_list[i]}: {round(scaled_test_accuracy[i] * 100, 2)}%")
###Output
0it [00:00, ?it/s]<ipython-input-17-4ffef00e153a>:21: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
out_vec = pd.Series()
Average test dataset accuracy: 37.64: : 177it [15:25, 5.23s/it]
###Markdown
Regression model - testing dataset
###Code
# Methods
def get_trait_dot_product(post_text: str, word_map: list, word_dataframe: pd.DataFrame) -> list:
# Filter out the text
filtered_post = remove_stopwords(clean_up_text(post_text))
filtered_post += extract_hashtags(post_text)
# Create a vector for dot product vector
post_vector = [0] * len(word_map)
# Calculate word occurrences
word_ctr = Counter(filtered_post)
for word, freq in word_ctr.items():
if word in word_map:
post_vector[word_map.index(word)] = freq
# Calculate dot product for a given text
word_dot = word_dataframe.dot(post_vector)
return word_dot
# Replace NaN with 0 in word_frequency_table
word_df = word_df.fillna(0)
# Method for calculating the dot product of trait <-> influencer relation
def get_influencer_dot_product(trait_output: list, influencer_dataframe: pd.DataFrame) -> pd.DataFrame:
return influencer_dataframe.dot(trait_output)
# Method for calculating the similarity
def calculate_similarity(post_text: str,
word_map: list,
word_dataframe: pd.DataFrame,
influencer_dataframe: pd.DataFrame) -> pd.DataFrame:
# Calculate word-trait dot product
post_result = get_trait_dot_product(post_text, word_map, word_dataframe)
# Calculate trate-influencer dot-product
inf_dot_product = get_influencer_dot_product(post_result, influencer_dataframe)
# Get the sum of influencer traits
influencer_sum = influencer_dataframe.sum(axis=1)
# Divide the dot product by the sum calculated above
inf_dot_product = inf_dot_product.divide(influencer_sum)
return inf_dot_product
# Trait accuracy - round the results
def natural_round(x: float) -> int:
out = int(x // 1)
return out + 1 if (x - out) >= 0.5 else out
def accuracy_per_trait(input_vector: pd.Series, annotated_vector: pd.Series) -> np.array:
out_array = np.array([0] * 37, dtype=np.int)
for i in range(len(out_array)):
if natural_round(input_vector[i]) == annotated_vector[i]:
out_array[i] = 1
return out_array
pbar = tqdm(arch_df.iterrows())
# Out accuracy vector
test_reg_total_accuracy = np.array([0] * 37, dtype=np.int)
for idx, row in pbar:
profile_path = os.path.join(BASE_DIR, idx)
user_text = ""
for file in os.listdir(profile_path):
if not file.endswith(".toml"):
with open(os.path.join(profile_path, file), "r") as post_f:
read_text = post_f.read()
user_text += read_text
sim_output = get_trait_dot_product(user_text, word_map, word_df)
user_accuracy = accuracy_per_trait(sim_output, row)
test_reg_total_accuracy += user_accuracy
pbar.set_description(f"Average test dataset accuracy: {round(np.mean(np.divide(test_reg_total_accuracy, users.index(idx)+1))*100, 2)}")
# Show total accuracy
scaled_reg_test_accuracy = np.divide(test_reg_total_accuracy, len(arch_df))
avg_reg_test_accuracy = np.mean(scaled_reg_test_accuracy)
print("--- ACCURACY ON TESTING DATASET ---")
print(f"Average test dataset accuracy: {round(avg_reg_test_accuracy*100, 2)}%")
print("Accuracy per trait:")
for i in range(len(trait_list)):
print(f"{trait_list[i]}: {round(scaled_reg_test_accuracy[i] * 100, 2)}%")
###Output
Average test dataset accuracy: 18.08: : 177it [14:44, 5.00s/it] |
Codigos/Bicycle Thefts_Toronto.ipynb | ###Markdown
**Import libraries**
###Code
from google.colab import drive
drive.mount("/content/gdrive/")
%cd "/content/gdrive/My Drive/Colab Notebooks/bikes-theft-model"
# Libraries: Standard ones
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import plotly.express as px
import plotly.graph_objects as go
from plotly.subplots import make_subplots
import plotly.io as pio
# Library for boxplots
import seaborn as sns
import pandas as pd
#GRAPHS CLASS
from Codigos.DataStatistics import GraphsStatistics as gp
###Output
Mounted at /content/gdrive/
/content/gdrive/My Drive/Colab Notebooks/bikes-theft-model
###Markdown
DATA SET DESCRIPTION - TORONTOThis dataset contains Bicycle Thefts occurrences from **2014-2019** . The location of crime occurrences have been deliberately offset to the nearest road intersection node to protect the privacy of parties involved in the occurrence. All location data must be considered as an approximate location of the occurrence and users are advised not to interpret any of these locations as related to a specific address or individual. *Total of 26 features.* Field Field Description Variable Type Num of Unique values X Location in cartetian coordinates (X) float 4885 Y Location in cartetian coordinates (Y) float 4874 FID ID int 21584 Index Record Unique Identifier int 21584 event_unique_id Event Occurrence Identifier String 19350 Primary_Offence Offence related to the occurrence String 65 Occurrence_Date Date of occurrence String 2104 Occurrence_Year Occurrence year int 6 Occurrence_Month Occurrence Month int 12 Occurrence_Day Occurrence Day int 31 Occurrence_Time Occurrence Time String 933 Division Police Division where event occurred int 18 City City where event occurred String 1 Location_Type Location Type where event occurred String 44 Premise_Type Premise Type where event occurred String 5 Bike_Make Bicycle Make String 725 Bike_Model Bicycle Model String 7008 Bike_Type Bicycle Type String 13 Bike_Speed Bicycle Speed int 62 Bike_Colour Bicycle Colour String 233 Cost_of_Bike Cost of Bicycle float 1458 Status Status of event String 3 Hood_ID Neighbourhood Id int 140 Neighbourhood Neighbourhood name String 140 Lat Longitude of point extracted after offsetting X and & Coordinates to nearest intersection node float 4874 Long Latitude of point extracted after offsetting X and & Coordinates to nearest intersection node float 4885 **Descriptive statistics and visualisation** The following is a statistical analysis of the data, initially showing the type of variable it has per field:
###Code
data_bikes=pd.read_csv('Data/Bicycle_Thefts_Toronto.csv',header=0)
display(data_bikes.info())
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 21584 entries, 0 to 21583
Data columns (total 26 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 X 21584 non-null float64
1 Y 21584 non-null float64
2 FID 21584 non-null int64
3 Index_ 21584 non-null int64
4 event_unique_id 21584 non-null object
5 Primary_Offence 21584 non-null object
6 Occurrence_Date 21584 non-null object
7 Occurrence_Year 21584 non-null int64
8 Occurrence_Month 21584 non-null int64
9 Occurrence_Day 21584 non-null int64
10 Occurrence_Time 21584 non-null object
11 Division 21584 non-null int64
12 City 21584 non-null object
13 Location_Type 21584 non-null object
14 Premise_Type 21584 non-null object
15 Bike_Make 21584 non-null object
16 Bike_Model 13443 non-null object
17 Bike_Type 21584 non-null object
18 Bike_Speed 21584 non-null int64
19 Bike_Colour 19855 non-null object
20 Cost_of_Bike 20048 non-null float64
21 Status 21584 non-null object
22 Hood_ID 21584 non-null int64
23 Neighbourhood 21584 non-null object
24 Lat 21584 non-null float64
25 Long 21584 non-null float64
dtypes: float64(5), int64(8), object(13)
memory usage: 4.3+ MB
###Markdown
The following table presents a summary of the numerical variables in the dataset. It can be seen that the most frequent year of theft is 2019, the most frequent month is December, and the most frequent division is 58 (http://www.torontopolice.on.ca/divisions/map.php). The average cost of stolen bicycles is approximately 937.98 Canadian dollars.
###Code
data_bikes.describe()
###Output
_____no_output_____
###Markdown
The following table presents a summary of the categorical variables. It can be seen that the majority of thefts occur at apartment-type premises (Rooming House, Condo). The neighbourhood with the most thefts is Waterfront Communities-The Island, and the most common occurrence time is 18:00.
###Code
data_bikes.describe(include=['object'])
###Output
_____no_output_____
###Markdown
**Visualisation** In general, the status has three main types: stolen, recovered and unknown. The following graph shows the proportion of each status with respect to the general data. 97% of the bicycles correspond to stolen items
###Code
fig=px.pie(data_frame=data_bikes,names='Status',title='Theft status')
fig.show()
###Output
_____no_output_____
###Markdown
***What is the trend of annual thefts?*** The following graph presents the number of reported thefts per year. Thefts increased between 2014 and 2018, followed by a slight decrease between 2018 and 2019.
###Code
Dias_aux_robo_year=data_bikes.groupby(['Occurrence_Year','Status']).size().reset_index().rename(columns={0:'Count'})
#display(Dias_aux_robo_year)
fig = px.line(Dias_aux_robo_year, x='Occurrence_Year', y='Count', color='Status',title='Bicycle theft in Toronto per year')
fig.show()
###Output
_____no_output_____
###Markdown
***On what day do most thefts occur?*** According to the figure, the number of reported thefts does not differ greatly across the days of the week; however, the largest number of thefts occurs on Friday.
###Code
data_bikes['Occurrence_Date']=pd.to_datetime(data_bikes['Occurrence_Date']) # Convert the column to datetime format
data_bikes['Week_day']=data_bikes['Occurrence_Date'].dt.day_name() # Add the day name
day_type = pd.api.types.CategoricalDtype(categories=["Monday","Tuesday","Wednesday","Thursday","Friday","Saturday","Sunday"], ordered=True)
data_bikes["Week_day"] = data_bikes["Week_day"].astype(day_type)
Dias_aux_week=data_bikes[data_bikes['Status']=='STOLEN'].groupby(['Week_day']).size().reset_index().rename(columns={0:'Count'})
fig = px.bar(Dias_aux_week, x='Week_day', y='Count',title='Bicycle theft in Toronto per day')
fig.show()
###Output
_____no_output_____
###Markdown
***At what time do most thefts occur?*** According to the following histogram, the largest number of thefts occurs between 12:00 and 18:00.
###Code
data_bikes['Occurrence_Time'] = pd.to_datetime(data_bikes['Occurrence_Time'],format= '%H:%M:%S' ).dt.time
data_bikes['hour'] = pd.to_datetime(data_bikes['Occurrence_Time'],format= '%H:%M:%S' ).dt.hour
Dias_aux_time=data_bikes[data_bikes['Status']=='STOLEN'].groupby(['hour']).size().reset_index().rename(columns={0:'Count'})
fig = px.bar(Dias_aux_time, x='hour', y='Count',title='Bicycle theft in Toronto per hour')
fig.show()
###Output
_____no_output_____
###Markdown
***Which bicycles are stolen most often?*** According to the following table, the most frequently stolen bicycles are those costing up to 1,000 Canadian dollars.
###Code
bins = [1, 1000, 2000, 5000, 10000, 120000]
aux=data_bikes[data_bikes['Status']=='STOLEN']
aux_2=aux['Cost_of_Bike'].value_counts(bins=bins, sort=False)
aux_3=pd.DataFrame(aux_2).reset_index().rename(columns={'index':'Range','Cost_of_Bike':'Total'})
aux_3
###Output
_____no_output_____
###Markdown
***What are the most dangerous neighborhoods?*** The analysis below lists the top 10 neighborhoods by number of reported thefts; a natural next step is to examine what aspects these neighborhoods have in common.
###Code
Dias_aux_nei=data_bikes[(data_bikes['Status']=='STOLEN')].groupby(['Neighbourhood']).size().reset_index().rename(columns={0:'Count'})
Dias_aux_nei.sort_values(by=['Count'],ascending=False).head(10)
#fig = px.bar(Dias_aux_nei.sort_values(by=['Count'],ascending=False).head(10), x='Neighbourhood', y='Count',title='Top 10 neighbourhoods for bicycle theft in Toronto')
#fig.show()
###Output
_____no_output_____ |
bronze/.ipynb_checkpoints/B28_Quantum_State-checkpoint.ipynb | ###Markdown
prepared by Abuzer Yakaryilmaz (QLatvia) This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} 1 \mspace{-1.5mu} \rfloor } $ Quantum State[Watch Lecture](https://youtu.be/6OE96rgQz8s)_The overall probability must be 1 when we observe a quantum system._For example, the following vectors cannot be a valid quantum state:$$ \myvector{ \dfrac{1}{2} \\ \dfrac{1}{2} } \mbox{ and } \myvector{ \dfrac{\sqrt{3}}{2} \\ \dfrac{1}{\sqrt{2}} }.$$For the first vector, the probabilities of observing the states $\ket{0} $ and $ \ket{1} $ are $ \dfrac{1}{4} $. So, the overall probability of getting a result is $ \dfrac{1}{4} + \dfrac{1}{4} = \dfrac{1}{2} $, which is less than 1.For the second vector, the probabilities of observing the states $\ket{0} $ and $ \ket{1} $ are respectively $ \dfrac{3}{4} $ and $ \dfrac{1}{2} $. So, the overall probability of getting a result is $ \dfrac{3}{4} + \dfrac{1}{2} = \dfrac{5}{4} $, which is greater than 1. The summation of amplitude squares must be 1 for a valid quantum state. More formally, a quantum state can be represented by a vector having length 1, and vice versa.The summation of amplitude squares gives the square of the length of vector.But, this summation is 1, and its square root is also 1. So, we can use the term length in the definition. Technical notes: We represent a quantum state as $ \ket{u} $ instead of $ u $. Remember the relation between the length and dot product: $ \norm{u} = \sqrt{\dot{u}{u}} $. In quantum computation, we use inner product instead of dot product, which is defined on complex numbers. By using bra-ket notation, $ \norm{ \ket{u} } = \sqrt{ \braket{u}{u} } = 1 $, or equivalently $ \braket{u}{u} = 1 $, where $ \braket{u}{u} $ is a short form of $ \bra{u}\ket{u} $. For real-valued vectors, $ \braket{v}{v} = \dot{v}{v} $. 
Task 1 If the following vectors are valid quantum states defined with real numbers, then what can be the values of $a$ and $b$?$$ \ket{v} = \myrvector{a \\ -0.1 \\ -0.3 \\ 0.4 \\ 0.5} ~~~~~ \mbox{and} ~~~~~ \ket{u} = \myrvector{ \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{b}} \\ -\frac{1}{\sqrt{3}} }.$$
###Code
#
# your code is here
# (you may find the values by hand (in mind) as well)
#
###Output
_____no_output_____
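###Markdown
A possible numerical check (not the official solution, which is linked below): for $ \ket{v} $ the squared amplitudes must sum to 1, so $ a^2 = 1 - (0.01 + 0.09 + 0.16 + 0.25) = 0.49 $, i.e., $ a = \pm 0.7 $; for $ \ket{u} $ we need $ \frac{1}{2} + \frac{1}{b} + \frac{1}{3} = 1 $, so $ \frac{1}{b} = \frac{1}{6} $ and $ b = 6 $. A short sketch of this arithmetic:
###Code
# a sketch of the arithmetic for Task 1 (assuming real-valued amplitudes)
a_squared = 1 - ((-0.1)**2 + (-0.3)**2 + 0.4**2 + 0.5**2)
a = a_squared ** 0.5
print("a can be +", a, "or -", a) # 0.7
b = 1 / (1 - 1/2 - 1/3)
print("b =", b) # 6
# verify that both states now have squared amplitudes summing to 1
print(a**2 + 0.01 + 0.09 + 0.16 + 0.25, 1/2 + 1/b + 1/3)
###Output
_____no_output_____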
###Markdown
 click for our solution Quantum Operators Once the quantum state is defined, the definition of a quantum operator is very easy. Any length-preserving (square) matrix is a quantum operator, and vice versa. Task 2 Remember the Hadamard operator:$$ H = \hadamard.$$ Randomly create a 2-dimensional quantum state, and test whether the Hadamard operator preserves its length or not. Write a function that returns a randomly created 2-dimensional quantum state. Hint: Pick two random values between -100 and 100 for the amplitudes of state 0 and state 1 Find an appropriate normalization factor to divide each amplitude by, such that the length of the quantum state is 1 Write a function that determines whether a given vector is a valid quantum state or not. (Due to precision problems, the summation of squares may not be exactly 1 but very close to 1, e.g., 0.9999999999999998.) Repeat 10 times: Randomly pick a quantum state Check whether the picked quantum state is valid Multiply the Hadamard matrix with the randomly created quantum state Check whether the resulting quantum state is valid
###Code
#
# you may define your first function in a separate cell
#
from random import randrange
def random_quantum_state():
# quantum state
quantum_state=[0,0]
#
#
#
return quantum_state
#
# your code is here
#
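# A possible sketch (not the official solution): complete versions of the helpers
# described above, under a separate name so the skeleton above is left untouched.
def random_quantum_state_sketch():
    # pick two random amplitudes in [-100, 100], avoiding the zero vector
    first, second = randrange(-100, 101), randrange(-100, 101)
    while first == 0 and second == 0:
        first, second = randrange(-100, 101), randrange(-100, 101)
    # normalize by the length of the vector so that the squared amplitudes sum to 1
    length = (first**2 + second**2) ** 0.5
    return [first / length, second / length]
def is_quantum_state(state):
    # valid if the summation of amplitude squares is (very close to) 1
    return abs(sum(amplitude**2 for amplitude in state) - 1) < 1e-9
def apply_hadamard(state):
    sqrt_two = 2 ** 0.5
    return [(state[0] + state[1]) / sqrt_two, (state[0] - state[1]) / sqrt_two]
for _ in range(10):
    picked_state = random_quantum_state_sketch()
    new_state = apply_hadamard(picked_state)
    print(picked_state, is_quantum_state(picked_state), new_state, is_quantum_state(new_state))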
###Output
_____no_output_____ |
main/nbs/poc/visualization-proto2.ipynb | ###Markdown
Easily export jupyter cells to a python module: https://github.com/fastai/course-v3/blob/master/nbs/dl2/notebook2script.py
###Code
! python /tf/src/scripts/notebook2script.py visualization.ipynb
%matplotlib inline
! pip install -U scikit-learn
#export
from exp.nb_clustering import *
from exp.nb_evaluation import *
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import matplotlib.cm as cmx
import matplotlib.patches as patches
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.colors import LogNorm
cd /tf/src/data/features
###Output
/tf/src/data/features
###Markdown
Generate all the feature vectors(Skip if already done)
###Code
embdr = D2VEmbedder("/tf/src/data/doc2vec/model")
# Generate and Save Human Features
hman_dict = embdr("/tf/src/data/methods/DATA00M_[god-r]/test")
with open('hman_features.pickle', 'wb') as f:
pickle.dump(hman_dict, f, protocol=pickle.HIGHEST_PROTOCOL)
# Generate and Save GPT-2 Pretrained Features
m1_dict = embdr("/tf/src/data/samples/unconditional/m1_example")
with open('m1_features.pickle', 'wb') as f:
pickle.dump(m1_dict, f, protocol=pickle.HIGHEST_PROTOCOL)
###Output
_____no_output_____
###Markdown
Read in Feature Vectors
###Code
models_path = "/tf/src/data/features/output_space"
models_features = load_features(models_path)
len(models_features[0]), len(models_features[1])
###Output
_____no_output_____
###Markdown
Visualize Features
###Code
models_clusters = cluster(models_features, k_range = [2, 3, 4, 5])
_, _, _, kmeans = models_clusters[1]
kmeans.n_clusters
def setup_data(model):
feature_vectors, _, centroids, kmeans = model
# Step size of the mesh. Decrease to increase the quality of the VQ.
h = .02 # point in the mesh [x_min, x_max]x[y_min, y_max].
# Plot the decision boundary. For that, we will assign a color to each
x_min, x_max = feature_vectors[:, 0].min() - 1, feature_vectors[:, 0].max() + 1
y_min, y_max = feature_vectors[:, 1].min() - 1, feature_vectors[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
return feature_vectors, centroids, xx, yy, Z
def plot_features(models_clusters):
plt.figure(figsize=(12, 8))
# Create 2x2 sub plots
gs = gridspec.GridSpec(2, 2)
plt.clf()
for i, model in enumerate(models_clusters):
# Setup data to be plotted
feature_vectors, centroids, xx, yy, Z = setup_data(model)
# Plot data
plt.subplot(gs[0, i])
plt.imshow(Z, interpolation='nearest',
extent=(xx.min(), xx.max(), yy.min(), yy.max()),
cmap=plt.cm.Paired,
aspect='auto', origin='lower')
plt.plot(feature_vectors[:, 0], feature_vectors[:, 1], 'k.', markersize=2)
# Plot the centroids as a white X
plt.scatter(centroids[:, 0], centroids[:, 1],
marker='x', s=169, linewidths=3,
color='w', zorder=10)
plt.title('K-means clustering\n'
'(PCA & T-SNE - reduced data)\n'
'Centroids are marked with white cross')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.subplot(gs[1, :])
colmap = {0: 'b.', 1: 'r.'}
plt.title('Blue denotes Human Methods and Red denotes GPT-2 Unconditional Samples')
for i, model in enumerate(models_clusters):
feature_vectors, _, _, _ = model
plt.plot(feature_vectors[:, 0], feature_vectors[:, 1], colmap[i], markersize=10)
# plt.xticks(())
# plt.yticks(())
plt.show()
plot_features(models_clusters)
###Output
_____no_output_____
###Markdown
Gaussian Mixture Visualization
###Code
# TODO: a general GMM visualization helper was left unfinished here; the 1D, 2D and 3D sections below cover it instead.
###Output
_____no_output_____
###Markdown
Visualize 1D
###Code
models_clusters = cluster(models_features, k_range = [2, 3, 4, 5], dims = 1)
dims = 2
for model in models_clusters:
feature_vectors, _, _, kmeans = model
print(feature_vectors.shape)
# feature_vectors = reduce_dims(feature_vectors, dims)
gmm = generate_distributions(feature_vectors, kmeans.n_clusters)
fig = plt.figure()
ax = fig.add_subplot(111)
x = np.linspace(-10, 10, 1000).reshape(1000,1)
logprob = gmm.score_samples(x)
pdf = np.exp(logprob)
#print np.max(pdf) -> 19.8409464401 !?
ax.plot(x, pdf, '-k')
plt.show()
###Output
_____no_output_____
###Markdown
Visualize 2D
###Code
dims = 2
models_clusters = cluster(models_features, k_range = [2, 3, 4, 5], dims = dims)
# From http://www.itzikbs.com/gaussian-mixture-model-gmm-3d-point-cloud-classification-primer
def visualize_2D_gmm(points, w, mu, stdev, export=True):
'''
plots points and their corresponding gmm model in 2D
Input:
points: N X 2, sampled points
w: n_gaussians, gmm weights
mu: 2 X n_gaussians, gmm means
stdev: 2 X n_gaussians, gmm standard deviation (assuming diagonal covariance matrix)
Output:
None
'''
n_gaussians = mu.shape[1]
# print(n_gaussians)
N = int(np.round(points.shape[0] / n_gaussians))
# Visualize data
fig = plt.figure(figsize=(8, 8))
axes = plt.gca()
# axes.set_xlim([-100, 1])
# axes.set_ylim([-1, 1])
plt.set_cmap('Set1')
colors = cmx.Set1(np.linspace(0, 1, n_gaussians))
for i in range(n_gaussians):
# if
idx = range(i * N, (i + 1) * N)
plt.scatter(points[idx, 0], points[idx, 1], alpha=0.3, c=colors[i])
for j in range(8):
# print(stdev.shape, stdev[0, i], stdev[1, i])
axes.add_patch(
patches.Ellipse(mu[:, i], width=(j+1) * stdev[0, i], height=(j+1) * stdev[1, i], fill=False, color=[0.0, 0.0, 1.0, 1.0/(0.5*j+1)]))
plt.title('GMM')
plt.xlabel('X')
plt.ylabel('Y')
if export:
if not os.path.exists('images/'): os.mkdir('images/')
plt.savefig('images/2D_GMM_demonstration.png', dpi=100, format='png')
plt.show()
feature_vectors, _, _, kmeans = models_clusters[1]
gmm = generate_distributions(feature_vectors, kmeans.n_clusters)
feature_vectors.shape
visualize_2D_gmm(feature_vectors, gmm.weights_, gmm.means_.T, np.sqrt(gmm.covariances_).T)
def plot_2d(models_clusters):
plt.figure(figsize=(12, 8))
# Create 2x2 sub plots
gs = gridspec.GridSpec(1, 2)
plt.clf()
for i, model in enumerate(models_clusters):
# Setup data to be plotted
feature_vectors, _, _, kmeans = model
gmm = generate_distributions(feature_vectors, kmeans.n_clusters)
# Plot data
plt.subplot(gs[0, i])
# display predicted scores by the model as a contour plot
delta = 30.
x = np.linspace(feature_vectors[:, 0].min() - delta, feature_vectors[:, 0].max() + delta)
y = np.linspace(feature_vectors[:, 1].min() - delta, feature_vectors[:, 1].max() + delta)
X, Y = np.meshgrid(x, y)
XX = np.array([X.ravel(), Y.ravel()]).T
Z = -gmm.score_samples(XX)
Z = Z.reshape(X.shape)
CS = plt.contour(X, Y, Z, norm=LogNorm(vmin=1.0, vmax=100.0),
levels=np.logspace(1, 2, 10))
CB = plt.colorbar(CS, shrink=0.8, extend='both')
plt.scatter(feature_vectors[:, 0], feature_vectors[:, 1], .8)
plt.title('Negative log-likelihood predicted by a GMM')
plt.axis('tight')
plt.show()
plot_2d(models_clusters)
feature_vectors, _, _, kmeans = models_clusters[0]
gmm = generate_distributions(feature_vectors, kmeans.n_clusters)
feature_vectors.shape, kmeans.n_clusters
feature_vectors[:, 0].max(), feature_vectors[:, 1].min()
n_samples = 300
# generate random sample, two components
np.random.seed(0)
# generate spherical data centered on (20, 20)
shifted_gaussian = np.random.randn(n_samples, 2) + np.array([20, 20])
# generate zero centered stretched Gaussian data
C = np.array([[0., -0.7], [3.5, .7]])
stretched_gaussian = np.dot(np.random.randn(n_samples, 2), C)
# concatenate the two datasets into the final training set
X_train = np.vstack([shifted_gaussian, stretched_gaussian])
# fit a Gaussian Mixture Model with two components
# clf = GaussianMixture(n_components=2, covariance_type='full')
# clf.fit(X_train)
# display predicted scores by the model as a contour plot
delta = 30.
x = np.linspace(feature_vectors[:, 0].min() - delta, feature_vectors[:, 0].max() + delta)
y = np.linspace(feature_vectors[:, 1].min() - delta, feature_vectors[:, 1].max() + delta)
X, Y = np.meshgrid(x, y)
XX = np.array([X.ravel(), Y.ravel()]).T
Z = -gmm.score_samples(XX)
Z = Z.reshape(X.shape)
CS = plt.contour(X, Y, Z, norm=LogNorm(vmin=1.0, vmax=100.0),
levels=np.logspace(1, 2, 10))
CB = plt.colorbar(CS, shrink=0.8, extend='both')
plt.scatter(feature_vectors[:, 0], feature_vectors[:, 1], .8)
plt.title('Negative log-likelihood predicted by a GMM')
plt.axis('tight')
plt.show()
###Output
_____no_output_____
###Markdown
Visualize 3D
###Code
dims = 3
models_clusters = cluster(models_features, k_range = [2, 3, 4, 5], dims = dims)
# From http://www.itzikbs.com/gaussian-mixture-model-gmm-3d-point-cloud-classification-primer
def plot_sphere(w=0, c=[0,0,0], r=[1, 1, 1], subdev=10, ax=None, sigma_multiplier=3):
'''
plot a sphere surface
Input:
c: 3 elements list, sphere center
r: 3 element list, sphere original scale in each axis ( allowing to draw elipsoids)
subdiv: scalar, number of subdivisions (subdivision^2 points sampled on the surface)
ax: optional pyplot axis object to plot the sphere in.
sigma_multiplier: sphere additional scale (choosing an std value when plotting gaussians)
Output:
ax: pyplot axis object
'''
if ax is None:
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
pi = np.pi
cos = np.cos
sin = np.sin
phi, theta = np.mgrid[0.0:pi:complex(0,subdev), 0.0:2.0 * pi:complex(0,subdev)]
x = sigma_multiplier*r[0] * sin(phi) * cos(theta) + c[0]
y = sigma_multiplier*r[1] * sin(phi) * sin(theta) + c[1]
z = sigma_multiplier*r[2] * cos(phi) + c[2]
cmap = cmx.ScalarMappable()
cmap.set_cmap('jet')
c = cmap.to_rgba(w)
ax.plot_surface(x, y, z, color=c, alpha=0.2, linewidth=1)
return ax
# From http://www.itzikbs.com/gaussian-mixture-model-gmm-3d-point-cloud-classification-primer
def visualize_3d_gmm(points, w, mu, stdev, export=True):
'''
plots points and their corresponding gmm model in 3D
Input:
points: N X 3, sampled points
w: n_gaussians, gmm weights
mu: 3 X n_gaussians, gmm means
stdev: 3 X n_gaussians, gmm standard deviation (assuming diagonal covariance matrix)
Output:
None
'''
n_gaussians = mu.shape[1]
N = int(np.round(points.shape[0] / n_gaussians))
# Visualize data
fig = plt.figure(figsize=(8, 8))
axes = fig.add_subplot(111, projection='3d')
# axes.set_xlim([-1, 1])
# axes.set_ylim([-1, 1])
# axes.set_zlim([-1, 1])
plt.set_cmap('Set1')
colors = cmx.Set1(np.linspace(0, 1, n_gaussians))
for i in range(n_gaussians):
idx = range(i * N, (i + 1) * N)
axes.scatter(points[idx, 0], points[idx, 1], points[idx, 2], alpha=0.3, c=colors[i])
plot_sphere(w=w[i], c=mu[:, i], r=stdev[:, i], ax=axes)
plt.title('3D GMM')
axes.set_xlabel('X')
axes.set_ylabel('Y')
axes.set_zlabel('Z')
axes.view_init(35.246, 45)
# if export:
# if not os.path.exists('images/'): os.mkdir('images/')
# plt.savefig('images/3D_GMM_demonstration.png', dpi=100, format='png')
plt.show()
feature_vectors, _, _, kmeans = models_clusters[0]
gmm = generate_distributions(feature_vectors, 2)
kmeans.n_clusters
visualize_3d_gmm(feature_vectors, gmm.weights_, gmm.means_.T, np.sqrt(gmm.covariances_).T)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(feature_vectors[:, 0], feature_vectors[:, 1], feature_vectors[:, 2])
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
X, Y = np.mgrid[-1:1:30j, -1:1:30j] # alternatively: np.mgrid[-100:100:30j, -100:100:30j]
XX = np.array([X.ravel(), Y.ravel()]).T
# Z = -gmm.score_samples(XX)
Z = np.sin(np.pi*X)*np.sin(np.pi*Y)
# ax.plot_surface(X, Y, Z, cmap="autumn_r", lw=0.5, rstride=1, cstride=1)
# ax.contour(X, Y, Z, 100, lw=3, cmap="autumn_r", linestyles="solid", offset=-1)
# Z here is the sin(pi*x)*sin(pi*y) test surface in [-1, 1], so the LogNorm / logspace
# levels used for the GMM scores would yield no contours; plain levels are used instead,
# and the colorbar must be created from the figure, not the 3D axes.
CS = ax.contour(X, Y, Z, 20, linewidths=3, colors="k", linestyles="solid")
CB = fig.colorbar(CS, shrink=0.8, extend='both')
plt.show()
###Output
_____no_output_____ |
1. pandas/03_Grouping/Occupation/Exercises_with_solutions.ipynb | ###Markdown
Occupation Introduction:Special thanks to: https://github.com/justmarkham for sharing the dataset and materials. Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/u.user). Step 3. Assign it to a variable called users.
###Code
users = pd.read_table('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/u.user',
sep='|', index_col='user_id')
users.head()
###Output
_____no_output_____
###Markdown
Step 4. Discover what is the mean age per occupation
###Code
users.groupby('occupation').age.mean()
###Output
_____no_output_____
###Markdown
Step 5. Discover the Male ratio per occupation and sort it from the most to the least
###Code
# create a function
def gender_to_numeric(x):
if x == 'M':
return 1
if x == 'F':
return 0
# apply the function to the gender column and create a new column
users['gender_n'] = users['gender'].apply(gender_to_numeric)
a = users.groupby('occupation').gender_n.sum() / users.occupation.value_counts() * 100
# sort to the most male
a.sort_values(ascending = False)
###Output
_____no_output_____
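###Markdown
A possible shorter variant (not part of the original exercise): because a boolean Series can be averaged directly, the male ratio can also be computed without the helper function or the extra `gender_n` column. The line below is only a sketch of that alternative.
###Code
# equivalent one-liner: mean of the boolean "is male" flag per occupation, as a percentage
(users['gender'] == 'M').groupby(users['occupation']).mean().mul(100).sort_values(ascending = False)
###Output
_____no_output_____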
###Markdown
Step 6. For each occupation, calculate the minimum and maximum ages
###Code
users.groupby('occupation').age.agg(['min', 'max'])
###Output
_____no_output_____
###Markdown
Step 7. For each combination of occupation and gender, calculate the mean age
###Code
users.groupby(['occupation', 'gender']).age.mean()
###Output
_____no_output_____
###Markdown
Step 8. For each occupation present the percentage of women and men
###Code
# create a data frame and apply count to gender
gender_ocup = users.groupby(['occupation', 'gender']).agg({'gender': 'count'})
# create a DataFrame and apply count for each occupation
occup_count = users.groupby(['occupation']).agg('count')
# divide the gender_ocup per the occup_count and multiply per 100
occup_gender = gender_ocup.div(occup_count, level = "occupation") * 100
# present all rows from the 'gender column'
occup_gender.loc[: , 'gender']
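# A shorter alternative (sketch only) that yields the same percentages:
# value_counts with normalize=True returns the within-occupation share of each gender.
users.groupby('occupation')['gender'].value_counts(normalize = True) * 100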
###Output
_____no_output_____ |
Modelo Inicial No Usar/5. Credit Risk Modeling - LGD and EAD Models - With Comments - 11-7.ipynb | ###Markdown
Import Libraries
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Import Data
###Code
# Import data.
loan_data_preprocessed_backup = pd.read_csv('loan_data_2007_2014_preprocessed.csv')
###Output
_____no_output_____
###Markdown
Explore Data
###Code
loan_data_preprocessed = loan_data_preprocessed_backup.copy()
loan_data_preprocessed.columns.values
# Displays all column names.
loan_data_preprocessed.head()
loan_data_preprocessed.tail()
loan_data_defaults = loan_data_preprocessed[loan_data_preprocessed['loan_status'].isin(['Charged Off','Does not meet the credit policy. Status:Charged Off'])]
# Here we take only the accounts that were charged-off (written-off).
loan_data_defaults.shape
pd.options.display.max_rows = None
# Sets the pandas dataframe options to display all columns/ rows.
loan_data_defaults.isnull().sum()
###Output
_____no_output_____
###Markdown
Independent Variables
###Code
loan_data_defaults['mths_since_last_delinq'].fillna(0, inplace = True)
# We fill the missing values with zeroes.
#loan_data_defaults['mths_since_last_delinq'].fillna(loan_data_defaults['mths_since_last_delinq'].max() + 12, inplace=True)
loan_data_defaults['mths_since_last_record'].fillna(0, inplace=True)
# We fill the missing values with zeroes.
###Output
_____no_output_____
###Markdown
Dependent Variables
###Code
loan_data_defaults['recovery_rate'] = loan_data_defaults['recoveries'] / loan_data_defaults['funded_amnt']
# We calculate the dependent variable for the LGD model: recovery rate.
# It is the ratio of recoveries and funded amount.
loan_data_defaults['recovery_rate'].describe()
# Shows some descriptive statistics for the values of a column.
loan_data_defaults['recovery_rate'] = np.where(loan_data_defaults['recovery_rate'] > 1, 1, loan_data_defaults['recovery_rate'])
loan_data_defaults['recovery_rate'] = np.where(loan_data_defaults['recovery_rate'] < 0, 0, loan_data_defaults['recovery_rate'])
# We set recovery rates that are greater than 1 to 1 and recovery rates that are less than 0 to 0.
loan_data_defaults['recovery_rate'].describe()
# Shows some descriptive statistics for the values of a column.
loan_data_defaults['CCF'] = (loan_data_defaults['funded_amnt'] - loan_data_defaults['total_rec_prncp']) / loan_data_defaults['funded_amnt']
# We calculate the dependent variable for the EAD model: credit conversion factor.
# It is the ratio of the difference of the amount used at the moment of default to the total funded amount.
loan_data_defaults['CCF'].describe()
# Shows some descriptive statistics for the values of a column.
loan_data_defaults.to_csv('loan_data_defaults.csv')
# We save the data to a CSV file.
###Output
_____no_output_____
###Markdown
Explore Dependent Variables
###Code
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
plt.hist(loan_data_defaults['recovery_rate'], bins = 100)
# We plot a histogram of a variable with 100 bins.
plt.hist(loan_data_defaults['recovery_rate'], bins = 50)
# We plot a histogram of a variable with 50 bins.
plt.hist(loan_data_defaults['CCF'], bins = 100)
# We plot a histogram of a variable with 100 bins.
loan_data_defaults['recovery_rate_0_1'] = np.where(loan_data_defaults['recovery_rate'] == 0, 0, 1)
# We create a new variable which is 0 if recovery rate is 0 and 1 otherwise.
loan_data_defaults['recovery_rate_0_1']
###Output
_____no_output_____
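###Markdown
A quick sanity check one might add here (not part of the original notebook): before fitting the stage-1 model, it is useful to see how many defaulted accounts recovered nothing versus something, i.e. the class balance of the new 0/1 variable.
###Code
# Class balance of the stage 1 target: 0 = nothing recovered, 1 = something recovered.
loan_data_defaults['recovery_rate_0_1'].value_counts()
# The same as proportions.
loan_data_defaults['recovery_rate_0_1'].value_counts(normalize = True)
###Output
_____no_output_____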
###Markdown
LGD Model Splitting Data
###Code
from sklearn.model_selection import train_test_split
# LGD model stage 1 datasets: recovery rate 0 or greater than 0.
lgd_inputs_stage_1_train, lgd_inputs_stage_1_test, lgd_targets_stage_1_train, lgd_targets_stage_1_test = train_test_split(loan_data_defaults.drop(['good_bad', 'recovery_rate','recovery_rate_0_1', 'CCF'], axis = 1), loan_data_defaults['recovery_rate_0_1'], test_size = 0.2, random_state = 42)
# Takes a set of inputs and a set of targets as arguments. Splits the inputs and the targets into four dataframes:
# Inputs - Train, Inputs - Test, Targets - Train, Targets - Test.
###Output
_____no_output_____
###Markdown
Preparing the Inputs
###Code
features_all = ['grade:A',
'grade:B',
'grade:C',
'grade:D',
'grade:E',
'grade:F',
'grade:G',
'home_ownership:MORTGAGE',
'home_ownership:NONE',
'home_ownership:OTHER',
'home_ownership:OWN',
'home_ownership:RENT',
'verification_status:Not Verified',
'verification_status:Source Verified',
'verification_status:Verified',
'purpose:car',
'purpose:credit_card',
'purpose:debt_consolidation',
'purpose:educational',
'purpose:home_improvement',
'purpose:house',
'purpose:major_purchase',
'purpose:medical',
'purpose:moving',
'purpose:other',
'purpose:renewable_energy',
'purpose:small_business',
'purpose:vacation',
'purpose:wedding',
'initial_list_status:f',
'initial_list_status:w',
'term_int',
'emp_length_int',
'mths_since_issue_d',
'mths_since_earliest_cr_line',
'funded_amnt',
'int_rate',
'installment',
'annual_inc',
'dti',
'delinq_2yrs',
'inq_last_6mths',
'mths_since_last_delinq',
'mths_since_last_record',
'open_acc',
'pub_rec',
'total_acc',
'acc_now_delinq',
'total_rev_hi_lim']
# List of all independent variables for the models.
features_reference_cat = ['grade:G',
'home_ownership:RENT',
'verification_status:Verified',
'purpose:credit_card',
'initial_list_status:f']
# List of the dummy variable reference categories.
lgd_inputs_stage_1_train = lgd_inputs_stage_1_train[features_all]
# Here we keep only the variables we need for the model.
lgd_inputs_stage_1_train = lgd_inputs_stage_1_train.drop(features_reference_cat, axis = 1)
# Here we remove the dummy variable reference categories.
lgd_inputs_stage_1_train.isnull().sum()
# Check for missing values. We check whether the value of each row for each column is missing or not,
# then sum accross columns.
###Output
_____no_output_____
###Markdown
Estimating the Model
###Code
# P values for sklearn logistic regression.
# Class to display p-values for logistic regression in sklearn.
from sklearn import linear_model
import scipy.stats as stat
class LogisticRegression_with_p_values:
def __init__(self,*args,**kwargs):#,**kwargs):
self.model = linear_model.LogisticRegression(*args,**kwargs)#,**args)
def fit(self,X,y):
self.model.fit(X,y)
#### Get p-values for the fitted model ####
denom = (2.0 * (1.0 + np.cosh(self.model.decision_function(X))))
denom = np.tile(denom,(X.shape[1],1)).T
F_ij = np.dot((X / denom).T,X) ## Fisher Information Matrix
Cramer_Rao = np.linalg.inv(F_ij) ## Inverse Information Matrix
sigma_estimates = np.sqrt(np.diagonal(Cramer_Rao))
z_scores = self.model.coef_[0] / sigma_estimates # z-score for eaach model coefficient
p_values = [stat.norm.sf(abs(x)) * 2 for x in z_scores] ### two tailed test for p-values
self.coef_ = self.model.coef_
self.intercept_ = self.model.intercept_
#self.z_scores = z_scores
self.p_values = p_values
#self.sigma_estimates = sigma_estimates
#self.F_ij = F_ij
reg_lgd_st_1 = LogisticRegression_with_p_values()
# We create an instance of an object from the 'LogisticRegression' class.
reg_lgd_st_1.fit(lgd_inputs_stage_1_train, lgd_targets_stage_1_train)
# Estimates the coefficients of the object from the 'LogisticRegression' class
# with inputs (independent variables) contained in the first dataframe
# and targets (dependent variables) contained in the second dataframe.
feature_name = lgd_inputs_stage_1_train.columns.values
# Stores the names of the columns of a dataframe in a variable.
summary_table = pd.DataFrame(columns = ['Feature name'], data = feature_name)
# Creates a dataframe with a column titled 'Feature name' and row values contained in the 'feature_name' variable.
summary_table['Coefficients'] = np.transpose(reg_lgd_st_1.coef_)
# Creates a new column in the dataframe, called 'Coefficients',
# with row values the transposed coefficients from the 'LogisticRegression' object.
summary_table.index = summary_table.index + 1
# Increases the index of every row of the dataframe with 1.
summary_table.loc[0] = ['Intercept', reg_lgd_st_1.intercept_[0]]
# Assigns values of the row with index 0 of the dataframe.
summary_table = summary_table.sort_index()
# Sorts the dataframe by index.
p_values = reg_lgd_st_1.p_values
# We take the newly added 'p_values' attribute of the fitted model and store it in a variable 'p_values'.
p_values = np.append(np.nan,np.array(p_values))
# We add the value 'NaN' in the beginning of the variable with p-values.
summary_table['p_values'] = p_values
# In the 'summary_table' dataframe, we add a new column, called 'p_values', containing the values from the 'p_values' variable.
summary_table
summary_table = pd.DataFrame(columns = ['Feature name'], data = feature_name)
summary_table['Coefficients'] = np.transpose(reg_lgd_st_1.coef_)
summary_table.index = summary_table.index + 1
summary_table.loc[0] = ['Intercept', reg_lgd_st_1.intercept_[0]]
summary_table = summary_table.sort_index()
p_values = reg_lgd_st_1.p_values
p_values = np.append(np.nan,np.array(p_values))
summary_table['p_values'] = p_values
summary_table
###Output
_____no_output_____
###Markdown
Testing the Model
###Code
lgd_inputs_stage_1_test = lgd_inputs_stage_1_test[features_all]
# Here we keep only the variables we need for the model.
lgd_inputs_stage_1_test = lgd_inputs_stage_1_test.drop(features_reference_cat, axis = 1)
# Here we remove the dummy variable reference categories.
y_hat_test_lgd_stage_1 = reg_lgd_st_1.model.predict(lgd_inputs_stage_1_test)
# Calculates the predicted values for the dependent variable (targets)
# based on the values of the independent variables (inputs) supplied as an argument.
y_hat_test_lgd_stage_1
y_hat_test_proba_lgd_stage_1 = reg_lgd_st_1.model.predict_proba(lgd_inputs_stage_1_test)
# Calculates the predicted probability values for the dependent variable (targets)
# based on the values of the independent variables (inputs) supplied as an argument.
y_hat_test_proba_lgd_stage_1
# This is an array of arrays of predicted class probabilities for all classes.
# In this case, the first value of every sub-array is the probability for the observation to belong to the first class, i.e. 0,
# and the second value is the probability for the observation to belong to the second class, i.e. 1.
y_hat_test_proba_lgd_stage_1 = y_hat_test_proba_lgd_stage_1[: ][: , 1]
# Here we take all the arrays in the array, and from each array, we take all rows, and only the element with index 1,
# that is, the second element.
# In other words, we take only the probabilities for being 1.
y_hat_test_proba_lgd_stage_1
lgd_targets_stage_1_test_temp = lgd_targets_stage_1_test
lgd_targets_stage_1_test_temp.reset_index(drop = True, inplace = True)
# We reset the index of a dataframe.
df_actual_predicted_probs = pd.concat([lgd_targets_stage_1_test_temp, pd.DataFrame(y_hat_test_proba_lgd_stage_1)], axis = 1)
# Concatenates two dataframes.
df_actual_predicted_probs.columns = ['lgd_targets_stage_1_test', 'y_hat_test_proba_lgd_stage_1']
df_actual_predicted_probs.index = lgd_inputs_stage_1_test.index
# Makes the index of one dataframe equal to the index of another dataframe.
df_actual_predicted_probs.head()
###Output
_____no_output_____
###Markdown
Estimating the Accuracy of the Model
###Code
tr = 0.5
# We create a new column with an indicator,
# where every observation that has predicted probability greater than the threshold has a value of 1,
# and every observation that has predicted probability lower than the threshold has a value of 0.
df_actual_predicted_probs['y_hat_test_lgd_stage_1'] = np.where(df_actual_predicted_probs['y_hat_test_proba_lgd_stage_1'] > tr, 1, 0)
pd.crosstab(df_actual_predicted_probs['lgd_targets_stage_1_test'], df_actual_predicted_probs['y_hat_test_lgd_stage_1'], rownames = ['Actual'], colnames = ['Predicted'])
# Creates a cross-table where the actual values are displayed by rows and the predicted values by columns.
# This table is known as a Confusion Matrix.
pd.crosstab(df_actual_predicted_probs['lgd_targets_stage_1_test'], df_actual_predicted_probs['y_hat_test_lgd_stage_1'], rownames = ['Actual'], colnames = ['Predicted']) / df_actual_predicted_probs.shape[0]
# Here we divide each value of the table by the total number of observations,
# thus getting percentages, or, rates.
(pd.crosstab(df_actual_predicted_probs['lgd_targets_stage_1_test'], df_actual_predicted_probs['y_hat_test_lgd_stage_1'], rownames = ['Actual'], colnames = ['Predicted']) / df_actual_predicted_probs.shape[0]).iloc[0, 0] + (pd.crosstab(df_actual_predicted_probs['lgd_targets_stage_1_test'], df_actual_predicted_probs['y_hat_test_lgd_stage_1'], rownames = ['Actual'], colnames = ['Predicted']) / df_actual_predicted_probs.shape[0]).iloc[1, 1]
# Here we calculate Accuracy of the model, which is the sum of the diagonal rates.
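# Optional cross-check (not part of the original code above): the same confusion matrix and
# accuracy can also be obtained directly from sklearn's standard metrics functions.
from sklearn.metrics import confusion_matrix, accuracy_score
print(confusion_matrix(df_actual_predicted_probs['lgd_targets_stage_1_test'], df_actual_predicted_probs['y_hat_test_lgd_stage_1']))
print(accuracy_score(df_actual_predicted_probs['lgd_targets_stage_1_test'], df_actual_predicted_probs['y_hat_test_lgd_stage_1']))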
from sklearn.metrics import roc_curve, roc_auc_score
fpr, tpr, thresholds = roc_curve(df_actual_predicted_probs['lgd_targets_stage_1_test'], df_actual_predicted_probs['y_hat_test_proba_lgd_stage_1'])
# Returns the Receiver Operating Characteristic (ROC) Curve from a set of actual values and their predicted probabilities.
# As a result, we get three arrays: the false positive rates, the true positive rates, and the thresholds.
# we store each of the three arrays in a separate variable.
plt.plot(fpr, tpr)
# We plot the false positive rate along the x-axis and the true positive rate along the y-axis,
# thus plotting the ROC curve.
plt.plot(fpr, fpr, linestyle = '--', color = 'k')
# We plot a secondary diagonal line, with dashed line style and black color.
plt.xlabel('False positive rate')
# We name the x-axis "False positive rate".
plt.ylabel('True positive rate')
# We name the y-axis "True positive rate".
plt.title('ROC curve')
# We name the graph "ROC curve".
AUROC = roc_auc_score(df_actual_predicted_probs['lgd_targets_stage_1_test'], df_actual_predicted_probs['y_hat_test_proba_lgd_stage_1'])
# Calculates the Area Under the Receiver Operating Characteristic Curve (AUROC)
# from a set of actual values and their predicted probabilities.
AUROC
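# As an aside (not in the original cell): credit risk practitioners often also report the
# Gini coefficient, which is a simple linear transformation of the AUROC.
Gini = AUROC * 2 - 1
print('Gini coefficient:', Gini)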
###Output
_____no_output_____
###Markdown
Saving the Model
###Code
import pickle
pickle.dump(reg_lgd_st_1, open('lgd_model_stage_1.sav', 'wb'))
# Here we export our model to a 'SAV' file with file name 'lgd_model_stage_1.sav'.
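# For completeness (illustrative sketch, not in the original notebook): the saved model can later
# be loaded back with pickle.load, e.g. when applying the stage 1 LGD model to new data.
reg_lgd_st_1_loaded = pickle.load(open('lgd_model_stage_1.sav', 'rb'))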
###Output
_____no_output_____
###Markdown
Stage 2 – Linear Regression
###Code
lgd_stage_2_data = loan_data_defaults[loan_data_defaults['recovery_rate_0_1'] == 1]
# Here we take only rows where the original recovery rate variable is greater than zero,
# i.e. where the indicator variable we created is equal to 1.
# LGD model stage 2 datasets: how much more than 0 is the recovery rate
lgd_inputs_stage_2_train, lgd_inputs_stage_2_test, lgd_targets_stage_2_train, lgd_targets_stage_2_test = train_test_split(lgd_stage_2_data.drop(['good_bad', 'recovery_rate','recovery_rate_0_1', 'CCF'], axis = 1), lgd_stage_2_data['recovery_rate'], test_size = 0.2, random_state = 42)
# Takes a set of inputs and a set of targets as arguments. Splits the inputs and the targets into four dataframes:
# Inputs - Train, Inputs - Test, Targets - Train, Targets - Test.
from sklearn import linear_model
from sklearn.metrics import mean_squared_error, r2_score
# Since the p-values are obtained through certain statistics, we need the 'stat' module from scipy.stats
import scipy.stats as stat
# Since we are using an object oriented language such as Python, we can simply define our own
# LinearRegression class (the same one from sklearn)
# By typing the code below we will overwrite a part of the class with one that includes p-values
# Here's the full source code of the ORIGINAL class: https://github.com/scikit-learn/scikit-learn/blob/7b136e9/sklearn/linear_model/base.py#L362
class LinearRegression(linear_model.LinearRegression):
"""
LinearRegression class after sklearn's, but calculate t-statistics
and p-values for model coefficients (betas).
Additional attributes available after .fit()
are `t` and `p` which are of the shape (y.shape[1], X.shape[1])
which is (n_features, n_coefs)
This class sets the intercept to 0 by default, since usually we include it
in X.
"""
# nothing changes in __init__
def __init__(self, fit_intercept=True, normalize=False, copy_X=True,
n_jobs=1):
self.fit_intercept = fit_intercept
self.normalize = normalize
self.copy_X = copy_X
self.n_jobs = n_jobs
def fit(self, X, y, n_jobs=1):
self = super(LinearRegression, self).fit(X, y, n_jobs)
# Calculate SSE (sum of squared errors)
# and SE (standard error)
sse = np.sum((self.predict(X) - y) ** 2, axis=0) / float(X.shape[0] - X.shape[1])
se = np.array([np.sqrt(np.diagonal(sse * np.linalg.inv(np.dot(X.T, X))))])
# compute the t-statistic for each feature
self.t = self.coef_ / se
# find the p-value for each feature
self.p = np.squeeze(2 * (1 - stat.t.cdf(np.abs(self.t), y.shape[0] - X.shape[1])))
return self
import scipy.stats as stat
class LinearRegression(linear_model.LinearRegression):
def __init__(self, fit_intercept=True, normalize=False, copy_X=True,
n_jobs=1):
self.fit_intercept = fit_intercept
self.normalize = normalize
self.copy_X = copy_X
self.n_jobs = n_jobs
def fit(self, X, y, n_jobs=1):
self = super(LinearRegression, self).fit(X, y, n_jobs)
sse = np.sum((self.predict(X) - y) ** 2, axis=0) / float(X.shape[0] - X.shape[1])
se = np.array([np.sqrt(np.diagonal(sse * np.linalg.inv(np.dot(X.T, X))))])
self.t = self.coef_ / se
self.p = np.squeeze(2 * (1 - stat.t.cdf(np.abs(self.t), y.shape[0] - X.shape[1])))
return self
lgd_inputs_stage_2_train = lgd_inputs_stage_2_train[features_all]
# Here we keep only the variables we need for the model.
lgd_inputs_stage_2_train = lgd_inputs_stage_2_train.drop(features_reference_cat, axis = 1)
# Here we remove the dummy variable reference categories.
reg_lgd_st_2 = LinearRegression()
# We create an instance of the 'LinearRegression' class defined above.
reg_lgd_st_2.fit(lgd_inputs_stage_2_train, lgd_targets_stage_2_train)
# Estimates the coefficients of the linear regression
# with inputs (independent variables) contained in the first dataframe
# and targets (dependent variables) contained in the second dataframe.
feature_name = lgd_inputs_stage_2_train.columns.values
# Stores the names of the columns of a dataframe in a variable.
summary_table = pd.DataFrame(columns = ['Feature name'], data = feature_name)
# Creates a dataframe with a column titled 'Feature name' and row values contained in the 'feature_name' variable.
summary_table['Coefficients'] = np.transpose(reg_lgd_st_2.coef_)
# Creates a new column in the dataframe, called 'Coefficients',
# with row values the transposed coefficients from the 'LinearRegression' object.
summary_table.index = summary_table.index + 1
# Increases the index of every row of the dataframe by 1.
summary_table.loc[0] = ['Intercept', reg_lgd_st_2.intercept_]
# Assigns values of the row with index 0 of the dataframe.
summary_table = summary_table.sort_index()
# Sorts the dataframe by index.
p_values = reg_lgd_st_2.p
# We take the newly added 'p' attribute of the fitted model and store it in a variable 'p_values'.
p_values = np.append(np.nan,np.array(p_values))
# We add the value 'NaN' in the beginning of the variable with p-values.
summary_table['p_values'] = p_values.round(3)
# In the 'summary_table' dataframe, we add a new column, called 'p_values', containing the values from the 'p_values' variable.
summary_table
summary_table = pd.DataFrame(columns = ['Feature name'], data = feature_name)
summary_table['Coefficients'] = np.transpose(reg_lgd_st_2.coef_)
summary_table.index = summary_table.index + 1
summary_table.loc[0] = ['Intercept', reg_lgd_st_2.intercept_]
summary_table = summary_table.sort_index()
p_values = reg_lgd_st_2.p
p_values = np.append(np.nan,np.array(p_values))
summary_table['p_values'] = p_values.round(3)
summary_table
###Output
_____no_output_____
###Markdown
Stage 2 – Linear Regression Evaluation
###Code
lgd_inputs_stage_2_test = lgd_inputs_stage_2_test[features_all]
# Here we keep only the variables we need for the model.
lgd_inputs_stage_2_test = lgd_inputs_stage_2_test.drop(features_reference_cat, axis = 1)
# Here we remove the dummy variable reference categories.
lgd_inputs_stage_2_test.columns.values
# Displays the names of the columns kept in the test inputs dataframe.
y_hat_test_lgd_stage_2 = reg_lgd_st_2.predict(lgd_inputs_stage_2_test)
# Calculates the predicted values for the dependent variable (targets)
# based on the values of the independent variables (inputs) supplied as an argument.
lgd_targets_stage_2_test_temp = lgd_targets_stage_2_test
lgd_targets_stage_2_test_temp = lgd_targets_stage_2_test_temp.reset_index(drop = True)
# We reset the index of a dataframe.
pd.concat([lgd_targets_stage_2_test_temp, pd.DataFrame(y_hat_test_lgd_stage_2)], axis = 1).corr()
# We calculate the correlation between actual and predicted values.
sns.distplot(lgd_targets_stage_2_test - y_hat_test_lgd_stage_2)
# We plot the distribution of the residuals.
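# Additional fit diagnostics (optional; uses mean_squared_error and r2_score imported earlier in this notebook).
print('Test RMSE:', np.sqrt(mean_squared_error(lgd_targets_stage_2_test, y_hat_test_lgd_stage_2)))
print('Test R-squared:', r2_score(lgd_targets_stage_2_test, y_hat_test_lgd_stage_2))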
pickle.dump(reg_lgd_st_2, open('lgd_model_stage_2.sav', 'wb'))
# Here we export our model to a 'SAV' file with file name 'lgd_model_stage_2.sav'.
###Output
_____no_output_____
###Markdown
Combining Stage 1 and Stage 2
###Code
y_hat_test_lgd_stage_2_all = reg_lgd_st_2.predict(lgd_inputs_stage_1_test)
y_hat_test_lgd_stage_2_all
y_hat_test_lgd = y_hat_test_lgd_stage_1 * y_hat_test_lgd_stage_2_all
# Here we combine the predictions of the models from the two stages.
pd.DataFrame(y_hat_test_lgd).describe()
# Shows some descriptive statistics for the values of a column.
y_hat_test_lgd = np.where(y_hat_test_lgd < 0, 0, y_hat_test_lgd)
y_hat_test_lgd = np.where(y_hat_test_lgd > 1, 1, y_hat_test_lgd)
# We set predicted values that are greater than 1 to 1 and predicted values that are less than 0 to 0.
pd.DataFrame(y_hat_test_lgd).describe()
# Shows some descriptive statistics for the values of a column.
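# Note (a style alternative only, added for clarity): the two np.where calls above are equivalent
# to a single clipping step:
# y_hat_test_lgd = np.clip(y_hat_test_lgd, 0, 1)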
###Output
_____no_output_____ |
examples/01_copying_task.ipynb | ###Markdown
Copying Task Inspired by the task described in the following paper: [https://arxiv.org/pdf/1511.06464.pdf](https://arxiv.org/pdf/1511.06464.pdf) Introduction The copying task is one of the simplest benchmark tasks for recurrent neural networks. The general idea of the task is to reproduce a random sequence of symbols with length `len_sequence`, chosen from an alphabet of size `num_symbols`, after a certain waiting period `len_wait`. Assuming the waiting symbol is `0`, the sequence symbols are drawn from the alphabet `{1,2,3}` and the stop-waiting symbol is `4`; an example input and target for a waiting time of 20 symbols and a sequence length of 5 can be given by:``` 213310000000000000000000400000 000000000000000000000000021331``` As discussed in the [paper](https://arxiv.org/pdf/1511.06464.pdf), it is always useful to compare the loss of a given implementation to the baseline loss of guessing. Assuming one uses the categorical cross-entropy loss, one can describe a baseline by predicting the waiting symbol for the first `len_wait + len_sequence` timesteps, followed by random sampling of the remaining `len_sequence` positions out of the alphabet of symbols `{a1,...,an}` with `num_symbols` elements. This baseline cross-entropy loss boils down to``` len_sequence*log(num_symbols)/(len_wait + 2*len_sequence)``` Imports
###Code
%matplotlib inline
import torch
import numpy as np
import matplotlib.pyplot as plt
import sys; sys.path.append('..')
from torch_eunn import EURNN
torch.manual_seed(24)
np.random.seed(42)
###Output
_____no_output_____
###Markdown
Constants
###Code
# Training parameters
num_steps = 500
batch_size = 128
test_size = 100
valid_size = 100
# Data Parameters
len_wait = 100#0 # very slow if len_wait=1000
num_symbols = 8
len_sequence = 10
# RNN Parameters
capacity = 2
num_layers_rnn = 1
num_hidden_rnn = 128
# Cuda
cuda = True
device = torch.device('cuda' if cuda else 'cpu')
# Baseline Error
baseline = len_sequence*np.log(num_symbols)/(len_wait+2*len_sequence)
print(f'baseline = {baseline}')
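# Quick sanity check of the baseline formula using the toy example from the markdown above
# (assumed toy values: len_wait=20, len_sequence=5, num_symbols=3):
toy_baseline = 5 * np.log(3) / (20 + 2 * 5)
print(f'toy example baseline = {toy_baseline:.4f}') # ~0.1831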
###Output
baseline = 0.17328679513998632
###Markdown
Data
###Code
def data(len_wait, n_data, len_sequence, num_symbols):
    # Random sequences of symbols drawn from {1, ..., num_symbols}
    seq = np.random.randint(1, high=(num_symbols+1), size=(n_data, len_sequence))
    zeros1 = np.zeros((n_data, len_wait-1))          # waiting period on the input side
    zeros2 = np.zeros((n_data, len_wait))            # waiting period on the target side
    marker = (num_symbols+1) * np.ones((n_data, 1))  # stop-waiting symbol
    zeros3 = np.zeros((n_data, len_sequence))        # padding of length len_sequence
    # Input: sequence, wait, stop-waiting marker, padding; target: zeros followed by the delayed sequence.
    x = torch.tensor(np.concatenate((seq, zeros1, marker, zeros3), axis=1), dtype=torch.int64, device=device)
    y = torch.tensor(np.concatenate((zeros3, zeros2, seq), axis=1), dtype=torch.int64, device=device)
    return x, y
x,y = data(len_wait, 1, len_sequence, num_symbols)
print(x)
print(y)
###Output
tensor([[7, 4, 5, 7, 3, 8, 5, 5, 7, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
device='cuda:0')
tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 4, 5, 7, 3, 8, 5, 5, 7, 2]],
device='cuda:0')
###Markdown
Model
###Code
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
self.embedding = torch.nn.Embedding(len_wait+2*len_sequence, num_symbols+2)
self.rnn = EURNN(num_symbols+2, num_hidden_rnn, capacity, batch_first=True)
self.fc = torch.nn.Linear(num_hidden_rnn, num_symbols+1)
# optimizers and criterion
self.lossfunc = torch.nn.CrossEntropyLoss()
self.optimizer = torch.optim.Adam(self.parameters(), lr=0.03)
# move to device
self.to(device)
def forward(self, data):
data = self.embedding(data)
rnn_out, _ = self.rnn(data)
out = self.fc(rnn_out)
return out
def loss(self, data, labels):
return self.lossfunc(self(data).view(-1, num_symbols+1), labels.view(-1))
def accuracy(self, data, labels):
return torch.mean((torch.argmax(self(data), -1).view(-1) == labels.view(-1)).float())
def prediction(self, data):
return torch.argmax(self(data), -1)
###Output
_____no_output_____
###Markdown
Train Create the model
###Code
model = Model()
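# Optional: count the trainable parameters (standard PyTorch idiom, added for illustration).
print('trainable parameters:', sum(p.numel() for p in model.parameters() if p.requires_grad))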
###Output
_____no_output_____
###Markdown
Start Training
###Code
%%time
for step in range(num_steps):
# reset gradients
model.optimizer.zero_grad()
# calculate validation accuracy and loss
if step %100 == 0 or step == num_steps -1:
valid_data, valid_labels = data(len_wait, valid_size, len_sequence, num_symbols)
loss = model.loss(valid_data, valid_labels).item()
print(f'Step {step:5.0f}\t Valid. Loss. = {loss:5.4f}')
# train
batch_data, batch_labels = data(len_wait, batch_size, len_sequence, num_symbols)
loss = model.loss(batch_data, batch_labels)
loss.backward()
model.optimizer.step()
###Output
Step 0 Valid. Loss. = 20.2436
Step 100 Valid. Loss. = 0.0035
Step 200 Valid. Loss. = 0.0008
Step 300 Valid. Loss. = 0.0009
Step 400 Valid. Loss. = 0.0002
Step 499 Valid. Loss. = 0.0001
CPU times: user 3min 14s, sys: 91.3 ms, total: 3min 14s
Wall time: 3min 14s
###Markdown
Test
###Code
test_data, test_labels = data(len_wait, test_size, len_sequence, num_symbols)
test_loss = model.loss(test_data, test_labels).item()
test_acc = model.accuracy(test_data, test_labels).item()
print("Test result: Loss= " + "{:.6f}".format(test_loss) + ", Accuracy= " + "{:.5f}".format(test_acc))
print('baseline = %f'%baseline)
###Output
Test result: Loss= 0.000193, Accuracy= 1.00000
baseline = 0.173287
|
modeling-perchlorate-reduction/modeling_perchlorate_reduction.ipynb | ###Markdown
Modeling a theoretical co-culture of perchlorate-reducing bacteria and chlorate-reducing bacteriaThis Jupyter Notebook is a supplement to the manuscript Barnum et al. 2019. Reduction of perchlorate by bacteria involves the respiration of several high-energy substrates in multiple steps: perchlorate is reduced to chlorate, chlorate is reduced to chlorite, chlorite is converted to chloride and oxygen (without energy conservation), and oxygen is reduced to water. Chlorate accumulates to varying levels during perchlorate reduction because one enzyme (perchlorate reductase, or Pcr) reduces both perchlorate and chlorate (Dudley et al. 2008). Substrate inhibition of perchlorate reductase at high concentrations of perchlorate (>1 mM) may also contribute (Youngblut et al. 2016). Accumulation of chlorite or oxygen has not been observed. In the present study, we found that chlorate-reducing bacteria (CRB), which cannot reduce perchlorate to chlorate, can dominate cultures of perchlorate-reducing bacteria (PRB) in a metabolic interaction based on the exchange of chlorate. To understand this interaction, we present models to simulate the behavior of the interaction *in silico*. We chose a model based on an approximation of Michaelis-Menten kinetics known as Equilibrium Chemistry Approximation (ECA). Other models, a simple Michaelis-Menten kinetics model and a Michaelis-Menten kinetics model including competitive inhibition (Dudley et al. 2008), are included for comparison. The Equilibrium Chemistry Approximation model allows the inclusion of the following features:- Competition of chlorate and perchlorate for Pcr following diffusion of chlorate from the active site- Competition for the reduction of chlorate to chloride by either the perchlorate reducer or chlorate reducer- Competition for acetate as a source of electrons and carbon- Substrate inhibition of Pcr by perchlorate (IGNORED HERE) The code uses the following data:- The half-velocity constant (Ks) of Pcr for chlorate and perchlorate, from Youngblut et al. 2016a- Redox potential of perchlorate/chlorate, from Youngblut et al. 2016b- Redox potential of chlorate/chloride adjusted to reflect the perchlorate reduction pathway, from Youngblut et al. 2016b Scripts required to run this code:- energetics.py -- Equations for calculating yield, stoichiometry - kinetics.py -- Equations for kinetics models- perchlorate_reduction_models.py -- Formulations of models specific to perchlorate reduction using energetics.py and kinetics.py Load Functions
###Code
# Custom functions and variables
import sys
sys.path.append('./scripts') # Location of modules
from perchlorate_reduction_models import *
# External packages
# Integration function
from scipy.integrate import odeint
# Data wrangling
import numpy as np
import pandas as pd
from pylab import *
# Plotting
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import seaborn as sns;
sns.set(style='ticks',palette='Set2') # Tufte and Brewer style
sns.despine()
plt.rcParams['svg.fonttype'] = 'none' # Editable SVG text
%matplotlib inline
# Functions for resetting values and assisting plotting and data interpretation
def reset_parameters():
# All concentrations in molarity (M)
# Substrate affinity [PRB,CRB], Ks (M)
ks_clo4 = np.array([0.0060,10000])*10**-3 # Ks perchlorate, 0.0019 (Dudley) super high value for CRB because it does not catalyze
    ks_clo3 = np.array([0.0074,0.0074])*10**-3 # Ks chlorate 0.0007 M - Dudley #0.159 CRB
    #ks_clo3 = np.array([0.0074,0.159])*10**-3 # Ks chlorate 0.0007 M - Dudley #0.159 CRB
ks_acet = np.array([1,1])*10**-3 # Ks acetate
# Substrate inhibition factor, Haldane kinetics [PRB,CRB], Ki (M)
# Inhibition only occurs for PRB with perchlorate
# Youngblut et. al 2016 http://www.jbc.org/content/291/17/9190.full.pdf
ki_clo4 = np.array([10000, 10000])*10**-3 # Ki perchlorate, only low for ClO4- and PRB
#ki_clo4 = np.array([7.5, 10000])*10**-3 # Ki perchlorate, only low for ClO4- and PRB
ki_clo3 = np.array([10000, 10000])*10**-3 # Ki chlorate
ki_acet = np.array([10000, 10000])*10**-3 # Ki acetate
# Growth and death rate
mu = np.array([0.5,0.5]) # Maximum growth rate
m = np.array([0.0,0.0])# Death rate
return ks_clo4, ks_clo3, ks_acet, ki_clo4, ki_clo3, ki_acet, mu, m
def reset_default_concentrations():
# Initial concentrations (M)
x0 = [0.00001, # PRB
0.00001, # CRB
0.015, # Acetate
0.01, # Perchlorate
0.0, # Chlorate
0.0,# Cum. clorate to PRB
0.0]# Cum. clorate to CRB
return x0
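# Note (added for clarity): the index constants id_prb, id_crb, id_clo4 and id_clo3 used below are
# assumed to be defined in perchlorate_reduction_models (imported above with *), and to map to
# positions 0, 1, 3 and 4 of this state vector, respectively.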
# Format a dataframe from ODE output
def odeint_to_dataframe(state):
df = pd.DataFrame(state)
df.columns = values[0:7] # "PRB","CRB","C2H3O2-","ClO4-","ClO3-", "ClO3- to PRB", "ClO3- to CRB"
df[time] = t_span # Time
df["CRB"] = df["CRB"] * 113 # 113 g / mol biomass (C5H7O2N)
df["PRB"] = df["PRB"] * 113 # 113 g / mol biomass (C5H7O2N)
df["CRB/PRB"] = df["CRB"] / df["PRB"] # Ratio
df["Total Cells"] = df["CRB"] + df["PRB"] # Sum
df["CRB Growth Rate"] = df["CRB"].diff() / df[time].diff() # Ratio
df["PRB Growth Rate"] = df["PRB"].diff() / df[time].diff() # Ratio
df["Cumulative fClO3- CRB"] = df["ClO3- to CRB"] / (df["ClO3- to CRB"] + df["ClO3- to PRB"])
df["fClO3- CRB"] = df["ClO3- to CRB"].diff() / (df["ClO3- to CRB"].diff() + df["ClO3- to PRB"].diff())
df["ClO3-:ClO4-"] = df["ClO3-"] / df["ClO4-"]
df["ClO4-:ClO3-"] = df["ClO4-"] / df["ClO3-"]
return df
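# Note (added for clarity): 'values' (the list of state-variable names) and 'time' (the label of
# the time column) used in this helper are also assumed to come from perchlorate_reduction_models.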
# Summarize different attributes of the dataframe
def dataframe_statistics(df):
statistics = {"Max. ClO3-" : max(df['ClO3-']),
"Max. ClO4-" : max(df['ClO4-']),
"Max. CRB" : max(df['CRB']),
"Max. PRB" : max(df['PRB']),
"Max. Total Cells" : max(df["Total Cells"]),
"Max. CRB/PRB" : max(df["CRB/PRB"]),
"% ClO3- to CRB" : 100 * max(df["ClO3- to CRB"] / (x0[id_clo4] + x0[id_clo3])),
"f:CRB/PRB" : df["CRB/PRB"].tolist()[-1],
"f:CRB/PRB / i:CRB/PRB" : df["CRB/PRB"].tolist()[-1] / df["CRB/PRB"].tolist()[0],
"Max. ClO4- Reduction Rate (M/h)" : max(abs(df['ClO4-'].diff().fillna(0))),
}
return statistics
def plot_growth_curves(df,plot_title='',save_to_file=None):
# Plot
values_to_plot = ["PRB","CRB","C2H3O2-","ClO4-","ClO3-", "ClO3- to PRB", "ClO3- to CRB", time]
colors = ['#D65228','#E69C26','black','#D65228','#E69C26','#D65228','#E69C26']
linestyles = ['-','-',':',':',':','--','--']
fig, (ax0,ax2,ax3,ax4) = plt.subplots(4, 1, figsize=(6,9), gridspec_kw = {'height_ratios':[10,4,4,4]})
ax0.set_title(plot_title, size=13)
ax1 = ax0.twinx()
ax=ax0
ax.set_ylabel('Biomass (g/L)')
df.loc[:,["PRB","CRB", time]].plot(ax=ax,x=time, color=['#D65228','#E69C26'], style=['-','-'])
ax.legend(loc='center right')
ax=ax1
ax.set_ylabel('Concentration (M)')
df.loc[:,["C2H3O2-","ClO4-","ClO3-", time]].plot(ax=ax,x=time, color=['black','black','black'], style=[':','-','--'])
ax.legend(loc='lower right')
ax=ax2
df.loc[:,[time,"ClO3-:ClO4-"]].plot(ax=ax,x=time,color='black',legend=None)
ax.set_ylabel("[ClO3-] / [ClO4-]")
ax=ax3
df.loc[:,[time,"fClO3- CRB"]].plot(ax=ax,x=time,color='black',legend=None)
ax.set_ylabel("Fraction ClO3- to CRB")
ax.set_ylim([0,1])
ax=ax4
df.loc[:,[time,"CRB Growth Rate","PRB Growth Rate"]].plot(ax=ax,x=time,color=['#E69C26','#D65228'],legend=None)
ax.set_ylabel("Growth Rate (g L-1 h-1)")
# Formatting
for ax in (ax0,ax1,ax2,ax3,ax4):
ax.spines["top"].set_visible(False)
ax.xaxis.set_ticks_position('bottom')
for ax in (ax2,ax3,ax4):
ax.spines["right"].set_visible(False)
ax.yaxis.set_ticks_position('left')
for ax in (ax2,ax3):
ax.set_xlabel("")
ax.xaxis.set_ticks_position('none')
for xlabel_i in ax.axes.get_xticklabels():
xlabel_i.set_visible(False)
xlabel_i.set_fontsize(0.0)
if save_to_file == None:
pass
else:
plt.savefig('./data/' + save_to_file)
return plt.show()
###Output
_____no_output_____
###Markdown
Model Input
###Code
# Energetics of
energy = pd.DataFrame({"Electron acceptor half-reaction" : ["Gr", "A", "fs", "fe"],
"O2/H2O": energy_to_fractions(-78.72),
"NO3-/N2": energy_to_fractions(-72.20),
"SO42-/HS-": energy_to_fractions(20.85),
"CO2/CH4": energy_to_fractions(23.53),
"ClO4-/ClO3-" : energy_to_fractions(redox_to_Ga(ne_perc,E_perc)),
"ClO3-/Cl-": energy_to_fractions(redox_to_Ga(ne_chlor,E_chlor)),
})
energy = energy.set_index("Electron acceptor half-reaction")
energy = energy.transpose()
# save to CSV
energy.to_csv('./data/energetics.csv')
energy.sort_values(by='A')
###Output
_____no_output_____
###Markdown
Above Table: Energetic properties of different electron acceptors. Gr, energy per equiv. oxidized for energy production; A, equivalents of donor used for energy per cells formed; fs, fraction of donor used for cell synthesis; fe, fraction of donor used for energy.
###Code
df = pd.DataFrame({"Population" : ['Perchlorate-reducing bacteria (PRB)', 'Chlorate-reducing bacteria (CRB)'],
"Ks ClO4- (M)" : ks_clo4,
"Ks ClO3- (M)" : ks_clo3,
"Ks C2H3O2- (M)" : ks_acet,
"Ki ClO4- (M)" : ki_clo4,
"Ki ClO3- (M)" : ki_clo3,
"Ki C2H3O2- (M)" : ki_acet,
"Maximum growth rate (h^-1)" : mu,
"Death rate (h^-1)" : m,
"Yield coefficient ClO4- to ClO3-" : ypc,
"Yield coefficient ClO3- to Cl-" : yc,
"Stoichiometric ratio ClO4- to ClO3-" : ypa,
"Stoichiometric ratio ClO3- to Cl-" : yca,
})
df = df.set_index("Population")
df = df.transpose()
# save to CSV
df.to_csv('./data/model-input.csv')
df
###Output
_____no_output_____
###Markdown
Above Table: Model parameters for each population Model Simulations Equilibrium Chemistry Approximation (ECA) kineticsThe Equilibrium Chemistry Approximation (ECA) of Michaelis-Menton kinetics includes competitive inhibition for Pcr and competition for substrates. PRB Only
###Code
ks_clo4, ks_clo3, ks_acet, ki_clo4, ki_clo3, ki_acet, mu, m = reset_parameters()
# Initial concentrations (M), with no CRB
x0 = reset_default_concentrations()
x0[id_crb] = 0
# Time steps
t_end = 400; # end time (hours)
dt = 0.1; # time step (hours)
t_span = np.arange(0,t_end,dt)
# Run ECA kinetics model and plot
state = odeint(eca_kinetics, x0, t_span)
df = odeint_to_dataframe(state)
plot_growth_curves(df,save_to_file='eca-prb-only.png')
###Output
_____no_output_____
###Markdown
PRB + CRB
###Code
# CRB concentration = PRB concentration
x0[id_crb] = x0[id_prb]
state = odeint(eca_kinetics, x0, t_span)
df = odeint_to_dataframe(state)
plot_growth_curves(df,save_to_file='eca-prb-and-crb.png')
###Output
_____no_output_____
###Markdown
Results*Without* chlorate-reducing bacteria:- Chlorate (dashed line) accumulates during perchlorate reduction- Growth rate of the perchlorate-reducing population peaks when chlorate reduction begins*With* chlorate-reducing bacteria present:- Chlorate concentration decreases- Chlorate-reducing bacteria dominate the culture- Chlorate-reducing bacteria consume nearly all chlorate while fraction of chlorate is low- Chlorate-reducing bacteria have a higher growth rate throughout growth Conclusions- Chlorate-reducing bacteria utilize chlorate when perchlorate-reducing bacteria cannot: when the concentration of chlorate relative to perchlorate is low- Beacuse chlorate reduction to chloride has a higher total yield than perchlorate reduction to chlorate, chlorate-reducing bacteria have a higher growth rate- The consumption of chlorate by chlorate-reducing bacteria maintains the low chlorate:perchlorate ratio conducive to their success Other Model Simulations for Comparison Michaelis-Menton (MM) kineticsSimple model for growth limited by substrate concentrations PRB Only
###Code
# No CRB
x0[id_crb] = 0
# Time steps
t_end = 150; # end time (hours)
dt = 0.1; # time step (hours)
t_span = np.arange(0,t_end,0.1)
# Run Michaelis-Menton kinetics and plot simulation
state = odeint(mm_kinetics, x0, t_span)
df = odeint_to_dataframe(state)
plot_growth_curves(df,save_to_file='mm-prb-only.png')
###Output
_____no_output_____
###Markdown
PRB + CRB
###Code
# CRB concentration = PRB concentration
x0[id_crb] = x0[id_prb]
# Run Michaelis-Menton kinetics and plot simulation
state = odeint(mm_kinetics, x0, t_span)
df = odeint_to_dataframe(state)
plot_growth_curves(df,save_to_file='mm-prb-and-crb.png')
###Output
_____no_output_____
###Markdown
Michaelis-Menten kinetics with competitive inhibition (CI) (Dudley et al. 2008)Accounts for the consumption of perchlorate and chlorate by the same cell PRB Only
###Code
# Initial concentrations (M)
x0[id_crb] = 0
# Time steps
t_end = 300; # end time (hours)
dt = 1; # time step (hours)
t_span = np.arange(0,t_end,dt)
# Run competitive inhibition model and plot simulation
state = odeint(ci_kinetics, x0, t_span)
df = odeint_to_dataframe(state)
plot_growth_curves(df,save_to_file='ci-prb-only.png')
###Output
_____no_output_____
###Markdown
PRB + CRB
###Code
# CRB concentration = PRB concentration
x0[id_crb] = x0[id_prb]
state = odeint(ci_kinetics, x0, t_span)
df = odeint_to_dataframe(state)
plot_growth_curves(df, save_to_file='ci-prb-and-crb.png')
###Output
_____no_output_____
###Markdown
Results- Michaelis-Menten kinetics with competitive inhibition of Pcr by chlorate produces similar results to the ECA kinetics model and live co-cultures- The model without competitive inhibition of Pcr by chlorate shows little chlorate accumulation and much less CRB growth Conclusions- Competitive inhibition is an important component of the model needed to recapitulate experimental behavior Effect of varying initial concentrations on the interaction
###Code
def correlate_two_z_variables(id_N1,id_N2,z1_variable,z2_variable,save_to_file=None):
import matplotlib.colors as colors
fig, (ax0,ax1,ax2) = plt.subplots(1,3, figsize=(15,4))
Z1 = np.array(x_range)
Z2 = np.array(x_range)
z_variable = [z1_variable, z2_variable]
z_scale = [z1_scale,z2_scale]
Z = [Z1,Z2]
axes = [ax0,ax1]
color_map = [z1_color_map,z2_color_map]
for N in [0,1]:
# Initialize X-, Y-, and Z- dimensions
X = np.array(x_range)
Y = np.array(x_range)
for y in y_range:
output = []
# Plot by varying concentration
for x in x_range:
# Initial values
x0 = reset_default_concentrations()
# Replace one initial condition with x, a varying value
x0[id_N1] = x
x0[id_N2] = y
# Calculate from each initial condition
state = odeint(eca_kinetics, x0, t_span)
df = odeint_to_dataframe(state)
statistics = dataframe_statistics(df)
output.append(statistics[z_variable[N]])
x_row = x_range
X = np.vstack((X,x_row))
y_row = np.array([y]*len(x_range))
Y = np.vstack((Y,y_row))
z_row = output
Z[N] = np.vstack((Z[N],z_row))
X = X[1:] # X-dimension
Y = Y[1:] # Y-dimension
#Z[N] = Z[N][1:] # Z-dimension
Z[N] = Z[N][1:-1, :-1] # Z-dimension within X-Y bounds
# Plot each heatmap
# Normalization
cmap = plt.get_cmap(color_map[N])
levels = MaxNLocator(nbins=1000).tick_values(Z[N].min(), Z[N].max())
if z_scale[N] == 'log':
# https://matplotlib.org/users/colormapnorms.html
if (z_variable[N] == 'f:CRB/PRB') | (z_variable[N] == "f:CRB/PRB / i:CRB/PRB"):
z_bound = np.max([1/Z[N].min(),Z[N].max()])
norm = colors.LogNorm(vmin=1/z_bound, vmax=z_bound)
else:
norm = colors.LogNorm(vmin=Z[N].min(), vmax=Z[N].max())
else:
# https://matplotlib.org/api/_as_gen/matplotlib.colors.BoundaryNorm.html
norm = colors.BoundaryNorm(levels, ncolors=cmap.N, clip=True)
"""if z_scale == 'log':
# https://matplotlib.org/users/colormapnorms.html
norm = colors.LogNorm(vmin=Z[N].min(), vmax=Z[N].max())
else:
# https://matplotlib.org/api/_as_gen/matplotlib.colors.BoundaryNorm.html
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)"""
im = axes[N].pcolormesh(X, Y, Z[N], cmap=cmap, norm=norm)
fig.colorbar(im, ax=axes[N])
axes[N].set_xscale(x_scale, basex=2)
axes[N].set_yscale(y_scale, basey=2)
axes[N].set_title(z_variable[N])
axes[N].set_xlabel(values[id_N1])
axes[N].set_ylabel(values[id_N2])
plt.tight_layout()
# Plot correlation
Z1 = Z[0]
Z2 = Z[1]
ax2.scatter(x=Z1,y=Z2, s=20, facecolor="None", edgecolors='black', linewidths=1,)
ax2.set_xscale(z1_scale)
ax2.set_yscale(z2_scale)
ax2.set_xlim([Z1.min(),Z1.max()])
ax2.set_ylim([Z2.min(),Z2.max()])
ax2.set_xlabel(z1_variable)
ax2.set_ylabel(z2_variable)
ax2.set_title("Variable "+values[x_variable]+" and "+values[y_variable])
if save_to_file == None:
pass
else:
plt.savefig('./data/' + save_to_file)
return plt.show()
# Reset values
ks_clo4, ks_clo3, ks_acet, ki_clo4, ki_clo3, ki_acet, mu, m = reset_parameters()
x0 = reset_default_concentrations()
# Longer time period to capture all growth
t_end = 5000; # end time (hours)
dt = 5; # time step (hours)
t_span = np.arange(0,t_end,dt)
# Z1 (X): ratio
z1_variable = "f:CRB/PRB"
z1_scale = 'log'
z1_color_map = 'RdBu'
# Z2 (Y): Max. ClO4- Reduction Rate (M/h)
z2_variable = "% ClO3- to CRB"
z2_scale = 'linear'
z2_color_map = 'binary'
# Y: PRB concentration steps
y_variable = id_prb
y_base2_min = -23
y_base2_max = -10
y_base2_steps = 1 + y_base2_max - y_base2_min
y_range = np.logspace(start=y_base2_min,stop=y_base2_max,base=2,num=y_base2_steps)
y_scale = 'log'
# X: Perchlorate concentration steps
x_variable = id_clo4
x_base2_min = -20
x_base2_max = -4
x_base2_steps = 1 + x_base2_max - x_base2_min
x_range = np.logspace(start=x_base2_min,stop=x_base2_max,base=2,num=x_base2_steps)
x_scale = 'log'
correlate_two_z_variables(x_variable,y_variable,z1_variable,z2_variable,save_to_file="corr-prb-perc-ratio-chlor.svg")
# Reset values
ks_clo4, ks_clo3, ks_acet, ki_clo4, ki_clo3, ki_acet, mu, m = reset_parameters()
x0 = reset_default_concentrations()
# Z1 (X): Final ratio
z1_variable = "f:CRB/PRB"
z1_scale = 'log'
z1_color_map = 'RdBu'
# Z2 (Y): Max. ClO4- Reduction Rate (M/h)
z2_variable = "% ClO3- to CRB"
z2_scale = 'linear'
z2_color_map = 'binary'
# Y: PRB
y_variable = id_prb
y_base2_min = -23
y_base2_max = -10
y_base2_steps = 1 + y_base2_max - y_base2_min
y_range = np.logspace(start=y_base2_min,stop=y_base2_max,base=2,num=y_base2_steps)
y_scale = 'log'
# X: CRB
x_variable = id_crb
x_base2_min = -23
x_base2_max = -10
x_base2_steps = 1 + x_base2_max - x_base2_min
x_range = np.logspace(start=x_base2_min,stop=x_base2_max,base=2,num=x_base2_steps)
x_scale = 'log'
correlate_two_z_variables(x_variable,y_variable,z1_variable,z2_variable, "corr-prb-crb-ratio-chlor.svg")
print('Other z-dimensions available to plot:')
statistics = dataframe_statistics(df)
for stat in statistics.keys():
print (stat, "\t")
###Output
Other z-dimensions available to plot:
Max. ClO3-
Max. ClO4-
Max. CRB
Max. PRB
Max. Total Cells
Max. CRB/PRB
% ClO3- to CRB
f:CRB/PRB
f:CRB/PRB / i:CRB/PRB
Max. ClO4- Reduction Rate (M/h)
|
jupyter_notebooks/notebooks/NB9_CVIII-randomforests_ising.ipynb | ###Markdown
Notebook 9: Using Random Forests to classify phases in the Ising Model Learning GoalThe goal of this notebook is to show how one can employ ensemble methods such as Random Forests to classify the states of the 2D Ising model according to their phases. We discuss concepts like decision trees, extreme decision trees, and out-of-bag error. The notebook also introduces the powerful scikit-learn `Ensemble` class. Setting up the problemThe Hamiltonian for the classical Ising model is given by$$ H = -J\sum_{\langle ij\rangle}S_{i}S_j,\qquad \qquad S_j\in\{\pm 1\} $$where the lattice site indices $i,j$ run over all nearest neighbors of a 2D square lattice of side $L$, and $J$ is some arbitrary interaction energy scale. We adopt periodic boundary conditions. Onsager proved that this model undergoes a phase transition in the thermodynamic limit from an ordered ferromagnet with all spins aligned to a disordered phase at the critical temperature $T_c/J=1/\log(1+\sqrt{2})\approx 2.26$. For any finite system size, this critical point is expanded to a critical region around $T_c$. We will use the same basic idea as we did for logistic regression. An interesting question to ask is whether one can train a statistical model to distinguish between the two phases of the Ising model. In other words, given an Ising state, we would like to classify whether it belongs to the ordered or the disordered phase, without any additional information other than the spin configuration itself. This categorical machine learning problem is well suited for ensemble methods and in particular Random Forests. To this end, we consider the 2D Ising model on a $40\times 40$ square lattice, and use Monte-Carlo (MC) sampling to prepare $10^4$ states at every fixed temperature $T$ out of a pre-defined set. Using Onsager's criterion, we can assign a label to each state according to its phase: $0$ if the state is disordered, and $1$ if it is ordered. It is well-known that, near the critical temperature $T_c$, the ferromagnetic correlation length diverges which, among others, leads to a critical slowing down of the MC algorithm. Therefore, we expect identifying the phases to be harder in the critical region. With this in mind, consider the following three types of states: ordered ($T/J<2.0$), critical ($2.0\leq T/J\leq 2.5$) and disordered ($T/J>2.5$). We use both ordered and disordered states to train the random forest and, once the supervised training procedure is complete, we shall evaluate the performance of our classifier on unseen ordered, disordered and critical states. A link to the Ising dataset can be found at [https://physics.bu.edu/~pankajm/MLnotebooks.html](https://physics.bu.edu/~pankajm/MLnotebooks.html).
###Code
import numpy as np
np.random.seed() # shuffle random seed generator
# Ising model parameters
L=40 # linear system size
J=-1.0 # Ising interaction
T=np.linspace(0.25,4.0,16) # set of temperatures
T_c=2.26 # Onsager critical temperature in the TD limit
import pickle, os
from urllib.request import urlopen
# path to data directory (for testing)
#path_to_data=os.path.expanduser('~')+'/Dropbox/MachineLearningReview/Datasets/isingMC/'
url_main = 'https://physics.bu.edu/~pankajm/ML-Review-Datasets/isingMC/';
######### LOAD DATA
# The data consists of 16*10000 samples taken in T=np.arange(0.25,4.0001,0.25):
data_file_name = "Ising2DFM_reSample_L40_T=All.pkl"
# The labels are obtained from the following file:
label_file_name = "Ising2DFM_reSample_L40_T=All_labels.pkl"
#DATA
data = pickle.load(urlopen(url_main + data_file_name)) # pickle reads the file and returns the Python object (1D array, compressed bits)
data = np.unpackbits(data).reshape(-1, 1600) # Decompress array and reshape for convenience
data=data.astype('int')
data[np.where(data==0)]=-1 # map 0 state to -1 (Ising variable can take values +/-1)
#LABELS (convention is 1 for ordered states and 0 for disordered states)
labels = pickle.load(urlopen(url_main + label_file_name)) # pickle reads the file and returns the Python object (here just a 1D array with the binary labels)
###### define ML parameters
from sklearn.model_selection import train_test_split
train_to_test_ratio=0.8 # training samples
# divide data into ordered, critical and disordered
X_ordered=data[:70000,:]
Y_ordered=labels[:70000]
X_critical=data[70000:100000,:]
Y_critical=labels[70000:100000]
X_disordered=data[100000:,:]
Y_disordered=labels[100000:]
del data,labels
# define training and test data sets
X=np.concatenate((X_ordered,X_disordered))
Y=np.concatenate((Y_ordered,Y_disordered))
# pick random data points from ordered and disordered states
# to create the training and test sets
X_train,X_test,Y_train,Y_test=train_test_split(X,Y,train_size=train_to_test_ratio,test_size=1.0-train_to_test_ratio)
print('X_train shape:', X_train.shape)
print('Y_train shape:', Y_train.shape)
print()
print(X_train.shape[0], 'train samples')
print(X_critical.shape[0], 'critical samples')
print(X_test.shape[0], 'test samples')
##### plot a few Ising states
%matplotlib inline
#import ml_style as style
import matplotlib as mpl
import matplotlib.pyplot as plt
#mpl.rcParams.update(style.style)
from mpl_toolkits.axes_grid1 import make_axes_locatable
# set colourbar map
cmap_args=dict(cmap='plasma_r')
# plot states
fig, axarr = plt.subplots(nrows=1, ncols=3)
axarr[0].imshow(X_ordered[20001].reshape(L,L),**cmap_args)
#axarr[0].set_title('$\\mathrm{ordered\\ phase}$',fontsize=16)
axarr[0].set_title('ordered phase',fontsize=16)
axarr[0].tick_params(labelsize=16)
axarr[1].imshow(X_critical[10001].reshape(L,L),**cmap_args)
#axarr[1].set_title('$\\mathrm{critical\\ region}$',fontsize=16)
axarr[1].set_title('critical region',fontsize=16)
axarr[1].tick_params(labelsize=16)
im=axarr[2].imshow(X_disordered[50001].reshape(L,L),**cmap_args)
#axarr[2].set_title('$\\mathrm{disordered\\ phase}$',fontsize=16)
axarr[2].set_title('disordered phase',fontsize=16)
axarr[2].tick_params(labelsize=16)
fig.subplots_adjust(right=2.0)
plt.show()
###Output
_____no_output_____
###Markdown
Random Forests**Hyperparameters**We start by training with Random Forests. As discussed in Sec. VIII of the review, Random Forests are ensemble models. Here we will use the sci-kit learn implementation of random forests. There are two main hyper-parameters that will be important in practice for the performance of the algorithm and the degree to which it overfits/underfits: the number of estimators in the ensemble and the depth of the trees used. The former is controlled by the parameter `n_estimators` whereas the latter (the complexity of the trees used) can be controlled in many distinct ways (`min_samples_split`, `min_samples_leaf`, `min_impurity_decrease`, etc). For our simple dataset, it does not really make much difference which one of these we use. We will just use the `min_samples_split` parameter that dictates how many samples need to be in each node of the classification tree. The bigger this number, the more coarse our trees and data partitioning.In the code below, we will just consider extremely fine trees (`min_samples_split=2`) or extremely coarse trees (`min_samples_split=10000`). As we will see, both of these tree complexities are sufficient to distinguish the ordered from the disordered samples. The reason for this is that the ordered and disordered phases are distinguished by the magnetization order parameter which is an equally weighted sum of all features. However, if we want to train deep in these simple phases, and then use our algorithm to distinguish critical samples it is crucial we use more complex trees even though the performance on the disordered and ordered phases is indistinguishable for coarse and complex trees.**Out of Bag (OOB) Estimates**For more complicated datasets, how can we choose the right hyperparameters? We can actually make use of one of the most important and interesting features of ensemble methods that employ Bagging: out-of-bag (OOB) estimates. Whenever we bag data, since we are drawing samples with replacement, we can ask how well our classifiers do on data points that are *not used* in the training. This is the out-of-bag prediction error and plays a similar role to cross-validation error in other ML methods. Since this is the best proxy for out-of-sample prediction, we choose hyperparameters to minimize the out-of-bag error.
###Code
# Apply Random Forest
#This is the random forest classifier
from sklearn.ensemble import RandomForestClassifier
#This is the extreme randomized trees
from sklearn.ensemble import ExtraTreesClassifier
#import time to see how performance depends on run time
import time
import warnings
#Comment to turn on warnings
warnings.filterwarnings("ignore")
#We will check
min_estimators = 10
max_estimators = 101
classifer = RandomForestClassifier # BELOW WE WILL CHANGE for the case of extremely randomized forest
n_estimator_range=np.arange(min_estimators, max_estimators, 10)
leaf_size_list=[2,10000]
m=len(n_estimator_range)
n=len(leaf_size_list)
#Allocate Arrays for various quantities
RFC_OOB_accuracy=np.zeros((n,m))
RFC_train_accuracy=np.zeros((n,m))
RFC_test_accuracy=np.zeros((n,m))
RFC_critical_accuracy=np.zeros((n,m))
run_time=np.zeros((n,m))
print_flag=True
for i, leaf_size in enumerate(leaf_size_list):
# Define Random Forest Classifier
myRF_clf = classifer(
n_estimators=min_estimators,
max_depth=None,
min_samples_split=leaf_size, # minimum number of sample per leaf
oob_score=True,
random_state=0,
warm_start=True # this ensures that you add estimators without retraining everything
)
for j, n_estimator in enumerate(n_estimator_range):
print('n_estimators: %i, leaf_size: %i'%(n_estimator,leaf_size))
start_time = time.time()
myRF_clf.set_params(n_estimators=n_estimator)
myRF_clf.fit(X_train, Y_train)
run_time[i,j] = time.time() - start_time
# check accuracy
RFC_train_accuracy[i,j]=myRF_clf.score(X_train,Y_train)
RFC_OOB_accuracy[i,j]=myRF_clf.oob_score_
RFC_test_accuracy[i,j]=myRF_clf.score(X_test,Y_test)
RFC_critical_accuracy[i,j]=myRF_clf.score(X_critical,Y_critical)
if print_flag:
result = (run_time[i,j], RFC_train_accuracy[i,j], RFC_OOB_accuracy[i,j], RFC_test_accuracy[i,j], RFC_critical_accuracy[i,j])
print('{0:<15}{1:<15}{2:<15}{3:<15}{4:<15}'.format("time (s)","train score", "OOB estimate","test score", "critical score"))
print('{0:<15.4f}{1:<15.4f}{2:<15.4f}{3:<15.4f}{4:<15.4f}'.format(*result))
plt.figure()
plt.plot(n_estimator_range,RFC_train_accuracy[1],'--b^',label='Train (coarse)')
plt.plot(n_estimator_range,RFC_test_accuracy[1],'--r^',label='Test (coarse)')
plt.plot(n_estimator_range,RFC_critical_accuracy[1],'--g^',label='Critical (coarse)')
plt.plot(n_estimator_range,RFC_train_accuracy[0],'o-b',label='Train (fine)')
plt.plot(n_estimator_range,RFC_test_accuracy[0],'o-r',label='Test (fine)')
plt.plot(n_estimator_range,RFC_critical_accuracy[0],'o-g',label='Critical (fine)')
#plt.semilogx(lmbdas,train_accuracy_SGD,'*--b',label='SGD train')
plt.xlabel('$N_\mathrm{estimators}$')
plt.ylabel('Accuracy')
lgd=plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.savefig("Ising_RF.pdf",bbox_extra_artists=(lgd,), bbox_inches='tight')
plt.show()
plt.plot(n_estimator_range, run_time[1], '--k^',label='Coarse')
plt.plot(n_estimator_range, run_time[0], 'o-k',label='Fine')
plt.xlabel('$N_\mathrm{estimators}$')
plt.ylabel('Run time (s)')
plt.legend(loc=2)
#plt.savefig("Ising_RF_Runtime.pdf")
plt.show()
###Output
_____no_output_____
###Markdown
Extremely Randomized TreesAs discussed in the main text, the effectiveness of ensemble methods generally increases as the correlations between members of the ensemble decrease. This idea has been leveraged to make methods that introduce even more randomness into the ensemble by randomly choosing features to split on as well as randomly choosing thresholds to split on. See Section 4.3 of Louppe 2014 [arxiv:1407.7502](https://arxiv.org/pdf/1407.7502.pdf). Here we will make use of the scikit-learn class `ExtraTreesClassifier` and we will just rerun what we did above. Since there is extra randomization compared to random forests, one can imagine that the performance of the critical samples will be much worse. Indeed, this is the case.
###Code
#This is the extreme randomized trees
from sklearn.ensemble import ExtraTreesClassifier
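# Note (added for clarity): unlike RandomForestClassifier, ExtraTreesClassifier defaults to
# bootstrap=False, so bootstrap=True is set explicitly below to make the OOB estimate available.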
#import time to see how performance depends on run time
import time
import warnings
#Comment to turn on warnings
warnings.filterwarnings("ignore")
#We will check
min_estimators = 10
max_estimators = 101
classifer = ExtraTreesClassifier # only changing this
n_estimator_range=np.arange(min_estimators, max_estimators, 10)
leaf_size_list=[2,10000]
m=len(n_estimator_range)
n=len(leaf_size_list)
#Allocate Arrays for various quantities
ETC_OOB_accuracy=np.zeros((n,m))
ETC_train_accuracy=np.zeros((n,m))
ETC_test_accuracy=np.zeros((n,m))
ETC_critical_accuracy=np.zeros((n,m))
run_time=np.zeros((n,m))
print_flag=True
for i, leaf_size in enumerate(leaf_size_list):
# Define Random Forest Classifier
myRF_clf = classifer(
n_estimators=min_estimators,
max_depth=None,
min_samples_split=leaf_size, # minimum number of sample per leaf
oob_score=True,
bootstrap=True,
random_state=0,
warm_start=True # this ensures that you add estimators without retraining everything
)
for j, n_estimator in enumerate(n_estimator_range):
print('n_estimators: %i, leaf_size: %i'%(n_estimator,leaf_size))
start_time = time.time()
myRF_clf.set_params(n_estimators=n_estimator)
myRF_clf.fit(X_train, Y_train)
run_time[i,j] = time.time() - start_time
# check accuracy
ETC_train_accuracy[i,j]=myRF_clf.score(X_train,Y_train)
ETC_OOB_accuracy[i,j]=myRF_clf.oob_score_
ETC_test_accuracy[i,j]=myRF_clf.score(X_test,Y_test)
ETC_critical_accuracy[i,j]=myRF_clf.score(X_critical,Y_critical)
if print_flag:
result = (run_time[i,j], ETC_train_accuracy[i,j], ETC_OOB_accuracy[i,j], ETC_test_accuracy[i,j], ETC_critical_accuracy[i,j])
print('{0:<15}{1:<15}{2:<15}{3:<15}{4:<15}'.format("time (s)","train score", "OOB estimate","test score", "critical score"))
print('{0:<15.4f}{1:<15.4f}{2:<15.4f}{3:<15.4f}{4:<15.4f}'.format(*result))
plt.figure()
plt.plot(n_estimator_range,ETC_train_accuracy[1],'--b^',label='Train (coarse)')
plt.plot(n_estimator_range,ETC_test_accuracy[1],'--r^',label='Test (coarse)')
plt.plot(n_estimator_range,ETC_critical_accuracy[1],'--g^',label='Critical (coarse)')
plt.plot(n_estimator_range,ETC_train_accuracy[0],'o-b',label='Train (fine)')
plt.plot(n_estimator_range,ETC_test_accuracy[0],'o-r',label='Test (fine)')
plt.plot(n_estimator_range,ETC_critical_accuracy[0],'o-g',label='Critical (fine)')
#plt.semilogx(lmbdas,train_accuracy_SGD,'*--b',label='SGD train')
plt.xlabel('$N_\mathrm{estimators}$')
plt.ylabel('Accuracy')
lgd=plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.savefig("Ising_RF.pdf",bbox_extra_artists=(lgd,), bbox_inches='tight')
plt.show()
plt.plot(n_estimator_range, run_time[1], '--k^',label='Coarse')
plt.plot(n_estimator_range, run_time[0], 'o-k',label='Fine')
plt.xlabel('$N_\mathrm{estimators}$')
plt.ylabel('Run time (s)')
plt.legend(loc=2)
plt.savefig("Ising_ETC_Runtime.pdf")
plt.show()
###Output
_____no_output_____
###Markdown
Notebook 9: Using Random Forests to classify phases in the Ising Model Learning GoalThe goal of this notebook is to show how one can employ ensemble methods such as Random Forests to classify the states of the 2D Ising model according to their phases. We discuss concepts like decision trees, extreme decision trees, and out-of-bag error. The notebook also introduces the powerful scikit-learn `Ensemble` class. Setting up the problemThe Hamiltonian for the classical Ising model is given by$$ H = -J\sum_{\langle ij\rangle}S_{i}S_j,\qquad \qquad S_j\in\{\pm 1\} $$where the lattice site indices $i,j$ run over all nearest neighbors of a 2D square lattice of side $L$, and $J$ is some arbitrary interaction energy scale. We adopt periodic boundary conditions. Onsager proved that this model undergoes a phase transition in the thermodynamic limit from an ordered ferromagnet with all spins aligned to a disordered phase at the critical temperature $T_c/J=1/\log(1+\sqrt{2})\approx 2.26$. For any finite system size, this critical point is expanded to a critical region around $T_c$.We will use the same basic idea as we did for logistic regression. An interesting question to ask is whether one can train a statistical model to distinguish between the two phases of the Ising model. In other words, given an Ising state, we would like to classify whether it belongs to the ordered or the disordered phase, without any additional information other than the spin configuration itself. This categorical machine learning problem is well suited for ensemble methods and in particular Random Forests.To this end, we consider the 2D Ising model on a $40\times 40$ square lattice, and use Monte-Carlo (MC) sampling to prepare $10^4$ states at every fixed temperature $T$ out of a pre-defined set. Using Onsager's criterion, we can assign a label to each state according to its phase: $0$ if the state is disordered, and $1$ if it is ordered. It is well-known that, near the critical temperature $T_c$, the ferromagnetic correlation length diverges which, among others, leads to a critical slowing down of the MC algorithm. Therefore, we expect identifying the phases to be harder in the critical region. With this in mind, consider the following three types of states: ordered ($T/J2.5$). We use both ordered and disordered states to train the random forest and, once the supervised training procedure is complete, we shall evaluate the performance of our classifier on unseen ordered, disordered and critical states. A link to the Ising dataset can be found at [https://physics.bu.edu/~pankajm/MLnotebooks.html](https://physics.bu.edu/~pankajm/MLnotebooks.html).
###Code
import numpy as np
np.random.seed() # shuffle random seed generator
# Ising model parameters
L=40 # linear system size
J=-1.0 # Ising interaction
T=np.linspace(0.25,4.0,16) # set of temperatures
T_c=2.26 # Onsager critical temperature in the TD limit
import pickle, os
from urllib.request import urlopen
# path to data directory (for testing)
#path_to_data=os.path.expanduser('~')+'/Dropbox/MachineLearningReview/Datasets/isingMC/'
url_main = 'https://physics.bu.edu/~pankajm/ML-Review-Datasets/isingMC/';
######### LOAD DATA
# The data consists of 16*10000 samples taken in T=np.arange(0.25,4.0001,0.25):
data_file_name = "Ising2DFM_reSample_L40_T=All.pkl"
# The labels are obtained from the following file:
label_file_name = "Ising2DFM_reSample_L40_T=All_labels.pkl"
#DATA
data = pickle.load(urlopen(url_main + data_file_name)) # pickle reads the file and returns the Python object (1D array, compressed bits)
data = np.unpackbits(data).reshape(-1, 1600) # Decompress array and reshape for convenience
data=data.astype('int')
data[np.where(data==0)]=-1 # map 0 state to -1 (Ising variable can take values +/-1)
#LABELS (convention is 1 for ordered states and 0 for disordered states)
labels = pickle.load(urlopen(url_main + label_file_name)) # pickle reads the file and returns the Python object (here just a 1D array with the binary labels)
###### define ML parameters
from sklearn.model_selection import train_test_split
train_to_test_ratio=0.8 # training samples
# divide data into ordered, critical and disordered
X_ordered=data[:70000,:]
Y_ordered=labels[:70000]
X_critical=data[70000:100000,:]
Y_critical=labels[70000:100000]
X_disordered=data[100000:,:]
Y_disordered=labels[100000:]
del data,labels
# define training and test data sets
X=np.concatenate((X_ordered,X_disordered))
Y=np.concatenate((Y_ordered,Y_disordered))
# pick random data points from ordered and disordered states
# to create the training and test sets
X_train,X_test,Y_train,Y_test=train_test_split(X,Y,train_size=train_to_test_ratio,test_size=1.0-train_to_test_ratio)
print('X_train shape:', X_train.shape)
print('Y_train shape:', Y_train.shape)
print()
print(X_train.shape[0], 'train samples')
print(X_critical.shape[0], 'critical samples')
print(X_test.shape[0], 'test samples')
##### plot a few Ising states
%matplotlib inline
#import ml_style as style
import matplotlib as mpl
import matplotlib.pyplot as plt
#mpl.rcParams.update(style.style)
from mpl_toolkits.axes_grid1 import make_axes_locatable
# set colourbar map
cmap_args=dict(cmap='plasma_r')
# plot states
fig, axarr = plt.subplots(nrows=1, ncols=3)
axarr[0].imshow(X_ordered[20001].reshape(L,L),**cmap_args)
#axarr[0].set_title('$\\mathrm{ordered\\ phase}$',fontsize=16)
axarr[0].set_title('ordered phase',fontsize=16)
axarr[0].tick_params(labelsize=16)
axarr[1].imshow(X_critical[10001].reshape(L,L),**cmap_args)
#axarr[1].set_title('$\\mathrm{critical\\ region}$',fontsize=16)
axarr[1].set_title('critical region',fontsize=16)
axarr[1].tick_params(labelsize=16)
im=axarr[2].imshow(X_disordered[50001].reshape(L,L),**cmap_args)
#axarr[2].set_title('$\\mathrm{disordered\\ phase}$',fontsize=16)
axarr[2].set_title('disordered phase',fontsize=16)
axarr[2].tick_params(labelsize=16)
fig.subplots_adjust(right=2.0)
plt.show()
###Output
_____no_output_____
###Markdown
Random Forests**Hyperparameters**We start by training with Random Forests. As discussed in Sec. VIII of the review, Random Forests are ensemble models. Here we will use the sci-kit learn implementation of random forests. There are two main hyper-parameters that will be important in practice for the performance of the algorithm and the degree to which it overfits/underfits: the number of estimators in the ensemble and the depth of the trees used. The former is controlled by the parameter `n_estimators` whereas the latter (the complexity of the trees used) can be controlled in many distinct ways (`min_samples_split`, `min_samples_leaf`, `min_impurity_decrease`, etc). For our simple dataset, it does not really make much difference which one of these we use. We will just use the `min_samples_split` parameter that dictates how many samples need to be in each node of the classification tree. The bigger this number, the more coarse our trees and data partitioning.In the code below, we will just consider extremely fine trees (`min_samples_split=2`) or extremely coarse trees (`min_samples_split=10000`). As we will see, both of these tree complexities are sufficient to distinguish the ordered from the disordered samples. The reason for this is that the ordered and disordered phases are distinguished by the magnetization order parameter which is an equally weighted sum of all features. However, if we want to train deep in these simple phases, and then use our algorithm to distinguish critical samples it is crucial we use more complex trees even though the performance on the disordered and ordered phases is indistinguishable for coarse and complex trees.**Out of Bag (OOB) Estimates**For more complicated datasets, how can we choose the right hyperparameters? We can actually make use of one of the most important and interesting features of ensemble methods that employ Bagging: out-of-bag (OOB) estimates. Whenever we bag data, since we are drawing samples with replacement, we can ask how well our classifiers do on data points that are *not used* in the training. This is the out-of-bag prediction error and plays a similar role to cross-validation error in other ML methods. Since this is the best proxy for out-of-sample prediction, we choose hyperparameters to minimize the out-of-bag error.
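As a quick illustrative aside (not part of the original analysis), the claim that the magnetization separates the two training classes can be checked directly with the arrays already defined above; the per-sample magnetization is just the equally weighted average of all $40\times 40$ spins.

```python
# per-sample magnetization: equally weighted average over all 1600 spin features
m_ordered = np.mean(X_ordered, axis=1)
m_disordered = np.mean(X_disordered, axis=1)

plt.hist(np.abs(m_ordered), bins=50, alpha=0.5, label='ordered')
plt.hist(np.abs(m_disordered), bins=50, alpha=0.5, label='disordered')
plt.xlabel('$|m|$')
plt.ylabel('counts')
plt.legend()
plt.show()
```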
###Code
# Apply Random Forest
#This is the random forest classifier
from sklearn.ensemble import RandomForestClassifier
#This is the extreme randomized trees
from sklearn.ensemble import ExtraTreesClassifier
# import time to see how performance depends on run time
import time
import warnings
#Comment to turn on warnings
warnings.filterwarnings("ignore")
#We will check
min_estimators = 10
max_estimators = 101
classifer = RandomForestClassifier # BELOW WE WILL CHANGE for the case of extremely randomized forest
n_estimator_range=np.arange(min_estimators, max_estimators, 10)
leaf_size_list=[2,10000]
m=len(n_estimator_range)
n=len(leaf_size_list)
#Allocate Arrays for various quantities
RFC_OOB_accuracy=np.zeros((n,m))
RFC_train_accuracy=np.zeros((n,m))
RFC_test_accuracy=np.zeros((n,m))
RFC_critical_accuracy=np.zeros((n,m))
run_time=np.zeros((n,m))
print_flag=True
for i, leaf_size in enumerate(leaf_size_list):
# Define Random Forest Classifier
myRF_clf = classifer(
n_estimators=min_estimators,
max_depth=None,
min_samples_split=leaf_size, # minimum number of samples required to split an internal node
oob_score=True,
random_state=0,
warm_start=True # this ensures that you add estimators without retraining everything
)
for j, n_estimator in enumerate(n_estimator_range):
print('n_estimators: %i, leaf_size: %i'%(n_estimator,leaf_size))
start_time = time.time()
myRF_clf.set_params(n_estimators=n_estimator)
myRF_clf.fit(X_train, Y_train)
run_time[i,j] = time.time() - start_time
# check accuracy
RFC_train_accuracy[i,j]=myRF_clf.score(X_train,Y_train)
RFC_OOB_accuracy[i,j]=myRF_clf.oob_score_
RFC_test_accuracy[i,j]=myRF_clf.score(X_test,Y_test)
RFC_critical_accuracy[i,j]=myRF_clf.score(X_critical,Y_critical)
if print_flag:
result = (run_time[i,j], RFC_train_accuracy[i,j], RFC_OOB_accuracy[i,j], RFC_test_accuracy[i,j], RFC_critical_accuracy[i,j])
print('{0:<15}{1:<15}{2:<15}{3:<15}{4:<15}'.format("time (s)","train score", "OOB estimate","test score", "critical score"))
print('{0:<15.4f}{1:<15.4f}{2:<15.4f}{3:<15.4f}{4:<15.4f}'.format(*result))
plt.figure()
plt.plot(n_estimator_range,RFC_train_accuracy[1],'--b^',label='Train (coarse)')
plt.plot(n_estimator_range,RFC_test_accuracy[1],'--r^',label='Test (coarse)')
plt.plot(n_estimator_range,RFC_critical_accuracy[1],'--g^',label='Critical (coarse)')
plt.plot(n_estimator_range,RFC_train_accuracy[0],'o-b',label='Train (fine)')
plt.plot(n_estimator_range,RFC_test_accuracy[0],'o-r',label='Test (fine)')
plt.plot(n_estimator_range,RFC_critical_accuracy[0],'o-g',label='Critical (fine)')
#plt.semilogx(lmbdas,train_accuracy_SGD,'*--b',label='SGD train')
plt.xlabel('$N_\mathrm{estimators}$')
plt.ylabel('Accuracy')
lgd=plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.savefig("Ising_RF.pdf",bbox_extra_artists=(lgd,), bbox_inches='tight')
plt.show()
plt.plot(n_estimator_range, run_time[1], '--k^',label='Coarse')
plt.plot(n_estimator_range, run_time[0], 'o-k',label='Fine')
plt.xlabel('$N_\mathrm{estimators}$')
plt.ylabel('Run time (s)')
plt.legend(loc=2)
#plt.savefig("Ising_RF_Runtime.pdf")
plt.show()
###Output
_____no_output_____
###Markdown
Extremely Randomized TreesAs discussed in the main text, the effectiveness of ensemble methods generally increases as the correlations between members of the ensemble decrease. This idea has been leveraged to make methods that introduce even more randomness into the ensemble by randomly choosing features to split on as well as randomly choosing thresholds to split on. See Section 4.3 of Louppe 2014 [arxiv:1407.7502](https://arxiv.org/pdf/1407.7502.pdf). Here we will make use of the scikit-learn class `ExtraTreesClassifier` and we will just rerun what we did above. Since there is extra randomization compared to random forests, one can imagine that the performance on the critical samples will be much worse. Indeed, this is the case.
###Code
#This is the extreme randomized trees
from sklearn.ensemble import ExtraTreesClassifier
# import time to see how performance depends on run time
import time
import warnings
#Comment to turn on warnings
warnings.filterwarnings("ignore")
#We will check
min_estimators = 10
max_estimators = 101
classifer = ExtraTreesClassifier # only changing this
n_estimator_range=np.arange(min_estimators, max_estimators, 10)
leaf_size_list=[2,10000]
m=len(n_estimator_range)
n=len(leaf_size_list)
#Allocate Arrays for various quantities
ETC_OOB_accuracy=np.zeros((n,m))
ETC_train_accuracy=np.zeros((n,m))
ETC_test_accuracy=np.zeros((n,m))
ETC_critical_accuracy=np.zeros((n,m))
run_time=np.zeros((n,m))
print_flag=True
for i, leaf_size in enumerate(leaf_size_list):
# Define Random Forest Classifier
myRF_clf = classifer(
n_estimators=min_estimators,
max_depth=None,
min_samples_split=leaf_size, # minimum number of samples required to split an internal node
oob_score=True,
bootstrap=True,
random_state=0,
warm_start=True # this ensures that you add estimators without retraining everything
)
for j, n_estimator in enumerate(n_estimator_range):
print('n_estimators: %i, leaf_size: %i'%(n_estimator,leaf_size))
start_time = time.time()
myRF_clf.set_params(n_estimators=n_estimator)
myRF_clf.fit(X_train, Y_train)
run_time[i,j] = time.time() - start_time
# check accuracy
ETC_train_accuracy[i,j]=myRF_clf.score(X_train,Y_train)
ETC_OOB_accuracy[i,j]=myRF_clf.oob_score_
ETC_test_accuracy[i,j]=myRF_clf.score(X_test,Y_test)
ETC_critical_accuracy[i,j]=myRF_clf.score(X_critical,Y_critical)
if print_flag:
result = (run_time[i,j], ETC_train_accuracy[i,j], ETC_OOB_accuracy[i,j], ETC_test_accuracy[i,j], ETC_critical_accuracy[i,j])
print('{0:<15}{1:<15}{2:<15}{3:<15}{4:<15}'.format("time (s)","train score", "OOB estimate","test score", "critical score"))
print('{0:<15.4f}{1:<15.4f}{2:<15.4f}{3:<15.4f}{4:<15.4f}'.format(*result))
plt.figure()
plt.plot(n_estimator_range,ETC_train_accuracy[1],'--b^',label='Train (coarse)')
plt.plot(n_estimator_range,ETC_test_accuracy[1],'--r^',label='Test (coarse)')
plt.plot(n_estimator_range,ETC_critical_accuracy[1],'--g^',label='Critical (coarse)')
plt.plot(n_estimator_range,ETC_train_accuracy[0],'o-b',label='Train (fine)')
plt.plot(n_estimator_range,ETC_test_accuracy[0],'o-r',label='Test (fine)')
plt.plot(n_estimator_range,ETC_critical_accuracy[0],'o-g',label='Critical (fine)')
#plt.semilogx(lmbdas,train_accuracy_SGD,'*--b',label='SGD train')
plt.xlabel('$N_\mathrm{estimators}$')
plt.ylabel('Accuracy')
lgd=plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.savefig("Ising_ETC.pdf",bbox_extra_artists=(lgd,), bbox_inches='tight')
plt.show()
plt.plot(n_estimator_range, run_time[1], '--k^',label='Coarse')
plt.plot(n_estimator_range, run_time[0], 'o-k',label='Fine')
plt.xlabel('$N_\mathrm{estimators}$')
plt.ylabel('Run time (s)')
plt.legend(loc=2)
#plt.savefig("Ising_ETC_Runtime.pdf")
plt.show()
###Output
_____no_output_____ |
book/thermochemistry/cea_cantera.ipynb | ###Markdown
Implementing CEA calculations using Cantera
###Code
# this line makes figures interactive in Jupyter notebooks
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import cantera as ct
from pint import UnitRegistry
ureg = UnitRegistry()
Q_ = ureg.Quantity
# for convenience:
def to_si(quant):
'''Converts a Pint Quantity to magnitude at base SI units.
'''
return quant.to_base_units().magnitude
# these lines are only for helping improve the display
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('pdf', 'png')
plt.rcParams['figure.dpi']= 150
plt.rcParams['savefig.dpi'] = 150
###Output
_____no_output_____
###Markdown
[CEA](https://www1.grc.nasa.gov/research-and-engineering/ceaweb/) (Chemical Equilibrium with Applications) is a classic NASA software tool developed for analyzing combustion and rocket propulsion problems. It was written in Fortran, but is available to run via a [web interface](https://cearun.grc.nasa.gov/index.html). Given rocket propellants, CEA can not only determine the combustion chamber equilibrium composition and temperature, but also calculate important rocket performance parameters. Although CEA is extremely useful, it cannot (easily) be used within Python. Plus, we might want to perform these calculations as part of a larger Python-based workflow. [Cantera](https://cantera.org/) is a modern software library for solving problems in chemical kinetics, thermodynamics, and transport that offers a Python interface. Cantera natively supports phase and chemical equilibrium solvers. In addition, it can simulate finite-rate chemical reactions. This article examines how we can use Cantera and Python to perform the calculations of CEA. Fixed temperature and pressureGiven a fixed temperature and pressure, determine the equilibrium composition of chemical species. This problem is relevant to an isothermal process, or where temperature is a design variable, such as in nuclear thermal or electrothermal rockets. For example, say we have gaseous hydrazine (N2H4) as a propellant, with a chamber temperature of 5000 K and pressure of 50 psia. For this system, determine the equilibrium composition. In [CEA](https://cearun.grc.nasa.gov/), this is a `tp` problem, or fixed temperature and pressure problem. We should expect that, at such high temperatures, the equilibrium state will have mostly one- and two-atom molecules, based on the elements present: N2, H2, H, N, and NH. The CEA plaintext input file looks like:```textprob tp p,psia= 50 t,k= 5000reacname N2H4 mol 1.0output siunitsend```and the output is (with the repeated input removed):```text******************************************************************************* NASA-GLENN CHEMICAL EQUILIBRIUM PROGRAM CEA2, FEBRUARY 5, 2004 BY BONNIE MCBRIDE AND SANFORD GORDON REFS: NASA RP-1311, PART I, 1994 AND NASA RP-1311, PART II, 1996 ******************************************************************************* THERMODYNAMIC EQUILIBRIUM PROPERTIES AT ASSIGNED TEMPERATURE AND PRESSURE REACTANT WT FRACTION ENERGY TEMP (SEE NOTE) KJ/KG-MOL K NAME N2H4 1.0000000 0.000 0.000 O/F= 0.00000 %FUEL= 0.000000 R,EQ.RATIO= 0.000000 PHI,EQ.RATIO= 0.000000 THERMODYNAMIC PROPERTIES P, BAR 3.4474 T, K 5000.00 RHO, KG/CU M 5.5368-2 H, KJ/KG 42058.0 U, KJ/KG 35831.8 G, KJ/KG -103744.4 S, KJ/(KG)(K) 29.1605 M, (1/n) 6.677 (dLV/dLP)t -1.04028 (dLV/dLT)p 1.4750 Cp, KJ/(KG)(K) 11.1350 GAMMAs 1.2548 SON VEL,M/SEC 2795.1 MOLE FRACTIONS *H 0.74177 *H2 0.04573 *N 0.00806 *NH 0.00021 *N2 0.20422```So, CEA not only provides the equilibrium composition in terms of mole fraction ($X_i$), but also the mean molecular weight of the mixture $MW$; thermodynamic properties and derivatives: density $\rho$, enthalpy $h$, entropy $s$, $\left(\partial \log V / \partial \log P\right)_T$, $\left(\partial \log V / \partial \log T\right)_P$, specific heat $C_p = (\partial h / \partial T)_P$, the ratio of specific heats ($\gamma$), and the sonic velocity (i.e., speed of sound) $a$. We can perform the same equilibrium calculation in Cantera, but we need to construct an object that contains the appropriate chemical species. Cantera actually comes with a NASA database of gaseous species thermodynamic models, in the `nasa_gas.cti` file.
###Code
# extract all species in the NASA database
full_species = {S.name: S for S in ct.Species.listFromFile('nasa_gas.cti')}
# extract only the relevant species
species = [full_species[S] for S in (
'N2H4', 'N2', 'H2', 'H', 'N', 'NH'
)]
gas = ct.Solution(thermo='IdealGas', species=species)
temperature = Q_(5000, 'K')
pressure = Q_(50, 'psi')
gas.TPX = to_si(temperature), to_si(pressure), 'N2H4:1.0'
gas.equilibrate('TP')
gas()
###Output
temperature 5000 K
pressure 3.4474e+05 Pa
density 0.055346 kg/m^3
mean mol. weight 6.6743 kg/kmol
phase of matter gas
1 kg 1 kmol
--------------- ---------------
enthalpy 4.2088e+07 2.8091e+08 J
internal energy 3.586e+07 2.3934e+08 J
entropy 29182 1.9477e+05 J/K
Gibbs function -1.0382e+08 -6.9294e+08 J
heat capacity c_p 3779.4 25225 J/K
heat capacity c_v 2533.6 16910 J/K
mass frac. Y mole frac. X chem. pot. / RT
--------------- --------------- ---------------
N2 0.8567 0.20411 -30.731
H2 0.013688 0.045315 -24.65
H 0.1121 0.74225 -12.325
N 0.017042 0.0081204 -15.365
NH 0.00046864 0.00020831 -27.691
[ +1 minor] 2.3328e-15 4.8586e-16
###Markdown
Comparing the results from CEA and Cantera, we see very good agreement between (most) thermodynamic properties and the species mole fractions. But, the heat capacity $C_p$ appears **very** different, and if we calculate the specific heat ratio,$$\gamma = \frac{C_p}{C_v} \;,$$we will see it also differs quite substantially:
###Code
gamma_ct = gas.cp_mole / gas.cv_mole
print(f'Cantera specific heat ratio: {gamma_ct: .4f}')
###Output
Cantera specific heat ratio: 1.4917
###Markdown
🤯Well, 1.492 is quite different from 1.255, and this would lead to substantially different rocket performance parameters that depend on $\gamma$.So, what's going on?Well, the key lies in examining the actual definition of specific heat, following Gordon and McBride {cite}`cea_analysis`:$$C_p = \left( \frac{\partial h}{\partial T} \right)_P \;.$$In this derivative, while enthalpy and temperature change, and pressure is held constant, what happens to the species composition? We could assume the composition is "frozen" and remains fixed,or that the composition adjusts to a new equilibrium instantaneously.The "equilibrium" specific heat then has two components, a frozen contribution and reaction contribution:$$\begin{align}C_{p,e} &= C_{p,f} + C_{p,r} \\&= \sum_{j=1}^{N_s} n_j C_{p,j}^{\circ} + \sum_{j=1}^{N_g} n_j \frac{H_j^{\circ}}{T} \left( \frac{\partial \log n_j}{\partial \log T}\right)_P + \sum_{j=N_g+1}^{N_s} \frac{H_j^{\circ}}{T} \left( \frac{\partial n_j}{\partial \log T}\right)_P \;,\end{align}$$where $N_s$ is the number of species and $N_g$ is the number of gas-phase species (so that $N_g + 1$ refers to the first condensed-phase species, if present).But, Cantera defines quantities like specific heat (and other thermodynamic quantities based on derivatives)at fixed composition, meaning Cantera's specific heat is just the frozen contribution $C_{p,f}$. We can obtain the full equilibrium-based value of specific heat, but it requires determining additional thermodynamic derivatives. Following Gordon and McBride {cite}`cea_analysis` again, we can obtain this system of linear equations:$$\begin{align}\sum_{i=1}^{N_e} \sum_{j=1}^{N_g} a_{kj} a_{ij} n_j \left( \frac{\partial \pi_i}{\partial \log T}\right)_P + \sum_{j=N_g+1}^{N_s} a_{ij} \left( \frac{\partial n_j}{\partial \log T}\right)_P + \sum_{j=1}^{N_g} a_{kj} n_j \left( \frac{\partial \log n}{\partial \log T} \right)_P &= -\sum_{j=1}^{N_g} \frac{a_{kj} n_j H_j^{\circ}}{RT} \;, \quad k=1, \ldots, {N_e} \\\sum_{i=1}^{N_e} a_{ij} \left( \frac{\partial \pi_i}{\partial \log T}\right)_P &= - \frac{H_j^{\circ}}{RT} \;, \quad j = N_g + 1, \ldots, N_s \\\sum_{i=1}^{N_e} \sum_{j=1}^{N_g} a_{ij} n_j \left( \frac{\partial \pi_i}{\partial \log T}\right)_P &= -\sum_{j=1}^{N_g} \frac{n_j H_j^{\circ}}{RT} \\\sum_{i=1}^{N_e} \sum_{j=1}^{N_g} a_{kj} a_{ij} n_j \left( \frac{\partial \pi_i}{\partial \log P}\right)_T + \sum_{j=N_g + 1}^{N_s} a_{kj} \left( \frac{\partial n_j}{\partial \log P}\right)_T + \sum_{j=1}^{N_g} a_{ij} n_j \left( \frac{\partial \log n}{\partial \log P}\right)_T &= \sum_{j=1}^{N_g} a_{kj} n_j \;, \quad k=1, \ldots, {N_e} \\\sum_{i=1}^{N_e} a_{ij} \left( \frac{\partial \pi_i}{\partial \log P}\right)_T &= 0 \;, \quad j = N_g + 1, \ldots, N_s \\\sum_{i=1}^{N_e} \sum_{j=1}^{N_g} a_{ij} n_j \left( \frac{\partial \pi_i}{\partial \log P}\right)_T &= \sum_{j=1}^{N_g} n_j \;,\end{align}$$where ${N_e}$ is the number of elements. As a first pass, let's assume that no condensed species are present. (This is fine for conditions in the combustion chamber, but for some systems the rapid expansion in the nozzle may drop below the dew point for some species.)Then, the unknowns in that system of equations are $ \left( \frac{\partial \pi_i}{\partial \log T}\right)_P$,$ \left( \frac{\partial \log n}{\partial \log T}\right)_P$,$ \left( \frac{\partial \pi_i}{\partial \log P}\right)_T$,$ \left( \frac{\partial \log n}{\partial \log P}\right)_T$,with a total of $2 \times N_e + 2$ unknowns. 
For the current system of N2H4, $N_e = 2$ and thus there are six unknowns. Since this is a linear system of equations, we can solve it using linear algebra, via NumPy's `linalg.solve` function. Let's set up a function to solve this system:
###Code
def get_thermo_derivatives(gas):
'''Gets thermo derivatives based on shifting equilibrium.
'''
# unknowns for system with no condensed species:
# dpi_i_dlogT_P (# elements)
# dlogn_dlogT_P
# dpi_i_dlogP_T (# elements)
# dlogn_dlogP_T
# total unknowns: 2*n_elements + 2
num_var = 2 * gas.n_elements + 2
coeff_matrix = np.zeros((num_var, num_var))
right_hand_side = np.zeros(num_var)
tot_moles = 1.0 / gas.mean_molecular_weight
moles = gas.X * tot_moles
condensed = False
# indices
idx_dpi_dlogT_P = 0
idx_dlogn_dlogT_P = idx_dpi_dlogT_P + gas.n_elements
idx_dpi_dlogP_T = idx_dlogn_dlogT_P + 1
idx_dlogn_dlogP_T = idx_dpi_dlogP_T + gas.n_elements
# construct matrix of elemental stoichiometric coefficients
stoich_coeffs = np.zeros((gas.n_elements, gas.n_species))
for i, elem in enumerate(gas.element_names):
for j, sp in enumerate(gas.species_names):
stoich_coeffs[i,j] = gas.n_atoms(sp, elem)
# equations for derivatives with respect to temperature
# first n_elements equations
for k in range(gas.n_elements):
for i in range(gas.n_elements):
coeff_matrix[k,i] = np.sum(stoich_coeffs[k,:] * stoich_coeffs[i,:] * moles)
coeff_matrix[k, gas.n_elements] = np.sum(stoich_coeffs[k,:] * moles)
right_hand_side[k] = -np.sum(stoich_coeffs[k,:] * moles * gas.standard_enthalpies_RT)
# skip equation relevant to condensed species
for i in range(gas.n_elements):
coeff_matrix[gas.n_elements, i] = np.sum(stoich_coeffs[i, :] * moles)
right_hand_side[gas.n_elements] = -np.sum(moles * gas.standard_enthalpies_RT)
# equations for derivatives with respect to pressure
for k in range(gas.n_elements):
for i in range(gas.n_elements):
coeff_matrix[gas.n_elements+1+k,gas.n_elements+1+i] = np.sum(stoich_coeffs[k,:] * stoich_coeffs[i,:] * moles)
coeff_matrix[gas.n_elements+1+k, 2*gas.n_elements+1] = np.sum(stoich_coeffs[k,:] * moles)
right_hand_side[gas.n_elements+1+k] = np.sum(stoich_coeffs[k,:] * moles)
for i in range(gas.n_elements):
coeff_matrix[2*gas.n_elements+1, gas.n_elements+1+i] = np.sum(stoich_coeffs[i, :] * moles)
right_hand_side[2*gas.n_elements+1] = np.sum(moles)
derivs = np.linalg.solve(coeff_matrix, right_hand_side)
dpi_dlogT_P = derivs[idx_dpi_dlogT_P : idx_dpi_dlogT_P + gas.n_elements]
dlogn_dlogT_P = derivs[idx_dlogn_dlogT_P]
dpi_dlogP_T = derivs[idx_dpi_dlogP_T]
dlogn_dlogP_T = derivs[idx_dlogn_dlogP_T]
# dpi_dlogP_T is not used
return dpi_dlogT_P, dlogn_dlogT_P, dlogn_dlogP_T
###Output
_____no_output_____
###Markdown
Using these derivatives, we can then calculate the specific heat, other relevant derivatives, and the ratio of specific heats:$$\begin{align}\frac{C_{p,e}}{R} &= \sum_{i=1}^{N_e} \left( \sum_{j=1}^{N_g} \frac{a_{ij} n_j H_j^{\circ}}{RT} \right) \left( \frac{\partial \pi_i}{\partial \log T}\right)_P + \sum_{j=N_g+1}^{N_s} \frac{H_j^{\circ}}{RT} \left( \frac{\partial n_j}{\partial \log T}\right)_P \\&+ \left( \sum_{j=1}^{N_g} \frac{n_j H_j^{\circ}}{RT} \right) \left( \frac{\partial \log n}{\partial \log T}\right)_P + \sum_{j=1}^{N_s} \frac{n_j C_{p,j}^{\circ}}{R} + \sum_{j=1}^{N_g} \frac{n_j (H_j^{\circ})^2}{R^2 T^2} \\\left( \frac{\partial \log V}{\partial \log T}\right)_P &= 1 + \left( \frac{\partial \log n}{\partial \log T}\right)_P \\\left( \frac{\partial \log V}{\partial \log P}\right)_T &= -1 + \left( \frac{\partial \log n}{\partial \log P}\right)_T \;.\end{align}$$The ratio of specific heats shows up via the speed of sound:$$\begin{align}a^2 &= \left( \frac{\partial P}{\partial \rho}\right)_s = -\frac{P}{\rho} \left( \frac{\partial \log P}{\partial \log V} \right)_s \\&= n R T \gamma_s\end{align} \;,$$where the ratio of specific heats is$$\gamma_s = \left( \frac{\partial \log P}{\partial \log \rho} \right)_s = - \frac{\gamma}{ \left( \frac{\partial \log V}{\partial \log P}\right)_T}$$and $$\gamma \equiv \frac{C_p}{C_v} \;.$$The constant volume specific heat is$$C_v \equiv \left( \frac{\partial u}{\partial T}\right)_V = C_p + \frac{ \frac{PV}{T} \left( \frac{\partial \log V}{\partial \log T}\right)_P^2}{ \left( \frac{\partial \log V}{\partial \log P}\right)_T} \;.$$
###Code
def get_thermo_properties(gas, dpi_dlogT_P, dlogn_dlogT_P, dlogn_dlogP_T):
'''Calculates specific heats, volume derivatives, and specific heat ratio.
Based on shifting equilibrium for mixtures.
'''
tot_moles = 1.0 / gas.mean_molecular_weight
moles = gas.X * tot_moles
# construct matrix of elemental stoichiometric coefficients
stoich_coeffs = np.zeros((gas.n_elements, gas.n_species))
for i, elem in enumerate(gas.element_names):
for j, sp in enumerate(gas.species_names):
stoich_coeffs[i,j] = gas.n_atoms(sp, elem)
spec_heat_p = ct.gas_constant * (
np.sum([dpi_dlogT_P[i] *
np.sum(stoich_coeffs[i,:] * moles * gas.standard_enthalpies_RT)
for i in range(gas.n_elements)
]) +
np.sum(moles * gas.standard_enthalpies_RT) * dlogn_dlogT_P +
np.sum(moles * gas.standard_cp_R) +
np.sum(moles * gas.standard_enthalpies_RT**2)
)
dlogV_dlogT_P = 1 + dlogn_dlogT_P
dlogV_dlogP_T = -1 + dlogn_dlogP_T
spec_heat_v = (
spec_heat_p + gas.P * gas.v / gas.T * dlogV_dlogT_P**2 / dlogV_dlogP_T
)
gamma = spec_heat_p / spec_heat_v
gamma_s = -gamma/dlogV_dlogP_T
return dlogV_dlogT_P, dlogV_dlogP_T, spec_heat_p, gamma_s
derivs = get_thermo_derivatives(gas)
dlogV_dlogT_P, dlogV_dlogP_T, cp, gamma_s = get_thermo_properties(
gas, derivs[0], derivs[1], derivs[2]
)
print(f'Cp = {cp: .2f} J/(K kg)')
print(f'(d log V/d log P)_T = {dlogV_dlogP_T: .4f}')
print(f'(d log V/d log T)_P = {dlogV_dlogT_P: .4f}')
print(f'gamma_s = {gamma_s: .4f}')
speed_sound = np.sqrt(ct.gas_constant * gas.T * gamma_s / gas.mean_molecular_weight)
print(f'Speed of sound = {speed_sound: .1f} m/s')
###Output
Cp = 11104.47 J/(K kg)
(d log V/d log P)_T = -1.0400
(d log V/d log T)_P = 1.4722
gamma_s = 1.2549
Speed of sound = 2795.8 m/s
###Markdown
🎉 Success! These calculations agree very closely with those from CEA. Adiabatic combustionCEA also supports calculating the chamber temperature (along with composition) for adiabatic combustion, both with gaseous and liquid propellants. Cantera's equilibrium solver that we used above handles constant enthlapy and pressure equilibrium (`HP`) just fine with gaseous reactants, but how to CEA has a database of reactants with assigned enthalpies, as described by Gordon and McBride {cite}`cea_analysis`:- noncryogenic reactants are represented via enthalpy of formation (i.e., heat of formation) at the standard reference temperature of 298.15 K- cryogenic liquid reactants are represented via enthalpies given at their boiling points, which represent the standard enthalpy of formation minus the sensible heat (between 298.15 K and the boiling point), the heat of vaporization at the boiling point, and also the difference in enthalpy due to real gas effects at the boiling point.For example, CEA's thermodynamic database {cite}`NASA_thermo` represents liquid dinitrogen tetroxide (N2O4), which is an oxidizer used with hydrazine, with```textN2O4(L) Dinitrogen tetroxide. McBride,1996 pp85,93. 0 g 6/96 N 2.00O 4.00 0.00 0.00 0.00 1 92.0110000 -17549.000 298.150 0.0000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.000```while cryogenic liquid hydrogen is given with```textH2(L) Hydrogen. McBride,1996 pp84,92. 0 g 6/96 H 2.00 0.00 0.00 0.00 0.00 1 2.0158800 -9012.000 20.270 0.0000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.000```The full format of species thermodynamic entries is given in the CEA User Manual{cite}`cea_manual`, but for these reactants the key information includes- species name, given in the first line- elemental composition, given in a fixed column format in the second line - phase, given as an integer in the third-to-last entry of the second line (zero for gases, nonzero for condensed phases)- molecular weight, in the second-to-last entry of the second line- enthalpy at the boiling point, in J/mol, at the end of the second line- boiling point temperature, in K, at the beginning of the third lineIn general, both CEA and Cantera represent the thermodynamic properties of gaseous andcondensed species via more-sophisticated polynomial fits across multiple ranges of temperatures,but these problems only require initial enthalpy of reactants. Let's consider the Space Shuttle main engine (SSME), which used cryogenic liquid hydrogen and liquid oxygen at an oxidizer to fuel ratio of 6.0 and a chamber pressure of around 3000 psia.This is a constant enthalpy and pressure problem (`hp`) in CEA: ``` NASA-GLENN CHEMICAL EQUILIBRIUM PROGRAM CEA2, FEBRUARY 5, 2004 BY BONNIE MCBRIDE AND SANFORD GORDON REFS: NASA RP-1311, PART I, 1994 AND NASA RP-1311, PART II, 1996 ******************************************************************************* CEA analysis performed on Wed 27-Jan-2021 13:09:27 Problem Type: "Assigned Enthalpy and Pressure" prob case=_______________3446 hp Pressure (1 value): p,psia= 3000 Oxidizer/Fuel Wt. ratio (1 value): o/f= 6.0 You selected the following fuels and oxidizers: reac fuel H2(L) wt%=100.0000 oxid O2(L) wt%=100.0000 You selected these options for output: short version of output output short Proportions of any products will be expressed as Mole Fractions. Heat will be expressed as siunits output siunits Input prepared by this script:prepareInputFile.cgi IMPORTANT: The following line is the end of your CEA input file! 
end THERMODYNAMIC EQUILIBRIUM COMBUSTION PROPERTIES AT ASSIGNED PRESSURES CASE = _______________ REACTANT WT FRACTION ENERGY TEMP (SEE NOTE) KJ/KG-MOL K FUEL H2(L) 1.0000000 -9012.000 20.270 OXIDANT O2(L) 1.0000000 -12979.000 90.170 O/F= 6.00000 %FUEL= 14.285714 R,EQ.RATIO= 1.322780 PHI,EQ.RATIO= 1.322780 THERMODYNAMIC PROPERTIES P, BAR 206.84 T, K 3598.76 RHO, KG/CU M 9.4113 0 H, KJ/KG -986.31 U, KJ/KG -3184.12 G, KJ/KG -62768.7 S, KJ/(KG)(K) 17.1677 M, (1/n) 13.614 (dLV/dLP)t -1.01897 (dLV/dLT)p 1.3291 Cp, KJ/(KG)(K) 7.3140 GAMMAs 1.1475 SON VEL,M/SEC 1588.1 MOLE FRACTIONS *H 0.02543 HO2 0.00003 *H2 0.24740 H2O 0.68635 H2O2 0.00002 *O 0.00202 *OH 0.03659 *O2 0.00215 * THERMODYNAMIC PROPERTIES FITTED TO 20000.K NOTE. WEIGHT FRACTION OF FUEL IN TOTAL FUELS AND OF OXIDANT IN TOTAL OXIDANTS``` The key results include the chamber temperature $T_c$ of 3598.8 K, the specific heat ratio $\gamma_s$ of 1.148, and the mean molecular weight of 13.614 kg/kmol. To perform this calculation using Cantera, we need the reactant information:```H2(L) Hydrogen. McBride,1996 pp84,92. 0 g 6/96 H 2.00 0.00 0.00 0.00 0.00 1 2.0158800 -9012.000 20.270 0.0000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.000O2(L) Oxygen. McBride,1996 pp85,93. 0 g 6/96 O 2.00 0.00 0.00 0.00 0.00 1 31.9988000 -12979.000 90.170 0.0000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.000```Cantera has fairly sophisticated ways of representing the thermodynamics of condensed [phases](https://cantera.org/documentation/dev/sphinx/html/yaml/phases.html#sec-yaml-fixed-stoichiometry), but in this case we actually do not need that—we just need a way of easily representing the elemental composition and enthalpy of the reactants, which is the data needed for constraining the equilibrium solver. So, we can actually use the [ideal gas thermodynamic model](https://cantera.org/documentation/dev/doxygen/html/d7/dfa/classCantera_1_1IdealGasPhase.html#details) (`ideal-gas`) for the phase. For each species, we can use the [constant heat capacity](https://cantera.org/science/science-species.html#constant-heat-capacity) (`constant-cp`) thermodynamic model, with the reference temperature set to the boiling point (for the cryogenic liquid propellants in this case; for non-cryogenic reactants, this would be 298.15 K), the reference enthalpy set to the assigned value, and the reference specific heat and entropy set to zero. I've constructed a representative Cantera [YAML input file](https://cantera.org/tutorials/yaml/defining-phases.html) that describes separate phases for liquid hydrogen and liquid oxygen.⚠️ Warning ⚠️ these phases are **only** valid at the specific cryogenic temperature specified, and should only be used for this specific purpose (as reactants).
###Code
h2o2_filename = 'h2o2_react.yaml'
print('Contents of ' + h2o2_filename + ':\n')
with open(h2o2_filename) as f:
file_contents = f.read()
print(file_contents)
###Output
Contents of h2o2_react.yaml:
phases:
- name: liquid_hydrogen
thermo: ideal-gas
elements: [H]
species: [H2(L)]
- name: liquid_oxygen
thermo: ideal-gas
elements: [O]
species: [O2(L)]
species:
- name: H2(L)
composition: {H: 2}
thermo:
model: constant-cp
T0: 20.270
h0: -9012.0 J/mol
s0: 0.0
cp0: 0.0
- name: O2(L)
composition: {O: 2}
thermo:
model: constant-cp
T0: 90.170
h0: -12979.0 J/mol
s0: 0.0
cp0: 0.0
###Markdown
To set up the system in Cantera, we create separate `Solution` objects for the liquid hydrogen and oxygen phases, and also a `Solution` containing the gas-phase products (actually, this could also include condensed species as well!). Then, create a `Mixture` that contains all three objects, and specify the initial moles of hydrogen and oxygen based on the oxidizer-to-fuel ratio:
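The conversion from the mass-based oxidizer-to-fuel ratio to mole fractions of the two reactant streams, which the `molar_ratio`, `moles_ox`, and `moles_f` lines below implement, follows directly from the molecular weights:
$$\frac{n_{\text{O}_2}}{n_{\text{H}_2}} = \left(\frac{O}{F}\right) \frac{MW_{\text{H}_2}}{MW_{\text{O}_2}} \;, \qquad x_{\text{O}_2} = \frac{n_{\text{O}_2}/n_{\text{H}_2}}{1 + n_{\text{O}_2}/n_{\text{H}_2}} \;, \qquad x_{\text{H}_2} = 1 - x_{\text{O}_2} \;.$$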
###Code
o_f_ratio = 6.0
temperature_h2 = Q_(20.270, 'K')
temperature_o2 = Q_(90.170, 'K')
pressure_chamber = Q_(3000, 'psi')
h2 = ct.Solution(h2o2_filename, 'liquid_hydrogen')
h2.TP = to_si(temperature_h2), to_si(pressure_chamber)
o2 = ct.Solution(h2o2_filename, 'liquid_oxygen')
o2.TP = to_si(temperature_o2), to_si(pressure_chamber)
molar_ratio = o_f_ratio / (o2.mean_molecular_weight / h2.mean_molecular_weight)
moles_ox = molar_ratio / (1 + molar_ratio)
moles_f = 1 - moles_ox
gas2 = ct.Solution('nasa_h2o2.yaml', 'gas')
# create a mixture of the liquid phases with the gas-phase model,
# with the number of moles for fuel and oxidizer based on
# the O/F ratio
mix = ct.Mixture([(h2, moles_f), (o2, moles_ox), (gas2, 0)])
# Solve for the equilibrium state, at constant enthalpy and pressure
mix.equilibrate('HP')
gas2()
derivs = get_thermo_derivatives(gas2)
dlogV_dlogT_P, dlogV_dlogP_T, cp, gamma_s = get_thermo_properties(
gas2, derivs[0], derivs[1], derivs[2]
)
print(f'Cp = {cp: .2f} J/(K kg)')
print(f'(d log V/d log P)_T = {dlogV_dlogP_T: .4f}')
print(f'(d log V/d log T)_P = {dlogV_dlogT_P: .4f}')
print(f'gamma_s = {gamma_s: .4f}')
speed_sound = np.sqrt(ct.gas_constant * gas2.T * gamma_s / gas2.mean_molecular_weight)
print(f'Speed of sound = {speed_sound: .1f} m/s')
###Output
gas:
temperature 3597.5 K
pressure 2.0684e+07 Pa
density 9.4137 kg/m^3
mean mol. weight 13.613 kg/kmol
phase of matter gas
1 kg 1 kmol
--------------- ---------------
enthalpy -9.8628e+05 -1.3426e+07 J
internal energy -3.1835e+06 -4.3338e+07 J
entropy 17175 2.3381e+05 J/K
Gibbs function -6.2775e+07 -8.5457e+08 J
heat capacity c_p 3795.4 51668 J/K
heat capacity c_v 3184.7 43354 J/K
mass frac. Y mole frac. X chem. pot. / RT
--------------- --------------- ---------------
H 0.0018901 0.025526 -8.7917
HO2 8.3393e-05 3.4395e-05 -40.623
H2 0.036632 0.24736 -17.583
H2O 0.908 0.68615 -33.499
H2O2 4.23e-05 1.693e-05 -49.415
O 0.0023965 0.0020392 -15.916
OH 0.045862 0.036711 -24.707
O2 0.0050891 0.0021651 -31.831
O3 2.257e-08 6.4014e-09 -47.747
Cp = 7330.18 J/(K kg)
(d log V/d log P)_T = -1.0190
(d log V/d log T)_P = 1.3305
gamma_s = 1.1474
Speed of sound = 1587.8 m/s
###Markdown
🎉 Success! We get an equilibrium temperature of 3597.5 K, which is just 0.036% off the value calculated by CEA.Similarly, the ratios of specific heats match within 0.009%, and the speed of sounds within 0.019%. Rocket calculationsCEA also calculates performance quantities specific to rockets, such as the effective velocity (C-star, $c^*$), thrust coefficient ($C_F$), and specific impulse ($I_{\text{sp}}$).For the above example, but choosing the `rocket` problem and specifying a nozzle area ratioof 68.8, CEA provides this output:```******************************************************************************* NASA-GLENN CHEMICAL EQUILIBRIUM PROGRAM CEA2, FEBRUARY 5, 2004 BY BONNIE MCBRIDE AND SANFORD GORDON REFS: NASA RP-1311, PART I, 1994 AND NASA RP-1311, PART II, 1996 ******************************************************************************* Problem Type: "Rocket" (Infinite Area Combustor) prob case=_______________3446 ro equilibrium Pressure (1 value): p,psia= 3000 Supersonic Area Ratio (1 value): supar= 68.8 Oxidizer/Fuel Wt. ratio (1 value): o/f= 6.0 You selected the following fuels and oxidizers: reac fuel H2(L) wt%=100.0000 oxid O2(L) wt%=100.0000 output short output siunits end THEORETICAL ROCKET PERFORMANCE ASSUMING EQUILIBRIUM COMPOSITION DURING EXPANSION FROM INFINITE AREA COMBUSTOR Pin = 3000.0 PSIA CASE = _______________ REACTANT WT FRACTION ENERGY TEMP (SEE NOTE) KJ/KG-MOL K FUEL H2(L) 1.0000000 -9012.000 20.270 OXIDANT O2(L) 1.0000000 -12979.000 90.170 O/F= 6.00000 %FUEL= 14.285714 R,EQ.RATIO= 1.322780 PHI,EQ.RATIO= 1.322780 CHAMBER THROAT EXIT Pinf/P 1.0000 1.7403 961.12 P, BAR 206.84 118.85 0.21521 T, K 3598.76 3381.67 1233.84 RHO, KG/CU M 9.4113 0 5.8080 0 2.9602-2 H, KJ/KG -986.31 -2161.66 -10544.8 U, KJ/KG -3184.12 -4208.00 -11271.8 G, KJ/KG -62768.7 -60217.2 -31727.0 S, KJ/(KG)(K) 17.1677 17.1677 17.1677 M, (1/n) 13.614 13.740 14.111 (dLV/dLP)t -1.01897 -1.01412 -1.00000 (dLV/dLT)p 1.3291 1.2605 1.0000 Cp, KJ/(KG)(K) 7.3140 6.6953 2.9097 GAMMAs 1.1475 1.1487 1.2539 SON VEL,M/SEC 1588.1 1533.2 954.8 MACH NUMBER 0.000 1.000 4.579 PERFORMANCE PARAMETERS Ae/At 1.0000 68.800 CSTAR, M/SEC 2322.8 2322.8 CF 0.6601 1.8823 Ivac, M/SEC 2867.9 4538.6 Isp, M/SEC 1533.2 4372.3 MOLE FRACTIONS *H 0.02543 0.02034 0.00000 HO2 0.00003 0.00002 0.00000 *H2 0.24740 0.24494 0.24402 H2O 0.68635 0.70506 0.75598 H2O2 0.00002 0.00001 0.00000 *O 0.00202 0.00123 0.00000 *OH 0.03659 0.02704 0.00000 *O2 0.00215 0.00137 0.00000 * THERMODYNAMIC PROPERTIES FITTED TO 20000.K NOTE. WEIGHT FRACTION OF FUEL IN TOTAL FUELS AND OF OXIDANT IN TOTAL OXIDANTS```The key properties include:- C-star of 2322.8 m/s (based on combustion chamber conditions)- throat pressure of 118.85 bar and temperature of 3381.67 K- exit pressure of 0.21521 bar, temperature of 1233.84 K- at the nozzle exit, thrust coefficient = 1.8823, Isp = 4372.3 m/s
###Code
area_ratio = 68.8
pressure_throat_cea = Q_(118.85, 'bar').to('Pa')
temperature_throat_cea = 3381.67
pressure_exit_cea = Q_(0.21521, 'bar').to('Pa')
temperature_exit_cea = 1233.84
c_star_cea = 2322.8
thrust_coeff_cea = 1.8823
specific_impulse_cea = 4372.3
specific_impulse_vac_cea = 4538.6
###Output
_____no_output_____
###Markdown
C-starWe can calculate $c^*$ directly using the combustion chamber state already obtained with Cantera:
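The helper defined in the next cell evaluates the standard ideal-rocket expression for characteristic velocity, written here with the shifting-equilibrium $\gamma_s$ and the chamber temperature and mean molecular weight:
$$c^* = \sqrt{\frac{\mathcal{R} T_c}{\gamma_s \overline{M}}} \left( \frac{2}{\gamma_s + 1} \right)^{-\frac{\gamma_s + 1}{2(\gamma_s - 1)}} \;.$$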
###Code
def calculate_c_star(gamma, temperature, molecular_weight):
return (
np.sqrt(ct.gas_constant * temperature / (molecular_weight * gamma)) *
np.power(2 / (gamma + 1), -(gamma + 1) / (2*(gamma - 1)))
)
entropy_chamber = gas2.s
enthalpy_chamber = gas2.enthalpy_mass
mole_fractions_chamber = gas2.X
gamma_chamber = gamma_s
c_star = calculate_c_star(gamma_chamber, gas2.T, gas2.mean_molecular_weight)
print(f'c-star: {c_star: .1f} m/s')
print('Error in c-star: '
f'{100*np.abs(c_star - c_star_cea)/c_star_cea: .3e} %'
)
###Output
c-star: 2323.0 m/s
Error in c-star: 7.130e-03 %
###Markdown
Throat conditionsThe nozzle flow from the combustion chamber to the throat is isentropic, and at the throat the flow velocity matches the sonic velocity. We need to iterate to determine the pressure and other properties.From 1D isentropic flow assumptions, the equation$$\frac{p_c}{p_t} = \left( \frac{\gamma_s + 1}{2} \right)^{\frac{\gamma_s}{\gamma_s - 1}}$$applies exactly, but only if $\gamma_s$ remains constant from the chamber to the throat.This works for the frozen-flow assumption, but not for shifting equilibrium, where the gas compositionwill adjust with changing pressure and temperature.We can use this equation to get a first estimate of throat pressure, $p_{t,1}$, then equilibratethe gas mixture at $s_c$ (chamber entropy) and $p_{t,1}$.The throat state is correct and converged when the velocity is sonic (i.e., equals the speed of sound).CEA checks for convergence using$$\left| \frac{u_t^2 - a_t^2}{u_t^2} \right| = \left| 1 - \frac{1}{M_t^2} \right| \leq 0.4 \cdot 10^{-4} \;,$$where $$\begin{align}M_t &= \frac{u_t}{a_t} \\u_t &= \sqrt{2 \left( h_c - h_t \right) } \\a_t &= \sqrt{ \gamma_s R T_t } \;,\end{align}$$using the properties at the current iteration.If the solution is not converged, we get an improved estimate for pressure:$$p_{t, k+1} = \left( p \frac{1 + \gamma_s M^2}{1 + \gamma_s} \right)_{t, k} \;,$$where $k$ is the iteration.
###Code
gas_throat = ct.Solution('nasa_h2o2.yaml', 'gas')
pressure_throat = pressure_chamber / np.power(
(gamma_chamber + 1) / 2., gamma_chamber / (gamma_chamber - 1)
)
# based on CEA defaults
max_iter_throat = 5
tolerance_throat = 0.4e-4
print('Throat iterations:')
mach = 1.0
num_iter = 0
residual = 1
while residual > tolerance_throat:
num_iter += 1
if num_iter == max_iter_throat:
print(f'Error: more than {max_iter_throat} iterations required for throat calculation')
break
pressure_throat = pressure_throat * (1 + gamma_s * mach**2) / (1 + gamma_s)
gas_throat.SPX = entropy_chamber, to_si(pressure_throat), mole_fractions_chamber
gas_throat.equilibrate('SP')
derivs = get_thermo_derivatives(gas_throat)
dlogV_dlogT_P, dlogV_dlogP_T, cp, gamma_s = get_thermo_properties(
gas_throat, derivs[0], derivs[1], derivs[2]
)
velocity = np.sqrt(2 * (enthalpy_chamber - gas_throat.enthalpy_mass))
speed_sound = np.sqrt(
ct.gas_constant * gas_throat.T * gamma_s / gas_throat.mean_molecular_weight
)
mach = velocity / speed_sound
residual = np.abs(1.0 - 1/mach**2)
print(f'{num_iter} {residual: .3e}')
temperature_throat = gas_throat.T
pressure_throat = Q_(gas_throat.P, 'Pa')
gamma_s_throat = gamma_s
print('Error in throat temperature: '
f'{100*np.abs(temperature_throat - temperature_throat_cea)/temperature_throat_cea: .3e} %'
)
print('Error in throat pressure: '
f'{100*np.abs(pressure_throat - pressure_throat_cea)/pressure_throat_cea: .3e~P} %'
)
###Output
Throat iterations:
1 9.420e-04
2 1.590e-06
Error in throat temperature: 2.640e-02 %
Error in throat pressure: 5.430e-03 %
###Markdown
Exit conditionsThe conditions at the nozzle exit (or any location, really) can be determined with a given exit-to-throat area ratio $A_e / A_t$, by an iterative approach.First, calculate the area per unit mass flow rate at the throat:$$\left( \frac{A}{\dot{m}} \right)_t = \frac{1}{\rho_t u_t} = \frac{T_t n_t \mathcal{R}}{p_t u_t} \;,$$where $n_t = 1 / \text{MW}_t$ is the number of moles. Then, for a supersonic nozzle with an area ratio greater than two ($A_e / A_t \geq 2$),we can obtain an initial estimate for pressure ratio using an empirical formula:$$\log \frac{p_c}{p_e} = \gamma_s + 1.4 \log \frac{A_e}{A_t} \;,$$where $\gamma_s$ is evaluated using the throat state.An improved estimate of pressure ratio for the next iteration can be found using:$$\left( \log \frac{p_c}{p_e} \right)_{k+1} = \left( \log \frac{p_c}{p_e} \right)_k + \left[ \left( \frac{\partial \log \frac{p_c}{p_e} }{\partial \log \frac{A_e}{A_t} } \right)_s \right]_k \times \left[ \log \frac{A_e}{A_t} - \left( \log \frac{A_e}{A_t} \right)_k \right] \;,$$where the derivative is$$\left( \frac{\partial \log \frac{p_c}{p_e} }{\partial \log \frac{A_e}{A_t} } \right)_s = \left( \frac{\gamma_s u^2}{u^2 - a^2} \right)_e$$and the $k$th estimate of area ratio comes from$$\left( \frac{A_e}{A_t} \right)_k = \left( \frac{T_e n_e \mathcal{R}}{p_e u_e} \right)_k \frac{1}{ \left(A/\dot{m}\right)_t } \;.$$
###Code
# this is constant
A_mdot_thr = gas_throat.T / (gas_throat.P * velocity * gas_throat.mean_molecular_weight)
gas_exit = ct.Solution('nasa_h2o2.yaml', 'gas')
gas_exit.SPX = gas_throat.s, gas_throat.P, gas_throat.X
# initial estimate for pressure ratio
pinf_pe = np.exp(gamma_s_throat + 1.4 * np.log(area_ratio))
p_exit = to_si(pressure_chamber) / pinf_pe
gas_exit.SP = entropy_chamber, p_exit
gas_exit.equilibrate('SP')
Ae_At = gas_exit.T / (gas_exit.P * velocity * gas_exit.mean_molecular_weight) / A_mdot_thr
print('Iter T_exit Ae/At P_exit P_inf/P')
num_iter = 0
print(f'{num_iter} {gas_exit.T:.3f} K {Ae_At: .2f} {gas_exit.P/1e5:.3f} bar {pinf_pe:.3f}')
max_iter_exit = 10
tolerance_exit = 4e-5
residual = 1
while np.abs(residual) > tolerance_exit:
num_iter += 1
if num_iter == max_iter_exit:
print(f'Error: more than {max_iter_exit} iterations required for exit calculation')
break
derivs = get_thermo_derivatives(gas_exit)
dlogV_dlogT_P, dlogV_dlogP_T, cp, gamma_s = get_thermo_properties(
gas_exit, derivs[0], derivs[1], derivs[2]
)
velocity = np.sqrt(2 * (enthalpy_chamber - gas_exit.enthalpy_mass))
speed_sound = np.sqrt(ct.gas_constant * gas_exit.T * gamma_s / gas_exit.mean_molecular_weight)
Ae_At = gas_exit.T / (gas_exit.P * velocity * gas_exit.mean_molecular_weight) / A_mdot_thr
dlogp_dlogA = gamma_s * velocity**2 / (velocity**2 - speed_sound**2)
residual = dlogp_dlogA * (np.log(area_ratio) - np.log(Ae_At))
log_pinf_pe = np.log(pinf_pe) + residual
pinf_pe = np.exp(log_pinf_pe)
p_exit = to_si(pressure_chamber) / pinf_pe
gas_exit.SP = entropy_chamber, p_exit
gas_exit.equilibrate('SP')
print(f'{num_iter} {gas_exit.T:.3f} K {Ae_At: .2f} {gas_exit.P/1e5:.3f} bar {pinf_pe:.3f}')
print(f'Exit temperature: {gas_exit.T: .2f} K')
print(f'Exit pressure: {Q_(gas_exit.P, "Pa").to("bar"): .5f~P}')
print()
print('Error in exit temperature: '
f'{100*np.abs(gas_exit.T - temperature_exit_cea)/temperature_exit_cea: .3e} %'
)
print('Error in exit pressure: '
f'{100*np.abs(Q_(gas_exit.P, "Pa") - pressure_exit_cea)/pressure_exit_cea: .3e~P} %'
)
###Output
Exit temperature: 1234.18 K
Exit pressure: 0.21528 bar
Error in exit temperature: 2.731e-02 %
Error in exit pressure: 3.329e-02 %
###Markdown
Those results look good! Now we can calculate thrust coefficient and specific impulse, using$$\begin{align}C_F &= \frac{v_e}{c^*} \\I_{\text{sp}} &= \frac{v_e}{g_0} \\I_{\text{vac}} &= I_{\text{sp}} + \frac{p_e A_e}{\dot{m}} = I_{\text{sp}} + \frac{T_e \mathcal{R}}{v_e \overline{M}} \;.\end{align}$$CEA prints specific impulse with units of velocity, without the reference gravity term, so we will compute both versions for comparison.
###Code
derivs = get_thermo_derivatives(gas_exit)
dlogV_dlogT_P, dlogV_dlogP_T, cp, gamma_s = get_thermo_properties(
gas_exit, derivs[0], derivs[1], derivs[2]
)
velocity = np.sqrt(2 * (enthalpy_chamber - gas_exit.enthalpy_mass))
thrust_coeff = velocity / c_star
print(f'Thrust coefficient: {thrust_coeff: .4f}')
g0 = 9.80665
Isp = velocity
Ivac = Isp + gas_exit.T * ct.gas_constant / (velocity * gas_exit.mean_molecular_weight)
print(f'I_sp = {Isp: .1f} m/s')
print(f'I_vac = {Ivac: .1f} m/s')
print()
print('Error in Isp: '
f'{100*np.abs(Isp - specific_impulse_cea)/specific_impulse_cea: .3e} %'
)
print('Error in Ivac: '
f'{100*np.abs(Ivac - specific_impulse_vac_cea)/specific_impulse_vac_cea: .3e} %'
)
print('Actual specific impulse:')
print(f'I_sp = {Isp / g0: .1f} s')
print(f'I_vac = {Ivac / g0: .1f} s')
###Output
Actual specific impulse:
I_sp = 445.8 s
I_vac = 462.8 s
|
_notebooks/2021-01-10-License-Plate-Detection.ipynb | ###Markdown
License Plate Detection
> Detecting license plate with an open source model
- toc: true
- badges: true
- comments: true
- categories: [object detection] Install library
###Code
!git clone https://github.com/quangnhat185/Plate_detect_and_recognize.git
%cd Plate_detect_and_recognize
###Output
_____no_output_____
###Markdown
Import packages
###Code
import cv2
import numpy as np
import matplotlib.pyplot as plt
from local_utils import detect_lp
from os.path import splitext,basename
from keras.models import model_from_json
import glob
###Output
_____no_output_____
###Markdown
Load model
###Code
def load_model(path):
try:
path = splitext(path)[0]
with open('%s.json' % path, 'r') as json_file:
model_json = json_file.read()
model = model_from_json(model_json, custom_objects={})
model.load_weights('%s.h5' % path)
print("Loading model successfully...")
return model
except Exception as e:
print(e)
wpod_net_path = "wpod-net.json"
wpod_net = load_model(wpod_net_path)
###Output
_____no_output_____
###Markdown
Data loading and preprocessing
###Code
def preprocess_image(image_path,resize=False):
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = img / 255
if resize:
img = cv2.resize(img, (224,224))
return img
#hide_input
!mkdir /content/plates
!wget -q -O /content/plates/plate1.jpg 'https://images.squarespace-cdn.com/content/v1/5c981f3d0fb4450001fdde5d/1563727260863-E9JQC4UVO8IYCE6P19BO/ke17ZwdGBToddI8pDm48kDHPSfPanjkWqhH6pl6g5ph7gQa3H78H3Y0txjaiv_0fDoOvxcdMmMKkDsyUqMSsMWxHk725yiiHCCLfrh8O1z4YTzHvnKhyp6Da-NYroOW3ZGjoBKy3azqku80C789l0mwONMR1ELp49Lyc52iWr5dNb1QJw9casjKdtTg1_-y4jz4ptJBmI9gQmbjSQnNGng/cars+1.jpg'
!wget -q -O /content/plates/plate2.jpg 'https://www.cars24.com/blog/wp-content/uploads/2018/12/High-Security-Registration-Plates-Feature-Cars24.com_.png'
###Output
_____no_output_____
###Markdown
Create a list of image paths
###Code
image_paths = glob.glob("/content/plates/*.jpg")
print("Found %i images..."%(len(image_paths)))
###Output
_____no_output_____
###Markdown
Visualize data in subplot
###Code
#collapse-hide
fig = plt.figure(figsize=(12,8))
cols = 5
rows = 4
fig_list = []
for i in range(len(image_paths)):
fig_list.append(fig.add_subplot(rows,cols,i+1))
title = splitext(basename(image_paths[i]))[0]
fig_list[-1].set_title(title)
img = preprocess_image(image_paths[i],True)
plt.axis(False)
plt.imshow(img)
plt.tight_layout(True)
plt.show()
###Output
Found 2 images...
###Markdown
Inference Forward the image through the model and return the plate's image and coordinates. If the error "No Licensese plate is founded!" pops up, try adjusting `Dmin`.
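One way to follow that advice programmatically is to sweep `Dmin`; the sketch below is a hypothetical helper (not part of the original repository) that retries the `get_plate` function defined in the next cell with progressively smaller `Dmin` values.

```python
def get_plate_retry(image_path, dmin_values=(256, 200, 150, 100)):
    """Try several Dmin values until a plate is detected."""
    for dmin in dmin_values:
        try:
            # get_plate is defined in the following cell
            return get_plate(image_path, Dmin=dmin)
        except Exception as err:
            print(f"Dmin={dmin} failed: {err}")
    raise RuntimeError("No license plate found for any of the tried Dmin values")
```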
###Code
def get_plate(image_path, Dmax=608, Dmin=256):
vehicle = preprocess_image(image_path)
ratio = float(max(vehicle.shape[:2])) / min(vehicle.shape[:2])
side = int(ratio * Dmin)
bound_dim = min(side, Dmax)
_ , LpImg, _, cor = detect_lp(wpod_net, vehicle, bound_dim, lp_threshold=0.5)
return LpImg, cor
###Output
_____no_output_____
###Markdown
Obtain plate image and its coordinates from an image
###Code
#collapse-output
test_image = image_paths[0]
LpImg,cor = get_plate(test_image)
print("Detect %i plate(s) in"%len(LpImg),splitext(basename(test_image))[0])
print("Coordinate of plate(s) in image: \n", cor)
###Output
Detect 1 plate(s) in plate2
Coordinate of plate(s) in image:
[array([[298.04423448, 433.05399526, 436.51161569, 301.50185491],
[338.43327592, 359.12640589, 385.37416506, 364.6810351 ],
[ 1. , 1. , 1. , 1. ]])]
###Markdown
Visualize our result
###Code
#collapse-hide
plt.figure(figsize=(12,5))
plt.subplot(1,2,1)
plt.axis(False)
plt.imshow(preprocess_image(test_image))
plt.subplot(1,2,2)
plt.axis(False)
plt.imshow(LpImg[0])
###Output
_____no_output_____
###Markdown
Visualize all obtained plate images
###Code
#collapse-hide
fig = plt.figure(figsize=(12,6))
cols = 5
rows = 4
fig_list = []
for i in range(len(image_paths)):
fig_list.append(fig.add_subplot(rows,cols,i+1))
title = splitext(basename(image_paths[i]))[0]
fig_list[-1].set_title(title)
LpImg,_ = get_plate(image_paths[i])
plt.axis(False)
plt.imshow(LpImg[0])
plt.tight_layout(True)
plt.show()
###Output
_____no_output_____ |
jupyter/example_notebooks/.ipynb_checkpoints/spark_example_v0.2.1-checkpoint.ipynb | ###Markdown
0 - Setup Notebook Pod 0.1 - Run in Jupyter Bash Terminal```bash create application-default credentialsgcloud auth application-default login``` 1 - Initialize SparkSession
###Code
import pyspark
from pyspark.sql import SparkSession
# construct spark_jars list
spark_jars = ["https://storage.googleapis.com/hadoop-lib/gcs/gcs-connector-hadoop2-latest.jar"]
if pyspark.version.__version__[0] == "3":
spark_jars.append("https://storage.googleapis.com/spark-lib/bigquery/spark-bigquery-latest_2.12.jar")
else:
spark_jars.append("https://storage.googleapis.com/spark-lib/bigquery/spark-bigquery-latest_2.11.jar")
# create SparkSession
spark = SparkSession \
.builder \
.master("local[1]") \
.config("spark.driver.cores", "1") \
.config("spark.driver.memory", "4g") \
.config("spark.jars", ",".join(spark_jars)) \
.config("spark.sql.legacy.parquet.datetimeRebaseModeInWrite", "LEGACY") \
.config("spark.hadoop.fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem") \
.config("spark.hadoop.fs.AbstractFileSystem.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS") \
.config("spark.hadoop.fs.gs.auth.service.account.enable", "true") \
.config("spark.hadoop.fs.gs.auth.service.account.json.keyfile", "/home/jovyan/.config/gcloud/application_default_credentials.json") \
.getOrCreate()
###Output
_____no_output_____
###Markdown
2 - SparkSQL 2.0 - Docs* https://spark.apache.org/docs/latest/sql-getting-started.html* https://spark.apache.org/docs/latest/api/python/pyspark.sql.html 2.1 - Write CSV
###Code
# create a DataFrame
df = spark.createDataFrame(
[("aaa", 1, "!!!"),
("bbb", 2, "@@@"),
("ccc", 3, "###"),
("ddd", 4, "%%%")],
schema=["col1", "col2", "col3", ]
)
# write CSV
out_uri = f"gs://<<<MY_BUCKET>>>/example/spark_test.csv"
df.write \
.format("csv") \
.mode("overwrite") \
.option("header", "true") \
.save(out_uri)
# link to GUI
print("----------------")
print("View in GUI:")
print(f"https://console.cloud.google.com/storage/browser/{out_uri.replace('gs://', '', 1)}/")
print("----------------")
###Output
_____no_output_____
###Markdown
2.2 - Read CSV
###Code
# read CSV
in_uri = f"gs://<<<MY_BUCKET>>>/example/spark_test.csv"
df2 = spark.read \
.format("csv") \
.option("mode", "FAILFAST") \
.option("inferSchema", "true") \
.option("header", "true") \
.load(in_uri)
# view DataFrame
df2.show()
###Output
_____no_output_____
###Markdown
3 - BigQuery 3.0 - Docs* https://github.com/GoogleCloudDataproc/spark-bigquery-connector 3.1 - Write to BigQuery
###Code
# create a DataFrame
df3 = spark.createDataFrame(
[("aaa", 1, "!!!"),
("bbb", 2, "@@@"),
("ccc", 3, "###"),
("ddd", 4, "%%%")],
schema=["col1", "col2", "col3", ]
)
# write to BigQuery
out_project = "<<<MY_PROJECT>>>"
out_table = "<<<MY_DATABASE>>>.example__spark_notebook"
billing_project = "<<<MY_PROJECT>>>"
df3.write \
.format("bigquery") \
.mode("overwrite") \
.option("temporaryGcsBucket", "<<<MY_BUCKET>>>") \
.option("parentProject", billing_project) \
.option("project", out_project) \
.option("table", out_table) \
.save()
# link to GUI
print("----------------")
print("View in GUI:")
print(f"https://console.cloud.google.com/bigquery?project={out_project}")
print("----------------")
###Output
_____no_output_____
###Markdown
3.2 - Read from BigQuery
###Code
# read from BigQuery
in_project = "<<<MY_PROJECT>>>"
in_table = "<<<MY_DATABASE>>>.example__spark_notebook"
billing_project = "<<<MY_PROJECT>>>"
df4 = spark.read \
.format("bigquery") \
.option("readDataFormat", "ARROW") \
.option("parentProject", billing_project) \
.option("project", in_project) \
.option("table", in_table) \
.load()
# view DataFrame
df4.show()
###Output
_____no_output_____
###Markdown
4 - Advanced Functions 4.1 - Write File (Hadoop Java API)
###Code
def hadoop_write_file(spark: SparkSession,
fs_uri: str,
overwrite: bool,
file_data: str) -> str:
"""
Write a string as a file using the Hadoop Java API.
:param spark: a running SparkSession
:param fs_uri: the URI of the file
:param overwrite: if we should replace any existing file (error if False)
:param file_data: the string to write as the file data
:return the URI of the writen file
"""
# create py4j wrappers of java objects
hadoop = spark.sparkContext._jvm.org.apache.hadoop
java = spark.sparkContext._jvm.java
# create the FileSystem() object
conf = spark._jsc.hadoopConfiguration()
path = hadoop.fs.Path(java.net.URI(fs_uri))
fs = path.getFileSystem(conf)
# write the file
output_stream = fs.create(path, overwrite)
output_stream.writeBytes(file_data)
output_stream.close()
return fs_uri
# write file
out_uri = f"gs://<<<MY_BUCKET>>>/example/spark_test.txt"
file_data = "Hello World! " * 100
hadoop_write_file(spark=spark, fs_uri=out_uri, overwrite=True, file_data=file_data)
# link to GUI
print("----------------")
print("View in GUI:")
print(f"https://console.cloud.google.com/storage/browser/{out_uri.replace('gs://', '', 1)}")
print("----------------")
###Output
_____no_output_____
###Markdown
4.2 - Read File (Hadoop Java API)
###Code
def hadoop_read_file(spark: SparkSession,
fs_uri: str,
encoding: str = "utf-8") -> str:
"""
Read the content of a file as a string using the Hadoop Java API.
:param spark: a running SparkSession
:param fs_uri: the URI of the file
:param encoding: the file's encoding (defaults to utf-8)
    :return: the content of the file (or None if the file is not present)
"""
from py4j.protocol import Py4JJavaError
    # create py4j wrappers of java objects
commons = spark.sparkContext._jvm.org.apache.commons
hadoop = spark.sparkContext._jvm.org.apache.hadoop
java = spark.sparkContext._jvm.java
# create the FileSystem() object
conf = spark._jsc.hadoopConfiguration()
path = hadoop.fs.Path(java.net.URI(fs_uri))
fs = path.getFileSystem(conf)
# read file as string
try:
input_stream = fs.open(path)
file_data = commons.io.IOUtils.toString(input_stream, encoding)
input_stream.close()
return file_data
except Py4JJavaError as ex:
java_exception_class = ex.java_exception.getClass().getName()
if java_exception_class == "java.io.FileNotFoundException":
return None
else:
raise ex
# read file
in_uri = f"gs://<<<MY_BUCKET>>>/example/spark_test.txt"
file_data = hadoop_read_file(spark=spark, fs_uri=in_uri)
print("-------- File Content --------")
print(file_data)
print("------------------------------")
###Output
_____no_output_____ |
intro-to-python/intro-to-python.ipynb | ###Markdown
Introduction to Python Purpose: To begin practicing and working with Python *Step 1: Hello world*
###Code
# print(Hello World)   # without quotes this line raises a SyntaxError – text must be a quoted string
print('Hello World')
###Output
Hello World
###Markdown
*Step 2: Print a User Friendly Greeting*
###Code
#Input your name here after the equal sign and in quotations
user_name = 'Hawley'
print('Hello,', user_name)
print('It is nice to meet you!')
###Output
Hello, Hawley
It is nice to meet you!
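###Markdown
The same greeting can also be written with an f-string, which drops the variable straight into the text (a small optional example):
###Code
# f-strings embed variables inside the string using {}
print(f'Hello, {user_name}! It is nice to meet you!')
###Output
_____no_output_____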
###Markdown
*Step 3: The area of a circle*
###Code
import numpy as np
# Define your radius
r = 3
area = np.pi * r ** 2
print(area)
# Define your radius
r = 3
area = np.pi * r ** 2
area
###Output
_____no_output_____
###Markdown
*Step 3b: Doing it in a function*
###Code
def find_area(r):
return np.pi*r**2
find_area(3)
###Output
_____no_output_____
###Markdown
*Step 3c: Doing more with a function, faster!*
###Code
numberlist = list(range(0, 10))
numberlist
for numbers in numberlist:
    area = find_area(numbers)
print('Radius:', numbers, 'Area:', area)
###Output
Radius: 0 Area: 0.0
Radius: 1 Area: 3.141592653589793
Radius: 2 Area: 12.566370614359172
Radius: 3 Area: 28.274333882308138
Radius: 4 Area: 50.26548245743669
Radius: 5 Area: 78.53981633974483
Radius: 6 Area: 113.09733552923255
Radius: 7 Area: 153.93804002589985
Radius: 8 Area: 201.06192982974676
Radius: 9 Area: 254.46900494077323
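###Markdown
The same idea can be written even more compactly with a list comprehension, which builds all the areas in one line (an optional variation on the cell above):
###Code
# build a list of areas, one entry per radius in numberlist
areas = [find_area(r) for r in numberlist]
print(areas)
###Output
_____no_output_____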
###Markdown
*Step 4: Intro to Plotting*
###Code
import matplotlib.pyplot as plt
%matplotlib inline
fig, ax = plt.subplots()
ax.set(xlim=(-1, 1), ylim=(-1, 1))
a_circle = plt.Circle((0, 0), 0.5)
ax.add_artist(a_circle)
for numbers in numberlist:
fig, ax = plt.subplots()
ax.set(xlim=(-10, 10), ylim=(-10, 10))
a_circle = plt.Circle((0, 0), numbers/2)
ax.add_artist(a_circle)
###Output
_____no_output_____ |
FFSSN.ipynb | ###Markdown
This page should automatically redirect to the updated page. If it does not, please go to [http://nmsutton.herokuapp.com/nmsuttondetails/Snnrl](http://nmsutton.herokuapp.com/nmsuttondetails/Snnrl)
###Code
from IPython.core.display import display, HTML
display(HTML('<script>window.location = "http://nmsutton.herokuapp.com/nmsuttondetails/Snnrl";</script>'))
###Output
_____no_output_____ |
8-Puzzle-Solver.ipynb | ###Markdown
8 Puzzle solver* Parsa KamaliPour - 97149081* In this repository we're going to solve this puzzle using $ A^* $ and $ IDA $ imports:
###Code
import copy
import pandas as pd
import numpy as np
import collections
import heapq
###Output
_____no_output_____
###Markdown
Test case 1:
###Code
input_puzzle_1 = [
[1, 2, 3],
[4, 0, 5],
[7, 8, 6]
]
print('Input: ')
print(pd.DataFrame(input_puzzle_1))
print()
desired_output_1 = [
[1, 2, 3],
[4, 5, 0],
[7, 8, 6]
]
print('Desired Output:')
print(pd.DataFrame(desired_output_1))
###Output
Input:
0 1 2
0 1 2 3
1 4 0 5
2 7 8 6
Desired Output:
0 1 2
0 1 2 3
1 4 5 0
2 7 8 6
###Markdown
Test case 2: (Hardest)
###Code
input_puzzle_2 = [
[8, 6, 7],
[2, 5, 4],
[3, 0, 1]
]
print('Input 2: ')
print(pd.DataFrame(input_puzzle_2))
print()
desired_output_2 = [
[6, 4, 7],
[8, 5, 0],
[3, 2, 1]
]
print('Desired Output 2:')
print(pd.DataFrame(desired_output_2))
###Output
Input 2:
0 1 2
0 8 6 7
1 2 5 4
2 3 0 1
Desired Output 2:
0 1 2
0 6 4 7
1 8 5 0
2 3 2 1
###Markdown
code's configs:
###Code
heuristic_method = input("Enter the desired Heuristic method: h1 or h2")
f_function_omega = eval(input("Enter the desired f function omega: 2 is Greedy, "
"0 is Uninformed best-first search, "
"0 < omega <= 1 is A*"))
test_case = eval(input("which test case? 1:easy, 2:hard"))
if test_case == 1:
input_puzzle = input_puzzle_1
desired_output = desired_output_1
elif test_case == 2:
input_puzzle = input_puzzle_2
desired_output = desired_output_2
###Output
_____no_output_____
###Markdown
Matrix to dictionary converter
###Code
class Mat2dict:
def __init__(self, matrix):
self.matrix = matrix
self.dic = {}
def convert(self):
for r in range(len(self.matrix)):
for c in range(len(self.matrix[0])):
key = self.matrix[r][c]
self.dic[key] = [r, c]
return self.dic
###Output
_____no_output_____
###Markdown
the heuristic calculator class:* H1 heuristic (misplaced tiles): $h_1(s) = \sum_{i=1}^{9} \big[\, currentPuzzleBoard[i] \neq goalPuzzleBoard[i] \,\big]$, i.e. add 1 for every cell whose value differs from the goal board.* H2 heuristic (Manhattan distance): for each tile, look up its goal row and column and sum the distances, $h_2(s) = \sum_{i=1}^{9} \big( |goal.row_i - current.row_i| + |goal.col_i - current.col_i| \big)$
###Code
class Heuristic:
def __init__(self, node, current_puzzle, desired_answer, method):
self.method = method
self.node = node
self.current_puzzle = current_puzzle
self.desired_answer = desired_answer
#self.current_puzzle_dict = Mat2dict(self.current_puzzle)
self.desired_answer_dict = Mat2dict(self.desired_answer).convert()
def do(self):
if self.method == 'h1':
return self.h1_misplaced_tiles()
elif self.method == 'h2':
return self.h2_manhattan_distance()
def h1_misplaced_tiles(self):
misplaced_counter = 0
for row in range(len(self.current_puzzle)):
for col in range(len(self.current_puzzle[0])):
if self.current_puzzle[row][col] != self.desired_answer[row][col]:
misplaced_counter += 1
return misplaced_counter
def h2_manhattan_distance(self):
total_distance_counter = 0
for row in range(len(self.current_puzzle)):
for col in range(len(self.current_puzzle[0])):
key = self.current_puzzle[row][col]
correct_row, correct_col = self.desired_answer_dict[key]
total_distance_counter += abs(row - correct_row) + abs(col - correct_col)
return total_distance_counter
###Output
_____no_output_____
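###Markdown
As a quick sanity check (an added example, not part of the original assignment), both heuristics can be evaluated on the easy test case. For `input_puzzle_1` each should return 2: only tile 5 and the blank are out of place, and each is one move from its goal cell.
###Code
# the Heuristic class never uses the `node` argument, so None is fine here
print(Heuristic(None, input_puzzle_1, desired_output_1, 'h1').do())  # misplaced tiles
print(Heuristic(None, input_puzzle_1, desired_output_1, 'h2').do())  # Manhattan distance
###Output
_____no_output_____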
###Markdown
The node class:* F function is calculated in a such way that you can control how the Heuristic and G-costcan perform: $ FCost(n) = (2-\omega) * GCost(n) + \omega * h(n)$ $ if \; \omega = 2: $ $ then: algorithm \; is \; Greedy \; due \; to \; GCost \; being \; 0:$ $ FCost(n) = 0 + 2 * h(n) $ $ if \; \omega = 0: $ $ then: algorithm \; is \; uninformed \; search \; due \; to \; h(n) \; being \; 0:$ $ FCost(n) = 2 * GCost(n) + 0 $ $ if \; 0 \lt \omega \le 1 : $ $ then: algorithm \; is \; informed \; search(A^*):$ $ FCost(n) = (2-\omega) * GCost(n) + \omega * h(n) $
###Code
class Node:
def __init__(self, current_puzzle, parent=None):
self.current_puzzle = current_puzzle
self.parent = parent
if self.parent:
self.g_cost = self.parent.f_function
self.depth = self.parent.depth + 1
else:
self.g_cost = 0
self.depth = 0
self.h_cost = Heuristic(self, current_puzzle, desired_output, heuristic_method).do()
self.f_function = (2 - f_function_omega) * self.g_cost + f_function_omega * self.h_cost
def __eq__(self, other):
return self.f_function == other.f_function
def __lt__(self, other):
return self.f_function < other.f_function
def get_id(self):
return str(self)
def get_path(self):
node, path = self, []
while node:
path.append(node)
node = node.parent
return list(reversed(path))
def get_position(self, element):
for row in range(len(self.current_puzzle)):
for col in range(len(self.current_puzzle[0])):
if self.current_puzzle[row][col] == element:
return [row, col]
return [0, 0]
###Output
_____no_output_____
###Markdown
The puzzle solver class:
###Code
class PuzzleSolver:
def __init__(self, start_node):
self.final_state = None
self.start_node = start_node
self.depth = 0
self.visited_nodes = set()
self.expanded_nodes = 0
def solve(self):
queue = [self.start_node]
self.visited_nodes.add(self.start_node.get_id())
while queue:
self.expanded_nodes += 1
node = heapq.heappop(queue)
if node.current_puzzle == desired_output:
self.final_state = node
Result(self.final_state, self.expanded_nodes)
return True
if node.depth + 1 > self.depth:
self.depth = node.depth + 1
for neighbor in NeighborsCalculator(node).get_list_of_neighbors():
                if neighbor.get_id() not in self.visited_nodes:
self.visited_nodes.add(neighbor.get_id())
heapq.heappush(queue, neighbor)
return False
###Output
_____no_output_____
###Markdown
result class
###Code
class Result:
def __init__(self, final_state, expanded_nodes):
self.expanded_nodes = expanded_nodes
self.final_state = final_state
self.solved_puzzle = self.final_state.current_puzzle
self.path = self.final_state.get_path()
self.show_puzzles()
self.show_path()
def show_puzzles(self):
print("Inital Puzzle: ")
print(pd.DataFrame(input_puzzle))
print("Result Puzzle: ")
print(pd.DataFrame(self.solved_puzzle))
print("Expected Puzzle: ")
print(pd.DataFrame(desired_output))
print()
print("Number of expanded nodes: {}".format(self.expanded_nodes))
print()
def show_path(self):
counter = 0
while self.path:
counter += 1
step = self.path.pop(0)
print("step {}: ".format(counter))
print(pd.DataFrame(step.current_puzzle))
###Output
_____no_output_____
###Markdown
Neighbors calculator
###Code
class NeighborsCalculator:
def __init__(self, current_state):
self.current_state = current_state
self.puzzle = self.current_state.current_puzzle
self.neighbors = []
def get_list_of_neighbors(self):
row, col = map(int, self.current_state.get_position(0))
#if row or col is None:
# return []
# move right
if col < len(self.puzzle[0]) - 1:
moved_right = copy.deepcopy(self.puzzle)
moved_right[row][col], moved_right[row][col + 1] = moved_right[row][col + 1], moved_right[row][col]
self.neighbors.append(Node(moved_right, self.current_state))
# move left
if col > 0:
moved_left = copy.deepcopy(self.puzzle)
moved_left[row][col], moved_left[row][col - 1] = moved_left[row][col - 1], moved_left[row][col]
self.neighbors.append(Node(moved_left, self.current_state))
# move up
if row > 0:
moved_up = copy.deepcopy(self.puzzle)
moved_up[row][col], moved_up[row - 1][col] = moved_up[row - 1][col], moved_up[row][col]
self.neighbors.append(Node(moved_up, self.current_state))
# move down
if row < len(self.puzzle) - 1:
moved_down = copy.deepcopy(self.puzzle)
moved_down[row][col], moved_down[row + 1][col] = moved_down[row + 1][col], moved_down[row][col]
self.neighbors.append(Node(moved_down, self.current_state))
return self.neighbors
initial_state = Node(input_puzzle)
PuzzleSolver(initial_state).solve()
###Output
Inital Puzzle:
0 1 2
0 1 2 3
1 4 0 5
2 7 8 6
Result Puzzle:
0 1 2
0 1 2 3
1 4 5 0
2 7 8 6
Expected Puzzle:
0 1 2
0 1 2 3
1 4 5 0
2 7 8 6
Number of expanded nodes: 2
step 1:
0 1 2
0 1 2 3
1 4 0 5
2 7 8 6
step 2:
0 1 2
0 1 2 3
1 4 5 0
2 7 8 6
###Markdown
The puzzle solver IDA class:
###Code
class PuzzleSolverIDA:
def __init__(self, start_node, iterate):
self.iterate = iterate
self.final_state = None
self.start_node = start_node
self.depth = 0
self.visited_nodes = set()
self.expanded_nodes = 0
self.f_cutoff = 0
def solve(self):
while True:
self.f_cutoff += self.iterate
queue = [self.start_node]
self.visited_nodes.add(self.start_node.get_id())
while queue:
self.expanded_nodes += 1
node = heapq.heappop(queue)
if node.current_puzzle == desired_output:
self.final_state = node
Result(self.final_state, self.expanded_nodes)
return True
if node.depth + 1 > self.depth:
self.depth = node.depth + 1
for neighbor in NeighborsCalculator(node).get_list_of_neighbors():
                    if neighbor.get_id() not in self.visited_nodes:
if neighbor.f_function <= self.f_cutoff:
self.visited_nodes.add(neighbor.get_id())
heapq.heappush(queue, neighbor)
initial_state = Node(input_puzzle)
PuzzleSolverIDA(initial_state, 4).solve()
###Output
Inital Puzzle:
0 1 2
0 8 6 7
1 2 5 4
2 3 0 1
Result Puzzle:
0 1 2
0 6 4 7
1 8 5 0
2 3 2 1
Expected Puzzle:
0 1 2
0 6 4 7
1 8 5 0
2 3 2 1
Number of expanded nodes: 2191847
step 1:
0 1 2
0 8 6 7
1 2 5 4
2 3 0 1
step 2:
0 1 2
0 8 6 7
1 2 5 4
2 3 1 0
step 3:
0 1 2
0 8 6 7
1 2 5 0
2 3 1 4
step 4:
0 1 2
0 8 6 7
1 2 0 5
2 3 1 4
step 5:
0 1 2
0 8 6 7
1 0 2 5
2 3 1 4
step 6:
0 1 2
0 0 6 7
1 8 2 5
2 3 1 4
step 7:
0 1 2
0 6 0 7
1 8 2 5
2 3 1 4
step 8:
0 1 2
0 6 7 0
1 8 2 5
2 3 1 4
step 9:
0 1 2
0 6 7 5
1 8 2 0
2 3 1 4
step 10:
0 1 2
0 6 7 5
1 8 2 4
2 3 1 0
step 11:
0 1 2
0 6 7 5
1 8 2 4
2 3 0 1
step 12:
0 1 2
0 6 7 5
1 8 0 4
2 3 2 1
step 13:
0 1 2
0 6 7 5
1 8 4 0
2 3 2 1
step 14:
0 1 2
0 6 7 0
1 8 4 5
2 3 2 1
step 15:
0 1 2
0 6 0 7
1 8 4 5
2 3 2 1
step 16:
0 1 2
0 6 4 7
1 8 0 5
2 3 2 1
step 17:
0 1 2
0 6 4 7
1 8 5 0
2 3 2 1
|
docs/apphub/image_styletransfer/fst_coco/fst_coco.ipynb | ###Markdown
Fast Style Transfer with FastEstimator In this notebook we will demonstrate how to do neural image style transfer with a perceptual loss, as described in [Perceptual Losses for Real-Time Style Transfer and Super-Resolution](https://cs.stanford.edu/people/jcjohns/papers/eccv16/JohnsonECCV16.pdf). Typical neural style transfer involves two images: an image containing the semantics that you want to preserve and another image serving as a reference style; the first image is often referred to as the *content image* and the other as the *style image*. In the [paper](https://cs.stanford.edu/people/jcjohns/papers/eccv16/JohnsonECCV16.pdf), training images of the COCO2014 dataset are used to learn the style transfer from any content image.
###Code
import os
import cv2
import fastestimator as fe
import tensorflow as tf
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
###Output
_____no_output_____
###Markdown
In this notebook we will use *Wassily Kandinsky's Composition 7* as a style image. We will also resize the style image to $256 \times 256$ to make the dimension consistent with that of COCO images.
###Code
style_img_path = tf.keras.utils.get_file(
'kandinsky.jpg',
'https://storage.googleapis.com/download.tensorflow.org/example_images/Vassily_Kandinsky%2C_1913_-_Composition_7.jpg'
)
style_img = cv2.imread(style_img_path)
style_img = cv2.resize(style_img, (256, 256))
style_img = (style_img.astype(np.float32) - 127.5) / 127.5
style_img_t = tf.convert_to_tensor(np.expand_dims(style_img, axis=0))
style_img_disp = cv2.cvtColor((style_img + 1) * 0.5, cv2.COLOR_BGR2RGB)
plt.imshow(style_img_disp)
plt.title('Wassily Kandinsky\'s Composition 7')
plt.axis('off');
#Parameters
batch_size = 4
epochs = 2
steps_per_epoch = None
validation_steps = None
img_path = 'panda.jpeg'
saved_model_path = 'style_transfer_net_epoch_1_step_41390.h5'
###Output
_____no_output_____
###Markdown
Step 1: Input Pipeline Downloading the data First, we will download the training images of the COCO2014 dataset via our dataset API. Once the images are downloaded, a csv file containing relative paths to these images will be created. The root path of the downloaded images will be `parent_path`. Downloading the images will take a while.
###Code
from fastestimator.dataset.mscoco import load_data
train_csv, path = load_data()
###Output
reusing existing dataset
###Markdown
Once the images are downloaded, we need to define an *Operator* to rescale pixel values from $[0, 255]$ to $[-1, 1]$. We will define our own `Rescale` class, in which the data transform logic is defined inside the `forward` method.
###Code
from fastestimator.op import TensorOp
class Rescale(TensorOp):
def forward(self, data, state):
return (tf.cast(data, tf.float32) - 127.5) / 127.5
###Output
_____no_output_____
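###Markdown
A quick check of the scaling (an added sanity-check cell, not in the original walkthrough): pixel values 0 and 255 should map to -1 and 1 respectively.
###Code
# apply the op directly to a tiny tensor to confirm the [-1, 1] range
Rescale(inputs="image", outputs="image").forward(tf.constant([0, 255]), state=None)
###Output
_____no_output_____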
###Markdown
Creating tfrecords Once the images are downloaded, we will create tfrecords using `RecordWriter`. Each row of the csv file will be used by `ImageReader` to read in the image using `cv2.imread`. Then, we resize the images to $256 \times 256$.
###Code
from fastestimator.op.numpyop import ImageReader, Resize
from fastestimator.util import RecordWriter
tfr_save_dir = os.path.join(path, 'tfrecords')
writer = RecordWriter(
train_data=train_csv,
save_dir=tfr_save_dir,
ops=[
ImageReader(inputs="image", parent_path=path, outputs="image"),
Resize(inputs="image", target_size=(256, 256), outputs="image")
])
###Output
_____no_output_____
###Markdown
Defining an instance of `Pipeline` We can now define an instance of `Pipeline`.
###Code
pipeline = fe.Pipeline(batch_size=batch_size, data=writer, ops=[Rescale(inputs="image", outputs="image")])
###Output
_____no_output_____
###Markdown
Step 2: Network Once `Pipeline` is defined, we need to define the network architecture, losses, and the forward pass of batch data. Defining model architecture We first create a `FEModel` instance which collects the following:* model definition* model name* loss name* optimizer The architecture of the model is a modified ResNet.
###Code
from fastestimator.architecture.stnet import styleTransferNet
model = fe.build(model_def=styleTransferNet,
model_name="style_transfer_net",
loss_name="loss",
optimizer=tf.keras.optimizers.Adam(1e-3))
###Output
_____no_output_____
###Markdown
Defining Loss The perceptual loss described in the [paper](https://cs.stanford.edu/people/jcjohns/papers/eccv16/JohnsonECCV16.pdf) is computed from intermediate layers of VGG16 pretrained on ImageNet; specifically, `relu1_2`, `relu2_2`, `relu3_3`, and `relu4_3` of VGG16 are used. The *style* loss term is computed as the squared l2 norm of the difference in the Gram matrices of these feature maps between the transformed output image and the reference style image. The *content* loss is simply the l2 norm of the difference in `relu3_3` between the transformed output image and the input (content) image. In addition, the method also uses a total variation loss to enforce spatial smoothness in the output image. The final loss is a weighted sum of the style loss term, the content loss term (the feature reconstruction term in the [paper](https://cs.stanford.edu/people/jcjohns/papers/eccv16/JohnsonECCV16.pdf)), and the total variation term. We first define a custom `TensorOp` that outputs intermediate layers of VGG16. Given these intermediate layers returned by the loss network as a dictionary, we define a custom `Loss` class that encapsulates all the logic of the loss calculation. Since `Loss` is also yet another `TensorOp`, the final loss value is returned by the `forward` method.
###Code
from fastestimator.architecture.stnet import lossNet
from fastestimator.op.tensorop import Loss
class ExtractVGGFeatures(TensorOp):
def __init__(self, inputs, outputs, mode=None):
super().__init__(inputs, outputs, mode)
self.vgg = lossNet()
def forward(self, data, state):
return self.vgg(data)
class StyleContentLoss(Loss):
def __init__(self, style_weight, content_weight, tv_weight, inputs, outputs=None, mode=None):
super().__init__(inputs=inputs, outputs=outputs, mode=mode)
self.style_weight = style_weight
self.content_weight = content_weight
self.tv_weight = tv_weight
def calculate_style_recon_loss(self, y_true, y_pred):
y_true_gram = self.calculate_gram_matrix(y_true)
y_pred_gram = self.calculate_gram_matrix(y_pred)
y_diff_gram = y_pred_gram - y_true_gram
y_norm = tf.math.sqrt(tf.reduce_sum(tf.math.square(y_diff_gram), axis=(1, 2)))
return (y_norm)
def calculate_feature_recon_loss(self, y_true, y_pred):
y_diff = y_pred - y_true
num_elts = tf.cast(tf.reduce_prod(y_diff.shape[1:]), tf.float32)
y_diff_norm = tf.reduce_sum(tf.square(y_diff), axis=(1, 2, 3)) / num_elts
return (y_diff_norm)
def calculate_gram_matrix(self, x):
x = tf.cast(x, tf.float32)
num_elts = tf.cast(x.shape[1] * x.shape[2] * x.shape[3], tf.float32)
gram_matrix = tf.einsum('bijc,bijd->bcd', x, x)
gram_matrix /= num_elts
return gram_matrix
def calculate_total_variation(self, y_pred):
return (tf.image.total_variation(y_pred))
def forward(self, data, state):
y_pred, y_style, y_content, image_out = data
style_loss = [self.calculate_style_recon_loss(a, b) for a, b in zip(y_style['style'], y_pred['style'])]
style_loss = tf.add_n(style_loss)
style_loss *= self.style_weight
content_loss = [
self.calculate_feature_recon_loss(a, b) for a, b in zip(y_content['content'], y_pred['content'])
]
content_loss = tf.add_n(content_loss)
content_loss *= self.content_weight
total_variation_reg = self.calculate_total_variation(image_out)
total_variation_reg *= self.tv_weight
return style_loss + content_loss + total_variation_reg
###Output
_____no_output_____
###Markdown
Defining forward pass Having defined the model and the associated loss, we can now define an instance of `Network` that specifies the forward pass of the batch data in a training loop. FastEstimator takes care of gradient computation and the model update once this forward pass is defined.
###Code
from fastestimator.op.tensorop import ModelOp
style_weight=5.0
content_weight=1.0
tv_weight=1e-4
network = fe.Network(ops=[
ModelOp(inputs="image", model=model, outputs="image_out"),
ExtractVGGFeatures(inputs=lambda: style_img_t, outputs="y_style"),
ExtractVGGFeatures(inputs="image", outputs="y_content"),
ExtractVGGFeatures(inputs="image_out", outputs="y_pred"),
StyleContentLoss(style_weight=style_weight,
content_weight=content_weight,
tv_weight=tv_weight,
inputs=('y_pred', 'y_style', 'y_content', 'image_out'),
outputs='loss')
])
###Output
_____no_output_____
###Markdown
Step 3: Estimator Having defined `Pipeline` and `Network`, we can now define `Estimator`. We will use `Trace` to save intermediate models.
###Code
from fastestimator.trace import ModelSaver
import tempfile
model_dir=tempfile.mkdtemp()
estimator = fe.Estimator(network=network,
pipeline=pipeline,
epochs=epochs,
steps_per_epoch=steps_per_epoch,
validation_steps=validation_steps,
traces=ModelSaver(model_name="style_transfer_net", save_dir=model_dir))
###Output
_____no_output_____
###Markdown
We call `fit` method of `Estimator` to start training.
###Code
estimator.fit()
###Output
_____no_output_____
###Markdown
Inference Once the training is finished, we will apply the model to perform the style transfer on arbitrary images. Here we use a photo of a panda.
###Code
test_img = cv2.imread(img_path)
test_img = cv2.resize(test_img, (256, 256))
test_img = (test_img.astype(np.float32) - 127.5) / 127.5
test_img_t = tf.expand_dims(test_img, axis=0)
model_path = os.path.join(model_dir, saved_model_path)
trained_model = tf.keras.models.load_model(model_path,
custom_objects={
"ReflectionPadding2D":fe.architecture.stnet.ReflectionPadding2D,
"InstanceNormalization":fe.architecture.stnet.InstanceNormalization},
compile=False)
output_img = trained_model.predict(test_img_t)
output_img_disp = (output_img[0] + 1) * 0.5
test_img_disp = (test_img + 1) * 0.5
plt.figure(figsize=(20,20))
plt.subplot(131)
plt.imshow(cv2.cvtColor(test_img_disp, cv2.COLOR_BGR2RGB))
plt.title('Original Image')
plt.axis('off');
plt.subplot(132)
plt.imshow(style_img_disp)
plt.title('Style Image')
plt.axis('off');
plt.subplot(133)
plt.imshow(cv2.cvtColor(output_img_disp, cv2.COLOR_BGR2RGB));
plt.title('Transferred Image')
plt.axis('off');
###Output
_____no_output_____ |
AppStat2022/Week5/original/MVA_part1/2par_discriminant.ipynb | ###Markdown
2-parameters discriminant analysis Python notebook for constructing a Fisher discriminant from two 2D Gaussianly distributed correlated variables. The notebook creates artificial random data for two different types of processes, and the goal is then to separate these by constructing a Fisher discriminant. Authors: - Christian Michelsen (Niels Bohr Institute)- Troels C. Petersen (Niels Bohr Institute) Date: - 15-12-2021 (latest update) References:- Glen Cowan, Statistical Data Analysis, pages 51-57- http://en.wikipedia.org/wiki/Linear_discriminant_analysis***
###Code
import numpy as np # Matlab like syntax for linear algebra and functions
import matplotlib.pyplot as plt # Plots and figures like you know them from Matlab
from numpy.linalg import inv
r = np.random # Random generator
r.seed(42) # Set a random seed (but a fixed one)
save_plots = False # For now, don't save plots (once you trust your code, switch on)
###Output
_____no_output_____
###Markdown
Functions: Function to calculate the separation between two lists of numbers (see equation at the bottom of the script). __Note__: You need to fill in this function!
###Code
def calc_separation(x, y):
print("calc_separation needs to be filled out")
d = 0
return d
###Output
_____no_output_____
###Markdown
Define parameters: Number of species, their means and widths, correlations, and the number of observations of each species:
###Code
# Number of 'species': signal / background
n_spec = 2
# Species A, mean and width for the two dimensions/parameters
mean_A = [15.0, 50.0]
width_A = [ 2.0, 6.0]
# Species B, mean and width for the two dimensions/parameters
mean_B = [12.0, 55.0]
width_B = [ 3.0, 6.0]
# Coefficient of correlation
corr_A = 0.8
corr_B = 0.9
# Amount of data you want to create
n_data = 2000
###Output
_____no_output_____
###Markdown
Generate data: For each "species", produce a number of $(x_0,x_1)$ points which are (linearly) correlated:
###Code
# The desired covariance matrix.
V_A = np.array([[width_A[0]**2, width_A[0]*width_A[1]*corr_A],
[width_A[0]*width_A[1]*corr_A, width_A[1]**2]])
V_B = np.array([[width_B[0]**2, width_B[0]*width_B[1]*corr_B],
[width_B[0]*width_B[1]*corr_B, width_B[1]**2]])
# Generate the random samples.
spec_A = np.random.multivariate_normal(mean_A, V_A, size=n_data)
spec_B = np.random.multivariate_normal(mean_B, V_B, size=n_data)
###Output
_____no_output_____
###Markdown
*** Plot your generated data:We plot the 2D-data as 1D-histograms (basically projections) in $x_0$ and $x_1$:
###Code
fig_1D, ax_1D = plt.subplots(ncols=2, figsize=(14, 6))
ax_1D[0].hist(spec_A[:, 0], 50, (0, 25), histtype='step', label='Species A', color='Red', lw=1.5)
ax_1D[0].hist(spec_B[:, 0], 50, (0, 25), histtype='step', label='Species B', color='Blue', lw=1.5)
ax_1D[0].set(title='Parameter x0', xlabel='x0', ylabel='Counts', xlim=(0,25))
ax_1D[0].legend(loc='upper left')
# uncomment later
#ax_1D[0].text(1, 176, fr'$\Delta_{{x0}} = {calc_separation(spec_A[:, 0], spec_B[:, 0]):.3f}$', fontsize=16)
ax_1D[1].hist(spec_A[:, 1], 50, (20, 80), histtype='step', label='Species A', color='Red', lw=1.5)
ax_1D[1].hist(spec_B[:, 1], 50, (20, 80), histtype='step', label='Species B', color='Blue', lw=1.5)
ax_1D[1].set(title='Parameter x1', xlabel='x1', ylabel='Counts', xlim=(20, 80))
ax_1D[1].legend(loc='upper left')
# uncomment later
#ax_1D[1].text(22, 140, fr'$\Delta_{{x1}} = {calc_separation(spec_A[:, 1], spec_B[:, 1]):.3f}$', fontsize=16)
fig_1D.tight_layout()
if save_plots :
fig_1D.savefig('InputVars_1D.pdf', dpi=600)
###Output
_____no_output_____
###Markdown
NOTE: Wait with drawing the 2D distribution, so that you think about the 1D distributions first!*** From the two 1D figures, it seems that species A and B can be separated to some degree, but not very well. If you were to somehow select cases of species A, then I can imagine a selection as follows: - If (x0 > 16) or (x1 13 and x1 < 52), then guess / select as A.Think about this yourself, and discuss with your peers, how you would go about separating A from B based on x0 and x1. ----------------------- 5-10 minutes later ----------------------- As it is, this type of selection is hard to optimise, especially with more dimensions (i.e. more variables than just x0 and x1). That is why Fisher's linear discriminant, $F$, is very useful. It makes the most separating linear combination of the input variables, and the coefficients can be calculated analytically. Thus, it is fast, efficient, and transparent. And it takes linear correlations into account.
###Code
# fig_corr, ax_corr = plt.subplots(figsize=(14, 8))
# ax_corr.scatter(spec_A[:, 0], spec_A[:, 1], color='Red', s=10, label='Species A')
# ax_corr.scatter(spec_B[:, 0], spec_B[:, 1], color='Blue', s=10, label='Species B')
# ax_corr.set(xlabel='Parameter x0', ylabel='Parameter x1', title='Correlation');
# ax_corr.legend();
# fig_corr.tight_layout()
#if save_plots :
# fig_corr.savefig('InputVars_2D.pdf', dpi=600)
###Output
_____no_output_____
###Markdown
Fisher Discriminant calculation: We want to find $\vec{w}$ defined by:$$\vec{w} = \left(\Sigma_A + \Sigma_B\right)^{-1} \left(\vec{\mu}_A - \vec{\mu}_B\right)$$ which we use to project our data onto the best separating plane (line in this case) given by:$$ \mathcal{F} = w_0 + \vec{w} \cdot \vec{x} $$ We start by finding the means and covariances of the individual species: (__fill in yourself!__)
###Code
mu_A = 0 # fill in yourself
mu_B = 0 # fill in yourself
mu_A
cov_A = 0 # fill in yourself
cov_B = 0 # fill in yourself
cov_sum = cov_A + cov_B
cov_sum
###Output
_____no_output_____
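###Markdown
If you get stuck, one possible way (a hedged hint, not necessarily the intended solution) to compute the per-species means and covariances with numpy is sketched below; `axis=0` averages over the rows (observations) and `rowvar=False` tells `np.cov` that the columns are the variables.
###Code
# hint only – hypothetical variable names so the exercise cells above stay untouched
mu_A_hint = np.mean(spec_A, axis=0)
cov_A_hint = np.cov(spec_A, rowvar=False)
mu_A_hint, cov_A_hint
###Output
_____no_output_____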
###Markdown
where `cov_sum` is the sum of all of the species' covariance matrices. We invert this using numpy's `inv` function (imported from `numpy.linalg` above). __Note__: fill in yourself!
###Code
# Delete the definition below of cov_sum when you have filled in the cells above:
cov_sum = np.diag([1, 2])
# Inverts cov_sum
cov_sum_inv = inv(cov_sum)
cov_sum_inv
###Output
_____no_output_____
###Markdown
We calculate the fisher weights, $\vec{w}$. __Note__: fill in yourself:
###Code
wf = np.ones(2) # fill in yourself
wf
###Output
_____no_output_____
###Markdown
We calculate the fisher discriminant, $\mathcal{F}$. __Note__: fill in yourself:
###Code
fisher_data_A = spec_A[:, 0] * (-1.4) + 10 # fill in yourself
fisher_data_B = spec_B[:, 0] * (-1.4) + 10 # fill in yourself
###Output
_____no_output_____
###Markdown
and plot it:
###Code
fig_fisher, ax_fisher = plt.subplots(figsize=(12, 8))
ax_fisher.hist(fisher_data_A, 200, (-22, 3), histtype='step', color='Red', label='Species A')
ax_fisher.hist(fisher_data_B, 200, (-22, 3), histtype='step', color='Blue', label='Species B')
ax_fisher.set(xlim=(-22, 3), xlabel='Fisher-discriminant')
ax_fisher.legend()
# ax_fisher.text(-21, 60, fr'$\Delta_{{fisher}} = {calc_separation(fisher_data_A, fisher_data_B):.3f}$', fontsize=16)
fig_fisher.tight_layout()
if save_plots:
fig_fisher.savefig('FisherOutput.pdf', dpi=600)
###Output
_____no_output_____ |
Projeto House Rocket/ProjetoHouseRocket_MachineLearning.ipynb | ###Markdown
1 - Which houses should the House Rocket CEO buy, and at what purchase price? 2 - Once a house is owned by the company, what should its sale price be? 3 - Should House Rocket renovate to increase the sale price? What changes would be suggested? What is the price increase from each renovation option?
###Code
import pandas as pd
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
###Output
_____no_output_____
###Markdown
Step 1: Import the data and create the model
###Code
tabela = pd.read_csv('kc_house_data.csv')
modelo = RandomForestRegressor()
tabela2 = tabela
###Output
_____no_output_____
###Markdown
Step 2: Check the state of the data
###Code
tabela.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 21613 entries, 0 to 21612
Data columns (total 21 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 id 21613 non-null int64
1 date 21613 non-null object
2 price 21613 non-null float64
3 bedrooms 21613 non-null int64
4 bathrooms 21613 non-null float64
5 sqft_living 21613 non-null int64
6 sqft_lot 21613 non-null int64
7 floors 21613 non-null float64
8 waterfront 21613 non-null int64
9 view 21613 non-null int64
10 condition 21613 non-null int64
11 grade 21613 non-null int64
12 sqft_above 21613 non-null int64
13 sqft_basement 21613 non-null int64
14 yr_built 21613 non-null int64
15 yr_renovated 21613 non-null int64
16 zipcode 21613 non-null int64
17 lat 21613 non-null float64
18 long 21613 non-null float64
19 sqft_living15 21613 non-null int64
20 sqft_lot15 21613 non-null int64
dtypes: float64(5), int64(15), object(1)
memory usage: 3.5+ MB
###Markdown
Step 3: Cleaning and organization
###Code
tabela = tabela.drop(['date', 'id'], axis=1)
tabela.floors = tabela.floors.astype(int)
tabela.price = tabela.price.astype(int)
tabela.bathrooms = tabela.bathrooms.astype(int)
tabela.price = tabela.price.round(-3)
display(tabela)
###Output
_____no_output_____
###Markdown
Step 4: Modeling
###Code
X = tabela.drop('price', axis=1)
y = tabela['price']
x_train, x_test, y_train, y_test = train_test_split(X, y, train_size=0.3, random_state=52)
###Output
_____no_output_____
###Markdown
Step 5: Training the algorithm
###Code
modelo.fit(x_train, y_train)
pred = modelo.predict(x_test)
r2_score(y_test, pred)
###Output
_____no_output_____
###Markdown
Step 6: Exporting the model
###Code
import joblib
joblib.dump(modelo, 'model2.pkl')
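# To reuse the exported model later it can be loaded back with joblib,
# e.g. (hypothetical snippet, not executed here):
# modelo_carregado = joblib.load('model2.pkl')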
teste = np.array([[3,1,1180,5650,1,0,0,3,7,1180,0,1955,0,98178,47.5112,-122.257,1340,5650]])
modelo.predict(teste)
###Output
_____no_output_____ |
greengems.ipynb | ###Markdown
Clash of Clans: How many builders do you *really* need? (or, how should I spend those green gems?) Hello everyone! I am a mid-level town hall 8 avid clasher with 4 builders. Recently I discovered (like so many other [people](https://www.reddit.com/r/ClashOfClans/comments/2psnf3/strategy_lab_time_longer_than_builder_time_what/)) that at my level research, not build time, is the limiting factor for progress. This made me wonder, is it really worth it to save up for the fifth builder? Or should I just spend gems on barracks/collector boosts, finishing research/hero upgrades in a timely fashion, etc. To solve this conundrum I decided to do a bit of simple data analysis using the upgrade time data available on the [Clash of Clans wiki](http://clashofclans.wikia.com/wiki/Clash_of_Clans_Wiki). This next section contains a bit of Python used to prepare the dataset for visualization and analysis. If you aren't interested, just skip down to the [results section](Results)
###Code
%matplotlib inline
import numpy as np
import pandas as pd
building_df = pd.read_csv("building_upgrade_data.csv")
building_df = building_df[building_df["town_hall"] != 11]
research_df = pd.read_csv("research_data.csv")
research_df = research_df[research_df["town_hall"] != 11]
# CONSTANTS
HOURS_PER_DAY = 24.0
MIN_PER_DAY = HOURS_PER_DAY * 60
SEC_PER_DAY = MIN_PER_DAY * 60
UNIT_MAP = {"seconds": SEC_PER_DAY, "minutes": MIN_PER_DAY,
"hours": HOURS_PER_DAY, "days": 1.0}
# These functions parse the possible time strings
from functools import reduce
def parse_time(t):
return int(t[0]) / UNIT_MAP[t[1]]
def chunks(l, n):
for i in range(0, len(l), n):
yield l[i:i + n]
def parse_time_string(s):
return reduce(lambda x, y: x + y, map(parse_time, chunks(s.split(' '), 2)))
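# quick sanity check on a hypothetical wiki-style string: 1 day + 12 hours = 1.5 days
assert parse_time_string("1 days 12 hours") == 1.5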
building_df["build_days"] = building_df["build_time"].map(parse_time_string)
research_df["research_days"] = research_df["research_time"].map(parse_time_string)
def get_build_time(df):
"""This calculates total build time per town hall level"""
build_time = {}
grouped = df.groupby(["type"])
for name, group in grouped:
regrouped = group.groupby("town_hall")
prev_quant = group.iloc[0]["quantity"]
for rname, rgroup in regrouped:
quant = rgroup["quantity"].iloc[0]
build_days = quant * rgroup["build_days"].sum()
build_time.setdefault(rname, 0)
build_time[rname] += build_days
# This adds time to each town hall level based on new structure acquisition
if quant > prev_quant:
diff = quant - prev_quant
catch_up_days = diff * group[group["town_hall"] < rname]["build_days"].sum()
build_time[rname] += catch_up_days
prev_quant = quant
return pd.Series(build_time)
build_times = get_build_time(building_df)
# Get research times by town hall, don't forget to add lab upgrade time
lab_build_days = building_df.groupby("type").get_group("laboratory")[["town_hall","build_days"]]
research_times = research_df.groupby("town_hall")["research_days"].sum()
lab_build_days["total_time"] = lab_build_days["build_days"] + research_times.values
research_times = lab_build_days.set_index("town_hall")["total_time"]
times = pd.concat([research_times, build_times], axis=1)
times.columns = ["research_time", "build_time"]
times["percent_research_time"] = times["research_time"].map(
lambda x: x / times["research_time"].sum())
times["percent_build_time"] = times["build_time"].map(
lambda x: x / times["build_time"].sum())
times = times.fillna(0)
times
###Output
_____no_output_____ |
spatialmath/introduction.ipynb | ###Markdown
Working in 3D Rotation Rotations in 3D can be represented by rotation matrices – 3x3 orthonormal matrices – which belong to the group $\mbox{SO}(3)$. These are a subset of all possible 3x3 real matrices. We can create such a matrix, a rotation of $\pi/4$ radians around the x-axis by
###Code
R1 = SO3.Rx(pi/4)
###Output
_____no_output_____
###Markdown
which is an object of type
###Code
type(R1)
###Output
_____no_output_____
###Markdown
which contains an $\mbox{SO}(3)$ matrix. We can display that matrix
###Code
R1
###Output
[38;5;1m[48;5;255m 1 [0m[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0.707107 [0m[38;5;1m[48;5;255m-0.707107 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0.707107 [0m[38;5;1m[48;5;255m 0.707107 [0m[48;5;255m [0m
###Markdown
which is colored red if the console supports color. The matrix, a numpy array, is encapsulated and not directly settable by the user. This way we can ensure that the matrix is a proper member of the $\mbox{SO}(3)$ group. We can _compose_ these rotations using the Python `*` operator
###Code
R1 * R1
###Output
[38;5;1m[48;5;255m 1 [0m[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m-1 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 1 [0m[38;5;1m[48;5;255m 0 [0m[48;5;255m [0m
###Markdown
which is a rotation by $\pi/4$ _then_ another rotation by $\pi/4$, which is a total rotation of $\pi/2$ about the X-axis. We can double-check that
###Code
SO3.Rx(pi/2)
###Output
[38;5;1m[48;5;255m 1 [0m[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m-1 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 1 [0m[38;5;1m[48;5;255m 0 [0m[48;5;255m [0m
###Markdown
We could also have used the exponentiation operator
###Code
R1**2
###Output
[38;5;1m[48;5;255m 1 [0m[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m-1 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 1 [0m[38;5;1m[48;5;255m 0 [0m[48;5;255m [0m
###Markdown
We can also specify the angle in degrees
###Code
SO3.Rx(45, 'deg')
###Output
[38;5;1m[48;5;255m 1 [0m[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0.707107 [0m[38;5;1m[48;5;255m-0.707107 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0.707107 [0m[38;5;1m[48;5;255m 0.707107 [0m[48;5;255m [0m
###Markdown
We can visualize what this looks like by
###Code
fig = plt.figure() # create a new figure
SE3().plot(frame='0', dims=[-1.5,1.5], color='black')
R1.plot(frame='1')
###Output
_____no_output_____
###Markdown
Click on the coordinate frame and use the mouse to change the viewpoint. The world reference frame is shown in black, and the rotated frame is shown in blue. Often we need to describe more complex orientations and we typically use a _3 angle_ convention to do this. Euler's rotation theorem says that any orientation can be expressed in terms of three rotations about different axes. One common convention is roll-pitch-yaw angles
###Code
R2 = SO3.RPY([10, 20, 30], unit='deg')
R2
###Output
[38;5;1m[48;5;255m 0.813798 [0m[38;5;1m[48;5;255m-0.44097 [0m[38;5;1m[48;5;255m 0.378522 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0.469846 [0m[38;5;1m[48;5;255m 0.882564 [0m[38;5;1m[48;5;255m 0.0180283 [0m[48;5;255m [0m
[38;5;1m[48;5;255m-0.34202 [0m[38;5;1m[48;5;255m 0.163176 [0m[38;5;1m[48;5;255m 0.925417 [0m[48;5;255m [0m
###Markdown
which says that we rotate by 30° about the Z-axis (yaw), _then_ 20° about the Y-axis (pitch) and _then_ 10° about the X-axis – this is the ZYX roll-pitch-yaw convention. Note that: 1. the first rotation in the sequence involves the last element in the angle sequence; 2. we can change angle convention, for example by passing `order='xyz'`. We can visualize the resulting orientation.
###Code
plt.figure() # create a new figure
SE3().plot(frame='0', dims=[-1.5,1.5], color='black')
R2.plot(frame='2')
###Output
_____no_output_____
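###Markdown
As an aside (an added example based on the note above), passing `order='xyz'` gives the same three angles a different meaning, and hence a different rotation matrix:
###Code
SO3.RPY([10, 20, 30], unit='deg', order='xyz')
###Output
_____no_output_____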
###Markdown
We can convert any rotation matrix back to its 3-angle representation
###Code
R2.rpy()
###Output
_____no_output_____
###Markdown
Constructors The default constructor yields a null rotation
###Code
SO3()
###Output
[38;5;1m[48;5;255m 1 [0m[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 1 [0m[38;5;1m[48;5;255m 0 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 1 [0m[48;5;255m [0m
###Markdown
which is represented by the identity matrix. The class supports a number of variant constructors using class methods:

| Constructor | rotation |
|---------------|-----------|
| SO3() | null rotation |
| SO3.Rx(theta) | about X-axis |
| SO3.Ry(theta) | about Y-axis |
| SO3.Rz(theta) | about Z-axis |
| SO3.RPY(rpy) | from roll-pitch-yaw angle vector |
| SO3.Eul(euler) | from Euler angle vector |
| SO3.AngVec(theta, v) | from rotation and axis |
| SO3.Omega(v) | from a twist vector |
| SO3.OA | from orientation and approach vectors |

Imagine we want a rotation that describes a frame that has its y-axis (o-vector) pointing in the world negative z-axis direction and its z-axis (a-vector) pointing in the world x-axis direction
###Code
SO3.OA(o=[0,0,-1], a=[1,0,0])
###Output
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 1 [0m[48;5;255m [0m
[38;5;1m[48;5;255m-1 [0m[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m-1 [0m[38;5;1m[48;5;255m 0 [0m[48;5;255m [0m
###Markdown
We can redo our earlier example using `SO3.Rx()` with the explicit angle-axis notation
###Code
SO3.AngVec(pi/4, [1,0,0])
###Output
[38;5;1m[48;5;255m 1 [0m[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0.707107 [0m[38;5;1m[48;5;255m-0.707107 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0.707107 [0m[38;5;1m[48;5;255m 0.707107 [0m[48;5;255m [0m
###Markdown
or
###Code
SO3.Exp([pi/4,0,0])
###Output
[38;5;1m[48;5;255m 1 [0m[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0.707107 [0m[38;5;1m[48;5;255m-0.707107 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0.707107 [0m[38;5;1m[48;5;255m 0.707107 [0m[48;5;255m [0m
###Markdown
or a more complex example
###Code
SO3.AngVec(30, [1,2,3], unit='deg')
###Output
[38;5;1m[48;5;255m 0.875595 [0m[38;5;1m[48;5;255m-0.381753 [0m[38;5;1m[48;5;255m 0.29597 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0.420031 [0m[38;5;1m[48;5;255m 0.904304 [0m[38;5;1m[48;5;255m-0.0762129 [0m[48;5;255m [0m
[38;5;1m[48;5;255m-0.238552 [0m[38;5;1m[48;5;255m 0.191048 [0m[38;5;1m[48;5;255m 0.952152 [0m[48;5;255m [0m
###Markdown
Properties The object has a number of properties, such as the columns which are often written as ${\bf R} = [n, o, a]$ where $n$, $o$ and $a$ are 3-vectors. For example
###Code
R1.n
###Output
_____no_output_____
###Markdown
or its inverse (in this case its transpose)
###Code
R1.inv()
###Output
[38;5;1m[48;5;255m 1 [0m[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0.707107 [0m[38;5;1m[48;5;255m 0.707107 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m-0.707107 [0m[38;5;1m[48;5;255m 0.707107 [0m[48;5;255m [0m
###Markdown
the shape of the underlying matrix
###Code
R1.shape
###Output
_____no_output_____
###Markdown
and the order
###Code
R1.N
###Output
_____no_output_____
###Markdown
indicating it operates in 3D space. PredicatesWe can check various properties of the object using properties and methods that are common to all classes in this package
###Code
[R1.isSE, R1.isSO, R1.isrot(), R1.ishom(), R1.isrot2(), R1.ishom2()]
###Output
_____no_output_____
###Markdown
The last four in this list provide compatibility with the Spatial Math Toolbox for MATLAB. Quaternions A quaternion is often described as a type of complex number, but it is more useful (and simpler) to think of it as an ordered pair comprising a scalar and a vector. We can create a quaternion
###Code
q1 = Quaternion([1,2,3,4])
q1
###Output
1.000000 < 2.000000, 3.000000, 4.000000 >
###Markdown
where the scalar is before the angle brackets which enclose the vector part. Properties allow us to extract the scalar part
###Code
q1.s
###Output
_____no_output_____
###Markdown
and the vector part
###Code
q1.v
###Output
_____no_output_____
###Markdown
and we can represent it as a numpy array
###Code
q1.vec
###Output
_____no_output_____
###Markdown
A quaternion has a conjugate
###Code
q1.conj()
###Output
1.000000 < -2.000000, -3.000000, -4.000000 >
###Markdown
and a norm, which is the magnitude of the equivalent 4-vector
###Code
q1.norm()
###Output
_____no_output_____
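###Markdown
Dividing the 4-vector form by this norm gives a unit-norm version of the quaternion (a small added example using plain numpy):
###Code
q1.vec / q1.norm()   # a unit 4-vector
###Output
_____no_output_____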
###Markdown
We can create a second quaternion
###Code
q2 = Quaternion([5,6,7,8])
q2
###Output
5.000000 < 6.000000, 7.000000, 8.000000 >
###Markdown
Operators allow us to add
###Code
q1 + q2
###Output
6.000000 < 8.000000, 10.000000, 12.000000 >
###Markdown
subtract
###Code
q1 - q2
###Output
-4.000000 < -4.000000, -4.000000, -4.000000 >
###Markdown
and to multiply
###Code
q1 * q2
###Output
-60.000000 < 12.000000, 30.000000, 24.000000 >
###Markdown
which follows the special rules of Hamilton multiplication. Multiplication can also be performed as the linear algebraic product of one quaternion converted to a 4x4 matrix
###Code
q1.matrix
###Output
_____no_output_____
###Markdown
and the other as a 4-vector
###Code
q1.matrix @ q2.vec
###Output
_____no_output_____
###Markdown
The product of a quaternion and its conjugate is a scalar equal to the square of its norm
###Code
q1 * q1.conj()
###Output
30.000000 < 0.000000, 0.000000, 0.000000 >
###Markdown
Conversely, a quaternion with a zero scalar part is called a _pure quaternion_
###Code
Quaternion.Pure([1, 2, 3])
###Output
0.000000 < 1.000000, 2.000000, 3.000000 >
###Markdown
Unit quaternions A quaternion with a unit norm is called a _unit quaternion_. The unit quaternions form a group and its elements represent rotations in 3D space. A unit quaternion is in all regards like an $\mbox{SO}(3)$ matrix except for a _double mapping_ -- a quaternion and its element-wise negation represent the same rotation.
###Code
q1 = UnitQuaternion.Rx(30, 'deg')
q1
###Output
0.965926 << 0.258819, 0.000000, 0.000000 >>
###Markdown
the convention is that unit quaternions are denoted using double angle brackets. The norm, as advertised, is indeed one
###Code
q1.norm()
###Output
_____no_output_____
###Markdown
We create another unit quaternion
###Code
q2 = UnitQuaternion.Ry(-40, 'deg')
q2
###Output
0.939693 << 0.000000, -0.342020, 0.000000 >>
###Markdown
The rotations can be composed by quaternion multiplication
###Code
q3 = q1 * q2
q3
###Output
0.907673 << 0.243210, -0.330366, -0.088521 >>
###Markdown
We can convert a quaternion to a rotation matrix
###Code
q3.R
###Output
_____no_output_____
###Markdown
which yields exactly the same answer as if we'd done it using SO(3) rotation matrices
###Code
SO3.Rx(30, 'deg') * SO3.Ry(-40, 'deg')
###Output
[38;5;1m[48;5;255m 0.766044 [0m[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m-0.642788 [0m[48;5;255m [0m
[38;5;1m[48;5;255m-0.321394 [0m[38;5;1m[48;5;255m 0.866025 [0m[38;5;1m[48;5;255m-0.383022 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0.55667 [0m[38;5;1m[48;5;255m 0.5 [0m[38;5;1m[48;5;255m 0.663414 [0m[48;5;255m [0m
###Markdown
The advantages of unit quaternions are that: 1. they are compact, just 4 numbers instead of 9; 2. multiplication involves fewer operations and is therefore faster; 3. numerical errors build up when we multiply rotation matrices together many times, and they lose the structure (the columns are no longer unit length or orthogonal). Correcting this, the process of _normalization_, is expensive. For unit quaternions errors will also compound, but normalization is simply a matter of dividing through by the norm. Unit quaternions have an inverse
###Code
q2.inv()
q1 * q2.inv()
###Output
0.907673 << 0.243210, 0.330366, 0.088521 >>
###Markdown
or
###Code
q1 / q2
###Output
0.907673 << 0.243210, 0.330366, 0.088521 >>
###Markdown
We can convert any unit quaternion to an SO3 object if we wish
###Code
q1.SO3()
###Output
[38;5;1m[48;5;255m 1 [0m[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0.866025 [0m[38;5;1m[48;5;255m-0.5 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0.5 [0m[38;5;1m[48;5;255m 0.866025 [0m[48;5;255m [0m
###Markdown
and conversely, any `SO3` object to a unit quaternion
###Code
UnitQuaternion( SO3.Rx(30, 'deg'))
###Output
0.965926 << 0.258819, 0.000000, 0.000000 >>
###Markdown
A unit quaternion is not a minimal representation. Since we know the magnitude is 1, then with any 3 elements we can compute the fourth up to a sign ambiguity.
###Code
q1.vec3
a = UnitQuaternion.qvmul( q1.vec3, q2.vec3)
a
###Output
_____no_output_____
###Markdown
from which we can recreate the unit quaternion
###Code
UnitQuaternion.Vec3(a)
###Output
0.907673 << 0.243210, -0.330366, -0.088521 >>
###Markdown
Representing position In robotics we also need to describe the position of objects and we can do this with a _homogeneous transformation_ matrix – a 4x4 matrix – which belongs to the group $\mbox{SE}(3)$, which is a subset of all 4x4 real matrices. We can create such a matrix, for a translation of 1 in the x-direction, 2 in the y-direction and 3 in the z-direction by
###Code
T1 = SE3(1, 2, 3)
T1
###Output
[38;5;1m[48;5;255m 1 [0m[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0 [0m[38;5;4m[48;5;255m 1 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 1 [0m[38;5;1m[48;5;255m 0 [0m[38;5;4m[48;5;255m 2 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 1 [0m[38;5;4m[48;5;255m 3 [0m[48;5;255m [0m
[38;5;244m[48;5;255m 0 [0m[38;5;244m[48;5;255m 0 [0m[38;5;244m[48;5;255m 0 [0m[38;5;244m[48;5;255m 1 [0m[48;5;255m [0m
###Markdown
which is displayed in a color-coded fashion: rotation matrix in red, translation vector in blue, and the constant bottom row in grey. We note that the red matrix is an _identity matrix_. The class supports a number of variant constructors using class methods.

| Constructor | motion |
|---------------|-----------|
| SE3() | null motion |
| SE3.Tx(d) | translation along X-axis |
| SE3.Ty(d) | translation along Y-axis |
| SE3.Tz(d) | translation along Z-axis |
| SE3.Rx(theta) | rotation about X-axis |
| SE3.Ry(theta) | rotation about Y-axis |
| SE3.Rz(theta) | rotation about Z-axis |
| SE3.RPY(rpy) | rotation from roll-pitch-yaw angle vector |
| SE3.Eul(euler) | rotation from Euler angle vector |
| SE3.AngVec(theta, v) | rotation from rotation and axis |
| SO3.Omega(v) | from a twist vector |
| SE3.OA(ovec, avec) | rotation from orientation and approach vectors |

We can visualize this
###Code
plt.figure() # create a new figure
SE3().plot(frame='0', dims=[0,4], color='black')
T1.plot(frame='1')
###Output
_____no_output_____
###Markdown
We can define another translation
###Code
T12 = SE3(2, -1, -2)
###Output
_____no_output_____
###Markdown
and compose it with `T1`
###Code
T2 = T1 * T12
T2.plot(frame='2', color='red')
###Output
_____no_output_____
###Markdown
Representing pose
###Code
T1 = SE3(1, 2, 3) * SE3.Rx(30, 'deg')
T1
###Output
[38;5;1m[48;5;255m 1 [0m[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0 [0m[38;5;4m[48;5;255m 1 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0.866025 [0m[38;5;1m[48;5;255m-0.5 [0m[38;5;4m[48;5;255m 2 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0.5 [0m[38;5;1m[48;5;255m 0.866025 [0m[38;5;4m[48;5;255m 3 [0m[48;5;255m [0m
[38;5;244m[48;5;255m 0 [0m[38;5;244m[48;5;255m 0 [0m[38;5;244m[48;5;255m 0 [0m[38;5;244m[48;5;255m 1 [0m[48;5;255m [0m
###Markdown
This is a composition of two motions: a pure translation and _then_ a pure rotation. We can see the rotation matrix, computed above, in the top-left corner and the translation components in the right-most column. In the earlier example `Out[24]` was simply a null-rotation which is represented by the identity matrix. The frame now looks like this
###Code
plt.figure() # create a new figure
SE3().plot(frame='0', dims=[0,4], color='black')
T1.plot(frame='1')
###Output
_____no_output_____
###Markdown
Properties The object has a number of properties, such as the columns which are often written as $[n, o, a]$
###Code
T1.o
###Output
_____no_output_____
###Markdown
or its inverse (computed in an efficient manner based on the structure of the matrix)
###Code
T1.inv()
###Output
[38;5;1m[48;5;255m 1 [0m[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0 [0m[38;5;4m[48;5;255m-1 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m 0.866025 [0m[38;5;1m[48;5;255m 0.5 [0m[38;5;4m[48;5;255m-3.23205 [0m[48;5;255m [0m
[38;5;1m[48;5;255m 0 [0m[38;5;1m[48;5;255m-0.5 [0m[38;5;1m[48;5;255m 0.866025 [0m[38;5;4m[48;5;255m-1.59808 [0m[48;5;255m [0m
[38;5;244m[48;5;255m 0 [0m[38;5;244m[48;5;255m 0 [0m[38;5;244m[48;5;255m 0 [0m[38;5;244m[48;5;255m 1 [0m[48;5;255m [0m
###Markdown
We can extract the rotation matrix as a numpy array
###Code
T1.R
###Output
_____no_output_____
###Markdown
or the translation vector, as a numpy array
###Code
T1.t
###Output
_____no_output_____
###Markdown
The shape of the underlying SE(3) matrix is
###Code
T1.shape
###Output
_____no_output_____
###Markdown
and the order
###Code
T1.N
###Output
_____no_output_____
###Markdown
indicating it operates in 3D space. PredicatesWe can check various properties
###Code
[T1.isSE, T1.isSO, T1.isrot(), T1.ishom(), T1.isrot2(), T1.ishom2()]
###Output
_____no_output_____
###Markdown
A couple of important points: when we compose motions they must be of the same type. An `SE3` object can represent pure translation, pure rotation or both. If we wish to compose a translation with a rotation, the rotation must be an `SE3` object - a rotation plus zero translation (in that sense `SE3` is a superset of `SO3`). Transforming points Imagine now a set of points defining the vertices of a cube
###Code
P = np.array([[-1, 1, 1, -1, -1, 1, 1, -1], [-1, -1, 1, 1, -1, -1, 1, 1], [-1, -1, -1, -1, 1, 1, 1, 1]])
P
###Output
_____no_output_____
###Markdown
defined with respect to a body reference frame ${}^A P_i$. Given a transformation ${}^0 \mathbf{T}_A$ from the world frame to the body frame, we determine the coordinates of the points in the world frame by ${}^0 P_i = {}^0 \mathbf{T}_A \, {}^A P_i$ which we can perform in a single operation
###Code
Q = T1 * P
###Output
_____no_output_____
###Markdown
which we can now plot
###Code
fig = plt.figure()
SE3().plot(frame='0', dims=[-2,3,0,5,0,5], color='black')
ax = plt.gca()
ax.set_xlabel('X'); ax.set_ylabel('Y'); ax.set_zlabel('Z');
ax.scatter(xs=Q[0], ys=Q[1], zs=Q[2], s=20) # draw vertices
# draw lines joining the vertices
lines = [[0,1,5,6], [1,2,6,7], [2,3,7,4], [3,0,4,5]]
for line in lines:
ax.plot([Q[0,i] for i in line], [Q[1,i] for i in line], [Q[2,i] for i in line])
###Output
_____no_output_____
###Markdown
This is often used in SLAM and bundle adjustment algorithms since it is compact and better behaved than using roll-pitch-yaw or Euler angles. Twists A twist is an alternative way to represent a 3D pose, but it is more succinct, comprising just 6 values. In contrast an SE(3) matrix has 16 values with a considerable amount of redundancy, but it does offer considerable computational convenience. Twists are the logarithm of an SE(3) matrix
###Code
T = SE3.Rand()
T
T.log()
###Output
_____no_output_____
###Markdown
How do we know this is really the logarithm? Well, we can exponentiate it
###Code
lg = T.log()
SE3.Exp(lg)
###Output
[38;5;1m[48;5;255m 0.570802 [0m[38;5;1m[48;5;255m 0.722709 [0m[38;5;1m[48;5;255m 0.389714 [0m[38;5;4m[48;5;255m 0.483881 [0m[48;5;255m [0m
[38;5;1m[48;5;255m-0.255076 [0m[38;5;1m[48;5;255m 0.607224 [0m[38;5;1m[48;5;255m-0.752472 [0m[38;5;4m[48;5;255m-0.702483 [0m[48;5;255m [0m
[38;5;1m[48;5;255m-0.780462 [0m[38;5;1m[48;5;255m 0.330106 [0m[38;5;1m[48;5;255m 0.53095 [0m[38;5;4m[48;5;255m 0.497569 [0m[48;5;255m [0m
[38;5;244m[48;5;255m 0 [0m[38;5;244m[48;5;255m 0 [0m[38;5;244m[48;5;255m 0 [0m[38;5;244m[48;5;255m 1 [0m[48;5;255m [0m
###Markdown
and we have reconstituted our original matrix. The logarithm is a matrix with a very particular structure: it has a zero diagonal and bottom row, and the top-left 3x3 matrix is skew symmetric. This matrix has only 6 unique elements: three from the last column, and three from the skew-symmetric matrix, and we can request the `log` method to give us just these
###Code
T.log(twist=True)
###Output
_____no_output_____
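###Markdown
As a cross-check, here is a minimal sketch (plain numpy, and assuming the (v, ω) element ordering used by the toolbox) of pulling those six unique elements out of the 4x4 matrix logarithm by hand
###Code
import numpy as np
L = T.log()                                  # the 4x4 matrix logarithm from above
v = L[:3, 3]                                 # translational part: top three elements of the last column
S = L[:3, :3]                                # skew-symmetric rotational part
w = np.array([S[2, 1], S[0, 2], S[1, 0]])    # unique elements of the skew-symmetric matrix
np.hstack([v, w])                            # should match T.log(twist=True)
###Output
_____no_output_____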
###Markdown
This 6-vector is a twist, a concise way to represent the translational and rotational components of a pose. Twists are represented by their own class
###Code
tw = Twist3(T)
tw
###Output
(0.42701 -0.3207 0.89151; 0.69954 0.75614 -0.63182)
###Markdown
Just like the other pose objects, `Twist3` objects can have multiple values. Twists can be composed
###Code
T = SE3(1, 2, 3) * SE3.Rx(0.3)
tw = Twist3(T)
tw
###Output
(1 2.435 2.6775; 0.3 0 0)
###Markdown
Now we can compose the twists
###Code
tw2 = tw * tw
tw2
###Output
(2 4.87 5.3549; 0.6 0 0)
###Markdown
and the result is just the same as if we had composed the transforms
###Code
Twist3(T * T)
###Output
(2 4.87 5.3549; 0.6 0 0)
###Markdown
Twists have great utility for robot arm kinematics, to compute the forward kinematics and Jacobians. Twist objects have a number of methods. The adjoint is a 6x6 matrix that relates velocities
###Code
tw.Ad()
###Output
_____no_output_____
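###Markdown
As a minimal sketch of what "relates velocities" means here: the 6x6 adjoint maps a spatial velocity expressed in one frame into another frame. The velocity vector below is made up purely for illustration, and the (v, ω) ordering is an assumption
###Code
nu = np.r_[1, 0, 0, 0, 0, 0.5]   # an assumed spatial velocity (vx, vy, vz, wx, wy, wz)
tw.Ad() @ nu                     # the 6x6 adjoint applied to the 6-vector
###Output
_____no_output_____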
###Markdown
and the `SE3` object also has this method. The logarithm of the adjoint is given by
###Code
tw.ad()
###Output
_____no_output_____
###Markdown
The name twist comes from considering the rigid-body motion as a rotation and a translation along a unique line of action. It rotates as it moves along the line following a screw-like motion, hence its other name, a _screw_. The line in 3D space is described in Plücker coordinates by
###Code
tw.line()
###Output
{ 0.91 2.435 2.6775; 0.3 0 0}
###Markdown
The pitch of the screw is
###Code
tw.pitch()
###Output
_____no_output_____
###Markdown
and a point on the line is
###Code
tw.pole()
###Output
_____no_output_____
###Markdown
Working in 2D Things are actually much simpler in 2D. There's only one possible rotation, which is around an axis perpendicular to the plane (where the z-axis would have been if it were in 3D). Rotations in 2D can be represented by rotation matrices – 2x2 orthonormal matrices – which belong to the group SO(2). Just as for the 3D case these matrices have special properties: each column (and row) is a unit vector, they are all orthogonal, the inverse of this matrix is equal to its transpose, and its determinant is +1. We can create such a matrix, a rotation of $\pi/4$ radians by
###Code
R = SO2(pi/4)
R
###Output
   0.707107  -0.707107
   0.707107   0.707107
###Markdown
or in degrees
###Code
SO2(45, unit='deg')
###Output
   0.707107  -0.707107
   0.707107   0.707107
###Markdown
and we can plot this on the 2D plane
###Code
plt.figure() # create a new figure
R.plot()
###Output
_____no_output_____
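###Markdown
Before moving on, a quick standalone numpy check of the orthonormal properties listed above (a sketch that builds the 2x2 rotation matrix directly rather than using the toolbox)
###Code
th = np.pi / 4
Rm = np.array([[np.cos(th), -np.sin(th)],
               [np.sin(th),  np.cos(th)]])
print(np.allclose(Rm.T @ Rm, np.eye(2)))       # columns (and rows) are orthonormal
print(np.allclose(np.linalg.inv(Rm), Rm.T))    # inverse equals transpose
print(np.isclose(np.linalg.det(Rm), 1.0))      # determinant is +1
###Output
_____no_output_____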
###Markdown
Once again, it's useful to describe the position of things and we do this with a homogeneous transformation matrix – a 3x3 matrix – which belongs to the group SE(2).
###Code
T = SE2(1, 2)
T
###Output
   1          0          1
   0          1          2
   0          0          1
###Markdown
which has a similar structure to the 3D case. The rotation matrix is in the top-left corner and the translation components are in the right-most column. We can also call the constructor with the elements in a list
###Code
T = SE2([1, 2])
plt.figure() # create a new figure
T.plot()
T2 = SE2(45, unit='deg')
T2
plt.figure() # create a new figure
T2.plot()
###Output
_____no_output_____
###Markdown
The in-place versions of operators are also supported, for example
###Code
X = T
X /= T2
X
###Output
   0.707107   0.707107   1
  -0.707107   0.707107   2
   0          0          1
###Markdown
Operators Group operators For the 3D case, the classes we have introduced mimic the behavior of the mathematical groups $\mbox{SO}(3)$ and $\mbox{SE}(3)$ which contain matrices of particular structure. They are subsets respectively of the sets of all possible real 3x3 and 4x4 matrices. The only operations on two elements of the group that also belong to the group are composition (represented by the `*` operator) and inversion.
###Code
T1 = SE3(1, 2, 3) * SE3.Rx(30, 'deg')
[type(T1), type(T1.inv()), type(T1*T1)]
###Output
_____no_output_____
###Markdown
If we know the pose of frame {2} and a _rigid body motion_ from frame {1} to frame {2}
###Code
T2 = SE3(4, 5, 6) * SE3.Ry(-40, 'deg')
T12 = SE3(0, -2, -1) * SE3.Rz(70, 'deg')
###Output
_____no_output_____
###Markdown
then ${}^0{\bf T}_1 \bullet {}^1{\bf T}_2 = {}^0{\bf T}_2$ and so ${}^0{\bf T}_1 = {}^0{\bf T}_2 \bullet ({}^1{\bf T}_2)^{-1}$ which we write as
###Code
T1 * T2.inv()
###Output
   0.766044   0          0.642788  -5.9209
   0.321394   0.866025  -0.383022  -1.31757
  -0.55667    0.5        0.663414  -1.2538
   0          0          0          1
###Markdown
or more concisely as
###Code
T1 / T2
###Output
   0.766044   0          0.642788  -5.9209
   0.321394   0.866025  -0.383022  -1.31757
  -0.55667    0.5        0.663414  -1.2538
   0          0          0          1
###Markdown
Exponentiation is also a group operator since it is simply repeated composition
###Code
T1 ** 2
###Output
   1          0          0          2
   0          0.5       -0.866025   2.23205
   0          0.866025   0.5        6.59808
   0          0          0          1
###Markdown
Non-group operations Operations such as addition and subtraction are valid for matrices but not for elements of the group; therefore these operations will return a numpy array rather than a group object
###Code
SE3() + SE3()
###Output
_____no_output_____
###Markdown
yields an array, not an `SE3` object. As do other non-group operations
###Code
2 * SE3()
SE3() - 1
###Output
_____no_output_____
###Markdown
Similar principles apply to quaternions. Unit quaternions are a group and only support composition and inversion. Any other operations will return an ordinary quaternion
###Code
UnitQuaternion() * 2
###Output
2.000000 < 0.000000, 0.000000, 0.000000 >
###Markdown
which is indicated by the single angle brackets. In-place operators All of Python's in-place operators are available as well, whether for group or non-group operations. For example
###Code
T = T1
T *= T2
T **= 2
###Output
_____no_output_____
###Markdown
Multi-valued objects For many tasks we might want to have a set or sequence of rotations or poses. The obvious solution would be to use a Python list
###Code
T = [ SE3.Rx(0), SE3.Rx(0.1), SE3.Rx(0.2), SE3.Rx(0.3), SE3.Rx(0.4)]
###Output
_____no_output_____
###Markdown
but the pose objects in this package can hold multiple values, just like a native Python list can. There are a few ways to do this, most obviously
###Code
T = SE3( [ SE3.Rx(0), SE3.Rx(0.1), SE3.Rx(0.2), SE3.Rx(0.3), SE3.Rx(0.4)] )
###Output
_____no_output_____
###Markdown
which has the type of a pose object
###Code
type(T)
###Output
_____no_output_____
###Markdown
but it has a length of five
###Code
len(T)
###Output
_____no_output_____
###Markdown
that is, it contains five values. We can see these when we display the object's value
###Code
T
###Output
[0] =
   1          0          0          0
   0          1          0          0
   0          0          1          0
   0          0          0          1
[1] =
   1          0          0          0
   0          0.995004  -0.0998334  0
   0          0.0998334  0.995004   0
   0          0          0          1
[2] =
   1          0          0          0
   0          0.980067  -0.198669   0
   0          0.198669   0.980067   0
   0          0          0          1
[3] =
   1          0          0          0
   0          0.955336  -0.29552    0
   0          0.29552    0.955336   0
   0          0          0          1
[4] =
   1          0          0          0
   0          0.921061  -0.389418   0
   0          0.389418   0.921061   0
   0          0          0          1
###Markdown
We can index into the object (slice it) just as we would a Python list
###Code
T[3]
###Output
   1          0          0          0
   0          0.955336  -0.29552    0
   0          0.29552    0.955336   0
   0          0          0          1
###Markdown
or from the second element up to (but not including) the last, in steps of two
###Code
T[1:-1:2]
###Output
[0] =
   1          0          0          0
   0          0.995004  -0.0998334  0
   0          0.0998334  0.995004   0
   0          0          0          1
[1] =
   1          0          0          0
   0          0.955336  -0.29552    0
   0          0.29552    0.955336   0
   0          0          0          1
###Markdown
We could append another value to the end
###Code
T.append( SE3.Rx(0.5) )
len(T)
###Output
_____no_output_____
###Markdown
The `SE3` class, like all the classes in this package, inherits from the `UserList` class, giving it all the methods of a Python list such as append, extend, del, etc. We can also use them as _iterables_ in _for_ loops and in list comprehensions. You can create an object of a particular type with no elements using this constructor
###Code
T = SE3.Empty()
len(T)
###Output
_____no_output_____
###Markdown
which is the equivalent of setting a variable to `[]`. We could write the above example more succinctly
###Code
T = SE3.Rx( np.linspace(0, 0.5, 5) )
len(T)
T[3]
###Output
   1          0          0          0
   0          0.930508  -0.366273   0
   0          0.366273   0.930508   0
   0          0          0          1
###Markdown
Consider another rotation
###Code
T2 = SE3.Ry(40, 'deg')
###Output
_____no_output_____
###Markdown
If we write
###Code
A = T * T2
len(A)
###Output
_____no_output_____
###Markdown
we obtain a new list where each element of `A` is `T[i] * T2`. Similarly
###Code
B = T2 * T
len(B)
###Output
_____no_output_____
###Markdown
which has produced a new list where each element of `B` is `T2 * T[i]`. Similarly
###Code
C = T * T
len(C)
###Output
_____no_output_____
###Markdown
yields a new list where each element of `C` is `T[i] * T[i]`. We can apply such a sequence to a coordinate vector as we did earlier
###Code
P = T * [0, 1, 0]
P
###Output
_____no_output_____
###Markdown
where each element of `T` has transformed the coordinate vector (0, 1, 0), the results being consecutive columns of the resulting numpy array. This is equivalent to writing
###Code
np.column_stack([x * [0,1,0] for x in T])
###Output
_____no_output_____
###Markdown
C++-like programming model Lists are useful, but we might like to use a programming model where we allocate an array of pose objects and reference them or assign to them. We can do that too!
###Code
T = SE3.Alloc(5) # create a vector of SE3 values
for i, theta in enumerate(np.linspace(0, 1, len(T))):
T[i] = SE3.Rz(theta)
T
###Output
[0] =
   1          0          0          0
   0          1          0          0
   0          0          1          0
   0          0          0          1
[1] =
   0.968912  -0.247404   0          0
   0.247404   0.968912   0          0
   0          0          1          0
   0          0          0          1
[2] =
   0.877583  -0.479426   0          0
   0.479426   0.877583   0          0
   0          0          1          0
   0          0          0          1
[3] =
   0.731689  -0.681639   0          0
   0.681639   0.731689   0          0
   0          0          1          0
   0          0          0          1
[4] =
   0.540302  -0.841471   0          0
   0.841471   0.540302   0          0
   0          0          1          0
   0          0          0          1
|
20210519/housing_force03.ipynb | ###Markdown
State $$x = [w,n,m,s,e,o]$$
- $w$: wealth level, size: 20
- $n$: 401k level, size: 10
- $m$: mortgage level, size: 10
- $s$: economic state, size: 8
- $e$: employment state, size: 2
- $o$: housing state, size: 2

Action
- $c$: consumption amount, size: 20
- $b$: bond investment, size: 20
- $k$: stock investment, derived from the budget constraint once $c$ and $b$ are determined.
- $h$: housing consumption size, related to housing status and consumption level.

If $O = 1$, the agent owns a house:
- $A = [c, b, k, h=H, action = 1]$ sell the house
- $A = [c, b, k, h=H, action = 0]$ keep the house

If $O = 0$, the agent does not own a house:
- $A = [c, b, k, h= \frac{c}{\alpha} \frac{1-\alpha}{pr}, action = 0]$ keep renting the house
- $A = [c, b, k, h= \frac{c}{\alpha} \frac{1-\alpha}{pr}, action = 1]$ buy a house with $H$ units

Housing
A 20% down payment on the mortgage, a fixed mortgage rate, and a single housing unit are available. Between ages 20 and 50, agents can choose to buy a house, and can choose to sell the house at any moment. $H = 1000$
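A small sketch of the renter's housing rule in the action set above, $h = \frac{c}{\alpha} \frac{1-\alpha}{pr}$; the parameter values used here are made-up placeholders, and the actual $\alpha$ and rental price $pr$ are defined elsewhere in this project
###Code
def renter_housing(c, alpha=0.8, pr=2.0):
    # h = (c / alpha) * (1 - alpha) / pr, the housing consumption implied by consumption c
    return c / alpha * (1 - alpha) / pr
renter_housing(10.0)
###Output
_____no_output_____
###Markdown
The cell below runs the backward induction over time, filling the value grid and the policy grids for consumption, bonds, stocks, housing and the housing action.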
###Code
%%time
for t in tqdm(range(T_max-1,T_min-1, -1)):
if t == T_max-1:
v,cbkha = vmap(partial(V,t,Vgrid[:,:,:,:,:,:,t]))(Xs)
else:
v,cbkha = vmap(partial(V,t,Vgrid[:,:,:,:,:,:,t+1]))(Xs)
Vgrid[:,:,:,:,:,:,t] = v.reshape(dim)
cgrid[:,:,:,:,:,:,t] = cbkha[:,0].reshape(dim)
bgrid[:,:,:,:,:,:,t] = cbkha[:,1].reshape(dim)
kgrid[:,:,:,:,:,:,t] = cbkha[:,2].reshape(dim)
hgrid[:,:,:,:,:,:,t] = cbkha[:,3].reshape(dim)
agrid[:,:,:,:,:,:,t] = cbkha[:,4].reshape(dim)
np.save("Value03",Vgrid)
###Output
_____no_output_____ |
P3-Traffic_sign_classifier/CarND-Traffic-Sign-Classifier-Project/Traffic_Sign_Classifier.ipynb | ###Markdown
Self-Driving Car Engineer Nanodegree Deep Learning Project: Build a Traffic Sign Recognition ClassifierIn this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary. > **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to \n", "**File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) that can be used to guide the writing process. Completing the code template and writeup template will cover all of the [rubric points](https://review.udacity.com/!/rubrics/481/view) for this project.The [rubric](https://review.udacity.com/!/rubrics/481/view) contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file.>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. --- Step 0: Load The Data
###Code
# Load pickled data
import pickle
from sklearn.model_selection import train_test_split
import random
import matplotlib.pyplot as plt
from tensorflow.contrib.layers import flatten
from sklearn.utils import shuffle
import tensorflow as tf
import numpy as np
import numpy.matlib  # needed for np.matlib.repmat used in the augmentation step
import cv2
from skimage.transform import rotate
import glob
training_file = '../data/train.p' # '../data/mod_train0.p' '../data/train.p'
testing_file = '../data/test.p' # '../data/mod_test0.p' '../data/test.p'
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_test, y_test = test['features'], test['labels']
###Output
_____no_output_____
###Markdown
--- Step 1: Dataset Summary & ExplorationThe pickled data is a dictionary with 4 key/value pairs:- `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).- `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.- `'sizes'` is a list containing tuples, (width, height) representing the original width and height the image.- `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the [pandas shape method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html) might be useful for calculating some of the summary results. Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
###Code
assert(len(X_train) == len(y_train))
assert(len(X_test) == len(y_test))
n_train = len(X_train)
n_test = len(X_test)
n_classes = len(set(y_train))
image_shape = X_train[0].shape
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
###Output
Number of training examples = 34799
Number of testing examples = 12630
Image data shape = (32, 32, 3)
Number of classes = 43
###Markdown
Include an exploratory visualization of the dataset Visualization of the dataset:
###Code
### Data exploration visualization code goes here.
%matplotlib inline
# select and show a random sample from each class
rows, cols = 6, 8
fig, axs = plt.subplots(rows, cols)
plt.suptitle('Random images from the German Traffic Signs Dataset ')
for sign_class_idx, ax in enumerate(axs.ravel()):
if sign_class_idx < n_classes:
sign_class_img_set = X_train[y_train == sign_class_idx]
sign_class_rnd_img = sign_class_img_set[np.random.randint(len(sign_class_img_set))]
ax.imshow(sign_class_rnd_img)
#ax.set_title('{:02d}'.format(sign_class_idx), fontweight='bold')
ax.axis('off')
else:
ax.axis('off')
# hide x and y ticks
plt.setp([a.get_xticklabels() for a in axs.ravel()], visible=False)
plt.setp([a.get_yticklabels() for a in axs.ravel()], visible=False)
plt.show()
###Output
_____no_output_____
###Markdown
A histogram of the original training data set shows that some classes may not have enough data for high-accuracy recognition. On the other hand, the distribution of the test data is quite similar to the training data, so I would expect that sign recognition will not be biased toward a particular sign.
###Code
plt.hist(y_train, bins=n_classes, color='blue', alpha=0.7, rwidth=0.85)
plt.hist(y_test, bins=n_classes, color='orange', alpha=0.7, rwidth=0.85)
plt.legend(["Train data", "Test data"])
plt.grid(axis='y', alpha=0.75)
plt.title('Histogram of the German Traffic Signs Dataset')
plt.xlabel('Traffic sign')
plt.ylabel('Counts')
###Output
_____no_output_____
###Markdown
---- Step 2: Design and Test a Model ArchitectureDesign and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).The LeNet-5 implementation shown in the [classroom](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play! With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission. There are various aspects to consider when thinking about this problem:- Neural network architecture (is the network over or underfitting?)- Play around preprocessing techniques (normalization, rgb to grayscale, etc)- Number of examples per label (some have more than others).- Generate fake data.Here is an example of a [published baseline model on this problem](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these. Pre-process the Data Set (normalization, grayscale, etc.) To improve the accuracy the first thing I did I increased the data set size. Initially, I implemented several steps: rotation, warping, shifting the images in x and y coordinates. But the data size increased hugely so that my pc would run out of memory very quickly. At the end, I just keept only image rotation option and the data set increased by 5.
###Code
def rotate_image(image, max_angle =15):
rotate_out = rotate(image, np.random.uniform(-max_angle, max_angle), mode='edge')
return rotate_out
aug_mode = 1 # =1 generated augmented data set
# =0 load already generated augmented data set
save_aug_img = 0
num_rot = 5 # number of rotations per image
mod_training_file = '../data/mod_train0.p'
if 1 == aug_mode:
# # generated augmented data set
y_train1 = np.matlib.repmat(y_train, num_rot, 1)
y_train1 = y_train1.T.reshape(-1)
X_train1 = np.zeros([len(X_train)*num_rot, 32, 32, 3], dtype=np.uint8)
for idx in range(len(X_train)):
for idx1 in range(num_rot):
k = idx * num_rot + idx1
# convert it back to 8 bytes, i.e. saves memory by factor 8
X_train1[k, :, :, :] = np.uint8(rotate_image(X_train[idx, :, :, :], max_angle=15)*255.0)
X_train, y_train = X_train1, y_train1
if 1 == save_aug_img:
with open(mod_training_file, mode='wb') as f:
pickle.dump({'features': X_train, 'labels': y_train}, f)
else:
# load already generated augmented data set
with open(mod_training_file, mode='rb') as f:
train = pickle.load(f)
X_train, y_train = train['features'], train['labels']
###Output
_____no_output_____
###Markdown
After some experimenting I just kept the straightforward normalization:
###Code
# Normalise input
X_train = (X_train - np.min(X_train)) / (np.max(X_train) - np.min(X_train))
X_test = (X_test - np.min(X_test)) / (np.max(X_test) - np.min(X_test))
X_train, y_train = shuffle(X_train, y_train)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=60)
###Output
_____no_output_____
###Markdown
Other things I tried briefly were playing with color spaces (YUV, for example, as mentioned in [[LeCun]](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf), and grayscale) and histogram equalization. But I haven't seen much progress, nor did I want to use more GPU time. Model Architecture With the LeNet architecture I reached 94% accuracy. After that I slightly modified it: I removed one of the fully connected layers and increased the depth of the activation volume instead. The idea here was to increase the depth column in order to get more details, as mentioned in [[LeCun]](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf): "In the case of 2 stages of features, the second stage extracts “global” and invariant shapes and structures, while the first stage extracts “local” motifs with more precise details."
###Code
BATCH_SIZE = 128
EPOCHS = 10
rate = 0.001
mu = 0
sigma = 0.1
conv1_depth = 64
conv2_depth = 128
fc1_depth = 64
fc2_depth = n_classes
last_saved_epoch = 0 # not a parameter, don't change it
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 3, conv1_depth), mean=mu, stddev=sigma), name='weights_0')
conv1_b = tf.Variable(tf.zeros(conv1_depth), name='bias_0')
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, conv1_depth, conv2_depth), mean=mu, stddev=sigma), name='weights_1')
conv2_b = tf.Variable(tf.zeros(conv2_depth), name='bias_1')
fc1_W = tf.Variable(tf.truncated_normal(shape=(5*5*conv2_depth, fc1_depth), mean=mu, stddev=sigma), name='weights_2')
fc1_b = tf.Variable(tf.zeros(fc1_depth), name='bias_2')
fc2_W = tf.Variable(tf.truncated_normal(shape=(fc1_depth, fc2_depth), mean=mu, stddev=sigma), name='weights_3')
fc2_b = tf.Variable(tf.zeros(fc2_depth), name='bias_3')
def ConvNet(x):
# Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x64.
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
conv1 = tf.nn.relu(conv1)
# Pooling. Input = 28x28x64. Output = 14x14x64.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# Layer 2: Convolutional. Output = 10x10x128.
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
conv2 = tf.nn.relu(conv2)
# Pooling. Input = 10x10x128. Output = 5x5x128.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# Flatten. Input = 5x5x128. Output = 3200.
fc0 = flatten(conv2)
# Layer 3: Fully Connected. Input = 3200. Output = 64.
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
fc1 = tf.nn.relu(fc1)
# Layer 4: Fully Connected. Input = 64. Output = 43.
logits = tf.matmul(fc1, fc2_W) + fc2_b
return logits
### Define your architecture here.
### Feel free to use as many code cells as needed.
x = tf.placeholder(tf.float32, (None, 32, 32, 3))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, n_classes)
logits = ConvNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
#Model Evaluation
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
###Output
_____no_output_____
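###Markdown
As a side note, here is a minimal sketch of the grayscale + histogram-equalization preprocessing mentioned earlier (an experiment that did not make it into the final pipeline; it assumes 8-bit RGB input images)
###Code
def gray_equalize(img_rgb):
    # convert an RGB uint8 image to grayscale and equalize its histogram
    gray = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2GRAY)
    return cv2.equalizeHist(gray)
###Output
_____no_output_____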
###Markdown
Train, Validate and Test the Model A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation sets implies underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
###Code
#Train the Model
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
training_accuracy = evaluate(X_train, y_train)
validation_accuracy = evaluate(X_val, y_val)
print("EPOCH {} ...".format(i + 1))
print("Training Accuracy = {:.3f}".format(training_accuracy))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
if (i % 3) == 0:
saver.save(sess, './lenet_epoch'+str(i + 1)+'.ckpt')
last_saved_epoch = i + 1
print("Model saved")
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
###Output
INFO:tensorflow:Restoring parameters from ./lenet_epoch10.ckpt
Test Accuracy = 0.956
###Markdown
--- Step 3: Test a Model on New Images To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type. You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name. Load and Output the Images
###Code
images_new = glob.glob('../data/'+'*.jpg')
images_new = [cv2.cvtColor(cv2.imread(img), cv2.COLOR_BGR2RGB) for img in images_new]
y_new = [3, 34, 11, 25, 18] # class id's
fig, axs = plt.subplots(1, len(images_new))
for idx, ax in enumerate(axs.ravel()):
ax.imshow(images_new[idx])
ax.set_title('{:02d}'.format(y_new[idx]))
ax.axis('off')
###Output
_____no_output_____
###Markdown
Predict the Sign Type for Each Image
###Code
# normalize
for idx in range(len(images_new)):
images_new[idx] = (images_new[idx] - np.min(images_new[idx])) / (np.max(images_new[idx]) - np.min(images_new[idx]))
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
prediction = np.argmax(np.array(sess.run(logits, feed_dict={x: images_new})), axis=1)
for i, pred in enumerate(prediction):
print('Target = {:02d} | Predicted = {:02d}'.format(y_new[i], pred))
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
###Output
INFO:tensorflow:Restoring parameters from ./lenet_epoch10.ckpt
Target = 03 | Predicted = 03
Target = 34 | Predicted = 34
Target = 11 | Predicted = 11
Target = 25 | Predicted = 25
Target = 18 | Predicted = 18
###Markdown
Analyze Performance
###Code
print('Test Accuracy = {:.3f}'.format(np.sum(y_new == prediction) / len(y_new)))
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
###Output
Test Accuracy = 1.000
###Markdown
The images used are actually "good" and easily detectable images, so it is not a surprise that accuracy is 100%. Output Top 5 Softmax Probabilities For Each Image Found on the Web
###Code
# visualizing softmax probabilities
num_tops = 5
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
top_k = sess.run(tf.nn.top_k(logits, k=num_tops), feed_dict={x: images_new})
softmax_probs = sess.run(tf.nn.softmax(logits), feed_dict={x: images_new})
# plot softmax probabilities per each test image
n_images = len(images_new)
fig, axs = plt.subplots(n_images, 2)
plt.suptitle('Softmax probabilities per each test image')
for idx in range(0, n_images):
axs[idx, 0].imshow(images_new[idx])
axs[idx, 1].bar(np.arange(n_classes), softmax_probs[idx])
axs[idx, 1].set_ylim([0, 1])
axs[idx, 1].set_xlim([0, n_classes-1])
# Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
for img_idx in range(len(images_new)):
print()
print('Top predictions for the target image {:02d}'.format(y_new[img_idx]))
for idx_within_tops in range(num_tops):
pred_img = top_k[1][img_idx][idx_within_tops]
probability = softmax_probs[img_idx][pred_img]
print('Predicted {:02d} with probability {:.5f}'.format(pred_img, probability))
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
###Output
Top predictions for the target image 03
Predicted 03 with probability 1.00000
Predicted 05 with probability 0.00000
Predicted 01 with probability 0.00000
Predicted 11 with probability 0.00000
Predicted 06 with probability 0.00000
Top predictions for the target image 34
Predicted 34 with probability 1.00000
Predicted 36 with probability 0.00000
Predicted 19 with probability 0.00000
Predicted 35 with probability 0.00000
Predicted 38 with probability 0.00000
Top predictions for the target image 11
Predicted 11 with probability 1.00000
Predicted 30 with probability 0.00000
Predicted 27 with probability 0.00000
Predicted 24 with probability 0.00000
Predicted 28 with probability 0.00000
Top predictions for the target image 25
Predicted 25 with probability 1.00000
Predicted 05 with probability 0.00000
Predicted 14 with probability 0.00000
Predicted 29 with probability 0.00000
Predicted 30 with probability 0.00000
Top predictions for the target image 18
Predicted 18 with probability 1.00000
Predicted 27 with probability 0.00000
Predicted 26 with probability 0.00000
Predicted 37 with probability 0.00000
Predicted 01 with probability 0.00000
###Markdown
Project WriteupOnce you have completed the code implementation, document your results in a project writeup using this [template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) as a guide. The writeup can be in a markdown or pdf file. > **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to \n", "**File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. --- Step 4 (Optional): Visualize the Neural Network's State with Test Images This Section is not required to complete but acts as an additional excersise for understaning the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what it's feature maps look like by plotting the output of the network's weight layers in response to a test stimuli image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol. Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimuli image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process, for instance if you wanted to see what the [LeNet lab's](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) feature maps looked like for it's second convolutional layer you could enter conv2 as the tf_activation variable.For an example of what feature map outputs look like, check out NVIDIA's results in their paper [End-to-End Deep Learning for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/) in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image. Your output should look something like this (above)
###Code
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
# with size, normalization, ect if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
if activation_min != -1 & activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
###Output
_____no_output_____ |
Desafio 6/DF6_Lit_2.ipynb | ###Markdown
Install
###Code
!pip install tpot
import pandas as pd
import numpy as np
import matplotlib as plt
import seaborn as sns
sns.set()
from sklearn.preprocessing import KBinsDiscretizer, LabelEncoder
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
from sklearn.metrics import classification_report, f1_score
from tpot import TPOTClassifier
df = pd.read_csv("https://github.com/maratonadev-br/desafio-6-2020/blob/master/dataset/training_dataset.csv?raw=true")
test = pd.read_csv("https://raw.githubusercontent.com/maratonadev-br/desafio-6-2020/master/dataset/to_be_scored.csv")
df.shape, test.shape
###Output
_____no_output_____
###Markdown
PreProcessing
###Code
colsToDrop = ["id", "importante_ter_certificado"
#"profissao", "graduacao", "modulos_iniciados",
#"pretende_fazer_cursos_lit", "como_conheceu_lit",
#"universidade", "organizacao"
]
for col in colsToDrop:
try:
df.drop(col, axis=1, inplace=True)
test.drop(col, axis=1, inplace=True)
except: print(f"{col} already droped")
df_num = df[["certificados", "modulos_finalizados", "modulos_iniciados", "total_modulos",
"categoria"]]
df_num = df_num.dropna()
df_num.shape
df_dropedna = df.dropna()
colsNumber = df.select_dtypes(include="number").columns
df[colsNumber] = df[colsNumber].fillna(0)
df["graduacao"].fillna("SEM FORMAÇÃO", inplace=True)
df["profissao"].fillna("SEM EXPERIÊNCIA", inplace=True)
df["como_conheceu_lit"].fillna("OUTROS", inplace=True)
df["organizacao"].fillna("Eletroeletronicos", inplace=True)
df["universidade"].fillna("FATEC", inplace=True)
colsToDummy = ['universidade', "organizacao", "como_conheceu_lit",
'graduacao', 'profissao']
# df = pd.get_dummies(df, columns=colsToDummy)
le = LabelEncoder()
df_dropedna[colsToDummy] = df_dropedna[colsToDummy].apply(lambda x: le.fit_transform(x)).astype(int)
df_dropedna
###Output
_____no_output_____
###Markdown
Trying to Balance the Classes
###Code
df["categoria"].value_counts()
###Output
_____no_output_____
###Markdown
Training
###Code
X = df[["certificados", "modulos_finalizados", "modulos_iniciados", "total_modulos"]]
# X = df_dropedna.drop("categoria", axis=1)
# X = df_num.drop("categoria", axis=1)
# y = df_num["categoria"]
y = df["categoria"]
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, train_size=.7, random_state=0)
Xtrain.shape, Xtest.shape, ytrain.shape, ytest.shape
from imblearn.over_sampling import SMOTE
smote = SMOTE()
Xtrain_smote, ytrain_smote = smote.fit_resample(Xtrain, ytrain)
Xtrain_smote.shape, ytrain_smote.shape
tp_smote = TPOTClassifier(scoring="f1_micro", random_state=0, verbosity=2, config_dict="TPOT light")
tp_smote.fit(Xtrain_smote, ytrain_smote)
tp_smote.export("pipeline_smote")
tp = TPOTClassifier(scoring="f1_micro", random_state=0, verbosity=2)
tp.fit(Xtrain, ytrain)
tp.export("pipeline3")
from xgboost import XGBClassifier
# f1_micro: 0.8317582241150573 | ((2431, 12), (1043, 12), (2431,), (1043,))
exported_pipeline = XGBClassifier(learning_rate=0.1, max_depth=1,
min_child_weight=17, n_estimators=100,
subsample=0.7, random_state=0)
exported_pipeline.fit(Xtrain, ytrain)
results = exported_pipeline.predict(Xtest)
print(f1_score(ytest, results, average="micro"))
# 0.8475398475398476 huh? | ((6732, 4), (2886, 4), (6732,), (2886,))
from sklearn.ensemble import GradientBoostingClassifier
# f1_micro: 0.8351172767395709 | ((6732, 4), (2886, 4), (6732,), (2886,))
exported_pipeline = GradientBoostingClassifier(learning_rate=0.01, max_depth=10,
max_features=0.25, min_samples_leaf=15,
min_samples_split=10, n_estimators=100,
subsample=0.8, random_state=0)
exported_pipeline.fit(Xtrain, ytrain)
results = exported_pipeline.predict(Xtest)
print(f1_score(ytest, results, average="micro"))
from sklearn.feature_selection import SelectPercentile, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier
from tpot.export_utils import set_param_recursive
# f1_micro: 0.8351171664289474 | ((6732, 4), (2886, 4), (6732,), (2886,))
exported_pipeline = make_pipeline(
SelectPercentile(score_func=f_classif, percentile=61),
DecisionTreeClassifier(criterion="entropy", max_depth=4, min_samples_leaf=3, min_samples_split=14)
)
# Fix random state for all the steps in exported pipeline
set_param_recursive(exported_pipeline.steps, 'random_state', 0)
exported_pipeline.fit(Xtrain, ytrain)
results = exported_pipeline.predict(Xtest)
print(f1_score(ytest, results, average="micro"))
!pip install scikit-optimize
from skopt import gp_minimize
def tunar_modelo(params):
learning_rate = params[0]
max_depth = params[1]
min_child_weight = params[2]
n_estimators = params[3]
subsample = params[4]
print(params,'\n')
mdl = XGBClassifier(
learning_rate = learning_rate, n_estimators = n_estimators,
max_depth = max_depth, min_child_weight = min_child_weight,
subsample = subsample, random_state = 0)
mdl.fit(Xtrain, ytrain)
p = mdl.predict(Xtest)
return -f1_score(ytest, p, average="micro")
space = [(1e-2, 1e-1), # learning_rate
(1, 1000), # n_estimators
(1, 100), # max_depth
(1, 100), # min_child_weight,
(0, 1)] # subsample
resultado_gp = gp_minimize(tunar_modelo, space, random_state=0,
n_calls=50, n_random_starts=20, verbose=1)
resultado_gp.x # 0.8496 [0.1, 1000, 70, 100, 1]
xgb = XGBClassifier(
learning_rate = 0.1, n_estimators = 1000,
max_depth = 70, min_child_weight = 100,
subsample = 1, random_state = 0)
xgb.fit(Xtrain, ytrain)
xgb_p = xgb.predict(Xtest)
print(f1_score(ytest, xgb_p, average="micro"))
xgb2 = XGBClassifier(
learning_rate = 0.01, n_estimators = 100,
max_depth = 8, min_child_weight = 20,
subsample = 0.45, random_state = 0)
xgb2.fit(Xtrain, ytrain)
xgb2_p = xgb.predict(Xtest)
print(f1_score(ytest, xgb2_p, average="micro"))
xgb2_smote = XGBClassifier(
learning_rate = 0.01, n_estimators = 100,
max_depth = 8, min_child_weight = 20,
subsample = 0.45, random_state = 0)
xgb2_smote.fit(Xtrain, ytrain)
xgb2_smote_p = xgb.predict(Xtest)
print(f1_score(ytest, xgb2_smote_p, average="micro"))
###Output
0.7635561160151324
###Markdown
Predict test
###Code
# test[colsToDummy] = test[colsToDummy].apply(lambda x: le.fit_transform(x)).astype(int)
test2 = test[["certificados", "modulos_finalizados", "modulos_iniciados", "total_modulos"]]
results2 = xgb2.predict(test2)
results1 = pd.read_csv("/content/results.csv")
results1.shape[0], results2.shape[0]
count = 0
for row in range(0, 1000):
if results1["target"][row] != results2[row]:
print(f"{results1['target'][row]} != {results2[row]}, {row}")
count += 1
print(count)
###Output
perfil4 != perfil6, 408
1
###Markdown
---
###Code
results_smote = exported_pipeline.predict(test2)
count = 0
for row in range(0, 1000):
if results1["target"][row] != results_smote[row]:
print(f"{results1['target'][row]} != {results_smote[row]}, {row}")
count += 1
print(count)
test_results_smote = pd.DataFrame({"target":results_smote})
test_results_smote
test_results_smote.to_csv("results", index=False)
###Output
_____no_output_____ |
python/Crawler/ipy/crawler4 - urllib.ipynb | ###Markdown
Python Web Crawler 4 - Urllib Request
###Code
import urllib.request
from bs4 import BeautifulSoup
def getPage(url):
page = urllib.request.urlopen(url) # <class 'http.client.HTTPResponse'>
print(page.status)
# print(page.getheaders())
return page.read().decode('utf-8')
tree = BeautifulSoup(getPage("https://www.bing.com/"),"lxml")
tree.div.select('#bgDiv') # JS rendered
# 200
# [<div data-minhdhor="" data-minhdver="" data-priority="0" id="bgDiv"></div>]
###Output
200
###Markdown
urllib.request.urlopen(url, data=None, [timeout, ]*, cafile=None, capath=None, context=None)
urllib.request.Request(url, data=None, headers={}, origin_req_host=None, unverifiable=False, method=None)
###Code
import socket
from urllib import request, parse,error
def getInfo(url, data="", headers={}, method="GET",timeout=1):
dat = bytes(parse.urlencode(data), encoding='utf8')
req = request.Request(url=url, data=dat, headers=headers, method=method)
req = request.urlopen(req, timeout=timeout)
print(req.read().decode('utf-8'))
headers = {
'User-Agent':' Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36',
'Host': 'httpbin.org'
}
dict = {
'words1': 'you\'re a miracle' ,
'words2':'what do you fear'
}
getInfo("http://httpbin.org/post",dict,headers,"POST",5)
# {
# "args": {},
# "data": "",
# "files": {},
# "form": {
# "words1": "you're a miracle",
# "words2": "what do you fear"
# },
# "headers": {
# "Accept-Encoding": "identity",
# "Connection": "close",
# "Content-Length": "49",
# "Content-Type": "application/x-www-form-urlencoded",
# "Host": "httpbin.org",
# "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"
# },
# "json": null,
# "origin": "183.246.20.118",
# "url": "http://httpbin.org/post"
# }
###Output
{
"args": {},
"data": "",
"files": {},
"form": {
"words1": "you're a miracle",
"words2": "what do you fear"
},
"headers": {
"Accept-Encoding": "identity",
"Connection": "close",
"Content-Length": "49",
"Content-Type": "application/x-www-form-urlencoded",
"Host": "httpbin.org",
"User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"
},
"json": null,
"origin": "183.246.20.118",
"url": "http://httpbin.org/post"
}
{
"args": {},
"headers": {
"Accept-Encoding": "identity",
"Connection": "close",
"Content-Type": "application/x-www-form-urlencoded",
"Host": "httpbin.org",
"User-Agent": "Python-urllib/3.6"
},
"origin": "183.246.20.118",
"url": "http://httpbin.org/get"
}
###Markdown
ERROR
###Code
def getInfo(url, data="", headers={}, method="GET",timeout=1):
try:
dat = bytes(parse.urlencode(data), encoding='utf8')
req = request.Request(url=url, data=dat, headers=headers, method=method)
req = request.urlopen(req, timeout=timeout)
print(req.read().decode('utf-8'))
except error.HTTPError as e:
print(e.reason, e.code, e.headers, sep='\n')
except error.URLError as e:
if isinstance(e.reason, socket.timeout):
print('TIME OUT')
else:
pass
getInfo('http://httpbin.org/index.htm')
# NOT FOUND
# 404
# Connection: close
# Server: meinheld/0.6.1
# Date: Sun, 11 Mar 2018 06:25:37 GMT
# Content-Type: text/html
# Content-Length: 233
# Access-Control-Allow-Origin: *
# Access-Control-Allow-Credentials: true
# X-Powered-By: Flask
# X-Processed-Time: 0
# Via: 1.1 vegur
getInfo('http://httpbin.org/get',timeout=.1)
# TIME OUT
getInfo('http://httpbin.org/get')
# {
# "args": {},
# "headers": {
# "Accept-Encoding": "identity",
# "Connection": "close",
# "Content-Type": "application/x-www-form-urlencoded",
# "Host": "httpbin.org",
# "User-Agent": "Python-urllib/3.6"
# },
# "origin": "183.246.20.118",
# "url": "http://httpbin.org/get"
# }
###Output
_____no_output_____
###Markdown
Parse The parse module supports the following URL schemes: file, ftp, gopher, hdl, http, https, imap, mailto, mms, news, nntp, prospero, rsync, rtsp, rtspu, sftp, shttp, sip, sips, snews, svn, svn+ssh, telnet, wais, ws, wss. Split & Combine
###Code
from urllib.parse import urlparse as pr
from urllib.parse import urlunparse as upr
# scheme://netloc/path;parameters?query#fragment
result = pr('http://www.xiami.com/play?ids=/song/playlist/id/1/type/9#loadedt')
print(type(result), '\n',result)
# <class 'urllib.parse.ParseResult'>
# ParseResult(scheme='http', netloc='www.xiami.com', path='/play', \
# params='', query='ids=/song/playlist/id/1/type/9', fragment='loadedt')
[print(result[i]) for i in range(len(result))]
# http
# www.xiami.com
# /play
# ids=/song/playlist/id/1/type/9
# loaded
print( pr('www.xiami.com/play?ids=/song/playlist/id/1/type/9#loadedt',scheme="https"))
# ParseResult(scheme='https', netloc='', path='www.xiami.com/play',\
# params='', query='ids=/song/playlist/id/1/type/9', fragment='loadedt')
print( pr('https://www.xiami.com/play?ids=/song/playlist/id/1/type/9#loadedt',scheme="http",allow_fragments=False))
# ParseResult(scheme='https', netloc='www.xiami.com', path='/play', \
# params='', query='ids=/song/playlist/id/1/type/9#loadedt', fragment='')
data = [result.scheme, result.netloc, result.path,result.params, result.query,result.fragment]
print(upr(data))
# http://www.xiami.com/play?ids=/song/playlist/id/1/type/9#loadedt
from urllib.parse import urlsplit as sp
from urllib.parse import urlunsplit as usp
# # scheme://netloc/path?query#fragment
result = sp('http://www.xiami.com/play?ids=/song/playlist/id/1/type/9#loadedt')
print(type(result), '\n',result)
# <class 'urllib.parse.SplitResult'>
# SplitResult(scheme='http', netloc='www.xiami.com', path='/play', \
# query='ids=/song/playlist/id/1/type/9', fragment='loadedt')
data = [result.scheme, result.netloc, result.path, result.query,result.fragment]
print(usp(data))
# http://www.xiami.com/play?ids=/song/playlist/id/1/type/9#loadedt
### More
from urllib.parse import urljoin as jo
print(jo("http://www.xiami.com/","play?ids=/song/playlist/id/1/type/9#loadedt"))
print(jo("http://www.xiami.com/play?ids=/song/playlist/","play?ids=/song/playlist/id/1/type/9#loadedt"))
print(jo("http:","//www.xiami.com/play?ids=/song/playlist/id/1/type/9#loadedt"))
# http://www.xiami.com/play?ids=/song/playlist/id/1/type/9#loadedt
from urllib.parse import urlencode,parse_qs,quote,unquote
params = {
'tn':'baidu',
'wd': 'google chrome',
}
base_url = 'http://www.baidu.com/s?'
base_url + urlencode(params)
# 'http://www.baidu.com/s?tn=baidu&wd=google+chrome'
print(parse_qs( urlencode(params)))
# {'tn': ['baidu'], 'wd': ['google chrome']}
'https://www.baidu.com/s?wd=' + quote("百度")
# 'https://www.baidu.com/s?wd=%E7%99%BE%E5%BA%A6'
url = 'https://www.baidu.com/s?wd=%E7%99%BE%E5%BA%A6'
print(unquote(url))
# https://www.baidu.com/s?wd=百度
###Output
{'tn': ['baidu'], 'wd': ['google chrome']}
https://www.baidu.com/s?wd=百度
###Markdown
Handler `BaseHandler`[¶](https://docs.python.org/3/library/urllib.request.htmlurllib.request.BaseHandler)- `HTTPDefaultErrorHandler`- `HTTPRedirectHandler`- `HTTPCookieProcessor`(*cookiejar=None*)- `ProxyHandler`(*proxies=None*)- `HTTPPasswordMgr`- `HTTPPasswordMgrWithDefaultRealm`- `HTTPPasswordMgrWithPriorAuth`- ` ...` Cookies
###Code
import http.cookiejar, urllib.request
cookie = http.cookiejar.CookieJar()
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
print(response)
# <http.client.HTTPResponse object at 0x04D421F0>
for item in cookie:
print(item.name+"="+item.value)
# BAIDUID=7A55D7DB4ECB570361D1D1186DD85275:FG=1
# ...
filename = 'cookies.txt'
cookie = http.cookiejar.LWPCookieJar(filename) # cookie = http.cookiejar.MozillaCookieJar(filename)
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
cookie.save(ignore_discard=True, ignore_expires=True)
## LWP-Cookies-2.0
# Set-Cookie3: BAIDUID="990E47C14A144D813BB6629BEA0D1BEF:FG=1"; path="/"; domain=".baidu.com"; path_spec; domain_dot; expires="2086-03-29 08:56:02Z"; version=0
# ...
cookie = http.cookiejar.LWPCookieJar()
cookie.load('cookies.txt', ignore_discard=True, ignore_expires=True)
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
print(response.read().decode('utf-8'))
# <!DOCTYPE html>
# <!--STATUS OK-->
# ...
###Output
_____no_output_____
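###Markdown
The handlers listed above can also be combined: `build_opener()` accepts several handler instances at once, and `install_opener()` makes the resulting opener the default used by `request.urlopen()`. A minimal sketch, not from the original material:
###Code
# Sketch: chaining several handlers into a single opener.
import http.cookiejar
from urllib import request
cookie_handler = request.HTTPCookieProcessor(http.cookiejar.CookieJar())
redirect_handler = request.HTTPRedirectHandler()
opener = request.build_opener(cookie_handler, redirect_handler)
request.install_opener(opener)  # later request.urlopen() calls now go through both handlers
###Output
_____no_output_____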
###Markdown
Password
###Code
from urllib.request import HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler, build_opener
from urllib.error import URLError
username = 'username'
password = 'password'
url = 'url'
p = HTTPPasswordMgrWithDefaultRealm()
p.add_password(None, url, username, password)
auth_handler = HTTPBasicAuthHandler(p)
opener = build_opener(auth_handler)
try:
result = opener.open(url)
html = result.read().decode('utf-8')
print(html)
except URLError as e:
print(e.reason)
###Output
_____no_output_____
###Markdown
Proxy
###Code
from urllib.error import URLError
from urllib.request import ProxyHandler, build_opener
proxy_handler = ProxyHandler({
'http': 'url',
'https': 'url'
})
opener = build_opener(proxy_handler)
try:
response = opener.open('https://www.baidu.com')
print(response.read().decode('utf-8'))
except URLError as e:
print(e.reason)
# Using requests
import requests
proxies = {
'http': 'url',
'https': 'url'
}
# http://user:password@host:port
proxies = {
"http": "http://user:[email protected]:3128/",
}
# socks
proxies = {
'http': 'socks5://user:password@host:port',
'https': 'socks5://user:password@host:port'
}
requests.get("https://www.baidu.com", proxies=proxies)
###Output
_____no_output_____
###Markdown
Robots e.g. Robots.txt https://www.taobao.com/robots.txt
###Code
User-agent: Baiduspider
Allow: /article
Allow: /oshtml
Disallow: /product/
Disallow: /
User-Agent: Googlebot
Allow: /article
Allow: /oshtml
Allow: /product
Allow: /spu
Allow: /dianpu
Allow: /oversea
Allow: /list
Disallow: /
User-agent: Bingbot
Allow: /article
Allow: /oshtml
Allow: /product
Allow: /spu
Allow: /dianpu
Allow: /oversea
Allow: /list
Disallow: /
User-Agent: 360Spider
Allow: /article
Allow: /oshtml
Disallow: /
User-Agent: Yisouspider
Allow: /article
Allow: /oshtml
Disallow: /
User-Agent: Sogouspider
Allow: /article
Allow: /oshtml
Allow: /product
Disallow: /
User-Agent: Yahoo! Slurp
Allow: /product
Allow: /spu
Allow: /dianpu
Allow: /oversea
Allow: /list
Disallow: /
User-Agent: *
Disallow: /
###Output
_____no_output_____
###Markdown
RobotFileParser
###Code
from urllib.robotparser import RobotFileParser
from urllib.request import urlopen
url = "http://httpbin.org/robots.txt "
rp = RobotFileParser(url)
rp.read()
print(rp.can_fetch('*', 'http://httpbin.org/deny'))
print(rp.can_fetch('*', "http://httpbin.org/image"))
# False
# True
###Output
False
True
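###Markdown
`RobotFileParser` can also work from rules that are already in hand (for example, a trimmed-down version of the Taobao rules shown earlier) via `parse()`, without fetching the file again. A small sketch; the expected results in the comments follow from the rules given:
###Code
# Sketch: feeding robots.txt lines directly to RobotFileParser.parse()
from urllib.robotparser import RobotFileParser
rules = """User-agent: Baiduspider
Allow: /article
Disallow: /

User-Agent: *
Disallow: /
""".splitlines()
rp = RobotFileParser()
rp.parse(rules)
print(rp.can_fetch('Baiduspider', 'https://www.taobao.com/article'))  # expected True (explicitly allowed)
print(rp.can_fetch('*', 'https://www.taobao.com/article'))            # expected False (default entry disallows all)
###Output
_____no_output_____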
###Markdown
REFERENCES
###Code
# - https://docs.python.org/3/library/urllib.html
# - http://httpbin.org/
###Output
_____no_output_____ |
src/NYUDataTester.ipynb | ###Markdown
PRC
###Code
for i in range(len(NYU_CLASSES)):
plt.figure()
y = prec[i]
x = rec[i]
writer_recall.writerow(x)
writer_precision.writerow(y)
f_recall.close()
f_precision.close()
# plt.plot(x, y)
# plt.axis([0, 1.0, 0, 1.0])
# plt.title(NYU_CLASSES[i])
# plt.xlabel('Recall')
# plt.ylabel('Precision')
# plt.savefig(('../results/PRC/RGB/' + NYU_CLASSES[i]+'.png'))
mAP_array = []
for i in np.linspace(0, 1, 101):
prec, rec, mean_iou = calc_detection_prec_rec(pred_labels, pred_scores, pred_bboxes, gt_bboxes, gt_labels, iou_thresh=i)
ap = calc_detection_ap(prec, rec, use_07_metric=True)
mAP_array.append(np.nanmean(ap))
print(mAP_array)
plt.plot(np.linspace(0, 1, 101), np.array(mAP_array))
plt.title('Overlap Threshold and mAP')
plt.xlabel('Overlap Threshold')
plt.ylabel('mAP')
plt.savefig('../results/map_overlap/RGB.png')
ap_array = np.zeros((len(NYU_CLASSES), len(np.linspace(0, 1, 101))))
for i, thresh in enumerate(np.linspace(0, 1, 101)):
prec, rec, mean_iou = calc_detection_prec_rec(pred_labels, pred_scores, pred_bboxes, gt_bboxes, gt_labels, iou_thresh=thresh)
ap = calc_detection_ap(prec, rec, use_07_metric=True)
for k in range(len(NYU_CLASSES)):
ap_array[k][i] = ap[k]
for k in range(len(NYU_CLASSES)):
plt.figure()
plt.plot(np.linspace(0, 1, 101), np.array(ap_array[k]))
plt.title(NYU_CLASSES[k])
plt.xlabel('Overlap Threshold')
plt.ylabel('Average Precision')
plt.savefig(('../results/ap_overlap/RGB/'+NYU_CLASSES[k]+'.png'))
images = results
for i, img in enumerate(images):
plt.figure()
if len(results[i]) == 0:
continue
det_label = results[i][:, 0]
det_conf = results[i][:, 1]
det_xmin = results[i][:, 2]
det_ymin = results[i][:, 3]
det_xmax = results[i][:, 4]
det_ymax = results[i][:, 5]
# Get detections with confidence higher than 0.6.
top_indices = [i for i, conf in enumerate(det_conf) if conf >= 0.6]
top_conf = det_conf[top_indices]
top_label_indices = det_label[top_indices].tolist()
top_xmin = det_xmin[top_indices]
top_ymin = det_ymin[top_indices]
top_xmax = det_xmax[top_indices]
top_ymax = det_ymax[top_indices]
colors = plt.cm.hsv(np.linspace(0, 1, 21)).tolist()
plt.imshow(img / 255.)
currentAxis = plt.gca()
    # use a separate index for the detections so the outer image index `i` is not overwritten
    for j in range(top_conf.shape[0]):
        xmin = int(round(top_xmin[j] * img.shape[1]))
        ymin = int(round(top_ymin[j] * img.shape[0]))
        xmax = int(round(top_xmax[j] * img.shape[1]))
        ymax = int(round(top_ymax[j] * img.shape[0]))
        score = top_conf[j]
        label = int(top_label_indices[j])
        label_name = NYU_CLASSES[label - 1]
        display_txt = '{:0.2f}, {}'.format(score, label_name)
        coords = (xmin, ymin), xmax-xmin, ymax-ymin
        color = colors[label]
        currentAxis.add_patch(plt.Rectangle(*coords, fill=False, edgecolor=color, linewidth=2))
        currentAxis.text(xmin, ymin, display_txt, bbox={'facecolor':color, 'alpha':0.5})
    plt.savefig('../results/detection_images/RGB/image' + str(i)+'_v10.png')
y_true = []
for key in val_keys:
y_true.append(gt[key])
y_true = np.array(y_true)
print(y_true.shape)
inputs = []
images = []
for key in val_keys:
img_path = path_prefix + key
img = image.load_img(img_path, target_size=(300, 300))
img = image.img_to_array(img)
images.append(imread(img_path))
inputs.append(img.copy())
inputs = preprocess_input(np.array(inputs))
preds = model.predict(inputs, batch_size=1, verbose=1)
results = bbox_util.detection_out(preds)
#calc_map(y_true, results)
print(results[0])
###Output
_____no_output_____ |
Taller_Grupal_21_Marzo.ipynb | ###Markdown
###Code
pd.crosstab(df["viaje_noche_fuera"], df["estrato"], normalize=True)
pd.crosstab(df["viaje_noche_fuera"], df["estado_civil"], normalize=True)
sns.histplot(x=df["edad"])
df["edad"].describe()
mode(df["edad"])
sns.histplot(x=df["estrato"])
sns.barplot(x=df["estrato"].value_counts().index, y=df["estrato"].value_counts())
sns.barplot(x=df["nivel_educativo"].value_counts().index, y=df["nivel_educativo"].value_counts())
sns.barplot(y=df["estado_civil"].value_counts().index, x=df["estado_civil"].value_counts(),orient='h')
#plt.xticks(rotation=90)
df["viaje_noche_fuera"]=df["viaje_noche_fuera"].replace({"si": 1, "no": 0})
df.drop(columns="parentesco_jefe_hogar",inplace=True)
df = df[df["nivel_educativo"]!= "no_sabe_no_informa"]
df = pd.get_dummies(df, drop_first=True)
X = df.copy()
y = X.pop("viaje_noche_fuera")
X = sm.add_constant(X)
model = sm.OLS(y,X)
reg = model.fit()
reg.summary()
X = df[["edad", "estrato", "estado_civil_pareja_union_libre", "nivel_educativo_superior_universitaria", "viaje_noche_fuera"]].copy()
y = X.pop("viaje_noche_fuera")
X = sm.add_constant(X)
model = sm.OLS(y,X)
reg = model.fit()
reg.summary()
###Output
_____no_output_____ |
Stephen_Lupsha_LS_DS_211_assignment.ipynb | ###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 AssignmentYou'll use another **New York City** real estate dataset. But now you'll **predict how much it costs to rent an apartment**, instead of how much it costs to buy a condo.The data comes from renthop.com, an apartment listing website.- [ ] Look at the data. Choose a feature, and plot its relationship with the target.- [ ] Use scikit-learn for linear regression with one feature. You can follow the [5-step process from Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.htmlBasics-of-the-API).- [ ] Define a function to make new predictions and explain the model coefficient.- [ ] Organize and comment your code.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.If your **Plotly** visualizations aren't working:- You must have JavaScript enabled in your browser- You probably want to use Chrome or Firefox- You may need to turn off ad blockers- [If you're using Jupyter Lab locally, you need to install some "extensions"](https://plot.ly/python/getting-started/jupyterlab-support-python-35) Stretch Goals- [ ] Do linear regression with two or more features.- [ ] Read [The Discovery of Statistical Regression](https://priceonomics.com/the-discovery-of-statistical-regression/)- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 2.1: What Is Statistical Learning? ***Do Not Copy-Paste. You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.***
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
from sklearn.metrics import mean_absolute_error
from sklearn.linear_model import LinearRegression
import numpy as np
import seaborn as sns
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv',
parse_dates=['created'],
index_col='created')
#assert df.shape == (49352, 34)
#changing this to 33 since i am indexing on "created"
assert df.shape == (49352, 33)
df.shape
df.info()
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
df.shape
# Alright, this is our trimmed dataframe.
df.head()
import matplotlib.pyplot as plt
plt.scatter(df['bedrooms'], df['price'])
plt.xlabel('Bedrooms')
plt.ylabel('Price')
# Well, that looks like a mess.
plt.scatter(df['latitude'], df['price'])
plt.xlabel('Latitude')
plt.ylabel('Price')
plt.scatter(df['longitude'], df['price'])
plt.xlabel('Longitude')
plt.ylabel('Price')
# Splitting the data into feature matrix (latitude first) and one target vector (price)
X_lat = df[['latitude']]
y = df['price']   # target vector; later cells reuse `y`, so it is defined here (was `y_lat`)
assert len(X_lat) == len(y)
y_mean = y.mean()
print("mean rental price:", y_mean)
y_pred = [y_mean]*len(y)
#Don't forget to put y_mean in brackets - also, remember to ask Nicholas what that does exactly? (the brackets make a one-item list, and *len(y) repeats it so the baseline predicts the mean price for every row)
print('Baseline Mean Absolute Error:', mean_absolute_error(y, y_pred))
plt.scatter(df['latitude'], df['price'])
plt.plot(df['latitude'], y_pred, label='baseline model line', color='grey')
plt.xlabel('Latitude')
plt.ylabel('Price')
###Output
_____no_output_____
###Markdown
Clearly this is useless without Longitude, so I'm gonna repeat my process for that just so we have those variables.
I mean, theoretically neither is USELESS without the other, but why would one look at rent prices across only a certain latitude or longitude within NYC? I can't imagine what shared characteristics those would have except for stops along public transpo etc. Also, there is no apparent linear relationship at all, although there is clearly a high-value area in there between 40.7 and 40.8.
Since latitude runs north-south, I would bet money that that range is probably Central Park...
###Code
# Step 1: Import predictor class
from sklearn.linear_model import LinearRegression
# Step 2: Instantiate my predictor
model_lat = LinearRegression()
# Step 3: FIT my predictor on the (training) data
model_lat.fit(X_lat, y)
plt.scatter(df['latitude'], df['price'])
plt.plot(df['latitude'], y_pred, label = 'baseline model line', color='grey')
plt.plot(df['latitude'], model_lat.predict(X_lat), label='Lat model', color='red')
plt.xlabel('Latitude')
plt.ylabel('Price')
plt.legend()
###Output
_____no_output_____
###Markdown
Re-doing all this for longitude :
###Code
# Splitting the data into feature matrix (latitude first) and one target vector (price)
X_long = df[['longitude']]
assert len(X_long) == len(y)
# Step 2: Instantiate my predictor
model_long = LinearRegression()
# Step 3: FIT my predictor on the (training) data
model_long.fit(X_long, y)
plt.scatter(df['longitude'], df['price'])
plt.plot(df['longitude'], y_pred, label = 'baseline model line', color='grey')
plt.plot(df['longitude'], model_long.predict(X_long), label='Long model', color='red')
plt.xlabel('Longitude')
plt.ylabel('Price')
plt.legend()
print(f'Price = {model_lat.intercept_} + {model_lat.coef_[0]} * Latitude')
print(f'Price = {model_long.intercept_} + {model_long.coef_[0]} * Longitude')
# print(f'Price = {model_both.intercept_} + {model_both.coef_[0]} * Lat & Long')  # model_both is fit in the next cell, so this print only works after that cell runs
###Output
_____no_output_____
###Markdown
We can see from our results above that because the longitude in Manhattan is negative, it's really throwing the calculation off. I need to figure out how to do a multi-variable regression model here, and possibly even change the longitude to positive somehow... I don't know, reading more now.
###Code
X_Both = df[['latitude', 'longitude']]
model_both = LinearRegression()
model_both.fit(X_Both, y)
###Output
_____no_output_____
###Markdown
To quote the internet, and why I can't use Lat/Long
*You cannot use them directly, as it is unlikely there is a true linear relationship unless you're looking to predict "how far east or north" someone is. As mentioned in the comments, you need to convert them into zones. If you wanted to keep it really simple, you could use a kNN clustering algorithm with a low number of potential clusters and then assign each instance a new feature with the cluster ID, and then one-hot encode that.* Obviously, I'm not doing all that today. I'm gonna move on to another observation or X. Most of the variables here are CATEGORICAL and not QUANTITATIVE - which will make it tough
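One way to act on that advice later (shown here only as a sketch, not run as part of this assignment): cluster latitude/longitude into a handful of "zones" and one-hot encode the zone ID. The quote says "kNN clustering", but the usual tool for this kind of zoning is k-means; the cluster count below is an arbitrary choice.
###Code
# Sketch only: turning latitude/longitude into discrete "zone" features.
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=10, random_state=42)                # 10 zones is an arbitrary choice
zones = kmeans.fit_predict(df[['latitude', 'longitude']])      # one cluster ID per listing
zone_dummies = pd.get_dummies(pd.Series(zones, index=df.index), prefix='zone')  # one-hot encode the IDs
# zone_dummies could then be joined back onto df and used as model features
###Output
_____no_output_____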
###Code
df.head()
# Back to the beginning
plt.scatter(df['elevator'], df['price'])
plt.xlabel('Elevator in Bldg?')
plt.ylabel('Price')
plt.scatter(df['laundry_in_unit'], df['price'])
plt.xlabel('Laundry in unit?')
plt.ylabel('Price')
plt.scatter(df['bathrooms'], df['price'])
plt.xlabel('# of Bathrooms')
plt.ylabel('Price')
plt.scatter(df['bedrooms'], df['price'])
plt.xlabel('# of Bedrooms')
plt.ylabel('Price')
###Output
_____no_output_____
###Markdown
Impossible. God this is going nowhere.
###Code
df.head()
# df["sum"] = df.sum(axis=1)
# nope. OK. I'm going to add the "amenities" - one would assume that each of those factors increases the value of a rental right?
df_amenities = df.copy()
#copy
df_amenities.drop(['description','bathrooms', 'bedrooms', 'display_address', 'latitude', 'longitude', 'price', 'street_address', 'interest_level'], axis=1, inplace=True)
#drop the non-boolean variables.
df_amenities["amenities_score"] = df_amenities.sum(axis=1)
# sum it up into a new column.
df_amenities.head()
# and finally add that column back onto the original dataframe.
df["amenities_score"] = df_amenities['amenities_score']
df.head()
plt.scatter(df['amenities_score'], df['price'])
plt.xlabel('# of Amenities')
plt.ylabel('Price')
###Output
_____no_output_____
###Markdown
We can pretty clearly see that there are plenty of places with "few" amenities that are priced awfully high...at least compared to what I would pay for rent.
###Code
# predictor class is already imported.
# Step 2: Instantiate my predictor
model_amenities = LinearRegression()
# Step 3: FIT my predictor on the (training) data
# 3. Arrange X features matrix & y target vector
# gotta remember to ask about the different language here in the steps. the 5 steps of modeling here...
features = ['amenities_score']
target = ['price']
x_train = df[features]
y_train = df[target]
model_amenities.fit(x_train, y_train)
plt.scatter(df['amenities_score'], df['price'])
plt.plot(df['amenities_score'], y_pred, label = 'baseline model line', color='grey')
plt.plot(df['amenities_score'], model_amenities.predict(x_train), label='amenities rating', color='red')
plt.xlabel('# of Amenities')
plt.ylabel('Price')
plt.legend()
###Output
_____no_output_____
###Markdown
At least the amenities trend goes up at this point; we expected some degree of positive slope.
###Code
print(f'Price = {model_amenities.intercept_} + {model_amenities.coef_[0]} * Amenities "Score"')
###Output
Price = [2843.40065221] + [157.55177491] * Amenities "Score"
###Markdown
I'm having some doubts about how I "did" here - perfect questions for support hours I guess. I feel like this unit is foundational for what employers will want us to be doing on a daily basis.
###Code
# ok, lets try something else.
amenities_test = 13
x_test = [[amenities_test]]
y_pred_am = model_amenities.predict(x_test)
y_pred_am
###Output
_____no_output_____
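###Markdown
The assignment also asks for a function that makes new predictions and an explanation of the model coefficient. A minimal sketch wrapping the fitted amenities model (the function name is my own):
###Code
# Sketch: helper for new predictions from the amenities model.
def predict_rent(amenities_count):
    """Return the predicted monthly rent for a listing with the given number of amenities."""
    return float(model_amenities.predict([[amenities_count]]).ravel()[0])

predict_rent(13)
# Coefficient interpretation: each additional amenity adds roughly $158 to the
# predicted rent, on top of an intercept of roughly $2,843 for a listing with
# zero amenities (values taken from the model output above).
###Output
_____no_output_____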
###Markdown
Ok, based off of what we saw above that's not actually a terrible prediction. I mean, I've never paid rent in NYC, just saying...
###Code
amenities_test = 3
x_test = [[amenities_test]]
y_pred_am = model_amenities.predict(x_test)
y_pred_am
###Output
_____no_output_____ |
.ipynb_checkpoints/10.Random Forest Classifier-checkpoint.ipynb | ###Markdown
Select some important columns as classification features and change arrival delay into a binary class * late * not late
###Code
df1=df.select("DayofMonth","DayofWeek","originAirportID","DestAirportID","DepDelay",\
((col("ArrDelay") > 15).cast("Int").alias("Late")))
df1.show()
###Output
+----------+---------+---------------+-------------+--------+----+
|DayofMonth|DayofWeek|originAirportID|DestAirportID|DepDelay|Late|
+----------+---------+---------------+-------------+--------+----+
| 19| 5| 11433| 13303| -3| 0|
| 19| 5| 14869| 12478| 0| 0|
| 19| 5| 14057| 14869| -4| 0|
| 19| 5| 15016| 11433| 28| 1|
| 19| 5| 11193| 12892| -6| 0|
| 19| 5| 10397| 15016| -1| 0|
| 19| 5| 15016| 10397| 0| 0|
| 19| 5| 10397| 14869| 15| 1|
| 19| 5| 10397| 10423| 33| 1|
| 19| 5| 11278| 10397| 323| 1|
| 19| 5| 14107| 13487| -7| 0|
| 19| 5| 11433| 11298| 22| 1|
| 19| 5| 11298| 11433| 40| 1|
| 19| 5| 11433| 12892| -2| 0|
| 19| 5| 10397| 12451| 71| 1|
| 19| 5| 12451| 10397| 75| 1|
| 19| 5| 12953| 10397| -1| 0|
| 19| 5| 11433| 12953| -3| 0|
| 19| 5| 10397| 14771| 31| 1|
| 19| 5| 13204| 10397| 8| 1|
+----------+---------+---------------+-------------+--------+----+
only showing top 20 rows
###Markdown
Dividing Data into Train and Test
###Code
train_data,test_data=df1.randomSplit([0.7,0.3])
train_data.count()
test_data.count()
###Output
_____no_output_____
###Markdown
Preparing Data
###Code
# Vector Assembler
assembler=VectorAssembler(inputCols=["DayofMonth","DayofWeek","originAirportID","DestAirportID","DepDelay"]\
,outputCol="features")
tran_data=assembler.transform(df1)
tran_data.show(5)
###Output
+----------+---------+---------------+-------------+--------+----+--------------------+
|DayofMonth|DayofWeek|originAirportID|DestAirportID|DepDelay|Late| features|
+----------+---------+---------------+-------------+--------+----+--------------------+
| 19| 5| 11433| 13303| -3| 0|[19.0,5.0,11433.0...|
| 19| 5| 14869| 12478| 0| 0|[19.0,5.0,14869.0...|
| 19| 5| 14057| 14869| -4| 0|[19.0,5.0,14057.0...|
| 19| 5| 15016| 11433| 28| 1|[19.0,5.0,15016.0...|
| 19| 5| 11193| 12892| -6| 0|[19.0,5.0,11193.0...|
+----------+---------+---------------+-------------+--------+----+--------------------+
only showing top 5 rows
###Markdown
Final DataSet
###Code
tran_data=tran_data.select("features",tran_data["Late"].alias("label"))
tran_data.show(5)
train_data,test_data=tran_data.randomSplit([0.7,0.3])
train_data.count()
test_data.count()
train_data.show(2)
###Output
+--------------------+-----+
| features|label|
+--------------------+-----+
|[1.0,1.0,10140.0,...| 0|
|[1.0,1.0,10140.0,...| 0|
+--------------------+-----+
only showing top 2 rows
###Markdown
Training Data
###Code
lr=RandomForestClassifier(featuresCol="features",labelCol="label",predictionCol="prediction",\
numTrees=3,maxDepth=5,seed=42)
lrmodel=lr.fit(train_data)
print("Model is trained")
lrmodel.transform(train_data).show(10)
# Grab the Correct prediction
train_pred=lrmodel.transform(train_data)
train_pred.show(5)
correct_prediction=train_pred.filter(train_pred["label"]==train_pred["prediction"]).count()
print("Accuracy for training-data :,",correct_prediction/(train_data.count()))
###Output
Accuracy for training-data :, 0.9263481017511112
###Markdown
Testing Data -- RF
###Code
test=lrmodel.transform(test_data)
correct_prediction_test=test.filter(test["label"]==test["prediction"]).count()
print("Accuracy for test-data :,",correct_prediction_test/(test_data.count()))
###Output
Accuracy for test-data :, 0.926432533259034
|
notebook/experiment_1/3_intervention_timer.ipynb | ###Markdown
List of figures: 2. [Figure S1: Time spent on intervention screen](timer) Imports libraries
###Code
import matplotlib.pyplot as plt # Plotting
import os # File system handling
import pandas as pd # Dataframe handling
from matplotlib.ticker import FuncFormatter # Formating graphs
###Output
_____no_output_____
###Markdown
Set project directory
###Code
PROJECT_FOLDER = os.path.dirname(os.path.dirname(os.getcwd()))
FINAL_DATA_FOLDER = os.path.join(PROJECT_FOLDER, 'data', 'final')
TABLES_FOLDER = os.path.join(PROJECT_FOLDER, 'reports', 'tables')
FIGURES_FOLDER = os.path.join(PROJECT_FOLDER, 'reports', 'figures')
###Output
_____no_output_____
###Markdown
Pandas options
###Code
pd.set_option("display.precision", 3)
pd.set_option("display.expand_frame_repr", False)
pd.set_option("display.max_rows", 40)
###Output
_____no_output_____
###Markdown
Set plotting style
###Code
plt.style.use('classic')
###Output
_____no_output_____
###Markdown
Set plotting properties
###Code
font_kw = dict(fontsize=11, color='k')
xlab_kw = dict(fontsize=11, labelpad=3)
ylab_kw = dict(fontsize=11, labelpad=3)
tick_kw = dict(
size=5,
which='both',
direction='out',
right=False,
top=False,
labelbottom=True
)
###Output
_____no_output_____
###Markdown
Retrieving dataframe
###Code
DATA = os.path.join(
FINAL_DATA_FOLDER,
'experiment_1',
'data_final.feather'
)
df = pd.read_feather(DATA)
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 3076 entries, 0 to 3075
Columns: 442 entries, Age to Q80_timer
dtypes: float64(225), int64(25), object(192)
memory usage: 10.4+ MB
###Markdown
Separate quality-concern treatments from the following main analysis
###Code
sel = (df['Dataset'] == 'Main')
df = df[sel]
###Output
_____no_output_____
###Markdown
Figure S1: Time spent on intervention screen
###Code
treat = ['Praise', 'Reference point']
hist_params = dict(bins=20, range=(0, 60), density=True, color='0.4', alpha=0.8)
fig, axis = plt.subplots(ncols=2, nrows=1, figsize=(10, 5), dpi=150, facecolor='w')
fig.subplots_adjust(hspace=0.35, wspace=0.25)
for i, ax in enumerate(fig.axes):
timer = df[df['Leadership_technique'] == treat[i]]['Intervention_timer']
timer.hist(ax=ax, **hist_params)
ax.set_title(treat[i], **font_kw)
ax.grid(False)
ax.set_ylim(0, 0.18)
ax.tick_params(**tick_kw)
ax.set_xlabel("Time spent on intervention screen in seconds", **xlab_kw)
ax.set_ylabel("Share of subjects", **ylab_kw)
ax.yaxis.set_major_formatter(FuncFormatter('{:.0%}'.format))
mean, med = timer.mean(), timer.median()
ax.text(45, 0.16, f"$\~{{x}}={mean:.1f}$\n$q_{{0.5}}={med:.1f}$")
path = os.path.join(
FIGURES_FOLDER,
'experiment_1',
'intervention_timer_hist.pdf'
)
plt.savefig(path, bbox_inches='tight')
!jupyter nbconvert --output-dir='./docs' --to html 3_intervention_timer.ipynb
###Output
[NbConvertApp] Converting notebook 3_intervention_timer.ipynb to html
[NbConvertApp] Writing 658987 bytes to docs/3_intervention_timer.html
|
m2-data-analysis-and-hypothesis-testing/case-study-data-visualization.ipynb | ###Markdown
 Data Visualization
###Code
from IPython.display import IFrame
IFrame('https://player.vimeo.com/video/349962138/', width=600,height=400)
###Output
_____no_output_____
###Markdown
Make Notebook Run in Watson Studio
###Code
# The code was removed by Watson Studio for sharing.
# START CODE BLOCK
# cos2file - takes an object from Cloud Object Storage and writes it to file on container file system.
# Uses the IBM project_lib library.
# See https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/project-lib-python.html
# Arguments:
# p: project object defined in project token
# data_path: the directory to write the file
# filename: name of the file in COS
import os
def cos2file(p,data_path,filename):
data_dir = p.project_context.home + data_path
if not os.path.exists(data_dir):
os.makedirs(data_dir)
open( data_dir + '/' + filename, 'wb').write(p.get_file(filename).read())
# file2cos - takes file on container file system and writes it to an object in Cloud Object Storage.
# Uses the IBM project_lib library.
# See https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/project-lib-python.html
# Arguments:
# p: prooject object defined in project token
# data_path: the directory to read the file from
# filename: name of the file on container file system
import os
def file2cos(p,data_path,filename):
data_dir = p.project_context.home + data_path
path_to_file = data_dir + '/' + filename
if os.path.exists(path_to_file):
file_object = open(path_to_file, 'rb')
p.save_data(filename, file_object, set_project_asset=True, overwrite=True)
else:
print("file2cos error: File not found")
# END CODE BLOCK
###Output
_____no_output_____
###Markdown
**Create data directory and save the file**
###Code
cos2file(project, '/data', 'world-happiness.csv')
###Output
_____no_output_____
###Markdown
"The first task of the data scientist is always data visualization."You recall these words from the many courses and bootcamps you've attended, and here you are working for AAVAIL about to do just that!Your team lead has asked you to start looking at the market churn in Singapore. Singapore has a higher rate of churn than AAVAIL's other geographic regions, so it makes sense to start looking there. *How comfortable are you with creating common data science plots in Python?*If you are comfortable with creating plots, then this next section can serve as a review for you, before you move on to the challenging task of looking at AAVAIL's Singapore data. For everyone else, this section is an important exercise that covers a topic that every data scientist should know: **Making continuous variables easier to visualize by breaking them up into discrete categories.** That sounds easy, but in fact it can sometimes be very challenging because the discrete categories create must make sense in the context of the problem you are working on. There isn't a "one size fits all" approach to doing this. Before you take a deep dive into your client's data, let's level-set with some practice data. *You always want to test your tools with non-critical data before you start working with actual client data.*AAVAIL wants the deliverables for this project to be prepared in Jupyter notebooks. Jupyter has become an industry standard in the Python ecosystem and in data science. But here is a pro-tip: Jupyter notebooks don't do well when used in a version control system. Your first Jupyter notebook for this project will be a practice notebook to make sure you can execute some basic data manipulation tasks to make EDA easier. Download the notebook from the following link then open it locally using a Jupyter server or use your IBM cloud account to login to Watson Studio. Inside of Watson Studio cloud if you have not already ensure that this notebook is loaded as part of the project for this course. As a reminder fill in all of the places in this notebook marked with YOUR CODE HERE or YOUR ANSWER HERE. The data and notebook for this unit are available below. * [m2-u2-data-visualization.ipynb](m2-u2-data-visualization.ipynb)* [world-happiness.csv](./data/world-happiness.csv)This unit is organized into the following sections:1. EDA and pandas2. Data visualization best practices3. Essentials of matplotlib4. Pairs plots and correlation5. Beyond simple plots Data visualization in Python It is expected that you already know and use both pandas and matplotlib on a regular basis. For those of you who are comfortable with plotting---meaning that you can readily produce any of the several dozen types common plots used in data science, then this unit will serve as a review. In the above sections we will touch on the essential tools, best practices and survey the landscape of available tools for more advanced plotting.If you would like additional context a few links are available below:- [Anaconda's article on 'moving toward convergence'](https://www.anaconda.com/blog/developer-blog/python-data-visualization-moving-toward-convergence/)- [Anaconda's article on the future of Python visualization libraries](https://www.anaconda.com/blog/developer-blog/python-data-visualization-2018-where-do-we-go-from-here/)- [Matplotlib tutorial](http://www.scipy-lectures.org/intro/matplotlib/matplotlib.html)First let's make all of the necessary imports and configure the plot fonts. 
If you use Jupyter notebooks as a presentation tool, then ensuring that your fonts are readable is both the professional thing to do and a help to clear communication.
###Code
import re
import numpy as np
import pandas as pd
from IPython.display import Image
import matplotlib.pyplot as plt
plt.style.use('seaborn')
%matplotlib inline
SMALL_SIZE = 12
MEDIUM_SIZE = 14
LARGE_SIZE = 16
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=LARGE_SIZE) # fontsize of the figure title
###Output
_____no_output_____
###Markdown
Let's load the data that we will be using to practice EDA and then perform some basic cleanup.
###Code
## load the data and print the shape
df = pd.read_csv("../data/world-happiness.csv",index_col=0)
print("df: {} x {}".format(df.shape[0], df.shape[1]))
## clean up the column names and remove some
df.columns = [re.sub("\s+","_",col) for col in df.columns.tolist()]
df.head(n=4)
## missing values summary
print("Missing Value Summary\n{}".format("-"*35))
print(df.isnull().sum(axis = 0))
df.info()
## drop the rows that have NaNs
print("Original Matrix:", df.shape)
df.dropna(inplace=True)
print("After NaNs removed:", df.shape)
###Output
Original Matrix: (495, 12)
After NaNs removed: (470, 12)
###Markdown
The dataThe original data are produced by the [UN Sustainable Development Solutions Network (SDSN)](http://unsdsn.org/about-us/vision-and-organization) and the report is compiled and available at [https://worldhappiness.report](https://worldhappiness.report). The following is the messaging on the report website:> The World Happiness Report is a landmark survey of the state of global happiness that ranks 156 countries by how happy their citizens perceive themselves to be. The report is produced by the United Nations Sustainable Development Solutions Network in partnership with the Ernesto Illy Foundation. > The World Happiness Report was written by a group of independent experts acting in their personal capacities. Any views expressed in this report do not necessarily reflect the views of any organization, agency or program of the United Nations.so knowing this it makes sense to [sort the data](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html).
###Code
df.sort_values(['Year', "Happiness_Score"], ascending=[True, False], inplace=True)
df.head(n=4)
###Output
_____no_output_____
###Markdown
EDA and pandasThe pandas documentation is quite good compared to other packages and there are a substantial number of arguments available for most of the functions. See the [docs for pandas.read_csv](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html) as an example of this point. The Python [package pandas](https://pandas.pydata.org/pandas-docs/stable/getting_started/overview.html) is very commonly used during EDA. If you are not yet familiar with it or your need a refresher the [pandas tutorials](https://pandas.pydata.org/pandas-docs/stable/getting_started/tutorials.html) are a good place to start.[Pivot tables](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.pivot_table.html) and [groupbys](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html) are methods that perform aggregations over a [pandas DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html).
###Code
columns_to_show = ["Happiness_Score","Health_(Life_Expectancy)"]
pd.pivot_table(df, index= 'Year', values=columns_to_show, aggfunc='mean').round(3)
df.groupby(['Year'])[columns_to_show].mean().round(3)
###Output
_____no_output_____
###Markdown
There [are some differences between pivot_table and groupby](https://stackoverflow.com/questions/34702815/pandas-group-by-and-pivot-table-difference), but either can be used to create aggregate summary tables. See the [pandas tutorial on reshaping and pivots](http://pandas-docs.github.io/pandas-docs-travis/user_guide/reshaping.html) to learn more. Also note that you can have more than one index.
###Code
pd.pivot_table(df, index = ['Region', 'Year'], values=columns_to_show).round(3)
###Output
_____no_output_____
###Markdown
When we want to summarize continuous data the functions [qcut()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.qcut.html) and [cut](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.cut.html) are useful to partition them. The binned observations can be summarized with the same tabular functions as above. QUESTION: tabular summaryUse `pd.qcut()` or `pd.cut` to bin the data by `Happiness_Rank` and create a tabular summary that summarizes `Happiness_Score` and `Health_(Life_Expectancy)` with respect to `Region`. Take a minute to think about presenting this information in a way that makes it easily interpretable. How many bins should you use? Is there a way to make those bins more understandable? Here is the [difference between `pd.qcut()` and `pd.cut()`](https://stackoverflow.com/questions/30211923/what-is-the-difference-between-pandas-qcut-and-pandas-cut).
###Code
happines_rank_labels = ["Very Happy", "Happy", "Happy/Unhappy", "Unhappy", "Very Unhappy"]
df["Happines_Rank_bins"] = pd.qcut(df.Happiness_Rank, 5, labels=happines_rank_labels)
pd.pivot_table(df, index=["Happines_Rank_bins", "Region"], values=["Happiness_Score", "Health_(Life_Expectancy)"], aggfunc="mean").round(3)
###Output
_____no_output_____
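###Markdown
As a quick illustration of the `qcut`/`cut` difference referenced above (a sketch, using the same happiness scores): `qcut` picks bin edges so each bin holds roughly the same number of rows, while `cut` uses equal-width bins across the range unless explicit edges are given.
###Code
# Sketch: quantile-based bins vs equal-width bins on the same column.
quantile_bins = pd.qcut(df['Happiness_Score'], 4)   # ~equal counts per bin
width_bins = pd.cut(df['Happiness_Score'], 4)       # equal-width bins
print(quantile_bins.value_counts().sort_index())
print(width_bins.value_counts().sort_index())
###Output
_____no_output_____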
###Markdown
Data visualization best practicesWhen tables like the one we just created become difficult to navigate it can be useful to use a simple plot to summarize the data. It is possible that both a table and a plot might be needed to communicate the findings and one common practice is to include an appendix in the deliverable. Another related practice when it comes to EDA is that the communication of your findings, usually via deliverable, is done in a clean and polished way. If using a notebook as a communication tool take the time to remove unnecessary code blocks and other elements as they can distract from the principal takeaways.Best practices as a data scientist generally require that all work be saved as text files: 1. [Executable scripts](https://docs.python.org/3/using/cmdline.html) 2. [Python modules](https://docs.python.org/3/tutorial/modules.html) 3. [Python package](https://www.pythoncentral.io/how-to-create-a-python-package)A module is a file containing Python definitions and statements. The file name is the module name with thesuffix `.py` appended. Jupyter notebooks have the suffix `.ipynb` and use JSON, with a lot of custom text. The readability of such files is difficult using a standard programming editor and file **readability** is key to leveraging version control.That being said the two notable exceptions to this rule of always preserving your code in readable files are, EDA and results communication, both of which are tasks that come up frequently in data science.Data visualization is arguably the most important tool for communicating your results to others, especially business stakeholders. Most importantly, there are three important points to communicate to your stakeholders: >1. what you have done >2. what you are doing, and >3. what you plan to do.1) **Keep your code-base separated from your notebooks**Here we will show the import of code from a Python module into a notebook to showcase the best practice of saving a maximum amount of code within files, while still making use of Jupyter as a presentation tool. Version control is a key component to effective collaboration and reproducible research. Version control is not within the scope of this course, but systems are generally built on [git](https://git-scm.com) or [mercurial](https://www.mercurial-scm.org). There are a [host of other websites and services as well](https://en.wikipedia.org/wiki/List_of_version-control_software).The following links provide more context to the topic of reproducible research. * [Introduction to version control in the context of research](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004668) * [Collection of articles surveying several areas of reproducible research](https://www.nature.com/collections/prbfkwmwvz). 2) **Keep a notebook or a record of plots and plot manipulations**Outside of established software engineering practices there are couple of guidelines that have proven to be useful in practice. The first is related to version control and it revolves around the use of galleries. The [matplotlib gallery](https://matplotlib.org/gallery.html) and the [Seaborn gallery](https://seaborn.pydata.org/examples/index.html) are good starting points, but you should have your own. Just as you would do when engineering a piece of software you should be making the extra effort when something is reusable to ensure that it can be used in a different context. 
It could be as simple as a folder with a script for each.3) **Use your plots as a quality assurance tool**The other guideline is to make an educated guess **before** you see the plot. Before you execute the cell or run the script take a moment to predict what the plot should look like. You have likely already seen some of the data or created a tabular summary so you should have some intuition. This habit is surprisingly useful for quality assurance of both data and code. Essentials of matplotlibMatplotlib has a "functional" interface similar to MATLAB® that works via the `pyplot` module for simple interactive use, as well as an object-oriented interface that is more *pythonic* and better for plots that require some level of customization or modification. The latter is called the `artist` interface. There is also built in functionality from within pandas for rapid access to plotting capabilities.* [pandas visualization](http://pandas-docs.github.io/pandas-docs-travis/user_guide/visualization.html)* [matplotlib pyplot interface](https://matplotlib.org/users/pyplot_tutorial.html)* [matplotlib artist interface](https://matplotlib.org/users/artists.html)
###Code
fig = plt.figure(figsize=(14,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
table1 = pd.pivot_table(df,index='Region',columns='Year',values="Happiness_Score")
table1.plot(kind='bar',ax=ax1)
ax1.set_ylabel("Happiness Score");
table2 = pd.pivot_table(df,index='Region',columns='Year',values="Health_(Life_Expectancy)")
table2.plot(kind='bar',ax=ax2)
ax2.set_ylabel("Health (Life_Expectancy)");
## adjust the axis to accommodate the legend
ax1.set_ylim((0,9.3))
ax2.set_ylim((0,1.3))
###Output
_____no_output_____
###Markdown
There are some interface limitations when it comes to using the pandas interface for plotting, but it serves as an efficient first pass. You may also notice that if this notebook were to be used as a presentation there is some exposed plot generation code that can limit communication. There are ways to hide code in Jupyter, but in keeping with the best practices of storing code in text files for version control as well as the cataloging of plot code here is a script that makes a nicer version of the same plot with the `artist` interface. See the file for details on the additional customization.* [make-happiness-summary-plot.py](./scripts/make-happiness-summary-plot.py) **Create scripts directory and save the file**
###Code
cos2file(project, '/scripts', 'make-happiness-summary-plot.py')
def create_images_dir(p, images_path):
images_dir = p.project_context.home + images_path
if not os.path.exists(images_dir):
print("...create images directory")
os.makedirs(images_dir)
else:
print("...images directory exists")
create_images_dir(project, "/images")
!python ../scripts/make-happiness-summary-plot.py
Image("../images/happiness-summary.png",width=800, height=600)
###Output
... data ingestion
... creating plot
../images/happiness-summary.png created.
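###Markdown
An equivalent way to honor the "keep your code-base separated from your notebooks" guideline is to keep reusable plot functions in a small module and import them into any notebook with `from happiness_plots import score_by_region`, rather than shelling out to a script. The module and function names below are hypothetical; this is a sketch only.
###Code
%%writefile happiness_plots.py
# Hypothetical helper module: reusable plotting code lives here, under version control.
import matplotlib.pyplot as plt

def score_by_region(table, ax=None):
    """Bar plot of a Region-by-Year pivot table of happiness scores."""
    ax = ax or plt.gca()
    table.plot(kind='bar', ax=ax)
    ax.set_ylabel("Happiness Score")
    return ax
###Output
_____no_output_____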
###Markdown
Pair plots and correlationThere are many useful tools and techniques for EDA that could be a part of these materials, but the focus of this course is on the workflow itself. In addition to indispensable summary tables and simple plots there is one more that deserves special mention, because of its utility, and that is the pair plot or sometimes referred to as the pairs plot. At a minimum these are used to visualize the relationships between all pairwise combinations of continuous variables in your data set. Importantly, we can quantify these relationships using [correlation](https://en.wikipedia.org/wiki/Correlation_and_dependence). There are also ways to get additional insight into the data by overlaying discrete variables, using a coloring scheme, and including univariate distributions along the diagonal. * [seaborn pairplot](https://seaborn.pydata.org/generated/seaborn.pairplot.html)* [seaborn pairwise correlations plot](https://seaborn.pydata.org/examples/many_pairwise_correlations.html)
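As a quick numeric companion to the plots, any single pairwise relationship can be quantified directly with pandas; a one-line sketch:
###Code
# Sketch: quantify one pairwise relationship numerically (Pearson correlation).
df['Happiness_Score'].corr(df['Economy_(GDP_per_Capita)'])
###Output
_____no_output_____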
###Code
import seaborn as sns
sns.set(style="ticks", color_codes=True)
## make a pair plot
columns = ['Happiness_Score','Economy_(GDP_per_Capita)', 'Family', 'Health_(Life_Expectancy)',
'Freedom', 'Trust_(Government_Corruption)']
axes = sns.pairplot(df, vars=columns, hue="Year", palette="husl")
###Output
_____no_output_____
###Markdown
QUESTION: Correlation plotUse the [following code snippet from the seaborn examples](https://seaborn.pydata.org/examples/many_pairwise_correlations.html) to create your own grid plot of pairwise correlations for this data set. Do this as a separate script then run and display the image here.
###Code
%%writefile ../scripts/make-happiness-correlations-plot.py
#!/usr/bin/env python
import os
import re
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
## plot style, fonts and colors
plt.style.use('seaborn')
SMALL_SIZE = 12
MEDIUM_SIZE = 14
LARGE_SIZE = 16
COLORS = ["darkorange","royalblue","slategrey"]
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=LARGE_SIZE) # fontsize of the figure title
DATA_DIR = os.path.join(".","data")
IMAGE_DIR = os.path.join(".","images")
## allow script to be run from parent directory
if not os.path.exists(DATA_DIR):
DATA_DIR = os.path.join("..","data")
IMAGE_DIR = os.path.join("..","images")
if not os.path.exists(DATA_DIR):
raise Exception("cannot find DATA_DIR")
if not os.path.exists(IMAGE_DIR):
raise Exception("cannot find IMAGE_DIR")
sns.set(style="ticks", color_codes=True)
def ingest_data():
"""
ready the data for EDA
"""
print("... data ingestion")
## load the data and print the shape
df = pd.read_csv(os.path.join(DATA_DIR, "world-happiness.csv"), index_col=0)
## clean up the column names
df.columns = [re.sub("\s+","_",col) for col in df.columns.tolist()]
## drop the rows that have NaNs
df.dropna(inplace=True)
## sort the data for more intuitive visualization
df.sort_values(['Year', "Happiness_Score"], ascending=[True, False], inplace=True)
return(df)
def create_correlations_gridplot(df):
"""
create grid plot of pairwise correlations
"""
print("... creating plot")
# Compute the correlation matrix
corr = df.corr()
# Generate a mask for the upper triangle
    mask = np.triu(np.ones_like(corr, dtype=bool))  # plain bool: np.bool is removed in newer NumPy
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(11, 9))
# Generate a custom diverging colormap
cmap = sns.diverging_palette(220, 10, as_cmap=True)
# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, center=0, square=True, linewidths=.5, cbar_kws={"shrink": .5})
image_path = os.path.join(IMAGE_DIR, "pairwise-correlations.png")
plt.savefig(image_path, bbox_inches='tight', pad_inches = 0, dpi=200)
print("{} created.".format(image_path))
if __name__ == "__main__":
df = ingest_data()
create_correlations_gridplot(df)
file2cos(project, '/scripts', 'make-happiness-correlations-plot.py')
!python ../scripts/make-happiness-correlations-plot.py
Image("../images/pairwise-correlations.png",width=800, height=600)
###Output
... data ingestion
... creating plot
../images/pairwise-correlations.png created.
|
Resources/Starter_Code/credit_risk_ensemble.ipynb | ###Markdown
Read the CSV and Perform Basic Data Cleaning
###Code
# https://help.lendingclub.com/hc/en-us/articles/215488038-What-do-the-different-Note-statuses-mean-
columns = [
"loan_amnt", "int_rate", "installment", "home_ownership",
"annual_inc", "verification_status", "issue_d", "loan_status",
"pymnt_plan", "dti", "delinq_2yrs", "inq_last_6mths",
"open_acc", "pub_rec", "revol_bal", "total_acc",
"initial_list_status", "out_prncp", "out_prncp_inv", "total_pymnt",
"total_pymnt_inv", "total_rec_prncp", "total_rec_int", "total_rec_late_fee",
"recoveries", "collection_recovery_fee", "last_pymnt_amnt", "next_pymnt_d",
"collections_12_mths_ex_med", "policy_code", "application_type", "acc_now_delinq",
"tot_coll_amt", "tot_cur_bal", "open_acc_6m", "open_act_il",
"open_il_12m", "open_il_24m", "mths_since_rcnt_il", "total_bal_il",
"il_util", "open_rv_12m", "open_rv_24m", "max_bal_bc",
"all_util", "total_rev_hi_lim", "inq_fi", "total_cu_tl",
"inq_last_12m", "acc_open_past_24mths", "avg_cur_bal", "bc_open_to_buy",
"bc_util", "chargeoff_within_12_mths", "delinq_amnt", "mo_sin_old_il_acct",
"mo_sin_old_rev_tl_op", "mo_sin_rcnt_rev_tl_op", "mo_sin_rcnt_tl", "mort_acc",
"mths_since_recent_bc", "mths_since_recent_inq", "num_accts_ever_120_pd", "num_actv_bc_tl",
"num_actv_rev_tl", "num_bc_sats", "num_bc_tl", "num_il_tl",
"num_op_rev_tl", "num_rev_accts", "num_rev_tl_bal_gt_0",
"num_sats", "num_tl_120dpd_2m", "num_tl_30dpd", "num_tl_90g_dpd_24m",
"num_tl_op_past_12m", "pct_tl_nvr_dlq", "percent_bc_gt_75", "pub_rec_bankruptcies",
"tax_liens", "tot_hi_cred_lim", "total_bal_ex_mort", "total_bc_limit",
"total_il_high_credit_limit", "hardship_flag", "debt_settlement_flag"
]
target = ["loan_status"]
# Load the data
file_path = Path('../Resources/LoanStats_2019Q1.csv.zip')
df = pd.read_csv(file_path, skiprows=1)[:-2]
df = df.loc[:, columns].copy()
# Drop the null columns where all values are null
df = df.dropna(axis='columns', how='all')
# Drop the null rows
df = df.dropna()
# Remove the `Issued` loan status
issued_mask = df['loan_status'] != 'Issued'
df = df.loc[issued_mask]
# convert interest rate to numerical
df['int_rate'] = df['int_rate'].str.replace('%', '')
df['int_rate'] = df['int_rate'].astype('float') / 100
# Convert the target column values to low_risk and high_risk based on their values
x = {'Current': 'low_risk'}
df = df.replace(x)
x = dict.fromkeys(['Late (31-120 days)', 'Late (16-30 days)', 'Default', 'In Grace Period'], 'high_risk')
df = df.replace(x)
df.reset_index(inplace=True, drop=True)
df.head()
###Output
_____no_output_____
###Markdown
Split the Data into Training and Testing
###Code
# Create our features
X = # YOUR CODE HERE
# Create our target
y = # YOUR CODE HERE
X.describe()
# Check the balance of our target values
y['loan_status'].value_counts()
# Split the X and y into X_train, X_test, y_train, y_test
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Ensemble LearnersIn this section, you will compare two ensemble algorithms to determine which algorithm results in the best performance. You will train a Balanced Random Forest Classifier and an Easy Ensemble AdaBoost classifier. For each algorithm, be sure to complete the following steps:1. Train the model using the training data. 2. Calculate the balanced accuracy score from sklearn.metrics.3. Print the confusion matrix from sklearn.metrics.4. Generate a classification report using the `imbalanced_classification_report` from imbalanced-learn.5. For the Balanced Random Forest Classifier only, print the feature importance sorted in descending order (most important feature to least important) along with the feature score. Note: Use a random state of 1 for each algorithm to ensure consistency between tests Balanced Random Forest Classifier
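Before filling in the starter cells below, here is a rough sketch of what those five steps can look like with imbalanced-learn's `BalancedRandomForestClassifier`, assuming `X`, `X_train`, `X_test`, `y_train`, and `y_test` already exist from the split section above; the same pattern applies to the Easy Ensemble classifier. This is a sketch, not the definitive solution.
###Code
# Sketch of the five steps above (assumes X, X_train, X_test, y_train, y_test from the split section).
from imblearn.ensemble import BalancedRandomForestClassifier
from imblearn.metrics import classification_report_imbalanced
from sklearn.metrics import balanced_accuracy_score, confusion_matrix

brf = BalancedRandomForestClassifier(n_estimators=100, random_state=1)  # step 1: train
brf.fit(X_train, y_train)
y_pred = brf.predict(X_test)

print(balanced_accuracy_score(y_test, y_pred))            # step 2: balanced accuracy
print(confusion_matrix(y_test, y_pred))                   # step 3: confusion matrix
print(classification_report_imbalanced(y_test, y_pred))   # step 4: report (imblearn calls it classification_report_imbalanced)

# step 5: feature importances, most important first
for score, name in sorted(zip(brf.feature_importances_, X.columns), reverse=True)[:10]:
    print(f"{name}: ({score})")
###Output
_____no_output_____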
###Code
# Resample the training data with the RandomOversampler
# YOUR CODE HERE
# Calculated the balanced accuracy score
# YOUR CODE HERE
# Display the confusion matrix
# YOUR CODE HERE
# Print the imbalanced classification report
# YOUR CODE HERE
# List the features sorted in descending order by feature importance
# YOUR CODE HERE
###Output
loan_amnt: (0.09175752102205247)
int_rate: (0.06410003199501778)
installment: (0.05764917485461809)
annual_inc: (0.05729679526683975)
dti: (0.05174788106507317)
delinq_2yrs: (0.031955619175665397)
inq_last_6mths: (0.02353678623968216)
open_acc: (0.017078915518993903)
pub_rec: (0.017014861224701222)
revol_bal: (0.016537957646730293)
total_acc: (0.016169718411077325)
out_prncp: (0.01607049983545137)
out_prncp_inv: (0.01599866290723441)
total_pymnt: (0.015775537221600675)
total_pymnt_inv: (0.01535560674178928)
total_rec_prncp: (0.015029265003541079)
total_rec_int: (0.014828006488636946)
total_rec_late_fee: (0.01464881608833323)
recoveries: (0.014402430445752665)
collection_recovery_fee: (0.014318832248876989)
last_pymnt_amnt: (0.013519867193755364)
collections_12_mths_ex_med: (0.013151520216882331)
policy_code: (0.013101578263049833)
acc_now_delinq: (0.012784600558682344)
tot_coll_amt: (0.012636608914961465)
tot_cur_bal: (0.012633464965390648)
open_acc_6m: (0.012406321468566728)
open_act_il: (0.011687404692448701)
open_il_12m: (0.01156494245653799)
open_il_24m: (0.011455878011762288)
mths_since_rcnt_il: (0.011409157520644688)
total_bal_il: (0.01073641504525053)
il_util: (0.010380085181706624)
open_rv_12m: (0.010097528131347774)
open_rv_24m: (0.00995373830638152)
max_bal_bc: (0.00991410213601043)
all_util: (0.009821715826953788)
total_rev_hi_lim: (0.009603648248133598)
inq_fi: (0.009537423049553)
total_cu_tl: (0.008976776055926955)
inq_last_12m: (0.008870623013604539)
acc_open_past_24mths: (0.008745106187024114)
avg_cur_bal: (0.008045578273709669)
bc_open_to_buy: (0.007906251501807723)
bc_util: (0.00782073260901301)
chargeoff_within_12_mths: (0.007798696767389274)
delinq_amnt: (0.007608045628523077)
mo_sin_old_il_acct: (0.0075861537897335815)
mo_sin_old_rev_tl_op: (0.007554511001273182)
mo_sin_rcnt_rev_tl_op: (0.007471884930172615)
mo_sin_rcnt_tl: (0.007273779915807858)
mort_acc: (0.006874845464745796)
mths_since_recent_bc: (0.006862142977394886)
mths_since_recent_inq: (0.006838718858820505)
num_accts_ever_120_pd: (0.006413554699909871)
num_actv_bc_tl: (0.006319439816216779)
num_actv_rev_tl: (0.006160469432535709)
num_bc_sats: (0.006066257227997291)
num_bc_tl: (0.005981472544437747)
num_il_tl: (0.0055301594524349495)
num_op_rev_tl: (0.004961823663836347)
num_rev_accts: (0.004685198497435334)
num_rev_tl_bal_gt_0: (0.0045872929977180356)
num_sats: (0.0041651633321967895)
num_tl_120dpd_2m: (0.004016461341161775)
num_tl_30dpd: (0.0032750717701661657)
num_tl_90g_dpd_24m: (0.0027565184136781346)
num_tl_op_past_12m: (0.0026174030074401656)
pct_tl_nvr_dlq: (0.002279671873697176)
percent_bc_gt_75: (0.0021899772867773103)
pub_rec_bankruptcies: (0.0020851101815353096)
tax_liens: (0.0018404849590376573)
tot_hi_cred_lim: (0.001736019018028134)
total_bal_ex_mort: (0.0015472230884974506)
total_bc_limit: (0.0012263315437383057)
total_il_high_credit_limit: (0.0012213148580230454)
home_ownership_ANY: (0.0012151288883862276)
home_ownership_MORTGAGE: (0.0008976722260399365)
home_ownership_OWN: (0.0008125182396705508)
home_ownership_RENT: (0.000573414997420326)
verification_status_Not Verified: (0.0005168345750594915)
verification_status_Source Verified: (0.0004192455022893127)
verification_status_Verified: (0.0)
issue_d_Feb-2019: (0.0)
issue_d_Jan-2019: (0.0)
issue_d_Mar-2019: (0.0)
pymnt_plan_n: (0.0)
initial_list_status_f: (0.0)
initial_list_status_w: (0.0)
next_pymnt_d_Apr-2019: (0.0)
next_pymnt_d_May-2019: (0.0)
application_type_Individual: (0.0)
application_type_Joint App: (0.0)
hardship_flag_N: (0.0)
debt_settlement_flag_N: (0.0)
###Markdown
Easy Ensemble AdaBoost Classifier
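For orientation, a hedged sketch of how such a classifier is commonly trained with `imblearn` is shown below; the variable names (`X_train`, `y_train`, `X_test`, `y_test`) are assumed from earlier cells, and this is not necessarily the expected solution for the template cells that follow.

```python
from imblearn.ensemble import EasyEnsembleClassifier
from imblearn.metrics import classification_report_imbalanced
from sklearn.metrics import balanced_accuracy_score, confusion_matrix

# Train the ensemble on the training split
eec = EasyEnsembleClassifier(n_estimators=100, random_state=1)
eec.fit(X_train, y_train)

# Evaluate on the held-out split
y_pred = eec.predict(X_test)
print(balanced_accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print(classification_report_imbalanced(y_test, y_pred))
```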
###Code
# Train the Classifier
# YOUR CODE HERE
# Calculate the balanced accuracy score
# YOUR CODE HERE
# Display the confusion matrix
# YOUR CODE HERE
# Print the imbalanced classification report
# YOUR CODE HERE
###Output
pre rec spe f1 geo iba sup
high_risk 0.09 0.92 0.94 0.16 0.93 0.87 101
low_risk 1.00 0.94 0.92 0.97 0.93 0.87 17104
avg / total 0.99 0.94 0.92 0.97 0.93 0.87 17205
|
refactoring-101/refactoring.ipynb | ###Markdown
Refactoring*A pattern for evolution of code*Scott Hendrickson2016 April 29 What is refactoring? Code refactoring is the process of restructuring existing computer code—changing the factoring—without changing its external behavior. - wikipedia Maybe an illustration about two work streams. 1. "Normal" coding work. You are trying to add new functionality to your code. E.g. 2. Changing the code, structure, architecture, etc. (but *not* the functionality).3. Know *when* and *how* to move between the two activitiesMore wikipedia, Refactoring improves nonfunctional attributes of the software. Advantages include improved code readability and reduced complexity; these can improve source-code maintainability and create a more expressive internal architecture or object model to improve extensibility. Typically, refactoring applies a series of standardised basic micro-refactorings, each of which is (usually) a tiny change in a computer program's source code that either preserves the behaviour of the software, or at least does not modify its conformance to functional requirements. Why worry about it? Refactoring enables graceful code evolution -- You don't have to know all the answers when you start, you can discover them as you go. Refactoring is a key part of the *how*. ... code refactoring may also resolve hidden, dormant, or undiscovered computer bugs or vulnerabilities in the system by simplifying the underlying logic and eliminating unnecessary levels of complexity. What motivates refactoring?Sometimes we have a distinct feel that something isn't quite right. Or, we hit a wall where someone asks us to add some functionality to the code and we realize that code choices we made earlier make adding the new piece very hard, risky or clunky.More formally, smart people have identified some common "this doesn't feel right" moments and cataloged them. Bad Smells Refactoring is usually motivated by noticing a code smell.[2] For example the method at hand may be very long, or it may be a near duplicate of another nearby method. Once recognized, such problems can be addressed by refactoring the source code, or transforming it into a new form that behaves the same as before but that no longer "smells". These *bad smells* can be at code, architecture, data structure, etc. levels. Learning to identify many types of bad smells and then having ready tools to address them is part of what we mean by "professional developer skills." For a long routine, one or more smaller subroutines can be extracted; or for duplicate routines, the duplication can be removed and replaced with one shared function. Failure to perform refactoring can result in accumulating technical debt; on the other hand, refactoring is one of the primary means of repaying technical debt.[3] When you don't refactor regularly......you accumulate technical debt. Can we say any generally useful things about the risks of not paying down technical debt?There are two general categories of benefits to the activity of refactoring:1. Maintainability. It is easier to fix bugs because the source code is easy to read and the intent of its author is easy to grasp.[4] This might be achieved by reducing large monolithic routines into a set of individually concise, well-named, single-purpose methods. It might be achieved by moving a method to a more appropriate class, or by removing misleading comments.2. Extensibility. 
It is easier to extend the capabilities of the application if it uses recognizable design patterns, and it provides some flexibility where none before may have existed.[1] - wikipedia What is a design pattern? A design pattern is the re-usable form of a solution to a design problem. The idea was introduced by the architect Christopher Alexander[1] and has been adapted for various other disciplines, most notably computer science.[2] - wikipedia Can you give an example, maybe from Alexander's work? ... how the components of the pattern relate to each other to give the solution.[3] Christopher Alexander describes common design problems as arising from "conflicting forces" — such as the conflict between wanting a room to be sunny and wanting it not to overheat on summer afternoons. A pattern would not tell the designer how many windows to put in the room; instead, it would propose a set of values to guide the designer toward a decision that is best for their particular application. Alexander, for example, suggests that enough windows should be included to direct light all around the room. He considers this a good solution because he believes it increases the enjoyment of the room by its occupants. Other authors might come to different conclusions, if they place higher value on heating costs, or material costs. These values, used by the pattern's author to determine which solution is "best", must also be documented within the pattern. - wikipedia Important attributes of a Pattern The elements of this language are entities called patterns. Each pattern describes a problem that occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice. — Christopher Alexander[1] - wikipediaSo, summarizing attributes:1. The Context section sets the stage where the pattern takes place.2. The Problem section explains what the actual problem is.3. The Forces section describes why the problem is difficult to solve.4. The Solution section explains the solution in detail.5. The Consequences section demonstrates what happens when you apply the solution.(These steps follow http://www.europlop.net/sites/default/files/files/0_How%20to%20write%20a%20pattern-2011-11-30_linked.pdf )1. People living, working, playing near water. Desire to be near water.2. Desire for access to water by many people at once --> reduced access to water3. Identify the forces * Desire - private land on water * Water adjacent land = common access land * Roads parallel to water aren't the kind of access we mean4. Form of solution and considerations * Common land along water * Approach roads perpendicular5. Preserving some common areas near water provides broad access. Preserving common water's-edge space by building roads perpendicular to water maximizes common areas. WRT bad smells, give a more code-y example
###Code
import string, math
a = { x:[] for x in string.punctuation }
i = 0
for x in a:
a[x].append(math.sin(i))
i += 1
print(a)
###Output
{'!': [0.0], '#': [0.8414709848078965], '"': [0.9092974268256817], '%': [0.1411200080598672], '$': [-0.7568024953079282], "'": [-0.9589242746631385], '&': [-0.27941549819892586], ')': [0.6569865987187891], '(': [0.9893582466233818], '+': [0.4121184852417566], '*': [-0.5440211108893699], '-': [-0.9999902065507035], ',': [-0.5365729180004349], '/': [0.4201670368266409], '.': [0.9906073556948704], ';': [0.6502878401571169], ':': [-0.2879033166650653], '=': [-0.9613974918795568], '<': [-0.750987246771676], '?': [0.14987720966295234], '>': [0.9129452507276277], '@': [0.836655638536056], '[': [-0.008851309290403876], ']': [-0.8462204041751706], '\\': [-0.9055783620066239], '_': [-0.13235175009777303], '^': [0.7625584504796028], '`': [0.956375928404503], '{': [0.27090578830786904], '}': [-0.6636338842129675], '|': [-0.9880316240928618], '~': [-0.404037645323065]}
###Markdown
This activity has a name, we call it enumeration...
###Code
import string
a = { x:[] for x in string.punctuation }
for i, x in enumerate(a):
a[x].append(math.sin(i))
print(a)
###Output
{'!': [0.0], '#': [0.8414709848078965], '"': [0.9092974268256817], '%': [0.1411200080598672], '$': [-0.7568024953079282], "'": [-0.9589242746631385], '&': [-0.27941549819892586], ')': [0.6569865987187891], '(': [0.9893582466233818], '+': [0.4121184852417566], '*': [-0.5440211108893699], '-': [-0.9999902065507035], ',': [-0.5365729180004349], '/': [0.4201670368266409], '.': [0.9906073556948704], ';': [0.6502878401571169], ':': [-0.2879033166650653], '=': [-0.9613974918795568], '<': [-0.750987246771676], '?': [0.14987720966295234], '>': [0.9129452507276277], '@': [0.836655638536056], '[': [-0.008851309290403876], ']': [-0.8462204041751706], '\\': [-0.9055783620066239], '_': [-0.13235175009777303], '^': [0.7625584504796028], '`': [0.956375928404503], '{': [0.27090578830786904], '}': [-0.6636338842129675], '|': [-0.9880316240928618], '~': [-0.404037645323065]}
|
notebooks/UD_Staff_Management_EDV.ipynb | ###Markdown
Premise of NotebookThis exploratory notebook will explore increasingly granular levels of visualization, utilizing Altair for its high customizability and functionality, as well as interactivity that's retained when embedded to a website. This exercise focuses specifically on mentor/mentee data in regards to high level categorical metrics, but the concepts applied can be utilized in many other areas of data visualization, including attendance time series graphs, financial/resource management (tracking usage, supply/demand growths, etc.), mentor availability for mentees by day and time, staff metric overviews like what we're about to dive into...the list goes on. Data RetrievalFor starters, we will explore comparing counts of mentees and mentors based on certain categories, namely experience level and subject. This could be a useful tool for staff administration (superadmins).
###Code
# These are requests to the live database, showcasing how data could be retrieved for usage.
# Imports used throughout this notebook (the aliases `r` and `alt` are assumed to be
# the random and altair modules, based on how they are used below).
import random as r
import altair as alt
import pandas as pd
import requests

mentees_df = pd.DataFrame(requests.post("http://underdog-devs-ds-a-dev.us-east-1.elasticbeanstalk.com/Mentees/read").json()["result"])
mentors_df = pd.DataFrame(requests.post("http://underdog-devs-ds-a-dev.us-east-1.elasticbeanstalk.com/Mentors/read").json()["result"])
#verifying data retrieval
print("Mentees\n")
mentees_df.info()
print("------------------------------------------------------------------\nMentors\n")
mentors_df.info()
mentees_df['tech_stack'].unique()
# Using local files
mentees_df = pd.read_csv("https://raw.githubusercontent.com/BakerJr1904/Altair-visualization-for-underdogsDevs/main/mentees.csv")
mentors_df = pd.read_csv("https://raw.githubusercontent.com/BakerJr1904/Altair-visualization-for-underdogsDevs/main/mentors.csv")
#verifying data retrieval
print("Mentees\n")
mentees_df.info()
print("------------------------------------------------------------------\nMentors\n")
mentors_df.info()
# Using local files
mentees_df = pd.read_csv("https://raw.githubusercontent.com/BakerJr1904/Altair-visualization-for-underdogsDevs/main/mentees.csv")
mentors_df = pd.read_csv("https://raw.githubusercontent.com/BakerJr1904/Altair-visualization-for-underdogsDevs/main/mentors.csv")
# Adding role column to distinguish mentee vs mentor
mentees_df["role"] = ["Mentee"]*len(mentees_df)
mentors_df["role"] = ["Mentor"]*len(mentors_df)
# Generating random skill levels
levels = ['Beginner', 'Intermediate', 'Advanced', 'Master']
mentees_df['experience_level'] = [r.choice(levels) for _ in range(len(mentees_df))]
mentors_df['experience_level'] = [r.choice(levels) for _ in range(len(mentors_df))]
# Filtering for relevant columns
mentees_df = mentees_df[["role", "profile_id", "first_name", "last_name", "tech_stack", "experience_level"]]
mentors_df = mentors_df[["role", "profile_id", "first_name", "last_name", "tech_stack", "experience_level"]]
# Concatenating
df = pd.merge(mentees_df, mentors_df, how="outer")
# There shouldn't be nulls, but there are "None" values in tech_stack that shouldn't exist, so we'll remove them
df = df.loc[df['tech_stack'] != "None"]
# Checking redundant variances
print(df["role"].unique())
print(df["tech_stack"].unique())
print(df["experience_level"].unique())
# Generating full_name column
df["full_name"] = df["first_name"] + [" "]*len(df) + df["last_name"]
# High level data overview
df.info()
# actual dataframe
df.head(5)
###Output
_____no_output_____
###Markdown
Building Graphs Part 1: Mentors and MenteesNext, we'll build some graphs. Let's start with a lot of information in a single graph, something we really don't want to use, but that showcases information density.
###Code
graph = alt.Chart(df).mark_bar().encode(
x="role",
    y=alt.Y(
"count()",
title="Head Count"
),
color="tech_stack",
column="experience_level"
).properties(width=200).configure_axisX(
title="null",
labelFontSize=15
).configure_header(
labelFontSize=15
).configure_axisY(
labelFontSize=12
)
graph
###Output
_____no_output_____
###Markdown
This is obviously a lot to look at. It's hard to compare mentors and mentees across disciplines, even if their skill levels are clustered together. But what if we could choose what subject(s) to compare? With the power of Altair, we can! One way is to use Altair's included checkboxes, but a prettier way is to make our own selection panels, so let's go for it!
###Code
# Create selection panel with selection functionality
selection = alt.selection_multi(fields=['tech_stack'])
color_select = alt.condition(selection, alt.Color('tech_stack:N'), alt.value('lightgray'))
selector = alt.Chart().mark_rect().encode(y='tech_stack', color=color_select).add_selection(selection)
# Create main graph
main_graph = alt.Chart().mark_bar().encode(
x="role",
y=alt.X(
"count()",
title="Head Count"
),
color="tech_stack",
column="experience_level"
).transform_filter(selection).properties(height=400, width=150)
# Concatenate with data
full_graph = alt.hconcat(selector, main_graph, data=df)
full_graph.configure_axisX(labelFontSize=15, title="null").configure_header(labelFontSize=15, titleFontSize=20).configure_legend(disable=True)
###Output
_____no_output_____
###Markdown
Ta-da! Now we can filter them at will, both with single subjects and multiple if we click while holding shift! Using the same process, we could add a secondary filter for experience level that would condense the graph we're viewing into two bars, if we wanted to get really granular!
###Code
# Create secondary selection panel with selection functionality
selection2 = alt.selection_multi(fields=['experience_level'])
color_select2 = alt.condition(selection2, alt.Color('experience_level:N'), alt.value('lightgray'))
selector2 = alt.Chart(df).mark_rect().encode(y='experience_level', color=color_select2).add_selection(selection2)
# Add secondary filter to main_graph
main_graph = main_graph.transform_filter(selection2)
granular_graph = alt.hconcat(selector, main_graph, selector2, data=df).configure_legend(disable=True)
granular_graph.configure_axisX(labelFontSize=15).configure_header(labelFontSize=15)
###Output
_____no_output_____
###Markdown
Now we have two working filter panels! But it might make more sense to condense multiple selected experience levels for comparisons...let's do that! And while we're at it, since the sizes may get very small, let's make it so we can view each person's full name when we hover over the segmented bar graph. Though this may not seem useful in this situation, the same method can be applied to point graphs, for instance a time series graph that plots mentors' meetings with mentees and their resultant attendances; making it so each time you hover over a marked absence, the mentee's name, id, or other chosen identifier will appear to be viewed for quick reference.
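Purely as an illustration of that last point, a hover tooltip on a time-series point chart might look like the hedged sketch below; the `meetings` dataframe and its columns are invented for this example. The next cell applies the same tooltip idea to our bar chart.

```python
import pandas as pd
import altair as alt

# Hypothetical attendance records for a single mentee
meetings = pd.DataFrame({
    "meeting_date": pd.to_datetime(["2022-01-03", "2022-01-10", "2022-01-17"]),
    "attended": [1, 0, 1],
    "full_name": ["Ada Lovelace", "Ada Lovelace", "Ada Lovelace"],
})

# Hovering over a point reveals the person's full name
alt.Chart(meetings).mark_point().encode(
    x="meeting_date:T",
    y="attended:Q",
    tooltip="full_name",
)
```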
###Code
# Recreate main graph without column clustering
main_graph = alt.Chart().mark_bar().encode(
x="role",
    y=alt.Y(
"count()",
title="Head Count"
),
color="experience_level",
tooltip="full_name"
).transform_filter(selection).transform_filter(selection2).properties(width=100, height=600)
granular_graph = alt.hconcat(selector, main_graph, selector2, data=df).configure_legend(disable=True).configure_axisY(labelFontSize=15, titleFontSize=15, tickMinStep=1).configure_axisX(labelFontSize=15, titleFontSize=15)
granular_graph
###Output
_____no_output_____ |
Labs28 Notebooks/Preprocessing_Labs28_Data_2020PB_and_826Pt1.ipynb | ###Markdown
**Labs28 Notebook for creating the merged dataset.**
###Code
# Install newspaper3k for article parser
! pip3 install newspaper3k
import pandas as pd
import numpy as np
from bs4 import BeautifulSoup
from collections import Counter
from newspaper import Article
import json
import re
import requests
import spacy
from spacy.tokenizer import Tokenizer
import urllib3
nlp = spacy.load('en_core_web_sm')
###Output
_____no_output_____
###Markdown
The first dataframe is created from the GitHub 2020PB (r/PoliceBrutality) data repository.
###Code
# Import aggregated json data create to dataframe
all_locs = 'https://raw.githubusercontent.com/2020PB/police-brutality/data_build/all-locations-v2.json'
# Copy and paste link in url to see current update from Github 2020PB reddit page
df_gitjson = pd.read_json(all_locs)
# Pull data column out and create its own dataframe
df_2020PB = pd.json_normalize(data=df_gitjson['data'])
df_2020PB['updated_at'] = df_gitjson['updated_at']
# Create a last updated to save in .csv filename
last_updated = df_gitjson['updated_at'].iloc[0]
### Create a preprocessing function for df_2020PB
# Rename columns
df_2020PB.rename(columns = {'name':'title'}, inplace = True)
# Drop irrelevant columns
df_2020PB.drop(labels=['edit_at', 'date_text'], axis=1,inplace=True)
# Reorder column headers
df_2020PB = df_2020PB[['date', 'links', 'id', 'city', 'state', 'geolocation', 'title', 'tags', 'description']]
# Update the "date" column to timestamps
df_2020PB['date'] = pd.to_datetime(df_2020PB['date'],format='%Y-%m-%d')
# Write function to create hyperlinks for the 'links' columns
def cleanlinks(link_dicts):
    # Pull the raw URL out of each {'url': ...} dict (renamed parameter avoids
    # shadowing the imported json module)
    links_out = []
    for link in link_dicts:
        links_out.append(link['url'])
    return links_out
# Apply function to the dataframe 'links' column
df_2020PB['links'] = df_2020PB['links'].apply(cleanlinks)
# Ensure that dataframe was created correctly
df_2020PB
# Extract and clean the data from the 846 API
# https://incidents.846policebrutality.com/
url="https://api.846policebrutality.com/api/incidents"
# Copy and paste link in url to see current update from 846
http = urllib3.PoolManager()
response = http.request('GET', url)
soup = BeautifulSoup(response.data, "html.parser")
json_846 = json.loads(soup.text)
# Check length of the json_846 file
# print(len(json_846['data']))
# json_846 # Commented to see the json_846 object
# Retrieve data from the json_846['data'] key
# Create dataframe from the 846 API incident data
df_846 = pd.DataFrame(json_846['data'])
### Preprocessing
# Change data type for 'date' column to datetime type
df_846['date'] = pd.to_datetime(df_846['date'], infer_datetime_format=True)
# Drop irrelevant columns
df_846 = df_846.drop(columns=['data','pb_id'])
# Rename Columns
df_846.rename(columns = {'geocoding': 'geolocation'}, inplace = True)
# Reorder columns
df_846 = df_846[['date', 'links', 'id', 'city', 'state', 'geolocation', 'title',
'tags']]
# Check the dataframe
df_846
###Output
/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py:847: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning)
###Markdown
What is the difference between the information received in the first dataframe and the second? Are there any duplicate links relaying the same information?
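One quick, hedged way to probe this with the two dataframes built above (not part of the original notebook) is to intersect their `id` columns:

```python
# Sketch: how many incident ids appear in both sources?
shared_ids = set(df_2020PB['id']) & set(df_846['id'])
print(f"{len(shared_ids)} incident ids appear in both df_2020PB and df_846")
```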
###Code
# 846 API already comes in order
df_846['date'][0]
###Output
_____no_output_____
###Markdown
Create the merged dataframe from the 2020PB data and the 846 API.
###Code
print(f'df_2020PB shape: {df_2020PB.shape}')
print(f'df_846: {df_846.shape}')
print(f'There will be a total of {df_2020PB.shape[0] + df_846.shape[0]} rows.')
# Merge the two datasets and check for duplicates
frames = [df_2020PB, df_846]
merged_dfs = pd.concat(frames)
# merged_dfs.reset_index(inplace=True) # Need to properly reset index
print(f'There are currently {merged_dfs.shape[0]} rows.')
merged_dfs = merged_dfs.drop_duplicates(subset=["id"])
print(f'Now, there are {merged_dfs.shape[0]} rows after dropping duplicate ids.')
# Sort by date
merged_dfs.sort_values(by='date', inplace=True)
# Replace the Nan values with the string "None" in the description column *****************
merged_dfs['description'].replace({np.NaN: "None"}, inplace=True)
# Replace the Nan values with the string "None" in the geolocation column *****************
merged_dfs['geolocation'].replace({"": np.NaN}, inplace=True)
# Missing geolocations are mapped as empty strings
merged_dfs['geolocation'].replace({np.NaN: "None"}, inplace=True)
# Removed Outliers by dates outide of the year 2020.
merged_dfs =merged_dfs.loc[merged_dfs["date"].between('2020-01-01', '2020-12-30')]
# Reset index
merged_dfs.reset_index(inplace=True)
# Create a latitude (lat) and longitude (lon) column.
# Create function to create lat and long from geolocation column
def splitGeolocation(item):
"""
    Splits a single geolocation entry into latitude and longitude.
    :item: one geolocation value -- either a "lat,long" string, a dict with
    'lat'/'long' keys, or the string 'None' when missing
    :return: single-element list containing the latitude (or "None")
    :return: single-element list containing the longitude (or "None")
"""
lat = []
lon = []
if isinstance(item,str) and item != 'None':
item = item.split(',')
lat.append(float(item[0]))
lon.append(float(item[1]))
elif type(item) == dict:
lat.append(float(item['lat']))
lon.append(float(item['long']))
else:
lat.append("None") ### Null values
lon.append("None") ### Null values
return lat,lon
merged_dfs['lat'] = [splitGeolocation(item)[0][0] for item in merged_dfs['geolocation']]
merged_dfs['long'] = [splitGeolocation(item)[1][0] for item in merged_dfs['geolocation']]
# Drop the geolocation column
merged_dfs.drop(labels=['geolocation'], axis=1, inplace=True)
# Look at dataframe
merged_dfs = merged_dfs[['date', 'links', 'id', 'city', 'state', 'lat', 'long',
'title', 'description', 'tags']]
###Output
_____no_output_____
###Markdown
**[X] Decide what format the geolocation column needs to be. Should the current column have all dicts and two new column be created one for lat and one for lon and ints?**We decided to create two columns each containing floats for longitude/latitudevalues and insert NaNs where no values exist. Dropped the geolocation columnsince it wasn't being used on the front-end to populate any real data.
###Code
merged_dfs
###Output
_____no_output_____
###Markdown
Natural Language Pre-Processing and Analytics
###Code
def remove_list(col):
    # Join each row's list of tags into one space-separated string,
    # skipping tags already present in that row's string
    joined_rows = []
    for row in col:
        rows = ""
        for item in row:
            if item not in rows or len(rows) == 0:
                rows = rows + " " + str(item)
        joined_rows.append(rows)
    return joined_rows
# Apply function to remove tags within a list
merged_dfs['words'] = remove_list(merged_dfs['tags'])
from spacy.tokenizer import Tokenizer
nlp = spacy.load("en_core_web_sm")
# Tokenizer
tokenizer = Tokenizer(nlp.vocab)
# Update stop words with all non-police of force terms
stop_words = [
"celebrity",
"child",
"ederly",
"lgbtq+",
"homeless",
"journalist",
"non-protest",
"person-with-disability",
"medic",
"politician",
"pregnant",
"property-desctruction",
" ",
"bystander",
"protester",
"legal-observer",
"hide-badge",
'body-cam',
"conceal",
'elderly'
]
# Update stop words default list
stop = nlp.Defaults.stop_words.union(stop_words)
from tqdm import tqdm
tqdm.pandas()
def remove_stops(_list_):
keywords = []
for keyword in _list_:
phrase = []
words = keyword.split()
for word in words:
if word in stop:
pass
else:
phrase.append(word)
phrase = ' '.join(phrase)
if len(phrase) > 0:
keywords.append(phrase)
return keywords
# Apply function to use remove stop words and words that aren't indicative
# of police use of force
merged_dfs['cleaned_tags'] = merged_dfs['tags'].progress_apply(remove_stops)
merged_dfs.drop(labels=['words', 'tags'], axis=1, inplace=True)
merged_dfs.rename(columns={'cleaned_tags':'tags'}, inplace=True)
merged_dfs
# Analyzing tokens
# Object from Base Python
from collections import Counter
# The object `Counter` takes an iterable, but you can instaniate an empty one and update it.
word_counts = Counter()
# Update it based on a split of each of our documents
merged_dfs['tags'].apply(lambda x: word_counts.update(x))
# Print out the 20 most common words
word_counts.most_common(75) # All of the words
# NOTE: ALL CATEGORIES STRICTLY FOLLOW THE NATIONAL INSTITUTE OF JUSTICE USE-OF-FORCE CONTINUUM DEFINITIONS
#for more information, visit https://nij.ojp.gov/topics/articles/use-force-continuum
VERBALIZATION = ['threaten', 'incitement']
EMPTY_HAND_SOFT = ['arrest', 'grab', 'zip-tie', ]
EMPTY_HAND_HARD = ['shove', 'push', 'strike', 'tackle', 'beat', 'knee', 'punch',
'throw', 'knee-on-neck', 'kick', 'choke', 'dog', 'headlock']
LESS_LETHAL_METHODS = ['less-lethal', 'tear-gas', 'pepper-spray', 'baton',
'projectile', 'stun-grenade', 'pepper-ball',
'tear-gas-canister', 'explosive', 'mace', 'lrad',
'bean-bag', 'gas', 'foam-bullets', 'taser', 'tase',
'wooden-bullet', 'rubber-bullet', 'marking-rounds',
'paintball']
LETHAL_FORCE = ['shoot', 'throw', 'gun', 'death', 'live-round', ]
UNCATEGORIZED = ['property-destruction', 'abuse-of-power', 'bike',
'inhumane-treatment', 'shield', 'vehicle', 'drive', 'horse',
'racial-profiling', 'spray', 'sexual-assault', ]
# UNCATEGORIZED are Potential Stop Words. Need to talk to team.
# Need dummy columns to fill. Create a cleaner function to handle this problem. DJ.
use_of_force_cols = ['Verbalization', 'Empty_Hand_Soft', 'Empty_Hand_Hard',
                     'Less_Lethal_Methods', 'Lethal_Force', 'Uncategorized']
for col in use_of_force_cols:
    merged_dfs[col] = merged_dfs['date']
merged_dfs # Created dummy data filled with the date column
def Searchfortarget(category_keywords, tags):
    # Return 1 if any of the row's tags appears in the category keyword list, else 0
    for target in tags:
        if target in category_keywords:
            return 1
    return 0
def UseofForceContinuumtest(col):
for i, row in enumerate(col):
merged_dfs['Verbalization'].iloc[i], merged_dfs['Empty_Hand_Soft'].iloc[i], merged_dfs['Empty_Hand_Hard'].iloc[i], merged_dfs['Less_Lethal_Methods'].iloc[i],merged_dfs['Lethal_Force'].iloc[i],merged_dfs['Uncategorized'].iloc[i] = Searchfortarget(VERBALIZATION, row),Searchfortarget(EMPTY_HAND_SOFT, row), Searchfortarget(EMPTY_HAND_HARD, row),Searchfortarget(LESS_LETHAL_METHODS, row),Searchfortarget(LETHAL_FORCE, row), Searchfortarget(UNCATEGORIZED, row)
"""Alternatively, this (below) is what is happening under the hood"""
# def UseofForceContinuum(col):
# for i, row in enumerate(col):
# # print("--------------")
# # print(row, i)
# merged_dfs['Verbalization'].iloc[i] = Searchfortarget(VERBALIZATION, row)
# merged_dfs['Empty_Hand_Soft'].iloc[i] = Searchfortarget(EMPTY_HAND_SOFT, row)
# merged_dfs['Empty_Hand_Hard'].iloc[i] = Searchfortarget(EMPTY_HAND_HARD, row)
# merged_dfs['Less_Lethal_Methods'].iloc[i] = Searchfortarget(LESS_LETHAL_METHODS, row)
# merged_dfs['Lethal_Force'].iloc[i] = Searchfortarget(LETHAL_FORCE, row)
# merged_dfs['Uncategorized'].iloc[i] = Searchfortarget(UNCATEGORIZED, row)
# # return merged_dfs
# UseofForceContinuum(merged_dfs['cleaned_words'])
# Apply function to the cleaned_tags columns
UseofForceContinuumtest(merged_dfs['tags'])
###Output
/usr/local/lib/python3.6/dist-packages/pandas/core/indexing.py:670: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
iloc._setitem_with_indexer(indexer, value)
###Markdown
The newly added columns are stored as objects instead of integers.
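If integer flags are preferred, a small sketch of an explicit cast (assuming the flag values are already 0/1) would be:

```python
flag_cols = ['Verbalization', 'Empty_Hand_Soft', 'Empty_Hand_Hard',
             'Less_Lethal_Methods', 'Lethal_Force', 'Uncategorized']
merged_dfs[flag_cols] = merged_dfs[flag_cols].astype(int)
```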
###Code
# Saved the data in on .csv file for all sources.
# Create a copy of the data
cleaned_df = merged_dfs.copy()
cleaned_df
# Saved the data in on .csv file for all sources.
# cleaned_df.to_csv(f'Labs28_AllSources_Data{last_updated}.csv', sep="|",index=False) # Uncomment to save.
###Output
_____no_output_____
###Markdown
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * Proceed to Labs28_D_Duplicate_LinkExperiment.ipynb * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
###Code
cleaned_df
###Output
_____no_output_____ |
examples/loading_saving.ipynb | ###Markdown
Loading and Saving Loading mzML To accommodate disparate instrument types and manufacturers (e.g. Bruker, Waters, Thermo, Agilent), DEIMoS operates under the assumption that input data are in an open, standard format.As of this publication, the accepted file format for DEIMoS is mzML (or mzML.gz), which contains metadata, separation, and spectrometry data that reproduce the contents of vendor formats.Conversion to mzML from several other formats can be performed using the free and open-source [ProteoWizard](https://proteowizard.sourceforge.io/) msconvert utility. By default, DEIMoS will load frame, scan, *m/z*, and intensity from the mzML, as well as precursor *m/z* for MS2, as available.Additional "accession" fields may be specified for data of higher dimension.To view these fields, a convenience function is provided.
###Code
import deimos
accessions = deimos.get_accessions('example_data.mzML.gz')
accessions
###Output
_____no_output_____
###Markdown
The example data referenced is from an Agilent 6560 Ion Mobility LC/Q-TOF system. Thus, we will additionally need to parse retention time and ion mobility drift times.Consulting the list above, we are able to supply appropriate accession fields to the `load` function, renaming as convenient (here, "scan start time" becomes "retention_time" and "ion mobility drift time" becomes "drift_time").The `load` function will infer file type based on extension (here, .mzML or .mzML.gz)
###Code
%%time
data = deimos.load('example_data.mzML.gz',
accession={'retention_time': 'MS:1000016',
'drift_time': 'MS:1002476'})
###Output
CPU times: user 4min 2s, sys: 38.9 s, total: 4min 41s
Wall time: 4min 57s
###Markdown
The resulting data will be returned as a dictionary containing data frames, with keys per MS level. The example data contains MS1 and MS2 (collected at 20 eV).
###Code
data['ms1']
data['ms2']
###Output
_____no_output_____
###Markdown
HDF5 If the data is already parsed and saved in the Hierarchical Data Format, loading will be much faster. The function does not change, as the loader will again infer format by file extension. However, arguments will be different: specifying accessions is no longer required, but the relevant MS level must be selected using the `key` flag.
###Code
%%time
ms1 = deimos.load('example_data.h5', key='ms1')
ms1
###Output
CPU times: user 7.81 s, sys: 10.1 s, total: 17.9 s
Wall time: 22.6 s
###Markdown
Multi-file Loading For certain alignment applications, the number of input files is too large to read each into memory simultaneously. In these situations, [Dask](https://dask.org/) is used to virtually load multiple data frames, making them more amenable to downstream computation. The `load` function will detect whether a list of inputs is passed and read using the appropriate backend. Dask chunksize (see [docs](https://docs.dask.org/en/stable/array-chunks.html)) may be specified by the `chunksize` flag, and additional metadata per input file can be passed as a dictionary with keys for each path (e.g. date, sample type, etc.). Only the HDF5 format is supported for multi-file loading.
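As an illustration of the `meta` argument described above, a per-file annotation dictionary keyed by path might look like the sketch below; the file names and annotation fields are hypothetical, and the exact structure expected by DEIMoS should be checked against its documentation.

```python
# Hypothetical per-file annotations keyed by input path
meta = {'sample_A.h5': {'sample_type': 'QC'},
        'sample_B.h5': {'sample_type': 'blank'}}

ms1 = deimos.load(['sample_A.h5', 'sample_B.h5'], key='ms1',
                  chunksize=1E7, meta=meta)
```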
###Code
ms1 = deimos.load(['example_data.h5', 'example_data.h5'], key='ms1', chunksize=1E7, meta=None)
ms1
###Output
_____no_output_____
###Markdown
Note that additional columns are appended to indicate each source file name and index.As the data frames are loaded virtually, the output is a placeholder for would-be data.For more on loading multiple files, see the section on [alignment](alignment.ipynb). Saving HDF5 By default, DEIMoS exports a lightweight, data frame-based representation in Hierarchical Data Format version 5 (HDF5) file format. One must specify a path, the data frame to be saved, and a key for the container. Multiple keys may be saved to the same container (i.e. MS1 and MS2). The `mode` flag is used to indicate file overwrite (`mode='w'`) or append (`mode='a'`), the latter to be used when saving multiple data frames to the file.
###Code
# save ms1 to new file
deimos.save('example_data.h5', data['ms1'], key='ms1', mode='w')
# save ms2 to same file
deimos.save('example_data.h5', data['ms2'], key='ms2', mode='a')
###Output
_____no_output_____ |
NY Stock Price Prediction RNN LSTM GRU/.ipynb_checkpoints/NY Stock Price Prediction RNN LSTM GRU-checkpoint.ipynb | ###Markdown
**Author:** Raoul Malm **Description:** This notebook demonstrates the future price prediction for different stocks using recurrent neural networks in tensorflow. Recurrent neural networks with basic, LSTM or GRU cells are implemented. **Outline:**1. [Libraries and settings](1-bullet)2. [Analyze data](2-bullet)3. [Manipulate data](3-bullet)4. [Model and validate data](4-bullet)5. [Predictions](5-bullet)**Reference:** [LSTM_Stock_prediction-20170507 by BenF](https://www.kaggle.com/benjibb/lstm-stock-prediction-20170507/notebook) 1. Libraries and settings
###Code
import numpy as np
import pandas as pd
import math
import sklearn
import sklearn.preprocessing
import datetime
import os
import matplotlib.pyplot as plt
import tensorflow as tf
# split data in 80%/10%/10% train/validation/test sets
valid_set_size_percentage = 10
test_set_size_percentage = 10
#display parent directory and working directory
print(os.path.dirname(os.getcwd())+':', os.listdir(os.path.dirname(os.getcwd())));
print(os.getcwd()+':', os.listdir(os.getcwd()));
###Output
/home/seyfullah/github-projects/syf_bindsnet: ['rl', 'guide_part_ii.py', '0', '1', 'guide_part_i.py', 'experiments', 'aifortrading.py', 'NY Stock Price Prediction RNN LSTM GRU', 'guide_part_i-2.py', 'LSTM']
/home/seyfullah/github-projects/syf_bindsnet/NY Stock Price Prediction RNN LSTM GRU: ['NY Stock Price Prediction RNN LSTM GRU.ipynb', 'NY Stock Price Prediction RNN LSTM GRU.zip', 'NY Stock Price Prediction RNN LSTM GRU', '.ipynb_checkpoints']
###Markdown
2. Analyze data - load stock prices from prices-split-adjusted.csv- analyze data
###Code
# import all stock prices
df = pd.read_csv("./NY Stock Price Prediction RNN LSTM GRU/prices-split-adjusted.csv", index_col = 0)
df.info()
df.head()
# number of different stocks
print('\nnumber of different stocks: ', len(list(set(df.symbol))))
print(list(set(df.symbol))[:10])
df.tail()
df.describe()
df.info()
plt.figure(figsize=(15, 5));
plt.subplot(1,2,1);
plt.plot(df[df.symbol == 'EQIX'].open.values, color='red', label='open')
plt.plot(df[df.symbol == 'EQIX'].close.values, color='green', label='close')
plt.plot(df[df.symbol == 'EQIX'].low.values, color='blue', label='low')
plt.plot(df[df.symbol == 'EQIX'].high.values, color='black', label='high')
plt.title('stock price')
plt.xlabel('time [days]')
plt.ylabel('price')
plt.legend(loc='best')
#plt.show()
plt.subplot(1,2,2);
plt.plot(df[df.symbol == 'EQIX'].volume.values, color='black', label='volume')
plt.title('stock volume')
plt.xlabel('time [days]')
plt.ylabel('volume')
plt.legend(loc='best');
###Output
_____no_output_____
###Markdown
3. Manipulate data - choose a specific stock- drop feature: volume- normalize stock data- create train, validation and test data sets
###Code
# function for min-max normalization of stock
def normalize_data(df):
min_max_scaler = sklearn.preprocessing.MinMaxScaler()
df['open'] = min_max_scaler.fit_transform(df.open.values.reshape(-1,1))
df['high'] = min_max_scaler.fit_transform(df.high.values.reshape(-1,1))
df['low'] = min_max_scaler.fit_transform(df.low.values.reshape(-1,1))
df['close'] = min_max_scaler.fit_transform(df['close'].values.reshape(-1,1))
return df
# function to create train, validation, test data given stock data and sequence length
def load_data(stock, seq_len):
    data_raw = stock.values # convert to numpy array
data = []
# create all possible sequences of length seq_len
for index in range(len(data_raw) - seq_len):
data.append(data_raw[index: index + seq_len])
data = np.array(data);
valid_set_size = int(np.round(valid_set_size_percentage/100*data.shape[0]));
test_set_size = int(np.round(test_set_size_percentage/100*data.shape[0]));
train_set_size = data.shape[0] - (valid_set_size + test_set_size);
x_train = data[:train_set_size,:-1,:]
y_train = data[:train_set_size,-1,:]
x_valid = data[train_set_size:train_set_size+valid_set_size,:-1,:]
y_valid = data[train_set_size:train_set_size+valid_set_size,-1,:]
x_test = data[train_set_size+valid_set_size:,:-1,:]
y_test = data[train_set_size+valid_set_size:,-1,:]
return [x_train, y_train, x_valid, y_valid, x_test, y_test]
# choose one stock
df_stock = df[df.symbol == 'EQIX'].copy()
df_stock.drop(['symbol'],1,inplace=True)
df_stock.drop(['volume'],1,inplace=True)
cols = list(df_stock.columns.values)
print('df_stock.columns.values = ', cols)
# normalize stock
df_stock_norm = df_stock.copy()
df_stock_norm = normalize_data(df_stock_norm)
# create train, test data
seq_len = 20 # choose sequence length
x_train, y_train, x_valid, y_valid, x_test, y_test = load_data(df_stock_norm, seq_len)
print('x_train.shape = ',x_train.shape)
print('y_train.shape = ', y_train.shape)
print('x_valid.shape = ',x_valid.shape)
print('y_valid.shape = ', y_valid.shape)
print('x_test.shape = ', x_test.shape)
print('y_test.shape = ',y_test.shape)
plt.figure(figsize=(15, 5));
plt.plot(df_stock_norm.open.values, color='red', label='open')
plt.plot(df_stock_norm.close.values, color='green', label='low')
plt.plot(df_stock_norm.low.values, color='blue', label='low')
plt.plot(df_stock_norm.high.values, color='black', label='high')
#plt.plot(df_stock_norm.volume.values, color='gray', label='volume')
plt.title('stock')
plt.xlabel('time [days]')
plt.ylabel('normalized price/volume')
plt.legend(loc='best')
plt.show()
###Output
_____no_output_____
###Markdown
4. Model and validate data - RNNs with basic, LSTM, GRU cells
###Code
## Basic Cell RNN in tensorflow
index_in_epoch = 0;
perm_array = np.arange(x_train.shape[0])
np.random.shuffle(perm_array)
# function to get the next batch
def get_next_batch(batch_size):
global index_in_epoch, x_train, perm_array
start = index_in_epoch
index_in_epoch += batch_size
if index_in_epoch > x_train.shape[0]:
np.random.shuffle(perm_array) # shuffle permutation array
start = 0 # start next epoch
index_in_epoch = batch_size
end = index_in_epoch
return x_train[perm_array[start:end]], y_train[perm_array[start:end]]
# parameters
n_steps = seq_len-1
n_inputs = 4
n_neurons = 200
n_outputs = 4
n_layers = 2
learning_rate = 0.001
batch_size = 50
n_epochs = 100
train_set_size = x_train.shape[0]
test_set_size = x_test.shape[0]
tf.reset_default_graph()
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_outputs])
# use Basic RNN Cell
layers = [tf.contrib.rnn.BasicRNNCell(num_units=n_neurons, activation=tf.nn.elu)
for layer in range(n_layers)]
# use Basic LSTM Cell
#layers = [tf.contrib.rnn.BasicLSTMCell(num_units=n_neurons, activation=tf.nn.elu)
# for layer in range(n_layers)]
# use LSTM Cell with peephole connections
#layers = [tf.contrib.rnn.LSTMCell(num_units=n_neurons,
# activation=tf.nn.leaky_relu, use_peepholes = True)
# for layer in range(n_layers)]
# use GRU cell
#layers = [tf.contrib.rnn.GRUCell(num_units=n_neurons, activation=tf.nn.leaky_relu)
# for layer in range(n_layers)]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(layers)
rnn_outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons])
stacked_outputs = tf.layers.dense(stacked_rnn_outputs, n_outputs)
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])
outputs = outputs[:,n_steps-1,:] # keep only last output of sequence
loss = tf.reduce_mean(tf.square(outputs - y)) # loss function = mean squared error
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
# run graph
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for iteration in range(int(n_epochs*train_set_size/batch_size)):
x_batch, y_batch = get_next_batch(batch_size) # fetch the next training batch
sess.run(training_op, feed_dict={X: x_batch, y: y_batch})
if iteration % int(5*train_set_size/batch_size) == 0:
mse_train = loss.eval(feed_dict={X: x_train, y: y_train})
mse_valid = loss.eval(feed_dict={X: x_valid, y: y_valid})
print('%.2f epochs: MSE train/valid = %.6f/%.6f'%(
iteration*batch_size/train_set_size, mse_train, mse_valid))
y_train_pred = sess.run(outputs, feed_dict={X: x_train})
y_valid_pred = sess.run(outputs, feed_dict={X: x_valid})
y_test_pred = sess.run(outputs, feed_dict={X: x_test})
###Output
_____no_output_____
###Markdown
5. Predictions
###Code
y_train.shape
ft = 0 # 0 = open, 1 = close, 2 = highest, 3 = lowest
## show predictions
plt.figure(figsize=(15, 5));
plt.subplot(1,2,1);
plt.plot(np.arange(y_train.shape[0]), y_train[:,ft], color='blue', label='train target')
plt.plot(np.arange(y_train.shape[0], y_train.shape[0]+y_valid.shape[0]), y_valid[:,ft],
color='gray', label='valid target')
plt.plot(np.arange(y_train.shape[0]+y_valid.shape[0],
y_train.shape[0]+y_test.shape[0]+y_test.shape[0]),
y_test[:,ft], color='black', label='test target')
plt.plot(np.arange(y_train_pred.shape[0]),y_train_pred[:,ft], color='red',
label='train prediction')
plt.plot(np.arange(y_train_pred.shape[0], y_train_pred.shape[0]+y_valid_pred.shape[0]),
y_valid_pred[:,ft], color='orange', label='valid prediction')
plt.plot(np.arange(y_train_pred.shape[0]+y_valid_pred.shape[0],
y_train_pred.shape[0]+y_valid_pred.shape[0]+y_test_pred.shape[0]),
y_test_pred[:,ft], color='green', label='test prediction')
plt.title('past and future stock prices')
plt.xlabel('time [days]')
plt.ylabel('normalized price')
plt.legend(loc='best');
plt.subplot(1,2,2);
plt.plot(np.arange(y_train.shape[0], y_train.shape[0]+y_test.shape[0]),
y_test[:,ft], color='black', label='test target')
plt.plot(np.arange(y_train_pred.shape[0], y_train_pred.shape[0]+y_test_pred.shape[0]),
y_test_pred[:,ft], color='green', label='test prediction')
plt.title('future stock prices')
plt.xlabel('time [days]')
plt.ylabel('normalized price')
plt.legend(loc='best');
corr_price_development_train = np.sum(np.equal(np.sign(y_train[:,1]-y_train[:,0]),
np.sign(y_train_pred[:,1]-y_train_pred[:,0])).astype(int)) / y_train.shape[0]
corr_price_development_valid = np.sum(np.equal(np.sign(y_valid[:,1]-y_valid[:,0]),
np.sign(y_valid_pred[:,1]-y_valid_pred[:,0])).astype(int)) / y_valid.shape[0]
corr_price_development_test = np.sum(np.equal(np.sign(y_test[:,1]-y_test[:,0]),
np.sign(y_test_pred[:,1]-y_test_pred[:,0])).astype(int)) / y_test.shape[0]
print('correct sign prediction for close - open price for train/valid/test: %.2f/%.2f/%.2f'%(
corr_price_development_train, corr_price_development_valid, corr_price_development_test))
###Output
_____no_output_____ |
pba.ipynb | ###Markdown
Visualize Population Based AugmentationRun all cells below to visualize augmentations on a sample image.0, 1, or 2 augmentations will be applied.Does not include additional horizontal flip, pad/crop, or Cutout that may be applied beforehand during training.
###Code
import PIL
import matplotlib.pyplot as plt
import numpy as np
import pba.augmentation_transforms_hp as augmentation_transforms_hp
from pba.utils import parse_log_schedule
from pba.data_utils import parse_policy
# Initialize CIFAR & SVHN policies.
cifar_policy = (parse_log_schedule('schedules/rcifar10_16_wrn.txt', 200), 'cifar10_4000')
svhn_policy = (parse_log_schedule('schedules/rsvhn_16_wrn.txt', 160), 'svhn_1000')
def parse_policy_hyperparams(policy_hyperparams):
"""We have two sets of hparams for each operation, which we need to split up."""
split = len(policy_hyperparams) // 2
policy = parse_policy(
policy_hyperparams[:split], augmentation_transforms_hp)
policy.extend(parse_policy(
policy_hyperparams[split:], augmentation_transforms_hp))
return policy
###Output
INFO:tensorflow:final len 200
INFO:tensorflow:final len 160
###Markdown
User input possible in cell belowDefaults to CIFAR policy and image of bird from CIFAR-10, at the final epoch of the schedule.You can change the image path (`image_path`), augmentation schedule (`cifar_policy` or `svhn_policy`), and which epoch to use within the schedule.
###Code
image_size = 32
image_path = 'figs/bird5.png'
# Choice of either cifar_policy or svhn_policy
policy, dset = cifar_policy
# Epoch number to use for policy. 200 epochs for CIFAR and 160 for SVHN.
epoch = 200
# Number of images to display
num_images = 10
# Load image
img = np.array(PIL.Image.open(image_path))
# Normalize Image
img = img / 255.
img = (img - augmentation_transforms_hp.MEANS[dset]) / augmentation_transforms_hp.STDS[dset]
print('Showing 10 example images at epoch {}.\n'.format(epoch))
for _ in range(num_images):
# Apply augmentations
print('Applied augmentations:')
img_aug = augmentation_transforms_hp.apply_policy(
policy=parse_policy_hyperparams(policy[epoch - 1]),
img=img,
aug_policy='cifar10',
dset=dset,
image_size=image_size,
verbose=True)
# Unnormalize Image
img_aug = (img_aug * augmentation_transforms_hp.STDS[dset]) + augmentation_transforms_hp.MEANS[dset]
plt.imshow(img_aug)
plt.show()
###Output
Showing 10 example images at epoch 200.
Applied augmentations:
('TranslateY', 0.6, 7)
('Contrast', 1.0, 7)
|
HWKS/assignment0_01_kNN/.ipynb_checkpoints/kNN_practice_0_01-checkpoint.ipynb | ###Markdown
k-Nearest Neighbor (kNN) implementation*Credits: this notebook is deeply based on Stanford CS231n course assignment 1. Source link: http://cs231n.github.io/assignments2019/assignment1/*The kNN classifier consists of two stages:- During training, the classifier takes the training data and simply remembers it- During testing, kNN classifies every test image by comparing to all training images and transferring the labels of the k most similar training examples- The value of k is cross-validatedIn this exercise you will implement these steps and understand the basic Image Classification pipeline and gain proficiency in writing efficient, vectorized code.We will work with the handwritten digits dataset. Images will be flattened (8x8 sized image -> 64 sized vector) and treated as vectors.
###Code
'''
If you are using Google Colab, uncomment the next line to download `k_nearest_neighbor.py`.
You can open and change it in Colab using the "Files" sidebar on the left.
'''
# !wget https://raw.githubusercontent.com/girafe-ai/ml-mipt/basic_s20/homeworks_basic/assignment0_01_kNN/k_nearest_neighbor.py
from sklearn import datasets
dataset = datasets.load_digits()
print(dataset.DESCR)
# First 100 images will be used for testing. This dataset is not sorted by the labels, so it's ok
# to do the split this way.
# Please be careful when you split your data into train and test in general.
test_border = 100
X_train, y_train = dataset.data[test_border:], dataset.target[test_border:]
X_test, y_test = dataset.data[:test_border], dataset.target[:test_border]
print('Training data shape: ', X_train.shape)
print('Training labels shape: ', y_train.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
num_test = X_test.shape[0]
# Run some setup code for this notebook.
import random
import numpy as np
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (14.0, 12.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = list(np.arange(10))
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].reshape((8, 8)).astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
classes
###Output
_____no_output_____
###Markdown
Autoreload is great stuff, but sometimes it does not work as intended. The code below aims to fix that. __Do not forget to save your changes in the `.py` file before reloading the `KNearestNeighbor` class.__
###Code
# This dirty hack might help if the autoreload has failed for some reason
try:
del KNearestNeighbor
except:
pass
from k_nearest_neighbor import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.fit(X_train, y_train)
X_train.shape
###Output
_____no_output_____
###Markdown
We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps: 1. First we must compute the distances between all test examples and all train examples. 2. Given these distances, for each test example we find the k nearest examples and have them vote for the labelLets begin with computing the distance matrix between all training and test examples. For example, if there are **Ntr** training examples and **Nte** test examples, this stage should result in a **Nte x Ntr** matrix where each element (i,j) is the distance between the i-th test and j-th train example.**Note: For the three distance computations that we require you to implement in this notebook, you may not use the np.linalg.norm() function that numpy provides.**First, open `k_nearest_neighbor.py` and implement the function `compute_distances_two_loops` that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.
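As a hedged illustration of those two steps, kept separate from the `k_nearest_neighbor.py` implementation you are asked to write, a fully vectorized distance computation and a simple majority vote could look like this:

```python
import numpy as np

def pairwise_l2(X_test, X_train):
    # (a - b)^2 = a^2 - 2ab + b^2, broadcast over all test/train pairs
    test_sq = np.sum(X_test ** 2, axis=1, keepdims=True)   # shape (Nte, 1)
    train_sq = np.sum(X_train ** 2, axis=1)                 # shape (Ntr,)
    cross = X_test @ X_train.T                              # shape (Nte, Ntr)
    return np.sqrt(np.maximum(test_sq - 2 * cross + train_sq, 0.0))

def vote(dists, y_train, k=1):
    # label of each test point = majority label among its k nearest train points
    nearest = np.argsort(dists, axis=1)[:, :k]
    return np.array([np.bincount(y_train[rows]).argmax() for rows in nearest])
```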
###Code
# Open k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print(dists.shape)
dists
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
###Output
_____no_output_____
###Markdown
**Inline Question 1** Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)- What in the data is the cause behind the distinctly bright rows?- What causes the columns?$\color{blue}{\textit Your Answer:}$The y-axis of the graph corresponds to the test points. The x-axis represents all the train points.- Extremely bright rows mean that one of the test points is very far from most of the train points. This class is probably not in the training data set, or is an outlier.- Extremely bright columns mean that one of the train points is very far from all of the test points.
###Code
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
###Output
Got 95 / 100 correct => accuracy: 0.950000
###Markdown
You should expect to see approximately `95%` accuracy. Now lets try out a larger `k`, say `k = 5`:
###Code
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
###Output
Got 93 / 100 correct => accuracy: 0.930000
###Markdown
Accuracy should slightly decrease with `k = 5` compared to `k = 1`. **Inline Question 2**We can also use other distance metrics such as L1 distance.For pixel values $p_{ij}^{(k)}$ at location $(i,j)$ of some image $I_k$, the mean $\mu$ across all pixels over all images is $$\mu=\frac{1}{nhw}\sum_{k=1}^n\sum_{i=1}^{h}\sum_{j=1}^{w}p_{ij}^{(k)}$$And the pixel-wise mean $\mu_{ij}$ across all images is $$\mu_{ij}=\frac{1}{n}\sum_{k=1}^np_{ij}^{(k)}.$$The general standard deviation $\sigma$ and pixel-wise standard deviation $\sigma_{ij}$ is defined similarly.Which of the following preprocessing steps will not change the performance of a Nearest Neighbor classifier that uses L1 distance? Select all that apply.1. Subtracting the mean $\mu$ ($\tilde{p}_{ij}^{(k)}=p_{ij}^{(k)}-\mu$.)2. Subtracting the per pixel mean $\mu_{ij}$ ($\tilde{p}_{ij}^{(k)}=p_{ij}^{(k)}-\mu_{ij}$.)3. Subtracting the mean $\mu$ and dividing by the standard deviation $\sigma$.4. Subtracting the pixel-wise mean $\mu_{ij}$ and dividing by the pixel-wise standard deviation $\sigma_{ij}$.5. Rotating the coordinate axes of the data.$\color{blue}{\textit Your Answer:}$1. True2. False3. True4. False5. True$\color{blue}{\textit Your Explanation:}$1. Subtracting the mean shifts all the points in the same direction by the same amount; it does not change distances in between those points2. Subtracting the per pixel mean moves some pixels in different directions3. Same as (1), but then also scaling. Scaling does not affect the distances between points because the distances are scaled with the same constant factor. In this case the Sd.4. Same as (2).5. L1 distance changes for every point after a coordinate transformation. Except when this is exactly 90 degrees)
###Code
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print('One loop difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print('No loop difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
###Output
No loop difference was: 0.000000
Good! The distance matrices are the same
###Markdown
Comparing handcrafted and `sklearn` implementationsIn this section we will just compare the performance of handcrafted and `sklearn` kNN algorithms. The predictions should be the same. No need to write any code in this section.
###Code
from sklearn import neighbors
implemented_knn = KNearestNeighbor()
implemented_knn.fit(X_train, y_train)
n_neighbors = 1
external_knn = neighbors.KNeighborsClassifier(n_neighbors=n_neighbors)
external_knn.fit(X_train, y_train)
print('sklearn kNN (k=1) implementation achieves: {} accuracy on the test set'.format(
external_knn.score(X_test, y_test)
))
y_predicted = implemented_knn.predict(X_test, k=n_neighbors).astype(int)
accuracy_score = sum((y_predicted==y_test).astype(float)) / num_test
print('Handcrafted kNN (k=1) implementation achieves: {} accuracy on the test set'.format(accuracy_score))
assert np.array_equal(
external_knn.predict(X_test),
y_predicted
), 'Labels predicted by handcrafted and sklearn kNN implementations are different!'
print('\nsklearn and handcrafted kNN implementations provide same predictions')
print('_'*76)
n_neighbors = 5
external_knn = neighbors.KNeighborsClassifier(n_neighbors=n_neighbors)
external_knn.fit(X_train, y_train)
print('sklearn kNN (k=5) implementation achieves: {} accuracy on the test set'.format(
external_knn.score(X_test, y_test)
))
y_predicted = implemented_knn.predict(X_test, k=n_neighbors).astype(int)
accuracy_score = sum((y_predicted==y_test).astype(float)) / num_test
print('Handcrafted kNN (k=5) implementation achieves: {} accuracy on the test set'.format(accuracy_score))
assert np.array_equal(
external_knn.predict(X_test),
y_predicted
), 'Labels predicted by handcrafted and sklearn kNN implementations are different!'
print('\nsklearn and handcrafted kNN implementations provide same predictions')
print('_'*76)
###Output
sklearn kNN (k=1) implementation achieves: 0.95 accuracy on the test set
Handcrafted kNN (k=1) implementation achieves: 0.95 accuracy on the test set
sklearn and handcrafted kNN implementations provide same predictions
____________________________________________________________________________
sklearn kNN (k=5) implementation achieves: 0.93 accuracy on the test set
Handcrafted kNN (k=5) implementation achieves: 0.93 accuracy on the test set
sklearn and handcrafted kNN implementations provide same predictions
____________________________________________________________________________
###Markdown
Measuring the timeFinally let's compare how fast the implementations are.To make the difference more noticeable, let's repeat the train and test objects (this serves no purpose other than computing the distance between more pairs).
###Code
X_train_big = np.vstack([X_train]*5)
X_test_big = np.vstack([X_test]*5)
y_train_big = np.hstack([y_train]*5)
y_test_big = np.hstack([y_test]*5)
classifier_big = KNearestNeighbor()
classifier_big.fit(X_train_big, y_train_big)
# Let's compare how fast the implementations are
def time_function(f, *args):
"""
Call a function f with args and return the time (in seconds) that it took to execute.
"""
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier_big.compute_distances_two_loops, X_test_big)
print('Two loop version took %f seconds' % two_loop_time)
one_loop_time = time_function(classifier_big.compute_distances_one_loop, X_test_big)
print('One loop version took %f seconds' % one_loop_time)
no_loop_time = time_function(classifier_big.compute_distances_no_loops, X_test_big)
print('No loop version took %f seconds' % no_loop_time)
# You should see significantly faster performance with the fully vectorized implementation!
# NOTE: depending on what machine you're using,
# you might not see a speedup when you go from two loops to one loop,
# and might even see a slow-down.
###Output
Two loop version took 25.361200 seconds
One loop version took 0.365414 seconds
No loop version took 0.033853 seconds
|
pivot_table and crosstab.ipynb | ###Markdown
pivot_tableA pivot table is a data summarization tool frequently found in spreadsheet programs and other data analysis software. It aggregates a table of data by one or more keys, arranging the data in a rectangle with some of the group keys along the rows and some along the columns. Pivot tables in Python with pandas are made possible through the groupby facility combined with reshape operations utilizing hierarchical indexing. DataFrame has a pivot_table method, and there is also a top-level pandas.pivot_table function. In addition to providing a convenience interface to groupby, pivot_table can add partial totals, also known as margins. Using the tipping dataset, suppose you wanted to compute a table of group means (the default pivot_table aggregation type) arranged by day and smoker on the rows:
###Code
import pandas as pd
import numpy as np
tips = pd.read_csv('tips.csv')
tips
tips.pivot_table(index=['day', 'smoker'])
###Output
_____no_output_____
###Markdown
Adding a column named tip_pct containing the tip as a percentage of the total bill
###Code
tips['tip_pct']=tips['tip']*100/tips['total_bill']
tips[:6]
###Output
_____no_output_____
###Markdown
Now, suppose we want to aggregate only tip_pct and size, and additionally group by time. I’ll put smoker in the table columns and day in the rows:
###Code
tips.pivot_table(['tip_pct', 'size'], index=['time', 'day'],
columns='smoker')
###Output
_____no_output_____
###Markdown
Passing margins=True has the effect of adding All row and column labels, with the corresponding values being the group statistics for all the data within a single tier:
###Code
tips.pivot_table(['tip_pct', 'size'], index=['time', 'day'],
columns='smoker', margins=True)
###Output
_____no_output_____
###Markdown
To use a different aggregation function, pass it to aggfunc. For example, 'count' or len will give you a cross-tabulation (count or frequency) of group sizes:
###Code
tips.pivot_table('tip_pct', index=['time', 'smoker'], columns='day',
aggfunc=len, margins=True)
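# The string alias 'count' mentioned above should give the same table here (tip_pct has no missing values), e.g.:
# tips.pivot_table('tip_pct', index=['time', 'smoker'], columns='day', aggfunc='count', margins=True)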
###Output
_____no_output_____
###Markdown
If some combinations are empty (or otherwise NA), you may wish to pass a fill_value:
###Code
tips.pivot_table('tip_pct', index=['time', 'size', 'smoker'],
columns='day', aggfunc='mean', fill_value=0)
###Output
_____no_output_____
###Markdown
pivot_table options Cross-tabulation: crosstab
###Code
from io import StringIO
data = """\
Sample Nationality Handedness
1 USA Right-handed
2 Japan Left-handed
3 USA Right-handed
4 Japan Right-handed
5 Japan Left-handed
6 Japan Right-handed
7 USA Right-handed
8 USA Left-handed
9 Japan Right-handed
10 USA Right-handed"""
data = pd.read_table(StringIO(data), sep='\s+')
data
###Output
_____no_output_____
###Markdown
As part of some survey analysis, we might want to summarize this data by nationality and handedness. You could use pivot_table to do this, but the pandas.crosstab function can be more convenient:
###Code
pd.crosstab(data.Nationality, data.Handedness, margins=True)
###Output
_____no_output_____
###Markdown
The first two arguments to crosstab can each be an array, a Series, or a list of arrays. As with the tips data:
###Code
pd.crosstab([tips.time, tips.day], tips.smoker, margins=True)
###Output
_____no_output_____ |
tensorflow-tflite_converter.ipynb | ###Markdown
Tensorflow Lite Model Converter> Converts a SavedModel into Tensorflow Lite format. For details, see [Tensorflow Lite Converter](https://www.tensorflow.org/lite/convert)
###Code
# export
def convert_model(saved_model_dir):
"""
Convert a SavedModel into Tensorflow Lite Format.
`saved_model_dir`: the path to the SavedModel directory
returns: the converted Tensorflow Lite model
"""
logger.info('Converting SavedModel from: {}'.format(saved_model_dir))
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir) # path to the SavedModel directory
tflite_model = converter.convert()
return tflite_model
# export
def save_model(tflite_model, output_file):
"""
    Save a Tensorflow Lite model to disk.
`tflite_model`: the Tensorflow Lite model
`output_file`: the path and filename to save the Tensorflow Lite model
"""
with open(output_file, 'wb') as f:
f.write(tflite_model)
    logger.info('Successfully saved model to file: {}'.format(output_file))
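# Usage sketch (the paths below are hypothetical, shown for illustration only):
# tflite_model = convert_model('path/to/exported/saved_model')
# save_model(tflite_model, 'path/to/exported/model.tflite')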
###Output
_____no_output_____
###Markdown
Helper Methods
###Code
# export
def read_pipeline_config(pipeline_config_path):
"""
Reads the pipeline config file.
`pipeline_config_path`: The path to the pipeline config file.
"""
pipeline_config = {}
with tf.io.gfile.GFile(pipeline_config_path, 'r') as f:
text_format.Parse(f.read(), pipeline_config)
return pipeline_config
# export
def configure_logging(logging_level=logging.INFO):
"""
Configures logging for the system.
`logging_level`: The logging level to use.
"""
logger.setLevel(logging_level)
handler = logging.StreamHandler(sys.stdout)
handler.setLevel(logging_level)
logger.addHandler(handler)
###Output
_____no_output_____
###Markdown
Run from command line To run from the command line, use the following command:`python -m mlcore.tensorflow.tflite_converter [parameters]` The following parameters are supported:- `--source`: The path to the folder containing the SavedModel. (e.g.: *datasets/image_object_detection/car_damage/saved_model*)- `--categories`: The categories file to add to the Tensorflow Lite model. (e.g.: *datasets/image_object_detection/car_damage/categories.txt*)- `--name`: The name of the model. (e.g.: *"SSD MobileNetV2"*)- `--version`: The version of the model, defaults to *1* (=v1)- `--type`: The type of the model, if not explicitly set try to infer from categories file path.- `--output`: The folder to store the Tensorflow Lite model. (e.g.: *datasets/image_object_detection/car_damage/tflite*)
###Code
# export
if __name__ == '__main__' and '__file__' in globals():
configure_logging()
parser = argparse.ArgumentParser()
parser.add_argument("-s",
"--source",
help="The path to the folder containing the SavedModel.",
type=str)
parser.add_argument("-c",
"--categories",
help="The categories file to add to the Tensorflow Lite model.",
type=str)
parser.add_argument("-n",
"--name",
help="The name of the model.",
type=str)
parser.add_argument("-v",
"--version",
help="The version of the model.",
type=int,
default=1)
parser.add_argument("-t",
"--type",
help="The type of the model, if not explicitly set try to infer from categories file path.",
choices=list(DatasetType),
type=DatasetType,
default=None)
parser.add_argument("-o",
"--output",
help="The folder to store the Tensorflow Lite model.",
type=str)
args = parser.parse_args()
model_type = args.type
# try to infer the model type if not explicitly set
if model_type is None:
try:
model_type = infer_dataset_type(args.categories)
except ValueError as e:
logger.error(e)
sys.exit(1)
output_file = join(args.output, TFLITE_MODEL_DEFAULT_NAME)
save_model(convert_model(args.source), output_file)
model_meta = create_metadata(args.source, args.categories, model_type, args.name, args.version)
write_metadata(model_meta, output_file, args.categories)
logger.info('FINISHED!!!')
# hide
# for generating scripts from notebook directly
from nbdev.export import notebook2script
notebook2script()
###Output
Converted annotation-core.ipynb.
Converted annotation-folder_category_adapter.ipynb.
Converted annotation-multi_category_adapter.ipynb.
Converted annotation-via_adapter.ipynb.
Converted annotation-yolo_adapter.ipynb.
Converted annotation_converter.ipynb.
Converted annotation_viewer.ipynb.
Converted category_tools.ipynb.
Converted core.ipynb.
Converted dataset-core.ipynb.
Converted dataset-image_classification.ipynb.
Converted dataset-image_object_detection.ipynb.
Converted dataset-image_segmentation.ipynb.
Converted dataset-type.ipynb.
Converted dataset_generator.ipynb.
Converted evaluation-core.ipynb.
Converted geometry.ipynb.
Converted image-color_palette.ipynb.
Converted image-inference.ipynb.
Converted image-opencv_tools.ipynb.
Converted image-pillow_tools.ipynb.
Converted image-tools.ipynb.
Converted index.ipynb.
Converted io-core.ipynb.
Converted tensorflow-tflite_converter.ipynb.
Converted tensorflow-tflite_metadata.ipynb.
Converted tensorflow-tfrecord_builder.ipynb.
Converted tools-check_double_images.ipynb.
Converted tools-downloader.ipynb.
Converted tools-image_size_calculator.ipynb.
|
Chicago_predictions_combo_comparisons.ipynb | ###Markdown
Day of week analysis for each month of each block id
###Code
# start month = 3, end_month = 2 (months are 0-indexed)
# X: 4/2017 -> 3/2019 actual date
# y: 4/2019 -> 3/2020 actual date
#
X_test_start_month = 0
X_test_end_month = 0
X_test_start_year = 2016
X_test_end_year = 2018
TRAIN_NUM_BLOCKIDS = TEST_NUM_BLOCKIDS = 801
TRAIN_BLOCKIDS = random.sample(list(range(1,802)), k=TRAIN_NUM_BLOCKIDS)
train_blockid_dict = {}
for ind, blockid in enumerate(TRAIN_BLOCKIDS ):
train_blockid_dict[blockid] = ind
TEST_BLOCKIDS = random.sample(list(range(1,802)), k=TEST_NUM_BLOCKIDS)
test_blockid_dict = {}
for ind, blockid in enumerate(TEST_BLOCKIDS ):
test_blockid_dict[blockid] = ind
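# Because k equals the number of available block ids, random.sample() above simply returns a
# random permutation of all 801 ids, so both dictionaries map every block id to a shuffled position.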
def plot_output(y, y_pred, dataset_type, x_label, y_label):
fig = plt.figure(figsize=(10, 8))
plt.plot(np.arange(len(y_pred.flatten())),
y_pred.flatten(), color='red');
plt.plot(np.arange(len(y.flatten())),
y.flatten(), color='blue');
plt.xlabel(x_label, fontsize=16)
plt.ylabel(y_label, fontsize=18)
plt.title(dataset_type + ' dataset', fontsize=18)
if use_counts == True:
        plt.legend(labels=['predicted count', 'count'], prop={'size': 20})
else:
        plt.legend(labels=['predicted risk', 'risk'], prop={'size': 20})
plt.show()
from sklearn.ensemble import RandomForestRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.metrics import mean_squared_error
random.seed(101)
def get_predictions(X_train, y_train, X_test, y_test,
x_label, y_label, model, do_gridsearch=False):
def print_data_info(data, data_name):
flat = data.flatten()
print('Number of data points:', len(flat))
print('Number of non-zero elements:', len(flat[flat > 0.0]))
print('Percentage of non-zero elements:', len(flat[flat > 0.0])/len(flat))
if use_counts == True:
pd.Series(flat).hist();
else:
pd.Series(flat).hist(bins=[0.25, 0.5, 1.0, 1.5, 2.5, 5.0, 10, 15, 20]);
plt.title(f'Histogram of {data_name}')
plt.show()
print_data_info(y_test, 'y_test')
print('Correlation between y_train and y_test:\n',
np.corrcoef(y_train.flatten(), y_test.flatten()))
X_train = X_train.reshape((TRAIN_NUM_BLOCKIDS, X_train.shape[1] * X_train.shape[2]))
y_train = y_train.reshape((TRAIN_NUM_BLOCKIDS, y_train.shape[1] * y_train.shape[2]))
X_test = X_test.reshape((TEST_NUM_BLOCKIDS, X_test.shape[1] * X_test.shape[2]))
y_test = y_test.reshape((TEST_NUM_BLOCKIDS, y_test.shape[1] * y_test.shape[2]))
print('y_test shape after reshaping:', y_test.shape)
if do_gridsearch == True:
# For regressors:
param_grid = { # param_grid values not working - have to debug --- TODO ---
'estimator__n_estimators': [80, 100, 120],
'estimator__max_depth': [2, 3, 4, 5, 6],
}
# For classifiers:
# param_grid = {
# 'estimator__n_estimators': [80, 100, 120],
# 'estimator__max_depth': [2, 3, 4, 5, 6, 7, 8],
# }
gridsearch = GridSearchCV(model,
param_grid=param_grid,
scoring='neg_mean_squared_error',
cv=3, n_jobs=-1,
return_train_score=True, verbose=10)
model = gridsearch
model.fit(X_train, y_train)
best_training_score = model.score(X_train, y_train)
best_testing_score = model.score(X_test, y_test)
print(f' Best training score:', -best_training_score)
print(f' Best testing score: ', -best_testing_score)
if do_gridsearch == True:
best_model_params = model.cv_results_['params'][model.best_index_]
print('Best Grid Search model:', best_model_params)
y_pred = model.predict(X_test)
print('mean_squared_error:', mean_squared_error(y_test, y_pred))
plot_output(y_test, y_pred, 'Testing', x_label, y_label)
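    # The helper below actually returns 1 minus the normalized absolute difference, i.e. a
    # similarity score in [0, 1] where 1 means y_true equals y_pred (NaN when both are zero).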
def relative_percent_difference(y_true, y_pred):
return 1 - np.absolute((y_true - y_pred) / (np.absolute(y_true) + np.absolute(y_pred)))
return y_test, y_pred, relative_percent_difference(y_test, y_pred), model
###Output
_____no_output_____
###Markdown
Compare two different blocks of data
###Code
X_train_dow, X_test_dow, y_train_dow, y_test_dow = \
ready_data(2015, 2017, train_blockid_dict, # training (2015 2016) 2017
X_test_start_year, X_test_end_year, test_blockid_dict, # testing (2016 2017) 2018
DAY_OF_WEEK)
train_blockid_dict[1], test_blockid_dict[1]
def plot_block(blockid):
train_blockid = train_blockid_dict[blockid]
test_blockid = test_blockid_dict[blockid]
y1 = y_train_dow[train_blockid].flatten() # blockid, month, dow = Jan (M,Tu,W,...,Sun), Feb (M,Tu,W,...,Sun), ...
y1_mean = np.mean(y1)
train = pd.concat([pd.Series(X_train_dow[train_blockid].flatten()), pd.Series(y1)])
train_mean = np.mean(train)
y2 = y_test_dow[test_blockid].flatten()
y2_mean = np.mean(y2)
y_p = y_pred_dow[test_blockid].flatten()
y_p_mean = np.mean(y_p)
# plt.plot(np.arange(len(y1)), y1, color='red') # 2017
plt.plot(np.arange(len(y2)), y2, color='blue') # 2018
plt.plot(np.arange(len(y_p)), y_p, color='green')
plt.show()
print('train mean:', train_mean, '2017 mean:', y1_mean, '\n2018 mean:', y2_mean, \
'y_pred_mean:', y_p_mean)
[plot_block(i) for i in range(1, 6)]
###Output
_____no_output_____
###Markdown
Day of week analysis for each month of each block id
###Code
%%time
X_train_dow, X_test_dow, y_train_dow, y_test_dow = \
ready_data(2015, 2017, train_blockid_dict,
X_test_start_year, X_test_end_year, test_blockid_dict,
DAY_OF_WEEK)
print(X_train_dow.shape, y_train_dow.shape, X_test_dow.shape, y_test_dow.shape)
model = MultiOutputRegressor(RandomForestRegressor(max_depth=5, n_estimators=100))
if use_counts == True:
y_test_dow, y_pred_dow, rpd_dow, model_dow = \
get_predictions(X_train_dow, y_train_dow, X_test_dow, y_test_dow,
'day of week for each month',
f'crime count / population {SEVERITY_OPERATOR} {SEVERITY_SCALING_FACTOR}',
model, do_gridsearch=do_gridsearch)
else:
y_test_dow, y_pred_dow, rpd_dow, model_dow = \
get_predictions(X_train_dow, y_train_dow, X_test_dow, y_test_dow,
'day of week for each month',
f'risk {SEVERITY_OPERATOR} {SEVERITY_SCALING_FACTOR}',
model, do_gridsearch=do_gridsearch)
plot_output(y_test_dow.flatten()[:100], y_pred_dow.flatten()[:100], 'Test', 'day of week for each month', 'crime count per million')
###Output
_____no_output_____
###Markdown
Day of month analysis for each month of each block id
###Code
# %%time
# X_train_dom, X_test_dom, y_train_dom, y_test_dom = \
# ready_data(2015, 2017, train_blockid_dict,
# X_test_start_year, X_test_end_year, test_blockid_dict,
# DAY_OF_MONTH)
# print(X_train_dom.shape, y_train_dom.shape, X_test_dom.shape, y_test_dom.shape)
# model = MultiOutputRegressor(RandomForestRegressor(max_depth=4, n_estimators=120))
# if use_counts == True:
# y_test_dom, y_pred_dom, rpd_dom, model_dom = \
# get_predictions(X_train_dom, y_train_dom, X_test_dom, y_test_dom,
# 'day of month for each month',
# f'crime count / population {SEVERITY_OPERATOR} {SEVERITY_SCALING_FACTOR}',
# model, do_gridsearch=do_gridsearch)
# else:
# y_test_dom, y_pred_dom, rpd_dom, model_dom = \
# get_predictions(X_train_dom, y_train_dom, X_test_dom, y_test_dom,
# 'day of month for each month',
# f'risk {SEVERITY_OPERATOR} {SEVERITY_SCALING_FACTOR}',
# model, do_gridsearch=do_gridsearch)
###Output
_____no_output_____
###Markdown
Hour of day analysis for each month of each block id
###Code
%%time
X_train_hod, X_test_hod, y_train_hod, y_test_hod = \
ready_data(2015, 2017, train_blockid_dict,
X_test_start_year, X_test_end_year, test_blockid_dict,
HOUR_OF_DAY)
print(X_train_hod.shape, y_train_hod.shape, X_test_hod.shape, y_test_hod.shape)
model = MultiOutputRegressor(RandomForestRegressor(max_depth=4, n_estimators=120))
if use_counts == True:
y_test_hod, y_pred_hod, rpd_hod, model_hod = \
get_predictions(X_train_hod, y_train_hod, X_test_hod, y_test_hod,
'hour of day for each month',
f'crime count / population {SEVERITY_OPERATOR} {SEVERITY_SCALING_FACTOR}',
model, do_gridsearch=do_gridsearch)
else:
y_test_hod, y_pred_hod, rpd_hod, model_hod = \
get_predictions(X_train_hod, y_train_hod, X_test_hod, y_test_hod,
'hour of day for each month',
f'risk {SEVERITY_OPERATOR} {SEVERITY_SCALING_FACTOR}',
model, do_gridsearch=do_gridsearch)
###Output
(801, 24, 24) (801, 12, 24) (801, 24, 24) (801, 12, 24)
Number of data points: 230688
Number of non-zero elements: 120160
Percentage of non-zero elements: 0.5208766819253711
###Markdown
Weigh and combine predictions into one array
###Code
NUM_BLOCKIDS = 801
NUM_MONTHS_IN_YEAR = 12
NUM_DAYS_IN_WEEK = 7
NUM_HOURS_IN_DAY = 24
risks = np.zeros((NUM_BLOCKIDS, NUM_MONTHS_IN_YEAR, NUM_DAYS_IN_WEEK * NUM_HOURS_IN_DAY))
y_test_dow_times_hour = np.zeros((NUM_BLOCKIDS, NUM_MONTHS_IN_YEAR, NUM_DAYS_IN_WEEK * NUM_HOURS_IN_DAY))
# Returns number of days in a month
def days_in_month(year, month):
p = pd.Period(f'{year}-{month}-1')
return p.days_in_month
# Day of week returns 0-based day value
def day_of_week(dt):
return dt.weekday()
end_year = X_test_end_year
for blockid in range(NUM_BLOCKIDS):
for month in range(1, NUM_MONTHS_IN_YEAR + 1):
for dow in range(7):
for hour in range(24):
weight_dow = 7
weight_hod = 24
weight_sum = weight_dow + weight_hod
risks[blockid, month-1, dow * 24 + hour] += \
(y_pred_dow[blockid, (month - 1)*7+dow] * weight_dow +
y_pred_hod[blockid, (month - 1)*24+hour] * weight_hod) / weight_sum
y_test_dow_times_hour[blockid, month-1, dow * 24 + hour] += \
(y_test_dow[blockid, (month - 1)*7+dow] * weight_dow +
y_test_hod[blockid, (month - 1)*24+hour] * weight_hod) / weight_sum
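# --- Added sketch (optional): the same 7/24-weighted blend of the day-of-week and hour-of-day
# --- predictions via numpy broadcasting, avoiding the four nested Python loops. It assumes the
# --- month-major layouts used above, i.e. y_pred_dow is (801, 12*7) and y_pred_hod is (801, 12*24);
# --- the underscore-prefixed names are hypothetical additions.
_w_dow, _w_hod = 7, 24
_dow_part = y_pred_dow.reshape(NUM_BLOCKIDS, NUM_MONTHS_IN_YEAR, NUM_DAYS_IN_WEEK, 1)
_hod_part = y_pred_hod.reshape(NUM_BLOCKIDS, NUM_MONTHS_IN_YEAR, 1, NUM_HOURS_IN_DAY)
_risks_vectorized = ((_dow_part * _w_dow + _hod_part * _w_hod) / (_w_dow + _w_hod)).reshape(
    NUM_BLOCKIDS, NUM_MONTHS_IN_YEAR, NUM_DAYS_IN_WEEK * NUM_HOURS_IN_DAY)
# np.allclose(_risks_vectorized, risks)  # sanity check; should hold at this point (before descaling)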
risks_descaled = descale_data(risks)
risks_descaled = np.nan_to_num(risks_descaled)
risks = risks_descaled.copy()
y_test_dow_times_hour = descale_data(y_test_dow_times_hour)
y_test_dow_times_hour = np.nan_to_num(y_test_dow_times_hour)
y = y_test_dow_times_hour.flatten()
r = risks.flatten()
print('Number of zeros in y_test_dow_times_hour:', len(y[y == 0.0]), 'out of:', len(y))
print('Number of zeros in risks:', len(r[r == 0.0]), 'out of:', len(r))
def plot_y_vs_ypred(y, y_pred):
fig = plt.figure(figsize=(10, 8))
plt.plot(np.arange(len(y.flatten())),
y.flatten(), color='blue');
plt.plot(np.arange(len(y_pred.flatten())),
y_pred.flatten(), color='red');
plt.xlabel('dow * hour', fontsize=16)
plt.ylabel('crime count / population * 1000', fontsize=18)
plt.title('Test dataset', fontsize=18)
if use_counts == True:
plt.legend(labels=['count', 'predicted count'], prop={'size': 20})
else:
plt.legend(labels=['risk', 'predicted risk'], prop={'size': 20})
plt.show()
plot_y_vs_ypred(y_test_dow_times_hour, risks)
plot_y_vs_ypred(y_test_dow_times_hour[0][0], risks[0][0])
###Output
_____no_output_____
###Markdown
Save data to file
###Code
import pickle
old_model_objs_to_pkl = [risks, test_blockid_dict]
with open("old_data.pkl", "wb") as f:
pickle.dump(risks, f)
pickle.dump(test_blockid_dict, f)
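# When reading this file back, call pickle.load() in the same order as the dumps above:
# first `risks`, then `test_blockid_dict` (as done in the "Load data from file" cell below).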
###Output
_____no_output_____
###Markdown
Create predictions
###Code
X_test_start_year = 2017
X_test_end_year = 2019
X_train_dow, X_test_dow, y_train_dow, y_test_dow = \
ready_data(2016, 2018, train_blockid_dict,
X_test_start_year, X_test_end_year, test_blockid_dict,
DAY_OF_WEEK)
X_train_dom, X_test_dom, y_train_dom, y_test_dom = \
ready_data(2016, 2018, train_blockid_dict,
X_test_start_year, X_test_end_year, test_blockid_dict,
DAY_OF_MONTH)
X_train_hod, X_test_hod, y_train_hod, y_test_hod = \
ready_data(2016, 2018, train_blockid_dict,
X_test_start_year, X_test_end_year, test_blockid_dict,
HOUR_OF_DAY)
y_pred_dow = model_dow.predict(X_test_dow.reshape((NUM_BLOCKIDS, X_test_dow.shape[1] * X_test_dow.shape[2])))
y_pred_hod = model_hod.predict(X_test_hod.reshape((NUM_BLOCKIDS, X_test_hod.shape[1] * X_test_hod.shape[2])))
y_test_dow = y_test_dow.reshape((TEST_NUM_BLOCKIDS, y_test_dow.shape[1] * y_test_dow.shape[2]))
y_test_hod = y_test_hod.reshape((TEST_NUM_BLOCKIDS, y_test_hod.shape[1] * y_test_hod.shape[2]))
NUM_BLOCKIDS = 801
NUM_MONTHS_IN_YEAR = 12
NUM_DAYS_IN_WEEK = 7
NUM_HOURS_IN_DAY = 24
risks = np.zeros((NUM_BLOCKIDS, NUM_MONTHS_IN_YEAR, NUM_DAYS_IN_WEEK * NUM_HOURS_IN_DAY))
y_test_dow_times_hour = np.zeros((NUM_BLOCKIDS, NUM_MONTHS_IN_YEAR, NUM_DAYS_IN_WEEK * NUM_HOURS_IN_DAY))
# Returns number of days in a month
def days_in_month(year, month):
p = pd.Period(f'{year}-{month}-1')
return p.days_in_month
# Day of week returns 0-based day value
def day_of_week(dt):
return dt.weekday()
end_year = X_test_end_year
for blockid in range(NUM_BLOCKIDS):
for month in range(1, NUM_MONTHS_IN_YEAR + 1):
for dow in range(7):
for hour in range(24):
weight_dow = 7
weight_hod = 24
weight_sum = weight_dow + weight_hod
risks[blockid, month-1, dow * 24 + hour] += \
(y_pred_dow[blockid, (month - 1)*7+dow] * weight_dow +
y_pred_hod[blockid, (month - 1)*24+hour] * weight_hod) / weight_sum
y_test_dow_times_hour[blockid, month-1, dow * 24 + hour] += \
(y_test_dow[blockid, (month - 1)*7+dow] * weight_dow +
y_test_hod[blockid, (month - 1)*24+hour] * weight_hod) / weight_sum
risks_descaled = descale_data(risks)
risks_descaled = np.nan_to_num(risks_descaled)
risks = risks_descaled.copy()
y_test_dow_times_hour = descale_data(y_test_dow_times_hour)
y_test_dow_times_hour = np.nan_to_num(y_test_dow_times_hour)
###Output
_____no_output_____
###Markdown
Store predictions in DB
###Code
from decouple import config
pred_blockid_dict = test_blockid_dict
def store_predictions_in_db(y_pred):
DB_URI_WRITE = config('DB_URI_WRITE')
# Put predictions into pandas DataFrame with corresponding block id
predictions = pd.DataFrame([[x] for x in pred_blockid_dict.keys()], columns=["id"])
predictions.loc[:, "prediction"] = predictions["id"].apply(lambda x: y_pred[pred_blockid_dict[x],:,:].astype(np.float64).tobytes().hex())
predictions.loc[:, "month"] = 0
predictions.loc[:, "year"] = 2019
predictions.to_csv("predictions.csv", index=False)
# Query SQL
query_commit_predictions = """
CREATE TEMPORARY TABLE temp_predictions (
id SERIAL PRIMARY KEY,
prediction TEXT,
month INTEGER,
year INTEGER
);
COPY temp_predictions (id, prediction, month, year) FROM STDIN DELIMITER ',' CSV HEADER;
UPDATE block
SET
prediction = DECODE(temp_predictions.prediction, 'hex'),
month = temp_predictions.month,
year = temp_predictions.year
FROM temp_predictions
WHERE block.id = temp_predictions.id;
DROP TABLE temp_predictions;
"""
# Open saved predictions and send to database using above query
with open("predictions.csv", "r") as f:
print("SENDING TO DB")
RAW_CONN = create_engine(DB_URI_WRITE).raw_connection()
cursor = RAW_CONN.cursor()
cursor.copy_expert(query_commit_predictions, f)
RAW_CONN.commit()
RAW_CONN.close()
for r in SESSION.execute("SELECT ENCODE(prediction::BYTEA, 'hex'), id FROM block WHERE prediction IS NOT NULL LIMIT 5;").fetchall():
print(np.frombuffer(bytes.fromhex(r[0]), dtype=np.float64).reshape((12,7,24)))
print(y_pred[pred_blockid_dict[int(r[1])], :].reshape((12,7,24)))
with session_scope() as SESSION:
store_predictions_in_db(risks)
###Output
SENDING TO DB
[[[ 3.62226775 4.29141527 4.99341643 ... 18.65912714 19.76846219
19.71990031]
[ 3.48902621 4.15817373 4.86017489 ... 18.5258856 19.63522065
19.58665876]
[ 3.44368941 4.11283693 4.81483809 ... 18.48054881 19.58988385
19.54132197]
...
[ 3.72530414 4.39445167 5.09645283 ... 18.76216354 19.87149859
19.8229367 ]
[ 3.6650971 4.33424463 5.03624579 ... 18.7019565 19.81129155
19.76272966]
[ 3.55497764 4.22412516 4.92612633 ... 18.59183704 19.70117209
19.6526102 ]]
[[ 3.54364996 4.09162981 4.68260826 ... 17.90739264 18.94988511
19.52136698]
[ 3.24526502 3.79324488 4.38422333 ... 17.60900771 18.65150018
19.22298204]
[ 3.47677822 4.02475808 4.61573653 ... 17.84052091 18.88301337
19.45449524]
...
[ 3.26190582 3.80988568 4.40086413 ... 17.62564851 18.66814097
19.23962284]
[ 3.31992703 3.86790688 4.45888533 ... 17.68366971 18.72616218
19.29764404]
[ 3.61764469 4.16562455 4.756603 ... 17.98138738 19.02387985
19.59536171]]
[[ 3.72496658 4.31471053 4.91321745 ... 19.02188491 19.53100906
19.36907928]
[ 3.4719046 4.06164855 4.66015547 ... 18.76882294 19.27794709
19.1160173 ]
[ 3.53337759 4.12312155 4.72162847 ... 18.83029593 19.33942008
19.1774903 ]
...
[ 3.58954227 4.17928622 4.77779314 ... 18.88646061 19.39558476
19.23365497]
[ 3.42981132 4.01955528 4.6180622 ... 18.72672966 19.23585381
19.07392403]
[ 3.55494787 4.14469182 4.74319874 ... 18.85186621 19.36099036
19.19906058]]
...
[[ 3.47282897 4.1201407 4.71222005 ... 18.76036277 19.82987611
19.82978825]
[ 3.75071815 4.39802989 4.99010924 ... 19.03825196 20.10776529
20.10767744]
[ 3.63353221 4.28084394 4.87292329 ... 18.92106601 19.99057935
19.99049149]
...
[ 3.80752995 4.45484169 5.04692103 ... 19.09506376 20.16457709
20.16448923]
[ 3.69850744 4.34581918 4.93789853 ... 18.98604125 20.05555458
20.05546673]
[ 3.54380552 4.19111726 4.78319661 ... 18.83133933 19.90085266
19.9007648 ]]
[[ 3.31854055 3.92801438 4.45727301 ... 18.28841996 19.16752258
19.17617787]
[ 3.66278758 4.2722614 4.80152004 ... 18.63266699 19.51176961
19.5204249 ]
[ 3.66540822 4.27488204 4.80414067 ... 18.63528762 19.51439024
19.52304553]
...
[ 3.47868867 4.08816249 4.61742112 ... 18.44856808 19.3276707
19.33632599]
[ 3.40795192 4.01742575 4.54668438 ... 18.37783133 19.25693395
19.26558924]
[ 3.62582826 4.23530208 4.76456071 ... 18.59570766 19.47481028
19.48346557]]
[[ 3.28655525 3.89916418 4.38915978 ... 18.06270713 18.8524392
19.0097691 ]
[ 3.56468602 4.17729495 4.66729055 ... 18.34083789 19.13056997
19.28789987]
[ 3.50913761 4.12174653 4.61174213 ... 18.28528948 19.07502155
19.23235146]
...
[ 3.47367935 4.08628827 4.57628387 ... 18.24983122 19.0395633
19.1968932 ]
[ 3.18837562 3.80098455 4.29098015 ... 17.96452749 18.75425957
18.91158947]
[ 3.39143035 4.00403927 4.49403487 ... 18.16758222 18.9573143
19.1146442 ]]]
[[[ 3.62226775 4.29141527 4.99341643 ... 18.65912714 19.76846219
19.71990031]
[ 3.48902621 4.15817373 4.86017489 ... 18.5258856 19.63522065
19.58665876]
[ 3.44368941 4.11283693 4.81483809 ... 18.48054881 19.58988385
19.54132197]
...
[ 3.72530414 4.39445167 5.09645283 ... 18.76216354 19.87149859
19.8229367 ]
[ 3.6650971 4.33424463 5.03624579 ... 18.7019565 19.81129155
19.76272966]
[ 3.55497764 4.22412516 4.92612633 ... 18.59183704 19.70117209
19.6526102 ]]
[[ 3.54364996 4.09162981 4.68260826 ... 17.90739264 18.94988511
19.52136698]
[ 3.24526502 3.79324488 4.38422333 ... 17.60900771 18.65150018
19.22298204]
[ 3.47677822 4.02475808 4.61573653 ... 17.84052091 18.88301337
19.45449524]
...
[ 3.26190582 3.80988568 4.40086413 ... 17.62564851 18.66814097
19.23962284]
[ 3.31992703 3.86790688 4.45888533 ... 17.68366971 18.72616218
19.29764404]
[ 3.61764469 4.16562455 4.756603 ... 17.98138738 19.02387985
19.59536171]]
[[ 3.72496658 4.31471053 4.91321745 ... 19.02188491 19.53100906
19.36907928]
[ 3.4719046 4.06164855 4.66015547 ... 18.76882294 19.27794709
19.1160173 ]
[ 3.53337759 4.12312155 4.72162847 ... 18.83029593 19.33942008
19.1774903 ]
...
[ 3.58954227 4.17928622 4.77779314 ... 18.88646061 19.39558476
19.23365497]
[ 3.42981132 4.01955528 4.6180622 ... 18.72672966 19.23585381
19.07392403]
[ 3.55494787 4.14469182 4.74319874 ... 18.85186621 19.36099036
19.19906058]]
...
[[ 3.47282897 4.1201407 4.71222005 ... 18.76036277 19.82987611
19.82978825]
[ 3.75071815 4.39802989 4.99010924 ... 19.03825196 20.10776529
20.10767744]
[ 3.63353221 4.28084394 4.87292329 ... 18.92106601 19.99057935
19.99049149]
...
[ 3.80752995 4.45484169 5.04692103 ... 19.09506376 20.16457709
20.16448923]
[ 3.69850744 4.34581918 4.93789853 ... 18.98604125 20.05555458
20.05546673]
[ 3.54380552 4.19111726 4.78319661 ... 18.83133933 19.90085266
19.9007648 ]]
[[ 3.31854055 3.92801438 4.45727301 ... 18.28841996 19.16752258
19.17617787]
[ 3.66278758 4.2722614 4.80152004 ... 18.63266699 19.51176961
19.5204249 ]
[ 3.66540822 4.27488204 4.80414067 ... 18.63528762 19.51439024
19.52304553]
...
[ 3.47868867 4.08816249 4.61742112 ... 18.44856808 19.3276707
19.33632599]
[ 3.40795192 4.01742575 4.54668438 ... 18.37783133 19.25693395
19.26558924]
[ 3.62582826 4.23530208 4.76456071 ... 18.59570766 19.47481028
19.48346557]]
[[ 3.28655525 3.89916418 4.38915978 ... 18.06270713 18.8524392
19.0097691 ]
[ 3.56468602 4.17729495 4.66729055 ... 18.34083789 19.13056997
19.28789987]
[ 3.50913761 4.12174653 4.61174213 ... 18.28528948 19.07502155
19.23235146]
...
[ 3.47367935 4.08628827 4.57628387 ... 18.24983122 19.0395633
19.1968932 ]
[ 3.18837562 3.80098455 4.29098015 ... 17.96452749 18.75425957
18.91158947]
[ 3.39143035 4.00403927 4.49403487 ... 18.16758222 18.9573143
19.1146442 ]]]
[[[2.46237207 2.61105935 2.76347368 ... 7.29057962 7.71836531 7.08945701]
[2.31810551 2.46679279 2.61920712 ... 7.14631306 7.57409875 6.94519045]
[1.6912287 1.83991597 1.9923303 ... 6.51943624 6.94722193 6.31831363]
...
[2.44157915 2.59026643 2.74268076 ... 7.26978669 7.69757239 7.06866409]
[1.94386287 2.09255014 2.24496447 ... 6.77207041 7.1998561 6.5709478 ]
[1.87046172 2.019149 2.17156333 ... 6.69866927 7.12645496 6.49754666]]
[[1.95955169 2.07749878 2.16727771 ... 5.13849934 6.73052567 6.53014218]
[1.5811458 1.69909288 1.78887182 ... 4.76009345 6.35211978 6.15173629]
[1.75983107 1.87777816 1.96755709 ... 4.93877872 6.53080505 6.33042156]
...
[1.83917941 1.9571265 2.04690543 ... 5.01812706 6.61015339 6.4097699 ]
[1.96337912 2.08132621 2.17110515 ... 5.14232678 6.7343531 6.53396962]
[1.85642749 1.97437458 2.06415351 ... 5.03537514 6.62740147 6.42701798]]
[[1.57344613 1.76516323 1.85939947 ... 4.47439383 6.46611087 6.82521038]
[2.17273572 2.36445282 2.45868906 ... 5.07368342 7.06540046 7.42449997]
[1.94725593 2.13897303 2.23320928 ... 4.84820363 6.83992068 7.19902018]
...
[2.88890626 3.08062337 3.17485961 ... 5.78985397 7.78157101 8.14067052]
[1.86762433 2.05934143 2.15357768 ... 4.76857203 6.76028908 7.11938858]
[1.79152926 1.98324636 2.07748261 ... 4.69247696 6.68419401 7.04329351]]
...
[[2.11332418 2.30960906 2.3793171 ... 8.38464842 7.12409038 7.34116082]
[2.08234184 2.27862672 2.34833476 ... 8.35366608 7.09310804 7.31017848]
[2.09462121 2.29090609 2.36061414 ... 8.36594546 7.10538742 7.32245786]
...
[1.84716392 2.0434488 2.11315684 ... 8.11848816 6.85793012 7.07500056]
[1.79592658 1.99221146 2.0619195 ... 8.06725082 6.80669278 7.02376323]
[2.19357246 2.38985734 2.45956538 ... 8.4648967 7.20433866 7.4214091 ]]
[[1.99521236 2.18808404 2.1884776 ... 7.12607153 7.50587032 7.00748426]
[2.00616781 2.19903949 2.19943305 ... 7.13702698 7.51682576 7.01843971]
[1.86630288 2.05917456 2.05956812 ... 6.99716205 7.37696084 6.87857478]
...
[1.58521957 1.77809125 1.77848481 ... 6.71607874 7.09587753 6.59749147]
[1.97079483 2.16366651 2.16406007 ... 7.101654 7.48145279 6.98306673]
[1.86193129 2.05480298 2.05519654 ... 6.99279047 7.37258925 6.87420319]]
[[1.67312128 1.83139005 2.03954357 ... 6.66592507 7.1348477 7.19580077]
[1.67226141 1.83053018 2.0386837 ... 6.6650652 7.13398784 7.1949409 ]
[1.40384736 1.56211613 1.77026966 ... 6.39665115 6.86557379 6.92652685]
...
[2.07548831 2.23375708 2.4419106 ... 7.0682921 7.53721474 7.5981678 ]
[1.67685125 1.83512003 2.04327355 ... 6.66965504 7.13857768 7.19953074]
[1.75858166 1.91685043 2.12500395 ... 6.75138545 7.22030808 7.28126115]]]
[[[2.46237207 2.61105935 2.76347368 ... 7.29057962 7.71836531 7.08945701]
[2.31810551 2.46679279 2.61920712 ... 7.14631306 7.57409875 6.94519045]
[1.6912287 1.83991597 1.9923303 ... 6.51943624 6.94722193 6.31831363]
...
[2.44157915 2.59026643 2.74268076 ... 7.26978669 7.69757239 7.06866409]
[1.94386287 2.09255014 2.24496447 ... 6.77207041 7.1998561 6.5709478 ]
[1.87046172 2.019149 2.17156333 ... 6.69866927 7.12645496 6.49754666]]
[[1.95955169 2.07749878 2.16727771 ... 5.13849934 6.73052567 6.53014218]
[1.5811458 1.69909288 1.78887182 ... 4.76009345 6.35211978 6.15173629]
[1.75983107 1.87777816 1.96755709 ... 4.93877872 6.53080505 6.33042156]
...
[1.83917941 1.9571265 2.04690543 ... 5.01812706 6.61015339 6.4097699 ]
[1.96337912 2.08132621 2.17110515 ... 5.14232678 6.7343531 6.53396962]
[1.85642749 1.97437458 2.06415351 ... 5.03537514 6.62740147 6.42701798]]
[[1.57344613 1.76516323 1.85939947 ... 4.47439383 6.46611087 6.82521038]
[2.17273572 2.36445282 2.45868906 ... 5.07368342 7.06540046 7.42449997]
[1.94725593 2.13897303 2.23320928 ... 4.84820363 6.83992068 7.19902018]
...
[2.88890626 3.08062337 3.17485961 ... 5.78985397 7.78157101 8.14067052]
[1.86762433 2.05934143 2.15357768 ... 4.76857203 6.76028908 7.11938858]
[1.79152926 1.98324636 2.07748261 ... 4.69247696 6.68419401 7.04329351]]
...
[[2.11332418 2.30960906 2.3793171 ... 8.38464842 7.12409038 7.34116082]
[2.08234184 2.27862672 2.34833476 ... 8.35366608 7.09310804 7.31017848]
[2.09462121 2.29090609 2.36061414 ... 8.36594546 7.10538742 7.32245786]
...
[1.84716392 2.0434488 2.11315684 ... 8.11848816 6.85793012 7.07500056]
[1.79592658 1.99221146 2.0619195 ... 8.06725082 6.80669278 7.02376323]
[2.19357246 2.38985734 2.45956538 ... 8.4648967 7.20433866 7.4214091 ]]
[[1.99521236 2.18808404 2.1884776 ... 7.12607153 7.50587032 7.00748426]
[2.00616781 2.19903949 2.19943305 ... 7.13702698 7.51682576 7.01843971]
[1.86630288 2.05917456 2.05956812 ... 6.99716205 7.37696084 6.87857478]
...
[1.58521957 1.77809125 1.77848481 ... 6.71607874 7.09587753 6.59749147]
[1.97079483 2.16366651 2.16406007 ... 7.101654 7.48145279 6.98306673]
[1.86193129 2.05480298 2.05519654 ... 6.99279047 7.37258925 6.87420319]]
[[1.67312128 1.83139005 2.03954357 ... 6.66592507 7.1348477 7.19580077]
[1.67226141 1.83053018 2.0386837 ... 6.6650652 7.13398784 7.1949409 ]
[1.40384736 1.56211613 1.77026966 ... 6.39665115 6.86557379 6.92652685]
...
[2.07548831 2.23375708 2.4419106 ... 7.0682921 7.53721474 7.5981678 ]
[1.67685125 1.83512003 2.04327355 ... 6.66965504 7.13857768 7.19953074]
[1.75858166 1.91685043 2.12500395 ... 6.75138545 7.22030808 7.28126115]]]
[[[ 3.36042951 3.9539847 4.40054629 ... 16.61840351 17.51744014
17.47139048]
[ 3.47241571 4.06597091 4.51253249 ... 16.73038971 17.62942634
17.58337668]
[ 3.49505442 4.08860962 4.5351712 ... 16.75302842 17.65206506
17.60601539]
...
[ 3.19281636 3.78637156 4.23293314 ... 16.45079036 17.34982699
17.30377733]
[ 3.12460627 3.71816147 4.16472305 ... 16.38258027 17.2816169
17.23556724]
[ 3.4842873 4.0778425 4.52440408 ... 16.7422613 17.64129793
17.59524827]]
[[ 3.53024915 4.07575665 4.35981683 ... 16.43761956 17.99493501
17.55416469]
[ 3.07541996 3.62092747 3.90498765 ... 15.98279037 17.54010582
17.0993355 ]
[ 3.43990196 3.98540947 4.26946965 ... 16.34727238 17.90458783
17.4638175 ]
...
[ 3.40795897 3.95346648 4.23752666 ... 16.31532939 17.87264484
17.43187451]
[ 3.33960704 3.88511455 4.16917473 ... 16.24697746 17.80429291
17.36352258]
[ 3.09239468 3.63790219 3.92196237 ... 15.9997651 17.55708054
17.11631022]]
[[ 3.23233704 3.70406492 4.18209581 ... 17.36905757 18.49100811
17.78490538]
[ 3.073724 3.54545188 4.02348276 ... 17.21044453 18.33239507
17.62629234]
[ 3.26940343 3.74113131 4.21916219 ... 17.40612396 18.5280745
17.82197177]
...
[ 3.50422172 3.9759496 4.45398049 ... 17.64094225 18.76289279
18.05679006]
[ 3.39719918 3.86892707 4.34695795 ... 17.53391972 18.65587026
17.94976753]
[ 3.28192314 3.75365102 4.23168191 ... 17.41864367 18.54059421
17.83449148]]
...
[[ 3.55534395 4.15968699 4.59111093 ... 17.83059709 17.67154488
19.21062216]
[ 3.67272917 4.27707222 4.70849616 ... 17.94798232 17.78893011
19.32800738]
[ 3.52231684 4.12665989 4.55808382 ... 17.79756999 17.63851778
19.17759505]
...
[ 3.67597504 4.28031808 4.71174202 ... 17.95122818 17.79217597
19.33125325]
[ 3.36325682 3.96759986 4.3990238 ... 17.63850996 17.47945775
19.01853503]
[ 3.31555122 3.91989426 4.3513182 ... 17.59080436 17.43175215
18.97082943]]
[[ 3.47151714 3.99753042 4.47251109 ... 16.42671631 18.43416299
16.24639599]
[ 3.61056617 4.13657944 4.61156011 ... 16.56576533 18.57321201
16.38544501]
[ 3.32064231 3.84665559 4.32163626 ... 16.27584147 18.28328816
16.09552116]
...
[ 3.49341324 4.01942651 4.49440718 ... 16.4486124 18.45605908
16.26829208]
[ 3.41304088 3.93905416 4.41403483 ... 16.36824004 18.37568673
16.18791973]
[ 3.26222732 3.78824059 4.26322126 ... 16.21742648 18.22487316
16.03710616]]
[[ 3.40217069 3.95182548 4.3069421 ... 16.84621171 17.38633983
18.06202122]
[ 3.52710202 4.0767568 4.43187342 ... 16.97114303 17.51127115
18.18695255]
[ 3.41807911 3.9677339 4.32285051 ... 16.86212013 17.40224824
18.07792964]
...
[ 3.42397507 3.97362985 4.32874647 ... 16.86801608 17.4081442
18.08382559]
[ 3.4075063 3.95716109 4.3122777 ... 16.85154731 17.39167543
18.06735683]
[ 3.67371681 4.2233716 4.57848822 ... 17.11775783 17.65788594
18.33356734]]]
[[[ 3.36042951 3.9539847 4.40054629 ... 16.61840351 17.51744014
17.47139048]
[ 3.47241571 4.06597091 4.51253249 ... 16.73038971 17.62942634
17.58337668]
[ 3.49505442 4.08860962 4.5351712 ... 16.75302842 17.65206506
17.60601539]
...
[ 3.19281636 3.78637156 4.23293314 ... 16.45079036 17.34982699
17.30377733]
[ 3.12460627 3.71816147 4.16472305 ... 16.38258027 17.2816169
17.23556724]
[ 3.4842873 4.0778425 4.52440408 ... 16.7422613 17.64129793
17.59524827]]
[[ 3.53024915 4.07575665 4.35981683 ... 16.43761956 17.99493501
17.55416469]
[ 3.07541996 3.62092747 3.90498765 ... 15.98279037 17.54010582
17.0993355 ]
[ 3.43990196 3.98540947 4.26946965 ... 16.34727238 17.90458783
17.4638175 ]
...
[ 3.40795897 3.95346648 4.23752666 ... 16.31532939 17.87264484
17.43187451]
[ 3.33960704 3.88511455 4.16917473 ... 16.24697746 17.80429291
17.36352258]
[ 3.09239468 3.63790219 3.92196237 ... 15.9997651 17.55708054
17.11631022]]
[[ 3.23233704 3.70406492 4.18209581 ... 17.36905757 18.49100811
17.78490538]
[ 3.073724 3.54545188 4.02348276 ... 17.21044453 18.33239507
17.62629234]
[ 3.26940343 3.74113131 4.21916219 ... 17.40612396 18.5280745
17.82197177]
...
[ 3.50422172 3.9759496 4.45398049 ... 17.64094225 18.76289279
18.05679006]
[ 3.39719918 3.86892707 4.34695795 ... 17.53391972 18.65587026
17.94976753]
[ 3.28192314 3.75365102 4.23168191 ... 17.41864367 18.54059421
17.83449148]]
...
[[ 3.55534395 4.15968699 4.59111093 ... 17.83059709 17.67154488
19.21062216]
[ 3.67272917 4.27707222 4.70849616 ... 17.94798232 17.78893011
19.32800738]
[ 3.52231684 4.12665989 4.55808382 ... 17.79756999 17.63851778
19.17759505]
...
[ 3.67597504 4.28031808 4.71174202 ... 17.95122818 17.79217597
19.33125325]
[ 3.36325682 3.96759986 4.3990238 ... 17.63850996 17.47945775
19.01853503]
[ 3.31555122 3.91989426 4.3513182 ... 17.59080436 17.43175215
18.97082943]]
[[ 3.47151714 3.99753042 4.47251109 ... 16.42671631 18.43416299
16.24639599]
[ 3.61056617 4.13657944 4.61156011 ... 16.56576533 18.57321201
16.38544501]
[ 3.32064231 3.84665559 4.32163626 ... 16.27584147 18.28328816
16.09552116]
...
[ 3.49341324 4.01942651 4.49440718 ... 16.4486124 18.45605908
16.26829208]
[ 3.41304088 3.93905416 4.41403483 ... 16.36824004 18.37568673
16.18791973]
[ 3.26222732 3.78824059 4.26322126 ... 16.21742648 18.22487316
16.03710616]]
[[ 3.40217069 3.95182548 4.3069421 ... 16.84621171 17.38633983
18.06202122]
[ 3.52710202 4.0767568 4.43187342 ... 16.97114303 17.51127115
18.18695255]
[ 3.41807911 3.9677339 4.32285051 ... 16.86212013 17.40224824
18.07792964]
...
[ 3.42397507 3.97362985 4.32874647 ... 16.86801608 17.4081442
18.08382559]
[ 3.4075063 3.95716109 4.3122777 ... 16.85154731 17.39167543
18.06735683]
[ 3.67371681 4.2233716 4.57848822 ... 17.11775783 17.65788594
18.33356734]]]
[[[ 3.59662683 4.252075 4.8812187 ... 17.90938671 19.31664881
19.06407051]
[ 3.68561196 4.34106013 4.97020383 ... 17.99837184 19.40563394
19.15305564]
[ 3.55817844 4.21362661 4.84277031 ... 17.87093832 19.27820042
19.02562212]
...
[ 3.59643676 4.25188493 4.88102863 ... 17.90919664 19.31645873
19.06388043]
[ 3.53490875 4.19035692 4.81950062 ... 17.84766863 19.25493073
19.00235243]
[ 3.2913799 3.94682807 4.57597177 ... 17.60413978 19.01140188
18.75882357]]
[[ 3.56613881 4.12716267 4.60048278 ... 17.7748226 17.26456718
18.85613262]
[ 3.49546537 4.05648923 4.52980935 ... 17.70414917 17.19389374
18.78545918]
[ 3.57954449 4.14056835 4.61388847 ... 17.78822829 17.27797287
18.86953831]
...
[ 3.4430316 4.00405546 4.47737558 ... 17.6517154 17.14145998
18.73302542]
[ 2.88801205 3.44903591 3.92235603 ... 17.09669585 16.58644042
18.17800586]
[ 3.52879896 4.08982282 4.56314293 ... 17.73748275 17.22722733
18.81879277]]
[[ 3.69503059 4.21629944 4.82068972 ... 18.50449855 19.49763925
19.24526295]
[ 3.47843926 3.99970811 4.60409838 ... 18.28790721 19.28104792
19.02867162]
[ 3.56408244 4.08535129 4.68974156 ... 18.37355039 19.3666911
19.1143148 ]
...
[ 3.80493016 4.32619901 4.93058929 ... 18.61439812 19.60753882
19.35516252]
[ 3.71097275 4.2322416 4.83663187 ... 18.5204407 19.51358141
19.26120511]
[ 3.33956924 3.86083809 4.46522836 ... 18.14903719 19.1421779
18.88980159]]
...
[[ 3.62438756 4.19702517 4.86759412 ... 18.63698777 19.34607673
19.39690756]
[ 3.70623201 4.27886962 4.94943857 ... 18.71883222 19.42792118
19.47875201]
[ 3.6761077 4.2487453 4.91931426 ... 18.68870791 19.39779687
19.4486277 ]
...
[ 3.56728596 4.13992357 4.81049252 ... 18.57988618 19.28897514
19.33980597]
[ 3.50949467 4.08213227 4.75270123 ... 18.52209488 19.23118384
19.28201467]
[ 3.15634269 3.7289803 4.39954926 ... 18.16894291 18.87803187
18.9288627 ]]
[[ 3.53726112 4.08857957 4.76932889 ... 18.38765372 18.68662699
19.35084428]
[ 3.46854994 4.01986839 4.70061771 ... 18.31894255 18.61791581
19.2821331 ]
[ 3.6517371 4.20305555 4.88380487 ... 18.50212971 18.80110297
19.46532026]
...
[ 3.77718046 4.3284989 5.00924823 ... 18.62757306 18.92654633
19.59076362]
[ 3.57344457 4.12476301 4.80551234 ... 18.42383717 18.72281044
19.38702773]
[ 3.58197898 4.13329742 4.81404675 ... 18.43237158 18.73134485
19.39556214]]
[[ 3.4158829 4.04627029 4.49448005 ... 18.13941589 18.39236742
19.01287394]
[ 3.49587651 4.1262639 4.57447366 ... 18.2194095 18.47236103
19.09286755]
[ 3.54734937 4.17773676 4.62594651 ... 18.27088235 18.52383388
19.1443404 ]
...
[ 3.6890079 4.31939529 4.76760505 ... 18.41254089 18.66549242
19.28599894]
[ 3.57341204 4.20379943 4.65200919 ... 18.29694503 18.54989656
19.17040308]
[ 3.68512603 4.31551342 4.76372318 ... 18.40865902 18.66161054
19.28211706]]]
[[[ 3.59662683 4.252075 4.8812187 ... 17.90938671 19.31664881
19.06407051]
[ 3.68561196 4.34106013 4.97020383 ... 17.99837184 19.40563394
19.15305564]
[ 3.55817844 4.21362661 4.84277031 ... 17.87093832 19.27820042
19.02562212]
...
[ 3.59643676 4.25188493 4.88102863 ... 17.90919664 19.31645873
19.06388043]
[ 3.53490875 4.19035692 4.81950062 ... 17.84766863 19.25493073
19.00235243]
[ 3.2913799 3.94682807 4.57597177 ... 17.60413978 19.01140188
18.75882357]]
[[ 3.56613881 4.12716267 4.60048278 ... 17.7748226 17.26456718
18.85613262]
[ 3.49546537 4.05648923 4.52980935 ... 17.70414917 17.19389374
18.78545918]
[ 3.57954449 4.14056835 4.61388847 ... 17.78822829 17.27797287
18.86953831]
...
[ 3.4430316 4.00405546 4.47737558 ... 17.6517154 17.14145998
18.73302542]
[ 2.88801205 3.44903591 3.92235603 ... 17.09669585 16.58644042
18.17800586]
[ 3.52879896 4.08982282 4.56314293 ... 17.73748275 17.22722733
18.81879277]]
[[ 3.69503059 4.21629944 4.82068972 ... 18.50449855 19.49763925
19.24526295]
[ 3.47843926 3.99970811 4.60409838 ... 18.28790721 19.28104792
19.02867162]
[ 3.56408244 4.08535129 4.68974156 ... 18.37355039 19.3666911
19.1143148 ]
...
[ 3.80493016 4.32619901 4.93058929 ... 18.61439812 19.60753882
19.35516252]
[ 3.71097275 4.2322416 4.83663187 ... 18.5204407 19.51358141
19.26120511]
[ 3.33956924 3.86083809 4.46522836 ... 18.14903719 19.1421779
18.88980159]]
...
[[ 3.62438756 4.19702517 4.86759412 ... 18.63698777 19.34607673
19.39690756]
[ 3.70623201 4.27886962 4.94943857 ... 18.71883222 19.42792118
19.47875201]
[ 3.6761077 4.2487453 4.91931426 ... 18.68870791 19.39779687
19.4486277 ]
...
[ 3.56728596 4.13992357 4.81049252 ... 18.57988618 19.28897514
19.33980597]
[ 3.50949467 4.08213227 4.75270123 ... 18.52209488 19.23118384
19.28201467]
[ 3.15634269 3.7289803 4.39954926 ... 18.16894291 18.87803187
18.9288627 ]]
[[ 3.53726112 4.08857957 4.76932889 ... 18.38765372 18.68662699
19.35084428]
[ 3.46854994 4.01986839 4.70061771 ... 18.31894255 18.61791581
19.2821331 ]
[ 3.6517371 4.20305555 4.88380487 ... 18.50212971 18.80110297
19.46532026]
...
[ 3.77718046 4.3284989 5.00924823 ... 18.62757306 18.92654633
19.59076362]
[ 3.57344457 4.12476301 4.80551234 ... 18.42383717 18.72281044
19.38702773]
[ 3.58197898 4.13329742 4.81404675 ... 18.43237158 18.73134485
19.39556214]]
[[ 3.4158829 4.04627029 4.49448005 ... 18.13941589 18.39236742
19.01287394]
[ 3.49587651 4.1262639 4.57447366 ... 18.2194095 18.47236103
19.09286755]
[ 3.54734937 4.17773676 4.62594651 ... 18.27088235 18.52383388
19.1443404 ]
...
[ 3.6890079 4.31939529 4.76760505 ... 18.41254089 18.66549242
19.28599894]
[ 3.57341204 4.20379943 4.65200919 ... 18.29694503 18.54989656
19.17040308]
[ 3.68512603 4.31551342 4.76372318 ... 18.40865902 18.66161054
19.28211706]]]
[[[ 3.46182359 3.97007677 4.32225271 ... 16.58712603 16.5570733
15.92958507]
[ 3.62242575 4.13067894 4.48285488 ... 16.7477282 16.71767547
16.09018724]
[ 3.23793154 3.74618473 4.09836067 ... 16.36323398 16.33318125
15.70569302]
...
[ 3.34645876 3.85471195 4.20688789 ... 16.47176121 16.44170847
15.81422025]
[ 3.49121761 3.9994708 4.35164674 ... 16.61652006 16.58646732
15.9589791 ]
[ 3.25705484 3.76530803 4.11748397 ... 16.38235728 16.35230455
15.72481632]]
[[ 3.31495307 3.67478917 4.14065619 ... 16.25489612 15.35446987
15.22320202]
[ 3.14541825 3.50525434 3.97112137 ... 16.0853613 15.18493505
15.0536672 ]
[ 3.02761858 3.38745468 3.8533217 ... 15.96756163 15.06713538
14.93586753]
...
[ 3.40579104 3.76562713 4.23149416 ... 16.34573409 15.44530784
15.31403999]
[ 2.99760837 3.35744446 3.82331149 ... 15.93755142 15.03712517
14.90585732]
[ 3.22176647 3.58160256 4.04746959 ... 16.16170952 15.26128326
15.13001542]]
[[ 3.18015663 3.55035654 3.99704659 ... 15.16930232 15.94242484
14.46975893]
[ 3.23980533 3.61000524 4.05669529 ... 15.22895102 16.00207354
14.52940763]
[ 2.94070861 3.31090852 3.75759857 ... 14.9298543 15.70297682
14.2303109 ]
...
[ 3.27239296 3.64259287 4.08928293 ... 15.26153866 16.03466117
14.56199526]
[ 3.18562099 3.55582089 4.00251095 ... 15.17476668 15.9478892
14.47522328]
[ 3.19270835 3.56290826 4.00959831 ... 15.18185404 15.95497656
14.48231065]]
...
[[ 3.42674083 3.93798484 4.35466471 ... 16.87557077 16.760324
16.95593028]
[ 3.50938085 4.02062486 4.43730472 ... 16.95821078 16.84296402
17.03857029]
[ 3.28208803 3.79333204 4.2100119 ... 16.73091796 16.6156712
16.81127747]
...
[ 3.3359072 3.84715121 4.26383107 ... 16.78473713 16.66949037
16.86509664]
[ 3.22626709 3.7375111 4.15419096 ... 16.67509702 16.55985026
16.75545653]
[ 2.91623386 3.42747787 3.84415773 ... 16.36506379 16.24981703
16.4454233 ]]
[[ 3.43847277 3.89766327 4.29929488 ... 15.23337628 16.61298068
16.0902146 ]
[ 3.38758702 3.84677752 4.24840912 ... 15.18249052 16.56209492
16.03932885]
[ 3.27592223 3.73511273 4.13674434 ... 15.07082574 16.45043013
15.92766406]
...
[ 3.26970302 3.72889352 4.13052513 ... 15.06460653 16.44421092
15.92144485]
[ 3.08119277 3.54038326 3.94201487 ... 14.87609627 16.25570067
15.7329346 ]
[ 3.09562911 3.55481961 3.95645122 ... 14.89053262 16.27013701
15.74737094]]
[[ 3.16023103 3.62789842 4.05417795 ... 14.88946439 15.96325429
15.44253724]
[ 3.2352529 3.7029203 4.12919982 ... 14.96448626 16.03827616
15.51755911]
[ 3.23151712 3.69918452 4.12546404 ... 14.96075049 16.03454038
15.51382334]
...
[ 3.04373236 3.51139976 3.93767928 ... 14.77296572 15.84675562
15.32603857]
[ 3.08010478 3.54777218 3.9740517 ... 14.80933815 15.88312804
15.362411 ]
[ 3.13584002 3.60350741 4.02978694 ... 14.86507338 15.93886328
15.41814623]]]
[[[ 3.46182359 3.97007677 4.32225271 ... 16.58712603 16.5570733
15.92958507]
[ 3.62242575 4.13067894 4.48285488 ... 16.7477282 16.71767547
16.09018724]
[ 3.23793154 3.74618473 4.09836067 ... 16.36323398 16.33318125
15.70569302]
...
[ 3.34645876 3.85471195 4.20688789 ... 16.47176121 16.44170847
15.81422025]
[ 3.49121761 3.9994708 4.35164674 ... 16.61652006 16.58646732
15.9589791 ]
[ 3.25705484 3.76530803 4.11748397 ... 16.38235728 16.35230455
15.72481632]]
[[ 3.31495307 3.67478917 4.14065619 ... 16.25489612 15.35446987
15.22320202]
[ 3.14541825 3.50525434 3.97112137 ... 16.0853613 15.18493505
15.0536672 ]
[ 3.02761858 3.38745468 3.8533217 ... 15.96756163 15.06713538
14.93586753]
...
[ 3.40579104 3.76562713 4.23149416 ... 16.34573409 15.44530784
15.31403999]
[ 2.99760837 3.35744446 3.82331149 ... 15.93755142 15.03712517
14.90585732]
[ 3.22176647 3.58160256 4.04746959 ... 16.16170952 15.26128326
15.13001542]]
[[ 3.18015663 3.55035654 3.99704659 ... 15.16930232 15.94242484
14.46975893]
[ 3.23980533 3.61000524 4.05669529 ... 15.22895102 16.00207354
14.52940763]
[ 2.94070861 3.31090852 3.75759857 ... 14.9298543 15.70297682
14.2303109 ]
...
[ 3.27239296 3.64259287 4.08928293 ... 15.26153866 16.03466117
14.56199526]
[ 3.18562099 3.55582089 4.00251095 ... 15.17476668 15.9478892
14.47522328]
[ 3.19270835 3.56290826 4.00959831 ... 15.18185404 15.95497656
14.48231065]]
...
[[ 3.42674083 3.93798484 4.35466471 ... 16.87557077 16.760324
16.95593028]
[ 3.50938085 4.02062486 4.43730472 ... 16.95821078 16.84296402
17.03857029]
[ 3.28208803 3.79333204 4.2100119 ... 16.73091796 16.6156712
16.81127747]
...
[ 3.3359072 3.84715121 4.26383107 ... 16.78473713 16.66949037
16.86509664]
[ 3.22626709 3.7375111 4.15419096 ... 16.67509702 16.55985026
16.75545653]
[ 2.91623386 3.42747787 3.84415773 ... 16.36506379 16.24981703
16.4454233 ]]
[[ 3.43847277 3.89766327 4.29929488 ... 15.23337628 16.61298068
16.0902146 ]
[ 3.38758702 3.84677752 4.24840912 ... 15.18249052 16.56209492
16.03932885]
[ 3.27592223 3.73511273 4.13674434 ... 15.07082574 16.45043013
15.92766406]
...
[ 3.26970302 3.72889352 4.13052513 ... 15.06460653 16.44421092
15.92144485]
[ 3.08119277 3.54038326 3.94201487 ... 14.87609627 16.25570067
15.7329346 ]
[ 3.09562911 3.55481961 3.95645122 ... 14.89053262 16.27013701
15.74737094]]
[[ 3.16023103 3.62789842 4.05417795 ... 14.88946439 15.96325429
15.44253724]
[ 3.2352529 3.7029203 4.12919982 ... 14.96448626 16.03827616
15.51755911]
[ 3.23151712 3.69918452 4.12546404 ... 14.96075049 16.03454038
15.51382334]
...
[ 3.04373236 3.51139976 3.93767928 ... 14.77296572 15.84675562
15.32603857]
[ 3.08010478 3.54777218 3.9740517 ... 14.80933815 15.88312804
15.362411 ]
[ 3.13584002 3.60350741 4.02978694 ... 14.86507338 15.93886328
15.41814623]]]
###Markdown
Load data from file
###Code
import pickle
with open("old_data.pkl", "rb") as f:
risks = pickle.load(f)
test_blockid_dict = pickle.load(f)
risks.shape, test_blockid_dict
y_pred_dow[0].reshape((12, 7)).sum(axis=1)
y_pred_dow[1].reshape((12, 7)).sum(axis=1)
plt.plot(np.arange(len(y_pred_dow[0].flatten())),
y_pred_dow[0].flatten(), color='blue');
plt.plot(np.arange(len(y_pred_dow[1].flatten())),
y_pred_dow[1].flatten(), color='green');
plt.plot(np.arange(len(y_pred_dow[0].flatten())),
y_pred_dow[0].flatten(), color='blue');
###Output
_____no_output_____ |
Machine Learning/Classification/Week 4/module-6-decision-tree-practical-assignment-blank.ipynb | ###Markdown
Decision Trees in Practice In this assignment we will explore various techniques for preventing overfitting in decision trees. We will extend the implementation of the binary decision trees that we implemented in the previous assignment. You will have to use your solutions from this previous assignment and extend them.In this assignment you will:* Implement binary decision trees with different early stopping methods.* Compare models with different stopping parameters.* Visualize the concept of overfitting in decision trees.Let's get started! Fire up GraphLab Create Make sure you have the latest version of GraphLab Create.
###Code
import graphlab
###Output
_____no_output_____
###Markdown
Load LendingClub Dataset This assignment will use the [LendingClub](https://www.lendingclub.com/) dataset used in the previous two assignments.
###Code
loans = graphlab.SFrame('lending-club-data.gl/')
###Output
This non-commercial license of GraphLab Create for academic use is assigned to [email protected] and will expire on August 21, 2017.
###Markdown
As before, we reassign the labels to have +1 for a safe loan, and -1 for a risky (bad) loan.
###Code
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.remove_column('bad_loans')
###Output
_____no_output_____
###Markdown
We will be using the same 4 categorical features as in the previous assignment: 1. grade of the loan 2. the length of the loan term3. the home ownership status: own, mortgage, rent4. number of years of employment.In the dataset, each of these features is a categorical feature. Since we are building a binary decision tree, we will have to convert this to binary data in a subsequent section using 1-hot encoding.
###Code
features = ['grade', # grade of the loan
'term', # the term of the loan
'home_ownership', # home_ownership status: own, mortgage or rent
'emp_length', # number of years of employment
]
target = 'safe_loans'
loans = loans[features + [target]]
###Output
_____no_output_____
###Markdown
Subsample dataset to make sure classes are balanced Just as we did in the previous assignment, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We used `seed = 1` so everyone gets the same results.
###Code
safe_loans_raw = loans[loans[target] == 1]
risky_loans_raw = loans[loans[target] == -1]
# Since there are less risky loans than safe loans, find the ratio of the sizes
# and use that percentage to undersample the safe loans.
percentage = len(risky_loans_raw)/float(len(safe_loans_raw))
safe_loans = safe_loans_raw.sample(percentage, seed = 1)
risky_loans = risky_loans_raw
loans_data = risky_loans.append(safe_loans)
print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data))
print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data))
print "Total number of loans in our new dataset :", len(loans_data)
###Output
Percentage of safe loans : 0.502236174422
Percentage of risky loans : 0.497763825578
Total number of loans in our new dataset : 46508
###Markdown
**Note:** There are many approaches for dealing with imbalanced data, including some where we modify the learning algorithm. These approaches are beyond the scope of this course, but some of them are reviewed in this [paper](http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5128907&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F69%2F5173046%2F05128907.pdf%3Farnumber%3D5128907 ). For this assignment, we use the simplest possible approach, where we subsample the overly represented class to get a more balanced dataset. In general, and especially when the data is highly imbalanced, we recommend using more advanced methods. Transform categorical data into binary features Since we are implementing **binary decision trees**, we transform our categorical data into binary data using 1-hot encoding, just as in the previous assignment. Here is the summary of that discussion: For instance, the **home_ownership** feature represents the home ownership status of the loanee, which is either `own`, `mortgage` or `rent`. For example, if a data point has the feature ``` {'home_ownership': 'RENT'}``` we want to turn this into three features: ``` { 'home_ownership = OWN' : 0, 'home_ownership = MORTGAGE' : 0, 'home_ownership = RENT' : 1 }``` Since this code requires a few Python and GraphLab tricks, feel free to use this block of code as is. Refer to the API documentation for a deeper understanding.
###Code
loans_data = risky_loans.append(safe_loans)
for feature in features:
loans_data_one_hot_encoded = loans_data[feature].apply(lambda x: {x: 1})
loans_data_unpacked = loans_data_one_hot_encoded.unpack(column_name_prefix=feature)
# Change None's to 0's
for column in loans_data_unpacked.column_names():
loans_data_unpacked[column] = loans_data_unpacked[column].fillna(0)
loans_data.remove_column(feature)
loans_data.add_columns(loans_data_unpacked)
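# Optional sanity check (illustrative, not part of the assignment): after the
# unpacking above, every categorical value gets its own 0/1 column, e.g. the
# home_ownership columns produced by the loop.
print [col for col in loans_data.column_names() if col.startswith('home_ownership')]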
###Output
_____no_output_____
###Markdown
The feature columns now look like this:
###Code
features = loans_data.column_names()
features.remove('safe_loans') # Remove the response variable
features
###Output
_____no_output_____
###Markdown
Train-Validation split We split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set. We use `seed=1` so that everyone gets the same result.
###Code
train_data, validation_set = loans_data.random_split(.8, seed=1)
###Output
_____no_output_____
###Markdown
Early stopping methods for decision trees In this section, we will extend the **binary tree implementation** from the previous assignment in order to handle some early stopping conditions. Recall the 3 early stopping methods that were discussed in lecture: 1. Reached a **maximum depth** (set by parameter `max_depth`). 2. Reached a **minimum node size** (set by parameter `min_node_size`). 3. Don't split if the **gain in error reduction** is too small (set by parameter `min_error_reduction`). For the rest of this assignment, we will refer to these three as **early stopping conditions 1, 2, and 3**. Early stopping condition 1: Maximum depth Recall that we already implemented the maximum depth stopping condition in the previous assignment. In this assignment, we will experiment with this condition a bit more and also write code to implement the 2nd and 3rd early stopping conditions. We will be reusing code from the previous assignment and then building upon it. We will **alert you** when you reach a function that was part of the previous assignment so that you can simply copy and paste your previous code. Early stopping condition 2: Minimum node size The function **reached_minimum_node_size** takes 2 arguments: 1. The `data` (from a node) 2. The minimum number of data points that a node is allowed to split on, `min_node_size`. This function simply calculates whether the number of data points at a given node is less than or equal to the specified minimum node size. This function will be used to detect this early stopping condition in the **decision_tree_create** function. Fill in the parts of the function below where you find `## YOUR CODE HERE`. There is **one** instance in the function below.
###Code
def reached_minimum_node_size(data, min_node_size):
# Return True if the number of data points is less than or equal to the minimum node size.
## YOUR CODE HERE
return len(data)<=min_node_size
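# Quick illustrative check (toy input, not required by the assignment): a node
# with 9 data points triggers this stopping condition when min_node_size is 10,
# but not when it is 5.
print reached_minimum_node_size(range(9), 10) # True
print reached_minimum_node_size(range(9), 5)  # False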
###Output
_____no_output_____
###Markdown
**Quiz question:** Given an intermediate node with 6 safe loans and 3 risky loans, if the `min_node_size` parameter is 10, what should the tree learning algorithm do next? Early stopping condition 3: Minimum gain in error reduction The function **error_reduction** takes 2 arguments: 1. The error **before** a split, `error_before_split`. 2. The error **after** a split, `error_after_split`. This function computes the gain in error reduction, i.e., the difference between the error before the split and that after the split. This function will be used to detect this early stopping condition in the **decision_tree_create** function. Fill in the parts of the function below where you find `## YOUR CODE HERE`. There is **one** instance in the function below.
###Code
def error_reduction(error_before_split, error_after_split):
# Return the error before the split minus the error after the split.
## YOUR CODE HERE
return error_before_split-error_after_split
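# Illustrative only (hypothetical numbers): if the classification error drops from
# 0.40 before the split to 0.25 after it, the gain in error reduction is 0.15.
print error_reduction(0.40, 0.25)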
###Output
_____no_output_____
###Markdown
**Quiz question:** Assume an intermediate node has 6 safe loans and 3 risky loans. For each of 4 possible features to split on, the error reduction is 0.0, 0.05, 0.1, and 0.14, respectively. If the **minimum gain in error reduction** parameter is set to 0.2, what should the tree learning algorithm do next? Grabbing binary decision tree helper functions from past assignment Recall from the previous assignment that we wrote a function `intermediate_node_num_mistakes` that calculates the number of **misclassified examples** when predicting the **majority class**. This is used to help determine which feature is best to split on at a given node of the tree. **Please copy and paste your code for `intermediate_node_num_mistakes` here**.
###Code
import numpy as np
def intermediate_node_num_mistakes(labels_in_node):
# Corner case: If labels_in_node is empty, return 0
if len(labels_in_node) == 0:
return 0
ones = np.ones(len(labels_in_node))
# Count the number of 1's (safe loans)
## YOUR CODE HERE
count_ones = ones[ones==labels_in_node].sum()
# Count the number of -1's (risky loans)
## YOUR CODE HERE
count_negative_ones = len(labels_in_node)-count_ones
# Return the number of mistakes that the majority classifier makes.
## YOUR CODE HERE
if count_ones > count_negative_ones :
return count_negative_ones
else :
return count_ones
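# Illustrative check on a toy SArray (hypothetical labels): +1 is the majority class,
# so the majority classifier is wrong on the two -1 labels.
print intermediate_node_num_mistakes(graphlab.SArray([+1, +1, +1, -1, -1]))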
###Output
_____no_output_____
###Markdown
We then wrote a function `best_splitting_feature` that finds the best feature to split on given the data and a list of features to consider. **Please copy and paste your `best_splitting_feature` code here**.
###Code
def best_splitting_feature(data, features, target):
best_feature = None # Keep track of the best feature
best_error = 10 # Keep track of the best error so far
    # Note: Since error is always <= 1, we should initialize it with something larger than 1.
# Convert to float to make sure error gets computed correctly.
num_data_points = float(len(data))
# Loop through each feature to consider splitting on that feature
for feature in features:
# The left split will have all data points where the feature value is 0
left_split = data[data[feature] == 0]
# The right split will have all data points where the feature value is 1
## YOUR CODE HERE
right_split = data[data[feature]==1]
# Calculate the number of misclassified examples in the left split.
# Remember that we implemented a function for this! (It was called intermediate_node_num_mistakes)
# YOUR CODE HERE
left_mistakes = intermediate_node_num_mistakes(left_split[target])
# Calculate the number of misclassified examples in the right split.
## YOUR CODE HERE
right_mistakes = intermediate_node_num_mistakes(right_split[target])
# Compute the classification error of this split.
# Error = (# of mistakes (left) + # of mistakes (right)) / (# of data points)
## YOUR CODE HERE
error = (left_mistakes+right_mistakes)/num_data_points
# If this is the best error we have found so far, store the feature as best_feature and the error as best_error
## YOUR CODE HERE
if error < best_error:
best_error=error
best_feature=feature
return best_feature # Return the best feature we found
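# Optional sanity check: at the root, the chosen feature should match the first split
# reported in the training output below (on this data, 'term. 36 months').
print best_splitting_feature(train_data, features, 'safe_loans')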
###Output
_____no_output_____
###Markdown
Finally, recall the function `create_leaf` from the previous assignment, which creates a leaf node given a set of target values. **Please copy and paste your `create_leaf` code here**.
###Code
def create_leaf(target_values):
# Create a leaf node
leaf = {'splitting_feature' : None,
'left' : None,
'right' : None,
'is_leaf': True } ## YOUR CODE HERE
# Count the number of data points that are +1 and -1 in this node.
num_ones = len(target_values[target_values == +1])
num_minus_ones = len(target_values[target_values == -1])
# For the leaf node, set the prediction to be the majority class.
# Store the predicted class (1 or -1) in leaf['prediction']
if num_ones > num_minus_ones:
leaf['prediction'] = 1 ## YOUR CODE HERE
else:
leaf['prediction'] = -1 ## YOUR CODE HERE
# Return the leaf node
return leaf
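# Illustrative check (toy labels): -1 is the majority class here, so the leaf predicts -1.
print create_leaf(graphlab.SArray([-1, -1, +1]))['prediction']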
###Output
_____no_output_____
###Markdown
Incorporating new early stopping conditions in binary decision tree implementation Now, you will implement a function that builds a decision tree handling the three early stopping conditions described in this assignment. In particular, you will write code to detect early stopping conditions 2 and 3. You implemented above the functions needed to detect these conditions. The 1st early stopping condition, **max_depth**, was implemented in the previous assignment and you will not need to reimplement this. In addition to these early stopping conditions, the typical stopping conditions of having no mistakes or no more features to split on (which we denote by "stopping conditions" 1 and 2) are also included as in the previous assignment. **Implementing early stopping condition 2: minimum node size:** * **Step 1:** Use the function **reached_minimum_node_size** that you implemented earlier to write an if condition that detects whether we have hit the base case, i.e., the node does not have enough data points and should be turned into a leaf. Don't forget to use the `min_node_size` argument. * **Step 2:** Return a leaf. This line of code should be the same as the other (pre-implemented) stopping conditions. **Implementing early stopping condition 3: minimum error reduction:** **Note:** This has to come after finding the best splitting feature so we can calculate the error after splitting in order to calculate the error reduction. * **Step 1:** Calculate the **classification error before splitting**. Recall that classification error is defined as: $$\text{classification error} = \frac{\#\text{ mistakes}}{\#\text{ total examples}}$$ * **Step 2:** Calculate the **classification error after splitting**. This requires calculating the number of mistakes in the left and right splits, and then dividing by the total number of examples. * **Step 3:** Use the function **error_reduction** that you implemented earlier to write an if condition that detects whether the reduction in error is less than the constant provided (`min_error_reduction`). Don't forget to use that argument. * **Step 4:** Return a leaf. This line of code should be the same as the other (pre-implemented) stopping conditions. Fill in the places where you find `## YOUR CODE HERE`. There are **seven** places in this function for you to fill in.
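As a hypothetical illustration of steps 1-3 (the numbers are made up for this example): a node with 9 examples and 3 mistakes before splitting has `error_before_split = 3/9 ≈ 0.333`; if its two children together make 2 mistakes, then `error_after_split = 2/9 ≈ 0.222`, so the error reduction is about 0.111, which is greater than `min_error_reduction = 0.0` and the split is kept.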
###Code
def decision_tree_create(data, features, target, current_depth = 0,
max_depth = 10, min_node_size=1,
min_error_reduction=0.0):
remaining_features = features[:] # Make a copy of the features.
target_values = data[target]
print "--------------------------------------------------------------------"
print "Subtree, depth = %s (%s data points)." % (current_depth, len(target_values))
# Stopping condition 1: All nodes are of the same type.
if intermediate_node_num_mistakes(target_values) == 0:
print "Stopping condition 1 reached. All data points have the same target value."
return create_leaf(target_values)
# Stopping condition 2: No more features to split on.
if remaining_features == []:
print "Stopping condition 2 reached. No remaining features."
return create_leaf(target_values)
# Early stopping condition 1: Reached max depth limit.
if current_depth >= max_depth:
print "Early stopping condition 1 reached. Reached maximum depth."
return create_leaf(target_values)
# Early stopping condition 2: Reached the minimum node size.
# If the number of data points is less than or equal to the minimum size, return a leaf.
if reached_minimum_node_size(data,min_node_size):
print "Early stopping condition 2 reached. Reached minimum node size."
return create_leaf(target_values)
# Find the best splitting feature
splitting_feature = best_splitting_feature(data, features, target)
# Split on the best feature that we found.
left_split = data[data[splitting_feature] == 0]
right_split = data[data[splitting_feature] == 1]
# Early stopping condition 3: Minimum error reduction
# Calculate the error before splitting (number of misclassified examples
# divided by the total number of examples)
error_before_split = intermediate_node_num_mistakes(target_values) / float(len(data))
# Calculate the error after splitting (number of misclassified examples
# in both groups divided by the total number of examples)
left_mistakes = intermediate_node_num_mistakes(left_split[target])
right_mistakes = intermediate_node_num_mistakes(right_split[target])
error_after_split = (left_mistakes + right_mistakes) / float(len(data))
# If the error reduction is LESS THAN OR EQUAL TO min_error_reduction, return a leaf.
if error_reduction(error_before_split,error_after_split)<=min_error_reduction:
print "Early stopping condition 3 reached. Minimum error reduction."
return create_leaf(target_values)
remaining_features.remove(splitting_feature)
print "Split on feature %s. (%s, %s)" % (\
splitting_feature, len(left_split), len(right_split))
# Repeat (recurse) on left and right subtrees
left_tree = decision_tree_create(left_split, remaining_features, target,
current_depth + 1, max_depth, min_node_size, min_error_reduction)
## YOUR CODE HERE
right_tree = decision_tree_create(right_split, remaining_features, target,
current_depth + 1, max_depth, min_node_size, min_error_reduction)
return {'is_leaf' : False,
'prediction' : None,
'splitting_feature': splitting_feature,
'left' : left_tree,
'right' : right_tree}
###Output
_____no_output_____
###Markdown
Here is a function to count the nodes in your tree:
###Code
def count_nodes(tree):
if tree['is_leaf']:
return 1
return 1 + count_nodes(tree['left']) + count_nodes(tree['right'])
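# A companion helper (not part of the assignment): the depth of a tree produced by
# decision_tree_create, i.e. the largest number of splits on any root-to-leaf path.
def count_depth(tree):
    if tree['is_leaf']:
        return 0
    return 1 + max(count_depth(tree['left']), count_depth(tree['right']))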
###Output
_____no_output_____
###Markdown
Run the following test code to check your implementation. Make sure you get **'Test passed'** before proceeding.
###Code
small_decision_tree = decision_tree_create(train_data, features, 'safe_loans', max_depth = 2,
min_node_size = 10, min_error_reduction=0.0)
if count_nodes(small_decision_tree) == 7:
print 'Test passed!'
else:
print 'Test failed... try again!'
print 'Number of nodes found :', count_nodes(small_decision_tree)
print 'Number of nodes that should be there : 7'
###Output
--------------------------------------------------------------------
Subtree, depth = 0 (37224 data points).
Split on feature term. 36 months. (9223, 28001)
--------------------------------------------------------------------
Subtree, depth = 1 (9223 data points).
Split on feature grade.A. (9122, 101)
--------------------------------------------------------------------
Subtree, depth = 2 (9122 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 2 (101 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 1 (28001 data points).
Split on feature grade.D. (23300, 4701)
--------------------------------------------------------------------
Subtree, depth = 2 (23300 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 2 (4701 data points).
Early stopping condition 1 reached. Reached maximum depth.
Test passed!
###Markdown
Build a tree! Now that your code is working, we will train a tree model on the **train_data** with * `max_depth = 6` * `min_node_size = 100` * `min_error_reduction = 0.0` **Warning**: This code block may take a minute to learn.
###Code
my_decision_tree_new = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 100, min_error_reduction=0.0)
###Output
--------------------------------------------------------------------
Subtree, depth = 0 (37224 data points).
Split on feature term. 36 months. (9223, 28001)
--------------------------------------------------------------------
Subtree, depth = 1 (9223 data points).
Split on feature grade.A. (9122, 101)
--------------------------------------------------------------------
Subtree, depth = 2 (9122 data points).
Early stopping condition 3 reached. Minimum error reduction.
--------------------------------------------------------------------
Subtree, depth = 2 (101 data points).
Split on feature emp_length.n/a. (96, 5)
--------------------------------------------------------------------
Subtree, depth = 3 (96 data points).
Early stopping condition 2 reached. Reached minimum node size.
--------------------------------------------------------------------
Subtree, depth = 3 (5 data points).
Early stopping condition 2 reached. Reached minimum node size.
--------------------------------------------------------------------
Subtree, depth = 1 (28001 data points).
Split on feature grade.D. (23300, 4701)
--------------------------------------------------------------------
Subtree, depth = 2 (23300 data points).
Split on feature grade.E. (22024, 1276)
--------------------------------------------------------------------
Subtree, depth = 3 (22024 data points).
Split on feature grade.F. (21666, 358)
--------------------------------------------------------------------
Subtree, depth = 4 (21666 data points).
Split on feature emp_length.n/a. (20734, 932)
--------------------------------------------------------------------
Subtree, depth = 5 (20734 data points).
Split on feature grade.G. (20638, 96)
--------------------------------------------------------------------
Subtree, depth = 6 (20638 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (96 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 5 (932 data points).
Split on feature grade.A. (702, 230)
--------------------------------------------------------------------
Subtree, depth = 6 (702 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (230 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 4 (358 data points).
Split on feature emp_length.8 years. (347, 11)
--------------------------------------------------------------------
Subtree, depth = 5 (347 data points).
Early stopping condition 3 reached. Minimum error reduction.
--------------------------------------------------------------------
Subtree, depth = 5 (11 data points).
Early stopping condition 2 reached. Reached minimum node size.
--------------------------------------------------------------------
Subtree, depth = 3 (1276 data points).
Early stopping condition 3 reached. Minimum error reduction.
--------------------------------------------------------------------
Subtree, depth = 2 (4701 data points).
Early stopping condition 3 reached. Minimum error reduction.
###Markdown
Let's now train a tree model **ignoring early stopping conditions 2 and 3** so that we get the same tree as in the previous assignment. To ignore these conditions, we set `min_node_size=0` and `min_error_reduction=-1` (a negative value).
###Code
my_decision_tree_old = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=-1)
###Output
--------------------------------------------------------------------
Subtree, depth = 0 (37224 data points).
Split on feature term. 36 months. (9223, 28001)
--------------------------------------------------------------------
Subtree, depth = 1 (9223 data points).
Split on feature grade.A. (9122, 101)
--------------------------------------------------------------------
Subtree, depth = 2 (9122 data points).
Split on feature grade.B. (8074, 1048)
--------------------------------------------------------------------
Subtree, depth = 3 (8074 data points).
Split on feature grade.C. (5884, 2190)
--------------------------------------------------------------------
Subtree, depth = 4 (5884 data points).
Split on feature grade.D. (3826, 2058)
--------------------------------------------------------------------
Subtree, depth = 5 (3826 data points).
Split on feature grade.E. (1693, 2133)
--------------------------------------------------------------------
Subtree, depth = 6 (1693 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (2133 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 5 (2058 data points).
Split on feature grade.E. (2058, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (2058 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (2190 data points).
Split on feature grade.D. (2190, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (2190 data points).
Split on feature grade.E. (2190, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (2190 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 3 (1048 data points).
Split on feature emp_length.5 years. (969, 79)
--------------------------------------------------------------------
Subtree, depth = 4 (969 data points).
Split on feature grade.C. (969, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (969 data points).
Split on feature grade.D. (969, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (969 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (79 data points).
Split on feature home_ownership.MORTGAGE. (34, 45)
--------------------------------------------------------------------
Subtree, depth = 5 (34 data points).
Split on feature grade.C. (34, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (34 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (45 data points).
Split on feature grade.C. (45, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (45 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 2 (101 data points).
Split on feature emp_length.n/a. (96, 5)
--------------------------------------------------------------------
Subtree, depth = 3 (96 data points).
Split on feature emp_length.< 1 year. (85, 11)
--------------------------------------------------------------------
Subtree, depth = 4 (85 data points).
Split on feature grade.B. (85, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (85 data points).
Split on feature grade.C. (85, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (85 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (11 data points).
Split on feature grade.B. (11, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (11 data points).
Split on feature grade.C. (11, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (11 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 3 (5 data points).
Split on feature grade.B. (5, 0)
--------------------------------------------------------------------
Subtree, depth = 4 (5 data points).
Split on feature grade.C. (5, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (5 data points).
Split on feature grade.D. (5, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (5 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 1 (28001 data points).
Split on feature grade.D. (23300, 4701)
--------------------------------------------------------------------
Subtree, depth = 2 (23300 data points).
Split on feature grade.E. (22024, 1276)
--------------------------------------------------------------------
Subtree, depth = 3 (22024 data points).
Split on feature grade.F. (21666, 358)
--------------------------------------------------------------------
Subtree, depth = 4 (21666 data points).
Split on feature emp_length.n/a. (20734, 932)
--------------------------------------------------------------------
Subtree, depth = 5 (20734 data points).
Split on feature grade.G. (20638, 96)
--------------------------------------------------------------------
Subtree, depth = 6 (20638 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (96 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 5 (932 data points).
Split on feature grade.A. (702, 230)
--------------------------------------------------------------------
Subtree, depth = 6 (702 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (230 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 4 (358 data points).
Split on feature emp_length.8 years. (347, 11)
--------------------------------------------------------------------
Subtree, depth = 5 (347 data points).
Split on feature grade.A. (347, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (347 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (11 data points).
Split on feature home_ownership.OWN. (9, 2)
--------------------------------------------------------------------
Subtree, depth = 6 (9 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (2 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 3 (1276 data points).
Split on feature grade.A. (1276, 0)
--------------------------------------------------------------------
Subtree, depth = 4 (1276 data points).
Split on feature grade.B. (1276, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (1276 data points).
Split on feature grade.C. (1276, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (1276 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 2 (4701 data points).
Split on feature grade.A. (4701, 0)
--------------------------------------------------------------------
Subtree, depth = 3 (4701 data points).
Split on feature grade.B. (4701, 0)
--------------------------------------------------------------------
Subtree, depth = 4 (4701 data points).
Split on feature grade.C. (4701, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (4701 data points).
Split on feature grade.E. (4701, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (4701 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 3 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
###Markdown
Making predictions Recall that in the previous assignment you implemented a function `classify` to classify a new point `x` using a given `tree`. **Please copy and paste your `classify` code here**.
###Code
def classify(tree, x, annotate = False):
# if the node is a leaf node.
if tree['is_leaf']:
if annotate:
print "At leaf, predicting %s" % tree['prediction']
return tree['prediction']
else:
# split on feature.
split_feature_value = x[tree['splitting_feature']]
if annotate:
print "Split on %s = %s" % (tree['splitting_feature'], split_feature_value)
if split_feature_value == 0:
return classify(tree['left'], x, annotate)
else:
return classify(tree['right'], x, annotate)
###Output
_____no_output_____
###Markdown
Now, let's consider the first example of the validation set and see what the `my_decision_tree_new` model predicts for this data point.
###Code
validation_set[0]
print 'Predicted class: %s ' % classify(my_decision_tree_new, validation_set[0])
###Output
Predicted class: -1
###Markdown
Let's add some annotations to our prediction to see what the prediction path was that led to this predicted class:
###Code
classify(my_decision_tree_new, validation_set[0], annotate = True)
###Output
Split on term. 36 months = 0
Split on grade.A = 0
At leaf, predicting -1
###Markdown
Let's now recall the prediction path for the decision tree learned in the previous assignment, which we recreated here as `my_decision_tree_old`.
###Code
classify(my_decision_tree_old, validation_set[0], annotate = True)
###Output
Split on term. 36 months = 0
Split on grade.A = 0
Split on grade.B = 0
Split on grade.C = 0
Split on grade.D = 1
Split on grade.E = 0
At leaf, predicting -1
###Markdown
**Quiz question:** For `my_decision_tree_new` trained with `max_depth = 6`, `min_node_size = 100`, `min_error_reduction=0.0`, is the prediction path for `validation_set[0]` shorter, longer, or the same as for `my_decision_tree_old` that ignored the early stopping conditions 2 and 3? **Quiz question:** For `my_decision_tree_new` trained with `max_depth = 6`, `min_node_size = 100`, `min_error_reduction=0.0`, is the prediction path for **any point** always shorter, always longer, always the same, shorter or the same, or longer or the same as for `my_decision_tree_old` that ignored the early stopping conditions 2 and 3? **Quiz question:** For a tree trained on **any** dataset using `max_depth = 6`, `min_node_size = 100`, `min_error_reduction=0.0`, what is the maximum number of splits encountered while making a single prediction? Evaluating the model Now let us evaluate the model that we have trained. You implemented this evaluation in the function `evaluate_classification_error` from the previous assignment. **Please copy and paste your `evaluate_classification_error` code here**.
###Code
def evaluate_classification_error(tree, data):
    # Apply the classify(tree, x) to each row in your data
    predictions = data.apply(lambda x: classify(tree, x))
    # Once you've made the predictions, calculate the classification error and return it
    ## YOUR CODE HERE
    # Count the rows where the prediction disagrees with the true label.
    num_mistakes = (predictions != data['safe_loans']).sum()
    return float(num_mistakes) / len(predictions)
###Output
_____no_output_____
###Markdown
Now, let's use this function to evaluate the classification error of `my_decision_tree_new` on the **validation_set**.
###Code
evaluate_classification_error(my_decision_tree_new, validation_set)
###Output
_____no_output_____
###Markdown
Now, evaluate the validation error using `my_decision_tree_old`.
###Code
evaluate_classification_error(my_decision_tree_old, validation_set)
###Output
_____no_output_____
###Markdown
**Quiz question:** Is the validation error of the new decision tree (using early stopping conditions 2 and 3) lower than, higher than, or the same as that of the old decision tree from the previous assignment? Exploring the effect of max_depth We will compare three models trained with different values of the stopping criterion. We intentionally picked models at the extreme ends (**too small**, **just right**, and **too large**). Train three models with these parameters: 1. **model_1**: max_depth = 2 (too small) 2. **model_2**: max_depth = 6 (just right) 3. **model_3**: max_depth = 14 (may be too large) For each of these three, we set `min_node_size = 0` and `min_error_reduction = -1`. **Note:** Each tree can take up to a few minutes to train. In particular, `model_3` will probably take the longest to train.
###Code
model_1 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 2,
min_node_size = 0, min_error_reduction=-1)
model_2 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=-1)
model_3 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 14,
min_node_size = 0, min_error_reduction=-1)
###Output
--------------------------------------------------------------------
Subtree, depth = 0 (37224 data points).
Split on feature term. 36 months. (9223, 28001)
--------------------------------------------------------------------
Subtree, depth = 1 (9223 data points).
Split on feature grade.A. (9122, 101)
--------------------------------------------------------------------
Subtree, depth = 2 (9122 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 2 (101 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 1 (28001 data points).
Split on feature grade.D. (23300, 4701)
--------------------------------------------------------------------
Subtree, depth = 2 (23300 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 2 (4701 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 0 (37224 data points).
Split on feature term. 36 months. (9223, 28001)
--------------------------------------------------------------------
Subtree, depth = 1 (9223 data points).
Split on feature grade.A. (9122, 101)
--------------------------------------------------------------------
Subtree, depth = 2 (9122 data points).
Split on feature grade.B. (8074, 1048)
--------------------------------------------------------------------
Subtree, depth = 3 (8074 data points).
Split on feature grade.C. (5884, 2190)
--------------------------------------------------------------------
Subtree, depth = 4 (5884 data points).
Split on feature grade.D. (3826, 2058)
--------------------------------------------------------------------
Subtree, depth = 5 (3826 data points).
Split on feature grade.E. (1693, 2133)
--------------------------------------------------------------------
Subtree, depth = 6 (1693 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (2133 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 5 (2058 data points).
Split on feature grade.E. (2058, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (2058 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (2190 data points).
Split on feature grade.D. (2190, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (2190 data points).
Split on feature grade.E. (2190, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (2190 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 3 (1048 data points).
Split on feature emp_length.5 years. (969, 79)
--------------------------------------------------------------------
Subtree, depth = 4 (969 data points).
Split on feature grade.C. (969, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (969 data points).
Split on feature grade.D. (969, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (969 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (79 data points).
Split on feature home_ownership.MORTGAGE. (34, 45)
--------------------------------------------------------------------
Subtree, depth = 5 (34 data points).
Split on feature grade.C. (34, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (34 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (45 data points).
Split on feature grade.C. (45, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (45 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 2 (101 data points).
Split on feature emp_length.n/a. (96, 5)
--------------------------------------------------------------------
Subtree, depth = 3 (96 data points).
Split on feature emp_length.< 1 year. (85, 11)
--------------------------------------------------------------------
Subtree, depth = 4 (85 data points).
Split on feature grade.B. (85, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (85 data points).
Split on feature grade.C. (85, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (85 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (11 data points).
Split on feature grade.B. (11, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (11 data points).
Split on feature grade.C. (11, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (11 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 3 (5 data points).
Split on feature grade.B. (5, 0)
--------------------------------------------------------------------
Subtree, depth = 4 (5 data points).
Split on feature grade.C. (5, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (5 data points).
Split on feature grade.D. (5, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (5 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 1 (28001 data points).
Split on feature grade.D. (23300, 4701)
--------------------------------------------------------------------
Subtree, depth = 2 (23300 data points).
Split on feature grade.E. (22024, 1276)
--------------------------------------------------------------------
Subtree, depth = 3 (22024 data points).
Split on feature grade.F. (21666, 358)
--------------------------------------------------------------------
Subtree, depth = 4 (21666 data points).
Split on feature emp_length.n/a. (20734, 932)
--------------------------------------------------------------------
Subtree, depth = 5 (20734 data points).
Split on feature grade.G. (20638, 96)
--------------------------------------------------------------------
Subtree, depth = 6 (20638 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (96 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 5 (932 data points).
Split on feature grade.A. (702, 230)
--------------------------------------------------------------------
Subtree, depth = 6 (702 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (230 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 4 (358 data points).
Split on feature emp_length.8 years. (347, 11)
--------------------------------------------------------------------
Subtree, depth = 5 (347 data points).
Split on feature grade.A. (347, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (347 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (11 data points).
Split on feature home_ownership.OWN. (9, 2)
--------------------------------------------------------------------
Subtree, depth = 6 (9 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (2 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 3 (1276 data points).
Split on feature grade.A. (1276, 0)
--------------------------------------------------------------------
Subtree, depth = 4 (1276 data points).
Split on feature grade.B. (1276, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (1276 data points).
Split on feature grade.C. (1276, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (1276 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 2 (4701 data points).
Split on feature grade.A. (4701, 0)
--------------------------------------------------------------------
Subtree, depth = 3 (4701 data points).
Split on feature grade.B. (4701, 0)
--------------------------------------------------------------------
Subtree, depth = 4 (4701 data points).
Split on feature grade.C. (4701, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (4701 data points).
Split on feature grade.E. (4701, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (4701 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 3 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 0 (37224 data points).
Split on feature term. 36 months. (9223, 28001)
--------------------------------------------------------------------
Subtree, depth = 1 (9223 data points).
Split on feature grade.A. (9122, 101)
--------------------------------------------------------------------
Subtree, depth = 2 (9122 data points).
Split on feature grade.B. (8074, 1048)
--------------------------------------------------------------------
Subtree, depth = 3 (8074 data points).
Split on feature grade.C. (5884, 2190)
--------------------------------------------------------------------
Subtree, depth = 4 (5884 data points).
Split on feature grade.D. (3826, 2058)
--------------------------------------------------------------------
Subtree, depth = 5 (3826 data points).
Split on feature grade.E. (1693, 2133)
--------------------------------------------------------------------
Subtree, depth = 6 (1693 data points).
Split on feature home_ownership.OTHER. (1692, 1)
--------------------------------------------------------------------
Subtree, depth = 7 (1692 data points).
Split on feature grade.F. (339, 1353)
--------------------------------------------------------------------
Subtree, depth = 8 (339 data points).
Split on feature grade.G. (0, 339)
--------------------------------------------------------------------
Subtree, depth = 9 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 9 (339 data points).
Split on feature term. 60 months. (0, 339)
--------------------------------------------------------------------
Subtree, depth = 10 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 10 (339 data points).
Split on feature home_ownership.MORTGAGE. (175, 164)
--------------------------------------------------------------------
Subtree, depth = 11 (175 data points).
Split on feature home_ownership.OWN. (142, 33)
--------------------------------------------------------------------
Subtree, depth = 12 (142 data points).
Split on feature emp_length.6 years. (133, 9)
--------------------------------------------------------------------
Subtree, depth = 13 (133 data points).
Split on feature home_ownership.RENT. (0, 133)
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 14 (133 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 13 (9 data points).
Split on feature home_ownership.RENT. (0, 9)
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 14 (9 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 12 (33 data points).
Split on feature emp_length.n/a. (31, 2)
--------------------------------------------------------------------
Subtree, depth = 13 (31 data points).
Split on feature emp_length.2 years. (30, 1)
--------------------------------------------------------------------
Subtree, depth = 14 (30 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (1 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (2 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (164 data points).
Split on feature emp_length.2 years. (159, 5)
--------------------------------------------------------------------
Subtree, depth = 12 (159 data points).
Split on feature emp_length.3 years. (148, 11)
--------------------------------------------------------------------
Subtree, depth = 13 (148 data points).
Split on feature home_ownership.OWN. (148, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (148 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (11 data points).
Split on feature home_ownership.OWN. (11, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (11 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (5 data points).
Split on feature home_ownership.OWN. (5, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (5 data points).
Split on feature home_ownership.RENT. (5, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (5 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 8 (1353 data points).
Split on feature grade.G. (1353, 0)
--------------------------------------------------------------------
Subtree, depth = 9 (1353 data points).
Split on feature term. 60 months. (0, 1353)
--------------------------------------------------------------------
Subtree, depth = 10 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 10 (1353 data points).
Split on feature home_ownership.MORTGAGE. (710, 643)
--------------------------------------------------------------------
Subtree, depth = 11 (710 data points).
Split on feature home_ownership.OWN. (602, 108)
--------------------------------------------------------------------
Subtree, depth = 12 (602 data points).
Split on feature home_ownership.RENT. (0, 602)
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (602 data points).
Split on feature emp_length.1 year. (565, 37)
--------------------------------------------------------------------
Subtree, depth = 14 (565 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (37 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 12 (108 data points).
Split on feature home_ownership.RENT. (108, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (108 data points).
Split on feature emp_length.1 year. (100, 8)
--------------------------------------------------------------------
Subtree, depth = 14 (100 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (8 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (643 data points).
Split on feature home_ownership.OWN. (643, 0)
--------------------------------------------------------------------
Subtree, depth = 12 (643 data points).
Split on feature home_ownership.RENT. (643, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (643 data points).
Split on feature emp_length.1 year. (602, 41)
--------------------------------------------------------------------
Subtree, depth = 14 (602 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (41 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 9 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 7 (1 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 6 (2133 data points).
Split on feature grade.F. (2133, 0)
--------------------------------------------------------------------
Subtree, depth = 7 (2133 data points).
Split on feature grade.G. (2133, 0)
--------------------------------------------------------------------
Subtree, depth = 8 (2133 data points).
Split on feature term. 60 months. (0, 2133)
--------------------------------------------------------------------
Subtree, depth = 9 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 9 (2133 data points).
Split on feature home_ownership.MORTGAGE. (1045, 1088)
--------------------------------------------------------------------
Subtree, depth = 10 (1045 data points).
Split on feature home_ownership.OTHER. (1044, 1)
--------------------------------------------------------------------
Subtree, depth = 11 (1044 data points).
Split on feature home_ownership.OWN. (879, 165)
--------------------------------------------------------------------
Subtree, depth = 12 (879 data points).
Split on feature home_ownership.RENT. (0, 879)
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (879 data points).
Split on feature emp_length.1 year. (809, 70)
--------------------------------------------------------------------
Subtree, depth = 14 (809 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (70 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 12 (165 data points).
Split on feature emp_length.9 years. (157, 8)
--------------------------------------------------------------------
Subtree, depth = 13 (157 data points).
Split on feature home_ownership.RENT. (157, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (157 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (8 data points).
Split on feature home_ownership.RENT. (8, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (8 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (1 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 10 (1088 data points).
Split on feature home_ownership.OTHER. (1088, 0)
--------------------------------------------------------------------
Subtree, depth = 11 (1088 data points).
Split on feature home_ownership.OWN. (1088, 0)
--------------------------------------------------------------------
Subtree, depth = 12 (1088 data points).
Split on feature home_ownership.RENT. (1088, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (1088 data points).
Split on feature emp_length.1 year. (1035, 53)
--------------------------------------------------------------------
Subtree, depth = 14 (1035 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (53 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 8 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 7 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (2058 data points).
Split on feature grade.E. (2058, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (2058 data points).
Split on feature grade.F. (2058, 0)
--------------------------------------------------------------------
Subtree, depth = 7 (2058 data points).
Split on feature grade.G. (2058, 0)
--------------------------------------------------------------------
Subtree, depth = 8 (2058 data points).
Split on feature term. 60 months. (0, 2058)
--------------------------------------------------------------------
Subtree, depth = 9 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 9 (2058 data points).
Split on feature home_ownership.MORTGAGE. (923, 1135)
--------------------------------------------------------------------
Subtree, depth = 10 (923 data points).
Split on feature home_ownership.OTHER. (922, 1)
--------------------------------------------------------------------
Subtree, depth = 11 (922 data points).
Split on feature home_ownership.OWN. (762, 160)
--------------------------------------------------------------------
Subtree, depth = 12 (762 data points).
Split on feature home_ownership.RENT. (0, 762)
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (762 data points).
Split on feature emp_length.1 year. (704, 58)
--------------------------------------------------------------------
Subtree, depth = 14 (704 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (58 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 12 (160 data points).
Split on feature home_ownership.RENT. (160, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (160 data points).
Split on feature emp_length.1 year. (154, 6)
--------------------------------------------------------------------
Subtree, depth = 14 (154 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (6 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (1 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 10 (1135 data points).
Split on feature home_ownership.OTHER. (1135, 0)
--------------------------------------------------------------------
Subtree, depth = 11 (1135 data points).
Split on feature home_ownership.OWN. (1135, 0)
--------------------------------------------------------------------
Subtree, depth = 12 (1135 data points).
Split on feature home_ownership.RENT. (1135, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (1135 data points).
Split on feature emp_length.1 year. (1096, 39)
--------------------------------------------------------------------
Subtree, depth = 14 (1096 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (39 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 8 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 7 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (2190 data points).
Split on feature grade.D. (2190, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (2190 data points).
Split on feature grade.E. (2190, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (2190 data points).
Split on feature grade.F. (2190, 0)
--------------------------------------------------------------------
Subtree, depth = 7 (2190 data points).
Split on feature grade.G. (2190, 0)
--------------------------------------------------------------------
Subtree, depth = 8 (2190 data points).
Split on feature term. 60 months. (0, 2190)
--------------------------------------------------------------------
Subtree, depth = 9 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 9 (2190 data points).
Split on feature home_ownership.MORTGAGE. (803, 1387)
--------------------------------------------------------------------
Subtree, depth = 10 (803 data points).
Split on feature emp_length.4 years. (746, 57)
--------------------------------------------------------------------
Subtree, depth = 11 (746 data points).
Split on feature home_ownership.OTHER. (746, 0)
--------------------------------------------------------------------
Subtree, depth = 12 (746 data points).
Split on feature home_ownership.OWN. (598, 148)
--------------------------------------------------------------------
Subtree, depth = 13 (598 data points).
Split on feature home_ownership.RENT. (0, 598)
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 14 (598 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 13 (148 data points).
Split on feature emp_length.< 1 year. (137, 11)
--------------------------------------------------------------------
Subtree, depth = 14 (137 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (11 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 12 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (57 data points).
Split on feature home_ownership.OTHER. (57, 0)
--------------------------------------------------------------------
Subtree, depth = 12 (57 data points).
Split on feature home_ownership.OWN. (49, 8)
--------------------------------------------------------------------
Subtree, depth = 13 (49 data points).
Split on feature home_ownership.RENT. (0, 49)
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 14 (49 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 13 (8 data points).
Split on feature home_ownership.RENT. (8, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (8 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 10 (1387 data points).
Split on feature emp_length.6 years. (1313, 74)
--------------------------------------------------------------------
Subtree, depth = 11 (1313 data points).
Split on feature home_ownership.OTHER. (1313, 0)
--------------------------------------------------------------------
Subtree, depth = 12 (1313 data points).
Split on feature home_ownership.OWN. (1313, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (1313 data points).
Split on feature home_ownership.RENT. (1313, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (1313 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (74 data points).
Split on feature home_ownership.OTHER. (74, 0)
--------------------------------------------------------------------
Subtree, depth = 12 (74 data points).
Split on feature home_ownership.OWN. (74, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (74 data points).
Split on feature home_ownership.RENT. (74, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (74 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 8 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 7 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 3 (1048 data points).
Split on feature emp_length.5 years. (969, 79)
--------------------------------------------------------------------
Subtree, depth = 4 (969 data points).
Split on feature grade.C. (969, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (969 data points).
Split on feature grade.D. (969, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (969 data points).
Split on feature grade.E. (969, 0)
--------------------------------------------------------------------
Subtree, depth = 7 (969 data points).
Split on feature grade.F. (969, 0)
--------------------------------------------------------------------
Subtree, depth = 8 (969 data points).
Split on feature grade.G. (969, 0)
--------------------------------------------------------------------
Subtree, depth = 9 (969 data points).
Split on feature term. 60 months. (0, 969)
--------------------------------------------------------------------
Subtree, depth = 10 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 10 (969 data points).
Split on feature home_ownership.MORTGAGE. (367, 602)
--------------------------------------------------------------------
Subtree, depth = 11 (367 data points).
Split on feature home_ownership.OTHER. (367, 0)
--------------------------------------------------------------------
Subtree, depth = 12 (367 data points).
Split on feature home_ownership.OWN. (291, 76)
--------------------------------------------------------------------
Subtree, depth = 13 (291 data points).
Split on feature home_ownership.RENT. (0, 291)
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 14 (291 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 13 (76 data points).
Split on feature emp_length.9 years. (71, 5)
--------------------------------------------------------------------
Subtree, depth = 14 (71 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (5 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 12 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (602 data points).
Split on feature emp_length.9 years. (580, 22)
--------------------------------------------------------------------
Subtree, depth = 12 (580 data points).
Split on feature emp_length.3 years. (545, 35)
--------------------------------------------------------------------
Subtree, depth = 13 (545 data points).
Split on feature emp_length.4 years. (506, 39)
--------------------------------------------------------------------
Subtree, depth = 14 (506 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (39 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 13 (35 data points).
Split on feature home_ownership.OTHER. (35, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (35 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (22 data points).
Split on feature home_ownership.OTHER. (22, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (22 data points).
Split on feature home_ownership.OWN. (22, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (22 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 9 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 8 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 7 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (79 data points).
Split on feature home_ownership.MORTGAGE. (34, 45)
--------------------------------------------------------------------
Subtree, depth = 5 (34 data points).
Split on feature grade.C. (34, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (34 data points).
Split on feature grade.D. (34, 0)
--------------------------------------------------------------------
Subtree, depth = 7 (34 data points).
Split on feature grade.E. (34, 0)
--------------------------------------------------------------------
Subtree, depth = 8 (34 data points).
Split on feature grade.F. (34, 0)
--------------------------------------------------------------------
Subtree, depth = 9 (34 data points).
Split on feature grade.G. (34, 0)
--------------------------------------------------------------------
Subtree, depth = 10 (34 data points).
Split on feature term. 60 months. (0, 34)
--------------------------------------------------------------------
Subtree, depth = 11 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (34 data points).
Split on feature home_ownership.OTHER. (34, 0)
--------------------------------------------------------------------
Subtree, depth = 12 (34 data points).
Split on feature home_ownership.OWN. (25, 9)
--------------------------------------------------------------------
Subtree, depth = 13 (25 data points).
Split on feature home_ownership.RENT. (0, 25)
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 14 (25 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 13 (9 data points).
Split on feature home_ownership.RENT. (9, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (9 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 10 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 9 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 8 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 7 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (45 data points).
Split on feature grade.C. (45, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (45 data points).
Split on feature grade.D. (45, 0)
--------------------------------------------------------------------
Subtree, depth = 7 (45 data points).
Split on feature grade.E. (45, 0)
--------------------------------------------------------------------
Subtree, depth = 8 (45 data points).
Split on feature grade.F. (45, 0)
--------------------------------------------------------------------
Subtree, depth = 9 (45 data points).
Split on feature grade.G. (45, 0)
--------------------------------------------------------------------
Subtree, depth = 10 (45 data points).
Split on feature term. 60 months. (0, 45)
--------------------------------------------------------------------
Subtree, depth = 11 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (45 data points).
Split on feature home_ownership.OTHER. (45, 0)
--------------------------------------------------------------------
Subtree, depth = 12 (45 data points).
Split on feature home_ownership.OWN. (45, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (45 data points).
Split on feature home_ownership.RENT. (45, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (45 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 10 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 9 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 8 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 7 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 2 (101 data points).
Split on feature emp_length.n/a. (96, 5)
--------------------------------------------------------------------
Subtree, depth = 3 (96 data points).
Split on feature emp_length.< 1 year. (85, 11)
--------------------------------------------------------------------
Subtree, depth = 4 (85 data points).
Split on feature grade.B. (85, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (85 data points).
Split on feature grade.C. (85, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (85 data points).
Split on feature grade.D. (85, 0)
--------------------------------------------------------------------
Subtree, depth = 7 (85 data points).
Split on feature grade.E. (85, 0)
--------------------------------------------------------------------
Subtree, depth = 8 (85 data points).
Split on feature grade.F. (85, 0)
--------------------------------------------------------------------
Subtree, depth = 9 (85 data points).
Split on feature grade.G. (85, 0)
--------------------------------------------------------------------
Subtree, depth = 10 (85 data points).
Split on feature term. 60 months. (0, 85)
--------------------------------------------------------------------
Subtree, depth = 11 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (85 data points).
Split on feature home_ownership.MORTGAGE. (26, 59)
--------------------------------------------------------------------
Subtree, depth = 12 (26 data points).
Split on feature emp_length.3 years. (24, 2)
--------------------------------------------------------------------
Subtree, depth = 13 (24 data points).
Split on feature home_ownership.OTHER. (24, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (24 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (2 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (59 data points).
Split on feature home_ownership.OTHER. (59, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (59 data points).
Split on feature home_ownership.OWN. (59, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (59 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 10 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 9 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 8 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 7 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (11 data points).
Split on feature grade.B. (11, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (11 data points).
Split on feature grade.C. (11, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (11 data points).
Split on feature grade.D. (11, 0)
--------------------------------------------------------------------
Subtree, depth = 7 (11 data points).
Split on feature grade.E. (11, 0)
--------------------------------------------------------------------
Subtree, depth = 8 (11 data points).
Split on feature grade.F. (11, 0)
--------------------------------------------------------------------
Subtree, depth = 9 (11 data points).
Split on feature grade.G. (11, 0)
--------------------------------------------------------------------
Subtree, depth = 10 (11 data points).
Split on feature term. 60 months. (0, 11)
--------------------------------------------------------------------
Subtree, depth = 11 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (11 data points).
Split on feature home_ownership.MORTGAGE. (8, 3)
--------------------------------------------------------------------
Subtree, depth = 12 (8 data points).
Split on feature home_ownership.OTHER. (8, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (8 data points).
Split on feature home_ownership.OWN. (6, 2)
--------------------------------------------------------------------
Subtree, depth = 14 (6 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (2 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (3 data points).
Split on feature home_ownership.OTHER. (3, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (3 data points).
Split on feature home_ownership.OWN. (3, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (3 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 10 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 9 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 8 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 7 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 3 (5 data points).
Split on feature grade.B. (5, 0)
--------------------------------------------------------------------
Subtree, depth = 4 (5 data points).
Split on feature grade.C. (5, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (5 data points).
Split on feature grade.D. (5, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (5 data points).
Split on feature grade.E. (5, 0)
--------------------------------------------------------------------
Subtree, depth = 7 (5 data points).
Split on feature grade.F. (5, 0)
--------------------------------------------------------------------
Subtree, depth = 8 (5 data points).
Split on feature grade.G. (5, 0)
--------------------------------------------------------------------
Subtree, depth = 9 (5 data points).
Split on feature term. 60 months. (0, 5)
--------------------------------------------------------------------
Subtree, depth = 10 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 10 (5 data points).
Split on feature home_ownership.MORTGAGE. (2, 3)
--------------------------------------------------------------------
Subtree, depth = 11 (2 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (3 data points).
Split on feature home_ownership.OTHER. (3, 0)
--------------------------------------------------------------------
Subtree, depth = 12 (3 data points).
Split on feature home_ownership.OWN. (3, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (3 data points).
Split on feature home_ownership.RENT. (3, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (3 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 9 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 8 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 7 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 1 (28001 data points).
Split on feature grade.D. (23300, 4701)
--------------------------------------------------------------------
Subtree, depth = 2 (23300 data points).
Split on feature grade.E. (22024, 1276)
--------------------------------------------------------------------
Subtree, depth = 3 (22024 data points).
Split on feature grade.F. (21666, 358)
--------------------------------------------------------------------
Subtree, depth = 4 (21666 data points).
Split on feature emp_length.n/a. (20734, 932)
--------------------------------------------------------------------
Subtree, depth = 5 (20734 data points).
Split on feature grade.G. (20638, 96)
--------------------------------------------------------------------
Subtree, depth = 6 (20638 data points).
Split on feature grade.A. (15839, 4799)
--------------------------------------------------------------------
Subtree, depth = 7 (15839 data points).
Split on feature home_ownership.OTHER. (15811, 28)
--------------------------------------------------------------------
Subtree, depth = 8 (15811 data points).
Split on feature grade.B. (6894, 8917)
--------------------------------------------------------------------
Subtree, depth = 9 (6894 data points).
Split on feature home_ownership.MORTGAGE. (4102, 2792)
--------------------------------------------------------------------
Subtree, depth = 10 (4102 data points).
Split on feature emp_length.4 years. (3768, 334)
--------------------------------------------------------------------
Subtree, depth = 11 (3768 data points).
Split on feature emp_length.9 years. (3639, 129)
--------------------------------------------------------------------
Subtree, depth = 12 (3639 data points).
Split on feature emp_length.2 years. (3123, 516)
--------------------------------------------------------------------
Subtree, depth = 13 (3123 data points).
Split on feature grade.C. (0, 3123)
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 14 (3123 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 13 (516 data points).
Split on feature home_ownership.OWN. (458, 58)
--------------------------------------------------------------------
Subtree, depth = 14 (458 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (58 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 12 (129 data points).
Split on feature home_ownership.OWN. (113, 16)
--------------------------------------------------------------------
Subtree, depth = 13 (113 data points).
Split on feature grade.C. (0, 113)
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 14 (113 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 13 (16 data points).
Split on feature grade.C. (0, 16)
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 14 (16 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 11 (334 data points).
Split on feature grade.C. (0, 334)
--------------------------------------------------------------------
Subtree, depth = 12 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (334 data points).
Split on feature term. 60 months. (334, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (334 data points).
Split on feature home_ownership.OWN. (286, 48)
--------------------------------------------------------------------
Subtree, depth = 14 (286 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (48 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 10 (2792 data points).
Split on feature emp_length.2 years. (2562, 230)
--------------------------------------------------------------------
Subtree, depth = 11 (2562 data points).
Split on feature emp_length.5 years. (2335, 227)
--------------------------------------------------------------------
Subtree, depth = 12 (2335 data points).
Split on feature grade.C. (0, 2335)
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (2335 data points).
Split on feature term. 60 months. (2335, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (2335 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (227 data points).
Split on feature grade.C. (0, 227)
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (227 data points).
Split on feature term. 60 months. (227, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (227 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (230 data points).
Split on feature grade.C. (0, 230)
--------------------------------------------------------------------
Subtree, depth = 12 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (230 data points).
Split on feature term. 60 months. (230, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (230 data points).
Split on feature home_ownership.OWN. (230, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (230 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 9 (8917 data points).
Split on feature grade.C. (8917, 0)
--------------------------------------------------------------------
Subtree, depth = 10 (8917 data points).
Split on feature term. 60 months. (8917, 0)
--------------------------------------------------------------------
Subtree, depth = 11 (8917 data points).
Split on feature home_ownership.MORTGAGE. (4748, 4169)
--------------------------------------------------------------------
Subtree, depth = 12 (4748 data points).
Split on feature home_ownership.OWN. (4089, 659)
--------------------------------------------------------------------
Subtree, depth = 13 (4089 data points).
Split on feature home_ownership.RENT. (0, 4089)
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 14 (4089 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 13 (659 data points).
Split on feature home_ownership.RENT. (659, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (659 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (4169 data points).
Split on feature home_ownership.OWN. (4169, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (4169 data points).
Split on feature home_ownership.RENT. (4169, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (4169 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 10 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 8 (28 data points).
Split on feature grade.B. (11, 17)
--------------------------------------------------------------------
Subtree, depth = 9 (11 data points).
Split on feature emp_length.6 years. (10, 1)
--------------------------------------------------------------------
Subtree, depth = 10 (10 data points).
Split on feature grade.C. (0, 10)
--------------------------------------------------------------------
Subtree, depth = 11 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (10 data points).
Split on feature term. 60 months. (10, 0)
--------------------------------------------------------------------
Subtree, depth = 12 (10 data points).
Split on feature home_ownership.MORTGAGE. (10, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (10 data points).
Split on feature home_ownership.OWN. (10, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (10 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 10 (1 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 9 (17 data points).
Split on feature emp_length.1 year. (16, 1)
--------------------------------------------------------------------
Subtree, depth = 10 (16 data points).
Split on feature emp_length.3 years. (15, 1)
--------------------------------------------------------------------
Subtree, depth = 11 (15 data points).
Split on feature emp_length.4 years. (14, 1)
--------------------------------------------------------------------
Subtree, depth = 12 (14 data points).
Split on feature emp_length.< 1 year. (13, 1)
--------------------------------------------------------------------
Subtree, depth = 13 (13 data points).
Split on feature grade.C. (13, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (13 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (1 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (1 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (1 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 10 (1 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 7 (4799 data points).
Split on feature grade.B. (4799, 0)
--------------------------------------------------------------------
Subtree, depth = 8 (4799 data points).
Split on feature grade.C. (4799, 0)
--------------------------------------------------------------------
Subtree, depth = 9 (4799 data points).
Split on feature term. 60 months. (4799, 0)
--------------------------------------------------------------------
Subtree, depth = 10 (4799 data points).
Split on feature home_ownership.MORTGAGE. (2163, 2636)
--------------------------------------------------------------------
Subtree, depth = 11 (2163 data points).
Split on feature home_ownership.OTHER. (2154, 9)
--------------------------------------------------------------------
Subtree, depth = 12 (2154 data points).
Split on feature home_ownership.OWN. (1753, 401)
--------------------------------------------------------------------
Subtree, depth = 13 (1753 data points).
Split on feature home_ownership.RENT. (0, 1753)
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 14 (1753 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 13 (401 data points).
Split on feature home_ownership.RENT. (401, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (401 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (9 data points).
Split on feature emp_length.3 years. (8, 1)
--------------------------------------------------------------------
Subtree, depth = 13 (8 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (1 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (2636 data points).
Split on feature home_ownership.OTHER. (2636, 0)
--------------------------------------------------------------------
Subtree, depth = 12 (2636 data points).
Split on feature home_ownership.OWN. (2636, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (2636 data points).
Split on feature home_ownership.RENT. (2636, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (2636 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 10 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 9 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 8 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 6 (96 data points).
Split on feature grade.A. (96, 0)
--------------------------------------------------------------------
Subtree, depth = 7 (96 data points).
Split on feature grade.B. (96, 0)
--------------------------------------------------------------------
Subtree, depth = 8 (96 data points).
Split on feature grade.C. (96, 0)
--------------------------------------------------------------------
Subtree, depth = 9 (96 data points).
Split on feature term. 60 months. (96, 0)
--------------------------------------------------------------------
Subtree, depth = 10 (96 data points).
Split on feature home_ownership.MORTGAGE. (44, 52)
--------------------------------------------------------------------
Subtree, depth = 11 (44 data points).
Split on feature emp_length.3 years. (43, 1)
--------------------------------------------------------------------
Subtree, depth = 12 (43 data points).
Split on feature emp_length.7 years. (42, 1)
--------------------------------------------------------------------
Subtree, depth = 13 (42 data points).
Split on feature emp_length.8 years. (41, 1)
--------------------------------------------------------------------
Subtree, depth = 14 (41 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (1 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (1 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (1 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (52 data points).
Split on feature emp_length.2 years. (47, 5)
--------------------------------------------------------------------
Subtree, depth = 12 (47 data points).
Split on feature home_ownership.OTHER. (47, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (47 data points).
Split on feature home_ownership.OWN. (47, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (47 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (5 data points).
Split on feature home_ownership.OTHER. (5, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (5 data points).
Split on feature home_ownership.OWN. (5, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (5 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 10 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 9 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 8 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 7 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (932 data points).
Split on feature grade.A. (702, 230)
--------------------------------------------------------------------
Subtree, depth = 6 (702 data points).
Split on feature home_ownership.OTHER. (701, 1)
--------------------------------------------------------------------
Subtree, depth = 7 (701 data points).
Split on feature grade.B. (317, 384)
--------------------------------------------------------------------
Subtree, depth = 8 (317 data points).
Split on feature grade.C. (1, 316)
--------------------------------------------------------------------
Subtree, depth = 9 (1 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 9 (316 data points).
Split on feature grade.G. (316, 0)
--------------------------------------------------------------------
Subtree, depth = 10 (316 data points).
Split on feature term. 60 months. (316, 0)
--------------------------------------------------------------------
Subtree, depth = 11 (316 data points).
Split on feature home_ownership.MORTGAGE. (189, 127)
--------------------------------------------------------------------
Subtree, depth = 12 (189 data points).
Split on feature home_ownership.OWN. (139, 50)
--------------------------------------------------------------------
Subtree, depth = 13 (139 data points).
Split on feature home_ownership.RENT. (0, 139)
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 14 (139 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 13 (50 data points).
Split on feature home_ownership.RENT. (50, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (50 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (127 data points).
Split on feature home_ownership.OWN. (127, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (127 data points).
Split on feature home_ownership.RENT. (127, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (127 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 10 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 8 (384 data points).
Split on feature grade.C. (384, 0)
--------------------------------------------------------------------
Subtree, depth = 9 (384 data points).
Split on feature grade.G. (384, 0)
--------------------------------------------------------------------
Subtree, depth = 10 (384 data points).
Split on feature term. 60 months. (384, 0)
--------------------------------------------------------------------
Subtree, depth = 11 (384 data points).
Split on feature home_ownership.MORTGAGE. (210, 174)
--------------------------------------------------------------------
Subtree, depth = 12 (210 data points).
Split on feature home_ownership.OWN. (148, 62)
--------------------------------------------------------------------
Subtree, depth = 13 (148 data points).
Split on feature home_ownership.RENT. (0, 148)
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 14 (148 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 13 (62 data points).
Split on feature home_ownership.RENT. (62, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (62 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (174 data points).
Split on feature home_ownership.OWN. (174, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (174 data points).
Split on feature home_ownership.RENT. (174, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (174 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 10 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 9 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 7 (1 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 6 (230 data points).
Split on feature grade.B. (230, 0)
--------------------------------------------------------------------
Subtree, depth = 7 (230 data points).
Split on feature grade.C. (230, 0)
--------------------------------------------------------------------
Subtree, depth = 8 (230 data points).
Split on feature grade.G. (230, 0)
--------------------------------------------------------------------
Subtree, depth = 9 (230 data points).
Split on feature term. 60 months. (230, 0)
--------------------------------------------------------------------
Subtree, depth = 10 (230 data points).
Split on feature home_ownership.MORTGAGE. (119, 111)
--------------------------------------------------------------------
Subtree, depth = 11 (119 data points).
Split on feature home_ownership.OTHER. (119, 0)
--------------------------------------------------------------------
Subtree, depth = 12 (119 data points).
Split on feature home_ownership.OWN. (71, 48)
--------------------------------------------------------------------
Subtree, depth = 13 (71 data points).
Split on feature home_ownership.RENT. (0, 71)
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 14 (71 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 13 (48 data points).
Split on feature home_ownership.RENT. (48, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (48 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (111 data points).
Split on feature home_ownership.OTHER. (111, 0)
--------------------------------------------------------------------
Subtree, depth = 12 (111 data points).
Split on feature home_ownership.OWN. (111, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (111 data points).
Split on feature home_ownership.RENT. (111, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (111 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 10 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 9 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 8 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 7 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (358 data points).
Split on feature emp_length.8 years. (347, 11)
--------------------------------------------------------------------
Subtree, depth = 5 (347 data points).
Split on feature grade.A. (347, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (347 data points).
Split on feature grade.B. (347, 0)
--------------------------------------------------------------------
Subtree, depth = 7 (347 data points).
Split on feature grade.C. (347, 0)
--------------------------------------------------------------------
Subtree, depth = 8 (347 data points).
Split on feature grade.G. (347, 0)
--------------------------------------------------------------------
Subtree, depth = 9 (347 data points).
Split on feature term. 60 months. (347, 0)
--------------------------------------------------------------------
Subtree, depth = 10 (347 data points).
Split on feature home_ownership.MORTGAGE. (237, 110)
--------------------------------------------------------------------
Subtree, depth = 11 (237 data points).
Split on feature home_ownership.OTHER. (235, 2)
--------------------------------------------------------------------
Subtree, depth = 12 (235 data points).
Split on feature home_ownership.OWN. (203, 32)
--------------------------------------------------------------------
Subtree, depth = 13 (203 data points).
Split on feature home_ownership.RENT. (0, 203)
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 14 (203 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 13 (32 data points).
Split on feature home_ownership.RENT. (32, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (32 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (2 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (110 data points).
Split on feature home_ownership.OTHER. (110, 0)
--------------------------------------------------------------------
Subtree, depth = 12 (110 data points).
Split on feature home_ownership.OWN. (110, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (110 data points).
Split on feature home_ownership.RENT. (110, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (110 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 10 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 9 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 8 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 7 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (11 data points).
Split on feature home_ownership.OWN. (9, 2)
--------------------------------------------------------------------
Subtree, depth = 6 (9 data points).
Split on feature grade.A. (9, 0)
--------------------------------------------------------------------
Subtree, depth = 7 (9 data points).
Split on feature grade.B. (9, 0)
--------------------------------------------------------------------
Subtree, depth = 8 (9 data points).
Split on feature grade.C. (9, 0)
--------------------------------------------------------------------
Subtree, depth = 9 (9 data points).
Split on feature grade.G. (9, 0)
--------------------------------------------------------------------
Subtree, depth = 10 (9 data points).
Split on feature term. 60 months. (9, 0)
--------------------------------------------------------------------
Subtree, depth = 11 (9 data points).
Split on feature home_ownership.MORTGAGE. (6, 3)
--------------------------------------------------------------------
Subtree, depth = 12 (6 data points).
Split on feature home_ownership.OTHER. (6, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (6 data points).
Split on feature home_ownership.RENT. (0, 6)
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 14 (6 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (3 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 10 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 9 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 8 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 7 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 6 (2 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 3 (1276 data points).
Split on feature grade.A. (1276, 0)
--------------------------------------------------------------------
Subtree, depth = 4 (1276 data points).
Split on feature grade.B. (1276, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (1276 data points).
Split on feature grade.C. (1276, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (1276 data points).
Split on feature grade.F. (1276, 0)
--------------------------------------------------------------------
Subtree, depth = 7 (1276 data points).
Split on feature grade.G. (1276, 0)
--------------------------------------------------------------------
Subtree, depth = 8 (1276 data points).
Split on feature term. 60 months. (1276, 0)
--------------------------------------------------------------------
Subtree, depth = 9 (1276 data points).
Split on feature home_ownership.MORTGAGE. (855, 421)
--------------------------------------------------------------------
Subtree, depth = 10 (855 data points).
Split on feature home_ownership.OTHER. (849, 6)
--------------------------------------------------------------------
Subtree, depth = 11 (849 data points).
Split on feature home_ownership.OWN. (737, 112)
--------------------------------------------------------------------
Subtree, depth = 12 (737 data points).
Split on feature home_ownership.RENT. (0, 737)
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (737 data points).
Split on feature emp_length.1 year. (670, 67)
--------------------------------------------------------------------
Subtree, depth = 14 (670 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (67 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 12 (112 data points).
Split on feature home_ownership.RENT. (112, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (112 data points).
Split on feature emp_length.1 year. (102, 10)
--------------------------------------------------------------------
Subtree, depth = 14 (102 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (10 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (6 data points).
Split on feature home_ownership.OWN. (6, 0)
--------------------------------------------------------------------
Subtree, depth = 12 (6 data points).
Split on feature home_ownership.RENT. (6, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (6 data points).
Split on feature emp_length.1 year. (6, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (6 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 10 (421 data points).
Split on feature emp_length.6 years. (408, 13)
--------------------------------------------------------------------
Subtree, depth = 11 (408 data points).
Split on feature home_ownership.OTHER. (408, 0)
--------------------------------------------------------------------
Subtree, depth = 12 (408 data points).
Split on feature home_ownership.OWN. (408, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (408 data points).
Split on feature home_ownership.RENT. (408, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (408 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (13 data points).
Split on feature home_ownership.OTHER. (13, 0)
--------------------------------------------------------------------
Subtree, depth = 12 (13 data points).
Split on feature home_ownership.OWN. (13, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (13 data points).
Split on feature home_ownership.RENT. (13, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (13 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 9 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 8 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 7 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 2 (4701 data points).
Split on feature grade.A. (4701, 0)
--------------------------------------------------------------------
Subtree, depth = 3 (4701 data points).
Split on feature grade.B. (4701, 0)
--------------------------------------------------------------------
Subtree, depth = 4 (4701 data points).
Split on feature grade.C. (4701, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (4701 data points).
Split on feature grade.E. (4701, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (4701 data points).
Split on feature grade.F. (4701, 0)
--------------------------------------------------------------------
Subtree, depth = 7 (4701 data points).
Split on feature grade.G. (4701, 0)
--------------------------------------------------------------------
Subtree, depth = 8 (4701 data points).
Split on feature term. 60 months. (4701, 0)
--------------------------------------------------------------------
Subtree, depth = 9 (4701 data points).
Split on feature home_ownership.MORTGAGE. (3047, 1654)
--------------------------------------------------------------------
Subtree, depth = 10 (3047 data points).
Split on feature home_ownership.OTHER. (3037, 10)
--------------------------------------------------------------------
Subtree, depth = 11 (3037 data points).
Split on feature home_ownership.OWN. (2633, 404)
--------------------------------------------------------------------
Subtree, depth = 12 (2633 data points).
Split on feature home_ownership.RENT. (0, 2633)
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (2633 data points).
Split on feature emp_length.1 year. (2392, 241)
--------------------------------------------------------------------
Subtree, depth = 14 (2392 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (241 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 12 (404 data points).
Split on feature home_ownership.RENT. (404, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (404 data points).
Split on feature emp_length.1 year. (374, 30)
--------------------------------------------------------------------
Subtree, depth = 14 (374 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (30 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (10 data points).
Split on feature home_ownership.OWN. (10, 0)
--------------------------------------------------------------------
Subtree, depth = 12 (10 data points).
Split on feature home_ownership.RENT. (10, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (10 data points).
Split on feature emp_length.1 year. (9, 1)
--------------------------------------------------------------------
Subtree, depth = 14 (9 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (1 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 10 (1654 data points).
Split on feature emp_length.5 years. (1532, 122)
--------------------------------------------------------------------
Subtree, depth = 11 (1532 data points).
Split on feature emp_length.3 years. (1414, 118)
--------------------------------------------------------------------
Subtree, depth = 12 (1414 data points).
Split on feature emp_length.9 years. (1351, 63)
--------------------------------------------------------------------
Subtree, depth = 13 (1351 data points).
Split on feature home_ownership.OTHER. (1351, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (1351 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (63 data points).
Split on feature home_ownership.OTHER. (63, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (63 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (118 data points).
Split on feature home_ownership.OTHER. (118, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (118 data points).
Split on feature home_ownership.OWN. (118, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (118 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 11 (122 data points).
Split on feature home_ownership.OTHER. (122, 0)
--------------------------------------------------------------------
Subtree, depth = 12 (122 data points).
Split on feature home_ownership.OWN. (122, 0)
--------------------------------------------------------------------
Subtree, depth = 13 (122 data points).
Split on feature home_ownership.RENT. (122, 0)
--------------------------------------------------------------------
Subtree, depth = 14 (122 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 14 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 13 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 12 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 9 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 8 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 7 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 3 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
###Markdown
Evaluating the models

Let us evaluate the models on the **train** and **validation** data. Let us start by evaluating the classification error on the training data:
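For reference, the classification error reported below is simply the fraction of misclassified loans. Here is a minimal sketch of such a metric, assuming the `classify` helper defined earlier in this notebook and an SFrame-style row-wise `apply`; the function name itself is hypothetical, not the notebook's exact implementation:

```python
# Hypothetical sketch of a classification-error metric; assumes `classify(tree, x)`
# from earlier in this notebook returns the predicted +1/-1 label for a row.
def classification_error_sketch(tree, data, target='safe_loans'):
    # Predict a label for every row of the data.
    predictions = data.apply(lambda x: classify(tree, x))
    # Error = fraction of rows where the prediction disagrees with the true label.
    num_mistakes = (predictions != data[target]).sum()
    return float(num_mistakes) / float(len(data))
```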
###Code
print "Training data, classification error (model 1):", evaluate_classification_error(model_1, train_data)
print "Training data, classification error (model 2):", evaluate_classification_error(model_2, train_data)
print "Training data, classification error (model 3):", evaluate_classification_error(model_3, train_data)
###Output
Training data, classification error (model 1): 0.125
Training data, classification error (model 2): 0.0603911454975
Training data, classification error (model 3): 0.0355415860735
###Markdown
Now evaluate the classification error on the validation data.
###Code
print "Training data, classification error (model 1):", evaluate_classification_error(model_1, validation_set)
print "Training data, classification error (model 2):", evaluate_classification_error(model_2, validation_set)
print "Training data, classification error (model 3):", evaluate_classification_error(model_3, validation_set)
###Output
Validation data, classification error (model 1): 0.12623869022
Validation data, classification error (model 2): 0.0606419646704
Validation data, classification error (model 3): 0.0297285652736
###Markdown
**Quiz Question:** Which tree has the smallest error on the validation data?

**Quiz Question:** Does the tree with the smallest error on the training data also have the smallest error on the validation data?

**Quiz Question:** Is it always true that the tree with the lowest classification error on the **training** set will also have the lowest classification error on the **validation** set?

Measuring the complexity of the tree

Recall from the lecture that deeper trees are more complex. We will measure the complexity of a tree as

```
complexity(T) = number of leaves in the tree T
```

Here, we provide a function `count_leaves` that counts the number of leaves in a tree. Using this implementation, compute the number of leaves in `model_1`, `model_2`, and `model_3`.
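For intuition about the dictionary format that `count_leaves` (defined in the next cell) walks, here is a hypothetical two-leaf stump; the keys `'is_leaf'`, `'left'`, and `'right'` are assumed from the tree-building code earlier in this notebook:

```python
# A hypothetical stump in the assumed dictionary format: one split, two leaves.
tiny_tree = {'is_leaf': False,
             'left':  {'is_leaf': True, 'left': None, 'right': None},
             'right': {'is_leaf': True, 'left': None, 'right': None}}
# count_leaves(tiny_tree) should return 2 once the function below is defined.
```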
###Code
def count_leaves(tree):
    # A leaf counts as 1; otherwise, recursively sum the leaves of both subtrees.
    if tree['is_leaf']:
        return 1
    return count_leaves(tree['left']) + count_leaves(tree['right'])
###Output
_____no_output_____
###Markdown
Compute the number of leaves in `model_1`, `model_2`, and `model_3`.
###Code
print count_leaves(model_1)
print count_leaves(model_2)
print count_leaves(model_3)
###Output
4
41
341
###Markdown
**Quiz question:** Which tree has the largest complexity?

**Quiz question:** Is it always true that the most complex tree will have the lowest classification error on the **validation_set**?

Exploring the effect of min_error

We will compare three models trained with different values of the stopping criterion. We intentionally picked models at the extreme ends (**negative**, **just right**, and **too positive**).

Train three models with these parameters:

1. **model_4**: `min_error_reduction = -1` (ignoring this early stopping condition)
2. **model_5**: `min_error_reduction = 0` (just right)
3. **model_6**: `min_error_reduction = 5` (too positive)

For each of these three, we set `max_depth = 6` and `min_node_size = 0`; a sketch of the error-reduction check these settings control appears below.

**Note:** Each tree can take up to 30 seconds to train.
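As a reminder of what `min_error_reduction` controls: before accepting a split, the tree builder compares the node's classification error with the weighted error after the split, and creates a leaf when the improvement does not exceed the threshold. A minimal sketch of that check follows; the helper names here are hypothetical, not the notebook's exact implementation:

```python
# Hypothetical sketch of the min_error_reduction early-stopping check.
def error_reduction_sketch(error_before_split, error_after_split):
    # How much the split improves the classification error of the node.
    return error_before_split - error_after_split

# Conceptually, inside the tree builder:
#   error_before = mistakes_at_node / float(num_points)
#   error_after  = (left_mistakes + right_mistakes) / float(num_points)
#   if error_reduction_sketch(error_before, error_after) <= min_error_reduction:
#       return a leaf instead of splitting further
```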
###Code
model_4 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=-1)
model_5 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=0)
model_6 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=5)
###Output
--------------------------------------------------------------------
Subtree, depth = 0 (37224 data points).
Split on feature term. 36 months. (9223, 28001)
--------------------------------------------------------------------
Subtree, depth = 1 (9223 data points).
Split on feature grade.A. (9122, 101)
--------------------------------------------------------------------
Subtree, depth = 2 (9122 data points).
Split on feature grade.B. (8074, 1048)
--------------------------------------------------------------------
Subtree, depth = 3 (8074 data points).
Split on feature grade.C. (5884, 2190)
--------------------------------------------------------------------
Subtree, depth = 4 (5884 data points).
Split on feature grade.D. (3826, 2058)
--------------------------------------------------------------------
Subtree, depth = 5 (3826 data points).
Split on feature grade.E. (1693, 2133)
--------------------------------------------------------------------
Subtree, depth = 6 (1693 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (2133 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 5 (2058 data points).
Split on feature grade.E. (2058, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (2058 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (2190 data points).
Split on feature grade.D. (2190, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (2190 data points).
Split on feature grade.E. (2190, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (2190 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 3 (1048 data points).
Split on feature emp_length.5 years. (969, 79)
--------------------------------------------------------------------
Subtree, depth = 4 (969 data points).
Split on feature grade.C. (969, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (969 data points).
Split on feature grade.D. (969, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (969 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (79 data points).
Split on feature home_ownership.MORTGAGE. (34, 45)
--------------------------------------------------------------------
Subtree, depth = 5 (34 data points).
Split on feature grade.C. (34, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (34 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (45 data points).
Split on feature grade.C. (45, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (45 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 2 (101 data points).
Split on feature emp_length.n/a. (96, 5)
--------------------------------------------------------------------
Subtree, depth = 3 (96 data points).
Split on feature emp_length.< 1 year. (85, 11)
--------------------------------------------------------------------
Subtree, depth = 4 (85 data points).
Split on feature grade.B. (85, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (85 data points).
Split on feature grade.C. (85, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (85 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (11 data points).
Split on feature grade.B. (11, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (11 data points).
Split on feature grade.C. (11, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (11 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 3 (5 data points).
Split on feature grade.B. (5, 0)
--------------------------------------------------------------------
Subtree, depth = 4 (5 data points).
Split on feature grade.C. (5, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (5 data points).
Split on feature grade.D. (5, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (5 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 1 (28001 data points).
Split on feature grade.D. (23300, 4701)
--------------------------------------------------------------------
Subtree, depth = 2 (23300 data points).
Split on feature grade.E. (22024, 1276)
--------------------------------------------------------------------
Subtree, depth = 3 (22024 data points).
Split on feature grade.F. (21666, 358)
--------------------------------------------------------------------
Subtree, depth = 4 (21666 data points).
Split on feature emp_length.n/a. (20734, 932)
--------------------------------------------------------------------
Subtree, depth = 5 (20734 data points).
Split on feature grade.G. (20638, 96)
--------------------------------------------------------------------
Subtree, depth = 6 (20638 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (96 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 5 (932 data points).
Split on feature grade.A. (702, 230)
--------------------------------------------------------------------
Subtree, depth = 6 (702 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (230 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 4 (358 data points).
Split on feature emp_length.8 years. (347, 11)
--------------------------------------------------------------------
Subtree, depth = 5 (347 data points).
Split on feature grade.A. (347, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (347 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (11 data points).
Split on feature home_ownership.OWN. (9, 2)
--------------------------------------------------------------------
Subtree, depth = 6 (9 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (2 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 3 (1276 data points).
Split on feature grade.A. (1276, 0)
--------------------------------------------------------------------
Subtree, depth = 4 (1276 data points).
Split on feature grade.B. (1276, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (1276 data points).
Split on feature grade.C. (1276, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (1276 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 2 (4701 data points).
Split on feature grade.A. (4701, 0)
--------------------------------------------------------------------
Subtree, depth = 3 (4701 data points).
Split on feature grade.B. (4701, 0)
--------------------------------------------------------------------
Subtree, depth = 4 (4701 data points).
Split on feature grade.C. (4701, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (4701 data points).
Split on feature grade.E. (4701, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (4701 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 3 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 0 (37224 data points).
Split on feature term. 36 months. (9223, 28001)
--------------------------------------------------------------------
Subtree, depth = 1 (9223 data points).
Split on feature grade.A. (9122, 101)
--------------------------------------------------------------------
Subtree, depth = 2 (9122 data points).
Early stopping condition 3 reached. Minimum error reduction.
--------------------------------------------------------------------
Subtree, depth = 2 (101 data points).
Split on feature emp_length.n/a. (96, 5)
--------------------------------------------------------------------
Subtree, depth = 3 (96 data points).
Split on feature emp_length.< 1 year. (85, 11)
--------------------------------------------------------------------
Subtree, depth = 4 (85 data points).
Early stopping condition 3 reached. Minimum error reduction.
--------------------------------------------------------------------
Subtree, depth = 4 (11 data points).
Early stopping condition 3 reached. Minimum error reduction.
--------------------------------------------------------------------
Subtree, depth = 3 (5 data points).
Early stopping condition 3 reached. Minimum error reduction.
--------------------------------------------------------------------
Subtree, depth = 1 (28001 data points).
Split on feature grade.D. (23300, 4701)
--------------------------------------------------------------------
Subtree, depth = 2 (23300 data points).
Split on feature grade.E. (22024, 1276)
--------------------------------------------------------------------
Subtree, depth = 3 (22024 data points).
Split on feature grade.F. (21666, 358)
--------------------------------------------------------------------
Subtree, depth = 4 (21666 data points).
Split on feature emp_length.n/a. (20734, 932)
--------------------------------------------------------------------
Subtree, depth = 5 (20734 data points).
Split on feature grade.G. (20638, 96)
--------------------------------------------------------------------
Subtree, depth = 6 (20638 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (96 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 5 (932 data points).
Split on feature grade.A. (702, 230)
--------------------------------------------------------------------
Subtree, depth = 6 (702 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (230 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 4 (358 data points).
Split on feature emp_length.8 years. (347, 11)
--------------------------------------------------------------------
Subtree, depth = 5 (347 data points).
Early stopping condition 3 reached. Minimum error reduction.
--------------------------------------------------------------------
Subtree, depth = 5 (11 data points).
Split on feature home_ownership.OWN. (9, 2)
--------------------------------------------------------------------
Subtree, depth = 6 (9 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (2 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 3 (1276 data points).
Early stopping condition 3 reached. Minimum error reduction.
--------------------------------------------------------------------
Subtree, depth = 2 (4701 data points).
Early stopping condition 3 reached. Minimum error reduction.
--------------------------------------------------------------------
Subtree, depth = 0 (37224 data points).
Early stopping condition 3 reached. Minimum error reduction.
###Markdown
Calculate the classification error of each model (**model_4**, **model_5**, and **model_6**) on the validation set.
###Code
print "Validation data, classification error (model 4):", evaluate_classification_error(model_4, validation_set)
print "Validation data, classification error (model 5):", evaluate_classification_error(model_5, validation_set)
print "Validation data, classification error (model 6):", evaluate_classification_error(model_6, validation_set)
###Output
Validation data, classification error (model 4): 0.0606419646704
Validation data, classification error (model 5): 0.0597802671262
Validation data, classification error (model 6): 0.503446790177
###Markdown
Using the `count_leaves` function, compute the number of leaves in each of the models (**model_4**, **model_5**, and **model_6**).
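For reference, a minimal, self-contained sketch of what `count_leaves` does, assuming the nested-dictionary tree representation (`'is_leaf'`, `'left'`, `'right'`) produced by `decision_tree_create`:

```python
def count_leaves(tree):
    # a leaf contributes 1; an internal node contributes the leaves of both subtrees
    if tree['is_leaf']:
        return 1
    return count_leaves(tree['left']) + count_leaves(tree['right'])

# toy example: a root whose two children are leaves -> 2 leaves
toy_tree = {'is_leaf': False,
            'left':  {'is_leaf': True, 'left': None, 'right': None},
            'right': {'is_leaf': True, 'left': None, 'right': None}}
print(count_leaves(toy_tree))   # 2
```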
###Code
print count_leaves(model_4)
print count_leaves(model_5)
print count_leaves(model_6)
###Output
41
13
1
###Markdown
**Quiz Question:** Using the complexity definition above, which model (**model_4**, **model_5**, or **model_6**) has the largest complexity?Did this match your expectation?**Quiz Question:** **model_4** and **model_5** have similar classification error on the validation set but **model_5** has lower complexity. Should you pick **model_5** over **model_4**? Exploring the effect of min_node_sizeWe will compare three models trained with different values of the stopping criterion. Again, we intentionally picked models at the extreme ends (**too small**, **just right**, and **too large**).Train three models with these parameters:1. **model_7**: min_node_size = 0 (too small)2. **model_8**: min_node_size = 2000 (just right)3. **model_9**: min_node_size = 50000 (too large)For each of these three, we set `max_depth = 6`, and `min_error_reduction = -1`. **Note:** Each tree can take up to 30 seconds to train.
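For reference, a minimal sketch of the minimum-node-size test behind early stopping condition 2 (the helper name is an assumption; the notebook defines its own version earlier):

```python
def reached_minimum_node_size(data, min_node_size):
    # stop splitting once a node holds min_node_size points or fewer
    return len(data) <= min_node_size

# e.g. a node with 1048 points under min_node_size = 2000 stops splitting:
print(reached_minimum_node_size(list(range(1048)), 2000))   # True
```

With `min_node_size = 0` the test can only fire on empty nodes, while `min_node_size = 50000` exceeds the 37224 training points and therefore fires at the root, which is why **model_9** collapses to a single leaf.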
###Code
model_7 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=-1)
model_8 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size =2000, min_error_reduction=-1)
model_9 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 50000, min_error_reduction=-1)
###Output
--------------------------------------------------------------------
Subtree, depth = 0 (37224 data points).
Split on feature term. 36 months. (9223, 28001)
--------------------------------------------------------------------
Subtree, depth = 1 (9223 data points).
Split on feature grade.A. (9122, 101)
--------------------------------------------------------------------
Subtree, depth = 2 (9122 data points).
Split on feature grade.B. (8074, 1048)
--------------------------------------------------------------------
Subtree, depth = 3 (8074 data points).
Split on feature grade.C. (5884, 2190)
--------------------------------------------------------------------
Subtree, depth = 4 (5884 data points).
Split on feature grade.D. (3826, 2058)
--------------------------------------------------------------------
Subtree, depth = 5 (3826 data points).
Split on feature grade.E. (1693, 2133)
--------------------------------------------------------------------
Subtree, depth = 6 (1693 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (2133 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 5 (2058 data points).
Split on feature grade.E. (2058, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (2058 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (2190 data points).
Split on feature grade.D. (2190, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (2190 data points).
Split on feature grade.E. (2190, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (2190 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 3 (1048 data points).
Split on feature emp_length.5 years. (969, 79)
--------------------------------------------------------------------
Subtree, depth = 4 (969 data points).
Split on feature grade.C. (969, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (969 data points).
Split on feature grade.D. (969, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (969 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (79 data points).
Split on feature home_ownership.MORTGAGE. (34, 45)
--------------------------------------------------------------------
Subtree, depth = 5 (34 data points).
Split on feature grade.C. (34, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (34 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (45 data points).
Split on feature grade.C. (45, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (45 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 2 (101 data points).
Split on feature emp_length.n/a. (96, 5)
--------------------------------------------------------------------
Subtree, depth = 3 (96 data points).
Split on feature emp_length.< 1 year. (85, 11)
--------------------------------------------------------------------
Subtree, depth = 4 (85 data points).
Split on feature grade.B. (85, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (85 data points).
Split on feature grade.C. (85, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (85 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (11 data points).
Split on feature grade.B. (11, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (11 data points).
Split on feature grade.C. (11, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (11 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 3 (5 data points).
Split on feature grade.B. (5, 0)
--------------------------------------------------------------------
Subtree, depth = 4 (5 data points).
Split on feature grade.C. (5, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (5 data points).
Split on feature grade.D. (5, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (5 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 1 (28001 data points).
Split on feature grade.D. (23300, 4701)
--------------------------------------------------------------------
Subtree, depth = 2 (23300 data points).
Split on feature grade.E. (22024, 1276)
--------------------------------------------------------------------
Subtree, depth = 3 (22024 data points).
Split on feature grade.F. (21666, 358)
--------------------------------------------------------------------
Subtree, depth = 4 (21666 data points).
Split on feature emp_length.n/a. (20734, 932)
--------------------------------------------------------------------
Subtree, depth = 5 (20734 data points).
Split on feature grade.G. (20638, 96)
--------------------------------------------------------------------
Subtree, depth = 6 (20638 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (96 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 5 (932 data points).
Split on feature grade.A. (702, 230)
--------------------------------------------------------------------
Subtree, depth = 6 (702 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (230 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 4 (358 data points).
Split on feature emp_length.8 years. (347, 11)
--------------------------------------------------------------------
Subtree, depth = 5 (347 data points).
Split on feature grade.A. (347, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (347 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (11 data points).
Split on feature home_ownership.OWN. (9, 2)
--------------------------------------------------------------------
Subtree, depth = 6 (9 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (2 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 3 (1276 data points).
Split on feature grade.A. (1276, 0)
--------------------------------------------------------------------
Subtree, depth = 4 (1276 data points).
Split on feature grade.B. (1276, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (1276 data points).
Split on feature grade.C. (1276, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (1276 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 2 (4701 data points).
Split on feature grade.A. (4701, 0)
--------------------------------------------------------------------
Subtree, depth = 3 (4701 data points).
Split on feature grade.B. (4701, 0)
--------------------------------------------------------------------
Subtree, depth = 4 (4701 data points).
Split on feature grade.C. (4701, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (4701 data points).
Split on feature grade.E. (4701, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (4701 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 3 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 0 (37224 data points).
Split on feature term. 36 months. (9223, 28001)
--------------------------------------------------------------------
Subtree, depth = 1 (9223 data points).
Split on feature grade.A. (9122, 101)
--------------------------------------------------------------------
Subtree, depth = 2 (9122 data points).
Split on feature grade.B. (8074, 1048)
--------------------------------------------------------------------
Subtree, depth = 3 (8074 data points).
Split on feature grade.C. (5884, 2190)
--------------------------------------------------------------------
Subtree, depth = 4 (5884 data points).
Split on feature grade.D. (3826, 2058)
--------------------------------------------------------------------
Subtree, depth = 5 (3826 data points).
Split on feature grade.E. (1693, 2133)
--------------------------------------------------------------------
Subtree, depth = 6 (1693 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (2133 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 5 (2058 data points).
Split on feature grade.E. (2058, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (2058 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (2190 data points).
Split on feature grade.D. (2190, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (2190 data points).
Split on feature grade.E. (2190, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (2190 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 3 (1048 data points).
Early stopping condition 2 reached. Reached minimum node size.
--------------------------------------------------------------------
Subtree, depth = 2 (101 data points).
Early stopping condition 2 reached. Reached minimum node size.
--------------------------------------------------------------------
Subtree, depth = 1 (28001 data points).
Split on feature grade.D. (23300, 4701)
--------------------------------------------------------------------
Subtree, depth = 2 (23300 data points).
Split on feature grade.E. (22024, 1276)
--------------------------------------------------------------------
Subtree, depth = 3 (22024 data points).
Split on feature grade.F. (21666, 358)
--------------------------------------------------------------------
Subtree, depth = 4 (21666 data points).
Split on feature emp_length.n/a. (20734, 932)
--------------------------------------------------------------------
Subtree, depth = 5 (20734 data points).
Split on feature grade.G. (20638, 96)
--------------------------------------------------------------------
Subtree, depth = 6 (20638 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (96 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 5 (932 data points).
Early stopping condition 2 reached. Reached minimum node size.
--------------------------------------------------------------------
Subtree, depth = 4 (358 data points).
Early stopping condition 2 reached. Reached minimum node size.
--------------------------------------------------------------------
Subtree, depth = 3 (1276 data points).
Early stopping condition 2 reached. Reached minimum node size.
--------------------------------------------------------------------
Subtree, depth = 2 (4701 data points).
Split on feature grade.A. (4701, 0)
--------------------------------------------------------------------
Subtree, depth = 3 (4701 data points).
Split on feature grade.B. (4701, 0)
--------------------------------------------------------------------
Subtree, depth = 4 (4701 data points).
Split on feature grade.C. (4701, 0)
--------------------------------------------------------------------
Subtree, depth = 5 (4701 data points).
Split on feature grade.E. (4701, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (4701 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 5 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 4 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 3 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
--------------------------------------------------------------------
Subtree, depth = 0 (37224 data points).
Early stopping condition 2 reached. Reached minimum node size.
###Markdown
Now, let us evaluate the models (**model_7**, **model_8**, and **model_9**) on the **validation_set**.
###Code
print "Validation data, classification error (model 4):", evaluate_classification_error(model_7, validation_set)
print "Validation data, classification error (model 5):", evaluate_classification_error(model_8, validation_set)
print "Validation data, classification error (model 6):", evaluate_classification_error(model_9, validation_set)
###Output
Validation data, classification error (model 7): 0.0606419646704
Validation data, classification error (model 8): 0.053425247738
Validation data, classification error (model 9): 0.503446790177
###Markdown
Using the `count_leaves` function, compute the number of leaves in each of the models (**model_7**, **model_8**, and **model_9**).
###Code
print count_leaves(model_7)
print count_leaves(model_8)
print count_leaves(model_9)
###Output
41
19
1
|
prediction_model/notebooks/prepare_train_data/merge_data_k80.ipynb | ###Markdown
Dense Layer
###Code
# NOTE: DATA_DIR, SAVE_DIR, file_name and model_name are assumed to be defined in an
# earlier cell of the original notebook; the imports below keep this cell self-contained.
import os
import numpy as np
import pandas as pd

dfDense = pd.read_pickle(os.path.join(DATA_DIR,'%s/8/benchmark_dense__20180907.pkl' %file_name))
for i in range(9,10):
    dfDense = pd.concat([dfDense,pd.read_pickle(os.path.join(DATA_DIR,'%s/%d/benchmark_dense__20180907.pkl' %(file_name, i)))])
ops = (dfDense['batchsize']
* dfDense['dim_input']
* dfDense['dim_output'])
memory_weights = dfDense['dim_input'] * dfDense['dim_output']
memory_in = dfDense['batchsize'] * dfDense['dim_input']
memory_out = dfDense['batchsize'] * dfDense['dim_output']
dfDense['optimizer'] = dfDense['optimizer'].replace({0:'opt_None',
1:'opt_SGD',
2:'opt_Adadelta',
3:'opt_Adagrad',
4:'opt_Momentum',
5:'opt_Adam',
6:'opt_RMSProp'})
dfDense['activation_fct'] = dfDense['activation_fct'].replace({0:'act_None',
1:'act_relu',
2:'act_tanh',
3:'act_sigmoid'})
dfDense['ops'] = ops
dfDense['memory_weights'] = memory_weights
dfDense['memory_in'] = memory_in
dfDense['memory_out'] = memory_out
one_hot_optimizer = pd.get_dummies(dfDense['optimizer'])
dfDense = dfDense.drop(labels='optimizer',axis=1)
dfDense = pd.concat([dfDense,one_hot_optimizer],axis=1)
one_hot_activation = pd.get_dummies(dfDense['activation_fct'])
dfDense = dfDense.drop(labels='activation_fct',axis=1)
dfDense = pd.concat([dfDense,one_hot_activation],axis=1)
dfDense.describe()
dfDense.to_pickle(os.path.join(SAVE_DIR,'Data_dense_%s.pkl'%model_name))
###Output
_____no_output_____
###Markdown
Convolutional Layers
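Written out, the features engineered in the next cell are (this simply restates the code's arithmetic):

$$\text{padding\_reduction} = [\text{padding}=0]\cdot(\text{kernelsize}-1), \qquad \text{elements\_output} = \left(\frac{\text{matsize}-\text{padding\_reduction}}{\text{strides}}\right)^{2}$$

$$\text{ops} = \text{batchsize}\cdot\text{elements\_output}\cdot\text{kernelsize}^{2}\cdot\text{channels\_in}\cdot\text{channels\_out}, \qquad \text{memory\_weights} = \text{kernelsize}^{2}\cdot\text{channels\_in}\cdot\text{channels\_out} + \text{use\_bias}\cdot\text{channels\_out}$$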
###Code
dfConv = pd.read_pickle(os.path.join(DATA_DIR,'%s/0/benchmark_convolution__20181031.pkl' %file_name))
header = dfConv.columns
for i in range(1,8):
    dfConv = pd.concat([dfConv,pd.read_pickle(os.path.join(DATA_DIR,'%s/%i/benchmark_convolution__20181031.pkl' %(file_name, i)))])
padding_reduction = ((dfConv['padding']==0)
*(dfConv['kernelsize']-1))
elements_output = ((dfConv['matsize'] - padding_reduction)
/ dfConv['strides'])**2
ops = (dfConv['batchsize']
* elements_output
* dfConv['kernelsize']**2
* dfConv['channels_in']
* dfConv['channels_out'])
memory_weights = (dfConv['kernelsize']**2
* dfConv['channels_in']
* dfConv['channels_out']
+ dfConv['use_bias'] * dfConv['channels_out'])
memory_in = (dfConv['batchsize']
* dfConv['matsize']**2
* dfConv['channels_in'])
memory_out = (dfConv['batchsize']
* elements_output
* dfConv['channels_out'])
dfConv['elements_matrix'] = dfConv['matsize']**2
dfConv['elements_kernel'] = dfConv['kernelsize']**2
dfConv['ops'] = ops
dfConv['memory_weights'] = memory_weights
dfConv['memory_in'] = memory_in
dfConv['memory_out'] = memory_out
dfConv['optimizer'] = dfConv['optimizer'].replace({0:'opt_None',
1:'opt_SGD',
2:'opt_Adadelta',
3:'opt_Adagrad',
4:'opt_Momentum',
5:'opt_Adam',
6:'opt_RMSProp'})
dfConv['activation_fct'] = dfConv['activation_fct'].replace({0:'act_None',
1:'act_relu',
2:'act_tanh',
3:'act_sigmoid'})
dfConv['use_bias'] = np.uint8(dfConv['use_bias'])
dfConv.dropna(inplace=True)
one_hot_optimizer = pd.get_dummies(dfConv['optimizer'])
dfConv = dfConv.drop(labels='optimizer',axis=1)
dfConv = pd.concat([dfConv,one_hot_optimizer],axis=1)
one_hot_activation = pd.get_dummies(dfConv['activation_fct'])
dfConv = dfConv.drop(labels='activation_fct',axis=1)
dfConv = pd.concat([dfConv,one_hot_activation],axis=1)
dfConv.describe()
dfConv.to_pickle(os.path.join(SAVE_DIR,'Data_convolution_%s.pkl'%model_name))
###Output
_____no_output_____ |
notebooks/4.1/CSSReference.ipynb | ###Markdown
CSS Playground A notebook that contain most of the things that could be displayed, to test CSS, feel free to add things to it, and send modification Title first level Title second Level Title third level h4 h5 h6 h1 h2 h3 h4 h6This is just a sample paragraph> With a blockquote def some_code(): return 'by indenting'```def some_other_code(): return 'bewtween_backticks'``` You can look at different level of nested unorderd list - level 1 - level 2 - level 2 - level 2 - level 3 - level 3 - level 4 - level 5 - level 6 - level 2- level 1- level 1- level 1 Ordered list 1. level 1 2. level 1 3. level 1 4. level 1 1. level 1 2. level 1 2. level 1 3. level 1 4. level 1 1. level 1 2. level 13. level 14. level 1 some Horizontal line***--- copy past from Daring Fireball link : This is [an example](http://example.com/ "Title") inline link.[This link](http://example.net/) has no title attribute. inline HtmlThis is a regular paragraph. Foo This is another regular paragraph. > This is a blockquote with two paragraphs. Lorem ipsum dolor sit amet,> consectetuer adipiscing elit. Aliquam hendrerit mi posuere lectus.> Vestibulum enim wisi, viverra nec, fringilla in, laoreet vitae, risus.> > Donec sit amet nisl. Aliquam semper ipsum sit amet velit. Suspendisse> id sem consectetuer libero luctus adipiscing.---> This is a blockquote with two paragraphs. Lorem ipsum dolor sit amet,consectetuer adipiscing elit. Aliquam hendrerit mi posuere lectus.Vestibulum enim wisi, viverra nec, fringilla in, laoreet vitae, risus.> Donec sit amet nisl. Aliquam semper ipsum sit amet velit. Suspendisseid sem consectetuer libero luctus adipiscing. > This is the first level of quoting.>> > This is nested blockquote.>> Back to the first level. > This is a header.> > 1. This is the first list item.> 2. This is the second list item.> > Here's some example code:> > return shell_exec("echo $input | $markdown_script"); 1. This is a list item with two paragraphs. Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aliquam hendrerit mi posuere lectus. Vestibulum enim wisi, viverra nec, fringilla in, laoreet vitae, risus. Donec sit amet nisl. Aliquam semper ipsum sit amet velit.2. Suspendisse id sem consectetuer libero luctus adipiscing. * This is a list item with two paragraphs. This is the second paragraph in the list item. You'reonly required to indent the first line. Lorem ipsum dolorsit amet, consectetuer adipiscing elit.* Another item in the same list. * A list item with a blockquote: > This is a blockquote > inside a list item. * A list item with a code block: 1986. What a great season.1986\. What a great season. See my [About](/about/) page for details. ref linkThis is [an example][id] reference-style link.[id]: http://example.com/ "Optional Title Here" *single asterisks*_single underscores_**double asterisks**__double underscores__un*frigging*believable // should render partially as bold\*this text is surrounded by literal asterisks\*``There is a literal backtick (`) here.`` Other Notebook element A small tooltipCloseExpandOpen in PagerCloseAnd some text inside Close Expand Open in Pager Close This one should be big
###Code
# a code cell
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LightSource
# example showing how to make shaded relief plots
# like Mathematica
# (http://reference.wolfram.com/mathematica/ref/ReliefPlot.html)
# or Generic Mapping Tools
# (http://gmt.soest.hawaii.edu/gmt/doc/gmt/html/GMT_Docs/node145.html)
# test data
d= 1
def maltc(ax, lambd=1, n=1):
I0=1
    I = lambda theta, d: I0*(np.sin(2*theta)*np.sin(np.pi*n*d/lambd))**2
X,Y=np.mgrid[-5:5:0.05,-5:5:0.05]
Z=np.sqrt(X**2+Y**2)+np.sin(X**2+Y**2)
r= np.sqrt(X**2+Y**2)
theta = np.angle(X+1.0j*Y)
Iv= np.vectorize(I)
Z = Iv(r,theta)
# create light source object.
#ls = LightSource(azdeg=0,altdeg=65)
# shade data, creating an rgb array.
#rgb = ls.shade(Z,plt.cm.copper)
# plot un-shaded and shaded images.
#plt.figure(figsize=(12,5))
#plt.subplot(121)
ax.imshow(Z,cmap=plt.cm.copper)
ax.set_title('d=%d lambda=%f'%(d,lambd))
fig, (axes) = plt.subplots(3,4)
fig.set_figheight(10)
fig.set_figwidth(20)
flatten = [item for sublist in axes for item in sublist]
for ax,l in zip(flatten,range(len(flatten))):
    maltc(ax, lambd=(l+1)*np.pi/8.0)
from __future__ import print_function
import sys
print('stdout')
print('stderr',file=sys.stderr)
###Output
stdout
|
notebooks/plots_ellipticity.ipynb | ###Markdown
Results Visualisation ---
###Code
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
from annex_new import import_
from annex_new import get_ellipticity
from annex_new import get_bh_errors
from annex_new import get_sep_errors
from annex_new import get_elli_errors
from annex_new import count_per_bin
from annex_new import get_bh_results
from annex_new import get_sep_results
###Output
_____no_output_____
###Markdown
Path
###Code
"""Check the folders hierarchy"""
from os.path import expanduser
user_home = expanduser("~")
path = user_home+'/Cosmostat/Codes/BlendHunter'
###Output
_____no_output_____
###Markdown
IMPORT DATA
###Code
""""Retrieve results for non padded images """
bh_results = get_bh_results(path_bh_results = path+'/bh_results')
sep_results = get_sep_results(path_sep_results = path+'/sep_results')
"""Retrieve ellipticity components"""
e1_total= get_ellipticity(path, get_e1=True)
e2_total= get_ellipticity(path, get_e2=True)
"""Retrieve missed blends for sep and bh"""
bh_errors = [[get_bh_errors(results=bh_results[i][j]) for j in range(len(bh_results[i]))] for i in range(len(bh_results))]
sep_errors = [[get_sep_errors(results=sep_results[i][j]) for j in range(len(sep_results[i]))] for i in range(len(sep_results))]
"""For each noise realisation and each noise level, retrieve the components corresponding to the missed blends"""
e1_errors_bh = [[get_elli_errors(e1_total, errors = bh_errors[i][j]) for j in range(len(bh_errors[i]))] for i in range(len(bh_errors))]
e1_errors_sep = [[get_elli_errors(e1_total, errors = sep_errors[i][j]) for j in range(len(sep_errors[i]))] for i in range(len(sep_errors))]
e2_errors_bh = [[get_elli_errors(e2_total, errors = bh_errors[i][j]) for j in range(len(bh_errors[i]))] for i in range(len(bh_errors))]
e2_errors_sep = [[get_elli_errors(e2_total, errors = sep_errors[i][j]) for j in range(len(sep_errors[i]))] for i in range(len(sep_errors))]
"""Get e1,e2 for missed blends by bh and sep"""
def get_e1_errors(e1=None, errors=None):
return [e1[i] for i in errors]
def get_e2_errors(e2=None, errors=None):
return [e2[i] for i in errors]
"""How to retrieve informations from each bin in total dataset"""
def count_per_bin(data =None, get_bins =False, bins_=int(180/3)):
(n, bins, patches) = plt.hist(data, bins = bins_)
if get_bins:
return n, bins[1:], bins
else:
return n
"""Computation of error ratios"""
def acc_ratio_bins(data=None, N=None, bins=int(180/3)):
"""N being the total obs per bin"""
n = count_per_bin(data, bins_=bins)
ratio = 1 - (n/N)
return ratio
"""Compute mean accuracy for each noise level """
def get_mean_acc(data=None, data_total=None, nb_ratios=23, get_mean_total=False):
if get_mean_total:
#Get total number per bin and mean distance per bin for the whole test set
n_total, mean_dist, bin_edges = count_per_bin(data=data_total, get_bins=True)
return mean_dist
else:
"""Get total number per bin and mean distance per bin for the whole test set"""
n_total, mean_dist, bin_edges = count_per_bin(data=data_total, get_bins=True)
"""Compute accuracy ratio for each bin, for each noise realisation and noise level"""
acc_ratios = [[acc_ratio_bins(x[j], N= n_total , bins=bin_edges) for j in range(len(x))] for x in data]
"""For each noise level, create sub_lists of accuracy ratios for corresponding bins but with all noise realisations"""
sub_ratios = [[np.array([acc_ratios[k][i][j] for i in range(len(acc_ratios[k]))]) for j in range(nb_ratios)] for k in range(len(acc_ratios))]
"""Compute the mean on each sub_list.
The function returns the mean accuracy ratios for each noise level"""
return [np.array([np.mean(k[i]) for i in range(len(k))]) for k in sub_ratios]
"""Retrieve the mean e1, e2 per bin for x axis"""
mean_e1_per_bin = get_mean_acc(get_mean_total=True, data_total=e1_total)
mean_e2_per_bin = get_mean_acc(get_mean_total=True, data_total=e2_total)
"""Compute the mean accuracy ratio (on all noise realisations) for each bin and each noise level"""
mean_acc_bh_e1 = get_mean_acc(data=e1_errors_bh, data_total=e1_total, nb_ratios = 23)
mean_acc_sep_e1 = get_mean_acc(data=e1_errors_sep, data_total=e1_total, nb_ratios = 23)
mean_acc_bh_e2 = get_mean_acc(data=e2_errors_bh, data_total=e2_total, nb_ratios = 23)
mean_acc_sep_e2 = get_mean_acc(data=e2_errors_sep, data_total=e2_total, nb_ratios = 23)
###Output
_____no_output_____
###Markdown
PLOT ACCURACY ACCORDING TO $e1, e2$
###Code
#Seaborn theme
sns.set(context='notebook', style='whitegrid', palette='deep')
#Font dictionnary
font = {'family': 'monospace',
'color': 'k',
'weight': 'normal',
'size': 15}
#Start plot
fig, ax = plt.subplots(3,2,figsize=(20,17), sharex=False)
#Title
fig.suptitle('Mean accuracy according to e1 w.r.t $\sigma_{noise}$',
fontdict = {'family': 'serif','color': 'k','weight': 'heavy','size': 23}, fontsize=23)
#First subplot
ax[0,0].set_title('$\sigma_{noise}$ =5.0', fontdict=font, fontsize=18.5)
ax[0,0].plot(mean_e1, 100*mean_acc_bh_e1[0], color = 'k', marker='.', label='Bh')
ax[0,0].plot(mean_e1, 100*mean_acc_sep_e1[0], color = 'steelblue', marker='.', label='SExtractor')
ax[0,0].set_ylabel('Accuracy (%)', fontdict = font)
ax[0,0].set_xlabel('e1', fontdict = font)
ax[0,0].set_ylim(0,105)
ax[0,0].tick_params(axis='both', which='major', labelsize=15)
ax[0,0].legend(borderaxespad=0.1, loc="lower center", fontsize=18, prop ={'family': 'monospace','size': 15})
#Second subplot
ax[0,1].set_title('$\sigma_{noise}$ =14.0', fontdict=font, fontsize=18.5)
ax[0,1].plot(mean_e1, 100*mean_acc_bh_e1[1], color = 'k', marker='.', label='Bh')
ax[0,1].plot(mean_e1, 100*mean_acc_sep_e1[1], color = 'steelblue', marker='.', label='SExtractor')
ax[0,1].set_ylabel('Accuracy (%)', fontdict = font)
ax[0,1].set_xlabel('e1', fontdict = font)
ax[0,1].set_ylim(0,105)
ax[0,1].tick_params(axis='both', which='major', labelsize=15)
ax[0,1].legend(borderaxespad=0.1, loc="lower center", fontsize=18, prop ={'family': 'monospace','size': 15})
#3rd subplot
ax[1,0].set_title('$\sigma_{noise}$ =18.0', fontdict=font, fontsize=18.5)
ax[1,0].plot(mean_e1, 100*mean_acc_bh_e1[2], color = 'k', marker='.', label='Bh')
ax[1,0].plot(mean_e1, 100*mean_acc_sep_e1[2], color = 'steelblue', marker='.', label='SExtractor')
ax[1,0].set_ylabel('Accuracy (%)', fontdict = font)
ax[1,0].set_xlabel('e1', fontdict = font)
ax[1,0].set_ylim(0,105)
ax[1,0].tick_params(axis='both', which='major', labelsize=15)
ax[1,0].legend(borderaxespad=0.1, loc="lower center", fontsize=18, prop ={'family': 'monospace','size': 15})
#4th subplot
ax[1,1].set_title('$\sigma_{noise}$ =26.0', fontdict=font, fontsize=18.5)
ax[1,1].plot(mean_e1, 100*mean_acc_bh_e1[3], color = 'k', marker='.', label='Bh')
ax[1,1].plot(mean_e1, 100*mean_acc_sep_e1[3], color = 'steelblue', marker='.', label='SExtractor')
ax[1,1].set_ylabel('Accuracy (%)', fontdict = font)
ax[1,1].set_xlabel('e1', fontdict = font)
ax[1,1].set_ylim(0,105)
ax[1,1].tick_params(axis='both', which='major', labelsize=15)
ax[1,1].legend(borderaxespad=0.1, loc="lower center", fontsize=18, prop ={'family': 'monospace','size': 15})
#5th subplot
ax[2,0].set_title('$\sigma_{noise}$ =35.0', fontdict=font, fontsize=18.5)
ax[2,0].plot(mean_e1, 100*mean_acc_bh_e1[4],color = 'k', marker='.', label='Bh')
ax[2,0].plot(mean_e1, 100*mean_acc_sep_e1[4], color = 'steelblue', marker='.', label='SExtractor')
ax[2,0].set_ylabel('Accuracy (%)', fontdict = font)
ax[2,0].set_xlabel('e1', fontdict = font)
ax[2,0].set_ylim(0,105)
ax[2,0].tick_params(axis='both', which='major', labelsize=15)
ax[2,0].legend(borderaxespad=0.1, loc="lower center", fontsize=18, prop ={'family': 'monospace','size': 15})
#6th subplot
ax[2,1].set_title('$\sigma_{noise}$ =40.0', fontdict=font, fontsize=18.5)
x=ax[2,1].plot(mean_e1, 100*mean_acc_bh_e1[5], color = 'k', marker='.', label='Bh')
y=ax[2,1].plot(mean_e1, 100*mean_acc_sep_e1[5], color = 'steelblue', marker='.', label='SExtractor')
ax[2,1].set_ylabel('Accuracy (%)', fontdict = font)
ax[2,1].set_xlabel('e1', fontdict = font)
ax[2,1].set_ylim(0,105)
ax[2,1].tick_params(axis='both', which='major', labelsize=15)
ax[2,1].legend(borderaxespad=0.1, loc="lower center", fontsize=18, prop ={'family': 'monospace','size': 15})
#Add legend
#labels_legend=["Bh", 'SExtractor', 'Gain on SExtractor']
plt.subplots_adjust(hspace=0.4)
#fig.legend([x,y,z], labels=labels_legend, borderaxespad=0.1, loc="lower center", fontsize=18, prop ={'family': 'monospace','size': 15})
plt.show()
#Seaborn theme
sns.set(context='notebook', style='whitegrid', palette='deep')
#Font dictionnary
font = {'family': 'monospace',
'color': 'k',
'weight': 'normal',
'size': 15}
#Start plot
fig, ax = plt.subplots(3,2,figsize=(20,17), sharex=False)
#Title
fig.suptitle('Mean accuracy according to e2 w.r.t $\sigma_{noise}$',
fontdict = {'family': 'serif','color': 'k','weight': 'heavy','size': 23}, fontsize=23)
#First subplot
ax[0,0].set_title('$\sigma_{noise}$ =5.0', fontdict=font, fontsize=18.5)
ax[0,0].plot(mean_e2, 100*mean_acc_bh_e2[0], color = 'k', marker='.', label='Bh')
ax[0,0].plot(mean_e2, 100*mean_acc_sep_e2[0], color = 'steelblue', marker='.', label='SExtractor')
ax[0,0].set_ylabel('Accuracy (%)', fontdict = font)
ax[0,0].set_xlabel('e2', fontdict = font)
ax[0,0].set_ylim(0,105)
ax[0,0].tick_params(axis='both', which='major', labelsize=15)
ax[0,0].legend(borderaxespad=0.1, loc="lower center", fontsize=18, prop ={'family': 'monospace','size': 15})
#Second subplot
ax[0,1].set_title('$\sigma_{noise}$ =14.0', fontdict=font, fontsize=18.5)
ax[0,1].plot(mean_e2, 100*mean_acc_bh_e2[1], color = 'k', marker='.', label='Bh')
ax[0,1].plot(mean_e2, 100*mean_acc_sep_e2[1], color = 'steelblue', marker='.', label='SExtractor')
ax[0,1].set_ylabel('Accuracy (%)', fontdict = font)
ax[0,1].set_xlabel('e2', fontdict = font)
ax[0,1].set_ylim(0,105)
ax[0,1].tick_params(axis='both', which='major', labelsize=15)
ax[0,1].legend(borderaxespad=0.1, loc="lower center", fontsize=18, prop ={'family': 'monospace','size': 15})
#3rd subplot
ax[1,0].set_title('$\sigma_{noise}$ =18.0', fontdict=font, fontsize=18.5)
ax[1,0].plot(mean_e2, 100*mean_acc_bh_e2[2], color = 'k', marker='.', label='Bh')
ax[1,0].plot(mean_e2, 100*mean_acc_sep_e2[2], color = 'steelblue', marker='.', label='SExtractor')
ax[1,0].set_ylabel('Accuracy (%)', fontdict = font)
ax[1,0].set_xlabel('e2', fontdict = font)
ax[1,0].set_ylim(0,105)
ax[1,0].tick_params(axis='both', which='major', labelsize=15)
ax[1,0].legend(borderaxespad=0.1, loc="lower center", fontsize=18, prop ={'family': 'monospace','size': 15})
#4th subplot
ax[1,1].set_title('$\sigma_{noise}$ =26.0', fontdict=font, fontsize=18.5)
ax[1,1].plot(mean_e2, 100*mean_acc_bh_e2[3], color = 'k', marker='.', label='Bh')
ax[1,1].plot(mean_e2, 100*mean_acc_sep_e2[3], color = 'steelblue', marker='.', label='SExtractor')
ax[1,1].set_ylabel('Accuracy (%)', fontdict = font)
ax[1,1].set_xlabel('e2', fontdict = font)
ax[1,1].set_ylim(0,105)
ax[1,1].tick_params(axis='both', which='major', labelsize=15)
ax[1,1].legend(borderaxespad=0.1, loc="lower center", fontsize=18, prop ={'family': 'monospace','size': 15})
#5th subplot
ax[2,0].set_title('$\sigma_{noise}$ =35.0', fontdict=font, fontsize=18.5)
ax[2,0].plot(mean_e2, 100*mean_acc_bh_e2[4],color = 'k', marker='.', label='Bh')
ax[2,0].plot(mean_e2, 100*mean_acc_sep_e2[4], color = 'steelblue', marker='.', label='SExtractor')
ax[2,0].set_ylabel('Accuracy (%)', fontdict = font)
ax[2,0].set_xlabel('e2', fontdict = font)
ax[2,0].set_ylim(0,105)
ax[2,0].tick_params(axis='both', which='major', labelsize=15)
ax[2,0].legend(borderaxespad=0.1, loc="lower center", fontsize=18, prop ={'family': 'monospace','size': 15})
#6th subplot
ax[2,1].set_title('$\sigma_{noise}$ =40.0', fontdict=font, fontsize=18.5)
x=ax[2,1].plot(mean_e2, 100*mean_acc_bh_e2[5], color = 'k', marker='.', label='Bh')
y=ax[2,1].plot(mean_e2, 100*mean_acc_sep_e2[5], color = 'steelblue', marker='.', label='SExtractor')
ax[2,1].set_ylabel('Accuracy (%)', fontdict = font)
ax[2,1].set_xlabel('e2', fontdict = font)
ax[2,1].set_ylim(0,105)
ax[2,1].tick_params(axis='both', which='major', labelsize=15)
ax[2,1].legend(borderaxespad=0.1, loc="lower center", fontsize=18, prop ={'family': 'monospace','size': 15})
#Add legend
#labels_legend=["Bh", 'SExtractor', 'Gain on SExtractor']
plt.subplots_adjust(hspace=0.4)
#fig.legend([x,y,z], labels=labels_legend, borderaxespad=0.1, loc="lower center", fontsize=18, prop ={'family': 'monospace','size': 15})
plt.show()
###Output
_____no_output_____ |
TeraGreen.ipynb | ###Markdown
Distribution of class
###Code
# pandas (and pickle, used further below) were presumably imported in an earlier cell;
# importing them here keeps the notebook self-contained.
import pickle
import pandas as pd

df = pd.read_csv("file-all.lst", header=None, delimiter=" ")
df[1].hist(bins=50)
###Output
_____no_output_____
###Markdown
Top-N class
###Code
df[1].value_counts().sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
Extract top-N classes
###Code
THRESHOLD = 2000
N_TRAIN = int(THRESHOLD * 0.7)
df = pd.read_csv("file-all.lst", header=None, delimiter=" ")
df_train = pd.DataFrame()
df_test = pd.DataFrame()
count = 0
for i in range(max(df[1])+1):
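    # keep only classes with at least THRESHOLD samples
    # (classes 713 and 747 are explicitly excluded by the extra conditions)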
if len(df[(df[1]==i)&(df[1]!=713)&(df[1]!=747)]) >= THRESHOLD:
count += 1
df_sample = df[df[1]==i].sample(n=THRESHOLD)
df_train = df_train.append(df_sample[:N_TRAIN], ignore_index=True)
df_test = df_test.append(df_sample[N_TRAIN:], ignore_index=True)
print("total{} classes".format(count))
###Output
total27 classes
###Markdown
Save
###Code
df_train.to_csv("train-c{}.lst".format(count), index=False, header=False)
df_test.to_csv("test-c{}.lst".format(count), index=False, header=False)
df = pd.read_csv("labels.csv")
tr = pd.read_csv("train-c{}.lst".format(count), header=None)
labels = set(tr[1])
dict_class_neuron = {j:i for i,j in enumerate(labels)}
f = lambda x: dict_class_neuron[x] if x in labels else -1
df = df.drop("NEURON", axis=1)
df["NEURON"] = df["CLASS"].apply(f)
with open("dict_class_neuron.pkl", 'wb') as f:
pickle.dump(dict_class_neuron, f)
df = pd.read_csv("labels.csv")
df.head()
df.to_csv("labels.csv", index=False)
###Output
_____no_output_____ |
fun_coding/codility/4_MaxCounters(Counting Elements).ipynb | ###Markdown
###Code
# O(N*M) first try. correctness 100%
def solution(N,A):
array = [0 for i in range(N)]
for i in A:
if i > N:
array = [max(array)] * N
else:
array[i-1] += 1
return array
solution(5, [3,4,4,6,1,4,4])
# O(N+M) 10th try? lol
def solution(N,A):
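    # lazy-max trick: instead of rewriting all N counters on every "max counter"
    # operation (which made the first version O(N*M)), remember the current floor
    # in min_i and the running maximum in max_i, apply the floor per counter on
    # access, and apply it once more to every counter at the end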
import copy
array = [0 for i in range(N)]
max_i = 0
min_i = 0
for i in A:
if i > N:
min_i = max_i
else:
array[i-1] = max(array[i-1], min_i)+1
if array[i-1] > max_i:
max_i = array[i-1]
# print(array,min_i)
for i in range(N):
array[i] = max(min_i, array[i])
return array
solution(5,[3,4,4,6,1,4,4])
a = [1,2,3,4,5]
max(a)
solution(5,[3,4,4,6,1,4,4])
###Output
_____no_output_____ |
Session_9_Random_Data.ipynb | ###Markdown
Confidence Interval on random data
###Code
from numpy.random import seed
from numpy.random import randn
import numpy as np
from scipy.stats import ttest_ind
import scipy.stats as ss
seed(1)
data_1 = 5*randn(100) + 25
data_2 = 5*randn(100) + 45
print(data_1)
print(data_2)
import seaborn as sns
sns.distplot(data_1)
sns.distplot(data_2)
mean_1, mean_2 = np.mean(data_1), np.mean(data_2)
print(mean_1, mean_2)
se_1, se_2 = ss.sem(data_1), ss.sem(data_2)
print(se_1, se_2)
se_diff = np.sqrt(se_1**2 + se_2**2)
t_stat = (mean_1 - mean_2) / se_diff
dof = len(data_1) + len(data_2) - 2
print(dof)
alpha = 0.05
cv = ss.t.ppf(1-alpha, dof)
p = (1.0 - ss.t.cdf(abs(t_stat), dof)) * 2.0
print(p)
n1, n2 = len(data_1), len(data_2)
# mean_1, mean_2
std_1, std_2 = np.std(data_1), np.std(data_2)
se_1 = std_1/np.sqrt(n1)
se_2 = std_2/np.sqrt(n2)
mean_diff = mean_1 - mean_2
se_diff = (np.sqrt((n1-1) * se_1**2 + (n2-1) * se_2**2)/(n1+n2-2)) * (np.sqrt(1/n1 + 1/n2))
lcbx = mean_diff - 1.96*se_diff
ucbx = mean_diff + 1.96*se_diff
print(lcbx, ucbx)
###Output
-20.470011496988636 -20.452107758456375
|
week3/ex2-logistic regression/ML-Exercise2.ipynb | ###Markdown
Machine Learning Exercise 2 - Logistic Regression This notebook covers the second programming exercise of the Coursera machine learning course, implemented in Python. See the [assignment file](ex2.pdf) for the detailed description and equations. In this exercise we will implement logistic regression and apply it to a classification task. We will also improve the robustness of the algorithm by adding regularization to the training procedure, and test it on a more complex case. Code modified and annotated by: Huang Haiguang (黄海广), [email protected] Logistic Regression In the first part of the exercise, we will build a logistic regression model to predict whether a student gets admitted to a university. Suppose you are the administrator of a university department and want to decide admission based on each applicant's scores on two exams. You have a training set of previous applicants that can be used to train logistic regression. For each training example, you have the applicant's scores on the two exams and the final admission decision. To accomplish this prediction task, we are going to build a classification model that estimates the probability of admission based on the two exam scores. Let's start by examining the data.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
path = 'ex2data1.txt'
data = pd.read_csv(path, header=None, names=['Exam 1', 'Exam 2', 'Admitted'])
data.head()
###Output
_____no_output_____
###Markdown
Let's create a scatter plot of the two exam scores and use color coding to visualize whether an example is positive (admitted) or negative (not admitted).
###Code
positive = data[data['Admitted'].isin([1])]
negative = data[data['Admitted'].isin([0])]
fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(positive['Exam 1'], positive['Exam 2'], s=50, c='b', marker='o', label='Admitted')
ax.scatter(negative['Exam 1'], negative['Exam 2'], s=50, c='r', marker='x', label='Not Admitted')
ax.legend()
ax.set_xlabel('Exam 1 Score')
ax.set_ylabel('Exam 2 Score')
plt.show()
###Output
_____no_output_____
###Markdown
It looks like there is a clear decision boundary between the two classes. Now we need to implement logistic regression so that we can train a model to predict the outcome. The equations implemented in the code below are described in "ex2.pdf" in the "exercises" folder. The sigmoid function: g denotes the commonly used logistic function, an S-shaped (sigmoid) curve, given by: \\[g\left( z \right)=\frac{1}{1+{{e}^{-z}}}\\] Combining this with the linear model gives the hypothesis function of the logistic regression model: \\[{{h}_{\theta }}\left( x \right)=\frac{1}{1+{{e}^{-{{\theta }^{T}}X}}}\\]
###Code
def sigmoid(z):
return 1 / (1 + np.exp(-z))
###Output
_____no_output_____
###Markdown
Let's do a quick check to make sure it works.
###Code
nums = np.arange(-10, 10, step=1)
fig, ax = plt.subplots(figsize=(12,8))
ax.plot(nums, sigmoid(nums), 'r')
plt.show()
###Output
_____no_output_____
###Markdown
Great! Now we need to write the cost function to evaluate a solution. The cost function is: $J\left( \theta \right)=\frac{1}{m}\sum\limits_{i=1}^{m}{[-{{y}^{(i)}}\log \left( {{h}_{\theta }}\left( {{x}^{(i)}} \right) \right)-\left( 1-{{y}^{(i)}} \right)\log \left( 1-{{h}_{\theta }}\left( {{x}^{(i)}} \right) \right)]}$
###Code
def cost(theta, X, y):
theta = np.matrix(theta)
X = np.matrix(X)
y = np.matrix(y)
first = np.multiply(-y, np.log(sigmoid(X * theta.T)))
second = np.multiply((1 - y), np.log(1 - sigmoid(X * theta.T)))
return np.sum(first - second) / (len(X))
###Output
_____no_output_____
###Markdown
Now let's do some setup, very similar to what we did in exercise 1 for linear regression.
###Code
# add a ones column - this makes the matrix multiplication work out easier
data.insert(0, 'Ones', 1)
# set X (training data) and y (target variable)
cols = data.shape[1]
X = data.iloc[:,0:cols-1]
y = data.iloc[:,cols-1:cols]
# convert to numpy arrays and initalize the parameter array theta
X = np.array(X.values)
y = np.array(y.values)
theta = np.zeros(3)
###Output
_____no_output_____
###Markdown
Let's check the dimensions of the matrices to make sure everything looks good.
###Code
theta
X.shape, theta.shape, y.shape
###Output
_____no_output_____
###Markdown
Let's compute the cost for the initial parameters (theta set to all zeros).
###Code
cost(theta, X, y)
###Output
_____no_output_____
###Markdown
Looks good. Next, we need a function that computes the gradient given our training data, labels, and some parameters theta. Gradient descent * this is batch gradient descent (批量梯度下降) * in vectorized form: $\frac{1}{m} X^T( Sigmoid(X\theta) - y )$$$\frac{\partial J\left( \theta \right)}{\partial {{\theta }_{j}}}=\frac{1}{m}\sum\limits_{i=1}^{m}{({{h}_{\theta }}\left( {{x}^{(i)}} \right)-{{y}^{(i)}})x_{_{j}}^{(i)}}$$
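As an aside, the vectorized form above can be coded directly; a minimal sketch (the `gradient_vectorized` name is just for illustration, and it reuses the `sigmoid` defined earlier):

```python
def gradient_vectorized(theta, X, y):
    theta, X, y = np.matrix(theta), np.matrix(X), np.matrix(y)
    error = sigmoid(X * theta.T) - y                 # (m, 1)
    return np.array((X.T * error) / len(X)).ravel()  # (1/m) X^T (sigmoid(X theta) - y)
```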
###Code
def gradient(theta, X, y):
theta = np.matrix(theta)
X = np.matrix(X)
y = np.matrix(y)
parameters = int(theta.ravel().shape[1])
grad = np.zeros(parameters)
error = sigmoid(X * theta.T) - y
for i in range(parameters):
term = np.multiply(error, X[:,i])
grad[i] = np.sum(term) / len(X)
return grad
###Output
_____no_output_____
###Markdown
Note that we don't actually perform gradient descent in this function; we only compute a single gradient step. In the exercise, an Octave function called "fminunc" is used to optimize the parameters given the cost and gradient functions. Since we're using Python, we can use SciPy's "optimize" namespace to do the same thing. Let's look at the result of the gradient with our data and an initial parameter vector of zeros.
###Code
gradient(theta, X, y)
###Output
_____no_output_____
###Markdown
Now we can use SciPy's truncated Newton (TNC) implementation to find the optimal parameters.
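As an aside, `fmin_tnc` is the older SciPy interface; the same optimization can also be written with the newer `scipy.optimize.minimize` API. A sketch, assuming the `cost` and `gradient` functions defined above:

```python
from scipy.optimize import minimize

res = minimize(fun=cost, x0=theta, args=(X, y), method='TNC', jac=gradient)
# res.x holds the optimized parameters, analogous to result[0] below
```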
###Code
import scipy.optimize as opt
result = opt.fmin_tnc(func=cost, x0=theta, fprime=gradient, args=(X, y))
result
###Output
_____no_output_____
###Markdown
Let's see what the cost function evaluates to with this solution.
###Code
cost(result[0], X, y)
###Output
_____no_output_____
###Markdown
Next, we need to write a function that outputs predictions for a dataset X using our learned parameters theta. We can then use this function to score the training accuracy of our classifier. The hypothesis function of the logistic regression model is: \\[{{h}_{\theta }}\left( x \right)=\frac{1}{1+{{e}^{-{{\theta }^{T}}X}}}\\] When ${{h}_{\theta }}$ is greater than or equal to 0.5, predict y=1; when ${{h}_{\theta }}$ is less than 0.5, predict y=0.
###Code
def predict(theta, X):
probability = sigmoid(X * theta.T)
return [1 if x >= 0.5 else 0 for x in probability]
theta_min = np.matrix(result[0])
predictions = predict(theta_min, X)
correct = [1 if ((a == 1 and b == 1) or (a == 0 and b == 0)) else 0 for (a, b) in zip(predictions, y)]
accuracy = sum(map(int, correct)) / len(correct) * 100
print ('accuracy = {0:.0f}%'.format(accuracy))
###Output
accuracy = 89%
###Markdown
Our logistic regression classifier predicts whether a student is admitted or not with 89% accuracy. Not bad! Keep in mind that this is accuracy on the training set. We didn't keep a hold-out set or use cross-validation to get a true estimate, so this number is probably higher than the true performance (a topic we'll come back to later). Regularized Logistic Regression In the second part of the exercise, we will improve the logistic regression algorithm by adding a regularization term. If regularization is unfamiliar to you, or you want the background for the equations in this section, see "ex2.pdf" in the "exercises" folder. In short, regularization is a term in the cost function that makes the algorithm prefer "simpler" models (in this case, models with smaller coefficients). This helps reduce overfitting and improves the model's ability to generalize. So let's get started. Suppose you are the product manager of a factory and you have the test results of some microchips on two different tests. Based on the two tests, you want to decide whether each chip should be accepted or rejected. To help you make this difficult decision, you have a dataset of test results on past microchips, from which you can build a logistic regression model. Much like the first part, let's start by visualizing the data!
###Code
path = 'ex2data2.txt'
data2 = pd.read_csv(path, header=None, names=['Test 1', 'Test 2', 'Accepted'])
data2.head()
positive = data2[data2['Accepted'].isin([1])]
negative = data2[data2['Accepted'].isin([0])]
fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(positive['Test 1'], positive['Test 2'], s=50, c='b', marker='o', label='Accepted')
ax.scatter(negative['Test 1'], negative['Test 2'], s=50, c='r', marker='x', label='Rejected')
ax.legend()
ax.set_xlabel('Test 1 Score')
ax.set_ylabel('Test 2 Score')
plt.show()
###Output
_____no_output_____
###Markdown
Wow, this data looks quite a bit more complex than the previous example. In particular, you'll notice that there is no linear decision boundary that separates the two classes well. One way to deal with this using a linear technique like logistic regression is to construct features derived from polynomials of the original features. Let's start by creating a set of polynomial features.
###Code
degree = 5
x1 = data2['Test 1']
x2 = data2['Test 2']
data2.insert(3, 'Ones', 1)
for i in range(1, degree):
for j in range(0, i):
data2['F' + str(i) + str(j)] = np.power(x1, i-j) * np.power(x2, j)
data2.drop('Test 1', axis=1, inplace=True)
data2.drop('Test 2', axis=1, inplace=True)
data2.head()
###Output
_____no_output_____
###Markdown
Now we need to modify the cost and gradient functions from part 1 to include the regularization term. First, the regularized cost function: $$J\left( \theta \right)=\frac{1}{m}\sum\limits_{i=1}^{m}{[-{{y}^{(i)}}\log \left( {{h}_{\theta }}\left( {{x}^{(i)}} \right) \right)-\left( 1-{{y}^{(i)}} \right)\log \left( 1-{{h}_{\theta }}\left( {{x}^{(i)}} \right) \right)]}+\frac{\lambda }{2m}\sum\limits_{j=1}^{n}{\theta _{j}^{2}}$$
###Code
def costReg(theta, X, y, learningRate):
theta = np.matrix(theta)
X = np.matrix(X)
y = np.matrix(y)
first = np.multiply(-y, np.log(sigmoid(X * theta.T)))
second = np.multiply((1 - y), np.log(1 - sigmoid(X * theta.T)))
reg = (learningRate / (2 * len(X))) * np.sum(np.power(theta[:,1:theta.shape[1]], 2))
return np.sum(first - second) / len(X) + reg
###Output
_____no_output_____
###Markdown
Note the "reg" term in the equation, and the additional "learning rate" parameter. This is a hyperparameter that controls the strength of the regularization term. Now we need to add the regularized gradient function: If we use gradient descent to minimize this cost function, since we do not regularize ${{\theta }_{0}}$, the update rule splits into two cases:\begin{align} & Repeat\text{ }until\text{ }convergence\text{ }\!\!\{\!\!\text{ } \\ & \text{ }{{\theta }_{0}}:={{\theta }_{0}}-a\frac{1}{m}\sum\limits_{i=1}^{m}{[{{h}_{\theta }}\left( {{x}^{(i)}} \right)-{{y}^{(i)}}]x_{_{0}}^{(i)}} \\ & \text{ }{{\theta }_{j}}:={{\theta }_{j}}-a\frac{1}{m}\sum\limits_{i=1}^{m}{[{{h}_{\theta }}\left( {{x}^{(i)}} \right)-{{y}^{(i)}}]x_{j}^{(i)}}+\frac{\lambda }{m}{{\theta }_{j}} \\ & \text{ }\!\!\}\!\!\text{ } \\ & Repeat \\ \end{align}Rearranging the update for j=1,2,...,n above gives: ${{\theta }_{j}}:={{\theta }_{j}}(1-a\frac{\lambda }{m})-a\frac{1}{m}\sum\limits_{i=1}^{m}{({{h}_{\theta }}\left( {{x}^{(i)}} \right)-{{y}^{(i)}})x_{j}^{(i)}}$
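As an aside, the same regularized gradient can be written in vectorized form; a minimal sketch (the `gradientRegVec` name is just for illustration, and it should agree with the loop version below up to floating-point error):

```python
def gradientRegVec(theta, X, y, learningRate):
    theta, X, y = np.matrix(theta), np.matrix(X), np.matrix(y)
    error = sigmoid(X * theta.T) - y
    grad = ((X.T * error) / len(X)).T + (learningRate / len(X)) * theta
    # theta_0 is not regularized
    grad[0, 0] = np.sum(np.multiply(error, X[:, 0])) / len(X)
    return np.array(grad).ravel()
```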
###Code
def gradientReg(theta, X, y, learningRate):
theta = np.matrix(theta)
X = np.matrix(X)
y = np.matrix(y)
parameters = int(theta.ravel().shape[1])
grad = np.zeros(parameters)
error = sigmoid(X * theta.T) - y
for i in range(parameters):
term = np.multiply(error, X[:,i])
if (i == 0):
grad[i] = np.sum(term) / len(X)
else:
grad[i] = (np.sum(term) / len(X)) + ((learningRate / len(X)) * theta[:,i])
return grad
###Output
_____no_output_____
###Markdown
Just like in part 1, initialize the variables.
###Code
# set X and y (remember from above that we moved the label to column 0)
cols = data2.shape[1]
X2 = data2.iloc[:,1:cols]
y2 = data2.iloc[:,0:1]
# convert to numpy arrays and initalize the parameter array theta
X2 = np.array(X2.values)
y2 = np.array(y2.values)
theta2 = np.zeros(11)
###Output
_____no_output_____
###Markdown
Let's set the learning rate to a reasonable initial value. If necessary (i.e. if the penalty is too strong or not strong enough), we can tweak it later.
###Code
learningRate = 1
###Output
_____no_output_____
###Markdown
Now let's try calling the new regularized functions with theta initialized to all zeros, to make sure the computations are working correctly.
###Code
costReg(theta2, X2, y2, learningRate)
gradientReg(theta2, X2, y2, learningRate)
###Output
_____no_output_____
###Markdown
Now we can use the same optimization function as in part 1 to compute the optimized result.
###Code
result2 = opt.fmin_tnc(func=costReg, x0=theta2, fprime=gradientReg, args=(X2, y2, learningRate))
result2
###Output
_____no_output_____
###Markdown
Finally, we can use the prediction function from part 1 to see how accurate our solution is on the training data.
###Code
theta_min = np.matrix(result2[0])
predictions = predict(theta_min, X2)
correct = [1 if ((a == 1 and b == 1) or (a == 0 and b == 0)) else 0 for (a, b) in zip(predictions, y2)]
accuracy = sum(map(int, correct)) / len(correct) * 100
print ('accuracy = {0:.0f}%'.format(accuracy))
###Output
accuracy = 77%
###Markdown
Although we implemented these algorithms from scratch, it's worth noting that we could also use a high-level Python library like scikit-learn to solve this problem.
###Code
from sklearn import linear_model  # use scikit-learn's linear_model package
model = linear_model.LogisticRegression(penalty='l2', C=1.0)
model.fit(X2, y2.ravel())
model.score(X2, y2)
###Output
_____no_output_____ |
notebooks/05_Displaying_data_with_Python.ipynb | ###Markdown
Displaying data with PythonHaskell is a great language for complex processing, but it lacks the visualization libraries that R and Python users have come to enjoy. This tutorial shows how to integrate both together when doing interactive analysis.
###Code
:extension DeriveGeneric
:extension FlexibleContexts
:extension OverloadedStrings
:extension GeneralizedNewtypeDeriving
:extension FlexibleInstances
:extension MultiParamTypeClasses
import GHC.Generics (Generic)
import Spark.Core.Dataset
import Spark.Core.Context
import Spark.Core.Functions
import Spark.Core.Column
import Spark.Core.Types
import Spark.Core.Row
import Spark.Core.ColumnFunctions
conf = defaultConf {
confEndPoint = "http://10.0.2.2",
confRequestedSessionName = "session05_python" }
createSparkSessionDef conf
import Spark.Core.Types
data MyData = MyData {
aBigId :: Int,
importantData :: Int } deriving (Show, Eq, Generic, Ord)
instance SQLTypeable MyData
instance FromSQL MyData
instance ToSQL MyData
let collection = [MyData 1 2, MyData 3 2, MyData 5 4]
let ds = dataset collection @@ "dataset"
let c = collect (asCol ds) @@ "collected_data"
_ <- exec1Def c
from kraps import *
ks = connectSession("session05_python", address='localhost')
ks
ks.pandas("collected_data")
print ks.url('collected_data')
###Output
_____no_output_____ |
_notebooks/2022-02-21-Pet-Breeds.ipynb | ###Markdown
"A pet breed classifier"> "If a team did it at a hackathon, surely I can too... right?"- comments: true- categories: [vision] IntroI remember volunteering at a hackathon and sitting in the award ceremony when I saw a group win in the "fun" category for creating a pet breed classifier. You give it an image and it'll tell you what breed it thinks it is and how confident it is. It was "fun" because you could override the threshold and allow images that aren't cats and dogs to be classified as a dog or cat breed. This blog post will show you how you can train your own pet breed classifer and how it isn't that hard nor time consuming to do so. You don't need a beefy computer either since you can use Colab's GPUs. Training our own pet breed classifierFirst, we'll download the Pet dataset and see what we're given:
###Code
path = untar_data(URLs.PETS)
Path.BASE_PATH = path
path.ls()
(path/'images').ls()
###Output
_____no_output_____
###Markdown
In this dataset, there are two subfolders: `images` and `annotations`. `images` contains the images of the pet breeds (and their labels) while `annotations` contains the location of the pet in each image if you wanted to do localization. The images are structured like so: the name of the pet breed with spaces turned into underscores, followed by a number. The name is capitalized if the pet is a cat. We can get the name of the pet breed by using regular expressions:
###Code
fname = (path/'images').ls()[0]
fname, fname.name
#
# () = extract what's in the parentheses -> .+
# .+ = any character appearing one or more times
# _ = followed by an underscore
# \d+ = followed by any digit appearing one or more times
# .jpg$ = with a .jpg extension at the end of the string
re.findall(r'(.+)_\d+.jpg$', fname.name)
###Output
_____no_output_____
###Markdown
This time, we'll be using a `DataBlock` to create our `DataLoaders`
###Code
pets = DataBlock(
blocks = (ImageBlock, CategoryBlock),
get_items = partial(get_image_files, folders = 'images'),
splitter = RandomSplitter(),
get_y = using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name'),
item_tfms = Resize(460),
batch_tfms = aug_transforms(size = 224, min_scale = 0.75))
dls = pets.dataloaders(path)
###Output
_____no_output_____
###Markdown
In our pets `DataBlock`, we give it the following parameters:- `blocks = (ImageBlock, CategoryBlock)`: our independent variable is an image and our dependent variable is a category. - `get_items = partial(get_image_files, folders = 'images')`: we are getting our images recursively in the `images` folder. If you've used functional programming before, `partial` is like currying; we give a function some of its parameters and it returns another function that accepts the rest of its parameters, except `partial` allows us to specify which parameters we want to give.- `splitter = RandomSplitter()`: randomly splits our data into training and validation sets with a default `80:20` split. We can also specify a seed if we want to test how tuning our hyperparameters affects the final accuracy. The final two parameters are part of "presizing":- `item_tfms = Resize(460)`: picks a random area of an image (using its max width or height, whichever is smallest) and resizes it to 460x460. This process happens for all images in the dataset.- `batch_tfms = aug_transforms(size = 224, min_scale = 0.75)`: take a random portion of the image which is at least 75% of it and resize to 224x224. This process happens for all images in a batch (like the batch we get when we call `dls.one_batch()`).We first resize an image to a much larger size than our actual size for training so that we can avoid the data destruction done by data augmentation. The larger size allows transformation of the data without creating empty areas.  We can check if our `DataLoaders` was created successfully by using the `.show_batch()` feature:
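For instance, a minimal sketch of what that `partial` call does (the `get_items_fn` name is just for illustration):

```python
from functools import partial

get_items_fn = partial(get_image_files, folders='images')
# get_items_fn(path) is now equivalent to get_image_files(path, folders='images')
```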
###Code
dls.show_batch(nrows = 1, ncols = 4)
###Output
_____no_output_____
###Markdown
We can then do some Googling to make sure our images are labelled correctly. Fastai also allows us to debug our `DataBlock` in case we make an error. It attempts to create a batch from the source:
###Code
#collapse-output
pets.summary(path)
###Output
Setting-up type transforms pipelines
Collecting items from /root/.fastai/data/oxford-iiit-pet
Found 7390 items
2 datasets of sizes 5912,1478
Setting up Pipeline: PILBase.create
Setting up Pipeline: partial -> Categorize -- {'vocab': None, 'sort': True, 'add_na': False}
Building one sample
Pipeline: PILBase.create
starting from
/root/.fastai/data/oxford-iiit-pet/images/great_pyrenees_179.jpg
applying PILBase.create gives
PILImage mode=RGB size=500x334
Pipeline: partial -> Categorize -- {'vocab': None, 'sort': True, 'add_na': False}
starting from
/root/.fastai/data/oxford-iiit-pet/images/great_pyrenees_179.jpg
applying partial gives
great_pyrenees
applying Categorize -- {'vocab': None, 'sort': True, 'add_na': False} gives
TensorCategory(21)
Final sample: (PILImage mode=RGB size=500x334, TensorCategory(21))
Collecting items from /root/.fastai/data/oxford-iiit-pet
Found 7390 items
2 datasets of sizes 5912,1478
Setting up Pipeline: PILBase.create
Setting up Pipeline: partial -> Categorize -- {'vocab': None, 'sort': True, 'add_na': False}
Setting up after_item: Pipeline: Resize -- {'size': (460, 460), 'method': 'crop', 'pad_mode': 'reflection', 'resamples': (2, 0), 'p': 1.0} -> ToTensor
Setting up before_batch: Pipeline:
Setting up after_batch: Pipeline: IntToFloatTensor -- {'div': 255.0, 'div_mask': 1} -> Flip -- {'size': None, 'mode': 'bilinear', 'pad_mode': 'reflection', 'mode_mask': 'nearest', 'align_corners': True, 'p': 0.5} -> RandomResizedCropGPU -- {'size': (224, 224), 'min_scale': 0.75, 'ratio': (1, 1), 'mode': 'bilinear', 'valid_scale': 1.0, 'max_scale': 1.0, 'p': 1.0} -> Brightness -- {'max_lighting': 0.2, 'p': 1.0, 'draw': None, 'batch': False}
Building one batch
Applying item_tfms to the first sample:
Pipeline: Resize -- {'size': (460, 460), 'method': 'crop', 'pad_mode': 'reflection', 'resamples': (2, 0), 'p': 1.0} -> ToTensor
starting from
(PILImage mode=RGB size=500x334, TensorCategory(21))
applying Resize -- {'size': (460, 460), 'method': 'crop', 'pad_mode': 'reflection', 'resamples': (2, 0), 'p': 1.0} gives
(PILImage mode=RGB size=460x460, TensorCategory(21))
applying ToTensor gives
(TensorImage of size 3x460x460, TensorCategory(21))
Adding the next 3 samples
No before_batch transform to apply
Collating items in a batch
Applying batch_tfms to the batch built
Pipeline: IntToFloatTensor -- {'div': 255.0, 'div_mask': 1} -> Flip -- {'size': None, 'mode': 'bilinear', 'pad_mode': 'reflection', 'mode_mask': 'nearest', 'align_corners': True, 'p': 0.5} -> RandomResizedCropGPU -- {'size': (224, 224), 'min_scale': 0.75, 'ratio': (1, 1), 'mode': 'bilinear', 'valid_scale': 1.0, 'max_scale': 1.0, 'p': 1.0} -> Brightness -- {'max_lighting': 0.2, 'p': 1.0, 'draw': None, 'batch': False}
starting from
(TensorImage of size 4x3x460x460, TensorCategory([21, 30, 15, 2], device='cuda:0'))
applying IntToFloatTensor -- {'div': 255.0, 'div_mask': 1} gives
(TensorImage of size 4x3x460x460, TensorCategory([21, 30, 15, 2], device='cuda:0'))
applying Flip -- {'size': None, 'mode': 'bilinear', 'pad_mode': 'reflection', 'mode_mask': 'nearest', 'align_corners': True, 'p': 0.5} gives
(TensorImage of size 4x3x460x460, TensorCategory([21, 30, 15, 2], device='cuda:0'))
applying RandomResizedCropGPU -- {'size': (224, 224), 'min_scale': 0.75, 'ratio': (1, 1), 'mode': 'bilinear', 'valid_scale': 1.0, 'max_scale': 1.0, 'p': 1.0} gives
(TensorImage of size 4x3x224x224, TensorCategory([21, 30, 15, 2], device='cuda:0'))
applying Brightness -- {'max_lighting': 0.2, 'p': 1.0, 'draw': None, 'batch': False} gives
(TensorImage of size 4x3x224x224, TensorCategory([21, 30, 15, 2], device='cuda:0'))
###Markdown
Now, let's get to training our model. This time, we'll be fine tuning a pretrained model. This process is called transfer learning, where we take a pretrained model and retrain it on our data so that it can perform well for our task. We randomize the head (last layer) of our model, freeze the parameters of the earlier layers and train our model for one epoch. Then, we unfreeze the model and update the later layers of the model with a higher learning rate than the earlier layers. The pretrained model we will be using is `resnet34`, which was trained on the ImageNet dataset with 34 layers:
###Code
learner = cnn_learner(dls, resnet34, metrics = accuracy)
lrs = learner.lr_find()
learner.fit_one_cycle(3, lr_max = lrs.valley)
learner.unfreeze()
lrs = learner.lr_find()
learner.fit_one_cycle(6, lr_max = lrs.valley)
###Output
_____no_output_____
###Markdown
When we use a pretrained model, fastai automatically freezes the early layers.We then train the head (last layer) of the model for 3 epochs so that it can get a sense of our objective. Then, we unfreeze the model and train all the layers for 6 more epochs. After training for a total of 9 epochs, we now have a model that can predict pet breeds accuractely 94% of the time. We can use fastai's confusion matrix to see where our model is having problems:
###Code
interp = ClassificationInterpretation.from_learner(learner)
interp.plot_confusion_matrix(figsize = (12, 12), dpi = 60)
interp.most_confused(5)
###Output
_____no_output_____
###Markdown
Using the `.most_confused` feature, it seems like most of the errors come from the pet breeds being very similar. We should be careful, however, that we aren't overfitting on our validation set through changing hyperparameters. We can see that our training loss is always going down, but our validation loss fluctuates from going down and sometimes up. And that's all there is to training a pet breed classifier. You could improve the accuracy by exploring deeper models like `resnet50` which has 50 layers; training for more epochs (whether before unfreezing or after or both); using discriminative learning rates (giving lower learning rates to early layers using `slice(lr1, lr2)` in the `lr_max` key-word argument in `fit_one_cycle`). Using our own pet breed classifier. First, let's save the model using `.export()`:
###Code
learner.export()
#hide
from google.colab import files
#hide
files.download('export.pkl')
###Output
_____no_output_____
###Markdown
Then, let's load the `.pkl` file:
###Code
learn = load_learner('export.pkl')
###Output
_____no_output_____
###Markdown
Create some basic UI:
###Code
def pretty(name: str) -> str:
return name.replace('_', ' ').lower()
def classify(a):
if not btn_upload.data:
lbl_pred.value = 'Please upload an image.'
return
img = PILImage.create(btn_upload.data[-1])
pred, pred_idx, probs = learn.predict(img)
out_pl.clear_output()
with out_pl:
display(img.to_thumb(128, 128))
lbl_pred.value = f'Looks like a {pretty(pred)} to me. I\'m {probs[pred_idx] * 100:.02f}% confident!'
btn_upload = widgets.FileUpload()
lbl_pred = widgets.Label()
out_pl = widgets.Output()
btn_run = widgets.Button(description = 'Classify')
btn_run.on_click(classify)
VBox([
widgets.Label('Upload a pet!'),
btn_upload,
btn_run,
out_pl,
lbl_pred])
###Output
_____no_output_____
###Markdown
And there we have it! You can make it prettier and go win a hackathon. However, a bit of a downside with deep learning is that it can only predict what it has been trained on. So, drawings of pets, night-time images of pets, and breeds that weren't included in the training set won't be accurately labelled. We could solve the last case by turning this problem into a multi-label classification problem. Then, if we aren't confident that we have one of the known breeds, we can just say we don't know this breed. Siamese pair When I was watching the fastai lectures, I heard Jeremy talking about "siamese pairs" where you give the model two images and it will tell you if they are of the same breed. Now that we have a model, let's make it!
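Before the Siamese demo, here is a minimal sketch of that "say we don't know" idea as a confidence gate on top of the `learn` and `pretty` defined above (the function name and the 0.8 threshold are arbitrary choices for illustration, not something tuned here):

```python
def classify_or_abstain(img_bytes, thresh=0.8):
    img = PILImage.create(img_bytes)
    pred, pred_idx, probs = learn.predict(img)
    if probs[pred_idx] < thresh:
        return "Not confident this is one of the breeds I was trained on."
    return f"Looks like a {pretty(pred)} ({probs[pred_idx] * 100:.02f}% confident)."
```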
###Code
def pair(a):
if not up1.data or not up2.data:
lbl.value = 'Please upload images.'
return
im1 = PILImage.create(up1.data[-1])
im2 = PILImage.create(up2.data[-1])
pred1, x, _ = learn.predict(im1)
pred2, y, _ = learn.predict(im2)
out1.clear_output()
out2.clear_output()
with out1:
display(im1.to_thumb(128, 128))
with out2:
display(im2.to_thumb(128, 128))
if x == y:
lbl.value = f'Wow, they\'re both {pretty(pred1)}(s)!'
else:
lbl.value = f'The first one seems to be {pretty(pred1)} while the second \
one is a(n) {pretty(pred2)}. I\'m not an expert, but they \
seem to be of different breeds, chief.'
up1 = widgets.FileUpload()
up2 = widgets.FileUpload()
lbl = widgets.Label()
out1 = widgets.Output()
out2 = widgets.Output()
run = widgets.Button(description = 'Classify')
run.on_click(pair)
VBox([
widgets.Label("Siamese Pairs"),
HBox([up1, up2]),
run,
HBox([out1, out2]),
lbl
])
###Output
_____no_output_____ |
notebooks/Banana Classification.ipynb | ###Markdown
SGLD from here onwards
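For reference, the update implemented below is the standard SGLD step

$$\theta_{t+1} = \theta_t + \frac{\epsilon_t}{2}\,\nabla_\theta \log p(\theta_t \mid \mathcal{D}) + \eta_t,\qquad \eta_t \sim \mathcal{N}(0,\,\epsilon_t I),\qquad \epsilon_t = \frac{\text{lr}}{(1+t)^{0.55}}$$

with the noise term commented out during the burn-in loop and switched on in the later sampling loop.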
###Code
@torch.enable_grad()
def gradient(x, y, params):
params_ = params.clone().requires_grad_(True)
loss = log_posterior(x, y, params_)
grad, = torch.autograd.grad(loss, params_)
return loss.detach().cpu().numpy(), grad
lr = 2e-3
def step_size(n):
return lr / (1 + n)**0.55
def sgld(n_epochs, data_batch_size):
dataloader_train = DataLoader(train_data, shuffle=True, batch_size=data_batch_size, num_workers=0)
model = DNN(2, 2).to(device)
params = torch.cat([param.flatten() for param in model.parameters()]).detach()
losses = []
step = 0
for _ in range(n_epochs):
epoch_losses = []
for x, y in tqdm(iter(dataloader_train)):
x = x.to(device)
y = y.to(device)
eps = step_size(step)
loss, grad = gradient(x, y, params)
params = params + 0.5 * eps * grad #+ np.sqrt(eps) * torch.randn_like(params)
step += 1
epoch_losses.append(loss)
losses.append(epoch_losses)
param_samples = []
iterator = iter(dataloader_train)
for _ in range(100):
        # draw a fresh minibatch for each posterior sample (restart the iterator if it runs out)
        try:
            x, y = next(iterator)
        except StopIteration:
            iterator = iter(dataloader_train)
            x, y = next(iterator)
        x = x.to(device)
        y = y.to(device)
eps = step_size(step)
loss, grad = gradient(x, y, params)
params = params + 0.5 * eps * grad + np.sqrt(eps) * torch.randn_like(params)
param_samples.append(params)
step += 1
param_samples = torch.stack(param_samples)
losses = np.array(losses)
return param_samples, losses
accuracies, eces, logps = [], [], []
accuracies_train, eces_train, logps_train = [], [], []
params_all = None
for i in range(num_exp):
param_samples, losses = sgld(n_epochs, data_batch_size)
if params_all is None:
params_all = param_samples.unsqueeze(0).cpu().numpy()
else:
params_all = np.concatenate((params_all, param_samples.unsqueeze(0).cpu().numpy()))
accuracy, ece, logp = evaluate(param_samples)
accuracy_train, ece_train, logp_train = evaluate(param_samples, train)
accuracies.append(accuracy)
eces.append(ece)
logps.append(logp)
accuracies_train.append(accuracy_train)
eces_train.append(ece_train)
logps_train.append(logp_train)
del param_samples
gc.collect()
torch.cuda.empty_cache()
accuracies = np.array(accuracies)
eces = np.array(eces)
logps = np.array(logps)
accuracies_train = np.array(accuracies_train)
eces_train = np.array(eces_train)
logps_train = np.array(logps_train)
SGLD_df = pd.DataFrame({"Accuracy": accuracies, "ECE": eces, "log predictive": logps})
SGLD_train_df = pd.DataFrame({"Accuracy": accuracies_train, "ECE": eces_train, "log predictive": logps_train})
with open(f"{results_path}sgld_test.txt", "w") as f:
f.write(str(SGLD_df.describe()))
with open(f"{results_path}sgld_train.txt", "w") as f:
f.write(str(SGLD_train_df.describe()))
plt_contour(params_all, "banana_sgld", "SGLD")
def sgd(n_epochs, data_batch_size):
lr = 1e-1
dataloader_train = DataLoader(train_data, shuffle=True, batch_size=data_batch_size, num_workers=0)
model = DNN(2, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=lr)
losses = []
for i in range(n_epochs):
for x, y in tqdm(iter(dataloader_train)):
x = x.to(device)
y = y.to(device)
optimizer.zero_grad()
out = model(x)
l = F.cross_entropy(out, y, reduction="mean")
l.backward()
losses.append(l.detach().cpu().numpy())
optimizer.step()
losses = np.array(losses)
return model, losses
accuracies, eces, logps = [], [], []
accuracies_train, eces_train, logps_train = [], [], []
params_all = None
for i in range(num_exp):
model, losses = sgd(n_epochs, data_batch_size)
params = model.parameters()
params = torch.cat([param.flatten() for param in params]).detach()
params = params.view(1, -1)
if params_all is None:
params_all = params.unsqueeze(0).cpu().numpy()
else:
params_all = np.concatenate((params_all, params.unsqueeze(0).cpu().numpy()))
accuracy, ece, logp = evaluate(params)
accuracy_train, ece_train, logp_train = evaluate(params, train)
accuracies.append(accuracy)
eces.append(ece)
logps.append(logp)
accuracies_train.append(accuracy_train)
eces_train.append(ece_train)
logps_train.append(logp_train)
del params
gc.collect()
torch.cuda.empty_cache()
accuracies = np.array(accuracies)
eces = np.array(eces)
logps = np.array(logps)
accuracies_train = np.array(accuracies_train)
eces_train = np.array(eces_train)
logps_train = np.array(logps_train)
SGD_df = pd.DataFrame({"Accuracy": accuracies, "ECE": eces, "log predictive": logps})
SGD_train_df = pd.DataFrame({"Accuracy": accuracies_train, "ECE": eces_train, "log predictive": logps_train})
with open(f"{results_path}sgd_test.txt", "w") as f:
f.write(str(SGD_df.describe()))
with open(f"{results_path}sgd_train.txt", "w") as f:
f.write(str(SGD_train_df.describe()))
plt_contour(params_all, "banana_sgd", "SGD")
###Output
_____no_output_____ |
scripts/shift_hh_quartiles_for_UBI_TM_interimyrs.ipynb | ###Markdown
TAZ File
###Code
import numpy as np
import pandas as pd

TAZ_DATA_PATH = "C:/Users/etheocharides/Box/Modeling and Surveys/Urban Modeling/Bay Area UrbanSim/PBA50/Final Blueprint runs/Final Blueprint (s24)/BAUS v2.25 - FINAL VERSION/"
TAZ_DATA_FILE = "run182_taz_summaries_2040.csv"
taz_data = pd.read_csv(TAZ_DATA_PATH+TAZ_DATA_FILE)
# 2025: 94174
# 2030: 102211
# 2035: 112964
# 2040: 120786
# 2045: 126888
# 2050: 132529
NUM_HH_TO_MOVE = 120786
print("number of households is {}".format(taz_data.TOTHH.sum()))
print("number of q1 households is {}".format(taz_data.HHINCQ1.sum()))
print("number of q2 households is {}".format(taz_data.HHINCQ2.sum()))
# randomly select TAZs to shift HHs in (Q1 -> Q2), loop so that we never end up with negative Q1 HHs in a TAZ
for i in range(0, NUM_HH_TO_MOVE):
taz_i = np.random.choice(taz_data.loc[taz_data.HHINCQ1 > 0].TAZ)
taz_data.loc[taz_data.TAZ == taz_i, 'HHINCQ1'] = taz_data.loc[taz_data.TAZ == taz_i].HHINCQ1 - 1
taz_data.loc[taz_data.TAZ == taz_i, 'HHINCQ2'] = taz_data.loc[taz_data.TAZ == taz_i].HHINCQ2 + 1
print("number of households is {}".format(taz_data.TOTHH.sum()))
print("number of q1 households is {}".format(taz_data.HHINCQ1.sum()))
print("number of q2 households is {}".format(taz_data.HHINCQ2.sum()))
# save a copy of the data in the output folder, manually make it the master file as needed
# don't want to risk overwriting the master file (run182_taz_summaries_2040_UBI.csv)
NEW_TAZ_DATA_FILE = "run182_taz_summaries_2040_UBI_output.csv"
taz_data.to_csv(TAZ_DATA_PATH+NEW_TAZ_DATA_FILE)
###Output
_____no_output_____ |
nbs/03_StaticGenerator.ipynb | ###Markdown
Web functions > API details.
###Code
#hide
from nbdev.showdoc import *
cPath = os.path.dirname(os.getcwd())
cPath
sys.path.insert(0,cPath)
#export
from refreshment.School import StudySystem
from refreshment.Program import Program, Subject, Record, Lesson
from refreshment.munge import dateGrid, processGuide, unassignedForSubject, printAllUnassigned
#hide
guide = StudySystem("./data/programs.json")
for x in guide.programs:
print(x.name)
for y in x.subjects:
print("\"" + x.name + "\",\"" + y.name + "\"")
guide.loadDirectory()
for x in guide.programs:
print(x.name)
for y in x.subjects:
print("\"" + x.name + "\",\"" + y.name + "\"")
#hide
cath = guide.programs[0]
foo = cath.subjects
sorted(foo, key=lambda x: x.name)[:2]
reading = [x for x in cath.subjects if x.name == "Reading"][0]
for less in reading.lessons:
print(less.fileName)
#hide
template = Template('Hello {{ name }}!')
template.render(name=cath.name)
#hide
guide = StudySystem("./data/programs.json")
for x in guide.programs:
print(x.name)
for y in x.subjects:
print("\"" + x.name + "\",\"" + y.name + "\"")
#export
class Render:
def __init__(self, program, guide , dir="../../web",template = "../templates"):
self.guide = guide
self.program = [x for x in guide.programs if x.name == program][0]
print("loaded program " + self.program.name)
self.outputdir = os.path.join(dir,self.program.name)
self.template = template
self.file_loader = FileSystemLoader(self.template)
self.env = Environment(loader=self.file_loader)
self.resourcesName = "resources"
self.resourcesStart = os.path.join(".", self.resourcesName )
self.resourcesEnd = os.path.join(self.outputdir,self.resourcesName)
def basePath(self):
return self.outputdir
def renderLesson(self,sub,lesson):
template = self.env.get_template("lesson.html")
resDir = "../../.."
output = template.render(item=sub,
resDir=resDir,
less=lesson,
program = self.program,
styleDir=os.path.join("..",self.resourcesName))
subDir = os.path.join(self.outputdir,sub.name)
if not os.path.exists(subDir):
os.mkdir(subDir)
f = open(os.path.join(subDir,"vid_" +str(lesson.key) + ".html"), "w")
f.write(output)
def renderSubject(self,sub):
template = self.env.get_template("subject.html")
videos = [x for x in sub.lessons if x.fileName.endswith(".mp4")]
resources = [x for x in sub.lessons if not x.fileName.endswith(".mp4")]
for x in sub.lessons:
self.renderLesson(sub,x)
resDir = "../../.."
#print(sub.sequences)
output = template.render(item=sub,
videos=videos,
res=resources,
resDir=resDir,
program = self.program,
grid = reversed(dateGrid(self.program,subject=sub.name)),
styleDir=os.path.join("..",self.resourcesName))
subDir = os.path.join(self.outputdir,sub.name)
if not os.path.exists(subDir):
os.mkdir(subDir)
f = open(os.path.join(subDir,"index.html"), "w")
f.write(output)
def renderCalendar(self):
grid = reversed(dateGrid(self.program))
if not os.path.exists(self.outputdir):
os.mkdir(self.outputdir)
template = self.env.get_template("calendar.html")
output = template.render(program=self.program,
grid=grid,
styleDir=os.path.join(".",self.resourcesName))
f = open(os.path.join(self.outputdir,"calendar.html"), "w")
f.write(output)
f.close()
def renderSchool(self):
foo = self.program.subjects
seq = sorted(foo, key=lambda x: x.name)
if not os.path.exists(self.outputdir):
os.mkdir(self.outputdir)
for x in seq:
self.renderSubject(x)
template = self.env.get_template("school.html")
output = template.render(program=self.program,
seq=seq,
styleDir=os.path.join(".",self.resourcesName))
f = open(os.path.join(self.outputdir,"index.html"), "w")
f.write(output)
f.close()
self.addResources()
def addResources(self):
if not os.path.exists(self.resourcesEnd):
os.mkdir(self.resourcesEnd)
src_files = os.listdir(self.resourcesStart)
for file_name in src_files:
full_file_name = os.path.join(self.resourcesStart, file_name)
if os.path.isfile(full_file_name):
shutil.copy(full_file_name, os.path.join(self.resourcesEnd, file_name))
with open(os.path.join( self.resourcesEnd, "lemonade.json"), "w") as dataFile:
json.dump(self.program.toDict(), dataFile, indent=4, sort_keys=True)
def addFiles(self):
for sub in self.program.subjects:
for lesson in sub.lessons:
source = os.path.join(self.guide.origin, self.program.name,sub.name,lesson.fileName)
dest = os.path.join(self.outputdir,sub.name,lesson.fileName)
if not os.path.isfile(dest):
print("found " + dest)
shutil.copy(source, dest)
#hide
guide.loadDirectory()
processGuide(guide)
cath = guide.programs[0]
printAllUnassigned(prog=cath)
#export
def makeSite(school,programName):
baseRender = Render(programName,school)
baseRender.renderSchool()
school.save()
baseRender.addFiles()
baseRender.addResources()
baseRender.renderCalendar()
#hide
makeSite(school=guide,programName="Cathedral")
#import json
#print(json.dumps(guide.toDict(), sort_keys=True, indent=4))
###Output
_____no_output_____ |
05-Pandas-with-Time-Series/.ipynb_checkpoints/Rolling and Expanding-checkpoint.ipynb | ###Markdown
___ ___*Copyright Pierian Data 2017**For more information, visit us at www.pieriandata.com* Rolling and ExpandingA very common process with time series is to create data based off of a rolling mean. Let's show you how to do this easily with pandas!
###Code
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# Best way to read in data with time series index!
df = pd.read_csv('time_data/walmart_stock.csv',index_col='Date',parse_dates=True)
df.head()
df['Open'].plot(figsize=(16,6))
###Output
_____no_output_____
###Markdown
Now let's add in a rolling mean! This rolling method provides row entries, where every entry is then representative of the window.
###Code
# 7 day rolling mean
df.rolling(7).mean().head(20)
df['Open'].plot()
df.rolling(window=30).mean()['Close'].plot()
###Output
_____no_output_____
###Markdown
Easiest way to add a legend is to make this rolling value a new column, then pandas does it automatically!
###Code
df['Close: 30 Day Mean'] = df['Close'].rolling(window=30).mean()
df[['Close','Close: 30 Day Mean']].plot(figsize=(16,6))
###Output
_____no_output_____
###Markdown
expandingNow what if you want to take into account everything from the start of the time series as a rolling value? For instance, not just take into account a period of 7 days, or monthly rolling average, but instead, take into everything since the beginning of the time series, continuously:
###Code
# Optional specify a minimum number of periods
df['Close'].expanding(min_periods=1).mean().plot(figsize=(16,6))
###Output
_____no_output_____
###Markdown
Bollinger BandsWe will talk a lot more about financial analysis plots and technical indicators, but here is one worth mentioning!More info : http://www.investopedia.com/terms/b/bollingerbands.asp*Developed by John Bollinger, Bollinger Bands® are volatility bands placed above and below a moving average. Volatility is based on the standard deviation, which changes as volatility increases and decreases. The bands automatically widen when volatility increases and narrow when volatility decreases. This dynamic nature of Bollinger Bands also means they can be used on different securities with the standard settings. For signals, Bollinger Bands can be used to identify Tops and Bottoms or to determine the strength of the trend.**Bollinger Bands reflect direction with the 20-period SMA and volatility with the upper/lower bands. As such, they can be used to determine if prices are relatively high or low. According to Bollinger, the bands should contain 88-89% of price action, which makes a move outside the bands significant. Technically, prices are relatively high when above the upper band and relatively low when below the lower band. However, relatively high should not be regarded as bearish or as a sell signal. Likewise, relatively low should not be considered bullish or as a buy signal. Prices are high or low for a reason. As with other indicators, Bollinger Bands are not meant to be used as a stand alone tool. *
###Code
df['Close: 30 Day Mean'] = df['Close'].rolling(window=20).mean()
df['Upper'] = df['Close: 30 Day Mean'] + 2*df['Close'].rolling(window=20).std()
df['Lower'] = df['Close: 30 Day Mean'] - 2*df['Close'].rolling(window=20).std()
df[['Close','Close: 30 Day Mean','Upper','Lower']].plot(figsize=(16,6))
###Output
_____no_output_____ |
Note_books/Explore_Models/.ipynb_checkpoints/betting_strategies_v4_spread-checkpoint.ipynb | ###Markdown
Part 2: Evaluate betting ... "tune" on x_16 ... test again on x_17
###Code
y_pred= model.predict(x_test_sc) #this is <= 20152016 data
acc_pre = accuracy_score(y_test, y_pred) #this x_test_sc needs to be defined already above during training session on <=2015
def make_bet_df(model=rfc, x = x_16, y = y_16, Y = Y_16):
dic = {} #x_16, lgr 0.55 ... Request rfc 0.58 x_16 and 0.6 on x_17 (later)
dic['model_name'] = str(model)[0:10]
dic['actual_res'] = [ t[0] for t in list(y)]
dic['model_pred'] = list(model.predict(x))
dic['model_conf_1'] = [round(t,4) for t in model.predict_proba(x)[0:, 1]]
dic['model_conf_0'] =[round(t,4) for t in model.predict_proba(x)[0:, 0]]
dic['home_odds'] = list(Y['home_odds'])
dic['away_odds'] = list(Y['away_odds'])
df= pd.DataFrame(dic)
#wrong! ... actually ok
df['home_impl_proba'] = v_impl_proba(df['home_odds'])
df['away_impl_proba'] = v_impl_proba(df['away_odds'])
df['conf_1_sub_home_impl'] = df['model_conf_1'] - df['home_impl_proba']
df['conf_0_sub_away_impl'] = df['model_conf_0'] - df['away_impl_proba']
df['pre_acc_sub_home_impl'] = (acc_pre - df['home_impl_proba']).copy()
df['pre_acc_sub_away_impl'] = (acc_pre - df['away_impl_proba']).copy()
df['fav_pred'] = v_fav_pred(df['home_odds'] )
return df #df_bet_info
df_bet_info = make_bet_df(model=rfc, x = x_16, y = y_16, Y = Y_16) ##this is for model = rfc and season = 20162017; rfc dif well on both seasons
df_bet_info
#simpler approach?
y_pred= model.predict(x_test_sc) #this is <= 20152016 data
acc_pre = accuracy_score(y_test, y_pred) #this x_test_sc needs to be defined already above during training session on <=2015
##bet the spread mofo!
#conf_min >=0 is how far from 0.5 you want conf_1 or conf_0 to be (if one is close to 0.5 so is the other)
## pre_acc_above_break_even = 0.1 means you would ask pre_acc >= break_even_prob + 0.1 pre_acc is accuracy of model on pretraining
##model_conf_above_break_even = 0.1 means you would ask predict_proba >= break_even_prob + 0.1 pred_proba is the current model's
#estimate of how likely it thinks the answer is 1 (or 0 if it is predicting 0)
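# e.g. (assuming decimal odds) home_odds = 2.50 would imply a break-even probability of 1/2.50 = 0.40,
# so min_model_conf_above_break_even = 0.1 would require model_conf_1 >= 0.50 before betting the home side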
def make_pay_off_df(df_bet_info=df_bet_info, model_conf_min = 0 , min_pre_acc_above_break_even = 0, min_model_conf_above_break_even = 0, bet_fav = True, bet_und = True):
##return df with columns for each season in question:
#type_bet, total_invested, total_earned, profit, ROI,
##rows should be: avg for season in question let's do x_16 for now
###filters for df ..
    df = df_bet_info.copy() ##work on a full copy (we need model_pred, fav_pred, actual_res and the conf columns)
HA_bet = df['model_pred'].copy() #0 or 1
fav_pred = df['fav_pred'].copy() #0 or 1 ... need 4 terms one for each combo
actual_res= df['actual_res'].copy()
##STEP 1
##decide on bet_fav bet_und or do both strategy
    if bet_fav and not bet_und:
        filt_bet = (HA_bet == fav_pred).copy()
    elif not bet_fav and bet_und:
        filt_bet = (HA_bet != fav_pred).copy()
    elif not bet_fav and not bet_und:
        filt_bet = (HA_bet != HA_bet).copy() #all False
    else:
        filt_bet = (HA_bet == HA_bet).copy() ##all True no filter
#filt_bet_fav = (HA_bet == fav_pred)|(~bet_fav) # if bet_fav = False, want this to be all True; if bet_fav = True, want to use this filter as is; do OR not bet_fav
#filt_bet_und = (HA_bet != fav_pred)|(~bet_und) #same deal ... if bet_und = True, want this filter to take effect; if False, want it to be all True
#filt_fav_und = filt_bet_fav & filt_bet_und ##not possible to do both stragies ...
df = df.loc[filt_bet, :].copy() #set betting fav/und strategy
##STEP 2 ## impose the restrictions on confidence levels, if any; to keep it simple we impose same constraint whether H/A or fav/und
##! I think filtering by betting fav or und FIRST before calculating confidence levels is important
## therfore USE df and not df_bet_info (unfiltered) below
#conf_min = 0; first we define model not confident to mean proba is close to 0.5 (both 1 and 0 conf will both be close to 0.5)
#eg within conf_min =0.05 of 0.5 means not confident
model_not_conf = (0.5 - model_conf_min <= df['model_conf_1'])&(df['model_conf_1'] <= 0.5 + model_conf_min).copy()
filt_model_is_conf = (~model_not_conf).copy()
filt_= df['conf_1_sub_home_impl'] >= min_pre_acc_above_break_even
##it is tricky because if you are betting on home team you need your conf_1 to be high; but it betting away team, you need conf_0 to be high
conf_above_impl_proba_home_bet = (HA_bet*df['conf_1_sub_home_impl'] >= HA_bet*min_model_conf_above_break_even).copy() #if HA_bet =0, reduced to 0>=0 (all True)
conf_above_impl_proba_away_bet = ((1-HA_bet)*df['conf_0_sub_away_impl'] >=(1-HA_bet)*min_model_conf_above_break_even).copy()
filt_conf_above_impl_proba = conf_above_impl_proba_home_bet&conf_above_impl_proba_away_bet #one of these will be all True
pre_acc_above_impl_proba_home_bet = (HA_bet*df['pre_acc_sub_home_impl'] >= HA_bet*min_pre_acc_above_break_even).copy() #if HA_bet =0, reduced to 0>=0 (all True)
    pre_acc_above_impl_proba_away_bet = ((1-HA_bet)*df['pre_acc_sub_away_impl'] >=(1-HA_bet)* min_pre_acc_above_break_even).copy()
    filt_acc_above_impl_proba = pre_acc_above_impl_proba_home_bet&pre_acc_above_impl_proba_away_bet
filt_conf = filt_model_is_conf&filt_conf_above_impl_proba&filt_acc_above_impl_proba
#second restriction:
df = df.loc[filt_conf, :].copy()
return df
df_pay = make_pay_off_df(df_bet_info=df_bet_info, model_conf_min = 0 , min_pre_acc_above_break_even = -10**6, min_model_conf_above_break_even = -10**6, bet_fav = True, bet_und = True)
##return df wi
df_pay
###Output
_____no_output_____
###Markdown
can do a version where we get the regression target y at the beginning ... do all the regression stuff ... then for later cells do y = v_win(y) for the classifiers ... regression models now:
y = Y['goal_diff_target'].copy()
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)
do standard/minmax scaling on x_train numeric columns ... better to do a pipeline?
x_train_sc = std_scal.fit_transform(x_train)  (fit the scaler on the train portion)
x_test_sc = std_scal.transform(x_test)  (apply it to the test portion)
lr = Ridge(alpha=50000)
rfr = RandomForestRegressor(max_depth=4, random_state=0)
xgbr = XGBRegressor()
quick checks:
for model in [lr, rfr, xgbr]:
    model.fit(x_train_sc, y_train)
    model.fit(x_train_sc, y_train.ravel())
    y_pred = model.predict(x_test_sc)
    print(y_pred[0:5])
    y_predw = v_win(y_pred)
    y_predt = model.predict(x_train_sc)
    y_predwt = v_win(y_predt)
    y_trainw = v_win(y_train)
    y_testw = v_win(y_test)
    same as usual win/loss:
    acc = accuracy_score(y_testw, y_predw)
    f1 = f1_score(y_testw, y_predw)
    acct = accuracy_score(y_trainw, y_predwt)
    f1t = f1_score(y_trainw, y_predwt)
    print(str(model)[0:20], 'TEST: ', acc, f1, 'training : ', acct, f1t)
###Code
#Let's go with lgr, lgr2 (tuned or not) for betting investigation
x_16 = std_scal.transform(x_16).copy()
svc2.predict_proba(x_16)
gnb.predict_proba(x_16)[0:, 0]
y_16
##make betting strategy on x_16 test it on x_17 ... and other seasons if needed
df_lgr.columns
df_lgr['away_impl_proba'] = v_impl_proba(df_lgr['away_odds'])
df_lgr['home_impl_proba'] = v_impl_proba(df_lgr['home_odds'])
df_lgr['conf_1_sub_home_impl'] = df_lgr['lgr_conf_1'] - df_lgr['home_impl_proba']
df_lgr['conf_0_sub_away_impl'] = df_lgr['lgr_conf_0'] - df_lgr['away_impl_proba']
df_lgr = df_lgr.loc[:, ['actual', 'lgr_pred', 'lgr_conf_1', 'home_impl_proba', 'conf_1_sub_home_impl', 'conf_0_sub_away_impl', 'lgr_conf_0', 'away_impl_proba',
'home_odds', 'away_odds']].copy()
df['away_odds']
df_100.describe()
df_lgr.loc[df_lgr['home_odds'] == df_lgr['away_odds'], :]
np.abs(-6)
def v_fav(x):
    if x < 0:
        return 1
    return 0
df['profit'].sum()
cols = ['actual', 'lgr_pred', 'lgr_conf_1', 'home_impl_proba',
'conf_1_sub_home_impl', 'conf_0_sub_away_impl',
'away_impl_proba', 'bet_HA',
'bet', 'pay_out', 'profit', 'total_profit','cumul_profit', 'total_bet', 'total_ROI' ]
df_100.shape
df_100.loc[800:900,cols]
df_100 = find_pay_off_bet_100(conf_1_at_least = 0 , conf_0_at_least =0 ,conf_thresh=0 , diff_home_C_1_at_least =0, diff_away_C_0_at_least =0, type_bet = "Bet_100") #or "bet_100"
#
df_100 = find_pay_off_bet_100(conf_1_at_least = 0, conf_0_at_least = 0, conf_thresh = -100,diff_home_C_1_at_least =0.2, diff_away_C_0_at_least =0.1, type_bet = "bet_100" )
df_100['total_profit']
df = find_pay_off(conf_1_at_least = 0.2, conf_0_at_least = 0.2, diff_home_C_1_at_least = -10, diff_away_C_0_at_least = -10, type_bet = "get_100" )
print(accuracy_score(df['actual'], df['lgr_pred'], ))
df['total_profit'] #OR# #AND# ##OR##
df_lgr.describe()
accuracy_score(df_lgr['actual'], df_lgr['lgr_pred'], )
print(df.loc[(df['bet_HA'] == 1)&(df['pay_out'] >0), ['profit']].sum(),
df.loc[(df['bet_HA'] == 1)&(df['pay_out'] >0), ['profit']].count(),
'avg loss ', df.loc[(df['bet_HA'] == 1)&(df['pay_out'] >0), ['profit']].sum()/df.loc[(df['bet_HA'] == 0)&(df['pay_out'] ==0), ['profit']].count() )
print(df.loc[(df['bet_HA'] == 1)&(df['pay_out'] ==0), ['profit']].sum(),
df.loc[(df['bet_HA'] == 1)&(df['pay_out'] ==0), ['profit']].count(),
'avg loss ', df.loc[(df['bet_HA'] == 1)&(df['pay_out'] ==0), ['profit']].sum()/df.loc[(df['bet_HA'] == 0)&(df['pay_out'] ==0), ['profit']].count() )
print(df.loc[(df['bet_HA'] == 0)&(df['pay_out'] >0), ['profit']].sum(),
df.loc[(df['bet_HA'] == 0)&(df['pay_out'] >0), ['profit']].count(),
'avg loss ', df.loc[(df['bet_HA'] == 0)&(df['pay_out'] >0), ['profit']].sum()/df.loc[(df['bet_HA'] == 0)&(df['pay_out'] ==0), ['profit']].count() )
print(df.loc[(df['bet_HA'] == 0)&(df['pay_out'] ==0), ['profit']].sum(),
df.loc[(df['bet_HA'] == 0)&(df['pay_out'] ==0), ['profit']].count(),
'avg loss ', df.loc[(df['bet_HA'] == 0)&(df['pay_out'] ==0), ['profit']].sum()/df.loc[(df['bet_HA'] == 0)&(df['pay_out'] ==0), ['profit']].count() )
print(df['profit'].sum())
print('avg loss', df['profit'].sum()/df['profit'].count())
(df.loc[(df['bet_HA'] == 0)&(df['pay_out'] ==0), ['profit']] <0).sum()
dic2['lgr_pred'] = list(lgr.predict(x_16))
dic2['gnb_pred'] = list(gnb.predict(x_16))
dic2['svc_proba'] = list(svc.predict_proba(x_16)[:,1])
predictions = gnb.predict(x_17)
actual = y_17
confusionMatrix = confusion_matrix(actual, predictions)
display(confusionMatrix)
tn = confusionMatrix[0][0]
fp = confusionMatrix[0][1]
fn = confusionMatrix[1][0]
tp = confusionMatrix[1][1]
actualYes = fn + tp
actualNo = tn + fp
predictedYes = fp + tp
predictedNo = tn + fn
print('When home team wins, classifier predicts they will win %6.2f%% of the time' % (tp / actualYes * 100))
print('When home team loses, classifier predicts they will win %6.2f%% of the time' % (fp / actualNo * 100))
print('When home team loses, classifier predicts they will lose %6.2f%% of the time' % (tn / actualNo * 100))
print('When classifer predicts home team will win, home team actually wins %6.2f%% of the time' % (tp / predictedYes * 100))
print('When classifer predicts home team will lose, home team actually loses %6.2f%% of the time' % (tn / predictedNo * 100))
## is the betting and calcs being done correctly? 62% accuracy should easily have a profit ...
##try all models
##try regression
##think about this "confidence diff " Leung mentioned
##also ... sigh ... put in the FW% thing I guess ... [that was what I was struggling with calculating yesterday ]
###Output
_____no_output_____ |
examples/grid_resilience/PowerBalance.ipynb | ###Markdown
Defining Generation and Load Nodes
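The code below models each node's net power as a simple per-minute random walk (positive values mean net generation, e.g. from solar, and negative values mean net consumption):

$$P_j(0) \sim 100\,\mathcal{N}(0,1)\ \text{kW},\qquad P_j(t) = P_j(t-1) + \mathcal{N}(0,1)\ \text{kW}.$$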
###Code
# define the power used by each of the nodes - this is where you would put your human behavior decisions
# for each of these nodes
power_used = np.zeros((N,Nt)) # keep track of power used by each node
power_used[:,0] = 100 * np.random.randn(N) # kW - starting power
# Assume each node has solar and can generate power or consume power, if generated power, the power will be positive
# and if consuming power the power will be negative
# TODO: we don't need to worry currently about control of solar
# eventually we will want to add some controls to make the output at the substation near zero
for i in range(1,Nt):
for j in range(N):
# TODO: correlation with time of day
power_used[j,i] = power_used[j,i-1] + 1 * np.random.randn(1)
# change the power by a small amount for each minute
# Caity - this is where you would add your inputs
# plot the profiles of each of the nodes
plt.figure(figsize=(10,7))
for i in range(N):
plt.plot(power_used[i,:])
plt.xlabel('Time (min)', fontsize=15)
plt.ylabel('Power (kW)')
# plot the total power requested by the substation
plt.figure(figsize=(10,7))
plt.plot(np.sum(power_used,axis=0))
plt.xlabel('Time (min)', fontsize=15)
plt.ylabel('Total Power out of Substation (kW)', fontsize=15)
plt.grid()
###Output
_____no_output_____
###Markdown
Visualizations
###Code
# def cluster(x, y, distance, method='radius', view_plot=False):
# # 3 clustering techniques:
# # 1. radius - agents within a radius
# # 2. box - agents within a box
# # 3. nearest - nearest agents
# # number of turbines
# nAgents = len(x)
# # reference point
# xMin = np.min(x)
# yMin = np.min(y)
# # initialize cluster dictionary
# cluster_agents = dict()
# if view_plot:
# plt.figure(figsize=(10, 7))
# for i in range(nAgents):
# # print('Agent ', i, 'out of ', nAgents)
# # plot the turbines to see the clustering
# if view_plot:
# plt.plot(x[i] - xMin, y[i] - yMin, 'ko', markersize=10)
# cluster_agents[i] = []
# if method == 'nearest':
# dist = np.zeros(nAgents)
# for j in range(nAgents):
# if i != j:
# # ===============================================
# # cluster turbines within some radius
# # ===============================================
# if method == 'radius':
# dist = np.sqrt((x[i] - x[j]) ** 2 + (y[i] - y[j]) ** 2 )
# if dist < distance[0]:
# if view_plot:
# plt.plot([x[i] - xMin, x[j] - xMin],
# [y[i] - yMin, y[j] - yMin],
# color=plt.cm.RdYlBu(i))
# cluster_agents[i].append(j)
# # ===============================================
# # cluster turbines within some box boundaries
# # ===============================================
# elif method == 'box':
# dist_east = abs(x[i] - x[j])
# dist_north = abs(y[i] - y[j])
# if dist_east < distance[0] and dist_north < distance[1]:
# if view_plot:
# plt.plot([x[i] - xMin, x[j]] - xMin,
# [y[i] - yMin, y[j] - yMin],
# color=plt.cm.RdYlBu(i))
# cluster_agents[i].append(j)
# # ===============================================
# # cluster nearest turbines
# # ===============================================
# elif method == 'nearest':
# dist[j] = np.sqrt((x[i] - x[j]) ** 2 \
# + (y[i] - y[j]) ** 2)
# # nearest neighbor has a different plotting strategy
# if method == 'nearest':
# idx = np.argsort(dist)
# cluster_agents[i] = idx[0:distance[0] + 1]
# for k in cluster_agents[i]:
# if view_plot:
# plt.plot([x[i] - xMin, x[k] - xMin], \
# [y[i] - yMin, y[k] - yMin],
# color=plt.cm.RdYlBu(i))
# if view_plot:
# plt.title('Clustering', fontsize=25)
# plt.xlabel('x (m)', fontsize=25)
# plt.ylabel('y (m)', fontsize=25)
# plt.tick_params(which='both', labelsize=25)
# plt.grid()
# return cluster_agents
# def plot_distributed(x, y, cluster_agents):
# # number of turbines
# nAgents = len(x)
# # reference point
# xMin = np.min(x)
# yMin = np.min(y)
# xMax = np.max(x)
# yMax = np.max(y)
# plt.plot(x-xMin,y-yMin,'ko')
# for i in range(nAgents):
# for j in cluster_agents[i]:
# scale_factor = 1.0
# plt.plot([x[i],x[j]]-xMin,[y[i],y[j]]-yMin,color='g')
# plt.xlim([xMin - 10, xMax+10])
# plt.ylim([yMin - 10, yMax + 10])
# # Show power at each nodes - for t = 0
# x = 10*np.random.rand(N)
# y = 10*np.random.rand(N)
# # connect to nearest turbines
# cluster_turbines = cluster(x,y,np.ones(N))
# # plot the network
# # plot_distributed(x,y,cluster_turbines)
# plt.plot(x,y,marker_size=power_used[:,0])
###Output
_____no_output_____ |
tensorflow/keras_vivit.ipynb | ###Markdown
tf.data pipeline
###Code
@tf.function
def preprocess(frames: tf.Tensor, label: tf.Tensor):
frames = tf.image.convert_image_dtype(
frames[
..., tf.newaxis
], tf.float32
)
label = tf.cast(label, tf.float32)
return frames, label
def prepare_dataloader(
videos: np.ndarray,
labels: np.ndarray,
loader_type: str = "train",
batch_size: int = BATCH_SIZE
):
dataset = tf.data.Dataset.from_tensor_slices((videos, labels))
if loader_type == 'train':
dataset = dataset.shuffle(BATCH_SIZE * 2)
dataloader = (
dataset.map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)
.batch(batch_size)
.prefetch(tf.data.AUTOTUNE)
)
return dataloader
trainloader = prepare_dataloader(train_videos, train_labels, 'train')
validloader = prepare_dataloader(valid_videos, valid_labels, 'valid')
testloader = prepare_dataloader(test_videos, test_labels, 'test')
class TubeletEmbedding(layers.Layer):
def __init__(self, embed_dim, patch_size, **kwargs):
super().__init__(**kwargs)
self.projection = layers.Conv3D(
filters=embed_dim,
kernel_size=patch_size,
strides=patch_size,
padding='valid'
)
self.flatten = layers.Reshape(target_shape=(-1, embed_dim))
def call(self, videos):
projected_patches = self.projection(videos)
flattend_patches = self.flatten(projected_patches)
return flattend_patches
class TubeletEmbedding(layers.Layer):
def __init__(self, embed_dim, patch_size, **kwargs):
super().__init__(**kwargs)
self.projection = layers.Conv3D(
filters=embed_dim,
kernel_size=patch_size,
strides=patch_size,
padding="VALID",
)
self.flatten = layers.Reshape(target_shape=(-1, embed_dim))
def call(self, videos):
projected_patches = self.projection(videos)
flattened_patches = self.flatten(projected_patches)
return flattened_patches
class PositionalEncoder(layers.Layer):
def __init__(self, embed_dim, **kwargs):
super().__init__(**kwargs)
self.embed_dim = embed_dim
def build(self, input_shape):
_, num_tokens, _ = input_shape
self.position_embedding = layers.Embedding(
input_dim=num_tokens, output_dim=self.embed_dim
)
self.positions = tf.range(start=0, limit=num_tokens, delta=1)
def call(self, encoded_tokens):
encoded_positions = self.position_embedding(self.positions)
encoded_tokens = encoded_tokens + encoded_positions
return encoded_tokens
###Output
_____no_output_____
###Markdown
Video vision transformer with spatio-temporal attention
###Code
def create_vivit_classifier(
tubelet_embedder,
positional_encoder,
input_shape=INPUT_SHAPE,
transformer_layers=NUM_LAYERS,
num_heads=NUM_HEADS,
embed_dim=PROJECTION_DIM,
layer_norm_eps=LAYER_NORM_EPS,
num_classes=NUM_CLASSES
):
inputs = layers.Input(shape=input_shape)
patches = tubelet_embedder(inputs)
encoded_patches = positional_encoder(patches)
# Create multiple layers of transformer block
for _ in range(transformer_layers):
# Layer norm and multi head self attention
x1 = layers.LayerNormalization(epsilon=1e-6)(encoded_patches)
attention_output = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=embed_dim//num_heads,
dropout=0.1
)(x1, x1)
# Skip connection
x2 = layers.Add()([attention_output, encoded_patches])
# Layer norm and MLP
x3 = layers.LayerNormalization(epsilon=1e-6)(x2)
x3 = keras.Sequential([
layers.Dense(units=embed_dim*4, activation=tf.nn.gelu),
layers.Dense(units=embed_dim, activation=tf.nn.gelu)
])(x3)
# Skip connection
encoded_patches = layers.Add()([x3, x2])
# Layer norm and global average pooling
representation = layers.LayerNormalization(epsilon=layer_norm_eps)(encoded_patches)
representation = layers.GlobalAvgPool1D()(representation)
# Classify outputs
outputs = layers.Dense(units=num_classes, activation='softmax')(representation)
# Create the keras model
model = keras.Model(inputs=inputs, outputs=outputs)
return model
def run_experiment():
model = create_vivit_classifier(
tubelet_embedder=TubeletEmbedding(
embed_dim=PROJECTION_DIM, patch_size=PATCH_SIZE
),
positional_encoder=PositionalEncoder(embed_dim=PROJECTION_DIM)
)
optimizer = keras.optimizers.Adam(learning_rate=LEARNING_RATE)
model.compile(
optimizer=optimizer,
loss='sparse_categorical_crossentropy',
metrics=[
keras.metrics.SparseCategoricalAccuracy(name='accuracy'),
keras.metrics.SparseTopKCategoricalAccuracy(5, name='top-5-accuracy')
]
)
# Train the model
_ = model.fit(trainloader, epochs=EPOCHS, validation_data=validloader)
_, accuracy, top5_accuracy = model.evaluate(testloader)
print(f"Test acc: {round(accuracy * 100, 2)}%")
print(f"Test top5 acc: {round(top5_accuracy * 100, 2)}%")
return model
model = run_experiment()
###Output
_____no_output_____
###Markdown
Inference
###Code
NUM_SAMPLES_VIZ = 25
testsamples, labels = next(iter(testloader))
testsamples, labels = testsamples[:NUM_SAMPLES_VIZ], labels[:NUM_SAMPLES_VIZ]
ground_truths = []
preds = []
videos = []
for i, (testsample, label) in enumerate(zip(testsamples, labels)):
with io.BytesIO() as gif:
imageio.mimsave(gif, (testsample.numpy() * 255).astype('uint8'), 'GIF', fps=5)
videos.append(gif.getvalue())
output = model.predict(tf.expand_dims(testsample, axis=0))[0]
pred = np.argmax(output, axis=0)
ground_truths.append(label.numpy().astype('int'))
preds.append(pred)
def make_box_for_grid(image_widget, fit):
if fit is not None:
fit_str = '{}'.format(fit)
else:
fit_str = str(fit)
h = ipywidgets.HTML(value=str(fit_str))
boxb = ipywidgets.widgets.Box()
boxb.children = [image_widget]
vb = ipywidgets.widgets.VBox()
vb.layout.align_items = 'center'
vb.children = [h, boxb]
return vb
boxes = []
for i in range(NUM_SAMPLES_VIZ):
ib = ipywidgets.widgets.Image(value=videos[i], width=100, height=100)
true_class = info['label'][str(ground_truths[i])]
pred_class = info['label'][str(preds[i])]
caption = f'T: {true_class} | P: {pred_class}'
boxes.append(make_box_for_grid(ib, caption))
ipywidgets.widgets.GridBox(
boxes, layout=ipywidgets.widgets.Layout(grid_template_columns='repeat(5, 200px)')
)
###Output
_____no_output_____ |
04-Linear-Regression-with-Python/related-tutorials/01-B-Introduction-Numpy.ipynb | ###Markdown
*This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).**The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!* Introduction to NumPy This chapter, along with chapter 3, outlines techniques for effectively loading, storing, and manipulating in-memory data in Python.The topic is very broad: datasets can come from a wide range of sources and a wide range of formats, including be collections of documents, collections of images, collections of sound clips, collections of numerical measurements, or nearly anything else.Despite this apparent heterogeneity, it will help us to think of all data fundamentally as arrays of numbers.For example, images–particularly digital images–can be thought of as simply two-dimensional arrays of numbers representing pixel brightness across the area.Sound clips can be thought of as one-dimensional arrays of intensity versus time.Text can be converted in various ways into numerical representations, perhaps binary digits representing the frequency of certain words or pairs of words.No matter what the data are, the first step in making it analyzable will be to transform them into arrays of numbers.(We will discuss some specific examples of this process later in [Feature Engineering](05.04-Feature-Engineering.ipynb))For this reason, efficient storage and manipulation of numerical arrays is absolutely fundamental to the process of doing data science.We'll now take a look at the specialized tools that Python has for handling such numerical arrays: the NumPy package, and the Pandas package (discussed in Chapter 3).This chapter will cover NumPy in detail. NumPy (short for *Numerical Python*) provides an efficient interface to store and operate on dense data buffers.In some ways, NumPy arrays are like Python's built-in ``list`` type, but NumPy arrays provide much more efficient storage and data operations as the arrays grow larger in size.NumPy arrays form the core of nearly the entire ecosystem of data science tools in Python, so time spent learning to use NumPy effectively will be valuable no matter what aspect of data science interests you.If you followed the advice outlined in the Preface and installed the Anaconda stack, you already have NumPy installed and ready to go.If you're more the do-it-yourself type, you can go to http://www.numpy.org/ and follow the installation instructions found there.Once you do, you can import NumPy and double-check the version:
###Code
import numpy
numpy.__version__
###Output
_____no_output_____
###Markdown
For the pieces of the package discussed here, I'd recommend NumPy version 1.8 or later.By convention, you'll find that most people in the SciPy/PyData world will import NumPy using ``np`` as an alias:
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Throughout this chapter, and indeed the rest of the book, you'll find that this is the way we will import and use NumPy. Reminder about Built In DocumentationAs you read through this chapter, don't forget that IPython gives you the ability to quickly explore the contents of a package (by using the tab-completion feature), as well as the documentation of various functions (using the ``?`` character – Refer back to [Help and Documentation in IPython](01.01-Help-And-Documentation.ipynb)).For example, to display all the contents of the numpy namespace, you can type this:```ipythonIn [3]: np.```And to display NumPy's built-in documentation, you can use this:```ipythonIn [4]: np?```More detailed documentation, along with tutorials and other resources, can be found at http://www.numpy.org. *This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).**The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!* The Basics of NumPy Arrays Data manipulation in Python is nearly synonymous with NumPy array manipulation: even newer tools like Pandas ([Chapter 3](03.00-Introduction-to-Pandas.ipynb)) are built around the NumPy array.This section will present several examples of using NumPy array manipulation to access data and subarrays, and to split, reshape, and join the arrays.While the types of operations shown here may seem a bit dry and pedantic, they comprise the building blocks of many other examples used throughout the book.Get to know them well!We'll cover a few categories of basic array manipulations here:- *Attributes of arrays*: Determining the size, shape, memory consumption, and data types of arrays- *Indexing of arrays*: Getting and setting the value of individual array elements- *Slicing of arrays*: Getting and setting smaller subarrays within a larger array- *Reshaping of arrays*: Changing the shape of a given array- *Joining and splitting of arrays*: Combining multiple arrays into one, and splitting one array into many NumPy Array Attributes First let's discuss some useful array attributes.We'll start by defining three random arrays, a one-dimensional, two-dimensional, and three-dimensional array.We'll use NumPy's random number generator, which we will *seed* with a set value in order to ensure that the same random arrays are generated each time this code is run:
###Code
import numpy as np
np.random.seed(0) # seed for reproducibility
x1 = np.random.randint(10, size=6) # One-dimensional array
x2 = np.random.randint(10, size=(3, 4)) # Two-dimensional array
x3 = np.random.randint(10, size=(3, 4, 5)) # Three-dimensional array
###Output
_____no_output_____
###Markdown
Each array has attributes ``ndim`` (the number of dimensions), ``shape`` (the size of each dimension), and ``size`` (the total size of the array):
###Code
print("x3 ndim: ", x3.ndim)
print("x3 shape:", x3.shape)
print("x3 size: ", x3.size)
###Output
x3 ndim: 3
x3 shape: (3, 4, 5)
x3 size: 60
###Markdown
Another useful attribute is the ``dtype``, the data type of the array (which we discussed previously in [Understanding Data Types in Python](02.01-Understanding-Data-Types.ipynb)):
###Code
print("dtype:", x3.dtype)
###Output
dtype: int64
###Markdown
Other attributes include ``itemsize``, which lists the size (in bytes) of each array element, and ``nbytes``, which lists the total size (in bytes) of the array:
###Code
print("itemsize:", x3.itemsize, "bytes")
print("nbytes:", x3.nbytes, "bytes")
###Output
itemsize: 8 bytes
nbytes: 480 bytes
###Markdown
In general, we expect that ``nbytes`` is equal to ``itemsize`` times ``size``. Array Indexing: Accessing Single Elements If you are familiar with Python's standard list indexing, indexing in NumPy will feel quite familiar.In a one-dimensional array, the $i^{th}$ value (counting from zero) can be accessed by specifying the desired index in square brackets, just as with Python lists:
###Code
x1
x1[0]
x1[4]
###Output
_____no_output_____
###Markdown
To index from the end of the array, you can use negative indices:
###Code
x1[-1]
x1[-2]
###Output
_____no_output_____
###Markdown
In a multi-dimensional array, items can be accessed using a comma-separated tuple of indices:
###Code
x2
x2[0, 0]
x2[2, 0]
x2[2, -1]
###Output
_____no_output_____
###Markdown
Values can also be modified using any of the above index notation:
###Code
x2[0, 0] = 12
x2
###Output
_____no_output_____
###Markdown
Keep in mind that, unlike Python lists, NumPy arrays have a fixed type.This means, for example, that if you attempt to insert a floating-point value to an integer array, the value will be silently truncated. Don't be caught unaware by this behavior!
###Code
x1[0] = 3.14159 # this will be truncated!
x1
###Output
_____no_output_____
###Markdown
Array Slicing: Accessing Subarrays Just as we can use square brackets to access individual array elements, we can also use them to access subarrays with the *slice* notation, marked by the colon (``:``) character.The NumPy slicing syntax follows that of the standard Python list; to access a slice of an array ``x``, use this:``` pythonx[start:stop:step]```If any of these are unspecified, they default to the values ``start=0``, ``stop=``*``size of dimension``*, ``step=1``.We'll take a look at accessing sub-arrays in one dimension and in multiple dimensions. One-dimensional subarrays
###Code
x = np.arange(10)
x
x[:5] # first five elements
x[5:] # elements after index 5
x[4:7] # middle sub-array
x[::2] # every other element
x[1::2] # every other element, starting at index 1
###Output
_____no_output_____
###Markdown
A potentially confusing case is when the ``step`` value is negative.In this case, the defaults for ``start`` and ``stop`` are swapped.This becomes a convenient way to reverse an array:
###Code
x[::-1] # all elements, reversed
x[5::-2] # reversed every other from index 5
###Output
_____no_output_____
###Markdown
Multi-dimensional subarraysMulti-dimensional slices work in the same way, with multiple slices separated by commas.For example:
###Code
x2
x2[:2, :3] # two rows, three columns
x2[:3, ::2] # all rows, every other column
###Output
_____no_output_____
###Markdown
Finally, subarray dimensions can even be reversed together:
###Code
x2[::-1, ::-1]
###Output
_____no_output_____
###Markdown
Accessing array rows and columnsOne commonly needed routine is accessing of single rows or columns of an array.This can be done by combining indexing and slicing, using an empty slice marked by a single colon (``:``):
###Code
print(x2[:, 0]) # first column of x2
print(x2[0, :]) # first row of x2
###Output
[12 5 2 4]
###Markdown
In the case of row access, the empty slice can be omitted for a more compact syntax:
###Code
print(x2[0]) # equivalent to x2[0, :]
###Output
[12 5 2 4]
###Markdown
Subarrays as no-copy viewsOne important–and extremely useful–thing to know about array slices is that they return *views* rather than *copies* of the array data.This is one area in which NumPy array slicing differs from Python list slicing: in lists, slices will be copies.Consider our two-dimensional array from before:
###Code
print(x2)
###Output
[[12 5 2 4]
[ 7 6 8 8]
[ 1 6 7 7]]
###Markdown
Let's extract a $2 \times 2$ subarray from this:
###Code
x2_sub = x2[:2, :2]
print(x2_sub)
###Output
[[12 5]
[ 7 6]]
###Markdown
Now if we modify this subarray, we'll see that the original array is changed! Observe:
###Code
x2_sub[0, 0] = 99
print(x2_sub)
print(x2)
###Output
[[99 5 2 4]
[ 7 6 8 8]
[ 1 6 7 7]]
###Markdown
This default behavior is actually quite useful: it means that when we work with large datasets, we can access and process pieces of these datasets without the need to copy the underlying data buffer. Creating copies of arraysDespite the nice features of array views, it is sometimes useful to instead explicitly copy the data within an array or a subarray. This can be most easily done with the ``copy()`` method:
###Code
x2_sub_copy = x2[:2, :2].copy()
print(x2_sub_copy)
###Output
[[99 5]
[ 7 6]]
###Markdown
If we now modify this subarray, the original array is not touched:
###Code
x2_sub_copy[0, 0] = 42
print(x2_sub_copy)
print(x2)
###Output
[[99 5 2 4]
[ 7 6 8 8]
[ 1 6 7 7]]
###Markdown
Reshaping of ArraysAnother useful type of operation is reshaping of arrays.The most flexible way of doing this is with the ``reshape`` method.For example, if you want to put the numbers 1 through 9 in a $3 \times 3$ grid, you can do the following:
###Code
grid = np.arange(1, 10).reshape((3, 3))
print(grid)
###Output
[[1 2 3]
[4 5 6]
[7 8 9]]
###Markdown
Note that for this to work, the size of the initial array must match the size of the reshaped array. Where possible, the ``reshape`` method will use a no-copy view of the initial array, but with non-contiguous memory buffers this is not always the case.Another common reshaping pattern is the conversion of a one-dimensional array into a two-dimensional row or column matrix.This can be done with the ``reshape`` method, or more easily done by making use of the ``newaxis`` keyword within a slice operation:
###Code
x = np.array([1, 2, 3])
# row vector via reshape
x.reshape((1, 3))
# row vector via newaxis
x[np.newaxis, :]
# column vector via reshape
x.reshape((3, 1))
# column vector via newaxis
x[:, np.newaxis]
###Output
_____no_output_____
###Markdown
We will see this type of transformation often throughout the remainder of the book. Array Concatenation and SplittingAll of the preceding routines worked on single arrays. It's also possible to combine multiple arrays into one, and to conversely split a single array into multiple arrays. We'll take a look at those operations here. Concatenation of arraysConcatenation, or joining of two arrays in NumPy, is primarily accomplished using the routines ``np.concatenate``, ``np.vstack``, and ``np.hstack``.``np.concatenate`` takes a tuple or list of arrays as its first argument, as we can see here:
###Code
x = np.array([1, 2, 3])
y = np.array([3, 2, 1])
np.concatenate([x, y])
###Output
_____no_output_____
###Markdown
You can also concatenate more than two arrays at once:
###Code
z = [99, 99, 99]
print(np.concatenate([x, y, z]))
###Output
[ 1 2 3 3 2 1 99 99 99]
###Markdown
It can also be used for two-dimensional arrays:
###Code
grid = np.array([[1, 2, 3],
[4, 5, 6]])
# concatenate along the first axis
np.concatenate([grid, grid])
# concatenate along the second axis (zero-indexed)
np.concatenate([grid, grid], axis=1)
###Output
_____no_output_____
###Markdown
For working with arrays of mixed dimensions, it can be clearer to use the ``np.vstack`` (vertical stack) and ``np.hstack`` (horizontal stack) functions:
###Code
x = np.array([1, 2, 3])
grid = np.array([[9, 8, 7],
[6, 5, 4]])
# vertically stack the arrays
np.vstack([x, grid])
# horizontally stack the arrays
y = np.array([[99],
[99]])
np.hstack([grid, y])
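# Added illustration (not part of the original text): np.dstack stacks along a third
# axis, so stacking this 2x3 `grid` with itself yields an array of shape (2, 3, 2).
np.dstack([grid, grid]).shape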
###Output
_____no_output_____
###Markdown
Similarly, ``np.dstack`` will stack arrays along the third axis. Splitting of arraysThe opposite of concatenation is splitting, which is implemented by the functions ``np.split``, ``np.hsplit``, and ``np.vsplit``. For each of these, we can pass a list of indices giving the split points:
###Code
x = [1, 2, 3, 99, 99, 3, 2, 1]
x1, x2, x3 = np.split(x, [3, 5])
print(x1, x2, x3)
###Output
[1 2 3] [99 99] [3 2 1]
###Markdown
Notice that *N* split points lead to *N + 1* subarrays.The related functions ``np.hsplit`` and ``np.vsplit`` are similar:
###Code
grid = np.arange(16).reshape((4, 4))
grid
upper, lower = np.vsplit(grid, [2])
print(upper)
print(lower)
left, right = np.hsplit(grid, [2])
print(left)
print(right)
###Output
[[ 0 1]
[ 4 5]
[ 8 9]
[12 13]]
[[ 2 3]
[ 6 7]
[10 11]
[14 15]]
|
02_datasets.ipynb | ###Markdown
Welcome to the Google Earth Engine (GEE) Python API! These notebooks will provide an overview of how to use the GEE python API and access all it has to offer. Notebook 2: Dataset Types There are a **ton** of datasets available for use on GEE. Check out the [GEE Catalog](https://developers.google.com/earth-engine/datasets/catalog) to get a sense of what's there. The datasets can be broken down into 3 main types: features, images, and collections. From the GEE website: - **Features** which are geometric objects with a list of properties. For example, a watershed with some properties such as name and area, is an ee.Feature.- **Images** which are like features, but may include several bands. For example, a satellite image is an ee.Image.- **Collections** which are groups of features or images. For example, the Global Administrative Unit Layers giving administrative boundaries is a ee.FeatureCollection and the MODIS Land Surface Temperature dataset is an ee.ImageCollection. We'll look into each type. 1. Features Features are geometric objects. Typically these are points or vector boundaries, such as a lat/lon location or a zip code. We can create a Feature simply by calling _ee.Feature_
###Code
import ee
import numpy as np
import geemap
ee.Initialize()
feat = ee.Feature(None)
###Output
_____no_output_____
###Markdown
This is a simple, empty feature, with the basic attributes (which are empty): type, geometries, and properties.You can add any properties you'd like:
###Code
feat = feat.set({'ID':'test1','name':'testing_feature_1'})
feat.getInfo()
###Output
_____no_output_____
###Markdown
* One key difference between the GEE python API and the online javascript workspace is that _getInfo()_ needs to be called here when printing info about a given feature/image. You'll find _getInfo()_ throughout this notebook. To create a feature with a point in mind, simply pass a lon/lat pair to _Point_...
###Code
point = ee.Feature(ee.Geometry.Point([-105.25, 40.015]),{'name':'Boulder'})
###Output
_____no_output_____
###Markdown
...or a repeating list of lon/lat pairs to _Polygon_ to create an area:
###Code
poly = ee.Feature(ee.Geometry.Polygon([-102.06, 41.01,-102.05,37.01,-109.05,37.00,-109.05,41,-102.06, 41.01],
proj='EPSG:4326',geodesic=False),{'name':'Colorado'})
poly.getInfo()
###Output
_____no_output_____
###Markdown
_proj_ defaults to the coordinate inputs, where numbers are assumed to be 'EPSG:4326'. _geodesic=False_ gives straight edges in a map projection, where _geodesic=True_ gives curved edges that follow the shortest path on the curved earth surface. Plotting will be covered more in the next notebook, but just to check our features:
###Code
Map = geemap.Map()
Map.centerObject(poly, 7)
Map.addLayer(poly, {}, "Colordao")
Map.addLayer(point, {'color':'red'}, "Boulder")
Map
###Output
_____no_output_____
###Markdown
2. Images Images are 2D datasets. Typically, these are rasters or satellite images. We can create an image similar to how we made a feature:
###Code
transparent = ee.Image()
transparent.getInfo()
###Output
_____no_output_____
###Markdown
This is a blank image with a single band. A multi-band image can be created by providing a list of constants:
###Code
orange = ee.Image([0xff, 0x88, 0x00])
orange.getInfo()
###Output
_____no_output_____
###Markdown
An image can also be read in from the GEE catalog by providing the product ID. For example, the JAXA 30m global DSM can be read by:
###Code
image = ee.Image('JAXA/ALOS/AW3D30/V2_2')
image.getInfo()
###Output
_____no_output_____
###Markdown
While there's a lot of info, it can be subset by specifying the attribute desired:
###Code
image.getInfo()['bands']
###Output
_____no_output_____
###Markdown
And can be added to a map by selecting a given band - in this case the height above sea level _AVE_DSM_
###Code
Map = geemap.Map(center=(40, -105), zoom=3)
Map.addLayer(image.select('AVE_DSM'), {'min': 0, 'max': 2000}, 'AVE_DSM');
Map
###Output
_____no_output_____
###Markdown
3. Collections Collections are groups of features or images. They provide an easy way to grab many features/images and organize based on whatever parameters you choose. You can create a collection by simply providing a list of features or images:
###Code
listOfFeatures = [
ee.Feature(ee.Geometry.Point(-62.54, -27.32), {'key': 'val1'}),
ee.Feature(ee.Geometry.Point(-69.18, -10.64), {'key': 'val2'}),
ee.Feature(ee.Geometry.Point(-45.98, -18.09), {'key': 'val3'})
]
FC = ee.FeatureCollection(listOfFeatures);
FC.getInfo()
img1 = ee.Image('COPERNICUS/S2_SR/20170328T083601_20170328T084228_T35RNK');
img2 = ee.Image('COPERNICUS/S2_SR/20170328T083601_20170328T084228_T35RNL');
img3 = ee.Image('COPERNICUS/S2_SR/20170328T083601_20170328T084228_T35RNM');
IC = ee.ImageCollection([img1, img2, img3])
IC.getInfo()
###Output
_____no_output_____
###Markdown
Most frequently, image collections are loaded in from the GEE catalog. This enables easier filtering. Below is an example using Landsat 8 data that shows loading in and filtering collections by applying spatial and temporal filters. 4. An example with Landsat 8 Here's a quick example of working with all three of the dataset types described above. First, import the Landsat 8 surface reflectance image collection and filter by a given day range and cloud cover
###Code
date_strt ='2021-05-01'
date_end = '2021-06-01'
landsat_collection_unfiltered = ee.ImageCollection("LANDSAT/LC08/C02/T2_L2")
landsat_collection = landsat_collection_unfiltered.filterDate(date_strt, date_end)
#print the size of the collection
print(str(landsat_collection.size().getInfo())+' images in filtered collection between '+date_strt+' and '+date_end)
###Output
5740 images in filtered collection between 2021-05-01 and 2021-06-01
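###Markdown
The cell above filters only by date, even though cloud cover is also mentioned. As an added, hedged sketch (the 20% threshold is purely illustrative), a metadata filter on the standard `CLOUD_COVER` image property could be chained on as well:
###Code
# Illustrative extra filter (not in the original notebook): keep scenes reporting < 20% cloud cover
landsat_clear = landsat_collection.filter(ee.Filter.lt('CLOUD_COVER', 20))
print(str(landsat_clear.size().getInfo())+' of these images also report less than 20% cloud cover')
###Output
_____no_output_____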
###Markdown
Next, we'll import a feature collection (TIGER 2018 US Census Counties) and filter the image collection by a given feature.
###Code
county_name = 'Boulder'
counties = ee.FeatureCollection('TIGER/2018/Counties')
#get boulder county
boulderCo = counties.filter(ee.Filter.eq("NAME", county_name))
#filter landsat images by Boulder County Bounds
landsat_boulder = landsat_collection.filterBounds(boulderCo)
print(str(landsat_boulder.size().getInfo())+' images in filtered collection between '+date_strt+' and '+date_end+' in '+county_name+' County.')
print(landsat_boulder.aggregate_array('LANDSAT_PRODUCT_ID').getInfo())
###Output
2 images in filtered collection between 2021-05-01 and 2021-06-01 in Boulder County.
['LC08_L2SP_033032_20210511_20210524_02_T2', 'LC08_L2SP_033033_20210511_20210524_02_T2']
###Markdown
Get the first image that fits our subset and plot it:
###Code
ls_img = landsat_boulder.first()
Map = geemap.Map(center=(40, -105), zoom=7)
Map.addLayer(ls_img.select('SR_B1'), {'min': 20000, 'max': 50000}, 'Landsat Band 1')
Map.addLayer(boulderCo,{'opacity':.5,'color':'red'},'County Boundary')
Map
###Output
_____no_output_____ |
Feature_Section_Regression.ipynb | ###Markdown
We will be using the **Superconductivity Data Set**. The goal here is to predict the critical temperature based on the features extracted. Data can be downloaded through this [link](https://archive.ics.uci.edu/ml/datasets/Superconductivty+Data)
###Code
df = pd.read_csv('train.csv')
df.head()
df.isna().sum().sum()
###Output
_____no_output_____
###Markdown
Data has no missing values throughout all the columns
###Code
df.describe()
df.shape
###Output
_____no_output_____
###Markdown
So the data has 81 features. Our goal is to reduce the number of features while retaining as much predictive power as possible.
###Code
# Organizing Train data
X = df[df.columns[:-1]]
y = df[df.columns[-1:]]
###Output
_____no_output_____
###Markdown
Dictionary of Regressors used in this tutorial
###Code
d = {'Linear Regression': LinearRegression(),
'Lasso Regression': LassoCV(),
'Decision Tree Regression': DecisionTreeRegressor(),
'ExtraTreesRegressor':ExtraTreesRegressor(),
'XGBRegressor':XGBRegressor(verbose=0,objective='reg:squarederror')
}
###Output
_____no_output_____
###Markdown
Use the regressors below if you need more precise results; we avoid them in this tutorial because they take a very long time to run- 'CatBoostRegressor':CatBoostRegressor(verbose=0) Dimensionality Reduction using various techniques 1. Univariate feature selection **Univariate feature selection** works by selecting the best features based on univariate statistical tests.***GenericUnivariateSelect*** allows you to perform univariate feature selection with a configurable strategy, which makes it possible to pick the best univariate selection strategy with a hyper-parameter search estimator.This function takes as input a scoring function that returns univariate scores and p-values.- **modes** : ‘percentile’ - removes all but a user-specified highest scoring percentage of features ‘k_best’ - removes all but the 'k' highest scoring features ‘fpr’ - false positive rate ‘fdr’ - false discovery rate ‘fwe’ - family wise error - **score_func** : For regression: f_regression, mutual_info_regression For classification: chi2, f_classif, mutual_info_classif
###Code
n_best_parameters = 20
trans = GenericUnivariateSelect(score_func=mutual_info_regression, mode='k_best', param=n_best_parameters)
trans.fit(X, y)
columns_retained_GUV = df.iloc[:, :-1].columns[trans.get_support()].values
columns_retained_GUV
X_trans = trans.transform(X)
pd.DataFrame(X_trans, columns=columns_retained_GUV).head()
###Output
_____no_output_____
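###Markdown
The other strategies listed above are used the same way. As an added sketch (the percentage is illustrative only), the 'percentile' mode keeps a top fraction of features instead of a fixed k:
###Code
# Illustrative variant (not in the original notebook): retain the top 25% of features
# ranked by the same mutual-information score
trans_pct = GenericUnivariateSelect(score_func=mutual_info_regression, mode='percentile', param=25)
X_pct = trans_pct.fit_transform(X, y)
print('Shape after keeping the top 25% of features: ', X_pct.shape)
###Output
_____no_output_____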
###Markdown
2. Backward Elimination using Statistical Significance- This method uses p-values for the elimination of features.- The significance level can be set using p_threshold.- We have used OLS (Ordinary Least Squares) regression (commonly known as Linear Regression) for finding the p-values
###Code
# Using P-value for for elemination
p_threshold = 0.05
cols = list(X.columns)
pmax = 1
while (len(cols)>0):
p= []
X_1 = X[cols]
X_1 = sm.add_constant(X_1)
model = sm.OLS(y,X_1).fit()
p = pd.Series(model.pvalues.values[1:],index = cols)
pmax = max(p)
feature_with_p_max = p.idxmax()
if(pmax>p_threshold):
cols.remove(feature_with_p_max)
else:
break
selected_features_BE = cols
print('number of features selected: ',len(selected_features_BE))
#print(selected_features_BE)
###Output
number of features selected: 69
['number_of_elements', 'mean_atomic_mass', 'wtd_mean_atomic_mass', 'gmean_atomic_mass', 'wtd_gmean_atomic_mass', 'entropy_atomic_mass', 'range_atomic_mass', 'std_atomic_mass', 'mean_fie', 'wtd_mean_fie', 'gmean_fie', 'wtd_gmean_fie', 'entropy_fie', 'wtd_entropy_fie', 'range_fie', 'wtd_range_fie', 'std_fie', 'mean_atomic_radius', 'wtd_mean_atomic_radius', 'wtd_gmean_atomic_radius', 'entropy_atomic_radius', 'wtd_entropy_atomic_radius', 'range_atomic_radius', 'wtd_range_atomic_radius', 'std_atomic_radius', 'wtd_std_atomic_radius', 'mean_Density', 'gmean_Density', 'wtd_gmean_Density', 'entropy_Density', 'wtd_entropy_Density', 'range_Density', 'std_Density', 'wtd_std_Density', 'mean_ElectronAffinity', 'wtd_mean_ElectronAffinity', 'gmean_ElectronAffinity', 'wtd_gmean_ElectronAffinity', 'wtd_entropy_ElectronAffinity', 'range_ElectronAffinity', 'wtd_range_ElectronAffinity', 'std_ElectronAffinity', 'wtd_std_ElectronAffinity', 'mean_FusionHeat', 'wtd_mean_FusionHeat', 'gmean_FusionHeat', 'wtd_gmean_FusionHeat', 'entropy_FusionHeat', 'wtd_entropy_FusionHeat', 'range_FusionHeat', 'wtd_range_FusionHeat', 'wtd_std_FusionHeat', 'mean_ThermalConductivity', 'wtd_mean_ThermalConductivity', 'gmean_ThermalConductivity', 'wtd_gmean_ThermalConductivity', 'entropy_ThermalConductivity', 'range_ThermalConductivity', 'wtd_range_ThermalConductivity', 'std_ThermalConductivity', 'mean_Valence', 'wtd_mean_Valence', 'gmean_Valence', 'wtd_gmean_Valence', 'entropy_Valence', 'wtd_entropy_Valence', 'range_Valence', 'std_Valence', 'wtd_std_Valence']
###Markdown
3. Model-based (Select-from-Model) SelectFromModel is a meta-transformer that can be used along with any estimator that has a coef_ or feature_importances_ attribute after fitting. The features are considered unimportant and removed if the corresponding coef_ or feature_importances_ values are below the provided threshold parameter. Apart from specifying the threshold numerically, there are built-in heuristics for finding a threshold using a string argument. Available heuristics are “mean”, “median” and float multiples of these like “0.1*mean”. Using 'median' as the threshold to remove features
###Code
importance_df = pd.DataFrame(columns=[k for k in d.keys()],index=df.columns[:-1])
for clf_name,classifier in d.items():
trans = SelectFromModel(classifier, threshold='median')
X_trans = trans.fit_transform(X, y)
classifier.fit(X, y)
try:
#print(classifier.feature_importances_)
importance_df[clf_name] = classifier.feature_importances_
except:
if len(classifier.coef_) == 1:
#print(classifier.coef_[0])
importance_df[clf_name] = classifier.coef_[0]
else:
#print(classifier.coef_)
importance_df[clf_name] = classifier.coef_
columns_retained = df.iloc[:,:-1].columns[trans.get_support()].values
#print('Columns Selected by ',clf_name,' are: [',','.join(columns_retained),']')
print('No of columns retained by',clf_name,': ',len(columns_retained))
print()
###Output
No of columns retained by Linear Regression : 41
No of columns retained by Lasso Regression : 81
No of columns retained by Decision Tree Regression : 41
No of columns retained by ExtraTreesRegressor : 41
No of columns retained by XGBRegressor : 41
###Markdown
Using 'mean' as threshold to remove the features
###Code
for clf_name,classifier in d.items():
trans = SelectFromModel(classifier, threshold='mean')
X_trans = trans.fit_transform(X, y)
classifier.fit(X, y)
try:
#print(classifier.feature_importances_)
importance_df[clf_name] = classifier.feature_importances_
except:
if len(classifier.coef_) == 1:
#print(classifier.coef_[0])
importance_df[clf_name] = classifier.coef_[0]
else:
#print(classifier.coef_)
importance_df[clf_name] = classifier.coef_
columns_retained = df.iloc[:,:-1].columns[trans.get_support()].values
#print('Columns Selected by ',clf_name,' are: [',','.join(columns_retained),']')
print('No of columns retained by',clf_name,': ',len(columns_retained))
print()
###Output
No of columns retained by Linear Regression : 18
No of columns retained by Lasso Regression : 7
No of columns retained by Decision Tree Regression : 12
No of columns retained by ExtraTreesRegressor : 15
No of columns retained by XGBRegressor : 13
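###Markdown
Scaled heuristics such as the "0.1*mean" string mentioned above are passed in the same way. A brief added sketch (the regressor and scaling factor are chosen only for illustration):
###Code
# Sketch (not in the original notebook): keep features whose importance exceeds 0.1 * the mean importance
trans_loose = SelectFromModel(ExtraTreesRegressor(), threshold='0.1*mean')
X_loose = trans_loose.fit_transform(X, y)
print('No of columns retained with threshold 0.1*mean: ', X_loose.shape[1])
###Output
_____no_output_____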
###Markdown
**importance_df** contains the feature importances of all the features, calculated using the 5 regressors
###Code
importance_df.head()
###Output
_____no_output_____
###Markdown
4. RFE (Recursive feature elimination) Given an external estimator that assigns weights to features (e.g., the coefficients of a linear model), the goal of recursive feature elimination (RFE) is to select features by recursively considering smaller and smaller sets of features. First, the estimator is trained on the initial set of features and the importance of each feature is obtained either through a coef_ attribute or through a feature_importances_ attribute. Then, the least important features are pruned from the current set of features.That procedure is recursively repeated on the pruned set until the desired number of features to select is eventually reached.The code below outputs the optimal number of features, which, when used by the regressor, gives the best score.
###Code
for clf_name, classifier in d.items():
high_score=0
nof=0
score_list =[]
for n in range(1,len(X.columns)+1):
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size = 0.3, random_state = 0)
model = classifier
rfe = RFE(model,n)
X_train_rfe = rfe.fit_transform(X_train,y_train)
X_test_rfe = rfe.transform(X_test)
model.fit(X_train_rfe,y_train)
score = model.score(X_test_rfe,y_test)
score_list.append(score)
if(score>high_score):
high_score = score
nof = n
print("Optimum number of features by ",clf_name,' : %d' %nof)
print("Score of",clf_name," with %d features: %f" % (nof, high_score))
###Output
Optimum number of features by Linear Regression : 80
Score of Linear Regression with 80 features: 0.738674
Optimum number of features by Lasso Regression : 8
Score of Lasso Regression with 8 features: 0.607689
Optimum number of features by Decision Tree Regression : 81
Score of Decision Tree Regression with 81 features: 0.878737
Optimum number of features by ExtraTreesRegressor : 69
Score of ExtraTreesRegressor with 69 features: 0.918793
###Markdown
We can use **RFE** to find the `n` most important features.*Change the `n_features_to_select` to the optimal number of features.*
###Code
n_features_to_select = 2
for clf_name, classifier in d.items():
trans = RFE(classifier, n_features_to_select)
X_trans = trans.fit_transform(X, y)
columns_retained_RFE = df.iloc[:, :-1].columns[trans.get_support()].values
#print('Columns Selected by ',clf_name,' are: [',','.join(columns_retained_RFE),']')
print("Optimum number of features by ",clf_name,' :', len(columns_retained_RFE))
print()
###Output
_____no_output_____
###Markdown
RFE-CV (Recursive feature elimination with Cross Validation)RFECV is similar to RFE but performs RFE in a cross-validation loop to find the optimal number of features.
###Code
for clf_name, classifier in d.items():
trans = RFECV(classifier)
X_trans = trans.fit_transform(X, y)
columns_retained_RFECV = df.iloc[:, :-1].columns[trans.get_support()].values
#print('Columns Selected by ',clf_name,' are: [',','.join(columns_retained_RFECV),']')
print("Optimum number of features by ",clf_name,' :', len(columns_retained_RFECV))
print()
###Output
Optimum number of features by Linear Regression : 77
Optimum number of features by Lasso Regression : 6
Optimum number of features by Decision Tree Regression : 24
Optimum number of features by ExtraTreesRegressor : 70
Optimum number of features by XGBRegressor : 58
###Markdown
5. Using feature_selector Feature selector is a tool for dimensionality reduction of machine learning datasets.Clone the repository through this [link](https://github.com/WillKoehrsen/feature-selector)Techniques available to identify features to remove:- Missing Values- Single Unique Values- Collinear Features- Zero Importance Features- Low Importance Features Create the feature_selector object for performing various types of feature selection techniques and plotting graphs
###Code
from feature_selector import FeatureSelector
fs = FeatureSelector(data = X,labels=y)
###Output
_____no_output_____
###Markdown
Finding Missing Values
###Code
fs.identify_missing(missing_threshold=0.2)
missing_features = fs.ops['missing']
missing_features
###Output
0 features with greater than 0.20 missing values.
###Markdown
Finding Features with Single Unique Values
###Code
# Single Unique Value Features
fs.identify_single_unique()
###Output
0 features with a single unique value.
###Markdown
Finding Collinear Features
###Code
fs.identify_collinear(correlation_threshold=0.95)
collinear_features = fs.ops['collinear']
fs.record_collinear.head()
###Output
23 features with a correlation magnitude greater than 0.95.
###Markdown
Plotting Correlation Heatmap
###Code
sns.set(rc={'figure.figsize':(15,10)})
sns.heatmap(fs.corr_matrix)
###Output
_____no_output_____
###Markdown
Finding Zero Importance Features
###Code
fs.identify_zero_importance(task='regression',eval_metric='rmse',n_iterations=5,early_stopping=True)
###Output
Training Gradient Boosting Model
Training until validation scores don't improve for 100 rounds.
Did not meet early stopping. Best iteration is:
[1000] valid_0's rmse: 9.38301 valid_0's l2: 88.0409
Training until validation scores don't improve for 100 rounds.
Did not meet early stopping. Best iteration is:
[1000] valid_0's rmse: 8.89715 valid_0's l2: 79.1593
Training until validation scores don't improve for 100 rounds.
Did not meet early stopping. Best iteration is:
[1000] valid_0's rmse: 9.68113 valid_0's l2: 93.7244
Training until validation scores don't improve for 100 rounds.
Did not meet early stopping. Best iteration is:
[1000] valid_0's rmse: 9.68775 valid_0's l2: 93.8525
Training until validation scores don't improve for 100 rounds.
Did not meet early stopping. Best iteration is:
[996] valid_0's rmse: 9.47357 valid_0's l2: 89.7485
0 features with zero importance after one-hot encoding.
###Markdown
Finding Low Importance Features cumulative_importance = the fraction of total feature importance that the retained features should account for; features not needed to reach this cumulative importance are flagged as low importance.
###Code
fs.identify_low_importance(cumulative_importance=0.95)
###Output
67 features required for cumulative importance of 0.95 after one hot encoding.
14 features do not contribute to cumulative importance of 0.95.
###Markdown
Plotting feature importances with highest importances
###Code
fs.plot_feature_importances()
feature_importances = pd.DataFrame(fs.feature_importances)
feature_importances.head(10)
###Output
_____no_output_____
###Markdown
Removing Features using the methods listed above. methods - 'all', 'missing', 'single_unique', 'collinear', 'zero_importance', 'low_importance'
###Code
train_removed = fs.remove(methods = ['missing', 'single_unique', 'collinear', 'zero_importance', 'low_importance'])
###Output
Removed 34 features.
###Markdown
We can even run all of them at once using the 'identify_all' function
###Code
fs.identify_all(selection_params = {'missing_threshold': 0.6,
'correlation_threshold': 0.9,
'task': 'regression',
'eval_metric': 'rmse',
'cumulative_importance': 0.99})
###Output
_____no_output_____
###Markdown
Analysis of feature selection In this analysis we have used the feature importance values gained from the RFE technique and the number of features selected by the RFE-CV technique. We then plot the model score vs the number of features selected, adding features in descending order of importance.
###Code
feature_list = []
scores_lr = []
Sorted_features = importance_df.sort_values(by='Linear Regression',ascending=False).index.values
for feature in Sorted_features:
feature_list.append(feature)
cls = LinearRegression()
X_train, X_test, y_train, y_test = train_test_split(X[feature_list], y, test_size=0.2, random_state=42)
cls.fit(X_train,y_train)
scores_lr.append(cls.score(X_test,y_test))
ax = sns.lineplot(x=list(range(1,len(scores_lr)+1)),y=scores_lr)
ax.set_title('Model score vs number of features using Linear Regression')
n = 77
plt.axvline(x=n,color = 'red')
ax.annotate('Score = '+str(scores_lr[n]), xy=(n,scores_lr[n] ), xytext=(n+2,scores_lr[n]-0.02),arrowprops=dict(facecolor='black', shrink=0.05),)
feature_list = []
scores_lasso = []
Sorted_features = importance_df.sort_values(by='Lasso Regression',ascending=False).index.values
for feature in Sorted_features:
feature_list.append(feature)
cls = LassoCV()
X_train, X_test, y_train, y_test = train_test_split(X[feature_list], y, test_size=0.2, random_state=42)
cls.fit(X_train,y_train)
scores_lasso.append(cls.score(X_test,y_test))
ax = sns.lineplot(x=list(range(1,len(scores_lasso)+1)),y=scores_lasso)
ax.set_title('Model score vs number of features using Lasso Regression')
n = 21
plt.axvline(x=n,color = 'red')
ax.annotate('Score = '+str(scores_lasso[n]), xy=(n,scores_lasso[n] ), xytext=(n+3,scores_lasso[n]-0.007),arrowprops=dict(facecolor='black', shrink=0.05),)
feature_list = []
scores_dtr = []
Sorted_features = importance_df.sort_values(by='Decision Tree Regression',ascending=False).index.values
for feature in Sorted_features:
feature_list.append(feature)
cls = DecisionTreeRegressor()
X_train, X_test, y_train, y_test = train_test_split(X[feature_list], y, test_size=0.2, random_state=42)
cls.fit(X_train,y_train)
scores_dtr.append(cls.score(X_test,y_test))
ax = sns.lineplot(x=list(range(1,len(scores_dtr)+1)),y=scores_dtr)
ax.set_title('Model score vs number of features using Decision Tree Regression')
n = 24
plt.axvline(x=n,color = 'red')
ax.annotate('Score = '+str(scores_dtr[n]), xy=(n,scores_dtr[n] ), xytext=(n+2,scores_dtr[n]-0.02),arrowprops=dict(facecolor='black', shrink=0.05),)
feature_list = []
scores_etr = []
Sorted_features = importance_df.sort_values(by='ExtraTreesRegressor',ascending=False).index.values
for feature in Sorted_features:
feature_list.append(feature)
cls = ExtraTreesRegressor()
X_train, X_test, y_train, y_test = train_test_split(X[feature_list], y, test_size=0.2, random_state=42)
cls.fit(X_train,y_train)
scores_etr.append(cls.score(X_test,y_test))
ax = sns.lineplot(x=list(range(1,len(scores_etr)+1)),y=scores_etr)
ax.set_title('Model score vs number of features using Extra Trees Regression')
n = 70
plt.axvline(x=n,color = 'red')
ax.annotate('Score = '+str(scores_etr[n]), xy=(n,scores_etr[n] ), xytext=(n+2,scores_etr[n]-0.02),arrowprops=dict(facecolor='black', shrink=0.05),)
feature_list = []
scores_xgb = []
Sorted_features = importance_df.sort_values(by='XGBRegressor',ascending=False).index.values
for feature in Sorted_features:
feature_list.append(feature)
cls = XGBRegressor(objective='reg:squarederror')
X_train, X_test, y_train, y_test = train_test_split(X[feature_list], y, test_size=0.2, random_state=42)
cls.fit(X_train,y_train)
scores_xgb.append(cls.score(X_test,y_test))
ax = sns.lineplot(x=list(range(1,len(scores_xgb)+1)),y=scores_xgb)
ax.set_title('Model score vs number of features using XGB regression')
n = 58
plt.axvline(x=n,color = 'red')
ax.annotate('Score = '+str(scores_xgb[n]), xy=(n,scores_xgb[n] ), xytext=(n+2,scores_xgb[n]-0.02),arrowprops=dict(facecolor='black', shrink=0.05),)
###Output
_____no_output_____ |
Kaggle/ House Prices: Advanced Regression Techniques/Simple_House_Prediction.ipynb | ###Markdown
###Code
from google.colab import drive
drive.mount('/content/drive')
#Import And Load Data
import numpy as np # For numerical fast numerical calculations
import matplotlib.pyplot as plt # For making plots
import pandas as pd # Deals with data
import seaborn as sns # Makes beautiful plots
from sklearn.preprocessing import StandardScaler # Testing sklearn
import tensorflow as tf # Imports tensorflow
from sklearn import metrics
import keras # Imports keras
from tensorflow.python.data import Dataset
import math
tf.logging.set_verbosity(tf.logging.ERROR)
Ames_House_data = pd.read_csv("/content/drive/My Drive/House Price Prediction/train.csv")
#Ames_House_data = Ames_House_data.reindex(
# np.random.permutation(Ames_House_data.index))
sns.scatterplot(x=Ames_House_data["OverallQual"],y=Ames_House_data["SalePrice"])
sns.scatterplot(x=Ames_House_data["GrLivArea"],y=Ames_House_data["SalePrice"])
sns.scatterplot(x=Ames_House_data["GarageCars"],y=Ames_House_data["SalePrice"])
sns.scatterplot(x=Ames_House_data["GarageArea"],y=Ames_House_data["SalePrice"])
sns.scatterplot(x=Ames_House_data["TotalBsmtSF"],y=Ames_House_data["SalePrice"])
sns.scatterplot(x=Ames_House_data["1stFlrSF"],y=Ames_House_data["SalePrice"])
def preprocess_features(Ames_House_data):
selected_features = Ames_House_data[
["LotArea"
]]
# Mean normalization: standardize log2(LotArea), with 13.1 and 0.7 acting as its approximate mean and spread
selected_features["LotArea"]=((np.log2(selected_features["LotArea"])-13.1)/(0.7))
processed_features = selected_features.copy()
return processed_features
#define output features
def preprocess_targets(Ames_House_data):
output_targets = pd.DataFrame()
output_targets["SalePrice"] = (
Ames_House_data["SalePrice"])
output_targets["SalePrice"] = (output_targets["SalePrice"]/100)
return output_targets
training = preprocess_features(Ames_House_data)
training_examples = training.head(1022)
training_examples.describe()
test_examples = training.tail(438)
training_targ = preprocess_targets(Ames_House_data)
training_targets =training_targ.head(1022)
test_targets=training_targ.tail(438)
training_targets.describe()
sns.scatterplot(x=training_examples["LotArea"],y=training_targets["SalePrice"],color="g")
sns.scatterplot(x=test_examples["LotArea"],y=test_targets["SalePrice"],color="g")
def construct_feature_columns(input_features):
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_model(
learning_rate,
steps,
batch_size,
training_examples,
training_targets):
periods = 10
steps_per_period = steps / periods
# Create a linear regressor object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
linear_regressor = tf.estimator.LinearRegressor(
feature_columns=construct_feature_columns(training_examples),
optimizer=my_optimizer
)
# Create input functions.
training_input_fn = lambda: my_input_fn(
training_examples,
training_targets["SalePrice"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(
training_examples,
training_targets["SalePrice"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
training_rmse = []
for period in range (0, periods):
# Train the model, starting from the prior state.
linear_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period,
)
# Take a break and compute predictions.
training_predictions = linear_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])
# Compute trainingloss.
training_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(training_predictions, training_targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, training_root_mean_squared_error))
# Add the loss metrics from this period to our list.
training_rmse.append(training_root_mean_squared_error)
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
plt.legend()
return linear_regressor
training_examples.describe()
#print(type(training_examples))
training_targets.describe()
#print(type(training_targets))
linear_regressor = train_model(
learning_rate=0.8,
steps=800,
batch_size=5,
training_examples=training_examples,
training_targets=training_targets)
def my_input_fn1(features, batch_size=1, shuffle=True, num_epochs=None):
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features= ds.make_one_shot_iterator().get_next()
return features
predict_test_input_fn = lambda: my_input_fn1(
test_examples,
# test_targets["SalePrice"],
num_epochs=1,
shuffle=False)
test_predictions = linear_regressor.predict(input_fn=predict_test_input_fn)
#print(type(test_predictions),list(test_predictions),type(list(test_predictions)))
test_predictions2 =np.array([item['predictions'][0] for item in test_predictions])
test_predictions2 =np.array([item['predictions'][0] for item in test_predictions])
x = np.array(test_examples["LotArea"])
y = np.array(test_targets["SalePrice"])
# #print(test_predictions)
# main =[]
# k=1461
# for i in range(len(test_predictions1)):
# l=[k+i,test_predictions1[i]]
# main.append(l)
df = pd.DataFrame(test_predictions2)
#sns.distplot(df[1], bins=10, kde=False)
#print(len(main))
# df = pd.DataFrame(main)
# df.to_csv('/content/drive/My Drive/House Price Prediction/submission2.csv', index=False)
#main_np = np.array(main)
#pd.DataFrame(main_np).to_csv("/content/drive/My Drive/House Price Prediction/submission.csv")
#print(main_np)
#root_mean_squared_error = math.sqrt(
#metrics.mean_squared_error(test_predictions, test_targets))
#print("Final RMSE (on test data): %0.2f" % root_mean_squared_error)
plt.plot(x, y, 'ro', label ='Original data')
plt.plot(x, test_predictions2, label ='Fitted line')
plt.title('Linear Regression Result')
plt.legend()
plt.show()
print(type(df))
sns.scatterplot(x=test_examples["LotArea"],y=test_predictions2)
Ames_House_test_data = pd.read_csv("/content/drive/My Drive/House Price Prediction/test.csv")
testf = preprocess_features(Ames_House_test_data)
#test_targets = preprocess_targets(Ames_House_test_data)
predict_test_input_fn = lambda: my_input_fn1(
testf,
# test_targets["SalePrice"],
num_epochs=1,
shuffle=False)
testpf = linear_regressor.predict(input_fn=predict_test_input_fn)
#print(type(test_predictions),list(test_predictions),type(list(test_predictions)))
testpf =([item['predictions'][0] for item in testpf])
#print(test_predictions)
main =[]
k=1461
for i in range(len(testpf)):
l=[k+i,testpf[i]*100]
main.append(l)
df = pd.DataFrame(main)
sns.distplot(df[1], bins=10, kde=False)
#print(len(main))
# df = pd.DataFrame(main)
#main_np = np.array(main)
#pd.DataFrame(main_np).to_csv("/content/drive/My Drive/House Price Prediction/submission.csv")
#print(main_np)
#root_mean_squared_error = math.sqrt(
#metrics.mean_squared_error(test_predictions, test_targets))
#print("Final RMSE (on test data): %0.2f" % root_mean_squared_error)
df.to_csv('/content/drive/My Drive/House Price Prediction/submission_simple1.csv', index=False)
###Output
_____no_output_____ |
StudentPerformance(minor_project).ipynb | ###Markdown
We do not have any null values in the database.
###Code
#display the type of data stored in the column
database.dtypes
###Output
_____no_output_____
###Markdown
Numerical Variables are Math score, Reading score and Writing score.Categorical Variables are Gender, Race/ethnicity, Parental level of education, Lunch and Test preparation course.
###Code
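# Added illustration (not in the original notebook): the numerical/categorical split
# described above can also be derived directly from the column dtypes
numerical_cols = database.select_dtypes(include='number').columns.tolist()
categorical_cols = database.select_dtypes(include='object').columns.tolist()
print('Numerical columns:', numerical_cols)
print('Categorical columns:', categorical_cols)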
# to display the count of gender
count=database['gender'].value_counts()
print(count)
#barplot to display the gender
database['gender'].value_counts().plot(kind='bar');
# to display the count of parental level of education
count=database['parental level of education'].value_counts()
print(count)
#barplot to display the parental level of education
database['parental level of education'].value_counts().plot(kind='bar');
#to display the count of race/ethnicity
count=database['race/ethnicity'].value_counts()
print(count)
#barplot to display race/ethnicity
database['race/ethnicity'].value_counts().plot(kind='bar')
#to display the count of lunch type
count=database['lunch'].value_counts()
print(count)
#barplot to display lunch type
database['lunch'].value_counts().plot(kind='bar')
#to display count of test preparation course completed or not
count=database['test preparation course'].value_counts()
print(count)
#barplot to display the test preparation course taken or not
database['test preparation course'].value_counts().plot(kind='bar')
###Output
none 642
completed 358
Name: test preparation course, dtype: int64
###Markdown
OBSERVATIONS:We have almost the same ratio of boys and girls in the database.Most of the parents have an education level of 'some college', followed by 'associate degree'.Group C has the highest number of students, followed by groups D, B, A and E respectively.Almost two-thirds of the students have a standard lunch compared to one-third who got a free/reduced lunch. Again, almost two-thirds of the students did not take any test preparation course.
###Code
database.describe()
###Output
_____no_output_____
###Markdown
Descriptive statistics of numerical variables such as total count, mean, standard deviation, minimum and maximum values and three quantiles of the data (25%,50%,75%) are shown above.
###Code
#adding new column named total i.e. sum of all the three subjects
database['total'] = database['math score'] + database['reading score'] + database['writing score']
#adding new column named average i.e. the average score obtained combining all three subjects
database['average'] = database['total'] / 3
#display top 5 rows
database.head(5)
#display bottom 5 rows
database.tail(5)
print("Minimum total score in the database is:",database.total.min())
print("Minimum average in the database is:",database.average.min())
print("Maximum total score in the database is:",database.total.max())
print("Maximum average in the database is:",database.average.max())
plt.figure(figsize=(20,5))
sns.countplot(database['math score'])
#set passing score to 40 and check the number of students passed in maths
database['Math_PassStatus'] = np.where(database['math score']<40, 'Fail', 'Pass')
count=database.Math_PassStatus.value_counts()
print(count)
database.Math_PassStatus.value_counts().plot(kind='bar')
plt.figure(figsize=(20,5))
sns.countplot(database['reading score'])
#set passing score to 40 and check the number of students passed in reading
database['Reading_PassStatus'] = np.where(database['reading score']<40, 'Fail', 'Pass')
count=database.Reading_PassStatus.value_counts()
print(count)
database.Reading_PassStatus.value_counts().plot(kind='bar')
plt.figure(figsize=(20,5))
sns.countplot(database['writing score'])
##set passing score to 40 and check the number of students passed in writing
database['Writing_PassStatus'] = np.where(database['writing score']<40, 'Fail', 'Pass')
count=database.Writing_PassStatus.value_counts()
print(count)
database.Writing_PassStatus.value_counts().plot(kind='bar')
sns.distplot(database['average'])
###Output
_____no_output_____
###Markdown
From this graph we can see that the average score across all three subjects mostly lies between 60 and 80.
###Code
# scores obtained based on gender
plt.figure(figsize=(4,4))
sns.barplot(database['gender'], database['math score'])
plt.show()
plt.figure(figsize=(4,4))
sns.barplot(database['gender'], database['reading score'])
plt.show()
plt.figure(figsize=(4,4))
sns.barplot(database['gender'], database['writing score'])
plt.show()
###Output
_____no_output_____
###Markdown
We can observe that boys scored better in maths than girls, while girls have an edge in the reading and writing tests.
###Code
# scores obtained based on race/ethnicity
plt.figure(figsize=(4,4))
sns.barplot(database['race/ethnicity'], database['math score'])
plt.show()
plt.figure(figsize=(4,4))
sns.barplot(database['race/ethnicity'], database['reading score'])
plt.show()
plt.figure(figsize=(4,4))
sns.barplot(database['race/ethnicity'], database['writing score'])
plt.show()
# Data to plot pie chart based on race/ethnicity
labels = 'group A', 'group B', 'group C', 'group D','group E'
sizes = database.groupby('race/ethnicity')['average'].mean().values
colors = ['orange', 'yellow','green', 'lightcoral', 'lightskyblue']
explode = (0, 0, 0, 0,0.1) # explode 1st slice
# Plot a pie chart of the average score for each race/ethnicity group
plt.pie(sizes, explode=explode, labels=labels, colors=colors,
autopct='%1.1f%%', shadow=True, startangle=140)
plt.title('Average for Every Race/Ethnicity Mean')
plt.axis('equal')
plt.show()
###Output
_____no_output_____
###Markdown
Group E students scored the highest marks in all three subjects, and group A students scored the lowest.
###Code
# scores obtained based on test preparation courses
plt.figure(figsize=(4,4))
sns.barplot(database['test preparation course'], database['math score'])
plt.show()
plt.figure(figsize=(4,4))
sns.barplot(database['test preparation course'], database['reading score'])
plt.show()
plt.figure(figsize=(4,4))
sns.barplot(database['test preparation course'], database['writing score'])
plt.show()
###Output
_____no_output_____
###Markdown
We can observe that students who have completed test preparation courses have scored better in all three subjects.
###Code
sns.countplot(x='gender', data = database, hue='test preparation course', palette='bright')
plt.show()
plt.figure(figsize=(10,5))
sns.countplot(x='race/ethnicity', data = database, hue='test preparation course', palette='bright')
plt.show()
sns.countplot(x='lunch', data = database, hue='test preparation course',palette='bright')
plt.show()
plt.figure(figsize=(10,5))
sns.countplot(x='parental level of education', data = database, hue='test preparation course',palette='bright')
plt.show()
###Output
_____no_output_____
###Markdown
- Most of the students have not completed the test preparation course.
- Of those who did complete it, the highest number belong to group C.
- More students with standard lunch completed the test preparation course than students with free/reduced lunch.
- More students whose parental level of education is 'some college', 'associate's degree', or 'some high school' completed the test preparation course.
###Code
# function to check overall pass status
database['OverAll_PassStatus'] = database.apply(lambda x : 'Fail' if x['Math_PassStatus'] == 'Fail' or
x['Reading_PassStatus'] == 'Fail' or x['Writing_PassStatus'] == 'Fail' else 'Pass', axis =1)
count=database.OverAll_PassStatus.value_counts()
print(count)
database['OverAll_PassStatus'].value_counts().plot(kind='bar');
###Output
Pass 949
Fail 51
Name: OverAll_PassStatus, dtype: int64
###Markdown
With the passing mark set to 40, only 51 students failed the exam.
###Code
# function to assign grades based on overall pass status
def GetGrade(average, OverAll_PassStatus):
if ( OverAll_PassStatus == 'Fail'):
return 'Fail'
if ( average >= 80 ):
return 'A'
if ( average >= 70):
return 'B'
if ( average >= 60):
return 'C'
if ( average >= 50):
return 'D'
if ( average >= 40):
return 'E'
else:
return 'F'
database['Grade'] = database.apply(lambda x : GetGrade(x['average'], x['OverAll_PassStatus']), axis=1)
count=database.Grade.value_counts()
print(count)
database['Grade'].value_counts().plot.pie(autopct="%1.1f%%")
plt.show()
###Output
B 261
C 256
A 198
D 178
E 56
Fail 51
Name: Grade, dtype: int64
###Markdown
Most students obtained grade B (26.1%), followed by grade C (25.6%) and grade A (19.8%).
###Code
sns.countplot(x='gender', data=database, hue='Grade', palette='pastel')
###Output
_____no_output_____
###Markdown
Female students obtained more A and B grades than male students.
###Code
plt.figure(figsize=(15,5))
sns.countplot(x='race/ethnicity', data=database, hue='Grade', palette='pastel')
###Output
_____no_output_____
###Markdown
Group C has the highest number of failed students, and group A has the worst grades.
###Code
plt.figure(figsize=(15,5))
sns.countplot(x='parental level of education', data=database, hue='Grade', palette='pastel')
###Output
_____no_output_____
###Markdown
Students whose parents have a master's degree did not fail the exam. Students whose parents have an associate's degree or some college education obtained better grades.
###Code
plt.figure(figsize=(15,5))
sns.countplot(x='Grade', data=database, hue='test preparation course', palette='pastel')
###Output
_____no_output_____
###Markdown
Most of the students who scored grade A completed the test preparation course. Other students still achieved good grades without taking it.
###Code
plt.figure(figsize=(15,5))
sns.barplot(x = "race/ethnicity", y = "math score", hue = "gender", data = database, palette='pastel')
###Output
_____no_output_____
###Markdown
Boys have scored better marks in maths than girls irrespective of their race/ethnicity.
###Code
plt.figure(figsize=(15,5))
sns.barplot(x = "race/ethnicity", y = "writing score", hue = "gender", data = database,palette='pastel')
plt.show()
plt.figure(figsize=(15,5))
sns.barplot(x = "race/ethnicity", y = "reading score", hue = "gender", data = database, palette='pastel')
plt.show()
###Output
_____no_output_____
###Markdown
Girls have scored better than boys in reading and writing irrespective of their race/ethnicity.
###Code
plt.figure(figsize=(15,5))
sns.barplot(x = "race/ethnicity", y = "average", hue = "gender", data = database, palette='pastel')
###Output
_____no_output_____
###Markdown
Overall, girls have scored better marks than boys in all the groups.
###Code
plt.figure(figsize=(15,5))
sns.barplot(x = "parental level of education", y = "math score", hue = "lunch", data = database, palette='pastel')
plt.show()
plt.figure(figsize=(15,5))
sns.barplot(x = "parental level of education", y = "writing score", hue = "lunch", data = database, palette='pastel')
plt.show()
plt.figure(figsize=(15,5))
sns.barplot(x = "parental level of education", y = "reading score", hue = "lunch", data = database, palette='pastel')
plt.show()
plt.figure(figsize=(15,5))
sns.barplot(x = "parental level of education", y = "average", hue = "lunch", data = database, palette='pastel')
plt.show()
###Output
_____no_output_____
###Markdown
Students who got standard lunch scored better marks irrespective of their parents' level of education.
###Code
plt.figure(figsize=(15,5))
sns.barplot(x = "test preparation course", y = "math score", hue = "Grade", data = database, palette='pastel')
plt.show()
plt.figure(figsize=(15,5))
sns.barplot(x = "test preparation course", y = "writing score", hue = "Grade", data = database, palette='pastel')
plt.show()
plt.figure(figsize=(15,5))
sns.barplot(x = "test preparation course", y = "reading score", hue = "Grade", data = database, palette='pastel')
plt.show()
plt.figure(figsize=(15,5))
sns.barplot(x = "test preparation course", y = "average", hue = "Grade", data = database, palette='pastel')
plt.show()
###Output
_____no_output_____ |
Project_1_RBM_and_Tomography/h2_energy_with_rnn.ipynb | ###Markdown
In the file `rnn_helper.py` we defined a recurrent neural network and the functions necessary to train it on the H2 data. Below we show the results of training multiple RNNs on new values of R.
###Code
rnn_helper = RNNHelper(epochs=9, verbose=True)
rnn_helper.iterate_over_r()
###Output
_____no_output_____ |
lightautoml/google_colab_sberbank_lightautoml_demo.ipynb | ###Markdown
**Sberbank LightAutoML (LAMA)** *The code in this notebook is borrowed from the library's official repository: https://github.com/sberbank-ai-lab/LightAutoML* Install LightAutoML
###Code
#! pip install -U lightautoml
###Output
_____no_output_____
###Markdown
Import necessary libraries
###Code
# Standard python libraries
import logging
import os
import time
import requests
logging.basicConfig(format='[%(asctime)s] (%(levelname)s): %(message)s', level=logging.INFO)
# Installed libraries
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
import torch
# Imports from our package
from lightautoml.automl.presets.tabular_presets import TabularAutoML, TabularUtilizedAutoML
from lightautoml.dataset.roles import DatetimeRole
from lightautoml.tasks import Task
###Output
[2021-07-20 15:13:32,248] (WARNING): /usr/local/lib/python3.7/dist-packages/gensim/similarities/__init__.py:15: UserWarning: The gensim.similarities.levenshtein submodule is disabled, because the optional Levenshtein package <https://pypi.org/project/python-Levenshtein/> is unavailable. Install Levenhstein (e.g. `pip install python-Levenshtein`) to suppress this warning.
warnings.warn(msg)
###Markdown
Parameters
###Code
N_THREADS = 8 # threads cnt for lgbm and linear models
N_FOLDS = 5 # folds cnt for AutoML
RANDOM_STATE = 42 # fixed random state for various reasons
TEST_SIZE = 0.2 # Test size for metric check
TIMEOUT = 60 # Time in seconds for automl run
TARGET_NAME = 'TARGET' # Target column name
###Output
_____no_output_____
###Markdown
Fix torch number of threads and numpy seed
###Code
np.random.seed(RANDOM_STATE)
torch.set_num_threads(N_THREADS)
###Output
_____no_output_____
###Markdown
Example data load
###Code
DATASET_DIR = './example_data/test_data_files'
DATASET_NAME = 'sampled_app_train.csv'
DATASET_FULLNAME = os.path.join(DATASET_DIR, DATASET_NAME)
DATASET_URL = 'https://raw.githubusercontent.com/sberbank-ai-lab/LightAutoML/master/example_data/test_data_files/sampled_app_train.csv'
%%time
if not os.path.exists(DATASET_FULLNAME):
os.makedirs(DATASET_DIR, exist_ok=True)
dataset = requests.get(DATASET_URL).text
with open(DATASET_FULLNAME, 'w') as output:
output.write(dataset)
%%time
data = pd.read_csv(DATASET_FULLNAME)
data.head()
###Output
_____no_output_____
###Markdown
Some user feature preparation
###Code
%%time
data['BIRTH_DATE'] = (np.datetime64('2018-01-01') + data['DAYS_BIRTH'].astype(np.dtype('timedelta64[D]'))).astype(str)
data['EMP_DATE'] = (np.datetime64('2018-01-01') + np.clip(data['DAYS_EMPLOYED'], None, 0).astype(np.dtype('timedelta64[D]'))
).astype(str)
data['constant'] = 1
data['allnan'] = np.nan
data['report_dt'] = np.datetime64('2018-01-01')
data.drop(['DAYS_BIRTH', 'DAYS_EMPLOYED'], axis=1, inplace=True)
###Output
[2021-07-20 15:13:34,761] (INFO): NumExpr defaulting to 2 threads.
###Markdown
Data splitting for train-test
###Code
%%time
train_data, test_data = train_test_split(data,
test_size=TEST_SIZE,
stratify=data[TARGET_NAME],
random_state=RANDOM_STATE)
logging.info('Data splitted. Parts sizes: train_data = {}, test_data = {}'
.format(train_data.shape, test_data.shape))
train_data.head()
###Output
_____no_output_____
###Markdown
========= AutoML preset usage ========= Create Task
###Code
%%time
task = Task('binary', )
###Output
CPU times: user 3.96 ms, sys: 0 ns, total: 3.96 ms
Wall time: 4.08 ms
###Markdown
Set up column roles
###Code
%%time
roles = {'target': TARGET_NAME,
DatetimeRole(base_date=True, seasonality=(), base_feats=False): 'report_dt',
}
###Output
CPU times: user 51 µs, sys: 11 µs, total: 62 µs
Wall time: 67.5 µs
###Markdown
Create AutoML from preset
###Code
%%time
automl = TabularAutoML(task = task,
timeout = TIMEOUT,
cpu_limit = N_THREADS,
reader_params = {'n_jobs': N_THREADS, 'cv': N_FOLDS, 'random_state': RANDOM_STATE},
)
oof_pred = automl.fit_predict(train_data, roles = roles)
logging.info('oof_pred:\n{}\nShape = {}'.format(oof_pred, oof_pred.shape))
%%time
# Fast feature importances calculation
fast_fi = automl.get_feature_scores('fast')
fast_fi.set_index('Feature')['Importance'].plot.bar(figsize = (20, 10), grid = True)
%%time
# Accurate feature importances calculation (Permutation importances) - can take long time to calculate
accurate_fi = automl.get_feature_scores('accurate', test_data, silent = False)
accurate_fi.set_index('Feature')['Importance'].plot.bar(figsize = (20, 10), grid = True)
###Output
LightAutoML used 111 feats
1/111 Calculated score for FLOORSMIN_AVG: 0.0002140
2/111 Calculated score for LIVINGAPARTMENTS_MEDI: -0.0007269
3/111 Calculated score for FLAG_DOCUMENT_13: 0.0008186
4/111 Calculated score for BASEMENTAREA_AVG: 0.0001495
5/111 Calculated score for NAME_EDUCATION_TYPE: 0.0003465
6/111 Calculated score for FLAG_DOCUMENT_3: 0.0006182
7/111 Calculated score for NAME_INCOME_TYPE: 0.0006216
8/111 Calculated score for ENTRANCES_MEDI: 0.0004110
9/111 Calculated score for LANDAREA_AVG: 0.0000849
10/111 Calculated score for WEEKDAY_APPR_PROCESS_START: 0.0014130
11/111 Calculated score for REG_REGION_NOT_LIVE_REGION: -0.0000102
12/111 Calculated score for DAYS_ID_PUBLISH: 0.0006454
13/111 Calculated score for NAME_TYPE_SUITE: -0.0000951
14/111 Calculated score for FLAG_DOCUMENT_8: -0.0007235
15/111 Calculated score for ELEVATORS_MODE: 0.0005774
16/111 Calculated score for LIVINGAREA_MEDI: -0.0000917
17/111 Calculated score for YEARS_BUILD_MODE: 0.0002955
18/111 Calculated score for YEARS_BEGINEXPLUATATION_AVG: 0.0003329
19/111 Calculated score for NONLIVINGAREA_MEDI: -0.0003940
20/111 Calculated score for AMT_CREDIT: 0.0008050
21/111 Calculated score for HOUR_APPR_PROCESS_START: -0.0000951
22/111 Calculated score for NAME_CONTRACT_TYPE: 0.0013179
23/111 Calculated score for FLOORSMIN_MODE: 0.0001902
24/111 Calculated score for YEARS_BEGINEXPLUATATION_MEDI: -0.0010054
25/111 Calculated score for FLAG_OWN_REALTY: -0.0003057
26/111 Calculated score for FLAG_DOCUMENT_6: 0.0000068
27/111 Calculated score for FLAG_DOCUMENT_16: -0.0015115
28/111 Calculated score for FLOORSMAX_MODE: -0.0002819
29/111 Calculated score for FLAG_PHONE: 0.0000917
30/111 Calculated score for ENTRANCES_AVG: -0.0002649
31/111 Calculated score for ORGANIZATION_TYPE: -0.0020856
32/111 Calculated score for FLOORSMIN_MEDI: -0.0000679
33/111 Calculated score for LIVE_REGION_NOT_WORK_REGION: 0.0002717
34/111 Calculated score for EMERGENCYSTATE_MODE: -0.0012704
35/111 Calculated score for FLAG_OWN_CAR: 0.0014572
36/111 Calculated score for REG_CITY_NOT_WORK_CITY: 0.0000951
37/111 Calculated score for LIVE_CITY_NOT_WORK_CITY: -0.0006386
38/111 Calculated score for NONLIVINGAPARTMENTS_MODE: -0.0001970
39/111 Calculated score for AMT_ANNUITY: 0.0104178
40/111 Calculated score for CNT_FAM_MEMBERS: -0.0003091
41/111 Calculated score for DAYS_REGISTRATION: 0.0012160
42/111 Calculated score for APARTMENTS_MEDI: -0.0008424
43/111 Calculated score for AMT_REQ_CREDIT_BUREAU_WEEK: -0.0000034
44/111 Calculated score for CNT_CHILDREN: -0.0000136
45/111 Calculated score for EXT_SOURCE_1: 0.0088349
46/111 Calculated score for NONLIVINGAREA_MODE: 0.0002038
47/111 Calculated score for LIVINGAREA_AVG: -0.0000713
48/111 Calculated score for FLAG_DOCUMENT_5: -0.0001053
49/111 Calculated score for FLAG_DOCUMENT_11: -0.0004416
50/111 Calculated score for NONLIVINGAPARTMENTS_MEDI: 0.0001257
51/111 Calculated score for NAME_HOUSING_TYPE: 0.0023573
52/111 Calculated score for FLAG_EMP_PHONE: -0.0006793
53/111 Calculated score for BASEMENTAREA_MODE: -0.0000510
54/111 Calculated score for NAME_FAMILY_STATUS: -0.0026630
55/111 Calculated score for COMMONAREA_AVG: -0.0042833
56/111 Calculated score for AMT_REQ_CREDIT_BUREAU_MON: 0.0027344
57/111 Calculated score for FLOORSMAX_MEDI: -0.0000611
58/111 Calculated score for CODE_GENDER: 0.0026834
59/111 Calculated score for REG_REGION_NOT_WORK_REGION: 0.0000476
60/111 Calculated score for ELEVATORS_AVG: -0.0007167
61/111 Calculated score for FLAG_DOCUMENT_14: 0.0002276
62/111 Calculated score for FLAG_DOCUMENT_9: 0.0006012
63/111 Calculated score for HOUSETYPE_MODE: 0.0003363
64/111 Calculated score for FLAG_EMAIL: -0.0009783
65/111 Calculated score for NONLIVINGAREA_AVG: 0.0002106
66/111 Calculated score for ENTRANCES_MODE: -0.0003125
67/111 Calculated score for OBS_30_CNT_SOCIAL_CIRCLE: 0.0001291
68/111 Calculated score for DEF_60_CNT_SOCIAL_CIRCLE: -0.0015999
69/111 Calculated score for FONDKAPREMONT_MODE: -0.0008696
70/111 Calculated score for COMMONAREA_MEDI: -0.0004042
71/111 Calculated score for TOTALAREA_MODE: 0.0012466
72/111 Calculated score for DEF_30_CNT_SOCIAL_CIRCLE: 0.0018852
73/111 Calculated score for APARTMENTS_AVG: -0.0003804
74/111 Calculated score for EXT_SOURCE_2: 0.0522011
75/111 Calculated score for BIRTH_DATE: -0.0036311
76/111 Calculated score for NONLIVINGAPARTMENTS_AVG: -0.0003227
77/111 Calculated score for EMP_DATE: -0.0025951
78/111 Calculated score for OWN_CAR_AGE: -0.0009375
79/111 Calculated score for YEARS_BUILD_AVG: 0.0008764
80/111 Calculated score for YEARS_BEGINEXPLUATATION_MODE: 0.0000068
81/111 Calculated score for LANDAREA_MODE: 0.0005774
82/111 Calculated score for LIVINGAREA_MODE: 0.0005027
83/111 Calculated score for FLOORSMAX_AVG: -0.0016338
84/111 Calculated score for BASEMENTAREA_MEDI: -0.0001461
85/111 Calculated score for LANDAREA_MEDI: -0.0010326
86/111 Calculated score for YEARS_BUILD_MEDI: -0.0007303
87/111 Calculated score for report_dt: 0.0000000
88/111 Calculated score for FLAG_CONT_MOBILE: -0.0000781
89/111 Calculated score for EXT_SOURCE_3: 0.0467425
90/111 Calculated score for REGION_POPULATION_RELATIVE: -0.0014266
91/111 Calculated score for COMMONAREA_MODE: -0.0015489
92/111 Calculated score for AMT_REQ_CREDIT_BUREAU_HOUR: -0.0011549
93/111 Calculated score for LIVINGAPARTMENTS_AVG: 0.0002344
94/111 Calculated score for REGION_RATING_CLIENT_W_CITY: 0.0020380
95/111 Calculated score for AMT_GOODS_PRICE: 0.0027446
96/111 Calculated score for REGION_RATING_CLIENT: 0.0010530
97/111 Calculated score for AMT_INCOME_TOTAL: 0.0004416
98/111 Calculated score for APARTMENTS_MODE: -0.0009986
99/111 Calculated score for REG_CITY_NOT_LIVE_CITY: 0.0007575
100/111 Calculated score for LIVINGAPARTMENTS_MODE: -0.0008798
101/111 Calculated score for OCCUPATION_TYPE: -0.0006861
102/111 Calculated score for DAYS_LAST_PHONE_CHANGE: -0.0010598
103/111 Calculated score for AMT_REQ_CREDIT_BUREAU_DAY: -0.0000068
104/111 Calculated score for ELEVATORS_MEDI: -0.0003651
105/111 Calculated score for FLAG_DOCUMENT_18: -0.0000374
106/111 Calculated score for OBS_60_CNT_SOCIAL_CIRCLE: 0.0011141
107/111 Calculated score for SK_ID_CURR: -0.0011175
108/111 Calculated score for AMT_REQ_CREDIT_BUREAU_YEAR: 0.0029008
109/111 Calculated score for AMT_REQ_CREDIT_BUREAU_QRT: -0.0012704
110/111 Calculated score for WALLSMATERIAL_MODE: 0.0001257
111/111 Calculated score for FLAG_WORK_PHONE: 0.0002072
CPU times: user 1min 2s, sys: 675 ms, total: 1min 3s
Wall time: 55.5 s
###Markdown
Predict on the test data and check scores
###Code
%%time
test_pred = automl.predict(test_data)
logging.info('Prediction for test data:\n{}\nShape = {}'
.format(test_pred, test_pred.shape))
logging.info('Check scores...')
# logging.info('OOF score: {}'.format(roc_auc_score(train_data[TARGET_NAME].values, oof_pred.data[:, 0])))
logging.info('TEST score: {}'.format(roc_auc_score(test_data[TARGET_NAME].values, test_pred.data[:, 0])))
oof_pred.data[:,0]
###Output
_____no_output_____ |
Notebook-Class-exercises/.ipynb_checkpoints/Hello_world-checkpoint.ipynb | ###Markdown
Hello World
###Code
import os
import pandas as pd
your_name = "Arvind Sathi"
print("Hello ", your_name)
current_dir = os.getcwd()
current_dir
###Output
_____no_output_____ |
notebooks/1.0-rec-initial-data-cleaning.ipynb | ###Markdown
Initial Data Cleaning
Purpose: This notebook provides the initial data cleaning and data exploration for the Christmas Bird Count project. We also limit the scope to just circles in the USA.
Author: Jeff Hale Date: 2019-05-29 Update Date: 2020-03-27
Inputs: Raw Christmas Bird Count data from the Audubon Society. Example: cbc_effort_weather_1900-2018.txt - a tab-separated file of Christmas Bird Count events going back to 1900. Each row represents a single count in a given year. The data dictionary can be found here: http://www.audubon.org/sites/default/files/documents/cbc_report_field_definitions_2013.pdf Data is saved in this folder: https://drive.google.com/drive/folders/1Nlj9Nq-_dPFTDbrSDf94XMritWYG6E2I
Output Files: 1.0-rec-initial-data-cleaning.txt - a tab-separated file that has been cleaned, with the scope limited to CBC circles in the United States. The file will be saved in the Google Drive folder: https://drive.google.com/drive/folders/1Nlj9Nq-_dPFTDbrSDf94XMritWYG6E2I
Steps or Procedures in the notebook:
- Load data from the Audubon Society
- Drop the test sites
- Explore the shape and contents of the data
- Data metric conversions for temperature, snow, and wind
- Impossible value removal
- Limit the data to circles in the USA
Where the Data will Be Saved: All data for this project will be saved in Google Drive. To start experimenting with data, download the folder here and put it into your data folder: https://drive.google.com/drive/folders/1Nlj9Nq-_dPFTDbrSDf94XMritWYG6E2I The path should look like this: audubon-cbc/data/Cloud_Data/ See data dictionary: http://www.audubon.org/sites/default/files/documents/cbc_report_field_definitions_2013.pdf
###Code
# Imports
import numpy as np
import pandas as pd
import plotly_express as px
import matplotlib.pyplot as plt
import seaborn as sns
import gcsfs
pd.set_option('display.max_columns', 500)
###Output
_____no_output_____
###Markdown
Set Global Variables
###Code
# ALL File Paths should be declared at the TOP of the notebook
PATH_TO_RAW_CBC_DATA = "../data/Cloud_Data/cbc_effort_weather_1900-2018.txt"
raw_data = pd.read_csv(PATH_TO_RAW_CBC_DATA, encoding = "ISO-8859-1", sep="\t")
raw_data.head()
raw_data.tail()
len(raw_data)
###Output
_____no_output_____
###Markdown
Drop the test sites
###Code
raw_data = raw_data.drop(raw_data[raw_data["circle_name"].str.contains("do not")].index)
###Output
_____no_output_____
###Markdown
Explore the Shape and Contents of the Data
###Code
raw_data.shape
raw_data.info()
raw_data.describe(include = 'all')
raw_data.isnull().sum()
#What percentage are null
pd.DataFrame((raw_data.isnull().sum())/len(raw_data) * 100).sort_values(by = 0, ascending = False)
#What types are the different variables
raw_data.dtypes.sort_values()
###Output
_____no_output_____
###Markdown
__N Field Counters__
###Code
raw_data['n_field_counters'].describe()
(raw_data['n_field_counters'].isnull().sum()) / len(raw_data) * 100
raw_data['n_field_counters'].hist(bins = 50);
raw_data.loc[raw_data['n_field_counters'] < 100].shape[0] / len(raw_data)
###Output
_____no_output_____
###Markdown
__Count Year__
###Code
raw_data['count_year'].describe()
(raw_data['count_year'].isnull().sum())/len(raw_data) * 100
raw_data.min()
raw_data.max()
###Output
_____no_output_____
###Markdown
Make a new dataframe named df that we will add columns to.
###Code
# Note: this is another reference to the same DataFrame, not an independent copy;
# use raw_data.copy() if a separate copy is needed.
df = raw_data
###Output
_____no_output_____
###Markdown
Data Metric Conversions. We will make two columns for each quantity: one metric (SI) and one imperial. Key for `distance_units`: miles = 1, inches = 2, kilometers = 3, centimeters = 4.
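The same `np.where` pattern is repeated for every column pair below. For reference, it could be wrapped in a small helper like the sketch here (illustrative only, not the notebook's code; the helper name is ours and it reuses the 0.6214 miles-per-kilometre factor used in the conversions below):

```python
# Hypothetical helper: build imperial/metric column pairs from a value column
# and its unit column, keeping values already in the target unit unchanged.
MILES_PER_KM = 0.6214

def add_unit_columns(data, col, unit_col, imperial_label='Miles', metric_label='Kilometers'):
    data[col + '_imperial'] = np.where(data[unit_col] == imperial_label,
                                       data[col], data[col] * MILES_PER_KM)
    data[col + '_metric'] = np.where(data[unit_col] == metric_label,
                                     data[col], data[col] / MILES_PER_KM)
```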
###Code
distance_cols = ['field_distance', 'nocturnal_distance']
df.distance_units.value_counts()
df['field_distance_imperial'] = np.where(df['distance_units']=='Miles', df['field_distance'], (df['field_distance'] * .6214))
df['field_distance_metric'] = np.where(df['distance_units']=='Kilometers', df['field_distance'], (df['field_distance'] / .6214))
df['nocturnal_distance_imperial'] = np.where(df['distance_units']=='Miles', df['nocturnal_distance'], (df['nocturnal_distance'] * .6214))
df['nocturnal_distance_metric'] = np.where(df['distance_units']=='Kilometers', df['nocturnal_distance'], (df['nocturnal_distance'] / .6214))
df.head()
df.tail()
df.nocturnal_distance_imperial.value_counts().head()
df.field_distance_imperial.value_counts().head()
df.field_distance_metric.value_counts().head()
df.nocturnal_distance_metric.value_counts().head()
###Output
_____no_output_____
###Markdown
Convert snow. Key for `snow_unit`: 2 = inches, 4 = centimeters.
###Code
snow_cols = ['min_snow', 'max_snow']
df.snow_unit.value_counts()
df['min_snow_imperial'] = np.where(df['snow_unit']==2, df['min_snow'], (df['min_snow'] / 2.54))
df.head()
df.tail()
df['min_snow_metric'] = np.where(df['snow_unit']==4, df['min_snow'], (df['min_snow'] * 2.54))
df.tail()
df['min_snow_metric'].value_counts().sort_values(ascending=False).head()
df['min_snow_imperial'].value_counts().sort_values(ascending=False).head()
df['max_snow_metric'] = np.where(df['snow_unit']==4, df['max_snow'], (df['max_snow'] * 2.54))
df['max_snow_imperial'] = np.where(df['snow_unit']==2, df['max_snow'], (df['max_snow'] / 2.54))
df.loc[:, ['max_snow_imperial', 'max_snow_metric']].describe()
df.max_snow_imperial.value_counts().sort_values(ascending=False).head()
df.max_snow_imperial.plot(kind='hist', bins = 100, range=[200, 2000])
###Output
_____no_output_____
###Markdown
Convert temperatures. Key for `temp_unit`: 1 = Celsius, 2 = Fahrenheit.
###Code
temp_cols = ['min_temp', 'max_temp']
# Celsius -> Fahrenheit: multiply by 9/5, then add 32
df['min_temp_imperial'] = np.where(df['temp_unit']==2, df['min_temp'], df['min_temp']*9/5 + 32)
df.head()
df['max_temp_imperial'] = np.where(df['temp_unit']==2, df['max_temp'], df['max_temp']*9/5 + 32)
df.tail()
df['min_temp_metric'] = np.where(df['temp_unit']==1, df['min_temp'], (df['min_temp']-32)*5/9)
df['max_temp_metric'] = np.where(df['temp_unit']==1, df['max_temp'], (df['max_temp']-32)*5/9)
df[df.loc[:, 'temp_unit']==2].head()
###Output
_____no_output_____
###Markdown
Convert wind
###Code
df['max_wind'].value_counts().sort_values(ascending=False).head()
df["wind_unit"].value_counts()
###Output
_____no_output_____
###Markdown
Key for `wind_unit`: 1 = mph, 3 = km/h.
###Code
df['min_wind_metric'] = np.where(df['wind_unit']==3, df['min_wind'], (df['min_wind'] /.6214))
df['max_wind_metric'] = np.where(df['wind_unit']==3, df['max_wind'], (df['max_wind'] / .6214))
df['min_wind_imperial'] = np.where(df['wind_unit']==1, df['min_wind'], (df['min_wind'] * .6214))
df['max_wind_imperial'] = np.where(df['wind_unit']==1, df['max_wind'], (df['max_wind'] * .6214))
df.min_wind_metric.value_counts().head(10)
df.min_wind_imperial.value_counts().head(10)
###Output
_____no_output_____
###Markdown
Note that due to rounding, the km-to-miles conversions aren't exact. List all columns:
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 106925 entries, 0 to 106928
Data columns (total 47 columns):
circle_name 106925 non-null object
country_state 106925 non-null object
lat 106925 non-null float64
lon 106925 non-null float64
count_year 106925 non-null int64
count_date 106925 non-null object
n_field_counters 106707 non-null float64
n_feeder_counters 50575 non-null float64
min_field_parties 56210 non-null float64
max_field_parties 57093 non-null float64
field_hours 97024 non-null float64
feeder_hours 61910 non-null float64
nocturnal_hours 58526 non-null float64
field_distance 98802 non-null float64
nocturnal_distance 53193 non-null float64
distance_units 106597 non-null object
min_temp 82436 non-null float64
max_temp 82420 non-null float64
temp_unit 57026 non-null float64
min_wind 80027 non-null float64
max_wind 80109 non-null float64
wind_unit 57026 non-null float64
min_snow 76936 non-null float64
max_snow 77165 non-null float64
snow_unit 53379 non-null float64
am_cloud 82396 non-null float64
pm_cloud 82268 non-null float64
am_rain 81682 non-null object
pm_rain 81582 non-null object
am_snow 81484 non-null object
pm_snow 81410 non-null object
field_distance_imperial 98802 non-null float64
field_distance_metric 98802 non-null float64
nocturnal_distance_imperial 53193 non-null float64
nocturnal_distance_metric 53193 non-null float64
min_snow_imperial 76936 non-null float64
min_snow_metric 76936 non-null float64
max_snow_metric 77165 non-null float64
max_snow_imperial 77165 non-null float64
min_temp_imperial 82436 non-null float64
max_temp_imperial 82420 non-null float64
min_temp_metric 82436 non-null float64
max_temp_metric 82420 non-null float64
min_wind_metric 80027 non-null float64
max_wind_metric 80109 non-null float64
min_wind_imperial 80027 non-null float64
max_wind_imperial 80109 non-null float64
dtypes: float64(38), int64(1), object(8)
memory usage: 39.2+ MB
###Markdown
Impossible value removal
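The checks below handle one column at a time. For reference, the same cleaning can be centralised by listing the physical bounds once, as in this illustrative sketch (not the notebook's approach; the bounds shown are the ones argued for in the cells below):

```python
# Sketch: replace physically impossible values with NaN in one pass
bounds = {
    'min_wind_imperial': (0, None),      # wind speeds must be non-negative
    'max_wind_imperial': (None, 231),    # mph, fastest recorded wind
    'min_temp_imperial': (-305, 134),    # deg F
    'max_temp_imperial': (None, 134),    # deg F, hottest recorded air temperature
}
for col, (lo, hi) in bounds.items():
    if lo is not None:
        df.loc[df[col] < lo, col] = np.nan
    if hi is not None:
        df.loc[df[col] > hi, col] = np.nan
```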
###Code
df.min_wind_imperial.max()
###Output
_____no_output_____
###Markdown
That's okay
###Code
df.min_wind_imperial.min()
###Output
_____no_output_____
###Markdown
That's not okay; wind speeds must be non-negative. Negative values will be replaced with NaN.
###Code
df['min_wind_imperial'] = np.where(df['min_wind_imperial']<0, np.NaN, df['min_wind_imperial'])
df.min_wind_imperial.min()
###Output
_____no_output_____
###Markdown
We also need to do this for the metric equivalent.
###Code
df['min_wind_metric'] = np.where(df['min_wind_metric']<0, np.NaN, df['min_wind_metric'])
df.min_wind_metric.min()
df.max_wind_imperial.max()
###Output
_____no_output_____
###Markdown
That's not okay. Pretty sure that's faster than the highest ever recorded wind speed. Yep. Second fastest ever according to wikipedia is 231 mph (372 kmh).
###Code
df['max_wind_imperial'] = np.where(df['max_wind_imperial']>231, np.NaN, df['max_wind_imperial'])
df.max_wind_imperial.max()
###Output
_____no_output_____
###Markdown
Better. Need to do the same for metric.
###Code
df['max_wind_metric'] = np.where(df['max_wind_metric']>372, np.NaN, df['max_wind_metric'])
df.max_wind_metric.max()
###Output
_____no_output_____
###Markdown
ok
###Code
df.max_wind_imperial.min()
df.min_snow_imperial.max()
###Output
_____no_output_____
###Markdown
That's a lot of inches of snow, but not impossible, so we'll keep it for now.
###Code
df.min_snow_imperial.min()
###Output
_____no_output_____
###Markdown
Ok.
###Code
df.max_snow_imperial.max()
###Output
_____no_output_____
###Markdown
That's a lot of inches of snow, but not impossible, so we'll keep it for now.
###Code
df.max_snow_imperial.min()
###Output
_____no_output_____
###Markdown
Ok.
###Code
df['max_temp_imperial'].max()
###Output
_____no_output_____
###Markdown
That's not okay. The highest air temperature ever recorded, according to Wikipedia, is 56.7 °C (134.1 °F). Anything higher than that will be replaced with NaN.
###Code
df['max_temp_imperial'] = np.where(df['max_temp_imperial']>134, np.NaN, df['max_temp_imperial'])
df.max_temp_imperial.max()
df['max_temp_metric'] = np.where(df['max_temp_metric']>56, np.NaN, df['max_temp_metric'])
df.max_temp_metric.max()
df['max_temp_imperial'].min()
###Output
_____no_output_____
###Markdown
That's mighty cold, but possible. Leaving alone for now.
###Code
df['min_temp_imperial'].max()
###Output
_____no_output_____
###Markdown
Nope.
###Code
df['min_temp_imperial'] = np.where(df['min_temp_imperial']>134, np.NaN, df['min_temp_imperial'])
df.min_temp_imperial.max()
df['min_temp_metric'] = np.where(df['min_temp_metric']>56, np.NaN, df['min_temp_metric'])
df.min_temp_metric.max()
df['min_temp_imperial'].min()
###Output
_____no_output_____
###Markdown
Nope, that's not possible. Let's drop anything less than -305F.
###Code
df['min_temp_imperial'] = np.where(df['min_temp_imperial']<-305, np.NaN, df['min_temp_imperial'])
df.min_temp_imperial.min()
df['min_temp_metric'] = np.where(df['min_temp_metric']<-187, np.NaN, df['min_temp_metric'])
df.min_temp_metric.min()
df['nocturnal_distance'].max()
###Output
_____no_output_____
###Markdown
Seems like a lot. Leaving alone for now.
###Code
df['nocturnal_distance'].min()
###Output
_____no_output_____
###Markdown
Ok.
###Code
df['field_distance'].max()
###Output
_____no_output_____
###Markdown
Seems like a lot. Leaving alone for now.
###Code
df['field_distance'].min()
###Output
_____no_output_____
###Markdown
Ok. Let's look at other columns.
###Code
df.info()
df.feeder_hours.min()
###Output
_____no_output_____
###Markdown
That doesn't make sense. Let's drop values < 0.
###Code
df['feeder_hours'] = np.where(df['feeder_hours']<0, np.NaN, df['feeder_hours'])
df.feeder_hours.min()
df.min_field_parties.max()
df.min_field_parties.min()
df.field_hours.max()
df.field_hours.min()
df.nocturnal_hours.min()
df.nocturnal_hours.max()
###Output
_____no_output_____
###Markdown
Question: What's the maximum number of hours possible?
###Code
df.describe()
###Output
_____no_output_____
###Markdown
All these numeric values appear to be possible (perhaps with the exception of the maximum number of hours for some tasks). Note that impossible values for derived columns with _imperial_ and _metric_ suffixes were replaced with NaN. The original column values were not replaced (e.g. max_wind wasn't replaced). Also note that missing values were not imputed. Depending upon the variable of interest and the analysis, missing values might need to be treated in various ways.
Limit the Data to Use Circles in the USA
###Code
print(df.shape)
df.head(10)
# Drop all the locations that are not in the united states
indexNamesNUSA = df[~df['country_state'].str.contains("US-")].index
# Delete these row indexes from dataFrame
df.drop(indexNamesNUSA , inplace=True)
print(df.shape)
df.head(10)
###Output
(89568, 47)
###Markdown
Save the Output
###Code
df.to_csv("../data/Cloud_Data/1.0-rec-initial-data-cleaning.txt", sep="\t")
###Output
_____no_output_____ |
AppStat2022/Week4/original/HypothesisTesting/HypothesisTesting_original.ipynb | ###Markdown
Hypothesis Testing
Python notebook for illustrating the concept of Hypothesis Testing and specific test statistics; among them the very useful Kolmogorov-Smirnov test.
The Kolmogorov-Smirnov test (KS-test) is a general test to evaluate if two distributions in 1D are the same. This program applies an unbinned KS test, and compares it to a $\chi^2$-test and a simple comparison of means. The distributions compared are two unit Gaussians, where one is then modified by changing:
- Mean
- Width
- Normalisation
The sensitivity of each test is then considered for each of these changes.
References:
- Barlow: p. 155-156
- __[Wikipedia: Kolmogorov-Smirnov test](http://en.wikipedia.org/wiki/Kolmogorov-Smirnov_test)__
- Though influenced by biostatistics, a good discussion of p-values and their distribution can be found here: [How to interpret a p-value histogram?](http://varianceexplained.org/statistics/interpreting-pvalue-histogram/)
Authors: Troels C. Petersen (Niels Bohr Institute) Date: 07-12-2021 (latest update)
***
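A minimal, self-contained illustration of the SciPy call used later in the notebook (a sketch on toy data, not part of the original exercise):

```python
# Two-sample KS test: returns the KS statistic and a p-value
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(0.0, 1.0, 500)
b = rng.normal(0.2, 1.0, 500)   # slightly shifted mean
stat, pval = stats.ks_2samp(a, b)
print(f"KS statistic = {stat:.3f}, p-value = {pval:.3f}")
```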
###Code
import numpy as np # Matlab like syntax for linear algebra and functions
import matplotlib.pyplot as plt # Plots and figures like you know them from Matlab
import seaborn as sns # Make the plots nicer to look at
from iminuit import Minuit # The actual fitting tool, better than scipy's
import sys                                         # Used below to extend the module search path (sys.path)
from scipy.special import erfc
from scipy import stats
sys.path.append('../../../External_Functions')
from ExternalFunctions import Chi2Regression, BinnedLH, UnbinnedLH
from ExternalFunctions import nice_string_output, add_text_to_ax # useful functions to print fit results on figure
###Output
_____no_output_____
###Markdown
Set the parameters of the plot:
###Code
r = np.random # Random generator
r.seed(42) # Set a random seed (but a fixed one)
save_plots = False
verbose = True
###Output
_____no_output_____
###Markdown
The small function below is a simple helper that takes a 1D-array input along with axis, position, and color arguments, and plots the number of entries, the mean, and the standard deviation on the axis:
###Code
def ax_text(x, ax, posx, posy, color='k'):
d = {'Entries': len(x),
'Mean': x.mean(),
'STD': x.std(ddof=1),
}
add_text_to_ax(posx, posy, nice_string_output(d), ax, fontsize=12, color=color)
return None
###Output
_____no_output_____
###Markdown
and finally a function that calculates the mean, the standard deviation, and the standard deviation (i.e. uncertainty) of the mean (sdom):
###Code
def mean_std_sdom(x):
std = np.std(x, ddof=1)
return np.mean(x), std, std / np.sqrt(len(x))
###Output
_____no_output_____
###Markdown
Set up the experiment: How many experiments, and how many events in each:
###Code
N_exp = 1
N_events_A = 100
N_events_B = 100
###Output
_____no_output_____
###Markdown
Define the two Gaussians to be generated (no difference to begin with!):
###Code
dist_mean_A = 0.0
dist_width_A = 1.0
dist_mean_B = 0.0
dist_width_B = 1.0
###Output
_____no_output_____
###Markdown
Define the number of bins and the range, initialize empty arrays to store the results in and make an empty figure (to be filled in later):
###Code
N_bins = 100
xmin, xmax = -5.0, 5.0
all_p_mean = np.zeros(N_exp)
all_p_chi2 = np.zeros(N_exp)
all_p_ks = np.zeros(N_exp)
# Figure for the two distributions, A and B, in the first experiment:
fig1, ax1 = plt.subplots(figsize=(10, 6))
plt.close(fig1)
###Output
_____no_output_____
###Markdown
Loop over how many times we want to run the experiment, and for each one calculate the p-value for the hypothesis that the two distributions come from the same underlying PDF (put in the calculations yourself):
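The cell below deliberately leaves the p-value calculations as placeholders. For reference, one possible way to fill them in is sketched here (an illustrative solution, not the official one; the function name is ours, and it reuses the notebook's `mean_std_sdom` helper and bin settings):

```python
def two_sample_pvalues(x_A, x_B, n_bins, x_range):
    # Difference of means: number of sigmas -> two-sided p-value
    mean_A, _, sdom_A = mean_std_sdom(x_A)
    mean_B, _, sdom_B = mean_std_sdom(x_B)
    n_sigma = abs(mean_A - mean_B) / np.sqrt(sdom_A**2 + sdom_B**2)
    p_mean = 2.0 * stats.norm.sf(n_sigma)

    # Chi2 between the two histograms (only bins with entries contribute)
    counts_A, _ = np.histogram(x_A, bins=n_bins, range=x_range)
    counts_B, _ = np.histogram(x_B, bins=n_bins, range=x_range)
    mask = (counts_A + counts_B) > 0
    chi2_val = np.sum((counts_A[mask] - counts_B[mask])**2 / (counts_A[mask] + counts_B[mask]))
    p_chi2 = stats.chi2.sf(chi2_val, mask.sum())
    return p_mean, p_chi2
```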
###Code
for iexp in range(N_exp):
if ((iexp+1)%1000 == 0):
print(f"Got to experiment number: {iexp+1}")
# Generate data:
x_A_array = r.normal(dist_mean_A, dist_width_A, N_events_A)
x_B_array = r.normal(dist_mean_B, dist_width_B, N_events_B)
# Test if there is a difference in the mean:
# ------------------------------------------
# Calculate mean and error on mean:
mean_A, width_A, sdom_A = mean_std_sdom(x_A_array)
mean_B, width_B, sdom_B = mean_std_sdom(x_B_array)
# Consider the difference between means in terms of the uncertainty:
d_mean = mean_A - mean_B
# ... how many sigmas is that away?
# Turn a number of sigmas into a probability (i.e. p-value):
p_mean = 0.5 # Calculate yourself. HINT: "stats.norm.cdf or stats.norm.sf may be useful!"
all_p_mean[iexp] = p_mean
# Test if there is a difference with the chi2:
# --------------------------------------------
# Chi2 Test:
p_chi2 = 0.5 # Calculate the p-value of the Chi2 between histograms of A and B yourself.
all_p_chi2[iexp] = p_chi2
# Test if there is a difference with the Kolmogorov-Smirnov test on arrays (i.e. unbinned):
# -----------------------------------------------------------------------------------------
p_ks = stats.ks_2samp(x_A_array, x_B_array)[1] # Fortunately, the K-S test is implemented in stats!
all_p_ks[iexp] = p_ks
# Print the results for the first 10 experiments
if (verbose and iexp < 10) :
print(f"{iexp:4d}: p_mean: {p_mean:7.5f} p_chi2: {p_chi2:7.5f} p_ks: {p_ks:7.5f}")
# In case one wants to plot the distribution for visual inspection:
if (iexp == 0):
ax1.hist(x_A_array, N_bins, (xmin, xmax), histtype='step', label='A', color='blue')
ax1.set(title='Histograms of A and B', xlabel='A / B', ylabel='Frequency / 0.05')
ax_text(x_A_array, ax1, 0.04, 0.85, 'blue')
ax1.hist(x_B_array, N_bins, (xmin, xmax), histtype='step', label='B', color='red')
ax_text(x_B_array, ax1, 0.04, 0.65, 'red')
ax1.legend()
fig1.tight_layout()
fig1
###Output
0: p_mean: 0.50000 p_chi2: 0.50000 p_ks: 0.70206
1: p_mean: 0.50000 p_chi2: 0.50000 p_ks: 0.70206
2: p_mean: 0.50000 p_chi2: 0.50000 p_ks: 0.70206
3: p_mean: 0.50000 p_chi2: 0.50000 p_ks: 0.70206
4: p_mean: 0.50000 p_chi2: 0.50000 p_ks: 0.36819
5: p_mean: 0.50000 p_chi2: 0.50000 p_ks: 0.28194
6: p_mean: 0.50000 p_chi2: 0.50000 p_ks: 0.90841
7: p_mean: 0.50000 p_chi2: 0.50000 p_ks: 0.03638
8: p_mean: 0.50000 p_chi2: 0.50000 p_ks: 0.90841
9: p_mean: 0.50000 p_chi2: 0.50000 p_ks: 0.28194
Got to experiment number: 1000
###Markdown
Show the distribution of hypothesis test p-values:
###Code
N_bins = 50
if (N_exp > 1):
fig2, ax2 = plt.subplots(nrows=3, figsize=(12, 14))
ax2[0].hist(all_p_mean, N_bins, (0, 1), histtype='step')
ax2[0].set(title='Histogram, probability mu', xlabel='p-value', ylabel='Frequency / 0.02', xlim=(0, 1))
ax_text(all_p_mean, ax2[0], 0.04, 0.25)
ax2[1].hist(all_p_chi2, N_bins, (0, 1), histtype='step')
ax2[1].set(title='Histogram, probability chi2', xlabel='p-value', ylabel='Frequency / 0.02', xlim=(0, 1))
ax_text(all_p_chi2, ax2[1], 0.04, 0.25)
ax2[2].hist(all_p_ks, N_bins, (0, 1), histtype='step')
ax2[2].set(title='Histogram, probability Kolmogorov', xlabel='p-value', ylabel='Frequency / 0.02', xlim=(0, 1))
ax_text(all_p_ks, ax2[2], 0.04, 0.25)
fig2.tight_layout()
if save_plots:
fig2.savefig('PvalueDists.pdf', dpi=600)
###Output
_____no_output_____ |
Play_Pandas.ipynb | ###Markdown
Testing some different commands in pandas
###Code
df2 = pd.DataFrame([[1, 2], [4, 5], [7, 8],[7,5]],
... index=['cobra', 'viper', 'sidewinder','gladius'],
... columns=['max_speed', 'shield'])
###Output
_____no_output_____
###Markdown
Documentation See: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html Selecting data with loc
###Code
# df2.loc?
df2
###Output
_____no_output_____
###Markdown
Get some indices
###Code
inds1 = [False, False, True,True]
print(inds1)
inds2 = df2.shield == 5
print(inds2)
###Output
cobra False
viper True
sidewinder False
gladius True
Name: shield, dtype: bool
###Markdown
Selecting rows
###Code
# Select single row
df2.loc['cobra']
df2.loc[[True,False,False,False]]
# Select row using indices thing boolean indexing
df2.loc[inds2]
# Slice range of rows - note: it's inclusive
df2.loc['cobra':'sidewinder']
# Select several rows
df2.loc[['cobra','sidewinder']]
###Output
_____no_output_____
###Markdown
Selecting single element
###Code
df2.loc['cobra','shield']
###Output
_____no_output_____
###Markdown
Selecting rows + columns
###Code
# Select rows + single column - returns Series
df2.loc[['cobra','sidewinder'], 'shield']
# Select rows + multiple columns - returns DataFrame
df2.loc[['cobra','sidewinder'], ['shield']]
# Slice rows + single column - returns Series
df2.loc['cobra':'sidewinder', 'shield']
# Slice rows + 1 or more column - returns DataFrame
# NBNB ** Double braces are not needed when using : **
df2.loc['cobra':'sidewinder',['shield']]
# Use indices
df2.loc[inds1,'shield']
# Use inds, multiple columns
df2.loc[inds1,['shield']]
# Use inds, multiple columns
df2.loc[inds1,['shield','max_speed']]
###Output
_____no_output_____
###Markdown
Selecting data with iloc
###Code
# df2.iloc?
# Use indices, multiple columns
df2.iloc[0:2,0:1]
###Output
_____no_output_____
###Markdown
Selecting data with query
###Code
df2
df2.query('shield >= 5 and max_speed >= 4')
# Query and then select columns
df2.query('shield >= 5 and max_speed >= 4')[['shield','max_speed']]
###Output
_____no_output_____
###Markdown
Pivot
###Code
df2.pivot(columns='shield')
df3 = pd.DataFrame({"A": ["foo", "foo", "foo", "foo", "foo",
... "bar", "bar", "bar", "bar"],
... "B": ["one", "one", "one", "two", "two",
... "one", "one", "two", "two"],
... "C": ["small", "large", "large", "small",
... "small", "large", "small", "small",
... "large"],
... "D": [1, 2, 2, 3, 3, 4, 5, 6, 7],
... "E": [2, 4, 5, 5, 6, 6, 8, 9, 9]})
df3
table = pd.pivot_table(df3, values='D', index=['A', 'B'],
... columns=['C'], aggfunc=np.sum)
table
# This breaks because the chosen index/column combinations contain duplicate rows
# (e.g. A='foo', B='one', C='large' appears twice); DataFrame.pivot cannot aggregate
# duplicates, so pivot_table with an aggfunc (as above) is required.
# df3.pivot(values='D', index=['A', 'B'],columns=['C'])
###Output
_____no_output_____ |
notebooks/8_Model_Data_Augmentation_Pseudo-label_Gen1.ipynb | ###Markdown
Swish-based classifier using cosine-annealed LR with restarts and data augmentation
- Swish activation, 4 layers, 100 neurons per layer
- LR using cosine-annealing with restarts and cycle multiplicity of 2
- Data is augmented via phi rotations, and transverse and longitudinal flips
- Validation score uses an ensemble of 10 models weighted by loss
Import modules
###Code
%matplotlib inline
from __future__ import division
import sys
import os
sys.path.append('../')
from Modules.Basics import *
from Modules.Class_Basics import *
###Output
Using TensorFlow backend.
###Markdown
Options
###Code
with open(dirLoc + 'features.pkl', 'rb') as fin:
classTrainFeatures = pickle.load(fin)
nSplits = 10
patience = 2
maxEpochs = 200
ensembleSize = 10
ensembleMode = 'loss'
compileArgs = {'loss':'binary_crossentropy', 'optimizer':'adam'}
trainParams = {'epochs' : 1, 'batch_size' : 256, 'verbose' : 0}
modelParams = {'version':'modelSwish', 'nIn':len(classTrainFeatures), 'compileArgs':compileArgs, 'mode':'classifier'}
print ("\nTraining on", len(classTrainFeatures), "features:", [var for var in classTrainFeatures])
###Output
Training on 31 features: ['DER_mass_MMC', 'DER_mass_transverse_met_lep', 'DER_mass_vis', 'DER_pt_h', 'DER_deltaeta_jet_jet', 'DER_mass_jet_jet', 'DER_prodeta_jet_jet', 'DER_deltar_tau_lep', 'DER_pt_tot', 'DER_sum_pt', 'DER_pt_ratio_lep_tau', 'DER_met_phi_centrality', 'DER_lep_eta_centrality', 'PRI_met_pt', 'PRI_met_sumet', 'PRI_jet_num', 'PRI_jet_all_pt', 'PRI_tau_px', 'PRI_tau_py', 'PRI_tau_pz', 'PRI_lep_px', 'PRI_lep_py', 'PRI_lep_pz', 'PRI_jet_leading_px', 'PRI_jet_leading_py', 'PRI_jet_leading_pz', 'PRI_jet_subleading_px', 'PRI_jet_subleading_py', 'PRI_jet_subleading_pz', 'PRI_met_px', 'PRI_met_py']
###Markdown
Import data
###Code
with open(dirLoc + 'inputPipe.pkl', 'rb') as fin:
inputPipe = pickle.load(fin)
trainData = RotationReflectionBatch(classTrainFeatures, h5py.File(dirLoc + 'pseudo_train.hdf5', "r+"),
inputPipe=inputPipe, augRotMult=16)
###Output
_____no_output_____
###Markdown
Determine LR
###Code
lrFinder = batchLRFind(trainData, getModel, modelParams, trainParams,
lrBounds=[1e-5,1e-1], trainOnWeights=True, verbose=0)
###Output
2 classes found, running in binary mode
###Markdown
Train classifier
###Code
results, histories = batchTrainClassifier(trainData, nSplits, getModel,
{**modelParams, 'compileArgs':{**compileArgs, 'lr':2e-3}},
trainParams, trainOnWeights=True, maxEpochs=maxEpochs,
cosAnnealMult=2, plotLR=1, reduxDecay=1,
patience=patience, verbose=1, amsSize=250000)
###Output
Using cosine annealing
Training using weights
Running fold 1 / 10
2 classes found, running in binary mode
1 New best found: 3.3494162287467103e-05
2 New best found: 3.1251350577619516e-05
3 New best found: 3.0968467955218404e-05
4 New best found: 3.0624923153513834e-05
5 New best found: 2.972758020216545e-05
6 New best found: 2.937384930813088e-05
7 New best found: 2.9207264546658048e-05
10 New best found: 2.908599202641484e-05
11 New best found: 2.887690176432633e-05
12 New best found: 2.8494943980020672e-05
13 New best found: 2.8454158588881782e-05
14 New best found: 2.8288575807931837e-05
15 New best found: 2.8256847588616272e-05
21 New best found: 2.8249826953841972e-05
23 New best found: 2.803021944427844e-05
24 New best found: 2.7972700118413557e-05
26 New best found: 2.792655028370097e-05
27 New best found: 2.7758187295728315e-05
28 New best found: 2.7746438404781287e-05
29 New best found: 2.7712935441725306e-05
30 New best found: 2.7699189685260517e-05
31 New best found: 2.7697248742828404e-05
46 New best found: 2.7691872886670435e-05
50 New best found: 2.7597760291347505e-05
52 New best found: 2.7534906033198385e-05
55 New best found: 2.751687941008138e-05
56 New best found: 2.744188206562769e-05
57 New best found: 2.7411726427314398e-05
59 New best found: 2.7408041547214524e-05
60 New best found: 2.7386482904749667e-05
61 New best found: 2.7376100023715723e-05
62 New best found: 2.737245789523894e-05
96 New best found: 2.7313614653958784e-05
97 New best found: 2.7238783947878085e-05
102 New best found: 2.7208994674232294e-05
113 New best found: 2.7160848750986777e-05
115 New best found: 2.714154850817119e-05
116 New best found: 2.7121460546043484e-05
CosineAnneal stalling after 255 epochs, entering redux decay at LR=0.0001425649778654921
Early stopping after 265 epochs
Score is: {'loss': 2.7121460546043484e-05, 'wAUC': 0.05572767934471157, 'AUC': 0.08961860231515995, 'AMS': 10.074470784980099, 'cut': 0.9971368312835693}
###Markdown
The impact of data augmentation is pretty clear. Comparing the training here to that of the CLR Swish model without augmentation, we can see that we effectively gain another LR cycle's worth of training epochs before we start overfitting, which allows the networks to reach much lower losses (3.18e-5 c.f. 3.23e-5) and a higher AMS (3.98 c.f. 3.71).
Construct ensemble
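The ensemble below is assembled with the project's own helpers (`assembleEnsemble` and `batchEnsemblePredict`). For intuition only, a loss-weighted average of model predictions can be sketched as follows (an illustrative sketch with a name of our choosing; it is not necessarily how those helpers are implemented):

```python
import numpy as np

def loss_weighted_average(preds, losses):
    """preds: (n_models, n_samples) predictions; losses: per-model validation losses.
    Lower loss -> larger weight (one simple weighting choice)."""
    w = 1.0 / np.asarray(losses, dtype=float)
    return np.average(np.asarray(preds), axis=0, weights=w / w.sum())
```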
###Code
with open('train_weights/resultsFile.pkl', 'rb') as fin:
results = pickle.load(fin)
ensemble, weights = assembleEnsemble(results, ensembleSize, ensembleMode, compileArgs)
###Output
Choosing ensemble by loss
Model 0 is 5 with loss = 2.6590147800629504e-05
Model 1 is 6 with loss = 2.6820445510714143e-05
Model 2 is 2 with loss = 2.6826869298535008e-05
Model 3 is 3 with loss = 2.689515164667775e-05
Model 4 is 1 with loss = 2.7071041502239852e-05
Model 5 is 0 with loss = 2.7121460546043484e-05
Model 6 is 8 with loss = 2.7251977471026633e-05
Model 7 is 9 with loss = 2.7374359068844444e-05
Model 8 is 4 with loss = 2.7409945491431063e-05
Model 9 is 7 with loss = 2.7461055062593557e-05
###Markdown
Response on validation data with TTA
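Test-time augmentation (TTA) here means evaluating each event under several phi rotations and reflections and averaging the resulting predictions. For intuition, rotating a transverse-momentum pair (such as `PRI_tau_px` / `PRI_tau_py`) by an angle is just a 2-D rotation; the sketch below is illustrative only, with `RotationReflectionBatch` implementing the actual augmentation:

```python
import numpy as np

def rotate_phi(px, py, angle):
    # standard 2-D rotation of the transverse components
    c, s = np.cos(angle), np.sin(angle)
    return c * px - s * py, s * px + c * py
```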
###Code
valData = RotationReflectionBatch(classTrainFeatures, h5py.File(dirLoc + 'val.hdf5', "r+"), inputPipe=inputPipe,
rotate = True, reflect = True, augRotMult=8)
batchEnsemblePredict(ensemble, weights, valData, ensembleSize=ensembleSize, verbose=1)
print('Testing ROC AUC: unweighted {}, weighted {}'.format(roc_auc_score(getFeature('targets', valData.source), getFeature('pred', valData.source)),
roc_auc_score(getFeature('targets', valData.source), getFeature('pred', valData.source), sample_weight=getFeature('weights', valData.source))))
amsScanSlow(convertToDF(valData.source))
%%time
bootstrapMeanAMS(convertToDF(valData.source), N=512)
###Output
50000 candidates loaded
Mean AMS=1.25+-0.02, at mean cut of 0.21+-0.02
Exact mean cut 0.2129122701298911, corresponds to AMS of 1.2450575639639025
CPU times: user 3.48 s, sys: 5.27 s, total: 8.75 s
Wall time: 1min 26s
###Markdown
Adding test-time augmentation provides further benefits: overall AMS 3.90->3.97, AMS corresponding to mean cut 3.89->3.91.
###Code
val = convertToDF(valData.source)
plotFeat(val, 'pred_class', [(val.gen_target==0), (val.gen_target==1)], ['bkg', 'sig'])
batchEnsemblePredict(ensemble, weights, trainData, ensembleSize=1, verbose=1)
train = convertToDF(trainData.source)
plotFeat(val, 'pred_class', [(val.gen_target==0), (val.gen_target==1)], ['bkg', 'sig'])
###Output
/Users/giles/anaconda3/lib/python3.6/site-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
/Users/giles/anaconda3/lib/python3.6/site-packages/matplotlib/axes/_axes.py:6499: MatplotlibDeprecationWarning:
The 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead.
alternative="'density'", removal="3.1")
/Users/giles/anaconda3/lib/python3.6/site-packages/matplotlib/axes/_axes.py:6499: MatplotlibDeprecationWarning:
The 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead.
alternative="'density'", removal="3.1")
###Markdown
Test scoring
###Code
testData = RotationReflectionBatch(classTrainFeatures, h5py.File(dirLoc + 'testing.hdf5', "r+"), inputPipe=inputPipe,
rotate = True, reflect = True, augRotMult=8)
%%time
batchEnsemblePredict(ensemble, weights, testData, ensembleSize=ensembleSize, verbose=1)
scoreTestOD(testData.source, 0.9619597619166598)
###Output
_____no_output_____
###Markdown
Using the cuts we optimised by bootstrapping the validation data, we end up with a private score which would have beaten the winning entry (3.817 c.f. 3.806). It would be nice if the public score were higher, though. Save/Load
###Code
name = "weights/Swish_CLR_TTA_Pseudo1"
saveEnsemble(name, ensemble, weights, compileArgs, overwrite=1)
ensemble, weights, compileArgs, _, _ = loadEnsemble(name)
###Output
_____no_output_____ |
Lab_Activity_Week_2.ipynb | ###Markdown
Declaring Variables
###Code
import numpy as np
import matplotlib.pyplot as mplib
lit1 = ([4, 6,])
lit2 = ([2, 5,])
lit3 = ([6, 7,])
lit4 = ([5, 1,])
lit5 = ([6, 6,])
vec1 = np.array(lit1)
vec2 = np.array(lit2)
vec3 = np.array(lit3)
vec4 = np.array(lit4)
vec5 = np.array(lit5)
###Output
_____no_output_____
###Markdown
Printing Vertically
###Code
#printing arrays
print(vec1)
print(vec2)
print(vec3)
print(vec4)
print(vec5)
###Output
[4 6 4]
[2 5 2]
[6 7 8]
[5 1 0]
[6 6 2]
###Markdown
Printing Horizontally
###Code
# use a nested loop to print the lists side by side (each list becomes a column)
int_list = [[2, 5, 6], [3, 5, 1], [9, 2, 8]]
for i in range(len(int_list)):
  for v in int_list:
    print(v[i], end=' ')
  print()
###Output
2 3 9
5 5 2
6 1 8
###Markdown
addition and subtraction

###Code
#Addition
add1 = np.add(vec1, vec2)
add2 = np.add(add1, vec3)
add3 = np.add(add2, vec4)
add4 = np.add(add3, vec5)
#Subtraction
sub1 = np.subtract(vec1, vec2)
sub2 = np.subtract(sub1, vec3)
sub3 = np.subtract(sub2, vec4)
sub4 = np.subtract(sub3, vec5)
print (f"The sum of the arrays is {add4}")
print (f"The Diffrence of the arrays is {sub4}")
###Output
The sum of the arrays is [23 25]
The Diffrence of the arrays is [-15 -13]
###Markdown
Squaring

Square root (for lengths)


###Code
#Squaring
# np.square takes a single array argument; chaining the calls squares the result repeatedly
sq1 = np.square(vec1)
sq2 = np.square(sq1)
sq3 = np.square(sq2)
sq4 = np.square(sq3)
#Square root
sr1 = np.sqrt(add4)
###Output
_____no_output_____
###Markdown
Summation

###Code
#Summation
su1 = np.sum(add4)
su2 = np.sum(sub4)
su3 = np.sum(sq4)
print(sq4)
###Output
48
[-15 -13]
[ 4294967296 2821109907456]
###Markdown
Visualizing
###Code
A = add4
B = sub4
C = sq1
D = sr1
mplib.scatter(A[0],A[1], label='A', c='black')
mplib.scatter(B[0],B[1], label='B', c='blue')
mplib.scatter(C[0],C[1], label='C', c='yellow')
mplib.scatter(D[0],D[1], label='D', c='red')
mplib.title("Visualizing the Vectors")
mplib.xlim(-100, 100)
mplib.ylim(-100, 100)
mplib.axhline(y=0, color='black')
mplib.axvline(x=0, color='black')
mplib.grid()
mplib.legend()
mplib.show()
###Output
_____no_output_____
###Markdown
Result of the Operations
###Code
print(f"The sum of the arrays is {add1}")
print("I used the numpy add function, which computes the")
print("addition of two arrays. It adds its arguments element-wise.")
print("------------------------------------------------")
print(f"The difference of the arrays is {sub4}")
print("The numpy.subtract() function is used when we want to compute the difference")
print("of two arrays. It returns the difference of arr1 and arr2, element-wise.")
print("------------------------------------------------")
print(f"Squaring the vectors gives {sq4}")
print("The function returns a new array whose elements are the squares of the")
print("source array elements. The source array remains unchanged.")
print("------------------------------------------------")
print(f"The square root of the vector is {sr1}")
print("The output of the function is simply an array of the calculated square")
print("roots, arranged in exactly the same shape as the input array.")
print("-----------------------------------------------")
print(f"The summation of the given vector is {su1}")
print("This sums up the elements of an array: it takes the elements within the")
print("array and adds them together.")
###Output
The sum of the arrays is [ 6 11]
I used the numpy add function which is used when we want to compute the
addition of two array. It add arguments element-wise.
------------------------------------------------
The Diffrence of the arrays is [-15 -13]
numpy.subtract() function is used when we want to compute the difference
of two array.It returns the difference of arr1 and arr2, element-wise.
------------------------------------------------
Squaring the vectors result is [ 4294967296 2821109907456]
function returns a new array with the element value as the square of the
source array elements. The source array remains unchanged
------------------------------------------------
The Square root of a Vector is [4.79583152 5. ]
The output of the function is simply an array of those calculated square
roots, arranged in exactly the same shape as the input array.
-----------------------------------------------
The Summation of the given vector is 48
|
beginner-lessons/interdisciplinary-communication/.ipynb_checkpoints/Welcome-checkpoint.ipynb | ###Markdown
Welcome to the Hour of CI!The Hour of Cyberinfrastructure (Hour of CI) project will introduce you to the world of cyberinfrastructure (CI). If this is your first lesson, then we recommend starting with the **[Gateway Lesson](https://www.hourofci.org/gateway-lesson)**, which will introduce you to the Hour of CI project and the eight knowledge areas that make up Cyber Literacy for Geographic Information Science. This is the **Beginner Interdisciplinary Communication** lesson. To start, click on the "Run this cell" button below to set up your Hour of CI environment. It looks like this:
###Code
!cd ../..; sh setupHourofCI # Run this cell (button on left) to setup your Hour of CI environment
###Output
_____no_output_____ |
Part-2-PyMC3-modeling/ipynb/GAM/Step-2-Modeling_AOSTZ_with_pymc3-Piecewise_trend_model.ipynb | ###Markdown
Notebook Synopsis: Here I develop a set of models similar to those of Step-1-, substituting a piecewise trend sub-model for the single trend component. Specifically I:* Load the training data generated and saved in the previous notebook.* Develop and combine piecewise trend, seasonal, and residual noise sub-models, as in the previous notebook.* Compare models using WAIC or PSIS-LOOCV (a sketch of the comparison call is shown just below).* Retain and save the models predicted to perform better.
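For reference, the WAIC/PSIS-LOO comparison mentioned above typically amounts to a single ArviZ call. The cell below is an illustrative sketch only: `inference_a` and `inference_b` are placeholders for `InferenceData` objects created with `ar.from_pymc3(...)` later in this notebook, and the exact `compare` arguments may differ between ArviZ versions.
###Code
# illustrative sketch -- inference_a / inference_b are placeholder InferenceData objects
import arviz as ar
comparison = ar.compare({"piecewise_trend": inference_a, "single_trend": inference_b},
                        ic="loo")   # PSIS-LOO-CV; ic="waic" for WAIC (argument names vary by ArviZ version)
comparison
###Output
_____no_output_____
###Markdown
The rest of this notebook builds the models and traces that would feed such a comparison.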
###Code
import pickle
import pathlib
from platform import python_version as pyver
import pandas as pd
import numpy as np
import pymc3 as pm
import theano.tensor as tt
from sklearn.preprocessing import MinMaxScaler
import arviz as ar
import matplotlib.pyplot as pl
import matplotlib.dates as mdates
from matplotlib import rcParams
def print_ver(pkg, name=None):
try:
print(f'{pkg.__name__}: {pkg.__version__}')
except AttributeError:
print(f'{name}: {pkg}')
print_ver(pyver(), 'python')
for pi in [np, pd, pm, ar]:
print_ver(pi)
%matplotlib inline
years = mdates.YearLocator(day=1)
months = mdates.MonthLocator(bymonthday=1)
rcParams['xtick.major.size'] = 8
rcParams['xtick.minor.size'] = 4
rcParams['xtick.minor.visible'] = True
rcParams['xtick.labelsize'] = 16
rcParams['ytick.labelsize'] = 16
rcParams['axes.labelsize'] = 16
rcParams['axes.titlesize'] = 18
rcParams['axes.formatter.limits'] = (-3, 2)
with open('../../pickleJar/datadict.pkl', 'rb') as fb:
datadict = pickle.load(fb)
df = pd.DataFrame(datadict['frame'],)
df['aostz_scaled'] = datadict['y_s']
minmax_t = MinMaxScaler()
df['t_scaled'] = minmax_t.fit_transform(datadict['x'][:,None]).squeeze()
del datadict
df.head()
###Output
_____no_output_____
###Markdown
Modeling a Piecewise Trend: Within the context of Generalized Additive Models (GAMs), which arise from the simple additive combination of sub-models, I develop here a set of models following $$y(t) = g(t) + s(t) + ar1(t)$$where \\(y(t)\\) is the modeled signal (chlorophyll in the AOSTZ sector), \\(g(t)\\) is the trend (i.e., the *rate of change*) sub-model, \\(s(t)\\) is the seasonal sub-model, and \\(ar1(t)\\) is the AR1 residual. The piecewise model is implemented by inserting a fixed number of changepoints such that $$g(t) = (k + a(t)^T\delta)t + (m + a(t)^T\gamma)$$where \\(k\\) is the base trend, modified by preset changepoints stored in a vector \\(s\\). At each unique changepoint \\(s_j\\) the trend is adjusted by \\(\delta_j\\), stored in a vector \\(\delta\\), once \\(t\\) surpasses \\(s_j\\). Used for this purpose, \\(a(t)\\) is basically a vectorized switchboard whose \\(j\\)-th entry turns on for a given changepoint such that \begin{equation}a_j(t) = \begin{cases} 1 , & \text{if $t \geq s_j$} \\ 0 , & \text{otherwise}\end{cases} \end{equation} The second part, \\(m + a(t)^T\gamma\\), ensures the segments defined by the changepoints are connected. Here, \\(m\\) is an offset parameter, and \\(\gamma_j\\) is set to \\(-s_j\delta_j\\). The challenge is to choose the right number of preset changepoints so that actual changepoints are captured without bogging down the inference; for practicality, they also need to be regularly spaced. Here I try several setups: one changepoint at the beginning of each year, one for every season (4 pts/year), one every two months (6 pts/year), and one for every month (as many changepoints as data points). The idea is then to put a rather restrictive Laplace prior on \\(\delta\\) to rule out unlikely changepoints, effectively setting the corresponding \\(\delta_j\\) to 0. A toy sketch of this construction is shown below, followed by the [helper functions as in the previous notebook](https://tinyurl.com/y3nubuquhelpers):
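Before the PyMC3 helpers, here is a minimal NumPy sketch of the same construction (illustrative only: the time grid, changepoint locations, and \\(\delta\\) values below are made up, and the `_toy` names do not appear elsewhere in the analysis):
###Code
# toy illustration of the piecewise trend (uses the np imported above)
t_toy = np.linspace(0, 1, 10)                # 10 time steps scaled to [0, 1]
s_toy = np.array([0.25, 0.5, 0.75])          # candidate changepoint locations
a_toy = (t_toy[:, None] > s_toy) * 1         # indicator matrix a(t): 1 once t passes s_j
k_toy, m_toy = 0.5, 0.0                      # base trend and offset
delta_toy = np.array([0.0, -0.8, 0.0])       # only the middle changepoint is "active"
gamma_toy = -s_toy * delta_toy               # offsets that keep the segments connected
trend_toy = (k_toy + a_toy @ delta_toy) * t_toy + (m_toy + a_toy @ gamma_toy)
###Output
_____no_output_____
###Markdown
With a sparse (Laplace-like) \\(\delta\\) most entries stay near zero, so only a few candidate changepoints actually bend the trend; the helper functions below implement the same construction in PyMC3, with priors on \\(k\\), \\(\delta\\), and \\(m\\).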
###Code
def fourier_series(t, p=12, n=1):
"""
input:
------
t [numpy array]: vector of time index
p [int]: period
n [int]: number of fourier components
output:
-------
sinusoids [numpy array]: 2D array of cosines and sines
"""
p = p / t.size
    wls = 2 * np.pi * np.arange(1, n+1) / p   # use np.pi; a bare π is not defined in this notebook
x_ = wls * t[:, None]
sinusoids = np.concatenate((np.cos(x_), np.sin(x_)), axis=1)
return sinusoids
def seasonality(mdl, n_fourier, t):
"""
m [pymc3 Model class]: model object
n_fourier [int]: number of fourier components
t [numpy array]: vector of time index
"""
with mdl:
        # no separate noise scale here: the observation noise 'σ' is defined once in
        # model_runner, and re-registering the same variable name would raise an error in PyMC3
f_coefs = pm.Normal('fourier_coefs', 0, sd=1, shape=(n_fourier*2))
season = pm.Deterministic('season',
tt.dot(fourier_series(t, n=n_fourier), f_coefs)
)
return season
def piecewise_trend(mdl, s, t, a_t, obs,
k_prior_scale=5, δ_prior_scale=0.05, m_prior_scale=5):
"""
input:
------
mdl [pymc3 Model class]: model object
s [numpy array]: changepoint vector
t [numpy array]: time vector
obs [numpy array]: vector of observations
a_t [numpy int array]: 2D (t*s) adjustment indicator array
k_prior_scale [float]: base trend normal prior scale parameter (default=5)
δ_prior_scale [float]: trend adjustment laplace prior scale param. (default=0.05)
m_prior_scale [float]: base offset normal prior scale param. (default=5)
"""
with mdl:
# Priors:
k = pm.Normal('k', 0, k_prior_scale) # base trend prior
if δ_prior_scale is None:
δ_prior_scale = pm.Exponential('τ', 1.5)
δ = pm.Laplace('δ', 0, δ_prior_scale, shape=s.size) # rate of change prior
m = pm.Normal('m', 0, m_prior_scale) # offset prior
γ = -s * δ
trend = pm.Deterministic('trend',
(k + tt.dot(a_t, δ)) * t + (m + tt.dot(a_t, γ)))
return trend
def ar1_residual(mdl, n_obs):
with mdl:
        k_ = pm.Uniform('k_ar1', -1.1, 1.1)   # named 'k_ar1' to avoid clashing with the trend sub-model's 'k'
tau_ = pm.Gamma('tau', 10, 3)
ar1 = pm.AR1('ar1', k=k_, tau_e=tau_, shape=n_obs)
return ar1
def changepoint_setup(t, n_changepoints, s_start=None, s=None, changepoint_range=1):
"""
input:
------
t [numpy array]: time vector
n_changepoints [int]: number of changepoints to consider
s [numpy array]: user-specified changepoint vector (default=None)
s_start [int]: changepoint start index (default=0)
changepoint_range[int]: adjustable time proportion (default=1)
output:
-------
s [numpy array]: changepoint vector
a_t [numpy int array]: 2D (t*s) adjustment indicator array
"""
if s is None:
if s_start is None:
s = np.linspace(start=0, stop=changepoint_range*t.max(),
num=n_changepoints+1)[1:]
else:
s = np.linspace(start=s_start, stop=changepoint_range*t.max(),
num=n_changepoints)
a_t = (t[:,None] > s) * 1
return a_t, s
def model_runner(t_, obs_s, add_trend=False, add_season=False, add_AR1=False,
**payload):
mdl = pm.Model()
a_t, s = None, None
with mdl:
y_ = 0
σ = pm.HalfCauchy('σ', 2.5)
if add_trend:
n_switches = payload.pop('n_switches', t_.size)
s_start = payload.pop('s_start', None)
s = payload.pop('s', None)
chg_pt_rng = payload.pop('changepoint_range', 1)
k_prior_scale = payload.pop('k_prior_scale', 5)
δ_prior_scale = payload.pop('δ_prior_scale', 0.05)
m_prior_scale = payload.pop('m_prior_scale', 5)
a_t, s = changepoint_setup(t_, n_switches, s_start=s_start, s=s,
changepoint_range=chg_pt_rng)
trend_ = piecewise_trend(mdl, s, t_, a_t, obs_s,
k_prior_scale, δ_prior_scale, m_prior_scale)
y_ += trend_
if add_season:
n_fourier = payload.pop('n_fourier', 4)
season = seasonality(mdl, n_fourier=n_fourier, t=t_)
y_ += season
if add_AR1:
ar1 = ar1_residual(mdl, obs_s.size)
y_ += ar1
pm.Normal('obs', mu=y_, sd=σ, observed=obs_s)
return mdl, a_t, s
def sanity_check(m, df):
"""
:param m: (pm.Model)
:param df: (pd.DataFrame)
"""
# Sample from the prior and check of the model is well defined.
y = pm.sample_prior_predictive(model=m, vars=['obs'])['obs']
pl.figure(figsize=(16, 6))
pl.plot(y.mean(0), label='mean prior')
pl.fill_between(np.arange(y.shape[1]), -y.std(0), y.std(0), alpha=0.25, label='standard deviation')
pl.plot(df['y_scaled'], label='true value')
pl.legend()
def plot_component(axi, x, y, hpd_=None, obs=None, line_label=None, y_axis_label=None,
ax_title=None):
if isinstance(obs, np.ndarray):
axi.plot(x, obs, color='k', label='observations')
axi.plot(x, y, color='darkblue', label=line_label)
if isinstance(hpd_, np.ndarray):
axi.fill_between(x, hpd_[:, 0], hpd_[:, 1], color='steelblue',
alpha=0.5, label='95% CI')
if y_axis_label:
axi.set_ylabel(y_axis_label)
axi.legend()
if ax_title:
axi.set_title(ax_title)
axi.xaxis_date()
axi.xaxis.set_major_locator(years)
axi.xaxis.set_minor_locator(months)
axi.tick_params(axis='x', labelrotation=30)
axi.grid()
mdl_trend_only, A, s_pts = model_runner(df.t_scaled, df.aostz_scaled,
add_trend=True, n_switches=df.shape[0],)
render = pm.model_to_graphviz(mdl_trend_only)
render.render('piecewise_trend_only', directory='../../figjar/', format='png')
###Output
_____no_output_____
###Markdown
###Code
y = pm.sample_prior_predictive(model=mdl_trend_only, vars=['obs'])['obs']
pl.figure(figsize=(16, 6))
pl.plot(y.mean(0), label='mean prior')
pl.fill_between(np.arange(y.shape[1]), -y.std(0), y.std(0), alpha=0.25,
label='standard deviation')
pl.plot(df.aostz_scaled.values, marker='.', label='true value', )
pl.hlines(0, 0, df.shape[0], linestyles='--', label='expected trend')
pl.legend();
with mdl_trend_only:
trace_trend_only = pm.sample(2000, tune=2000)
trend_only_inference = ar.from_pymc3(trace=trace_trend_only,
prior=pm.sample_prior_predictive(model=mdl_trend_only),
posterior_predictive=pm.sample_posterior_predictive(trace_trend_only,
model=mdl_trend_only
)
)
trend_only_inference.to_netcdf('../../pickleJar/model_results_nc/piecewise_trend_only_inference.nc')
ppc_obs = pm.sample_posterior_predictive(trace_trend_only, model=mdl_trend_only)['obs']
f, ax = pl.subplots(nrows=1, sharex=True, figsize=(12, 4))
ylbl = 'standardized chl'
plot_component(ax, df.index, ppc_obs.mean(axis=0), hpd_=pm.hpd(ppc_obs),
               obs=df.aostz_scaled.values, line_label='model_mean', y_axis_label=ylbl,
               ax_title='All Components')
#plot_component(ax[1], d_aostz.index, ts_m4_f4_trend_mu, hpd_=ts_m4_f4_trend_hpd,
# line_label='mean', y_axis_label=ylbl, ax_title='Trend')
#plot_component(ax[2], d_aostz.index, ts_m4_f4_season_mu, hpd_=ts_m4_f4_season_hpd,
# line_label='mean', y_axis_label=ylbl,
# ax_title='Season')
#plot_component(ax[3], d_aostz.index, ts_m4_f4_ar1_mu, hpd_=ts_m4_f4_ar1_hpd,
# line_label='mean', y_axis_label=ylbl, ax_title='AR1')
ax.axhline(label='trend=0', color='r', ls=':');
ax.legend();
###Output
_____no_output_____ |
Nanodegree Blog Post.ipynb | ###Markdown
Initialise the Data
###Code
##source: https://www.kaggle.com/saurabhbagchi/dish-network-hackathon?select=Train_Dataset.csv
##60% of data was randomly chosen to allow the csv to be uploaded to GitHub
import pandas as pd
import requests
import io
# Downloading the csv file from GitHub account
url = "https://raw.githubusercontent.com/docju/nanodegreeblogpost/master/CarLoan.csv"
download = requests.get(url).content
df = pd.read_csv(io.StringIO(download.decode('utf-8')))
print(df.head())
###Output
ID Client_Income Car_Owned Bike_Owned Active_Loan House_Own \
0 12162008 45000.0 0.0 0.0 0.0 1.0
1 12201095 30150.0 1.0 0.0 0.0 1.0
2 12188608 14400.0 0.0 0.0 0.0 1.0
3 12188085 14850.0 0.0 0.0 1.0 1.0
4 12136418 12150.0 0.0 1.0 0.0 1.0
Child_Count Credit_Amount Loan_Annuity Accompany_Client ... \
0 0.0 131211.0 4875.30 Alone ...
1 0.0 72846.0 4456.35 Alone ...
2 0.0 102202.2 4342.95 Alone ...
3 0.0 83538.0 4032.00 Relative ...
4 0.0 100956.6 3349.35 Alone ...
Client_Permanent_Match_Tag Client_Contact_Work_Tag Type_Organization \
0 No Yes Business Entity Type 3
1 Yes Yes XNA
2 Yes Yes XNA
3 No Yes Self-employed
4 Yes Yes XNA
Score_Source_1 Score_Source_2 Score_Source_3 Social_Circle_Default \
0 NaN 0.489135 0.067794 0.0515
1 NaN 0.738472 0.298595 0.1031
2 NaN 0.386343 NaN NaN
3 0.501634 0.448004 0.538863 NaN
4 NaN 0.799336 NaN 0.0742
Phone_Change Credit_Bureau Default
0 832.0 0.0 0
1 2945.0 3.0 0
2 1002.0 NaN 0
3 2413.0 2.0 0
4 2029.0 3.0 1
[5 rows x 40 columns]
(121856, 40)
###Markdown
Data Understanding- get the shape of the data, see which variables are numeric, binary, or categorical, and check the proportion of missing values, the distributions, etc.
###Code
df.shape
##121856 rows, 40 columns
df.dtypes
##create list of categorical variables
cat_df = df.select_dtypes(include=['object']).copy()
cat_columns=list(cat_df.columns.values)
##create list of numeric variables
num_df= df.select_dtypes(include=['float','integer']).copy()
num_columns=list(num_df.columns.values)
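# --- illustrative addition (not in the original notebook) ---
# the Data Understanding notes above also mention missing values and distributions;
# one quick way to inspect them could be:
missing_share = df.isnull().mean().sort_values(ascending=False)  # proportion of NaN per column
numeric_summary = num_df.describe()                              # summary stats for the numeric columns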
###Output
_____no_output_____ |
Staircase.ipynb | ###Markdown
https://www.hackerrank.com/challenges/staircase/problem
###Code
def staircase(n):
for i in range(1, n+1):
print ("#"*i)
staircase(5)
def staircase(n):
for i in range(1, n+1):
print ((" "*(n-i))+("#"*i))
staircase(5)
#!/bin/python3
import math
import os
import random
import re
import sys
# Complete the staircase function below.
def staircase(n):
for i in range(1, n+1):
print ((" "*(n-i))+("#"*i))
if __name__ == '__main__':
n = int(input())
staircase(n)
###Output
10
|
Feature-Selection-for-Machine-Learning-master/Filter Methods/Combining-all-Methods.ipynb | ###Markdown
**Connect with me on LinkedIn**: https://www.linkedin.com/in/dheerajkumar1997/ Filter Methods - Basics - Correlations - Univariate ROC-AUC Putting it all together
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import VarianceThreshold
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
# load the Santander customer satisfaction dataset from Kaggle
data = pd.read_csv('santander-train.csv')
data.shape
# separate dataset into train and test
X_train, X_test, y_train, y_test = train_test_split(
data.drop(labels=['TARGET'], axis=1),
data['TARGET'],
test_size=0.3,
random_state=0)
X_train.shape, X_test.shape
# Keep a copy of the dataset with all the variables
# to measure the performance of machine learning models
# at the end of the notebook
X_train_original = X_train.copy()
X_test_original = X_test.copy()
###Output
_____no_output_____
###Markdown
Remove constant features
###Code
# remove constant features
constant_features = [
feat for feat in X_train.columns if X_train[feat].std() == 0
]
X_train.drop(labels=constant_features, axis=1, inplace=True)
X_test.drop(labels=constant_features, axis=1, inplace=True)
X_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
Remove quasi-constant features
###Code
# remove quasi-constant features
# threshold=0.01 flags quasi-constant features (roughly 99% of observations share one value)
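# worked check (illustrative): for a roughly binary feature with p = 0.99,
# variance = p * (1 - p) = 0.99 * 0.01 ≈ 0.0099, just below the 0.01 threshold,
# so such quasi-constant features get dropped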
sel = VarianceThreshold(threshold=0.01)
# fit finds the features with low variance
sel.fit(X_train)
# how many not quasi-constant?
sum(sel.get_support())
features_to_keep = X_train.columns[sel.get_support()]
# we can then remove the features like this
X_train = sel.transform(X_train)
X_test = sel.transform(X_test)
X_train.shape, X_test.shape
# sklearn transformations lead to numpy arrays
# here we transform the arrays back to dataframes
# please be mindful of getting the columns assigned
# correctly
X_train= pd.DataFrame(X_train)
X_train.columns = features_to_keep
X_test= pd.DataFrame(X_test)
X_test.columns = features_to_keep
###Output
_____no_output_____
###Markdown
Remove duplicated features
###Code
# check for duplicated features in the training set
duplicated_feat = []
for i in range(0, len(X_train.columns)):
if i % 10 == 0: # this helps me understand how the loop is going
print(i)
col_1 = X_train.columns[i]
for col_2 in X_train.columns[i + 1:]:
if X_train[col_1].equals(X_train[col_2]):
duplicated_feat.append(col_2)
len(duplicated_feat)
# remove duplicated features
X_train.drop(labels=duplicated_feat, axis=1, inplace=True)
X_test.drop(labels=duplicated_feat, axis=1, inplace=True)
X_train.shape, X_test.shape
# Keep a copy of the dataset except constant and duplicated variables
# to measure the performance of machine learning models
# at the end of the notebook
X_train_basic_filter = X_train.copy()
X_test_basic_filter = X_test.copy()
###Output
_____no_output_____
###Markdown
Remove correlated features
###Code
# find and remove correlated features
def correlation(dataset, threshold):
col_corr = set() # Set of all the names of correlated columns
corr_matrix = dataset.corr()
for i in range(len(corr_matrix.columns)):
for j in range(i):
if abs(corr_matrix.iloc[i, j]) > threshold: # we are interested in absolute coeff value
colname = corr_matrix.columns[i] # getting the name of column
col_corr.add(colname)
return col_corr
corr_features = correlation(X_train, 0.8)
print('correlated features: ', len(set(corr_features)) )
# removed correlated features
X_train.drop(labels=corr_features, axis=1, inplace=True)
X_test.drop(labels=corr_features, axis=1, inplace=True)
X_train.shape, X_test.shape
# keep a copy of the dataset at this stage
X_train_corr = X_train.copy()
X_test_corr = X_test.copy()
###Output
_____no_output_____
###Markdown
Remove features using univariate roc_auc
###Code
# find important features using univariate roc-auc
# loop to build a tree, make predictions and get the roc-auc
# for each feature of the train set
roc_values = []
for feature in X_train.columns:
clf = DecisionTreeClassifier()
clf.fit(X_train[feature].fillna(0).to_frame(), y_train)
y_scored = clf.predict_proba(X_test[feature].fillna(0).to_frame())
roc_values.append(roc_auc_score(y_test, y_scored[:, 1]))
# let's add the variable names and order it for clearer visualisation
roc_values = pd.Series(roc_values)
roc_values.index = X_train.columns
roc_values.sort_values(ascending=False).plot.bar(figsize=(20, 8))
# by removing features with univariate roc_auc == 0.5
# we remove another 30 features
selected_feat = roc_values[roc_values>0.5]
len(selected_feat), X_train.shape[1]
###Output
_____no_output_____
###Markdown
Compare the performance in machine learning algorithms
###Code
# create a function to build random forests and compare performance in train and test set
def run_randomForests(X_train, X_test, y_train, y_test):
rf = RandomForestClassifier(n_estimators=200, random_state=39, max_depth=4)
rf.fit(X_train, y_train)
print('Train set')
pred = rf.predict_proba(X_train)
print('Random Forests roc-auc: {}'.format(roc_auc_score(y_train, pred[:,1])))
print('Test set')
pred = rf.predict_proba(X_test)
print('Random Forests roc-auc: {}'.format(roc_auc_score(y_test, pred[:,1])))
# original
run_randomForests(X_train_original.drop(labels=['ID'], axis=1),
X_test_original.drop(labels=['ID'], axis=1),
y_train, y_test)
# filter methods - basic
run_randomForests(X_train_basic_filter.drop(labels=['ID'], axis=1),
X_test_basic_filter.drop(labels=['ID'], axis=1),
y_train, y_test)
# filter methods - correlation
run_randomForests(X_train_corr.drop(labels=['ID'], axis=1),
X_test_corr.drop(labels=['ID'], axis=1),
y_train, y_test)
# filter methods - univariate roc-auc
run_randomForests(X_train[selected_feat.index],
X_test_corr[selected_feat.index],
y_train, y_test)
###Output
Train set
Random Forests roc-auc: 0.8105671870819526
Test set
Random Forests roc-auc: 0.7985492537265694
###Markdown
We can see that after removing constant, quasi-constant, duplicated, correlated and now **features with univariate roc-auc == 0.5**, we keep or even enhance the performance of the random forests (0.7985 vs 0.7900) while reducing the feature space dramatically (from 371 to 90). Let's have a look at the performance of logistic regression.
###Code
# create a function to build logistic regression and compare performance in train and test set
def run_logistic(X_train, X_test, y_train, y_test):
# function to train and test the performance of logistic regression
logit = LogisticRegression(random_state=44)
logit.fit(X_train, y_train)
print('Train set')
pred = logit.predict_proba(X_train)
print('Logistic Regression roc-auc: {}'.format(roc_auc_score(y_train, pred[:,1])))
print('Test set')
pred = logit.predict_proba(X_test)
print('Logistic Regression roc-auc: {}'.format(roc_auc_score(y_test, pred[:,1])))
# original
scaler = StandardScaler().fit(X_train_original.drop(labels=['ID'], axis=1))
run_logistic(scaler.transform(X_train_original.drop(labels=['ID'], axis=1)),
scaler.transform(X_test_original.drop(labels=['ID'], axis=1)),
y_train, y_test)
# filter methods - basic
scaler = StandardScaler().fit(X_train_basic_filter.drop(labels=['ID'], axis=1))
run_logistic(scaler.transform(X_train_basic_filter.drop(labels=['ID'], axis=1)),
scaler.transform(X_test_basic_filter.drop(labels=['ID'], axis=1)),
y_train, y_test)
# filter methods - correlation
scaler = StandardScaler().fit(X_train_corr.drop(labels=['ID'], axis=1))
run_logistic(scaler.transform(X_train_corr.drop(labels=['ID'], axis=1)),
scaler.transform(X_test_corr.drop(labels=['ID'], axis=1)),
y_train, y_test)
# filter methods - univariate roc-auc
scaler = StandardScaler().fit(X_train[selected_feat.index])
run_logistic(scaler.transform(X_train[selected_feat.index]),
scaler.transform(X_test_corr[selected_feat.index]),
y_train, y_test)
###Output
/Users/anujdutt/miniconda3/envs/deeplearning/lib/python3.7/site-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
|