path (string, lengths 7-265) | concatenated_notebook (string, lengths 46-17M) |
---|---|
example/Simulate Ground Motion Maps for an Earthquake Scenario.ipynb | ###Markdown
input the site locations
###Code
site_file = 'SF_Downtown_Sites.csv'
sites = pd.read_csv(site_file)
n_sites = len(sites)
###Output
_____no_output_____
###Markdown
specify rupture index for EQHazard (see the rupture selection Notebook)
###Code
name = 'SF_NSanAndreasM725_UCERF2'
rupture_forecast = 'WGCEP (2007) UCERF2 - Single Branch'
source_idx = 127
rupture_idx = 636
rupture_dict = {'rupture_forecast':rupture_forecast,
'source_idx':source_idx,
'rupture_idx':rupture_idx}
###Output
_____no_output_____
###Markdown
select the number of realizations, a Ground Motion Model and the desired intensity measures
###Code
n_realizations = 10000
gmm = 'Chiou & Youngs (2014)'
# a list of the desired periods or None
sa_periods = None
if sa_periods is None and gmm == 'Chiou & Youngs (2014)':
sa_periods = [0.01, 0.02, 0.03, 0.05, 0.075, 0.1, 0.15, 0.2, 0.25, 0.3,
0.4, 0.5, 0.75, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.5, 10.0]
###Output
_____no_output_____
###Markdown
set up the file names
###Code
output_folder = 'map_simulations/'
if not os.path.exists(output_folder[:-1]):
os.makedirs(output_folder[:-1])
eq_input_file = output_folder + name + '.json'
eqhazard_file = output_folder + name + '_EQHazard.h5'
output_file = output_folder + name + '_Realizations.h5'
###Output
_____no_output_____
###Markdown
create the EQHazard input file
###Code
create_eqhazard_input(eq_input_file, rupture_dict, sites, gmm, sa_periods)
###Output
_____no_output_____
###Markdown
run EQHazard, convert output to an .h5 file
###Code
extract_eqhazard_data(eq_input_file, eqhazard_file)
###Output
EQHazard ran successfully.
###Markdown
simulate the ground motion maps
###Code
ground_motion_simulation(eqhazard_file, n_realizations, output_file)
###Output
_____no_output_____
###Markdown
retrieve the ground motion maps and statistics
###Code
ruptures = pd.read_hdf(output_file, key='Ruptures')
sites = pd.read_hdf(output_file, key='Sites')
display(ruptures)
display(sites)
with h5py.File(output_file, 'r') as hf:
# list of periods
periods = hf['Periods'][:]
# OpenSHA output
medians = hf['Medians'][:]
between_event_std = hf['BetweenEvStdDevs'][:]
within_event_std = hf['WithinEvStdDevs'][:]
total_std = hf['TotalStdDevs'][:]
# Ground Motion Simulation Maps
ground_motions = hf['GroundMotions'][:]
etas = hf['Etas'][:]
between_event_residuals = hf['BetweenEvResiduals'][:]
epsilons = hf['Epsilons'][:]
within_event_residuals = hf['WithinEvResiduals'][:]
[n_rups, n_sites, n_periods, n_realizations] = ground_motions.shape
# plot the first n simulations
n_sims = 100
for i_rup in range(n_rups):
for i_site in range(n_sites):
print('Site: '+str(i_site)+', Vs30: '+'{0:.0f}'.format(sites.loc[i_site,'Vs30']))
fig,ax = plt.subplots(1,1)
_ = plt.plot(periods, medians[i_rup,i_site,:], color='k', linewidth=2)
for a in [1,-1]:
_ = plt.plot(periods, np.exp(np.log(medians[i_rup,i_site,:])+a*total_std[i_rup,i_site,:]), color='k', linestyle='--', linewidth=2)
for i_real in range(n_sims):
_ = plt.plot(periods, ground_motions[i_rup,i_site,:, i_real], color='dimgray', alpha=0.2, zorder=-1)
_ = plt.xlabel('Period, $T$')
_ = plt.xlim([0, max(periods)])
_ = plt.ylabel('Spectral Acceleration, $Sa(T)$')
if False:
_ = plt.ylim(bottom=0)
else:
_ = plt.yscale('log')
_ = ax.spines['top'].set_visible(False)
_ = ax.spines['right'].set_visible(False)
_ = plt.show()
###Output
Site: 0, Vs30: 800
|
Labsheets/Lab6/Lab6_Clustering_Practice.ipynb | ###Markdown
This lab uses the Credit Card dataset. Load the data using pandas and inspect the head. Use the describe function to get a feel for the data and the categories. Use the info function to get a feel for the different categories, their counts and data types. Using the drop function, remove the 'CUST_ID' column as we don't need this piece of information. Inspect the original data to see if we have any NA or missing values. Based on the column(s) you found had missing values, replace the data with an appropriate fill value (median, mean, etc.). For each data column, plot the Kernel Density Estimate using the seaborn package. Inspect the KDE plots and consider which columns you think are important to the Credit Card dataset. Consider how the plots are skewed, and the variation across the plots. Because we're going to be using clustering to get a good visualisation, we want to include the skewness.
###Code
cols = ['BALANCE', 'ONEOFF_PURCHASES', 'INSTALLMENTS_PURCHASES', 'CASH_ADVANCE', 'ONEOFF_PURCHASES_FREQUENCY','PURCHASES_INSTALLMENTS_FREQUENCY', 'CASH_ADVANCE_TRX', 'PURCHASES_TRX', 'CREDIT_LIMIT', 'PAYMENTS', 'MINIMUM_PAYMENTS', 'PRC_FULL_PAYMENT']
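# --- Illustrative sketch (not part of the original lab): the preprocessing steps
# --- described above, assuming the credit card data lives in a CSV file.
# --- The filename 'CC_GENERAL.csv' is a guess and may need to be adjusted.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

cc = pd.read_csv('CC_GENERAL.csv')            # load the data
print(cc.head())                              # inspect the head
print(cc.describe())                          # summary statistics to get a feel for the data
cc.info()                                     # column counts and data types
cc = cc.drop(columns=['CUST_ID'])             # remove the customer ID column
print(cc.isna().sum())                        # check for NA / missing values
cc = cc.fillna(cc.median(numeric_only=True))  # fill missing values with the column median
for col in cc.columns:                        # Kernel Density Estimate plot per column
    sns.kdeplot(cc[col])
    plt.title(col)
    plt.show()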
###Output
_____no_output_____ |
00_download_preprocess_sentinel2.ipynb | ###Markdown
Download and preprocess Sentinel-2 images Notebook for downloading and preprocessing Sentinel-2 images from the Copernicus Open Access Hub (requires an account)
* Level-2A products are globally available from December 2018 onwards
* Older images (Level-1C) in the archive are processed using the standalone Sen2Cor tool (http://step.esa.int/main/third-party-plugins-2/sen2cor/sen2cor_v2-8/)
* The Sen2Cor-02.08.00-win64 directory path should be added to the System Variables
* Images are downloaded over two different tile IDs: T19PEP (covers Bonaire) and T19PFP (covers the sea east of Bonaire)
* Some products are not readily available and are stored in a Long Term Archive (LTA). Running download_all() will trigger retrieval from LTA and make the data available within 24 hours. Unfortunately, offline products can only be requested every 30 minutes. These products were downloaded manually via the Copernicus Hub.
* Sentinel products are always stored outside the project directory (GitHub repository)
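As a small sanity check before the cells below (an illustrative sketch, not part of the original workflow), the following verifies that the Sen2Cor L2A_Process command can be found on the system PATH; the install directory used here is only an example of a local setup and is an assumption.

import os
import shutil

sen2cor_dir = r"C:\projects\Sen2Cor-02.08.00-win64"  # example install location, adjust as needed
if shutil.which("L2A_Process") is None:
    # temporarily prepend the Sen2Cor directory to PATH for this session
    os.environ["PATH"] = sen2cor_dir + os.pathsep + os.environ["PATH"]
print("L2A_Process on PATH:", shutil.which("L2A_Process") is not None)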
###Code
from sentinelsat import *
from collections import OrderedDict
from datetime import datetime,timedelta, date
import pandas as pd
import getpass
import os
import re
from glob import glob
import subprocess
###Output
_____no_output_____
###Markdown
Downloading Sentinel-2 images
###Code
#user authentication (Copernicus account)
username = getpass.getpass("Username:")
pswd = getpass.getpass("Password:")
api = SentinelAPI(username,pswd,'https://scihub.copernicus.eu/dhus')
#dictionary with selected dates per tile
dates_tiles = {"T19PEP":[20180304,20180309,20180314,20180319,20190108,
20190128,20190212,20190304, 20190309, 20190314,
20190319, 20190508, 20190513, 20190518, 20190523,
20190821, 20191129],
"T19PFP":[20180304,20190304,20190428]}
#retrieving product information
products = OrderedDict()
for tile in list(dates_tiles.keys()):
for d in dates_tiles[tile]:
date = datetime.strptime(str(d),'%Y%m%d').date()
#construct query
kw_query = {'platformname': 'Sentinel-2',
'filename':f'*_{tile}_*',
'date':(date, date+timedelta(days=5))} #plus 5 days to get single scene
#get level-2 products if date> December 2018
if date>datetime.strptime(str(20181201),'%Y%m%d').date():
kw_query['producttype']= 'S2MSI2A'
else:
kw_query['producttype']= 'S2MSI1C'
#retrieve ID used to download the data and store to OrderedDict()
pp = api.query(**kw_query)
products.update(pp)
#convert to dataframe to view product information (cloud coverage, sensing date, etc.)
df_products = api.to_dataframe(products)
#store product IDs according to product type
level2_online = []
level1_online = []
#check online products
for product_id in df_products.index:
odata = api.get_product_odata(product_id)
print(f"{odata['title']} is available: {odata['Online']} ")
#sort products
if odata['Online'] and "MSIL2A" in odata['title']:
level2_online.append(product_id)
elif odata['Online'] and "MSIL1C" in odata['title']:
level1_online.append(product_id)
#create output folders for each product type
level2_dir = '...'
level1_dir = '...'
os.makedirs(level2_dir,exist_ok=True)
os.makedirs(level1_dir,exist_ok=True)
#download products to each folder
if os.path.exists(level1_dir) and os.path.exists(level2_dir):
api.download_all(products=level1_online,directory_path=level1_dir)
api.download_all(products=level2_online,directory_path=level2_dir)
###Output
_____no_output_____
###Markdown
Processing level-1C to level-2A products
###Code
#set I/O directories
level2_dir = '...'
level1_dir = '...'
#get level-1C file paths
level1_files = glob(level1_dir+"/*.SAFE")
#pop-up cmd window(s) and execute Sen2Cor processor
sen2cor_dir = "../projects/Sen2Cor-02.08.00-win64"
for file in level1_files:
cmd = f'L2A_Process --resolution 10 {file} --output_dir {level2_dir}'
os.system(f' start cmd /k "cd {sen2cor_dir} && {cmd}" ')
###Output
_____no_output_____ |
House_Price_Data_V2/Project.ipynb | ###Markdown
Take a look at the data
###Code
df[['RegionName', 'State','average_price',]].nlargest(10,'average_price')
df[['RegionName', 'State', 'average_price']].nsmallest(10, 'average_price')
###Output
_____no_output_____
###Markdown
Let's see how the average price over the time period is related to size of the region and the volatility of the price.
###Code
#Let's color code for each region
colors = np.random.rand(df.shape[0])
#scale the variance down by a factor of 10 so the sizes are more manageable
sizes = df['price_variance'] / 10
plt.style.use('fivethirtyeight')
#Scatter plot of the data
plt.scatter(df['size'],df['average_price'],s=sizes,c=colors,alpha=0.7)
plt.ylim([0,2500])
plt.xlim([-10,85])
plt.ylabel('Median Listing Price')
plt.xlabel('Size')
labels = df['RegionName']
plt.text(-10,2500,'The size of the dot represents variance in price for the region',fontsize=10,color='red')
#let's label our plot's dots
top_five_variance = df[['RegionName','size','average_price','price_variance']].nlargest(5,'price_variance')
for r in top_five_variance.itertuples(index=False):
plt.annotate(r[0],xy=(r[1],r[2]),size=10,xycoords='data',xytext=(r[1]+10,r[2]+20),arrowprops=dict(arrowstyle = '->', color='black'))
plt.show()
###Output
_____no_output_____ |
AWS Machine Learning Engineering/4_optimizing_code_holiday_gifts.ipynb | ###Markdown
Optimizing Code: Holiday Gifts In the last example, you learned that using vectorized operations and more efficient data structures can optimize your code. Let's use these tips for one more example. Say your online gift store has one million users that each listed a gift on a wish list. You have the prices for each of these gifts stored in `gift_costs.txt`. For the holidays, you're going to give each customer their wish list gift for free if it is under 25 dollars. Now, you want to calculate the total cost of all gifts under 25 dollars to see how much you'd spend on free gifts. Here's one way you could've done it.
###Code
import time
import numpy as np
with open('gift_costs.txt') as f:
gift_costs = f.read().split('\n')
gift_costs = np.array(gift_costs).astype(int) # convert string to int
start = time.time()
total_price = 0
for cost in gift_costs:
if cost < 25:
total_price += cost * 1.08 # add cost after tax
print(total_price)
print('Duration: {} seconds'.format(time.time() - start))
###Output
32765421.24
Duration: 5.542772054672241 seconds
###Markdown
Here you iterate through each cost in the list, and check if it's less than 25. If so, you add the cost to the total price after tax. This works, but there is a much faster way to do this. Can you refactor this to run in under half a second? Refactor Code **Hint:** Using numpy makes it very easy to select all the elements in an array that meet a certain condition, and then perform operations on them together all at once. You can then find the sum of what those values end up being.
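As a tiny illustration of the hint (with made-up numbers, separate from the gift data), boolean masking selects the qualifying elements and sums them in one step:

import numpy as np

demo = np.array([10, 30, 5, 40, 20])
print(demo[demo < 25])         # [10  5 20] - the elements meeting the condition
print(demo[demo < 25].sum())   # 35 - their sum, computed all at once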
###Code
start = time.time()
total_price = (gift_costs[gift_costs < 25]).sum() * 1.08 # TODO: compute the total price
print(total_price)
print('Duration: {} seconds'.format(time.time() - start))
###Output
32765421.24
Duration: 0.11274290084838867 seconds
|
experiments/train_parallel_coco_i1_5-shot.ipynb | ###Markdown
Train
###Code
# Create model object in training mode.
model = siamese_model.SiameseMaskRCNN(mode="training", model_dir=MODEL_DIR, config=config)
train_schedule = OrderedDict()
train_schedule[1] = {"learning_rate": config.LEARNING_RATE, "layers": "heads"}
train_schedule[120] = {"learning_rate": config.LEARNING_RATE, "layers": "all"}
train_schedule[160] = {"learning_rate": config.LEARNING_RATE/10, "layers": "all"}
# Load weights trained on Imagenet
try:
model.load_latest_checkpoint(training_schedule=train_schedule)
except:
model.load_imagenet_weights(pretraining='imagenet-687')
for epochs, parameters in train_schedule.items():
print("")
print("training layers {} until epoch {} with learning_rate {}".format(parameters["layers"],
epochs,
parameters["learning_rate"]))
model.train(coco_train, coco_val,
learning_rate=parameters["learning_rate"],
epochs=epochs,
layers=parameters["layers"])
###Output
_____no_output_____ |
final_notebooks_Final-XGBoost.ipynb | ###Markdown
This notebook trains and tests the XGBoost model.
###Code
%autosave 60
# defining os variables
BUCKET_NAME = "msil_raw"
FOLDER_NAME = "training_data"
TRAINFILE = "trainset_final.csv"
VALIDFILE = "validset_final.csv"
TESTFILE = "testset_final.csv"
# importing the variables
import google.datalab.storage as storage
import pandas as pd
from io import BytesIO
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import xgboost as xgb
from sklearn.model_selection import GridSearchCV
import time
from datetime import datetime
from scipy import integrate
import pickle
# setting up the parameters
plt.rcParams["figure.figsize"] = (10, 10)
pd.set_option("display.max_rows", 200)
pd.set_option("display.max_columns", 200)
pd.set_option("precision", 15)
sns.set_style("darkgrid")
# importing the training data. If using local system, skip this cell and use os library instead.
mybucket = storage.Bucket(BUCKET_NAME)
data_csv = mybucket.object(FOLDER_NAME + "/" + TRAINFILE)
uri = data_csv.uri
%gcs read --object $uri --variable data
trainset = pd.read_csv(BytesIO(data))
trainset.head()
# importing the validset
mybucket = storage.Bucket(BUCKET_NAME)
data_csv = mybucket.object(FOLDER_NAME + "/" + VALIDFILE)
uri = data_csv.uri
%gcs read --object $uri --variable data
validset = pd.read_csv(BytesIO(data))
validset.head()
# importing the testset
mybucket = storage.Bucket(BUCKET_NAME)
data_csv = mybucket.object(FOLDER_NAME + "/" + TESTFILE)
uri = data_csv.uri
%gcs read --object $uri --variable data
testset = pd.read_csv(BytesIO(data))
testset.head()
len(trainset)
###Output
_____no_output_____
###Markdown
Info Table regarding Dataset division

| Data | Range of Trips | Number of Observations |
|---------|----------------|----------------------|
| Trainset | 0 - 1643 | 3871645 |
| Validset | 1643 - 1743 | 224878 |
| Testset | 1743 - 2218 | 667516 |
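As a quick check (an illustrative sketch, not in the original notebook), the table above can be reproduced from the frames loaded earlier, as long as the `tp` column has not yet been dropped (it is dropped in the next cell):

for name, frame in [('Trainset', trainset), ('Validset', validset), ('Testset', testset)]:
    print(name, frame['tp'].min(), '-', frame['tp'].max(), ',', len(frame), 'observations')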
###Code
trainset = trainset.drop(columns = ["tp", "EVSMA_EWMA"])
validset = validset.drop(columns = ["tp", "EVSMA_EWMA"])
testset = testset.drop(columns = ["tp", "EVSMA_EWMA"])
# dropping the target variables from our dataset
x_trainset = trainset.drop(columns = ["EVSMA_delta"])
y_trainset = trainset["EVSMA_delta"]
x_validset = validset.drop(columns = ["EVSMA_delta"])
y_validset = validset["EVSMA_delta"]
x_testset = testset.drop(columns = ["EVSMA_delta"])
y_testset = testset["EVSMA_delta"]
# defining the model parameters
params = {
"eta":0.01,
"n_estimators": 100,
"max_depth": 6,
"subsample": 0.8,
"colsample_bytree": 1,
"gamma": 0,
"eval_metric": "rmse",
"nthreads": 4,
"objective": "reg:linear"
}
# converting the datasets into DMatrix, a format required by XGBoost
dtrainset = xgb.DMatrix(x_trainset, label = y_trainset)
dvalidset = xgb.DMatrix(x_validset, label = y_validset)
# training the Model
model_train = xgb.train(params, dtrainset, 5000, evals = [(dvalidset, "valid_set")], verbose_eval=1000)
# saving the trained model
pickle.dump(model_train, open("model_xgb_stack_final.pickle.dat", "wb"))
# loading the saved model
model_train = pickle.load(open('model_xgb_stack_final.pickle.dat','rb'))
# converting the testset into DMatrix
dtest = xgb.DMatrix(x_testset)
# Predictions
y_pred = model_train.predict(dtest)
# making a dataframe of actual and predicted values
result_df = pd.DataFrame({
"y": y_testset,
"yhat": y_pred
})
# calculating the Root Mean Square Error
err = (((result_df["y"] - result_df["yhat"])**2).mean())**0.5
print("RMSE = {:.4f}".format(err))
# calculating the Mean Absolute Percentage Error (MAPE)
#mape = ((result_df["y"] - result_df["yhat"])/result_df["y"]).mean()
#print("MAPE = {:.4f}".format(mape))
###Output
RMSE = 0.0056
###Markdown
--- Testing Model on different trips
###Code
# importing the testset
mybucket = storage.Bucket(BUCKET_NAME)
data_csv = mybucket.object(FOLDER_NAME + "/" + TESTFILE)
uri = data_csv.uri
%gcs read --object $uri --variable data
testset = pd.read_csv(BytesIO(data))
testset.head()
# extracting few trips
test_trip_1814 = testset[testset["tp"] == 1814]
test_trip_1936 = testset[testset["tp"] == 1936]
test_trip_1973 = testset[testset["tp"] == 1973]
test_trip_1757 = testset[testset["tp"] == 1757]
test_trip_1937 = testset[testset["tp"] == 1937]
test_trip_1889 = testset[testset["tp"] == 1889]
test_trip_2018 = testset[testset["tp"] == 2018]
test_trip_2011 = testset[testset["tp"] == 2011]
test_trip_1947 = testset[testset["tp"] == 1947]
test_trip_1860 = testset[testset["tp"] == 1860]
tpno = 1756
test_trip = testset[testset["tp"] == tpno]
dist = testset[testset["tp"] == tpno]["EVODOH"].iloc[-1]
sma_absolute = test_trip["EVSMA_EWMA"].iloc[0]
print("SMA Absolute = {}".format(sma_absolute))
sma_actual = test_trip["EVSMA_EWMA"]
test_trip = test_trip.drop(columns = ["EVSMA_EWMA", "tp"])
x_test_trip = test_trip.drop(columns = ["EVSMA_delta"])
y_test_trip = test_trip["EVSMA_delta"]
#model_train = pickle.load(open('xgb_finale.dat','rb'))
d_test_trip = xgb.DMatrix(x_test_trip)
predictions = model_train.predict(d_test_trip)
for i in range(0, len(predictions)):
if predictions[i]<0:
predictions[i]=0
# making a dataframe of actual and predicted values
test_trip_df = pd.DataFrame({
"y": y_test_trip,
"yhat": predictions
})
sma_list = []
for i in range(0, len(predictions)):
temp_sma = sma_absolute - predictions[i]
sma_list.append(temp_sma)
sma_absolute = temp_sma
title = "Trip " + str(tpno) + " | Dist ==" + str(round(dist, 2))
plt.plot(sma_list, label = "prediction")
plt.plot(list(sma_actual), label = "actual")
plt.title(title)
plt.legend()
plt.show()
err = (((sma_list[-1] - list(sma_actual)[-1])))/(list(sma_actual)[0] - list(sma_actual)[-1])
print("Error for the Trip = {:.2f} %".format(err * 100))
for i in range(1744,1750):
test_trip = testset[testset["tp"] == i]
dist = testset[testset["tp"] == i]["EVODOH"].iloc[-1]
sma_absolute = test_trip["EVSMA_EWMA"].iloc[0]
sma_actual = test_trip["EVSMA_EWMA"]
test_trip = test_trip.drop(columns = ["EVSMA_EWMA", "tp"])
x_test_trip = test_trip.drop(columns = ["EVSMA_delta"])
y_test_trip = test_trip["EVSMA_delta"]
d_test_trip = xgb.DMatrix(x_test_trip)
predictions = model_train.predict(d_test_trip)
for k in range(0, len(predictions)):
if predictions[k]<0:
predictions[k]=0
# making a dataframe of actual and predicted values
test_trip_df = pd.DataFrame({
"y": y_test_trip,
"yhat": predictions
})
sma_list = []
for j in range(0, len(predictions)):
temp_sma = sma_absolute - predictions[j]
sma_list.append(temp_sma)
sma_absolute = temp_sma
err = (((sma_list[-1] - list(sma_actual)[-1])))/(list(sma_actual)[0] - list(sma_actual)[-1])
title = "Trip "+str(i)+" | Dist = "+str(round(dist, 2))+" Error = "+str(round(err, 2))
plot_name = "XGB" + str(i) +".png"
plt.plot(sma_list, label = "prediction")
plt.plot(list(sma_actual), label = "actual")
plt.title(title)
plt.legend()
plt.savefig(plot_name)
print(plot_name)
print("------------------------------")
xgb.plot_importance(model_train)
###Output
_____no_output_____
###Markdown
--- Creating the Stacked DataSet
###Code
test_trip = trainset[trainset["tp"] == 0]
sma_absolute = test_trip["EVSMA_EWMA"].iloc[0]
print("SMA Absolute = {}".format(sma_absolute))
sma_actual = test_trip["EVSMA_EWMA"]
test_trip = test_trip.drop(columns = ["EVSMA_EWMA", "tp"])
x_test_trip = test_trip.drop(columns = ["EVSMA_delta"])
y_test_trip = test_trip["EVSMA_delta"]
d_test_trip = xgb.DMatrix(x_test_trip)
predictions = model_train.predict(d_test_trip)
for i in range(0, len(predictions)):
if predictions[i]<0:
predictions[i]=0
sma_list = []
for i in range(0, len(predictions)):
temp_sma = sma_absolute - predictions[i]
sma_list.append(temp_sma)
sma_absolute = temp_sma
# making a dataframe of actual and predicted values
test_trip_df = pd.DataFrame({
"y": sma_actual,
"yhat": sma_list
})
test_trip_df.head()
# calculating the Root Mean Square Error
err = (((test_trip_df["y"] - test_trip_df["yhat"])**2).mean())**0.5
print("RMSE = {:.4f}".format(err))
# calculating the Mean Absolute Percentage Error (MAPE)
mape = ((test_trip_df["y"] - test_trip_df["yhat"])/test_trip_df["y"]).mean()
print("MAPE = {:.4f} %".format(mape*100))
len(test_trip_df)
test_trip_df.to_csv('stack_xgb_data.csv', index = False)
!gsutil cp 'stack_xgb_data.csv' 'gs://msil_raw/training_data/stack_xgb_data.csv'
%gcs read --object gs://msil_raw/training_data/stack_xgb_data.csv --variable stack_xgb_data
df2 = pd.read_csv(BytesIO(stack_xgb_data))
#################################
test_trip = trainset[trainset["tp"] == 1]
sma_absolute = test_trip["EVSMA_EWMA"].iloc[0]
print("SMA Absolute = {}".format(sma_absolute))
sma_actual = test_trip["EVSMA_EWMA"]
test_trip = test_trip.drop(columns = ["EVSMA_EWMA", "tp"])
x_test_trip = test_trip.drop(columns = ["EVSMA_delta"])
y_test_trip = test_trip["EVSMA_delta"]
d_test_trip = xgb.DMatrix(x_test_trip)
predictions = model_train.predict(d_test_trip)
for i in range(0, len(predictions)):
if predictions[i]<0:
predictions[i]=0
for i in range(0, len(predictions)):
if predictions[i]<0:
predictions[i]=0
sma_list = []
for i in range(0, len(predictions)):
temp_sma = sma_absolute - predictions[i]
sma_list.append(temp_sma)
sma_absolute = temp_sma
test_trip_df = pd.DataFrame({
"y": sma_actual,
"yhat": sma_list
})
test_trip_df.head()
# calculating the Root Mean Square Error
err = (((test_trip_df["y"] - test_trip_df["yhat"])**2).mean())**0.5
print("RMSE = {:.4f}".format(err))
# calculating the Mean Absolute Percentage Error (MAPE)
mape = ((test_trip_df["y"] - test_trip_df["yhat"])/test_trip_df["y"]).mean()
print("MAPE = {:.4f} %".format(mape*100))
len(test_trip_df)
mybucket = storage.Bucket('msil_raw')
data_csv = mybucket.object('training_data/stack_xgb_data.csv')
uri = data_csv.uri
%gcs read --object $uri --variable daaa
stacked_df = pd.read_csv(BytesIO(daaa))
stacked_df.head()
len(stacked_df)
test_trip_df = pd.concat((stacked_df, test_trip_df), axis = 0).reset_index(drop = True)
len(test_trip_df)
test_trip_df.to_csv('stack_xgb_data.csv', index = False)
!gsutil cp 'stack_xgb_data.csv' 'gs://msil_raw/training_data/stack_xgb_data.csv'
%gcs read --object gs://msil_raw/training_data/stack_xgb_data.csv --variable stack_xgb_data
df2 = pd.read_csv(BytesIO(stack_xgb_data))
###Output
Copying file://stack_xgb_data.csv [Content-Type=text/csv]...
- [1 files][109.9 KiB/109.9 KiB]
Operation completed over 1 objects/109.9 KiB.
###Markdown
Looping through all other trips
###Code
for i in range(756, 1643):
print("------------------------------")
test_trip = trainset[trainset["tp"] == i]
print("Trip Number = {}".format(i))
sma_absolute = test_trip["EVSMA_EWMA"].iloc[0]
print("SMA Absolute = {}".format(sma_absolute))
sma_actual = test_trip["EVSMA_EWMA"]
test_trip = test_trip.drop(columns = ["EVSMA_EWMA", "tp"])
x_test_trip = test_trip.drop(columns = ["EVSMA_delta"])
y_test_trip = test_trip["EVSMA_delta"]
d_test_trip = xgb.DMatrix(x_test_trip)
predictions = model_train.predict(d_test_trip)
for i in range(0, len(predictions)):
if predictions[i]<0:
predictions[i]=0
sma_list = []
for i in range(0, len(predictions)):
temp_sma = sma_absolute - predictions[i]
sma_list.append(temp_sma)
sma_absolute = temp_sma
test_trip_df = pd.DataFrame({
"y": sma_actual,
"yhat": sma_list
})
# calculating the Root Mean Square Error
err = (((test_trip_df["y"] - test_trip_df["yhat"])**2).mean())**0.5
print("RMSE = {:.4f}".format(err))
# calculating the Mean Absolute Percentage Error (MAPE)
mape = ((test_trip_df["y"] - test_trip_df["yhat"])/test_trip_df["y"]).mean()
print("MAPE = {:.4f}".format(mape))
mybucket = storage.Bucket('msil_raw')
data_csv = mybucket.object('training_data/stack_xgb_data.csv')
uri = data_csv.uri
%gcs read --object $uri --variable daaa
stacked_df = pd.read_csv(BytesIO(daaa))
stacked_df.head()
print("Trip length = {}".format(len(test_trip_df)))
print("Data length prior = {}".format(len(stacked_df)))
test_trip_df = pd.concat((stacked_df, test_trip_df), axis = 0).reset_index(drop = True)
print("Data length after = {}".format(len(test_trip_df)))
test_trip_df.to_csv('stack_xgb_data.csv', index = False)
!gsutil cp 'stack_xgb_data.csv' 'gs://msil_raw/training_data/stack_xgb_data.csv'
%gcs read --object gs://msil_raw/training_data/stack_xgb_data.csv --variable stack_xgb_data
df2 = pd.read_csv(BytesIO(stack_xgb_data))
path_fig='gs://msil_raw/test_figures/'+plot_name
!gsutil cp plot_name path_fig
path_fig
###Output
_____no_output_____ |
examples/atmospheres/surface_radiation_field_checking_tools_tutorial.ipynb | ###Markdown
Surface radiation field tools In this tutorial we demonstrate usage of several tools for checking the implementation of a surface radiation field extension module.
###Code
%matplotlib inline
import warnings
warnings.filterwarnings(action='ignore')
import numpy as np
import xpsi
from matplotlib import pyplot as plt
plt.rc('font', size=20.0)
plt.rc('font', family = 'Ubuntu')
###Output
_____no_output_____
###Markdown
Calculate the specific intensity directly from local variables
###Code
# keV (local comoving frame)
E = np.logspace(-2.0, 0.5, 1000, base=10.0)
# cos(angle to local surface normal in comoving frame)
mu = np.ones(1000) * 0.5
# log10(eff. temperature [K]) and log10(local eff. gravity [cm/s^2])
local_vars = np.array([[6.11, 13.8]]*1000)
xpsi.surface_radiation_field?
xpsi.surface_radiation_field.intensity?
###Output
_____no_output_____
###Markdown
For the following cell, compile `blackbody.pyx` radiation field as the `hot.pyx` extension:
###Code
plt.figure(figsize=(8,8))
BB_I = xpsi.surface_radiation_field.intensity(E, mu, local_vars, # NB: isotropic blackbody
extension='hot', numTHREADS=2)
plt.plot(E, BB_I, 'k-', lw=2.0)
# write it to disk so accessible upon kernel restart
np.savetxt('./blackbody_spectrum_cache.txt', BB_I)
ax = plt.gca()
ax.set_yscale('log')
ax.set_xscale('log')
ax.set_ylabel('Photon specific intensity')
_ = ax.set_xlabel('Energy [keV]')
###Output
_____no_output_____
###Markdown
Let's check out a numerical atmosphere (code like this is typically found in a custom photosphere class). The numerical atmospheres loaded here were generated by the NSX atmosphere code [(Ho, W.C.G & Heinke, C.O. 2009)](https://ui.adsabs.harvard.edu/link_gateway/2009Natur.462...71H/doi:10.1038/nature08525), courtesy of W.C.G. Ho for NICER modeling efforts. One of these atmospheres (fully-ionized hydrogen) was used in [Riley et al. (2019)](https://arxiv.org/abs/1912.05702).
###Code
def preload(path, size):
NSX = np.loadtxt(path, dtype=np.double)
logT = np.zeros(size[0])
logg = np.zeros(size[1])
_mu = np.zeros(size[2]) # use underscore to bypass errors with the other mu array
logE = np.zeros(size[3])
reorder_buf = np.zeros(size)
index = 0
for i in range(reorder_buf.shape[0]):
for j in range(reorder_buf.shape[1]):
for k in range(reorder_buf.shape[3]):
for l in range(reorder_buf.shape[2]):
logT[i] = NSX[index,3]
logg[j] = NSX[index,4]
logE[k] = NSX[index,0]
_mu[reorder_buf.shape[2] - l - 1] = NSX[index,1]
reorder_buf[i,j,reorder_buf.shape[2] - l - 1,k] = 10.0**(NSX[index,2])
index += 1
buf = np.zeros(np.prod(reorder_buf.shape))
bufdex = 0
for i in range(reorder_buf.shape[0]):
for j in range(reorder_buf.shape[1]):
for k in range(reorder_buf.shape[2]):
for l in range(reorder_buf.shape[3]):
buf[bufdex] = reorder_buf[i,j,k,l]; bufdex += 1
atmosphere = (logT, logg, _mu, logE, buf)
return atmosphere
H_fully = preload('/home/thomas/Documents/NICER_analyses/H-atmosphere_Spectra (fully ionized)/NSX_H-atmosphere_Spectra/nsx_H_v171019.out',
size=(35, 11, 67, 166))
He_fully = preload('/home/thomas/Documents/NICER_analyses/He-atmosphere_Spectra (fully ionized)/NSX_He-atmosphere_Spectra/nsx_He_v170925.out',
size=(29, 11, 67, 166))
###Output
_____no_output_____
###Markdown
Next compile the `archive/hot/numerical.pyx` radiation field as the `hot.pyx` extension, and compile the `archive/elsewhere/numerical.pyx` radiation field as the `elsewhere.pyx` extension. The numerical extensions infer the size of the parameter grid, but are hard-coded for four-dimensional cubic polynomial interpolation.
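As a rough illustration of what four-dimensional interpolation over the (logT, logg, mu, logE) grid looks like, the sketch below uses SciPy's regular-grid interpolator on the preloaded hydrogen table. This is only an analogue for intuition: it uses linear rather than cubic polynomial interpolation, it is not the Cython extension itself, and it assumes the grid energies are stored as log10(E/keV).

import numpy as np
from scipy.interpolate import RegularGridInterpolator

logT, logg, _mu, logE, buf = H_fully
table = buf.reshape(len(logT), len(logg), len(_mu), len(logE))
interp = RegularGridInterpolator((logT, logg, _mu, logE), table)
# intensity at log10(T)=6.11, log10(g)=13.8, mu=0.5, E=0.1 keV (assuming logE = log10(E/keV))
print(interp([[6.11, 13.8, 0.5, np.log10(0.1)]]))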
###Code
plt.figure(figsize=(8,8))
BB_I = np.loadtxt('./blackbody_spectrum_cache.txt')
plt.plot(E, BB_I, 'k--', lw=1.0)
hot_I = xpsi.surface_radiation_field.intensity(E, mu, local_vars,
atmosphere=H_fully,
extension='hot',
numTHREADS=2)
plt.plot(E, hot_I, 'b-', lw=2.0)
elsewhere_I = xpsi.surface_radiation_field.intensity(E, mu, local_vars,
atmosphere=H_fully,
extension='elsewhere',
numTHREADS=2)
plt.plot(E, elsewhere_I, 'r-', lw=1.0)
He_fully_I = xpsi.surface_radiation_field.intensity(E, mu, local_vars,
atmosphere=He_fully,
extension='hot',
numTHREADS=2)
plt.plot(E, He_fully_I, 'k-.', lw=1.0)
# H_partial_I = xpsi.surface_radiation_field.intensity(E, mu, local_vars,
# atmosphere=H_partial,
# extension='hot',
# numTHREADS=2)
# plt.plot(E, H_partial_I, 'b-.', lw=2.0)
ax = plt.gca()
ax.set_yscale('log')
ax.set_ylim([9.0e25,4.0e29])
ax.set_xscale('log')
ax.set_ylabel('Photon specific intensity')
_ = ax.set_xlabel('Energy [keV]')
###Output
_____no_output_____
###Markdown
This behaviour is typical for an isotropic blackbody radiation field with temperature $T$ in comparison to a radiation field emergent from a (non-magnetic, fully-ionized) geometrically-thin H/He atmosphere with effective temperature $T$. Let's plot the angular dependence:
###Code
# keV (local comoving frame)
E = np.ones(1000) * 0.2
# cos(angle to local surface normal in comoving frame)
mu = np.linspace(0.01,1.0,1000)
fig = plt.figure(figsize=(16,8))
# Hydrogen
ax = fig.add_subplot(121, projection='polar')
ax.set_theta_direction(1)
ax.set_thetamin(-90.0)
ax.set_thetamax(90.0)
# log10(eff. temperature [K]) and log10(local eff. gravity [cm/s^2])
local_vars = np.array([[6.0, 13.8]]*1000)
H_fully_I = xpsi.surface_radiation_field.intensity(E, mu, local_vars,
atmosphere=H_fully,
extension='hot',
numTHREADS=2)
ax.plot(np.arccos(mu), np.log10(H_fully_I/np.max(H_fully_I)), 'k-', lw=1.0)
ax.plot(-np.arccos(mu), np.log10(H_fully_I/np.max(H_fully_I)), 'k-', lw=1.0)
# log10(eff. temperature [K]) and log10(local eff. gravity [cm/s^2])
local_vars = np.array([[5.5, 13.8]]*1000)
H_fully_I = xpsi.surface_radiation_field.intensity(E, mu, local_vars,
atmosphere=H_fully,
extension='hot',
numTHREADS=2)
ax.plot(np.arccos(mu), np.log10(H_fully_I/np.max(H_fully_I)), 'r-', lw=1.0)
ax.plot(-np.arccos(mu), np.log10(H_fully_I/np.max(H_fully_I)), 'r-', lw=1.0)
# log10(eff. temperature [K]) and log10(local eff. gravity [cm/s^2])
local_vars = np.array([[6.5, 13.8]]*1000)
H_fully_I = xpsi.surface_radiation_field.intensity(E, mu, local_vars,
atmosphere=H_fully,
extension='hot',
numTHREADS=2)
ax.plot(np.arccos(mu), np.log10(H_fully_I/np.max(H_fully_I)), 'b-', lw=1.0)
ax.plot(-np.arccos(mu), np.log10(H_fully_I/np.max(H_fully_I)), 'b-', lw=1.0)
ax.set_rmax(0.05)
ax.set_rmin(-1)
ax.set_theta_zero_location("N")
ax.set_rticks([-1.0,-0.5, 0.0])
ax.set_xlabel('log10$(I_E/I_E(\mu=1))$')
ax.xaxis.set_label_coords(0.5, 0.15)
_ = ax.set_title('H (fully-ionized)', pad=-50)
# Helium
ax = fig.add_subplot(122, projection='polar')
ax.set_theta_direction(1)
ax.set_thetamin(-90.0)
ax.set_thetamax(90.0)
# log10(eff. temperature [K]) and log10(local eff. gravity [cm/s^2])
local_vars = np.array([[6.0, 13.8]]*1000)
He_fully_I = xpsi.surface_radiation_field.intensity(E, mu, local_vars,
atmosphere=He_fully,
extension='hot',
numTHREADS=2)
ax.plot(np.arccos(mu), np.log10(He_fully_I/np.max(He_fully_I)), 'k-', lw=1.0)
ax.plot(-np.arccos(mu), np.log10(He_fully_I/np.max(He_fully_I)), 'k-', lw=1.0)
# log10(eff. temperature [K]) and log10(local eff. gravity [cm/s^2])
local_vars = np.array([[5.5, 13.8]]*1000)
He_fully_I = xpsi.surface_radiation_field.intensity(E, mu, local_vars,
atmosphere=He_fully,
extension='hot',
numTHREADS=2)
ax.plot(np.arccos(mu), np.log10(He_fully_I/np.max(He_fully_I)), 'r-', lw=1.0)
ax.plot(-np.arccos(mu), np.log10(He_fully_I/np.max(He_fully_I)), 'r-', lw=1.0)
# log10(eff. temperature [K]) and log10(local eff. gravity [cm/s^2])
local_vars = np.array([[6.5, 13.8]]*1000)
He_fully_I = xpsi.surface_radiation_field.intensity(E, mu, local_vars,
atmosphere=He_fully,
extension='hot',
numTHREADS=2)
ax.plot(np.arccos(mu), np.log10(He_fully_I/np.max(He_fully_I)), 'b-', lw=1.0)
ax.plot(-np.arccos(mu), np.log10(He_fully_I/np.max(He_fully_I)), 'b-', lw=1.0)
ax.set_rmax(0.05)
ax.set_rmin(-1)
ax.set_theta_zero_location("N")
ax.set_rticks([-1.0,-0.5, 0.0])
ax.set_xlabel('log10$(I_E/I_E(\mu=1))$')
ax.xaxis.set_label_coords(0.5, 0.15)
_ = ax.set_title('He (fully-ionized)', pad=-50)
###Output
_____no_output_____
###Markdown
Calculate the specific intensity indirectly via global variables We can also calculate intensities by specifying spacetime coordinates at the surface and values for some set of global variables that control the radiation field.
###Code
xpsi.surface_radiation_field.intensity_from_globals?
# unimportant here; just use strict bounds
bounds = dict(mass = (None, None),
radius = (None, None),
distance = (None, None),
inclination = (None, None))
spacetime = xpsi.Spacetime(bounds, dict(frequency = 1.0/(4.87e-3))) # J0030 spin
colatitude = np.ones(1000) * 1.0 # radians
azimuth = np.zeros(1000)
phase = np.zeros(1000)
global_vars = np.array([6.11]) # just temperature (globally invariant local variable)
spacetime.params
spacetime['radius'] = 12.0
spacetime['mass'] = 1.4
# we do not need the observer coordinates to compute effective gravity
# the first 5 arguments are 1D arrays that specific a point sequence in the
# joint space of surface spacetime coordinates, energy, and angle
# if you have a set of such points that does not conform readily
# to a 1D array, write a custom wrapper to handle the structure
# in your point set
I_E = xpsi.surface_radiation_field.intensity_from_globals(E,
mu,
colatitude,
azimuth,
phase,
global_vars, # -> eff. temp.
spacetime.R, # -> eff. grav.
spacetime.zeta, # -> eff. grav.
spacetime.epsilon, # -> eff. grav.
atmosphere=H_fully,
numTHREADS=2)
###Output
_____no_output_____
###Markdown
Note that only the `hot.pyx` extension is invoked here. Let's plot the spectrum and also the spectrum generated by declaring the effective gravity directly above:
###Code
plt.figure(figsize=(8,8))
plt.plot(E, hot_I, 'k-', lw=1.0)
plt.plot(E, I_E, 'r-', lw=1.0)
ax = plt.gca()
ax.set_yscale('log')
ax.set_xscale('log')
ax.set_ylabel('Photon specific intensity')
_ = ax.set_xlabel('Energy [keV]')
###Output
_____no_output_____ |
Matt/linear_modeling.ipynb | ###Markdown
log: ['TotalBsmtSF', 'BsmtCond_ord', 'BsmtQual_ord', 'GarageCond', 'GarageQual', 'GarageType_com', 'SalePrice_log', 'Garage_age_bin', 'Remod_age_bin'] : 1e-06 , 0.943454281780628
log: ['TotalBsmtSF', 'BsmtCond_ord', 'BsmtQual_ord', 'GarageCond', 'GarageQual', 'GarageType_com', 'SalePrice_log', 'Garage_age_bin', 'Remod_age_bin', '1stFlrSF_log'] : 1e-06 , 0.9424740060473317
log: ['TotalBsmtSF', 'BsmtCond_ord', 'BsmtQual_ord', 'GarageCond', 'GarageQual', 'GarageType_com', 'SalePrice_log', 'Garage_age_bin', 'Remod_age_bin', '1stFlrSF_log', '2ndFlrSF'] : 1e-06 , 0.9420770647332499
log: ['TotalBsmtSF', 'BsmtCond_ord', 'BsmtQual_ord', 'GarageCond', 'GarageQual', 'GarageType_com', 'SalePrice_log', 'Garage_age_bin', 'Remod_age_bin', '1stFlrSF_log', '2ndFlrSF', 'KitchenAbvGr'] : 1e-06 , 0.9417616764602397
['TotalBsmtSF', 'BsmtCond_ord', 'BsmtQual_ord', 'GarageCond', 'GarageQual', 'GarageType_com', 'SalePrice_log', 'Garage_age_bin', 'Remod_age_bin', '1stFlrSF_log', '2ndFlrSF', 'KitchenAbvGr', 'TotRmsAbvGrd', 'GarageArea'] : 1e-06 , 0.9416195613781179 , 39.9001979416710
['TotalBsmtSF', 'BsmtCond_ord', 'BsmtQual_ord', 'GarageCond', 'GarageQual', 'GarageType_com', 'SalePrice_log', 'Garage_age_bin', 'Remod_age_bin', '1stFlrSF_log', '2ndFlrSF', 'KitchenAbvGr', 'TotRmsAbvGrd', 'GarageArea', 'GarageFinish', 'Fence'] : 1e-06 , 0.9417193728756516 , 41.05487105565105
['TotalBsmtSF', 'BsmtCond_ord', 'BsmtQual_ord', 'GarageCond', 'GarageQual', 'SalePrice_log', 'Garage_age_bin', 'Remod_age_bin', '1stFlrSF_log', '2ndFlrSF', 'KitchenAbvGr', 'TotRmsAbvGrd', 'GarageArea', 'GarageFinish', 'Fence'] : 1e-06 , 0.9416530339287166 , 39.99237976054064
['TotalBsmtSF', 'BsmtCond_ord', 'BsmtQual_ord', 'GarageCond', 'GarageQual', 'GarageType_com', 'SalePrice_log', 'Garage_age_bin', 'Remod_age_bin', '1stFlrSF_log', '2ndFlrSF', 'KitchenAbvGr', 'TotRmsAbvGrd', 'GarageArea', 'GarageFinish', 'Fence', 'Alley'] : 1e-06 , 0.9417665575176081 , 41.1929734948757
['TotalBsmtSF', 'BsmtCond_ord', 'BsmtQual_ord', 'GarageCond', 'GarageQual', 'GarageType_com', 'SalePrice_log', 'Garage_age_bin', 'Remod_age_bin', '1stFlrSF_log', '2ndFlrSF', 'KitchenAbvGr', 'TotRmsAbvGrd', 'GarageArea', 'GarageFinish', 'Fence', 'Alley', 'number_floors'] : 1e-06 , 0.9418035097906687 , 39.559875261890845
['TotalBsmtSF', 'BsmtCond_ord', 'BsmtQual_ord', 'GarageCond', 'GarageQual', 'GarageType_com', 'SalePrice_log', 'Garage_age_bin', 'Remod_age_bin', '1stFlrSF_log', '2ndFlrSF', 'KitchenAbvGr', 'TotRmsAbvGrd', 'GarageArea', 'GarageFinish', 'Fence', 'Alley', 'number_floors', 'FireplaceQu'] : 1e-06 , 0.9417500104695516 , 40.30945260646638
['TotalBsmtSF', 'BsmtCond_ord', 'BsmtQual_ord', 'GarageCond', 'GarageQual', 'GarageType_com', 'SalePrice_log', 'Garage_age_bin', 'Remod_age_bin', '1stFlrSF_log', '2ndFlrSF', 'KitchenAbvGr', 'TotRmsAbvGrd', 'GarageArea', 'GarageFinish', 'Fence', 'Alley', 'number_floors', 'FireplaceQu', 'LotFrontage', 'LowQualFinSF', 'BsmtExposure_ord', 'MasVnrArea'] : 1e-06 , 0.9468407408759749 , 36.57884336374308
No radials: ['TotalBsmtSF', 'BsmtCond_ord', 'BsmtQual_ord', 'GarageCond', 'GarageQual', 'GarageType_com', 'SalePrice_log', 'Garage_age_bin', 'Remod_age_bin', '1stFlrSF_log', '2ndFlrSF', 'KitchenAbvGr', 'TotRmsAbvGrd', 'GarageArea', 'GarageFinish', 'Fence', 'Alley', 'number_floors', 'FireplaceQu', 'LotFrontage', 'LowQualFinSF', 'BsmtExposure_ord', 'MasVnrArea', 'LotShape_com'] : 1e-06 , 0.946833771716485 , 35.69657081462711
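The entries above appear to record, for each candidate feature list, the best Lasso alpha, the cross-validated score and (for later entries) the mean VIF. A hypothetical loop along the following lines could produce such a log; `candidate_feature_sets` is made up for illustration, and `df`, `price_log`, `lasso2`, `params_log` and `kfold` mirror objects used further down in this notebook:

from sklearn.model_selection import GridSearchCV
from statsmodels.stats.outliers_influence import variance_inflation_factor

for feats in candidate_feature_sets:  # hypothetical list of feature lists
    X = df[feats]
    tuner = GridSearchCV(lasso2, params_log, cv=kfold, return_train_score=True)
    tuner.fit(X, price_log)
    vifs = [variance_inflation_factor(X.values, i) for i in range(len(X.columns))]
    print(feats, ':', tuner.best_params_, ',',
          max(tuner.cv_results_['mean_test_score']), ',', sum(vifs) / len(vifs))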
###Code
radial = pd.read_csv('./../data/house_coordinates_1.0.csv')
radial.drop(columns = ('2204_park'), inplace = True)
for col in radial.columns:
prefix = str(col)[0:4]
if re.search('^\d\d\d\d_', str(col)):
radial.rename(columns = {col: col[5:]}, inplace = True)
rad_drops = [
'Address',
'Coords4',
'latitude',
'longitude',
'town_hall',
'cemetery',
'motel',
'camp_site',
'general',
'picnic_site',
'wastewater_plant',
'spring',
'beach',
'street_lamp',
'helipad',
'vineyard',
'crossing',
'tree',
'grass',
'christian',
'bus_stop',
'parking',
'toilet',
'bench',
'commercial',
'waste_basket',
'drinking_water',
'convenience',
'camera_surveillance',
'comms_tower',
'residential',
'gift_shop',
'jeweller',
'hairdresser',
'bookshop',
'clothes',
'retail',
'food_court',
'artwork',
'cafe',
'traffic_signals',
'beauty_shop',
'sports_shop',
'weir',
'track',
'turning_circle',
'computer_shop',
'bicycle_shop',
'department_store',
'parking_bicycle',
'golf_course',
'tower',
'beverages',
'university'
]
radial.drop(columns = rad_drops, inplace = True)
sub = df.loc[:,['PID', 'SalePrice_log']]
radial = pd.merge(radial, sub, how = 'right', on = 'PID')
radial.drop(columns = ['PID','SalePrice_log'], inplace = True)
lasso_tuner3 = GridSearchCV(lasso2, params_log, cv=kfold, return_train_score = True)
lasso_tuner3.fit(radial, price_log)
lasso_tuner3.cv_results_['mean_test_score']
lasso_tuner3.cv_results_['mean_train_score']
len(radial.columns)
feat_imp_rad = pd.Series(data = lasso_tuner3.best_estimator_.coef_, index = radial.columns)
feat_imp_rad = feat_imp_rad.sort_values(ascending = False)
ignored_rad = feat_imp_rad[feat_imp_rad == 0]
feat_imp_rad = feat_imp_rad[feat_imp_rad != 0]
print(len(feat_imp_rad))
print(feat_imp_rad)
print(len(ignored_rad))
print(ignored_rad)
vif_rad = pd.DataFrame()
vif_rad['feature'] = radial.columns
vif_rad['vif'] = [variance_inflation_factor(radial.values, i)
for i in range(len(radial.columns))]
print(sum(vif_rad['vif'])/len(vif_rad))
vif_rad.sort_values(by = 'vif', ascending = False)
radial.columns
radial = pd.read_csv('./../data/house_coordinates_1.0.csv')
radial.drop(columns = ('2204_park'), inplace = True)
for col in radial.columns:
prefix = str(col)[0:4]
if re.search('^\d\d\d\d_', str(col)):
radial.rename(columns = {col: col[5:]}, inplace = True)
df6 = pd.merge(df.copy(), radial, on = 'PID', how = 'left')
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
def fit_scale(df, col):
scaler.fit(df[[col]])
df[[col]]=scaler.transform(df[[col]])
fit_scale(df6, 'OverallQual')
fit_scale(df6, 'ExterQual')
fit_scale(df6, 'OverallCond')
fit_scale(df6, 'KitchenQual')
#df2['Porch']=((df2['OpenPorchSF']>0) | (df2['EnclosedPorch']>0) | (df2['3SsnPorch']>0) | (df2['ScreenPorch']>0))
df6['PorchSF']=df6['OpenPorchSF']+df6['EnclosedPorch']+df6['3SsnPorch']+df6['ScreenPorch']
#df2['1stFloorArea%']=df2['1stFlrSF']/df2['GrLivArea']
#df2['2ndFloorArea%']=df2['2ndFlrSF']/df2['GrLivArea']
df6['ExterQualDisc'] = df6['ExterQual'] - df6['OverallQual']
df6['OverallCondDisc'] = df6['OverallCond'] - df6['OverallQual']
df6['KitchenQualDisc'] = df6['KitchenQual'] - df6['OverallQual']
df6['SaleTypeNew']=(df6['SaleType']=='New')
df6['SaleTypeNew']=df6['SaleTypeNew'].apply(lambda x: 1 if x==True else 0)
#df2['BSMT_GLQ%']=df2['BSMT_GLQ']/df2['TotalBsmtSF']
#df2['BSMT_ALQ%']=df2['BSMT_ALQ']/df2['TotalBsmtSF']
#df2['BSMT_GLQ%']=df2['BSMT_GLQ%'].fillna(0)
#df2['BSMT_ALQ%']=df2['BSMT_ALQ%'].fillna(0)
df6['BSMT_LowQual']=df6['TotalBsmtSF']-df6['BSMT_GLQ']-df6['BSMT_ALQ']
df6['BSMT_HighQual']=df6['BSMT_GLQ']+df6['BSMT_ALQ']
df6['AreaPerPerson'] = np.log10(df6['GrLivArea']/df6['BedroomAbvGr'])
df6['BSMT_HighQual_bin'] = pd.cut(df6['BSMT_HighQual'], [-1, 1, 500, 1000, 1500, 2500], labels = ['No basement', '0-500', '500-1000', '1000-1500', '1500+'])
df6['BSMT_LowQual_bin'] = pd.cut(df6['BSMT_LowQual'], [-1, 1, 500, 1000, 1500, 2500], labels = ['No basement', '0-500', '500-1000', '1000-1500', '1500+'])
feat_incl =[
### from original dataset
'GrLivArea',
'LotArea',
'OverallQual',
'BSMT_LowQual',
'house_age_years',
'GarageCars',
'MasVnrType',
'FullBath',
'HalfBath',
'BsmtExposure_ord',
'SaleTypeNew',
'Neighborhood',
'BldgType',
'PorchSF',
'BSMT_HighQual',
'Fireplaces',
'Pool',
'BedroomAbvGr',
'ExterQual',
'OverallCond',
'KitchenQual',
### from radial location data
'water_tower',
'graveyard',
'police',
'optician',
'slipway',
'bar',
'cinema',
'supermarket',
'hotel',
'stop',
'farmyard',
'christian_catholic',
'jewish',
'muslim',
'garden_centre',
'christian_lutheran'
]
list(radial.columns)
df7 = df6.loc[:,feat_incl]
df7
non_dummies = [
'MasVnrType',
'Neighborhood',
'BldgType',
'BSMT_HighQual_bin',
'BSMT_LowQual_bin'
]
dummies = [
'Neighborhood_Blueste',
'Neighborhood_BrDale', 'Neighborhood_BrkSide', 'Neighborhood_ClearCr',
'Neighborhood_CollgCr', 'Neighborhood_Crawfor', 'Neighborhood_Edwards',
'Neighborhood_Gilbert', 'Neighborhood_Greens', 'Neighborhood_GrnHill',
'Neighborhood_IDOTRR', 'Neighborhood_Landmrk', 'Neighborhood_MeadowV',
'Neighborhood_Mitchel', 'Neighborhood_NAmes', 'Neighborhood_NPkVill',
'Neighborhood_NWAmes', 'Neighborhood_NoRidge', 'Neighborhood_NridgHt',
'Neighborhood_OldTown', 'Neighborhood_SWISU', 'Neighborhood_Sawyer',
'Neighborhood_SawyerW', 'Neighborhood_Somerst', 'Neighborhood_StoneBr',
'Neighborhood_Timber', 'Neighborhood_Veenker', 'BldgType_2fmCon',
'BldgType_Duplex', 'BldgType_Twnhs', 'BldgType_TwnhsE',
'MasVnrType_None', 'MasVnrType_Stone',
'BSMT_HighQual_bin_500-1000', 'BSMT_HighQual_bin_0-500',
'BSMT_HighQual_bin_1000-1500', 'BSMT_HighQual_bin_1500+',
'BSMT_LowQual_bin_0-500', 'BSMT_LowQual_bin_500-1000', 'BSMT_LowQual_bin_1000-1500',
'BSMT_LowQual_bin_1500+'
]
def dummify(df, non_dummies, dummies):
for dummified in dummies:
for original in non_dummies:
if original in dummified:
orig_name = f'{original}_'
value = dummified.replace(orig_name, '')
df[dummified] = df[original].map(lambda x: 1 if x == value else 0)
df = df.drop(columns = non_dummies, axis = 1)
return df
df7.columns
df7 = dummify(df7, non_dummies, dummies)
lasso_tuner4 = GridSearchCV(lasso2, params_log, cv=kfold, return_train_score = True)
lasso_tuner4.fit(df7, price_log)
lasso_tuner4.cv_results_['mean_test_score']
lasso_tuner4.best_params_
import pickle
lasso_tuner4.best_estimator_.predict(df7)
asdf = open('linear_model.txt', mode = 'wb')
asdf.close()
with open('linearmodel.pickle', mode = 'wb') as file:
pickle.dump(lasso_tuner4.best_estimator_, file)
with open('linearmodel.pickle', mode = 'rb') as file:
lm = pickle.load(file)
lm.predict(df7)
print(loc_feat_incl, ': ', max(lasso_tuner4.cv_results_['mean_test_score']), ', ', sum(vif_df['vif'])/len(vif_df))
###Output
['slipway', 'bar', 'farmyard', 'christian_catholic', 'jewish', 'muslim', 'garden_centre', 'christian_methodist', 'christian_evangelical', 'christian_lutheran'] : 0.9345469007364361 , 36.054356259285186
###Markdown
['slipway', 'bar', 'cinema', 'supermarket', 'farmyard', 'christian_catholic', 'jewish', 'muslim', 'garden_centre', 'christian_methodist', 'christian_evangelical', 'christian_lutheran'] : 0.9350983215981801 , 36.054356259285186
['slipway', 'bar', 'cinema', 'supermarket', 'farmyard', 'christian_catholic', 'jewish', 'muslim', 'garden_centre', 'christian_methodist', 'christian_evangelical', 'christian_lutheran'] : 0.9351894282916218 , 36.054356259285186
###Code
feat_imp_min = pd.Series(data = lasso_tuner4.best_estimator_.coef_, index = df7.columns)
feat_imp_min = feat_imp_min.sort_values(ascending = False)
ignored_min = feat_imp_min[feat_imp_min == 0]
feat_imp_min = feat_imp_min[feat_imp_min != 0]
print(len(feat_imp_min))
print(feat_imp_min)
print(len(ignored_min))
print(ignored_min)
vif_min = pd.DataFrame()
vif_min['feature'] = df7.columns
vif_min['vif'] = [variance_inflation_factor(df7.values, i)
for i in range(len(df7.columns))]
print(sum(vif_min['vif'])/len(vif_min))
vif_min.sort_values(by = 'vif', ascending = False)
column_title_dict = {
### from original dataset
'GrLivArea' : 'Above-ground living area in sq ft',
'LotArea' : 'Lot area in sq ft',
'OverallQual' : 'Overall quality',
'BSMT_LowQual' : 'Low-quality basement area in sq ft',
'BSMT_HighQual' : 'High-quality basement area in sq ft',
'house_age_years' : 'House age in years',
'GarageCars' : 'Number of cars held by garage',
'FullBath' : 'Number of full bathrooms',
'HalfBath' : 'Number of half-bathrooms',
'BsmtExposure_ord' : 'Basement exposure',
'Neighborhood' : 'Neighborhood',
'BldgType' : 'Building type',
'PorchSF' : 'Porch area in sq ft',
'ExterQualDisc' : 'Exterior quality score - overall quality score',
'OverallCondDisc' : 'Overall condition score - overall quality score',
'KitchenQualDisc' : 'Kitchen quality score - overall quality score',
'Fireplaces' : 'Number of fireplaces',
'Pool' : 'Pool',
'BedroomAbvGr' : 'Number of bedrooms',
'ext_Asbestos_Shingles' : 'Asbestos used in walls',
### location features
'graveyard' : 'Number graveyards within 1 mile',
'police' : 'Number of police stations within 1 mile',
'optician' : 'Number of opticians within 1 mile',
'stop' : 'Number of stop signs within 1 mile',
'slipway' : 'Number of slipways within 1 mile',
'bar' : 'Number of bars within 1 mile',
'cinema' : 'Number of cinemas within 1 mile',
'supermarket' : 'Number of supermarkets within 1 mile',
'hotel' : 'Number of hotels within 1 mile',
'farmyard' : 'Number of farmyards within 1 mile',
'water_tower' : 'Number of water towers within 1 mile',
'christian_catholic' : 'Number of catholic churches within 1 mile',
'jewish' : 'Number of synagogues within 1 mile',
'muslim' : 'Number of mosques within 1 mile',
'garden_centre' : 'Number of garden centers within 1 mile',
'christian_lutheran' : 'Number of lutheran churches within 1 mile'
}
###Output
_____no_output_____ |
udemy/machine_learning_a-z_hands-on_python_&_r_in_data_science/python/simple_linear_regression_template.ipynb | ###Markdown
Simple Linear Regression Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Importing the dataset
###Code
dataset = pd.read_csv('Salary_Data.csv')
x = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
###Output
_____no_output_____
###Markdown
Splitting the dataset into the Training set and Test set
###Code
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 1/3, random_state = 0)
###Output
_____no_output_____
###Markdown
Training the Simple Linear Regression model on the Training set
###Code
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
Predicting the Test set results
###Code
y_pred = regressor.predict(x_test)
###Output
_____no_output_____
###Markdown
Visualising the Training set results
###Code
plt.scatter(x_train, y_train, color = 'red')
plt.plot(x_train, regressor.predict(x_train), color = 'blue')
plt.title('Salary vs Experience (Training set)')
plt.xlabel('Years of Experience')
plt.ylabel('Salary')
plt.show()
###Output
_____no_output_____
###Markdown
Visualising the Test set results
###Code
plt.scatter(x_test, y_test, color = 'red')
plt.plot(x_train, regressor.predict(x_train), color = 'blue')
plt.title('Salary vs Experience (Test set)')
plt.xlabel('Years of Experience')
plt.ylabel('Salary')
plt.show()
###Output
_____no_output_____
###Markdown
Making a single prediction (for example the salary of an employee with 12 years of experience)
###Code
print(regressor.predict([[12]]))
###Output
_____no_output_____
###Markdown
Getting the final linear regression equation with the values of the coefficients
###Code
print(regressor.coef_)
print(regressor.intercept_)
###Output
[9345.94244312]
26816.19224403119
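Putting the printed values together, the fitted line is approximately Salary ≈ 9345.94 × YearsExperience + 26816.19, so 12 years of experience gives roughly 9345.94 × 12 + 26816.19 ≈ 138967, which is what the single prediction above computes.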
|
examples/.ipynb_checkpoints/homework6-checkpoint.ipynb | ###Markdown
Greedy Piracy BitTorrent allows people to download movies without staying strictly within the confines of the law, but because of the peer-to-peer nature of the download, the file will not download sequentially. The VLC player can play the incomplete movie, but if it encounters a missing chunk while streaming it will fail. A pirate is downloading _Avengers: Infinity War_, which is 149 minutes long and 12.91 GB. The pirate has been watching the download speed, and has recorded a list of download speeds in megabytes per second, each sampled over two seconds. The torrent is downloaded in 4 MB chunks in a random order. If the pirate starts watching the movie when the client says it is $x$ percent downloaded, what is the probability that they can watch the entire movie without encountering a missing chunk? For this I'll assume that all missing chunks are equally likely to be downloaded, and that chunk reception is a Poisson process. The pirate, being a l33t hax0r, has used Wireshark to obtain a list of arrival times for chunks, to be used in modeling.
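To put rough numbers on this before modelling: 12.91 GB split into 4 MB chunks is about 12910 / 4 ≈ 3228 chunks, and spreading 149 × 60 = 8940 seconds of playback over them means a specific new chunk is needed roughly every 2.77 seconds.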
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
import numpy as np
import pandas as pd
from scipy.stats import poisson, norm
import pymc3 as pm
from math import ceil
from thinkbayes2 import Suite,Joint,Pmf,MakePoissonPmf,MakeNormalPmf
import thinkplot
import pandas as pd
from math import exp
fileSize = 12.91*1000; #MB
chunkSize = 4;
fileSize = fileSize/chunkSize; #flie size in chunks
runtime = 149*60; #s
data = pd.read_csv('torrent pieces.csv') #wireshark dump
data = data[data.Info=="Piece[Malformed Packet]"] #this finds the piece packets
times = np.array(data.Time);
times = times[45:] #dump the initial times, they aren't representative
interTimes = np.diff(times)
lamPrior = np.linspace(0.5,1.6);
class Chunk(Suite):
def Likelihood(self, inter, lam):
return lam*exp(-lam*inter)
lamSuite = Chunk(lamPrior)
lamSuite.UpdateSet(interTimes)
thinkplot.Pdf(lamSuite)
thinkplot.decorate(title="PMF for $\lambda$",xlabel="$\lambda$ (chunks/s)",ylabel="PMF")
print(lamSuite.Mean())
###Output
1.0925243203772743
###Markdown
Here's a histogram of the interarrival times: That looks exponential, so I'd say it was OK to model chunk arrival as a Poisson process. For now let's do the forward problem, assuming that we know $\lambda$ (the mean download rate in chunks per second) exactly. This will help us find an easy optimization.
###Code
lam = lamSuite.Mean()
nChunks = ceil(fileSize); #number of chunks in the file
sPerChunk = runtime/nChunks; #how long each chunk takes to play
def PHaveChunkSlow(t):
"""
Probability that we have a specific chunk by time t
"""
pmf = MakePoissonPmf(lam*t,nChunks) #probabilities that have each number of chunks, 0-nChunks
pHave = 0;
for n,p in pmf.Items():
pHave += (n/nChunks)*p
return pHave
def PHaveChunk(t):
n = min(lam*t,nChunks)
return n/nChunks
ts = np.linspace(0,4000);
ps = [PHaveChunkSlow(t) for t in ts];
ps2 = [PHaveChunk(t) for t in ts];
thinkplot.plot(ts,ps,label='correct')
thinkplot.plot(ts,ps2,label='approx')
thinkplot.decorate(title='Probability of having a specific chunk over time',
xlabel='time (s)',
ylabel='probability')
###Output
_____no_output_____
###Markdown
It looks like the naive interpretation, where the probability of having a specific chunk at time $t$ is $$P=\frac{\min(\lambda t,N)}{N}$$ (where $N$ is the total number of chunks), is very close to the 'correct' implementation, where $$P=\sum_{n=0}^N \frac{n\cdot\text{poisson}(n;\lambda t)}{N},$$ but the approximate solution is much faster, so let's go with that. Now we can predict how likely the pirate is to be able to watch the movie uninterrupted.
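One way to see why the two curves nearly coincide: ignoring the truncation of the sum at $N$, the sum is just the Poisson mean divided by $N$,$$\sum_{n=0}^N \frac{n\cdot\text{poisson}(n;\lambda t)}{N} \approx \frac{E[n]}{N} = \frac{\lambda t}{N},$$which matches the approximation until $\lambda t$ reaches $N$.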
###Code
#we need a specific chunk every sPerChunk seconds to not break VLC
ts = np.linspace(0,runtime, ceil(runtime/sPerChunk)+1);
def PHaveChunk(x,t,lam):
n0 = x*nChunks #number of chunks at the begining
n = min(lam*t+n0,nChunks) #number of chunks at time t
return n/nChunks
def PSuccess(x,lam,ts=ts):
"""
probability of getting all the way through the movie without missing a chunk
having started watching at x percent downloaded
"""
ps = [PHaveChunk(x, t, lam) for t in ts];
return np.product(ps)
xs = np.linspace(0,1);
ps = [PSuccess(x,lam) for x in xs];
thinkplot.plot(xs,ps)
thinkplot.decorate(title='Probability of finishing the movie for different starting percentages',
xlabel='starting percentage',
ylabel='probability')
###Output
_____no_output_____
###Markdown
And we can now sum that over our $\lambda$ suite to find the real prediction:
###Code
xs = np.linspace(0.8,1);
psTotal = np.zeros(len(xs));
for lam,p in lamSuite.Items():
ps = [PSuccess(x,lam) for x in xs];
psTotal += np.array(ps)*p
thinkplot.plot(xs,psTotal)
thinkplot.decorate(title='Probability of finishing the movie for different starting percentages',
xlabel='starting percentage',
ylabel='probability')
###Output
_____no_output_____
###Markdown
And would you look at that, nothing really changed. To answer the question, it looks like the pirate will have to wait until the movie is about 90% downloaded before they have any chance of finishing it, and they will have to wait until 95% downloaded to have a 50-50 shot.
###Code
def P(x,t):
pTot = 0
ts = np.linspace(0, t, ceil(t/sPerChunk)+1)
for lam,p in lamSuite.Items():
pTot += p*PSuccess(x,lam,ts)
return pTot
ps = [P(0.6,t) for t in ts]
###Output
_____no_output_____ |
notebooks/prepare_and_index_news.ipynb | ###Markdown
Load real corona news and data to index
###Code
import pandas as pd
import os
import random
import json
data_path = "../data/"
fake_news_path = os.path.join(data_path+"fake_news/", "fake_news_corona.csv")
news_data_frame = pd.read_csv(fake_news_path, sep=";", encoding="utf8", names=["fake","real","real_url"])
news_data_frame.head()
###Output
_____no_output_____
###Markdown
Create dataset
###Code
dataset_path = "../data/preprocessed/"
data_file = os.path.join(dataset_path, "fake_news_train.tsv")
data_frame = pd.DataFrame(columns=["text", "label"])
for index, row in news_data_frame.iterrows():
fake = row["fake"]
data_frame = data_frame.append({"text": fake, "label": "fake"}, ignore_index=True)
real = row["real"]
if not pd.isna(real):
data_frame = data_frame.append({"text": real, "label": "real"}, ignore_index=True)
real_url = row["real_url"]
data_frame.to_csv(data_file, sep="\t", encoding="utf8", index=False)
###Output
_____no_output_____
###Markdown
Create mock jsons
###Code
from geopy.geocoders import Nominatim
geolocator = Nominatim(user_agent="specify_your_app_name_here")
cities = ["Berlin", "München", "Hamburg", "Stuttgart", "Köln", "Heinsberg", "Bremen", "Potsdam", "Mannheim", "Darmstadt", "Kaiserslautern", "Nürnberg", "Freiburg"]
locations = []
for city in cities:
location = geolocator.geocode(city)
locations.append(location)
jsons = list()
for index, row in news_data_frame.iterrows():
template = dict()
fake = row["fake"]
real = row["real"]
real_url = row["real_url"]
template["text"] = fake
fake_prob = random.random()
fake_prob = max(1-fake_prob, fake_prob)
template["classification"] = {
"fake": fake_prob,
"unknown": 0.0,
"real": 1-fake_prob
}
template["evidence"] = []
if not pd.isna(real):
template["evidence"].append({
"title": "Real title",
"text": real,
"url": real_url if pd.isna(real_url) else None,
"for_class": "real"
})
location = random.choice(locations)
template["derived"] = dict()
template["derived"]["locations"] = [{
"country": "Deutschland",
"country_code": "DE",
"locality": "Deutschland",
"region": "Bundesland",
"sub_region": "Landkreis",
"full_name": str(location),
"geo": {
"coordinates": [
location.latitude,
location.longitude
],
"type": "point"
}
}
]
jsons.append(template)
# save list in file
with open("../data/mock_jsons/mock_jsons.json","w+", encoding="utf8", newline='') as json_file:
json.dump(jsons, json_file, indent=2, ensure_ascii=False)
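# Added sketch: round-trip check that the file parses and has one entry per news item.
with open("../data/mock_jsons/mock_jsons.json", encoding="utf8") as check_file:
    assert len(json.load(check_file)) == len(jsons)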
###Output
_____no_output_____ |
notebooks/ExploringDriveProGPS.ipynb | ###Markdown
Exploring DrivePro GPS formatThe Transcend DrivePro 220 exports its videos in Quicktime MOV format, in a way that also includes GPS information every second in the video. This information can be viewed using their Windows and/or Mac apps, but not exported. This notebook will attempt to get to the bottom of how the GPS data is stored, so I can use this dashcam to provide data to the OpenStreetView project. The Quicktime `.mov` files exported by the dashcam do appear to have a custom tag, which can be extracted using the `Unknown_gps` tag using `exiftool` (or `pyexiftool` in this case). Let's choose a video ([this one](sample/2017_0706_093256_013.MOV)), and see how far we can get:
###Code
video = 'sample/2017_0706_093256_013.MOV'
# use exiftool to load the gps tag
import exiftool
import base64
import numpy as np
with exiftool.ExifTool() as et:
data = et.get_tag('Unknown_gps', video)
# decode the base64 data to a byte-string
assert data.startswith('base64:')
data = base64.b64decode(data[len('base64:'):])
# convert the byte string to a numpy array
data = np.frombuffer(data, dtype=np.uint8)
# and reshape into 8 bytes per sample
data = data.reshape((-1,8))
###Output
_____no_output_____
###Markdown
Now that we have the data, let's see if anything useful is apparent
###Code
from matplotlib import pyplot as plt
f, ax = plt.subplots(data.shape[1], 1, figsize=(20,10), sharex='col')
for i in range(data.shape[1]):
ax[i].plot(data[:,i])
ax[i].legend(['byte {}'.format(i)], loc='upper right')
###Output
_____no_output_____
###Markdown
We can see that bytes 0 and 1 (of every 8 bytes) appear to be related to a timestamp, and bytes 4-7 appear to be largely unchanging. Let's have a closer look at bytes 2 and 3 and see if we can see anything in them:
###Code
f, ax = plt.subplots(1, 1, figsize=(10,10))
ax.plot(data[:,2],data[:,3],'o-')
ax.set_xlabel('Byte 2')
ax.set_ylabel('Byte 3')
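# Added exploration (a guess, not part of the original analysis): if bytes 0 and 1 really
# carry a per-second timestamp, reading them as a little-endian 16-bit counter should give
# a mostly monotonic sequence with unit steps.
counter = data[:, 0].astype(np.uint16) | (data[:, 1].astype(np.uint16) << 8)
print(np.diff(counter.astype(np.int32))[:20])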
###Output
_____no_output_____ |
Fut-Brasileiro.ipynb | ###Markdown
Use of artificial intelligence algorithms to predict the results of football matches A study comparing the performance of different artificial intelligence algorithms. Undergraduate thesis (TCC), Computer Science, Instituto Federal do Triângulo Mineiro - Campus Ituiutaba. Author: Olesio Gardenghi Neto Data preprocessing
###Code
# Import the libraries that will be used
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
from sklearn import preprocessing
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import cross_val_score
# Silence warnings
warnings.filterwarnings("ignore")
# Change the plot style
plt.style.use('seaborn')
# Show plots inline in Jupyter
%matplotlib inline
# Read the dataset and turn it into a dataframe
df = pd.read_csv("data/Brasileirao2012.csv")
df.head()
df.columns
df.describe()
# Select only the features we are interested in
df = df[['assistances', 'receivedBalls', 'recoveredBalls', 'lostBalls', 'yellowCards', 'redCards', 'receivedCrossBalls', 'missedCrossBalls', 'defenses', 'sucessfulTackles','unsucessfulTackles','sucessfulDribles',
'unsucessfulDribles', 'givenCorners', 'receivedCorners',
'receivedFouls', 'committedFouls', 'goodFinishes','badFinishes', 'ownGoals', 'offsides','sucessfulLongPasses', 'unsucessfulLongPasses',
'sucessfulPasses', 'unsucessfulPasses', 'win', 'draw', 'defeat']]
df.head()
# Merge the 3 result columns into a single one
def convert_output(source):
    target = source.copy()  # make a copy of the source
    # writing into the row Series yielded by iterrows() is not guaranteed to modify the
    # DataFrame, so use vectorised assignments instead
    target['new'] = 0                              # defeat
    target.loc[target['draw'] == 1, 'new'] = 1     # draw
    target.loc[target['win'] == 1, 'new'] = 2      # win
    return target.iloc[:, -1]  # return all rows, and only the last column
df['FTR'] = convert_output(df[['win','draw','defeat']])
df.drop(['win','draw','defeat'],axis=1, inplace=True)
df.head()
df.info()
df.isnull().sum()
df.dropna(inplace=True)
df.head()
# 0 - Defeat, 1 - Draw, 2 - Win
sns.countplot(x='FTR', data=df)
# Normalise the data with StandardScaler
# The data distribution is transformed so that its mean = 0 and standard deviation = 1
# z = (x-u)/σ
# x = data, u = mean, σ = standard deviation
scaler = StandardScaler()
scaler.fit(df.drop(['FTR'],axis=1))
dados_normalizados = scaler.transform(df.drop(['FTR'],axis=1))
df_normalizado = pd.DataFrame(dados_normalizados, columns=df.columns[:-1])
df = df[['FTR']]
df = pd.concat([df, df_normalizado], axis=1, sort=False)
df.dropna(inplace=True)
df.head()
# Correlation heat map
sns.heatmap(df.corr(),linewidths=0.1,linecolor="black")
###Output
_____no_output_____
###Markdown
Applying the AI algorithms
###Code
# Features
X = df.drop('FTR',axis=1)
# Prediction target
y = df['FTR']
# Train/test split
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3, random_state=101)
df_y_test = y_test.reset_index()
df_y_test.drop('index',axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
from sklearn.linear_model import LogisticRegression
logistic_regression = LogisticRegression(solver='lbfgs', multi_class='auto')
logistic_regression.fit(X_train, y_train)
predict_logistic_regression = logistic_regression.predict(X_test)
reg_log_all = logistic_regression.score(X_test, y_test) * 100
cross_log_all = max(cross_val_score(logistic_regression, X, y, cv=10)) * 100
print(classification_report(y_test,predict_logistic_regression))
print(confusion_matrix(y_test,predict_logistic_regression))
print('\nScore Regressão Logística: %.2f' %reg_log_all + "%")
print('\nScore Regressão Logística Cross Validation: %.2f' %cross_log_all + "%")
plt.figure(figsize=(25, 5))
plt.plot(df_y_test, 'go', ms=15, label='Real')
plt.plot(predict_logistic_regression, '+', color='black', ms=10, markeredgewidth=2, label='Predicted')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5), fontsize = 'xx-large')
plt.ylabel('FTR',fontsize=16)
plt.xlabel('Número',fontsize=16)
plt.title('Regressão Logística',fontsize=20)
plt.show()
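# Added sketch: the "Cross Validation" scores in this notebook report the best single fold
# (max of cross_val_score); the conventional summary is the mean and standard deviation:
cv_scores = cross_val_score(logistic_regression, X, y, cv=10)
print('Logistic Regression CV: %.2f%% +/- %.2f%%' % (cv_scores.mean()*100, cv_scores.std()*100))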
###Output
_____no_output_____
###Markdown
Decision Tree
###Code
from sklearn.tree import DecisionTreeClassifier
decision_tree = DecisionTreeClassifier()
decision_tree.fit(X_train, y_train)
predict_decision_tree = decision_tree.predict(X_test)
dec_tree_all = decision_tree.score(X_test, y_test) * 100
cross_dec_tree_all = max(cross_val_score(decision_tree, X, y, cv=10)) * 100
print(classification_report(y_test,predict_decision_tree))
print(confusion_matrix(y_test,predict_decision_tree))
print('\nScore Árvore de Decisão: %.2f' %dec_tree_all + "%")
print('\nScore Árvore de Decisão Cross Validation: %.2f' %cross_dec_tree_all + "%")
plt.figure(figsize=(25, 5))
plt.plot(df_y_test, 'go', ms=15, label='Real')
plt.plot(predict_decision_tree, '+', color='black', ms=10, markeredgewidth=2, label='Predicted')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5), fontsize = 'xx-large')
plt.ylabel('FTR',fontsize=16)
plt.xlabel('Número',fontsize=16)
plt.title('Árvore de Decisão',fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
Random Forest
###Code
from sklearn.ensemble import RandomForestClassifier
# Elbow method
error_rate = []
for i in range(1,200):
random_forest = RandomForestClassifier(n_estimators=i)
random_forest.fit(X_train, y_train)
predict_random_forest = random_forest.predict(X_test)
error_rate.append(np.mean(predict_random_forest!=y_test))
plt.figure(figsize=(14,8))
plt.plot(range(1,200),error_rate,color="blue",linestyle='dashed',marker='o',markerfacecolor='red')
plt.xlabel('N')
plt.ylabel("Taxa de erro")
plt.title("Taxa de erro vs. Número estimativas")
random_forest = RandomForestClassifier(n_estimators=error_rate.index(min(error_rate))+1)  # +1: the loop above starts at n_estimators=1
random_forest.fit(X_train, y_train)
predict_random_forest = random_forest.predict(X_test)
rand_for_all = random_forest.score(X_test, y_test) * 100
cross_rand_for_all = max(cross_val_score(random_forest, X, y, cv=10)) * 100
print(classification_report(y_test,predict_random_forest))
print(confusion_matrix(y_test,predict_random_forest))
print('\nScore Floresta Aleatória: %.2f' %rand_for_all + "%")
print("\nScore Floresta Aleatória: %.2f" %cross_rand_for_all + "%")
plt.figure(figsize=(25, 5))
plt.plot(df_y_test, 'go', ms=15, label='Real')
plt.plot(predict_random_forest, '+', color='black', ms=10, markeredgewidth=2, label='Predicted')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5), fontsize = 'xx-large')
plt.ylabel('FTR',fontsize=16)
plt.xlabel('Número',fontsize=16)
plt.title('Floresta Aleatória',fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
K Nearest Neighbours (KNN)
###Code
from sklearn.neighbors import KNeighborsClassifier
error_rate = []
for i in range(1,40):
knn = KNeighborsClassifier(n_neighbors=i)
knn.fit(X_train, y_train)
predict_knn = knn.predict(X_test)
error_rate.append(np.mean(predict_knn!=y_test))
plt.figure(figsize=(14,8))
plt.plot(range(1,40),error_rate,color="blue",linestyle='dashed',marker='o',markerfacecolor='red')
plt.xlabel('N')
plt.ylabel("Taxa de erro")
plt.title("Taxa de erro vs. Número estimativas")
knn = KNeighborsClassifier(n_neighbors=error_rate.index(min(error_rate))+1)  # +1: the loop above starts at n_neighbors=1
knn.fit(X_train, y_train)
predict_knn = knn.predict(X_test)
knn_all = knn.score(X_test, y_test) * 100
cross_knn_all = max(cross_val_score(knn, X, y, cv=10)) * 100
print(classification_report(y_test,predict_knn))
print(confusion_matrix(y_test,predict_knn))
print('\nScore KNN: %.2f' %knn_all + "%")
print("\nScore KNN Cross Validation: %.2f" %cross_knn_all + "%")
plt.figure(figsize=(25, 5))
plt.plot(df_y_test, 'go', ms=15, label='Real')
plt.plot(predict_knn, '+', color='black', ms=10, markeredgewidth=2, label='Predicted')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5), fontsize = 'xx-large')
plt.ylabel('FTR',fontsize=16)
plt.xlabel('Número',fontsize=16)
plt.title('KNN',fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
Support-vector Machine (SVM)
###Code
from sklearn.svm import SVC
param_grid = {'C':[0.1,1,10,100,1000],'gamma': [1,0.1,0.01,0.001,0.001], 'kernel':['rbf']}
grid = GridSearchCV(SVC(),param_grid,refit=True,cv=10,iid=False)
grid.fit(X_train, y_train)
predict_svm = grid.predict(X_test)
svm_all = grid.score(X_test, y_test) * 100
cross_svm_all = max(cross_val_score(grid, X, y, cv=10)) * 100
print(confusion_matrix(y_test,predict_svm))
print('\nScore SVM: %.2f' %svm_all + "%")
print("\nScore SVM Cross Validation: %.2f" %cross_svm_all + "%")
plt.figure(figsize=(25, 5))
plt.plot(df_y_test, 'go', ms=15, label='Real')
plt.plot(predict_svm, '+', color='black', ms=10, markeredgewidth=2, label='Predicted')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5), fontsize = 'xx-large')
plt.ylabel('FTR',fontsize=16)
plt.xlabel('Número',fontsize=16)
plt.title('SVM',fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
Multi-layer Perceptron Classifier
###Code
from sklearn.neural_network import MLPClassifier
mlp_classifier = MLPClassifier(hidden_layer_sizes=(2,8), activation='logistic', solver='adam', max_iter=1000)
mlp_classifier.fit(X_train,y_train)
predict_mlp_classifier = mlp_classifier.predict(X_test)
mlp_all = mlp_classifier.score(X_test, y_test) * 100
cross_mlp_all = max(cross_val_score(mlp_classifier, X, y, cv=10)) * 100
#print(classification_report(y_test,predict_mlp))
print(confusion_matrix(y_test,predict_mlp_classifier))
print('\nScore MLP: %.2f' %mlp_all + "%")
print("\nScore MLP Cross Validation: %.2f" %cross_mlp_all + "%")
plt.figure(figsize=(25, 5))
plt.plot(df_y_test, 'go', ms=15, label='Real')
plt.plot(predict_mlp_classifier, '+', color='black', ms=10, markeredgewidth=2, label='Predicted')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5), fontsize = 'xx-large')
plt.ylabel('FTR',fontsize=16)
plt.xlabel('Número',fontsize=16)
plt.title('MLP',fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
Final results
###Code
print('\nRegressão logística: %.2f' %reg_log_all + "%")
print('\nÁrvore de decisão: %.2f' %dec_tree_all + "%")
print('\nFloresta aleatória: %.2f' %rand_for_all + "%")
print('\nKNN: %.2f' %knn_all + "%")
print('\nSVM: %.2f' %svm_all + "%")
print('\nMLP: %.2f' %mlp_all + "%")
print("Cross Validation")
print('\nRegressão logística: %.2f' %cross_log_all + "%")
print('\nÁrvore de decisão: %.2f' %cross_dec_tree_all + "%")
print('\nFloresta aleatória: %.2f' %cross_rand_for_all + "%")
print('\nKNN: %.2f' %cross_knn_all + "%")
print('\nSVM: %.2f' %cross_svm_all + "%")
print('\nMLP: %.2f' %cross_mlp_all + "%")
###Output
Cross Validation
Regressão logística: 58.67%
Árvore de decisão: 54.67%
Floresta aleatória: 54.67%
KNN: 49.33%
SVM: 60.00%
MLP: 53.33%
###Markdown
Another approach Win vs. Defeat
###Code
# 2 - Win, 0 - Defeat, 1 - Draw
df_cxf = df[df['FTR'] != 1]
df_cxf.head()
sns.countplot(x='FTR', data=df_cxf)
# Features
X = df_cxf.drop('FTR',axis=1)
# Prediction target
y = df_cxf['FTR']
# Train/test split
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3, random_state=101)
# Logistic Regression
logistic_regression = LogisticRegression(solver='lbfgs', multi_class='auto')
logistic_regression.fit(X_train, y_train)
predict_logistic_regression = logistic_regression.predict(X_test)
reg_log_cxf = logistic_regression.score(X_test, y_test) * 100
cross_log_cxf = max(cross_val_score(logistic_regression, X, y, cv=10)) * 100
# Decision Tree
decision_tree = DecisionTreeClassifier()
decision_tree.fit(X_train, y_train)
predict_decision_tree = decision_tree.predict(X_test)
dec_tree_cxf = decision_tree.score(X_test, y_test) * 100
cross_dec_tree_cxf = max(cross_val_score(decision_tree, X, y, cv=10)) * 100
# Random Forest
error_rate = []
for i in range(1,200):
random_forest = RandomForestClassifier(n_estimators=i)
random_forest.fit(X_train, y_train)
predict_random_forest = random_forest.predict(X_test)
error_rate.append(np.mean(predict_random_forest!=y_test))
random_forest = RandomForestClassifier(n_estimators=error_rate.index(min(error_rate))+1)  # +1: the loop above starts at n_estimators=1
random_forest.fit(X_train, y_train)
predict_random_forest = random_forest.predict(X_test)
rand_for_cxf = random_forest.score(X_test, y_test) * 100
cross_rand_for_cxf = max(cross_val_score(random_forest, X, y, cv=10)) * 100
# KNN
error_rate = []
for i in range(1,40):
knn = KNeighborsClassifier(n_neighbors=i)
knn.fit(X_train, y_train)
predict_knn = knn.predict(X_test)
error_rate.append(np.mean(predict_knn!=y_test))
knn = KNeighborsClassifier(n_neighbors=error_rate.index(min(error_rate))+1)  # +1: the loop above starts at n_neighbors=1
knn.fit(X_train, y_train)
predict_knn = knn.predict(X_test)
knn_cxf = knn.score(X_test, y_test) * 100
cross_knn_cxf = max(cross_val_score(knn, X, y, cv=10)) * 100
# SVM
param_grid = {'C':[0.1,1,10,100,1000],'gamma': [1,0.1,0.01,0.001,0.001], 'kernel':['rbf']}
grid = GridSearchCV(SVC(),param_grid,refit=True,cv=10,iid=False)
grid.fit(X_train, y_train)
predict_svm = grid.predict(X_test)
svm_cxf = grid.score(X_test, y_test) * 100
cross_svm_cxf = max(cross_val_score(grid, X, y, cv=10)) * 100
# MLP
mlp_classifier = MLPClassifier(hidden_layer_sizes=(2,8), activation='logistic', solver='adam', max_iter=1000)
mlp_classifier.fit(X_train,y_train)
predict_mlp_classifier = mlp_classifier.predict(X_test)
mlp_cxf = mlp_classifier.score(X_test, y_test) * 100
cross_mlp_cxf = max(cross_val_score(mlp_classifier, X, y, cv=10)) * 100
print('\nRegressão logística: %.2f' %reg_log_cxf + "%")
print('\nÁrvore de decisão: %.2f' %dec_tree_cxf + "%")
print('\nFloresta aleatória: %.2f' %rand_for_cxf + "%")
print('\nKNN: %.2f' %knn_cxf + "%")
print('\nSVM: %.2f' %svm_cxf + "%")
print('\nMLP: %.2f' %mlp_cxf + "%")
print("Cross Validation")
print('\nRegressão logística: %.2f' %cross_log_cxf + "%")
print('\nÁrvore de decisão: %.2f' %cross_dec_tree_cxf + "%")
print('\nFloresta aleatória: %.2f' %cross_rand_for_cxf + "%")
print('\nKNN: %.2f' %cross_knn_cxf + "%")
print("\nSVM: %.2f" %cross_svm_cxf + "%")
print('\nMLP: %.2f' %cross_mlp_cxf + "%")
###Output
Cross Validation
Regressão logística: 83.33%
Árvore de decisão: 64.81%
Floresta aleatória: 79.63%
KNN: 75.93%
SVM: 81.48%
MLP: 75.93%
###Markdown
Win vs. Draw
###Code
# 2 - Win, 0 - Defeat, 1 - Draw
df_cxe = df[df['FTR'] != 0]
df_cxe.head()
sns.countplot(x='FTR', data=df_cxe)
# Features
X = df_cxe.drop('FTR',axis=1)
# Prediction target
y = df_cxe['FTR']
# Train/test split
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3, random_state=101)
# Logistic Regression
logistic_regression = LogisticRegression(solver='lbfgs', multi_class='auto')
logistic_regression.fit(X_train, y_train)
predict_logistic_regression = logistic_regression.predict(X_test)
reg_log_cxe = logistic_regression.score(X_test, y_test) * 100
cross_log_cxe = max(cross_val_score(logistic_regression, X, y, cv=10)) * 100
# Decision Tree
decision_tree = DecisionTreeClassifier()
decision_tree.fit(X_train, y_train)
predict_decision_tree = decision_tree.predict(X_test)
dec_tree_cxe = decision_tree.score(X_test, y_test) * 100
cross_dec_tree_cxe = max(cross_val_score(decision_tree, X, y, cv=10)) * 100
# Random Forest
error_rate = []
for i in range(1,200):
random_forest = RandomForestClassifier(n_estimators=i)
random_forest.fit(X_train, y_train)
predict_random_forest = random_forest.predict(X_test)
error_rate.append(np.mean(predict_random_forest!=y_test))
random_forest = RandomForestClassifier(n_estimators=error_rate.index(min(error_rate))+1)  # +1: the loop above starts at n_estimators=1
random_forest.fit(X_train, y_train)
predict_random_forest = random_forest.predict(X_test)
rand_for_cxe = random_forest.score(X_test, y_test) * 100
cross_rand_for_cxe = max(cross_val_score(random_forest, X, y, cv=10)) * 100
# KNN
error_rate = []
for i in range(1,40):
knn = KNeighborsClassifier(n_neighbors=i)
knn.fit(X_train, y_train)
predict_knn = knn.predict(X_test)
error_rate.append(np.mean(predict_knn!=y_test))
knn = KNeighborsClassifier(n_neighbors=error_rate.index(min(error_rate))+1)  # +1: the loop above starts at n_neighbors=1
knn.fit(X_train, y_train)
predict_knn = knn.predict(X_test)
knn_cxe = knn.score(X_test, y_test) * 100
cross_knn_cxe = max(cross_val_score(knn, X, y, cv=10)) * 100
# SVM
param_grid = {'C':[0.1,1,10,100,1000],'gamma': [1,0.1,0.01,0.001,0.001], 'kernel':['rbf']}
grid = GridSearchCV(SVC(),param_grid,refit=True,cv=10,iid=False)
grid.fit(X_train, y_train)
predict_svm = grid.predict(X_test)
svm_cxe = grid.score(X_test, y_test) * 100
cross_svm_cxe = max(cross_val_score(grid, X, y, cv=10)) * 100
# MLP
mlp_classifier = MLPClassifier(hidden_layer_sizes=(2,8), activation='logistic', solver='adam', max_iter=1000)
mlp_classifier.fit(X_train,y_train)
predict_mlp_classifier = mlp_classifier.predict(X_test)
mlp_cxe = mlp_classifier.score(X_test, y_test) * 100
cross_mlp_cxe = max(cross_val_score(mlp_classifier, X, y, cv=10)) * 100
print('\nRegressão logística: %.2f' %reg_log_cxe + "%")
print('\nÁrvore de decisão: %.2f' %dec_tree_cxe + "%")
print('\nFloresta aleatória: %.2f' %rand_for_cxe + "%")
print('\nKNN: %.2f' %knn_cxe + "%")
print('\nSVM: %.2f' %svm_cxe + "%")
print('\nMLP: %.2f' %mlp_cxe + "%")
print("Cross Validation")
print('\nRegressão logística: %.2f' %cross_log_cxe + "%")
print('\nÁrvore de decisão: %.2f' %cross_dec_tree_cxe + "%")
print('\nFloresta aleatória: %.2f' %cross_rand_for_cxe + "%")
print('\nKNN: %.2f' %cross_knn_cxe + "%")
print('\nSVM: %.2f' %cross_svm_cxe + "%")
print('\nMLP: %.2f' %cross_mlp_cxe + "%")
###Output
Cross Validation
Regressão logística: 66.67%
Árvore de decisão: 60.42%
Floresta aleatória: 68.75%
KNN: 68.75%
SVM: 68.75%
MLP: 57.14%
###Markdown
Defeat vs. Draw
###Code
# 2 - Win, 0 - Defeat, 1 - Draw
df_fxe = df[df['FTR'] != 2]
df_fxe.head()
sns.countplot(x='FTR', data=df_fxe)
# Features
X = df_fxe.drop('FTR',axis=1)
# Prediction target
y = df_fxe['FTR']
# Train/test split
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3, random_state=101)
# Logistic Regression
logistic_regression = LogisticRegression(solver='lbfgs', multi_class='auto')
logistic_regression.fit(X_train, y_train)
predict_logistic_regression = logistic_regression.predict(X_test)
reg_log_fxe = logistic_regression.score(X_test, y_test) * 100
cross_log_fxe = max(cross_val_score(logistic_regression, X, y, cv=10)) * 100
# Decision Tree
decision_tree = DecisionTreeClassifier()
decision_tree.fit(X_train, y_train)
predict_decision_tree = decision_tree.predict(X_test)
dec_tree_fxe = decision_tree.score(X_test, y_test) * 100
cross_dec_tree_fxe = max(cross_val_score(decision_tree, X, y, cv=10)) * 100
# Random Forest
error_rate = []
for i in range(1,200):
random_forest = RandomForestClassifier(n_estimators=i)
random_forest.fit(X_train, y_train)
predict_random_forest = random_forest.predict(X_test)
error_rate.append(np.mean(predict_random_forest!=y_test))
random_forest = RandomForestClassifier(n_estimators=error_rate.index(min(error_rate))+1)  # +1: the loop above starts at n_estimators=1
random_forest.fit(X_train, y_train)
predict_random_forest = random_forest.predict(X_test)
rand_for_fxe = random_forest.score(X_test, y_test) * 100
cross_rand_for_fxe = max(cross_val_score(random_forest, X, y, cv=10)) * 100
# KNN
error_rate = []
for i in range(1,40):
knn = KNeighborsClassifier(n_neighbors=i)
knn.fit(X_train, y_train)
predict_knn = knn.predict(X_test)
error_rate.append(np.mean(predict_knn!=y_test))
knn = KNeighborsClassifier(n_neighbors=error_rate.index(min(error_rate))+1)  # +1: the loop above starts at n_neighbors=1
knn.fit(X_train, y_train)
predict_knn = knn.predict(X_test)
knn_fxe = knn.score(X_test, y_test) * 100
cross_knn_fxe = max(cross_val_score(knn, X, y, cv=10)) * 100
# SVM
param_grid = {'C':[0.1,1,10,100,1000],'gamma': [1,0.1,0.01,0.001,0.001], 'kernel':['rbf']}
grid = GridSearchCV(SVC(),param_grid,refit=True,cv=10,iid=False)
grid.fit(X_train, y_train)
predict_svm = grid.predict(X_test)
svm_fxe = grid.score(X_test, y_test) * 100
cross_svm_fxe = max(cross_val_score(grid, X, y, cv=10)) * 100
# MLP
mlp_classifier = MLPClassifier(hidden_layer_sizes=(2,8), activation='logistic', solver='adam', max_iter=1000)
mlp_classifier.fit(X_train,y_train)
predict_mlp_classifier = mlp_classifier.predict(X_test)
mlp_fxe = mlp_classifier.score(X_test, y_test) * 100
cross_mlp_fxe = max(cross_val_score(mlp_classifier, X, y, cv=10)) * 100
print('\nRegressão logística: %.2f' %reg_log_fxe + "%")
print('\nÁrvore de decisão: %.2f' %dec_tree_fxe + "%")
print('\nFloresta aleatória: %.2f' %rand_for_fxe + "%")
print('\nKNN: %.2f' %knn_fxe + "%")
print('\nSVM: %.2f' %svm_fxe + "%")
print('\nMLP: %.2f' %mlp_fxe + "%")
print("Cross Validation")
print('\nRegressão logística: %.2f' %cross_log_fxe + "%")
print('\nÁrvore de decisão: %.2f' %cross_dec_tree_fxe + "%")
print('\nFloresta aleatória: %.2f' %cross_rand_for_fxe + "%")
print('\nKNN: %.2f' %cross_knn_fxe + "%")
print('\nSVM: %.2f' %cross_svm_fxe + "%")
print('\nMLP: %.2f' %cross_mlp_fxe + "%")
###Output
Cross Validation
Regressão logística: 60.42%
Árvore de decisão: 62.50%
Floresta aleatória: 62.50%
KNN: 56.25%
SVM: 56.25%
MLP: 57.14%
###Markdown
Results
###Code
print("Casa x Fora x Empate")
print('\nRegressão logística: %.2f' %reg_log_all + "%")
print('\nÁrvore de decisão: %.2f' %dec_tree_all + "%")
print('\nFloresta aleatória: %.2f' %rand_for_all + "%")
print('\nKNN: %.2f' %knn_all + "%")
print("\nSVM: %.2f" %svm_all + "%")
print('\nMLP: %.2f' %mlp_all + "%")
print("\nCross Validation")
print('\nRegressão logística: %.2f' %cross_log_all + "%")
print('\nÁrvore de decisão: %.2f' %cross_dec_tree_all + "%")
print('\nFloresta aleatória: %.2f' %cross_rand_for_all + "%")
print('\nKNN: %.2f' %cross_knn_all + "%")
print('\nSVM: %.2f' %cross_svm_all + "%")
print('\nMLP: %.2f' %cross_mlp_all + "%")
print("\n\nCasa x Fora")
print('\nRegressão logística: %.2f' %reg_log_cxf + "%")
print('\nÁrvore de decisão: %.2f' %dec_tree_cxf + "%")
print('\nFloresta aleatória: %.2f' %rand_for_cxf + "%")
print('\nKNN: %.2f' %knn_cxf + "%")
print("\nSVM: %.2f" %svm_cxf + "%")
print('\nMLP: %.2f' %mlp_cxf + "%")
print("\nCross Validation")
print('\nRegressão logística: %.2f' %cross_log_cxf + "%")
print('\nÁrvore de decisão: %.2f' %cross_dec_tree_cxf + "%")
print('\nFloresta aleatória: %.2f' %cross_rand_for_cxf + "%")
print('\nKNN: %.2f' %cross_knn_cxf + "%")
print('\nSVM: %.2f' %cross_svm_cxf + "%")
print('\nMLP: %.2f' %cross_mlp_cxf + "%")
print("\n\nCasa x Empate")
print('\nRegressão logística: %.2f' %reg_log_cxe + "%")
print('\nÁrvore de decisão: %.2f' %dec_tree_cxe + "%")
print('\nFloresta aleatória: %.2f' %rand_for_cxe + "%")
print('\nKNN: %.2f' %knn_cxe + "%")
print("\nSVM: %.2f" %svm_cxe + "%")
print('\nMLP: %.2f' %mlp_cxe + "%")
print("\nCross Validation")
print('\nRegressão logística: %.2f' %cross_log_cxe + "%")
print('\nÁrvore de decisão: %.2f' %cross_dec_tree_cxe + "%")
print('\nFloresta aleatória: %.2f' %cross_rand_for_cxe + "%")
print('\nKNN: %.2f' %cross_knn_cxe + "%")
print('\nSVM: %.2f' %cross_svm_cxe + "%")
print('\nMLP: %.2f' %cross_mlp_cxe + "%")
print("\n\nFora x Empate")
print('\nRegressão logística: %.2f' %reg_log_fxe + "%")
print('\nÁrvore de decisão: %.2f' %dec_tree_fxe + "%")
print('\nFloresta aleatória: %.2f' %rand_for_fxe + "%")
print('\nKNN: %.2f' %knn_fxe + "%")
print("\nSVM: %.2f" %svm_fxe + "%")
print('\nMLP: %.2f' %mlp_fxe + "%")
print("\nCross Validation")
print('\nRegressão logística: %.2f' %cross_log_fxe + "%")
print('\nÁrvore de decisão: %.2f' %cross_dec_tree_fxe + "%")
print('\nFloresta aleatória: %.2f' %cross_rand_for_fxe + "%")
print('\nKNN: %.2f' %cross_knn_fxe + "%")
print('\nSVM: %.2f' %cross_svm_fxe + "%")
print('\nMLP: %.2f' %cross_mlp_fxe + "%")
sns.set(font_scale=1.1)
# Logistic Regression
plt.figure(figsize=(15,5))
plt.suptitle('Regressão Logística')
plt.subplot(1, 2, 1)
plt.title("Train Test Split(70/30)")
graph = sns.barplot(x=['VxDxE','VxD','VxE','DxE'], y=[reg_log_all,reg_log_cxf,reg_log_cxe,reg_log_fxe],palette='bright')
for p in graph.patches:
graph.annotate(format(p.get_height(), '.2f') + "%", (p.get_x() + p.get_width() / 2., p.get_height()), ha = 'center', va = 'center', xytext = (0, 8), textcoords = 'offset points')
plt.xlabel("Abordagem")
plt.ylabel("Precisão")
plt.ylim([0, 100])
plt.subplot(1, 2, 2)
plt.title("Cross Validation")
graph2 = sns.barplot(x=['VxDxE','VxD','VxE','DxE'], y=[cross_log_all,cross_log_cxf,cross_log_cxe,cross_log_fxe],palette='bright')
for p in graph2.patches:
graph2.annotate(format(p.get_height(), '.2f') + "%", (p.get_x() + p.get_width() / 2., p.get_height()), ha = 'center', va = 'center', xytext = (0, 8), textcoords = 'offset points')
plt.xlabel("Abordagem")
plt.ylabel("Precisão")
plt.ylim([0, 100])
plt.savefig('img/regressao_logistica_br.png')
# Decision Tree
plt.figure(figsize=(15,5))
plt.suptitle('Árvore de Decisão')
plt.subplot(1, 2, 1)
plt.title("Train Test Split(70/30)")
graph = sns.barplot(x=['VxDxE','VxD','VxE','DxE'], y=[dec_tree_all,dec_tree_cxf,dec_tree_cxe,dec_tree_fxe],palette='bright')
for p in graph.patches:
graph.annotate(format(p.get_height(), '.2f') + "%", (p.get_x() + p.get_width() / 2., p.get_height()), ha = 'center', va = 'center', xytext = (0, 8), textcoords = 'offset points')
plt.xlabel("Abordagem")
plt.ylabel("Precisão")
plt.ylim([0, 100])
plt.subplot(1, 2, 2)
plt.title("Cross Validation")
graph2 = sns.barplot(x=['VxDxE','VxD','VxE','DxE'], y=[cross_dec_tree_all,cross_dec_tree_cxf,cross_dec_tree_cxe,cross_dec_tree_fxe],palette='bright')
for p in graph2.patches:
graph2.annotate(format(p.get_height(), '.2f') + "%", (p.get_x() + p.get_width() / 2., p.get_height()), ha = 'center', va = 'center', xytext = (0, 8), textcoords = 'offset points')
plt.xlabel("Abordagem")
plt.ylabel("Precisão")
plt.ylim([0, 100])
plt.savefig('img/arvore_decisao_br.png')
# Random Forest
plt.figure(figsize=(15,5))
plt.suptitle('Floresta Aleatória')
plt.subplot(1, 2, 1)
plt.title("Train Test Split(70/30)")
graph = sns.barplot(x=['VxDxE','VxD','VxE','DxE'], y=[rand_for_all,rand_for_cxf,rand_for_cxe,rand_for_fxe],palette='bright')
for p in graph.patches:
graph.annotate(format(p.get_height(), '.2f') + "%", (p.get_x() + p.get_width() / 2., p.get_height()), ha = 'center', va = 'center', xytext = (0, 8), textcoords = 'offset points')
plt.xlabel("Abordagem")
plt.ylabel("Precisão")
plt.ylim([0, 100])
plt.subplot(1, 2, 2)
plt.title("Cross Validation")
graph2 = sns.barplot(x=['VxDxE','VxD','VxE','DxE'], y=[cross_rand_for_all,cross_rand_for_cxf,cross_rand_for_cxe,cross_rand_for_fxe],palette='bright')
for p in graph2.patches:
graph2.annotate(format(p.get_height(), '.2f') + "%", (p.get_x() + p.get_width() / 2., p.get_height()), ha = 'center', va = 'center', xytext = (0, 8), textcoords = 'offset points')
plt.xlabel("Abordagem")
plt.ylabel("Precisão")
plt.ylim([0, 100])
plt.savefig('img/floresta_aleatoria_br.png')
# KNN
plt.figure(figsize=(15,5))
plt.suptitle('KNN')
plt.subplot(1, 2, 1)
plt.title("Train Test Split(70/30)")
graph = sns.barplot(x=['VxDxE','VxD','VxE','DxE'], y=[knn_all,knn_cxf,knn_cxe,knn_fxe],palette='bright')
for p in graph.patches:
graph.annotate(format(p.get_height(), '.2f') + "%", (p.get_x() + p.get_width() / 2., p.get_height()), ha = 'center', va = 'center', xytext = (0, 8), textcoords = 'offset points')
plt.xlabel("Abordagem")
plt.ylabel("Precisão")
plt.ylim([0, 100])
plt.subplot(1, 2, 2)
plt.title("Cross Validation")
graph2 = sns.barplot(x=['VxDxE','VxD','VxE','DxE'], y=[cross_knn_all,cross_knn_cxf,cross_knn_cxe,cross_knn_fxe],palette='bright')
for p in graph2.patches:
graph2.annotate(format(p.get_height(), '.2f') + "%", (p.get_x() + p.get_width() / 2., p.get_height()), ha = 'center', va = 'center', xytext = (0, 8), textcoords = 'offset points')
plt.xlabel("Abordagem")
plt.ylabel("Precisão")
plt.ylim([0, 100])
plt.savefig('img/knn_br.png')
# SVM
plt.figure(figsize=(15,5))
plt.suptitle('SVM')
plt.subplot(1, 2, 1)
plt.title("Train Test Split(70/30)")
graph = sns.barplot(x=['VxDxE','VxD','VxE','DxE'], y=[svm_all,svm_cxf,svm_cxe,svm_fxe],palette='bright')
for p in graph.patches:
graph.annotate(format(p.get_height(), '.2f') + "%", (p.get_x() + p.get_width() / 2., p.get_height()), ha = 'center', va = 'center', xytext = (0, 8), textcoords = 'offset points')
plt.xlabel("Abordagem")
plt.ylabel("Precisão")
plt.ylim([0, 100])
plt.subplot(1, 2, 2)
plt.title("Cross Validation")
graph2 = sns.barplot(x=['VxDxE','VxD','VxE','DxE'], y=[cross_svm_all,cross_svm_cxf,cross_svm_cxe,cross_svm_fxe],palette='bright')
for p in graph2.patches:
graph2.annotate(format(p.get_height(), '.2f') + "%", (p.get_x() + p.get_width() / 2., p.get_height()), ha = 'center', va = 'center', xytext = (0, 8), textcoords = 'offset points')
plt.xlabel("Abordagem")
plt.ylabel("Precisão")
plt.ylim([0, 100])
plt.savefig('img/svm_br.png')
# MLP
plt.figure(figsize=(15,5))
plt.suptitle('MLP')
plt.subplot(1, 2, 1)
plt.title("Train Test Split(70/30)")
graph = sns.barplot(x=['VxDxE','VxD','VxE','DxE'], y=[mlp_all,mlp_cxf,mlp_cxe,mlp_fxe],palette='bright')
for p in graph.patches:
graph.annotate(format(p.get_height(), '.2f') + "%", (p.get_x() + p.get_width() / 2., p.get_height()), ha = 'center', va = 'center', xytext = (0, 8), textcoords = 'offset points')
plt.xlabel("Abordagem")
plt.ylabel("Precisão")
plt.ylim([0, 100])
plt.subplot(1, 2, 2)
plt.title("Cross Validation")
graph2 = sns.barplot(x=['VxDxE','VxD','VxE','DxE'], y=[cross_mlp_all,cross_mlp_cxf,cross_mlp_cxe,cross_mlp_fxe],palette='bright')
for p in graph2.patches:
graph2.annotate(format(p.get_height(), '.2f') + "%", (p.get_x() + p.get_width() / 2., p.get_height()), ha = 'center', va = 'center', xytext = (0, 8), textcoords = 'offset points')
plt.xlabel("Abordagem")
plt.ylabel("Precisão")
plt.ylim([0, 100])
plt.savefig('img/mlp_br.png')
###Output
_____no_output_____ |
data-ingestion-and-preparation/grafana-grafwiz.ipynb | ###Markdown
Generating a Grafana Dashboard with grafwizThis tutorial demonstrates how to use [grafwiz](https://github.com/v3io/grafwiz), Iguazio's open-source Python library for generating a Grafana dashboard programmatically. - [Setup](#grafwiz-setup)- [Generating Data](#grafwiz-gen-data)- [Creating a DataFrame with the Generated Data](#grafwiz-df-create)- [Writing the Data to the Platform's Data Store](#grafwiz-write-to-data-store)- [Adding a Platform Data Source to Grafana](#grafwiz-add-data-source)- [Creating a Grafana Dashboard](#grafwiz-grafana-dashboard-create)- [Adding Dashboard Visualization Elements](#grafwiz-add-dashboard-visualization-elements)- [Deploying the Dashboard to Grafana](#grafwiz-grafana-dashboard-deploy) SetupInitialize and configure your environment. Installing grafwizRun the following code to ensure that the `grafwiz` Python package is installed, and then restart the Jupyter kernel.
###Code
!pip install git+https://github.com/v3io/grafwiz --upgrade
###Output
_____no_output_____
###Markdown
Creating a Grafana Service1. Ensure that you have a running platform Grafana service. You can create such a service from the platform dashboard's **Services** page.2. Copy the URL of your Grafana service from the service-name link on the **Services** dashboard page. Defining VariablesDefine variables for your environment.> **Note:** Replace the `<Grafana URL>` placeholder with the URL of your Grafana service, as copied in the previous step.
###Code
import os
grafana_url = '<Grafana URL>' # TODO: Replace <Grafana URL> with the API URL of your Grafana API service.
v3io_container = 'users'
stocks_kv_table = os.path.join(os.getenv("V3IO_USERNAME"),'stocks_kv_table')
stocks_tsdb_table = os.path.join(os.getenv("V3IO_USERNAME"),'stocks_tsdb_table')
sym = 'XYZ'
rows = 3450
###Output
_____no_output_____
###Markdown
Importing LibrariesImport required libraries.
###Code
from grafwiz import *
import v3io_frames as v3f
import pandas as pd
###Output
_____no_output_____
###Markdown
Creating a V3IO Frames ClientCreate a V3IO Frames client object.
###Code
client = v3f.Client('framesd:8081',container=v3io_container)
###Output
_____no_output_____
###Markdown
Generating DataGenerate random data to visualize on the Grafana dashboard.
###Code
import random
import datetime
import numpy as np
def generate_date(rows):
datetimes = [datetime.datetime.today() - (random.random() * datetime.timedelta(minutes=15)) for i in range(rows)]
return datetimes
time = sorted(generate_date(rows))
volume = np.random.randint(low=100, high=10000, size=rows)
price = np.cumsum([0.0001] * rows + np.random.random(rows))
###Output
_____no_output_____
###Markdown
Creating a DataFrame with the Generated DataStore the generated data in a pandas DataFrame.
###Code
stocks_df = pd.DataFrame(
{'last_updated': time,
'volume': volume,
'price': price
})
stocks_df['symbol'] = sym
stocks_df = stocks_df.sort_values('last_updated')
stocks_df
###Output
_____no_output_____
###Markdown
Define the `last_updated` column (attribute) as a DataFrame index column, which will be used to identify the ingestion times of the TSDB metric samples.
###Code
stocks_df_tsdb = stocks_df.set_index(['last_updated'])
###Output
_____no_output_____
###Markdown
Writing the Data to the Platform's Data StoreUse the V3IO Frames API to write the data from the pandas DataFrame to TSDB and NoSQL tables in the platform's persistent data store. Writing the Data to a TSDB TableWrite the data from the DataFrame to a new platform TSDB table.
###Code
client.create(backend='tsdb', table=stocks_tsdb_table, rate='1/m', if_exists=1)
client.write(backend='tsdb', table=stocks_tsdb_table, dfs=stocks_df_tsdb)
###Output
_____no_output_____
###Markdown
Writing the Data to a NoSQL TableWrite the data from the DataFrame to a new platform NoSQL table in order of rows arrival, to simulate real-time data consumption.
###Code
expr_template = "symbol='{symbol}';price='{price}';volume='{volume}';last_updated='{last_updated}'"
# Write the stock data to a NoSQL table
for idx, record in stocks_df.iterrows():
stock = {'symbol': sym, 'price': record['price'], 'volume': record['volume'], 'last_updated': record['last_updated']}
expr = expr_template.format(**stock)
client.execute('kv', stocks_kv_table, 'update', args={'key': sym, 'expression': expr})
###Output
_____no_output_____
###Markdown
Infer the schema of the NoSQL table to verify that it can be accessed and displayed on the dashboard.
###Code
# Infer the schema of the NoSQL table
client.execute(backend='kv', table=stocks_kv_table, command='infer')
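# Added sketch: read the table back through Frames to confirm the writes landed
# (the exact output shape may differ slightly between v3io-frames versions).
verify_df = client.read(backend='kv', table=stocks_kv_table)
verify_df.head()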
###Output
_____no_output_____
###Markdown
Adding a Platform Data Source to GrafanaAdd an "Iguazio" data source for the platform's custom `iguazio` Grafana data source to your Grafana service.
###Code
# Create a data source
DataSource(name='Iguazio').deploy(grafana_url, use_auth=True)
###Output
_____no_output_____
###Markdown
Creating a Grafana DashboardCreate a new Grafana dashboard that uses the platform's `iguazio` data source.
###Code
# Create grafana dashboard
dash = Dashboard("stocks", start='now-15m', dataSource='Iguazio', end='now')
###Output
_____no_output_____
###Markdown
Adding Dashboard Visualization ElementsCreate a table for the NoSQL table and graphs for each of the metrics in the TSDB table, to be used for visualizing the data on the Grafana dashboard.> **Note:** It might take a few minutes for the graphs to be updated with the data.
###Code
# Create a table and log viewer for the NoSQL table in one row
tbl = Table('Current Stocks Value', span=12).source(table=stocks_kv_table,fields=['symbol','volume', 'price', 'last_updated'],container=v3io_container)
dash.row([tbl])
# Create TSDB-metric graphs
metrics_row = [Graph(metric).series(table=stocks_tsdb_table, fields=[metric], container=v3io_container) for metric in ['price','volume']]
dash.row(metrics_row)
###Output
_____no_output_____
###Markdown
Deploying the Dashboard to GrafanaDeploy the new Grafana dashboard to your Grafana service.
###Code
# Deploy to Grafana
dash.deploy(grafana_url)
###Output
_____no_output_____ |
week-3-project-bak.ipynb | ###Markdown
Toronto Neighborhoods
###Code
from bs4 import BeautifulSoup
import requests
import numpy as np
import pandas as pd
from geopy.geocoders import Nominatim
import folium
###Output
_____no_output_____
###Markdown
will use 'https://en.wikipedia.org/wiki/List_of_neighbourhoods_in_Toronto'
###Code
url = 'https://en.wikipedia.org/wiki/List_of_neighbourhoods_in_Toronto'
result = requests.get(url)
print(url)
print(result.status_code)
print(result.headers)
# define the dataframe
df = pd.DataFrame(columns=['Hood', 'Latitude', 'Longitude'])
df.head()
###Output
_____no_output_____
###Markdown
get data + clean it
###Code
soup = BeautifulSoup(result.content, 'html.parser')
table = soup.find('table')
lis = table.find_all('li')
list_of_n = []
for li in lis:
a = li.find('a')
list_of_n.append(a.get('title').split(", ")[0].split(" (neighbourhood)")[0].split(" (Toronto)")[0] )
###Output
_____no_output_____
###Markdown
will start populating the dataframe with hood names
###Code
df['Hood'] = pd.Series(list_of_n)
print(df.shape)
df.head()
###Output
(89, 3)
###Markdown
duplicates?
###Code
df.drop_duplicates(inplace=True)
print(df.shape)
df.head()
###Output
(86, 3)
###Markdown
loop over to get coordinates and populate the df; need to drop those hoods that the geocoder does not find
###Code
to_drop_unknown = []
geolocator = Nominatim(user_agent="coursera")
for index, row in df.iterrows():
address = row['Hood'] + ', Toronto'
try:
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('The geograpical coordinate of {} are {}, {}.'.format(address, latitude, longitude))
df.loc[index, 'Latitude'] = latitude
df.loc[index, 'Longitude'] = longitude
except AttributeError:
print('Cannot do: {}, will drop index: {}'.format(address, index))
to_drop_unknown.append(index)
df.head()
clean_df = df.drop(to_drop_unknown)
clean_df.shape
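# Added note (sketch): Nominatim's usage policy allows roughly one request per second, so
# for longer neighbourhood lists it is safer to wrap the geocoder with geopy's RateLimiter.
from geopy.extra.rate_limiter import RateLimiter
rate_limited_geocode = RateLimiter(geolocator.geocode, min_delay_seconds=1)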
###Output
_____no_output_____
###Markdown
mapping time
###Code
address = 'Toronto'
try:
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('The geograpical coordinate of {} are {}, {}.'.format(address, latitude, longitude))
df.loc[index, 'Latitude'] = latitude
df.loc[index, 'Longitude'] = longitude
except AttributeError:
print('Cannot do: {}, will drop index: {}'.format(address, index))
my_map = folium.Map(location=[latitude, longitude], zoom_start=11)
# add markers to map
for lat, lng, label in zip(clean_df['Latitude'], clean_df['Longitude'], clean_df['Hood']):
label = folium.Popup(label)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill_color='#3186cc',
fill_opacity=0.7).add_to(my_map)
my_map
###Output
The geograpical coordinate of Toronto are 43.653963, -79.387207.
|
Instructions/Pymaceuticals/Pymaceuticals_starter_with_outputs_JB.ipynb | ###Markdown
Pymaceuticals Inc.--- Analysis* This is a great spot to put your final analysis
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
from scipy.stats import linregress
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
combined_data = pd.merge(study_results, mouse_metadata, on=('Mouse ID'))
# Display the data table for preview
combined_data
# Checking the number of mice.
mouse_count = len(combined_data['Mouse ID'].unique())
print(f"There are {mouse_count} mice in this study.")
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
duplicate_mice = combined_data.loc[combined_data.duplicated(subset= ['Mouse ID', 'Timepoint']), 'Mouse ID'].unique()
# Optional: Get all the data for the duplicate mouse ID.
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
cleaned_mice = combined_data[combined_data["Mouse ID"].isin(duplicate_mice) == False]
cleaned_mice.head()
# Checking the number of mice in the clean DataFrame.
cleaned_mouse_count = len(cleaned_mice['Mouse ID'].unique())
print(f"There are {cleaned_mouse_count} mice in this study.")
###Output
There are 248 mice in this study.
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method is the most straighforward, creating multiple series and putting them all together at the end.
tum_vol_stats = cleaned_mice.loc[:, ['Mouse ID', 'Drug Regimen', 'Tumor Volume (mm3)']]
mean = tum_vol_stats.groupby(["Drug Regimen"]).mean()["Tumor Volume (mm3)"]
median = tum_vol_stats.groupby(["Drug Regimen"]).median()["Tumor Volume (mm3)"]
variance = tum_vol_stats.groupby(["Drug Regimen"]).var()["Tumor Volume (mm3)"]
stddev = tum_vol_stats.groupby(["Drug Regimen"]).std()["Tumor Volume (mm3)"]
sem = tum_vol_stats.groupby(["Drug Regimen"]).sem()["Tumor Volume (mm3)"]
summary_stats = pd.DataFrame({"Mean Tumor Volume": mean, "Median Tumor Volume": median, "Tumor Volume Variance": variance, "Tumor Volume Std Dev": stddev, "Tumor Volume Std Er": sem})
summary_stats
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# This method produces everything in a single groupby function
groupby_stats = cleaned_mice.groupby('Drug Regimen')
summary_stats_2 = groupby_stats.agg(['mean', 'median', 'var', 'std', 'sem'])["Tumor Volume (mm3)"]
summary_stats_2
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pandas.
mouse_per_treatment = cleaned_mice["Drug Regimen"].value_counts()
y_axis = mouse_per_treatment.values
x_axis= mouse_per_treatment.index
mouse_per_treatment.plot(kind="bar", color='green')
plt.ylabel("Number of Mice")
plt.xlabel("Drug Regimen")
plt.show()
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pyplot.
mouse_per_treatment = cleaned_mice["Drug Regimen"].value_counts()
y_axis = mouse_per_treatment.values
x_treatment= mouse_per_treatment.index
plt.bar(x_treatment, y_axis, color='g')
plt.ylabel("Number of Mice")
plt.xlabel("Drug Regimen")
plt.xticks(rotation=90)
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pandas
male_female_dis = cleaned_mice["Sex"].value_counts()
labels = male_female_dis.index
size = male_female_dis.values
colors = ["lightblue", "orange"]
explode=[0,0]
male_female_dis.plot(kind="pie", explode=explode, labels=labels, colors=colors,
autopct="%1.1f%%", shadow=True, startangle=0)
# Generate a pie plot showing the distribution of female versus male mice using pyplot
male_female_dis = cleaned_mice["Sex"].value_counts()
labels = male_female_dis.index
size = male_female_dis.values
colors = ["lightblue", "orange"]
explode=[0,0]
plt.pie(size, explode=explode, labels=labels, colors=colors, autopct="%1.1f%%", shadow=True, startangle=0)
plt.title("Sex")
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
greatest_timepoint = cleaned_mice.groupby("Mouse ID").max().reset_index()
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
merged_df = greatest_timepoint[["Mouse ID", "Timepoint"]].merge(cleaned_mice, on=["Mouse ID", "Timepoint"])
merged_df
# Put treatments into a list for for loop (and later for plot labels)
# Create empty list to fill with tumor vol data (for plotting)
capomulin_tv = []
ramicane_tv = []
infubinol_tv = []
ceftamin_tv = []
# Calculate the IQR and quantitatively determine if there are any potential outliers.
#see regimen boxes below
# Locate the rows which contain mice on each drug and get the tumor volumes
capomulin = merged_df.loc[merged_df['Drug Regimen'] == 'Capomulin']['Tumor Volume (mm3)']
ramicane = merged_df.loc[merged_df['Drug Regimen'] == 'Ramicane']['Tumor Volume (mm3)']
infubinol = merged_df.loc[merged_df['Drug Regimen'] == 'Infubinol']['Tumor Volume (mm3)']
ceftamin = merged_df.loc[merged_df['Drug Regimen'] == 'Ceftamin']['Tumor Volume (mm3)']
# Determine outliers using upper and lower bounds
#see regimen boxes below
#capomulin
ca_quartiles = capomulin.quantile([.25,.5,.75])
ca_lowerq = ca_quartiles[0.25]
ca_upperq = ca_quartiles[0.75]
ca_iqr = ca_upperq-ca_lowerq
print(f"The lower quartile is: {ca_lowerq}")
print(f"The upper quartile is: {ca_upperq}")
print(f"The interquartile range is: {ca_iqr}")
print(f"The median is: {ca_quartiles[0.5]} ")
ca_lower_bound = ca_lowerq - (1.5*ca_iqr)
ca_upper_bound = ca_upperq + (1.5*ca_iqr)
print(f"Values below {ca_lower_bound} could be outliers.")
print(f"Values above {ca_upper_bound} could be outliers.")
#ramicane
ra_quartiles = ramicane.quantile([.25,.5,.75])
ra_lowerq = ra_quartiles[0.25]
ra_upperq = ra_quartiles[0.75]
ra_iqr = ra_upperq-ra_lowerq
print(f"The lower quartile is: {ra_lowerq}")
print(f"The upper quartile is: {ra_upperq}")
print(f"The interquartile range is: {ra_iqr}")
print(f"The median is: {ra_quartiles[0.5]} ")
ra_lower_bound = ra_lowerq - (1.5*ra_iqr)
ra_upper_bound = ra_upperq + (1.5*ra_iqr)
print(f"Values below {ra_lower_bound} could be outliers.")
print(f"Values above {ra_upper_bound} could be outliers.")
#Infubinol
in_quartiles = infubinol.quantile([.25,.5,.75])
in_lowerq = in_quartiles[0.25]
in_upperq = in_quartiles[0.75]
in_iqr = in_upperq-in_lowerq
print(f"The lower quartile is: {in_lowerq}")
print(f"The upper quartile is: {in_upperq}")
print(f"The interquartile range is: {in_iqr}")
print(f"The median is: {in_quartiles[0.5]} ")
in_lower_bound = in_lowerq - (1.5*in_iqr)
in_upper_bound = in_upperq + (1.5*in_iqr)
print(f"Values below {in_lower_bound} could be outliers.")
print(f"Values above {in_upper_bound} could be outliers.")
#Ceftamin
ce_quartiles = ceftamin.quantile([.25,.5,.75])
ce_lowerq = ce_quartiles[0.25]
ce_upperq = ce_quartiles[0.75]
ce_iqr = ce_upperq-ce_lowerq
print(f"The lower quartile is: {ce_lowerq}")
print(f"The upper quartile is: {ce_upperq}")
print(f"The interquartile range is: {ce_iqr}")
print(f"The median is: {ce_quartiles[0.5]} ")
ce_lower_bound = ce_lowerq - (1.5*ce_iqr)
ce_upper_bound = ce_upperq + (1.5*ce_iqr)
print(f"Values below {ce_lower_bound} could be outliers.")
print(f"Values above {ce_upper_bound} could be outliers.")
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
dark_out = dict(markerfacecolor='red', markersize=10)
plt.boxplot([capomulin,ramicane, infubinol, ceftamin], labels=["Capomulin","Ramicane","Infubinol","Ceftamin"],flierprops= dark_out)
plt.title("Final Tumor Volumes Across Four Regimens")
plt.ylabel("Tumor Volume (mm3)")
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
capomulin_time = cleaned_mice.loc[cleaned_mice["Drug Regimen"] == "Capomulin"]
cap_mouse = cleaned_mice.loc[cleaned_mice["Mouse ID"] == "l509"]
plt.plot(cap_mouse["Timepoint"], cap_mouse["Tumor Volume (mm3)"])
plt.xlabel("Timepoint(days)")
plt.ylabel("Tumor Volume (mm3)")
plt.title("Capomulin treatment of mouse l509")
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
capomulin_weight = cleaned_mice.loc[cleaned_mice["Drug Regimen"] == "Capomulin"]
cap_mouse_avg = capomulin_weight.groupby(["Mouse ID"]).mean()
plt.scatter(cap_mouse_avg["Weight (g)"], cap_mouse_avg["Tumor Volume (mm3)"])
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
(slope, intercept, rvalue, pvalue, stderr) = linregress(cap_mouse_avg["Weight (g)"], cap_mouse_avg["Tumor Volume (mm3)"])
regress_values = cap_mouse_avg["Weight (g)"] * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
print(f"The correlation between mouse weight and the average tumor volume is {round(rvalue,2)}")
plt.scatter(cap_mouse_avg["Weight (g)"], cap_mouse_avg["Tumor Volume (mm3)"])
plt.plot(cap_mouse_avg["Weight (g)"],regress_values,"r-")
plt.annotate(line_eq,(6,10),fontsize=15,color="red")
plt.ylabel("Average Tumor Volume (mm3)")
plt.xlabel("Weight (g)")
plt.show()
#Observations
#1) There are more male mice in this experiment.
#2) Mice in the Capomulin treatment group survived longer throughout the study compared to other treatments
#3) Tumor volumes in mice treated with Capomulin were smaller in comparison to mice treated with Ceftamin
###Output
_____no_output_____ |
Multiclass Classification Of Flower Species.ipynb | ###Markdown
Iris Flowers Classification Project The attributes for this dataset1. Sepal length in centimeters2. Sepal width in centimeters3. Petal length in centimeters4. Petal width in centimeters5. Class 1. Import Classes and Functions
###Code
import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from keras.utils import np_utils
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.preprocessing import LabelEncoder
from sklearn.pipeline import Pipeline
###Output
_____no_output_____
###Markdown
2. Initialize Random Number Generator
###Code
# fix random seed for reproducibility
seed = 7
np.random.seed(seed)
###Output
_____no_output_____
###Markdown
3. Load the Dataset
###Code
# load dataset
dataframe = pd.read_csv('iris.csv',header=None)
dataset = dataframe.values
X = dataset[:,0:4].astype(float)
Y = dataset[:,4]
###Output
_____no_output_____
###Markdown
4. Encode the output variable* The three class values **Iris-setosa,Iris-versicolor and Iris-virginica**.* First encoding the strings consistently to integers using the scikit-learn class **LabelEncoder**.* Then convert the vector of integers to a one hot encoding using the keras function **to_categorical()**.
###Code
# encode class values as integers
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
print(encoded_Y)
# convert integers to dummy variables (one hot encoded)
dummy_y = np_utils.to_categorical(encoded_Y)
print(dummy_y)
###Output
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2]
[[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[1. 0. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]
[0. 0. 1.]]
###Markdown
5. Define the Neural Network Model* 4 inputs -> [4 hidden nodes] -> 3 outputs* we use a sigmoid activation function in the output layer, to ensure the output values are in the range of 0 and 1.* we use the logarithmic loss function, which is called *categorical_crossentropy* in Keras
###Code
# define the model
def baseline_model():
# create model
model = Sequential()
model.add(Dense(4,input_dim=4,activation='relu'))
model.add(Dense(3,activation='sigmoid'))
# compile model
model.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
return model
estimator = KerasClassifier(build_fn=baseline_model,epochs=200,batch_size=5,verbose=0)
###Output
_____no_output_____
###Markdown
6. Evaluate the Model with k-fold Cross Validation
###Code
kfold = KFold(n_splits=10,shuffle=True,random_state=seed)
results = cross_val_score(estimator,X,dummy_y,cv=kfold)
print('Accuracy: %.2f%% (%.2f%%)' %(results.mean()*100,results.std()*100))
###Output
WARNING:tensorflow:5 out of the last 13 calls to <function Model.make_test_function.<locals>.test_function at 0x7fa0843d63b0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
WARNING:tensorflow:5 out of the last 13 calls to <function Model.make_test_function.<locals>.test_function at 0x7fa0842a9320> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
WARNING:tensorflow:5 out of the last 13 calls to <function Model.make_test_function.<locals>.test_function at 0x7fa0841a0e60> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
WARNING:tensorflow:5 out of the last 13 calls to <function Model.make_test_function.<locals>.test_function at 0x7fa07c7bccb0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
WARNING:tensorflow:5 out of the last 13 calls to <function Model.make_test_function.<locals>.test_function at 0x7fa07c686a70> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
WARNING:tensorflow:5 out of the last 13 calls to <function Model.make_test_function.<locals>.test_function at 0x7fa07c551830> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
Accuracy: 94.00% (9.64%)
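###Markdown
As a possible next step (this cell is an added sketch, not part of the original lab), the cross-validated estimator can be refit on the full dataset and its predictions mapped back to species names with the `LabelEncoder` defined above:
###Code
# refit on all data and decode a few predictions back to the original string labels
estimator.fit(X, dummy_y)
predictions = estimator.predict(X[:5])
print(encoder.inverse_transform(predictions.astype(int)))
###Output
_____no_output_____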
|
talleres_inov_docente/1-03-representacion_datos_aa.ipynb | ###Markdown
Data representation and visualization Machine learning is about fitting models to data; for this reason, we will start by discussing how data can be represented so that it is accessible to the computer. Alongside this, we will build on the matplotlib examples from the previous section and use them to visualize data. Data in scikit-learn Data in scikit-learn, with very few exceptions, is usually stored in **2-dimensional arrays** of shape `[n_samples, n_features]`. Many algorithms also accept ``scipy.sparse`` matrices of the same shape. - **n_samples:** the number of samples. Each sample is an item to process (for example, to classify). A sample can be a document, an image, a sound, a video, an astronomical object, a row of a database or CSV file, or anything that can be described by a fixed set of quantitative traits.- **n_features:** the number of descriptive features used to describe each item quantitatively. Features are generally real-valued, although they may be categorical or discrete.The number of features must be fixed in advance. However, it can be extremely large (for example, millions of features), with most of them being zero for a given sample. In that case, `scipy.sparse` matrices are a good idea, since they are much more memory-efficient.As we already discussed in the previous section, we represent samples (points or instances) as rows in the data array, and we store the corresponding features, the "dimensions", as columns. A simple example: the Iris dataset As an example of a simple dataset, we are going to take a look at the iris data stored in scikit-learn. The data consists of measurements of three different species of iris flowers: Iris Setosa Iris Versicolor Iris Virginica Quick question: **Let's assume we are interested in categorizing new samples; we want to predict whether a new flower is going to be Iris-Setosa, Iris-Versicolor, or Iris-Virginica. Based on what we discussed in the previous sections, how would we build this dataset?**Remember: we need a 2D array of shape `[n_samples x n_features]`.- What would `n_samples` be?- What could `n_features` be?Remember that there must be a **fixed** number of features per sample, and each feature *j* must be the same kind of quantity for every sample. Loading the Iris dataset from scikit-learn For future experiments with machine learning algorithms, we recommend bookmarking the [UCI Repository](http://archive.ics.uci.edu/ml/), which hosts many of the datasets used to benchmark machine learning algorithms. In addition, some of these datasets are already included in scikit-learn, so you can skip downloading, reading, converting and cleaning text or CSV files. The list of datasets already available in scikit-learn can be found [here](http://scikit-learn.org/stable/datasets/toy-datasets).For example, scikit-learn contains the iris dataset. The data consists of:- Features: 1. Sepal length in cm 2. Sepal width in cm 3. Petal length in cm 4. Petal width in cm- Labels to predict: 1. Iris Setosa 2. Iris Versicolour 3. Iris Virginica (Image: "Petal-sepal". 
Licensed under CC BY-SA 3.0 via Wikimedia Commons - https://commons.wikimedia.org/wiki/File:Petal-sepal.jpg/media/File:Petal-sepal.jpg) ``scikit-learn`` includes a copy of the iris CSV file along with a function that loads it into numpy arrays:
###Code
from sklearn.datasets import load_iris
iris = load_iris()
###Output
_____no_output_____
###Markdown
The dataset is a ``Bunch`` object. You can see what it contains using the ``keys()`` method:
###Code
iris.keys()
###Output
_____no_output_____
###Markdown
The features of each flower sample are stored in the ``data`` attribute of the dataset:
###Code
n_samples, n_features = iris.data.shape
print('Number of samples: %d' % n_samples)
print('Number of features: %d' % n_features)
# sepal length, sepal width, petal length and petal width of the first sample (first flower)
print(iris.data[0])
###Output
_____no_output_____
###Markdown
The information about the class of each sample is stored in the ``target`` attribute of the dataset:
###Code
print(iris.data.shape)
print(iris.target.shape)
print(iris.target)
import numpy as np
np.bincount(iris.target)
###Output
_____no_output_____
###Markdown
The numpy function `bincount` (above) lets us see that the classes are evenly distributed in this dataset (50 flowers of each species), where:- class 0: Iris-Setosa- class 1: Iris-Versicolor- class 2: Iris-Virginica The class names are stored in ``target_names``:
###Code
print(iris.target_names)
###Output
_____no_output_____
###Markdown
This data is four-dimensional, but we can visualize one or two of the dimensions at a time using a histogram or a scatter plot. First, we enable the *matplotlib inline mode*:
###Code
%matplotlib inline
import matplotlib.pyplot as plt
x_index = 3
colors = ['blue', 'red', 'green']
for label, color in zip(range(len(iris.target_names)), colors):
plt.hist(iris.data[iris.target==label, x_index],
label=iris.target_names[label],
color=color)
plt.xlabel(iris.feature_names[x_index])
plt.legend(loc='upper right')
plt.show()
x_index = 3
y_index = 0
colors = ['blue', 'red', 'green']
for label, color in zip(range(len(iris.target_names)), colors):
plt.scatter(iris.data[iris.target==label, x_index],
iris.data[iris.target==label, y_index],
label=iris.target_names[label],
c=color)
plt.xlabel(iris.feature_names[x_index])
plt.ylabel(iris.feature_names[y_index])
plt.legend(loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
Exercise: **Change** `x_index` **and** `y_index` **in the script above and find a combination of the two parameters that best separates the three classes.** This exercise is a preview of what is called **dimensionality reduction**, which we will see later. Scatterplot matrices Instead of making the plots separately, a common tool used by analysts is the **scatterplot matrix**.Scatterplot matrices show the scatter plots between all pairs of features in the dataset, as well as histograms showing the distribution of each feature.
###Code
import pandas as pd
iris_df = pd.DataFrame(iris.data, columns=iris.feature_names)
pd.plotting.scatter_matrix(iris_df, c=iris.target, figsize=(8, 8));
###Output
_____no_output_____
###Markdown
Other available datasets [Scikit-learn makes a large collection of datasets available to the community](http://scikit-learn.org/stable/datasets/dataset-loading-utilities). They come in three flavours:- **Packaged Data:** small datasets shipped with the scikit-learn distribution, which can be accessed via ``sklearn.datasets.load_*``- **Downloadable Data:** larger datasets that can be downloaded with tools scikit-learn already includes. These tools live in ``sklearn.datasets.fetch_*``- **Generated Data:** synthetic datasets generated from models seeded with random numbers. They are available in ``sklearn.datasets.make_*``You can explore the scikit-learn dataset tools using IPython's autocompletion. After importing the ``datasets`` package from ``sklearn``, type datasets.load_ or datasets.fetch_ or datasets.make_ to see a list of the available functions
###Code
from sklearn import datasets
###Output
_____no_output_____
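###Markdown
For instance, one of the ``make_*`` generators (this cell is an added illustration, not part of the original tutorial); it simply shows the `[n_samples, n_features]` convention on synthetic data:
###Code
# generate a small synthetic classification problem
X_synth, y_synth = datasets.make_classification(n_samples=100, n_features=4,
                                                n_informative=2, n_redundant=0,
                                                random_state=0)
print(X_synth.shape)  # (100, 4) -> [n_samples, n_features]
print(y_synth.shape)  # (100,)   -> one label per sample
###Output
_____no_output_____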
###Markdown
Warning: many of these datasets are quite large and can take a long time to download.If you start a download from an IPython notebook and then want to stop it, you can use the "kernel interrupt" option, accessible from the menu or with ``Ctrl-m i``.You can press ``Ctrl-m h`` for a list of all the ``ipython`` shortcuts. Loading the digits data Now we are going to look at another dataset, where we can better study how to represent data. We can explore the data as follows:
###Code
from sklearn.datasets import load_digits
digits = load_digits()
digits.keys()
n_samples, n_features = digits.data.shape
print((n_samples, n_features))
print(digits.data[0])
print(digits.data[-1])
print(digits.target)
###Output
_____no_output_____
###Markdown
Here the label is directly the digit that each sample represents. The data consists of an array of length 64... but what do these values mean? A clue comes from the fact that we have two versions of the data:``data`` and ``images``. Let's take a look at both:
###Code
print(digits.data.shape)
print(digits.images.shape)
###Output
_____no_output_____
###Markdown
We can see that they are the same, via a simple *reshaping*:
###Code
import numpy as np
print(np.all(digits.images.reshape((1797, 64)) == digits.data))
###Output
_____no_output_____
###Markdown
Let's visualize the data. This is a bit more involved than the scatter plot we made earlier.
###Code
# Set up the figure
fig = plt.figure(figsize=(6, 6))  # figure size in inches
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# show some digits: each image is 8x8
for i in range(64):
ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest')
    # label the image with the target value
ax.text(0, 7, str(digits.target[i]))
###Output
_____no_output_____
###Markdown
Now we can tell what the features mean. Each feature is a real-valued quantity representing the darkness of a pixel in an 8x8 image of a handwritten digit.Even though each sample has data that is inherently two-dimensional, the data matrix flattens this 2D data into a **single vector**, contained in each **row** of the matrix. Exercise: working with a face recognition dataset: Let's stop to explore the Olivetti face recognition dataset.Download the data (about 1.4MB) and visualize the faces.You can copy the code used to visualize the digits, modifying it as needed.
###Code
from sklearn.datasets import fetch_olivetti_faces
# download the faces dataset
# Use the script above to plot the faces
# Hint: plt.cm.bone is a nice colormap for this dataset
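# One possible solution (added sketch, not part of the original exercise stub):
faces = fetch_olivetti_faces()
fig = plt.figure(figsize=(6, 6))
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
for i in range(64):
    ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
    ax.imshow(faces.images[i], cmap=plt.cm.bone, interpolation='nearest')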
###Output
_____no_output_____ |
MiscNotebookFiles/Update_VS_TTC_Tester.ipynb | ###Markdown
Main tester
###Code
ttct = 0.
# ---Variables---
nTests = 10 #Number of intervals ex: 100, 200, 300 updates is 3 intervals
nloop = 5 #Number of times the calculations are to be redone for avg
nStep = 100. #Step number ex: 100, 200, 300 updates is 100 step number
dataArray = np.zeros(shape=(5,nTests))
for j in range(0, nTests):
rTotal = 0.
eMax = 0.
eTotal = np.zeros_like(system.planet.map.values)
lcTotal = np.zeros_like(baselineLightcurve)
for i in range(0, nloop):
Teq = system.get_teq()
T0 = np.ones_like(system.planet.map.values)*Teq
t0 = 0.
t1 = t0+system.planet.Porb*1
dt = system.planet.Porb/(nStep*(j+1))
testMaps, ttc = system.run_model_tester(T0, t0, t1, dt, verbose=False)
rTotal = rTotal + float(ttc)
ttct = ttct + float(ttc)
eTotal = eTotal + testMaps
lcTotal = lcTotal + np.absolute(system.lightcurve())
        if (np.amax(np.absolute(baselineMaps-testMaps))>eMax):
            eMax = np.amax(np.absolute(baselineMaps-testMaps))
ttcavg = rTotal/nloop
eTotalavg = eTotal/nloop
lcTotalavg = lcTotal/nloop
dataArray[0,j] = ttcavg #Time to Compute
dataArray[1,j] = (nStep*(j+1)) #Time steps
dataArray[2,j] = (np.mean(np.absolute(baselineMaps-eTotalavg))) #Mean error on heat
dataArray[3,j] = (np.amax(np.absolute(baselineMaps-eTotalavg))) #Maximum error on heat
dataArray[4,j] = (np.mean(np.absolute(baselineLightcurve-lcTotalavg))) #Mean error on lightcurve
print('Accuracy lost at ' + str((nStep*(j+1))) + ' updates:' + str(np.mean(np.absolute(baselineMaps-eTotalavg))))
print('Max accuracy lost at ' + str((nStep*(j+1))) + ' updates:' + str(eMax))
    print('Average time to compute at ' + str((nStep*(j+1)))+ ' updates: ' + str(ttcavg))
print('Accuracy lost (LC) at ' + str((nStep*(j+1))) + ' updates:' + str(dataArray[4,j]))
print('----------')
print('Total computational time: ' + str(ttct/60) + ' minutes')
print(str(eTotal))
y = dataArray[0,:]
x = dataArray[1,:]
plt.scatter(x, y)
plt.xlabel("Updates")
plt.ylabel("Time to Compute (s)")
plt.title('Time to Compute Compared to Updates')
plt.grid(True, linestyle='-.')
plt.show()
"""
y = dataArray[2,:]
x = dataArray[1,:]
plt.scatter(x, y)
plt.xlabel("Updates")
plt.ylabel("Averge Error (K)")
plt.title('Averge Error Compared to Updates')
plt.grid(True, linestyle='-.')
plt.show()
"""
x = dataArray[0,:]
y = dataArray[2,:]
plt.scatter(x, y)
plt.ylabel("Averge Error (K)")
plt.xlabel("Time to Compute (s)")
plt.title('Time to Compute Compared to Average Error')
plt.grid(True, linestyle='-.')
plt.show()
x = dataArray[0,:]
y = dataArray[3,:]
plt.scatter(x, y)
plt.ylabel("Max Error (K)")
plt.xlabel("Time to Compute (s)")
plt.title('Time to Compute Compared to Max Error')
plt.grid(True, linestyle='-.')
plt.show()
x = dataArray[0,:]
y = np.log(dataArray[4,:])
plt.scatter(x, y)
plt.ylabel("Mean LC Error")
plt.xlabel("Time to Compute (s)")
plt.title('Time to Compute Compared to Mean LC Error')
plt.grid(True, linestyle='-.')
#plt.ylim(bottom=0, top = 1e-6)
plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
plt.show()
fig = system.planet.plot_map(baselineMaps)
title = 'High Update Baseline Map'
plt.title(title)
plt.show()
fig = system.planet.plot_map((eTotalavg))
title = 'Lower Update Test Map'
plt.title(title)
plt.show()
fig = system.planet.plot_map((baselineMaps-eTotalavg))
plt.title('High-Low Difference Map')
plt.show()
system.lightcurve()
dataArray[3,:]
dataArray[3,:]
###Output
_____no_output_____ |
2__feature_extract.ipynb | ###Markdown
**Visualise the MFCC**
###Code
# Source - RAVDESS; Gender - Female; Emotion - Angry
path = RAV + "/Actor_08/03-01-05-02-01-01-08.wav"
X, sample_rate = librosa.load(path, res_type='kaiser_fast',duration=2.5,sr=22050*2,offset=0.5)
mfcc = librosa.feature.mfcc(y=X, sr=sample_rate, n_mfcc=13)
# audio wave
plt.figure(figsize=(20, 15))
plt.subplot(3,1,1)
librosa.display.waveplot(X, sr=sample_rate)
plt.title('Audio sampled at 44100 Hz')
# MFCC
plt.figure(figsize=(20, 15))
plt.subplot(3,1,1)
librosa.display.specshow(mfcc, x_axis='time')
plt.ylabel('MFCC')
plt.colorbar()
ipd.Audio(path)
###Output
_____no_output_____
###Markdown
**Statistical features**
###Code
# Source - RAVDESS; Gender - Female; Emotion - Angry
path = RAV + "/Actor_08/03-01-05-02-01-01-08.wav"
X, sample_rate = librosa.load(path, res_type='kaiser_fast',duration=2.5,sr=22050*2,offset=0.5)
female = librosa.feature.mfcc(y=X, sr=sample_rate, n_mfcc=13)
female = np.mean(librosa.feature.mfcc(y=X, sr=sample_rate, n_mfcc=13), axis=0)
print(len(female))
# Source - RAVDESS; Gender - Male; Emotion - Angry
path = RAV + "/Actor_09/03-01-05-01-01-01-09.wav"
X, sample_rate = librosa.load(path, res_type='kaiser_fast',duration=2.5,sr=22050*2,offset=0.5)
male = librosa.feature.mfcc(y=X, sr=sample_rate, n_mfcc=13)
male = np.mean(librosa.feature.mfcc(y=X, sr=sample_rate, n_mfcc=13), axis=0)
print(len(male))
# audio wave
plt.figure(figsize=(20, 15))
plt.subplot(3,1,1)
plt.plot(female, label='female')
plt.plot(male, label='male')
plt.legend()
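# (Added sketch) another common "statistical feature" vector: the per-coefficient
# mean and standard deviation over time of the last loaded clip (here the male sample)
mfcc_full = librosa.feature.mfcc(y=X, sr=sample_rate, n_mfcc=13)
mfcc_stats = np.concatenate([mfcc_full.mean(axis=1), mfcc_full.std(axis=1)])
print(mfcc_stats.shape)  # (26,) -> 13 means + 13 standard deviations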
###Output
216
216
|
examples/Processing Magic in IPython.ipynb | ###Markdown
Processing in IPython This notebook shows how to use the Processing library (based on Java) from IPython. There is also a full [Processing kernel](https://github.com/Calysto/calysto_processing) that does a Java compile (showing any errors) with additional benefits; this magic does no error checking.Requirements:* IPython/Jupyter notebook* [metakernel](https://github.com/Calysto/metakernel)* Internet connectionFirst you need to install metakernel:
###Code
! pip install metakernel --user
###Output
_____no_output_____
###Markdown
Next, you should enable metakernel magics for IPython:
###Code
from metakernel import register_ipython_magics
register_ipython_magics()
###Output
_____no_output_____
###Markdown
Now, you are ready to embed Processing sketches in your notebook. Try moving your mouse over the sketch:
###Code
%%processing
void draw() {
background(128);
ellipse(mouseX, mouseY, 10, 10);
}
###Output
_____no_output_____
###Markdown
This example from https://processing.org/examples/clock.html :
###Code
%%processing
int cx, cy;
float secondsRadius;
float minutesRadius;
float hoursRadius;
float clockDiameter;
void setup() {
size(640, 360);
stroke(255);
int radius = min(width, height) / 2;
secondsRadius = radius * 0.72;
minutesRadius = radius * 0.60;
hoursRadius = radius * 0.50;
clockDiameter = radius * 1.8;
cx = width / 2;
cy = height / 2;
}
void draw() {
background(0);
// Draw the clock background
fill(80);
noStroke();
ellipse(cx, cy, clockDiameter, clockDiameter);
// Angles for sin() and cos() start at 3 o'clock;
// subtract HALF_PI to make them start at the top
float s = map(second(), 0, 60, 0, TWO_PI) - HALF_PI;
float m = map(minute() + norm(second(), 0, 60), 0, 60, 0, TWO_PI) - HALF_PI;
float h = map(hour() + norm(minute(), 0, 60), 0, 24, 0, TWO_PI * 2) - HALF_PI;
// Draw the hands of the clock
stroke(255);
strokeWeight(1);
line(cx, cy, cx + cos(s) * secondsRadius, cy + sin(s) * secondsRadius);
strokeWeight(2);
line(cx, cy, cx + cos(m) * minutesRadius, cy + sin(m) * minutesRadius);
strokeWeight(4);
line(cx, cy, cx + cos(h) * hoursRadius, cy + sin(h) * hoursRadius);
// Draw the minute ticks
strokeWeight(2);
beginShape(POINTS);
for (int a = 0; a < 360; a+=6) {
float angle = radians(a);
float x = cx + cos(angle) * secondsRadius;
float y = cy + sin(angle) * secondsRadius;
vertex(x, y);
}
endShape();
}
###Output
_____no_output_____ |
docs/examples/superoperator_tools.ipynb | ###Markdown
Superoperator toolsIn this notebook we explore the submodules of `operator_tools` that enable easy manipulation of the various quantum channel representations.To summarize the functionality:- vectorization and conversions between different representations of quantum channels- apply quantum operations- compose quantum operations- validate that quantum channels are physical- project unphysical channels to physical channels Brief motivation and introduction Perfect gates in **reversible classical computation** are described by permutation matrices, e.g. the [Toffoli gate](https://en.wikipedia.org/wiki/Toffoli_gate), while the input states are vectors. A noisy classical gate could be modeled as a perfect gate followed by a noise channel, e.g. [binary symmetric channel](https://en.wikipedia.org/wiki/Binary_symmetric_channel), on all the bits in the state vector.Perfect gates in **quantum computation** are described by unitary matrices and states are described by complex vectors, e.g.$$|\psi\rangle = U |\psi_0\rangle$$Modeling **noisy quantum computation** often makes use of [mixed states](https://en.wikipedia.org/wiki/Density_matrix) and quantum operations or quantum noise channels.Interestingly there are a number of ways to represent quantum noise channels, and depending on your task some can be more convenient than others. The simplest case to illustrate this point is to consider a mixed initial state $\rho$ undergoing unitary evolution$$\rho' = U \rho U^\dagger$$The fact that the unitary has to act on both sides of the initial state means it is a [*superoperator*](https://en.wikipedia.org/wiki/Superoperator), that is an object that can act on operators like the state matrix. It turns out using a special matrix multiplication identity we can write this as$$|\rho'\rangle \rangle = \mathcal U |\rho\rangle\rangle$$where $\mathcal U = U^*\otimes U$ and $|\rho\rangle\rangle = {\rm vec}(\rho)$. The nice thing about this is it looks like the pure state case. This is because the operator (the state) has become a vector and the superoperator (the left right action of $U$) has become an operator. **More information** Below we will assume that you are already an expert in these topics. If you are unfamiliar with these topics we recommend the following references- chapter 8 of [Mike_N_Ike] which is on *Quantum noise and quantum operations*. - chapter 3 of John Preskill's lecture notes [Physics 219/Computer Science 219](http://www.theory.caltech.edu/people/preskill/ph219/chap3_15.pdf)- the [file](../superoperator_representations.rst) `/docs/superoperator_representations.md` - for an intuitive but advanced treatment see [GRAPTN]| [Mike_N_Ike] *Quantum Computation and Quantum Information*. | Michael A. Nielsen & Isaac L. Chuang. | Cambridge: Cambridge University Press (2000). | [GRAPTN] *Tensor networks and graphical calculus for open quantum systems*. | Christopher Wood et al. | Quant. Inf. Comp. 15, 0579-0811 (2015). | https://arxiv.org/abs/1111.6950 Conversion between different descriptions of quantum channelsWe intentionally chose not to make quantum channels python objects with methods that would automatically transform between representations. The functions to convert between different representations are called things like `kraus2chi`, `kraus2choi`, `pauli_liouville2choi` etc.This assumes the user does not do silly things like input a Choi matrix to a function `chi2choi`.
###Code
import numpy as np
from pyquil.gate_matrices import I, X, Y, Z, H, CNOT
###Output
_____no_output_____
###Markdown
Define some channels
###Code
def amplitude_damping_kraus(p):
Ad0 = np.asarray([[1, 0], [0, np.sqrt(1 - p)]])
Ad1 = np.asarray([[0, np.sqrt(p)], [0, 0]])
return [Ad0, Ad1]
def bit_flip_kraus(p):
M0 = np.sqrt(1 - p) * I
M1 = np.sqrt(p) * X
return [M0, M1]
###Output
_____no_output_____
###Markdown
Define some states
###Code
one_state = np.asarray([[0,0],[0,1]])
zero_state = np.asarray([[1,0],[0,0]])
rho_mixed = np.asarray([[0.9,0],[0,0.1]])
###Output
_____no_output_____
###Markdown
vec and unvec We can vectorize i.e. `vec` and unvec matrices.We chose a column stacking convention so that the matrix$$A = \begin{pmatrix} 1 & 2\\ 3 & 4\end{pmatrix}$$becomes$$|A\rangle\rangle = {\rm vec}(A) = \begin{pmatrix} 1\\ 3\\ 2\\ 4\end{pmatrix}$$Let's check that
###Code
from forest.benchmarking.operator_tools import vec, unvec
A = np.asarray([[1, 2], [3, 4]])
print(A)
print(" ")
print(vec(A))
print(" ")
print('Does the story check out? ', np.all(unvec(vec(A))==A))
###Output
[[1 2]
[3 4]]
[[1]
[3]
[2]
[4]]
Does the story check out? True
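###Markdown
A quick check of the column-stacking identity from the introduction (this cell is an added sketch, not part of the original notebook): for unitary evolution, ${\rm vec}(U\rho U^\dagger) = (U^* \otimes U)\,{\rm vec}(\rho)$.
###Code
# verify vec(U rho U^dagger) == (conj(U) kron U) vec(rho) for a Hadamard and the mixed state above
lhs = vec(H @ rho_mixed @ H.conj().T)
rhs = np.kron(H.conj(), H) @ vec(rho_mixed)
print(np.allclose(lhs, rhs))
###Output
_____no_output_____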
###Markdown
Kraus to $\chi$ matrix (aka chi or process matrix)
###Code
from forest.benchmarking.operator_tools import kraus2chi
###Output
_____no_output_____
###Markdown
Lets do a unitary gate first, say the Hadamard
###Code
print('The Kraus operator is:\n', np.round(H,3))
print('\n')
print('The Chi matrix is:\n', kraus2chi(H))
###Output
The Kraus operator is:
[[ 0.707 0.707]
[ 0.707 -0.707]]
The Chi matrix is:
[[0. +0.j 0. +0.j 0. +0.j 0. +0.j]
[0. +0.j 0.5+0.j 0. +0.j 0.5+0.j]
[0. +0.j 0. +0.j 0. +0.j 0. +0.j]
[0. +0.j 0.5+0.j 0. +0.j 0.5+0.j]]
###Markdown
Now consider the Amplitude damping channel
###Code
AD_kraus = amplitude_damping_kraus(0.1)
print('The Kraus operators are:\n', np.round(AD_kraus,3))
print('\n')
print('The Chi matrix is:\n', np.round(kraus2chi(AD_kraus),3))
###Output
The Kraus operators are:
[[[1. 0. ]
[0. 0.949]]
[[0. 0.316]
[0. 0. ]]]
The Chi matrix is:
[[0.949+0.j 0. +0.j 0. +0.j 0.025+0.j ]
[0. +0.j 0.025+0.j 0. -0.025j 0. +0.j ]
[0. +0.j 0. +0.025j 0.025+0.j 0. +0.j ]
[0.025+0.j 0. +0.j 0. +0.j 0.001+0.j ]]
###Markdown
Kraus to Pauli Liouville aka the "Pauli Transfer Matrix"
###Code
from forest.benchmarking.operator_tools import kraus2pauli_liouville
Hpaulirep = kraus2pauli_liouville(H)
Hpaulirep
###Output
_____no_output_____
###Markdown
We can visualize this using the tools from the plotting module.
###Code
from forest.benchmarking.plotting.state_process import plot_pauli_transfer_matrix
import matplotlib.pyplot as plt
f, (ax1) = plt.subplots(1, 1, figsize=(5, 4.2))
plot_pauli_transfer_matrix(Hpaulirep,ax=ax1)
###Output
_____no_output_____
###Markdown
The above figure is a graphical representation of: (out operator) = H (in operator) HZ = H X H -Y = H Y H X = H Z H Evolving states using quantum channelsIn many superoperator representations evolution corresponds to multiplying the vec'ed state by the superoperator. E.g.
###Code
from forest.benchmarking.operator_tools import kraus2superop
zero_state_vec = vec(zero_state)
answer_vec = np.matmul(kraus2superop([H]), zero_state_vec)
print('The vec\'ed answer is', answer_vec)
print('\n')
print('The unvec\'ed answer is\n', np.real(unvec(answer_vec)))
print('\n')
print('Let\'s compare it to the normal calculation\n', H @ zero_state @ H)
###Output
The vec'ed answer is [[0.5+0.j]
[0.5+0.j]
[0.5+0.j]
[0.5+0.j]]
The unvec'ed answer is
[[0.5 0.5]
[0.5 0.5]]
Let's compare it to the normal calculation
[[0.5 0.5]
[0.5 0.5]]
###Markdown
For representations where application is this simple there are no inbuilt functions in forest benchmarking. However, applying a channel is more painful in the Choi and Kraus representations.Consider the amplitude damping channel, where we need to perform the following calculation to find the output of the channel: $\rho_{out} = A_0 \rho A_0^\dagger + A_1 \rho A_1^\dagger$.We provide helper functions to do these calculations.
###Code
from forest.benchmarking.operator_tools import apply_kraus_ops_2_state, apply_choi_matrix_2_state, kraus2choi
apply_kraus_ops_2_state(AD_kraus, one_state)
###Output
_____no_output_____
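###Markdown
Equivalently, by hand (an added sketch of the sum $\sum_i A_i \rho A_i^\dagger$ written out in plain numpy):
###Code
# apply the amplitude damping channel to |1><1| directly from its Kraus operators
sum(K @ one_state @ K.conj().T for K in AD_kraus)
###Output
_____no_output_____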
###Markdown
In the Choi representation we get the same answer:
###Code
AD_choi = kraus2choi(AD_kraus)
apply_choi_matrix_2_state(AD_choi, one_state)
###Output
_____no_output_____
###Markdown
Compose quantum channelsComposing channels is useful when describing larger circuits. In some representations e.g. in the superoperator or Liouville representation it is just matrix multiplication e.g.
###Code
from forest.benchmarking.operator_tools import superop2kraus, kraus2superop
H_super = kraus2superop(H)
H_squared_super = H_super @ H_super
print('Hadamard squared as a superoperator:\n', np.round(H_squared_super,2))
print('\n As a Kraus operator:\n', np.round(superop2kraus(H_squared_super),2))
###Output
Hadamard squared as a superoperator:
[[ 1.+0.j -0.+0.j -0.+0.j 0.+0.j]
[-0.+0.j 1.+0.j 0.+0.j -0.+0.j]
[-0.+0.j 0.+0.j 1.+0.j -0.+0.j]
[ 0.+0.j -0.+0.j -0.+0.j 1.+0.j]]
As a Kraus operator:
[[[ 1.+0.j -0.+0.j]
[ 0.+0.j 1.+0.j]]]
###Markdown
Composing channels in the Kraus representation is more difficult. Consider composing two channels $\mathcal A$ (with Kraus operators $[A_0, A_1]$) and $\mathcal B$ (with Kraus operators $[B_0, B_1]$). The composition is $$\begin{align}\mathcal B(\mathcal A(\rho)) & = \sum_i \sum_j B_j A_i \rho A_i^\dagger B_j^\dagger \end{align}$$
###Code
from forest.benchmarking.operator_tools import compose_channel_kraus, superop2kraus
BitFlip_kraus = bit_flip_kraus(0.2)
kraus2superop(compose_channel_kraus(AD_kraus, BitFlip_kraus))
###Output
_____no_output_____
###Markdown
This is the same as if we do
###Code
BitFlip_super = kraus2superop(BitFlip_kraus)
AD_super = kraus2superop(AD_kraus)
AD_super @ BitFlip_super
###Output
_____no_output_____
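###Markdown
For intuition, the composite channel's Kraus operators are just all pairwise products of the individual Kraus operators, with the second channel's operators acting on the left. The cell below is an added sketch of that construction in plain numpy (the library's `compose_channel_kraus` may differ in ordering conventions and optimizations); it should agree with the superoperator product above.
###Code
# Kraus operators of "amplitude damping after bit flip", built by hand
composed_by_hand = [A @ B for A in AD_kraus for B in BitFlip_kraus]
print(np.allclose(kraus2superop(composed_by_hand),
                  kraus2superop(compose_channel_kraus(AD_kraus, BitFlip_kraus))))
###Output
_____no_output_____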
###Markdown
We can also easily compose channels acting on independent spaces.Consider composing the same two channels as above, $\mathcal A$ and $\mathcal B$. However this time they act on different Hilbert spaces. With respect to the tensor product structure $H_2 \otimes H_1$ the Kraus operators are $[A_0\otimes I, A_1\otimes I]$ and $[I \otimes B_0, I \otimes B_1]$.In this case the order of the operations commutes $$\begin{align}\mathcal A(\mathcal B(\rho))= \mathcal B(\mathcal A(\rho)) & = \sum_i \sum_j A_i\otimes B_j \rho A_i^\dagger\otimes B_j^\dagger \end{align}$$In forest benchmarking you can specify the two channels without the Identity tensored on and it will take care of it for you:
###Code
from forest.benchmarking.operator_tools import tensor_channel_kraus
np.round(tensor_channel_kraus(AD_kraus,BitFlip_kraus),3)
###Output
_____no_output_____
###Markdown
Validate quantum channels are physicalWhen doing process tomography sometimes the estimates returned by various estimation methods can result in unphysical processes.The functions below can be used to check if the estimates are physical. As a starting point, we might want to check if a process specified by Kraus operators is valid. Unless a process is unitary you need more than one Kraus operator to be a valid quantum operation.
###Code
from forest.benchmarking.operator_tools import kraus_operators_are_valid
kraus_operators_are_valid(AD_kraus[0])
###Output
_____no_output_____
###Markdown
However a full set is valid:
###Code
kraus_operators_are_valid(AD_kraus)
###Output
_____no_output_____
###Markdown
We can also validate other properties of quantum channels such as completely positivity and trace preservation. This is done on the **Choi** representation, so you many need to convert your quantum operation to the Choi representation first.
###Code
from forest.benchmarking.operator_tools import (choi_is_unitary,
choi_is_unital,
choi_is_trace_preserving,
choi_is_completely_positive,
choi_is_cptp)
# amplitude damping is not unitary
print(choi_is_unitary(AD_choi),'\n')
# amplitude damping is not unital
print(choi_is_unital(AD_choi))
# amplitude damping is trace preserving (TP)
print(choi_is_trace_preserving(AD_choi),'\n')
# amplitude damping is completely positive (CP)
print(choi_is_completely_positive(AD_choi), '\n')
# amplitude damping is CPTP
print(choi_is_cptp(AD_choi))
###Output
True
True
True
###Markdown
Project an unphysical state to the closest physical state
###Code
from forest.benchmarking.operator_tools.project_state_matrix import project_state_matrix_to_physical
# Test the method. Example from fig 1 of maximum likelihood minimum effort
# https://doi.org/10.1103/PhysRevLett.108.070502
eigs = np.diag(np.array(list(reversed([3.0/5, 1.0/2, 7.0/20, 1.0/10, -11.0/20]))))
phys = project_state_matrix_to_physical(eigs)
np.allclose(phys, np.diag([0, 0, 1.0/5, 7.0/20, 9.0/20]))
from forest.benchmarking.plotting import hinton
rho_unphys = np.random.uniform(-1, 1, (2, 2)) \
* np.exp(1.j * np.random.uniform(-np.pi, np.pi, (2, 2)))
rho_phys = project_state_matrix_to_physical(rho_unphys)
fig, (ax1, ax2) = plt.subplots(1, 2)
hinton(rho_unphys, ax=ax1)
hinton(rho_phys, ax=ax2)
ax1.set_title('Unphysical')
ax2.set_title('Physical projection')
fig.tight_layout()
###Output
_____no_output_____
###Markdown
Project unphysical channels to physical channelsWhen doing process tomography often the estimates returned by maximum likelihood estimation or linear inversion methods can result in unphysical processes.The functions below can be used to project the unphysical estimates back to physical estimates.
###Code
from forest.benchmarking.operator_tools.project_superoperators import (proj_choi_to_completely_positive,
proj_choi_to_trace_non_increasing,
proj_choi_to_trace_preserving,
proj_choi_to_physical,
proj_choi_to_unitary)
neg_Id_choi = -kraus2choi(I)
proj_choi_to_completely_positive(neg_Id_choi)
proj_choi_to_trace_non_increasing(neg_Id_choi)
proj_choi_to_trace_preserving(neg_Id_choi)
proj_choi_to_physical(neg_Id_choi)
# closer to identity
proj_choi_to_unitary(kraus2choi(bit_flip_kraus(0.1)))
# closer to X gate
proj_choi_to_unitary(kraus2choi(bit_flip_kraus(0.9)))
###Output
_____no_output_____
###Markdown
Validate operatorsA lot of the work in validating the physicality of quantum channels comes down to validating properties of matrices:
###Code
from forest.benchmarking.operator_tools.validate_operator import (is_square_matrix,
is_identity_matrix,
is_idempotent_matrix,
is_unitary_matrix,
is_positive_semidefinite_matrix)
# a vector is not square
is_square_matrix(np.array([[1], [0]]))
# NBVAL_RAISES_EXCEPTION
# the line above is for testing purposes, do not remove.
# a tensor is not a matrix
tensor = np.ones(8).reshape(2,2,2)
print(tensor)
is_square_matrix(tensor)
is_identity_matrix(X)
projector_zero = np.array([[1, 0], [0, 0]])
is_idempotent_matrix(projector_zero)
is_unitary_matrix(AD_kraus[0])
is_positive_semidefinite_matrix(I)
###Output
_____no_output_____ |
evacuation_kanagawa.ipynb | ###Markdown
>In this study, the cumulative sheltering capacity is expressed as a cumulative-capacity curve for each census basic unit block within the analysis area, and the cumulative capacities of the basic unit blocks are compared. We compute the travel distance from each basic unit block to each wide-area evacuation site.
###Code
ku_dist=pd.DataFrame()
for i,row in ninomiya.iterrows():
    #set the centroid of the i-th basic unit block as the point cent_loc
print(i,"番目の基本区から")
cent_loc=Point(row['geometry'].centroid.x,row['geometry'].centroid.y)
orig=ox.distance.nearest_nodes(ninomiya_graph,X=cent_loc.x,Y=cent_loc.y)
print("orig:",orig)
for j,row2 in evac_fac.iterrows():
        #set the j-th facility as the point fac_loc
fac_loc=Point(row2['geometry'].x,row2['geometry'].y)
dest=ox.distance.nearest_nodes(ninomiya_graph,X=fac_loc.x,Y=fac_loc.y)
print("dest:",dest)
route = ox.shortest_path(ninomiya_graph, orig, dest, weight="travel_time")
print("")
print("")
evac_fac
# print("")
import networkx as nx
ku_dist=pd.DataFrame()
for i,row in ninomiya.iterrows():
    #set the centroid of the i-th basic unit block as the point cent_loc
print(i,"番目の基本区から")
cent_loc=Point(row['geometry'].centroid.x,row['geometry'].centroid.y)
ku_dist.loc[i,"cent_loc"]=row['KEY_CODE']
orig=ox.distance.nearest_nodes(ninomiya_graph,X=cent_loc.x,Y=cent_loc.y)
for j,row2 in evac_fac.iterrows():
fac_loc=Point(row2['geometry'].x,row2['geometry'].y)
dest=ox.distance.nearest_nodes(ninomiya_graph,X=fac_loc.x,Y=fac_loc.y)
print("cent_loc:",cent_loc.x,cent_loc.y," fac:",fac_loc.x,fac_loc.y)
route = ox.shortest_path(ninomiya_graph, orig, dest)
min_dist = nx.shortest_path_length(ninomiya_graph, orig, dest)
ku_dist.loc[i,j]=min_dist*100
print("dist:",min_dist*100, " m")
fig,ax=ox.plot_graph_route(ninomiya_graph,route,node_size=0)
#distance from the centroid of each basic unit block to each wide-area evacuation site
ku_dist
#Assuming an average walking evacuation speed of 50 m/min, sum the capacities of the wide-area evacuation sites reachable within each time limit to obtain the cumulative sheltering capacity.
thres_min=[5,10,15,20,30] #travel-time thresholds (minutes) for computing cumulative capacity
walk_speed=50
for i,row in ku_dist.iterrows():
print(i,"th 基本区")
row['cum_cap']=0
for tmin in thres_min:
        print("facilities reachable within", tmin, "minutes")
print(row[ row.astype(float)/walk_speed< tmin ] )
fac_reachable=row.astype(float)/walk_speed< tmin
print("fac:",fac_reachable)
# print("sum:",row[ row.astype(float)/walk_speed< tmin ] )
print("fac reachable:",ninomiya_evac_fac[fac_reachable])
# print("cumsum:",ninomiya_evac_fac[fac_reachable]['ninzu'].sum())
#row['cum_cap'+str(tmin)]=ninomiya_evac_fac[fac_reachable]['ninzu'].sum()
print("")
#print(ku_dist)
###Output
_____no_output_____ |
notebooks/Individuals_using_the_Internet/Individuals_using_the_Internet.ipynb | ###Markdown
Individuals using the Internet (% of population) from 1990 to 2017 The digital and information revolution has dramatically changed the way the world communicates, learns, does business and treats disease. Indeed, the new information and communications technologies (ICTs) offer vast possibilities for advancement in all fields in all countries, from the most to the least developed. Comparable statistics on access, use, quality and affordability of ICT are essential for formulating policies favorable to the growth of the sector and for monitoring and evaluating the impact of this sector on the development of each country. Although basic access data are available for many countries, in most developing countries little is known about ICT users, including their usage, and how they affect people and businesses. The Global Partnership on Measuring ICT for Development is there to help set standards, harmonize information and communications technology statistics, and build the statistical capacity of developing countries. However, despite significant improvements in developing countries, the gap remains.Hereafter, we will use Plotly library to spatially visualize the time evolution of the individuals using the Internet through the world. Import required libraries
###Code
import pandas as pd
import numpy as np
import plotly.express as px
import plotly.io as pio
from IPython.display import Javascript
Javascript(
"""require.config({
paths: {
plotly: 'https://cdn.plot.ly/plotly-latest.min'
}
});"""
)
pio.renderers.default = 'notebook_connected'
###Output
_____no_output_____
###Markdown
Data pre-processing
###Code
df = pd.read_csv('Data/Individuals_using_the_Internet.csv',
header=0,
names=['year', 'time_code', 'country_name', 'country_code', 'percentage_internet_users'],
usecols=['year', 'country_name', 'country_code', 'percentage_internet_users'],
parse_dates=True,
dtype={'percentage_internet_users': float},
na_values='..')
df.head()
###Output
<ipython-input-2-8c7ef5338f76>:6: DeprecationWarning:
`np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
###Markdown
Cleaning Our dataset extends from 1960 to 2018. The Internet began to be developed in the 1960s but only became widely popular in the 1990s, so there were few or no users between the 1960s and the 1990s.To prepare our mapping, we begin by dropping all "not a number" (NaN) values. Then, because there is no real interest in mapping years where all values are 0 - typically the years from 1960 to 1990 - we look for and exclude each year in which the sum of the percentages of Internet users across all countries is 0. This process leaves 1990 as the first year with significant values. We also choose to exclude 2018 from the dataset because a non-negligible number of values is still missing for that year.
###Code
df.dropna(inplace=True)
group = df.groupby('year')
df = group.filter(lambda x: x['percentage_internet_users'].sum() > 0)
df = df.drop(df[df['year']=='2018'].index)
df.reset_index(drop=True, inplace=True)
df
###Output
_____no_output_____
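###Markdown
A quick check (added, not part of the original notebook) that the filtering leaves the expected 1990-2017 range:
###Code
print(df['year'].min(), df['year'].max())
###Output
_____no_output_____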
###Markdown
Mapping
###Code
fig = px.choropleth(df,
locations='country_code',
color='percentage_internet_users',
hover_name='country_name',
animation_frame='year',
range_color=[0,100],
scope='world',
labels={'percentage_internet_users':'% of population<br>using Internet'},
title="<b>Individuals using the Internet from 1990 to 2017</b><br>" +
"<i>Source : International Telecommunication Union</i>",
color_continuous_scale=px.colors.sequential.deep)
# Style
fig.update_layout(
font_family='Helvetica',
font_color='grey',
font_size=12,
title_font_size=20
)
fig.show()
fig = px.choropleth(df,
locations='country_code',
color='percentage_internet_users',
hover_name='country_name',
scope='world',
labels={'percentage_internet_users':'% of population<br>using Internet'},
color_continuous_scale=px.colors.sequential.deep,
title="<b>Individuals using the Internet in 2017</b><br>" +
"<i>Source : International Telecommunication Union</i>"
)
# Style
fig.update_layout(
font_family='Helvetica',
font_color='grey',
font_size=12,
title_font_size=20,
)
fig.show()
###Output
_____no_output_____ |
PyData_Pune_2019.ipynb | ###Markdown
Haptic Learning : inferencing human features using deep networks This Python notebook explains the core concepts and the models developed for this webinar. Acknowledgement I would like to extend my gratitude to the PyData Pune team for giving me this opportunity to showcase my findings. Akshay Bahadur * Software engineer, Symantec. * ML Researcher * Software Innovator, Intel Contact * [Portfolio](https://www.akshaybahadur.com/) * [LinkedIN](https://www.linkedin.com/in/akshaybahadur21/) * [GitHub](https://github.com/akshaybahadur21) Tania's story
###Code
%%HTML
<iframe width="700" height="400" src="https://www.youtube.com/embed/Oc_QMQ4QHcw"></iframe>
###Output
_____no_output_____
###Markdown
MNIST Digit Recognition Showing content using Webcam
###Code
from keras import Sequential
from keras.callbacks import ModelCheckpoint
from keras.datasets import mnist
import numpy as np
import matplotlib.pyplot as plt
from keras.layers import Flatten, Dense, Dropout
from keras.utils import np_utils, print_summary
from keras.models import load_model
import warnings
warnings.filterwarnings('ignore')
(x_train, y_train), (x_test, y_test) = mnist.load_data()
def showData(x, label):
pixels = np.array(x, dtype='uint8')
pixels = pixels.reshape((28, 28))
plt.title('Label is {label}'.format(label=label))
plt.imshow(pixels, cmap='gray')
plt.show()
showData(x_train[250], y_train[250])
showData(x_train[24], y_train[24])
x_train_norm= x_train / 255.
x_test_norm=x_test / 255.
def preprocess_labels(y):
labels = np_utils.to_categorical(y)
return labels
y_train = preprocess_labels(y_train)
y_test = preprocess_labels(y_test)
x_train_norm = x_train_norm.reshape(x_train_norm.shape[0], 28, 28, 1)
x_test_norm = x_test_norm.reshape(x_test_norm.shape[0], 28, 28, 1)
print("number of training examples = " + str(x_train.shape[0]))
print("number of test examples = " + str(x_test.shape[0]))
print("X_train shape: " + str(x_train.shape))
print("Y_train shape: " + str(y_train.shape))
def keras_model(image_x, image_y):
num_of_classes = 10
model = Sequential()
model.add(Flatten(input_shape=(image_x, image_y, 1)))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.6))
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.6))
model.add(Dense(num_of_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
filepath = "pyData.h5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
callbacks_list = [checkpoint]
return model, callbacks_list
model, callbacks_list = keras_model(28, 28)
print_summary(model)
model, callbacks_list = keras_model(28, 28)
model.fit(x_train_norm, y_train, validation_data=(x_test_norm, y_test), epochs=2, batch_size=64,
callbacks=callbacks_list)
# Computer vision part
import cv2
loaded_model=load_model('pyData.h5')
cap = cv2.VideoCapture(0)
while (cap.isOpened()):
ret, img = cap.read()
img, contours, thresh = get_img_contour_thresh(img)
if len(contours) > 0:
contour = max(contours, key=cv2.contourArea)
if cv2.contourArea(contour) > 2500:
x, y, w, h = cv2.boundingRect(contour)
newImage = thresh[y:y + h, x:x + w]
            newImage = cv2.resize(newImage, (28, 28))
            newImage = np.array(newImage, dtype=np.float32) / 255.
            # reshape to a batch of one 28x28x1 image, matching the model's training input
            newImage = newImage.reshape(1, 28, 28, 1)
            ans = np.argmax(loaded_model.predict(newImage), axis=1)[0]
x, y, w, h = 0, 0, 300, 300
cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.putText(img, "Prediction : " + str(ans), (10, 320),
cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
cv2.imshow("Frame", img)
cv2.imshow("Contours", thresh)
k = cv2.waitKey(10)
if k == 27:
break
def get_img_contour_thresh(img):
x, y, w, h = 0, 0, 300, 300
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (35, 35), 0)
ret, thresh1 = cv2.threshold(blur, 70, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
thresh1 = thresh1[y:y + h, x:x + w]
contours, hierarchy = cv2.findContours(thresh1, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2:]
return img, contours, thresh1
%%HTML
<iframe width="700" height="400" src="https://www.youtube.com/embed/MRNODXrYK3Q"></iframe>
###Output
_____no_output_____
###Markdown
Quick, Draw Feeding data by writing on screen For the initial steps, you can look here : https://www.akshaybahadur.com/post/quick-draw
###Code
from collections import deque
cap = cv2.VideoCapture(0)
Lower_blue = np.array([110, 50, 50])
Upper_blue = np.array([130, 255, 255])
pts = deque(maxlen=512)
blackboard = np.zeros((480, 640, 3), dtype=np.uint8)
digit = np.zeros((200, 200, 3), dtype=np.uint8)
pred_class = 0
model = load_model('QuickDraw.h5')
while (cap.isOpened()):
ret, img = cap.read()
img = cv2.flip(img, 1)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
kernel = np.ones((5, 5), np.uint8)
    mask = cv2.inRange(hsv, Lower_blue, Upper_blue)
mask = cv2.erode(mask, kernel, iterations=2)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.dilate(mask, kernel, iterations=1)
res = cv2.bitwise_and(img, img, mask=mask)
cnts, heir = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2:]
center = None
if len(cnts) >= 1:
cnt = max(cnts, key=cv2.contourArea)
if cv2.contourArea(cnt) > 200:
((x, y), radius) = cv2.minEnclosingCircle(cnt)
cv2.circle(img, (int(x), int(y)), int(radius), (0, 255, 255), 2)
cv2.circle(img, center, 5, (0, 0, 255), -1)
M = cv2.moments(cnt)
center = (int(M['m10'] / M['m00']), int(M['m01'] / M['m00']))
pts.appendleft(center)
for i in range(1, len(pts)):
if pts[i - 1] is None or pts[i] is None:
continue
cv2.line(blackboard, pts[i - 1], pts[i], (255, 255, 255), 7)
cv2.line(img, pts[i - 1], pts[i], (0, 0, 255), 2)
elif len(cnts) == 0:
        if len(pts) != 0:
blackboard_gray = cv2.cvtColor(blackboard, cv2.COLOR_BGR2GRAY)
blur1 = cv2.medianBlur(blackboard_gray, 15)
blur1 = cv2.GaussianBlur(blur1, (5, 5), 0)
thresh1 = cv2.threshold(blur1, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
blackboard_cnts = cv2.findContours(thresh1.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)[1]
if len(blackboard_cnts) >= 1:
cnt = max(blackboard_cnts, key=cv2.contourArea)
print(cv2.contourArea(cnt))
if cv2.contourArea(cnt) > 2000:
x, y, w, h = cv2.boundingRect(cnt)
digit = blackboard_gray[y:y + h, x:x + w]
pred_probab, pred_class = keras_predict(model, digit)
print(pred_class, pred_probab)
pts = deque(maxlen=512)
blackboard = np.zeros((480, 640, 3), dtype=np.uint8)
img = overlay(img, emojis[pred_class], 400, 250, 100, 100)
cv2.imshow("Frame", img)
k = cv2.waitKey(10)
if k == 27:
break
%%HTML
<iframe width="700" height="400" src="https://www.youtube.com/embed/X0qk4aEqg3o"></iframe>
%%HTML
<iframe width="700" height="400" src="https://www.youtube.com/embed/Qkpgv16-JRM"></iframe>
###Output
_____no_output_____
###Markdown
Emojinator Haptically feeding hand gestures For more details, you can look here : https://github.com/akshaybahadur21/Emojinator
###Code
model = load_model('emojinator.h5')
cap = cv2.VideoCapture(0)
x, y, w, h = 300, 50, 350, 350
while (cap.isOpened()):
ret, img = cap.read()
img = cv2.flip(img, 1)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask2 = cv2.inRange(hsv, np.array([2, 50, 60]), np.array([25, 150, 255]))
res = cv2.bitwise_and(img, img, mask=mask2)
gray = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)
median = cv2.GaussianBlur(gray, (5, 5), 0)
kernel_square = np.ones((5, 5), np.uint8)
dilation = cv2.dilate(median, kernel_square, iterations=2)
opening = cv2.morphologyEx(dilation, cv2.MORPH_CLOSE, kernel_square)
ret, thresh = cv2.threshold(opening, 30, 255, cv2.THRESH_BINARY)
thresh = thresh[y:y + h, x:x + w]
contours = cv2.findContours(thresh.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)[1]
if len(contours) > 0:
contour = max(contours, key=cv2.contourArea)
if cv2.contourArea(contour) > 2500:
x, y, w1, h1 = cv2.boundingRect(contour)
newImage = thresh[y:y + h1, x:x + w1]
newImage = cv2.resize(newImage, (50, 50))
pred_probab, pred_class = keras_predict(model, newImage)
print(pred_class, pred_probab)
img = overlay(img, emojis[pred_class], 400, 250, 90, 90)
x, y, w, h = 300, 50, 350, 350
cv2.imshow("Frame", img)
cv2.imshow("Contours", thresh)
k = cv2.waitKey(10)
if k == 27:
break
%%HTML
<iframe width="700" height="400" src="https://www.youtube.com/embed/1eor41gIbF8"></iframe>
%%HTML
<iframe width="700" height="400" src="https://www.youtube.com/embed/WFm23haaWTQ"></iframe>
%%HTML
<iframe width="700" height="400" src="https://www.youtube.com/embed/Ujl8L4QoHHU"></iframe>
###Output
_____no_output_____
###Markdown
Drowsiness Detection Feeding Eye aspect ratio for detection
###Code
from scipy.spatial import distance
from imutils import face_utils
import imutils
import dlib

def eye_aspect_ratio(eye):
A = distance.euclidean(eye[1], eye[5])
B = distance.euclidean(eye[2], eye[4])
C = distance.euclidean(eye[0], eye[3])
ear = (A + B) / (2.0 * C)
return ear
thresh = 0.25
frame_check = 20
detect = dlib.get_frontal_face_detector()
predict = dlib.shape_predictor("E:\Github projects\Drowsiness_Detection_fork\shape_predictor_68_face_landmarks.dat")# Dat file is the crux of the code
(lStart, lEnd) = face_utils.FACIAL_LANDMARKS_68_IDXS["left_eye"]
(rStart, rEnd) = face_utils.FACIAL_LANDMARKS_68_IDXS["right_eye"]
cap=cv2.VideoCapture(0)
flag=0
while True:
ret, frame=cap.read()
frame = imutils.resize(frame, width=450)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
subjects = detect(gray, 0)
for subject in subjects:
shape = predict(gray, subject)
shape = face_utils.shape_to_np(shape)#converting to NumPy Array
leftEye = shape[lStart:lEnd]
rightEye = shape[rStart:rEnd]
leftEAR = eye_aspect_ratio(leftEye)
rightEAR = eye_aspect_ratio(rightEye)
ear = (leftEAR + rightEAR) / 2.0
leftEyeHull = cv2.convexHull(leftEye)
rightEyeHull = cv2.convexHull(rightEye)
cv2.drawContours(frame, [leftEyeHull], -1, (0, 255, 0), 1)
cv2.drawContours(frame, [rightEyeHull], -1, (0, 255, 0), 1)
if ear < thresh:
flag += 1
print (flag)
if flag >= frame_check:
cv2.putText(frame, "****************ALERT!****************", (10, 30),
cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
cv2.putText(frame, "****************ALERT!****************", (10,325),
cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
#print ("Drowsy")
else:
flag = 0
cv2.imshow("Frame", frame)
key = cv2.waitKey(1) & 0xFF
if key == ord("q"):
break
%%HTML
<iframe width="700" height="400" src="https://www.youtube.com/embed/twmHZE20rRY"></iframe>
%%HTML
<iframe width="700" height="400" src="https://www.youtube.com/embed/eFQvKHdjeEw"></iframe>
###Output
_____no_output_____
###Markdown
Facial Recognition using FaceNets For detailed code : https://github.com/akshaybahadur21/Facial-Recognition-using-Facenet
###Code
def recognize_face(face_descriptor, database):
encoding = img_to_encoding(face_descriptor, FRmodel)
min_dist = 100
identity = None
# Loop over the database dictionary's names and encodings.
for (name, db_enc) in database.items():
# Compute L2 distance between the target "encoding" and the current "emb" from the database.
dist = np.linalg.norm(db_enc - encoding)
print('distance for %s is %s' % (name, dist))
# If this distance is less than the min_dist, then set min_dist to dist, and identity to name
if dist < min_dist:
min_dist = dist
identity = name
if int(identity) <=4:
return str('Akshay'), min_dist
if int(identity) <=8:
return str('Apoorva'), min_dist
def img_to_encoding(image, model):
image = cv2.resize(image, (96, 96))
img = image[...,::-1]
img = np.around(np.transpose(img, (2,0,1))/255.0, decimals=12)
x_train = np.array([img])
embedding = model.predict(x_train)
return embedding
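# (Added sketch) how these helpers might be wired together; the image folder,
# file names and `webcam_frame` below are assumptions for illustration only:
# database = {str(i): img_to_encoding(cv2.imread('images/%d.jpg' % i), FRmodel) for i in range(9)}
# name, min_dist = recognize_face(webcam_frame, database)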
%%HTML
<iframe width="700" height="400" src="https://www.youtube.com/embed/v2dPVx9qCEo"></iframe>
###Output
_____no_output_____
###Markdown
Open Pose
###Code
%%HTML
<iframe width="700" height="400" src="https://www.youtube.com/embed/C1Sxk6zxWLM"></iframe>
%%HTML
<iframe width="700" height="400" src="https://www.youtube.com/embed/xyiLxIMDiAY"></iframe>
###Output
_____no_output_____ |
analysis/cogsci2021/perceptual_chunk_exploratory.ipynb | ###Markdown
Visualizations
###Code
# visualize all participant's chunks
fig, axs = plt.subplots(n_ppt, numTrials, figsize=(20,2*n_ppt))
for i, ppt in enumerate(ppts):
for j, target in enumerate(targets):
chunks = df_trial[(df_trial.gameID==ppt) & (df_trial.targetName==target)]['gameGrid'].iloc[0]
chunks = np.rot90(chunks)
axs[i,j].axis('off')
axs[i,j].imshow(chunks, cmap='Set3')
# how many chunks do people identify in each structure?
fig = plt.figure(figsize=(10,6))
sns.set_context('poster')
sns.set_style('whitegrid', {'legend':False})
sns.set(style="ticks", rc={"lines.linewidth": 0.7})
sns.barplot(data=df_trial, y='nChunksHighlighted',x='targetName', order=targets)
###Output
_____no_output_____
###Markdown
Perceptual chunk analysis notes Properties of the chunk painting process- size of chunk colored in over time - distinguish adding to the same chunk from creating new ones- average position of colored square over trial - is it bottom to top? Simple properties of perceptual chunks as predictors of difficulty/ complexity- Number of perceptual chunks in each structure, as a measure of (perceptual) complexity- Proportion of perceptual chunks that can't be made with blocks provided (as a measure of difficulty)- Variance in perceptual chunks as a measure of 'knowing what to do first' when building (e.g. thinking time pre first block) - calculate using edit distance below Strategies for comparing perceptual chunks with procedural chunksWithin *perceptual* chunks:- Find an edit distance - cost: +1 for changing a square, 0 for changing color of all members of a group to a unique color- Find a unique 'median' perceptual decomposition - minimum edit distance to all decompositionsCurrently, our *procedural* chunk measures don't give us a full decomposition.Bag of chunks:- Proportion of perceptual chunks that are also procedural chunks - i.e. get the overlap in distributions - Would need to think about chunk sizes, as well as popularity: don't want to be systematically skimming-off the procedural chunks that could match. - As a measure of difficulty?- Find the most popular procedural chunks in all reconstructions - Are these more likely to be within, or crossing, a perceptual chunk? Compare to some baseline. - Do procedural chunks become less tied to the perceptual ones with practice? Alternatively, we could find a way to obtain decompositions from procedural chunks. Once we have a metric:- Pre vs. in post: do people start off with perceptual chunks but move on to procedural ones? Future analyses and experiments - Do *perceptual parses* change with building experience?- How consistent are perceptual parses for an individual? - Do they become more consistent with building practice? Properties of chunk painting processThe main purpose of this experiment was to obtain perceptual decompositions. The process of recording them is less relevant to our goals, however we include some basic analyses. Notes about experiment:- To add a chunk people could either click once to change the color on one square (colorType='click') or drag color from a square (colorType='drag'). If they dragged from an empty square, the color would auto-increment to a new color. Clicks on individual squares cycle through colors, so we expect many more clicks than drags.- People may also overwrite previously colored squares. Therefore the recording of one particular chunk may span several color events, and may also be distributed among other coloring events unrelated to that chunk.- For each coloring event, we record the squares changed, the new color group (1-8), the number of chunks currently highlighted (number of colors on shape not including the default grey), and timing data.
###Code
# how many grid-squares are selected in each action?
# In general, people are selecting the biggest regions first
sns.scatterplot(data=df_color, x='relativeTrialDuration', y='nSquaresSelected')
# do people highlight the largest chunks first?
# although- nChunksHighlighted isn't the same as finishing a chunk.
# this will be biased: nChunks highlighted stays the same if you just extend a chunk by a little bit, but only one square is selected at that point
#
fig = plt.figure(figsize=(10,6))
sns.set_context('poster')
sns.set_style('whitegrid', {'legend':False})
sns.set(style="ticks", rc={"lines.linewidth": 0.7})
sns.pointplot(data=df_color, x='nChunksHighlighted', y='nSquaresSelected')
###Output
_____no_output_____
###Markdown
Perceptual chunks as predictors of difficulty/ complexity Number of perceptual chunks in each structureThis is a potential measure of structure difficulty, particularly for early trials where we expect perceptual decompositions to more strongly structure participant's plans.I'd predict that structures with a greater number of perceptual chunks require more extensive planning.
###Code
fig = plt.figure(figsize=(10,6))
sns.set_context('poster')
sns.set_style('whitegrid', {'legend':False})
sns.set(style="ticks", rc={"lines.linewidth": 0.7})
g = sns.FacetGrid(df_trial, col="targetName", col_order=targets)
g.map(sns.countplot, "nChunksHighlighted", order=range(2,9));
###Output
_____no_output_____
###Markdown
Comparing perceptual chunks with building procedures Cluster chunks to identify a set of chunks for each tower
###Code
def chunks_from_KMeans(chunks,
k_values = [10, 20],
thresholds = [0.4]):
kms = {}
df_kms = pd.DataFrame()
for target in targets:
kms[target] = {}
for n_cluster in k_values:
feature_mat = np.array(chunks[target])
# get the mean number of chunks for that structure
meanNChunks = np.round(df_trial.groupby('targetName')['nChunksHighlighted'].mean()).astype(int).to_dict()
# group into n clusters where n is the mean amount of chunks for that structure
# kmeans = KMeans(n_clusters=meanNChunks[target], random_state=0).fit(feature_mat)
kms[target][n_cluster] = KMeans(n_clusters=n_cluster, random_state=0).fit(feature_mat)
for threshold in thresholds:
df_kms = df_kms.append(
{
'cluster_method': 'k-means',
'cluster_object': kms[target][n_cluster],
'targetName': target,
'n_cluster': n_cluster,
'cluster_centers': kms[target][n_cluster].cluster_centers_,
'chunks': (kms[target][n_cluster].cluster_centers_>=threshold)*1,
'threshold': threshold,
'inertia': kms[target][n_cluster].inertia_,
},
ignore_index=True
)
return df_kms, kms
def max_values(cc):
    # For each cluster center (row), keep only the cells equal to that row's maximum value.
    return np.array([cc[i] == np.max(cc, axis=1)[i] for i in range(0, cc.shape[0])]) * 1

def quantile_values(cc, quantile=0.5):
    # For each cluster center (row), keep the cells at or above the given quantile of its non-zero values.
    b = cc.copy()  # work on a copy so the caller's array is not modified
    b[b == 0] = np.nan
    # take values greater than 0, find a quantile
    return np.array([b[i] >= np.nanquantile(b, quantile, axis=1)[i] for i in range(0, b.shape[0])]) * 1
# def chunks_from_KMeans_method(chunks,
# k_values = [10, 20],
# method = 'max',
# quantile = 0.5):
# kms = {}
# df_kms = pd.DataFrame()
# for target in targets:
# kms[target] = {}
# for n_cluster in k_values:
# feature_mat = np.array(chunks[target])
# # get the mean number of chunks for that structure
# meanNChunks = np.round(df_trial.groupby('targetName')['nChunksHighlighted'].mean()).astype(int).to_dict()
# # group into n clusters where n is the mean amount of chunks for that structure
# # kmeans = KMeans(n_clusters=meanNChunks[target], random_state=0).fit(feature_mat)
# kms[target][n_cluster] = KMeans(n_clusters=n_cluster, random_state=0).fit(feature_mat)
# if method == 'max':
# rounded_chunks = max_values(kms[target][n_cluster].cluster_centers_)
# quantile = ''
# elif method == 'quantile':
# rounded_chunks = quantile_values(kms[target][n_cluster].cluster_centers_, quantile=quantile)
# print(rounded_chunks)
# df_kms = df_kms.append(
# {
# 'cluster_method': 'k-means',
# 'cluster_object': kms[target][n_cluster],
# 'targetName': target,
# 'n_cluster': n_cluster,
# 'cluster_centers': kms[target][n_cluster].cluster_centers_,
# 'chunks': rounded_chunks,
# 'threshold': method + str(quantile),
# 'inertia': kms[target][n_cluster].inertia_,
# },
# ignore_index=True
# )
# return df_kms, kms
def chunks_from_affinity_prop(feature_mats, damping_values = [0.74]):
clusters = {}
df_AP = pd.DataFrame()
for target in targets:
clusters[target] = {}
for d in damping_values:
clusters[target][d] = AffinityPropagation(damping=d).fit(feature_mats[target])
df_AP = df_AP.append(
{
'cluster_method': 'affinity_propagation',
'cluster_object': clusters[target][d],
'targetName': target,
'n_cluster': len(clusters[target][d].cluster_centers_indices_),
'chunks': clusters[target][d].cluster_centers_,
'damping': d,
},
ignore_index=True
)
return df_AP
# sorted_chunks = feature_mat[np.argsort(kmeans.labels_),:]
def find_world_diffs(df_proc_world_states):
# find all chunks for all structures (so we can search for the structures that involve this chunk)
# a 'window-size' is the amount of states between first and final one considered INCLUSIVE. i.e. n is n-1 actions.
# i.e. window size 3 means 2 consecutive actions
window_sizes = range(2,10)
df_target_grouped = df_proc_world_states.groupby(['gameID','targetName','phase_extended'])['flatDiscreteWorldStr']
df_world_deltas = df_proc_trial.copy()
for chunk_size in window_sizes:
# for each reconstruction, get a list of ngrams of that length
df_ngrams = df_target_grouped.agg(lambda ws: list(nltk.ngrams(list(ws), chunk_size))).reset_index()
# find the chunks (world deltas) from those ngrams
df_ngrams['world_diff'] = df_ngrams['flatDiscreteWorldStr'].apply(lambda ngrams:
["".join([str(int(a)) for a in
list(
np.logical_xor(np.array(list(ngram[-1])).astype(np.bool),
np.array(list(ngram[0])).astype(np.bool))
)])
for ngram in ngrams])
df_ngrams = df_ngrams.rename(columns={"flatDiscreteWorldStr": str(chunk_size)+'_grams',
"world_diff": str(chunk_size)+'_chunks'})
df_world_deltas = df_world_deltas.merge(df_ngrams, how='left', on=['gameID','targetName','phase_extended'])
# combine chunks from all window sized into list, so we can search for chunks in the entire reconstruction
df_world_deltas['all_chunks'] = df_world_deltas[[(str(chunk_window)+'_chunks') \
for chunk_window in window_sizes if (str(chunk_window)+'_chunks') in df_world_deltas.columns]]\
.apply(lambda row: [chunk for chunks in list(row) for chunk in chunks], axis=1)
return df_world_deltas
def find_perc_chunks_in_procedures(df_cluster_rows,
df_proc_chunks,
min_cluster_members = 0):
# for each exemplar with more than 3 members, count proportion of reconstructions in first, and number of reconstructions in final attempt
cluster_counts = pd.DataFrame()
for target in targets:
# row = df_cluster_rows[(df_cluster_rows.targetName == target) &
# (df_cluster_rows.cluster_method=='affinity_propagation') &
# (df_cluster_rows.damping==0.74)].reset_index()
row = df_cluster_rows[(df_cluster_rows.targetName == target)].reset_index()
labels = row.cluster_object[0].labels_
for cluster_number, exemplar in enumerate(row.chunks[0]):
chunk_array = exemplar.reshape((8,8))
chunk_str = bc.cropped_chunk_to_string(chunk_array)
n_cluster_members = sum(labels == cluster_number)
if n_cluster_members >= min_cluster_members:
props = {}
for phase in ['pre','post']:
subset_for_target = df_proc_chunks[#(df_proc_chunks.blockFell == False) &
(df_proc_chunks.targetName == target) &
(df_proc_chunks.phase == phase)]
subset_with_chunk = subset_for_target[(subset_for_target['all_chunks']\
.apply(lambda chunks: chunk_str in chunks))]
row = {
'targetName': target,
'phase': phase,
'chunk_str': chunk_str,
'chunk_array': chunk_array,
'n_cluster_members': n_cluster_members,
# 'reconstructions_with_chunk': list(subset_with_chunk['discreteWorld']),
'total_phase_reconstructions': subset_for_target.shape[0],
'n_with_chunk': subset_with_chunk.shape[0],
'chunk_id': cluster_number,
'chunk_height': np.sum(np.dot(np.sum(chunk_array, axis=0),np.arange(8)))/np.sum(chunk_array) + 0.5,
'proportion_with_chunk': subset_with_chunk.shape[0] / subset_for_target.shape[0]
}
props[phase] = subset_with_chunk.shape[0] /subset_for_target.shape[0]
cluster_counts = cluster_counts.append(row,ignore_index=True)
cluster_counts.loc[(cluster_counts.targetName == target) & (cluster_counts.chunk_str == chunk_str), 'difference'] = props['post'] - props['pre']
cluster_counts.loc[(cluster_counts.targetName == target) & (cluster_counts.chunk_str == chunk_str), 'both_zero'] = \
(props['pre'] == 0) & (props['post'] == 0)
return cluster_counts
###Output
_____no_output_____
###Markdown
Precompute clustering Create dictionaries of chunks (for k-means), and distance matrices between chunks (for affinity propagation)
###Code
def addPerceptualChunks(chunk_list, decomposition, group_number):
'''
Checks whether a chunk with that group number exists in the decomposition and adds it to chunk_list
'''
chunk = (decomposition==group_number)*1
if chunk.any():
chunk_list.append(chunk)
# for each structure, throw all chunks from all decompositions into a giant list
perceptual_chunks = {}
for target in targets:
perceptual_chunks[target] = []
for group in range(1,9):
df_trial[df_trial.targetName==target].structureGrid.apply(\
lambda decomposition: addPerceptualChunks(perceptual_chunks[target],
decomposition,
group))
# create distance matrices between chunks within each structure
dmats = {}
chunks = {}
for target in targets:
chunks[target] = [chunk.flatten() for chunk in perceptual_chunks[target]]
dmats[target] = np.zeros((len(chunks[target]), len(chunks[target])))
for i, chunk_i in enumerate(chunks[target]):
for j, chunk_j in enumerate(chunks[target]):
dmats[target][i,j] = distance.euclidean(chunk_i, chunk_j)
# create feature matrices for affinity propagation (nsamples, nfeatures)
feature_mats = {}
for target in targets:
flat_chunks = [chunk.flatten() for chunk in perceptual_chunks[target]]
feature_mats[target] = np.array(flat_chunks)
# Do clustering
# affinity propagation: provides us with exemplar, and allows us to filter out clusters with few members.
# WARNING: sensitive to damping value!
df_ap = chunks_from_affinity_prop(feature_mats, damping_values = [0.74])
# k-means: needs prespecified k
df_kms, _ = chunks_from_KMeans(chunks, thresholds=[0.1,0.2,0.4,0.6,0.8])
df_chunk_clusters = df_ap.append(df_kms).reset_index()
# df_kms_max, kms = chunks_from_KMeans_method(chunks, method='quantile', quantile=0.9)
# df_chunk_clusters = df_chunk_clusters.append(df_kms_max).reset_index()
# df_chunk_clusters = df_chunk_clusters.drop(['level_0','index'],axis=1)
###Output
_____no_output_____
###Markdown
Load in building procedures from block_silhouette, and find all world-deltas for all reconstructions. 'World-deltas': change in world state (i.e. squares covered by blocks) between action i and action j, for all i and j.
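A single world-delta is just the element-wise XOR of two world states. A minimal sketch, assuming states are stored as flattened '0'/'1' strings as in `flatDiscreteWorldStr` (the helper name is illustrative):

```python
def world_delta(state_i, state_j):
    # '1' where exactly one of the two states has a block in that square, '0' elsewhere.
    return "".join(str(int(a != b)) for a, b in zip(state_i, state_j))
```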
###Code
# load in procedural data from silhouette experiment
silhouette_world_path = os.path.join(silhouette_csv_dir,'procedural_chunks_world_states_{}.p'.format('Exp2Pilot3_all'))
df_proc_world_states = pickle.load( open(silhouette_world_path, "rb" ))
silhouette_trial_path = os.path.join(silhouette_csv_dir,'block_silhouette_{}_good.csv'.format('Exp2Pilot3_all'))
df_proc_trial = pd.read_csv(silhouette_trial_path)
# find the world-deltas in building procedures
df_world_deltas = find_world_diffs(df_proc_world_states)
# count occurrences of each chunk by looking at world deltas
cluster_counts = find_perc_chunks_in_procedures(df_chunk_clusters[
(df_chunk_clusters.cluster_method=='k-means') &
(df_chunk_clusters.threshold == 0.4) &
(df_chunk_clusters.n_cluster == 20)],
df_world_deltas,
min_cluster_members = 0)
n_chunks = 20
fig, axs = plt.subplots(n_chunks, len(targets), figsize=(20,2*n_chunks))
for i, target in enumerate(targets):
for j in range(0, n_chunks):
greatest_increase = cluster_counts[(cluster_counts.phase=='post') &
(cluster_counts.targetName==target)].sort_values('n_cluster_members', ascending=False).reset_index()
axs[j,i].axis('off')
axs[j,i].set_title(str(round(greatest_increase.loc[j,'n_cluster_members'], 2)))
drawing.show_chunk([greatest_increase.loc[j,'chunk_str']], axs[j,i], target=target, cmap='Blues', cropped=True)
# x = df_chunk_clusters.loc[(df_chunk_clusters.targetName==targets[5]) &
# (df_chunk_clusters.threshold==0.4) &
# (df_chunk_clusters.n_cluster==20),'cluster_object'].reset_index().loc[0,'cluster_object']
cluster_counts[(cluster_counts.phase=='post') & (cluster_counts.both_zero)].groupby('targetName').count()
#something going wrong here
n_chunks = 5
fig, axs = plt.subplots(len(targets), n_chunks*2, figsize=(4*n_chunks,2.5*len(targets)))
for i, target in enumerate(targets):
for j in range(0, n_chunks):
greatest_increase = cluster_counts[(cluster_counts.phase=='post') &
(cluster_counts.targetName==target)].sort_values('difference', ascending=False).reset_index()
# do something graphically with: greatest_increase.loc[j,'diff']
axs[i,j].axis('off')
axs[i,j].set_title(str(round(greatest_increase.loc[j,'difference'], 2)))
drawing.show_chunk([greatest_increase.loc[j,'chunk_str']], axs[i,j], target=target, cmap='Blues', cropped=True)
for i, target in enumerate(targets):
for j in range(0, n_chunks):
greatest_increase = cluster_counts[(cluster_counts.phase=='post') &
(cluster_counts.targetName==target)].sort_values('difference', ascending=True).reset_index()
# do something graphically with: greatest_increase.loc[j,'diff']
axs[i,n_chunks*2-1-j].axis('off')
axs[i,n_chunks*2-1-j].set_title(str(round(greatest_increase.loc[j,'difference'], 2)))
drawing.show_chunk([greatest_increase.loc[j,'chunk_str']], axs[i,n_chunks*2-1-j], target=target, cmap='Blues', cropped=True)
# <-- Largest increase first to final ... Largest decrease first to final-->
# filter out totally missing chunks
cluster_counts_full = cluster_counts
cluster_counts = cluster_counts[cluster_counts.both_zero==False]
proportion_chunks_not_built_at_all = cluster_counts[cluster_counts.phase=='pre'].shape[0] / cluster_counts_full[cluster_counts_full.phase=='pre'].shape[0]
print(str(proportion_chunks_not_built_at_all*100) + '% of perceptual chunks built in one of first and final reps')
n_chunks_total = cluster_counts_full[(cluster_counts_full.phase=='pre')].shape[0]
assert cluster_counts_full[(cluster_counts_full.phase=='pre')].shape[0] == cluster_counts_full[(cluster_counts_full.phase=='post')].shape[0]
n_chunks_built_pre = cluster_counts_full[(cluster_counts_full.phase=='pre') & (cluster_counts_full.n_with_chunk==0)].shape[0]
n_chunks_built_post = cluster_counts_full[(cluster_counts_full.phase=='post') & (cluster_counts_full.n_with_chunk==0)].shape[0]
print(str(100*n_chunks_built_pre/n_chunks_total) + '% of perceptual chunks not built in pre')
print(str(100*n_chunks_built_post/n_chunks_total) + '% of perceptual chunks not built in post')
###Output
33.75% of perceptual chunks not built in pre
31.875% of perceptual chunks not built in post
###Markdown
How often are the shapes identified in the perceptual experiment built in a sequence of consecutive block-placements? Clustering has given us a set of 'perceptual chunks'. We now look at building procedures to see how often reconstructions contained each chunk. If consecutive actions yield a world-delta that is the same shape as a perceptual chunk, we say that that chunk was built.
###Code
# Were perceptual chunks built more in the first or final repetition? By structure
fig = plt.figure(figsize=(10,6))
sns.set_context('poster')
sns.set_style('whitegrid', {'legend':False})
sns.set(style="ticks", rc={"lines.linewidth": 0.7})
sns.pointplot(data=cluster_counts, x='phase', y='proportion_with_chunk', hue='targetName')
# How many chunks were built more, and how many were built less?
fig = plt.figure(figsize=(14,10))
sns.set_context('poster')
sns.set_style('whitegrid', {'legend':False})
sns.set(style="ticks", rc={"lines.linewidth": 0.7})
g = sns.FacetGrid(data=cluster_counts, col="targetName", hue="chunk_str", col_order=targets)
g.map(sns.pointplot,"phase","proportion_with_chunk", order=['pre','post'])
p = sns.swarmplot(y='difference', x='targetName', data=cluster_counts[cluster_counts.phase=='post'], dodge=True)
ax = p.axes
ax.axhline(0, ls='--')
# How many chunks were built more, and how many were built less?
fig = plt.figure(figsize=(14,10))
sns.set_context('poster')
sns.set_style('whitegrid', {'legend':False})
sns.set(style="ticks", rc={"lines.linewidth": 0.7})
# just select one phase
g = sns.FacetGrid(data=cluster_counts[cluster_counts.phase=='post'], col="targetName", col_order=targets)
g.map(sns.distplot,"difference", rug=True, bins=10,)
sns.distplot(cluster_counts[cluster_counts.phase=='post']['difference'], rug=True)
def draw_row_chunk(row):
axs[row.name].axis('off')
chunk = bc.cropped_chunk_to_string(row.chunk_array)
drawing.show_chunk([chunk], axs[row.name], target=row.targetName)
cluster_counts
cluster_count_diffs = cluster_counts[(cluster_counts.phase=='post')]
cluster_count_diffs[cluster_count_diffs.targetName=='hand_selected_009']
# show chunks built less over time
df_negative_diffs = cluster_counts[(cluster_counts.phase=='post') & (cluster_counts.difference < 0)].reset_index()
n_chunks = df_negative_diffs.shape[0]
fig, axs = plt.subplots(n_chunks, figsize=(4,n_chunks*4))
_ = df_negative_diffs.apply(lambda row: draw_row_chunk(row), axis=1)
# show chunks built a lot more over time
df_negative_diffs = cluster_counts[(cluster_counts.phase=='post') & (cluster_counts.difference > 0.15)].reset_index()
n_chunks = df_negative_diffs.shape[0]
fig, axs = plt.subplots(n_chunks, figsize=(4,n_chunks*4))
_ = df_negative_diffs.apply(lambda row: draw_row_chunk(row), axis=1)
sns.scatterplot(data=cluster_counts, x='difference', y='chunk_height')
up_mean = np.mean(cluster_counts[(cluster_counts.phase=='pre') & (cluster_counts.difference > 0)].chunk_height)
up_std = np.std(cluster_counts[(cluster_counts.phase=='pre') & (cluster_counts.difference > 0)].chunk_height)
down_mean = np.mean(cluster_counts[(cluster_counts.phase=='pre') & (cluster_counts.difference < 0)].chunk_height)
down_std = np.mean(cluster_counts[(cluster_counts.phase=='pre') & (cluster_counts.difference < 0)].chunk_height)
#find out if these means are different
# https://en.wikipedia.org/wiki/Student%27s_t-test
(up_mean - down_mean)/(np.sqrt((up_std**2 + down_std**2)/2)) #not this!
# inspect one target
target = 'hand_selected_016'
fig = plt.figure(figsize=(10,6))
sns.set_context('poster')
sns.set_style('whitegrid')
sns.set(style="ticks", rc={"lines.linewidth": 0.7})
sns.pointplot(data=cluster_counts[cluster_counts.targetName== target],\
x='phase', y='proportion_with_chunk', hue='chunk_id')
# print chunks from
chunks = df_chunk_clusters[(df_chunk_clusters.targetName == target) &
(df_chunk_clusters.cluster_method=='k-means') &
(df_chunk_clusters.n_cluster==10)].reset_index().chunks[0]
n_chunks = len(chunks)
fig, axs = plt.subplots(n_chunks, figsize=(4,n_chunks*4))
target_name = df_proc_chunks.iloc[30]['targetName']
for j, chunk in enumerate(chunks):
axs[j].axis('off')
drawing.show_chunk([bc.cropped_chunk_to_string(chunk.reshape((8,8)))], axs[j], target=target)
axs[j].set_title(str(j))
###Output
_____no_output_____
###Markdown
Exploration

For each structure I've got a list of all chunks from all decompositions. I'm now clustering these to give us something to compare to building procedures (either a median, or exemplar, or set of chunks from that cluster). As I see it, there are two sensible ways of clustering:

1. Use biclustering where k = the mean number of chunks assigned to that structure.
  - This seems intuitive and works fairly well, but in trying to assign every single chunk to a cluster it ends up with some messier clusters. It seems like a bad decision to force obscure chunks into a cluster.
2. Use affinity propagation.
  - This seems the better strategy. Here we don't have to prespecify the number of chunks, and we can just throw away any clusters with few members. It also clusters by finding an exemplar, which gives us something simple to work with when comparing with procedures.

Cluster using biclustering, where k = mean number of chunks for that structure. Looks cool, but probably not the best clustering method as it forces every chunk into a cluster. Maybe some chunks are completely different from the others and we'd rather throw them away.
###Code
target = 'hand_selected_012'
# get the mean number of chunks for that structure
meanNChunks = np.round(df_trial.groupby('targetName')['nChunksHighlighted'].mean()).astype(int).to_dict()
# group into n clusters where n is the mean amount of chunks for that structure
clustering = SpectralBiclustering(n_clusters=meanNChunks[target], random_state=0).fit(dmats[target]) # https://scikit-learn.org/stable/auto_examples/bicluster/plot_spectral_biclustering.html
order = clustering.row_labels_
sorted_rdm = dmats[target][np.argsort(clustering.row_labels_)]
sorted_rdm = sorted_rdm[:, np.argsort(clustering.column_labels_)]
img1 = plt.matshow(dmats[target])
plt.axis('off')
plt.colorbar()
img2 = plt.matshow(sorted_rdm)
plt.axis('off')
img1.set_cmap('hot')
img2.set_cmap('hot')
plt.colorbar()
###Output
_____no_output_____
###Markdown
cluster using k-means
###Code
# explore k-means
target = 'hand_selected_012'
feature_mat = np.array(chunks[target])
# get the mean number of chunks for that structure
meanNChunks = np.round(df_trial.groupby('targetName')['nChunksHighlighted'].mean()).astype(int).to_dict()
# group into n clusters where n is the mean amount of chunks for that structure
kmeans = KMeans(n_clusters=meanNChunks[target], random_state=0).fit(feature_mat)
# kmeans = KMeans(n_clusters=19, random_state=0).fit(feature_mat)
order = kmeans.labels_
sorted_chunks = feature_mat[np.argsort(kmeans.labels_),:]
# kmeans.labels_
# for i in range(sorted_chunks.shape[0]):
# plt.matshow(np.rot90(np.reshape(sorted_chunks[i,:],(8,8))))
# plt.axis('off')
# plt.title(np.sort(kmeans.labels_)[i])
for prototype in kmeans.cluster_centers_:
fig = plt.figure(figsize=(1,1))
img1 = plt.imshow(np.rot90(prototype.reshape((8,8))))
plt.axis('off')
# round up to get possible chunks
threshold = 0.4
for prototype in (kmeans.cluster_centers_>=threshold)*1:
fig = plt.figure(figsize=(1,1))
img1 = plt.imshow(np.rot90(prototype.reshape((8,8))))
plt.axis('off')
# explore parameters of k-means that minimize objective
# number of clusters
kms = {}
df_kms = pd.DataFrame()
for target in targets:
kms[target] = {}
for n_cluster in range(3,20):
feature_mat = np.array(chunks[target])
# get the mean number of chunks for that structure
meanNChunks = np.round(df_trial.groupby('targetName')['nChunksHighlighted'].mean()).astype(int).to_dict()
# group into n clusters where n is the mean amount of chunks for that structure
# kmeans = KMeans(n_clusters=meanNChunks[target], random_state=0).fit(feature_mat)
kms[target][n_cluster] = KMeans(n_clusters=n_cluster, random_state=0).fit(feature_mat)
df_kms = df_kms.append(
{
'targetName': target,
'n_cluster': n_cluster,
'kmeans': kms[target][n_cluster],
'inertia': kms[target][n_cluster].inertia_
},
ignore_index=True
)
# sorted_chunks = feature_mat[np.argsort(kmeans.labels_),:]
sns.lineplot(x='n_cluster', y='inertia',hue='targetName',data=df_kms)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
###Output
_____no_output_____
###Markdown
Visualize clusters
###Code
target = 'hand_selected_006'
# Explore clustering
clustering = clusters[target]
labels = clustering.labels_
cluster_centers_indices = clustering.cluster_centers_indices_
cluster_centers_ = clustering.cluster_centers_
n_clusters_ = len(cluster_centers_indices)
print(str(n_clusters_) + ' clusters')
label = 0
for label in np.unique(labels):
chunk_cluster = featureMats[target][labels==label,:].sum(axis=0).reshape((8,8))
fig = plt.figure(figsize=(1,1))
img1 = plt.imshow(np.rot90(chunk_cluster))
plt.title(str(featureMats[target][labels==label,:].shape[0]))
plt.axis('off')
for exemplar in cluster_centers_:
fig = plt.figure(figsize=(1,1))
img1 = plt.imshow(np.rot90(exemplar.reshape((8,8))))
plt.axis('off')
###Output
_____no_output_____
###Markdown
Next:
- For each chunk:
  - go through action sequences to see:
    - number of exact matches
    - ratio of contained vs. spanning

Somewhere I have a way of searching action sequences by world-diff, which should be the same representation as these perceptual chunks (once they've been aligned in an 18x13 gridworld). Lots of testing needed at this stage.

Consider:
- construct a dataframe with all world differences, i.e. action 0-1, 0-2, 0-3, 1-2, 1-3, etc.
  - would be large
  - (gameID, targetName, trialNum, rep, condition, world-diff, action_1, action_2, window)
- see if there's a match, if so +1
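One way the table sketched above could be put together (illustrative only; it assumes the `df_proc_world_states` frame and column names used earlier, and trial-level metadata such as trialNum, rep and condition could be merged in afterwards from `df_proc_trial`):

```python
rows = []
for (game, target, phase), grp in df_proc_world_states.groupby(['gameID', 'targetName', 'phase_extended']):
    states = list(grp['flatDiscreteWorldStr'])
    for i in range(len(states)):
        for j in range(i + 1, len(states)):
            # world-diff between action i and action j: '1' where exactly one state has a block
            diff = "".join(str(int(a != b)) for a, b in zip(states[i], states[j]))
            rows.append({'gameID': game, 'targetName': target, 'phase_extended': phase,
                         'action_1': i, 'action_2': j, 'window': j - i + 1, 'world_diff': diff})
df_all_diffs = pd.DataFrame(rows)
```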
###Code
# construct dataframe with all world differences.
# i.e. action 0-1, 0-2, 0-3, 1-2, 1-3, etc.
# (gameID, targetName, trialNum, rep, condition, world-diff, action_1, action_2, window)
###Output
_____no_output_____
###Markdown
Example of searching for reconstructions containing perceptual chunk
###Code
n_chunks = len(df_proc_chunks.iloc[30]['all_chunks'])
fig, axs = plt.subplots(n_chunks, figsize=(4,n_chunks*4))
target_name = df_proc_chunks.iloc[30]['targetName']
for j, chunk in enumerate(df_proc_chunks.iloc[30]['all_chunks']):
axs[j].axis('off')
drawing.show_chunk([chunk], axs[j], target=target_name)
# find the structures with that chunk (assumes chunk in same format, and a given window size)
target = 'hand_selected_012'
# convert perceptual chunks into string
chunk_str = bc.cropped_chunk_to_string(cluster_centers_[0].reshape((8,8)))
subset_with_chunk = df_proc_chunks[(df_proc_chunks.targetName == target) &
(df_proc_chunks['all_chunks'].apply(lambda chunks: chunk_str in chunks))]
# draw all reconstructions for h
drawing.draw_reconstructions(subset_with_chunk)
###Output
_____no_output_____
###Markdown
Find proportion of reconstructions with each chunk. Questions:
- some average of cluster members, or exemplars?
- do I use all clusters, take a pre-specified number, or drop clusters with few members?
  - I'm fairly sure I should drop clusters with few members, but not sure of the exact criteria I should use
###Code
# for each exemplar with more than 3 members, count proportion of reconstructions in pre, and number of reconstructions in post
cluster_counts = pd.DataFrame()
for target in targets:
for cluster_number, exemplar in enumerate(clusters[target].cluster_centers_):
chunk_str = bc.cropped_chunk_to_string(exemplar.reshape((8,8)))
n_cluster_members = sum(clusters[target].labels_ == cluster_number)
if n_cluster_members > 3:
for phase in ['pre','post']:
subset_for_target = df_proc_chunks[(df_proc_chunks.blockFell == False) &
(df_proc_chunks.targetName == target) &
(df_proc_chunks.phase == phase)]
subset_with_chunk = subset_for_target[(subset_for_target['all_chunks']\
.apply(lambda chunks: chunk_str in chunks))]
row = {
'targetName': target,
'phase': phase,
'chunk_str': chunk_str,
'n_cluster_members': n_cluster_members,
# 'reconstructions_with_chunk': list(subset_with_chunk['discreteWorld']),
'total_phase_reconstructions': subset_for_target.shape[0],
'n_with_chunk': subset_with_chunk.shape[0],
'proportion_with_chunk': subset_with_chunk.shape[0] /subset_for_target.shape[0]
}
cluster_counts = cluster_counts.append(row,ignore_index=True)
cluster_counts
fig = plt.figure(figsize=(10,6))
sns.set_context('poster')
sns.set_style('whitegrid', {'legend':False})
sns.set(style="ticks", rc={"lines.linewidth": 0.7})
sns.pointplot(data=cluster_counts, x='phase', y='proportion_with_chunk', hue='targetName')
fig = plt.figure(figsize=(10,6))
sns.set_context('poster')
sns.set_style('whitegrid', {'legend':False})
sns.set(style="ticks", rc={"lines.linewidth": 0.7})
g = sns.FacetGrid(data=cluster_counts, col="targetName", hue="chunk_str", col_order=targets)
g.map(sns.pointplot,"phase","proportion_with_chunk", order=['pre','post'])
cluster_counts[cluster_counts.targetName=='hand_selected_006']
drawing.show_chunk([chunk], axs[j], target='hand_selected_006')
###Output
_____no_output_____
###Markdown
Facetgrid:
- Facet is silhouette
- Dot is chunk
- difference score
- Slope is the stat we're interested in

___

Spatial biases:
- Chunks near the top more likely to appear more at the end?

Popular chunks:
- Can popularity be explained by perceptual biases?
- Are more popular chunks ones that appear less at the end?
- (are less popular chunks relatively flat pre to post?)

___

Keep returning to: is convergence explained by convergence to perceptual chunks? Or something else - not perceptual chunks?
###Code
# Distribution of differences between pre and post
cluster_counts.groupby('chunk_str')
###Output
_____no_output_____ |
pandas_series_lesson.ipynb | ###Markdown
Pandas Overview

- The pandas library is used to deal with structured data stored in tables. You might acquire the structured data from CSV files, TSV files, SQL database tables, or spreadsheets. You can also *create* pandas Series and DataFrames.
- "[P]andas objects (Index, Series, DataFrame) can be thought of as containers for arrays, which hold the actual data and do the actual computation. For many types, the underlying array is a numpy.ndarray." [source](https://pandas.pydata.org/pandas-docs/stable/user_guide/basics.html)
- "A DataFrame is a two-dimensional array with labeled axes. In other words, a DataFrame is a matrix of rows and columns that have labels — column names for columns, and index labels for rows. A single column or row in a Pandas DataFrame is a Pandas series — a one-dimensional array with axis labels." [source](https://engineering.upside.com/a-beginners-guide-to-optimizing-pandas-code-for-speed-c09ef2c6a4d6)
- You can think of a pandas DataFrame like a table in SQL, Excel, or Google Sheets and a pandas Series like a single column from a table.

[Image Source](https://www.w3resource.com/python-exercises/pandas/index.php)

The Pandas Series Object

What Is a Pandas Series?

A pandas Series object is a one-dimensional, labeled array made up of an autogenerated index that starts at 0 and data of a single data type. Think of the index as the address of a data point; did you ever play the game Battleship?

A couple of important things to note here:
- If I try to make a pandas Series using multiple data types like `int` and `string` values, the data will be converted to the same `object` data type; the `int` values will lose their `int` functionality.
- A pandas Series can be created in several ways, some of which I'll demonstrate below, but **it will most often be created by selecting a single column from a pandas DataFrame, in which case the Series retains the same index as the DataFrame.**

Create a Pandas Series: From a Python List
###Code
# Here I create a list of colors creatively named 'colors'.
colors = ['red', 'yellow', 'green', 'blue', 'orange', 'red', 'violet', 'indigo']
colors
# Here I create the 'colors_series' Series using the Series() constructor method.
colors_series = pd.Series(colors)
# How can I confirm that 'colors_series' is now a pandas Series object?
type(colors_series)
###Output
_____no_output_____
###Markdown
From a NumPy Array
###Code
# Create a numpy array 'arr'.
arr = np.array([5, 10, 15, 20, 25, 30, 35, 40, 40])
# Convert my numpy array to a pandas Series called 'numeric_series'.
numeric_series = pd.Series(arr)
# How can I confirm that 'numeric_series' is now a pandas Series object?
type(numeric_series)
###Output
_____no_output_____
###Markdown
From a Python Dictionary. - Here the dictionary keys are used to construct the labeled index.
###Code
# Create a python dictionary.
data = {'a' : 0, 'b' : 1.5, 'c' : 2, 'd': 3.5, 'e': 4, 'f': 5.5}
data
# Create a pandas Series 'diction_series' using the pandas Series() constructor method.
diction_series = pd.Series(data)
# Confirm the type of 'diction_series.'
type(diction_series)
###Output
_____no_output_____
###Markdown
From a Pandas DataFrame - When I select a column from a pandas DataFrame, this is also a Series object. It will retain the same index as the DataFrame.

*This is just a preview of acquiring data from a database as a DataFrame. For now, focus on the Series, not the code reading in the data. We will get plenty of practice using pandas functions to acquire data in the near future.*

```python
# Import my access information to connect to Codeup's database.
from env import host, password, user

# Function to connect to database.
def get_connection(db, user=user, host=host, password=password):
    return f'mysql+pymysql://{user}:{password}@{host}/{db}'
```

```python
# Create SQL query to acquire desired data.
sql_query = '''
            SELECT first_name, last_name, dept_name
            FROM employees AS e
            JOIN dept_emp AS de ON e.emp_no = de.emp_no
                AND to_date > CURDATE()
            JOIN departments AS d USING(dept_no)
            '''
```

```python
# Read data from database using sql and assign DataFrame to df.
df = pd.read_sql(sql_query, get_connection('employees'))
```

```python
# Write DataFrame to a csv to quickly read in data.
df.to_csv('names.csv')
```
###Code
# Read data in from my csv to a pandas DataFrame.
pd.read_csv('names.csv', index_col=0)
# This is a pandas DataFrame from which I will select Series I want to use below.
# How can we return information about the index of this DataFrame?
# How can we return information about the columns of this DataFrame?
# How can we return information about the values of this DataFrame?
###Output
_____no_output_____
###Markdown
**For now, all you need to know is that a Series can be selected from a DataFrame in one of the following ways:**

- **By Passing a Column Name as a String to the Indexing Operator *aka Bracket Notation*.**

```python
df['series']
```
###Code
# Grab a Series using bracket notation. Assign it to a variable called 'names'.
# Validate the object type of 'names'.
###Output
_____no_output_____
###Markdown
- **Using Attribute Access *aka Dot Notation*.**

```python
df.series
```
###Code
# Grab a Series using dot notation. Assign it to a variable called 'dept_names'.
# Validate the object type of 'dept_name'.
###Output
_____no_output_____
###Markdown
So What's So Great About a Pandas Series?**A Series...**- can handle any data type.- allows for fast indexing and subsetting of data.- has lots of built-in attributes and methods.- is optimized for Pandas vectorized functions. Attributes**Attributes** return useful information about the Series properties; they don't perform operations or calculations with the Series.- Jupyter Notebook allows you to quickly access a list of available attributes by pressing the tab key after the series name followed by a period or dot; this is called dot notation or attribute access. Common Attributes `.index`, `.values`**The Components of a Pandas Series - Index, Data**- Now that I have some pandas Series to work with, I can look at the components of the Series object using the `.index` and the `.values` attributes.
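For example (a quick illustration with a throwaway Series):

```python
s = pd.Series([10, 20, 30])
s.index     # RangeIndex(start=0, stop=3, step=1)
s.values    # array([10, 20, 30])
```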
###Code
# I can access its autogenerated index by using the .index attribute.
# I can access its data by using the .values attribute.
# I can see that accessing the data in my Series using the .values attribute returns a numpy array.
###Output
_____no_output_____
###Markdown
`.dtype`- The `.dtype` attribute returns the pandas data type for the Series. **Below is a helpful overview of pandas data types and their relation to python and NumPy data types.**
###Code
# What is the data type of our 'colors_series' Series?
# What is the data type of our 'numeric_series' Series?
# What is the data type of our 'names' Series?
###Output
_____no_output_____
###Markdown
`.size`- The `.size` attribute returns an int representing the number of rows in the Series.
###Code
# What is the size of our 'colors_series' Series?
# What is the size of our 'numeric_series' Series?
# What is the size of our 'names' Series?
###Output
_____no_output_____
###Markdown
`.shape`- The `.shape` attribute returns a tuple representing the rows and columns in a DataFrame, but it can also be used on a Series to return the rows.
###Code
# What is the shape of our 'names' Series?
###Output
_____no_output_____
###Markdown
Methods**Methods** used on pandas Series objects often return new Series objects; most also offer parameters with default settings designed to keep the user from mutating the original Series objects. (`inplace=False`)- I can either assign the transformed Series to a variable or adjust my parameters. Be careful about mutating your original data, and always, always confirm that the data you are working with is the data, and data type, that you think you are working with! Now What? `.head()`, `.tail()`, `.sample()`- The `.head(n)` method returns the first n rows in the Series; `n = 5` by default. This method returns a new Series with the same indexing as the original Series. - The `.tail(n)` method returns the last n rows in the Series; `n = 5` by default. Increase or decrease your value for n to return more or less than 5 rows.- The `.sample(n)` method returns a random sample of rows in the Series; `n = 1` by default. Again, the index is retained.
###Code
# Grab the first five rows in our 'names' Series; the default is the first 5 rows.
# Grab the last two rows of the 'names' Series; we can pass 2 as our argument to n.
# Grab a random sample of 10 rows from the 'names' Series; the default argument is 1.
# What type of object is returned by the `.head()`, `.tail()`, or `.sample()` methods?
###Output
_____no_output_____
###Markdown
`.astype()`- The `.astype()` method allows me to convert a Series from one data type to another. - Like most methods, it returns a new transformed Series by default instead of mutating my original data.
###Code
# How can I change the data type of `numeric_series` to an object?
# Did this transform the data type of my 'numeric_series'?
###Output
_____no_output_____
###Markdown
`.value_counts()`- The `.value_counts()` method returns a new Series consisting of a labeled index representing the unique values from the original Series and values representing the frequency each unique value appears in the original Series. - This is an extremely useful method you will find yourself using often with Series containing object and category data types. Below you can see the default settings for the method's parameters.

```python
series.value_counts(
    normalize=False,
    sort=True,
    ascending=False,
    bins=None,
    dropna=True,
)
```
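For example (a quick illustration with the `colors_series` created earlier, in which 'red' appears twice):

```python
colors_series.value_counts()                 # red -> 2, every other color -> 1
colors_series.value_counts(normalize=True)   # red -> 0.25, every other color -> 0.125
```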
###Code
# How can I obtain the frequency of unique values in 'colors_series'?
# How can I obtain the relative frequency of the unique values in 'colors_series'?
###Output
_____no_output_____
###Markdown
`.sort_values()` and `.sort_index()`- These are handy methods that allow you to either sort your values or index respectively in ascending or descending order.
###Code
# How can I obtain my 'colors_series' with the values in alphabetical order?
# How can I reverse the order?
# How can I obtain my 'numeric_series' ordered from least to greatest values?
# How can I reverse the order?
# How can I sort my labeled index in 'diction_series' to be in reverse alphabetical order?
###Output
_____no_output_____
###Markdown
`.describe()`- The `.describe()` method can be used to return descriptive statistics on either a pandas Series or DataFrame object; the information it returns depends on whether it's used on a numerical or non-numerical Series. - *Note that when used on a DataFrame, `.describe()` analyzes only the numerical columns by default. The parameters can be adjusted to include other data types.*

```python
series_or_df.describe(percentiles=None, include=None, exclude=None)
```
###Code
# What does the .describe() method return if our Series values are strings? (Try 'dept_names' or 'colors_series')
# Validate that the .describe() method returns a new Series.
# What does the .describe() method return if our Series values are numeric? (Try 'numeric_series')
###Output
_____no_output_____
###Markdown
`.any()` and `.all()`- The `.any()` method performs a logical `OR` operation on a row or column and returns a bool value indicating whether **any of the elements are True**.
###Code
# Are any of the values in my 'colors_series' 'red'?
# How can I check to see if any of the values in `numeric_series` are less than 0?
###Output
_____no_output_____
###Markdown
- The `.all()` method performs a logical `AND` operation on a row or column and returns a bool value indicating whether **all of the elements are True**.
###Code
# Are all of the values in 'colors_series' 'red'?
# Are all of the values in the 'dept_names' Series 'Customer Service'?
###Output
_____no_output_____
###Markdown
String Methods- **String Methods** perform vectorized string operations on each string value in the original Series and return a transformed copy of the original Series. - We have to use the `.str` attribute to access the string method.

```python
series.str.string_method()
```

- More string methods listed [here](https://docs.python.org/2.5/lib/string-methods.html).
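For example (two quick illustrations using the `colors_series` from earlier; both return new Series):

```python
colors_series.str.len()          # number of characters in each color name
colors_series.str.contains('o')  # True/False: does each color name contain an 'o'?
```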
###Code
# How can I capitalize every string in my 'colors_series'?
# How can I check to see if the string values in my 'colors_series' start with the letter 'r'?
# How could I remove all of the 'e's in my 'colors_series'?
###Output
_____no_output_____
###Markdown
Method Chaining- Since many pandas Series methods return a new Series object, I can call one method after another using dot notation to chain them together.

```python
series.method().method().method()
```
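For instance (a small illustration; each call returns a new Series that the next method operates on):

```python
colors_series.value_counts().sort_index().head(3)
```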
###Code
# Can I generate a boolean Series identifying values in 'colors_series' ending with the letter `d`.
# Can I return the actual values from 'colors_series' ending with the letter 'd'.
# Can I use method chaining to also make those values all uppercased?
###Output
_____no_output_____
###Markdown
`.apply()`- The `.apply()` method accepts a python or NumPy function as an argument and applies that function to each element in my Series. - *`.apply()` does not only accept a built-in function as an argument; you can pass custom and even lambda functions as arguments.*

>**Scenario:** What if I want to know the length of each element in my `colors_series` Series? What if I then want to see the frequency of the unique values in the Series returned?
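For example (one way the scenario above could be approached):

```python
colors_series.apply(len)                   # length of each color name
colors_series.apply(len).value_counts()    # how many color names share each length
```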
###Code
# How can I use `.apply()` with a lambda function to count the letter 'r' in each value in my 'colors_series'?
# Create custom function I can apply to each element in my 'colors_series'; it must take in a string argument.
def red_or_not(string):
if string.lower() == 'red':
return 'red'
else:
return 'not_red'
# How can I use the `.apply()` method with my custom function to return a new Series?
# How can I use method chaining to get a count of each unique value in this new Series?
###Output
_____no_output_____
###Markdown
Remember: Unless I assign the Series returned from using the functions and methods above, my original Series data remains the same. If I want to keep the Series with 'red' and 'not_red' labels, I have to assign it to a variable.
###Code
# Confirm that my 'colors_series' still contains its original values.
###Output
_____no_output_____
###Markdown
`.isin()`- The `.isin()` method returns a boolean Series with the same index as the original Series. - `True` values indicate that the original Series value at a given index position is in the sequence. - `False` values indicate that the original value is not present in the sequence.

```python
series.isin(values)
```
###Code
# Create a list of colors.
my_colors = ['black', 'white', 'red']
# How can I check which values in `colors_series` are in the 'my_colors' list and create a new Series 'bools'?
###Output
_____no_output_____
###Markdown
**This is handy, but what if I want to access the actual observations or rows where the condition is True for being in the `my_colors` list, not just the bool values True or False?** The Indexing Operator `[]`- This is where the pandas index shines; we can select subsets of our data using index labels, index position, or boolean sequences (list, array, Series).- Earlier, I demonstrated that bracket notation, `df['series']` can be used to pull a Series from a pandas DataFrame when a column label is passed into the indexing operator `[]`. - I can also pass a sequence of boolean values to the indexing operator; that sequence could be a list or array, but it can also be another pandas Series **if the index of the boolean Series matches the original Series**. >**Example:** Here I use the boolean Series `bools` that I created above as the selector in the indexing operator for `colors_series`. This returns only the rows in `colors_series` where the value is `True` in our boolean Series, `bools`. - Since I created my boolean Series from my original Series, they share the same index. That's what makes this operation possible.
###Code
# What type of pandas object is my 'bools' Series?
# Which rows meet my conditional above?
###Output
_____no_output_____
###Markdown
**How can I return the actual values from `colors_series` where my condition is being met, the value is `red`, instead of just a True or False value?**
###Code
# Use the boolean Series as a selector for values in 'colors_series' that meet my condition.
# I can skip the middle woman and pass a conditional directly into the indexing operator.
###Output
_____no_output_____
###Markdown
>**Example of Indexing with a Labeled Index**- Recall that our `diction_series` has a labeled index.- Notice that the indexing is inclusive when using index labels.
###Code
# Can I return a subset of the first three rows of 'diction_series' using labels instead of integer positions?
# Can I return a subset of 'diction_series' containing only rows ['a', 'd', 'f']?
###Output
_____no_output_____
###Markdown
Binning Data- I can bin continuous data to convert it to categorical data.- We will look at two different ways I can accomplish binning below. - `.value_counts()` - `pd.cut()`
###Code
# I need a numerical Series to work with here; I'll import the 'tips' dataset from pydataset.
from pydataset import data
tips = data('tips')
tips
# How can I create a Series named `tip` from our tips DataFrame above.
# How can I see the descriptive statistics for this Series?
# How can I create 5 bins of equal size using `.cut()`? What is the data type of this Series of bins?
# How can I return a Series with my unique bin values as the index and the frequency of each bin as the value.
# Is there another way I can bin my 'tip' data get the value counts like I did above? Spoiler alert, Yes!
###Output
_____no_output_____
###Markdown
`.plot()`- **The `.plot()` method** allows us to quickly visualize the data in our Series.- By default, Matplotlib will choose the best type of plot for us.- We can also customize our plot if we like.Check the docs [here](https://pandas.pydata.org/pandas-docs/version/0.24.2/reference/api/pandas.Series.plot.html) for more on the `.plot()` method.
###Code
# How can I make a quick plot of the data in the 'tip' Series? (bar plot)
tip.value_counts(bins=5).plot.bar()
# How can I make a quick plot of the data in the 'tip' Series? (horizontal bar plot using value_count(bins=5))
tip.value_counts(bins=5).sort_values().plot.barh()
# I can clean up my plot and add labels.
tip.value_counts(bins=5).plot.barh(color='thistle',
width=1,
ec='black')
plt.title('Tip Bins')
plt.xlabel('Number of Tips')
plt.ylabel('US $')
# reorder y-axis of horizontal bar chart
plt.gca().invert_yaxis()
plt.show()
###Output
_____no_output_____
###Markdown
`.cut()`- The pandas `.cut()` function allows me to create bins of equal size to convert a continuous variable to a categorical variable if I like. - This function has parameters that make it versatile; I can define my own bin edges and labels.

```python
# Defaults for parameters I will use in this example.
pd.cut(x, bins, labels=None, include_lowest=False)
```

Note: The lower bounds of the bins are open-ended while the upper bounds are closed-ended by default; there are parameters if you want to adjust this behavior.
###Code
# Define bin edges.
bin_edges = [0, 2, 4, 6, 8, 10.01]
# Create a list of bin labels; you should have one less than bin edges.
bin_labels = ['$0-1.99', '$2.00-3.99', '$4.00-5.99', '$6.00-7.99', '$8.00-10.00']
# Use the .cut() function to create 5 bins as defined and labeled and create Series of value_counts sorted by index value.
pd.cut(tip, bins=bin_edges, labels=bin_labels, include_lowest=True).value_counts().sort_index()
# Define bin edges
bin_edges = [0, 2, 4, 6, 8, 10.01]
# Create a list of bin labels
bin_labels = ['$0-2.00', '$2.01-4.00', '$4.01-6.00', '$6.01-8.00', '$8.01-10.00']
# Use the .cut() function to create my 5 equal-sized bins and create a horizontal bar plot to visualize value_counts().
pd.cut(tip, bins=bin_edges, labels=bin_labels, include_lowest=True).value_counts().sort_index().plot.barh(color='thistle', width=1, ec='black')
# Axes labels and plot title
plt.title('Tip Bins')
plt.xlabel('Number of Tips')
plt.ylabel('US $')
# Reorder y-axis of horizontal bar chart
plt.gca().invert_yaxis()
# Clean up plot display
plt.show()
###Output
_____no_output_____ |
examples/non-interactive/importing_and_exporting/LiP_import_export_example.ipynb | ###Markdown
Importing/exporting
###Code
from matador.query import DBQuery
from matador.hull import QueryConvexHull
kwargs = {'composition': ['LiP'], 'summary': True,
'hull_cutoff': 0.05, 'cutoff': [300, 301]}
hull = QueryConvexHull(**kwargs)
###Output
8386 results found for query in ajm.
Creating hull from AJM db structures.
Finding the best calculation set for hull...
possible shock : matched 8383 structures. -> PBE, 300.0 eV, 0.08 1/A
Matched at least 2/3 of total number, composing hull...
[92m[1mComposing hull from set containing possible shock[0m
────────────────────────────────────────────────────────────
Scanning for suitable Li chemical potential...
Using difference crate as chem pot for Li
────────────────────────────────────────────────────────────
Scanning for suitable P chemical potential...
Using contributor visitor as chem pot for P
────────────────────────────────────────────────────────────
Constructing binary hull...
[94m18 structures within 0.05 eV of the hull with chosen chemical potentials.[0m
─────────────────────────────────────────────────────────────────────────────────────────────────────────────
ID !?! Pressure Volume/fu Hull dist./atom Space group Formula # fu Prov.
─────────────────────────────────────────────────────────────────────────────────────────────────────────────
* contributor visitor 0.005 20.677 0.00000 Cmca P 4 ICSD
* hysteria expert 0.012 170.510 0.00000 I41/acd LiP7 8 ICSD
distention cry 0.004 126.142 0.04266 R-3m LiP6 1 AIRSS
infinity throne 0.021 114.089 0.01039 Pna21 LiP5 4 ICSD
proficiency cake -0.009 278.933 0.01584 Pbcn Li3P11 4 ICSD
* percolate copper -0.022 197.601 0.00000 P212121 Li3P7 4 ICSD
salience mountain 0.056 119.112 0.04695 P-1 Li3P4 1 AIRSS
* missal writer -0.026 31.351 0.00000 P21/c LiP 8 ICSD
gauge planes -0.029 178.171 0.04578 P-1 Li6P5 2 AIRSS
butt vest 0.003 141.239 0.01148 C2/m Li5P4 1 ICSD
eulogize mark 0.000 110.235 0.01509 Cmcm Li4P3 2 ICSD
sonata hobbies 0.041 77.160 0.01844 Immm Li3P2 2 AIRSS
incapacitate organization 0.042 120.438 0.04901 Pna21 Li5P3 4 AIRSS
redoubtable bird 0.026 41.984 0.02983 P-62c Li2P 6 ICSD
* nomic guitar -0.016 59.296 0.00000 P63/mmc Li3P 2 AIRSS
rupture sack 0.024 76.434 0.04529 P1 Li4P 3 AIRSS
possible shock 0.005 123.099 0.02873 Pbcn Li6P 4 AIRSS
* difference crate 0.002 20.394 0.00000 R-3m Li 3 ICSD
###Markdown
Dump to json files
###Code
from json import dump, load
for doc in hull.cursor[:5]:
source_root = [src for src in doc['source'] if src.endswith('.res') or src.endswith('.castep')][0].split('/')[-1]
del doc['_id']
with open(source_root + '.json', 'w') as f:
dump(doc, f)
###Output
_____no_output_____
###Markdown
Load from json files
###Code
from glob import glob
json_list = glob('*.json')
hull_cursor = []
for json_file in json_list:
with open(json_file, 'r') as f:
hull_cursor.append(load(f))
hull_cursor[4]
###Output
_____no_output_____ |
Digit_Recognition_With_CNN_On_MNIST_Dataset.ipynb | ###Markdown
1. The model type that we will be using is Sequential. Sequential is the easiest way to build a model in Keras. It allows you to build a model layer by layer.
2. The ‘add()’ function is to add layers to our model.
3. These are convolution layers that will deal with our input images, which are seen as 2-dimensional matrices. Add as many convolutional layers as needed until satisfied.
4. 64 in the first layer and 32 in the second layer are the number of nodes in each layer. This number can be adjusted to be higher or lower, depending on the size of the dataset.
5. Kernel size is the size of the filter matrix for our convolution. So a kernel size of 3 means we will have a 3x3 filter matrix.
6. Activation is the activation function for the layer. The activation function we will be using for our first 2 layers is the ReLU, or Rectified Linear Activation.
7. In between the Conv2D layers and the dense layer, there is a ‘Flatten’ layer. Flatten serves as a connection between the convolution and dense layers.
8. ‘Dense’ is the layer type we will use for our output layer. Dense is a standard layer type that is used in many cases for neural networks.
9. We will have 10 nodes in our output layer, one for each possible outcome (0–9).
10. The activation is ‘softmax’. Softmax makes the output sum up to 1 so the output can be interpreted as probabilities. The model will then make its prediction based on which option has the highest probability.
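A layer stack consistent with this description, and with the `model.summary()` output below, might look like the following sketch (the notebook's original model-definition cell is not shown here, so the exact call is an assumption):

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(64, kernel_size=3, activation='relu', input_shape=(28, 28, 1)))  # 26x26x64
model.add(MaxPooling2D(pool_size=(2, 2)))                                         # 13x13x64
model.add(Conv2D(32, kernel_size=3, activation='relu'))                           # 11x11x32
model.add(MaxPooling2D(pool_size=(2, 2)))                                         # 5x5x32
model.add(Flatten())                                                              # 800
model.add(Dense(10, activation='softmax'))                                        # 10 class probabilities
```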
###Code
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 26, 26, 64) 640
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 13, 13, 64) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 11, 11, 32) 18464
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 5, 5, 32) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 800) 0
_________________________________________________________________
dense_1 (Dense) (None, 10) 8010
=================================================================
Total params: 27,114
Trainable params: 27,114
Non-trainable params: 0
_________________________________________________________________
###Markdown
The summary is textual and includes information about:
1. The layers and their order in the model.
2. The output shape of each layer.
3. The number of parameters (weights) in each layer.
4. The total number of parameters (weights) in the model.

Next, we need to compile our model. Compiling the model takes three parameters: optimizer, loss and metrics.
###Code
# Compile model using accuracy as a measure of model performance
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
1. The optimizer controls the learning rate. We will be using ‘adam’ as our optimizer. Adam is generally a good optimizer to use for many cases. The adam optimizer adjusts the learning rate throughout training.
2. The learning rate determines how fast the optimal weights for the model are calculated. A smaller learning rate may lead to more accurate weights (up to a certain point), but the time it takes to compute the weights will be longer.
3. ‘categorical_crossentropy’ is used for our loss function. This is the most common choice for classification. A lower score indicates that the model is performing better.
4. The ‘accuracy’ metric is used to see the accuracy score on the validation set when we train the model.
###Code
#train model
model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=3)
###Output
Train on 60000 samples, validate on 10000 samples
Epoch 1/3
60000/60000 [==============================] - 148s 2ms/step - loss: 5.5121 - acc: 0.6374 - val_loss: 0.0883 - val_acc: 0.9738
Epoch 2/3
60000/60000 [==============================] - 143s 2ms/step - loss: 0.0774 - acc: 0.9765 - val_loss: 0.0626 - val_acc: 0.9820
Epoch 3/3
60000/60000 [==============================] - 141s 2ms/step - loss: 0.0580 - acc: 0.9829 - val_loss: 0.0568 - val_acc: 0.9831
###Markdown
To train, we will use the ‘fit()’ function on our model with the following parameters: training data (x_train), target data (y_train), validation data, and the number of epochs.
1. x_train: The training data consisting of only the independent factors.
2. y_train: The training data consisting of only the dependent factors.
3. validation_data: For our validation data, we will use the test set provided to us in our dataset, which we have split into x_test and y_test.
4. epochs: one epoch stands for one complete training of the neural network with all samples.
###Code
# Observing predictions for the first 3 images in the test set
preds=model.predict(x_test[:4])
preds
###Output
_____no_output_____
###Markdown
To see the predictions that our model has made for the test data, we can use the predict function. The predict function will give an array with 10 numbers. These numbers are the probabilities that the input image represents each digit (0–9). The array index with the highest number represents the model prediction.
###Code
# For getting the index with maximum value
np.argmax(preds, axis=-1)
# show actual results for the first 4 images in the test set
y_test[:4]
# For getting the index with maximum value
np.argmax(y_test[:4], axis=-1)
# Evaluating the performance on the test set
test_loss, test_acc = model.evaluate(x_test, y_test)
print("Loss: ",test_loss.round(3),"\nAccu: ",test_acc.round(3))
###Output
10000/10000 [==============================] - 8s 778us/step
Loss: 0.057
Accu: 0.983
###Markdown
Visualize Model
###Code
model.layers
# Printing the name of the layers
for layer in model.layers:
print(layer.name, layer.trainable)
for layer in model.layers:
print('Layer Configuration:')
print(layer.get_config(),"\n","------"*20)
###Output
Layer Configuration:
{'kernel_initializer': {'class_name': 'VarianceScaling', 'config': {'seed': None, 'mode': 'fan_avg', 'distribution': 'uniform', 'scale': 1.0}}, 'activation': 'relu', 'padding': 'valid', 'batch_input_shape': (None, 28, 28, 1), 'strides': (1, 1), 'bias_initializer': {'class_name': 'Zeros', 'config': {}}, 'activity_regularizer': None, 'bias_constraint': None, 'bias_regularizer': None, 'dtype': 'float32', 'trainable': True, 'dilation_rate': (1, 1), 'kernel_constraint': None, 'kernel_size': (3, 3), 'name': 'conv2d_1', 'data_format': 'channels_last', 'filters': 64, 'kernel_regularizer': None, 'use_bias': True}
------------------------------------------------------------------------------------------------------------------------
Layer Configuration:
{'trainable': True, 'strides': (2, 2), 'pool_size': (2, 2), 'name': 'max_pooling2d_1', 'padding': 'valid', 'data_format': 'channels_last'}
------------------------------------------------------------------------------------------------------------------------
Layer Configuration:
{'kernel_initializer': {'class_name': 'VarianceScaling', 'config': {'seed': None, 'mode': 'fan_avg', 'distribution': 'uniform', 'scale': 1.0}}, 'filters': 32, 'activation': 'relu', 'bias_regularizer': None, 'strides': (1, 1), 'bias_initializer': {'class_name': 'Zeros', 'config': {}}, 'activity_regularizer': None, 'padding': 'valid', 'data_format': 'channels_last', 'trainable': True, 'dilation_rate': (1, 1), 'kernel_constraint': None, 'kernel_size': (3, 3), 'name': 'conv2d_2', 'bias_constraint': None, 'kernel_regularizer': None, 'use_bias': True}
------------------------------------------------------------------------------------------------------------------------
Layer Configuration:
{'trainable': True, 'strides': (2, 2), 'pool_size': (2, 2), 'name': 'max_pooling2d_2', 'padding': 'valid', 'data_format': 'channels_last'}
------------------------------------------------------------------------------------------------------------------------
Layer Configuration:
{'trainable': True, 'name': 'flatten_1', 'data_format': 'channels_last'}
------------------------------------------------------------------------------------------------------------------------
Layer Configuration:
{'kernel_initializer': {'class_name': 'VarianceScaling', 'config': {'seed': None, 'mode': 'fan_avg', 'distribution': 'uniform', 'scale': 1.0}}, 'activation': 'softmax', 'bias_regularizer': None, 'bias_initializer': {'class_name': 'Zeros', 'config': {}}, 'activity_regularizer': None, 'bias_constraint': None, 'trainable': True, 'kernel_constraint': None, 'name': 'dense_1', 'kernel_regularizer': None, 'units': 10, 'use_bias': True}
------------------------------------------------------------------------------------------------------------------------
###Markdown
The weights of each layer can be obtained using the layer's `get_weights()` method, as in the loop below.
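Printing the full arrays (as done below) is verbose; a more compact alternative, shown here as a sketch, is to list only the shapes of each layer's kernel and bias:

```python
# shapes of the weight tensors (kernel, bias) per layer
for layer in model.layers:
    print(layer.name, [w.shape for w in layer.get_weights()])
```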
###Code
for i in range(len(model.layers)):
print("For Layer ",model.layers[i].name," weights are")
print(model.layers[i].get_weights())
print()
print("----"*30)
###Output
For Layer conv2d_1 weights are
[array([[[[-0.07117332, -0.11871236, -0.02792738, -0.02604584,
-0.12552024, 0.06599326, -0.0009745 , -0.10929321,
-0.00706674, 0.05502931, -0.08452702, -0.04653251,
-0.0949929 , -0.11232311, 0.00024684, 0.00849208,
-0.00574883, -0.0736568 , 0.01780812, -0.00077945,
-0.07600698, -0.07713334, -0.01890288, -0.12654534,
-0.00616101, -0.09747548, -0.01090491, -0.02375196,
0.03713597, -0.07474131, -0.00866111, -0.00204889,
-0.00632484, 0.03857862, -0.05244157, 0.01286643,
-0.0185243 , -0.04976959, -0.08375214, -0.11644132,
-0.03624744, -0.04530561, -0.07460491, -0.0014205 ,
-0.13252297, -0.02572778, -0.03993769, 0.02834358,
-0.10054269, -0.08127081, 0.00123064, -0.0067372 ,
0.02846469, -0.05443987, 0.02492472, -0.00435449,
-0.0030198 , -0.07127823, -0.02900791, -0.09636054,
0.0123684 , -0.16246228, 0.0245565 , -0.09998744]],
[[ 0.08447798, -0.10675451, -0.11540053, 0.01037752,
0.03638961, 0.03950764, -0.05664811, 0.01244815,
-0.00035768, -0.10479114, 0.01640861, -0.10472696,
-0.1353851 , -0.05508336, -0.02732663, -0.11511774,
-0.08889554, -0.07136763, 0.0201474 , -0.06985815,
-0.07117468, -0.12249623, -0.07940701, 0.00898849,
-0.09195764, 0.0561309 , -0.00092454, 0.05771503,
-0.06547861, -0.06580272, -0.04000534, 0.0277759 ,
-0.11919779, 0.03464814, -0.1292196 , 0.0152442 ,
0.06336793, 0.11401469, -0.05180689, -0.03307243,
-0.04246629, -0.09695424, 0.06761556, -0.09985224,
-0.00262992, -0.03270812, 0.08091559, 0.00240913,
-0.06545446, -0.1295374 , 0.03737393, -0.02615711,
-0.04106867, -0.05803075, -0.0033087 , -0.12559319,
-0.00031429, -0.06331053, 0.04220006, -0.06885358,
0.05371145, -0.01354953, -0.1222773 , 0.03474844]],
[[-0.02345826, 0.00879307, -0.03506756, -0.0998463 ,
0.03574207, -0.06883927, -0.09460594, -0.09840403,
-0.05230983, -0.04664407, -0.00508413, -0.08192607,
-0.11315824, -0.00108418, -0.07536927, 0.03852654,
0.03417294, -0.11292624, -0.06174188, -0.09413037,
-0.00140126, -0.03871202, 0.0360465 , 0.00157606,
-0.00231779, -0.08265376, -0.12642808, -0.04307734,
-0.1829269 , -0.12454443, -0.09541831, -0.02079365,
-0.11971967, -0.15446821, -0.08507422, 0.02740215,
-0.01129526, -0.04340963, -0.05075004, -0.11283467,
0.0075107 , -0.12878223, -0.13040459, -0.0483269 ,
-0.01540897, -0.01812512, -0.09962723, -0.11089242,
0.04066715, -0.03661088, 0.00576867, -0.1140487 ,
-0.1165807 , -0.04230535, 0.01364185, -0.00622581,
-0.05396102, -0.04765547, -0.03558381, -0.12614685,
0.06361349, 0.01375276, -0.06466734, -0.06951197]]],
[[[-0.00872445, -0.05021581, -0.00515768, -0.0078065 ,
0.03765333, 0.00746253, -0.04333509, -0.08510514,
-0.0226424 , -0.05700384, -0.00056886, 0.05590802,
-0.08579811, 0.02949394, -0.01391746, -0.04049771,
-0.02890239, -0.05394748, -0.07647687, -0.11196511,
-0.05487207, 0.03229039, 0.07973446, -0.1272497 ,
-0.17779139, 0.02318418, -0.03869567, -0.07362141,
0.02992093, -0.06130052, -0.09674872, 0.04340508,
-0.00952491, -0.03737658, -0.02154541, 0.05971988,
0.02745396, -0.05833017, -0.08287732, 0.05988121,
-0.06774816, 0.02100799, 0.01141225, -0.00072003,
0.00715496, -0.00547277, -0.14209735, -0.11612639,
-0.13624136, -0.02552314, -0.0868283 , -0.04525357,
-0.10614338, -0.117608 , -0.07952435, -0.06351376,
-0.01496877, -0.02894251, -0.07460505, 0.01891474,
-0.06865213, -0.02794718, 0.00132732, -0.1031634 ]],
[[-0.06850489, 0.03387357, -0.05687363, 0.00574512,
-0.07613487, -0.04301169, -0.08362585, 0.06322757,
-0.01905222, -0.13250452, -0.04425525, 0.05665417,
-0.02932111, 0.08929042, -0.10340419, 0.04147007,
-0.07240648, -0.10258965, 0.00392788, -0.01920092,
-0.08376064, 0.00379926, -0.07540207, 0.02632105,
-0.0805808 , -0.0621942 , 0.01209461, -0.06104959,
0.01676912, 0.0794052 , 0.06866191, 0.05021181,
-0.22541064, -0.02771796, -0.05791441, 0.00034 ,
0.00567228, -0.02506464, 0.0224198 , -0.00608086,
-0.03740827, 0.01196067, 0.02913756, -0.02652275,
-0.03672565, -0.01586006, -0.14681304, -0.05059617,
-0.01705231, -0.0349501 , -0.02855689, 0.09042568,
-0.12469415, -0.05441951, 0.00118525, 0.06166201,
-0.01199335, 0.00273925, -0.08333818, 0.02486665,
0.00797554, -0.01194996, 0.02549597, 0.06496098]],
[[ 0.02773764, 0.04820566, -0.00293464, 0.05010955,
-0.04620614, -0.08360271, -0.0066494 , -0.11574119,
0.0169933 , -0.02419385, 0.05776184, 0.00421795,
0.03337151, 0.03932162, -0.02406464, -0.11237382,
-0.12990002, -0.08341685, -0.00645028, -0.07985427,
-0.07991454, -0.13329422, -0.0693965 , 0.01809661,
-0.14013888, -0.09186098, 0.01681847, -0.06660898,
-0.1718421 , 0.05314119, -0.09123161, -0.1154896 ,
0.07425418, -0.1588041 , 0.01623648, -0.08485984,
0.02920865, -0.05555152, 0.00223899, -0.15745491,
0.0059534 , -0.01335897, 0.03421222, -0.0598258 ,
-0.0728912 , -0.03396851, -0.00938856, -0.08656713,
0.0702358 , 0.01179086, -0.07536164, -0.00098328,
-0.03001666, -0.03504293, 0.02902822, -0.10593861,
-0.07953295, 0.10574228, -0.06319083, -0.06887974,
-0.01124017, 0.06895528, -0.12196758, -0.05369453]]],
[[[-0.20815198, -0.00440105, 0.0287188 , -0.03328489,
0.02903937, -0.00152312, -0.00727096, -0.01578625,
-0.02864425, -0.17098002, -0.10597537, -0.12533881,
-0.08826925, 0.02960703, -0.05718121, -0.06383619,
-0.10313582, -0.10480042, -0.10670441, 0.07464282,
-0.07483207, -0.06320179, -0.05682871, -0.1408936 ,
0.02406375, -0.08334149, 0.02730196, -0.09242893,
-0.09937346, -0.03572691, -0.10011245, 0.02903499,
-0.02808065, -0.10052233, -0.02385575, -0.06670771,
-0.020346 , 0.03682933, -0.01903624, 0.06887703,
-0.07739822, -0.12165511, -0.11376353, -0.04472189,
-0.01383531, -0.03941099, -0.11727992, -0.00296587,
-0.03418158, -0.00225437, -0.13556832, -0.07532859,
-0.0394014 , -0.00126334, 0.013274 , -0.02858677,
-0.01914487, -0.00297442, -0.08698839, -0.06130847,
-0.03728948, -0.03530406, 0.02397774, -0.00980016]],
[[-0.01391999, 0.00533542, -0.13022862, -0.09590016,
-0.04021743, -0.06029318, -0.00918405, -0.00960081,
-0.00134189, -0.08668274, -0.05736934, -0.15192612,
-0.06902713, -0.04347692, 0.03018776, -0.09302849,
0.11542557, -0.13387617, -0.07090872, 0.06899968,
-0.09588496, 0.02702403, -0.02122652, 0.02879054,
-0.06693469, -0.04194311, -0.09225386, -0.05814867,
-0.14327322, -0.06111673, -0.06016694, 0.0049184 ,
-0.10998747, -0.04329059, -0.00077693, -0.11546659,
-0.0313705 , -0.07584186, -0.1326769 , -0.02986187,
-0.04949946, 0.04571829, -0.04038511, -0.00684647,
-0.05333999, -0.04389523, -0.04881751, -0.10692581,
-0.06753148, -0.02310853, -0.05022598, -0.11820489,
-0.00323643, -0.03145607, -0.10103579, -0.09857317,
-0.06434342, -0.19839574, -0.11829505, -0.05266193,
-0.07491773, -0.0622046 , 0.03803477, -0.02933186]],
[[-0.1247382 , -0.03485401, 0.04220204, -0.0533133 ,
-0.02622193, 0.03903452, -0.07107251, -0.09940782,
-0.03402586, -0.00651949, -0.10042509, -0.03209064,
0.00152026, -0.08211438, -0.03205695, -0.01295862,
-0.06220527, -0.00539724, 0.0269601 , -0.02895791,
-0.04000992, -0.0183319 , -0.10718589, -0.01822807,
-0.06522793, 0.04155295, -0.114325 , 0.01984718,
0.01465203, -0.06666005, -0.0033835 , -0.06465342,
0.00172313, -0.0771261 , 0.03461384, 0.01092802,
0.0206871 , -0.03581536, -0.00700719, -0.06308696,
-0.00029517, 0.03146335, -0.05338996, -0.07290274,
-0.06172956, -0.07477669, 0.03361305, 0.03839117,
-0.00278286, -0.11538932, 0.0016399 , -0.03960989,
-0.07888297, -0.00318916, -0.05844743, -0.01737376,
-0.00167911, -0.1025303 , 0.00033489, 0.01045327,
-0.11905272, -0.06980276, -0.06009643, -0.03707501]]]],
dtype=float32), array([ 0.16362906, -0.19920488, -0.05131278, -0.03918614, -0.07396127,
-0.11191096, -0.0122976 , 0.09280987, -0.14364952, 0.11108728,
-0.07637112, -0.0727768 , -0.0769791 , -0.19905463, -0.14828241,
-0.05303967, -0.0095891 , 0.08531517, -0.2008735 , -0.03723214,
-0.01810488, -0.14492409, 0.09305369, -0.11942552, -0.02450363,
-0.01291143, -0.15586361, -0.03780345, 0.13438338, -0.0491783 ,
-0.13057396, -0.14469838, -0.04261843, -0.01492063, -0.05406488,
-0.13373847, -0.1871528 , -0.0114229 , 0.2265421 , 0.19165981,
-0.06756905, -0.07845617, -0.10806536, -0.07989664, -0.10426535,
-0.11765559, -0.01571426, -0.06161196, -0.11035869, -0.04827874,
-0.15027952, -0.03222513, 0.09988561, -0.05248236, -0.24445036,
0.17703323, -0.02300324, -0.00808931, -0.09001867, 0.055414 ,
-0.0425432 , -0.00539372, -0.16036756, -0.02117861], dtype=float32)]
------------------------------------------------------------------------------------------------------------------------
For Layer max_pooling2d_1 weights are
[]
------------------------------------------------------------------------------------------------------------------------
For Layer conv2d_2 weights are
[array([[[[ 1.47242755e-01, 2.66789906e-02, 5.04215211e-02, ...,
1.17153395e-02, -8.84918496e-02, 1.49413375e-02],
[-9.45985783e-03, -1.70304760e-01, -1.29538938e-01, ...,
-1.53739909e-02, -1.57493889e-01, -8.63202438e-02],
[ 9.25283507e-02, -8.66336096e-03, -5.41876769e-03, ...,
3.63539569e-02, -5.36373965e-02, 1.83970667e-02],
...,
[ 2.97880471e-02, -7.87177309e-02, -3.33618699e-03, ...,
-5.09566143e-02, 4.74112816e-02, -1.38479456e-01],
[ 2.66195182e-02, -1.25526246e-02, -6.53924197e-02, ...,
-6.34578466e-02, -2.29504704e-02, 3.41749117e-02],
[ 1.10229738e-01, 3.45107391e-02, -1.43238783e-01, ...,
4.32725549e-02, -1.82550214e-02, -8.12208503e-02]],
[[-3.23588774e-02, -1.40136749e-01, 1.12119121e-02, ...,
-6.85858577e-02, -8.10684562e-02, 3.60856391e-02],
[-1.53639978e-02, -9.84892547e-02, -2.57338071e-03, ...,
3.67429070e-02, -1.21523209e-01, -5.12350397e-03],
[-9.01409909e-02, -8.78074914e-02, -1.35933533e-01, ...,
-7.30963647e-02, 9.42765623e-02, 2.47306395e-02],
...,
[-3.83386873e-02, -1.25316992e-01, -1.76616102e-01, ...,
5.03977053e-02, -3.78032140e-02, -1.92265622e-02],
[-6.09666072e-02, -2.14817934e-02, -3.19942944e-02, ...,
2.25282777e-02, 1.82351787e-02, -7.15589896e-02],
[ 1.68286934e-02, 1.67781319e-02, -1.49384633e-01, ...,
-2.91456785e-02, -1.62401311e-02, -7.03098327e-02]],
[[-1.04356920e-02, -7.50258490e-02, -1.05425358e-01, ...,
-8.17408785e-02, -1.10115465e-02, -2.08047722e-02],
[-7.13476585e-03, 1.89891625e-02, 8.59914441e-03, ...,
2.23856810e-02, 6.27234876e-02, 4.09330055e-02],
[-9.55698565e-02, 3.58661450e-02, 3.69029641e-02, ...,
-3.42580006e-02, -9.77186486e-02, 3.97637151e-02],
...,
[ 4.51069735e-02, -5.88098168e-02, -2.25810632e-02, ...,
-4.33140621e-02, -5.65709136e-02, 4.75902110e-03],
[-3.01821642e-02, -5.70200458e-02, -1.06811345e-01, ...,
4.27577235e-02, -4.73121926e-02, -4.00995351e-02],
[-2.85557173e-02, -1.13448285e-01, -1.10264853e-01, ...,
7.31734484e-02, -1.21356174e-01, -1.10395938e-01]]],
[[[-7.42831379e-02, -4.20271643e-02, -4.56541590e-02, ...,
-3.64116170e-02, 9.63895768e-03, -4.15653177e-02],
[ 2.16544550e-02, -3.89177874e-02, -1.61513746e-01, ...,
-5.99172413e-02, -3.30981016e-02, 4.24917489e-02],
[-6.32038563e-02, 4.97998670e-03, 4.99285832e-02, ...,
-3.57105918e-02, -5.50891757e-02, -6.36984110e-02],
...,
[-4.58841696e-02, 4.65462729e-02, 6.51880130e-02, ...,
1.76802650e-02, -3.16915922e-02, -6.87682852e-02],
[-1.05615053e-02, 7.21931309e-02, 4.40869778e-02, ...,
-6.05663173e-02, -5.45338116e-05, 3.61469351e-02],
[-2.04362646e-02, 8.61063674e-02, 1.01525724e-01, ...,
-6.57881983e-03, 1.17480099e-01, 1.65729776e-01]],
[[ 6.03367900e-03, -7.44615644e-02, 6.43182620e-02, ...,
-1.83179732e-02, -1.02882646e-01, -4.33815308e-02],
[-1.33785224e-02, -9.32527855e-02, -2.40945611e-02, ...,
-4.30291668e-02, 1.29592726e-02, 8.10526535e-02],
[ 3.40652317e-02, -1.36186397e-02, -1.45747140e-02, ...,
-9.07168314e-02, -5.16638830e-02, 2.26590615e-02],
...,
[-6.59192652e-02, 1.31926080e-02, 1.57805476e-02, ...,
3.70068918e-03, -1.12008424e-02, -1.18654966e-01],
[-5.67757487e-02, -3.50484289e-02, -6.03253506e-02, ...,
-2.88796425e-02, -5.09755202e-02, -2.39123292e-02],
[ 1.50047513e-02, 4.90384139e-02, -6.05917796e-02, ...,
-5.03946729e-02, -4.91564570e-04, -3.86972278e-02]],
[[-9.70722660e-02, -3.92881893e-02, -3.31214629e-02, ...,
-6.37730956e-02, -5.40726297e-02, -1.52105480e-01],
[-4.12522033e-02, -4.01144736e-02, -5.51840067e-02, ...,
-6.73935935e-02, -8.19227472e-02, -2.70670075e-02],
[-4.29089367e-02, -1.06783845e-02, -1.50408102e-02, ...,
-5.92597015e-02, -5.85464388e-02, 3.39204520e-02],
...,
[ 7.02334568e-03, -3.79257277e-02, 5.41646034e-02, ...,
2.54155900e-02, -1.25564054e-01, 4.14403751e-02],
[-1.15616685e-02, 6.49858043e-02, -6.27052560e-02, ...,
-3.55830975e-02, 8.95072240e-03, -8.53924081e-02],
[-1.25390282e-02, 5.42574786e-02, -1.71353206e-01, ...,
-4.02617604e-02, -5.54625280e-02, -5.63652851e-02]]],
[[[ 1.24008413e-02, -2.69593503e-02, 4.84607033e-02, ...,
-1.27874717e-01, 2.28085816e-02, -8.17716643e-02],
[-2.21765451e-02, 2.11700350e-02, -1.33167401e-01, ...,
-1.06587075e-02, -1.88206974e-02, -8.38784650e-02],
[-9.19862688e-02, -4.81441431e-02, -1.72143113e-02, ...,
-8.73892978e-02, -3.43989879e-02, 8.24080855e-02],
...,
[-1.51772646e-03, 1.09046400e-01, -6.16853572e-02, ...,
2.14315318e-02, 2.30486244e-02, -9.62861720e-03],
[ 2.75206156e-02, 1.04895616e-02, -1.31891191e-01, ...,
-6.64265677e-02, -3.10947467e-02, -5.32123074e-02],
[-1.96635704e-02, -3.76867093e-02, 8.65489319e-02, ...,
-1.22101726e-02, -9.00545046e-02, 8.84499215e-03]],
[[-8.03298652e-02, 4.35877740e-02, 2.04582531e-02, ...,
-3.69669348e-02, -5.39523666e-04, -3.51063758e-02],
[ 5.01956232e-02, -6.86801225e-02, 6.22917525e-02, ...,
-7.33482689e-02, -4.32569385e-02, 5.65941306e-03],
[-3.28259394e-02, 3.74205746e-02, -2.66329534e-02, ...,
-3.10606211e-02, -5.68326339e-02, 2.98417024e-02],
...,
[ 3.26395929e-02, -1.99200232e-02, -8.53372440e-02, ...,
-2.92070955e-02, 2.88725812e-02, -8.85830969e-02],
[-6.83150515e-02, -8.10724571e-02, 3.56491245e-02, ...,
-5.66566736e-02, 4.44304310e-02, -1.27224252e-01],
[-4.78342809e-02, 6.20808452e-03, 8.07235762e-03, ...,
-1.57145709e-02, 6.72117323e-02, -1.25826120e-01]],
[[-7.95219690e-02, 2.43019331e-02, -8.30961540e-02, ...,
-7.69272894e-02, 5.75096870e-04, -8.86841211e-03],
[ 1.55063821e-02, 1.57060511e-02, -5.02336472e-02, ...,
-5.66256382e-02, -7.39021376e-02, 7.32171088e-02],
[ 6.41202554e-02, 1.32158957e-02, -7.18665272e-02, ...,
5.44177070e-02, -3.46358716e-02, -1.02468237e-01],
...,
[-4.28491235e-02, -1.71934769e-01, 4.27224599e-02, ...,
6.17890656e-02, -3.02133523e-02, -6.42241985e-02],
[-5.36799245e-02, -1.55935651e-02, 4.50714529e-02, ...,
-4.25501727e-02, 1.09358709e-02, -9.96129662e-02],
[-4.72083129e-02, 9.64651257e-03, -4.03786227e-02, ...,
6.00905754e-02, -1.57352407e-02, -8.68089683e-03]]]],
dtype=float32), array([-0.06703257, -0.14032975, -0.08035215, -0.1151592 , -0.10963862,
-0.04494528, 0.03309212, -0.01448882, -0.16388717, -0.14389631,
-0.0592902 , -0.04692275, -0.05926466, -0.09937331, -0.042744 ,
-0.1435697 , -0.08504525, -0.04954272, -0.04959323, -0.03587022,
-0.05506749, -0.0513914 , -0.07166238, -0.08292232, 0.05499171,
-0.11737438, 0.08524324, -0.0494863 , -0.09996063, -0.09213842,
-0.06284288, -0.03348019], dtype=float32)]
------------------------------------------------------------------------------------------------------------------------
For Layer max_pooling2d_2 weights are
[]
------------------------------------------------------------------------------------------------------------------------
For Layer flatten_1 weights are
[]
------------------------------------------------------------------------------------------------------------------------
For Layer dense_1 weights are
[array([[-0.0393944 , 0.03685861, 0.10786375, ..., 0.01056521,
0.04718429, 0.05705115],
[ 0.02495749, -0.12434618, 0.06060372, ..., -0.05735463,
0.07334856, -0.00496952],
[-0.17000145, 0.05811244, 0.03101713, ..., 0.05700269,
-0.10992602, -0.12855726],
...,
[-0.15952289, -0.05728431, -0.03810475, ..., -0.10806225,
-0.13424054, -0.0302014 ],
[ 0.05468585, -0.15723929, -0.15520404, ..., 0.0444172 ,
-0.03954192, 0.02277395],
[-0.05103776, -0.06624476, 0.02015143, ..., -0.02508527,
-0.07558399, -0.09718696]], dtype=float32), array([ 0.03967373, 0.07477923, -0.01405334, -0.00678689, -0.0154657 ,
0.00377952, -0.03890229, 0.03050109, 0.05264371, -0.03742404],
dtype=float32)]
------------------------------------------------------------------------------------------------------------------------
###Markdown
Keras also provides a function to create a plot of the network neural network graph that can make more complex models easier to understand.The plot_model() function in Keras will create a plot of your network. This function takes a few useful arguments:1. model: (required) The model that you wish to plot.2. to_file: (required) The name of the file to which to save the plot.3. show_shapes: (optional, defaults to False) Whether or not to show the output shapes of each layer.4. show_layer_names: (optional, defaults to True) Whether or not to show the name for each layer.Example : plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)
###Code
from keras.utils.vis_utils import plot_model
import pydot
plot_model(model, to_file='DigitRecognitionWithANN.png', show_shapes=True, show_layer_names=True)
# Python program to read
# image using matplotlib
# importing matplotlib modules
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
# Read Images
img = mpimg.imread('DigitRecognitionWithANN.png')
# increasing the size of image
plt.figure(figsize=(10,12))
# Output Images
plt.imshow(img)
# for deleting the png file
import os
try:
os.remove("DigitRecognitionWithANN.png")
except:
print("Not Removed")
###Output
_____no_output_____ |
IoTHub_Device_Basics.ipynb | ###Markdown
Setup1. Upgrade ipykernel - necessary to support async io2. Restart the runtime3. Install the Azure IOT Device package2. (May need to) Restart the runtime
###Code
%pip install ipython ipykernel --upgrade
%pip install azure-iot-device
###Output
_____no_output_____
###Markdown
IOT Hub OverviewAzure IoT Hub is a managed service hosted in the cloud that acts as a central message hub for communication between an IoT application and its attached devices. In our simple example, IoT Hub receives messages from devices (real and simulated) and forwards them to Azure Event Hub, which runs as part of the SAS Intelligent Monitoring solution.Multiple IoT Hubs are typically used with a large number of IoT assets.See: https://docs.microsoft.com/en-us/azure/iot-hub/iot-concepts-and-iot-hub AuthenticationBefore a device can connect to IoT Hub, it must be registered in the IoT Hub's device registry. When you do this, the IoT Hub registry generates authentication credentials for the device.IoT Hub supports two types of authentication:- Shared Access Signature (SAS) - a symmetric key sent with each call- X.509 - certificate-based authentication over TLS, the same mechanism that underpins HTTPS The Code
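For reference, a device connection string issued by IoT Hub has the general form `HostName=<your-hub>.azure-devices.net;DeviceId=<device-id>;SharedAccessKey=<key>`. Rather than pasting it into code (as the cell below does for convenience), it can be read from an environment variable; this is a sketch, and the variable name `IOTHUB_DEVICE_CONNECTION_STRING` is the one used in Microsoft's samples.

```python
import os

# read the device connection string from the environment instead of hard-coding it
conn_str = os.getenv("IOTHUB_DEVICE_CONNECTION_STRING")
if conn_str is None:
    raise RuntimeError("Set IOTHUB_DEVICE_CONNECTION_STRING before running this notebook")
```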
###Code
import asyncio
from azure.iot.device.aio import IoTHubDeviceClient
import json
from time import sleep
from datetime import datetime
# In a live environment, these should be loaded from an environment variable, not code
CONN_STR_202801=''
###Output
_____no_output_____
###Markdown
Instructions: Configure your device information1. Replace ```CONN_STR_202801``` with the connection string for your device ID
###Code
MY_DEVICE_ID = '202801'
MY_CONNECTION_STRING = CONN_STR_202801
# Utilities
def SAS_now_string():
return( datetime.now().strftime("%b %d, %Y %I:%M:%S %p") )
###Output
_____no_output_____
###Markdown
Simple Function Example from Microsoft
###Code
async def run_device(my_connection_string):
# Fetch the connection string from an environment variable
#conn_str = os.getenv("IOTHUB_DEVICE_CONNECTION_STRING")
# Create instance of the device client using the authentication provider
device_client = IoTHubDeviceClient.create_from_connection_string(my_connection_string)
# Connect the device client.
await device_client.connect()
# Send a single message
print("Sending message...", 1)
await device_client.send_message("This is a message that is being sent")
print("Message successfully sent!")
# finally, shut down the client
await device_client.shutdown()
await run_device(MY_CONNECTION_STRING)
###Output
_____no_output_____
###Markdown
A Simple Device Simulator Class
###Code
# Define a basic simulator class
class deviceSimulator:
_conn_str = None
_device_client = None
_device_ID = None
def __init__(self, conn_str, device_ID):
self._conn_str = conn_str
self._device_ID = device_ID
async def connect(self):
if self._conn_str is not None:
self._device_client = IoTHubDeviceClient.create_from_connection_string(self._conn_str)
await self._device_client.connect()
async def send_message(self, msg):
print(msg)
await self._device_client.send_message(msg)
async def disconnect(self):
await self._device_client.disconnect()
###Output
_____no_output_____
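The simulator sends plain JSON strings. If you want downstream consumers to see the payload's content type, the azure-iot-device package also lets you wrap the string in a `Message` object; this is a hedged sketch, so check the API of your installed package version before relying on it.

```python
from azure.iot.device import Message

def make_json_message(payload: str) -> Message:
    # wrap the JSON string so consumers can see its content type and encoding
    msg = Message(payload)
    msg.content_type = "application/json"
    msg.content_encoding = "utf-8"
    return msg
```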
###Markdown
Create a Simulator and Send Four Messages
###Code
# Create a device simulator
test_id = MY_DEVICE_ID
test_sim = deviceSimulator(MY_CONNECTION_STRING, test_id)
# Connect to the IoT Hub
await test_sim.connect()
# Send a few test messages
for i in range(0, 4,1):
val4 = round((i/50)+0.387104, 5)
val5 = round(-0.145001 + (i/1000), 5)
val6 = round(0.09452 - (i/100), 5)
val3 = 85 + i
message = json.dumps({"telemetryDataList" :[
{"devId" : test_id, "varId" : "3", "value" : val3,"dateTime" : SAS_now_string()},
{"devId" : test_id, "varId" : "4", "value" : val4,"dateTime" : SAS_now_string()},
{"devId" : test_id, "varId" : "5", "value" : val5,"dateTime" : SAS_now_string()},
{"devId" : test_id, "varId" : "6", "value" : val6,"dateTime" : SAS_now_string()}
]})
await test_sim.send_message(message)
sleep(1)
# When done, disconnect and release resources
await test_sim.disconnect()
###Output
_____no_output_____ |
TSF TASK-2.ipynb | ###Markdown
**TASK-2** **Anurag Ranjan** **Prediction using Unsupervised ML** **Goal:** From the given dataset, predict the optimum number of clusters and represent it visually. Importing required modules.
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn import datasets
pwd
###Output
_____no_output_____
###Markdown
Importing Dataset
###Code
d = datasets.load_iris()
df = pd.DataFrame(d.data, columns = d.feature_names)
df.head()
d.target_names
###Output
_____no_output_____
###Markdown
Checking for null values
###Code
df.isnull().sum()
df.describe()
###Output
_____no_output_____
###Markdown
A simple Box Plotting of the Dataset
###Code
df.plot.box()
###Output
_____no_output_____
###Markdown
Finding Optimal number of Cluster
###Code
x = df.iloc[:, [0, 1, 2, 3]].values
from sklearn.cluster import KMeans
wcss = []
for i in range(1, 11):
kmeans = KMeans(n_clusters = i, init = 'k-means++', max_iter = 400,
n_init = 10, random_state = 0)
kmeans.fit(x)
wcss.append(kmeans.inertia_)
plt.plot(range(1, 11), wcss, color='g')
plt.title('elbow method')
plt.xlabel('Number of clusters')
plt.ylabel('wcss')
plt.show()
###Output
_____no_output_____
###Markdown
The number of clusters is 3 (according to the elbow plot above). The optimum number of clusters is where the elbow occurs, i.e. the point after which the within-cluster sum of squares (WCSS) no longer decreases significantly with each additional cluster. Training the model with the optimal number of clusters
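One way to make the "doesn't decrease significantly" criterion concrete is to print the relative WCSS drop between consecutive cluster counts; this is a small sketch that reuses the `wcss` list computed above.

```python
# relative WCSS reduction when moving from k to k+1 clusters
for k in range(1, len(wcss)):
    drop = (wcss[k - 1] - wcss[k]) / wcss[k - 1] * 100
    print(f"k={k} -> k={k + 1}: {drop:.1f}% reduction")
```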
###Code
kmeans = KMeans(n_clusters = 3, init = "k-means++", max_iter = 300,
n_init = 10, random_state = 0)
y_kmeans = kmeans.fit_predict(x)
###Output
_____no_output_____
###Markdown
Visualising the clusters
###Code
plt.scatter(x[y_kmeans == 0, 0], x[y_kmeans ==0, 1],
s = 50, c = 'r', label = 'D-setosa')
plt.scatter(x[y_kmeans == 1, 0], x[y_kmeans ==1, 1],
s = 50, c = 'b', label = 'D-versicolour')
plt.scatter(x[y_kmeans == 2, 0], x[y_kmeans == 2, 1],
s = 50, c = 'g', label = 'D-virginica')
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:,1],
s = 100, c = 'yellow', label = 'Centroids')
plt.legend(loc='upper center', bbox_to_anchor=(0.5, 1.00), shadow=True, ncol=2)
###Output
_____no_output_____ |
.ipynb_checkpoints/1main-v8-sparse0.5-ln4-checkpoint.ipynb | ###Markdown
Network inference of categorical variables: non-sequential data
###Code
import sys
import numpy as np
from scipy import linalg
from sklearn.preprocessing import OneHotEncoder
import matplotlib.pyplot as plt
%matplotlib inline
import inference
# setting parameter:
np.random.seed(1)
n = 20 # number of positions
m = 3 # number of values at each position
l = int(4*((n*m)**2)) # number of samples
g = 2.
sp = 0. # degree of sparsity
nm = n*m
def itab(n,m):
i1 = np.zeros(n)
i2 = np.zeros(n)
for i in range(n):
i1[i] = i*m
i2[i] = (i+1)*m
return i1.astype(int),i2.astype(int)
# generate coupling matrix w0:
def generate_interactions(n,m,g,sp):
nm = n*m
w = np.random.normal(0.0,g/np.sqrt(nm),size=(nm,nm))
i1tab,i2tab = itab(n,m)
for i in range(n):
for j in range(n):
if (j != i) and (np.random.rand() < sp):
w[i1tab[i]:i2tab[i],i1tab[j]:i2tab[j]] = 0.
for i in range(n):
i1,i2 = i1tab[i],i2tab[i]
w[i1:i2,:] -= w[i1:i2,:].mean(axis=0)
for i in range(n):
i1,i2 = i1tab[i],i2tab[i]
w[i1:i2,i1:i2] = 0. # no self-interactions
for i in range(nm):
for j in range(nm):
if j > i: w[i,j] = w[j,i]
return w
i1tab,i2tab = itab(n,m)
w0 = generate_interactions(n,m,g,sp)
plt.imshow(w0,cmap='rainbow',origin='lower')
plt.clim(-0.5,0.5)
plt.colorbar(fraction=0.045, pad=0.05,ticks=[-0.5,0,0.5])
plt.show()
#print(w0)
# 2018.11.07: equilibrium
def generate_sequences_vp_tai(w,n,m,l):
nm = n*m
nrepeat = 50*n
nrelax = m
b = np.zeros(nm)
s0 = np.random.randint(0,m,size=(l,n)) # integer values
enc = OneHotEncoder(n_values=m)
s = enc.fit_transform(s0).toarray()
e_old = np.sum(s*(s.dot(w.T)),axis=1)
for irepeat in range(nrepeat):
for i in range(n):
for irelax in range(nrelax):
r_trial = np.random.randint(0,m,size=l)
s0_trial = s0.copy()
s0_trial[:,i] = r_trial
s = enc.fit_transform(s0_trial).toarray()
e_new = np.sum(s*(s.dot(w.T)),axis=1)
t = np.exp(e_new - e_old) > np.random.rand(l)
s0[t,i] = r_trial[t]
e_old[t] = e_new[t]
if irepeat%(5*n) == 0: print(irepeat,np.mean(e_old))
return enc.fit_transform(s0).toarray()
s = generate_sequences_vp_tai(w0,n,m,l)
## 2018.11.07: for non-sequential data
def fit_additive(s,n,m):
nloop = 10
i1tab,i2tab = itab(n,m)
nm = n*m
nm1 = nm - m
w_infer = np.zeros((nm,nm))
for i in range(n):
i1,i2 = i1tab[i],i2tab[i]
# remove column i
x = np.hstack([s[:,:i1],s[:,i2:]])
x_av = np.mean(x,axis=0)
dx = x - x_av
c = np.cov(dx,rowvar=False,bias=True)
c_inv = linalg.pinv(c,rcond=1e-15)
#print(c_inv.shape)
h = s[:,i1:i2].copy()
for iloop in range(nloop):
h_av = h.mean(axis=0)
dh = h - h_av
dhdx = dh[:,:,np.newaxis]*dx[:,np.newaxis,:]
dhdx_av = dhdx.mean(axis=0)
w = np.dot(dhdx_av,c_inv)
#w = w - w.mean(axis=0)
h = np.dot(x,w.T)
p = np.exp(h)
p_sum = p.sum(axis=1)
#p /= p_sum[:,np.newaxis]
for k in range(m):
p[:,k] = p[:,k]/p_sum[:]
h += s[:,i1:i2] - p
w_infer[i1:i2,:i1] = w[:,:i1]
w_infer[i1:i2,i2:] = w[:,i1:]
return w_infer
w2 = fit_additive(s,n,m)
plt.plot([-1,1],[-1,1],'r--')
plt.scatter(w0,w2)
def fit_multiplicative(s,n,m,l):
i1tab,i2tab = itab(n,m)
nloop = 10
nm1 = nm - m
w_infer = np.zeros((nm,nm))
wini = np.random.normal(0.0,1./np.sqrt(nm),size=(nm,nm1))
for i in range(n):
i1,i2 = i1tab[i],i2tab[i]
x = np.hstack([s[:,:i1],s[:,i2:]])
y = s.copy()
# covariance[ia,ib]
cab_inv = np.empty((m,m,nm1,nm1))
eps = np.empty((m,m,l))
for ia in range(m):
for ib in range(m):
if ib != ia:
eps[ia,ib,:] = y[:,i1+ia] - y[:,i1+ib]
which_ab = eps[ia,ib,:] !=0.
xab = x[which_ab]
# ----------------------------
xab_av = np.mean(xab,axis=0)
dxab = xab - xab_av
cab = np.cov(dxab,rowvar=False,bias=True)
cab_inv[ia,ib,:,:] = linalg.pinv(cab,rcond=1e-15)
w = wini[i1:i2,:].copy()
cost = np.full(nloop,100.)
for iloop in range(nloop):
h = np.dot(x,w.T)
# stopping criterion --------------------
p = np.exp(h)
p_sum = p.sum(axis=1)
p /= p_sum[:,np.newaxis]
cost[iloop] = ((y[:,i1:i2] - p[:,:])**2).mean()
if iloop > 1 and cost[iloop] >= cost[iloop-1]: break
for ia in range(m):
wa = np.zeros(nm1)
for ib in range(m):
if ib != ia:
which_ab = eps[ia,ib,:] !=0.
eps_ab = eps[ia,ib,which_ab]
xab = x[which_ab]
# ----------------------------
xab_av = np.mean(xab,axis=0)
dxab = xab - xab_av
h_ab = h[which_ab,ia] - h[which_ab,ib]
ha = np.divide(eps_ab*h_ab,np.tanh(h_ab/2.), out=np.zeros_like(h_ab), where=h_ab!=0)
dhdx = (ha - ha.mean())[:,np.newaxis]*dxab
dhdx_av = dhdx.mean(axis=0)
wab = cab_inv[ia,ib,:,:].dot(dhdx_av) # wa - wb
wa += wab
w[ia,:] = wa/m
w_infer[i1:i2,:i1] = w[:,:i1]
w_infer[i1:i2,i2:] = w[:,i1:]
return w_infer
w_infer = fit_multiplicative(s,n,m,l)
plt.plot([-1,1],[-1,1],'r--')
plt.scatter(w0,w_infer)
#plt.scatter(w0[0:3,3:],w[0:3,:])
###Output
_____no_output_____ |
doc/euclidean/natural-non-uniform.ipynb | ###Markdown
This notebook is part of https://github.com/AudioSceneDescriptionFormat/splines, see also https://splines.readthedocs.io/.[back to overview](natural.ipynb) Non-Uniform Natural SplinesThe derivation is similar to[the uniform case](natural-uniform.ipynb),but this time the parameter intervals can have arbitrary values.
###Code
import sympy as sp
sp.init_printing(order='grevlex')
from utility import NamedExpression
t = sp.symbols('t')
###Output
_____no_output_____
###Markdown
Just like in the uniform case,we are considering two adjacent spline segments,but this time we must allow arbitrary parameter values:
###Code
t3, t4, t5 = sp.symbols('t3:6')
b_monomial = sp.Matrix([t**3, t**2, t, 1]).T
b_monomial
coefficients3 = sp.symbols('a:dbm3')[::-1]
coefficients4 = sp.symbols('a:dbm4')[::-1]
b_monomial.dot(coefficients3)
p3 = NamedExpression(
'pbm3',
b_monomial.dot(coefficients3).subs(t, (t - t3)/(t4 - t3)))
p4 = NamedExpression(
'pbm4',
b_monomial.dot(coefficients4).subs(t, (t - t4)/(t5 - t4)))
display(p3, p4)
pd3 = p3.diff(t)
pd4 = p4.diff(t)
display(pd3, pd4)
equations = [
p3.evaluated_at(t, t3).with_name('xbm3'),
p3.evaluated_at(t, t4).with_name('xbm4'),
p4.evaluated_at(t, t4).with_name('xbm4'),
p4.evaluated_at(t, t5).with_name('xbm5'),
pd3.evaluated_at(t, t3).with_name('xbmdot3'),
pd3.evaluated_at(t, t4).with_name('xbmdot4'),
pd4.evaluated_at(t, t4).with_name('xbmdot4'),
pd4.evaluated_at(t, t5).with_name('xbmdot5'),
]
###Output
_____no_output_____
###Markdown
We introduce a few new symbols to simplify the display,but we keep calculating with $t_i$:
###Code
deltas = {
t3: 0,
t4: sp.Symbol('Delta3'),
t5: sp.Symbol('Delta3') + sp.Symbol('Delta4'),
}
for e in equations:
display(e.subs(deltas))
coefficients = sp.solve(equations, coefficients3 + coefficients4)
for c, e in coefficients.items():
display(NamedExpression(c, e.subs(deltas)))
pdd3 = pd3.diff(t)
pdd4 = pd4.diff(t)
display(pdd3, pdd4)
sp.Eq(pdd3.expr.subs(t, t4), pdd4.expr.subs(t, t4))
_.subs(coefficients).subs(deltas).simplify()
###Output
_____no_output_____ |
Notebooks/weighted_stats_xarray.ipynb | ###Markdown
How to deal with weighted statistics (with xarray)Xarray introduced weighted statistics in v0.15.1 (23 Mar 2020). Here we take a quick look at how to make use of this. It's a good time-saving approach, since broadcasting seems to work well.
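As a self-contained illustration of the API (toy data, not the FLNT file used below), a weighted reduction is just `.weighted(w)` followed by the usual reduction call:

```python
import numpy as np
import xarray as xr

# toy field on a 3-point latitude axis
da = xr.DataArray([1.0, 2.0, 3.0], dims="lat", coords={"lat": [0.0, 30.0, 60.0]})
w = np.cos(np.deg2rad(da.lat))       # cos(latitude) weights

print(da.weighted(w).mean("lat"))    # weighted mean
print(da.weighted(w).sum("lat"))     # weighted sum
```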
###Code
import xarray as xr
# some data
ds = xr.open_dataset('/Users/brianpm/Dropbox/DataTemporary/f.e20.F2000climo.f09_f09.ag.release_tag.cam.h0.0001-01.ncrcat.FLNT.nc')
# don't worry about correcting time right now
X = ds['FLNT']
X
lat = ds['lat']
import numpy as np
# imagine you don't have any way to get weights:
weights_uni = xr.DataArray(1.0)
weighted_uniform = X.weighted(weights_uni)
# average over spatial dims
dims = X.dims
avgdims = [dim for dim in X.dims if dim != 'time']
print(avgdims)
x_avg_uniwgt = weighted_uniform.mean(dim=avgdims)
x_avg_uniwgt
# cos(lat) weights:
weights = np.cos(np.deg2rad(X.lat))
weights.name = "weights"
weighted_coslat = X.weighted(weights)
# average over spatial dims
dims = X.dims
avgdims = [dim for dim in X.dims if dim != 'time']
print(avgdims)
x_avg_coslat = weighted_coslat.mean(dim=avgdims)
x_avg_coslat
###Output
['lat', 'lon']
|
Integracao_por_origem_exame_HSL_v3.ipynb | ###Markdown
**HOSPITAL SÍRIO-LIBANÊS (HSL) DATA** Date: 19/10/2021. Filipe Loyola Lopes. Overview: - Analysis of the data with a severity label derived from the origin of each exam. - Exam origins: emergency room (pronto socorro), ward admission (internação) or ICU (UTI). - Patients were split into four groups and each group was labeled SEVERE (GRAVE) or NOT_SEVERE (NÃO_GRAVE), as follows: GRUPO_0 - patients whose exams come only from the emergency room (NOT_SEVERE); GRUPO_1 - patients whose exams come from the emergency room and ward (NOT_SEVERE); GRUPO_2 - patients whose exams come from the emergency room and ICU (SEVERE); GRUPO_3 - patients whose exams come from the emergency room, ward and ICU (SEVERE). Useful links: https://www.vooo.pro/insights/12-tecnicas-pandas-uteis-em-python-para-manipulacao-de-dados/ https://medium.com/data-hackers/pandas-combinando-data-frames-com-merge-e-concat-10e7d07ca5ec https://minerandodados.com.br/analise-de-dados-com-python-usando-pandas/
###Code
# Libraries
import numpy as np
import pandas as pd
from pandas import DataFrame
import csv
from numpy import mean
from numpy import std
from numpy import correlate
from numpy.random import randn
from numpy.random import seed
from matplotlib import pyplot
import matplotlib.pyplot as plt
import seaborn as sns
import pandas_profiling
from google.colab import files
import datetime as dt
from matplotlib import pyplot as plt
plt.style.use('default')
#%matplotlib inline
import seaborn as sns
import warnings
import datetime as dt
from datetime import date
###Output
_____no_output_____
###Markdown
**DATA INTEGRATION**
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
###Markdown
**DATASET HSL_PACIENTES**
###Code
# arquivo "HSL_Pacientes_3.csv" referente a janeiro 2021
pacientes = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/2021 dezembro Artigo/HSL_Pacientes_3.csv', sep='|')
print(pacientes.shape)
pacientes.head(3)
# checking for duplicate values
pacientes['ID_PACIENTE'].nunique()
###Output
_____no_output_____
###Markdown
**DATASET HSL_EXAMES**
###Code
# arquivo "HSL_Exames_3.csv"
sirio_libanes = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/2021 dezembro Artigo/HSL_Exames_3.csv', sep='|')
print(sirio_libanes.shape)
sirio_libanes.head(2)
# Dropping duplicated rows
sirio_libanes = sirio_libanes.drop_duplicates()
sirio_libanes.shape
# checking unique analytes
sirio_libanes['DE_ANALITO'].nunique()
###Output
_____no_output_____
###Markdown
**DATASET HSL_DESFECHO**
###Code
# arquivo "HSL_Desfechos_3.csv"
desfecho = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/2021 dezembro Artigo/HSL_Desfechos_3.csv', sep='|')
desfecho.shape
# Dropping duplicated rows
desfecho = desfecho.drop_duplicates()
desfecho.shape
desfecho.head(3)
desfecho['ID_PACIENTE'].nunique()
desfecho['DE_DESFECHO'].value_counts()
desfecho['DE_TIPO_ATENDIMENTO'].value_counts()
###Output
_____no_output_____
###Markdown
**MERGING DATASETS** Exams and outcomes
###Code
# add the columns of the 'desfecho' dataset alongside the rows with matching ID_PACIENTE and ID_ATENDIMENTO
sirio = sirio_libanes.merge(desfecho, on = ["ID_PACIENTE", "ID_ATENDIMENTO"], how = "left")
sirio.head(3)
sirio.shape
sirio.head(1)
pacientes.head(1)
###Output
_____no_output_____
###Markdown
https://medium.com/data-hackers/pandas-combinando-data-frames-com-merge-e-concat-10e7d07ca5ec Obtendo SEXO e Ano de nascimento da planilha HSL_PACIENTES
###Code
pacientes_2 = pacientes[['ID_PACIENTE','aa_nascimento','IC_SEXO']]
pacientes_2.head(1)
###Output
_____no_output_____
###Markdown
**DATASET SIRIO**
###Code
# add the aa_nascimento column from the pacientes dataframe to sirio
sirio = sirio.merge(pacientes_2, on=['ID_PACIENTE'], how='left')
sirio.head(3)
sirio.shape
sirio['ID_PACIENTE'].nunique()
###Output
_____no_output_____
###Markdown
--- **CHECKING FOR NULL VALUES**
###Code
# deep copy of the dataframe so the original df is not modified
sirio2 = sirio.copy(deep=True)
# Keep only the last exam when there are repeated exams on the same day.
sirio2 = sirio2.groupby(['ID_PACIENTE', 'ID_ATENDIMENTO','DT_COLETA','DE_ANALITO']).agg({'DE_RESULTADO' : ['last']}).reset_index()
# flattening the two-level column names
sirio2.columns = [ '_'.join(x) for x in sirio2.columns ]
# creating a key from ID_PACIENTE, ID_ATENDIMENTO and DT_COLETA
sirio2['chave'] = sirio2['ID_PACIENTE_']+'.'+sirio2['ID_ATENDIMENTO_']+'.'+sirio2['DT_COLETA_']
sirio2 = sirio2[['chave','DE_ANALITO_','DE_RESULTADO_last']]
sirio2.columns = ['chave', 'analito','resultado']
print(sirio2.shape)
sirio2.head(3)
sirio_pivot = sirio2.pivot(index='chave',
columns='analito',
values='resultado').reset_index()
print(sirio_pivot.shape)
sirio_pivot.head()
#sirio_pivot.to_csv('sirio_pivot.csv', sep='|', encoding='utf-8') # gera csv co
valores_nulos = pd.DataFrame()
valores_nulos['Null'] = sirio_pivot.isnull().sum()
valores_nulos = valores_nulos.reset_index()
#valores_nulos = valores_nulos.T
# Get the indexes of rows where column 'analito' has the value 'chave'
indexNames = valores_nulos[ valores_nulos['analito'] == 'chave' ].index
# Delete these row indexes from dataFrame
valores_nulos.drop(indexNames , inplace=True)
print(valores_nulos.head())
print("\nshape: ", valores_nulos.shape, "\n")
valores_nulos.describe()
# There are 39104 rows in total
# Converting the absolute counts into percentages:
# Mean
media_null = (37753.619497 / 39104)*100
print("Média: ", media_null)
# standard deviation
desvio_null = (4836.900438 / 39104)*100
print("\nDesvio padrão: ", desvio_null)
# minimum
minimo_null = (11354 / 39104)*100
print("\nMínimo: ", minimo_null)
# maximum
maximo_null = (39103 / 39104)*100
print("\nMáximo: ", maximo_null)
###Output
Média: 96.5466947038666
Desvio padrão: 12.36932395151391
Mínimo: 29.035392798690673
Máximo: 99.9974427168576
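The percentages above are computed from hard-coded numbers; a sketch that derives them directly from the pivoted table (so they stay correct if the data changes):

```python
# percentage of missing values per analyte, computed straight from the table
null_pct = sirio_pivot.drop(columns=['chave']).isnull().mean() * 100
print(null_pct.describe())
```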
###Markdown
--- **FILTER 1: MANUAL ATTRIBUTE SELECTION**
###Code
# Dropping unnecessary columns
df_sirio = sirio.drop(columns=['CD_UNIDADE', 'DE_VALOR_REFERENCIA'])
print(df_sirio.shape)
df_sirio.head(3)
###Output
(1436537, 15)
###Markdown
DATETIME
###Code
# Date format
df_sirio['DT_ATENDIMENTO'] = pd.to_datetime(df_sirio['DT_ATENDIMENTO'])
df_sirio['DT_COLETA'] = pd.to_datetime(df_sirio['DT_COLETA'])
df_sirio['DT_DESFECHO'] = pd.to_datetime(df_sirio['DT_COLETA'])
###Output
_____no_output_____
###Markdown
**FILTER 2: CHECKING FOR COVID-POSITIVE PATIENTS**
###Code
tipos_exames = df_sirio['DE_EXAME'].value_counts()
print(tipos_exames.shape)
tipos_exames
tipos_exames.to_csv('tipos_exames.csv', sep='|', encoding='utf-8') # writes a csv with the exam types
#**List of COVID exams**
#By analyzing the exam types, those corresponding to COVID-19 tests were identified (ls_exames_covid)
ls_exames_covid = ['COVID-19-PCR para SARS-COV-2, Vários Materiais (Fleury)',
'COVID-19-Sorologia IgM e IgG por quimiluminescência, soro',
'Detecção de Coronavírus (NCoV-2019) POR PCR (Anatomia Patológica)',
'COVID-19-Teste Rápido (IgM e IgG), soro',
'COVID-19, anticorpos IGA e IGG, soro',
'Sars Cov-2, Teste Molecular Rápido Para Detecção, Vários Materiais',
'Sorologia - Coronavírus, IgG',
'Sorologia - Coronavírus, IgA']
df_exames_covid = df_sirio.loc[df_sirio['DE_EXAME'].isin(ls_exames_covid)]
#df_exames_covid.to_csv('exames_covid', sep='\t', encoding='utf-8')
df_exames_covid.shape
# Result types for the COVID exams
resultados_covid = df_exames_covid['DE_RESULTADO'].value_counts()
#resultados_covid.to_csv('tipos_resultados.csv', sep='\t', encoding='utf-8')
resultados_covid.shape
# The results were reviewed to identify those that indicate a POSITIVE COVID test (ls_resultados_positivos)
ls_resultados_positivos = ['DETECTADO',
'DETECTADO (POSITIVO)',
'REAGENTE',
'Detectados anticorpos da classe IgG contra SARS-CoV-2, Este perfil é compatível com infecção pregressa, Estudos demonstram que, sobretudo em pessoas que apresentaram quadro clínico leve ou não apresentaram sintomas, os níveis de anticorpos podem diminuir ao longo do tempo, podendo inclusive, se tornar indetectáveis (negativos), O papel destes anticorpos na proteção contra reinfecção não é completamente estabelecido,',
'Amostra REAGENTE para IgG contra SARS-CoV-2,',
'Detectados anticorpos das classes IgM e IgG contra SARS-CoV-2, Este perfil sugere infecção recente, Estudos demonstram que, sobretudo em pessoas que apresentaram quadro clínico leve ou não apresentaram sintomas, os níveis de anticorpos podem diminuir ao longo do tempo, podendo inclusive, se tornar indetectáveis (negativos), O papel destes anticorpos na proteção contra reinfecção não é completamente estabelecido,',
'Amostra REAGENTE para IgM e IgG contra SARS-CoV-2,',
'Evidência sorológica de infecção recente por SARS-CoV-2,',
'Detectados anticorpos da classe IgM contra SARS-CoV-2, Este perfil é compatível com soroconversão inicial ou produção de baixos níveis de anticorpos da classe IgG, Estudos demonstram que, sobretudo em pessoas que apresentaram quadro clínico leve ou não apresentaram sintomas, a soroconversão pode ocorrer mais tardiamente, em baixos níveis de anticorpos, ou mesmo não ocorrer, Sugere-se seguimento sorológico para avaliar a soroconversão de IgG em, no mínimo, 7 dias,',
'Evidência sorológica de infecção pregressa por SARS-CoV-2,',
'Detectados anticorpos totais contra SARS-CoV-2, porém não foi possível definir, nesta amostra, a(s) classe(s) de imunoglobulina(s) presente(s) (IgM e/ou IgG), Estudos demonstram que, sobretudo em pessoas que apresentaram quadro clínico leve ou não apresentaram sintomas, a soroconversão pode ocorrer mais tardiamente ou em baixos níveis de anticorpos, A aparente discrepância observada entre as metodologias pode se dever à diferença de sensibilidade ou à utilização de antígenos distintos, Sugere-se o seguimento sorológico em, no mínimo, 7 dias,',
'Amostra REAGENTE para anticorpos contra SARS-CoV-2,',
'Amostra REAGENTE para IgM contra SARS-CoV-2,',
'Detectados anticorpos da classe IgM contra SARS-CoV-2, em apenas uma das metodologias utilizadas, A possibilidade de falsa reatividade não pode ser descartada, Sugere-se seguimento sorológico em, no mínimo, 7 dias,',
'Amostra REAGENTE para IgG contra SARS-CoV-2, em apenas uma metodologia,',
'Amostra REAGENTE para IgM contra SARS-CoV-2, em apenas uma metodologia,',
'Detectados anticorpos da classe IgG contra SARS-CoV-2 em baixos níveis, em apenas uma das metodologias utilizadas, A possibilidade de falsa reatividade não pode ser descartada, embora a aparente discrepância entre as metodologias possa se dever às diferenças de sensibilidade ou à utilização de antígenos distintos, Sugere-se seguimento sorológico em, no mínimo, 7 dias,',
'Possível evidência sorológica de infecção recente por SARS-CoV-2,',
'Detectados anticorpos da classe IgG contra SARS-CoV-2 em baixos níveis, em apenas uma das metodologias utilizadas, Este perfil pode se dever à soroconversão inicial ou à produção de baixos níveis de anticorpos, contudo, a possibilidade de falsa reatividade não pode ser descartada, Sugere-se seguimento sorológico em, no mínimo, 7 dias,',
'Detectados anticorpos da classe IgG contra SARS-CoV-2, Este perfil é compatível com infecção pregressa, Estudos mostram que, sobretudo em pessoas que apresentaram quadro clínico leve ou não apresentaram sintomas, os níveis de anticorpos podem diminuir ao longo do tempo, podendo inclusive, tornar-se negativos, O papel destes anticorpos na proteção contra reinfecção não é completamente estabelecido,',
'Possível evidência sorológica de infecção recente por SARS-COV-2,',
'O resultado sugere que já tenham transcorrido mais de 3 semanas da infecção aguda, A capacidade protetora dos anticorpos da classe IgG não é completamente estabelecida,;'
]
df_covid_positivo = df_exames_covid.loc[df_exames_covid['DE_RESULTADO'].isin(ls_resultados_positivos)]
df_covid_positivo.shape
# number of COVID-positive patients
df_covid_positivo['ID_PACIENTE'].nunique()
# COVID-positive patients
pacientes_positivos = df_covid_positivo['ID_PACIENTE'].unique()
pacientes_positivos
df_covid_positivo = df_sirio.loc[df_sirio['ID_PACIENTE'].isin(pacientes_positivos)]
print(df_covid_positivo.shape)
df_covid_positivo['ID_PACIENTE'].nunique()
# COVID-negative patients
df_covid_negativo = df_sirio.loc[~df_sirio['ID_PACIENTE'].isin(pacientes_positivos)]
print(df_covid_negativo.shape)
print(df_covid_negativo['ID_PACIENTE'].nunique())
# checking the percentage of patients per outcome among those who did not have COVID:
desfechos_pacientes_sem_covid = df_covid_negativo['DE_DESFECHO'].value_counts()
desfechos_pacientes_sem_covid
df_covid_negativo.loc[df_covid_negativo['DE_DESFECHO'] =='Óbito após 48hs de internação sem necrópsia'].value_counts()
# POSITIVE PATIENTS
df_hsl = df_covid_positivo
df_hsl['DE_ORIGEM'].value_counts()
df_hsl.head(3)
# grouping patients to get the latest admission date, for patients with more than one admission date
atendimento_paciente = df_hsl.groupby(['ID_PACIENTE']).agg({'DT_ATENDIMENTO': ['max']}).reset_index()
atendimento_paciente.columns=['ID_PACIENTE', 'DT_ATENDIMENTO_MAXIMA']
atendimento_paciente['DT_ATENDIMENTO_MAXIMA'] = pd.to_datetime(atendimento_paciente['DT_ATENDIMENTO_MAXIMA'])
atendimento_paciente.head(3)
# merging datasets to support the next filter: COVID exam up to 15 days after DT_ATENDIMENTO_MAXIMA
df_hsl_temp = df_hsl.merge(atendimento_paciente, on=['ID_PACIENTE'], how='left')
df_hsl_temp.head(3)
# checking whether there is a difference between DT_ATENDIMENTO and DT_ATENDIMENTO_MAXIMA
df_hsl_temp['delta_dt_atendimento'] = (df_hsl_temp['DT_ATENDIMENTO_MAXIMA']-df_hsl_temp['DT_ATENDIMENTO']).dt.days
df_hsl_temp.head(3)
# Keeping only patients with COVID confirmed up to 15 days after admission
df_hsl_temp['delta_covid_positivo'] = (df_hsl_temp['DT_COLETA'] - df_hsl_temp['DT_ATENDIMENTO_MAXIMA']).dt.days
print(df_hsl_temp['delta_covid_positivo'].value_counts())
#delta_covid_positivo = df_hsl_temp['delta_covid_positivo'].value_counts()
#delta_covid_positivo.to_csv('Delta_covid_positivo.csv', sep='|', encoding='utf8')
# Selecting only rows with delta_covid_positivo <= 15 in order to select those patients
quinze_dias = df_hsl_temp[df_hsl_temp['delta_covid_positivo']<=15]
quinze_dias = quinze_dias[quinze_dias['delta_covid_positivo']>=0]
print(quinze_dias.shape)
quinze_dias['delta_covid_positivo'].value_counts()
pacientes_quinze_dias = quinze_dias['ID_PACIENTE'].unique()
print(pacientes_quinze_dias.shape)
pacientes_quinze_dias
# Final dataset after the 15-day filter (COVID positive)
df_hsl = df_hsl.loc[df_hsl['ID_PACIENTE'].isin(pacientes_quinze_dias)]
print(df_hsl['ID_PACIENTE'].nunique())
df_hsl.head(3)
###Output
8750
###Markdown
**ANALYZING EXAM ORIGINS TO SPLIT PATIENTS INTO GROUPS** GRUPO_0 - patients with exams coming only from the emergency room (NOT_SEVERE); GRUPO_1 - patients with exams coming only from the emergency room and ward (NOT_SEVERE); GRUPO_2 - patients with exams coming from the ER (PS) and ICU (UTI) (SEVERE); GRUPO_3 - patients with exams coming from the ER, ward and ICU (SEVERE).
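The group definitions above can also be written directly as a small function over the per-patient origin counts; this is a sketch that mirrors the `query` calls used further below.

```python
def assign_group(ps: int, internacao: int, uti: int) -> str:
    """Map per-patient exam-origin counts to a group label."""
    if ps > 0 and internacao == 0 and uti == 0:
        return 'GRUPO_0'   # ER only -> NOT_SEVERE
    if ps > 0 and internacao > 0 and uti == 0:
        return 'GRUPO_1'   # ER + ward -> NOT_SEVERE
    if ps > 0 and internacao == 0 and uti > 0:
        return 'GRUPO_2'   # ER + ICU -> SEVERE
    if ps > 0 and internacao > 0 and uti > 0:
        return 'GRUPO_3'   # ER + ward + ICU -> SEVERE
    return 'EXCLUDED'      # no ER exams -> dropped from the analysis
```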
###Code
# Origins categorized into three types: UTI (intensive care unit), INT (ward admission) or PS (emergency room)
grupos = df_hsl
grupos['UTI'] = 'NaN' # create UTI column with null values
grupos['INT'] = 'Nan' # create INT column with null values
grupos['PS'] = 'Nan' # create PS column with null values
print(grupos.shape)
grupos.head(2)
# Label rows originating from the ICU (UTI)
# Every "UTI" origin was categorized as UTI, with the attributes:
# UTI = 1
# INT = 0
# PS = 0
exemplos_UTI = grupos[grupos['DE_ORIGEM']=='UTI']
print('total de exemplos UTI: ', exemplos_UTI.shape)
atendimentos_UTI = exemplos_UTI['ID_ATENDIMENTO'].unique()
print('Total de chaves unicas de atendimentos com exames UTI: ', atendimentos_UTI.shape)
print('\n')
print('\n')
exemplos_UTI['UTI'] = 1
exemplos_UTI['INT'] = 0
exemplos_UTI['PS'] = 0
exemplos_UTI.head(3)
# Label rows originating from ward admission (INT)
# Every "Unidades de Internação" or "Atendimento - Recepção Internação" origin was categorized as INT, i.e.:
# UTI = 0
# INT = 1
# PS = 0
exemplos_INT = grupos.query('DE_ORIGEM == "Unidades de Internação" | DE_ORIGEM == "Atendimento - Recepção / Internação"')
print('total de exemplos INT: ', exemplos_INT.shape)
atendimentos_INT = exemplos_INT['ID_ATENDIMENTO'].unique()
print('Total de chaves unicas de atendimentos com exames em INT: ', atendimentos_INT.shape)
exemplos_INT['UTI'] = 0
exemplos_INT['INT'] = 1
exemplos_INT['PS'] = 0
exemplos_INT.head(3)
###Output
total de exemplos INT: (345799, 18)
Total de chaves unicas de atendimentos com exames em INT: (1848,)
###Markdown
All other origin types were categorized as PS, with the attributes: UTI = 0, INT = 0, PS = 1
###Code
# Label rows originating from the ER (PS)
exemplos_PS = grupos.query('DE_ORIGEM != "Unidades de Internação" & DE_ORIGEM != "Atendimento - Recepção / Internação" & DE_ORIGEM != "UTI"')
print('total de exemplos PS: ', exemplos_PS.shape)
atendimentos_PS = exemplos_PS['ID_ATENDIMENTO'].unique()
print('Total de chaves unicas de atendimentos com exame em PS: ', atendimentos_PS.shape)
exemplos_PS['UTI'] = 0
exemplos_PS['INT'] = 0
exemplos_PS['PS'] = 1
exemplos_PS.head(5)
# merging the UTI, INT and PS rows into a single dataset
df_grupos = pd.concat([exemplos_UTI, exemplos_INT, exemplos_PS])
df_grupos[df_grupos['UTI']==1].head(2)
df_grupos[df_grupos['INT']==1].head(2)
#Writing to csv
#df_grupos.to_csv('dados_grupos.csv', sep='\t', encoding='utf-8')
###Output
_____no_output_____
###Markdown
**FILTER 3: CREATING A DATAFRAME WITH EXAMS COMING ONLY FROM THE ER (PS)**
###Code
df_PS = df_grupos[df_grupos['PS']==1]
df_PS.head(2)
df_PS.shape
df_PS['ID_PACIENTE'].nunique()
#df_PS.to_csv('ps.csv', sep='\t', encoding='utf-8')
###Output
_____no_output_____
###Markdown
**PATIENTS vs FLAGS (INT, PS, UTI)**
###Code
# Grouping by ID_PACIENTE and aggregating the values of the 'UTI', 'INT' and 'PS' flags
df_hsl_1 = df_grupos.pivot_table(index='ID_PACIENTE', values=['UTI','INT','PS'], columns=[], aggfunc='sum')
print(df_hsl_1.shape)
df_hsl_1.head(10)
#generate csv
#df_hsl_1.to_csv('pacientes_flag.csv', sep='\t', encoding='utf-8')
# transformando o index em uma coluna
df_hsl_1 = df_hsl_1.reset_index()
df_hsl_1.head()
###Output
_____no_output_____
###Markdown
**GRUPO_0 PATIENTS**
###Code
# Selecting group 0 patients
grupo_0 = df_hsl_1.query('PS > 0 & INT == 0 & UTI == 0')
print(grupo_0.head(3), '\n')
pacientes_grupo_0 = grupo_0['ID_PACIENTE'].unique()
print(pacientes_grupo_0,'\n')
print('quantidade de pacientes no GRUPO_0: ', len(pacientes_grupo_0))
# Creating Group_0 from the df_PS dataframe
GRUPO_0 = df_PS.loc[df_hsl['ID_PACIENTE'].isin(pacientes_grupo_0)]
GRUPO_0['GRUPO'] = 'GRUPO_0'
GRUPO_0.head(2)
###Output
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:5: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
"""
###Markdown
**GRUPO_1 PATIENTS**
###Code
# Selecting group 1 patients
grupo_1 = df_hsl_1.query('PS > 0 & INT > 0 & UTI == 0')
print(grupo_1.head(3), '\n')
pacientes_grupo_1 = grupo_1['ID_PACIENTE'].unique()
#print(pacientes_grupo_1,'\n')
print('quantidade de pacientes no GRUPO_1: ', len(pacientes_grupo_1))
# Creating Group_1 from the df_PS dataframe
GRUPO_1 = df_PS.loc[df_hsl['ID_PACIENTE'].isin(pacientes_grupo_1)]
GRUPO_1['GRUPO'] = 'GRUPO_1'
GRUPO_1.head(3)
###Output
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:5: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
"""
###Markdown
**GRUPO_2 PATIENTS**
###Code
# Selecting group 2 patients
grupo_2 = df_hsl_1.query('INT == 0 & PS > 0 & UTI > 0')
print(grupo_2.head(5), '\n')
pacientes_grupo_2 = grupo_2['ID_PACIENTE'].unique()
#print(pacientes_grupo_2,'\n')
print('Quantidade de pacientes no GRUPO_2: ', len(pacientes_grupo_2))
# Creating Group_2 from the df_PS dataframe
GRUPO_2 = df_PS.loc[df_hsl['ID_PACIENTE'].isin(pacientes_grupo_2)]
GRUPO_2['GRUPO'] = 'GRUPO_2'
print(GRUPO_2['ID_PACIENTE'].nunique(), '\n')
GRUPO_2.head(3)
###Output
112
###Markdown
**GRUPO_3 PATIENTS**
###Code
# Selecting group 3 patients
grupo_3 = df_hsl_1.query('UTI > 0 & PS > 0 & INT > 0')
print(grupo_3.head(5), '\n')
pacientes_grupo_3 = grupo_3['ID_PACIENTE'].unique()
#print(pacientes_grupo_3,'\n')
print('Quantidade de pacientes no GRUPO_3: ', len(pacientes_grupo_3))
# Creating Group_3 from the df_PS dataframe
GRUPO_3 = df_PS.loc[df_hsl['ID_PACIENTE'].isin(pacientes_grupo_3)]
GRUPO_3['GRUPO'] = 'GRUPO_3'
print(GRUPO_3['ID_PACIENTE'].nunique(), '\n')
GRUPO_3.head(3)
###Output
400
###Markdown
**OTHER PATIENTS** NOTE: THESE PATIENTS WILL BE DISCARDED, SINCE THEY HAVE NO EXAMS INDICATING THEIR INITIAL PREDISPOSITION
###Code
# Checking the patients who have no ER (PS) exams; these patients were discarded
grupo_4 = df_hsl_1.query('PS == 0')
print(grupo_4.head(5), '\n')
pacientes_grupo_4 = grupo_4['ID_PACIENTE'].unique()
#print(pacientes_grupo_2,'\n')
print('Quantidade de pacientes no GRUPO_4: ', len(pacientes_grupo_4))
###Output
ID_PACIENTE INT PS UTI
43 01451931334246A7DE4F71DEE7710859 113 0 0
53 0183BA4D9368936BAD131398B55CDDC3 603 0 1495
58 01A29BBFDC18988C5200E74AE169841E 155 0 0
86 0296EEF7845CE2C5DAB358E894256092 1 0 730
121 03921BD7EFD5787934B8900682F73608 0 0 437
Quantidade de pacientes no GRUPO_4: 528
###Markdown
**MERGING THE GROUPS INTO A SINGLE DATASET**
###Code
df_sirio_libanes = pd.concat([GRUPO_0, GRUPO_1, GRUPO_2, GRUPO_3])
df_sirio_libanes.head(2)
df_sirio_libanes.shape
df_sirio_libanes['ID_PACIENTE'].nunique()
G0 = df_sirio_libanes[df_sirio_libanes['GRUPO']=='GRUPO_0']
print('GRUPO_0: ', G0['ID_PACIENTE'].nunique())
G1 = df_sirio_libanes[df_sirio_libanes['GRUPO']=='GRUPO_1']
print('GRUPO_1: ', G1['ID_PACIENTE'].nunique())
G2 = df_sirio_libanes[df_sirio_libanes['GRUPO']=='GRUPO_2']
print('GRUPO_2: ', G2['ID_PACIENTE'].nunique())
G3 = df_sirio_libanes[df_sirio_libanes['GRUPO']=='GRUPO_3']
print('GRUPO_3: ', G3['ID_PACIENTE'].nunique())
###Output
GRUPO_0: 7092
GRUPO_1: 618
GRUPO_2: 112
GRUPO_3: 400
###Markdown
**FILTER 4: NEW MANUAL ATTRIBUTE SELECTION**
###Code
# dropping columns that are no longer needed
df_sirio = df_sirio_libanes.drop(columns=['ID_ATENDIMENTO','DE_ORIGEM','ID_CLINICA','DE_CLINICA', 'DT_DESFECHO', 'UTI','INT','PS'])
df_sirio.head(3)
df_sirio.shape
###Output
_____no_output_____
###Markdown
**FILTER 5: SELECTING EXAMS COLLECTED WITHIN 3 DAYS OF ADMISSION** (THAT IS, COLLECTION DATE - ADMISSION DATE <= 3 DAYS)
###Code
# creating a column with the exam period (days between collection and admission)
df_sirio['PERIODO_EXAMES'] = (df_sirio['DT_COLETA']-df_sirio['DT_ATENDIMENTO']).dt.days
print(df_sirio.shape)
df_sirio.head(3)
# Keeping only exam records collected within three days of hospital admission
tres_dias = df_sirio[df_sirio['PERIODO_EXAMES']<=3]
print(tres_dias.shape)
tres_dias['PERIODO_EXAMES'].value_counts()
# investigating negative 'PERIODO_EXAMES' values
# Negative intervals mean that DT_COLETA is earlier than DT_ATENDIMENTO.
x = tres_dias[tres_dias['PERIODO_EXAMES']<0]
x.head(3)
# taking one patient as an example
x[x['ID_PACIENTE']=='A812B082EE43AFA716B6F9C33145F8EE']
###Output
_____no_output_____
###Markdown
Patients with a negative PERIODO_EXAMES have a collection date earlier than the admission date, which may be a data inconsistency.
###Code
# Checking the outcomes of these records:
x['DE_DESFECHO'].value_counts()
# removing records with negative 'PERIODO_EXAMES'
tres_dias = tres_dias[tres_dias['PERIODO_EXAMES']>=0]
print(tres_dias['PERIODO_EXAMES'].value_counts())
print('\n', tres_dias.shape)
tres_dias['DE_DESFECHO'].value_counts()
# checking how many unique patients fall under each type of outcome
pacientes = tres_dias['ID_PACIENTE'].unique()
print('Pacientes unicos com exames de 0 a 3 dias: ', pacientes.shape, '\n')
pacientes = tres_dias.drop_duplicates(subset='ID_PACIENTE', keep='first')
pacientes['DE_DESFECHO'].value_counts()
df_hsl = tres_dias
df_hsl.head(2)
df_hsl.shape
df_hsl['ID_PACIENTE'].nunique()
###Output
_____no_output_____
###Markdown
**DEATHS**
###Code
# Checking whether all deaths are in the severe group
obitos = ['Óbito após 48hs de internação sem necrópsia',
'Óbito nas primeiras 48hs de internação sem necrópsia não agônico',
'Óbito nas primeiras 48hs de internação sem necrópsia agônico']
df_obitos = df_hsl.loc[df_hsl['DE_DESFECHO'].isin(obitos)]
df_obitos['GRUPO'].value_counts()
obitos_GRUPO_1 = df_obitos[df_obitos['GRUPO']=='GRUPO_1']
obitos_GRUPO_1['ID_PACIENTE'].nunique()
# these patients need to be moved to group 2, since they must have maximum severity
obitos_GRUPO_1['ID_PACIENTE'].unique()
df_hsl.head(50)
###Output
_____no_output_____
###Markdown
**FILTER 6: REMOVING EXAMS WHOSE RESULTS ARE FREE TEXT** https://www.vooo.pro/insights/12-tecnicas-pandas-uteis-em-python-para-manipulacao-de-dados/
###Code
# Replacing ',' with '.' (decimal separator).
df_hsl['DE_RESULTADO'] = [x.replace(',', '.') for x in df_hsl['DE_RESULTADO']]
resultados = df_hsl['DE_RESULTADO'].value_counts()
resultados.to_csv('resultados.csv', sep='\t',encoding='utf-8')
# Converting the DE_RESULTADO column to numeric
# This raised an error, because some results are not numeric
# So the non-numeric results will be removed in the next steps
#df_hsl['DE_RESULTADO'] = df_hsl['DE_RESULTADO'].astype(float)
#df_final.info()
df_hsl.dtypes
# function to check whether a value is numeric
def is_number(s):
try:
float(s)
return True
except ValueError:
return False
num = '9999.999'
is_number(num)
# Creating a new column to flag whether the result is numeric
df_hsl['Tipo_resultado'] = ''
df_hsl.head(3)
###Output
_____no_output_____
###Markdown
The code below checks, row by row, whether the value is numeric, that is, whether it can be converted to float.
###Code
# This block works, but it is slow.
"""
for index, row in df_hsl.iterrows():
verifica = is_number(row['DE_RESULTADO'])
if verifica == True:
df_hsl.loc[index,'Tipo_resultado'] = True
else:
df_hsl.loc[index,'Tipo_resultado'] = False
# writes the file: df_hsl_true_false.csv
df_hsl.to_csv('df_hsl_true_false.csv', sep=';',encoding='utf-8')
"""
# reads the file: df_hsl_true_false.csv
df_hsl_1 = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/2021 dezembro Artigo/Arquivos_v3/df_hsl_true_false_v3.csv', sep=';')
df_hsl_1.head(3)
df_hsl_1 = df_hsl_1.drop(columns=['Unnamed: 0'])
df_hsl_1.head(3)
df_hsl_1.shape
df_hsl_1[df_hsl_1['Tipo_resultado']==False]
# Drops the rows whose result is not numeric
df_hsl_1.drop(df_hsl_1.loc[df_hsl_1['Tipo_resultado']==False].index, inplace=True)
df_hsl_1.shape
df_hsl_1[df_hsl_1['Tipo_resultado']==False]
df_hsl_1['ID_PACIENTE'].nunique()
###Output
_____no_output_____
###Markdown
**CONVERTING RESULTS FROM STR TO FLOAT**
###Code
df_hsl_1['DE_RESULTADO'] = df_hsl_1['DE_RESULTADO'].astype(float)
###Output
_____no_output_____
###Markdown
**NEW MANUAL ATTRIBUTE SELECTION**
###Code
pivot_sirio = df_hsl_1.drop(columns=(['DE_EXAME',
'DT_ATENDIMENTO',
'DE_TIPO_ATENDIMENTO',
'DE_DESFECHO',
'PERIODO_EXAMES',
'Tipo_resultado']))
pivot_sirio['ID_PACIENTE'].value_counts()
pivot_sirio.head(3)
#pivot_sirio.to_csv("ANALISE_SIRIO_FINAL.csv", encoding="utf-8")
pivot_sirio.shape
###Output
_____no_output_____
###Markdown
**GENERATING THE SIRIO_APRENDIZADO DATAFRAME**
###Code
AL7 = pivot_sirio
# AL7 = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/2021 dezembro Artigo/ANALISE_SIRIO_FINAL.csv', sep=',', index_col=0)
AL7.head(3)
###Output
_____no_output_____
###Markdown
**FILTER 8: GROUPING REPEATED EXAMS FOR THE SAME PATIENT**
###Code
# Keeps only the most recent exam when there are repeated ones.
AL7 = AL7.groupby(['ID_PACIENTE', 'GRUPO','aa_nascimento','IC_SEXO','DE_ANALITO']).agg({'DT_COLETA': ['max'], 'DE_RESULTADO' : ['last']}).reset_index()
AL7.head(3)
AL7.columns = ['ID_PACIENTE', 'GRUPO', 'Idade','Sexo', 'DE_ANALITO','DT_COLETA', 'DE_RESULTADO']
AL7.head(3)
AL7.info()
AL7.shape
AL7['ID_PACIENTE'].nunique()
# Converting birth year into age
# This raises an error because some patients have birth year = YYYY or AAAA
# Those records will be removed in the next steps
#AL7['Idade'] = AL7['Idade'].astype(int)
###Output
_____no_output_____
###Markdown
**FILTER 9: REMOVING ROWS WHOSE BIRTH YEAR IS AAAA OR YYYY**
###Code
AL7.drop(AL7.loc[AL7['Idade']=='AAAA'].index, inplace=True)
AL7.drop(AL7.loc[AL7['Idade']=='YYYY'].index, inplace=True)
AL7['Idade'] = AL7['Idade'].astype(int)
AL7['Idade'] = 2021 - AL7['Idade']
AL7.head(3)
AL7.shape
AL7['ID_PACIENTE'].nunique()
#AL7.to_excel("ANALISE_SIRIO_FINAL_v2.xlsx")
###Output
_____no_output_____
###Markdown
**FILTER 10: PIVOTING SO THAT THE EXAMS BECOME COLUMNS**
###Code
sirio_aprendizado = AL7.pivot_table(index=['ID_PACIENTE','GRUPO','Idade','Sexo'],
values=['DE_RESULTADO'],
columns=['DE_ANALITO'],
aggfunc=[np.mean]).reset_index()
sirio_aprendizado.head(3)
sirio_aprendizado.to_csv('pivot_table_V3.csv', sep='|',encoding='utf-8')
#analise_sirio_final.to_excel("ANALISE_SIRIO_FINAL_v3.xlsx")
#nomes_exames = sirio_aprendizado
nomes_exames = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/2021 dezembro Artigo/Arquivos_v3/pivot_table_v3.csv', sep='|')
nomes_exames.head(10)
nomes_exames = nomes_exames.T
nomes_exames.head(10)
#nomes_exames = nomes_exames.reset_index()
#nomes_exames.head(5)
nomes = nomes_exames[1].values.tolist()
nomes[0:10]
nomes[0] = 'ID_PACIENTE'
nomes[1] = 'GRUPO'
nomes[2] = 'Idade'
nomes[3] = 'Sexo'
del nomes[4] # deletes the null value
nomes[0:10]
###Output
_____no_output_____
###Markdown
**FINAL TABLE FOR MANIPULATION**
###Code
sirio_aprendizado.columns = nomes
sirio_aprendizado.head(5)
#sirio_aprendizado.to_excel("dados_pre_processado.xlsx")
sirio_aprendizado.shape
###Output
_____no_output_____
###Markdown
**CHECKING NULL VALUES PER EXAM** https://sigmoidal.ai/como-tratar-dados-ausentes-com-pandas/
###Code
ausentes = sirio_aprendizado.isnull().sum()
ausentes = pd.DataFrame([ausentes])
ausentes = ausentes.drop(columns=['ID_PACIENTE', 'GRUPO'])
ausentes = ausentes.T
ausentes
ausentes['Nulos por cento'] = ''
ausentes.head(3)
for index, row in ausentes.iterrows():
porcentagem = round((row[0]/4320)*100, 2)
ausentes.loc[index,'Nulos por cento'] = porcentagem
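# Vectorized equivalent of the loop above (a sketch; 'Nulos por cento (vec)' is an
# illustrative column name, and 4320 is the same fixed patient count used above):
ausentes['Nulos por cento (vec)'] = (ausentes[0] / 4320 * 100).round(2)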
ausentes = ausentes.sort_values(by=[0], ascending=False)
ausentes.head(3)
ausentes.reset_index(inplace=True, drop=False)
ausentes.head()
ausentes.to_csv('ausentes.csv', sep='|', encoding='utf-8')
###Output
_____no_output_____
###Markdown
**FILTER 11: REMOVING EXAMS MISSING FOR MORE THAN 50% OF PATIENTS**
###Code
ausentes.drop(ausentes.loc[ausentes['Nulos por cento']<50].index, inplace=True)
ausentes
exames_eliminar = ausentes['index'].values.tolist()
exames_eliminar[0:10]
sirio_aprendizado = sirio_aprendizado.drop(columns=(exames_eliminar))
sirio_aprendizado.head(3)
###Output
_____no_output_____
###Markdown
Replacing female sex with 0 and male sex with 1
###Code
sirio_aprendizado['SEXO'] = ''
for index, row in sirio_aprendizado.iterrows():
if row['Sexo'] == 'F':
sirio_aprendizado.loc[index,'SEXO'] = 0
else:
sirio_aprendizado.loc[index,'SEXO'] = 1
sirio_aprendizado['Sexo'].value_counts()
sirio_aprendizado['SEXO'].value_counts()
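# A vectorized alternative to the iterrows loop above (a sketch; 'SEXO_vec' is an
# illustrative column name, not part of the original notebook):
sirio_aprendizado['SEXO_vec'] = (sirio_aprendizado['Sexo'] != 'F').astype(int)
sirio_aprendizado['SEXO_vec'].value_counts()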
# Converting SEXO to int
sirio_aprendizado['SEXO'] = sirio_aprendizado['SEXO'].astype(int)
sirio_aprendizado.head(10)
# Writing the csv file
sirio_aprendizado.to_csv('sirio_aprendizado_v3.csv', sep='|', encoding='utf-8')
###Output
_____no_output_____
###Markdown
**DATA ANALYSIS**
###Code
sirio_aprendizado.info()
sirio_aprendizado.shape
sirio_aprendizado.head()
# Number of patients per group
G0 = sirio_aprendizado[sirio_aprendizado['GRUPO']=='GRUPO_0']
print('GRUPO_0: ', G0.shape)
G1 = sirio_aprendizado[sirio_aprendizado['GRUPO']=='GRUPO_1']
print('GRUPO_1: ', G1.shape)
G2 = sirio_aprendizado[sirio_aprendizado['GRUPO']=='GRUPO_2']
print('GRUPO_2: ', G2.shape)
G3 = sirio_aprendizado[sirio_aprendizado['GRUPO']=='GRUPO_3']
print('GRUPO_3: ', G3.shape)
# Missing values per exam
mi = sirio_aprendizado.isnull().sum()
#mi = mi.sort()
mi
mi = pd.Series(mi)
#mi.index = X_train.columns
mi=mi.sort_values(ascending = False)
my_colors = ['r', 'g', 'b', 'k', 'y', 'm', 'c']
plt.rcParams['xtick.labelsize'] = 14
plt.rcParams['ytick.labelsize'] = 14
mi.plot(kind='bar', color=my_colors, figsize=(15,3))
plt.show()
# missing values per group
ausentes_G0 = G0.isnull().sum()
ausentes_G0 = pd.DataFrame([ausentes_G0])
ausentes_G0 = ausentes_G0.T
ausentes_G0.reset_index(inplace=True, drop=False)
ausentes_G0.columns = [['exame','G0']]
ausentes_G1 = G1.isnull().sum()
ausentes_G1 = pd.DataFrame([ausentes_G1])
ausentes_G1 = ausentes_G1.T
ausentes_G1.reset_index(inplace=True, drop=False)
ausentes_G1.columns = [['exame','G1']]
ausentes_G2 = G2.isnull().sum()
ausentes_G2 = pd.DataFrame([ausentes_G2])
ausentes_G2 = ausentes_G2.T
ausentes_G2.columns = ['G2']
ausentes_G2.reset_index(inplace=True, drop=False)
ausentes_G2.columns = [['exame','G2']]
ausentes_G3 = G3.isnull().sum()
ausentes_G3 = pd.DataFrame([ausentes_G3])
ausentes_G3 = ausentes_G3.T
ausentes_G3.columns = ['G3']
ausentes_G3.reset_index(inplace=True, drop=False)
ausentes_G3.columns = [['exame','G3']]
df_ausentes = ausentes_G0.merge(ausentes_G1)
df_ausentes = df_ausentes.merge(ausentes_G2)
df_ausentes = df_ausentes.merge(ausentes_G3)
df_ausentes
###Output
_____no_output_____
###Markdown
**DESCRIBE**
###Code
des_G0 = G0.describe().T
des_G1 = G1.describe().T
des_G2 = G2.describe().T
des_G3 = G3.describe().T
des_G0.head(3)
describe_G0 = des_G0.drop(columns=['count','min','25%','50%','75%','max'])
describe_G0.reset_index(inplace=True, drop=False)
describe_G0.columns = ['Exame', 'mean_G0', 'std_G0']
describe_G1 = des_G1.drop(columns=['count','min','25%','50%','75%','max'])
describe_G1.reset_index(inplace=True, drop=False)
describe_G1.columns = ['Exame', 'mean_G1', 'std_G1']
describe_G2 = des_G2.drop(columns=['count','min','25%','50%','75%','max'])
describe_G2.reset_index(inplace=True, drop=False)
describe_G2.columns = ['Exame', 'mean_G2', 'std_G2']
describe_G3 = des_G3.drop(columns=['count','min','25%','50%','75%','max'])
describe_G3.reset_index(inplace=True, drop=False)
describe_G3.columns = ['Exame', 'mean_G3', 'std_G3']
describe_G0.head(3)
print(describe_G0.shape)
print(describe_G1.shape)
print(describe_G2.shape)
print(describe_G3.shape)
# Merging each group's describe output into a single dataframe
df_describe = describe_G0.merge(describe_G1, on = ["Exame"], how = "left")
df_describe = df_describe.merge(describe_G2, on=['Exame'], how = 'left')
df_describe = df_describe.merge(describe_G3, on=['Exame'], how = 'left')
df_describe.head(3)
decimals = 2
df_describe[['mean_G0', 'std_G0', 'mean_G1', 'std_G1', 'mean_G2', 'std_G2', 'mean_G3', 'std_G3']] = df_describe[['mean_G0', 'std_G0', 'mean_G1', 'std_G1', 'mean_G2', 'std_G2', 'mean_G3', 'std_G3']].apply(lambda x: round(x, decimals))
df_describe
df_describe.to_csv('df_describe.csv', sep='|', encoding='utf-8')
###Output
_____no_output_____
###Markdown
**BOXPLOT**
###Code
plt.title("Idade", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 100, 10)) # mudar escala do eixo X
ax = sns.boxplot(x="Idade", y="GRUPO", data=sirio_aprendizado)
plt.title("ALT (TGP)", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 420, 15)) # mudar escala do eixo X
ax = sns.boxplot(x="ALT (TGP)", y="GRUPO", data=sirio_aprendizado)
plt.title("AST (TGO)", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 250, 10)) # mudar escala do eixo X
ax = sns.boxplot(x="AST (TGO)", y="GRUPO", data=sirio_aprendizado)
plt.title("Basófilos", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 420, 15)) # mudar escala do eixo X
ax = sns.boxplot(x="Basófilos", y="GRUPO", data=sirio_aprendizado)
plt.title("Basófilos (%)", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 5, 1)) # mudar escala do eixo X
ax = sns.boxplot(x="Basófilos (%)", y="GRUPO", data=sirio_aprendizado)
plt.title("CHCM", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 38, 1)) # mudar escala do eixo X
ax = sns.boxplot(x="CHCM", y="GRUPO", data=sirio_aprendizado)
plt.title("Creatinina", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 10, 1)) # mudar escala do eixo X
ax = sns.boxplot(x="Creatinina", y="GRUPO", data=sirio_aprendizado)
plt.title("Eosinófilos", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 3200, 100)) # mudar escala do eixo X
ax = sns.boxplot(x="Eosinófilos", y="GRUPO", data=sirio_aprendizado)
plt.title("Eosinófilos (%)", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 25, 1)) # mudar escala do eixo X
ax = sns.boxplot(x="Eosinófilos (%)", y="GRUPO", data=sirio_aprendizado)
plt.title("Eritrócitos", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 8, 1)) # mudar escala do eixo X
ax = sns.boxplot(x="Eritrócitos", y="GRUPO", data=sirio_aprendizado)
plt.title("HCM", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 41, 1)) # mudar escala do eixo X
ax = sns.boxplot(x="HCM", y="GRUPO", data=sirio_aprendizado)
plt.title("Hematócrito", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 65, 1)) # mudar escala do eixo X
ax = sns.boxplot(x="Hematócrito", y="GRUPO", data=sirio_aprendizado)
plt.title("Hemoglobina", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 25, 1)) # mudar escala do eixo X
ax = sns.boxplot(x="Hemoglobina", y="GRUPO", data=sirio_aprendizado)
plt.title("Leucócitos", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 120000, 10000)) # mudar escala do eixo X
ax = sns.boxplot(x="Leucócitos", y="GRUPO", data=sirio_aprendizado)
plt.title("Linfócitos", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 120000, 10000)) # mudar escala do eixo X
ax = sns.boxplot(x="Linfócitos", y="GRUPO", data=sirio_aprendizado)
plt.title("Linfócitos (%)", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 100, 5)) # mudar escala do eixo X
ax = sns.boxplot(x="Linfócitos (%)", y="GRUPO", data=sirio_aprendizado)
plt.title("Monócitos", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 4000, 500)) # mudar escala do eixo X
ax = sns.boxplot(x="Monócitos", y="GRUPO", data=sirio_aprendizado)
plt.title("Monócitos (%)", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 100, 1)) # mudar escala do eixo X
ax = sns.boxplot(x="Monócitos (%)", y="GRUPO", data=sirio_aprendizado)
plt.title("Neutrófilos", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 30000, 2500)) # mudar escala do eixo X
ax = sns.boxplot(x="Neutrófilos", y="GRUPO", data=sirio_aprendizado)
plt.title("Neutrófilos (%)", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 100, 10)) # mudar escala do eixo X
ax = sns.boxplot(x="Neutrófilos (%)", y="GRUPO", data=sirio_aprendizado)
plt.title("Plaquetas", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 750000, 50000)) # mudar escala do eixo X
ax = sns.boxplot(x="Plaquetas", y="GRUPO", data=sirio_aprendizado)
plt.title("Potássio", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 10, 1)) # mudar escala do eixo X
ax = sns.boxplot(x="Potássio", y="GRUPO", data=sirio_aprendizado)
plt.title("Proteína C-Reativa", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 100, 1)) # mudar escala do eixo X
ax = sns.boxplot(x="Proteína C-Reativa", y="GRUPO", data=sirio_aprendizado)
plt.title("RDW", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 30, 1)) # mudar escala do eixo X
ax = sns.boxplot(x="RDW", y="GRUPO", data=sirio_aprendizado)
plt.title("Sódio", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 170, 10)) # mudar escala do eixo X
ax = sns.boxplot(x="Sódio", y="GRUPO", data=sirio_aprendizado)
plt.title("Uréia", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 300, 10)) # mudar escala do eixo X
ax = sns.boxplot(x="Uréia", y="GRUPO", data=sirio_aprendizado)
plt.title("VCM", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 120, 10)) # mudar escala do eixo X
ax = sns.boxplot(x="VCM", y="GRUPO", data=sirio_aprendizado)
plt.title("Volume plaquetário médio", size=14)
plt.gcf().set_size_inches(15, 3) # alterar tamanho
plt.xticks(range(0, 15, 1)) # mudar escala do eixo X
ax = sns.boxplot(x="Volume plaquetário médio", y="GRUPO", data=sirio_aprendizado)
#! pip install https://github.com/pandas-profiling/pandas-profiling/archive/master.zip
#pandas_profiling.ProfileReport(sirio)
#profile = sirio_aprendizado.profile_report(title="RELATORIO")
#profile.to_file(output_file="RELATORIO.html")
###Output
_____no_output_____ |
Chapter 4/ex4_7/.ipynb_checkpoints/RL_ex4_7B-checkpoint.ipynb | ###Markdown
Car Rental Problem. Exercise 4.7 (programming): Write a program for policy iteration and re-solve Jack's car rental problem with the following changes. One of Jack's employees at the first location rides a bus home each night and lives near the second location. She is happy to shuttle one car to the second location for free. Each additional car still costs 2, as do all cars moved in the other direction. In addition, Jack has limited parking space at each location. If more than 10 cars are kept overnight at a location (after any moving of cars), then an additional cost of 4 must be incurred to use a second parking lot (independent of how many cars are kept there). These sorts of nonlinearities and arbitrary dynamics often occur in real problems and cannot easily be handled by optimization methods other than dynamic programming. To check your program, first replicate the results given for the original problem. Solve the problem as presented in Ex 4.7.
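For reference, the immediate reward used by the environment dynamics implemented below (a reading of the code in this notebook, not text from the book; a positive action $a$ means cars moved overnight from location 1 to location 2) works out to
$$ r(s_1, s_2, a) = 10\left[\min(s_1-a,\ \text{req}_1) + \min(s_2+a,\ \text{req}_2)\right] - 2|a| + 2\cdot\mathbf{1}[a>0] - 4\cdot\mathbf{1}[s_1-a>10] - 4\cdot\mathbf{1}[s_2+a>10], $$
where $\text{req}_1,\text{req}_2$ are the realized rental requests, the $+2$ term refunds the one car the employee shuttles for free, and each $-4$ term is the fee for a second parking lot.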
###Code
import numpy as np
import pickle
import matplotlib.pyplot as plt
import os
from jupyterthemes import jtplot
jtplot.style()
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
from scipy.special import factorial
"""
Parameters:
n_cars: Max #cars allowed at each lot
n_cars_mv: Max #cars moved between lots each step
V: Initial state-value function
PI: Initial policy
theta: Policy evaluation convergence constant
gamma: Return discount parameter
lambda_req: Parameters for car request poisson r.v.
lambda_ret: Parameters for car return poisson r.v.
"""
n_cars1 = 20
n_cars2 = 20
n_cars_mv = 5
V = np.zeros((n_cars1+1, n_cars2+1))
PI = np.zeros((n_cars1+1, n_cars2+1), dtype=int)
theta = 0.00001
gamma = 0.9
lambda_req = [3,4]
lambda_ret = [3,2]
PICKLE_DIR = "RL_ex4_7_data"
def evaluate_policy(pi, v):
"""
Evaluate a policy by determining expected returns at each state.
Intuitively, the value at each state is updated to reflect
the new policy's action at the current state.
This implementation does not sum over environment probabilities,
but instead uses the mean of poisson random variables.
Parameters
----------
pi : ndarray(shape=(n_cars1+1,n_cars2+1), dtype=int)
Policy to be evaluated
v : ndarray(shape=(n_cars1+1,n_cars2+1), dtype = float)
Current state-value function
Returns
-------
ndarray
State-value function after evaluating pi
"""
while True:
delta = 0
for i in range(n_cars1+1):
for j in range(n_cars2+1):
v_old = v[i,j]
a = pi[i,j]
i_day = max(i-a, 0)
j_day = max(j+a, 0)
reward = 10 * (min(i_day, lambda_req[0]) +
min(j_day, lambda_req[1])) - 2 * abs(a)
if (a > 0):
reward += 2
reward += (0 if ((i - a) <= 10) else -4)
reward += (0 if ((j + a) <= 10) else -4)
i_p = min(max(i_day-lambda_req[0], 0) + lambda_ret[0], n_cars1)
j_p = min(max(j_day-lambda_req[1], 0) + lambda_ret[1], n_cars2)
s_p = [i_p, j_p]
v[i,j] = reward + gamma * v[s_p[0],s_p[1]]
delta = max(delta, np.abs(v[i,j]-v_old))
if (delta < theta):
return v
def improve_policy(pi, v, dynamics):
"""
Updates policy greedily w.r.t. to previously calculated state-values.
For each state, the new policy chooses the action
that gives the highest expected returns.
Uses a dictionary to lookup environment dynamics for state-action
Checks policy stability via lookback. If a state-value function has been seen before,
then the policy is stable
Multiple optimal policies are possible, hence the lookback to prevent infinite loops
Parameters
----------
pi : ndarray(shape=(n_cars1+1,n_cars2+1), dtype=int)
Policy to be improved
v : ndarray(shape=(n_cars1+1,n_cars2+1), dtype = float)
Current state-value function
dynamics : dict
Environment dynamics
f(s'r|s,a) = p(s',r|s,a) = { (s,a): { (s',r): y } }
Returns
-------
(ndarray, ndarray)
Optimal policies and state-value functions
"""
lookback = 5
policies = []
reward_rec = []
while True:
policy_stable = True
for i in range(n_cars1+1):
for j in range(n_cars2+1):
if (i != 0 or j != 0):
actions = np.arange(-min(n_cars_mv,j),
min(n_cars_mv,i) + 1, 1,
dtype=float)
actions = actions[np.where(
(actions <= i) &
(-actions <= j) &
(-actions + i <= n_cars1) &
(actions + j <= n_cars2))]
action_returns = np.zeros(actions.size)
for n, a in enumerate(actions):
cond_dynamics = dynamics[(i, j, a)]
action_return = 0
for k in cond_dynamics.keys():
action_return += cond_dynamics[k] * (k[2] +
gamma *
v[k[0], k[1]])
action_returns[n] = action_return
pi[i,j] = actions[np.argmax(action_returns)]
v = evaluate_policy(pi, v)
if (round(np.sum(v), 1) not in reward_rec):
plt.figure()
plt.imshow(pi, origin='lower')
plt.show()
policy_stable = False
policies.append(pi)
reward_rec.append(round(np.sum(v), 1))
if (len(policies) > lookback):
policies.pop(0)
reward_rec.pop(0)
if policy_stable:
return (policies, v)
def eval_poisson(l, n):
"""
Evaluates probability P(n) according to poisson(l) distribution
Parameters
----------
l : list
Poisson parameters
n : list
Returns
-------
ndarray
Probabilities
"""
return np.maximum(np.repeat(np.finfo(float).eps,len(l)),
np.abs(np.divide(np.multiply(np.power(l, n),
np.exp(np.multiply(l, -1))),
factorial(n))))
def train():
"""
Calculate environment dynamics
For each (s',r,s,a), calculate its probability
s' and r are indirectly determined from (reqx,reqy,retx,retx),
the number of requests/returns on each site
(reqx,reqy,retx,rety) makes up a joint distribution of poisson r.v.s
Returns
-------
dict
f(s'r|s,a) = p(s',r|s,a) = { (s,a): { (s',r): y } }
"""
all_possibilities = {}
for reqx in range(n_cars1+1):
for reqy in range(n_cars2+1):
for retx in range(n_cars1+1):
for rety in range(n_cars2+1):
all_possibilities[(reqx, reqy, retx, rety)] = np.prod(eval_poisson([lambda_ret[0], lambda_ret[1], lambda_req[0], lambda_req[1]],
[retx, rety, reqx, reqy]))
P = {}
for sx in range(n_cars1+1):
print("State: {}".format(sx))
for sy in range(n_cars2+1):
for a in np.arange(-n_cars_mv, n_cars_mv +1, 1, dtype=int):
if a <= sx and -a <= sy and -a + sx <= n_cars1 and a + sy <= n_cars2:
P[(sx,sy,a)] = {}
for reqx in range(n_cars1+1):
for reqy in range(n_cars2+1):
r = int(10 * min(sx - a, reqx) + 10 * min(sy + a, reqy) - 2 * abs(a))
if (a > 0):
r += 2
if (sx - a > 10):
r += -4
if (sy + a > 10):
r += -4
for retx in range(n_cars1+1):
for rety in range(n_cars2+1):
sx_p = min(max(sx - a - reqx, 0) + retx, n_cars1)
sy_p = min(max(sy + a - reqy, 0) + rety, n_cars2)
if (sx_p,sy_p,r) in P[(sx,sy,a)]:
P[(sx,sy,a)][(sx_p,sy_p,r)] += all_possibilities[(reqx, reqy, retx, rety)]
else:
P[(sx,sy,a)][(sx_p,sy_p,r)] = all_possibilities[(reqx, reqy, retx, rety)]
return P
if __name__ == "__main__":
dynamics = train()
if not os.path.isdir(PICKLE_DIR):
os.mkdir(PICKLE_DIR)
with open(PICKLE_DIR + '/dynamicsB.pickle', 'wb') as handle:
pickle.dump(dynamics, handle, protocol=pickle.HIGHEST_PROTOCOL)
#with open(PICKLE_DIR + '/dynamicsB.pickle', 'rb') as handle:
#dynamics = pickle.load(handle)
v = evaluate_policy(PI,V)
(policies, v) = improve_policy(PI, v, dynamics)
fig = plt.figure()
ax = fig.gca(projection='3d')
X = np.arange(0,n_cars1+1,1)
Y = np.arange(0,n_cars2+1,1)
X, Y = np.meshgrid(X, Y)
surf = ax.plot_surface(X, Y, v, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
plt.title('Optimal State-Value Function')
plt.xlabel('#Cars at Loc 1')
plt.ylabel('#Cars at Loc 2')
plt.show()
fig = plt.figure()
ax = fig.gca(projection='3d')
X = np.arange(0,n_cars1+1,1)
Y = np.arange(0,n_cars2+1,1)
X, Y = np.meshgrid(X, Y)
surf = ax.plot_surface(X, Y, policies[-1], cmap=cm.coolwarm,
linewidth=0, antialiased=False)
plt.title('Optimal Policy')
plt.show()
###Output
_____no_output_____ |
notebooks/Basic Solution.ipynb | ###Markdown
Abstract: This is a clone of the script at https://www.kaggle.com/ceshine/lgbm-starter which is intended to give an idea of how to structure the data for training. Prelude: Configuration
###Code
DataSetPath = "/home/bryanfeeney/Workspace/OttomanDiviner/favorita/"
StoresPath = DataSetPath + "stores.csv.gz"
ItemsPath = DataSetPath + "items.csv.gz"
OilPricePath = DataSetPath + "oil.csv.gz"
HolidaysPath = DataSetPath + "holidays_events.csv.gz"
Transactions = DataSetPath + "transactions.csv.gz"
TrainData = DataSetPath + "train-2017.csv.gz"
TestData = DataSetPath + "test.csv.gz"
# TrainData = DataSetPath + "train-2018.csv.gz"
# TestData = DataSetPath + "query-2018.csv"
FutureDaysToCalculate=16
WeeksOfHistoryForFeature=8
WeeksOfHistoryForFeatureOnValidation=3
###Output
_____no_output_____
###Markdown
Imports
###Code
from datetime import date, datetime, timedelta
import pandas as pd
import numpy as np
from sklearn.metrics import mean_squared_error
import lightgbm as lgb
###Output
_____no_output_____
###Markdown
Intro to the Data
###Code
cumul_sales = pd.read_csv(
TrainData,
usecols=[1, 2, 3, 4, 5],
dtype={'onpromotion': bool},
converters={'unit_sales': lambda u: np.log1p(float(u)) if float(u) > 0 else 0},
parse_dates=["date"],
compression='gzip'
)
cumul_sales_query = pd.read_csv(
TestData,
usecols=[0, 1, 2, 3, 4],
dtype={'onpromotion': bool},
parse_dates=["date"] # , date_parser=parser
)
query_start_date = str(cumul_sales_query.iloc[0,1]).split(" ")[0]
query_start_date
cumul_sales_query = cumul_sales_query.set_index(
['store_nbr', 'item_nbr', 'date']
)
cumul_sales.shape, cumul_sales_query.shape
cumul_sales.head()
###Output
_____no_output_____
###Markdown
DEBUG
###Code
cumul_sales_query.head()
promo_variables_test = cumul_sales_query[["onpromotion"]].unstack(level=-1).fillna(False)
promo_variables_test.head()
###Output
_____no_output_____
###Markdown
DEBUG
###Code
cumul_sales
cumul_sales_query.iloc[0,:]
items = pd.read_csv(
ItemsPath,
).set_index("item_nbr")
stores = pd.read_csv(
StoresPath
).set_index("store_nbr")
cumul_sales_query
cumul_sales.shape
cumul_sales_query.shape
items.shape
###Output
_____no_output_____
###Markdown
Select only Last Three Months. This is a peculiar one, and it **games the benchmark** in a not-great way. Essentially it uses the last 11 weeks of data before the prediction threshold to predict what's happening next.
###Code
nowtime = datetime.now()
now = date(nowtime.year, nowtime.month, nowtime.day)
# How far back to go to start generating trend features for demand
data_start = now - timedelta(7*11) + timedelta(1)
training_history_start = now - timedelta(7*WeeksOfHistoryForFeature) + timedelta(1)
validation_start = now - timedelta(7*WeeksOfHistoryForFeatureOnValidation) + timedelta(1)
data_start, training_history_start, query_start_date
cumul_sales = cumul_sales[cumul_sales.date.isin(
pd.date_range(data_start, periods=7 * 11))].copy()
cumul_sales.head()
cumul_sales.shape
cumul_sales.iloc[-1,:]
###Output
_____no_output_____
###Markdown
Creating Promotion Variables. So this is a tricky one. If one presumes that being on promotion will lead to a boost in demand, and if we presume we'll know *what's on promotion in advance*, then we can create variables to say that this product will be on promotion 1, 2, 3, ... 16 days from now (16 days in the future is the target). In this case, this is also peculiar: there is a column for every single day!
###Code
promo_variables = cumul_sales.set_index(
["store_nbr", "item_nbr", "date"])[["onpromotion"]]
promo_variables.head()
promo_variables = cumul_sales.set_index(
["store_nbr", "item_nbr", "date"])[["onpromotion"]].unstack(
level=-1).fillna(False)
promo_variables.head()
promo_variables.columns = promo_variables.columns.get_level_values(1)
promo_variables_query = cumul_sales_query[["onpromotion"]].unstack(level=-1).fillna(False)
promo_variables_query.columns = promo_variables_query.columns.get_level_values(1)
promo_variables_query = promo_variables_query.reindex(promo_variables.index).fillna(False)
promo_variables_train_and_query = pd.concat([promo_variables, promo_variables_query], axis=1)
promo_variables.shape, items.shape[0] * stores.shape[0]
cumul_sales.shape, cumul_sales_query.shape
###Output
_____no_output_____
###Markdown
Unstack unit sales - do it across all days in a sliding window. Ah... they're creating a multi-task learning problem.
###Code
cumul_sales = cumul_sales.set_index(
["store_nbr", "item_nbr", "date"])[["unit_sales"]].unstack(
level=-1).fillna(0)
cumul_sales.columns = cumul_sales.columns.get_level_values(1)
cumul_sales.shape
cumul_sales.head()
###Output
_____no_output_____
###Markdown
Make items match other data frames. They're sacrificing generality here.
###Code
items = items.reindex(cumul_sales.index.get_level_values(1))
items.head()
items.shape
###Output
_____no_output_____
###Markdown
Time futzing
###Code
# Return that portion of the data frame that corresponds to the time period
# beginning "minus" days before "dt" and extending for "periods" days
def get_timespan(df, dt, minus, periods):
return df[
pd.date_range(dt - timedelta(days=minus), periods=periods)
]
def prepare_dataset(cumul_sales, promo_variables_train_and_query, start_date, is_train=True):
X = pd.DataFrame({ # Mean target for different retrospective timespans & total # promotions
"mean_3_2017": get_timespan(cumul_sales, start_date, 3, 3).mean(axis=1).values,
"mean_7_2017": get_timespan(cumul_sales, start_date, 7, 7).mean(axis=1).values,
"mean_14_2017": get_timespan(cumul_sales, start_date, 14, 14).mean(axis=1).values,
"promo_14_2017": get_timespan(promo_variables_train_and_query, start_date, 14, 14).sum(axis=1).values
})
for i in range(16): # Promotions on future days
X["promo_{}".format(i)] = promo_variables_train_and_query[
start_date + timedelta(days=i)].values.astype(np.uint8)
if is_train:
y = cumul_sales[ # Target values for future days
pd.date_range(start_date, periods=16)
].values
return X, y
return X
promo_variables_train_and_query.shape
training_history_start, validation_start, now
promo_variables
print("Preparing dataset...")
X_l, y_l = [], []
for i in range(4):
delta = timedelta(days=7 * i)
X_tmp, y_tmp = prepare_dataset(cumul_sales, promo_variables_train_and_query, training_history_start + delta)
X_l.append(X_tmp)
y_l.append(y_tmp)
X_train = pd.concat(X_l, axis=0)
y_train = np.concatenate(y_l, axis=0)
del X_l, y_l
X_validate, y_validate = prepare_dataset(cumul_sales, promo_variables_train_and_query, validation_start)
X_query = prepare_dataset(cumul_sales, promo_variables_train_and_query, now, is_train=False)
X_train.shape, X_validate.shape, X_query.shape
###Output
_____no_output_____
###Markdown
This dataset is **super gamey**. They're using the means for the week, fortnight, and last three days, and then seeing how to permute them to generate values for the following window of time. It's hardcoded to product IDs, not categories. It does, however, permit multi-task learning, and therefore better representation learning. It does not incorporate any information about seasonality at all, and so would fall arse over face at Christmas.
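One lightweight way to inject some seasonality (a sketch, assuming it is pasted inside `prepare_dataset`, where `X`, `cumul_sales` and `start_date` are in scope; the feature names are illustrative, and the "Further Improvements" section later in this notebook builds very similar day-of-week features):

```python
# Day-of-week demand features: for each weekday offset i, average the two most
# recent observations of that weekday relative to start_date. Two weeks keeps the
# lookback inside the 11-week window loaded above; longer windows need more history.
for i in range(7):
    dow_cols = pd.date_range(start_date - timedelta(days=14 - i), periods=2, freq="7D")
    X["mean_2_dow{}".format(i)] = cumul_sales[dow_cols].mean(axis=1).values
```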
###Code
print("Training and predicting models...")
params = {
'num_leaves': 2**5 - 1,
'objective': 'regression_l2',
'max_depth': 8,
'min_data_in_leaf': 50,
'learning_rate': 0.05,
'feature_fraction': 0.75,
'bagging_fraction': 0.75,
'bagging_freq': 1,
'metric': 'l2',
'num_threads': 4
}
MAX_ROUNDS = 1000
validate_pred = []
query_pred = []
cate_vars = []  # no categorical features in this feature set (the original line referenced an undefined name)
for i in range(16):
print("=" * 50)
print("Step %d" % (i+1))
print("=" * 50)
dtrain = lgb.Dataset(
X_train, label=y_train[:, i],
categorical_feature=cate_vars,
weight=pd.concat([items["perishable"]] * 4) * 0.25 + 1
)
dvalidate = lgb.Dataset(
X_validate, label=y_validate[:, i], reference=dtrain,
weight=items["perishable"] * 0.25 + 1,
categorical_feature=cate_vars)
bst = lgb.train(
params, dtrain, num_boost_round=MAX_ROUNDS,
valid_sets=[dtrain, dvalidate], early_stopping_rounds=50, verbose_eval=50
)
print("\n".join(("%s: %.2f" % x) for x in sorted(
zip(X_train.columns, bst.feature_importance("gain")),
key=lambda x: x[1], reverse=True
)))
validate_pred.append(bst.predict(
X_validate, num_iteration=bst.best_iteration or MAX_ROUNDS))
query_pred.append(bst.predict(
X_query, num_iteration=bst.best_iteration or MAX_ROUNDS))
print("Validation mse:", np.sqrt(mean_squared_error(
np.expm1(y_validate), np.expm1(np.array(validate_pred)).transpose())))
validate_pred
query_pred
print("Making submission...")
y_query = np.array(query_pred).transpose()
df_preds = pd.DataFrame(
y_query, index=cumul_sales.index,
columns=pd.date_range(query_start_date, periods=16)
).stack().to_frame("unit_sales")
df_preds.to_csv("/tmp/preds-2018.csv")
df_preds
df_preds.index.set_names(["store_nbr", "item_nbr", "date"], inplace=True)
submission = cumul_sales_query[["id"]].join(df_preds, how="left").fillna(0)  # df_test was renamed to cumul_sales_query earlier in this notebook
submission["unit_sales"] = np.clip(np.expm1(submission["unit_sales"]), 0, 1000)
submission
###Output
_____no_output_____
###Markdown
Further Improvements. This is based on the work in this file: https://www.kaggle.com/vrtjso/lgbm-one-step-ahead (apparently in the top 10% at one point).
###Code
df_train = pd.read_csv(
TrainData, usecols=[1, 2, 3, 4, 5],
dtype={'onpromotion': bool},
converters={'unit_sales': lambda u: np.log1p(
float(u)) if float(u) > 0 else 0},
parse_dates=["date"],
skiprows=range(1, 66458909) # 2016-01-01
)
df_test = pd.read_csv(
TestData, usecols=[0, 1, 2, 3, 4],
dtype={'onpromotion': bool},
parse_dates=["date"] # , date_parser=parser
).set_index(
['store_nbr', 'item_nbr', 'date']
)
items = pd.read_csv(
ItemsPath,
).set_index("item_nbr")
df_2017 = df_train.loc[df_train.date>=pd.datetime(2017,1,1)]
del df_train
promo_2017_train = df_2017.set_index(
["store_nbr", "item_nbr", "date"])[["onpromotion"]].unstack(
level=-1).fillna(False)
promo_2017_train.columns = promo_2017_train.columns.get_level_values(1)
promo_2017_test = df_test[["onpromotion"]].unstack(level=-1).fillna(False)
promo_2017_test.columns = promo_2017_test.columns.get_level_values(1)
promo_2017_test = promo_2017_test.reindex(promo_2017_train.index).fillna(False)
promo_2017 = pd.concat([promo_2017_train, promo_2017_test], axis=1)
del promo_2017_test, promo_2017_train
df_2017 = df_2017.set_index(
["store_nbr", "item_nbr", "date"])[["unit_sales"]].unstack(
level=-1).fillna(0)
df_2017.columns = df_2017.columns.get_level_values(1)
items = items.reindex(df_2017.index.get_level_values(1))
def get_timespan(df, dt, minus, periods, freq='D'):
return df[pd.date_range(dt - timedelta(days=minus), periods=periods, freq=freq)]
def prepare_dataset(t2017, is_train=True):
X = pd.DataFrame({
"day_1_2017": get_timespan(df_2017, t2017, 1, 1).values.ravel(),
"mean_3_2017": get_timespan(df_2017, t2017, 3, 3).mean(axis=1).values,
"mean_7_2017": get_timespan(df_2017, t2017, 7, 7).mean(axis=1).values,
"mean_14_2017": get_timespan(df_2017, t2017, 14, 14).mean(axis=1).values,
"mean_30_2017": get_timespan(df_2017, t2017, 30, 30).mean(axis=1).values,
"mean_60_2017": get_timespan(df_2017, t2017, 60, 60).mean(axis=1).values,
"mean_140_2017": get_timespan(df_2017, t2017, 140, 140).mean(axis=1).values,
"promo_14_2017": get_timespan(promo_2017, t2017, 14, 14).sum(axis=1).values,
"promo_60_2017": get_timespan(promo_2017, t2017, 60, 60).sum(axis=1).values,
"promo_140_2017": get_timespan(promo_2017, t2017, 140, 140).sum(axis=1).values
})
for i in range(7):
X['mean_4_dow{}_2017'.format(i)] = get_timespan(df_2017, t2017, 28-i, 4, freq='7D').mean(axis=1).values
X['mean_20_dow{}_2017'.format(i)] = get_timespan(df_2017, t2017, 140-i, 20, freq='7D').mean(axis=1).values
for i in range(16):
X["promo_{}".format(i)] = promo_2017[
t2017 + timedelta(days=i)].values.astype(np.uint8)
if is_train:
y = df_2017[
pd.date_range(t2017, periods=16)
].values
return X, y
return X
print("Preparing dataset...")
t2017 = date(2017, 5, 31)
X_l, y_l = [], []
for i in range(6):
delta = timedelta(days=7 * i)
X_tmp, y_tmp = prepare_dataset(
t2017 + delta
)
X_l.append(X_tmp)
y_l.append(y_tmp)
X_train = pd.concat(X_l, axis=0)
y_train = np.concatenate(y_l, axis=0)
del X_l, y_l
X_val, y_val = prepare_dataset(date(2017, 7, 26))
X_test = prepare_dataset(date(2017, 8, 16), is_train=False)
print("Training and predicting models...")
params = {
'num_leaves': 31,
'objective': 'regression',
'min_data_in_leaf': 300,
'learning_rate': 0.1,
'feature_fraction': 0.8,
'bagging_fraction': 0.8,
'bagging_freq': 2,
'metric': 'l2',
'num_threads': 4
}
MAX_ROUNDS = 500
val_pred = []
test_pred = []
cate_vars = []
for i in range(16):
print("=" * 50)
print("Step %d" % (i+1))
print("=" * 50)
dtrain = lgb.Dataset(
X_train, label=y_train[:, i],
categorical_feature=cate_vars,
weight=pd.concat([items["perishable"]] * 6) * 0.25 + 1
)
dval = lgb.Dataset(
X_val, label=y_val[:, i], reference=dtrain,
weight=items["perishable"] * 0.25 + 1,
categorical_feature=cate_vars)
bst = lgb.train(
params, dtrain, num_boost_round=MAX_ROUNDS,
valid_sets=[dtrain, dval], early_stopping_rounds=50, verbose_eval=100
)
print("\n".join(("%s: %.2f" % x) for x in sorted(
zip(X_train.columns, bst.feature_importance("gain")),
key=lambda x: x[1], reverse=True
)))
val_pred.append(bst.predict(
X_val, num_iteration=bst.best_iteration or MAX_ROUNDS))
test_pred.append(bst.predict(
X_test, num_iteration=bst.best_iteration or MAX_ROUNDS))
print("Validation mse:", mean_squared_error(
y_val, np.array(val_pred).transpose()))
print("Making submission...")
y_test = np.array(test_pred).transpose()
df_preds = pd.DataFrame(
y_test, index=df_2017.index,
columns=pd.date_range("2017-08-16", periods=16)
).stack().to_frame("unit_sales")
df_preds.index.set_names(["store_nbr", "item_nbr", "date"], inplace=True)
submission = df_test[["id"]].join(df_preds, how="left").fillna(0)
submission["unit_sales"] = np.clip(np.expm1(submission["unit_sales"]), 0, 1000)
submission.to_csv('lgb.csv', float_format='%.4f', index=None)
print("Validation mse:", mean_squared_error(
np.expm1(y_validate), np.expm1(np.array(validate_pred)).transpose()))
np.sqrt(275), np.sqrt(247)
###Output
_____no_output_____ |
FourierConstruction.ipynb | ###Markdown
Fourier Construction interactive demonstration. This is a prototype of an interactive app to demonstrate the construction of square, sawtooth, and other periodic signals using N components of the Fourier series. Adapted by Jamie Bayer, based on [code described by Dr. Shyamal Bhar](https://vcfw.org/pdf/Department/Physics/Fourier_series_python_code.pdf), Department of Physics, Vidyasagar College for Women, Kolkata. Please send any ideas for improvement to F. Jones.
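The reconstruction uses the first $n$ terms of the Fourier series on a period $L$, with the coefficient integrals evaluated numerically by Simpson's rule (this restates the formula implemented in `fourier_series` in the code below):
$$ f(x) \approx \frac{a_0}{2} + \sum_{k=1}^{n}\left[a_k\cos\frac{2\pi k x}{L} + b_k\sin\frac{2\pi k x}{L}\right], \qquad a_k = \frac{2}{L}\int_0^{L} f(x)\cos\frac{2\pi k x}{L}\,dx, \quad b_k = \frac{2}{L}\int_0^{L} f(x)\sin\frac{2\pi k x}{L}\,dx. $$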
###Code
from ipywidgets import interact
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import square, sawtooth, triang
from scipy.integrate import simps
def fourier_series(x, y, L, n):
# Calculation of Co-efficients
a0 = 2.0/L*simps(y, x)
an = lambda n:2.0/L*simps(y*np.cos(2.0*np.pi*n*x/L), x)
bn = lambda n:2.0/L*simps(y*np.sin(2.0*np.pi*n*x/L), x)
# Sum of the series
s = a0/2.0 + sum([an(k)*np.cos(2.*np.pi*k*x/L)+bn(k)*np.sin(2.*np.pi*k*x/L) for k in range(1,n+1)])
return s
def plot_periodic_function(Function):
if Function == 'Square':
L = 1 # Periodicity of the periodic function f(x)
freq = 1 # No of waves in time period L
dutycycle = 0.5
samples = 1000
# Generation of square wave
x = np.linspace(0, L, samples, endpoint=False)
y = square(2.0*np.pi*x*freq/L, duty=dutycycle)
elif Function == 'Sawtooth':
L = 1 # Periodicity of the periodic function f(x)
freq = 2 # No of waves in time period L
width_range = 1
samples = 1000
# Generation of Sawtooth function
x = np.linspace(0, L, samples,endpoint=False)
y = sawtooth(2.0*np.pi*x*freq/L, width=width_range)
elif Function == 'Triangular':
L = 1 #Periodicity of the periodic function f(x)
samples = 501
# Generation of Triangular wave
x = np.linspace(0,L,samples,endpoint=False)
y = triang(samples)
@interact(n=(1, 50))
def plot_functions(n):
# Plotting
plt.plot(x, fourier_series(x, y, L, n))
plt.plot(x, y)
#plt.xlabel("$x$")
#plt.ylabel("$y=f(x)$")
plt.title(Function + " signal reconstruction by Fourier series")
interact(plot_periodic_function, Function=['Square','Sawtooth','Triangular']);
###Output
_____no_output_____ |
notebook/rbsa-demo.ipynb | ###Markdown
Change point model for real dataset

By Sang woo Ham ([email protected]), Last edited on 09/15/2021

Table of Contents
* [Introduction](#Introduction)
* [Dataset](#Dataset)
* [Example](#Example)
* [Discussion points](#Discussion-points)
* [References](#References)

Introduction

Building energy analysis is a challenging task because of its complexity and the lack of systematic data collection. Therefore, applying building energy models to real datasets is complicated and often fails. In this notebook, we apply the change point model to a real dataset and discuss possible challenges.

Dataset

The Residential Building Stock Assessment (RBSA) dataset [1,2] is a large-scale residential energy consumption survey prepared by Ecotope, Inc. for the Northwest Energy Efficiency Alliance (NEEA). Two studies have been conducted in parallel. One is a survey-based (phone calls and billing information) baseline study of a large population. The other is a detailed metering study of daily end-use load shapes.

>*primary objective of the RBSA is to develop an inventory and profile of existing residential building stock in the Northwest based on field data from a representative, random sample of existing homes. The RBSA establishes the 2011 regional baseline for housing stock for three categories of residences: single-family homes, manufactured homes, and multifamily homes. The results will guide future planning efforts and provide a solid base for assessing energy savings on residential programs throughout the Northwest.*

The dataset is available from these two links: [link1](https://neea.org/data/residential-building-stock-assessment) and [link2](https://neea.org/resources/2011-rbsa-metering-study). But for simplicity, we provide the pre-processed data with this notebook.
###Code
# loading required packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt # visualization
#import pyarrow.feather as feather
import os
%matplotlib inline
###Output
_____no_output_____
###Markdown
The first part of the RBSA data is building metadata and yearly energy consumption. We've processed the data into a csv file. The table below shows the data.
###Code
# building metadata
survey=pd.read_csv("../data/rbsa/survey.csv")
survey.head(3)
###Output
_____no_output_____
###Markdown
| Name | Description || :--- | :----------- || siteid | An unique identifier for a a residential building || heat_[elec/gas] | Whether to have an electric/gas heating device. (1: yes, 0: no). || heat_[elec/gas]_type | Type of heating device. (`baseboard`, `boiler`, `hp`: heatpump, `faf`: forced air furnace, `gshp`: geo-source heatpump, `dualfuelhp`: dual fuel heatpump) || heat_[elec/gas]_control | Control method of heating device. (`programmable`: programmable thermostat, `thermostat`: non-programmable thermostat, `none`: no control device, `on/off` or `manual`: on/off switch|| heat_[elec/gas]_dist | Heating distribution method of heating device. (`ducted`: air duct, `zonal`: device in each zone, `none`: no heating device.|| backup_[elec/gas/other] | If there is backup [electric/gas/other] heating device.||num_[bath/bedroom] | Number of bathroom/bedroom. ||MoveIn | Move in year. || year_built | Built year of the building. || homebusiness | If residnets are doing home business. || homerent | Home ownership (rent:1, non-rent: 0).|| primaryres | Is this home your primary residence? (1: yes, 0: no) || income_support | Do you get any income support? (1: yes, 0: no) || workingoutside | How many people are working outside? ||num_occupant | Number of occupants. ||has_[kid/senior]|Whether to have kids or senior people in the building.||dish_load|Number of dishwasher loads per week||wash_load|Number of clothes washer loads per week||ac_use|Whether to use air-conditioning device. (1: yes, 0: no).||heat_sp|Self-reported averaged heating setpoint [F].||heat_sp_night|Self-reported heating setpoint in night time [F].||ave_height|Average height of the building [ft].||ua_ceiling|Overall UA value of ceiling [BTU/(hr-F)].||has_dryer|Whether to have a dryer (or more) (1: yes, 0: no). ||num_computer|Number of computers.|| cook_[elec/gas] | Electric or gas cooking. || has_washer | Do you have a washer (1: yes, 0: no). || dryer_elec | Whether to have electric dryer (1: yes, 0: no). || dryer_gas | Whether to have gas dryer (1: yes, 0: no). || num_[audio/charger/game/tv] | Number of audio device, charger, game, or TV. || y_kwh |Yearly electricity consumption. || y_kbtu | Yearly gas consumption. || light_ex_watt | The total wattage of exterior lights installed [W]. || ua_floor | Overall UA value of floors [BTU/(hr-F)]. || light_in_watt | The total wattage of interior lights installed [W]. || bldg_type | Building type (Single residnetial home or multiplex building). || level_floor | Indicates the number of floors above grade present at site || num_room | Number of rooms || tot_sqft | The conditioned area in square feet (calculated). || tot_vol | The estimated volume of the house (calculated). || fraction_window | Calculated ratio of window area square foot over site conditioned square foot. || [hdd65/cdd65] | Heating/cooling degree day. || population_city | Number of population in the city. || pv | Whether to have a photovoltaic. || year_ref | Refrigerator year of manufacture. || vol_ref | Volume of refrigerator [ft3]. || flow_shower | Size of shower fixtures [gpm] || ua_total | Overall UA value of all surfaces [BTU/(hr-F)]. || ua_wall | Overall UA value of walls [BTU/(hr-F)]. || hw_[elec/gas] | Electric or gas water heater (1: yes, 0: no). || hw_[btuhr/kw] | Size of gas/electric water heater [btu/hr or kW] || hw_solar | Whether to use solar water heater. || hw_conditioned | If the water heater is located in conditioned space. || hw_year | Water heater year of manufacture. || hw_size | Water heater size [Gallons]. 
| hw_type | Water heater type. || ua_window | Overall UA value of windows [BTU/(hr-F)]. |

The second part of the RBSA data is time-series meter data for each house. The hourly data is split into 8 pieces (i.e., `hourly_meter_data_x.feather`); the daily data is a single file. Loading all of the data may not be possible on a computer with limited memory. Each file includes data for a different set of houses. It includes appliance-specific energy consumption in kWh, and it also has outdoor and indoor air temperatures.
###Code
#df=feather.read_feather("../data/rbsa/daily_meter_data.feather")
df=pd.read_csv("../data/rbsa/daily_meter_data.csv")
df.head(3)
###Output
_____no_output_____
###Markdown
|Name|Description|
|:-|:-|
|[ymd/timehour]|Day or hourly timestamp|
|siteid|A unique identifier for a residential building|
|heating|Heating device electricity consumption [kWh].|
|heating_gas|Heating device gas consumption [kWh].|
|cooling|Cooling device electricity consumption [kWh].|
|total|Total electricity consumption [kWh].|
|other|Total electricity minus the sum of all appliance-specific electricity [kWh].|
|rat|Room air temperature [F].|
|oat|Outdoor air temperature [F].|
|thp|Heat pump vapor line temperature measured in Fahrenheit [F].|
|wst|Outdoor air temperature from the nearest weather station [F].|

The other columns (lighting, plug, water_heater, water_haeter_gas, dryer, dwasher, fridge, washer, microwave, range) show the electricity consumption of each appliance.

Example
Change point models [3-5] are used to analyze the impact of retrofits. They are also used to characterize a house's thermal performance from data. In this example, we show, using data from two houses, a simple way to build a change point model.
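The model fitted below is the standard three-parameter heating change point model (this is just the `piecewise_linear` function defined below written out):
$$ E(T_{out}) = \begin{cases} \beta_0 + \beta_1\,(T_{out}-\beta_2), & T_{out} < \beta_2 \\ \beta_0, & T_{out} \ge \beta_2 \end{cases} $$
where $\beta_0$ is the weather-independent base load, $\beta_1<0$ is the heating slope, and $\beta_2$ is the balance-point (change point) temperature.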
###Code
# loading data
df=pd.read_csv("../data/rbsa/daily_meter_data.csv") # meter data
house_survey=survey[survey.siteid.isin(np.array([21355,22938]))] # meta data
# Select two houses. 21355, 22938
house1=df[df['siteid']==21355]
house2=df[df['siteid']==22938]
###Output
_____no_output_____
###Markdown
We use two houses (21355: House1, 22938: House2). These two houses show very similar characteristics, except that House2 is bigger than House1. Also, House2 is in a colder region, since its heating degree days are higher.
###Code
house_survey[['siteid','heat_elec','heat_elec_control','heat_elec_type','year_built','tot_sqft','heat_sp','ua_total','hdd65','y_kwh']]
###Output
_____no_output_____
###Markdown
Visualize the data. It seems like House2 has cooling energy consumption, but House1's measurements do not cover enough of the cooling season (i.e., $oat>75^\circ\text{F}$). Therefore, we discard the data for $oat>75^\circ\text{F}$ in this analysis.
###Code
fig, ax =plt.subplots(nrows=1, ncols=2, figsize=(12,5))
ax[0].plot(house1['oat'].to_numpy(), house1['total'].to_numpy(), "kx",label="House1",markersize=5,alpha=0.8)
#ax[0,0].plot(T_out_grid, piecewise_linear(T_out_grid, *theta_case1),'r-',label='Model (case1)',linewidth=1.0)
ax[0].legend(fontsize=10,loc="best")
ax[0].set_xlabel("$T_{out}$ [${^{\circ}}$F]",fontsize=12)
ax[0].set_ylabel("$E_{total}$ [kWh]",fontsize=12)
#ax[0].set_xlim([-22,30])
#ax[0].set_ylim([0,2])
ax[1].plot(house2['oat'].to_numpy(), house2['total'].to_numpy(), "bx",label="House2",markersize=5,alpha=0.8)
#ax[0,0].plot(T_out_grid, piecewise_linear(T_out_grid, *theta_case1),'r-',label='Model (case1)',linewidth=1.0)
ax[1].legend(fontsize=10,loc="best")
ax[1].set_xlabel("$T_{out}$ [${^{\circ}}$F]",fontsize=12)
ax[1].set_ylabel("$E_{total}$ [kWh]",fontsize=12)
# discard summer data
house1=house1[house1['oat']<75]
house2=house2[house2['oat']<75]
###Output
_____no_output_____
###Markdown
Also, when learning the change point model parameters, it is numerically useful to scale the data into the [0,1] range by dividing each variable by its maximum value.
###Code
# scaled data frame as shouse1 and shouse2
shouse1=house1.copy()
shouse2=house2.copy()
oat_max=100 # maximum value
total_max=200 # maximum value
shouse1['oat']=shouse1['oat']/oat_max
shouse2['oat']=shouse2['oat']/oat_max
shouse1['total']=shouse1['total']/total_max
shouse2['total']=shouse2['total']/total_max
###Output
_____no_output_____
###Markdown
Also, we put bounds on the parameters to help the optimizer find the correct answer. beta0 is a positive number, as it represents the baseline load. beta1 is a negative value because it is the heating coefficient. beta2 is in the [0,1] range because the oat values are scaled into [0,1].
###Code
# Piecewise linear regression model (change point model)
# loading package
from scipy import optimize
def piecewise_linear(x, beta0, beta1, beta2):
condlist = [x < beta2, x >= beta2] # x < beta2 uses the sloped branch beta0+beta1*(x-beta2); x >= beta2 uses the constant base load beta0
funclist = [lambda x: beta0+beta1*(x-beta2), lambda x:beta0 ]
return np.piecewise(x, condlist, funclist)
# estimate theta* and covariance of theta*
theta_house1 , theta_cov_house1 = optimize.curve_fit(piecewise_linear, shouse1['oat'].to_numpy(), shouse1['total'].to_numpy(),bounds=((0,-np.inf,0),(np.inf,0,1))) #least square
theta_house2 , theta_cov_house2 = optimize.curve_fit(piecewise_linear, shouse2['oat'].to_numpy(), shouse2['total'].to_numpy(),bounds=((0,-np.inf,0),(np.inf,0,1))) #least square
###Output
_____no_output_____
###Markdown
The change point model is well identified.
###Code
oat_grid=np.linspace(0.2,0.8,51)
fig, ax =plt.subplots(nrows=1, ncols=2, figsize=(12,5))
ax[0].plot(house1['oat'].to_numpy(), house1['total'].to_numpy(), "kx",label="House1",markersize=5,alpha=0.8)
ax[0].plot(oat_grid*oat_max, piecewise_linear(oat_grid, *theta_house1)*total_max,'r-',label='Model (House1)',linewidth=1.0)
ax[0].legend(fontsize=10,loc="best")
ax[0].set_xlabel("$T_{out}$ [${^{\circ}}$F]",fontsize=12)
ax[0].set_ylabel("$E_{total}$ [kWh]",fontsize=12)
ax[0].set_xlim([20,80])
ax[0].set_ylim([0,160])
ax[1].plot(house2['oat'].to_numpy(), house2['total'].to_numpy(), "kx",label="House2",markersize=5,alpha=0.8)
ax[1].plot(oat_grid*oat_max, piecewise_linear(oat_grid, *theta_house2)*total_max,'r-',label='Model (House2)',linewidth=1.0)
ax[1].legend(fontsize=10,loc="best")
ax[1].set_xlabel("$T_{out}$ [${^{\circ}}$F]",fontsize=12)
ax[1].set_ylabel("$E_{total}$ [kWh]",fontsize=12)
ax[1].set_xlim([20,80])
ax[1].set_ylim([0,160])
###Output
_____no_output_____
###Markdown
beta1 corresponds to $HC\frac{\Delta t}{\eta_{\text{heat}}}$, where $HC=\left( UA+ c_{p,\text{air}} \rho_{\text{air}} \dot{V}_{\text{out}} \right)$. Therefore, the ratio of beta1 between the two houses should be similar to the ratio of their UA values.
###Code
# ratio of slopes
theta_house1[1]/theta_house2[1]
# ratio of UAs
house_survey['ua_total'].to_numpy()[0]/house_survey['ua_total'].to_numpy()[1]
###Output
_____no_output_____ |
Student-notebook.ipynb | ###Markdown
 Diversity in Math: Modeling the COVID 19 OutbreakUse this notebook to enter your code and exercises. TA/Mentor:Team members (specify if undergraduate or high school student):- - - Task I: Explain the flow diagram we worked on during the first session Our assumptions1. Mode of transmission of the disease from person to person is through contact ("contact transmission") between a person who interacts with an infectious person. 2. Once a person comes into contact with the pathogen, there is a period of time (called the latency period) in which they are infected, but cannot infect others (yet!). 3. Population is not-constant (that is, people are born and die as time goes by).4. A person in the population is either one of: - Susceptible, i.e. not infected but not yet exposed, - Exposed to the infection, i.e. exposed to the virus, but not yet infectious, - Infectious, and - Recovered from the infection. 5. People can die by "natural causes" during any of the stages. We assume an additional cause of death associated with the infectious stage. How does a person move from one stage into another? In other words, how does a person go from susceptible to exposed, to infected, to recovered? $\Delta$: Per-capita birth rate.$\mu$: Per-capita natural death rate.$\alpha$: Virus-induced average fatality rate.$\beta$: Probability of disease transmission per contact (dimensionless) times the number of contacts per unit time.$\epsilon$: Rate of progression from exposed to infectious (the reciprocal is the incubation period).$\gamma$: Recovery rate of infectious individuals (the reciprocal is the infectious period). Flow diagram$$\stackrel{\Delta N} {\longrightarrow} \text{S} \stackrel{\beta\frac{S}{N} I}{\longrightarrow} \text{E} \stackrel{\epsilon}{\longrightarrow} \text{I} \stackrel{\gamma}{\longrightarrow} \text{R}$$$$\hspace{1.1cm} \downarrow \mu \hspace{0.6cm} \downarrow \mu \hspace{0.5cm} \downarrow \mu, \alpha \hspace{0.1cm} \downarrow \mu $$ Optional: Team members are welcome to discuss other ways we can capture the behaviour of people moving in between stages, other assumptions, and to create your own flow diagram. Use this cell to capture other assumptions you make. You are free to draw a different diagram. Make sure you create one along with your mentor, and send to the facilitator. ____ Task II: Choosing Differential EquationsWork together to use differential equations to generate the rest of the equations for Exposed, Infectious and Recovered individuals.Your task is to discuss and agree on the equations for $$\frac{dE}{dt} = \text{?}, \frac{dI}{dt}= \text{?}, \frac{dR}{dt} = \text{?}$$If your model makes different assumptions, and as such the flow diagram is different from the instructors, make sure your equations reflect this. ___ Task III: Implement the system of equations using PythonYour task is to guide the TA to implement the set of equations using Python.When we translate from Math to Python, it is useful to give appropriate names to our variables. 
|Math symbol|Variable name in Python | What it represents|
| - | - | - |
|$\Delta $ |$\text{Delta}$| Per-capita birth rate|
|$\mu$|$\text{mu}$|Per-capita natural death rate|
|$\alpha$|$\text{alpha}$| Virus-induced average fatality rate.|
|$\beta$|$\text{beta}$|Probability of disease transmission per contact (dimensionless) times the number of contacts per unit time.|
|$\epsilon$|$\text{epsilon}$|Rate of progression from exposed to infectious (the reciprocal is the incubation period).|
|$\gamma$|$\text{gamma}$|Recovery rate of infectious individuals (the reciprocal is the infectious period).|
|$N$| N | Total population|
|$S$| S | Susceptible population|
|$E$| E | Exposed population|
|$I$| I | Infectious population|
|$R$| R | Recovered population|
|$\frac{dS}{dt}$|dS|Rate of change of Susceptible population|
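The cell below is left blank for your own implementation. Purely as a hedged illustration, and not the official solution, one system of equations consistent with the flow diagram above is
$$\frac{dS}{dt} = \Delta N - \beta \frac{S}{N} I - \mu S, \quad \frac{dE}{dt} = \beta \frac{S}{N} I - (\epsilon + \mu) E, \quad \frac{dI}{dt} = \epsilon E - (\gamma + \mu + \alpha) I, \quad \frac{dR}{dt} = \gamma I - \mu R$$
and a minimal numerical sketch with `scipy` could look like the following (all parameter values are made up for illustration, not calibrated to any real outbreak):
```python
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

# Illustrative parameter values only (not calibrated to any real outbreak)
Delta   = 0.01   # per-capita birth rate
mu      = 0.009  # per-capita natural death rate
alpha   = 0.03   # virus-induced average fatality rate
beta    = 0.75   # transmission probability times contacts per unit time
epsilon = 1/5    # rate of progression from exposed to infectious
gamma   = 1/10   # recovery rate of infectious individuals

def seir(y, t):
    S, E, I, R = y
    N = S + E + I + R
    dS = Delta*N - beta*S*I/N - mu*S
    dE = beta*S*I/N - (epsilon + mu)*E
    dI = epsilon*E - (gamma + mu + alpha)*I
    dR = gamma*I - mu*R
    return [dS, dE, dI, dR]

t = np.linspace(0, 365, 366)   # one year in daily steps
y0 = [10000, 1, 0, 0]          # initial S, E, I, R
S, E, I, R = odeint(seir, y0, t).T

for series, label in zip([S, E, I, R], "SEIR"):
    plt.plot(t, series, label=label)
plt.xlabel("days")
plt.ylabel("people")
plt.legend()
plt.show()
```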
###Code
# Code here
###Output
_____no_output_____ |
notebooks/fast-test.ipynb | ###Markdown
The following table shows the probability at each beach that an exceedance (i.e. a period of time when the tests were all above the beach-closure threshold) will only last one day. The probability is derived from the historical data:
$$ \text{probability} = \frac{\#\text{ times an exceedance lasted only 1 day}}{\#\text{ exceedances in total}} $$
We also report the value $\#\text{ exceedances in total}$ as the column `n` to get a sense of how trustworthy the probability is (the higher the `n`, the more accurate the estimated probability). Lastly, we also show the average length of exceedances (in days) for each beach.
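As a rough sketch of how such a table could be built (the dataframe and column names below are made up for illustration; the notebook's own `fast_test_df` is computed elsewhere):
```python
import pandas as pd

# Hypothetical input: one row per historical exceedance, with its length in days
exceedances = pd.DataFrame({
    "beach": ["A", "A", "A", "B", "B"],
    "duration_days": [1, 3, 1, 2, 1],
})

summary = exceedances.groupby("beach")["duration_days"].agg(
    probability=lambda d: (d == 1).mean(),  # share of exceedances that lasted only one day
    n="count",                              # total number of exceedances observed
    avg_length_days="mean",                 # average exceedance length in days
)
print(summary)
```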
###Code
fast_test_df
###Output
_____no_output_____ |
vdma_test/VDMA Test.ipynb | ###Markdown
Video DMA TestTransfer a frame of data from one Video DMA to a second Video DMA.This driver was built from [AXI Video Direct Memory Access V6.2](https://www.xilinx.com/support/documentation/ip_documentation/axi_vdma/v6_2/pg020_axi_vdma.pdf) Sending a frameIn order to send a frame through the VDMA the following steps are provided:1. Instantiate the VDMA core. ```python vdma_egress = VDMA(name = VDMA_NAME_IN_IP) ```2. Set the size of the image. ```python vdma_egress.set_image_size(WIDTH, HEIGHT) ```3. Write an image to one of the internal buffers. ```python image_in = np.zeros((HEIGHT, WIDTH, 3)).astype(np.uint8) i = 0 for y in range(HEIGHT): for x in range(WIDTH): for p in range(3): image_in[y, x, p] = i if i < 255: i += 1 else: i = 0 egress_frame = vdma_egress.get_frame() egress_frame.set_bytearray(bytearray(image_in.astype(np.int8).tobytes())) ```4. Start the Egress Engine. ```python vdma_egress.start_egress_engine(continuous = , parked = , num_frames = , frame_index = , interrupt = ) ``` Receiving a frameIn order to receive a frame from the VDMA the following steps are provided:1. Instantiate the VDMA core. ```python vdma_ingress = VDMA(name = VDMA_NAME_IN_IP) ```2. Set the size of the image. ```python vdma_ingress.set_image_size(WIDTH, HEIGHT) ```3. Start the ingress engine. ```python vdma_ingress.start_ingress_engine( continuous = , parked = , num_frames = , frame_index = , interrupt = ) ```4. Stop the ingress engine. ```python vdma_ingress.stop_ingress_engine() ``` Source Code for this project can be found here[VDMA Demo](https://github.com/CospanDesign/pynq-hdl/tree/master/Projects/Simple%20VDMA)
###Code
# %matplotlib inline
from time import sleep
from pynq import Overlay
from pynq.drivers import VDMA
from pynq.drivers import Frame
from pynq.drivers import video
import cv2
from matplotlib import pyplot as plt
from IPython.display import Image
import numpy as np
#Constants
BITFILE_NAME = "./simple_vdma.bit"
IMAGE_FILE = "./orig.jpg"
EGRESS_VDMA_NAME = "SEG_axi_vdma_0_Reg"
INGRESS_VDMA_NAME = "SEG_axi_vdma_1_Reg"
# Set Debug to true to enable debug messages from the VDMA core
DEBUG = False
#DEBUG = True
# Set Verbose to true to dump a lot of messages about
VERBOSE = False
#VERBOSE = True
#These can be set between 0 - 2, the VDMA can also be configured for up to 32 frames in 32-bit memspace and 16 in 64-bit memspace
EGRESS_FRAME_INDEX = 0
INGRESS_FRAME_INDEX = 0
image_in = cv2.imread(IMAGE_FILE)
#Flip the color, the image stored in the image
image_in = cv2.cvtColor(image_in, cv2.COLOR_BGR2RGB)
IMAGE_WIDTH = image_in.shape[1]
IMAGE_HEIGHT = image_in.shape[0]
#Download Images
ol = Overlay(BITFILE_NAME)
ol.download()
vdma_egress = VDMA(name = EGRESS_VDMA_NAME, debug = DEBUG)
vdma_ingress = VDMA(name = INGRESS_VDMA_NAME, debug = DEBUG)
#Set the size of the image
vdma_egress.set_image_size(IMAGE_WIDTH, IMAGE_HEIGHT)
vdma_ingress.set_image_size(IMAGE_WIDTH, IMAGE_HEIGHT)
#The above functions created the video frames
#Create a Numpy NDArray
frame_out = np.zeros((IMAGE_HEIGHT, IMAGE_WIDTH, 3)).astype(np.uint8)
frame_out[0:IMAGE_HEIGHT, 0:IMAGE_WIDTH, :] = image_in[0:IMAGE_HEIGHT, 0:IMAGE_WIDTH, :]
#Populate the frame
frame = vdma_egress.get_frame(EGRESS_FRAME_INDEX)
frame.set_bytearray(bytearray(frame_out.astype(np.int8).tobytes()))
print ("Frame width, height: %d, %d" % (frame.width, frame.height))
print ("")
print ("Running? Egress:Ingress %s:%s" % (vdma_egress.is_egress_enabled(), vdma_ingress.is_ingress_enabled()))
if VERBOSE:
vdma_egress.dump_egress_registers()
vdma_ingress.dump_ingress_registers()
print ("")
print ("Enabling One of the Engine")
#Open Up the Ingress Side
vdma_ingress.start_ingress_engine( continuous = False,
num_frames = 1,
frame_index = INGRESS_FRAME_INDEX,
interrupt = False)
if VERBOSE:
vdma_egress.dump_egress_registers()
vdma_ingress.dump_ingress_registers()
print ("Running? Egress:Ingress %s:%s" % (vdma_egress.is_egress_enabled(), vdma_ingress.is_ingress_enabled()))
print ("")
print ("Enabling Both Engines")
#Quick Start
vdma_egress.start_egress_engine( continuous = False,
num_frames = 1,
frame_index = EGRESS_FRAME_INDEX,
interrupt = False)
print ("")
print ("Both of the engines should be halted after transferring one frame")
#XXX: I think this sleep isn't needed but the core erroneously reports an engine isn't finished even though it is.
#XXX: This sleep line can be commented out but the egress core may report it is not finished.
sleep(0.1)
if VERBOSE:
vdma_egress.dump_egress_registers()
vdma_ingress.dump_ingress_registers()
print ("Running? Egress:Ingress %s:%s" % (vdma_egress.is_egress_enabled(), vdma_ingress.is_ingress_enabled()))
if VERBOSE:
print ("Egress WIP: %d" % vdma_egress.get_wip_egress_frame())
print ("Ingress WIP: %d" % vdma_ingress.get_wip_ingress_frame())
#Check to see if the egress frame point progressed
print ("")
print ("Disabling both engines")
#Disable both
vdma_egress.stop_egress_engine()
vdma_ingress.stop_ingress_engine()
print ("Running? Egress:Ingress %s:%s" % (vdma_egress.is_egress_enabled(), vdma_ingress.is_ingress_enabled()))
if VERBOSE:
vdma_egress.dump_egress_registers()
vdma_ingress.dump_ingress_registers()
print ("Egress Error: 0x%08X" % vdma_egress.get_egress_error())
print ("Ingress Error: 0x%08X" % vdma_ingress.get_ingress_error())
frame = vdma_ingress.get_frame(INGRESS_FRAME_INDEX)
#frame.save_as_jpeg("./image.jpg")
np_frame = np.ndarray( shape = (IMAGE_HEIGHT, IMAGE_WIDTH, 3),
dtype=np.uint8,
buffer = frame.get_bytearray())
#SHOW IMAGE
plt.imshow(np_frame)
plt.show()
###Output
Frame width, height: 1920, 1080
Running? Egress:Ingress False:False
Enabling One of the Engine
Running? Egress:Ingress False:True
Enabling Both Engines
Both of the engines should be halted after transferring one frame
Running? Egress:Ingress False:False
Disabling both engines
Running? Egress:Ingress False:False
|
03NamedEntityRecognition.ipynb | ###Markdown
1) Basics of Named Entity Recognition Named Entity Recognition is a subtask of information extraction that classifies named entities into pre-defined categories such as names of persons, organizations, and locations. spaCy features an extremely fast statistical entity recognition system that assigns labels to contiguous spans of tokens. The default model identifies a variety of named and numeric entities, including companies, locations, organizations and products.
###Code
# official documentation
# https://spacy.io/usage/linguistic-features/#named-entities
# Import spaCy
import spacy
# load the English language library
nlp = spacy.load(name='en_core_web_sm')
# Create a simple doc object
doc = nlp("Apple is looking at buying U.K. startup for $1 billion")
for ent in doc.ents:
print(ent.text, ent.start_char, ent.end_char, ent.label_, str(spacy.explain(ent.label_)))
# Create another doc object
doc_2 = nlp("San Francisco considers banning sidewalk delivery robots")
for ent in doc_2.ents:
print(ent.text, ent.start_char, ent.end_char, ent.label_, str(spacy.explain(ent.label_)))
###Output
San Francisco 0 13 GPE Countries, cities, states
###Markdown
2) Adding Named Entity to Span
###Code
doc_3 = nlp("facebook is hiring a new vice president in U.S.")
for ent in doc_3.ents:
print(ent.text, ent.label_, str(spacy.explain(ent.label_)))
# we will add Facebook as Named Entity as a company
from spacy.tokens import Span
# Get the hash value of ORG entity label
ORG = doc_3.vocab.strings['ORG']
print(ORG)
# Create a Span for new entity
new_ent = Span(doc_3, 0, 1, label=ORG)
# Index locations from 0 to 1 (excludes 1)
# Add the entity to the existing Doc object
doc_3.ents = list(doc_3.ents) + [new_ent]
for ent in doc_3.ents:
print(ent.text, ent.label_, str(spacy.explain(ent.label_)))
###Output
facebook ORG Companies, agencies, institutions, etc.
U.S. GPE Countries, cities, states
###Markdown
3) Visualizing Named Entities
###Code
# Import spaCy
import spacy
# load the English language library
nlp = spacy.load(name='en_core_web_sm')
# Import the displaCy library
from spacy import displacy
doc = nlp("Apple is looking at buying U.K. startup for $1 billion")
displacy.render(docs=doc,style='ent',jupyter=True)
# Viewing Specific Entities
options = {'ents': ['ORG', 'MONEY']}
displacy.render(docs=doc,style='ent',jupyter=True,options=options)
###Output
_____no_output_____ |
guest_lectures/material/nlsy_introduction_notebook.ipynb | ###Markdown
Introduction to the NLSY79 dataset We will see:
1. How to perform simple regressions
2. How to generate a density plot
3. How to generate a heatmap

Background: Investigation of the wage dynamics in [The career decisions of young men](http://www.journals.uchicago.edu/doi/10.1086/262080) by Keane, M. P. and Wolpin, K. I. (1997).
$\rightarrow$ How persistent are the wage shocks?
* Perform wage regressions
* Use wage residuals where the effect of observable characteristics and common aggregate time trends have been eliminated
* Investigate persistence by
  * Density plots
  * Covariance matrix in a heatmap

Sample selection in this analysis:
* White males aged 16 or less as of October 1, 1977
* Time dimension is measured in periods, i.e. period 0 begins once an individual has turned 16 by October
* 10 years of follow-up

1) Import packages
Importing packages in a notebook once is sufficient.
###Code
import pandas as pd
import statsmodels.api as sm
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from patsy import dmatrices, dmatrix
import math
% matplotlib inline
# We ensure a proper formatting of the variables.
pd.options.display.float_format = '{:,.2f}'.format
# Adjust the default options s.t. the full dataset can be viewed.
pd.set_option('display.max_columns', 100)
###Output
_____no_output_____
###Markdown
2) Import the dataset
###Code
df = pd.read_pickle('Data/nlsy_intro_data')
# show dataframe
df
# Separate the dataset by years
df_container = []
for i in range(0,11):
df_container.append(df.loc[df['Period'] == i])
df_container[i] = df_container[i].set_index('Identifier')
# show data_container for different years
df_container[8]
###Output
_____no_output_____
###Markdown
3) Explore the dataset
###Code
# number of individuals:
len(df.Identifier.unique())
# describe the dataset
df.describe()
###Output
_____no_output_____
###Markdown
3.1.) Explore the wage variable
###Code
# Generate a table with the average wage by age and occupation (choice)
pd.crosstab(index = df['Age'], columns = df['Choice'], values = df['Wage'], aggfunc = 'mean', margins =True)
###Output
_____no_output_____
###Markdown
4) Run a simple regression for period 9
$$\log(\text{wage}_{i,t}) = \beta_{t,0} + \beta_{t,1} \cdot \text{schooling}_{i,t} + \beta_{t,2} \cdot \text{AFQT}_{i,t} + \epsilon_{i,t}$$
where $t = 9$
###Code
# create matrices
y, x = dmatrices('Log_wage ~ Schooling + AFQT_1', data = df_container[9])
# show dependent variable
y
# show regressor matrix
x
# choose the model (here OLS)
model_fit = sm.OLS(y,x)
# fit the model and store results
results = model_fit.fit()
# print results
print(results.summary())
# Predict fitted values
y_hat = results.predict()
y_hat
# compute the residuals
u = y - y_hat
u
# Access the parameters
results.params
# t statistic
results.tvalues
###Output
_____no_output_____
###Markdown
Please check [Statsmodels's](https://www.statsmodels.org/dev/regression.html) documentation website for further information and examples. 4) Describe the regressions For each year, run the regression:
$$\log(\text{wage}_{i,t}) = \beta_{t,0} + \beta_{t,1} \cdot \text{schooling}_{i,t} + \beta_{t,2} \cdot \text{exper_blue}_{i,t} + \beta_{t,3} \cdot \text{exper_blue}_{i,t}^2 + \beta_{t,4} \cdot \text{exper_white}_{i,t} + \beta_{t,5} \cdot \text{exper_white}_{i,t}^2 + \beta_{t,6} \cdot \text{exper_military}_{i,t} + \beta_{t,7} \cdot \text{exper_military}_{i,t}^2 + \beta_{t,8} \cdot \text{AFQT}_{i,t} + \beta_{t,9} \cdot \text{rotter_score}_{i,t} + \beta_{t,10} \cdot \text{rosenberg_score}_{i,t} + \beta_{t,11} \cdot \text{mother_schooling}_{i,t} + \text{year_dummies} + \epsilon_{i,t}$$
5) Run the regressions for period 9
###Code
y, x = dmatrices('Log_wage ~ Schooling + exper_blue + np.power(exper_blue,2)+ exper_white + \
np.power(exper_white,2) + exper_military + np.power(exper_military,2) + AFQT_1 + \
ROTTER_SCORE + ROSENBERG_SCORE + Mother_edu + d_1985 + d_1986 + d_1987 \
+ d_1988', data = df_container[9])
# choose the model (here OLS)
model_fit = sm.OLS(y,x)
# fit the model and store results
results = model_fit.fit()
# print results
print(results.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: Log_wage R-squared: 0.168
Model: OLS Adj. R-squared: 0.140
Method: Least Squares F-statistic: 6.076
Date: Sun, 25 Nov 2018 Prob (F-statistic): 5.23e-11
Time: 15:06:47 Log-Likelihood: -321.46
No. Observations: 436 AIC: 672.9
Df Residuals: 421 BIC: 734.1
Df Model: 14
Covariance Type: nonrobust
===============================================================================================
coef std err t P>|t| [0.025 0.975]
-----------------------------------------------------------------------------------------------
Intercept 7.2599 0.269 27.002 0.000 6.731 7.788
Schooling 0.0437 0.019 2.332 0.020 0.007 0.081
exper_blue -0.0169 0.041 -0.415 0.678 -0.097 0.063
np.power(exper_blue, 2) 0.0096 0.005 1.909 0.057 -0.000 0.020
exper_white 0.1138 0.046 2.481 0.013 0.024 0.204
np.power(exper_white, 2) -0.0077 0.009 -0.892 0.373 -0.025 0.009
exper_military -0.0661 0.073 -0.902 0.368 -0.210 0.078
np.power(exper_military, 2) 0.0069 0.013 0.520 0.604 -0.019 0.033
AFQT_1 0.0022 0.001 1.841 0.066 -0.000 0.005
ROTTER_SCORE -0.0264 0.011 -2.298 0.022 -0.049 -0.004
ROSENBERG_SCORE 0.0158 0.007 2.305 0.022 0.002 0.029
Mother_edu -0.0201 0.011 -1.790 0.074 -0.042 0.002
d_1985 1.8883 0.136 13.868 0.000 1.621 2.156
d_1986 1.8794 0.083 22.682 0.000 1.717 2.042
d_1987 1.7905 0.082 21.763 0.000 1.629 1.952
d_1988 1.7016 0.148 11.470 0.000 1.410 1.993
==============================================================================
Omnibus: 47.141 Durbin-Watson: 1.967
Prob(Omnibus): 0.000 Jarque-Bera (JB): 291.055
Skew: 0.095 Prob(JB): 6.28e-64
Kurtosis: 6.998 Cond. No. 3.40e+17
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The smallest eigenvalue is 1.73e-29. This might indicate that there are
strong multicollinearity problems or that the design matrix is singular.
###Markdown
5) Run the regressions for all periods and store the residuals as a variable
###Code
model_string = 'Log_wage ~ Schooling + exper_blue + np.power(exper_blue,2)+ exper_white + \
np.power(exper_white,2) + exper_military + np.power(exper_military,2) + AFQT_1 + \
ROTTER_SCORE + ROSENBERG_SCORE + Mother_edu + d_1978 + d_1979 + d_1980 + d_1981 + \
d_1982 + d_1983 + d_1984 + d_1985 + d_1986 + d_1987 + d_1988'
for i in df_container:
y,x = dmatrices(model_string, i)
i['resid'] = sm.OLS(y, x).fit().resid
###Output
_____no_output_____
###Markdown
6) Density Plots of Residuals 6.1) Density between residuals of period 2 and period 9
###Code
# Intersection of residuals in period 2 and period 9 (may not be the case due to attrition)
matched_identifier = list(set(df_container[2].index) & set(df_container[9].index))
# Define values used for the x and y axis of the density plot
x = df_container[2].loc[matched_identifier]['resid'].values
y = df_container[9].loc[matched_identifier]['resid'].values
# Create density plot
graph = sns.jointplot(x,y, kind = 'kde')
x_axis = 'Resid of period 2'
y_axis = 'Residual of period 9'
graph.set_axis_labels(x_axis,y_axis, fontsize =12)
###Output
_____no_output_____
###Markdown
6.2) Density between residuals of period 8 and period 9
###Code
# Intersection of residuals in period 8 and period 9 (may not be the case due to attrition)
matched_identifier = list(set(df_container[8].index) & set(df_container[9].index))
# Define values used for the x and y axis
x = df_container[8].loc[matched_identifier]['resid'].values
y = df_container[9].loc[matched_identifier]['resid'].values
graph = sns.jointplot(x, y, kind = "kde")
x_axis = 'Residual of Period 8'
y_axis = 'Residual of Period 9'
graph.set_axis_labels(x_axis, y_axis, fontsize=12)
plt.title('Observations: ' + str(len(matched_identifier)))  # use matplotlib directly; sns.plt was removed from seaborn
###Output
_____no_output_____
###Markdown
More information on seaborn jointplots may be found [here](https://seaborn.pydata.org/generated/seaborn.jointplot.html) 7) Heatmap 7.1) Covariance matrix
###Code
cov = np.empty((11,11))
cov[:] = np.nan
column = -1
for period in df_container:
row = -1
column += 1
for lag in df_container:
row += 1
matched_identifier = list(set(period.index) & set(lag.index))
# at least 30 observations:
if len(matched_identifier) >= 30:
cov[(row,column)] = round((np.cov(period.loc[matched_identifier]['resid'].values,
lag.loc[matched_identifier]['resid'].values, rowvar = False, bias = True)[(0,1)]),2)
else:
pass
# show coariance
cov
###Output
_____no_output_____
###Markdown
7.2) Covariance matrix as a heatmap Result
###Code
# Heatmap of covariance matrix of residuals
mask = np.zeros_like(cov[1:11,1:11])
mask[np.triu_indices_from(mask, k = 1)] = True
with sns.axes_style("white"):
ax = sns.heatmap(np.round(cov[1:11,1:11],2),mask=mask,annot=True)
plt.xlabel('Residuals of Period')
plt.ylabel('Residuals of Period')
ax.set_yticklabels(reversed(range(1, 11)))
ax.set_xticklabels(range(1, 11))
plt.title('Covariance matrix of residuals')
plt.savefig('Figures/cov_heatmap.png')
###Output
_____no_output_____
###Markdown
Generate the covariance matrix heatmap step by step1. Plot the covariance matrix in a heatmap2. Select relevant covariance matrix rows and columns3. Create a mask to show only covariance triangular matrix below the diagonal4. Add covariance values to the heatmap5. Add x axis and y axis names. Add a title.6. Adjust the counter of the axes7. Save the figure8. Change the color to blue
###Code
# create the heatmap step by step
mask = np.zeros_like(cov[1:11, 1:11])
mask[np.triu_indices_from(mask, k = 1)] = True
with sns.axes_style('white'):
ax = sns.heatmap(cov[1:11, 1:11], mask = mask, annot = True, cmap="YlGnBu")
plt.title('Covariance matrix')
plt.xlabel('Resid of period')
plt.ylabel('Resid of period')
ax.set_yticklabels(reversed(range(1, 11)))
ax.set_xticklabels(range(1, 11))
plt.savefig('Figures/cov_matrix.png')
###Output
_____no_output_____ |
Showcase Notebook Energy Consumption.ipynb | ###Markdown
Day 1
###Code
item.gap_minder(1990)
item.gap_minder(2016)
###Output
_____no_output_____
###Markdown
The two plots clearly show that the world's overall energy consumption as well as GDP has gone up over the recent years. For countries at the lower end, it looks like their GDP has slightly increased while energy consumption remained roughly the same. On the other hand, high economic development stands in conjunction with increased energy consumption. It is visible that countries with a higher population tend to be on the upper end of the consumption side. Also, the overall density of the map increased, corresponding to a higher population.
###Code
item.plot_consumption("Germany")
item.plot_consumption("Germany", True)
item.plot_consumption("India")
item.plot_consumption("India", True)
item.plot_consumption("China")
item.plot_consumption("China", True)
###Output
_____no_output_____
###Markdown
Looking at the six plots, it is clearly evident that the total energy consumption of India and China vastly increased over the years, corresponding to their strong development. This is backed by coal and oil as the main energy providers for both countries. In comparison, the overall energy consumption remained rather constant for Germany, and even seems to have slightly decreased in recent years. This corresponds to the fact that Germany can already be considered a developed country over the observation period, whereas India and China only recently experienced exponential economic growth. Like India and China, Germany is also still highly reliant on coal and oil. Additionally, it shows high dependence on gas and some dependence on nuclear. While coal, oil, and nuclear consumption slightly decreased over the past, gas consumption stayed at a rather constant level. However, it should be pointed out that Germany also tries to increasingly shift to renewable energy sources such as wind and solar from the 2000s on. Nevertheless, while their share is noticeably higher in Germany compared to India and China, it still only makes up a tiny portion of the energy mix. The EU carbon tax likely also contributes to this shift, which will be further discussed below. Looking at China, the relative consumption amounts seem to be somewhat similar to India, whereas its absolute consumption skyrockets in comparison to both India and Germany. This corresponds to the fact that it is the country with the highest population while simultaneously being the number one greenhouse gas emitter. Overall, all plots clearly show that the current efforts of all countries are certainly not enough to meet the goal of 1.5 degrees by 2030, nor net-zero by 2050.
###Code
item.gdp("Germany", "India", "China")
###Output
_____no_output_____
###Markdown
The plot shows that the GDP of all countries has risen since 1970. However, there are clear differences in the steepness of the curves. Germany shows a rather constant increase. While it had the highest GDP back in the 1970s, it has been overtaken by China around 1980 as well as by India in the 2000s. This corresponds to the huge developments of the Indian and Chinese economy over the recent years whereas Germany already has reached a high level of economic development. However, as especially apparent when looking at China in both - consumption and GDP plots - economic development also leads to an extreme increase of energy consumption which is currently majorly carried by non-sustainable energy sources. To conclude, it is evident that there is a clear connection between energy consumption, GDP and population. A higher GDP corresponds to a higher energy consumption as does an increasing population. This intuitively makes sense; however, the key issue is what energy sources are backing a country's development and how much emissions they are causing. Here, individual political frameworks and requirements come into play. Day 2 will shed some more light on recent developments, interdependencies and also tries to give some insights into possible ways ahead. Day 2
###Code
item.compare_consumption("Germany", "India", "China")
###Output
_____no_output_____
###Markdown
This plot backs the insights of the previously plotted consumption patterns in absolute and relative terms. Furthermore, it gives some insights on the corresponding CO2 emissions, which are a key factor when assessing the sustainability of energy sources. Evidently, wind, solar as well as nuclear energy consumption do not cause any CO2 emissions. In contrast, coal is the main driver of greenhouse gases, followed by oil. Comparatively, gas consumption certainly leads to less CO2 emissions than oil or coal. Thus, while it clearly needs to be reduced and replaced by green solutions in the future, it might serve as an intermediate solution/buffer on the way to net zero. This is even more true for hydro consumption, which shows even less CO2 emissions and thus presents one of the hopes of future energy mixes besides the even more preferred zero-emission sources. However, recent developments clearly showed that the wide expansion and adoption of those sources is hindered by the high investments in necessary infrastructure, bureaucracy and corresponding political unwillingness on a national and international level.
###Code
item.scatter_plot()
###Output
No handles with labels found to put in legend.
###Markdown
This plot complements the interdependence of economic development and energy consumption by clearly showing that higher development, alias higher consumption, ultimately also leads to higher emissions, given countries do not change their energy mix. Therefore, it emphasizes the urgent need for all countries to initiate a shift to renewable energy sources right now! Let's have a look at how energy consumption and emissions are predicted to evolve over the next 5 years based on the historical data and trends:
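(`arima_predict` belongs to the project's own class; purely to illustrate the underlying idea, and not the actual implementation, a univariate ARIMA forecast with statsmodels could look like the sketch below, where the series values are made up.)
```python
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical yearly totals of one country's energy consumption (TWh)
consumption = [1200, 1250, 1310, 1380, 1420, 1500, 1530, 1610]

model = ARIMA(consumption, order=(1, 1, 1))  # (p, d, q) chosen purely for illustration
fitted = model.fit()
print(fitted.forecast(steps=5))              # point forecasts for the next 5 periods
```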
###Code
item.arima_predict("Germany", 5)
item.arima_predict("India", 5)
item.arima_predict("China", 5)
###Output
_____no_output_____ |
MLEveryday2.ipynb | ###Markdown
**ML** **day2** > Today's goal is NumPy
###Code
import numpy as np
np.random.seed(seed=1234)
# Scalars
x = np.array(6)
print ("x: ",x)  # print x
print("x ndim: ",x.ndim)  # number of dimensions of x
print("x shape:",x.shape)  # tuple describing the array's shape, i.e. the size of each dimension
print("x size: ",x.size)  # total number of elements
print ("x dtype: ",x.dtype)  # data type of the elements
# Arrays
x = np.array([1,2,3,4,5])
print("x :",x)
print("x ndim",x.ndim)
###Output
x: 6
x ndim: 0
x shape: ()
x size: 1
x dtype: int64
###Markdown
Today's progress: NumPy 0%. Ajin is really good, so no more writing today.
###Code
###Output
_____no_output_____ |
#100Viz/01 - Firefighters in CA/src/01 Fighting Fire in CA.ipynb | ###Markdown
Daily Chart 01: Firefighters in California
Source: American Community Survey, 2003-2016.
Notes: Adults (18+) in California (FIPS = 06), occupational code (OCC) 3740: Firefighters. List and dates of largest fires in California from [Wikipedia](https://en.wikipedia.org/wiki/List_of_California_wildfires)
***
**Set up**
###Code
import pandas as pd
import altair as alt
# Theme:
%run "../../00 - Set Up/scripts/cimarron_theme.py"
%%html
<style>
@import url('https://fonts.googleapis.com/css?family=Ubuntu|Ubuntu+Condensed|Ubuntu+Mono');
</style>
df = pd.read_csv('../data/processed/Firefighters in CA.csv', parse_dates=['Year'])
df.head()
dff = df.melt(id_vars='Year').copy()
dff.columns = ['year', 'native status', 'number of people',]
fires = alt.pd.read_csv('../data/processed/fires.csv', encoding = 'utf-8', parse_dates=['end','start'])
fires.head()
base_df = dff.groupby('year')["number of people"].sum().reset_index()
alt_df = dff[dff['native status'] == 'Foreign-Born']
base = alt.Chart(base_df).mark_line().encode(
x = alt.X("year:T", title = " ", axis = alt.Axis(tickCount = 13, grid = False,),),
y = alt.Y("number of people:Q", title = "number of firefighters",),
).properties(
title = "01: Fighting Fire in California",
width = 1200,
height = 600,
)
band = alt.Chart(fires).mark_rect().encode(
x='start:T',
x2='end:T',
color = alt.Color("acres:Q", legend = alt.Legend(title = "total acres burned", zindex = 0, padding = 0, offset=-120))
).properties(
width = 1080,
height = 800,
)
main_chart = base + band
source = "SOURCE: American Community Survey, 2003-2016."
source2 = "List and dates of largest fires in California from Wikipedia."
notes = "NOTES: Adults (18+) in California, OCCupational code 3740: Firefighters. "
source_chart = alt.Chart(fires).mark_text(text = source, dx = 800, size = 18).properties(
height = 20,
width = 1080,
)
# source2_chart = alt.Chart(fires).mark_text(text = source2, dx = 800, size = 18).properties(
# height = 20,
# width = 1080,
# )
notes_chart = alt.Chart(fires).mark_text(text = notes + source2, dx = 800, size = 18).properties(
height = 20,
width = 1080,
)
# caption = source_chart & source2_chart & notes_chart
daily_chart1 = main_chart & source_chart & notes_chart
daily_chart1
###Output
_____no_output_____ |
docs/source/examples/tutorial/01-preprocess.ipynb | ###Markdown
Preliminary Preprocessing Read and Process E-Commerce data In this notebook, we are going to use a subset of a publicly available [eCommerce dataset](https://www.kaggle.com/mkechinov/ecommerce-behavior-data-from-multi-category-store). The full dataset contains 7 months of data (from October 2019 to April 2020) from a large multi-category online store. Each row in the file represents an event. All events are related to products and users, and each event is like a many-to-many relation between products and users. The data was collected by the Open CDP project, and the source of the dataset is the [REES46 Marketing Platform](https://rees46.com/). We use only the `2019-Oct.csv` file for training our models, so you can visit this site and download the csv file: https://www.kaggle.com/mkechinov/ecommerce-behavior-data-from-multi-category-store.
###Code
import os
import numpy as np
import gc
import shutil
import glob
import cudf
import nvtabular as nvt
###Output
_____no_output_____
###Markdown
Read Data via cuDF from CSV At this point we expect that you have already downloaded the `2019-Oct.csv` dataset and stored it in the `INPUT_DATA_DIR` as defined below. It is worth mentioning that the raw dataset is ~6 GB, so a single GPU with 16 GB or less memory might run out of memory. To avoid that, you can directly start from the second notebook, `02-ETL_with_NVTabular`, using the `Oct-2019.parquet` file provided [here](https://drive.google.com/drive/folders/1GjNKerPMvEtQHt9Z37ncF1zFedDXL_RJ).
###Code
# define some information about where to get our data
INPUT_DATA_DIR = os.environ.get("INPUT_DATA_DIR", "/workspace/data/")
%%time
raw_df = cudf.read_csv(os.path.join(INPUT_DATA_DIR, '2019-Oct.csv'))
raw_df.head()
raw_df.shape
###Output
_____no_output_____
###Markdown
Convert timestamp from datetime
###Code
raw_df['event_time_dt'] = raw_df['event_time'].astype('datetime64[s]')
raw_df['event_time_ts']= raw_df['event_time_dt'].astype('int')
raw_df.head()
# check out the columns with nulls
raw_df.isnull().any()
# Remove rows where `user_session` is null.
raw_df = raw_df[raw_df['user_session'].isnull()==False]
len(raw_df)
###Output
_____no_output_____
###Markdown
We no longer need `event_time` column.
###Code
raw_df = raw_df.drop(['event_time'], axis=1)
###Output
_____no_output_____
###Markdown
Categorify `user_session` column Although `user_session` is not used as an input feature for the model, it is useful to convert those raw long strings to int values to avoid potential failures when grouping interactions by `user_session` in the next notebook.
###Code
cols = list(raw_df.columns)
cols.remove('user_session')
cols
# load data
df_event = nvt.Dataset(raw_df)
# categorify user_session
cat_feats = ['user_session'] >> nvt.ops.Categorify()
workflow = nvt.Workflow(cols + cat_feats)
workflow.fit(df_event)
df = workflow.transform(df_event).to_ddf().compute()
df.head()
raw_df = None
del(raw_df)
gc.collect()
###Output
_____no_output_____
###Markdown
Removing consecutive repeated (user, item) interactions We keep repeated interactions on the same items, removing only consecutive interactions, because those might be due to browser tab refreshes or different interaction types (e.g. click, add-to-cart, purchase).
###Code
%%time
df = df.sort_values(['user_session', 'event_time_ts']).reset_index(drop=True)
print("Count with in-session repeated interactions: {}".format(len(df)))
# Sorts the dataframe by session and timestamp, to remove consecutive repetitions
df['product_id_past'] = df['product_id'].shift(1).fillna(0)
df['session_id_past'] = df['user_session'].shift(1).fillna(0)
#Keeping only no consecutive repeated in session interactions
df = df[~((df['user_session'] == df['session_id_past']) & \
(df['product_id'] == df['product_id_past']))]
print("Count after removed in-session repeated interactions: {}".format(len(df)))
del(df['product_id_past'])
del(df['session_id_past'])
gc.collect()
###Output
Count with in-session repeated interactions: 42448762
Count after removed in-session repeated interactions: 30733301
CPU times: user 789 ms, sys: 120 ms, total: 909 ms
Wall time: 1.16 s
###Markdown
Include the item first time seen feature (for recency calculation) We create the `prod_first_event_time_ts` column, which indicates the timestamp at which an item was seen for the first time.
###Code
item_first_interaction_df = df.groupby('product_id').agg({'event_time_ts': 'min'}) \
.reset_index().rename(columns={'event_time_ts': 'prod_first_event_time_ts'})
item_first_interaction_df.head()
gc.collect()
df = df.merge(item_first_interaction_df, on=['product_id'], how='left').reset_index(drop=True)
df.head()
del(item_first_interaction_df)
item_first_interaction_df=None
gc.collect()
###Output
_____no_output_____
###Markdown
In this tutorial, we only use one week of data from Oct 2019 dataset.
###Code
# check the min date
df['event_time_dt'].min()
# Filters only the first week of the data.
df = df[df['event_time_dt'] < np.datetime64('2019-10-08')].reset_index(drop=True)
###Output
_____no_output_____
###Markdown
We verify that we only have the first week of Oct-2019 dataset.
###Code
df['event_time_dt'].max()
###Output
_____no_output_____
###Markdown
We drop `event_time_dt` column as it will not be used anymore.
###Code
df = df.drop(['event_time_dt'], axis=1)
df.head()
###Output
_____no_output_____
###Markdown
Save the data as a single parquet file to be used in the ETL notebook.
###Code
# save df as parquet files on disk
df.to_parquet(os.path.join(INPUT_DATA_DIR, 'Oct-2019.parquet'))
###Output
_____no_output_____
###Markdown
- Shut down the kernel
###Code
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____ |
notebooks/visualizing_imagenet_classes_upscaled.ipynb | ###Markdown
Install dependencies
###Code
!pip install ipdb tqdm cloudpickle matplotlib lucid PyDrive
###Output
_____no_output_____
###Markdown
Download checkpoint files for painters
###Code
!mkdir tf_vae
!wget -O tf_vae/vae-300000.index 'https://docs.google.com/uc?export=download&id=1ulHdDxebH46m_0ZoLa2Wsz_6vStYqJQm'
!wget -O tf_vae/vae-300000.meta 'https://docs.google.com/uc?export=download&id=1nHN_i7Ro9g0lP4y_YQCvIWrOVX1I3CJa'
!wget -O tf_vae/vae-300000.data-00000-of-00001 'https://docs.google.com/uc?export=download&id=18rAJcUJwFJOAcjzsabtqK12udsHMZkVk'
!wget -O tf_vae/checkpoint 'https://docs.google.com/uc?export=download&id=18U4qMNBdyvEk-Y-Mr3MNPEHSHxhcO9hn'
!mkdir tf_gan3
!wget -O tf_gan3/gan-571445.meta 'https://docs.google.com/uc?export=download&id=15kEG1Tiu2FUg5SILVt_9yOsSd3QHwVGA'
!wget -O tf_gan3/gan-571445.index 'https://docs.google.com/uc?export=download&id=11uyFbQsRZoWa9Yq52AFXDXPjPQoGF_ER'
!wget -O tf_gan3/gan-571445.data-00000-of-00001 'https://docs.google.com/uc?export=download&id=11cbvz-CH3KvfZEwNQ2OUujfbf6AKNoQa'
!wget -O tf_gan3/checkpoint 'https://docs.google.com/uc?export=download&id=1A539u51t0L31Ab1M2uPUV2SsCFsNDQRo'
!mkdir tf_gan4
!wget -O tf_gan4/gan-279892.meta 'https://docs.google.com/uc?export=download&id=15qcjIqxnJ7UaB_EP8Jko1IjpY1JQMCh7'
!wget -O tf_gan4/gan-279892.index 'https://docs.google.com/uc?export=download&id=1q5g-q04HOGpNJY83tk4_0aRLwg800av1'
!wget -O tf_gan4/gan-279892.data-00000-of-00001 'https://docs.google.com/uc?export=download&id=1Jtx9_5Dms9NXUnNq8r-TIf94dZyDjdBj'
!wget -O tf_gan4/checkpoint 'https://docs.google.com/uc?export=download&id=1cnagxjLZvWWWPFl0FJzTVuoja2HorBk8'
###Output
_____no_output_____
###Markdown
Imports
###Code
import numpy as np
import tensorflow as tf
import tensorflow.contrib.layers as tcl
from IPython.display import display
import moviepy.editor as mpy
from moviepy.video.io.ffmpeg_writer import FFMPEG_VideoWriter
import lucid.modelzoo.vision_models as models
from lucid.misc.io import show
import lucid.optvis.objectives as objectives
import lucid.optvis.param as param
import lucid.optvis.render as render
import lucid.optvis.transform as transform
from lucid.misc.redirected_relu_grad import redirected_relu_grad, redirected_relu6_grad
from lucid.misc.gradient_override import gradient_override_map
print(tf.__version__)
###Output
_____no_output_____
###Markdown
VAE painter
###Code
class ConvVAE2(object):
def __init__(self, reuse=False, gpu_mode=True, graph=None):
self.z_size = 64
self.reuse = reuse
if not gpu_mode:
with tf.device('/cpu:0'):
tf.logging.info('conv_vae using cpu.')
self._build_graph(graph)
else:
tf.logging.info('conv_vae using gpu.')
self._build_graph(graph)
self._init_session()
def build_decoder(self, z, reuse=False):
with tf.variable_scope('decoder', reuse=reuse):
h = tf.layers.dense(z, 4*256, name="fc")
h = tf.reshape(h, [-1, 1, 1, 4*256])
h = tf.layers.conv2d_transpose(h, 128, 5, strides=2, activation=tf.nn.relu, name="deconv1")
h = tf.layers.conv2d_transpose(h, 64, 5, strides=2, activation=tf.nn.relu, name="deconv2")
h = tf.layers.conv2d_transpose(h, 32, 6, strides=2, activation=tf.nn.relu, name="deconv3")
return tf.layers.conv2d_transpose(h, 3, 6, strides=2, activation=tf.nn.sigmoid, name="deconv4")
def build_predictor(self, actions, reuse=False, is_training=False):
with tf.variable_scope('predictor', reuse=reuse):
h = tf.layers.dense(actions, 256, activation=tf.nn.leaky_relu, name="fc1")
h = tf.layers.batch_normalization(h, training=is_training, name="bn1")
h = tf.layers.dense(h, 64, activation=tf.nn.leaky_relu, name="fc2")
h = tf.layers.batch_normalization(h, training=is_training, name="bn2")
h = tf.layers.dense(h, 64, activation=tf.nn.leaky_relu, name="fc3")
h = tf.layers.batch_normalization(h, training=is_training, name="bn3")
return tf.layers.dense(h, self.z_size, name='fc4')
def _build_graph(self, graph):
if graph is None:
self.g = tf.Graph()
else:
self.g = graph
with self.g.as_default(), tf.variable_scope('conv_vae', reuse=self.reuse):
#### predicting part
self.actions = tf.placeholder(tf.float32, shape=[None, 12])
self.predicted_z = self.build_predictor(self.actions, is_training=False)
self.predicted_y = self.build_decoder(self.predicted_z)
# initialize vars
self.init = tf.global_variables_initializer()
def generate_stroke_graph(self, actions):
with tf.variable_scope('conv_vae', reuse=True):
with self.g.as_default():
# Encoder?
z = self.build_predictor(actions, reuse=True, is_training=False)
# Decoder
return self.build_decoder(z, reuse=True)
def _init_session(self):
"""Launch TensorFlow session and initialize variables"""
self.sess = tf.Session(graph=self.g)
self.sess.run(self.init)
def close_sess(self):
""" Close TensorFlow session """
self.sess.close()
###Output
_____no_output_____
###Markdown
GAN Painter
###Code
def relu_batch_norm(x):
return tf.nn.relu(tf.contrib.layers.batch_norm(x, updates_collections=None))
class GeneratorConditional(object):
def __init__(self, divisor=1, add_noise=False):
self.x_dim = 64 * 64 * 3
self.divisor=divisor
self.name = 'lsun/dcgan/g_net'
self.add_noise = add_noise
def __call__(self, conditions, is_training):
with tf.contrib.framework.arg_scope([tcl.batch_norm],
is_training=is_training):
with tf.variable_scope(self.name) as vs:
bs = tf.shape(conditions)[0]
if self.add_noise:
conditions = tf.concat([conditions, tf.random.uniform([bs, 10])], axis=1)
fc = tcl.fully_connected(conditions, 4 * 4 * 1024/self.divisor, activation_fn=tf.identity)
conv1 = tf.reshape(fc, tf.stack([bs, 4, 4, 1024/self.divisor]))
conv1 = relu_batch_norm(conv1)
conv2 = tcl.conv2d_transpose(
conv1, 512/self.divisor, [4, 4], [2, 2],
weights_initializer=tf.random_normal_initializer(stddev=0.02),
activation_fn=relu_batch_norm
)
conv3 = tcl.conv2d_transpose(
conv2, 256/self.divisor, [4, 4], [2, 2],
weights_initializer=tf.random_normal_initializer(stddev=0.02),
activation_fn=relu_batch_norm
)
conv4 = tcl.conv2d_transpose(
conv3, 128/self.divisor, [4, 4], [2, 2],
weights_initializer=tf.random_normal_initializer(stddev=0.02),
activation_fn=relu_batch_norm
)
conv5 = tcl.conv2d_transpose(
conv4, 3, [4, 4], [2, 2],
weights_initializer=tf.random_normal_initializer(stddev=0.02),
activation_fn=tf.sigmoid)
return conv5
@property
def vars(self):
return [var for var in tf.global_variables() if self.name in var.name]
class ConvGAN(object):
def __init__(self, add_noise=False, reuse=False, gpu_mode=True, graph=None):
self.reuse = reuse
self.g_net = GeneratorConditional(divisor=4, add_noise=add_noise)
if not gpu_mode:
with tf.device('/cpu:0'):
tf.logging.info('conv_gan using cpu.')
self._build_graph(graph)
else:
tf.logging.info('conv_gan using gpu.')
self._build_graph(graph)
self._init_session()
def _build_graph(self, graph):
if graph is None:
self.g = tf.Graph()
else:
self.g = graph
with self.g.as_default(), tf.variable_scope('conv_gan', reuse=self.reuse):
self.actions = tf.placeholder(tf.float32, shape=[None, 12])
self.y = self.g_net(self.actions, is_training=False)
self.init = tf.global_variables_initializer()
def generate_stroke_graph(self, actions):
with tf.variable_scope('conv_gan', reuse=True):
with self.g.as_default():
return self.g_net(actions, is_training=False)
def _init_session(self):
"""Launch TensorFlow session and initialize variables"""
self.sess = tf.Session(graph=self.g)
self.sess.run(self.init)
def close_sess(self):
""" Close TensorFlow session """
self.sess.close()
###Output
_____no_output_____
###Markdown
Construct the Lucid graph
###Code
def import_model(model, t_image, t_image_raw, scope="import"):
model.import_graph(t_image, scope=scope, forget_xy_shape=True)
def T(layer):
if layer == "input": return t_image_raw
if layer == "labels": return model.labels
if ":" in layer:
return t_image.graph.get_tensor_by_name("%s/%s" % (scope,layer))
else:
return t_image.graph.get_tensor_by_name("%s/%s:0" % (scope,layer))
return T
class LucidGraph(object):
def __init__(self, class_to_plot='centipede', num_strokes=4, batch_size=1, painter_type="GAN",
connected=True, add_noise=False, lr=0.05, models_to_optimize=['inception_v1', 'inception_v1_slim'],
overlap_px=10, repeat=2, alternate=True,
gpu_mode=True, graph=None):
self.class_to_plot = class_to_plot
self.batch_size = batch_size
self.painter_type = painter_type
self.connected=connected
self.add_noise = add_noise
# For overlapping canvases
self.overlap_px = overlap_px
self.repeat = repeat
self.alternate = alternate
self.full_size = 64*repeat - overlap_px*(repeat - 1)
self.unrepeated_num_strokes= num_strokes
self.num_strokes= num_strokes * self.repeat**2
print('full_size', self.full_size, 'max_seq_len', self.num_strokes)
self.inception_v1 = models.InceptionV1()
self.inception_v1.load_graphdef()
self.inception_v1_slim = models.InceptionV1_slim()
self.inception_v1_slim.load_graphdef()
self.inception_v2_slim = models.InceptionV2_slim()
self.inception_v2_slim.load_graphdef()
self.mobilenet_v2_14 = models.MobilenetV2_14_slim()
self.mobilenet_v2_14.load_graphdef()
self.resnet_v1_50 = models.ResnetV1_50_slim()
self.resnet_v1_50.load_graphdef()
transforms = [
#transform.pad(12, mode='constant', constant_value=.5),
transform.jitter(8),
#transform.random_scale([1 + (i-5)/50. for i in range(11)]),
transform.random_rotate(list(range(-20, 21)) + 5*[0]),
transform.jitter(4),
]
self.transform_f = render.make_transform_f(transforms)
self.optim = render.make_optimizer(tf.train.AdamOptimizer(lr), [])
self.obj_inception_v1 = objectives.class_logit('softmax1', class_to_plot)
self.obj_inception_v1_slim = objectives.class_logit('InceptionV1/Logits/Predictions/Softmax', class_to_plot)
self.obj_inception_v2_slim = objectives.class_logit('InceptionV2/Predictions/Softmax', class_to_plot)
self.obj_mobilenet_v2_14 = objectives.class_logit('MobilenetV2/Predictions/Softmax', class_to_plot)
self.obj_resnet_v1_50 = objectives.class_logit('resnet_v1_50/predictions/Softmax', class_to_plot)
self.models_to_optimize_dict = {
'inception_v1': {'model': self.inception_v1, 'obj': self.obj_inception_v1, 'scope': 'i'},
'inception_v1_slim': {'model': self.inception_v1_slim, 'obj': self.obj_inception_v1_slim, 'scope': 'i_slim'},
'inception_v2_slim': {'model': self.inception_v2_slim, 'obj': self.obj_inception_v2_slim, 'scope': 'i2_slim'},
'mobilenet_v2_14': {'model': self.mobilenet_v2_14, 'obj': self.obj_mobilenet_v2_14, 'scope': 'm_v2_14'},
'resnet_v1_50': {'model': self.resnet_v1_50, 'obj': self.obj_resnet_v1_50, 'scope': 'resnet_v1_50'}
}
self.models_to_optimize = [self.models_to_optimize_dict[key] for key in models_to_optimize]
self.gpu_mode = gpu_mode
if not gpu_mode:
with tf.device('/cpu:0'):
tf.logging.info('Model using cpu.')
self._build_graph(graph)
else:
#tf.logging.info('Model using gpu.')
self._build_graph(graph)
self._init_session()
def _build_graph(self, graph):
if graph is None:
self.g = tf.Graph()
else:
self.g = graph
# Set up graphs of VAE or GAN
if self.painter_type == "GAN":
self.painter = ConvGAN(
add_noise=self.add_noise,
reuse=False,
gpu_mode=self.gpu_mode,
graph=self.g)
elif self.painter_type=="VAE":
self.painter = ConvVAE2(
reuse=False,
gpu_mode=self.gpu_mode,
graph=self.g)
self.painter.close_sess()
with self.g.as_default():
print('GLOBAL VARS', tf.global_variables())
with self.g.as_default():
batch_size = self.batch_size
tile_size = 5
self.actions = tf.get_variable("action_vars", [batch_size, self.num_strokes, 12],
#initializer=tf.initializers.random_normal()
initializer=tf.initializers.random_uniform()
)
# Prepare loop vars for rnn loop
canvas_state = tf.ones(shape=[batch_size, self.full_size, self.full_size, 3], dtype=tf.float32)
i = tf.constant(0)
initial_canvas_ta = tf.TensorArray(dtype=tf.float32, size=self.num_strokes)
loop_vars = (
canvas_state,
initial_canvas_ta, i)
# condition for continuation
def cond(cs, c_ta, i):
return tf.less(i, self.num_strokes)
# run one state of rnn cell
def body(cs, c_ta, i):
trimmed_actions = tf.sigmoid(self.actions)
print(trimmed_actions.get_shape())
def use_whole_action():
return trimmed_actions[:, i, :12]
def use_previous_entrypoint():
# start x and y are previous end x and y
# start pressure is previous pressure
return tf.concat([trimmed_actions[:, i, :9], trimmed_actions[:, i-1, 4:6], trimmed_actions[:, i-1, 0:1]], axis=1)
if self.connected:
inp = tf.cond(tf.equal(i, 0), true_fn=use_whole_action, false_fn=use_previous_entrypoint)
else:
inp = use_whole_action()
inp = tf.reshape(inp, [-1, 12])
print(inp.get_shape())
decoded_stroke = self.painter.generate_stroke_graph(inp)
cases = []
ctr = 0
for a in range(self.repeat):
for b in range(self.repeat):
print([int(self.repeat**2), ctr])
print([[0, 0], [(64-self.overlap_px)*a, (64-self.overlap_px)*(self.repeat-1-a)], [(64-self.overlap_px)*b, (64-self.overlap_px)*(self.repeat-1-b)], [0, 0]])
cases.append(
(
tf.equal(tf.floormod(i, int(self.repeat**2)), ctr) if self.alternate else tf.less(i, self.unrepeated_num_strokes*(ctr+1)),
lambda a=a, b=b: tf.pad(decoded_stroke,
[[0, 0], [(64-self.overlap_px)*a, (64-self.overlap_px)*(self.repeat-1-a)], [(64-self.overlap_px)*b, (64-self.overlap_px)*(self.repeat-1-b)], [0, 0]],
constant_values=1)
)
)
ctr += 1
print(cases)
decoded_stroke = tf.case(cases)
darkness_mask = tf.reduce_mean(decoded_stroke, axis=3)
darkness_mask = 1 - tf.reshape(darkness_mask, [batch_size, self.full_size, self.full_size, 1])
darkness_mask = darkness_mask / tf.reduce_max(darkness_mask)
color_action = trimmed_actions[:, i, 6:9]
color_action = tf.reshape(color_action, [batch_size, 1, 1, 3])
color_action = tf.tile(color_action, [1, self.full_size, self.full_size, 1])
stroke_whitespace = tf.equal(decoded_stroke, 1.)
maxed_stroke = tf.where(stroke_whitespace, decoded_stroke, color_action)
cs = (darkness_mask)*maxed_stroke + (1-darkness_mask)*cs
c_ta = c_ta.write(i, cs)
i = tf.add(i, 1)
return (cs, c_ta, i)
final_canvas_state, final_canvas_ta, _ = tf.while_loop(cond, body, loop_vars, swap_memory=True)
self.final_canvas_state = final_canvas_state
self.intermediate_canvases = final_canvas_ta.stack()
self.resized_canvas = tf.image.resize_images(self.final_canvas_state, [224, 224])
self.resized_canvas_227 = tf.image.resize_images(self.final_canvas_state, [227, 227])
tiled_canvas = tf.tile(self.resized_canvas, [tile_size, 1, 1, 1])
tiled_canvas_227 = tf.tile(self.resized_canvas_227, [tile_size, 1, 1, 1])
global_step = tf.train.get_or_create_global_step()
with gradient_override_map({'Relu': redirected_relu_grad,
'Relu6': redirected_relu6_grad}):
#self.T = render.import_model(self.inception_v1, self.transform_f(tiled_canvas), tiled_canvas)
#self.T2 = import_model(self.inception_v1_slim, self.transform_f(tiled_canvas), tiled_canvas, scope='i_slim')
#self.T3 = import_model(self.inception_v2_slim, self.transform_f(tiled_canvas), tiled_canvas, scope='i2_slim')
T_list = [import_model(x['model'], self.transform_f(tiled_canvas), tiled_canvas, scope=x['scope'])
for x in self.models_to_optimize]
self.loss = 0
self.loss_list = []
for i in range(len(T_list)):
l = self.models_to_optimize[i]['obj'](T_list[i])/tile_size
self.loss = self.loss + l
self.loss_list.append(l)
self.loss = self.loss / len(T_list)
#self.loss = self.obj_inception_v1(self.T)/5 + self.obj_inception_v1_slim(self.T2)/5 + self.obj_inception_v2_slim(self.T3)/5
self.vis_op = self.optim.minimize(-self.loss, global_step=global_step, var_list=[self.actions])
# initialize vars
self.init = tf.global_variables_initializer()
print('TRAINABLE', tf.trainable_variables())
def train(self, thresholds=range(0, 5000, 30)):
self.images = []
vis = self.sess.run(self.resized_canvas)
show(np.hstack(vis))
try:
for i in range(max(thresholds)+1):
loss_, _ = self.sess.run([self.loss_list, self.vis_op])
if i in thresholds:
#print(self.sess.run(self.actions))
vis = self.sess.run(self.resized_canvas)
print('step', i, 'scores_per_net', loss_, 'max_score', self.batch_size)
show(np.hstack(vis))
if i % 1 == 0:
vis = self.sess.run(self.resized_canvas)
self.images.append(vis)
except KeyboardInterrupt:
vis = self.sess.run(self.resized_canvas)
show(np.hstack(vis))
def _init_session(self):
self.sess = tf.Session(graph=self.g)
self.sess.run(self.init)
def close_sess(self):
self.sess.close()
def load_painter_checkpoint(self, checkpoint_path='tf_conv_vae', actual_path=None):
sess = self.sess
with self.g.as_default():
if self.painter_type == "VAE":
pth = 'conv_vae'
elif self.painter_type == "GAN":
pth = 'conv_gan'
saver = tf.train.Saver(tf.global_variables(pth))
ckpt = tf.train.get_checkpoint_state(checkpoint_path)
if actual_path is None:
actual_path = ckpt.model_checkpoint_path
print('loading model', actual_path)
tf.logging.info('Loading model %s.', actual_path)
saver.restore(sess, actual_path)
###Output
_____no_output_____
###Markdown
Utility code for searching available ImageNet classes You don't really need this, but I found it helpful to look for classes to optimize. You will notice that the labels for the Slim models are not always the same as the labels for inception_v1 for the same object, e.g. 'lipstick' vs 'lipstick, lip rouge'. In such a case, you can't optimize inception_v1 together with a Slim model for that class (of course, you are free to use inception_v1_slim).
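A quick way to check this before launching an optimization is sketched below; it assumes (as the `search` helper in the next cell does) that each model exposes its label strings through a `labels` list:
```python
import lucid.modelzoo.vision_models as models

# Illustrative check that a class label exists in every model you plan to combine
label = 'lipstick'
for m in [models.InceptionV1(), models.InceptionV1_slim()]:
    print(type(m).__name__, label in m.labels)
# inception_v1 uses 'lipstick', while the Slim model uses 'lipstick, lip rouge',
# so this particular label could not be optimized across both models.
```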
###Code
def search(_search_term):
print('searching matching labels for {}'.format(_search_term))
inception_v1 = models.InceptionV1()
inception_v1_slim = models.InceptionV1_slim()
inception_v2_slim = models.InceptionV2_slim()
mobilenet_v2_14 = models.MobilenetV2_14_slim()
resnet_v1_50 = models.ResnetV1_50_slim()
print('inception_v1 labels: {}'.format([x for x in inception_v1.labels if _search_term in x]))
print('inception_v1_slim labels: {}'.format([x for x in inception_v1_slim.labels if _search_term in x]))
print('inception_v2_slim labels: {}'.format([x for x in inception_v2_slim.labels if _search_term in x]))
print('mobilenet_v2_14 labels: {}'.format([x for x in mobilenet_v2_14.labels if _search_term in x]))
print('resnet_v1_50 labels: {}'.format([x for x in resnet_v1_50.labels if _search_term in x]))
search('l')
###Output
searching matching labels for l
inception_v1 labels: [u'English setter', u'Australian terrier', u'English springer', u'grey whale', u'lesser panda', u'gazelle', u'sea lion', u'malamute', u'Walker hound', u'Welsh springer spaniel', u'killer whale', u'African elephant', u'red wolf', u'Old English sheepdog', u'bloodhound', u'Airedale', u'three-toed sloth', u'sorrel', u'black-footed ferret', u'dalmatian', u'black-and-tan coonhound', u'papillon', u'Staffordshire bullterrier', u'Mexican hairless', u'Bouvier des Flandres', u'weasel', u'miniature poodle', u'malinois', u'fox squirrel', u'colobus', u'impala', u'Newfoundland', u'Norwegian elkhound', u'Rottweiler', u'Saluki', u'West Highland white terrier', u'Sealyham terrier', u'Irish wolfhound', u'wild boar', u'EntleBucher', u'French bulldog', u'leopard', u'Maltese dog', u'Norfolk terrier', u'vizsla', u'squirrel monkey', u'groenendael', u'clumber', u'Japanese spaniel', u'white wolf', u'gorilla', u'toy poodle', u'Kerry blue terrier', u'Boston bull', u'Appenzeller', u'Irish water spaniel', u'Bedlington terrier', u'Arabian camel', u'collie', u'golden retriever', u'Border collie', u'silky terrier', u'beagle', u'dhole', u'bull mastiff', u'curly-coated retriever', u'flat-coated retriever', u'Brittany spaniel', u'standard poodle', u'Lakeland terrier', u'snow leopard', u'water buffalo', u'American black bear', u'howler monkey', u'Shetland sheepdog', u'armadillo', u'bluetick', u'polecat', u'kelpie', u'llama', u'Italian greyhound', u'lion', u'cocker spaniel', u'Indian elephant', u'Sussex spaniel', u'Blenheim spaniel', u'lynx', u'langur', u'timber wolf', u'English foxhound', u'sloth bear', u'koala', u'wallaby', u'platypus', u'revolver', u'umbrella', u'soccer ball', u'chambered nautilus', u'laptop', u'airliner', u'warplane', u'balloon', u'space shuttle', u'gondola', u'lifeboat', u'yawl', u'liner', u'half track', u'missile', u'bobsled', u'dogsled', u'bicycle-built-for-two', u'forklift', u'electric locomotive', u'steam locomotive', u'ambulance', u'convertible', u'limousine', u'Model T', u'golfcart', u'snowplow', u'trailer truck', u'police van', u'recreational vehicle', u'snowmobile', u'mobile home', u'tricycle', u'unicycle', u'cradle', u'table lamp', u'file', u'folding chair', u'toilet seat', u'pool table', u'dining table', u'lemon', u'pineapple', u'custard apple', u'steel drum', u'cello', u'violin', u'electric guitar', u'flute', u"yellow lady's slipper", u'cliff', u'valley', u'alp', u'volcano', u'coral reef', u'lakeside', u'cleaver', u'letter opener', u'plane', u'power drill', u'lawn mower', u'plunger', u'shovel', u'plow', u'brambling', u'goldfinch', u'bulbul', u'water ouzel', u'bald eagle', u'vulture', u'great grey owl', u'black grouse', u'quail', u'sulphur-crested cockatoo', u'lorikeet', u'coucal', u'hornbill', u'black swan', u'black stork', u'spoonbill', u'flamingo', u'little blue heron', u'limpkin', u'European gallinule', u'pelican', u'albatross', u'electric ray', u'goldfish', u'eel', u'lionfish', u'loggerhead', u'leatherback turtle', u'mud turtle', u'box turtle', u'American chameleon', u'whiptail', u'frilled lizard', u'alligator lizard', u'Gila monster', u'green lizard', u'African chameleon', u'African crocodile', u'American alligator', u'European fire salamander', u'spotted salamander', u'axolotl', u'bullfrog', u'tailed frog', u'whistle', u'hand blower', u'snorkel', u'loudspeaker', u'electric fan', u'oil filter', u'guillotine', u'rule', u'scale', u'analog clock', u'digital clock', u'wall clock', u'hourglass', u'sundial', u'digital watch', u'binoculars', u'sunglasses', u'loupe', u'radio 
telescope', u'assault rifle', u'rifle', u'projectile', u'lighter', u'slide rule', u'hand-held computer', u'slot', u'car wheel', u'paddlewheel', u'pinwheel', u"potter's wheel", u'carousel', u'reel', u'sunglass', u'solar dish', u'remote control', u'buckle', u'hair slide', u'combination lock', u'padlock', u'nail', u'muzzle', u'seat belt', u'candle', u"jack-o'-lantern", u'spotlight', u'maypole', u'trilobite', u'black and gold garden spider', u'black widow', u'tarantula', u'wolf spider', u'fiddler crab', u'American lobster', u'spiny lobster', u'tiger beetle', u'ladybug', u'ground beetle', u'long-horned beetle', u'leaf beetle', u'dung beetle', u'rhinoceros beetle', u'weevil', u'fly', u'walking stick', u'leafhopper', u'lacewing', u'dragonfly', u'damselfly', u'admiral', u'ringlet', u'cabbage butterfly', u'sulphur butterfly', u'lycaenid', u'jellyfish', u'brain coral', u'flatworm', u'snail', u'slug', u'sea slug', u'waffle iron', u'caldron', u'spatula', u'altar', u'triumphal arch', u'steel arch bridge', u'palace', u'library', u'planetarium', u'lumbermill', u'coil', u'obelisk', u'totem pole', u'castle', u'cliff dwelling', u'megalith', u'chainlink fence', u'stone wall', u'grille', u'sliding door', u'turnstile', u'plate rack', u'pedestal', u'bell pepper', u'broccoli', u'cauliflower', u'sandal', u'plate', u'necklace', u'croquet ball', u'thimble', u'cocktail shaker', u'manhole cover', u'balance beam', u'bagel', u'spindle', u'beer bottle', u'crash helmet', u'bottlecap', u'tile roof', u'maillot', u'football helmet', u'holster', u'pop bottle', u'crossword puzzle', u'golf ball', u'trifle', u'cloak', u'shield', u'meat loaf', u'baseball', u'beer glass', u'guacamole', u'lampshade', u'wool', u'mailbag', u'soup bowl', u'paddle', u'mixing bowl', u'wine bottle', u'bulletproof vest', u'drilling platform', u'ping-pong ball', u'pencil box', u'pencil sharpener', u'Polaroid camera', u'traffic light', u'quill', u'military uniform', u'lipstick', u'oscilloscope', u'French loaf', u'milk can', u'rugby ball', u'paper towel', u'envelope', u'trolleybus', u'coral fungus', u'bullet train', u'pillow', u'toilet tissue', u'ladle', u'lotion', u'pill bottle', u'chain mail', u'barrel', u'ballpoint', u'basketball', u'bath towel', u'cellular telephone', u'nipple', u'barbell', u'mailbox', u'lab coat', u'pole', u'horizontal bar', u'pickelhaube', u'rain barrel', u'wallet', u'cassette player', u'bell cote', u'volleyball', u'bolo tie', u'sleeping bag', u'television', u'breastplate', u'saltshaker', u'chocolate sauce', u'ballplayer', u'goblet', u'water bottle', u'dial telephone', u'school bus', u'jigsaw puzzle', u'plastic bag', u'reflex camera', u'ice lolly', u'velvet', u'tennis ball', u'pretzel', u'quilt', u'maillot', u'tape player', u'clog', u'bolete', u'CD player', u'lens cap', u'vault', u'bubble', u'parallel bars', u'flagpole', u'stole', u'dumbbell']
inception_v1_slim labels: [u'goldfish, Carassius auratus', u'tiger shark, Galeocerdo cuvieri', u'electric ray, crampfish, numbfish, torpedo', u'ostrich, Struthio camelus', u'brambling, Fringilla montifringilla', u'goldfinch, Carduelis carduelis', u'house finch, linnet, Carpodacus mexicanus', u'bulbul', u'water ouzel, dipper', u'bald eagle, American eagle, Haliaeetus leucocephalus', u'vulture', u'great grey owl, great gray owl, Strix nebulosa', u'European fire salamander, Salamandra salamandra', u'common newt, Triturus vulgaris', u'spotted salamander, Ambystoma maculatum', u'axolotl, mud puppy, Ambystoma mexicanum', u'bullfrog, Rana catesbeiana', u'tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui', u'loggerhead, loggerhead turtle, Caretta caretta', u'leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea', u'mud turtle', u'box turtle, box tortoise', u'American chameleon, anole, Anolis carolinensis', u'whiptail, whiptail lizard', u'frilled lizard, Chlamydosaurus kingi', u'alligator lizard', u'Gila monster, Heloderma suspectum', u'green lizard, Lacerta viridis', u'African chameleon, Chamaeleo chamaeleon', u'Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus komodoensis', u'African crocodile, Nile crocodile, Crocodylus niloticus', u'American alligator, Alligator mississipiensis', u'night snake, Hypsiglena torquata', u'diamondback, diamondback rattlesnake, Crotalus adamanteus', u'sidewinder, horned rattlesnake, Crotalus cerastes', u'trilobite', u'harvestman, daddy longlegs, Phalangium opilio', u'black and gold garden spider, Argiope aurantia', u'black widow, Latrodectus mactans', u'tarantula', u'wolf spider, hunting spider', u'black grouse', u'ruffed grouse, partridge, Bonasa umbellus', u'prairie chicken, prairie grouse, prairie fowl', u'quail', u'sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita', u'lorikeet', u'coucal', u'hornbill', u'black swan, Cygnus atratus', u'platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus', u'wallaby, brush kangaroo', u'koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus', u'jellyfish', u'brain coral', u'flatworm, platyhelminth', u'snail', u'slug', u'sea slug, nudibranch', u'chiton, coat-of-mail shell, sea cradle, polyplacophore', u'chambered nautilus, pearly nautilus, nautilus', u'fiddler crab', u'king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica', u'American lobster, Northern lobster, Maine lobster, Homarus americanus', u'spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish', u'black stork, Ciconia nigra', u'spoonbill', u'flamingo', u'little blue heron, Egretta caerulea', u'American egret, great white heron, Egretta albus', u'limpkin, Aramus pictus', u'European gallinule, Porphyrio porphyrio', u'American coot, marsh hen, mud hen, water hen, Fulica americana', u'red-backed sandpiper, dunlin, Erolia alpina', u'pelican', u'albatross, mollymawk', u'grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus', u'killer whale, killer, orca, grampus, sea wolf, Orcinus orca', u'sea lion', u'Japanese spaniel', u'Maltese dog, Maltese terrier, Maltese', u'Blenheim spaniel', u'papillon', u'beagle', u'bloodhound, sleuthhound', u'bluetick', u'black-and-tan coonhound', u'Walker hound, Walker foxhound', u'English foxhound', u'borzoi, Russian wolfhound', u'Irish wolfhound', u'Italian greyhound', u'Norwegian elkhound, elkhound', u'Saluki, gazelle hound', u'Staffordshire bullterrier, 
Staffordshire bull terrier', u'American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier', u'Bedlington terrier', u'Kerry blue terrier', u'Norfolk terrier', u'Lakeland terrier', u'Sealyham terrier, Sealyham', u'Airedale, Airedale terrier', u'Australian terrier', u'Boston bull, Boston terrier', u'silky terrier, Sydney silky', u'West Highland white terrier', u'flat-coated retriever', u'curly-coated retriever', u'golden retriever', u'vizsla, Hungarian pointer', u'English setter', u'Brittany spaniel', u'clumber, clumber spaniel', u'English springer, English springer spaniel', u'Welsh springer spaniel', u'cocker spaniel, English cocker spaniel, cocker', u'Sussex spaniel', u'Irish water spaniel', u'groenendael', u'malinois', u'kelpie', u'Old English sheepdog, bobtail', u'Shetland sheepdog, Shetland sheep dog, Shetland', u'collie', u'Border collie', u'Bouvier des Flandres, Bouviers des Flandres', u'Rottweiler', u'German shepherd, German shepherd dog, German police dog, alsatian', u'Appenzeller', u'EntleBucher', u'bull mastiff', u'French bulldog', u'malamute, malemute, Alaskan malamute', u'dalmatian, coach dog, carriage dog', u'Newfoundland, Newfoundland dog', u'Pembroke, Pembroke Welsh corgi', u'Cardigan, Cardigan Welsh corgi', u'toy poodle', u'miniature poodle', u'standard poodle', u'Mexican hairless', u'timber wolf, grey wolf, gray wolf, Canis lupus', u'white wolf, Arctic wolf, Canis lupus tundrarum', u'red wolf, maned wolf, Canis rufus, Canis niger', u'coyote, prairie wolf, brush wolf, Canis latrans', u'dingo, warrigal, warragal, Canis dingo', u'dhole, Cuon alpinus', u'red fox, Vulpes vulpes', u'kit fox, Vulpes macrotis', u'Arctic fox, white fox, Alopex lagopus', u'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor', u'lynx, catamount', u'leopard, Panthera pardus', u'snow leopard, ounce, Panthera uncia', u'jaguar, panther, Panthera onca, Felis onca', u'lion, king of beasts, Panthera leo', u'American black bear, black bear, Ursus americanus, Euarctos americanus', u'ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus', u'sloth bear, Melursus ursinus, Ursus ursinus', u'tiger beetle', u'ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle', u'ground beetle, carabid beetle', u'long-horned beetle, longicorn, longicorn beetle', u'leaf beetle, chrysomelid', u'dung beetle', u'rhinoceros beetle', u'weevil', u'fly', u'walking stick, walkingstick, stick insect', u'cicada, cicala', u'leafhopper', u'lacewing, lacewing fly', u"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk", u'damselfly', u'admiral', u'ringlet, ringlet butterfly', u'monarch, monarch butterfly, milkweed butterfly, Danaus plexippus', u'cabbage butterfly', u'sulphur butterfly, sulfur butterfly', u'lycaenid, lycaenid butterfly', u'sea cucumber, holothurian', u'wood rabbit, cottontail, cottontail rabbit', u'fox squirrel, eastern fox squirrel, Sciurus niger', u'sorrel', u'hog, pig, grunter, squealer, Sus scrofa', u'wild boar, boar, Sus scrofa', u'water buffalo, water ox, Asiatic buffalo, Bubalus bubalis', u'impala, Aepyceros melampus', u'gazelle', u'Arabian camel, dromedary, Camelus dromedarius', u'llama', u'weasel', u'polecat, fitch, foulmart, foumart, Mustela putorius', u'black-footed ferret, ferret, Mustela nigripes', u'skunk, polecat, wood pussy', u'armadillo', u'three-toed sloth, ai, Bradypus tridactylus', u'gorilla, Gorilla gorilla', u'chimpanzee, chimp, Pan troglodytes', u'gibbon, Hylobates 
lar', u'siamang, Hylobates syndactylus, Symphalangus syndactylus', u'langur', u'colobus, colobus monkey', u'proboscis monkey, Nasalis larvatus', u'capuchin, ringtail, Cebus capucinus', u'howler monkey, howler', u'spider monkey, Ateles geoffroyi', u'squirrel monkey, Saimiri sciureus', u'Madagascar cat, ring-tailed lemur, Lemur catta', u'Indian elephant, Elephas maximus', u'African elephant, Loxodonta africana', u'lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens', u'giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca', u'eel', u'coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch', u'rock beauty, Holocanthus tricolor', u'gar, garfish, garpike, billfish, Lepisosteus osseus', u'lionfish', u'puffer, pufferfish, blowfish, globefish', u'aircraft carrier, carrier, flattop, attack aircraft carrier', u'airliner', u'airship, dirigible', u'altar', u'ambulance', u'amphibian, amphibious vehicle', u'analog clock', u'ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin', u'assault rifle, assault gun', u'balance beam, beam', u'balloon', u'ballpoint, ballpoint pen, ballpen, Biro', u'bannister, banister, balustrade, balusters, handrail', u'barbell', u'barrel, cask', u'barrow, garden cart, lawn cart, wheelbarrow', u'baseball', u'basketball', u'bath towel', u'beacon, lighthouse, beacon light, pharos', u'beer bottle', u'beer glass', u'bell cote, bell cot', u'bicycle-built-for-two, tandem bicycle, tandem', u'binoculars, field glasses, opera glasses', u'bobsled, bobsleigh, bob', u'bolo tie, bolo, bola tie, bola', u'bookshop, bookstore, bookstall', u'bottlecap', u'brass, memorial tablet, plaque', u'breakwater, groin, groyne, mole, bulwark, seawall, jetty', u'breastplate, aegis, egis', u'bucket, pail', u'buckle', u'bulletproof vest', u'bullet train, bullet', u'caldron, cauldron', u'candle, taper, wax light', u'carousel, carrousel, merry-go-round, roundabout, whirligig', u"carpenter's kit, tool kit", u'car wheel', u'cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM', u'cassette player', u'castle', u'CD player', u'cello, violoncello', u'cellular telephone, cellular phone, cellphone, cell, mobile phone', u'chainlink fence', u'chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour', u'chime, bell, gong', u'china cabinet, china closet', u'church, church building', u'cinema, movie theater, movie theatre, movie house, picture palace', u'cleaver, meat cleaver, chopper', u'cliff dwelling', u'cloak', u'clog, geta, patten, sabot', u'cocktail shaker', u'coil, spiral, volute, whorl, helix', u'combination lock', u'container ship, containership, container vessel', u'convertible', u'corkscrew, bottle screw', u'cowboy hat, ten-gallon hat', u'cradle', u'crash helmet', u'croquet ball', u'dial telephone, dial phone', u'digital clock', u'digital watch', u'dining table, board', u'dishrag, dishcloth', u'dock, dockage, docking facility', u'dogsled, dog sled, dog sleigh', u'doormat, welcome mat', u'drilling platform, offshore rig', u'dumbbell', u'electric fan, blower', u'electric guitar', u'electric locomotive', u'envelope', u'file, file cabinet, filing cabinet', u'flagpole, flagstaff', u'flute, transverse flute', u'folding chair', u'football helmet', u'forklift', u'frying pan, frypan, skillet', u'gasmask, respirator, gas helmet', u'gas pump, gasoline pump, petrol pump, island dispenser', u'goblet', u'golf ball', u'golfcart, golf cart', 
u'gondola', u'greenhouse, nursery, glasshouse', u'grille, radiator grille', u'guillotine', u'hair slide', u'half track', u'hand blower, blow dryer, blow drier, hair dryer, hair drier', u'hand-held computer, hand-held microcomputer', u'holster', u'hook, claw', u'hoopskirt, crinoline', u'horizontal bar, high bar', u'hourglass', u"jack-o'-lantern", u'jean, blue jean, denim', u'jeep, landrover', u'jigsaw puzzle', u'lab coat, laboratory coat', u'ladle', u'lampshade, lamp shade', u'laptop, laptop computer', u'lawn mower, mower', u'lens cap, lens cover', u'letter opener, paper knife, paperknife', u'library', u'lifeboat', u'lighter, light, igniter, ignitor', u'limousine, limo', u'liner, ocean liner', u'lipstick, lip rouge', u'lotion', u'loudspeaker, speaker, speaker unit, loudspeaker system, speaker system', u"loupe, jeweler's loupe", u'lumbermill, sawmill', u'mailbag, postbag', u'mailbox, letter box', u'maillot', u'maillot, tank suit', u'manhole cover', u'marimba, xylophone', u'maypole', u'maze, labyrinth', u'megalith, megalithic structure', u'military uniform', u'milk can', u'missile', u'mixing bowl', u'mobile home, manufactured home', u'Model T', u'mountain bike, all-terrain bike, off-roader', u'muzzle', u'nail', u'necklace', u'nipple', u'obelisk', u'odometer, hodometer, mileometer, milometer', u'oil filter', u'oscilloscope, scope, cathode-ray oscilloscope, CRO', u'paddle, boat paddle', u'paddlewheel, paddle wheel', u'padlock', u'palace', u'paper towel', u'parallel bars, bars', u'pedestal, plinth, footstall', u'pencil box, pencil case', u'pencil sharpener', u'pick, plectrum, plectron', u'pickelhaube', u'picket fence, paling', u'pill bottle', u'pillow', u'ping-pong ball', u'pinwheel', u"plane, carpenter's plane, woodworking plane", u'planetarium', u'plastic bag', u'plate rack', u'plow, plough', u"plunger, plumber's helper", u'Polaroid camera, Polaroid Land camera', u'pole', u'police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria', u'pool table, billiard table, snooker table', u'pop bottle, soda bottle', u'pot, flowerpot', u"potter's wheel", u'power drill', u'projectile, missile', u'punching bag, punch bag, punching ball, punchball', u'quill, quill pen', u'quilt, comforter, comfort, puff', u'radio, wireless', u'radio telescope, radio reflector', u'rain barrel', u'recreational vehicle, RV, R.V.', u'reel', u'reflex camera', u'remote control, remote', u'restaurant, eating house, eating place, eatery', u'revolver, six-gun, six-shooter', u'rifle', u'rubber eraser, rubber, pencil eraser', u'rugby ball', u'rule, ruler', u'saltshaker, salt shaker', u'sandal', u'scale, weighing machine', u'school bus', u'seat belt, seatbelt', u'shield, buckler', u'shovel', u'sleeping bag', u'slide rule, slipstick', u'sliding door', u'slot, one-armed bandit', u'snorkel', u'snowmobile', u'snowplow, snowplough', u'soccer ball', u'solar dish, solar collector, solar furnace', u'soup bowl', u'space shuttle', u'spatula', u'spindle', u'spotlight, spot', u'steam locomotive', u'steel arch bridge', u'steel drum', u'stole', u'stone wall', u'streetcar, tram, tramcar, trolley, trolley car', u'suit, suit of clothes', u'sundial', u'sunglass', u'sunglasses, dark glasses, shades', u'sunscreen, sunblock, sun blocker', u'switch, electric switch, electrical switch', u'table lamp', u'tank, army tank, armored combat vehicle, armoured combat vehicle', u'tape player', u'television, television system', u'tennis ball', u'thimble', u'tile roof', u'toilet seat', u'totem pole', u'trailer truck, tractor trailer, trucking rig, rig, 
articulated lorry, semi', u'tricycle, trike, velocipede', u'triumphal arch', u'trolleybus, trolley coach, trackless trolley', u'turnstile', u'umbrella', u'unicycle, monocycle', u'vacuum, vacuum cleaner', u'vault', u'velvet', u'violin, fiddle', u'volleyball', u'waffle iron', u'wall clock', u'wallet, billfold, notecase, pocketbook', u'wardrobe, closet, press', u'warplane, military plane', u'washbasin, handbasin, washbowl, lavabo, wash-hand basin', u'water bottle', u'whistle', u'wine bottle', u'wool, woolen, woollen', u'worm fence, snake fence, snake-rail fence, Virginia fence', u'yawl', u'crossword puzzle, crossword', u'traffic light, traffic signal, stoplight', u'plate', u'guacamole', u'trifle', u'ice lolly, lolly, lollipop, popsicle', u'French loaf', u'bagel, beigel', u'pretzel', u'broccoli', u'cauliflower', u'artichoke, globe artichoke', u'bell pepper', u'lemon', u'pineapple, ananas', u'custard apple', u'chocolate sauce, chocolate syrup', u'meat loaf, meatloaf', u'alp', u'bubble', u'cliff, drop, drop-off', u'coral reef', u'lakeside, lakeshore', u'promontory, headland, head, foreland', u'valley, vale', u'volcano', u'ballplayer, baseball player', u"yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum", u'coral fungus', u'hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa', u'bolete', u'ear, spike, capitulum', u'toilet tissue, toilet paper, bathroom tissue']
inception_v2_slim labels: identical to the inception_v1_slim label list printed above (same class strings, same order).
mobilenet_v2_14 labels: identical to the inception_v1_slim label list printed above.
resnet_v1_50 labels: identical to the inception_v1_slim label list printed above.
###Markdown
Choose parameters
###Code
#@title After running this cell manually, it will auto-run if you change the selected value. { run: "auto", display-mode: "form" }
CATEGORY_TO_OPTIMIZE = "lemon" #@param {type:"string"}
#@markdown Some of my favorite categories are "bee", "lemon", "strawberry", "Granny Smith", "pelican" (pelican is quite difficult to optimize)
#@markdown ---
NUMBER_STROKES = 2 #@param {type:"slider", min:1, max:30, step:1}
#@markdown Number of strokes per canvas. Make sure to keep this a low number (~3-10) if you are using a lot of canvases.
#@markdown ---
BATCH_SIZE = 3 #@param {type:"slider", min:1, max:5, step:1}
#@markdown ---
PAINTER_MODE = "VAE" #@param ["GAN", "VAE"]
#@markdown VAE mode results in more solid strokes that are easier to optimize for.
#@markdown GAN mode results in strokes that actually look like paintbrush strokes, although they might be harder to optimize for.
#@markdown ---
ADD_NOISE = True #@param {type:"boolean"}
#@markdown Experimental. Adding uncertainty may (or may not) help produce more robust images. Currently only the GAN painter uses this parameter.
#@markdown ---
CANVAS_MULTIPLIER = 4 #@param {type:"slider", min:1, max:8}
#@markdown Number of times the canvas is repeated horizontally and vertically. The amount of computation grows roughly quadratically with this parameter (a multiplier of 4 means 4x4 = 16 canvases).
#@markdown ---
OVERLAP_PX = 48 #@param {type: "slider", min: 0, max: 48}
#@markdown Number of overlapping pixels between canvases (the canvases are size 64x64).
#@markdown ---
CONNECTED_STROKES = False #@param {type:"boolean"}
#@markdown If true, strokes begin at the endpoint of the previous stroke. Otherwise, strokes are independent and can start anywhere.
#@markdown ---
LEARNING_RATE = 0.05 #@param {type: "number"}
#@markdown ---
#@markdown ### Choose which models are optimized *at the same time*:
USE_INCEPTION_V1 = False #@param {type:"boolean"}
USE_INCEPTION_V1_SLIM = True #@param {type:"boolean"}
USE_INCEPTION_V2_SLIM = True #@param {type:"boolean"}
USE_MOBILENET_V2_14 = True #@param {type:"boolean"}
USE_RESNET_V1_50 = True #@param {type:"boolean"}
MODELS_TO_OPTIMIZE = []
if USE_INCEPTION_V1:
MODELS_TO_OPTIMIZE.append('inception_v1')
if USE_INCEPTION_V1_SLIM:
MODELS_TO_OPTIMIZE.append('inception_v1_slim')
if USE_INCEPTION_V2_SLIM:
MODELS_TO_OPTIMIZE.append('inception_v2_slim')
if USE_MOBILENET_V2_14:
MODELS_TO_OPTIMIZE.append('mobilenet_v2_14')
if USE_RESNET_V1_50:
MODELS_TO_OPTIMIZE.append('resnet_v1_50')
print("Category to optimize", CATEGORY_TO_OPTIMIZE)
print("Number of strokes", NUMBER_STROKES)
print("Batch size", BATCH_SIZE)
print("Using {} painter".format(PAINTER_MODE))
print("Adding noise", ADD_NOISE)
print("Canvas multiplier", CANVAS_MULTIPLIER)
print("Pixel overlap", OVERLAP_PX)
print("Using connected strokes", CONNECTED_STROKES)
print("Learning Rate", LEARNING_RATE)
print("Models to optimize", MODELS_TO_OPTIMIZE)
print('--------------------')
search(CATEGORY_TO_OPTIMIZE)
if USE_INCEPTION_V1:
if CATEGORY_TO_OPTIMIZE not in models.InceptionV1().labels:
raise Exception("{} not in inception_v1".format(CATEGORY_TO_OPTIMIZE))
if USE_INCEPTION_V1_SLIM:
if CATEGORY_TO_OPTIMIZE not in models.InceptionV1_slim().labels:
raise Exception("{} not in inception_v1_slim".format(CATEGORY_TO_OPTIMIZE))
if USE_INCEPTION_V2_SLIM:
if CATEGORY_TO_OPTIMIZE not in models.InceptionV2_slim().labels:
raise Exception("{} not in inception_v2_slim".format(CATEGORY_TO_OPTIMIZE))
if USE_MOBILENET_V2_14:
if CATEGORY_TO_OPTIMIZE not in models.MobilenetV2_14_slim().labels:
raise Exception("{} not in mobilenet_v2_14".format(CATEGORY_TO_OPTIMIZE))
if USE_RESNET_V1_50:
if CATEGORY_TO_OPTIMIZE not in models.ResnetV1_50_slim().labels:
raise Exception("{} not in resnet_v1_50".format(CATEGORY_TO_OPTIMIZE))
###Output
('Category to optimize', 'bald eagle, American eagle, Haliaeetus leucocephalus')
('Number of strokes', 2)
('Batch size', 3)
Using VAE painter
('Adding noise', True)
('Canvas multiplier', 4)
('Pixel overlap', 48)
('Using connected strokes', False)
('Learning Rate', 0.05)
('Models to optimize', ['inception_v1_slim', 'inception_v2_slim', 'mobilenet_v2_14', 'resnet_v1_50'])
--------------------
searching matching labels for bald eagle, American eagle, Haliaeetus leucocephalus
inception_v1 labels: []
inception_v1_slim labels: [u'bald eagle, American eagle, Haliaeetus leucocephalus']
inception_v2_slim labels: [u'bald eagle, American eagle, Haliaeetus leucocephalus']
mobilenet_v2_14 labels: [u'bald eagle, American eagle, Haliaeetus leucocephalus']
resnet_v1_50 labels: [u'bald eagle, American eagle, Haliaeetus leucocephalus']
###Markdown
Run!
###Code
lol = LucidGraph(CATEGORY_TO_OPTIMIZE, NUMBER_STROKES, BATCH_SIZE,
painter_type=PAINTER_MODE, connected=CONNECTED_STROKES,
add_noise=ADD_NOISE, lr=LEARNING_RATE,
overlap_px=OVERLAP_PX, repeat=CANVAS_MULTIPLIER, alternate=False,
models_to_optimize=MODELS_TO_OPTIMIZE)
if PAINTER_MODE == "GAN":
if ADD_NOISE:
lol.load_painter_checkpoint('tf_gan4')
else:
lol.load_painter_checkpoint('tf_gan3')
elif PAINTER_MODE == "VAE":
lol.load_painter_checkpoint('tf_vae')
lol.train()
###Output
_____no_output_____
###Markdown
Evaluate results
###Code
def print_results():
def sigmoid(x):
s = 1/(1+np.exp(-x))
return s
acs, dream_paintings = lol.sess.run([lol.actions, lol.resized_canvas])
actual_acs = sigmoid(acs)
for p in dream_paintings:
show(p)
print_results()
def vid(my_frames):
def frame(t):
t = int(t*30)
if t >= len(my_frames):
t = len(my_frames)-1
return (np.hstack(my_frames[t])*255).astype(np.float)
clip = mpy.VideoClip(frame, duration=len(my_frames)/30.)
clip.write_videofile('tmp.mp4', fps=30.0)
display(mpy.ipython_display('tmp.mp4', height=200, max_duration=70.))
# If the video is too long, you can skip some
keep_1_in_n = 1
vid(lol.images[::keep_1_in_n])
###Output
_____no_output_____ |
Lab08/Phase_Picking.ipynb | ###Markdown
ESS 136A Lab 8 Convolutional Neural Network (Part 2) Due Mar 9, 2021, 17:00 > `Convolutional Neural Networks (ConvNets or CNNs) are a category of Neural Networks proven effective in image recognition and classification. ConvNets have been successful in identifying faces, objects and traffic signs, apart from powering vision in robots and self-driving cars.` [More details](https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/) 1. Introduction > In this lab, we will build a Convolutional Neural Network to automatically detect P and S phases in seismic waveforms. This lab is modified from the paper entitled ["Generalized Seismic Phase Detection with Deep Learning" by Zachary E. Ross et al., 2019](https://arxiv.org/abs/1805.01075). > The training dataset is provided in Waveform.npy and Label.npy. The waveforms (X) have three components (N, E, Z) and a window length of 4 seconds. The sampling rate is 100 Hz, so each training seismogram contains 400*3 data points. The labels (Y) distinguish 3 classes (P, S, and noise windows) with 3 numbers (0, 1, 2). In order to perform multi-class classification with a CNN, we need to one-hot encode the labels; a link explaining why one-hot encoding is needed: https://machinelearningmastery.com/why-one-hot-encode-data-in-machine-learning/. Using one-hot encoding, we change the labels 0, 1, and 2 into [1,0,0], [0,1,0], and [0,0,1]. > We then split the training dataset into two parts: one for training and one for testing. We use the testing dataset to select the best model. To measure the performance of the best trained model, we plot the [confusion matrix](https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/:~:text=A%20confusion%20matrix%20is%20a,related%20terminology%20can%20be%20confusing.), [precision-recall curve](https://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html) and [ROC curve](https://towardsdatascience.com/understanding-auc-roc-curve-68b2303cc9c5). > __Note__: If you run into a Keras import error (a version problem), try changing the import source. For example, you can switch `from keras.layers import Conv1D` to `from tensorflow.keras.layers import Conv1D`
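> As a quick aside on the one-hot encoding mentioned above, here is a minimal sketch in plain numpy (the label values are hypothetical; the real mapping in this lab is produced by `LabelEncoder` and `np_utils.to_categorical` later on):
> ```
> import numpy as np
>
> # four hypothetical integer class labels
> y_int = np.array([0, 1, 2, 1])
> y_onehot = np.eye(3)[y_int]   # rows of the 3x3 identity matrix act as one-hot vectors
> print(y_onehot)
> ```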
###Code
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import scipy.stats as stats
from obspy.signal.trigger import trigger_onset
# sklearn packages
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import confusion_matrix, precision_recall_curve, roc_curve
# keras packages
from keras import backend as K
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.models import Sequential, Model
from keras.layers import Input, Conv1D, MaxPooling1D, UpSampling1D,Flatten,Dense,Dropout,BatchNormalization
from keras.utils import np_utils
from keras.optimizers import Adam
###Output
/Users/tianfeng/anaconda3/lib/python3.6/site-packages/obspy/signal/headers.py:93: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
], align=True)
Using TensorFlow backend.
###Markdown
2. Read Data > Load waveform (X) and label (Y) dataset from [Southern California Earthquake Data Center](http://scedc.caltech.edu/research-tools/deeplearning.html). The dataset used in this labe includes 10000 samples (1% of total dataset). The following section plot 3 examples of P/S waves and Noise windows. The window length are all 4 seconds with sampling rate of 100 Hz. The P and S wave arrivals occurs at the center of the windows. > In order to perform multiple classification with CNN, we need to perform one-hot encoding on labels [[link]](https://machinelearningmastery.com/why-one-hot-encode-data-in-machine-learning/). By using one-hot encoding we change the labels 0,1,and 2 into [1,0,0],[0,1,0],and[0,0,1] respectively. We use [1,0,0],[0,1,0],and[0,0,1] to represent P phase, noise, and S pahse respectively.
###Code
X=np.load('Waveform.npy')
Y=np.load('Label.npy')
labels=['P','S','Noise']
# Plot examples of 3 classes
matplotlib.rc('font', **{'size' : 15})
order=[0,2,1]
plt.figure(figsize=(8,8))
for k in range(3):
plt.subplot(3,1,k+1)
for i in range(3):
plt.plot(np.arange(400)*0.01,X[order[k],:,i]+i)
plt.title(labels[k])
plt.yticks([])
if k<2:
plt.xticks([])
plt.show()
# convert integers to dummy variables (one hot encoding)
encoder = LabelEncoder()
encoded_Y = encoder.fit_transform(Y)
en_Y = np_utils.to_categorical(encoded_Y)
# split dataset into training set and validation set
X_train, X_val, y_train, y_val = train_test_split(X, en_Y, test_size=0.33, random_state=42)
###Output
_____no_output_____
###Markdown
3. Build Model > Training a convolutional neural network is similar to training a (fully-connected) neural network. You can find the definitions of the loss function, optimizer, activation functions, epochs and batch size in the neural network lab. > The largest difference between a CNN and a plain NN is that a CNN uses layers called Conv1D or Conv2D. In our lab, waveforms are time series, not 2D images, so we use [Conv1D](https://keras.io/api/layers/convolution_layers/convolution1d/). The first argument of Conv1D is the number of filters, i.e. the dimensionality of the output space (the number of output filters in the convolution); it must be an integer. The second argument is the kernel size, which specifies the length of the 1D convolution window. Another important argument is strides, specifying the stride length of the convolution; it acts as a downsampling rate: if you set strides to 2, the output time series is downsampled by a factor of 2. This has a similar effect to [pooling layers](https://keras.io/api/layers/pooling_layers/max_pooling1d/). The first layer is special: you need to define the input shape (input_shape). In our case the input shape is 400*3: the window length is 4 seconds at a sampling rate of 100 Hz, so we have 400 points per waveform recording, and the 3 is the number of channels (N, E, Z). > We usually use the relu activation in the Conv1D and Dense layers; however, for the last layer we use softmax. The softmax function takes the output vector and scales all values such that they sum up to 1. In this way, we get a vector of probabilities. The first entry of the output corresponds to the probability of the first class, the second entry to the second class, and so on: > $$P = \left[\begin{matrix} p(0) \\ p(1) \\ p(2) \end{matrix} \right] \quad , \quad \sum_{i=0}^{2} P_i = 1$$ > We now have to choose a loss function. For multi-class classification tasks, _categorical cross-entropy_ is usually a good choice. This loss function is defined as follows: > $$\mathcal{L} = - \sum_{c=0}^{N-1} y_c \log \left( p_c \right)$$ > where $y_c$ is the (one-hot) label of class $c$, $p_c$ is the predicted probability for class $c$, and $N$ is the number of classes. Note that $y_c$ is either 0 or 1, and that $0 < p_c < 1$. With our chosen loss function, we are ready for the final assembly of the model. > In addition, we add [Dropout](https://towardsdatascience.com/machine-learning-part-20-dropout-keras-layers-explained-8c9f6dc4c9ab), a technique used to prevent a model from overfitting; you can learn more about it through the link if you are interested. Dropout works by randomly setting the outgoing edges of hidden units (neurons that make up hidden layers) to 0 at each update of the training phase. > We build the model with the following code:
> ```
> model = Sequential()
> model.add(Conv1D(16, 3, activation='relu', strides=2, input_shape=(n_in, 3)))
> model.add(Conv1D(32, 3, strides=2, activation='relu'))
> model.add(Conv1D(64, 3, strides=2, activation='relu'))
> model.add(Conv1D(128, 3, strides=2, activation='relu'))
> model.add(Flatten())
> model.add(Dense(128, activation='relu'))
> model.add(Dropout(0.5))
> model.add(Dense(3, activation='softmax'))
> ```
> The model structure is shown below:
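> As an aside, here is a small numeric illustration of the softmax and categorical cross-entropy formulas above (a sketch in plain numpy with arbitrary values, independent of the Keras model built in the next cell):
> ```
> import numpy as np
>
> logits = np.array([2.0, 0.5, -1.0])           # arbitrary raw outputs of the final Dense layer
> p = np.exp(logits) / np.sum(np.exp(logits))   # softmax: probabilities that sum to 1
> y_true = np.array([1, 0, 0])                  # one-hot label for the first class
> loss = -np.sum(y_true * np.log(p))            # categorical cross-entropy
> print(p, p.sum(), loss)
> ```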
###Code
# 3 classes
n_in=400
model = Sequential()
# add convolutional layers
model.add(Conv1D(16, 3, activation='relu',strides=2,input_shape=(n_in,3)))
model.add(Conv1D(32, 3, strides=2,activation='relu'))
model.add(Conv1D(64, 3, strides=2,activation='relu'))
model.add(Conv1D(128, 3, strides=2,activation='relu'))
# Flatten before fully connected layers
model.add(Flatten())
model.add(Dense(128, activation='relu'))
# Dropout to prevent the model from overfitting. 0.5 means 50% of the neurons are deactivated.
model.add(Dropout(0.5))
# Softmax is suitable for multiple classification problem
model.add(Dense(3, activation='softmax'))
model.summary()
adam=Adam(learning_rate=0.0005, beta_1=0.9, beta_2=0.999, amsgrad=False)
model.compile(loss='categorical_crossentropy', optimizer=adam, metrics=['accuracy'])
# Early stop
es = EarlyStopping(monitor='val_accuracy', mode='max', verbose=1, patience=5)
mc = ModelCheckpoint('CNNclassifier.h5', monitor='val_accuracy', mode='max', verbose=0, save_best_only=True)
history=model.fit(X_train, y_train, epochs=100, batch_size=128, validation_data=(X_val, y_val),
callbacks=[es,mc], verbose=0)
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1d_1 (Conv1D) (None, 199, 16) 160
_________________________________________________________________
conv1d_2 (Conv1D) (None, 99, 32) 1568
_________________________________________________________________
conv1d_3 (Conv1D) (None, 49, 64) 6208
_________________________________________________________________
conv1d_4 (Conv1D) (None, 24, 128) 24704
_________________________________________________________________
flatten_1 (Flatten) (None, 3072) 0
_________________________________________________________________
dense_1 (Dense) (None, 128) 393344
_________________________________________________________________
dropout_1 (Dropout) (None, 128) 0
_________________________________________________________________
dense_2 (Dense) (None, 3) 387
=================================================================
Total params: 426,371
Trainable params: 426,371
Non-trainable params: 0
_________________________________________________________________
Epoch 00019: early stopping
###Markdown
3. Training History > We have recorded the training history in a variable named 'history'. We will now visualize the training/validation loss. In addition to the loss, we can plot how the metrics change with the training epoch. In the following plots, you can see that the training loss becomes smaller than the validation loss after a certain epoch; this indicates that the model starts to overfit, and training should stop around that point.
###Code
# plot metrics
plt.figure(figsize=(7,7))
plt.subplot(211)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.legend(['train_loss','val_loss'])
plt.subplot(212)
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.legend(['train_accuracy','val_accuracy'])
plt.xlabel('epoch')
scores = model.evaluate(X_val, y_val, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
###Output
accuracy: 94.58%
###Markdown
4. Plotting Confusion Matrix > In this section, we plot the confusion matrix. You can learn more about it through the [link](https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/:~:text=A%20confusion%20matrix%20is%20a,related%20terminology%20can%20be%20confusing.).
###Code
y_pred = model.predict(X_val)
y_val_nonhot=np.round(y_val.argmax(axis=1))
y_pred_nonhot=np.round(y_pred.argmax(axis=1))
cm = confusion_matrix(y_val_nonhot, y_pred_nonhot)
print(cm)
plt.figure(figsize=(6,6))
plt.imshow(cm, interpolation='nearest', cmap='jet')
plt.colorbar()
tick_marks = np.arange(3)
plt.xticks(tick_marks, labels, rotation=45)
plt.yticks(tick_marks, labels)
plt.ylim([2.5,-0.5])
plt.xlim([-0.5,2.5])
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
###Output
[[1025 50 49]
[ 24 1034 27]
[ 16 13 1062]]
###Markdown
[5. Plotting Precision-Recall Curve](https://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html)
###Code
# precision recall curve
plt.figure(figsize=(7,7))
precision = dict()
recall = dict()
for i in range(3):
precision[i], recall[i], _ = precision_recall_curve(y_val[:, i],y_pred[:, i])
plt.plot(recall[i], precision[i], lw=2, label='{}'.format(labels[i]))
plt.xlabel("recall")
plt.ylabel("precision")
plt.legend(loc="best")
plt.title("precision vs. recall curve")
plt.show()
###Output
_____no_output_____
###Markdown
[6. Plotting ROC Curve](https://towardsdatascience.com/understanding-auc-roc-curve-68b2303cc9c5)
###Code
# roc curve
plt.figure(figsize=(7,7))
fpr = dict()
tpr = dict()
for i in range(3):
fpr[i], tpr[i], _ = roc_curve(y_val[:, i], y_pred[:, i])
plt.plot(fpr[i], tpr[i], lw=2, label='{}'.format(labels[i]))
plt.xlabel("false positive rate")
plt.ylabel("true positive rate")
plt.legend(loc="best")
plt.title("ROC curve")
plt.show()
###Output
_____no_output_____ |
tds/src/main/webapp/WEB-INF/altContent/startup/jupyter_notebooks/jupyter_viewer.ipynb | ###Markdown
Siphon THREDDS Jupyter Notebook Viewer. Dataset: {{datasetName}} Dependencies: Siphon: `pip install siphon` matplotlib: `pip install matplotlib` or `conda install -c conda-forge matplotlib` ipywidgets: `pip install ipywidgets` or `conda install -c conda-forge ipywidgets` then Using Jupyter Notebook: `jupyter nbextension enable --py widgetsnbextension` Using JupyterLab: Requires nodejs: `conda install nodejs` `jupyter labextension install @jupyter-widgets/jupyterlab-manager` numpy: `pip install numpy` or `conda install numpy`
###Code
from siphon.catalog import TDSCatalog
import matplotlib.pyplot as plt
import numpy as np
import ipywidgets as widgets
catUrl = "{{catUrl}}";
datasetName = "{{datasetName}}";
###Output
_____no_output_____
###Markdown
Access a dataset. With the TDS catalog url, we can use Siphon to get the dataset named `datasetName`.
###Code
catalog = TDSCatalog(catUrl)
ds = catalog.datasets[datasetName]
ds.name
###Output
_____no_output_____
###Markdown
Datasets each have a set of access protocols:
###Code
list(ds.access_urls)
###Output
_____no_output_____
###Markdown
Siphon's `remote-access` returns a `Dataset` object, which opens the remote dataset and provides access to its metadata:
###Code
dataset = ds.remote_access()
list(dataset.ncattrs())
###Output
_____no_output_____
###Markdown
Display a variable: Run the cells below to get an interactive list of variables in this dataset. Select the variable you wish to view. Execute the next cell to display info about the selected variable and plot it. To plot a different variable, select it from the list and rerun the following cell.
###Code
var_name = widgets.RadioButtons(
options=list(dataset.variables),
description='Variable:')
display(var_name)
var = dataset.variables[var_name.value]
# display information about the variable
print(var.name)
print(list(var.dimensions))
print(var.shape)
%matplotlib inline
# attempt to plot the variable
canPlot = var.dtype == np.uint8 or np.can_cast(var.dtype, float, "same_kind") # Only plot numeric types
if (canPlot):
ndims = np.squeeze(var[:]).ndim
# for one-dimensional data, print value
if (ndims == 0):
print(var.name, ": ", var)
# for two-dimensional data, make a line plot
elif (ndims == 1):
plt.plot(np.squeeze(np.array([range(len(np.squeeze(var[:])))])), np.squeeze(var[:]), 'bo', markersize=5)
plt.title(var.name)
plt.show()
# for three-dimensional data, make an image
elif (ndims == 2):
plt.imshow(var[:])
plt.title(var.name)
plt.show()
# for four or more dimensional data, print values
else:
print("Too many dimensions - Cannot display variable: ", var.name)
print(var[:])
else:
print("Not a numeric type - Cannot display variable: ", var.name)
print(var[:])
###Output
_____no_output_____
###Markdown
Note that data are only transferred over the network when the variable is sliced, and only data corresponding to the slice are downloaded. In this case, we are asking for all of the data with `var[:]`. More with Siphon: To see what else you can do, view the Siphon API.
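For example, requesting only a slice transfers only that part of the data. A minimal sketch (the slice bounds are arbitrary and assume the selected variable has at least ten elements along its first dimension):
```
subset = var[:10]                 # only the requested slice is transferred
print(np.asarray(subset).shape)
```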
###Code
### Your code here ###
###Output
_____no_output_____ |
docs/source/examples/MNIST.ipynb | ###Markdown
Autoencoders and multi-stage training for MNIST classification. In [this blog post](https://blog.keras.io/building-autoencoders-in-keras.html), [Francois Chollet](https://twitter.com/fchollet) demonstrates how to build several different variations of image auto-encoders in Keras. We build on the example above using `timeserio`'s `multinetwork`, and demonstrate some key features: we add a digit classifier that uses pre-trained encodings; we encapsulate a neural network with multiple inter-connected parts using `MultiNetworkBase`; we show how to implement multi-stage training with layer freezing (sketched just below); and we show how to add training callbacks and inspect multi-stage training history.
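For context, "layer freezing" in plain Keras just means marking layers as non-trainable before compiling; `timeserio` wraps this pattern through the `trainable_models` argument used later in this notebook. A minimal standalone sketch (plain Keras, not the `multinetwork` API; the layer names and sizes are arbitrary):
```
from keras.layers import Dense, Input
from keras.models import Model

inp = Input(shape=(32,))
pretrained = Dense(16, activation='relu', name='pretrained')  # imagine this layer is already trained
out = Dense(10, activation='softmax')(pretrained(inp))
clf = Model(inp, out)

clf.get_layer('pretrained').trainable = False  # freeze before compiling
clf.compile(optimizer='adam', loss='categorical_crossentropy')
```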
###Code
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load and normalize data
###Code
def to_onehot(y, num_classes=10):
"""Convert numpy array to one-hot."""
onehot = np.zeros((len(y), num_classes))
onehot[np.arange(len(y)), y] = 1
return onehot
from keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
y_train_oh = to_onehot(y_train)
y_test_oh = to_onehot(y_test)
print(x_train.shape, x_test.shape, y_train.shape, y_test.shape, y_train_oh.shape, y_test_oh.shape)
def plot_images(x, y=None):
"""Plot all images in x, with optional labels given by y.
Expect x.shape == (n, h, w), where n = number images, h = image height, w = image width
"""
plt.figure(figsize=(20, 4))
n = x.shape[0]
for i in range(n):
image = x[i]
ax = plt.subplot(2, n, i + 1)
plt.imshow(x[i])
plt.gray()
if y is not None:
label = y[i]
plt.title(label)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
plot_images(x_train[:10], y_train[:10])
###Output
_____no_output_____
###Markdown
Define network architectures. We follow the above blog post closely, but demonstrate some of the convenient features of `timeserio`. In addition to the encoder-decoder, we add a classification model with softmax output that can be used either with image encodings, or combined with the encoder for a full image classification pipeline:
###Code
from timeserio.keras.multinetwork import MultiNetworkBase
from keras.layers import Input, Dense, Flatten, Reshape
from keras.models import Model
from keras.callbacks import EarlyStopping, ReduceLROnPlateau
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
class AutoEncoderNetwork(MultiNetworkBase):
def _model(self, image_side=28, encoding_dim=32, classifier_units=32, num_classes=10):
"""Define model architectures."""
image_shape = (image_side, image_side)
flat_shape = image_shape[0] * image_shape[1]
input_img = Input(shape=image_shape, name="input_image")
encoded = Dense(encoding_dim, activation='tanh')(Flatten()(input_img))
encoder_model = Model(input_img, encoded, name="encoder")
input_encoded = Input(shape=(encoding_dim,), name="input_encoding")
decoded = Reshape(image_shape)(Dense(flat_shape, activation='sigmoid')(input_encoded))
decoder_model = Model(input_encoded, decoded, name="decoder")
autoencoder_model = Model(input_img, decoder_model(encoder_model(input_img)))
autoencoder_model.compile(optimizer='adam', loss='binary_crossentropy')
clf_intermediate = Dense(classifier_units, activation='relu')(input_encoded)
clf = Dense(num_classes, activation='softmax')(clf_intermediate)
# this model classifies encoding vectors
encoding_clf_model = Model(input_encoded, clf, name="encoder_classifier")
# this model classifies images
classifier_model = Model(input_img, encoding_clf_model(encoder_model(input_img)), name="image_classifier")
classifier_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['categorical_accuracy'])
return {
'encoder': encoder_model,
'decoder': decoder_model,
'autoencoder': autoencoder_model,
'encoding_classifier': encoding_clf_model, # we expose this model to allow granular freezing/un-freezing
'classifier': classifier_model,
}
def _callbacks(
self,
*,
es_params={
'patience': 20,
'monitor': 'val_loss'
},
lr_params={
'monitor': 'val_loss',
'patience': 4,
'factor': 0.2
}
):
"""Define optional callbacks for each model."""
early_stopping = EarlyStopping(**es_params)
learning_rate_reduction = ReduceLROnPlateau(**lr_params)
return {
'autoencoder': [early_stopping, learning_rate_reduction],
'classifier': [early_stopping, learning_rate_reduction],
}
multinetwork = AutoEncoderNetwork(encoding_dim=32)
SVG(model_to_dot(multinetwork.model['encoder'], show_shapes=True).create(prog='dot', format='svg'))
SVG(model_to_dot(multinetwork.model['autoencoder'], show_shapes=True).create(prog='dot', format='svg'))
SVG(model_to_dot(multinetwork.model['classifier'], show_shapes=True).create(prog='dot', format='svg'))
###Output
_____no_output_____
###Markdown
Train autoencoder. We see that using the `adam` optimizer gives us a better loss compared to `adadelta`, even for a shallow auto-encoder.
###Code
multinetwork.fit(
x_train, x_train,
model='autoencoder',
reset_weights=True,
epochs=100,
batch_size=2 ** 8,
shuffle=True,
validation_data=(x_test, x_test),
verbose=1,
)
###Output
_____no_output_____
###Markdown
Training history is stored in the `multinetwork.history` list. Every time we call `fit`, a new history record is appended. This allows us to track training history over multiple pre-/post-training runs. History includes information such as learning rate (`lr`) and time duration per epoch.
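For instance, the most recent record can be inspected directly. A sketch using the record keys accessed later in this notebook (the stored `history` entry is assumed to be a dict of per-epoch metrics):
```
last_run = multinetwork.history[-1]
print(last_run["model"], last_run["trainable_models"])
print(last_run["history"].keys())   # per-epoch metrics such as loss, val_loss and lr
```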
###Code
from kerashistoryplot.plot import plot_history
h = multinetwork.history[-1]["history"]
plot_history(h, batches=True, n_cols=3, figsize=(15,5))
###Output
_____no_output_____
###Markdown
Encode and decode some digits. Sweet, eh?
###Code
encoded_imgs = multinetwork.predict(x_test, model='encoder')
decoded_imgs = multinetwork.predict(encoded_imgs, model='decoder')
plot_images(x_test[:10], y_test[:10])
plot_images(decoded_imgs[:10])
###Output
_____no_output_____
###Markdown
Visualize encodings. We use simple PCA to visualize 32-dimensional embeddings in 2D.
###Code
from sklearn.decomposition import PCA
encoded_imgs_2D = PCA(n_components=2).fit_transform(encoded_imgs)
plt.figure(figsize=(10, 10))
for label in range(10):
encodings = encoded_imgs_2D[y_test == label, :]
plt.scatter(encodings[:, 0], encodings[:, 1], alpha=.5, label=label)
plt.legend()
###Output
_____no_output_____
###Markdown
Fit classifier model. Using the pre-trained encoder, we can fit a classification model by training the dense layers of the `encoding_classifier` model only.
###Code
multinetwork.fit(
x_train, y_train_oh,
model='classifier', # this is the compiled model we use to perform gradient descent
trainable_models=['encoding_classifier'], # only the layers in this model will be un-frozen
epochs=100,
batch_size=2 ** 8,
shuffle=True,
validation_data=(x_test, y_test_oh),
verbose=1,
)
###Output
_____no_output_____
###Markdown
Training history. Note that `multinetwork.history` now contains two records: one for the autoencoder pre-training, and one for post-training the dense layers. By freezing the encoder, we also speed up classifier post-training significantly.
###Code
pre_training = multinetwork.history[0]
print(f"Training model: {pre_training['model']}, trainable: {pre_training['trainable_models']}")
plot_history(pre_training["history"], batches=True, n_cols=3, figsize=(15,5))
post_training = multinetwork.history[1]
print(f"Training model: {post_training['model']}, trainable: {post_training['trainable_models']}")
plot_history(post_training["history"], batches=False, n_cols=2, figsize=(15,5))
###Output
_____no_output_____
###Markdown
Final classifier score. Our classifier performance is not ground-breaking, but our example shows a simple way to implement multi-stage training using a `multinetwork`.
###Code
loss, acc = multinetwork.evaluate(x_test, y_test_oh, model='classifier')
print(f"Loss: {loss:.3f}, accuracy: {acc:.3f}")
###Output
Loss: 0.113, accuracy: 0.967
###Markdown
Some examples. We plot original images from the test set with their true labels on top, and decoded images with classifier labels on the bottom.
###Code
y_test_pred_oh = multinetwork.predict(x_test, model='classifier')
y_test_pred = np.argmax(y_test_pred_oh, axis=1)
n = 20
idx = np.random.choice(len(x_test), size=n, replace=False)
print("True labels: ")
plot_images(x_test[idx], y_test[idx])
print("Predicted labels: ")
plot_images(decoded_imgs[idx], y_test_pred[idx])
###Output
True labels:
|
clustering/dbscan.ipynb | ###Markdown
Import libraries: * `fiona` is used to import/export geodata * `shapely` allows working with geometry objects * `matplotlib` is used for visualization * `sklearn` contains the clustering algorithms * `numpy` allows handling data efficiently as vectors/matrices
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import fiona
from shapely.geometry.geo import shape
from sklearn.cluster import DBSCAN
from sklearn.neighbors import KDTree
import numpy as np
from ipywidgets import interactive, interact
import ipywidgets as widgets
###Output
_____no_output_____
###Markdown
Import data
###Code
data = []
with fiona.open('buildings.gpkg') as src:
for f in src:
pt = shape(f['geometry'])
data.append((pt.x, pt.y))
X = np.array(data)
xlim = (min(X[:, 0]), max(X[:, 0]))
ylim = (min(X[:, 1]), max(X[:, 1]))
print(X)
###Output
[[620833.85998787 174007.15094989]
[620868.99624114 174004.57972814]
[621488.04172939 173610.55634672]
...
[619923.282964 174145.152089 ]
[619915.40984854 174142.34965183]
[620032.66250959 174144.92612771]]
###Markdown
Show data
###Code
fig = plt.figure()
plt.scatter(X[:, 0], X[:, 1], s=1)
plt.xlim(xlim)
plt.ylim(ylim)
###Output
_____no_output_____
###Markdown
Finding optimal values for eps and min_samples. A common heuristic is to plot the sorted distances to the k-th nearest neighbor (done below) and pick eps near the "elbow" of that curve, where k relates to the chosen min_samples.
###Code
def plot_nb_dists(nearest_neighbor, metric='euclidean'):
""" Plots distance sorted by `neared_neighbor`th
Args:
X (list of lists): list with data tuples
nearest_neighbor (int): nr of nearest neighbor to plot
metric (string): name of scipy metric function to use
"""
tree = KDTree(X, leaf_size=2)
if not isinstance(nearest_neighbor, list):
nearest_neighbor = [nearest_neighbor]
max_nn = max(nearest_neighbor)
dist, _ = tree.query(X, k=max_nn + 1)
plt.figure()
for nnb in nearest_neighbor:
col = dist[:, nnb]
col.sort()
plt.plot(col, label="{}th nearest neighbor".format(nnb))
#plt.ylim(0, min(250, max(dist[:, max_nn])))
plt.ylabel("Distance to k nearest neighbor")
plt.xlabel("Points sorted according to distance of k nearest neighbor")
plt.grid()
plt.legend()
plt.show()
interact(plot_nb_dists,
nearest_neighbor=widgets.IntSlider(min=1, max=100, step=1, value=1, continuous_update=False));
###Output
_____no_output_____
###Markdown
DBSCAN Clustering
###Code
def plot_dbscan(eps, min_samples, metric='euclidean'):
db = DBSCAN(eps=eps,
min_samples=min_samples,
metric=metric).fit(X)
core_samples_mask = np.zeros_like(db.labels_, dtype=bool)
core_samples_mask[db.core_sample_indices_] = True
labels = db.labels_
# Number of clusters in labels, ignoring noise if present.
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
n_noise_ = list(labels).count(-1)
#print('Estimated number of clusters: %d' % n_clusters_)
#print('Estimated number of noise points: %d' % n_noise_)
# #############################################################################
# Plot result
# Black removed and is used for noise instead.
unique_labels = set(labels)
colors = [plt.cm.Spectral(each)
for each in np.linspace(0, 1, len(unique_labels))]
plt.figure()
for k, col in zip(unique_labels, colors):
if k == -1:
# Black used for noise.
col = [0, 0, 0, 1]
class_member_mask = (labels == k)
xy = X[class_member_mask & core_samples_mask]
plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),
markeredgecolor=tuple(col), markersize=2)
xy = X[class_member_mask & ~core_samples_mask]
plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),
markeredgecolor=tuple(col), markersize=2)
plt.title('Estimated number of clusters: %d' % n_clusters_)
plt.xlim(xlim)
plt.ylim(ylim)
plt.show()
interact(plot_dbscan,
eps=widgets.IntSlider(min=1, max=300, step=1, value=50, continuous_update=False),
min_samples=widgets.IntSlider(min=0, max=50, step=1, value=10, continuous_update=False));
###Output
_____no_output_____ |
Module3/Mastering_Python_Data_Analysis_Code/Chapter 2/B03551_02_code.ipynb | ###Markdown
Relationships
###Code
hubble_data = pd.read_csv('data/hubble.csv', skiprows=2, names=['id', 'r', 'v'])
hubble_data.head()
hubble_data.plot(kind='scatter', x='r',y='v', s=50)
plt.locator_params(nbins=5);
from scipy.stats import linregress
rv = hubble_data[['r', 'v']].values  # .as_matrix() was removed from pandas; .values returns the same NumPy array
a, b, r, p, stderr = linregress(rv)
print(a, b, r, p, stderr)
hubble_data.plot(kind='scatter', x='r', y='v', s=50)
rdata = hubble_data['r']
rmin, rmax = min(rdata), max(rdata)
rvalues = np.linspace(rmin, rmax, 200)
yvalues = a * rvalues + b
plt.plot(rvalues, yvalues, color='IndianRed', lw=2)
plt.locator_params(nbins=5);
###Output
_____no_output_____ |
sklearn&machine-learning/03_classification.ipynb | ###Markdown
**Chapter 3 – Classification** _This notebook contains all the sample code and solutions to the exercises in chapter 3._ Setup: First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
###Code
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "classification"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
###Output
_____no_output_____
###Markdown
MNIST **Warning**: `fetch_mldata()` is deprecated since Scikit-Learn 0.20. You should use `fetch_openml()` instead. However, it returns the unsorted MNIST dataset, whereas `fetch_mldata()` returned the dataset sorted by target (the training set and the test set were sorted separately). In general, this is fine, but if you want to get the exact same results as before, you need to sort the dataset using the following function:
###Code
def sort_by_target(mnist):
reorder_train = np.array(sorted([(target, i) for i, target in enumerate(mnist.target[:60000])]))[:, 1]
reorder_test = np.array(sorted([(target, i) for i, target in enumerate(mnist.target[60000:])]))[:, 1]
mnist.data[:60000] = mnist.data[reorder_train]
mnist.target[:60000] = mnist.target[reorder_train]
mnist.data[60000:] = mnist.data[reorder_test + 60000]
mnist.target[60000:] = mnist.target[reorder_test + 60000]
try:
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1, cache=True)
mnist.target = mnist.target.astype(np.int8) # fetch_openml() returns targets as strings
sort_by_target(mnist) # fetch_openml() returns an unsorted dataset
except ImportError:
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original')
mnist["data"], mnist["target"]
mnist.data.shape
X, y = mnist["data"], mnist["target"]
X.shape
y.shape
28*28
some_digit = X[36000]
some_digit_image = some_digit.reshape(28, 28)
plt.imshow(some_digit_image, cmap = mpl.cm.binary,
interpolation="nearest")
plt.axis("off")
save_fig("some_digit_plot")
plt.show()
def plot_digit(data):
image = data.reshape(28, 28)
plt.imshow(image, cmap = mpl.cm.binary,
interpolation="nearest")
plt.axis("off")
# EXTRA
def plot_digits(instances, images_per_row=10, **options):
size = 28
images_per_row = min(len(instances), images_per_row)
images = [instance.reshape(size,size) for instance in instances]
n_rows = (len(instances) - 1) // images_per_row + 1
row_images = []
n_empty = n_rows * images_per_row - len(instances)
images.append(np.zeros((size, size * n_empty)))
for row in range(n_rows):
rimages = images[row * images_per_row : (row + 1) * images_per_row]
row_images.append(np.concatenate(rimages, axis=1))
image = np.concatenate(row_images, axis=0)
plt.imshow(image, cmap = mpl.cm.binary, **options)
plt.axis("off")
plt.figure(figsize=(9,9))
example_images = np.r_[X[:12000:600], X[13000:30600:600], X[30600:60000:590]]
plot_digits(example_images, images_per_row=10)
save_fig("more_digits_plot")
plt.show()
y[36000]
X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]
import numpy as np
shuffle_index = np.random.permutation(60000)
X_train, y_train = X_train[shuffle_index], y_train[shuffle_index]
###Output
_____no_output_____
###Markdown
Binary classifier
###Code
y_train_5 = (y_train == 5)
y_test_5 = (y_test == 5)
###Output
_____no_output_____
###Markdown
**Note**: a few hyperparameters will have a different default value in future versions of Scikit-Learn, so a warning is issued if you do not set them explicitly. This is why we set `max_iter=5` and `tol=-np.infty`, to get the same results as in the book, while avoiding the warnings.
###Code
from sklearn.linear_model import SGDClassifier
sgd_clf = SGDClassifier(max_iter=5, tol=-np.infty, random_state=42)
sgd_clf.fit(X_train, y_train_5)
sgd_clf.predict([some_digit])
from sklearn.model_selection import cross_val_score
cross_val_score(sgd_clf, X_train, y_train_5, cv=3, scoring="accuracy")
from sklearn.model_selection import StratifiedKFold
from sklearn.base import clone
skfolds = StratifiedKFold(n_splits=3, random_state=42)
for train_index, test_index in skfolds.split(X_train, y_train_5):
clone_clf = clone(sgd_clf)
X_train_folds = X_train[train_index]
y_train_folds = (y_train_5[train_index])
X_test_fold = X_train[test_index]
y_test_fold = (y_train_5[test_index])
clone_clf.fit(X_train_folds, y_train_folds)
y_pred = clone_clf.predict(X_test_fold)
n_correct = sum(y_pred == y_test_fold)
print(n_correct / len(y_pred))
from sklearn.base import BaseEstimator
class Never5Classifier(BaseEstimator):
def fit(self, X, y=None):
pass
def predict(self, X):
return np.zeros((len(X), 1), dtype=bool)
never_5_clf = Never5Classifier()
cross_val_score(never_5_clf, X_train, y_train_5, cv=3, scoring="accuracy")
from sklearn.model_selection import cross_val_predict
y_train_pred = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3)
from sklearn.metrics import confusion_matrix
confusion_matrix(y_train_5, y_train_pred)
y_train_perfect_predictions = y_train_5
confusion_matrix(y_train_5, y_train_perfect_predictions)
from sklearn.metrics import precision_score, recall_score
precision_score(y_train_5, y_train_pred)
4344 / (4344 + 1307)
recall_score(y_train_5, y_train_pred)
4344 / (4344 + 1077)
from sklearn.metrics import f1_score
f1_score(y_train_5, y_train_pred)
4344 / (4344 + (1077 + 1307)/2)
y_scores = sgd_clf.decision_function([some_digit])
y_scores
threshold = 0
y_some_digit_pred = (y_scores > threshold)
y_some_digit_pred
threshold = 200000
y_some_digit_pred = (y_scores > threshold)
y_some_digit_pred
y_scores = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3,
method="decision_function")
###Output
_____no_output_____
###Markdown
Note: there was an [issue](https://github.com/scikit-learn/scikit-learn/issues/9589) in Scikit-Learn 0.19.0 (fixed in 0.19.1) where the result of `cross_val_predict()` was incorrect in the binary classification case when using `method="decision_function"`, as in the code above. The resulting array had an extra first dimension full of 0s. Just in case you are using 0.19.0, we need to add this small hack to work around this issue:
###Code
y_scores.shape
# hack to work around issue #9589 in Scikit-Learn 0.19.0
if y_scores.ndim == 2:
y_scores = y_scores[:, 1]
from sklearn.metrics import precision_recall_curve
precisions, recalls, thresholds = precision_recall_curve(y_train_5, y_scores)
def plot_precision_recall_vs_threshold(precisions, recalls, thresholds):
plt.plot(thresholds, precisions[:-1], "b--", label="Precision", linewidth=2)
plt.plot(thresholds, recalls[:-1], "g-", label="Recall", linewidth=2)
plt.xlabel("Threshold", fontsize=16)
plt.legend(loc="upper left", fontsize=16)
plt.ylim([0, 1])
plt.figure(figsize=(8, 4))
plot_precision_recall_vs_threshold(precisions, recalls, thresholds)
plt.xlim([-700000, 700000])
save_fig("precision_recall_vs_threshold_plot")
plt.show()
(y_train_pred == (y_scores > 0)).all()
y_train_pred_90 = (y_scores > 70000)
precision_score(y_train_5, y_train_pred_90)
recall_score(y_train_5, y_train_pred_90)
def plot_precision_vs_recall(precisions, recalls):
plt.plot(recalls, precisions, "b-", linewidth=2)
plt.xlabel("Recall", fontsize=16)
plt.ylabel("Precision", fontsize=16)
plt.axis([0, 1, 0, 1])
plt.figure(figsize=(8, 6))
plot_precision_vs_recall(precisions, recalls)
save_fig("precision_vs_recall_plot")
plt.show()
###Output
Saving figure precision_vs_recall_plot
###Markdown
ROC curves
###Code
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_train_5, y_scores)
def plot_roc_curve(fpr, tpr, label=None):
plt.plot(fpr, tpr, linewidth=2, label=label)
plt.plot([0, 1], [0, 1], 'k--')
plt.axis([0, 1, 0, 1])
plt.xlabel('False Positive Rate', fontsize=16)
plt.ylabel('True Positive Rate', fontsize=16)
plt.figure(figsize=(8, 6))
plot_roc_curve(fpr, tpr)
save_fig("roc_curve_plot")
plt.show()
from sklearn.metrics import roc_auc_score
roc_auc_score(y_train_5, y_scores)
###Output
_____no_output_____
###Markdown
**Note**: we set `n_estimators=10` to avoid a warning about the fact that its default value will be set to 100 in Scikit-Learn 0.22.
###Code
from sklearn.ensemble import RandomForestClassifier
forest_clf = RandomForestClassifier(n_estimators=10, random_state=42)
y_probas_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3,
method="predict_proba")
y_scores_forest = y_probas_forest[:, 1] # score = proba of positive class
fpr_forest, tpr_forest, thresholds_forest = roc_curve(y_train_5,y_scores_forest)
plt.figure(figsize=(8, 6))
plt.plot(fpr, tpr, "b:", linewidth=2, label="SGD")
plot_roc_curve(fpr_forest, tpr_forest, "Random Forest")
plt.legend(loc="lower right", fontsize=16)
save_fig("roc_curve_comparison_plot")
plt.show()
roc_auc_score(y_train_5, y_scores_forest)
y_train_pred_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3)
precision_score(y_train_5, y_train_pred_forest)
recall_score(y_train_5, y_train_pred_forest)
###Output
_____no_output_____
###Markdown
Multiclass classification
###Code
sgd_clf.fit(X_train, y_train)
sgd_clf.predict([some_digit])
some_digit_scores = sgd_clf.decision_function([some_digit])
some_digit_scores
np.argmax(some_digit_scores)
sgd_clf.classes_
sgd_clf.classes_[5]
from sklearn.multiclass import OneVsOneClassifier
ovo_clf = OneVsOneClassifier(SGDClassifier(max_iter=5, tol=-np.infty, random_state=42))
ovo_clf.fit(X_train, y_train)
ovo_clf.predict([some_digit])
len(ovo_clf.estimators_)
forest_clf.fit(X_train, y_train)
forest_clf.predict([some_digit])
forest_clf.predict_proba([some_digit])
cross_val_score(sgd_clf, X_train, y_train, cv=3, scoring="accuracy")
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train.astype(np.float64))
cross_val_score(sgd_clf, X_train_scaled, y_train, cv=3, scoring="accuracy")
y_train_pred = cross_val_predict(sgd_clf, X_train_scaled, y_train, cv=3)
conf_mx = confusion_matrix(y_train, y_train_pred)
conf_mx
def plot_confusion_matrix(matrix):
"""If you prefer color and a colorbar"""
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111)
cax = ax.matshow(matrix)
fig.colorbar(cax)
plt.matshow(conf_mx, cmap=plt.cm.gray)
save_fig("confusion_matrix_plot", tight_layout=False)
plt.show()
row_sums = conf_mx.sum(axis=1, keepdims=True)
norm_conf_mx = conf_mx / row_sums
np.fill_diagonal(norm_conf_mx, 0)
plt.matshow(norm_conf_mx, cmap=plt.cm.gray)
save_fig("confusion_matrix_errors_plot", tight_layout=False)
plt.show()
cl_a, cl_b = 3, 5
X_aa = X_train[(y_train == cl_a) & (y_train_pred == cl_a)]
X_ab = X_train[(y_train == cl_a) & (y_train_pred == cl_b)]
X_ba = X_train[(y_train == cl_b) & (y_train_pred == cl_a)]
X_bb = X_train[(y_train == cl_b) & (y_train_pred == cl_b)]
plt.figure(figsize=(8,8))
plt.subplot(221); plot_digits(X_aa[:25], images_per_row=5)
plt.subplot(222); plot_digits(X_ab[:25], images_per_row=5)
plt.subplot(223); plot_digits(X_ba[:25], images_per_row=5)
plt.subplot(224); plot_digits(X_bb[:25], images_per_row=5)
save_fig("error_analysis_digits_plot")
plt.show()
###Output
Saving figure error_analysis_digits_plot
###Markdown
Multilabel classification
###Code
from sklearn.neighbors import KNeighborsClassifier
y_train_large = (y_train >= 7)
y_train_odd = (y_train % 2 == 1)
y_multilabel = np.c_[y_train_large, y_train_odd]
knn_clf = KNeighborsClassifier()
knn_clf.fit(X_train, y_multilabel)
knn_clf.predict([some_digit])
###Output
_____no_output_____
###Markdown
**Warning**: the following cell may take a very long time (possibly hours depending on your hardware).
###Code
y_train_knn_pred = cross_val_predict(knn_clf, X_train, y_multilabel, cv=3, n_jobs=-1)
f1_score(y_multilabel, y_train_knn_pred, average="macro")
###Output
_____no_output_____
###Markdown
Multioutput classification
###Code
noise = np.random.randint(0, 100, (len(X_train), 784))
X_train_mod = X_train + noise
noise = np.random.randint(0, 100, (len(X_test), 784))
X_test_mod = X_test + noise
y_train_mod = X_train
y_test_mod = X_test
some_index = 5500
plt.subplot(121); plot_digit(X_test_mod[some_index])
plt.subplot(122); plot_digit(y_test_mod[some_index])
save_fig("noisy_digit_example_plot")
plt.show()
knn_clf.fit(X_train_mod, y_train_mod)
clean_digit = knn_clf.predict([X_test_mod[some_index]])
plot_digit(clean_digit)
save_fig("cleaned_digit_example_plot")
###Output
Saving figure cleaned_digit_example_plot
###Markdown
Extra material: Dummy (i.e. random) classifier
###Code
from sklearn.dummy import DummyClassifier
dmy_clf = DummyClassifier()
y_probas_dmy = cross_val_predict(dmy_clf, X_train, y_train_5, cv=3, method="predict_proba")
y_scores_dmy = y_probas_dmy[:, 1]
fprr, tprr, thresholdsr = roc_curve(y_train_5, y_scores_dmy)
plot_roc_curve(fprr, tprr)
###Output
_____no_output_____
###Markdown
KNN classifier
###Code
from sklearn.neighbors import KNeighborsClassifier
knn_clf = KNeighborsClassifier(n_jobs=-1, weights='distance', n_neighbors=4)
knn_clf.fit(X_train, y_train)
y_knn_pred = knn_clf.predict(X_test)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_knn_pred)
from scipy.ndimage.interpolation import shift
def shift_digit(digit_array, dx, dy, new=0):
return shift(digit_array.reshape(28, 28), [dy, dx], cval=new).reshape(784)
plot_digit(shift_digit(some_digit, 5, 1, new=100))
X_train_expanded = [X_train]
y_train_expanded = [y_train]
for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
shifted_images = np.apply_along_axis(shift_digit, axis=1, arr=X_train, dx=dx, dy=dy)
X_train_expanded.append(shifted_images)
y_train_expanded.append(y_train)
X_train_expanded = np.concatenate(X_train_expanded)
y_train_expanded = np.concatenate(y_train_expanded)
X_train_expanded.shape, y_train_expanded.shape
knn_clf.fit(X_train_expanded, y_train_expanded)
y_knn_expanded_pred = knn_clf.predict(X_test)
accuracy_score(y_test, y_knn_expanded_pred)
ambiguous_digit = X_test[2589]
knn_clf.predict_proba([ambiguous_digit])
plot_digit(ambiguous_digit)
###Output
_____no_output_____
###Markdown
Exercise solutions 1. An MNIST Classifier With Over 97% Accuracy **Warning**: the next cell may take hours to run, depending on your hardware.
###Code
from sklearn.model_selection import GridSearchCV
param_grid = [{'weights': ["uniform", "distance"], 'n_neighbors': [3, 4, 5]}]
knn_clf = KNeighborsClassifier()
grid_search = GridSearchCV(knn_clf, param_grid, cv=5, verbose=3, n_jobs=-1)
grid_search.fit(X_train, y_train)
grid_search.best_params_
grid_search.best_score_
from sklearn.metrics import accuracy_score
y_pred = grid_search.predict(X_test)
accuracy_score(y_test, y_pred)
###Output
_____no_output_____
###Markdown
2. Data Augmentation
###Code
from scipy.ndimage.interpolation import shift
def shift_image(image, dx, dy):
image = image.reshape((28, 28))
shifted_image = shift(image, [dy, dx], cval=0, mode="constant")
return shifted_image.reshape([-1])
image = X_train[1000]
shifted_image_down = shift_image(image, 0, 5)
shifted_image_left = shift_image(image, -5, 0)
plt.figure(figsize=(12,3))
plt.subplot(131)
plt.title("Original", fontsize=14)
plt.imshow(image.reshape(28, 28), interpolation="nearest", cmap="Greys")
plt.subplot(132)
plt.title("Shifted down", fontsize=14)
plt.imshow(shifted_image_down.reshape(28, 28), interpolation="nearest", cmap="Greys")
plt.subplot(133)
plt.title("Shifted left", fontsize=14)
plt.imshow(shifted_image_left.reshape(28, 28), interpolation="nearest", cmap="Greys")
plt.show()
X_train_augmented = [image for image in X_train]
y_train_augmented = [label for label in y_train]
for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
for image, label in zip(X_train, y_train):
X_train_augmented.append(shift_image(image, dx, dy))
y_train_augmented.append(label)
X_train_augmented = np.array(X_train_augmented)
y_train_augmented = np.array(y_train_augmented)
shuffle_idx = np.random.permutation(len(X_train_augmented))
X_train_augmented = X_train_augmented[shuffle_idx]
y_train_augmented = y_train_augmented[shuffle_idx]
knn_clf = KNeighborsClassifier(**grid_search.best_params_)
knn_clf.fit(X_train_augmented, y_train_augmented)
y_pred = knn_clf.predict(X_test)
accuracy_score(y_test, y_pred)
###Output
_____no_output_____
###Markdown
By simply augmenting the data, we got a 0.5% accuracy boost. :) 3. Tackle the Titanic dataset The goal is to predict whether or not a passenger survived based on attributes such as their age, sex, passenger class, where they embarked and so on. First, login to [Kaggle](https://www.kaggle.com/) and go to the [Titanic challenge](https://www.kaggle.com/c/titanic) to download `train.csv` and `test.csv`. Save them to the `datasets/titanic` directory. Next, let's load the data:
###Code
import os
TITANIC_PATH = os.path.join("datasets", "titanic")
import pandas as pd
def load_titanic_data(filename, titanic_path=TITANIC_PATH):
csv_path = os.path.join(titanic_path, filename)
return pd.read_csv(csv_path)
train_data = load_titanic_data("train.csv")
test_data = load_titanic_data("test.csv")
###Output
_____no_output_____
###Markdown
The data is already split into a training set and a test set. However, the test data does *not* contain the labels: your goal is to train the best model you can using the training data, then make your predictions on the test data and upload them to Kaggle to see your final score. Let's take a peek at the top few rows of the training set:
###Code
train_data.head()
###Output
_____no_output_____
###Markdown
The attributes have the following meaning:* **Survived**: that's the target, 0 means the passenger did not survive, while 1 means he/she survived.* **Pclass**: passenger class.* **Name**, **Sex**, **Age**: self-explanatory* **SibSp**: how many siblings & spouses of the passenger were aboard the Titanic.* **Parch**: how many children & parents of the passenger were aboard the Titanic.* **Ticket**: ticket id* **Fare**: price paid (in pounds)* **Cabin**: passenger's cabin number* **Embarked**: where the passenger embarked the Titanic Let's get more info to see how much data is missing:
###Code
train_data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
PassengerId 891 non-null int64
Survived 891 non-null int64
Pclass 891 non-null int64
Name 891 non-null object
Sex 891 non-null object
Age 714 non-null float64
SibSp 891 non-null int64
Parch 891 non-null int64
Ticket 891 non-null object
Fare 891 non-null float64
Cabin 204 non-null object
Embarked 889 non-null object
dtypes: float64(2), int64(5), object(5)
memory usage: 83.6+ KB
###Markdown
Okay, the **Age**, **Cabin** and **Embarked** attributes are sometimes null (less than 891 non-null), especially the **Cabin** (77% are null). We will ignore the **Cabin** for now and focus on the rest. The **Age** attribute has about 19% null values, so we will need to decide what to do with them. Replacing null values with the median age seems reasonable. The **Name** and **Ticket** attributes may have some value, but they will be a bit tricky to convert into useful numbers that a model can consume. So for now, we will ignore them. Let's take a look at the numerical attributes:
###Code
train_data.describe()
###Output
_____no_output_____
###Markdown
* Yikes, only 38% **Survived**. :( That's close enough to 40%, so accuracy will be a reasonable metric to evaluate our model.* The mean **Fare** was £32.20, which does not seem so expensive (but it was probably a lot of money back then).* The mean **Age** was less than 30 years old. Let's check that the target is indeed 0 or 1:
###Code
train_data["Survived"].value_counts()
###Output
_____no_output_____
###Markdown
Now let's take a quick look at all the categorical attributes:
###Code
train_data["Pclass"].value_counts()
train_data["Sex"].value_counts()
train_data["Embarked"].value_counts()
###Output
_____no_output_____
###Markdown
The Embarked attribute tells us where the passenger embarked: C=Cherbourg, Q=Queenstown, S=Southampton. Now let's build our preprocessing pipelines. We will reuse the `DataframeSelector` we built in the previous chapter to select specific attributes from the `DataFrame`:
###Code
from sklearn.base import BaseEstimator, TransformerMixin
# A class to select numerical or categorical columns
# since Scikit-Learn doesn't handle DataFrames yet
class DataFrameSelector(BaseEstimator, TransformerMixin):
def __init__(self, attribute_names):
self.attribute_names = attribute_names
def fit(self, X, y=None):
return self
def transform(self, X):
return X[self.attribute_names]
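# Quick usage check (my addition, not in the original notebook): the selector simply returns
# the requested columns as a DataFrame
DataFrameSelector(["Age", "Fare"]).fit_transform(train_data).head()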
###Output
_____no_output_____
###Markdown
Let's build the pipeline for the numerical attributes:**Warning**: Since Scikit-Learn 0.20, the `sklearn.preprocessing.Imputer` class was replaced by the `sklearn.impute.SimpleImputer` class.
###Code
from sklearn.pipeline import Pipeline
try:
from sklearn.impute import SimpleImputer # Scikit-Learn 0.20+
except ImportError:
from sklearn.preprocessing import Imputer as SimpleImputer
num_pipeline = Pipeline([
("select_numeric", DataFrameSelector(["Age", "SibSp", "Parch", "Fare"])),
("imputer", SimpleImputer(strategy="median")),
])
num_pipeline.fit_transform(train_data)
###Output
_____no_output_____
###Markdown
We will also need an imputer for the string categorical columns (the regular `SimpleImputer` does not work on those):
###Code
# Inspired from stackoverflow.com/questions/25239958
class MostFrequentImputer(BaseEstimator, TransformerMixin):
def fit(self, X, y=None):
self.most_frequent_ = pd.Series([X[c].value_counts().index[0] for c in X],
index=X.columns)
return self
def transform(self, X, y=None):
return X.fillna(self.most_frequent_)
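# Quick usage check (my addition): missing "Embarked" values get filled with the most frequent port
MostFrequentImputer().fit_transform(train_data[["Embarked"]]).isnull().sum()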
###Output
_____no_output_____
###Markdown
**Warning**: earlier versions of the book used the `LabelBinarizer` or `CategoricalEncoder` classes to convert each categorical value to a one-hot vector. It is now preferable to use the `OneHotEncoder` class. Since Scikit-Learn 0.20 it can handle string categorical inputs (see [PR 10521](https://github.com/scikit-learn/scikit-learn/issues/10521)), not just integer categorical inputs. If you are using an older version of Scikit-Learn, you can import the new version from `future_encoders.py`:
###Code
try:
from sklearn.preprocessing import OrdinalEncoder # just to raise an ImportError if Scikit-Learn < 0.20
from sklearn.preprocessing import OneHotEncoder
except ImportError:
from future_encoders import OneHotEncoder # Scikit-Learn < 0.20
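# Small illustration (my addition): one-hot encode the "Sex" column directly from strings.
# OneHotEncoder expects 2D input, hence the double brackets.
OneHotEncoder(sparse=False).fit_transform(train_data[["Sex"]])[:5]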
###Output
_____no_output_____
###Markdown
Now we can build the pipeline for the categorical attributes:
###Code
cat_pipeline = Pipeline([
("select_cat", DataFrameSelector(["Pclass", "Sex", "Embarked"])),
("imputer", MostFrequentImputer()),
("cat_encoder", OneHotEncoder(sparse=False)),
])
cat_pipeline.fit_transform(train_data)
###Output
_____no_output_____
###Markdown
Finally, let's join the numerical and categorical pipelines:
###Code
from sklearn.pipeline import FeatureUnion
preprocess_pipeline = FeatureUnion(transformer_list=[
("num_pipeline", num_pipeline),
("cat_pipeline", cat_pipeline),
])
###Output
_____no_output_____
###Markdown
Cool! Now we have a nice preprocessing pipeline that takes the raw data and outputs numerical input features that we can feed to any Machine Learning model we want.
###Code
X_train = preprocess_pipeline.fit_transform(train_data)
X_train
###Output
_____no_output_____
###Markdown
Let's not forget to get the labels:
###Code
y_train = train_data["Survived"]
###Output
_____no_output_____
###Markdown
We are now ready to train a classifier. Let's start with an `SVC`:
###Code
from sklearn.svm import SVC
svm_clf = SVC(gamma="auto")
svm_clf.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Great, our model is trained, let's use it to make predictions on the test set:
###Code
X_test = preprocess_pipeline.transform(test_data)
y_pred = svm_clf.predict(X_test)
###Output
_____no_output_____
###Markdown
And now we could just build a CSV file with these predictions (respecting the format expected by Kaggle), then upload it and hope for the best. But wait! We can do better than hope. Why don't we use cross-validation to have an idea of how good our model is?
###Code
from sklearn.model_selection import cross_val_score
svm_scores = cross_val_score(svm_clf, X_train, y_train, cv=10)
svm_scores.mean()
###Output
_____no_output_____
###Markdown
Okay, over 73% accuracy, clearly better than random chance, but it's not a great score. Looking at the [leaderboard](https://www.kaggle.com/c/titanic/leaderboard) for the Titanic competition on Kaggle, you can see that you need to reach above 80% accuracy to be within the top 10% Kagglers. Some reached 100%, but since you can easily find the [list of victims](https://www.encyclopedia-titanica.org/titanic-victims/) of the Titanic, it seems likely that there was little Machine Learning involved in their performance! ;-) So let's try to build a model that reaches 80% accuracy. Let's try a `RandomForestClassifier`:
###Code
from sklearn.ensemble import RandomForestClassifier
forest_clf = RandomForestClassifier(n_estimators=100, random_state=42)
forest_scores = cross_val_score(forest_clf, X_train, y_train, cv=10)
forest_scores.mean()
###Output
_____no_output_____
###Markdown
That's much better! Instead of just looking at the mean accuracy across the 10 cross-validation folds, let's plot all 10 scores for each model, along with a box plot highlighting the lower and upper quartiles, and "whiskers" showing the extent of the scores (thanks to Nevin Yilmaz for suggesting this visualization). Note that the `boxplot()` function detects outliers (called "fliers") and does not include them within the whiskers. Specifically, if the lower quartile is $Q_1$ and the upper quartile is $Q_3$, then the interquartile range $IQR = Q_3 - Q_1$ (this is the box's height), and any score lower than $Q_1 - 1.5 \times IQR$ is a flier, and so is any score greater than $Q_3 + 1.5 \times IQR$.
###Code
plt.figure(figsize=(8, 4))
plt.plot([1]*10, svm_scores, ".")
plt.plot([2]*10, forest_scores, ".")
plt.boxplot([svm_scores, forest_scores], labels=("SVM","Random Forest"))
plt.ylabel("Accuracy", fontsize=14)
plt.show()
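# Illustrative check of the whisker rule described above (my addition, not in the original
# notebook): compute Q1, Q3 and the IQR for the SVM scores and flag any fliers by hand.
q1, q3 = np.percentile(svm_scores, [25, 75])
iqr = q3 - q1
fliers = svm_scores[(svm_scores < q1 - 1.5 * iqr) | (svm_scores > q3 + 1.5 * iqr)]
print("Q1={:.3f}, Q3={:.3f}, IQR={:.3f}, fliers={}".format(q1, q3, iqr, fliers))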
###Output
_____no_output_____
###Markdown
To improve this result further, you could:* Compare many more models and tune hyperparameters using cross validation and grid search,* Do more feature engineering, for example: * replace **SibSp** and **Parch** with their sum, * try to identify parts of names that correlate well with the **Survived** attribute (e.g. if the name contains "Countess", then survival seems more likely),* try to convert numerical attributes to categorical attributes: for example, different age groups had very different survival rates (see below), so it may help to create an age bucket category and use it instead of the age. Similarly, it may be useful to have a special category for people traveling alone since only 30% of them survived (see below).
###Code
train_data["AgeBucket"] = train_data["Age"] // 15 * 15
train_data[["AgeBucket", "Survived"]].groupby(['AgeBucket']).mean()
train_data["RelativesOnboard"] = train_data["SibSp"] + train_data["Parch"]
train_data[["RelativesOnboard", "Survived"]].groupby(['RelativesOnboard']).mean()
###Output
_____no_output_____
###Markdown
4. Spam classifier First, let's fetch the data:
###Code
import os
import tarfile
from six.moves import urllib
DOWNLOAD_ROOT = "http://spamassassin.apache.org/old/publiccorpus/"
HAM_URL = DOWNLOAD_ROOT + "20030228_easy_ham.tar.bz2"
SPAM_URL = DOWNLOAD_ROOT + "20030228_spam.tar.bz2"
SPAM_PATH = os.path.join("datasets", "spam")
def fetch_spam_data(spam_url=SPAM_URL, spam_path=SPAM_PATH):
if not os.path.isdir(spam_path):
os.makedirs(spam_path)
for filename, url in (("ham.tar.bz2", HAM_URL), ("spam.tar.bz2", SPAM_URL)):
path = os.path.join(spam_path, filename)
if not os.path.isfile(path):
urllib.request.urlretrieve(url, path)
tar_bz2_file = tarfile.open(path)
tar_bz2_file.extractall(path=SPAM_PATH)
tar_bz2_file.close()
fetch_spam_data()
###Output
_____no_output_____
###Markdown
Next, let's load all the emails:
###Code
HAM_DIR = os.path.join(SPAM_PATH, "easy_ham")
SPAM_DIR = os.path.join(SPAM_PATH, "spam")
ham_filenames = [name for name in sorted(os.listdir(HAM_DIR)) if len(name) > 20]
spam_filenames = [name for name in sorted(os.listdir(SPAM_DIR)) if len(name) > 20]
len(ham_filenames)
len(spam_filenames)
###Output
_____no_output_____
###Markdown
We can use Python's `email` module to parse these emails (this handles headers, encoding, and so on):
###Code
import email
import email.policy
def load_email(is_spam, filename, spam_path=SPAM_PATH):
directory = "spam" if is_spam else "easy_ham"
with open(os.path.join(spam_path, directory, filename), "rb") as f:
return email.parser.BytesParser(policy=email.policy.default).parse(f)
ham_emails = [load_email(is_spam=False, filename=name) for name in ham_filenames]
spam_emails = [load_email(is_spam=True, filename=name) for name in spam_filenames]
###Output
_____no_output_____
###Markdown
Let's look at one example of ham and one example of spam, to get a feel of what the data looks like:
###Code
print(ham_emails[1].get_content().strip())
print(spam_emails[6].get_content().strip())
###Output
Help wanted. We are a 14 year old fortune 500 company, that is
growing at a tremendous rate. We are looking for individuals who
want to work from home.
This is an opportunity to make an excellent income. No experience
is required. We will train you.
So if you are looking to be employed from home with a career that has
vast opportunities, then go:
http://www.basetel.com/wealthnow
We are looking for energetic and self motivated people. If that is you
than click on the link and fill out the form, and one of our
employement specialist will contact you.
To be removed from our link simple go to:
http://www.basetel.com/remove.html
4139vOLW7-758DoDY1425FRhM1-764SMFc8513fCsLl40
###Markdown
Some emails are actually multipart, with images and attachments (which can have their own attachments). Let's look at the various types of structures we have:
###Code
def get_email_structure(email):
if isinstance(email, str):
return email
payload = email.get_payload()
if isinstance(payload, list):
return "multipart({})".format(", ".join([
get_email_structure(sub_email)
for sub_email in payload
]))
else:
return email.get_content_type()
from collections import Counter
def structures_counter(emails):
structures = Counter()
for email in emails:
structure = get_email_structure(email)
structures[structure] += 1
return structures
structures_counter(ham_emails).most_common()
structures_counter(spam_emails).most_common()
###Output
_____no_output_____
###Markdown
It seems that the ham emails are more often plain text, while spam has quite a lot of HTML. Moreover, quite a few ham emails are signed using PGP, while no spam is. In short, it seems that the email structure is useful information to have. Now let's take a look at the email headers:
###Code
for header, value in spam_emails[0].items():
print(header,":",value)
###Output
Return-Path : <[email protected]>
Delivered-To : [email protected]
Received : from localhost (localhost [127.0.0.1]) by phobos.labs.spamassassin.taint.org (Postfix) with ESMTP id 136B943C32 for <zzzz@localhost>; Thu, 22 Aug 2002 08:17:21 -0400 (EDT)
Received : from mail.webnote.net [193.120.211.219] by localhost with POP3 (fetchmail-5.9.0) for zzzz@localhost (single-drop); Thu, 22 Aug 2002 13:17:21 +0100 (IST)
Received : from dd_it7 ([210.97.77.167]) by webnote.net (8.9.3/8.9.3) with ESMTP id NAA04623 for <[email protected]>; Thu, 22 Aug 2002 13:09:41 +0100
From : [email protected]
Received : from r-smtp.korea.com - 203.122.2.197 by dd_it7 with Microsoft SMTPSVC(5.5.1775.675.6); Sat, 24 Aug 2002 09:42:10 +0900
To : [email protected]
Subject : Life Insurance - Why Pay More?
Date : Wed, 21 Aug 2002 20:31:57 -1600
MIME-Version : 1.0
Message-ID : <0103c1042001882DD_IT7@dd_it7>
Content-Type : text/html; charset="iso-8859-1"
Content-Transfer-Encoding : quoted-printable
###Markdown
There's probably a lot of useful information in there, such as the sender's email address ([email protected] looks fishy), but we will just focus on the `Subject` header:
###Code
spam_emails[0]["Subject"]
###Output
_____no_output_____
###Markdown
Okay, before we learn too much about the data, let's not forget to split it into a training set and a test set:
###Code
import numpy as np
from sklearn.model_selection import train_test_split
X = np.array(ham_emails + spam_emails)
y = np.array([0] * len(ham_emails) + [1] * len(spam_emails))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
###Output
_____no_output_____
###Markdown
Okay, let's start writing the preprocessing functions. First, we will need a function to convert HTML to plain text. Arguably the best way to do this would be to use the great [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/) library, but I would like to avoid adding another dependency to this project, so let's hack a quick & dirty solution using regular expressions (at the risk of [un̨ho͞ly radiańcé destro҉ying all enli̍̈́̂̈́ghtenment](https://stackoverflow.com/a/1732454/38626)). The following function first drops the `<head>` section, then converts all `<a>` tags to the word HYPERLINK, then it gets rid of all HTML tags, leaving only the plain text. For readability, it also replaces multiple newlines with single newlines, and finally it unescapes html entities (such as `&gt;` or `&nbsp;`):
###Code
import re
from html import unescape
def html_to_plain_text(html):
text = re.sub('<head.*?>.*?</head>', '', html, flags=re.M | re.S | re.I)
    text = re.sub(r'<a\s.*?>', ' HYPERLINK ', text, flags=re.M | re.S | re.I)
text = re.sub('<.*?>', '', text, flags=re.M | re.S)
text = re.sub(r'(\s*\n)+', '\n', text, flags=re.M | re.S)
return unescape(text)
###Output
_____no_output_____
###Markdown
Let's see if it works. This is HTML spam:
###Code
html_spam_emails = [email for email in X_train[y_train==1]
if get_email_structure(email) == "text/html"]
sample_html_spam = html_spam_emails[7]
print(sample_html_spam.get_content().strip()[:1000], "...")
###Output
<HTML><HEAD><TITLE></TITLE><META http-equiv="Content-Type" content="text/html; charset=windows-1252"><STYLE>A:link {TEX-DECORATION: none}A:active {TEXT-DECORATION: none}A:visited {TEXT-DECORATION: none}A:hover {COLOR: #0033ff; TEXT-DECORATION: underline}</STYLE><META content="MSHTML 6.00.2713.1100" name="GENERATOR"></HEAD>
<BODY text="#000000" vLink="#0033ff" link="#0033ff" bgColor="#CCCC99"><TABLE borderColor="#660000" cellSpacing="0" cellPadding="0" border="0" width="100%"><TR><TD bgColor="#CCCC99" valign="top" colspan="2" height="27">
<font size="6" face="Arial, Helvetica, sans-serif" color="#660000">
<b>OTC</b></font></TD></TR><TR><TD height="2" bgcolor="#6a694f">
<font size="5" face="Times New Roman, Times, serif" color="#FFFFFF">
<b> Newsletter</b></font></TD><TD height="2" bgcolor="#6a694f"><div align="right"><font color="#FFFFFF">
<b>Discover Tomorrow's Winners </b></font></div></TD></TR><TR><TD height="25" colspan="2" bgcolor="#CCCC99"><table width="100%" border="0" ...
###Markdown
And this is the resulting plain text:
###Code
print(html_to_plain_text(sample_html_spam.get_content())[:1000], "...")
###Output
OTC
Newsletter
Discover Tomorrow's Winners
For Immediate Release
Cal-Bay (Stock Symbol: CBYI)
Watch for analyst "Strong Buy Recommendations" and several advisory newsletters picking CBYI. CBYI has filed to be traded on the OTCBB, share prices historically INCREASE when companies get listed on this larger trading exchange. CBYI is trading around 25 cents and should skyrocket to $2.66 - $3.25 a share in the near future.
Put CBYI on your watch list, acquire a position TODAY.
REASONS TO INVEST IN CBYI
A profitable company and is on track to beat ALL earnings estimates!
One of the FASTEST growing distributors in environmental & safety equipment instruments.
Excellent management team, several EXCLUSIVE contracts. IMPRESSIVE client list including the U.S. Air Force, Anheuser-Busch, Chevron Refining and Mitsubishi Heavy Industries, GE-Energy & Environmental Research.
RAPIDLY GROWING INDUSTRY
Industry revenues exceed $900 million, estimates indicate that there could be as much as $25 billi ...
###Markdown
Great! Now let's write a function that takes an email as input and returns its content as plain text, whatever its format is:
###Code
def email_to_text(email):
html = None
for part in email.walk():
ctype = part.get_content_type()
if not ctype in ("text/plain", "text/html"):
continue
try:
content = part.get_content()
except: # in case of encoding issues
content = str(part.get_payload())
if ctype == "text/plain":
return content
else:
html = content
if html:
return html_to_plain_text(html)
print(email_to_text(sample_html_spam)[:100], "...")
###Output
OTC
Newsletter
Discover Tomorrow's Winners
For Immediate Release
Cal-Bay (Stock Symbol: CBYI)
Wat ...
###Markdown
Let's throw in some stemming! For this to work, you need to install the Natural Language Toolkit ([NLTK](http://www.nltk.org/)). It's as simple as running the following command (don't forget to activate your virtualenv first; if you don't have one, you will likely need administrator rights, or use the `--user` option):`$ pip3 install nltk`
###Code
try:
import nltk
stemmer = nltk.PorterStemmer()
for word in ("Computations", "Computation", "Computing", "Computed", "Compute", "Compulsive"):
print(word, "=>", stemmer.stem(word))
except ImportError:
print("Error: stemming requires the NLTK module.")
stemmer = None
###Output
Computations => comput
Computation => comput
Computing => comput
Computed => comput
Compute => comput
Compulsive => compuls
###Markdown
We will also need a way to replace URLs with the word "URL". For this, we could use hard core [regular expressions](https://mathiasbynens.be/demo/url-regex) but we will just use the [urlextract](https://github.com/lipoja/URLExtract) library. You can install it with the following command (don't forget to activate your virtualenv first; if you don't have one, you will likely need administrator rights, or use the `--user` option):`$ pip3 install urlextract`
###Code
try:
import urlextract # may require an Internet connection to download root domain names
url_extractor = urlextract.URLExtract()
print(url_extractor.find_urls("Will it detect github.com and https://youtu.be/7Pq-S557XQU?t=3m32s"))
except ImportError:
print("Error: replacing URLs requires the urlextract module.")
url_extractor = None
###Output
['github.com', 'https://youtu.be/7Pq-S557XQU?t=3m32s']
###Markdown
We are ready to put all this together into a transformer that we will use to convert emails to word counters. Note that we split sentences into words using Python's `split()` method, which uses whitespaces for word boundaries. This works for many written languages, but not all. For example, Chinese and Japanese scripts generally don't use spaces between words, and Vietnamese often uses spaces even between syllables. It's okay in this exercise, because the dataset is (mostly) in English.
###Code
from sklearn.base import BaseEstimator, TransformerMixin
class EmailToWordCounterTransformer(BaseEstimator, TransformerMixin):
def __init__(self, strip_headers=True, lower_case=True, remove_punctuation=True,
replace_urls=True, replace_numbers=True, stemming=True):
self.strip_headers = strip_headers
self.lower_case = lower_case
self.remove_punctuation = remove_punctuation
self.replace_urls = replace_urls
self.replace_numbers = replace_numbers
self.stemming = stemming
def fit(self, X, y=None):
return self
def transform(self, X, y=None):
X_transformed = []
for email in X:
text = email_to_text(email) or ""
if self.lower_case:
text = text.lower()
if self.replace_urls and url_extractor is not None:
urls = list(set(url_extractor.find_urls(text)))
urls.sort(key=lambda url: len(url), reverse=True)
for url in urls:
text = text.replace(url, " URL ")
if self.replace_numbers:
text = re.sub(r'\d+(?:\.\d*(?:[eE]\d+))?', 'NUMBER', text)
if self.remove_punctuation:
text = re.sub(r'\W+', ' ', text, flags=re.M)
word_counts = Counter(text.split())
if self.stemming and stemmer is not None:
stemmed_word_counts = Counter()
for word, count in word_counts.items():
stemmed_word = stemmer.stem(word)
stemmed_word_counts[stemmed_word] += count
word_counts = stemmed_word_counts
X_transformed.append(word_counts)
return np.array(X_transformed)
###Output
_____no_output_____
###Markdown
Let's try this transformer on a few emails:
###Code
X_few = X_train[:3]
X_few_wordcounts = EmailToWordCounterTransformer().fit_transform(X_few)
X_few_wordcounts
###Output
_____no_output_____
###Markdown
This looks about right! Now we have the word counts, and we need to convert them to vectors. For this, we will build another transformer whose `fit()` method will build the vocabulary (an ordered list of the most common words) and whose `transform()` method will use the vocabulary to convert word counts to vectors. The output is a sparse matrix.
###Code
from scipy.sparse import csr_matrix
class WordCounterToVectorTransformer(BaseEstimator, TransformerMixin):
def __init__(self, vocabulary_size=1000):
self.vocabulary_size = vocabulary_size
def fit(self, X, y=None):
total_count = Counter()
for word_count in X:
for word, count in word_count.items():
total_count[word] += min(count, 10)
most_common = total_count.most_common()[:self.vocabulary_size]
self.most_common_ = most_common
self.vocabulary_ = {word: index + 1 for index, (word, count) in enumerate(most_common)}
return self
def transform(self, X, y=None):
rows = []
cols = []
data = []
for row, word_count in enumerate(X):
for word, count in word_count.items():
rows.append(row)
cols.append(self.vocabulary_.get(word, 0))
data.append(count)
return csr_matrix((data, (rows, cols)), shape=(len(X), self.vocabulary_size + 1))
vocab_transformer = WordCounterToVectorTransformer(vocabulary_size=10)
X_few_vectors = vocab_transformer.fit_transform(X_few_wordcounts)
X_few_vectors
X_few_vectors.toarray()
###Output
_____no_output_____
###Markdown
What does this matrix mean? Well, the 64 in the third row, first column, means that the third email contains 64 words that are not part of the vocabulary. The 1 next to it means that the first word in the vocabulary is present once in this email. The 2 next to it means that the second word is present twice, and so on. You can look at the vocabulary to know which words we are talking about. The first word is "of", the second word is "and", etc.
###Code
vocab_transformer.vocabulary_
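# Quick check of the interpretation above (my addition): column 0 accumulates the counts of all
# words that are NOT in the vocabulary, so it should match a manual count for the third email.
oov_count = sum(count for word, count in X_few_wordcounts[2].items()
                if word not in vocab_transformer.vocabulary_)
oov_count, X_few_vectors.toarray()[2, 0]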
###Output
_____no_output_____
###Markdown
We are now ready to train our first spam classifier! Let's transform the whole dataset:
###Code
from sklearn.pipeline import Pipeline
preprocess_pipeline = Pipeline([
("email_to_wordcount", EmailToWordCounterTransformer()),
("wordcount_to_vector", WordCounterToVectorTransformer()),
])
X_train_transformed = preprocess_pipeline.fit_transform(X_train)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
log_clf = LogisticRegression(solver="liblinear", random_state=42)
score = cross_val_score(log_clf, X_train_transformed, y_train, cv=3, verbose=3)
score.mean()
###Output
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.0s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 0.1s remaining: 0.0s
###Markdown
Over 98.7%, not bad for a first try! :) However, remember that we are using the "easy" dataset. You can try with the harder datasets, the results won't be so amazing. You would have to try multiple models, select the best ones and fine-tune them using cross-validation, and so on.But you get the picture, so let's stop now, and just print out the precision/recall we get on the test set:
###Code
from sklearn.metrics import precision_score, recall_score
X_test_transformed = preprocess_pipeline.transform(X_test)
log_clf = LogisticRegression(solver="liblinear", random_state=42)
log_clf.fit(X_train_transformed, y_train)
y_pred = log_clf.predict(X_test_transformed)
print("Precision: {:.2f}%".format(100 * precision_score(y_test, y_pred)))
print("Recall: {:.2f}%".format(100 * recall_score(y_test, y_pred)))
###Output
Precision: 94.90%
Recall: 97.89%
|
notebooks/dc2/validation/calibrate_dc2_densities.ipynb | ###Markdown
Define new sprinkler with a new AGN density function. We want to calibrate probabilities of choosing DC2 AGN to be sprinkled. To calibrate these probabilities we will try to match aspects of the overall OM10 population with our sprinkled population. Currently we are looking to match redshift and i-band magnitude. We are going to try to match the OM10 redshift and magnitude distributions. First we design a probability distribution in each component for galaxies that will get matched. First look at overall OM10 distributions in redshift and i-band magnitudes
###Code
# Load OM10 catalog
agn_cat = dc2_sp.gl_agn_cat
# The Minimum and Maximum Redshifts in the catalog but cosmoDC2 only goes up to z=3.1
z_min, z_max = np.min(agn_cat['z_src']), np.max(agn_cat['z_src'])
print(z_min, z_max)
# Since we allow matching within 0.1 in dex these are the min, max allowable redshifts in DC2 matching
binz_min, binz_max = 10**(np.log10(z_min)-0.1), z_max
print(binz_min, binz_max)
n_z, bins_z, _ = plt.hist(agn_cat['z_src'], bins=20, range=(binz_min, binz_max))
plt.xlabel('Redshift')
plt.ylabel('Lensed AGN Count')
plt.title('Lensed AGN Redshifts in OM10')
# The Minimum and Maximum Redshifts in the catalog
mag_i_min, mag_i_max = np.min(agn_cat['mag_i_src']), np.max(agn_cat['mag_i_src'])
print(mag_i_min, mag_i_max)
# Since we allow matching within 0.25 these are the min, max allowable AGN i-band mags in DC2 matching
bin_imag_min, bin_imag_max = mag_i_min-0.25, mag_i_max+.25
print(bin_imag_min, bin_imag_max)
n_imag, bins_imag, _ = plt.hist(agn_cat['mag_i_src'], bins=20, range=(bin_imag_min, bin_imag_max))
plt.xlabel('i-band AGN Magnitude')
plt.ylabel('Lensed AGN Count')
plt.title('Lensed AGN i-band mag in OM10')
###Output
_____no_output_____
###Markdown
Set up sampling functions. We will use these to set probabilities of choosing DC2 AGN that fall in these bins.
###Code
dens_z = copy(n_z)
dens_z = dens_z / np.max(dens_z)
dens_z[:8] += 0.1
dens_z[-8:] = 1.0
dens_z[:-8] -= 0.08
dens_z[5:-8] -= 0.05
dens_z[3] -= 0.01
dens_z[4] -= 0.03
dens_z[5] += 0.0
dens_z[6] -= 0.03
dens_z[7] -= 0.05
dens_z[8] -= 0.01
dens_z[9] -= 0.03
dens_z[10] += 0.04
dens_z[11] -= 0.02
plt.plot(bins_z[:-1], dens_z)
plt.xlabel('Redshift')
plt.ylabel('Probability')
plt.title('Probability of choosing DC2 galaxy in Sprinkler')
np.savetxt('../data/agn_z_density.dat', dens_z)
dens_imag = copy(n_imag)
dens_imag = dens_imag / np.max(dens_imag)
dens_imag[2:-6] += 2. * dens_imag[8:]
dens_imag[9] += 0.1
dens_imag[10] += 0.2
dens_imag[11] += 0.1
dens_imag[6:15] += 1. * (1. - np.linspace(0, 1, 9))
dens_imag = dens_imag / np.max(dens_imag)
dens_imag[2:4] += .2
dens_imag[4:10] = 1.0
dens_imag[10:-7] = 1.0
dens_imag[11] = 0.8
dens_imag[12] = 0.6
dens_imag[-7:] *= 0.5
bins_imag
plt.plot(bins_imag[:-1], dens_imag)
plt.xlabel('i-band magnitude')
plt.ylabel('Probability')
plt.title('Probability of choosing DC2 galaxy in Sprinkler')
bins_imag
np.savetxt('../data/agn_imag_density.dat', dens_imag)
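# Illustrative sketch only (my addition, not the sprinkler's actual implementation): how a
# per-galaxy sprinkling probability could be looked up from the binned densities saved above.
# The sprinkler multiplies the redshift and i-band magnitude probabilities and an overall
# scaling factor (currently 1.0); the bin edges used here (bins_z, bins_imag) are an assumption.
def lookup_density(value, bin_edges, densities):
    idx = np.clip(np.digitize(value, bin_edges) - 1, 0, len(densities) - 1)
    return densities[idx]

example_prob = 1.0 * lookup_density(1.2, bins_z, dens_z) * lookup_density(23.0, bins_imag, dens_imag)
example_prob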
###Output
_____no_output_____
###Markdown
We save these sampling functions to file and then use them in the sprinkler code. In the sprinkler we multiply the redshift probability by the magnitude probability, then by a scaling value (currently 1.0) that calibrates the overall number of systems. Test out the sampling by running the sprinkler
###Code
agn_hosts, agn_sys_cat = dc2_sp.sprinkle_agn()
agn_hosts.head()
agn_hosts.iloc[0]['varParamStr_agn']
import json
json.loads(agn_hosts.iloc[0]['varParamStr_agn'])['p']
agn_sys_cat.head()
len(agn_hosts)
fig = plt.figure(figsize=(10,6))
n, bins, _ = plt.hist(agn_hosts['redshift'], density=True, histtype='step',
label='Matched DC2 Galaxies', lw=3, bins=10, range=(binz_min, binz_max))
plt.hist(agn_cat.query('z_src <= %f' % binz_max)['z_src'], density=True,
histtype='step', label='OM10', lw=3, bins=bins)
plt.hist(dc2_sp.gal_cat.query('magnorm_agn > -99')['redshift'], density=True,
histtype='step', bins=bins, label='Overall DC2 DDF AGN', lw=3)
plt.xlabel('Redshift', size=16)
plt.ylabel('Normalized # of AGN', size=16)
plt.xticks(size=14)
plt.yticks(size=14)
plt.legend(fontsize=14, loc=2)
plt.title('Comparing Matched Redshifts to OM10')
fig = plt.figure(figsize=(10,6))
n, bins, _ = plt.hist(agn_hosts['mag_i_agn'], density=True, histtype='step',
label='Matched DC2 Galaxies', lw=3, bins=10)
plt.hist(agn_cat.query('z_src <= %f' % binz_max)['mag_i_src'], density=True,
histtype='step', label='OM10', lw=3, bins=bins)
plt.hist(dc2_sp.gal_cat.query('magnorm_agn > -99')['mag_i_agn'], density=True,
histtype='step', bins=bins, label='Overall DC2 DDF AGN', lw=3)
plt.xlabel('i-band magnitude', size=16)
plt.ylabel('Normalized # of AGN', size=16)
plt.xticks(size=14)
plt.yticks(size=14)
plt.legend(fontsize=14, loc=2)
plt.title('Comparing Matched i-band magnitudes to OM10')
fig = corner.corner(agn_hosts[['redshift', 'mag_i_agn']].values, bins=10, hist_kwargs={'density':True},
labels=['redshift', 'AGN i-band mag'], label_kwargs={'size':14})
corner.corner(agn_cat.query('z_src <= %f' % binz_max)[['z_src', 'mag_i_src']].values,
bins=10, color='r', fig=fig, hist_kwargs={'density':True})
plt.show()
###Output
_____no_output_____
###Markdown
Our redshift distribution looks close but we have some trouble with i-band magnitudes since the OM10 population is brighter than is actually possible with the DC2 population. It looks like it's about as well as we can do with what we have and yields 1476 systems which is a good sample size. Define new sprinkler with a new SNe density functionWe want to calibrate probabilities of choosing DC2 SNe hosts to be sprinkled. To calibrate these probabilities we will try to match aspects of the overall SNe and SNe host population with our sprinkled population. Review of matching code currently in Sprinkler The function `assign_matches_sne` works in the sprinkler to match potential host galaxies to sprinkled systemsIt first calculates a probability for a galaxy to host a SN with the function `sne_density` based upon stellar mass and galaxy type. Then it draws from a value from a uniform distribution to see if that galaxy gets a SN. If it does then it moves on to `find_possible_match_sne` to find the lensed SN systems that match up to this galaxy based upon redshift and galaxy type.``` def assign_matches_sne(self, sne_gals, rand_state): sprinkled_sne_gal_rows = [] sprinkled_gl_sne_cat_rows = [] for i in range(len(sne_gals)): if i % 10000 == 0: print(i) Draw probability that galaxy is sprinkled sne_density = self.sne_density(sne_gals.iloc[i]) density_draw = rand_state.uniform() if density_draw > sne_density: continue sne_cat_idx = self.find_possible_match_sne(sne_gals.iloc[i]) sne_idx_keep = [x for x in sne_cat_idx if x not in sprinkled_gl_sne_cat_rows] if len(sne_idx_keep) == 0: continue weight = self.gl_sne_cat['weight'].iloc[sne_idx_keep] sprinkled_gl_sne_cat_rows.append( rand_state.choice(sne_idx_keep, p = weight/np.sum(weight))) sprinkled_sne_gal_rows.append(i) return sprinkled_sne_gal_rows, sprinkled_gl_sne_cat_rows``` `sne_density` determines the probability of a galaxy hosting a SNWe use the galaxy types we determined when creating the catalog and the stellar mass of the galaxy to get probabilities that conform with the rates in Table 2 of [Mannucci et al. 2005](https://www.aanda.org/articles/aa/pdf/2005/15/aa1411.pdf).We add in a normalization factor to get the approximate number of lensed SNe we want in the DDF field over the 10 years.``` def sne_density(self, sne_gal_row): density_norm = 0.05 stellar_mass = sne_gal_row['stellar_mass'] host_type = sne_gal_row['gal_type'] if host_type == 'kinney-elliptical': density_host = 0.044 * stellar_mass * 1e-10 elif host_type == 'kinney-sc': density_host = 0.17 * stellar_mass * 1e-10 elif host_type == 'kinney-starburst': density_host = 0.77 * stellar_mass * 1e-10 density_val = density_norm * density_host return density_val``` `find_possible_match_sne` matches potential host galaxies with appropriate systems in the lensed SNe catalogWe find lensed SNe systems that are approximately the same in host mass redshift and type.``` def find_possible_match_sne(self, gal_cat): gal_z = gal_cat['redshift'] gal_type = gal_cat['gal_type'] search the SNe catalog for all sources +- 0.05 dex in redshift and with matching type lens_candidate_idx = [] w = np.where((np.abs(np.log10(self.gl_sne_cat['z_src']) - np.log10(gal_z)) <= 0.05) & (self.gl_sne_cat['type_host'] == gal_type)) lens_candidate_idx = w[0] return lens_candidate_idx```
###Code
# Load Goldstein et al. catalog
sne_cat = dc2_sp.gl_sne_cat
###Output
_____no_output_____
###Markdown
Test out the sampling by running the sprinkler
###Code
#dc2_sp.gal_cat = dc2_sp.gal_cat.iloc[:10000]
len(dc2_sp.gl_sne_cat)
sne_hosts, sne_sys_cat = dc2_sp.sprinkle_sne()
len(sne_hosts)
###Output
_____no_output_____
###Markdown
Look at some properties of the matched host galaxies and SNe
###Code
plt.hist(sne_hosts['gal_type'])
plt.xlabel('Host Galaxy Type')
plt.ylabel('Galaxy Counts')
plt.hist(sne_hosts['redshift'])
plt.xlabel('Host Galaxy Redshift')
plt.ylabel('Galaxy Counts')
# Check distribution of times SNe appears
plt.hist(sne_sys_cat['t0'])
plt.xlabel('MJD of First SNe image t0')
plt.ylabel('Count')
# Check distribution of times SNe appears
plt.hist(np.log10(sne_sys_cat['x0']))
plt.xlabel('Log(SNe Salt-2 X0 parameter)')
plt.ylabel('Count')
sne_sys_cat.head()
sne_hosts.columns
sne_sys_cat.columns
sne_orig = pd.read_hdf('../data/glsne_cosmoDC2_v1.1.4.h5', key='image')
sne_orig.head(10)
sne_host_orig = pd.read_hdf('../data/glsne_cosmoDC2_v1.1.4.h5', key='system')
sne_host_orig
sne_host_orig.columns
sne_host_orig[['snx', 'sny', 'host_x', 'host_y']]
dc2_sp.output_lensed_sne_truth(sne_hosts, sne_sys_cat, 'example_sne_truth.db', id_offset=2000)
dc2_sp.output_host_galaxy_truth(agn_hosts, agn_sys_cat, sne_hosts,
sne_sys_cat, 'example_host_truth.db')
dc2_sp.output_lensed_agn_truth(agn_hosts, agn_sys_cat, 'example_agn_truth.db', id_offset=0)
###Output
_____no_output_____ |
methods/transformers/examples/movement-pruning/Saving_PruneBERT.ipynb | ###Markdown
Saving PruneBERT This notebook aims at showcasing how we can leverage standard tools to save (and load) an extremely sparse model fine-pruned with [movement pruning](https://arxiv.org/abs/2005.07683) (or any other unstructured pruning method). In this example, we used BERT (base-uncased), but the procedure described here is not specific to BERT and can be applied to a large variety of models. We first obtain an extremely sparse model by fine-pruning with movement pruning on SQuAD v1.1. We then use the following combination of standard tools:- We reduce the precision of the model with Int8 dynamic quantization using the [PyTorch implementation](https://pytorch.org/tutorials/intermediate/dynamic_quantization_bert_tutorial.html). We only quantize the Fully Connected Layers.- Sparse quantized matrices are converted into the [Compressed Sparse Row format](https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.html).- We use HDF5 with `gzip` compression to store the weights. We experiment with a question answering model with only 6% of total remaining weights in the encoder (previously obtained with movement pruning). **We are able to reduce the memory size of the encoder from 340MB (original dense BERT) to 11MB**, which fits on a [91' floppy disk](https://en.wikipedia.org/wiki/Floptical)!*Note: this notebook is compatible with `torch>=1.5.0`. If you are using `torch==1.4.0`, please refer to [this previous version of the notebook](https://github.com/huggingface/transformers/commit/b11386e158e86e62d4041eabd86d044cd1695737).*
###Code
# Includes
import h5py
import os
import json
from collections import OrderedDict
from scipy import sparse
import numpy as np
import torch
from torch import nn
from transformers import *
os.chdir('../../')
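# Tiny illustration (my addition) of the CSR round-trip used throughout this notebook: a sparse
# int8 matrix is fully described by (data, indices, indptr, shape) and can be rebuilt from them.
demo = sparse.csr_matrix(np.array([[0, 3, 0], [0, 0, 5]], dtype=np.int8))
rebuilt = sparse.csr_matrix((demo.data, demo.indices, demo.indptr), shape=demo.shape)
assert (rebuilt.toarray() == demo.toarray()).all()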
###Output
_____no_output_____
###Markdown
Saving Dynamic quantization induces little or no loss of performance while significantly reducing the memory footprint.
###Code
# Load fine-pruned model and quantize the model
model = BertForQuestionAnswering.from_pretrained("huggingface/prunebert-base-uncased-6-finepruned-w-distil-squad")
model.to('cpu')
quantized_model = torch.quantization.quantize_dynamic(
model=model,
qconfig_spec = {
torch.nn.Linear : torch.quantization.default_dynamic_qconfig,
},
dtype=torch.qint8,
)
# print(quantized_model)
qtz_st = quantized_model.state_dict()
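# Illustrative check (my addition, not needed for saving): dynamic quantization replaces the
# fully connected nn.Linear modules with dynamically quantized Linear modules, leaving the
# rest of the architecture untouched.
print(model.bert.encoder.layer[0].intermediate.dense)
print(quantized_model.bert.encoder.layer[0].intermediate.dense)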
# Saving the original (encoder + classifier) in the standard torch.save format
dense_st = {name: param for name, param in model.state_dict().items()
if "embedding" not in name and "pooler" not in name}
torch.save(dense_st, 'dbg/dense_squad.pt',)
dense_mb_size = os.path.getsize("dbg/dense_squad.pt")
# Elementary representation: we decompose the quantized tensors into (scale, zero_point, int_repr).
# See https://pytorch.org/docs/stable/quantization.html
# We further leverage the fact that int_repr is sparse matrix to optimize the storage: we decompose int_repr into
# its CSR representation (data, indptr, indices).
elementary_qtz_st = {}
for name, param in qtz_st.items():
if "dtype" not in name and param.is_quantized:
print("Decompose quantization for", name)
# We need to extract the scale, the zero_point and the int_repr for the quantized tensor and modules
scale = param.q_scale() # torch.tensor(1,) - float32
zero_point = param.q_zero_point() # torch.tensor(1,) - int32
elementary_qtz_st[f"{name}.scale"] = scale
elementary_qtz_st[f"{name}.zero_point"] = zero_point
# We assume the int_repr is sparse and compute its CSR representation
# Only the FCs in the encoder are actually sparse
int_repr = param.int_repr() # torch.tensor(nb_rows, nb_columns) - int8
int_repr_cs = sparse.csr_matrix(int_repr) # scipy.sparse.csr.csr_matrix
elementary_qtz_st[f"{name}.int_repr.data"] = int_repr_cs.data # np.array int8
elementary_qtz_st[f"{name}.int_repr.indptr"] = int_repr_cs.indptr # np.array int32
assert max(int_repr_cs.indices) < 65535 # If not, we shall fall back to int32
elementary_qtz_st[f"{name}.int_repr.indices"] = np.uint16(int_repr_cs.indices) # np.array uint16
elementary_qtz_st[f"{name}.int_repr.shape"] = int_repr_cs.shape # tuple(int, int)
else:
elementary_qtz_st[name] = param
# Create mapping from torch.dtype to string description (we could also used an int8 instead of string)
str_2_dtype = {"qint8": torch.qint8}
dtype_2_str = {torch.qint8: "qint8"}
# Saving the pruned (encoder + classifier) in the standard torch.save format
dense_optimized_st = {name: param for name, param in elementary_qtz_st.items()
if "embedding" not in name and "pooler" not in name}
torch.save(dense_optimized_st, 'dbg/dense_squad_optimized.pt',)
print("Encoder Size (MB) - Sparse & Quantized - `torch.save`:",
round(os.path.getsize("dbg/dense_squad_optimized.pt")/1e6, 2))
# Save the decomposed state_dict with an HDF5 file
# Saving only the encoder + QA Head
with h5py.File('dbg/squad_sparse.h5','w') as hf:
for name, param in elementary_qtz_st.items():
if "embedding" in name:
print(f"Skip {name}")
continue
if "pooler" in name:
print(f"Skip {name}")
continue
if type(param) == torch.Tensor:
if param.numel() == 1:
# module scale
# module zero_point
hf.attrs[name] = param
continue
if param.requires_grad:
# LayerNorm
param = param.detach().numpy()
hf.create_dataset(name, data=param, compression="gzip", compression_opts=9)
elif type(param) == float or type(param) == int or type(param) == tuple:
# float - tensor _packed_params.weight.scale
# int - tensor _packed_params.weight.zero_point
# tuple - tensor _packed_params.weight.shape
hf.attrs[name] = param
elif type(param) == torch.dtype:
# dtype - tensor _packed_params.dtype
hf.attrs[name] = dtype_2_str[param]
else:
hf.create_dataset(name, data=param, compression="gzip", compression_opts=9)
with open('dbg/metadata.json', 'w') as f:
f.write(json.dumps(qtz_st._metadata))
size = os.path.getsize("dbg/squad_sparse.h5") + os.path.getsize("dbg/metadata.json")
print("")
print("Encoder Size (MB) - Dense: ", round(dense_mb_size/1e6, 2))
print("Encoder Size (MB) - Sparse & Quantized:", round(size/1e6, 2))
# Save the decomposed state_dict to HDF5 storage
# Save everything in the architecutre (embedding + encoder + QA Head)
with h5py.File('dbg/squad_sparse_with_embs.h5','w') as hf:
for name, param in elementary_qtz_st.items():
# if "embedding" in name:
# print(f"Skip {name}")
# continue
# if "pooler" in name:
# print(f"Skip {name}")
# continue
if type(param) == torch.Tensor:
if param.numel() == 1:
# module scale
# module zero_point
hf.attrs[name] = param
continue
if param.requires_grad:
# LayerNorm
param = param.detach().numpy()
hf.create_dataset(name, data=param, compression="gzip", compression_opts=9)
elif type(param) == float or type(param) == int or type(param) == tuple:
# float - tensor _packed_params.weight.scale
# int - tensor _packed_params.weight.zero_point
# tuple - tensor _packed_params.weight.shape
hf.attrs[name] = param
elif type(param) == torch.dtype:
# dtype - tensor _packed_params.dtype
hf.attrs[name] = dtype_2_str[param]
else:
hf.create_dataset(name, data=param, compression="gzip", compression_opts=9)
with open('dbg/metadata.json', 'w') as f:
f.write(json.dumps(qtz_st._metadata))
size = os.path.getsize("dbg/squad_sparse_with_embs.h5") + os.path.getsize("dbg/metadata.json")
print('\nSize (MB):', round(size/1e6, 2))
###Output
Size (MB): 99.41
###Markdown
Loading
###Code
# Reconstruct the elementary state dict
reconstructed_elementary_qtz_st = {}
hf = h5py.File('dbg/squad_sparse_with_embs.h5','r')
for attr_name, attr_param in hf.attrs.items():
if 'shape' in attr_name:
attr_param = tuple(attr_param)
elif ".scale" in attr_name:
if "_packed_params" in attr_name:
attr_param = float(attr_param)
else:
attr_param = torch.tensor(attr_param)
elif ".zero_point" in attr_name:
if "_packed_params" in attr_name:
attr_param = int(attr_param)
else:
attr_param = torch.tensor(attr_param)
elif ".dtype" in attr_name:
attr_param = str_2_dtype[attr_param]
reconstructed_elementary_qtz_st[attr_name] = attr_param
# print(f"Unpack {attr_name}")
# Get the tensors/arrays
for data_name, data_param in hf.items():
if "LayerNorm" in data_name or "_packed_params.bias" in data_name:
reconstructed_elementary_qtz_st[data_name] = torch.from_numpy(np.array(data_param))
elif "embedding" in data_name:
reconstructed_elementary_qtz_st[data_name] = torch.from_numpy(np.array(data_param))
else: # _packed_params.weight.int_repr.data, _packed_params.weight.int_repr.indices and _packed_params.weight.int_repr.indptr
data_param = np.array(data_param)
if "indices" in data_name:
data_param = np.array(data_param, dtype=np.int32)
reconstructed_elementary_qtz_st[data_name] = data_param
# print(f"Unpack {data_name}")
hf.close()
# Sanity checks
for name, param in reconstructed_elementary_qtz_st.items():
assert name in elementary_qtz_st
for name, param in elementary_qtz_st.items():
assert name in reconstructed_elementary_qtz_st, name
for name, param in reconstructed_elementary_qtz_st.items():
assert type(param) == type(elementary_qtz_st[name]), name
if type(param) == torch.Tensor:
assert torch.all(torch.eq(param, elementary_qtz_st[name])), name
elif type(param) == np.ndarray:
assert (param == elementary_qtz_st[name]).all(), name
else:
assert param == elementary_qtz_st[name], name
# Re-assemble the sparse int_repr from the CSR format
reconstructed_qtz_st = {}
for name, param in reconstructed_elementary_qtz_st.items():
if "weight.int_repr.indptr" in name:
prefix_ = name[:-16]
data = reconstructed_elementary_qtz_st[f"{prefix_}.int_repr.data"]
indptr = reconstructed_elementary_qtz_st[f"{prefix_}.int_repr.indptr"]
indices = reconstructed_elementary_qtz_st[f"{prefix_}.int_repr.indices"]
shape = reconstructed_elementary_qtz_st[f"{prefix_}.int_repr.shape"]
int_repr = sparse.csr_matrix(arg1=(data, indices, indptr),
shape=shape)
int_repr = torch.tensor(int_repr.todense())
scale = reconstructed_elementary_qtz_st[f"{prefix_}.scale"]
zero_point = reconstructed_elementary_qtz_st[f"{prefix_}.zero_point"]
weight = torch._make_per_tensor_quantized_tensor(int_repr,
scale,
zero_point)
reconstructed_qtz_st[f"{prefix_}"] = weight
elif "int_repr.data" in name or "int_repr.shape" in name or "int_repr.indices" in name or \
"weight.scale" in name or "weight.zero_point" in name:
continue
else:
reconstructed_qtz_st[name] = param
# Sanity checks
for name, param in reconstructed_qtz_st.items():
assert name in qtz_st
for name, param in qtz_st.items():
assert name in reconstructed_qtz_st, name
for name, param in reconstructed_qtz_st.items():
assert type(param) == type(qtz_st[name]), name
if type(param) == torch.Tensor:
assert torch.all(torch.eq(param, qtz_st[name])), name
elif type(param) == np.ndarray:
assert (param == qtz_st[name]).all(), name
else:
assert param == qtz_st[name], name
###Output
_____no_output_____
###Markdown
Sanity checks
###Code
# Load the re-constructed state dict into a model
dummy_model = BertForQuestionAnswering.from_pretrained('bert-base-uncased')
dummy_model.to('cpu')
reconstructed_qtz_model = torch.quantization.quantize_dynamic(
model=dummy_model,
qconfig_spec = None,
dtype=torch.qint8,
)
reconstructed_qtz_st = OrderedDict(reconstructed_qtz_st)
with open('dbg/metadata.json', 'r') as read_file:
metadata = json.loads(read_file.read())
reconstructed_qtz_st._metadata = metadata
reconstructed_qtz_model.load_state_dict(reconstructed_qtz_st)
# Sanity checks on the inference
N = 32
for _ in range(25):
inputs = torch.randint(low=0, high=30000, size=(N, 128))
mask = torch.ones(size=(N, 128))
y_reconstructed = reconstructed_qtz_model(input_ids=inputs, attention_mask=mask)[0]
y = quantized_model(input_ids=inputs, attention_mask=mask)[0]
assert torch.all(torch.eq(y, y_reconstructed))
print("Sanity check passed")
###Output
Sanity check passed
|
Model_Attempts/Twelfth_Try_Model.ipynb | ###Markdown
King County Dataset Linear Regression Model 12 Adjustments for this model: Start with getting rid of 'id', 'zipcode', 'lat', 'long' Then deal with the NaN's in 'view', 'yr_renovated', 'waterfront', and 'sqft_basement' Change "?" in 'sqft_basement', then change it to a float. Take care of the outlier in 'bedrooms'. Are there outliers in these? 'sqft_living','sqft_lot', 'sqft_living15', 'sqft_lot15' Deal with the 'date' feature? - I still don't know how! Bin categorical data: 'view', 'grade', 'sqft_basement', 'yr_renovated', 'waterfront', 'condition' Log Transform right skewed data: 'sqft_above', 'sqft_living','sqft_lot', 'sqft_living15', 'sqft_lot15' Max/Min: None. Standardization: 'sqft_above', 'sqft_living','sqft_lot','sqft_living15', 'sqft_lot15'
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
data = pd.read_csv("kc_house_data.csv")
data.head()
# This time I'm going to try to not adjust the original, just a new series called king_features
king_features = pd.read_csv("kc_house_data.csv")
data.describe()
king_features.describe()
###Output
_____no_output_____
###Markdown
Missing Data
###Code
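# Quick look at how much data is missing per column before imputing anything (my addition)
data.isna().sum()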
# Change "?" in 'sqft_basement' to '0';
king_features.sqft_basement = king_features.sqft_basement.replace(to_replace = '?', value = '0')
# Account for missing data in 'waterfront', 'view', 'yr_renovated';
king_features.waterfront.fillna(value=king_features.waterfront.median(), inplace = True)
king_features.view.fillna(value=king_features.view.median(), inplace = True)
king_features.yr_renovated.fillna(value=king_features.yr_renovated.median(), inplace = True)
king_features.sqft_basement.fillna(value=king_features.sqft_basement.median(), inplace = True)
# Change outlier '33' to '3' in 'bedrooms';
king_features.at[15856,'bedrooms'] = 3
# Change 'date' feature to float; Still not working!
import datetime as dt
king_features['date'] = pd.to_datetime(king_features.date)
# Look at 'date' object.
king_features.date.hist()
# Other code to try; Still not working!
# Change 'date' feature to float;
#import datetime as dt
#Run this code first and then change it!
#data["date"] = pd.to_datetime(data["date"], format = "%m/%d/%Y")
# I want day first, but it won't work this way.
#data["date"] = pd.to_datetime(data["date"], format = "%d/%m/%Y")
# Change 'sqft_basement' from an object to a float:
king_features['sqft_basement'] = king_features['sqft_basement'].astype(float)
king_features = king_features.drop(["id"], axis=1)
# Before
data.bathrooms.hist()
king_features.bathrooms.mean()
king_features.bathrooms.std()*4 + king_features.bathrooms.mean()
king_features = king_features[king_features.bathrooms < 6]
# After
king_features.bathrooms.hist()
# Before
data.bedrooms.hist()
king_features.bedrooms.mean()
king_features.bedrooms.mean()+king_features.bedrooms.std()*4
king_features = king_features[king_features.bedrooms < 7]
# After
king_features.bedrooms.hist()
# Before
data.sqft_living.hist()
king_features.sqft_living.mean()
king_features.sqft_living.mean()+king_features.sqft_living.std()*4
len(king_features.loc[king_features["sqft_living"] > 5664])
king_features = king_features[king_features.sqft_living < 5664]
# After
king_features.sqft_living.hist()
# Before
data.sqft_lot.hist()
king_features.sqft_lot.mean()
king_features.sqft_lot.mean()+king_features.sqft_lot.std()*4
# Number of homes that have more than a 1 acre lot or 43560 sqft.
len(king_features.loc[king_features["sqft_lot"] > 178007])
king_features = king_features[king_features.sqft_lot < 178007]
# After
king_features.sqft_lot.hist()
# Before
data.sqft_above.hist()
king_features.sqft_above.mean()
king_features.sqft_above.mean()+king_features.sqft_above.std()*4
# This number is different than the length of the data, because I've been cleaning it!
len(king_features.loc[king_features["sqft_above"] > 4882])
king_features = king_features[king_features.sqft_above < 4882]
king_features.sqft_above.hist()
king_features.yr_built.unique()
# Left skewed.
king_features.yr_built.hist()
# Huge gap between data. Just like 'sqft_basement'
data.yr_renovated.hist()
# Before
data.sqft_living15.hist()
king_features.sqft_living15.mean()+king_features.sqft_living15.std()*4
len(king_features.loc[king_features["sqft_living15"] > 4628])
# Let's get rid of the outliers
king_features = king_features[king_features.sqft_living15 < 4628]
king_features.sqft_living15.hist()
data.sqft_lot15.hist()
king_features.sqft_lot15.mean()+king_features.sqft_lot15.std()*4
len(king_features.loc[king_features["sqft_lot15"] > 77806])
# Let's get rid of the outliers
king_features = king_features[king_features.sqft_lot15 < 77806]
# After
king_features.sqft_lot15.hist()
data.grade.hist()
king_features.grade.describe()
king_features.describe()
# Create bins for 'yr_renovated' based on the values observed. 4 values will result in 3 bins
bins_A = [0, 1900, 2000, 2020]
bins_yr_renovated = pd.cut(king_features['yr_renovated'], bins_A)
bins_yr_renovated = bins_yr_renovated.cat.as_ordered()
yr_renovated_dummy = pd.get_dummies(bins_yr_renovated, prefix="yr-ren", drop_first=True)
king_features = king_features.drop(["yr_renovated"], axis=1)
king_features = pd.concat([king_features, yr_renovated_dummy], axis=1)
# Create bins for 'sqft_basement' based on the values observed. 3 values will result in 2 bins
bins_B = [0, 100, 5000]
bins_sqft_basement = pd.cut(king_features['sqft_basement'], bins_B)
bins_sqft_basement = bins_sqft_basement.cat.as_ordered()
sqft_basement_dummy = pd.get_dummies(bins_sqft_basement, prefix="sqft_base", drop_first=True)
king_features = king_features.drop(["sqft_basement"], axis=1)
king_features = pd.concat([king_features, sqft_basement_dummy], axis=1)
# Create bins for 'view' based on the values observed. 3 values will result in 2 bins
bins_C = [0, 2, 4]
bins_view = pd.cut(king_features['view'], bins_C)
bins_view = bins_view.cat.as_ordered()
view_dummy = pd.get_dummies(bins_view, prefix="new_view", drop_first=True)
king_features = king_features.drop(["view"], axis=1)
king_features = pd.concat([king_features, view_dummy], axis=1)
# Create bins for 'grade' based on the values observed. 3 values will result in 2 bins
bins_D = [0, 8, 13]
bins_grade = pd.cut(king_features['grade'], bins_D)
bins_grade = bins_grade.cat.as_ordered()
grade_dummy = pd.get_dummies(bins_grade, prefix="new_grade", drop_first=True)
king_features = king_features.drop(["grade"], axis=1)
king_features = pd.concat([king_features, grade_dummy], axis=1)
# Create bins for 'waterfront' based on the values observed. 3 values will result in 2 bins
bins_E = [0, 0.5, 1]
bins_waterfront = pd.cut(king_features['waterfront'], bins_E)
bins_waterfront = bins_waterfront.cat.as_ordered()
waterfront_dummy = pd.get_dummies(bins_waterfront, prefix="new_waterfront", drop_first=True)
king_features = king_features.drop(["waterfront"], axis=1)
king_features = pd.concat([king_features, waterfront_dummy], axis=1)
# Create bins for 'condition' based on the values observed. 4 values will result in 3 bins
bins_G = [0, 3, 4, 5]
bins_condition = pd.cut(king_features['condition'], bins_G)
bins_condition = bins_condition.cat.as_ordered()
condition_dummy = pd.get_dummies(bins_condition, prefix="new_condition", drop_first=True)
king_features = king_features.drop(["condition"], axis=1)
king_features = pd.concat([king_features, condition_dummy], axis=1)
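# Quick check (my addition): look at the dummy columns created by the binning above
king_features.filter(regex="yr-ren|sqft_base|new_").head()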
###Output
_____no_output_____
###Markdown
Log Transformation: the following features have right-skewed histograms: 'sqft_above', 'sqft_lot', 'sqft_living', 'sqft_living15', 'sqft_lot15'
###Code
# Perform log transformation
logabove = np.log(king_features["sqft_above"])
loglot = np.log(king_features["sqft_lot"])
logliving = np.log(king_features["sqft_living"])
loglivingnear = np.log(king_features["sqft_living15"])
loglotnear = np.log(king_features["sqft_lot15"])
# Standardize the log-transformed values and write them back into the dataframe
king_features["sqft_above"] = (logabove-np.mean(logabove))/np.sqrt(np.var(logabove))
king_features["sqft_lot"] = (loglot-np.mean(loglot))/np.sqrt(np.var(loglot))
king_features["sqft_living"] = (logliving-np.mean(logliving))/np.sqrt(np.var(logliving))
king_features["sqft_living15"] = (loglivingnear-np.mean(loglivingnear))/np.sqrt(np.var(loglivingnear))
king_features["sqft_lot15"] = (loglotnear-np.mean(loglotnear))/(np.sqrt(np.var(loglotnear)))
###Output
_____no_output_____
###Markdown
Check the histograms of the log-transformed and standardized features:
###Code
ax1 = plt.subplot(2, 2, 1)
king_features.sqft_above.hist(ax=ax1)
ax1.set_title("sqft_above")
ax2 = plt.subplot(2, 2, 2)
king_features.sqft_living.hist(ax=ax2)
ax2.set_title('sqft_living')
ax3 = plt.subplot(2, 2, 3)
king_features.sqft_living15.hist(ax=ax3)
ax3.set_title("sqft_living15")
ax4 = plt.subplot(2, 2, 4)
king_features.sqft_lot15.hist(ax=ax4)
ax4.set_title('sqft_lot15')
king_features.info()
king_features = king_features.drop(['yr-ren_(1900, 2000]'], axis=1)
# "ValueError: The indices for endog and exog are not aligned" - if I use both 'data' and 'king_features'
#data.reindex(king_features.index)
y = pd.DataFrame(king_features, columns = ['price'])
X = king_features.drop(['price','date', 'floors', 'sqft_lot'], axis=1)
import statsmodels.api as sm
model = sm.OLS(y,X).fit()
model.summary()
# Perform a train-test split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
# A brief preview of our train test split
print(len(X_train), len(X_test), len(y_train), len(y_test))
# Apply your model to the train set
from sklearn.linear_model import LinearRegression
linreg = LinearRegression()
linreg.fit(X_train, y_train)
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
# Calculate predictions on training and test sets
y_hat_train = linreg.predict(X_train)
y_hat_test = linreg.predict(X_test)
# Calculate training and test residuals
train_residuals = y_hat_train - y_train
test_residuals = y_hat_test - y_test
#Calculate the Mean Squared Error (MSE)
from sklearn.metrics import mean_squared_error
train_mse = mean_squared_error(y_train, y_hat_train)
test_mse = mean_squared_error(y_test, y_hat_test)
print('Train Mean Squared Error:', train_mse)
print('Test Mean Squared Error:', test_mse)
#Evaluate the effect of train-test split
import random
random.seed(8)
train_err = []
test_err = []
t_sizes = list(range(5,100,5))
for t_size in t_sizes:
temp_train_err = []
temp_test_err = []
for i in range(100):
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=t_size/100)
linreg.fit(X_train, y_train)
y_hat_train = linreg.predict(X_train)
y_hat_test = linreg.predict(X_test)
temp_train_err.append(mean_squared_error(y_train, y_hat_train))
temp_test_err.append(mean_squared_error(y_test, y_hat_test))
train_err.append(np.mean(temp_train_err))
test_err.append(np.mean(temp_test_err))
plt.scatter(t_sizes, train_err, label='Training Error')
plt.scatter(t_sizes, test_err, label='Testing Error')
plt.legend()
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import cross_val_score
cv_5_results = np.mean(cross_val_score(linreg, X, y, cv=5, scoring='neg_mean_squared_error'))
cv_5_results
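# cross_val_score with 'neg_mean_squared_error' returns negative MSE; a small sketch converting it to RMSE for easier interpretation.
print(np.sqrt(-cv_5_results))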
###Output
_____no_output_____ |
dholecenter.ipynb | ###Markdown
mmdholecenter: Hole center misalignment in a PCB. Description: The input image is a binary image of a printed circuit board. The centers of the pads and of the drilled holes are located, and the misalignment (eccentricity) between them is measured.
###Code
import numpy as np
from PIL import Image
import ia870 as ia
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Reading: The image of the PCB is read.
###Code
a_pil = Image.open('data/pcbholes.tif').convert('L')
a = np.array (a_pil)
a = a.astype('bool')
(fig, axes) = plt.subplots(nrows=1, ncols=1,figsize=(5, 5))
axes.set_title('a')
axes.imshow(a, cmap='gray')
axes.axis('off')
###Output
_____no_output_____
###Markdown
First, find the centers of the pads. Use the close-hole function to remove the holes. Note that one hole is open; this is not considered in this experiment. The regional maxima of the distance transform give the radius of the largest disk inside each pad. We are interested only in radii larger than 20 pixels.
###Code
b = ia.iaclohole(a)
d = ia.iadist(b,ia.iasecross(),'EUCLIDEAN')
e = ia.iaregmax(d,ia.iasebox())
f = ia.iathreshad(d, np.uint16(20)) # radius larger than 20 pixels
g = ia.iaintersec(e,f)
h = ia.iablob(ia.ialabel(g,ia.iasebox()),'CENTROID') # pad center
(fig, axes) = plt.subplots(nrows=1, ncols=2,figsize=(10, 5))
axes[0].set_title('b')
axes[0].imshow(b, cmap='gray')
axes[0].axis('off')
axes[1].set_title('b, h')
axes[1].imshow(ia.iagshow(b, ia.iadil(h)).transpose(1, 2, 0))
axes[1].axis('off')
###Output
_____no_output_____
###Markdown
Find the centers of the holes. The holes are given by the difference between the pad image and the original image. Repeat the same procedure used for the pad centers, now to find the hole centers.
###Code
i = ia.iasubm(b,a)
j = ia.iadist(i,ia.iasecross(),'EUCLIDEAN')
k = ia.iaregmax(j,ia.iasebox())
l = ia.iablob(ia.ialabel(k,ia.iasebox()),'CENTROID') # hole center
(fig, axes) = plt.subplots(nrows=2, ncols=2,figsize=(10, 5))
axes[0][0].set_title('i')
axes[0][0].imshow(ia.iagshow(i).transpose(1, 2, 0))
axes[0][0].axis('off')
axes[0][1].set_title('j')
axes[0][1].imshow(d, cmap='gray')
axes[0][1].axis('off')
axes[1][0].set_title('k')
axes[1][0].imshow(ia.iagshow(ia.iadil(k)).transpose(1, 2, 0))
axes[1][0].axis('off')
axes[1][1].set_title('i, l')
axes[1][1].imshow(ia.iagshow(i, ia.iadil(l)).transpose(1, 2, 0))
axes[1][1].axis('off')
###Output
_____no_output_____
###Markdown
Show the eccentricity. First, both sets of centers (pads and holes) are displayed together. Then the actual misalignment is computed as the distance from one point to the other.
###Code
m = ia.iadist(ia.ianeg(l),ia.iasecross(),'EUCLIDEAN');
n = ia.iaintersec(ia.iagray(h),np.uint8(m));
[x,y]=np.nonzero(n);
v = n[np.nonzero(n)]
print (x, y, v)
#fprintf('displacement of %d at (%d,%d)\n',[double(v)';x';y']);
#displacement of 3 at (44,89)
#displacement of 6 at (154,188)
#displacement of 8 at (45,192)
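# Python equivalent of the commented MATLAB fprintf above (a sketch using the arrays x, y, v computed in this cell).
for xi, yi, vi in zip(x, y, v):
    print('displacement of %d at (%d,%d)' % (vi, xi, yi))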
(fig, axes) = plt.subplots(nrows=1, ncols=2,figsize=(10, 5))
axes[0].set_title('a, h, l')
axes[0].imshow(ia.iagshow(a, h, l).transpose(1, 2, 0))
axes[0].axis('off')
axes[1].set_title('n, a')
axes[1].imshow(ia.iagshow(n, a).transpose(1, 2, 0))
axes[1].axis('off')
###Output
[ 43 44 153] [ 88 191 187] [3 8 6]
###Markdown
Find the narrowest region around the holes. First, apply thinning to compute the skeleton of the PCB image, then iteratively remove all end points of the skeleton so that only the skeleton loops around the holes remain. Find the minimum distance of these loops to the border and display their location.
###Code
o=ia.iathin(a)
p=ia.iathin(o,ia.iaendpoints())
q = ia.iadist(a,ia.iasecross(),'EUCLIDEAN')
r = ia.iagrain(ia.ialabel(p,ia.iasebox()),q,'min') # minimum
s = ia.iagrain(ia.ialabel(p,ia.iasebox()),q,'min','data'); # minimum
t = ia.iaintersec(ia.iacmp(r,'==',q),a);
print (2*s+1)
#fprintf('Minimum distance: %d pixels\n',2*double(s)+1);
#Minimum distance: 7 pixels
#Minimum distance: 3 pixels
#Minimum distance: 7 pixels
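# Python equivalent of the commented MATLAB fprintf above (a sketch; s holds the per-loop minimum distances).
for dmin in (2*s + 1):
    print('Minimum distance: %d pixels' % dmin)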
(fig, axes) = plt.subplots(nrows=1, ncols=2,figsize=(10, 5))
axes[0].set_title('a, p')
axes[0].imshow(ia.iagshow(a, p).transpose(1, 2, 0))
axes[0].axis('off')
axes[1].set_title('a, t')
axes[1].imshow(ia.iagshow(a, ia.iadil(t)).transpose(1, 2, 0))
axes[1].axis('off')
###Output
[7. 7. 3.]
|
Preprocessing/Mini_Project_3.ipynb | ###Markdown
###Code
from google.colab import drive
drive.mount("/content/drive/")
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import pandas as pd
import seaborn as sns
import math
sns.set()
data_raw = pd.read_csv(r"/content/drive/MyDrive/Colab Notebooks/NFLRush.csv")
data_raw.head()
data_raw.shape
data_raw.columns
data_raw["Yards"]
len(data_raw[data_raw["PlayId"] == 20181115001638])
specific_play = data_raw[data_raw["PlayId"] == 20181115001638]
"""
1. The features YardLine, Quarter, and PossessionTeam among others tell us important information about this play.
2. The features DisplayName, JerseyNumber, and Dir among others tell us information about each player on the field.
"""
three_players_box = data_raw[data_raw["DefendersInTheBox"] == 3]
data_raw[data_raw["DefendersInTheBox"] == 3].value_counts("PlayId").shape[0]
len(three_players_box), len(three_players_box["PlayId"].unique()), (len(three_players_box) / len(data_raw) * 100)
"""
We know that there are 22 players per play, and 396 players showed up in our query, therefore 396 / 22 = 18 unique plays should show up.
Approx. 0.00077% of plays have three players in the box, therefore it is quite uncommon.
"""
michael_thomas = data_raw[data_raw["DisplayName"] == "Michael Thomas"]
len(michael_thomas["GameId"].unique()) # people with the name Michael Thomas played 50 NFL games
michael_thomas_players = michael_thomas["NflId"].unique()
michael_thomas_players # there are 2 michael thomas's in the dataset!
mt1 = michael_thomas[michael_thomas["NflId"] == michael_thomas_players[0]]
mt2 = michael_thomas[michael_thomas["NflId"] == michael_thomas_players[1]]
len(mt1["GameId"].unique()), len(mt2["GameId"].unique())
# since 32 + 19 = 51, it appears like both of them must have played in the same game!
print(data_raw[data_raw["DisplayName"] == "Michael Thomas"].groupby(["NflId"])["GameId"].value_counts())
print(data_raw[data_raw["DisplayName"] == "Michael Thomas"].groupby(["NflId", "GameId"])["PlayId"].count())
data_raw.columns
data_raw.groupby(["Stadium", "Season"])["PlayId"].count()
data_raw["WindDirection"].unique()
sns.distplot(data_raw["Yards"], kde=True)
data_raw["Yards"].describe()
"""
The distribution looks to be skewed, as one of the tails of the graph is significantly larger than the other. Most of the values lie between
0 and approx. 10 yards, which makes sense if you know the game of football and rushing. I could not see the max yard gain from the graph alone,
so I used the describe function above and the max yard gain appears to be 99 yards.
"""
sns.boxplot(x = "Yards", data = data_raw)
"""
In comparison to the distribution plot, the boxplot shows the volume of plays that resulted in a gain of 20+ yards, whereas in the distribution
plot, I nearly assumed that the amount of such plays were near 0 due to the line being nearly close to the origin. When I did a basic boolean
filter command to filter the number of plays that resulted in greater than a 50 yard gain, I got 1518 plays.
Something that could be potentially misleading about a boxplot is that the majority of the datapoints take up a minimal amount of space.
The box in the plot represents the 25th - 75th percentile of datapoints, yet takes up far less space than the supposed "outliers" despite
representing far more datapoints.
"""
sns.distplot(data_raw["Y"], kde=True)
sns.distplot(data_raw["Humidity"], kde=True)
sns.distplot(data_raw["A"], kde=True)
"""
What occurred in the graph of "Y" is explainable if you understand the game of football. We can see in the graph that there is a global
maximum around Y = 27 (approx.), with local maxima at Y = 10 and Y = 43 (approx.). The maximum and minimum values for Y are 53.3 and 0,
respectively. In every football play, there is generally a cluster of players in the middle of the field (approx. Y = 27), with receivers
and cornerbacks lined up at the edges of the field (Y = 10 and Y = 43). This explains the local and global maxima that we saw in the
graph.
"""
sns.regplot(x="YardLine", y="Yards", data=data_raw, ci = None)
"""
If you add up the YardLine value and the Yards value for all points all the diagonal line, you will always get 100.
This is because these are all plays that resulted in a touchdown.
"""
sns.boxplot(x=data_raw["Season"], y=data_raw["Humidity"])
"""
I'm assuming the goal for this part was to examine the relationship between the seasons and humidity during those seasons.
Based off this goal, I would say the bivariate boxplot is useful for examining this relationship. We can see that the humidity
values in 2018 are higher than 2017, with boxplot ranges that are approx. 5 degrees higher.
"""
sns.violinplot(x="Season", y="Humidity", data=data_raw)
"""
The one feature that violin plots have that boxplots do not have is that violin plots show the probability that a datapoint will take on
a certain value through the thickness. The violin plot shows that the difference between the medians of 2017 and 2018 is 4-5 degrees, but
it does not appear to be as drastic as the difference shown in the box plots. The two describe cells below back up these claims.
"""
data_raw[data_raw["Season"] == 2017]["Humidity"].describe()
data_raw[data_raw["Season"] == 2018]["Humidity"].describe()
sns.scatterplot(x="Down", y="Yards", data=data_raw) # interesting that plays on fourth down do not yield as many yards
sns.scatterplot(x="Humidity", y="Yards", hue="Season", data=data_raw)
sns.scatterplot(x="Humidity", y="Yards", data=data_raw)
sns.scatterplot(x="Stadium", y="Yards", data=data_raw)
"""
Humidity and Stadium don't affect yard gain for the most part. Therefore, these features would not be useful to the model. Since the
yards feature is continuous, I would assume we would first start with a Linear Regression model to predict it. A good model would assign
near-zero coefficients to features such as Humidity and Stadium that don't impact yard gain.
"""
###Output
_____no_output_____ |
craftworks-pm/.ipynb_checkpoints/1 - Predictive Maintenance - Modeling-checkpoint.ipynb | ###Markdown
Model buildingTry to build a model that is able to predict the machine status.
###Code
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
%matplotlib inline
plt.rcParams['figure.figsize'] = [15, 5]
from pandas import HDFStore
from sklearn.utils import shuffle
from sklearn.metrics import classification_report, confusion_matrix
###Output
_____no_output_____
###Markdown
Data. The data is already preprocessed (resampled, standardized, NaN-filtered, split). The dependent variable (column 'machine_status') is strongly unbalanced across its states. States: * NORMAL * RECOVERING * FAILING * AFTERMATH * BROKEN
###Code
hdf = HDFStore('data/preprocessed.h5')
training = hdf['training']
validation = hdf['validation']
testing = hdf['testing']
training.head(5)
xColumns = list(training.columns[training.columns.str.contains('sensor_')])
xColumns += ['hour']
yColumns = 'machine_status'
xTrain, yTrain = training[xColumns], training[yColumns]
xVal, yVal = validation[xColumns], validation[yColumns]
xTest, yTest = testing[xColumns], testing[yColumns]
xTest.shape
yTest.shape
xTrain.head(5)
[df.value_counts() for df in [yTrain, yVal, yTest]]
###Output
_____no_output_____
###Markdown
1. Data preprocessing and balancing classes. Do a new preprocessing step where we sample from the normal and the failing data by randomly selecting a point in time X and taking the data from X-N:X.
###Code
from IPython.core.debugger import set_trace as st
def sampleByIndex(X, Y, N, state, newYValue, sequence_length='30min', index_masking='30min', seed=42):
"""
Generate a numpy array containing N sequences sampled the given dataset (X and Y).
As input only a subset of the data is used for which Y == state.
Note:
The index_masking is used in order to make sure that sampling is
not trying to generate sequences that go beyond the start of the dataset
Parameters
----------
X, Y : pd.Dataframe
The dataframes containing the independent and dependent variables
state :
Y state of the data to consider
newYValue :
Value of the generated Y data
sequence_length : str
Pandas Timestamp string defining sequence length
index_masking : str
Mask the beginning of the dataset
seed : int
define seed for sampling process
Returns
-------
np.array : (N, seq_length, nFeatures)
X sequences
np.array : N
Y
"""
np.random.seed(seed)
Ymasked = Y.loc[Y.index[0] + pd.Timedelta(index_masking):]
Iend = np.random.choice(Ymasked[Ymasked == state].index, N)
Istart = Iend - pd.Timedelta(sequence_length)
sequences = []
for start, end in zip(Istart, Iend):
try:
sequences.append(X[start:end].values)
except Exception as e:
print('Some error with %s-%s: %s' % (start, end, e))
sequences = np.array(sequences)
newY = np.ones(sequences.shape[0]) * newYValue
return sequences, newY
def createSequenceWithSampling(X, Y, N, seed=42):
xseq0, yseq0 = sampleByIndex(X, Y, N, 'NORMAL', 0, seed=seed)
xseq1, yseq1 = sampleByIndex(X, Y, N, 'FAILING', 1, seed=seed)
xseq = np.concatenate((xseq0, xseq1))
yseq = np.concatenate((yseq0, yseq1))
xseq = shuffle(xseq, random_state=seed)
yseq = shuffle(yseq, random_state=seed)
return xseq, yseq
xTrainSeq, yTrainSeq = createSequenceWithSampling(xTrain, yTrain, 1000)
xValidSeq, yValidSeq = createSequenceWithSampling(xVal, yVal, 200)
xTestSeq, yTestSeq = createSequenceWithSampling(xTest, yTest, 100)
xTrainSeq.shape
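# Sanity check (a sketch): the sampled training sequences should be evenly split between the two classes.
print(np.unique(yTrainSeq, return_counts=True))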
###Output
_____no_output_____
###Markdown
Old data preprocessing. Perform class balancing by generating sequences of the FAILING class through a sliding window over the FAILING data segment.
###Code
SEQUENCE_LENGTH = 5
NFEATURES=xTrain.shape[1]
def toSequence(X, Y, sequence_length, balance_classes=False, seed=None):
normal = X[Y == 'NORMAL']
failing = X[Y == 'FAILING']
x_normal_ = []
windows = np.arange(0, normal.shape[0], step=sequence_length)
for start, stop in zip(windows[0:-1], windows[1:]):
x_normal_.append(normal[start:stop].values)
x_normal_ = np.array(x_normal_)
y_normal_ = np.zeros(x_normal_.shape[0])
#y_normal_ = np.repeat(np.array([1, 0])[None, :], x_normal_.shape[0], axis=0)
#print('# Normal samples: %d' % x_normal_.shape[0])
x_failing_ = []
for start, stop in zip(range(0, failing.shape[0]-sequence_length), range(sequence_length, failing.shape[0])):
x_failing_.append(failing[start:stop].values)
x_failing_ = np.array(x_failing_)
y_failing_ = np.ones(x_failing_.shape[0])
#y_failing_ = np.repeat(np.array([0, 1])[None, :], x_failing_.shape[0], axis=0)
#print('# Failing samples: %d' % x_failing_.shape[0])
if balance_classes:
# Sample with replacement from the failing dataset
np.random.seed(seed)
rnd_elements = np.random.randint(0, x_failing_.shape[0], x_normal_.shape[0])
x_failing_ext = x_failing_[rnd_elements]
y_failing_ext = np.ones(x_failing_ext.shape[0])
#y_failing_ext = np.repeat(np.array([0, 1])[None, :], x_failing_ext.shape[0], axis=0)
x_failing_ = x_failing_ext
y_failing_ = y_failing_ext
#return x_normal_, x_failing_
# Join and shuffle the data
X_ = np.concatenate((x_normal_, x_failing_))
Y_ = np.concatenate((y_normal_, y_failing_))
# Now shuffle the array no have no artificats during training
X_ = shuffle(X_, random_state=seed)
Y_ = shuffle(Y_, random_state=seed)
return X_, Y_
xTrain_, yTrain_ = toSequence(xTrain, yTrain, 6, balance_classes=True, seed=100)
xVal_, yVal_ = toSequence(xVal, yVal, 6, balance_classes=True, seed=100)
xTest_, yTest_ = toSequence(xTest, yTest, 6, balance_classes=True, seed=100)
###Output
_____no_output_____
###Markdown
2. Model evaluation toolkit. Build a set of tools to evaluate model training progression and performance.
###Code
from sklearn.metrics import precision_score, recall_score, accuracy_score, f1_score
def score_results(yTrue, yPred,
scores = {'precision': precision_score,
'recall': recall_score,
'accuracy': accuracy_score,
'f1': f1_score} ):
return {n: fu(yTrue, yPred) for n, fu in scores.items()}
def plot_acc(history, title="Model Accuracy", ax=None):
if ax is None:
fig, ax = plt.subplots(nrows=1, ncols=1)
ax.plot(history.history['acc'])
ax.plot(history.history['val_acc'])
ax.set_title(title)
ax.set_ylabel('Accuracy')
ax.set_xlabel('Epoch')
ax.legend(['Train', 'Val'], loc='upper left')
def plot_loss(history, title="Model Loss", ax=None):
if ax is None:
fig, ax = plt.subplots(nrows=1, ncols=1)
ax.plot(history.history['loss'])
ax.plot(history.history['val_loss'])
ax.set_title(title)
ax.set_ylabel('Loss')
ax.set_xlabel('Epoch')
ax.legend(['Train', 'Val'], loc='upper right')
def plot_training(history):
fig, axs = plt.subplots(nrows=1, ncols=2)
plot_acc(history=history, ax=axs[0])
plot_loss(history=history, ax=axs[1])
fig.suptitle('Training Progress')
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from matplotlib import pyplot as plt
def plot_ROC(yTest, yPred, ax=None):
ns_probs = [0] * len(yPred)
ns_auc = roc_auc_score(yTest, ns_probs)
pre_auc = roc_auc_score(yTest, yPred)
# calculate roc curves
ns_fpr, ns_tpr, _ = roc_curve(yTest, ns_probs)
lr_fpr, lr_tpr, thrds = roc_curve(yTest, yPred)
# Calculate metrices
acc = []
prec = []
reca = []
f1 = []
for th in thrds:
tn, fp, fn, tp = confusion_matrix(yTest, yPred > th).ravel()
acc.append((tp+tn)/(tn+fp+fn+tp))
prec.append(tp/(tp+fp))
reca.append(tp/(tp+fn))
f1.append((2*prec[-1]*reca[-1])/(prec[-1]+reca[-1]))
bThreshold = thrds[np.nanargmax(f1)]
if ax is None:
fig, ax = plt.subplots(nrows=1, ncols=1)
# plot the roc curve for the model
ax.plot(ns_fpr, ns_tpr, linestyle='--', label='Random')
ax.plot(lr_fpr, lr_tpr, marker='.', label='Model')
#plt.plot(lr_fpr, acc, marker='o', label='accuracy')
#plt.plot(lr_fpr, f1, marker='o', label='F1')
ax.set_xlabel('False Positive Rate')
ax.set_ylabel('True Positive Rate')
# show the legend
ax.legend()
ax.set_title("ROC Curve\nAUC=%.3f, F1 Peak@Threshold %.3f" % (pre_auc, bThreshold))
# summarize scores
#print('Random: ROC AUC=%.3f' % (ns_auc))
#print('\nModel: ROC AUC=%.3f' % (pre_auc))
#print('\nF1 peak at threshold: %.3f' %(bThreshold))
return pre_auc, bThreshold, (lr_fpr, lr_tpr, f1)
def plot_confusion_matrix(yTrue, yPred, title='Confusion matrix', display_labels=['NORMAL', 'FAILING'], cmap=plt.cm.Blues, normalize=False, ax=None):
cm = confusion_matrix(yTrue, yPred)
if normalize:
cm = cm / cm.sum()
if ax is None:
fig, ax = plt.subplots(nrows=1, ncols=1)
else:
fig=ax.get_figure()
ax.imshow(cm, interpolation='nearest', cmap=cmap)
ax.set_title(title)
#fig.colorbar()
tick_marks = np.arange(len(display_labels))
ax.set_xticks(tick_marks)
ax.set_yticks(tick_marks)
ax.set_xticklabels(display_labels, rotation=45)
ax.set_yticklabels(display_labels)
for (j,i),label in np.ndenumerate(cm):
ax.text(i,j,cm[i, j],ha='center',va='center')
fig.tight_layout()
ax.set_ylabel('True label')
ax.set_xlabel('Predicted label')
def show_analysis(yTest, yPred):
"""
Plot a ROC curve, and use the threshold that produces the highest F1 score
to plot a confusion matrix
"""
fig, axs = plt.subplots(nrows=1, ncols=2)
auc, bT, _ = plot_ROC(yTest, yPred, ax=axs[0])
plot_confusion_matrix(yTest, yPred>bT, ax=axs[1])
###Output
_____no_output_____
###Markdown
3.1 LSTM sequence model. Use an LSTM model with a sequence of the sensor values.
###Code
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
from keras.regularizers import l1, l2
from keras.callbacks import EarlyStopping
import keras
SEQUENCE_LENGTH = xTrainSeq.shape[1]
NFEATURES=xTrainSeq.shape[2]
# From: https://machinelearningmastery.com/how-to-develop-rnn-models-for-human-activity-recognition-time-series-classification/
model = Sequential()
model.add(LSTM(50, input_shape=(SEQUENCE_LENGTH,NFEATURES)))
#model.add(Dropout(0.5))
model.add(Dense(20, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
adam = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, amsgrad=False)
model.compile(loss='binary_crossentropy', optimizer=adam, metrics=['accuracy'])
%time history = model.fit(xTrainSeq, yTrainSeq, epochs=10, batch_size=64, verbose=1, validation_data=(xValidSeq, yValidSeq))
plot_training(history)
yPred = model.predict(xTestSeq)
show_analysis(yTestSeq, yPred)
###Output
/Users/manuel.pasieka/anaconda3/envs/py3/lib/python3.6/site-packages/ipykernel_launcher.py:59: RuntimeWarning: invalid value encountered in long_scalars
/Users/manuel.pasieka/anaconda3/envs/py3/lib/python3.6/site-packages/ipykernel_launcher.py:61: RuntimeWarning: invalid value encountered in double_scalars
###Markdown
3.2 Regularized LSTM. Add regularization and early stopping.
###Code
model = Sequential()
model.add(LSTM(50, input_shape=(SEQUENCE_LENGTH,NFEATURES)))
model.add(Dense(20, activation='relu', kernel_regularizer=l2(0.01)))
model.add(Dropout(0.5))
model.add(Dense(10, activation='relu', kernel_regularizer=l2(0.01)))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid', kernel_regularizer=l2(0.01)))
adam = keras.optimizers.Adam(lr=0.0005, beta_1=0.9, beta_2=0.999, amsgrad=False)
model.compile(loss='binary_crossentropy', optimizer=adam, metrics=['accuracy'])
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=2)
%time history = model.fit(xTrainSeq, yTrainSeq, epochs=10, batch_size=64, verbose=1, validation_data=(xValidSeq, yValidSeq), callbacks=[es])
plot_training(history)
yPred = model.predict(xTestSeq)
show_analysis(yTestSeq, yPred)
###Output
/Users/manuel.pasieka/anaconda3/envs/py3/lib/python3.6/site-packages/ipykernel_launcher.py:59: RuntimeWarning: invalid value encountered in long_scalars
/Users/manuel.pasieka/anaconda3/envs/py3/lib/python3.6/site-packages/ipykernel_launcher.py:61: RuntimeWarning: invalid value encountered in double_scalars
###Markdown
3.3 A convolutional sequence model. Adapted from https://machinelearningmastery.com/how-to-develop-rnn-models-for-human-activity-recognition-time-series-classification/
###Code
from keras.layers import TimeDistributed
from keras.layers.convolutional import Conv1D
from keras.layers.convolutional import MaxPooling1D
from keras.layers import Flatten
from keras.regularizers import l1
# define model
verbose, epochs, batch_size = 0, 25, 128
n_timesteps, n_features, n_outputs = xTrainSeq.shape[1], xTrainSeq.shape[2], yTrainSeq[0]
# define model
model = Sequential()
model.add(Conv1D(filters=32, kernel_size=3, activation='relu', input_shape=(n_timesteps,n_features), activity_regularizer=l1(0.0005)))
model.add(Conv1D(filters=32, kernel_size=3, activation='relu', activity_regularizer=l1(0.0005)))
model.add(Dropout(0.5))
model.add(MaxPooling1D(pool_size=2))
#model.add(Flatten())
model.add(LSTM(100, activity_regularizer=l1(0.01)))
model.add(Dropout(0.5))
model.add(Dense(100, activation='relu', activity_regularizer=l1(0.01)))
model.add(Dropout(0.5))
model.add(Dense(20, activation='relu', activity_regularizer=l1(0.01)))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=2)
%time history = model.fit(xTrainSeq, yTrainSeq, epochs=20, batch_size=64, verbose=1, validation_data=(xValidSeq, yValidSeq), callbacks=[es])
plot_training(history)
yPred = model.predict(xTestSeq)
show_analysis(yTestSeq, yPred)
yPred
###Output
_____no_output_____ |
C1-Introduction to Data Science in Python/Assignments/Week2/Assignment+2.ipynb | ###Markdown
---_You are currently looking at **version 1.2** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._--- Assignment 2 - Pandas IntroductionAll questions are weighted the same in this assignment. Part 1The following code loads the olympics dataset (olympics.csv), which was derrived from the Wikipedia entry on [All Time Olympic Games Medals](https://en.wikipedia.org/wiki/All-time_Olympic_Games_medal_table), and does some basic data cleaning. The columns are organized as of Summer games, Summer medals, of Winter games, Winter medals, total number of games, total of medals. Use this dataset to answer the questions below.
###Code
import pandas as pd
df = pd.read_csv('olympics.csv', index_col=0, skiprows=1)
for col in df.columns:
if col[:2]=='01':
df.rename(columns={col:'Gold'+col[4:]}, inplace=True)
if col[:2]=='02':
df.rename(columns={col:'Silver'+col[4:]}, inplace=True)
if col[:2]=='03':
df.rename(columns={col:'Bronze'+col[4:]}, inplace=True)
if col[:1]=='№':
df.rename(columns={col:'#'+col[1:]}, inplace=True)
names_ids = df.index.str.split('\s\(') # split the index by '('
df.index = names_ids.str[0] # the [0] element is the country name (new index)
df['ID'] = names_ids.str[1].str[:3] # the [1] element is the abbreviation or ID (take first 3 characters from that)
df = df.drop('Totals')
df.head()
###Output
_____no_output_____
###Markdown
Question 0 (Example)What is the first country in df?*This function should return a Series.*
###Code
# You should write your whole answer within the function provided. The autograder will call
# this function and compare the return value against the correct solution value
def answer_zero():
# This function returns the row for Afghanistan, which is a Series object. The assignment
# question description will tell you the general format the autograder is expecting
return df.iloc[0]
# You can examine what your function returns by calling it in the cell. If you have questions
# about the assignment formats, check out the discussion forums for any FAQs
answer_zero()
###Output
_____no_output_____
###Markdown
Question 1Which country has won the most gold medals in summer games?*This function should return a single string value.*
###Code
def answer_one():
mostgold = df['Gold'].idxmax()
return str(mostgold)
answer_one()
###Output
_____no_output_____
###Markdown
Question 2Which country had the biggest difference between their summer and winter gold medal counts?*This function should return a single string value.*
###Code
def answer_two():
dfcopy = df.copy()
bigdiff = dfcopy['Gold']-dfcopy['Gold.1']
country = bigdiff.idxmax()
return str(country)
answer_two()
###Output
_____no_output_____
###Markdown
Question 3Which country has the biggest difference between their summer gold medal counts and winter gold medal counts relative to their total gold medal count? $$\frac{Summer~Gold - Winter~Gold}{Total~Gold}$$Only include countries that have won at least 1 gold in both summer and winter.*This function should return a single string value.*
###Code
def answer_three():
dfcopy = df.copy()
atleast1gold = df[(dfcopy['Gold'] > 0) & (dfcopy['Gold.1'] > 0)]
bigdiff = atleast1gold['Gold']-atleast1gold['Gold.1']
totalgold = atleast1gold['Gold'] + atleast1gold['Gold.1']
reldiff = bigdiff/totalgold
country = reldiff.idxmax()
return str(country)
answer_three()
###Output
_____no_output_____
###Markdown
Question 4Write a function that creates a Series called "Points" which is a weighted value where each gold medal (`Gold.2`) counts for 3 points, silver medals (`Silver.2`) for 2 points, and bronze medals (`Bronze.2`) for 1 point. The function should return only the column (a Series object) which you created, with the country names as indices.*This function should return a Series named `Points` of length 146*
###Code
def answer_four():
dfcopy = df.copy()
weightedsum = dfcopy['Gold.2'] * 3 + dfcopy['Silver.2'] * 2 + dfcopy['Bronze.2'] * 1
dfcopy['Points'] = weightedsum
return pd.Series(dfcopy['Points'])
answer_four()
###Output
_____no_output_____
###Markdown
Part 2For the next set of questions, we will be using census data from the [United States Census Bureau](http://www.census.gov). Counties are political and geographic subdivisions of states in the United States. This dataset contains population data for counties and states in the US from 2010 to 2015. [See this document](https://www2.census.gov/programs-surveys/popest/technical-documentation/file-layouts/2010-2015/co-est2015-alldata.pdf) for a description of the variable names.The census dataset (census.csv) should be loaded as census_df. Answer questions using this as appropriate. Question 5Which state has the most counties in it? (hint: consider the sumlevel key carefully! You'll need this for future questions too...)*This function should return a single string value.*
###Code
census_df = pd.read_csv('census.csv')
census_df.head()
def answer_five():
counties = census_df.groupby(['STNAME'])['COUNTY'].count()
mostcounties = counties.idxmax()
return str(mostcounties)
answer_five()
###Output
_____no_output_____
###Markdown
Question 6**Only looking at the three most populous counties for each state**, what are the three most populous states (in order of highest population to lowest population)? Use `CENSUS2010POP`.*This function should return a list of string values.*
###Code
def answer_six():
countysum = census_df[census_df['SUMLEV'] == 50]
countyorder = countysum.sort_values(by='CENSUS2010POP', ascending=False).groupby('STNAME').head(3)
popstates = countyorder.groupby('STNAME').sum().sort_values(by='CENSUS2010POP', ascending=False).head(3)
return popstates.index.tolist()
answer_six()
###Output
_____no_output_____
###Markdown
Question 7Which county has had the largest absolute change in population within the period 2010-2015? (Hint: population values are stored in columns POPESTIMATE2010 through POPESTIMATE2015, you need to consider all six columns.)e.g. If County Population in the 5 year period is 100, 120, 80, 105, 100, 130, then its largest change in the period would be |130-80| = 50.*This function should return a single string value.*
###Code
def answer_seven():
census_dfc = census_df.copy()
census_dfc['POPLO'] = census_dfc.loc[:,'POPESTIMATE2010':'POPESTIMATE2015'].min(axis=1)
census_dfc['POPHI']= census_dfc.loc[:,'POPESTIMATE2010':'POPESTIMATE2015'].max(axis=1)
census_dfc['POPDIFF'] = (census_dfc['POPHI'] - census_dfc['POPLO'])
census_dfc.set_index('CTYNAME', inplace=True)
return census_dfc['POPDIFF'].idxmax()
answer_seven()
###Output
_____no_output_____
###Markdown
Question 8In this datafile, the United States is broken up into four regions using the "REGION" column. Create a query that finds the counties that belong to regions 1 or 2, whose name starts with 'Washington', and whose POPESTIMATE2015 was greater than their POPESTIMATE 2014.*This function should return a 5x2 DataFrame with the columns = ['STNAME', 'CTYNAME'] and the same index ID as the census_df (sorted ascending by index).*
###Code
def answer_eight():
result = census_df[((census_df['REGION'] == 1) | (census_df['REGION'] == 2)) & (census_df['CTYNAME'] == 'Washington County') & (census_df['POPESTIMATE2015'] > census_df['POPESTIMATE2014'])][['STNAME','CTYNAME']]
return result
answer_eight()
###Output
_____no_output_____ |
Notebooks/NLP classificar clube de acordo o titulo da postagem .ipynb | ###Markdown
**NLP: classify the club from the post title** Using NLP tools, I built a classifier to tell whether a post title refers to Flamengo or Corinthians. > **Description and dataset (taken from Kaggle)** > > This dataset aims to provide a sample of real-world data covering a reasonable period of time. It contains a column with the club name, which can be treated as a class. > > **Link**: > [https://www.kaggle.com/lgmoneda/ge-soccer-clubs-news/](https://www.kaggle.com/lgmoneda/ge-soccer-clubs-news/)
###Code
# Import libraries
import pandas as pd
import seaborn as sns
import nltk
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn import metrics
# Load the dataset
file = 'https://raw.githubusercontent.com/CidOSJr/data-science-portfolio/master/datasets/clubs_news.csv'
df = pd.read_csv(file)
df.head()
df.shape
# Check the number of posts per club:
(df['club'].value_counts() / df.shape[0]) * 100
sns.countplot(data=df, x='club')
# Create a new column for preprocessing
df['desc'] = df['title'].copy()
df[['title', 'desc']].head(20)
# Data preprocessing: strip punctuation, symbols, and digits
df['desc'] = df['desc'].\
str.replace(r'[,.;:?!]+', '', regex=True).\
str.replace(r'[/<>()|\-\$%#@\'\""]+', '', regex=True).\
str.replace(r'[0-9]+', '', regex=True).copy()
# Load the stop words, i.e. words that do not change the meaning of the text
stopwords = nltk.corpus.stopwords.words('portuguese')
# Instantiate CountVectorizer to tokenize and convert the text into a document-term matrix.
# Parameters: strip_accents (remove accents), lowercase (lowercase the text),
# and stop_words (words to ignore)
cvt = CountVectorizer(
strip_accents='ascii',
lowercase=True,
stop_words=stopwords
)
X_cvt = cvt.fit_transform(df['desc'])
# View the resulting matrix
print(X_cvt.toarray())
# Normalize with TF-IDF
tfi = TfidfTransformer(use_idf=True)
X_tfi = tfi.fit_transform(X_cvt)
# Split the dataset into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
X_tfi,
df['club'],
test_size=0.2
)
# Instantiate and fit the classifier
clf = LinearSVC().fit(X_train, y_train)
y_pred = clf.predict(X_test)
# Simple test to check the model's accuracy
print(metrics.accuracy_score(y_test, y_pred))
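# A fuller picture than accuracy alone (a sketch): per-club precision, recall and F1 on the test set.
print(metrics.classification_report(y_test, y_pred))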
###Output
0.9364799294221438
###Markdown
Time to test the classifier
###Code
texto_ge = 'O Som do Jogo: final sem torcida cria um Dérbi como você nunca viu; assista'
def novo_titulo(titulo):
novo_cvt = cvt.transform(pd.Series(titulo))
novo_tfi = tfi.transform(novo_cvt)
clube = clf.predict(novo_tfi)[0]
return clube
novo_titulo(texto_ge)
###Output
_____no_output_____ |
secao18 - K Nearest Neighbors(KNN)/aula88_KNN.ipynb | ###Markdown
**We need to normalize when working with KNN (without normalization the model will rely only on the features with the largest values and will disregard the other parameters, whose variations will not be as relevant).**
###Code
scaler = StandardScaler()
scaler.fit(df.drop('TARGET CLASS', axis=1))
df_normalizado = scaler.transform(df.drop('TARGET CLASS', axis=1)) # New array holding the normalized data
df_normalizado
df_param = pd.DataFrame(df_normalizado, columns=df.columns[:-1])
df_param.head()
###Output
_____no_output_____
###Markdown
Now let's use the data to build the ML model using KNN
###Code
# X will be the normalized data; y will be just the target column we want to predict
X_train, X_test, y_train, y_test = train_test_split(df_param, df['TARGET CLASS'], test_size=0.3)
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
predictions = knn.predict(X_test)
print(classification_report(y_test, predictions))
print(confusion_matrix(y_test, predictions))
# Elbow method
error_rate = []
for i in range(1, 40):
knn = KNeighborsClassifier(n_neighbors=i)
knn.fit(X_train, y_train)
predictions = knn.predict(X_test)
error_rate.append(np.mean(predictions!=y_test))
plt.figure(figsize=(14, 8))
plt.plot(range(1, 40), error_rate, color='blue', linestyle='dashed', marker='o')
plt.xlabel('K')
plt.ylabel('Error rate')
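# A small helper (a sketch): pick the k with the lowest error on the elbow curve computed above.
best_k = int(np.argmin(error_rate)) + 1
print('k with the lowest error rate:', best_k)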
knn1 = KNeighborsClassifier(n_neighbors=20)
knn1.fit(X_train, y_train)
predictions1 = knn1.predict(X_test)
print(classification_report(y_test, predictions1))
knn2 = KNeighborsClassifier(n_neighbors=30)
knn2.fit(X_train, y_train)
predictions2 = knn2.predict(X_test)
print(classification_report(y_test, predictions2))
tn, fp, fn, tp = confusion_matrix(y_test, predictions2).ravel()
tn, fp, fn, tp
###Output
_____no_output_____ |
cylinder/cylgrid0_sst_iddes_matula_03_Re3p6M/plotForcesReDiff.ipynb | ###Markdown
Plot forces for flow past a cylinder (grid0 case). Compare differences with Reynolds number.
###Code
%%capture
import sys
sys.path.insert(1, '../utilities')
import litCdData
import numpy as np
import matplotlib.pyplot as plt
## Some needed functions for postprocessing
def concatforces(filelist):
"""
Concatenate all the data in a list of files given by filelist, without overlaps in time
"""
for ifile, file in enumerate(filelist):
dat=np.loadtxt(file, skiprows=1)
if ifile==0:
alldat = dat
else:
lastt = alldat[-1,0] # Get the last time
filt = dat[:,0]>lastt
gooddat = dat[filt,:]
alldat = np.vstack((alldat, gooddat))
return alldat
# Calculate time average
def timeaverage(time, f, t1, t2):
filt = ((time[:] >= t1) & (time[:] <= t2))
# Filtered time
t = time[filt]
# The total time
dt = np.amax(t) - np.amin(t)
# Filtered field
filtf = f[filt]
# Compute the time average as an integral
avg = np.trapz(filtf, x=t, axis=0) / dt
return avg
def tukeyWindow(N, params={'alpha':0.1}):
"""
The Tukey window
see https://en.wikipedia.org/wiki/Window_function#Tukey_window
"""
alpha = params['alpha']
w = np.zeros(N)
L = N+1
for n in np.arange(0, int(N//2) + 1):
if ((0 <= n) and (n < 0.5*alpha*L)):
w[n] = 0.5*(1.0 - np.cos(2*np.pi*n/(alpha*L)))
elif ((0.5*alpha*L <= n) and (n <= N/2)):
w[n] = 1.0
else:
print("Something wrong happened at n = ",n)
if (n != 0): w[N-n] = w[n]
return w
# FFT's a signal, returns 1-sided frequency and spectra
def getFFT(t, y, normalize=False, window=True):
"""
FFT's a signal, returns 1-sided frequency and spectra
"""
n = len(y)
k = np.arange(n)
dt = np.mean(np.diff(t))
frq = k/(n*dt)
if window: w = tukeyWindow(n)
else: w = 1.0
if normalize: L = len(y)
else: L = 1.0
FFTy = np.fft.fft(w*y)/L
# Take the one sided version of it
freq = frq[range(int(n//2))]
FFTy = FFTy[range(int(n//2))]
return freq, FFTy
# Basic problem parameters
D = 6 # Cylinder diameter
U = 20 # Freestream velocity
Lspan = 24 # Spanwise length
A = D*Lspan # frontal area
rho = 1.225 # density
Q = 0.5*rho*U*U # Dynamic head
vis = 1.8375e-5 # viscosity
ReNum = rho*U*D/vis # Reynolds number
#avgt = [160.0, 260.0] # Average times
saveinfo = False
alldata = []
# Label, Filenames averaging times
runlist = [['Re=8.0M', ['../cylgrid0_sst_iddes_matula_01/forces86m.dat'], [150, 600], {'vis':1.8375e-5}],
['Re=3.6M', ['forces36m.dat'], [300, 800], {'vis':4.0833333333333334e-05}],
]
alldata = []
for run in runlist:
forcedat = concatforces(run[1])
t = forcedat[:,0]*U/D # Non-dimensional time
alldata.append([run[0], t, forcedat, run[2], run[3]])
#print(alldata)
print('%30s %10s %10s'%("Case", "avgCd", "avgCl"))
for run in alldata:
label = run[0]
t = run[1]
forcedat = run[2]
avgt = run[3]
Cd = (forcedat[:,1]+forcedat[:,4])/(Q*A)
Cl = (forcedat[:,2]+forcedat[:,5])/(Q*A)
# Calculate averaged Cp, Cd
avgCd = timeaverage(t, Cd, avgt[0], avgt[1])
avgCl = timeaverage(t, Cl, avgt[0], avgt[1])
print('%30s %10f %10f'%(label, avgCd, avgCl))
#print("Avg Cd = %f"%avgCd)
#%print("Avg Cl = %f"%avgCl)
###Output
Case avgCd avgCl
Re=8.0M 0.363020 0.004059
Re=3.6M 0.455503 0.002967
###Markdown
Plot Lift and Drag coefficients
###Code
plt.rc('font', size=16)
plt.figure(figsize=(10,8))
for run in alldata:
label = run[0]
t = run[1]
forcedat = run[2]
avgt = run[3]
Cd = (forcedat[:,1]+forcedat[:,4])/(Q*A)
Cl = (forcedat[:,2]+forcedat[:,5])/(Q*A)
# Calculate averaged Cp, Cd
avgCd = timeaverage(t, Cd, avgt[0], avgt[1])
avgCl = timeaverage(t, Cl, avgt[0], avgt[1])
#print('%30s %f %f'%(label, avgCd, avgCl))
plt.plot(t,Cd, label=label)
plt.hlines(avgCd, np.min(t), np.max(t), linestyles='dashed', linewidth=1)
plt.xlabel(r'Non-dimensional time $t U_{\infty}/D$');
plt.legend()
plt.ylabel('$C_D$')
plt.title('Drag coefficient $C_D$');
plt.figure(figsize=(10,8))
for run in alldata:
label = run[0]
t = run[1]
forcedat = run[2]
avgt = run[3]
Cd = (forcedat[:,1]+forcedat[:,4])/(Q*A)
Cl = (forcedat[:,2]+forcedat[:,5])/(Q*A)
# Calculate averaged Cp, Cd
avgCd = timeaverage(t, Cd, avgt[0], avgt[1])
avgCl = timeaverage(t, Cl, avgt[0], avgt[1])
plt.plot(t,Cl, label=label)
plt.hlines(avgCl, np.min(t), np.max(t), linestyles='dashed', linewidth=1)
plt.xlabel(r'Non-dimensional time $t U_{\infty}/D$');
plt.ylabel('$C_l$')
plt.title('Lift coefficient $C_l$');
plt.legend()
###Output
_____no_output_____
###Markdown
Plot Spectra
###Code
plt.figure(figsize=(10,8))
for run in alldata:
label = run[0]
t = run[1]
forcedat = run[2]
avgt = run[3]
filt = ((t[:] >= avgt[0]) & (t[:] <= avgt[1]))
tfiltered = t[filt]*D/U
Cd = (forcedat[:,1]+forcedat[:,4])/(Q*A)
Cl = (forcedat[:,2]+forcedat[:,5])/(Q*A)
Cdfiltered = Cd[filt]
Clfiltered = Cl[filt]
f, Cdspectra = getFFT(tfiltered, Cdfiltered, normalize=True)
f, Clspectra = getFFT(tfiltered, Clfiltered, normalize=True)
plt.loglog(f*D/U, abs(Clspectra), label='Cl '+label)
plt.axvline(0.37, linestyle='--', color='gray')
plt.xlim([1E-2,2]);
plt.ylim([1E-8, 1E-1]);
plt.xlabel(r'$f*D/U_\infty$');
plt.ylabel(r'$|\hat{C}_{l,d}|$')
plt.legend()
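# Rough estimate (a sketch): Strouhal number from the peak of the lift spectrum of the last run processed in the loop above.
St = f[np.argmax(abs(Clspectra))]*D/U
print('Estimated Strouhal number: %.3f' % St)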
###Output
_____no_output_____
###Markdown
Plot Cd versus Reynolds number
###Code
plt.figure(figsize=(10,8))
litCdData.plotEXP()
litCdData.plotCFD()
for run in alldata:
label = run[0]
t = run[1]
forcedat = run[2]
avgt = run[3]
dict = run[4]
Cd = (forcedat[:,1]+forcedat[:,4])/(Q*A)
Cl = (forcedat[:,2]+forcedat[:,5])/(Q*A)
# Calculate averaged Cp, Cd
avgCd = timeaverage(t, Cd, avgt[0], avgt[1])
avgCl = timeaverage(t, Cl, avgt[0], avgt[1])
vis = dict['vis']
ReNum = rho*U*D/vis
plt.semilogx(ReNum, avgCd, '*', ms=10, label='Nalu SST-IDDES '+label)
plt.grid()
plt.legend(fontsize=10)
plt.xlabel(r'Reynolds number Re');
plt.ylabel('$C_D$')
plt.title('Drag coefficient $C_D$');
# Write the YAML file these averaged quantities
import yaml
if saveinfo:
savedict={'Re':float(ReNum), 'avgCd':float(avgCd), 'avgCl':float(avgCl)}
f=open('istats.yaml','w')
f.write('# Averaged quantities from %f to %f\n'%(avgt[0], avgt[1]))
f.write('# Grid: grid0\n')
f.write(yaml.dump(savedict, default_flow_style=False))
f.close()
###Output
_____no_output_____ |
notebooks/dataset-projections/transcriptome-macosko2015-retina/macosko2015-PCA-tsne.ipynb | ###Markdown
Choose GPU (this may not be needed on your computer)
###Code
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=''
###Output
env: CUDA_DEVICE_ORDER=PCI_BUS_ID
env: CUDA_VISIBLE_DEVICES=''
###Markdown
load packages
###Code
from tfumap.umap import tfUMAP
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tqdm.autonotebook import tqdm
import umap
import pandas as pd
###Output
_____no_output_____
###Markdown
Load dataset
###Code
from tfumap.paths import ensure_dir, MODEL_DIR, DATA_DIR
#dataset_address = 'http://file.biolab.si/opentsne/macosko_2015.pkl.gz'
# https://opentsne.readthedocs.io/en/latest/examples/01_simple_usage/01_simple_usage.html
# also see https://github.com/berenslab/rna-seq-tsne/blob/master/umi-datasets.ipynb
import gzip
import pickle
with gzip.open(DATA_DIR / 'macosko_2015.pkl.gz', "rb") as f:
data = pickle.load(f)
x = data["pca_50"]
y = data["CellType1"].astype(str)
print("Data set contains %d samples with %d features" % x.shape)
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(x, y, test_size=.1, random_state=42)
np.shape(X_train)
n_valid = 10000
X_valid = X_train[-n_valid:]
Y_valid = Y_train[-n_valid:]
X_train = X_train[:-n_valid]
Y_train = Y_train[:-n_valid]
X_train_flat = X_train
from sklearn.preprocessing import OrdinalEncoder
enc = OrdinalEncoder()
Y_train = enc.fit_transform([[i] for i in Y_train]).flatten()
###Output
_____no_output_____
###Markdown
Train PCA model
###Code
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
z = pca.fit_transform(X_train_flat)
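# Quick check (a sketch): how much variance the two principal components retain.
print(pca.explained_variance_ratio_, pca.explained_variance_ratio_.sum())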
###Output
_____no_output_____
###Markdown
plot output
###Code
fig, ax = plt.subplots( figsize=(8, 8))
sc = ax.scatter(
z[:, 0],
z[:, 1],
c=Y_train.astype(int)[:len(z)],
cmap="tab10",
s=0.1,
alpha=0.5,
rasterized=True,
)
ax.axis('equal')
ax.set_title("PCA embedding", fontsize=20)
plt.colorbar(sc, ax=ax);
###Output
_____no_output_____
###Markdown
Save model
###Code
import os
import pickle
from tfumap.paths import ensure_dir, MODEL_DIR
output_dir = MODEL_DIR/'projections'/ 'macosko2015' / 'PCA'
ensure_dir(output_dir)
with open(os.path.join(output_dir, "model.pkl"), "wb") as output:
pickle.dump(pca, output, pickle.HIGHEST_PROTOCOL)
np.save(output_dir / 'z.npy', z)
###Output
_____no_output_____
###Markdown
tsne
###Code
from openTSNE import TSNE
tsne = TSNE(
n_components = 2
)
embedding_train = tsne.fit(X_train_flat)
z = np.array(embedding_train)
fig, ax = plt.subplots( figsize=(8, 8))
sc = ax.scatter(
z[:, 0],
z[:, 1],
c=Y_train.astype(int)[:len(z)],
cmap="tab10",
s=0.1,
alpha=0.5,
rasterized=True,
)
ax.axis('equal')
ax.set_title("PCA embedding", fontsize=20)
plt.colorbar(sc, ax=ax);
###Output
_____no_output_____
###Markdown
save model
###Code
import os
import pickle
from tfumap.paths import ensure_dir, MODEL_DIR
output_dir = MODEL_DIR/'projections'/ 'macosko2015' / 'TSNE'
ensure_dir(output_dir)
with open(os.path.join(output_dir, "model.pkl"), "wb") as output:
pickle.dump(embedding_train, output, pickle.HIGHEST_PROTOCOL)  # save the fitted t-SNE embedding, not the PCA model
np.save(output_dir / 'z.npy', z)
###Output
_____no_output_____ |
interactive_raceFullModel2.ipynb | ###Markdown
Full Interactive Race Model including a subset of trials with weaker inhibition
###Code
import numpy
import random
import matplotlib.pyplot as plt
import pandas
%matplotlib inline
# original matlab code
#function [meanrtgo,presp] = interactiverace
#rng('shuffle');
params={'mugo':.2,
'mustopstrong': .8,
'mustopweak':.0001,
'threshold':60,
'nondecisiongo':50,
'nondecisionstop':50,
'ssds':[1,50,100,150, 200,250, 300, 350, 400, 450, 500,3000],
'nreps':50000,
'maxtime':1000,
'betastop':.4,
'betago':.0000001,
'proportionweak':.15}
def interactiverace(params):
stopsave = []
gosave = []
rtgosave = []
meanrtgo = numpy.zeros(len(params['ssds']))
presp = numpy.zeros(len(params['ssds']));
for irep in range(params['nreps']):
for j,ssd in enumerate(params['ssds']):
stopsignaldelay = ssd
goaccumulator = 0
stopaccumulator = 0
rtgo = 0
itime = 0
if random.uniform(0,1) < params['proportionweak']:
mustop = params['mustopweak']
# mustopVar = numpy.random.normal()*.008
else:
mustop = params['mustopstrong']
# mustopVar = numpy.random.normal()
while itime < params['maxtime'] and rtgo == 0: # single trial
itime = itime + 1
if itime < stopsignaldelay + params['nondecisionstop']:
inhibition = 0
else:
inhibition = params['betastop']
if mustop == params['mustopweak']:
stopaccumulator = stopaccumulator + mustop + numpy.random.normal()*.008 - params['betago']*goaccumulator
else:
stopaccumulator = stopaccumulator + mustop + numpy.random.normal() - params['betago']*goaccumulator
stopsave.append(stopaccumulator)
#print(stopaccumulator)
if itime >= params['nondecisiongo']:
goaccumulator = goaccumulator + params['mugo'] - inhibition*stopaccumulator + numpy.random.normal()
gosave.append(goaccumulator)
if goaccumulator > params['threshold']:
if rtgo == 0:
rtgo = itime;
meanrtgo[j] += rtgo;
rtgosave.append(rtgo)
if rtgo > 0:
presp[j] += 1;
for ssd in range(len(params['ssds'])):
if presp[ssd] > 0:
meanrtgo[ssd] = meanrtgo[ssd]/presp[ssd];
presp[ssd] = presp[ssd]/params['nreps'];
return(meanrtgo,presp,gosave,stopsave,rtgosave)
meanrtgo,presp,gosave,stopsave,rtgosave=interactiverace(params)
#df=pandas.DataFrame({'gosave':gosave,'stopsave':stopsave})
print(meanrtgo)
print(presp)
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
plt.plot(params['ssds'][:11],meanrtgo[:11] - meanrtgo[11])
plt.plot([params['ssds'][0],params['ssds'][10]],[0,0],'k:')
plt.xlabel('Stop signal delay')
plt.ylabel('Violation (Stop Failure RT - No-Stop RT)')
plt.subplot(1,2,2)
plt.plot(params['ssds'][:11],presp[:11])
plt.xlabel('Stop signal delay')
plt.ylabel('Probability of responding')
plt.axis([params['ssds'][0],params['ssds'][10],0,1])
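# Summary sketch: the violation (stop-failure RT minus no-stop-signal RT) at each finite SSD, matching the quantity plotted above.
violation = meanrtgo[:11] - meanrtgo[11]
print(dict(zip(params['ssds'][:11], numpy.round(violation, 1))))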
###Output
_____no_output_____ |
tricks/3_compute_dist_NA.ipynb | ###Markdown
How to compute pairwise distances when there are missing values?
###Code
import pandas as pd
import numpy as np
from sklearn.metrics import pairwise_distances
from sklearn.metrics.pairwise import nan_euclidean_distances
from scipy.spatial.distance import squareform, pdist
###Output
_____no_output_____
###Markdown
The easiest way, when we are free of NaNs, is to use the pdist function.
###Code
a = np.random.randn(3,5)
a
# pdist returns a condensed (1-D) distance matrix
pdist(a)
###Output
_____no_output_____
###Markdown
You can convert it to a square distance matrix with `squareform(pdist(a))`. What if we have NaN values?
###Code
# if you want to know more about NA value, refer to trick 2 jupyter notebook in the same folder
a[1,3] = np.nan
a
# np.nan (a float object) will be converted to np.float64
type(a[1,3])
###Output
_____no_output_____
###Markdown
In theory, sklearn's pairwise_distances should be able to handle this; it has a force_all_finite argument.
###Code
pairwise_distances(X=a)
###Output
_____no_output_____
###Markdown
You see, it doesn't work: by default, pairwise_distances rejects inputs containing missing values (np.inf, np.nan, pd.NA). What is the workaround?
###Code
# First compute the distances with nan_euclidean_distances, which handles NaN coordinates
test = nan_euclidean_distances(X=a,Y=a)
test
# Make sure it is symmetric
test_sym = np.tril(test) + np.tril(test,k=-1).T
test_sym
# make sure the main diagonal is 0
np.fill_diagonal(test_sym,0)
test_sym
# Convert the square matrix to a condensed distance vector using squareform
squareform(test_sym)
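# A small helper (a sketch) bundling the workaround above: NaN-aware pairwise distances returned in condensed (pdist-style) form.
def nan_pdist(mat):
    d = nan_euclidean_distances(X=mat, Y=mat)
    d = np.tril(d) + np.tril(d, k=-1).T   # enforce symmetry
    np.fill_diagonal(d, 0)                # enforce a zero diagonal
    return squareform(d)

nan_pdist(a)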
###Output
_____no_output_____ |
notebooks/20-03-30_troubleshooting-mm-basic-plot.ipynb | ###Markdown
Troubleshooting basic plot for the Michaelis-Menten equation in Bokeh. Getting the first plot up and running: just a simple set of Michaelis-Menten data whose Vmax and Km can be changed interactively.
###Code
# import libraries
import os
import sys
import numpy as np
import matplotlib.pyplot as plt
from bokeh.layouts import row, column
from bokeh.models import CustomJS, Slider, Label
from bokeh.plotting import figure, output_file, show, ColumnDataSource
from bokeh.io import output_notebook
root_dir = os.path.join(os.getcwd(), '..')
sys.path.append(root_dir)
from pharmaplot import mm
# generate some fake data and make sure it looks ok
x = np.logspace(-3, 2, num=500)
y = mm.michaelis_menten(x, 100, 10)
plt.plot(x,y)
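# Quick sanity check (a sketch, assuming the same (S, Vmax, Km) argument order used above):
# at [S] = Km the Michaelis-Menten rate should equal Vmax/2.
print(mm.michaelis_menten(10, 100, 10))  # expect ~50 for Vmax=100, Km=10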
# set to display in notebook as opposed to making an html
output_notebook()
## generate bokeh plot using the above data
# set up source data and plot lines that will vary
source = ColumnDataSource(data=dict(x=x, y=y))
plot = figure(y_range=(0, 200), plot_width=600, plot_height=400,
x_axis_label='[S]: substrate concentration (μM)',
y_axis_label='initial velocity (μM/s)',
title='Michaelis-Menten Kinetics')
plot.line('x', 'y', source=source, line_width=3, line_alpha=0.6, color='black')
# set up static line and annotations
plot.line(x, y, line_width=5, color='blue', line_alpha=0.3)
mytext = Label(x=50, y=70, text='Km = 10 (μM), Vmax = 100 (μM/s)',
text_color="blue", text_alpha=0.5)
plot.add_layout(mytext)
# set up java script callback function to make plot interactive
vmax_slider = Slider(start=0.1, end=200, value=100, step=1, title="Vmax (μM/s)")
km_slider = Slider(start=1, end=100, value=10, step=1, title="Km (μM)")
callback = CustomJS(args=dict(source=source,
vmax=vmax_slider,
km=km_slider),
code="""
const data = source.data;
const VMAX = vmax.value;
const KM = km.value;
const x = data['x']
const y = data['y']
for (var i = 0; i < x.length; i++) {
y[i] = (VMAX*x[i])/(KM+x[i]);
}
source.change.emit();
""")
# add sliders to plot and display
vmax_slider.js_on_change('value', callback)
km_slider.js_on_change('value', callback)
layout = row(
plot,
column(vmax_slider, km_slider),
)
#output_file("mm.html", title="mm.py example")
show(layout)
###Output
_____no_output_____ |
study2/0_highlevel/2_0_1_mnist_keras_sequential.ipynb | ###Markdown
0-1. Keras and TensorFlow 2.0 (Google TensorFlow Team) - TensorFlow 2.0 aims to integrate Keras with TensorFlow much more tightly - Estimator is included in TensorFlow 2.0, but writing models in Keras and then using model_to_estimator() is recommended. What is Keras? - Developed/maintained by Google's Francois Chollet - A high-level library that can run on top of various frameworks (MXNet, DL4J, TensorFlow, Microsoft Cognitive Toolkit, Theano) - Currently modularized in TensorFlow as tf.keras, with ever tighter integration planned. Advantages for users: - The Model object makes it simple and easy to design architectures and run training and inference. Disadvantages for users: - Customization is difficult (especially loss functions) - In the current TensorFlow version (1.12), it still feels like a somewhat foreign third-party library (loosely integrated). Three ways to use Keras: 1. Sequential 2. Functional 3. Subclassing Model. What is Sequential? - Create a Sequential model object first, then attach layers to it - The highest-level of Keras's model-building approaches - Easy to use when the model is basic and general-purpose - Not suitable for complex models
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
0-1-1. Define the model variables - A Dense (fully connected) layer in the high-level layers API (tf.layers or tf.keras.layers) can be defined by the number of output nodes alone, so the model variables can be written as simply as below
###Code
mnist_hidden_dim = [512, 128]
###Output
_____no_output_____
###Markdown
0-1-2. Create the Sequential model object
###Code
model = tf.keras.Sequential()
###Output
_____no_output_____
###Markdown
0-1-3. Build the model layer sequence: [batch_size, 28, 28]$\rightarrow$ Flatten $\rightarrow$ [batch_size, 784]$\rightarrow$ Dense(784, 512) $\rightarrow$ relu $\rightarrow$ [batch_size, 512]$\rightarrow$ Dropout $\rightarrow$ Dense(512, 128) $\rightarrow$ relu $\rightarrow$ [batch_size, 128] $\rightarrow$ Dropout $\rightarrow$ Dense(128, 10) $\rightarrow$ softmax $\rightarrow$ [batch_size, 10]
###Code
model.add(tf.keras.layers.Flatten())
for units in mnist_hidden_dim:
model.add(tf.keras.layers.Dense(units, activation=tf.nn.relu))
model.add(tf.keras.layers.Dropout(0.1))
model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax))
###Output
_____no_output_____
###Markdown
0-1-4. Compiling once the sequence is complete - optimizer: specifies the optimization method - loss: specifies the loss function - "sparse_categorical_crossentropy": a loss that handles integer labels without requiring one-hot encoding - metrics: the metrics used to judge performance during training / evaluation
###Code
model.compile(
optimizer=tf.train.AdamOptimizer(1e-3),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['accuracy'])
###Output
_____no_output_____
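###Markdown
 A quick illustration of the point about the loss: sparse_categorical_crossentropy consumes the integer labels directly, while categorical_crossentropy would expect one-hot vectors such as those produced by to_categorical (the label values below are made up for illustration):
###Code
# Illustration only: one-hot encoding that categorical_crossentropy would require
example_labels = [0, 3, 9]
print(tf.keras.utils.to_categorical(example_labels, num_classes=10))
# e.g. model.compile(..., loss=tf.keras.losses.categorical_crossentropy, ...)
###Output
_____no_output_____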
###Markdown
0-1-5. Loading the data - tensorflow.examples.tutorials.mnist is deprecated - Many examples still use that module, but loading the data through Keras is easier and more convenient
###Code
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.
x_test = x_test / 255.
###Output
_____no_output_____
###Markdown
0-1-6. Training - The fit method of the Sequential model object handles splitting the data into batches and shuffling in one call - However, active use of tensorflow.data.Dataset is recommended
###Code
model.fit(x_train, y_train, epochs=5, batch_size=100, verbose=1)
###Output
_____no_output_____
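###Markdown
 A rough sketch of driving the same training from a tf.data.Dataset, as recommended above (exact support for passing a Dataset to fit() depends on the TensorFlow version, so treat this as illustrative):
###Code
# Illustrative only: shuffled, batched, repeating dataset fed to fit()
train_ds = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
            .shuffle(60000)
            .batch(100)
            .repeat())
model.fit(train_ds, epochs=5, steps_per_epoch=len(x_train) // 100, verbose=1)
###Output
_____no_output_____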
###Markdown
0-1-7. Evaluation - Evaluate on the test dataset with the evaluate method of the Sequential model object
###Code
print(model.evaluate(x_test, y_test))
###Output
_____no_output_____ |
04_02_auto_ml_1.ipynb | ###Markdown
Automated ML
###Code
COLAB = True
if COLAB:
# !sudo apt-get install git-lfs && git lfs install
!rm -rf dl-projects
!git clone https://github.com/mengwangk/dl-projects
!cd dl-projects && ls
if COLAB:
!cp dl-projects/utils* .
!cp dl-projects/preprocess* .
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import scipy.stats as ss
import math
import matplotlib
from scipy import stats
from collections import Counter
from pathlib import Path
plt.style.use('fivethirtyeight')
sns.set(style="ticks")
# Automated feature engineering
import featuretools as ft
# Machine learning
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Imputer, MinMaxScaler, StandardScaler
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score, precision_recall_curve, roc_curve
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestClassifier
from IPython.display import display
from utils import *
from preprocess import *
# The Answer to the Ultimate Question of Life, the Universe, and Everything.
np.random.seed(42)
%aimport
###Output
Modules to reload:
all-except-skipped
Modules to skip:
###Markdown
Preparation
###Code
if COLAB:
DATASET_PATH = Path("dl-projects/datasets")
else:
DATASET_PATH = Path("datasets")
DATASET = DATASET_PATH/"4D.zip"
data = format_tabular(DATASET)
data.info()
data.tail(10)
data['NumberId'] = data['LuckyNo']
data.tail(10)
data.describe()
plt.figure(figsize=(20,6))
sns.boxplot(x='NumberId', y='PrizeType',data=data)
plt.xticks(rotation=90)
plt.title('Draw')
print(data[data['NumberId']==1760])
###Output
DrawNo DrawDate PrizeType LuckyNo NumberId
6007 66894 1994-01-05 ConsolationNo10 1760 1760
12089 93295 1995-09-10 SpecialNo10 1760 1760
33221 185101 2001-06-09 ConsolationNo6 1760 1760
41325 220403 2003-08-10 SpecialNo4 1760 1760
56402 286007 2007-06-24 ConsolationNo3 1760 1760
67267 333210 2010-04-10 SpecialNo2 1760 1760
70041 345310 2010-12-19 ConsolationNo3 1760 1760
72759 357111 2011-08-21 ConsolationNo7 1760 1760
75155 367512 2012-03-20 SpecialNo10 1760 1760
88140 424015 2015-05-17 ConsolationNo10 1760 1760
88193 424215 2015-05-23 ConsolationNo8 1760 1760
94840 453117 2017-01-04 ConsolationNo8 1760 1760
###Markdown
Exploration
###Code
def ecdf(data):
x = np.sort(data)
y = np.arange(1, len(x) + 1) / len(x)
return x, y
###Output
_____no_output_____
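###Markdown
 The ecdf helper defined above is not exercised elsewhere in this notebook; a small usage sketch (the column name is taken from the data loaded above):
###Code
# Sketch: empirical CDF of the drawn numbers
x_vals, y_vals = ecdf(data['LuckyNo'])
plt.plot(x_vals, y_vals, marker='.', linestyle='none')
plt.xlabel('LuckyNo')
plt.ylabel('ECDF')
plt.show()
###Output
_____no_output_____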
###Markdown
Making Labels
###Code
data['TotalStrike'] = 1
data.head(10)
def make_cutoffs(start_date, end_date, threshold=0):
# Find numbers exist before start date
number_pool = data[data['DrawDate'] < start_date]['NumberId'].unique()
tmp = pd.DataFrame({'NumberId': number_pool})
# For numbers in the number pool, find their strike count between the start and end dates
strike_counts = data[(data['NumberId'].isin(number_pool)) &
(data['DrawDate'] >= start_date) &
(data['DrawDate']< end_date)
].groupby('NumberId')['TotalStrike'].count().reset_index()
number_of_draws = data[
(data['DrawDate'] >= start_date) &
(data['DrawDate']< end_date)]['DrawDate'].nunique()
# display(strike_counts)
# print(number_of_draws)
# Merge with all the number ids to record all customers who existed before start date
strike_counts = strike_counts.merge(tmp, on='NumberId', how='right')
# Set the total for any numbers who did not strike in the timeframe equal to 0
strike_counts['TotalStrike'] = strike_counts['TotalStrike'].fillna(0)
# Label is based on the threshold
strike_counts['Label'] = (strike_counts['TotalStrike'] > threshold).astype(int)
# The cutoff time is the start date
strike_counts['cutoff_time'] = pd.to_datetime(start_date)
strike_counts = strike_counts[['NumberId', 'cutoff_time', 'TotalStrike', 'Label']]
#display(strike_counts[strike_counts['Label']==1].nunique())
#display(strike_counts.sort_values(by='TotalStrike', ascending=False))
return number_of_draws, strike_counts
number_of_draws, may_2015 = make_cutoffs(pd.datetime(2015, 5, 1), pd.datetime(2015, 6, 1))
#display(len(may_2015))
#display(may_2015[may_2015['Label']==1].nunique())
may_2015[(may_2015['Label']==1) & (may_2015['TotalStrike']==2)].sort_values(by='TotalStrike', ascending=False).head()
may_2015['Label'].value_counts().plot.bar()
plt.title('Label Distribution for May')
CUT_OFF_YEAR=pd.datetime(2014, 1, 1)
## Loop through each month starting from CUT_OFF_YEAR
from dateutil.relativedelta import relativedelta
# print(data['DrawDate'].max())
max_year_month = data['DrawDate'].max() - relativedelta(months=1) + relativedelta(day=31)
print(f"Max month year: {max_year_month}")
start_year_month = CUT_OFF_YEAR
months_data = []
total_draws = 0
while start_year_month < max_year_month:
start_date = start_year_month
end_date = start_date + relativedelta(months=1)
start_year_month = start_year_month + relativedelta(months=1)
#print(f"Labels from {start_date} to {end_date}")
draw_count, month_data = make_cutoffs(start_date, end_date)
total_draws = total_draws + draw_count
months_data.append(month_data)
print(f"Total draws: {total_draws}")
print(f"Total draws: {data[(data['DrawDate'] >= CUT_OFF_YEAR) & (data['DrawDate'] <= max_year_month)]['DrawDate'].nunique()}")
print(f"Total months:{len(months_data)}")
print(f"Total records count: {sum([len(l) for l in months_data])}")
print([len(l) for l in months_data])
labels = pd.concat(months_data)
labels.to_csv(DATASET_PATH/'labels.csv')
labels.describe()
# plot_labels = labels.copy()
# plot_labels['month'] = plot_labels['cutoff_time'].dt.month
# plt.figure(figsize = (12, 6))
# sns.boxplot(x = 'month', y = 'TotalStrike',
# data = plot_labels[(plot_labels['TotalStrike'] > 0)]);
# plt.title('Distribution by Month');
labels[(labels['NumberId'] == 9016) & (labels['Label'] > 0)]
labels.loc[labels['NumberId'] == 9016].set_index('cutoff_time')['TotalStrike'].plot(figsize = (6, 4), linewidth = 3)
plt.xlabel('Date', size = 16);
plt.ylabel('Total Strike', size = 16);
plt.title('Draw', size = 20);
plt.xticks(size = 16); plt.yticks(size = 16);
###Output
_____no_output_____
###Markdown
Automated Feature Engineering
###Code
es = ft.EntitySet(id="Lotto Results")
# Add the entire data table as an entity
es.entity_from_dataframe("Results",
dataframe=data,
index="results_index",
time_index = 'DrawDate')
es['Results']
es.normalize_entity(new_entity_id="Numbers",
base_entity_id="Results",
index="NumberId",
)
es
es['Numbers'].df.head(24)
es['Results'].df.head(24)
len(es['Results'].df)
###Output
_____no_output_____
###Markdown
Deep Feature Synthesis
###Code
# feature_matrix, feature_names = ft.dfs(entityset=es, target_entity='Numbers',
# cutoff_time = labels, verbose = 2,
# cutoff_time_in_index = True,
# chunk_size = len(labels), n_jobs = 1,
# max_depth = 1)
feature_matrix, feature_names = ft.dfs(entityset=es, target_entity='Numbers',
agg_primitives = ['std', 'max', 'min', 'mode',
'mean', 'skew', 'last', 'avg_time_between'],
trans_primitives = ['cum_sum', 'cum_mean', 'day',
'month', 'hour', 'weekend'],
cutoff_time = labels, verbose = 1,
cutoff_time_in_index = True,
chunk_size = len(labels), n_jobs = 1,
max_depth = 2)
len(feature_matrix.columns), feature_matrix.columns
len(feature_matrix)
feature_matrix.head().T
feature_matrix.shape
feature_matrix[(feature_matrix['NumberId']==0) & (feature_matrix['Label']==1)].head(10)
###Output
_____no_output_____
###Markdown
Correlations
###Code
feature_matrix = pd.get_dummies(feature_matrix).reset_index()
feature_matrix.shape
feature_matrix.head()
corrs = feature_matrix.corr().sort_values('TotalStrike')
corrs['TotalStrike'].head()
corrs['TotalStrike'].dropna().tail(10)
g = sns.FacetGrid(feature_matrix[(feature_matrix['SUM(Results.DrawNo)'] > 0)],
hue = 'Label', size = 4, aspect = 3)
g.map(sns.kdeplot, 'SUM(Results.DrawNo)')
g.add_legend();
plt.title('Distribution of Results Total by Label');
feature_matrix['month'] = feature_matrix['time'].dt.month
feature_matrix['year'] = feature_matrix['time'].dt.year
feature_matrix.info()
feature_matrix.head()
###Output
_____no_output_____
###Markdown
Save feature matrix
###Code
#if COLAB:
# feature_matrix.to_csv(DATASET_PATH/'feature_matrix.csv', index=False)
# feature_matrix.to_pickle(DATASET_PATH/'feature_matrix.pkl')
###Output
_____no_output_____
###Markdown
Save the data - https://towardsdatascience.com/downloading-datasets-into-google-drive-via-google-colab-bcb1b30b0166
###Code
if COLAB:
#!cd dl-projects && git config --global user.email '[email protected]'
#!cd dl-projects && git config --global user.name 'mengwangk'
#!cd dl-projects && git add -A && git commit -m 'Updated from colab'
from google.colab import drive
drive.mount('/content/gdrive')
GDRIVE_DATASET_FOLDER = Path('gdrive/My Drive/datasets/')
#!ls /content/gdrive/My\ Drive/
feature_matrix.to_csv(GDRIVE_DATASET_FOLDER/'feature_matrix_2.csv', index=False)
feature_matrix.to_pickle(GDRIVE_DATASET_FOLDER/'feature_matrix_2.pkl')
#if COLAB:
# !cd dl-projects && git remote rm origin && git remote add origin https://mengwangk:[email protected]/mengwangk/dl-projects.git && git push -u origin master
# from google.colab import files
# files.download(DATASET_PATH/'feature_matrix.csv')
if COLAB:
!cd gdrive/"My Drive"/datasets/ && ls -l --block-size=M
###Output
total 1151M
-rw------- 1 root root 407M Dec 30 05:01 feature_matrix_2.csv
-rw------- 1 root root 428M Dec 30 05:01 feature_matrix_2.pkl
-rw------- 1 root root 141M Dec 27 08:27 feature_matrix.csv
-rw------- 1 root root 176M Dec 27 08:28 feature_matrix.pkl
|
sktime/forecasting/prob_metric_integration.ipynb | ###Markdown
Probabilistic metric integration
After developing probabilistic metrics in #2232, we need to ensure they are compatible with useful features such as grid search for model parameters. There are two key problems that need to be solved for this:
1. Probabilistic metrics take in the output of `predict_quantiles` or `predict_interval` (or `predict_proba`), where normal metrics just take the output of `predict`. This means we need to change which predictions are used inside the grid search.
2. Some probabilistic metrics have their own hyperparameters, for example the quantile used in a pinball loss. Currently this is inferred from the data passed in; however, for a grid search we will need to somehow tell the forecaster what quantiles to produce.
To solve 1. we could either create some `set_default` function which determines what the forecaster implements for predict (_predict, _predict_quantile or _predict_interval), or use tags inside the grid search evaluation that retrieve the type of metric being used and call the corresponding predict function.
To solve 2. we could do a small refactor of the probabilistic metrics, where we specify the hyperparameter(s) we want and the metric retrieves the correct data from the input (and raises an error if it isn't there). This will allow it to require a specific quantile, but reduces flexibility, as a user will have to instantiate a new metric class for each different set of quantiles they want to evaluate.
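As a sketch of what solving 2. might look like (this is not the actual sktime API, just an illustration of a metric that fixes its quantiles at construction time and checks that the predictions contain them; column labels are assumed to be the plain alpha values):
###Code
import numpy as np


class FixedQuantilePinballLoss:
    """Illustrative metric whose quantiles are a constructor hyperparameter."""

    def __init__(self, alpha=(0.05, 0.95)):
        self.alpha = list(alpha)

    def __call__(self, y_true, y_pred_quantiles):
        # assume one column per quantile, labelled by its alpha value
        missing = [a for a in self.alpha if a not in y_pred_quantiles.columns]
        if missing:
            raise ValueError(f"predictions are missing quantiles {missing}")
        total = 0.0
        for a in self.alpha:
            diff = np.asarray(y_true) - np.asarray(y_pred_quantiles[a])
            total += np.mean(np.maximum(a * diff, (a - 1) * diff))
        return total / len(self.alpha)
###Output
_____no_output_____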
###Code
# Basic imports
import warnings
warnings.simplefilter(action="ignore", category=FutureWarning)
import numpy as np
import pandas as pd
# Prep data/forecaster
from sktime.datasets import load_airline
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.forecasting.theta import ThetaForecaster
y = np.log1p(load_airline())
y_train, y_test = temporal_train_test_split(y)
fh = np.arange(len(y_test)) + 1
f = ThetaForecaster(sp=12)
f.fit(y_train)
y_pred = f.predict(fh=fh)
q_pred = f.predict_quantiles(fh=fh, alpha=0.5)
i_pred = f.predict_interval(fh=fh)
q_pred.head()
i_pred.head()
# Define probabilistic metric
from sktime.performance_metrics.forecasting.probabilistic import PinballLoss
loss = PinballLoss()
loss(y_test, q_pred)
from sktime.forecasting.model_evaluation import evaluate
from sktime.forecasting.model_selection import (
ExpandingWindowSplitter,
ForecastingGridSearchCV,
)
cv = ExpandingWindowSplitter(
initial_window=24, step_length=12, fh=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
)
param_grid = {"sp": [6, 12]}
gcv = ForecastingGridSearchCV(f, cv, param_grid, scoring=loss)
gcv.fit(y_test)
###Output
_____no_output_____
###Markdown
The ForecastingGridSearchCV relies on `sktime.forecasting.model_evaluation.evaluate` to evaluate metric scores, hence this is what we will need to change to allow it to work. It also has its own `score()` function, which could also be changed, but this isn't used in fitting.
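The modified version of evaluate below will branch on the metric's output-type tag; that tag can be inspected directly, using the same get_tag call the code below relies on:
###Code
# The pinball loss advertises which kind of predictions it consumes
print(loss.get_tag("scitype:y_pred"))
###Output
_____no_output_____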
###Code
evaluate(f, cv, y_test, scoring=loss)
###Output
_____no_output_____
###Markdown
If we naively substitute a quantile loss for the normal loss, we get an input error (as expected). We will first try changing the evaluate function.
###Code
import time
from sklearn.base import clone
from sktime.forecasting.base import ForecastingHorizon
from sktime.utils.validation.forecasting import (
check_cv,
check_fh,
check_scoring,
check_X,
)
from sktime.utils.validation.series import check_series
def evaluate(
forecaster,
cv,
y,
X=None,
strategy="refit",
scoring=None,
fit_params=None,
return_data=False,
):
"""Evaluate forecaster using timeseries cross-validation.
Parameters
----------
forecaster : sktime.forecaster
Any forecaster
cv : Temporal cross-validation splitter
Splitter of how to split the data into test data and train data
y : pd.Series
Target time series to which to fit the forecaster.
X : pd.DataFrame, default=None
Exogenous variables
strategy : {"refit", "update"}
Must be "refit" or "update". The strategy defines whether the `forecaster` is
only fitted on the first train window data and then updated, or always refitted.
scoring : subclass of sktime.performance_metrics.BaseMetric, default=None.
Used to get a score function that takes y_pred and y_test arguments
and accept y_train as keyword argument.
If None, then uses scoring = MeanAbsolutePercentageError(symmetric=True).
fit_params : dict, default=None
Parameters passed to the `fit` call of the forecaster.
return_data : bool, default=False
Returns three additional columns in the DataFrame, by default False.
The cells of the columns contain each a pd.Series for y_train,
y_pred, y_test.
Returns
-------
pd.DataFrame
DataFrame that contains several columns with information regarding each
refit/update and prediction of the forecaster.
"""
_check_strategy(strategy)
cv = check_cv(cv, enforce_start_with_window=True)
scoring = check_scoring(scoring)
y = check_series(
y,
enforce_univariate=forecaster.get_tag("scitype:y") == "univariate",
enforce_multivariate=forecaster.get_tag("scitype:y") == "multivariate",
)
X = check_X(X)
fit_params = {} if fit_params is None else fit_params
# Define score name.
score_name = "test_" + scoring.name
# Initialize dataframe.
results = pd.DataFrame()
# Run temporal cross-validation.
for i, (train, test) in enumerate(cv.split(y)):
# split data
y_train, y_test, X_train, X_test = _split(y, X, train, test, cv.fh)
# create forecasting horizon
fh = ForecastingHorizon(y_test.index, is_relative=False)
# fit/update
start_fit = time.perf_counter()
if i == 0 or strategy == "refit":
forecaster = clone(forecaster)
forecaster.fit(y_train, X_train, fh=fh, **fit_params)
else: # if strategy == "update":
forecaster.update(y_train, X_train)
fit_time = time.perf_counter() - start_fit
# predict
start_pred = time.perf_counter()
if scoring.get_tag("scitype:y_pred") == "pred_quantiles":
y_pred = forecaster.predict_quantiles(fh, X=X_test, **fit_params)
else:
y_pred = forecaster.predict(fh, X=X_test)
pred_time = time.perf_counter() - start_pred
# score
score = scoring(y_test, y_pred, y_train=y_train)
# save results
results = results.append(
{
score_name: score,
"fit_time": fit_time,
"pred_time": pred_time,
"len_train_window": len(y_train),
"cutoff": forecaster.cutoff,
"y_train": y_train if return_data else np.nan,
"y_test": y_test if return_data else np.nan,
"y_pred": y_pred if return_data else np.nan,
},
ignore_index=True,
)
# post-processing of results
if not return_data:
results = results.drop(columns=["y_train", "y_test", "y_pred"])
results["len_train_window"] = results["len_train_window"].astype(int)
return results
def _split(y, X, train, test, fh):
"""Split y and X for given train and test set indices."""
y_train = y.iloc[train]
y_test = y.iloc[test]
cutoff = y_train.index[-1]
fh = check_fh(fh)
fh = fh.to_relative(cutoff)
if X is not None:
X_train = X.iloc[train, :]
# We need to expand test indices to a full range, since some forecasters
# require the full range of exogenous values.
test = np.arange(test[0] - fh.min(), test[-1]) + 1
X_test = X.iloc[test, :]
else:
X_train = None
X_test = None
return y_train, y_test, X_train, X_test
def _check_strategy(strategy):
"""Assert strategy value.
Parameters
----------
strategy : str
strategy of how to evaluate a forecaster
Raises
------
ValueError
If strategy value is not in expected values, raise error.
"""
valid_strategies = ("refit", "update")
if strategy not in valid_strategies:
raise ValueError(f"`strategy` must be one of {valid_strategies}")
evaluate(f, cv, y_test, scoring=loss)
###Output
0.05 0.95
0 0.008705 0.007918
|
01-k8s/Explore_Kubernetes_Cluster.ipynb | ###Markdown
`kubectl` Kubernetes CLI Within your namespace:
###Code
!kubectl get pods
!kubectl describe pod [add name of your pod]
!kubectl logs [add name of your pod]-0 -c [add name of your pod]
###Output
_____no_output_____ |
ipynb/04/df_series_arrays.ipynb | ###Markdown
We have now come across three important structures that Python uses to store and access data:
* arrays
* data frames
* series
Here we stop to go back over the differences between these structures, and how to convert between them.
Data frames
We start by loading a data frame from a Comma Separated Values file (CSV file). The data file we will load is a table with average scores across all professors teaching a particular academic discipline. See the [array indexing page](../03/array_indexing) for more detail. Each row in this table corresponds to one *discipline*. Each column corresponds to a different *rating*. If you are running on your laptop, you should download the [rate_my_course.csv](https://matthew-brett.github.io/cfd2019/data/rate_my_course.csv) file to the same directory as this notebook.
###Code
# Load the Numpy library, rename to "np"
import numpy as np
# Load the Pandas data science library, rename to "pd"
import pandas as pd
# Read the file.
courses = pd.read_csv('rate_my_course.csv')
# Show the first five rows.
courses.head()
###Output
_____no_output_____
###Markdown
The `pd.read_csv` function returned this table in a structure called a *data frame*.
###Code
type(courses)
###Output
_____no_output_____
###Markdown
The data frame is a two-dimensional structure. It has rows, and columns. We can see the number of rows and columns with:
###Code
courses.shape
###Output
_____no_output_____
###Markdown
This means there are 75 rows. In this case, each row corresponds to one discipline. There are 6 columns. In this case, each column corresponds to a different student rating. Passing the data frame to the Python `len` function shows us the number of rows:
###Code
len(courses)
###Output
_____no_output_____
###Markdown
Indexing into data frames There are two simple ways of indexing into data frames.We index into a data frame to get a subset of of the data.To index into anything, we can give the name of thing - in this case `courses` - followed by an opening square bracket `[`, followed by something to specify which subset of the data we want, followed by a closing square bracket `]`.The two simple ways of indexing into a data frame are:* Indexing with a string to get a column.* Indexing with a Boolean sequence to get a subset of the rows. When we index with a string, the string should be a column name:
###Code
easiness = courses['Easiness']
###Output
_____no_output_____
###Markdown
The result is a *series*:
###Code
type(easiness)
###Output
_____no_output_____
###Markdown
The Series is a structure that holds the data for a single column.
###Code
easiness
###Output
_____no_output_____
###Markdown
We will come back to the Series soon.Notice that, if your string specifying the column name does not match a column name exactly, you will get a long error. This gives you some practice in reading long error messages - skip to the end first, you will often see the most helpful information there.
###Code
# The exact column name starts with capital E
courses['easiness']
###Output
_____no_output_____
###Markdown
You have just seen indexing into the data frame with a string to get the data for one column. The other simple way of indexing into a data frame is with a Boolean sequence. A Boolean sequence is a sequence of values, all of which are either True or False. Examples of sequences are series and arrays. For example, imagine we only wanted to look at courses with an easiness rating of greater than 3.25. We first make the Boolean sequence, by asking the question `> 3.25` of the values in the "Easiness" column, like this:
###Code
is_easy = easiness > 3.25
###Output
_____no_output_____
###Markdown
This is a series that has True and False values:
###Code
type(is_easy)
is_easy
###Output
_____no_output_____
###Markdown
It has True values where the corresponding row had an "Easiness" score greater than 3.25, and False values where the corresponding row had an "Easiness" score of less than or equal to 3.25. We can index into the data frame with this Boolean series. When we do this, we ask the data frame to give us a new version of itself, that only has the rows where there was a True value in the Boolean series:
###Code
easy_courses = courses[is_easy]
###Output
_____no_output_____
###Markdown
The result is a data frame:
###Code
type(easy_courses)
###Output
_____no_output_____
###Markdown
The data frame contains only the rows where the "Easiness" score is greater than 3.25:
###Code
easy_courses
###Output
_____no_output_____
###Markdown
The way this works can be easier to see when we use a smaller data frame. Here we take the first eight rows from the data frame, by using the `head` method. The `head` method can take an argument, which is the number of rows we want.
###Code
first_8 = courses.head(8)
###Output
_____no_output_____
###Markdown
The result is a new data frame:
###Code
type(first_8)
first_8
###Output
_____no_output_____
###Markdown
We index into the new data frame with a string, to get the "Easiness" column:
###Code
easiness_first_8 = first_8["Easiness"]
easiness_first_8
###Output
_____no_output_____
###Markdown
This Boolean series has True where the "Easiness" score is greater than 3.25, and False otherwise:
###Code
is_easy_first_8 = easiness_first_8 > 3.25
is_easy_first_8
###Output
_____no_output_____
###Markdown
We index into the `first_8` data frame with this Boolean series, to select the rows where `is_easy_first_8` has True, and throw away the rows where it has False.
###Code
easy_first_8 = first_8[is_easy_first_8]
easy_first_8
###Output
_____no_output_____
###Markdown
Oh dear, Psychology looks pretty easy. Series and array The series, as you have seen, is the structure that Pandas uses to store the data from a column:
###Code
first_8
easiness_first_8 = first_8["Easiness"]
easiness_first_8
###Output
_____no_output_____
###Markdown
You can index into a series, but this indexing is powerful and sophisticated, so we will not use that for now. For now, you can convert the series to an array, like this:
###Code
easi_8 = np.array(easiness_first_8)
easi_8
###Output
_____no_output_____
###Markdown
Then you can use the usual [array indexing](../03/array_indexing) to get the values you want:
###Code
# The first value
easi_8[0]
# The first five values
easi_8[:5]
###Output
_____no_output_____
###Markdown
You can think of a data frame as a sequence of columns, where each column is a series. Here I take two columns from the data frame, as series:
###Code
disciplines = first_8['Discipline']
disciplines
clarity = first_8['Clarity']
clarity
###Output
_____no_output_____
###Markdown
I can make a new data frame by inserting these two columns:
###Code
# A new data frame
thinner_courses = pd.DataFrame()
thinner_courses['Discipline'] = disciplines
thinner_courses['Clarity'] = clarity
thinner_courses
###Output
_____no_output_____ |
taller_marketing_alberto.ipynb | ###Markdown
Prediction of adherence to marketing campaigns (classification)
Alberto Mario Ceballos [email protected]
Universidad Nacional de Colombia, Sede Medellín, Facultad de Minas, Medellín, Colombia
PROBLEM DESCRIPTION
Marketing campaigns are a typical strategy for maximizing an organization's profits. Some companies use direct marketing, contacting customers (usually by phone) to offer them certain benefits and persuade them to subscribe to different kinds of plans. Many large and medium-sized organizations centralize their interactions with customers in contact centres from which the customers are called. This kind of marketing is considered 'telemarketing', and it imposes a large cost on organizations because of the amount of staff they must dedicate to these tasks and the impact it can have on the customer relationship due to its intrusiveness. For this reason, many organizations try to optimize the decision of whether or not to call a given customer. In the case of the Portuguese bank from which the data in this work were extracted, the goal was to determine whether or not a customer would subscribe to a term deposit. Because of the large number of variables involved, determining when a customer will subscribe to the term deposit is difficult, but the problem was key for the bank since the country was going through a recession.
Source: *[Moro et al., 2014] S. Moro, P. Cortez and P. Rita. A Data-Driven Approach to Predict the Success of Bank Telemarketing. Decision Support Systems, Elsevier, 62:22-31, June 2014*
PROBLEM DESCRIPTION FROM THE DATA
The data were taken from the UCI online repository. It is a classification database for determining whether or not customers subscribe to term deposits as the result of marketing campaigns. The database was provided by Moro et al. in 2014 and is the result of a real study carried out with data from a Portuguese bank.
The repository contains several datasets, all of them ordered by date (May 2008 - Nov. 2010). They differ in the number of attributes and the number of records. For this report, the bank-additional-full.csv dataset is used, which includes 41188 records with 20 attributes plus the response variable y.
Note that decision problems like this one are NP-hard; however, a certain margin of error was acceptable for the bank, since it made more than 40000 calls during the evaluated period, a consequence of the recession the country was in and the bank's need to receive money from its customers.
Client attributes:
1. age: Age of the client (numeric).
2. job: Type of job (categorical: "admin.","blue-collar","entrepreneur","housemaid","management","retired","self-employed","services","student","technician","unemployed","unknown").
3. marital: Marital status (categorical: "divorced","married","single","unknown"; note: "divorced" means divorced or widowed).
4. education: Highest education level of the client (categorical: "basic.4y","basic.6y","basic.9y","high.school","illiterate","professional.course","university.degree","unknown").
5. default: Has credit in default? (categorical: "no","yes","unknown").
6. housing: Has a housing loan? (categorical: "no","yes","unknown").
7. loan: Has a personal loan? (categorical: "no","yes","unknown").
Attributes of the last contact in the current marketing campaign:
8. contact: Contact communication type (categorical: "cellular","telephone").
9. month: Last contact month of the year (categorical: "mar", ..., "nov", "dec"). No calls are made in January or February.
10. day_of_week: Last contact day of the week (categorical: "mon","tue","wed","thu","fri").
11. duration: Duration of the last call, in seconds (numeric). Note: this attribute strongly affects the output (e.g. if duration = 0 then y = 'no'). However, the duration is not known before a call is made, and once the call ends the value of y is known. This attribute should therefore be discarded if the intention is to build a realistic predictive model.
Other attributes:
12. campaign: Number of calls made during this campaign for this client (numeric, includes the last call).
13. pdays: Number of days that have passed since the client was last contacted (numeric; 999 means the client was not previously contacted).
14. previous: Number of calls made before this campaign for this client (numeric).
15. poutcome: Outcome of the previous marketing campaign (categorical: "failure","nonexistent","success").
Social and economic context attributes:
16. emp.var.rate: Employment variation rate - quarterly indicator (numeric).
17. cons.price.idx: Consumer price index - monthly indicator (numeric).
18. cons.conf.idx: Consumer confidence index - monthly indicator (numeric).
19. euribor3m: 3-month Euribor rate - daily indicator (numeric).
20. nr.employed: Number of employees - quarterly indicator (numeric).
Output (target) variable:
21. y: Did the client subscribe to a term deposit? (binary: "yes","no").
Missing attribute values:
-. Several categorical attributes have missing values, coded with the label "unknown". These can be treated as a class of their own, or handled with imputation techniques.
IMPLEMENTATION STEPS
* Descriptive exploration of the data.
* First preprocessing iteration.
* First modelling iteration.
* Conclusions of the first iteration.
* Second preprocessing iteration.
* Second modelling iteration.
* Conclusions of the second iteration.
* Third preprocessing iteration.
* Third modelling iteration.
* Final conclusions.
LIBRARIES
The necessary libraries are imported below, and some helper functions are defined for this work:
###Code
%matplotlib inline
##
## Se ignoran advertencias
##
import warnings as ws
ws.filterwarnings("ignore")
import math
import pandas as pd
import numpy as np
from sklearn.preprocessing import Imputer
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.metrics import roc_auc_score
from sklearn.metrics import cohen_kappa_score, make_scorer
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn import preprocessing
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
import seaborn as sns
from matplotlib import pyplot
import statsmodels.formula.api as smf
import statsmodels.stats.multicomp as multi
import statsmodels.api as sm
import time as tm
from imblearn.over_sampling import SMOTE
from imblearn.combine import SMOTEENN
def magnify():
return [dict(selector="th",
props=[("font-size", "8pt")]),
dict(selector="td",
props=[('padding', "0em 0em")]),
dict(selector="th:hover",
props=[("font-size", "12pt")]),
dict(selector="tr:hover td:hover",
props=[('max-width', '200px'),
('font-size', '12pt')])
]
# correlation heat map
def correl(correlacion):
cmap=sns.diverging_palette(5, 250, as_cmap=True)
return (correlacion.style.background_gradient(cmap, axis=1)\
.set_properties(**{'max-width': '80px', 'font-size': '10pt', 'fmt': '0.1'})\
.set_caption("Hover to magnify")\
.set_precision(2)\
.set_table_styles(magnify()))
###Output
_____no_output_____
###Markdown
DESCRIPTIVE ANALYSIS
In this section a descriptive analysis of the different variables is carried out.
Reading the data and removing variables according to the problem domain
The data are read with the read_csv function of the pandas library, working with the 'additional-full' version of the bank data. The 'duration' variable is removed to obtain a more realistic predictive model, and the output variable is converted to binary to simplify some calculations.
###Code
def lectura():
df_orig = pd.read_csv('bank-additional-full.csv', sep=";")
del(df_orig['duration'])
df_orig.y = df_orig.y.apply(lambda x: 1 if x=='yes' else 0)
return df_orig
df_orig = lectura()
df_orig.head()
df_orig.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 41188 entries, 0 to 41187
Data columns (total 20 columns):
age 41188 non-null int64
job 41188 non-null object
marital 41188 non-null object
education 41188 non-null object
default 41188 non-null object
housing 41188 non-null object
loan 41188 non-null object
contact 41188 non-null object
month 41188 non-null object
day_of_week 41188 non-null object
campaign 41188 non-null int64
pdays 41188 non-null int64
previous 41188 non-null int64
poutcome 41188 non-null object
emp.var.rate 41188 non-null float64
cons.price.idx 41188 non-null float64
cons.conf.idx 41188 non-null float64
euribor3m 41188 non-null float64
nr.employed 41188 non-null float64
y 41188 non-null int64
dtypes: float64(5), int64(5), object(10)
memory usage: 6.3+ MB
###Markdown
Baseline and distribution of the data according to whether the client subscribed to a term deposit
A review of the data shows that 88.9% of the records belong to class 0 (NO). This suggests that the baseline could be around 89% accuracy. However, given that the data were sampled during a recession, a slightly lower overall accuracy may be acceptable if the accuracy in classifying the clients who do subscribe to a term deposit is maximized.
This approach is similar to the one used by the creators of the dataset in https://pdfs.semanticscholar.org/cab8/6052882d126d43f72108c6cb41b295cc8a9e.pdf , so the results they obtained with their best application case are used as the baseline. Those results report 81% accuracy when predicting the NO class and 65% accuracy for the YES class, with an overall accuracy of 75%.
###Code
df_orig.y.value_counts()
sns.countplot(x='y', data=df_orig, palette='hls')
pyplot.show()
###Output
_____no_output_____
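###Markdown
 The majority-class share quoted above (and hence the naive accuracy baseline) can also be computed directly; a small sketch:
###Code
# Share of each class; the share of class 0 is the accuracy of always predicting "no"
print(df_orig.y.value_counts(normalize=True))
print("Naive baseline accuracy: %.3f" % (df_orig.y == 0).mean())
###Output
_____no_output_____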
###Markdown
List of labels
###Code
labels = list(df_orig.columns.values)
print(labels)
###Output
['age', 'job', 'marital', 'education', 'default', 'housing', 'loan', 'contact', 'month', 'day_of_week', 'campaign', 'pdays', 'previous', 'poutcome', 'emp.var.rate', 'cons.price.idx', 'cons.conf.idx', 'euribor3m', 'nr.employed', 'y']
###Markdown
Description of the categorical variables
The following section gives a descriptive analysis of the categorical variables based on the distribution of their values.
###Code
cat_labels = ['job','marital','education', 'month','day_of_week', 'default','housing','poutcome', 'loan','contact']
df_orig[cat_labels].describe()
cat_labels_plot1 = ['job','education', 'month']
order_education = ['illiterate', 'basic.4y', 'basic.6y', 'basic.9y',
'high.school', 'university.degree', 'professional.course','unknown']
order_month = ['mar','apr','may','jun','jul','aug','sep','oct','nov','dec']
fig = pyplot.figure(figsize=(14, 12))
ax1 = pyplot.subplot(3, 1, 1)
sns.countplot(x="job", data=df_orig, ax = ax1)
ax2 = pyplot.subplot(3, 1, 2)
sns.countplot(x="education", data=df_orig, ax = ax2, order = order_education)
ax2 = pyplot.subplot(3, 1, 3)
sns.countplot(x="month", data=df_orig, ax = ax2, order = order_month)
pyplot.show()
cat_labels_plot2 = [label for label in cat_labels if label not in cat_labels_plot1]
fig = pyplot.figure(figsize=(15, 20))
fig_size = (4,2)
axes = []
for i in range(0, len(cat_labels_plot2)):
axes.append(pyplot.subplot(*fig_size,i+1))
sns.countplot(x=cat_labels_plot2[i], data=df_orig, ax=axes[i])
pyplot.show()
###Output
_____no_output_____
###Markdown
Description of the numeric variables
To make the numeric variables easier to handle, a list with their names is stored. Looking at the numeric variables, it can be seen that the pdays variable has a considerable standard deviation, in contrast to the other variables.
###Code
num_labels = ['age', 'campaign', 'pdays','previous','emp.var.rate', 'cons.price.idx','cons.conf.idx',
'euribor3m', 'nr.employed']
df_orig[num_labels].describe()
###Output
_____no_output_____
###Markdown
Density plots of the numeric variables are shown.
###Code
df_orig.plot(kind='density', subplots=True, layout=(17,3), sharex=False, figsize=(15,35))
pyplot.show()
###Output
_____no_output_____
###Markdown
Preliminary scaling of the numeric variables
A preliminary scaling of the numeric variables is performed to make them easier to analyse. Although at first glance several variables appear to be very dispersed, a review of the literature and of the variables themselves shows that for some of them this is normal behaviour.
###Code
mm_scaler = preprocessing.MinMaxScaler()
df_scaled = df_orig.copy()
df_scaled[num_labels] = mm_scaler.fit_transform(df_scaled[num_labels])
df_scaled.describe()
###Output
_____no_output_____
###Markdown
Box-plot visualization to find outliers
###Code
fig, ax = pyplot.subplots(figsize=(20,10))
sns.boxplot(ax = ax, data=df_scaled[num_labels])
pyplot.show()
###Output
_____no_output_____
###Markdown
Analysis of the AGE variable.
###Code
df_orig.age.describe()
def map_ages(x):
lower = math.floor(x/10)
upper = lower + 1
return str(lower)+"0-"+str(upper)+"0"
df_y_age = df_orig.copy()
df_y_age['age'] = df_y_age['age'].apply(lambda x: map_ages(x))
df_y_age = df_y_age.groupby(['age', 'y'])['y'].count().unstack()
df_y_age.plot(kind = 'bar',figsize=(20,5), log=False)
pyplot.show()
df_y_age.plot(kind = 'bar',figsize=(20,5), log=True)
pyplot.show()
###Output
_____no_output_____
###Markdown
Analysis of the CAMPAIGN variable.
###Code
df_orig.campaign.describe()
limit = 18
def map_campaign(x, limit):
if(x < limit):
x = x
else:
x = 999
return int(x)
df_y_campaign = df_orig.copy()
df_y_campaign['campaign'] = df_y_campaign['campaign'].apply(lambda x: map_campaign(x, limit))
df_y_campaign = df_y_campaign.groupby(['campaign', 'y'])['y'].count().unstack()
df_y_campaign.plot(kind = 'bar',figsize=(25,5), log=True)
pyplot.show()
###Output
_____no_output_____
###Markdown
Analysis of the PDAYS variable
###Code
df_orig.pdays.describe()
df_y_pdays = df_orig.groupby(['pdays', 'y'])['y'].count().unstack()
df_y_pdays.plot(kind = 'bar',figsize=(25,5), log=True)
pyplot.show()
###Output
_____no_output_____
###Markdown
Analysis of the PREVIOUS variable
###Code
df_orig.previous.describe()
df_y_pre = df_orig.groupby(['previous', 'y'])['y'].count().unstack()
df_y_pre.plot(kind = 'bar',figsize=(25,5), log=False)
pyplot.show()
df_y_pre.plot(kind = 'bar',figsize=(25,5), log=True)
pyplot.show()
###Output
_____no_output_____
###Markdown
Analysis of the EMP.VAR.RATE variable
###Code
df_orig['emp.var.rate'].describe()
###Output
_____no_output_____
###Markdown
Analysis of the CONS.PRICE.IDX variable
###Code
df_orig['cons.price.idx'].describe()
###Output
_____no_output_____
###Markdown
Analysis of the CONS.CONF.IDX variable
###Code
df_orig['cons.conf.idx'].describe()
###Output
_____no_output_____
###Markdown
Analysis of the EURIBOR3M variable
###Code
df_orig['euribor3m'].describe()
###Output
_____no_output_____
###Markdown
Analysis of the NR.EMPLOYED variable
###Code
df_orig['nr.employed'].describe()
###Output
_____no_output_____
###Markdown
Correlation
###Code
df_scaled_numy = df_scaled.copy()
#df_scaled.corr() #arbol de regresiones
correl(df_scaled_numy.corr())
###Output
_____no_output_____
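###Markdown
 The matrix above is easier to scan programmatically; a small sketch that lists the most strongly correlated numeric pairs, using the objects already defined above:
###Code
# Sketch: top absolute correlations among the numeric variables and the response
corr_abs = df_scaled_numy[num_labels + ['y']].corr().abs()
upper = corr_abs.where(np.triu(np.ones(corr_abs.shape), k=1).astype(bool))
print(upper.stack().sort_values(ascending=False).head(5))
###Output
_____no_output_____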
###Markdown
The correlation matrix shows that the pairs emp.var.rate–euribor3m, nr.employed–euribor3m and emp.var.rate–nr.employed are strongly correlated, and that the campaign and cons.conf.idx variables apparently have little impact on the response. This consideration, however, is not acted upon, since the data description and the related paper state that these variables have a positive impact on the prediction.
Summary of the problems detected during the analysis:
* The numeric data are on very different scales and there are outliers, so they need to be standardized.
* The classes are poorly balanced; more than 85% of the records belong to one class.
* Several categorical variables need to be converted to dummies.
* Some categorical variables have missing values; imputing them would reduce the number of dummy variables.
The impact of the correlation between variables, and its use as a possible criterion for removing parameters, is discarded because of the high computational complexity of the other tasks.
First preprocessing iteration
A first iteration of the algorithm is proposed, considering the following preprocessing tasks:
1. Conversion of categorical variables to dummies.
2. Partitioning of the dataset.
3. Standardization of the numeric values.
Conversion of categorical variables to dummies
The categorical variables are converted to dummies first, to avoid dimensional mismatch problems caused by missing categories (e.g. if no value of some categorical variable remains in the test set).
###Code
df_dummied = pd.get_dummies(df_orig, columns = cat_labels, sparse = True)
new_labels = list(df_dummied.columns.values)
print("Atributos catégoricos: ", cat_labels)
print("\nNuevos atributos: ", new_labels)
###Output
Atributos catégoricos: ['job', 'marital', 'education', 'month', 'day_of_week', 'default', 'housing', 'poutcome', 'loan', 'contact']
Nuevos atributos: ['age', 'campaign', 'pdays', 'previous', 'emp.var.rate', 'cons.price.idx', 'cons.conf.idx', 'euribor3m', 'nr.employed', 'y', 'job_admin.', 'job_blue-collar', 'job_entrepreneur', 'job_housemaid', 'job_management', 'job_retired', 'job_self-employed', 'job_services', 'job_student', 'job_technician', 'job_unemployed', 'job_unknown', 'marital_divorced', 'marital_married', 'marital_single', 'marital_unknown', 'education_basic.4y', 'education_basic.6y', 'education_basic.9y', 'education_high.school', 'education_illiterate', 'education_professional.course', 'education_university.degree', 'education_unknown', 'month_apr', 'month_aug', 'month_dec', 'month_jul', 'month_jun', 'month_mar', 'month_may', 'month_nov', 'month_oct', 'month_sep', 'day_of_week_fri', 'day_of_week_mon', 'day_of_week_thu', 'day_of_week_tue', 'day_of_week_wed', 'default_no', 'default_unknown', 'default_yes', 'housing_no', 'housing_unknown', 'housing_yes', 'poutcome_failure', 'poutcome_nonexistent', 'poutcome_success', 'loan_no', 'loan_unknown', 'loan_yes', 'contact_cellular', 'contact_telephone']
###Markdown
Partitioning of the dataset
The dataset is split into training (70%) and test (30%) sets, using a stratified approach to keep the original class distribution.
###Code
def get_partitions(df_orig, splits = 1, test_size = 0.3, random_state = 42):
split = StratifiedShuffleSplit(n_splits=1, test_size = 0.3, random_state=42)
for train_index, test_index in split.split(df_orig, df_orig["y"]):
df_train = df_orig.loc[train_index]
df_test = df_orig.loc[test_index]
return (df_train, df_test)
(df_train, df_test) = get_partitions(df_dummied, 1, 0.3)
X_train = df_train.copy()
y_train = X_train.y
del(X_train['y'])
X_test = df_test.copy()
y_test = X_test.y
del(X_test['y'])
X_test.head()
X_train.columns.values
###Output
_____no_output_____
###Markdown
Standardization of the numeric values
The numeric values are standardized to reduce the impact of outliers. The scaling parameters are stored in the scaler variable for later use with the test set.
###Code
print("\nAtributos numéricos: ", num_labels)
scaler = preprocessing.StandardScaler()
scaler = scaler.fit(X_train[num_labels])
X_train.loc[:,num_labels] = scaler.transform(X_train[num_labels])
X_test.loc[:, num_labels] = scaler.transform(X_test[num_labels])
###Output
_____no_output_____
###Markdown
MODELLING AND GENERAL CONSIDERATIONS
Four models are tried in the modelling stage:
* K-Nearest Neighbours: K-nearest-neighbour classification assigns a class to each object by majority vote of its neighbours, the assigned class being the most common one among its k nearest neighbours. K is a positive integer, usually small. If k is 1, the object is simply assigned the class of its single nearest neighbour. Different distance metrics can be used with this algorithm, although the most common is the Euclidean distance.
* Logistic Regression: Logistic regression, like linear regression, computes a weighted sum of the input attributes (plus a bias term), but instead of producing that value directly as the output, it passes it through the sigmoid function, whose output lies between 0 and 1. Because of this, logistic regression can be used for classification problems. Once the estimated probability that an instance x belongs to the positive class is available, classification reduces to assigning 1 if the computed value is greater than or equal to 0.5, and 0 otherwise.
* Random Forest Classification: Random forests are versatile machine learning algorithms that can carry out classification tasks. Decision trees model the prediction problem as a tree in which each node represents a classification rule determined from the training data. The random forest approach aggregates n predictors (trees) and performs classification as a vote in which the most popular class among the trained predictors is chosen. This method is usually much more expensive than training a single tree, but it considerably increases the accuracy of the task.
* Multilayer Perceptron (Neural Network): The multilayer perceptron is a feed-forward neural network with one or more hidden layers of neurons, trained by backpropagation, which allows it to learn non-linear decision boundaries.
Evaluation criteria:
Because of the large class imbalance (a ratio of almost 10 to 1), measures such as plain accuracy cannot be used. Instead, two measures that are more robust to imbalance are employed.
* Kappa: Kappa is a measure of the quality of a binary classification. When two binary variables are two raters' attempts to measure the same thing, the Kappa coefficient can be used as a measure of the agreement between them; it takes values between 0 and 1. A value of 1 indicates almost perfect agreement, while lower values indicate weaker agreement. The interpretation of the kappa coefficient below is given on page 404 of Altman DG. Practical Statistics for Medical Research. (1991) London England: Chapman and Hall. Poor agreement = less than 0.20. Fair agreement = 0.20 to 0.40. Moderate agreement = 0.40 to 0.60. Good agreement = 0.60 to 0.80. Very good agreement = 0.80 to 1.00.
* ROC: The area under the ROC curve represents sensitivity against specificity for a binary classifier as the discrimination threshold is varied. Like the Kappa coefficient, it is a very useful metric for determining whether a model correctly classifies instances of the different target classes. The values this metric can take range between 0.5 and 1, where 1 indicates perfect classification.
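As a quick toy illustration of these two metrics (the labels below are made up, purely for intuition):
###Code
from sklearn.metrics import cohen_kappa_score, roc_auc_score

# Made-up example: 10 observations, imbalanced like the real problem
y_true_demo = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred_demo = [0, 0, 0, 0, 0, 0, 0, 1, 1, 0]
print("kappa  :", cohen_kappa_score(y_true_demo, y_pred_demo))
print("ROC AUC:", roc_auc_score(y_true_demo, y_pred_demo))
###Output
_____no_output_____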
###Code
# General helper functions.
def GridSearchCVwithReport( X_train, y_train, X_test, y_test, classifier,
tuned_params,
scores, folds):
"""Aplica la función GridSearchCV, genera un reporte
detallado y retorna los mejores modelos generados para cada score recibido."""
start = tm.time()
clfs = []
for score in scores:
print()
print("Parámetros ajustados del score: %s" % score)
print()
clf = GridSearchCV(classifier, tuned_params, score, cv= folds, n_jobs = 1)
clf.fit(X_train, y_train)
print("Mejores parámetros encontrados:")
print(clf.best_params_)
print()
        # Predict on the test data to validate the metrics
y_pred = clf.predict(X_test)
print("-->Reporte de clasificación detallado<--")
print()
print("Matriz de confusión: ")
print(confusion_matrix(y_test, y_pred))
print()
print(classification_report(y_test, y_pred, digits = 5))
print()
print("Coeficiente de kappa: ")
print(cohen_kappa_score(y_test,y_pred))
print("Puntaje ROC_AUC: ")
print(roc_auc_score(y_test,y_pred))
print()
clfs.append(clf)
end = tm.time()
print("Tiempo total de ejecución (segundos): %.2f" % (end - start))
return clfs
class GeneralImputer(Imputer):
"""Se crea una clase Imputer generalizada para imputar datos categóricos."""
def __init__(self, **kwargs):
Imputer.__init__(self, **kwargs)
def fit(self, X, y=None):
if self.strategy == 'most_frequent':
self.fills = pd.DataFrame(X).mode(axis=0).squeeze()
self.statistics_ = self.fills.values
return self
else:
return Imputer.fit(self, X, y=y)
def transform(self, X):
if hasattr(self, 'fills'):
return pd.DataFrame(X).fillna(self.fills).values.astype(str)
else:
return Imputer.transform(self, X)
#https://stackoverflow.com/questions/25239958/impute-categorical-missing-values-in-scikit-learn
# Create a kappa scorer to guide the parameter selection.
kappa_scorer = make_scorer(cohen_kappa_score)
###Output
_____no_output_____
###Markdown
First modelling iteration
For the first iteration, the methods are tried with the GridSearchCV function wrapped to produce a more detailed report. For each method, several parameters are tuned, running 5 cross-validation folds per configuration. The execution time of the algorithms is also recorded to check how fast each method is.
K Nearest Neighbors
###Code
tuned_params = [{'n_neighbors': [15,20,25,30,35,40] }]
scores = [kappa_scorer,'roc_auc']
folds = 5
knn1_cv = GridSearchCVwithReport(X_train, y_train, X_test, y_test, KNeighborsClassifier(), tuned_params, scores, folds)
knn1_time = 4670.87/ (6*2*5)
"Tiempo de ejecución promedio: " + str(knn1_time)+" segundos."
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
tuned_params = [{'penalty': ['l1','l2'], 'C': [0.001,0.01,0.1,1,10,100,1000] }]
scores = [kappa_scorer,'roc_auc']
folds = 5
lg1_cv = GridSearchCVwithReport(X_train, y_train, X_test, y_test, LogisticRegression(), tuned_params, scores, folds)
# Average execution time (sec):
lg1_time = ((159.84/ (2*7*2*5)))
"Tiempo de ejecución promedio: " + str(lg1_time) +" segundos."
###Output
_____no_output_____
###Markdown
Random Forest Classification
###Code
tuned_params = {
"n_estimators" : [10,30, 50],
"max_features" : ["auto", "sqrt", "log2"],
"min_samples_split" : [2,4,8,10,12,20],
"bootstrap" : [True, False]
}
scores = [kappa_scorer,'roc_auc']
folds = 5
rf1 = GridSearchCVwithReport(X_train, y_train, X_test, y_test, RandomForestClassifier(), tuned_params, scores, folds)
rf1_time = ((1180.41/ (3*3*5*2*2*5)))
"Tiempo de ejecución promedio: " + str(rf1_time) +" segundos."
tuned_params={
'learning_rate': ["invscaling","adaptive"],
'hidden_layer_sizes': [(15),(20),(15,15),(20,20)],
'alpha': [ 0.01, 0.001, 0.0001],
'activation': ["tanh", "logistic"],
'solver': ['adam']
}
scores = [kappa_scorer,'roc_auc']
folds = 5
mlp1 = GridSearchCVwithReport(X_train, y_train, X_test, y_test, MLPClassifier(), tuned_params, scores, folds)
mlp1_time = ((1265.26/ (2*4*3*2*1*2*5)))
"Tiempo de ejecución promedio: " + str(mlp1_time) +" segundos."
fig = pyplot.figure(figsize=(20, 20))
pd_times = pd.DataFrame({'times':[knn1_time, lg1_time, rf1_time, mlp1_time],
'labels': ['KNN', 'LR', 'RF', 'MLP'],
})
ax1 = pyplot.subplot(2, 3, 1)
ax1 = sns.barplot(y = 'times', x='labels', data = pd_times, ax = ax1)
ax1.set_title("Segundos de ejecución promedio")
ax1.set_xlabel("Método")
ax1.set_ylabel("Segundos")
pd_true_positives = pd.DataFrame({'true_positives':[19.5, 21.6, 27.8, 24, 65],
'labels': ['KNN', 'LR', 'RF', 'MLP', 'Moro et al.'],
})
ax2 = pyplot.subplot(2, 3, 2)
sns.barplot(y = 'true_positives', x='labels', data = pd_true_positives, ax = ax2)
ax2.set_title("Tasa de Verdaderos Positivos (TP)")
ax2.set_xlabel("Método")
ax2.set_ylabel("%")
pd_kappa = pd.DataFrame({'kappa':[0.27, 0.3, 0.33, 0.31],
'labels': ['KNN', 'LR', 'RF', 'MLP'],
})
ax3 = pyplot.subplot(2, 3, 3)
sns.barplot(y = 'kappa', x='labels', data = pd_kappa, ax = ax3)
ax3.set_title("Coeficiente kappa")
ax3.set_xlabel("Método")
ax3.set_ylabel("Kappa")
pd_roc = pd.DataFrame({'roc':[0.59, 0.6, 0.63, 0.61],
'labels': ['KNN', 'LR', 'RF', 'MLP'],
})
ax4 = pyplot.subplot(2, 3, 4)
sns.barplot(y = 'roc', x='labels', data = pd_roc, ax = ax4)
ax4.set_title("Área bajo la curva")
ax4.set_xlabel("Método")
ax4.set_ylabel("ROC")
pd_tn = pd.DataFrame({'tn':[98.8, 99, 97.7, 98.5, 81],
'labels': ['KNN', 'LR', 'RF', 'MLP', 'Moro et al.'],
})
ax5 = pyplot.subplot(2, 3, 5)
sns.barplot(y = 'tn', x='labels', data = pd_tn, ax = ax5)
ax5.set_title("Tasa de Verdaderos Negativos (TN)")
ax5.set_xlabel("Método")
ax5.set_ylabel("%")
#ax2 = pyplot.subplot(2, 2, 3)
#pyplot.show()
pyplot.show()
###Output
_____no_output_____
###Markdown
Conclusions of the first iteration
The results obtained were not particularly positive. The KNN methodology incurs rather long execution times (presumably a problem of the implementation rather than of the method itself) without a notable accuracy, well below the other methodologies. The best value of K found was 15, with which only 19.5% of the clients who accepted the deposit were classified correctly, against 98.8% of those who did not. The kappa and area-under-the-curve values were 0.27 and 0.59 respectively, which is not very efficient.
Although logistic regression successfully classifies more than 99% of the clients who did not accept the deposit, it only manages to classify correctly 21.6% of those who did, which is of little use to the organization. This is reflected in a kappa of 0.3 and an area under the curve of 0.61, indicating that no great precision is obtained across all classes. It is, however, the fastest method to train, slightly ahead of the random forests. The optimal values for this algorithm were C = 100 and an l2 penalty.
The random forests sacrifice some accuracy when classifying the clients who did not accept the deposit (N), at 97.7%, but successfully classify 27.8% of the clients who did accept a term deposit (Y), measured as a kappa of 0.33 and a ROC of 0.63. The best results were obtained with bootstrap set to False, max features set to auto, the minimum samples per split set to 8 and the number of estimators set to 30.
The multilayer perceptron neural networks achieve a True Positive rate slightly above that of logistic regression (24%) but below that of the classification forests. For True Negatives their accuracy is 98.5%, which places them very close to the other models. Overall, the kappa coefficient and the area under the ROC curve are 0.31 and 0.61. In terms of execution time, the MLP networks have an average time above logistic regression and random forests. The best parameters found were a hyperbolic tangent activation function, alpha 0.0001, two hidden layers of 15 neurons each, an adaptive learning rate and the adam solver.
Logistic regression is the most accurate method for True Negatives (99% of clients), although in general the four methods are not very effective at detecting True Positives, which is of little use given the baseline established earlier. It appears that the use of dummy variables, although it allows categorical data to be classified with algorithms that do not support them natively, considerably increases the dimensionality of the data and makes classification harder, resulting in much higher execution times. Reducing the number of categorical parameters based on knowledge of the problem should be considered first.
Second preprocessing iteration
The second iteration is proposed with the following preprocessing tasks:
1. Reduction of categorical attributes.
2. Partitioning of the dataset.
3. Conversion of categorical variables to dummies.
4. Standardization of the numeric values.
###Code
print(cat_labels)
###Output
['job', 'marital', 'education', 'month', 'day_of_week', 'default', 'housing', 'poutcome', 'loan', 'contact']
###Markdown
Irreducible categorical attributes: the day of the week and the month, given their nature, cannot be reduced. The same applies to the job and poutcome attributes. Reducible categorical attributes that we choose not to reduce: the default attribute has only 3 records with 'yes', so the proportion of unknown values becomes too large to ignore. Directly binarizable attributes: the 'contact' attribute has only two values, so it can be converted directly into a binary value. Reducible categorical attributes that must be imputed first: for the education, marital, housing and loan attributes, dimensionality can be reduced through imputation. Mode imputation is used here, considering that the number of missing values in these four attributes is small. In the case of education, once missing values are handled, a mapping to integers is applied: 'illiterate': 0, 'basic.4y': 1, 'basic.6y': 2, 'basic.9y': 3, 'high.school': 4, 'university.degree': 5, 'professional.course': 6
###Code
def education_map(x):
education_dict = { 'illiterate':0, 'basic.4y':1, 'basic.6y': 2, 'basic.9y': 3, 'high.school':4,
'university.degree':5, 'professional.course':6 }
return education_dict[x]
# Preprocessing
df_it2 = df_orig.copy()
df_it2.contact = df_it2.contact.apply(lambda x: 0 if x == 'cellular' else 1)
labels_with_unknowns = ['marital', 'housing', 'loan', 'education']
df_it2[labels_with_unknowns] = df_it2[labels_with_unknowns].replace('unknown', np.NaN)
imputer = GeneralImputer(strategy='most_frequent')
imputer.fit(df_it2[labels_with_unknowns])
df_it2[labels_with_unknowns] = imputer.transform(df_it2[labels_with_unknowns])
df_it2['education'] = df_it2['education'].apply(lambda x: education_map(x))
df_it2['housing'] = df_it2['housing'] .apply(lambda x: 1 if x == 'yes' else 0)
df_it2['loan'] = df_it2['loan'] .apply(lambda x: 1 if x == 'yes' else 0)
# Add dummy variables.
dummy_cat_labels = ['job','marital','month','day_of_week','default', 'poutcome']
df_it2_dummied = pd.get_dummies(df_it2, columns = dummy_cat_labels, sparse = True, drop_first = True)
# Partition the dataset
(df_train2, df_test2) = get_partitions(df_it2_dummied, 1, 0.3)
new_labels = list(df_it2_dummied.columns.values)
X_train2 = df_train2.copy()
y_train2 = X_train2.y
del(X_train2['y'])
X_test2 = df_test2.copy()
y_test2 = X_test2.y
del(X_test2['y'])
# Scaling, including the new numeric variable
num_labels2 = num_labels + ['education']
scaler = preprocessing.MinMaxScaler()
scaler = scaler.fit(X_train2[num_labels2])
X_train2.loc[:,num_labels2] = scaler.transform(X_train2[num_labels2])
X_test2.loc[:, num_labels2] = scaler.transform(X_test2[num_labels2])
print(len(new_labels))
###Output
44
###Markdown
With these modifications the number of labels is reduced to 44 (43 without the response variable). Second modeling iteration: K Nearest Neighbors
###Code
tuned_params = [{'n_neighbors': [15,20,25,30,35,40] }]
scores = [kappa_scorer,'roc_auc']
folds = 5
knn2 = GridSearchCVwithReport(X_train2, y_train2, X_test2, y_test2, KNeighborsClassifier(), tuned_params, scores, folds)
# Average execution time (sec):
knn2_time = ((2298.99/ (6*2*5)))
"Tiempo de ejecución promedio: " + str(knn2_time) +" segundos."
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
#[[10832 133]
#[ 1081 311]]
tuned_params = [{'penalty': ['l1','l2'], 'C': [0.001,0.01,0.1,1,10,100,1000] }]
scores = [kappa_scorer,'roc_auc']
folds = 5
lg2 = GridSearchCVwithReport(X_train2, y_train2, X_test2, y_test2, LogisticRegression(), tuned_params, scores, folds)
lg2_time = ((172.01/ (2*8*2*5)))
"Tiempo de ejecución promedio: " + str(lg2_time) +" segundos."
###Output
_____no_output_____
###Markdown
Random Forest
###Code
tuned_params = {
"n_estimators" : [10,30, 50],
"max_features" : ["auto", "sqrt", "log2"],
"min_samples_split" : [2,4,8,10,12,20],
"bootstrap" : [True, False]
}
scores = [kappa_scorer,'roc_auc']
folds = 5
rf2 = GridSearchCVwithReport(X_train2, y_train2, X_test2, y_test2, RandomForestClassifier(), tuned_params, scores, folds)
rf2_time = ((851.24/ (9*12*10)))
"Tiempo de ejecución promedio: " + str(rf2_time) +" segundos."
###Output
_____no_output_____
###Markdown
Multilayer Perceptron Neural Network
###Code
tuned_params={
'learning_rate': ["invscaling","adaptive"],
'hidden_layer_sizes': [(15),(20),(15,15),(20,20)],
'alpha': [ 0.1,0.01, 0.001, 0.0001],
'activation': ["identity","tanh","relu", "logistic"],
'solver': ['adam']
}
scores = [kappa_scorer,'roc_auc']
folds = 5
mlp2 = GridSearchCVwithReport(X_train2, y_train2, X_test2, y_test2, MLPClassifier(), tuned_params, scores, folds)
mlp2_time = ((2944.89/ (2*4*4*4*1*2*5)))
"Tiempo de ejecución promedio: " + str(rf2_time) +" segundos."
fig = pyplot.figure(figsize=(20, 20))
pd_times = pd.DataFrame({'times':[knn2_time, lg2_time, rf2_time, mlp2_time],
'labels': ['KNN', 'LR', 'RF', 'MLP'],
})
ax1 = pyplot.subplot(2, 3, 1)
ax1 = sns.barplot(y = 'times', x='labels', data = pd_times, ax = ax1)
ax1.set_title("Segundos de ejecución promedio")
ax1.set_xlabel("Método")
ax1.set_ylabel("Segundos")
pd_true_positives = pd.DataFrame({'true_positives':[21.9,22.2 , 27.7 , 24.2, 65],
'labels': ['KNN', 'LR', 'RF', 'MLP', 'Moro et al.'],
})
ax2 = pyplot.subplot(2, 3, 2)
sns.barplot(y = 'true_positives', x='labels', data = pd_true_positives, ax = ax2)
ax2.set_title("Porcentaje de verdaderos positivos")
ax2.set_xlabel("Método")
ax2.set_ylabel("%")
pd_kappa = pd.DataFrame({'kappa':[0.28, 0.3, 0.34, 0.32],
'labels': ['KNN', 'LR', 'RF', 'MLP'],
})
ax3 = pyplot.subplot(2, 3, 3)
sns.barplot(y = 'kappa', x='labels', data = pd_kappa, ax = ax3)
ax3.set_title("Coeficiente kappa")
ax3.set_xlabel("Método")
ax3.set_ylabel("Kappa")
pd_roc = pd.DataFrame({'roc':[0.6, 0.61, 0.63, 0.61],
'labels': ['KNN', 'LR', 'RF', 'MLP'],
})
ax4 = pyplot.subplot(2, 3, 4)
sns.barplot(y = 'roc', x='labels', data = pd_roc, ax = ax4)
ax4.set_title("Área bajo la curva")
ax4.set_xlabel("Método")
ax4.set_ylabel("ROC")
pd_tn = pd.DataFrame({'tn':[98.5, 98.7, 97.7, 98.5, 81],
'labels': ['KNN', 'LR', 'RF', 'MLP', 'Moro et al.'],
})
ax5 = pyplot.subplot(2, 3, 5)
sns.barplot(y = 'tn', x='labels', data = pd_tn, ax = ax5)
ax5.set_title("Tasa de Verdaderos Negativos (TP)")
ax5.set_xlabel("Método")
ax5.set_ylabel("%")
#ax2 = pyplot.subplot(2, 2, 3)
#pyplot.show()
pyplot.show()
###Output
_____no_output_____
###Markdown
Conclusions of the second modeling iteration By reducing features based on the count of missing values and applying a numeric rule to the education attribute, slight improvements in classification can be observed. There is also a clear improvement in the algorithms' execution times. The KNN methodology remains the worst performer, with a TP rate of 22% and a TN rate of 98.3%, a kappa of 0.28 and a ROC of 0.6. The same hyperparameter k = 15 is kept, and the dimensionality reduction has a considerable impact on execution time, cutting it to an average of 37 seconds. Logistic regression now classifies 22.3% of the 'yes' instances correctly, a minimal improvement in exchange for a drop to 98.7% of True Negatives, although the impact on the ROC metric is positive, raising it to 0.61. Execution times drop to 1.07 seconds, which indicates that the logistic regression algorithm is not as sensitive to large numbers of attributes as the others. In this case the optimal hyperparameters were C = 1000 and an l1 penalty. Random forests keep the best True Positive rate, the best kappa and the best area under the curve, with 27.7%, 0.34 and 0.64 respectively. The TN rate stayed at 97.7%, which broadly indicates no major improvement beyond the execution time, which drops to 0.78 seconds per run, showing the strong impact of the number of attributes on the algorithm's performance. The optimal hyperparameters were the same as in the previous iteration, except for the minimum samples per split, which was set to 12, and the ideal number of estimators, which was 50. The neural networks keep a performance very similar to before, with a 24.2% TP rate and a 98.5% TN rate, a kappa of 0.32 and a ROC of 0.61. The same ideal hyperparameters were found except for the value of alpha, which went from 0.0001 to 0.01, and the layers, defined as two hidden layers of 20 neurons each. In any case, the method sits between the classification forests and logistic regression. In this iteration there is no appreciable improvement in the True Positive rate; however, it was possible to observe the impact of dataset dimensionality on execution times, and how slight modifications to the categorical attributes can help reduce dimensionality, in some cases resulting in increased precision. On the other hand, logistic regression remains the best at classifying True Negatives, while the task of classifying True Positives is carried out more efficiently by the classification forests. KNN, in contrast, fails in both cases, and the Neural Networks sit at a midpoint between Logistic Regression and the Classification Trees. Third preprocessing iteration This iteration is similar to the previous one, but since the low TP rate is suspected to be due to class imbalance, class balancing is applied before standardizing the numeric values. 1. Reduction of categorical attributes. 2. Partitioning of the dataset. 3. Conversion of categorical variables to dummies. 4. Standardization of numeric values. 5. Class balancing using the mixed oversampling/undersampling methodology SMOTEENN (a sketch of this combined resampling appears after the resampling cell below).
###Code
df_it3 = df_orig.copy()
df_it3.contact = df_it3.contact.apply(lambda x: 0 if x == 'cellular' else 1)
df_it3.contact.describe()
labels_with_unknowns = ['marital', 'housing', 'loan', 'education']
df_it3[labels_with_unknowns] = df_it3[labels_with_unknowns].replace('unknown', np.NaN)
imputer = GeneralImputer(strategy='most_frequent')
imputer.fit(df_it3[labels_with_unknowns])
df_it3[labels_with_unknowns] = imputer.transform(df_it3[labels_with_unknowns])
df_it3['education'] = df_it3['education'].apply(lambda x: education_map(x))
df_it3['housing'] = df_it3['housing'] .apply(lambda x: 1 if x == 'yes' else 0)
df_it3['loan'] = df_it3['loan'] .apply(lambda x: 1 if x == 'yes' else 0)
dummy_cat_labels = ['job','marital','month','day_of_week','default', 'poutcome']
df_it3_dummied = pd.get_dummies(df_it3, columns = dummy_cat_labels, sparse = True, drop_first = True)
(df_train3, df_test3) = get_partitions(df_it3_dummied, 1, 0.3)
new_labels = list(df_it3_dummied.columns.values)
X_train3 = df_train3.copy()
y_train3 = X_train3.y
del(X_train3['y'])
X_test3 = df_test3.copy()
y_test3 = X_test3.y
del(X_test3['y'])
num_labels3 = num_labels + ['education']
scaler = preprocessing.MinMaxScaler()
scaler = scaler.fit(X_train3[num_labels3])
X_train3.loc[:,num_labels3] = scaler.transform(X_train3[num_labels3])
X_test3.loc[:, num_labels3] = scaler.transform(X_test3[num_labels3])
X_resampled3, y_resampled3 = SMOTE().fit_sample(X_train3, y_train3)
X_resampled3 = pd.DataFrame(X_resampled3)
X_resampled3.columns = X_train3.columns
X_resampled3.describe()
###Output
_____no_output_____
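###Markdown
The preprocessing plan above lists SMOTEENN, while the cell above applies SMOTE alone. Below is a minimal, hedged sketch of how the combined over/under-sampling could be done with SMOTEENN, assuming the imbalanced-learn package is available; the names `X_balanced3`/`y_balanced3` are introduced only for illustration, and newer imbalanced-learn releases expose `fit_resample` instead of the older `fit_sample` used above.
###Code
from imblearn.combine import SMOTEENN
# Combined resampling: SMOTE oversampling followed by Edited Nearest Neighbours cleaning.
# Assumes imbalanced-learn >= 0.4, where fit_resample is available.
smote_enn = SMOTEENN(random_state=0)
X_balanced3, y_balanced3 = smote_enn.fit_resample(X_train3, y_train3)
X_balanced3 = pd.DataFrame(X_balanced3, columns=X_train3.columns)
# Inspect the resulting class balance.
print(pd.Series(y_balanced3).value_counts())
###Output
_____no_output_____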
###Markdown
Third modeling iteration: K Nearest Neighbors
###Code
tuned_params = [{'n_neighbors': [15,20,25,30,35,40] }]
scores = [kappa_scorer,'roc_auc']
folds = 5
knn3 = GridSearchCVwithReport(X_train3, y_train3, X_test3, y_test3, KNeighborsClassifier(), tuned_params, scores, folds)
knn3_time = ((2311.85/ (6*2*5)))
"Tiempo de ejecución promedio: " + str(knn3_time) +" segundos."
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
tuned_params = [{'penalty': ['l1','l2'], 'C': [0.001,0.01,0.1,1,10,100,1000] }]
scores = [kappa_scorer,'roc_auc']
folds = 5
lg3 = GridSearchCVwithReport(X_resampled3, y_resampled3, X_test3, y_test3, LogisticRegression(), tuned_params, scores, folds)
lg3_time = ((586.72/ (2*7*2*5)))
"Tiempo de ejecución promedio: " + str(lg3_time) +" segundos."
###Output
_____no_output_____
###Markdown
Random Forest
###Code
tuned_params = {
"n_estimators" : [10,30, 50],
"max_features" : ["auto", "sqrt", "log2"],
"min_samples_split" : [2,4,8,10,12,20],
"bootstrap" : [True, False]
}
scores = [kappa_scorer,'roc_auc']
folds = 5
rf3 = GridSearchCVwithReport(X_resampled3, y_resampled3, X_test3, y_test3, RandomForestClassifier(), tuned_params, scores, folds)
rf3_time = ((1923.14/ (3*3*6*2*5*2)))
"Tiempo de ejecución promedio: " + str(rf3_time) +" segundos."
###Output
_____no_output_____
###Markdown
Multilayer Perceptron Neural Network
###Code
tuned_params={
'learning_rate': ["invscaling","adaptive", "constant"],
'hidden_layer_sizes': [(10),(15),(20),(10,10),(15,15),(15,10),(20,20)],
'alpha': [1, 0.5, 0.1, 0.05, 0.01, 0.001, 0.0001],
'activation': ["identity","tanh","relu", "logistic"],
'solver': ['lbfgs','adam','sgd']
}
scores = [kappa_scorer,'roc_auc']
folds = 5
mlp3 = GridSearchCVwithReport(X_resampled3, y_resampled3, X_test3, y_test3, MLPClassifier(), tuned_params, scores, folds)
mlp3_time = ((98786.25/ (3*7*7*4*3*2*5)))
"Tiempo de ejecución promedio: " + str(mlp3_time) +" segundos."
fig = pyplot.figure(figsize=(20, 20))
pd_times = pd.DataFrame({'times':[knn3_time, lg3_time, rf3_time, mlp3_time],
'labels': ['KNN', 'LR', 'RF', 'MLP'],
})
ax1 = pyplot.subplot(2, 3, 1)
ax1 = sns.barplot(y = 'times', x='labels', data = pd_times, ax = ax1)
ax1.set_title("Segundos de ejecución promedio")
ax1.set_xlabel("Método")
ax1.set_ylabel("Segundos")
pd_true_positives = pd.DataFrame({'true_positives':[21.9,66.3 , 35 , 58, 65],
'labels': ['KNN', 'LR', 'RF', 'MLP', 'Moro et al.'],
})
ax2 = pyplot.subplot(2, 3, 2)
sns.barplot(y = 'true_positives', x='labels', data = pd_true_positives, ax = ax2)
ax2.set_title("Porcentaje de verdaderos positivos")
ax2.set_xlabel("Método")
ax2.set_ylabel("%")
pd_kappa = pd.DataFrame({'kappa':[0.28, 0.36, 0.36, 0.31],
'labels': ['KNN', 'LR', 'RF', 'MLP'],
})
ax3 = pyplot.subplot(2, 3, 3)
sns.barplot(y = 'kappa', x='labels', data = pd_kappa, ax = ax3)
ax3.set_title("Coeficiente kappa")
ax3.set_xlabel("Método")
ax3.set_ylabel("Kappa")
pd_roc = pd.DataFrame({'roc':[0.6, 0.75, 0.66, 0.71],
'labels': ['KNN', 'LR', 'RF', 'MLP'],
})
ax4 = pyplot.subplot(2, 3, 4)
sns.barplot(y = 'roc', x='labels', data = pd_roc, ax = ax4)
ax4.set_title("Área bajo la curva")
ax4.set_xlabel("Método")
ax4.set_ylabel("ROC")
pd_tn = pd.DataFrame({'tn':[98.3, 83.4, 95.8, 84.2, 81],
'labels': ['KNN', 'LR', 'RF', 'MLP', 'Moro et al.'],
})
ax5 = pyplot.subplot(2, 3, 5)
sns.barplot(y = 'tn', x='labels', data = pd_tn, ax = ax5)
ax5.set_title("Tasa de Verdaderos Negativos (TP)")
ax5.set_xlabel("Método")
ax5.set_ylabel("%")
###Output
_____no_output_____ |
notebooks/grupo1/Aprendizaje supervisado.ipynb | ###Markdown
Data processing 1. Load the dataframe with the already prepared data
###Code
import pandas as pd
df = pd.read_parquet("/home/mpccolorado/movimientos_curados4.parquet")
import warnings
warnings.filterwarnings('ignore')
pd.set_option('display.max_columns', None) # or 1000.
pd.set_option('display.max_rows', None) # or 1000.
pd.set_option('display.max_colwidth', None) # or 199.
###Output
_____no_output_____
###Markdown
Transforming the mes-año column We apply a DictVectorizer to the 'mes-año' column because it is important for us to keep the month in the study
###Code
from sklearn import feature_extraction
import numpy as np
def get_dataframe_with_mes_año(dataframe):
df_copy = dataframe.copy()
feature_cols = ['mes-año']
features = list(df_copy[feature_cols].T.to_dict().values())
vectorizer = feature_extraction.DictVectorizer(sparse=False)
feature_matrix = vectorizer.fit_transform(features)
feature_names = vectorizer.get_feature_names()
df_copy.drop('mes-año', axis=1, inplace=True)
matriz_densa_completa = np.hstack([feature_matrix, df_copy.values])
return pd.DataFrame(data=matriz_densa_completa, columns=feature_names + df_copy.columns.values.tolist())
###Output
_____no_output_____
###Markdown
Scaling We group the related columns into separate arrays:
###Code
meses_features = [
'mes-año=2020-07','mes-año=2020-08','mes-año=2020-09','mes-año=2020-10','mes-año=2020-11','mes-año=2020-12',
'mes-año=2021-01','mes-año=2021-02','mes-año=2021-03','mes-año=2021-04','mes-año=2021-05'
]
edad_features = [
'rango_edad=(17, 27]','rango_edad=(27, 37]','rango_edad=(37, 47]','rango_edad=(47, 57]','rango_edad=(57, 67]',
'rango_edad=(67, 77]','rango_edad=(77, 109]'
]
estado_civil_features = [
'estado_civil_descripcion=Casadoa','estado_civil_descripcion=Divorciadoa',
'estado_civil_descripcion=Separacion de hecho','estado_civil_descripcion=Sin Datos',
'estado_civil_descripcion=Solteroa','estado_civil_descripcion=Viudoa'
]
sexo_features = [ 'sexo_descripcion=Hombre','sexo_descripcion=Mujer' ]
provincia_features = [
'provincia=BUENOS AIRES','provincia=CAPITAL FEDERAL','provincia=CATAMARCA','provincia=CHACO',
'provincia=CHUBUT','provincia=CORDOBA','provincia=CORRIENTES','provincia=ENTRE RIOS',
'provincia=FORMOSA','provincia=JUJUY','provincia=LA PAMPA','provincia=LA RIOJA',
'provincia=MENDOZA','provincia=MISIONES','provincia=NEUQUEN','provincia=RIO NEGRO',
'provincia=SALTA','provincia=SAN JUAN','provincia=SAN LUIS','provincia=SANTA CRUZ',
'provincia=SANTA FE','provincia=SGO. DEL ESTERO','provincia=TIERRA DEL FUEGO','provincia=TUCUMAN'
]
antig_features = [
'rango_antig=(-1, 4]','rango_antig=(14, 19]','rango_antig=(19, 24]','rango_antig=(24, 32]',
'rango_antig=(4, 9]','rango_antig=(9, 14]'
]
cargo_features = [
'cargo_cat=F','cargo_cat=I','cargo_cat=PEONEMBARCADOS','cargo_cat=PORTEROCONSERJ','cargo_cat=PROFESTECNICO',
'cargo_cat=RD','cargo_cat=RDO','cargo_cat=SD','cargo_cat=VENDEDORPROMOT'
]
nivel_estudio_features = [
'nivel_estudio_descripcion_histo=PRIMARIOS','nivel_estudio_descripcion_histo=SECUNDARIOS',
'nivel_estudio_descripcion_histo=TERCIARIOS','nivel_estudio_descripcion_histo=UNIVERSITARIOS'
]
vivienda_features = [ 'rel_vivienda_descripcion_histo=Otros','rel_vivienda_descripcion_histo=Propia' ]
producto_features = [
'producto_naranja_movimiento=AV','producto_naranja_movimiento=AX','producto_naranja_movimiento=EX',
'producto_naranja_movimiento=MC','producto_naranja_movimiento=PC','producto_naranja_movimiento=PL',
'producto_naranja_movimiento=PN','producto_naranja_movimiento=PP','producto_naranja_movimiento=SM',
'producto_naranja_movimiento=TA','producto_naranja_movimiento=VI','producto_naranja_movimiento=ZE'
]
tipo_producto_features = [
'tipo_producto_tarjeta_movimiento=0','tipo_producto_tarjeta_movimiento=3','tipo_producto_tarjeta_movimiento=99'
]
debito_features = [ 'marca_debito_automatico=0','marca_debito_automatico=1' ]
cat_comercio_features = [
'cat_comercio=0','cat_comercio=1','cat_comercio=2','cat_comercio=3','cat_comercio=4',
'cat_comercio=5','cat_comercio=6','cat_comercio=7','cat_comercio=8','cat_comercio=9'
]
plan_features = [
'plan_movimiento=1','plan_movimiento=10','plan_movimiento=11','plan_movimiento=12','plan_movimiento=2',
'plan_movimiento=3','plan_movimiento=4','plan_movimiento=5','plan_movimiento=6','plan_movimiento=8',
'plan_movimiento=9'
]
target_feature = ['monto_normalizado']
###Output
_____no_output_____
###Markdown
Scaling 1 We create separate scaler objects for the data according to its type and its group
###Code
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler
producto_scaler = StandardScaler()
tipo_producto_scaler = StandardScaler()
debito_scaler = StandardScaler()
cat_comercio_scaler = StandardScaler()
plan_scaler = StandardScaler()
preprocessor1 = ColumnTransformer(
transformers=[
('meses', 'passthrough', meses_features),
('edad', 'passthrough', edad_features),
('estado_civil', 'passthrough', estado_civil_features),
('sexo', 'passthrough', sexo_features),
('provincia', 'passthrough', provincia_features),
('antig', 'passthrough', antig_features),
('cargo', 'passthrough', cargo_features),
('nivel_estudio', 'passthrough', nivel_estudio_features),
('vivienda', 'passthrough', vivienda_features),
('producto', producto_scaler, producto_features),
('tipo_producto', tipo_producto_scaler, tipo_producto_features),
('debito', debito_scaler, debito_features),
('cat_comercio', cat_comercio_scaler, cat_comercio_features),
('plan', plan_scaler, plan_features)
]
)
###Output
_____no_output_____
###Markdown
Scaling 2 We will scale all numeric features using the same scaler.
###Code
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler
standard_scaler = StandardScaler()
preprocessor2 = ColumnTransformer(
transformers=[
('meses', 'passthrough', meses_features),
('edad', 'passthrough', edad_features),
('estado_civil', 'passthrough', estado_civil_features),
('sexo', 'passthrough', sexo_features),
('provincia', 'passthrough', provincia_features),
('antig', 'passthrough', antig_features),
('cargo', 'passthrough', cargo_features),
('nivel_estudio', 'passthrough', nivel_estudio_features),
('vivienda', 'passthrough', vivienda_features),
('numeric_features', standard_scaler,
producto_features + tipo_producto_features + debito_features + cat_comercio_features + plan_features)
]
)
###Output
_____no_output_____
###Markdown
**Still pending: try scaling the target** (a sketch of one way to do this appears after the helper functions below) --- Regression
###Code
df_reg = df.copy()
df_reg.drop(['dni'], axis=1, inplace=True)
df_reg = get_dataframe_with_mes_año(df_reg)
###Output
_____no_output_____
###Markdown
Error functions
###Code
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
def evaluate_errors(model, X_train, X_test, y_train, y_test, description):
y_train_pred = model.predict(X_train)
y_test_pred = model.predict(X_test)
(train_error_MSE, test_error_MSE) = evaluate_MSE(y_train, y_train_pred, y_test, y_test_pred)
(train_error_RMSE, test_error_RMSE) = evaluate_RMSE(y_train, y_train_pred, y_test, y_test_pred)
(train_error_MAE, test_error_MAE) = evaluate_MAE(y_train, y_train_pred, y_test, y_test_pred)
errors = pd.DataFrame(data=[], columns=['description', 'train_error_MAE', 'test_error_MAE'])
errors = errors.append(
{
'description': description,
'train_error_MSE': train_error_MSE,
'test_error_MSE': test_error_MSE,
'train_error_RMSE': train_error_RMSE,
'test_error_RMSE': test_error_RMSE,
'train_error_MAE': train_error_MAE,
'test_error_MAE': test_error_MAE
}, ignore_index=True)
return errors
def evaluate_MSE(y_train, y_train_pred, y_test, y_test_pred):
train_error = mean_squared_error(y_train, y_train_pred)
test_error = mean_squared_error(y_test, y_test_pred)
#print(f'Train error MSE: {train_error}, Test error MSE: {test_error}')
return (train_error, test_error)
def evaluate_RMSE(y_train, y_train_pred, y_test, y_test_pred):
train_error = np.sqrt(mean_squared_error(y_train, y_train_pred))
test_error = np.sqrt(mean_squared_error(y_test, y_test_pred))
#print(f'Train error RMSE {train_error.round(3)}, Test error RMSE {test_error.round(3)}')
return (train_error, test_error)
def evaluate_MAE(y_train, y_train_pred, y_test, y_test_pred):
train_error = mean_absolute_error(y_train, y_train_pred)
test_error = mean_absolute_error(y_test, y_test_pred)
#print(f'Train error MAE {train_error.round(3)}, Test error MAE {test_error.round(3)}')
return (train_error, test_error)
###Output
_____no_output_____
###Markdown
División de los datos
###Code
from sklearn.model_selection import train_test_split
# Split into instances and labels
X, y = df_reg.drop('monto_normalizado', axis=1), df_reg.monto_normalizado
# Split into training and evaluation sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
###Output
_____no_output_____
###Markdown
Helper function to simplify the processing So that we can evaluate the different algorithms, first without any scaling and then with scaling preprocessors 1 and 2
###Code
from sklearn.pipeline import Pipeline
def execute_models(regressor, grid_search=None, feature_selection=None):
(errors1, model1) = execute_pipe(regressor, grid_search, None, feature_selection, 'no preprocessor')
(errors2, model2) = execute_pipe(regressor, grid_search, preprocessor1, feature_selection, 'preprocessor 1')
(errors3, model3) = execute_pipe(regressor, grid_search, preprocessor2, feature_selection, 'preprocessor 2')
return (
errors1.append(errors2).append(errors3),
model1,
model2,
model3
)
def execute_pipe(regressor, grid_search, preprocessor, feature_selection, description):
pipes = []
if preprocessor:
pipes.append(('preprocessor', preprocessor))
if feature_selection:
pipes.append(('feature_selection', feature_selection))
pipes.append(('regressor', regressor))
pipe = Pipeline(pipes)
if grid_search:
model = grid_search(pipe)
else:
model = pipe
model.fit(X_train, y_train)
errors = evaluate_errors(model, X_train, X_test, y_train, y_test, description)
return (errors, model)
###Output
_____no_output_____
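###Markdown
Regarding the pending note above about scaling the target: a minimal sketch of one way this could be wired in with scikit-learn's `TransformedTargetRegressor`, shown only as an illustration (the wrapped `LinearSVR` and the choice of `StandardScaler` are assumptions, and this wrapper is not used in the experiments that follow).
###Code
from sklearn.compose import TransformedTargetRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVR
# Wrap a regressor so that y is standardized before fitting and predictions
# are transformed back to the original scale automatically.
target_scaled_svr = TransformedTargetRegressor(
    regressor=LinearSVR(random_state=0, tol=1e-5),
    transformer=StandardScaler()
)
# It could then be passed to the helper defined above, e.g.:
# execute_models(regressor=target_scaled_svr)
###Output
_____no_output_____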
###Markdown
------ Linear SVR Default
###Code
from sklearn.svm import LinearSVR
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
(errors_svr, svr1, svr2, svr3) = execute_models(
regressor = LinearSVR(random_state=0, tol=1e-5)
)
errors_svr
###Output
_____no_output_____
###Markdown
Grid Search
###Code
from sklearn.model_selection import RandomizedSearchCV
param_grid = {
'regressor__epsilon': [0.1, 0.01, 0.0001,0.001],
'regressor__tol': [1e-3, 1e-4, 1e-5, 1e-6],
'regressor__C': [1, 2, 0.01, 0.001, 0.0001],
'regressor__loss': ['epsilon_insensitive', 'squared_epsilon_insensitive']
}
(errors_grid_svr, svr_grid_1, svr_grid_2, svr_grid_3) = execute_models(
regressor = LinearSVR(random_state=0),
grid_search = lambda pipe: RandomizedSearchCV(pipe, param_grid, scoring='neg_mean_squared_error',cv=5, n_iter=40)
)
errors_grid_svr
###Output
_____no_output_____
###Markdown
Conclusions The best result was obtained with the **"Linear SVR - Grid Search"** model (svr_grid_1)
###Code
svr_grid_1.best_params_
svr_model = svr_grid_1
###Output
_____no_output_____
###Markdown
------ SGDRegressor Default
###Code
from sklearn.linear_model import SGDRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
(errors_sgd, sgd1, sgd2, sgd3) = execute_models(
regressor = SGDRegressor(random_state=0, max_iter=1000, tol=1e-3)
)
errors_sgd
###Output
_____no_output_____
###Markdown
Grid Search
###Code
param_grid = {
'regressor__loss': ['squared_error', 'huber', 'epsilon_insensitive', 'squared_epsilon_insensitive'],
'regressor__penalty': ['l2', 'l1', 'elasticnet'],
'regressor__alpha': [0.1, 0.01, 0.001, 0.0001],
'regressor__tol': [1e-3, 1e-4, 1e-5, 1e-6],
'regressor__epsilon': [0.1, 0.01, 0.0001,0.001]
}
(errors_grid_sgd, sgd_grid_1, sgd_grid_2, sgd_grid_3) = execute_models(
regressor = SGDRegressor(random_state=0, max_iter=1000, tol=1e-3),
grid_search = lambda pipe: RandomizedSearchCV(pipe, param_grid, scoring='neg_mean_squared_error',cv=5, n_iter=40)
)
errors_grid_sgd
###Output
_____no_output_____
###Markdown
Conclusions The best result was obtained with the **SGDRegressor - Grid Search** model (sgd_grid_1)
###Code
sgd_grid_1.best_params_
sgd_model = sgd_grid_1
###Output
_____no_output_____
###Markdown
--- KNeighborsRegressor Default
###Code
from sklearn.neighbors import KNeighborsRegressor
(errors_knn, knn1, knn2, knn3) = execute_models(
regressor = KNeighborsRegressor(n_neighbors=2)
)
errors_knn
###Output
_____no_output_____
###Markdown
GridSearch
###Code
param_grid = {
'regressor__n_neighbors': [4,5,6,7,8],
'regressor__weights': ['uniform', 'distance'],
'regressor__algorithm': ['auto', 'ball_tree', 'kd_tree', 'brute'],
'regressor__p': [1,2]
}
(errors_grid_knn, knn_grid_1, knn_grid_2, knn_grid_3) = execute_models(
regressor = KNeighborsRegressor(),
grid_search = lambda pipe: RandomizedSearchCV(pipe, param_grid, scoring='neg_mean_squared_error',cv=5, n_iter=40)
)
errors_grid_knn
###Output
_____no_output_____
###Markdown
Conclusions We obtained the best results with the **KNeighborsRegressor - Grid Search - preprocessor 1** model (knn_grid_2)
###Code
knn_model = knn_grid_2
###Output
_____no_output_____
###Markdown
--- GaussianProcessRegressor Default
###Code
from sklearn.gaussian_process import GaussianProcessRegressor
(errors_gpr, gpr1, gpr2, gpr3) = execute_models(
regressor = GaussianProcessRegressor(random_state=0)
)
errors_gpr
###Output
_____no_output_____
###Markdown
Grid Search
###Code
from sklearn.gaussian_process.kernels import ConstantKernel, RBF, RationalQuadratic, ExpSineSquared
ker_rbf = ConstantKernel(1.0, constant_value_bounds="fixed") * RBF(1.0, length_scale_bounds="fixed")
ker_rq = ConstantKernel(1.0, constant_value_bounds="fixed") * RationalQuadratic(alpha=0.1, length_scale=1)
ker_expsine = ConstantKernel(1.0, constant_value_bounds="fixed") * ExpSineSquared(1.0, 5.0, periodicity_bounds=(1e-2, 1e1))
kernel_list = [ker_rbf, ker_rq, ker_expsine]
param_grid = {"regressor__kernel": kernel_list,
"regressor__alpha": [0.1]}
(errors_grid_gpr, gpr_grid_1, gpr_grid_2, gpr_grid_3) = execute_models(
regressor = GaussianProcessRegressor(random_state=0),
grid_search = lambda pipe: RandomizedSearchCV(pipe, param_grid, scoring='neg_mean_squared_error',cv=5, n_iter=40)
)
errors_grid_gpr
###Output
_____no_output_____
###Markdown
Conclusions We obtained the best results with the **GaussianProcessRegressor - Grid Search - preprocessor 1** model (gpr_grid_2)
###Code
gpr_grid_2.best_params_
gpr_model = gpr_grid_2
###Output
_____no_output_____
###Markdown
--- XGBRegressor Default
###Code
from xgboost import XGBRegressor
from sklearn.feature_selection import SelectFromModel
(errors_xgb, xgb1, xgb2, xgb3) = execute_models(
regressor = XGBRegressor(random_state=0),
feature_selection = SelectFromModel(LinearSVR(random_state=0))
)
errors_xgb
###Output
_____no_output_____
###Markdown
Grid Search
###Code
param_grid = {'regressor__n_estimators': [80, 90, 100, 110, 120, 130, 250],
'regressor__reg_alpha': [0, 0.1, 3, 5, 10, 15],
'regressor__booster' : ['gbtree', 'gblinear','dart']}
(errors_grid_xgb, xgb_grid_1, xgb_grid_2, xgb_grid_3) = execute_models(
regressor = XGBRegressor(random_state=0),
feature_selection = SelectFromModel(LinearSVR(random_state=0)),
grid_search = lambda pipe: RandomizedSearchCV(pipe, param_grid, scoring='neg_mean_squared_error',cv=5)
)
errors_grid_xgb
###Output
_____no_output_____
###Markdown
Conclusions The best results were obtained with the **Grid Search model that uses preprocessor 1** (xgb_grid_2)
###Code
xgb_grid_2.best_params_
xgb_model = xgb_grid_2
###Output
_____no_output_____
###Markdown
--- VotingRegressor Default
###Code
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import VotingRegressor
r1 = LinearRegression()
r2 = RandomForestRegressor(n_estimators=10, random_state=1)
er = VotingRegressor([('lr', r1), ('rf', r2)])
(errors_vot, vot1, vot2, vot3) = execute_models(
regressor = er
)
errors_vot
###Output
_____no_output_____
###Markdown
Grid Search
###Code
from sklearn.model_selection import GridSearchCV
# The XGBoost param_grid from the previous cell does not apply to a VotingRegressor,
# so a grid over the ensemble weights is searched instead (the weight combinations below are illustrative assumptions).
param_grid = {'regressor__weights': [None, [1, 1, 1, 1, 1], [2, 1, 1, 1, 2]]}
regressor = VotingRegressor([
('svr_cv', svr_model.best_estimator_),
('sgd_cv', sgd_model.best_estimator_),
('knn_cv', knn_model.best_estimator_),
('gpr_cv', gpr_model.best_estimator_),
('xgb_cv', xgb_model.best_estimator_)
])
(errors_grid_vot, vot1, vot2, vot3) = execute_models(
regressor = regressor,
feature_selection = SelectFromModel(LinearSVR(random_state=0)),
grid_search = lambda pipe: GridSearchCV(pipe, param_grid, scoring='neg_mean_squared_error',cv=5)
)
errors_grid_vot
###Output
_____no_output_____
###Markdown
Conclusions Regression conclusions * The algorithm that gave us the best result was KNN using RandomizedSearchCV, with a MAE of 2.4 on the training set and 10198 on the test set. * The results might improve if, instead of using ranges for age and seniority, we used the raw values. * We could also use the month as a number instead of one-hot encoding each month as a column; with fewer features the results might improve (a hedged sketch of this encoding appears after the error tables below). * Another hypothesis is that the number of rows relative to the number of features is not adequate; we might need much more data to obtain more satisfactory results.
###Code
errors_grid_knn
errors_gpr
###Output
_____no_output_____
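###Markdown
Following up on the suggestion above about using the month as a number instead of one-hot encoding it: a minimal, hedged sketch of that alternative encoding, assuming the 'mes-año' values follow the 'YYYY-MM' pattern seen earlier; `df_ordinal` and `mes_ordinal` are illustrative names and this encoding is not applied to the experiments above.
###Code
# Encode 'mes-año' as a single integer month index instead of one dummy column per month;
# df is assumed to still hold the raw 'mes-año' column.
df_ordinal = df.copy()
period_dt = pd.to_datetime(df_ordinal['mes-año'], format='%Y-%m')
df_ordinal['mes_ordinal'] = (period_dt.dt.year - period_dt.dt.year.min()) * 12 + period_dt.dt.month
df_ordinal.drop('mes-año', axis=1, inplace=True)
df_ordinal[['mes_ordinal']].describe()
###Output
_____no_output_____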
###Markdown
Classification We create a column to flag whether the amount increased by 10% with respect to the previous month (a vectorized alternative is sketched right after this cell)
###Code
df_clas = df.copy()
df_clas.loc[0,'incremento_monto'] = 0
for i in range(1, len(df_clas)):
dni_anterior = df_clas.loc[i-1, 'dni']
monto_mes_anterior = df_clas.loc[i-1, 'monto_normalizado']
dni_actual = df_clas.loc[i, 'dni']
monto_mes_actual = df_clas.loc[i, 'monto_normalizado']
if dni_anterior != dni_actual:
df_clas.loc[i,'incremento_monto'] = 0
else:
df_clas.loc[i,'incremento_monto'] = 1 if monto_mes_actual >= (monto_mes_anterior * 1.1) else 0
# Successful check
df_clas[df_clas['dni']=='000f0b73ebfa002a79a0642b82e87919904'][['dni', 'mes-año', 'monto_normalizado', 'incremento_monto']]
###Output
_____no_output_____
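###Markdown
As referenced above, a vectorized alternative to the Python loop, sketched with `groupby` and `shift`; it assumes the rows are ordered by `dni` and month (the same assumption the loop makes) and should produce the same flag much faster.
###Code
# Vectorized alternative: compare each amount with the previous month's amount of the same dni.
monto_anterior = df_clas.groupby('dni')['monto_normalizado'].shift(1)
incremento_vec = (df_clas['monto_normalizado'] >= monto_anterior * 1.1).astype(int)
# Quick consistency check against the loop-based column (fraction of matching rows).
print((incremento_vec == df_clas['incremento_monto']).mean())
###Output
_____no_output_____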
###Markdown
We drop the dni column:
###Code
df_clas.drop(['dni'], axis=1, inplace=True)
df_clas.head()
###Output
_____no_output_____
###Markdown
We add the mes-año columns:
###Code
df_clas = get_dataframe_with_mes_año(df_clas)
###Output
_____no_output_____
###Markdown
We split the training and validation sets:
###Code
from sklearn.model_selection import train_test_split
# Split into instances and labels
X, y = df_clas.drop('incremento_monto', axis=1), df_clas.incremento_monto
# Split into training and evaluation sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
###Output
_____no_output_____
###Markdown
RandomForestClassifier
###Code
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(max_depth=2, random_state=0)
clf.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Metrics and Confusion Matrix
###Code
from sklearn.metrics import classification_report, confusion_matrix, plot_confusion_matrix
y_train_pred = clf.predict(X_train)
print(classification_report(y_train, y_train_pred))
y_test_pred = clf.predict(X_test)
print(classification_report(y_test, y_test_pred))
import matplotlib.pyplot as plt
plt.figure()
plot_confusion_matrix(estimator= clf, X=X_train, y_true=y_train,
normalize='true', cmap='Blues').ax_ \
.set_title('Random Forest')
plt.figure()
plot_confusion_matrix(estimator= clf, X=X_test, y_true=y_test,
normalize='true', cmap='Blues').ax_ \
.set_title('Random Forest')
###Output
_____no_output_____
###Markdown
XGBClassifier
###Code
from xgboost import XGBClassifier
clf = XGBClassifier(max_depth=2, random_state=0)
model = Pipeline([
('feature_selection', SelectFromModel(LinearSVR(random_state=0))),
('cla', clf)
])
model.fit(X_train, y_train)
y_train_pred = model.predict(X_train)
print(classification_report(y_train, y_train_pred))
y_test_pred = model.predict(X_test)
print(classification_report(y_test, y_test_pred))
plt.figure()
plot_confusion_matrix(estimator= model, X=X_train, y_true=y_train,
normalize='true', cmap='Blues').ax_ \
.set_title('XGBoost')
plt.figure()
plot_confusion_matrix(estimator= model, X=X_test, y_true=y_test,
normalize='true', cmap='Blues').ax_ \
.set_title('XGBoost')
###Output
_____no_output_____ |
02_analisis_y_curacion/notebooks/.ipynb_checkpoints/1. Importando los datos-checkpoint.ipynb | ###Markdown
1.1. Check that there are no problems with the import
###Code
# modules we'll use
import pandas as pd
###Output
_____no_output_____
###Markdown
Let's import project data from Kickstarter, the crowdfunding platform
###Code
kickstarter_2016 = pd.read_csv("../input/kickstarter-projects/ks-projects-201612.csv")
###Output
_____no_output_____
###Markdown
By default pandas fails if there are errors when reading data: https://pandas.pydata.org/pandas-docs/stable/io.html#error-handling
###Code
kickstarter_2018 = pd.read_csv("../input/kickstarter-projects/ks-projects-201801.csv")
###Output
_____no_output_____
###Markdown
Let's look at the data loaded into the dataframe
###Code
kickstarter_2018
###Output
_____no_output_____
###Markdown
By default we only see values at the beginning or end of the file. Let's take a random sample to see more scattered values
###Code
# set seed for reproducibility
import numpy as np
np.random.seed(0)
kickstarter_2018.sample(5)
###Output
_____no_output_____
###Markdown
No problem is visible at first glance. Let's check whether the dataset description matches what was loaded: https://www.kaggle.com/kemical/kickstarter-projects/data
###Code
pd.DataFrame([["ID", "No description provided", "Numeric"],
["name", "No description provided", "String"],
["category", "No description provided", "String"],
["main_category", "No description provided", "String"],
["currency", "No description provided", "String"],
["deadline", "No description provided", "DateTime"],
["goal", "Goal amount in project currency", "Numeric"],
["launched", "No description provided", "DateTime"],
["pledged", "Pledged amount in the project currency", "Numeric"],
["state", "No description provided", "String"],
["backers", "No description provided", "Numeric"],
["country", "No description provided", "String"],
["usd pledged", "Pledged amount in USD (conversion made by KS)", "Numeric"],
["usd_pledged_real", "Pledged amount in USD (conversion made by fixer.io api)", "Numeric"],
["usd_goal_real", "Goal amount in USD", "Numeric"]], columns=["Field name","Field description", "Type"])
kickstarter_2018.dtypes
###Output
_____no_output_____
###Markdown
Object fields are generally strings, so it seems **deadline** and **launched** were not recognized as dates :( Let's look at a summary of the data
###Code
kickstarter_2018.describe()
###Output
_____no_output_____
###Markdown
By default only the numeric columns are shown; let's look at the rest.
###Code
kickstarter_2018.describe(include=['object'])
###Output
_____no_output_____
###Markdown
Let's do a quick operation on the launch data
###Code
kickstarter_2018['launched'].min()
###Output
_____no_output_____
###Markdown
It seems to work, but now let's compute the date range of the projects
###Code
kickstarter_2018['launched'].max() - kickstarter_2018['launched'].min()
###Output
_____no_output_____
###Markdown
Let's indicate which columns are dates, as described in the [documentation](https://pandas.pydata.org/pandas-docs/stable/io.html#datetime-handling)
###Code
kickstarter_2018 = pd.read_csv("../input/kickstarter-projects/ks-projects-201801.csv",
parse_dates=["deadline","launched"])
kickstarter_2018.dtypes
###Output
_____no_output_____
###Markdown
Now we see that those columns were recognized as dates. Let's look at the same sample again
###Code
kickstarter_2018.sample(5)
###Output
_____no_output_____
###Markdown
And let's look at the data summary
###Code
kickstarter_2018.describe(include='all')
###Output
_____no_output_____
###Markdown
We can see that the summary of the date columns now includes first and last. Now we should be able to compute the range of launch dates
###Code
kickstarter_2018['launched'].max() - kickstarter_2018['launched'].min()
###Output
_____no_output_____
###Markdown
1.2. Ensure there are unique ids/keys. Check that there is no duplicated data
###Code
kickstarter_2018.shape
kickstarter_2018 = pd.read_csv("../input/kickstarter-projects/ks-projects-201801.csv",
parse_dates=["deadline","launched"],
index_col=['ID'])
kickstarter_2018
kickstarter_2018.shape
kickstarter_2018[kickstarter_2018.duplicated()]
csv='1,2\n3,3\n1,3'
print(csv)
from io import StringIO
df = pd.read_csv(StringIO(csv), names=['id','value'], index_col='id')
df
df[df.duplicated()]
df[df.index.duplicated( keep=False)]
kickstarter_2018[kickstarter_2018.index.duplicated()]
###Output
_____no_output_____
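###Markdown
If duplicated index keys had appeared, a minimal sketch of one way to keep a single row per ID (illustrative only; the check above found no duplicated IDs in this dataset):
###Code
# Keep only the first occurrence of each ID in the index.
deduped = kickstarter_2018[~kickstarter_2018.index.duplicated(keep='first')]
deduped.shape
###Output
_____no_output_____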
###Markdown
1.3. De-identify the data and save it to a new file Strategies from the Google DLP API https://cloud.google.com/dlp/docs/deidentify-sensitive-data:* **Replacement**: Replaces each input value with a given value.* **Redaction**: Redacts a value by removing it.* **Mask with character**: Masks a string either fully or partially by replacing a given number of characters with a specified fixed character.* **Pseudonymization by replacing input value with cryptographic hash**: Replaces input values with a 32-byte hexadecimal string generated using a given data encryption key.* **Obfuscation of dates**: Shifts dates by a random number of days, with the option to be consistent for the same context.* **Pseudonymization by replacing with cryptographic format preserving token**: Replaces an input value with a “token,” or surrogate value, of the same length using format-preserving encryption (FPE) with the FFX mode of operation.* **Bucket values based on fixed size ranges**: Masks input values by replacing them with “buckets,” or ranges within which the input value falls.* **Bucket values based on custom size ranges**: Buckets input values based on user-configurable ranges and replacement values.* **Replace with infoType**: Replaces an input value with the name of its infoType.* **Extract time data**: Extracts or preserves a portion of Date, Timestamp, and TimeOfDay values.
###Code
from hashlib import md5
kickstarter_2018['name'].apply(md5)
def hashit(val):
return md5(val.encode('utf-8'))
kickstarter_2018['name'].apply(hashit)
def hashit(val):
try:
return md5(val.encode('utf-8'))
except Exception as e:
print(val, type(val))
raise(e)
kickstarter_2018['name'].apply(hashit)
def hashit(val):
if isinstance(val, float):
return str(val)
return md5(val.encode('utf-8')).hexdigest()
kickstarter_2018['name'].apply(hashit)
###Output
_____no_output_____
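###Markdown
As a hedged illustration of the 'Obfuscation of dates' strategy listed above, the sketch below shifts each project's dates by a random number of days; the ±30-day window is an arbitrary assumption, and the same shift is applied to both date columns of a row so the launch-to-deadline duration is preserved.
###Code
rng = np.random.RandomState(0)
# One random shift (in days) per row, applied to both date columns.
shift_days = pd.to_timedelta(rng.randint(-30, 31, size=len(kickstarter_2018)), unit='D')
ks_deid = kickstarter_2018.copy()
ks_deid['launched'] = ks_deid['launched'] + shift_days.values
ks_deid['deadline'] = ks_deid['deadline'] + shift_days.values
ks_deid[['launched', 'deadline']].head()
###Output
_____no_output_____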
###Markdown
1.4. Never modify the raw or original data
###Code
kickstarter_2018.to_csv("../input/kickstarter-projects/ks-projects-201801-for-pandas.csv")
###Output
_____no_output_____ |
ARMA MODEL/Time series bit coin.ipynb | ###Markdown
Calculate what the highest and lowest opening prices were for the stock in this period.
###Code
p = dict['dataset']['data']
z= [x[1] for x in p]
res=[]
for val in z:
if val!= None:
res.append(val)
print("The maximum opening value in 2016 & 2020 was " + str(max(res)))
print("The minimum opening value in 2016 & 2020 was " + str(min(res)))
#load python packages
import os
import pandas as pd
import datetime
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
import pandas as pd
import numpy as np
import itertools
from statsmodels.tsa.stattools import adfuller
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa.arima_model import ARMA
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from datetime import datetime, timedelta
from tqdm import tqdm_notebook as tqdm
plt.style.use('bmh')
url = 'https://www.quandl.com/api/v3/datasets/BCHAIN/MKPRU.csv?api_key=Lq43ztbiWJ73CJUDPiye&start_date=2016-01-01&end_date=2020-4-29'
df= pd.read_csv( url ,index_col = None)
df.head()
df.tail()
#Cleaning data
df.columns= ['DATE' ,'PRICE']
df.head()
# Convert to datetime
df['DATE'] = pd.to_datetime(df['DATE'])
df.head()
df.set_index('DATE',inplace =True)
df.head()
df.describe()
###Output
_____no_output_____
###Markdown
Visualize the Data
###Code
import statsmodels.api as sm
from statsmodels.tsa.stattools import acf
from statsmodels.tsa.stattools import pacf
from statsmodels.tsa.seasonal import seasonal_decompose
df.plot(figsize=(17,8), title='Closing Prices')
plt.xlabel('year')
plt.ylabel("Price in USD")
plt.show()
decomposition = seasonal_decompose(df.PRICE, model='additive',period = 120)
fig = plt.figure()
fig = decomposition.plot()
fig.set_size_inches(15, 8)
print(decomposition.trend)
print(decomposition.seasonal)
print(decomposition.resid)
### Testing For Stationarity
from statsmodels.tsa.stattools import adfuller
test_result=adfuller(df['PRICE'])
#Ho: It is non stationary
#H1: It is stationary
def adfuller_test(PRICE):
result=adfuller(PRICE)
labels = ['ADF Test Statistic','p-value','#Lags Used','Number of Observations Used']
for value,label in zip(result,labels):
print(label+' : '+str(value) )
if result[1] <= 0.05:
print("strong evidence against the null hypothesis(Ho), reject the null hypothesis. Data has no unit root and is stationary")
else:
print("weak evidence against null hypothesis, time series has a unit root, indicating it is non-stationary ")
adfuller_test(df['PRICE'])
###Output
ADF Test Statistic : -1.6304421695829747
p-value : 0.4672592724496129
#Lags Used : 24
Number of Observations Used : 1556
weak evidence against null hypothesis, time series has a unit root, indicating it is non-stationary
###Markdown
Differencing
###Code
df['Price Difference'] = df['PRICE'] - df['PRICE'].shift(1)
df['PRICE'].shift(1)
df.head()
## Run the Dickey-Fuller test again
adfuller_test(df['Price Difference'].dropna())
df['Price Difference'].plot(figsize=(16,4), title="Daily Changes in Closing Price")
plt.ylabel("Change in USD")
plt.show()
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(df['Price Difference'].iloc[13:], lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(df['Price Difference'].iloc[13:], lags=40, ax=ax2)
###Output
_____no_output_____
###Markdown
Auto Regressive Model¶
###Code
from pandas.plotting import autocorrelation_plot
autocorrelation_plot(df['PRICE'])
plt.show()
###Output
_____no_output_____ |
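###Markdown
The notebook imports an ARMA class but never fits a model; below is a minimal, hedged sketch of fitting one on the price series. The (1, 1, 1) order is an arbitrary illustrative choice (the ACF/PACF plots above would guide a real selection), and `ARIMA` from `statsmodels.tsa.arima.model` is used because the older `statsmodels.tsa.arima_model.ARMA` import has been removed in recent statsmodels releases.
###Code
from statsmodels.tsa.arima.model import ARIMA
# Fit an ARIMA(1,1,1): the d=1 term applies the same first difference used above.
# The order is an illustrative assumption, not a tuned choice.
arma_fit = ARIMA(df['PRICE'], order=(1, 1, 1)).fit()
print(arma_fit.summary())
# Residual diagnostics for the fitted model.
arma_fit.resid.plot(figsize=(16, 4), title='ARIMA(1,1,1) residuals')
plt.show()
###Output
_____no_output_____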
PyTorch Exercises FASHION-MNIST/.ipynb_checkpoints/Part 2 - Neural Networks in PyTorch (Solution)-checkpoint.ipynb | ###Markdown
Neural networks with PyTorchDeep learning networks tend to be massive with dozens or hundreds of layers, that's where the term "deep" comes from. You can build one of these deep networks using only weight matrices as we did in the previous notebook, but in general it's very cumbersome and difficult to implement. PyTorch has a nice module `nn` that provides a nice way to efficiently build large neural networks.
###Code
# Import necessary packages
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import torch
import helper
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Now we're going to build a larger network that can solve a (formerly) difficult problem, identifying text in an image. Here we'll use the MNIST dataset which consists of greyscale handwritten digits. Each image is 28x28 pixels, you can see a sample belowOur goal is to build a neural network that can take one of these images and predict the digit in the image.First up, we need to get our dataset. This is provided through the `torchvision` package. The code below will download the MNIST dataset, then create training and test datasets for us. Don't worry too much about the details here, you'll learn more about this later.
###Code
### Run this cell
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,)),
])
# Download and load the training data
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
###Output
_____no_output_____
###Markdown
We have the training data loaded into `trainloader` and we make that an iterator with `iter(trainloader)`. Later, we'll use this to loop through the dataset for training, like```pythonfor image, label in trainloader: do things with images and labels```You'll notice I created the `trainloader` with a batch size of 64, and `shuffle=True`. The batch size is the number of images we get in one iteration from the data loader and pass through our network, often called a *batch*. And `shuffle=True` tells it to shuffle the dataset every time we start going through the data loader again. But here I'm just grabbing the first batch so we can check out the data. We can see below that `images` is just a tensor with size `(64, 1, 28, 28)`. So, 64 images per batch, 1 color channel, and 28x28 images.
###Code
dataiter = iter(trainloader)
images, labels = dataiter.next()
print(type(images))
print(images.shape)
print(labels.shape)
###Output
<class 'torch.Tensor'>
torch.Size([64, 1, 28, 28])
torch.Size([64])
###Markdown
This is what one of the images looks like.
###Code
plt.imshow(images[1].numpy().squeeze(), cmap='Greys_r');
###Output
_____no_output_____
###Markdown
First, let's try to build a simple network for this dataset using weight matrices and matrix multiplications. Then, we'll see how to do it using PyTorch's `nn` module which provides a much more convenient and powerful method for defining network architectures.The networks you've seen so far are called *fully-connected* or *dense* networks. Each unit in one layer is connected to each unit in the next layer. In fully-connected networks, the input to each layer must be a one-dimensional vector (which can be stacked into a 2D tensor as a batch of multiple examples). However, our images are 28x28 2D tensors, so we need to convert them into 1D vectors. Thinking about sizes, we need to convert the batch of images with shape `(64, 1, 28, 28)` to have a shape of `(64, 784)`; 784 is 28 times 28. This is typically called *flattening*: we flatten the 2D images into 1D vectors.Previously you built a network with one output unit. Here we need 10 output units, one for each digit. We want our network to predict the digit shown in an image, so what we'll do is calculate probabilities that the image is of any one digit or class. This ends up being a discrete probability distribution over the classes (digits) that tells us the most likely class for the image. That means we need 10 output units for the 10 classes (digits). We'll see how to convert the network output into a probability distribution next.> **Exercise:** Flatten the batch of images `images`. Then build a multi-layer network with 784 input units, 256 hidden units, and 10 output units using random tensors for the weights and biases. For now, use a sigmoid activation for the hidden layer. Leave the output layer without an activation, we'll add one that gives us a probability distribution next.
###Code
## Solution
def activation(x):
return 1/(1+torch.exp(-x))
# Flatten the input images
inputs = images.view(images.shape[0], -1)
# Create parameters
w1 = torch.randn(784, 256)
b1 = torch.randn(256)
w2 = torch.randn(256, 10)
b2 = torch.randn(10)
h = activation(torch.mm(inputs, w1) + b1)
out = torch.mm(h, w2) + b2
###Output
_____no_output_____
###Markdown
Now we have 10 outputs for our network. We want to pass in an image to our network and get out a probability distribution over the classes that tells us the likely class(es) the image belongs to. Something that looks like this:Here we see that the probability for each class is roughly the same. This represents an untrained network; it hasn't seen any data yet so it just returns a uniform distribution with equal probabilities for each class.To calculate this probability distribution, we often use the [**softmax** function](https://en.wikipedia.org/wiki/Softmax_function). Mathematically this looks like$$\Large \sigma(x_i) = \cfrac{e^{x_i}}{\sum_k^K{e^{x_k}}}$$What this does is squish each input $x_i$ between 0 and 1 and normalizes the values to give you a proper probability distribution where the probabilities sum up to one.> **Exercise:** Implement a function `softmax` that performs the softmax calculation and returns probability distributions for each example in the batch. Note that you'll need to pay attention to the shapes when doing this. If you have a tensor `a` with shape `(64, 10)` and a tensor `b` with shape `(64,)`, doing `a/b` will give you an error because PyTorch will try to do the division across the columns (called broadcasting) but you'll get a size mismatch. The way to think about this is for each of the 64 examples, you only want to divide by one value, the sum in the denominator. So you need `b` to have a shape of `(64, 1)`. This way PyTorch will divide the 10 values in each row of `a` by the one value in each row of `b`. Pay attention to how you take the sum as well. You'll need to define the `dim` keyword in `torch.sum`. Setting `dim=0` takes the sum across the rows while `dim=1` takes the sum across the columns.
###Code
## Solution
def softmax(x):
return torch.exp(x)/torch.sum(torch.exp(x), dim=1).view(-1, 1)
probabilities = softmax(out)
# Does it have the right shape? Should be (64, 10)
print(probabilities.shape)
# Does it sum to 1?
print(probabilities.sum(dim=1))
###Output
torch.Size([64, 10])
tensor([ 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000])
###Markdown
Building networks with PyTorchPyTorch provides a module `nn` that makes building networks much simpler. Here I'll show you how to build the same one as above with 784 inputs, 256 hidden units, 10 output units and a softmax output.
###Code
from torch import nn
class Network(nn.Module):
def __init__(self):
super().__init__()
# Inputs to hidden layer linear transformation
self.hidden = nn.Linear(784, 256)
# Output layer, 10 units - one for each digit
self.output = nn.Linear(256, 10)
# Define sigmoid activation and softmax output
self.sigmoid = nn.Sigmoid()
self.softmax = nn.Softmax(dim=1)
def forward(self, x):
# Pass the input tensor through each of our operations
x = self.hidden(x)
x = self.sigmoid(x)
x = self.output(x)
x = self.softmax(x)
return x
###Output
_____no_output_____
###Markdown
Let's go through this bit by bit.```pythonclass Network(nn.Module):```Here we're inheriting from `nn.Module`. Combined with `super().__init__()` this creates a class that tracks the architecture and provides a lot of useful methods and attributes. It is mandatory to inherit from `nn.Module` when you're creating a class for your network. The name of the class itself can be anything.```pythonself.hidden = nn.Linear(784, 256)```This line creates a module for a linear transformation, $x\mathbf{W} + b$, with 784 inputs and 256 outputs and assigns it to `self.hidden`. The module automatically creates the weight and bias tensors which we'll use in the `forward` method. You can access the weight and bias tensors once the network (`net`) is created with `net.hidden.weight` and `net.hidden.bias`.```pythonself.output = nn.Linear(256, 10)```Similarly, this creates another linear transformation with 256 inputs and 10 outputs.```pythonself.sigmoid = nn.Sigmoid()self.softmax = nn.Softmax(dim=1)```Here I defined operations for the sigmoid activation and softmax output. Setting `dim=1` in `nn.Softmax(dim=1)` calculates softmax across the columns.```pythondef forward(self, x):```PyTorch networks created with `nn.Module` must have a `forward` method defined. It takes in a tensor `x` and passes it through the operations you defined in the `__init__` method.```pythonx = self.hidden(x)x = self.sigmoid(x)x = self.output(x)x = self.softmax(x)```Here the input tensor `x` is passed through each operation and reassigned to `x`. We can see that the input tensor goes through the hidden layer, then a sigmoid function, then the output layer, and finally the softmax function. It doesn't matter what you name the variables here, as long as the inputs and outputs of the operations match the network architecture you want to build. The order in which you define things in the `__init__` method doesn't matter, but you'll need to sequence the operations correctly in the `forward` method.Now we can create a `Network` object.
###Code
# Create the network and look at its text representation
model = Network()
model
###Output
_____no_output_____
###Markdown
You can define the network somewhat more concisely and clearly using the `torch.nn.functional` module. This is the most common way you'll see networks defined as many operations are simple element-wise functions. We normally import this module as `F`, `import torch.nn.functional as F`.
###Code
import torch.nn.functional as F
class Network(nn.Module):
def __init__(self):
super().__init__()
# Inputs to hidden layer linear transformation
self.hidden = nn.Linear(784, 256)
# Output layer, 10 units - one for each digit
self.output = nn.Linear(256, 10)
def forward(self, x):
# Hidden layer with sigmoid activation
x = F.sigmoid(self.hidden(x))
# Output layer with softmax activation
x = F.softmax(self.output(x), dim=1)
return x
###Output
_____no_output_____
###Markdown
Activation functionsSo far we've only been looking at the softmax activation, but in general any function can be used as an activation function. The only requirement is that for a network to approximate a non-linear function, the activation functions must be non-linear. Here are a few more examples of common activation functions: Tanh (hyperbolic tangent), and ReLU (rectified linear unit).In practice, the ReLU function is used almost exclusively as the activation function for hidden layers. Your Turn to Build a Network> **Exercise:** Create a network with 784 input units, a hidden layer with 128 units and a ReLU activation, then a hidden layer with 64 units and a ReLU activation, and finally an output layer with a softmax activation as shown above. You can use a ReLU activation with the `nn.ReLU` module or `F.relu` function.
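As a quick, self-contained illustration of how these activation functions behave (an aside, not the exercise solution; the tensor values below are made up):
```python
import torch
import torch.nn.functional as F

x = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])
print(torch.sigmoid(x))  # squashes values into (0, 1)
print(torch.tanh(x))     # squashes values into (-1, 1)
print(F.relu(x))         # zeroes out negatives, keeps positives unchanged
```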
###Code
## Solution
class Network(nn.Module):
def __init__(self):
super().__init__()
# Defining the layers, 128, 64, 10 units each
self.fc1 = nn.Linear(784, 128)
self.fc2 = nn.Linear(128, 64)
# Output layer, 10 units - one for each digit
self.fc3 = nn.Linear(64, 10)
def forward(self, x):
''' Forward pass through the network, returns the output logits '''
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
x = F.relu(x)
x = self.fc3(x)
x = F.softmax(x, dim=1)
return x
model = Network()
model
###Output
_____no_output_____
###Markdown
Initializing weights and biasesThe weights and biases are automatically initialized for you, but it's possible to customize how they are initialized. The weights and biases are tensors attached to the layer you defined; you can get them with `model.fc1.weight` for instance.
###Code
print(model.fc1.weight)
print(model.fc1.bias)
###Output
Parameter containing:
tensor([[-2.3278e-02, -1.2170e-03, -1.1882e-02, ..., 3.3567e-02,
4.4827e-03, 1.4840e-02],
[ 4.8464e-03, 1.9844e-02, 3.9791e-03, ..., -2.6048e-02,
-3.5558e-02, -2.2386e-02],
[-1.9664e-02, 8.1722e-03, 2.6729e-02, ..., -1.5122e-02,
2.7632e-02, -1.9567e-02],
...,
[-3.3571e-02, -2.9686e-02, -2.1387e-02, ..., 3.0770e-02,
1.0800e-02, -6.5941e-03],
[ 2.9749e-02, 1.2849e-02, 2.7320e-02, ..., -1.9899e-02,
2.7131e-02, 2.2082e-02],
[ 1.3992e-02, -2.1520e-02, 3.1907e-02, ..., 2.2435e-02,
1.1370e-02, 2.1568e-02]])
Parameter containing:
tensor(1.00000e-02 *
[-1.3222, 2.4094, -2.1571, 3.2237, 2.5302, -1.1515, 2.6382,
-2.3426, -3.5689, -1.0724, -2.8842, -2.9667, -0.5022, 1.1381,
1.2849, 3.0731, -2.0207, -2.3282, 0.3168, -2.8098, -1.0740,
-1.8273, 1.8692, 2.9404, 0.1783, 0.9391, -0.7085, -1.2522,
-2.7769, 0.0916, -1.4283, -0.3267, -1.6876, -1.8580, -2.8724,
-3.5512, 3.2155, 1.5532, 0.8836, -1.2911, 1.5735, -3.0478,
-1.3089, -2.2117, 1.5162, -0.8055, -1.3307, -2.4267, -1.2665,
0.8666, -2.2325, -0.4797, -0.5448, -0.6612, -0.6022, 2.6399,
1.4673, -1.5417, -2.9492, -2.7507, 0.6157, -0.0681, -0.8171,
-0.3554, -0.8225, 3.3906, 3.3509, -1.4484, 3.5124, -2.6519,
0.9721, -2.5068, -3.4962, 3.4743, 1.1525, -2.7555, -3.1673,
2.2906, 2.5914, 1.5992, -1.2859, -0.5682, 2.1488, -2.0631,
2.6281, -2.4639, 2.2622, 2.3632, -0.1979, 0.7160, 1.7594,
0.0761, -2.8886, -3.5467, 2.7691, 0.8280, -2.2398, -1.4602,
-1.3475, -1.4738, 0.6338, 3.2811, -3.0628, 2.7044, 1.2775,
2.8856, -3.3938, 2.7056, 0.5826, -0.6286, 1.2381, 0.7316,
-2.4725, -1.2958, -3.1543, -0.8584, 0.5517, 2.8176, 0.0947,
-1.6849, -1.4968, 3.1039, 1.7680, 1.1803, -1.4402, 2.5710,
-3.3057, 1.9027])
###Markdown
For custom initialization, we want to modify these tensors in place. These are actually autograd *Variables*, so we need to get back the actual tensors with `model.fc1.weight.data`. Once we have the tensors, we can fill them with zeros (for biases) or random normal values.
###Code
# Set biases to all zeros
model.fc1.bias.data.fill_(0)
# sample from random normal with standard dev = 0.01
model.fc1.weight.data.normal_(std=0.01)
###Output
_____no_output_____
###Markdown
Forward passNow that we have a network, let's see what happens when we pass in an image.
###Code
# Grab some data
dataiter = iter(trainloader)
images, labels = dataiter.next()
# Resize images into a 1D vector, new shape is (batch size, color channels, image pixels)
images.resize_(64, 1, 784)
# or images.resize_(images.shape[0], 1, 784) to automatically get batch size
# Forward pass through the network
img_idx = 0
ps = model.forward(images[img_idx,:])
img = images[img_idx]
helper.view_classify(img.view(1, 28, 28), ps)
###Output
_____no_output_____
###Markdown
As you can see above, our network has basically no idea what this digit is. That's because we haven't trained it yet; all the weights are random! Using `nn.Sequential`PyTorch provides a convenient way to build networks like this where a tensor is passed sequentially through operations, `nn.Sequential` ([documentation](https://pytorch.org/docs/master/nn.html#torch.nn.Sequential)). Using this to build the equivalent network:
###Code
# Hyperparameters for our network
input_size = 784
hidden_sizes = [128, 64]
output_size = 10
# Build a feed-forward network
model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
nn.ReLU(),
nn.Linear(hidden_sizes[0], hidden_sizes[1]),
nn.ReLU(),
nn.Linear(hidden_sizes[1], output_size),
nn.Softmax(dim=1))
print(model)
# Forward pass through the network and display output
images, labels = next(iter(trainloader))
images.resize_(images.shape[0], 1, 784)
ps = model.forward(images[0,:])
helper.view_classify(images[0].view(1, 28, 28), ps)
###Output
Sequential(
(0): Linear(in_features=784, out_features=128, bias=True)
(1): ReLU()
(2): Linear(in_features=128, out_features=64, bias=True)
(3): ReLU()
(4): Linear(in_features=64, out_features=10, bias=True)
(5): Softmax()
)
###Markdown
The operations are available by passing in the appropriate index. For example, if you want to get the first Linear operation and look at the weights, you'd use `model[0]`.
###Code
print(model[0])
model[0].weight
###Output
Linear(in_features=784, out_features=128, bias=True)
###Markdown
You can also pass in an `OrderedDict` to name the individual layers and operations, instead of using incremental integers. Note that dictionary keys must be unique, so _each operation must have a different name_.
###Code
from collections import OrderedDict
model = nn.Sequential(OrderedDict([
('fc1', nn.Linear(input_size, hidden_sizes[0])),
('relu1', nn.ReLU()),
('fc2', nn.Linear(hidden_sizes[0], hidden_sizes[1])),
('relu2', nn.ReLU()),
('output', nn.Linear(hidden_sizes[1], output_size)),
('softmax', nn.Softmax(dim=1))]))
model
###Output
_____no_output_____
###Markdown
Now you can access layers either by integer or the name
###Code
print(model[0])
print(model.fc1)
###Output
Linear(in_features=784, out_features=128, bias=True)
Linear(in_features=784, out_features=128, bias=True)
|
notebooks/model-with-prophet.ipynb | ###Markdown
GrammarSome definitions:- `time series` : self-explanatory, i.e. the TimeSeries object- `horizon` : the duration to predict after the last value of the time series- `frequency`: the number of values per unit of time. Usually, the frequency is given in Pandas offset aliases (https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmloffset-aliases)``` horizon |-------------------------|- - - - - -| ||||||||||||| time series frequency``` --- Univariate PredictionTo create a univariate prediction, let's populate a time series with weather data.
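For example (a small sketch that is separate from the weather data used below; the dates and values here are made up), a series sampled once per minute and a 5-day horizon can be expressed with Pandas offset aliases and a `Timedelta`:
```python
import pandas as pd

# a toy series with one value per minute ("min" is the Pandas offset alias for minutes)
index = pd.date_range("2018-01-01", periods=10, freq="min")
series = pd.Series(range(10), index=index)

# a 5-day horizon: how far past the last observed value we want to predict
horizon = pd.Timedelta("5 days")
print(series.index[-1] + horizon)
```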
###Code
# Data Loading
my_series = pd.read_csv("../data/bbdata-weather/4652.csv")
my_series = pd.DataFrame(data=my_series["value"].values,
index=pd.to_datetime(my_series["timestamp"]).values)
my_series.index = my_series.index.round("S")
# Create TimeSeries
ts = ta.TimeSeries(my_series)["2018-01-01":"2018-06-01"]
ts = ts.resample("min", method="pad").group_by("min")
ts.plot()
## Model Creation
m = ta.models.Prophet()
## Fit the univariate time series
m.fit(ts)
# Predict 5 days after the data's last time stamp
Y_hat = m.predict('5 days')
Y_hat.plot()
###Output
_____no_output_____
###Markdown
Multivariate PredictionTo create a multivariate prediction, let's populate a time series dataset with a temperature and luminosity values.
###Code
# Temperature
ts_1 = pd.read_csv("../data/bbdata-weather/4652.csv")
ts_1 = pd.DataFrame(data=ts_1["value"].values,
index=pd.to_datetime(ts_1["timestamp"]).values)
ts_1.index = ts_1.index.round("S")
ts_1 = ta.TimeSeries(ts_1)
# Luminosity
ts_2 = pd.read_csv("../data/bbdata-weather/4914.csv")
ts_2 = pd.DataFrame(data=ts_2["value"].values,
index=pd.to_datetime(ts_2["timestamp"]).values)
ts_2.index = ts_2.index.round("S")
ts_2 = ta.TimeSeries(ts_2)
# Create the TSD
tsd = ta.TimeSeriesDataset([ts_1, ts_2])
###Output
_____no_output_____
###Markdown
Preprocess data so that it is clean and full
###Code
tsd = tsd.resample("min", method="pad").group_by("min").regularize("][")
###Output
_____no_output_____
###Markdown
Select a data subset
###Code
tsd = tsd["2017-01":"2017-03"]
###Output
_____no_output_____
###Markdown
Split in train/test set
###Code
X_train, X_test = tsd.split_at("2017-02-28")
###Output
_____no_output_____
###Markdown
Create and fit models
###Code
m = ta.models.Prophet()
m.fit(X_train, 0)
###Output
_____no_output_____
###Markdown
Predict the values with the test set
###Code
ts = m.predict(X_test)
###Output
_____no_output_____
###Markdown
See the prediction
###Code
ts.plot()
ts["2017-03-01":"2017-03-04"].plot()
ts["2017-03-01 06:00":"2017-03-01 20:00"].plot()
###Output
_____no_output_____
###Markdown
Predict the values with the test set
###Code
ts = m.predict(X_test)
###Output
_____no_output_____
###Markdown
See the prediction
###Code
ts.plot()
ts["2017-03-01":"2017-03-04"].plot()
ts["2017-03-01 06:00":"2017-03-01 20:00"].plot()
###Output
_____no_output_____
###Markdown
Predict the values with the test set
###Code
ts = m.predict(X_test)
###Output
_____no_output_____
###Markdown
See the prediction
###Code
ts.plot()
ts["2017-03-01":"2017-03-04"].plot()
ts["2017-03-01 06:00":"2017-03-01 20:00"].plot()
###Output
_____no_output_____ |
renaming/renaming.ipynb | ###Markdown
Renaming- IPC522-time-random_num-preset-weather-image_num
###Code
import urllib.request
import re
def getWeather(date, stn = "112"):
year = date[:4]
mm = date[4:6]
dd = date[6:]
# print(year, mm, dd)
# url = "https://www.weather.go.kr/w/obs-climate/land/past-obs/obs-by-day.do?stn=" + stn + "&yy=" + year + "&mm=" + mm + "&obs=1"
url = "https://web.kma.go.kr/weather/climate/past_cal.jsp?stn=" + stn + "&yy=" + year + "&mm=" + mm + "&obs=1&x=24&y=9"
# https://www.weather.go.kr/weather/climate/past_cal.jsp?stn=112&yy=2021&mm=07&obs=1&x=24&y=9 ##2107
lines = []
f = urllib.request.urlopen(url)
r = f.read()
f.close()
r2 = r.decode('euc-kr', 'ignore')
lines = r2.split('\n')
regex = '.*<td class="align_left">평균기온:(.*?)<br \/>최고기온:(.*?)<br \/>최저기온:(.*?)<br \/>평균운량:(.*?)<br \/>일강수량:(.*?)<br \/><\/td>'
dict_month = {}
day = 1
dd = int(dd)
for l in lines:
if not '평균기온' in l: continue
l = l.replace("℃", "")
l_reg = re.match(regex, l)
if not l_reg: continue
dict_day = {'cloud':0, 'rain':0}
data_cloud = l_reg.groups()[3] # average cloud cover (평균운량)
data_rain = l_reg.groups()[4] # daily precipitation (일강수량)
dict_day['cloud'] = data_cloud # average cloud cover
dict_day['rain'] = data_rain.replace("-", "0").replace("mm", "") # daily precipitation
if day == dd:
dict_month[dd] = dict_day
day = day + 1
for (day, dict_day) in dict_month.items():
# print ("{0}{1}{2}, cloud : {3}, rain : {4} ".format(year, mm.zfill(2), str(day).zfill(2), dict_day['cloud'], dict_day['rain']))
_date = year + mm.zfill(2) + str(day).zfill(2)
# print(mm)
# print(day)
# print(_date)
_cloud = dict_day['cloud']
_rain = dict_day['rain']
return _date, _cloud, _rain
def get_weather(time):
date, cloud, rain = getWeather(time)
if float(cloud) <= 5. and float(rain) == 0.:
weather = 'sunny'
elif float(rain) != 0.:
weather = 'rainy'
elif float(cloud) > 5. and float(rain) == 0.:
weather = 'foggy'
return weather
import os
import glob
import pandas as pd
import numpy as np
filename = 'wrong_preset_in_ours'
df = pd.read_excel(f'before/{filename}.xlsx')
df.head()
df['time'] = df['before'].apply(lambda x: x.split('_')[1])
df['random_num'] = -1
df['weather'] = -1
df['image_num'] = -1
df['new_filename'] = -1
# df = df[['before', 'start_point', 'time', 'random_num', 'preset', 'weather', 'image_num', 'new_filename']]
df = df[['before', 'time', 'random_num', 'preset', 'weather', 'image_num', 'new_filename']]
df.head()
###Output
_____no_output_____
###Markdown
- IPC522-time-random_num-preset-weather-image_num
###Code
id_list = df['time'].unique().tolist()
for i, time in enumerate(id_list):
data_len = len(df.loc[df['time'] == time])
df.loc[df['time'] == time, 'random_num'] = str(i).zfill(3)
df.loc[df['time'] == time, 'weather'] = get_weather(str(time)[:8])
df.loc[df['time'] == time, 'image_num'] = range(data_len)
df.loc[df['time'] == time, 'image_num'] = df.loc[df['time'] == time, 'image_num'].astype(str).str.zfill(5)
df.loc[df['time'] == time, 'preset'] = df.loc[df['time'] == time, 'preset'].astype(str).str.zfill(2)
df.loc[df['time'] == time, 'preset'] = df.loc[df['time'] == time, 'preset'].astype(str).apply(lambda x: 'p'+x)
df
cols = ['time', 'random_num', 'preset', 'weather', 'image_num']
df['new_filename'] = df[cols].apply(lambda x: '_'.join(x.values.astype(str)), axis=1)
df['new_filename'] = 'IPC522_' + df['new_filename'].astype(str) + '.jpg'
df.drop(['random_num', 'weather', 'image_num'], inplace=True, axis=1)
df
df.to_csv(f'after/{filename}_renamed.csv', index=False)
df
###Output
_____no_output_____
###Markdown
Renaming the files
###Code
import os
import glob
import shutil
import pandas as pd
renamed_list = os.listdir('after')
# NOTE: df_renamed1 / df_renamed2 are presumably the renamed tables loaded from the files in 'after/' (their creation is not shown in this notebook)
df_renamed = pd.concat([df_renamed1, df_renamed2])
df_renamed
base_url = 'S:/public_data/segmentation/wrong_preset_in_ours/'
target_url = 'S:/public_data/segmentation/second_inspector_renamed/'
image_list = glob.glob(base_url + '/*')
basename_list = [os.path.basename(img) for img in image_list]
len(basename_list)
df_renamed['target_filename'] = df_renamed['new_filename'].apply(lambda x: target_url+x)
df_renamed['filename'] = df_renamed.before.apply(lambda x: base_url+x)
df_renamed
df_renamed['target_filename'].iloc[4]
for before, after in zip(df_renamed['filename'], df_renamed['target_filename']):
shutil.copy(before, after)
###Output
_____no_output_____ |
3_model_training.ipynb | ###Markdown
Part 3 - Training (aka *fine-tuning*) a Transformer modelIn this part we will finally train our very own Transformers model. We saw that the zero-shot model didn't produce great results, and that's probably because the model was trained on summarising news articles, not academic papers. These lines of code are typical setup for Sagemaker, we require them for training jobs: https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-training.html
###Code
import sagemaker
sess = sagemaker.Session()
role = sagemaker.get_execution_role()
bucket = sess.default_bucket()
print(f"IAM role arn used for running training: {role}")
print(f"S3 bucket used for storing artifacts: {sess.default_bucket()}")
###Output
_____no_output_____
###Markdown
We are in the great position that we don't have to write our own training script. Instead we will use a script from the transformers library in Github: https://github.com/huggingface/transformers/blob/v4.6.1/examples/pytorch/summarization/run_summarization.py
###Code
git_config = {'repo': 'https://github.com/huggingface/transformers.git','branch': 'v4.6.1'}
###Output
_____no_output_____
###Markdown
These are the parameters for training, and this is one of the most important levers we can leverage once we are in the experimentation phase. Changing these parameters can influence the model performance and there will be a component of trial & error to find the best model. Also check out https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning.html for automated hyperparameter tuning.
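For reference, the automatic tuning just mentioned could be wired up roughly like this (a hedged sketch only; the metric name, regex and search range are illustrative assumptions, and `huggingface_estimator` is the estimator created further down):
```python
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter

# Illustrative search space: tune only the learning rate
hyperparameter_ranges = {'learning_rate': ContinuousParameter(1e-5, 1e-4)}

tuner = HyperparameterTuner(
    estimator=huggingface_estimator,
    objective_metric_name='eval_loss',   # assumed metric name emitted by the training script
    objective_type='Minimize',
    hyperparameter_ranges=hyperparameter_ranges,
    metric_definitions=[{'Name': 'eval_loss', 'Regex': "eval_loss.*?([0-9\\.]+)"}],
    max_jobs=4,
    max_parallel_jobs=2,
)
# tuner.fit({'datasets': f's3://{bucket}/summarization/data/'})
```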
###Code
# hyperparameters, which are passed into the training job
hyperparameters={'per_device_train_batch_size': 4,
'per_device_eval_batch_size': 4,
'model_name_or_path': 'sshleifer/distilbart-cnn-12-6',
'train_file': '/opt/ml/input/data/datasets/train.csv',
'validation_file': '/opt/ml/input/data/datasets/val.csv',
'do_train': True,
'do_eval': True,
'do_predict': False,
'predict_with_generate': True,
'output_dir': '/opt/ml/model',
'num_train_epochs': 3,
'learning_rate': 5e-5,
'seed': 7,
'fp16': True,
'val_max_target_length': 20,
'text_column': 'text',
'summary_column': 'summary',
}
# configuration for running training on smdistributed Data Parallel
distribution = {'smdistributed':{'dataparallel':{ 'enabled': True }}}
from sagemaker.huggingface import HuggingFace
# create the Estimator
huggingface_estimator = HuggingFace(
entry_point='run_summarization.py',
source_dir='./examples/pytorch/summarization',
git_config=git_config,
instance_type='ml.p3.16xlarge',
instance_count=2,
transformers_version='4.6',
pytorch_version='1.7',
py_version='py36',
role=role,
hyperparameters=hyperparameters,
distribution=distribution,
)
###Output
_____no_output_____
###Markdown
This will kick off the training job which should take around 1 hour. There is also the option to use distributed training with more instances, see here:https://docs.aws.amazon.com/sagemaker/latest/dg/distributed-training.html. Running this training with 2 distributed instances should take ~40 minutes.
###Code
huggingface_estimator.fit({'datasets':f's3://{bucket}/summarization/data/'}, wait=False)
###Output
_____no_output_____
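###Markdown
Because `fit` was called with `wait=False`, the cell returns immediately while the job runs in the background. A minimal sketch for checking on it afterwards might look like the following (treat the attribute and method names as assumptions to verify against your SageMaker SDK version):
```python
# Name of the training job that was just submitted
job_name = huggingface_estimator.latest_training_job.name
print(job_name)

# Stream the CloudWatch logs and block until the job finishes
sess.logs_for_job(job_name, wait=True)
```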
###Markdown
Part 3 - Training (aka *fine-tuning*) a Transformer modelIn this part we will finally train our very own Transformers model. We saw that the zero-shot model didn't produce great results, and that's probably because the model was trained on summarising news articles, not academic papers. These lines of code are typical setup for Sagemaker, we require them for training jobs: https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-training.html
###Code
import sagemaker
bucket = sagemaker.Session().default_bucket()
region = sagemaker.Session().boto_region_name
# the "get_execution_role()" method doesn't work when running a notebook locally and using the API.
# see "0b_data_prep_reviews_corrected.ipynb" for an explanation of how to get
# the proper variable
# role = sagemaker.get_execution_role()
role = 'arn:aws:iam::595714217589:role/service-role/AmazonSageMaker-ExecutionRole-20220331T161122'
print(f"IAM role arn used for running training: {role}")
print(f"S3 bucket used for storing artifacts: {bucket}")
###Output
IAM role arn used for running training: arn:aws:iam::595714217589:role/service-role/AmazonSageMaker-ExecutionRole-20220331T161122
S3 bucket used for storing artifacts: sagemaker-us-east-1-595714217589
###Markdown
We are in the great position that we don't have to write our own training script. Instead we will use a script from the transformers library in Github: https://github.com/huggingface/transformers/blob/v4.6.1/examples/pytorch/summarization/run_summarization.py
###Code
git_config = {'repo': 'https://github.com/huggingface/transformers.git','branch': 'v4.6.1'}
###Output
_____no_output_____
###Markdown
These are the parameters for training, and this is one of the most important levers we can leverage once we are in the experimentation phase. Changing these parameters can influence the model performance and there will be a component of trial & error to find the best model. Also check out https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning.html for automated hyperparameter tuning.
###Code
# see his article for why the paths to the data are the way they are
# hyperparameters, which are passed into the training job - original version
hyperparameters={'per_device_train_batch_size': 4,
'per_device_eval_batch_size': 4,
'model_name_or_path': 'sshleifer/distilbart-cnn-12-6',
'train_file': '/opt/ml/input/data/datasets/train.csv',
'validation_file': '/opt/ml/input/data/datasets/val.csv',
'do_train': True,
'do_eval': True,
'do_predict': False,
'predict_with_generate': True,
'output_dir': '/opt/ml/model',
'num_train_epochs': 3,
'learning_rate': 5e-5,
'seed': 7,
'fp16': True,
'val_max_target_length': 20,
'text_column': 'text',
'summary_column': 'summary',
}
# configuration for running training on smdistributed Data Parallel
distribution = {'smdistributed':{'dataparallel':{ 'enabled': True }}}
from sagemaker.huggingface import HuggingFace
# create the Estimator
huggingface_estimator = HuggingFace(
entry_point='run_summarization.py',
source_dir='./examples/pytorch/summarization',
git_config=git_config,
instance_type='ml.p3.16xlarge',
instance_count=2,
transformers_version='4.6',
pytorch_version='1.7',
py_version='py36',
role=role,
hyperparameters=hyperparameters,
distribution=distribution,
)
###Output
_____no_output_____
###Markdown
This will kick off the training job which should take around 1 hour. There is also the option to use distributed training with more instances, see here:https://docs.aws.amazon.com/sagemaker/latest/dg/distributed-training.html. Running this training with 2 distributed instances should take ~40 minutes.
###Code
huggingface_estimator.fit({'datasets':f's3://{bucket}/summarization/data/'}, wait=False)
###Output
Cloning into '/var/folders/87/33lmw8sj3sbc1fnxmv660t480000gn/T/tmpwaykeo03'...
Note: switching to 'v4.6.1'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:
git switch -c <new-branch-name>
Or undo this operation with:
git switch -
Turn off this advice by setting config variable advice.detachedHead to false
HEAD is now at fb27b276e Release: v4.6.1
|
March/Week12/78.ipynb | ###Markdown
Find the element that appears only once in an array of duplicates[](https://github.com/mjd507)[](https://mp.weixin.qq.com/s/p6BglIy8iy0Y3Z_hw3BRTA) Given an integer array `arr` in which every number appears exactly twice except for one extra number that appears only once, find that number. Your solution must run in $O(n)$ time and use $O(1)$ extra space. Example```pythonIn[1]: arr = [7, 3, 5, 5, 4, 3, 4, 8, 8]Out[1]: 7```
###Code
class Solution(object):
def findSingle(self, nums: list) -> int:
# XOR cancels paired values (a ^ a == 0 and a ^ 0 == a),
# so folding the whole array leaves only the unpaired number.
result = 0
for item in nums:
result = item ^ result
return result
print(Solution().findSingle([1, 1, 3, 4, 4, 5, 6, 5, 6]))
###Output
3
|
FluentPython/Chapter06_dp_1class_func.ipynb | ###Markdown
Implementing design patterns with first-class functionsIn Python only about 7 of the classic 23 design patterns are needed in their original form; the other patterns are not a good fit for a dynamic language. The "Strategy" pattern The classic "Strategy" pattern is composed of three parts:- context: Order- strategy: Promotion- concrete strategies: FidelityPromo BulkItemPromo LargeOrderPromo
###Code
from abc import ABC, abstractmethod
from collections import namedtuple
Customer = namedtuple('Customer','name fidelity')
class LineItem:
'''Information about each line item: product name, quantity, unit price, etc.'''
def __init__(self, product, quantity, price):
self.product = product
self.quantity = quantity
self.price = price
def total(self):
return self.price * self.quantity
class Order:
'''The order (bill)'''
def __init__(self, customer, cart, promotion=None):
self.customer = customer
self.cart = list(cart)
self.promotion = promotion
def total(self):
if not hasattr(self, '__total'):
self.__total = sum(item.total() for item in self.cart)
return self.__total
def due(self):
if self.promotion is None:
discount = 0.0
else:
discount = self.promotion.discount(order = self)
return self.total() - discount
def __repr__(self):
fm = "<Order total:{:.2f} due:{:.2f}>"
return fm.format(self.total(),self.due())
class Promotion(ABC):
'''Strategy: an abstract base class'''
@abstractmethod
def discount(self, order):
'''Return the discount as a positive amount'''
class FidelityPromo(Promotion):
'''5% discount for customers with 1000 or more fidelity points'''
def discount(self, order):
return order.total() * 0.05 if order.customer.fidelity >= 1000.0 else 0.0
class BulkItemPromo(Promotion):
'''10% discount on each line item with 20 or more units'''
def discount(self, order):
d = 0.0
for item in order.cart:
if item.quantity >= 20 :
d += item.total() * 0.1
return d
class LargeOrderPromo(Promotion):
'''7% discount on orders with 10 or more distinct items'''
def discount(self, order):
distinct_item = { item.product for item in order.cart } # use a set: it cannot contain duplicate elements
if len(distinct_item) >= 10 :
return order.total() * 0.07
return 0.0
joe = Customer('John Doe', 0)
ann = Customer('Ann Smith', 1100)
cart = [LineItem('banana', 4, 0.5),
LineItem('apple', 10,1.5),
LineItem('watermelon', 5, 5.0)]
Order(joe, cart, FidelityPromo()) # remember the parentheses: a class-based strategy must be instantiated!
Order(ann, cart, FidelityPromo())
banana_cart = [ LineItem('banana', 30, 0.5),
LineItem('apple', 10, 1.5)]
Order(joe, banana_cart, BulkItemPromo())
long_order = [LineItem(str(item_code), 1, 1.0) for item_code in range(10)]
Order(joe, long_order, LargeOrderPromo())
Order(joe, cart, LargeOrderPromo())
###Output
_____no_output_____
###Markdown
Implementing the "Strategy" pattern with functionsEach of the strategy classes above has only one method and no instance attributes, so in effect it is just a function. Below we implement the same strategies as plain functions, and we'll find that we no longer need an abstract base class.
###Code
from collections import namedtuple
Customer = namedtuple('Customer','name fidelity')
class LineItem:
'''Information about each line item: product name, quantity, unit price, etc.'''
def __init__(self, product, quantity, price):
self.product = product
self.quantity = quantity
self.price = price
def total(self):
return self.price * self.quantity
class Order:
'''The order (bill)'''
def __init__(self, customer, cart, promotion=None):
self.customer = customer
self.cart = list(cart)
self.promotion = promotion
def total(self):
if not hasattr(self, '__total'):
self.__total = sum(item.total() for item in self.cart)
return self.__total
def due(self):
if self.promotion is None:
discount = 0.0
else:
discount = self.promotion(order = self) # promotion is now a plain function, so we simply call it
return self.total() - discount
def __repr__(self):
fm = "<Order total:{:.2f} due:{:.2f}>"
return fm.format(self.total(),self.due())
# we no longer need to create an abstract base class
def fidelity_promo(order):
'''5% discount for customers with 1000 or more fidelity points'''
return order.total() * 0.05 if order.customer.fidelity >= 1000.0 else 0.0
def bulkitem_promo(order):
'''10% discount on each line item with 20 or more units'''
d = 0.0
for item in order.cart:
if item.quantity >= 20 :
d += item.total() * 0.1
return d
def largeorder_promo(order):
'''7% discount on orders with 10 or more distinct items'''
distinct_item = { item.product for item in order.cart } # use a set: it cannot contain duplicate elements
if len(distinct_item) >= 10 :
return order.total() * 0.07
return 0.0
###Output
_____no_output_____
###Markdown
Test it with the examples used above
###Code
joe = Customer('John Doe', 0)
ann = Customer('Ann Smith', 1100)
cart = [LineItem('banana', 4, 0.5),
LineItem('apple', 10,1.5),
LineItem('watermelon', 5, 5.0)]
Order(joe, cart, fidelity_promo) # no parentheses when passing a function as the strategy
Order(ann, cart, fidelity_promo)
banana_cart = [ LineItem('banana', 30, 0.5),
LineItem('apple', 10, 1.5) ]
Order(joe, banana_cart, bulkitem_promo)
long_order = [LineItem(str(item_code), 1, 1.0) for item_code in range(10)]
Order(joe, long_order, largeorder_promo)
Order(joe, cart, largeorder_promo)
###Output
_____no_output_____
###Markdown
Choosing the best strategyTwo points:- treat functions as first-class objects- automatically collect all the promotion functions in the module
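As a preview of the decorator-based option mentioned in the code below (a sketch only; a registry built this way removes the need to scan `globals()`):
```python
promos = []  # registry of promotion functions

def promotion(promo_func):
    """Register a promotion strategy and return it unchanged."""
    promos.append(promo_func)
    return promo_func

@promotion
def fidelity(order):
    """5% discount for customers with 1000 or more fidelity points."""
    return order.total() * 0.05 if order.customer.fidelity >= 1000 else 0.0

def best_promo(order):
    """Return the best discount available."""
    return max(promo(order) for promo in promos)
```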
###Code
# ----------- 1 -----------
# Manually enumerate them (not recommended)
# treating the functions as first-class objects
promo1 = [fidelity_promo, bulkitem_promo, largeorder_promo]
# ----------- 2 -----------
# Use the globals() dict, which returns everything defined in the current module
promo2 = [globals()[name] for name in globals() if name.endswith('_promo') and name != 'best_promo']
# ----------- 3 -----------
# Use module introspection together with the inspect module to collect all the functions:
# first put all the strategy functions in a promotions module, import it with "import promotions", then use inspect.getmembers() to retrieve them
# import inspect
# import promotions
# promo3 = [ func for name, func in inspect.getmembers(promotions, inspect.isfunction)]
# ----------- 4 -----------
# A decorator can also collect all the discount strategy functions automatically; chapter 7 covers decorators.
def best_promo(order):
''' Return the best available discount
'''
return max(promo(order) for promo in promo2)
# Test the best-strategy selector
joe = Customer('John Doe', 0)
ann = Customer('Ann Smith', 1100)
cart = [LineItem('banana', 4, 0.5),
LineItem('apple', 10,1.5),
LineItem('watermelon', 5, 5.0)]
Order(ann, cart, best_promo) # no parentheses when passing a function as the strategy
###Output
_____no_output_____ |
2.normalization/answers/1.flexique.ipynb | ###Markdown
A lexicon of FrenchThe *Flexique* resource is a database designed for studying the inflectional system of French. It consists of three tables, spread over three files. The file *nlexique.csv* lists 31,002 lexemes covering 65,111 words.The code below loads all the lexemes and their phonological representations into a variable `lexemes`:
###Code
import csv
with open('../files/nlexique.csv') as csvfile:
reader = csv.DictReader(csvfile)
lexemes = [
(row['lexeme'], row['sg'])
for row in reader
]
###Output
_____no_output_____
###Markdown
The list of lexemes can then be queried by calling the `lexemes` variable:
###Code
print(lexemes[:10])
###Output
_____no_output_____
###Markdown
**Note:** for all the exercises, try to provide a short analysis of your solution. LoadingBefore anything else, load the *re* module:
###Code
# your code here
import re
###Output
_____no_output_____
###Markdown
Are you hooked (*accro*)?Search the list of lexemes for all the terms that begin with *accro*.
###Code
# your code here
pattern = r'^accro.*'
prog = re.compile(pattern)
for lexeme, phon in lexemes:
result = prog.match(lexeme)
if result:
print(result.group())
###Output
_____no_output_____
###Markdown
**Analysis:** The metacharacter `^` anchors the beginning of a line. Double doseNow, perform a search that lists all the compound words containing a hyphen (*-*).
###Code
# your code here
pattern = r'[\w]+-[\w]+'
prog = re.compile(pattern)
for lexeme, phon in lexemes:
result = prog.match(lexeme)
if result:
print(result.group())
###Output
_____no_output_____
###Markdown
**Analysis:** Compound words are structured as two sets of word characters separated by a hyphen. He who rhymes last rhymes bestFinally, try to find the lexemes that would rhyme with the word *acabit*.
###Code
# your code here
pattern = r'.+bi$'
prog = re.compile(pattern)
for lexeme, phon in lexemes:
result = prog.match(phon)
if result:
print(lexeme)
###Output
_____no_output_____ |
Choosing the right hardware for OpenVino - Dev Cloud/Notebooks/Using IntelDevcloud.ipynb | ###Markdown
Exercise: Using Intel DevCloudNow that we've walked through the process of requesting a device on Intel's DevCloud and loading a model, you will have the opportunity to do this yourself with the addition of running inference on an image.In this exercise, you will do the following:1. Write a Python script to load a model and run inference 10 times on a CPU on Intel's DevCloud. * Calculate the time it takes to load the model. * Calculate the time it takes to run inference 10 times.2. Write a shell script to submit a job to Intel's DevCloud.3. Submit a job using `qsub` on the **IEI Tank-870** edge node with an **Intel Xeon E3 1268L v5**.4. Run `liveQStat` to view the status of your submitted job.5. Retrieve the results from your job.6. View the results.Click the **Exercise Overview** button below for a demonstration. Exercise Overview IMPORTANT: Set up paths so we can run Dev Cloud utilitiesYou *must* run this every time you enter a Workspace session.
###Code
%env PATH=/opt/conda/bin:/opt/spark-2.4.3-bin-hadoop2.7/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/intel_devcloud_support
import os
import sys
sys.path.insert(0, os.path.abspath('/opt/intel_devcloud_support'))
sys.path.insert(0, os.path.abspath('/opt/intel'))
###Output
_____no_output_____
###Markdown
The ModelWe will be using the `vehicle-license-plate-detection-barrier-0106` model for this exercise. Remember that to run a model on the CPU, we need to use `FP32` as the model precision.The model has already been downloaded for you in the `/data/models/intel` directory on Intel's DevCloud. We will be using the following filepath during the job submission in **Step 3**:> **/data/models/intel/vehicle-license-plate-detection-barrier-0106/FP32/vehicle-license-plate-detection-barrier-0106**We will be running inference on an image of a car. The path to the image is `/data/resources/car.png` Step 1: Creating a Python ScriptThe first step is to create a Python script that you can use to load the model and perform inference. We'll use the `%%writefile` magic to create a Python file called `inference_cpu_model.py`. In the next cell, you will need to complete the `TODO` items for this Python script.`TODO` items:1. Load the model2. Prepare the model for inference (create an input dictionary)3. Run inference 10 times in a loopIf you get stuck, you can click on the **Show Solution** button below for a walkthrough with the solution code.
###Code
%%writefile inference_cpu_model.py
import time
import numpy as np
import cv2
from openvino.inference_engine import IENetwork
from openvino.inference_engine import IECore
import argparse
def main(args):
model=args.model_path
model_weights=model+'.bin'
model_structure=model+'.xml'
start=time.time()
# TODO: Load the model
model=IENetwork(model_structure, model_weights)
core = IECore()
net = core.load_network(network=model, device_name='CPU', num_requests=1)
print(f"Time taken to load model = {time.time()-start} seconds")
# Get the name of the input node
input_name=next(iter(model.inputs))
# Reading and Preprocessing Image
input_img=cv2.imread('/data/resources/car.png')
input_img=cv2.resize(input_img, (300,300), interpolation = cv2.INTER_AREA)
input_img=np.moveaxis(input_img, -1, 0)
# TODO: Prepare the model for inference (create input dict etc.)
input_dict={input_name:input_img}
print(input_dict)
start=time.time()
for _ in range(10):
# TODO: Run Inference in a Loop
net.infer(input_dict)
print(f"Time Taken to run 10 Infernce on CPU is = {time.time()-start} seconds")
if __name__=='__main__':
parser=argparse.ArgumentParser()
parser.add_argument('--model_path', required=True)
args=parser.parse_args()
main(args)
###Output
Overwriting inference_cpu_model.py
###Markdown
Show Solution Step 2: Creating a Job Submission ScriptTo submit a job to the DevCloud, you'll need to create a shell script. Similar to the Python script above, we'll use the `%%writefile` magic command to create a shell script called `load_model_job.sh`. In the next cell, you will need to complete the `TODO` items for this shell script.`TODO` items:1. Create a `MODELPATH` variable and assign it the value of the first argument that will be passed to the shell script2. Call the Python script using the `MODELPATH` variable value as the command line argumentIf you get stuck, you can click on the **Show Solution** button below for a walkthrough with the solution code.
###Code
%%writefile inference_cpu_model_job.sh
#!/bin/bash
exec 1>/output/stdout.log 2>/output/stderr.log
mkdir -p /output
#TODO: Create MODELPATH variable
MODELPATH=$1
#TODO: Call the Python script
python3 inference_cpu_model.py --model_path ${MODELPATH}
cd /output
tar zcvf output.tgz stdout.log stderr.log
###Output
Overwriting inference_cpu_model_job.sh
###Markdown
Show Solution Step 3: Submitting a Job to Intel's DevCloudIn the next cell, you will write your `!qsub` command to submit your job to Intel's DevCloud to load your model on the `Intel Xeon E3 1268L v5` CPU and run inference.Your `!qsub` command should take the following flags and arguments:1. The first argument should be the shell script filename2. `-d` flag - This argument should be `.`3. `-l` flag - This argument should request a **Tank-870** node using an **Intel Xeon E3 1268L v5** CPU. The default quantity is 1, so the **1** after `nodes` is optional.To get the queue label for this CPU, you can go to [this link](https://devcloud.intel.com/edge/get_started/devcloud/)4. `-F` flag - This argument should be the full path to the model. As a reminder, the model is located in `/data/models/intel`.**Note**: There is an optional flag, `-N`, you may see in a few exercises. This is an argument that only works on Intel's DevCloud that allows you to name your job submission. This argument doesn't work in Udacity's workspace integration with Intel's DevCloud.If you get stuck, you can click on the **Show Solution** button below for a walkthrough with the solution code.
###Code
job_id_core = !qsub inference_cpu_model_job.sh -d . -l nodes=1:tank-870:e3-1268l-v5 -F "/data/models/intel/vehicle-license-plate-detection-barrier-0106/FP32/vehicle-license-plate-detection-barrier-0106" -N store_core
print(job_id_core[0])
###Output
vMohGdy25Yr4vWw6Gu03XjRflYH3Vzha
###Markdown
Show Solution Step 4: Running liveQStatRunning the `liveQStat` function, we can see the live status of our job. Running the this function will lock the cell and poll the job status 10 times. The cell is locked until this finishes polling 10 times or you can interrupt the kernel to stop it by pressing the stop button at the top: * `Q` status means our job is currently awaiting an available node* `R` status means our job is currently running on the requested node**Note**: In the demonstration, it is pointed out that `W` status means your job is done. This is no longer accurate. Once a job has finished running, it will no longer show in the list when running the `liveQStat` function.Click the **Running liveQStat** button below for a demonstration. Running liveQStat
###Code
import liveQStat
liveQStat.liveQStat()
###Output
_____no_output_____
###Markdown
Step 5: Retrieving Output FilesIn this step, we'll be using the `getResults` function to retrieve our job's results. This function takes a few arguments.1. `job id` - This value is stored in the `job_id_core` variable we created during **Step 3**. Remember that this value is an array with a single string, so we access the string value using `job_id_core[0]`.2. `filename` - This value should match the filename of the compressed file we have in our `load_model_job.sh` shell script.3. `blocking` - This is an optional argument and is set to `False` by default. If this is set to `True`, the cell is locked while waiting for the results to come back. There is a status indicator showing the cell is waiting on results.**Note**: The `getResults` function is unique to Udacity's workspace integration with Intel's DevCloud. When working on Intel's DevCloud environment, your job's results are automatically retrieved and placed in your working directory.Click the **Retrieving Output Files** button below for a demonstration. Retrieving Output Files
###Code
import get_results
get_results.getResults(job_id_core[0], filename="output.tgz", blocking=True)
###Output
getResults() is blocking until results of the job (id:vMohGdy25Yr4vWw6Gu03XjRflYH3Vzha) are ready.
Please wait..........Success!
output.tgz was downloaded in the same folder as this notebook.
###Markdown
Step 6: View the OutputsIn this step, we unpack the compressed file using `!tar zxf` and read the contents of the log files by using the `!cat` command.`stdout.log` should contain the printout of the print statement in our Python script.
###Code
!tar zxf output.tgz
!cat stdout.log
!cat stderr.log
###Output
Unable to init server: Could not connect: Connection refused
(sample image:17): Gtk-WARNING **: 09:21:13.498: cannot open display:
tar: stdout.log: file changed as we read it
|
ssd300_demo.ipynb | ###Markdown
SSD 300 Demo
###Code
import cv2
import time
from keras import backend as K
from keras.models import load_model
from keras.preprocessing import image
from keras.optimizers import Adam
from imageio import imread
import numpy as np
from matplotlib import pyplot as plt
from models.keras_ssd300 import ssd_300
from keras_loss_function.keras_ssd_loss import SSDLoss
from keras_layers.keras_layer_AnchorBoxes import AnchorBoxes
from keras_layers.keras_layer_DecodeDetections import DecodeDetections
from keras_layers.keras_layer_DecodeDetectionsFast import DecodeDetectionsFast
from keras_layers.keras_layer_L2Normalization import L2Normalization
from ssd_encoder_decoder.ssd_output_decoder import decode_detections, decode_detections_fast
from data_generator.object_detection_2d_data_generator import DataGenerator
from data_generator.object_detection_2d_photometric_ops import ConvertTo3Channels
from data_generator.object_detection_2d_geometric_ops import Resize
from data_generator.object_detection_2d_misc_utils import apply_inverse_transforms
# Prepare model
img_height = 300
img_width = 300
K.clear_session() # Clear previous models from memory.
model = ssd_300(image_size=(img_height, img_width, 3),
n_classes=20,
mode='inference',
l2_regularization=0.0005,
scales=[0.1, 0.2, 0.37, 0.54, 0.71, 0.88, 1.05], # The scales for MS COCO are [0.07, 0.15, 0.33, 0.51, 0.69, 0.87, 1.05]
aspect_ratios_per_layer=[[1.0, 2.0, 0.5],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5],
[1.0, 2.0, 0.5]],
two_boxes_for_ar1=True,
steps=[8, 16, 32, 64, 100, 300],
offsets=[0.5, 0.5, 0.5, 0.5, 0.5, 0.5],
clip_boxes=False,
variances=[0.1, 0.1, 0.2, 0.2],
normalize_coords=True,
subtract_mean=[123, 117, 104],
swap_channels=[2, 1, 0],
confidence_thresh=0.5,
iou_threshold=0.45,
top_k=200,
nms_max_output_size=400)
# Load model
weights_path = 'VGG_VOC0712Plus_SSD_300x300_ft_iter_160000.h5'
model.load_weights(weights_path, by_name=True)
adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)
model.compile(optimizer=adam, loss=ssd_loss.compute_loss)
# Load image
orig_images = [] # Store the images here.
input_images = [] # Store resized versions of the images here.
img_path = '/home/sahand/Projects/city/sydney1.jpg'
orig_images.append(imread(img_path))
img = image.load_img(img_path, target_size=(img_height, img_width))
img = image.img_to_array(img)
input_images.append(img)
input_images = np.array(input_images)
# Predict
start = time.time()
y_pred = model.predict(input_images)
end = time.time()
# Visualize
confidence_threshold = 0.5
y_pred_thresh = [y_pred[k][y_pred[k,:,1] > confidence_threshold] for k in range(y_pred.shape[0])]
np.set_printoptions(precision=2, suppress=True, linewidth=90)
print("Predicted boxes:\n")
print(' class conf xmin ymin xmax ymax')
print(y_pred_thresh[0])
# Display the image and draw the predicted boxes onto it.
# Set the colors for the bounding boxes
colors = plt.cm.hsv(np.linspace(0, 1, 21)).tolist()
classes = ['background',
'aeroplane', 'bicycle', 'bird', 'boat',
'bottle', 'bus', 'car', 'cat',
'chair', 'cow', 'diningtable', 'dog',
'horse', 'motorbike', 'person', 'pottedplant',
'sheep', 'sofa', 'train', 'tvmonitor']
print("Time : ",end - start)
plt.figure(figsize=(20,12))
plt.imshow(orig_images[0])
current_axis = plt.gca()
for box in y_pred_thresh[0]:
# Transform the predicted bounding boxes for the 300x300 image to the original image dimensions.
xmin = box[2] * orig_images[0].shape[1] / img_width
ymin = box[3] * orig_images[0].shape[0] / img_height
xmax = box[4] * orig_images[0].shape[1] / img_width
ymax = box[5] * orig_images[0].shape[0] / img_height
color = colors[int(box[0])]
label = '{}: {:.2f}'.format(classes[int(box[0])], box[1])
current_axis.add_patch(plt.Rectangle((xmin, ymin), xmax-xmin, ymax-ymin, color=color, fill=False, linewidth=2))
current_axis.text(xmin, ymin, label, size='x-large', color='white', bbox={'facecolor':color, 'alpha':1.0})
###Output
Predicted boxes:
class conf xmin ymin xmax ymax
[[ 7. 1. 116.35 206.14 238.84 306.28]
[ 7. 0.98 194.07 192.65 257.39 252.66]
[ 7. 0.89 87.75 206.81 118.65 240.74]
[ 7. 0.65 111.13 212.2 133.81 232.97]
[ 7. 0.53 53.86 206.5 82.68 236.17]]
Time : 3.988952159881592
|
remote_sensing/python/Local_Jupyter_NoteBooks/scratches_to_experiment/moving_10_day_window_2Yrs.ipynb | ###Markdown
This is a scratch notebook to compute maxima over 10-day intervals.
###Code
import csv
import numpy as np
import pandas as pd
# import geopandas as gpd
from IPython.display import Image
# from shapely.geometry import Point, Polygon
from math import factorial
import scipy
import scipy.signal
import os, os.path
from datetime import date
import datetime
import time
from statsmodels.sandbox.regression.predstd import wls_prediction_std
from sklearn.linear_model import LinearRegression
from patsy import cr
# from pprint import pprint
import matplotlib.pyplot as plt
import seaborn as sb
import sys
sys.path.append('/Users/hn/Documents/00_GitHub/Ag/remote_sensing/python/')
import remote_sensing_core as rc
import remote_sensing_core as rcp
start_time = time.time()
data_dir = "/Users/hn/Documents/01_research_data/" + \
"remote_sensing/01_NDVI_TS/00_Eastern_WA_withYear/2Years/"
param_dir = "/Users/hn/Documents/00_GitHub/Ag/remote_sensing/parameters/"
###Output
_____no_output_____
###Markdown
Parameters
###Code
####################################################################################
###
### Parameters
###
####################################################################################
irrigated_only = 0
SF_year = 2017
indeks = "EVI"
regular_window_size = 10
###Output
_____no_output_____
###Markdown
Read the data
###Code
f_name = "Eastern_WA_" + str(SF_year) + "_70cloud_selectors.csv"
a_df = pd.read_csv(data_dir + f_name, low_memory=False)
##################################################################
##################################################################
####
#### plots has to be exact. So, we need
#### to filter out NASS, and filter by last survey date
####
##################################################################
##################################################################
a_df = a_df[a_df['county']== "Grant"] # Filter Grant
# a_df = rc.filter_out_NASS(a_df) # Toss NASS
# a_df = rc.filter_by_lastSurvey(a_df, year = SF_year) # filter by last survey date
a_df['SF_year'] = SF_year
###Output
_____no_output_____
###Markdown
Functions Get a field's data
###Code
a_df.reset_index(drop=True, inplace=True)
a_df_1 = a_df[a_df.ID == a_df.ID[0]]
a_df_1.shape
a_df_1 = rc.initial_clean_EVI(a_df_1)
# a_df_1.sort_values(by=['system_start_time'], inplace=True)
a_df_1.sort_values(by=['image_year', 'doy'], inplace=True)
a_df_1 = rc.correct_timeColumns_dataTypes(a_df_1)
a_df_1.reset_index(drop=True, inplace=True)
print(a_df_1.shape)
# a_df_1.head(2)
a_df_1.system_start_time[0]
A = rc.regularize_movingWindow_windowSteps_2Yrs(one_field_df = a_df_1, SF_yr=SF_year, idks=indeks, window_size=10)
a_df_1.image_year.unique()
A.shape
print (a_field_df.shape)
print (regular_df.shape)
outName = "/Users/hn/Documents/01_research_data/remote_sensing/test_data/a_regularized_TS.csv"
regularized_TS.to_csv(outName, index=False)
###Output
_____no_output_____
###Markdown
Create the Aeolus environment, and check that things work
###Code
first_10_IDs = a_df.ID.unique()[:10]
an_EE_TS = a_df[a_df.ID.isin(first_10_IDs) ]
indeks = "EVI"
county = "Grant"
SF_year = 2017
regular_window_size = 10
########################################################################################
an_EE_TS = an_EE_TS[an_EE_TS['county'] == county] # Filter Grant
an_EE_TS['SF_year'] = SF_year
########################################################################################
# output_dir = "/data/hydro/users/Hossein/remote_sensing/02_Regularized_TS/"
# os.makedirs(output_dir, exist_ok=True)
########################################################################################
if (indeks == "EVI"):
an_EE_TS = rc.initial_clean_EVI(an_EE_TS)
else:
an_EE_TS = rc.initial_clean_NDVI(an_EE_TS)
an_EE_TS.head(2)
###
### List of unique polygons
###
polygon_list = an_EE_TS['ID'].unique()
print(len(polygon_list))
########################################################################################
###
### initialize output data. all polygons in this case
### will have the same length.
### 9 steps in the first three months, followed by 36 points in the full year,
### 9 months in the last year
###
reg_cols = ['ID', 'Acres', 'county', 'CropGrp', 'CropTyp',
'DataSrc', 'ExctAcr', 'IntlSrD', 'Irrigtn', 'LstSrvD', 'Notes',
'RtCrpTy', 'Shap_Ar', 'Shp_Lng', 'TRS', 'image_year',
'SF_year', 'doy', indeks]
nrows = 54 * len(polygon_list)
output_df = pd.DataFrame(data = None,
index = np.arange(nrows),
columns = reg_cols)
########################################################################################
counter = 0
for a_poly in polygon_list:
if (counter): # % 100 == 0
print (counter)
curr_field = an_EE_TS[an_EE_TS['ID']==a_poly].copy()
################################################################
# Sort by DoY (sanitary check)
curr_field.sort_values(by=['image_year', 'doy'], inplace=True)
curr_field = rc.correct_timeColumns_dataTypes(curr_field)
curr_field.reset_index(drop=True, inplace=True)
print ("print(curr_field.shape")
print(curr_field.shape)
print ("__________________________________________")
################################################################
regularized_TS = rc.regularize_movingWindow_windowSteps_18Months(curr_field, \
SF_yr = SF_year, \
idks = indeks, \
window_size = 10)
print(regularized_TS.shape)
################################################################
row_pointer = 54 * counter
output_df[row_pointer: row_pointer+54] = regularized_TS.values
counter += 1
regularized_TS.values.shape
output_df[row_pointer: row_pointer+54].shape
row_pointer
output_df.shape
output_df.head(2)
print (time.strftime('%Y-%m-%d', time.localtime(a_df_1.system_start_time.iloc[0])))
print (a_df_1.system_start_time.iloc[0])
print (time.strftime('%Y-%m-%d', time.localtime(a_df_1.system_start_time.iloc[0])))
print ("Convert Epoch to datetime format")
print (datetime.datetime.fromtimestamp(a_df_1.system_start_time.iloc[0]))
# Convert Epoch to DoY
print ("___________________________________________")
print ("")
print ("Convert Epoch to DoY")
print ( (datetime.datetime.fromtimestamp(a_df_1.system_start_time.iloc[0])).timetuple().tm_yday )
print ("___________________________________________")
print ("")
print ("difference number of days")
print ((date(2003,11,22) - date(2002,10,20)).days)
time.localtime(a_df_1.system_start_time.iloc[0])
# datetime.datetime(2016, 1, 1) + datetime.timedelta(275 - 1)
# im_yr_sotred = a_df_1.copy()
# epoch_sorted = a_df_1.copy()
# im_yr_sotred.sort_values(by=['image_year', 'doy'], inplace=True)
# epoch_sorted.sort_values(by=['system_start_time'], inplace=True)
# epoch_sorted.to_csv (r'/Users/hn/Desktop/test/epoch_sorted.csv', index = True, header=True)
# im_yr_sotred.to_csv (r'/Users/hn/Desktop/test/im_yr_sotred.csv', index = True, header=True)
# a_df_1.to_csv (r'/Users/hn/Desktop/test/a_df_1.csv', index = True, header=True)
###Output
_____no_output_____ |
basics/notebooks/Classification Tutorial.ipynb | ###Markdown
```Created: 2019-09-22Author: Roy WildsUpdates2019-10-02: Added RF classifier2019-11-17: Cleaned up for push to github``` About this notebookThis notebook captures the typical steps involved in building a classifier using pandas and sklearn.It includes some data manipulation to create the classes to be used (the chosen dataset didn't have explicit labels). Data LoadingThis uses the amazon fire CSV file from Kaggle: https://www.kaggle.com/gustavomodelli/forest-fires-in-brazilIt's a nice dataset that has timestamps, categorical, and numerical features. Not overly complicated, but a nice starting point.
###Code
import pandas as pd
csvfile = '~/data/amazon.csv'
df = pd.read_csv(csvfile, quotechar='"', encoding = "ISO-8859-1") #, parse_dates=[4]
df.count()
###Output
_____no_output_____
###Markdown
**Note** the presence of the correct encoding argument. Initial attempt to load the data file failed with a Unicode error (the data is from Brazil).Running a file command points us to the correct encoding:```$ file data/amazon.csv data/amazon.csv: ISO-8859 text, with CRLF line terminators```
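If you prefer to detect the encoding from Python instead of the shell, the `chardet` package can make a guess (a sketch; requires `chardet` to be installed and assumes the same file path used by the `file` command above):
```python
import chardet

with open('data/amazon.csv', 'rb') as f:
    raw = f.read(100_000)  # a sample of the raw bytes is usually enough
print(chardet.detect(raw))  # returns a dict with 'encoding', 'confidence' and 'language' keys
```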
###Code
df.sample(5)
###Output
_____no_output_____
###Markdown
Data Manipulation
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
Let's properly dtype the various columns, and going to provide the option to translate the "month" to English.Note we could have handled the date column during the `read_csv()` step by adding the `parse_dates` arg.
###Code
df['date'] = pd.to_datetime(df['date'])
df['state'] = df['state'].astype('category')
df['month'] = df['month'].astype('category')
df.dtypes
portugese_months = list(df['month'].cat.categories)
portugese_months.sort()
# We sort so that the explicit ordering of english months here is correct!
english_months = ['April','August','December','February','January','July','June','May','March','November','October','September']
translate_months = dict(zip(portugese_months,english_months))
translate_months
df['month'].replace(translate_months, inplace=True)
df['month'] = df['month'].astype('category')
df.sample(5)
###Output
_____no_output_____
###Markdown
Create ClassesYou may have noticed that we don't actually have any obvious labels! We could try predicting some of the categorical variables... For example, maybe you can predict the month based on the other columns (ignoring the `date` feature obviously).But, here I'm going to be simple with a 2-class problem: "Lots of Fires" (`high`) vs "Fewer Fires" (`low`). This will be determined by whether or not the number is more than 1 standard deviation away from the mean for the particular `state, month` combination in the data.
###Code
# There's probably a pandas way to do this cleverly using groupby and agg()
# but I can't figure out all the reshaping required.
states = list(df['state'].cat.categories)
months = list(df['month'].cat.categories)
import numpy as np
df['class'] = 'low' # Start with everything 'low'
nstd = 1 # Number of standard deviations to be considered 'high'
for s in states:
for m in months:
mu = df[(df['state'] == s ) & (df['month'] == m)]['number'].mean()
sigma = df[(df['state'] == s ) & (df['month'] == m)]['number'].std()
# Wasn't able to get this working using pandas/groupby/etc. ops. Had to resort to a loop.
# At least it's linear in the dataframe size.
for index, row in df[(df['state'] == s ) & (df['month'] == m)].iterrows():
if row['number'] > mu+nstd*sigma:
df.iloc[index,5] = 'high' # THIS IS BRITTLE. Relies on specific shape for "df".
# #Failed attempts to do this more pythonically
# print( (df['state'] == s ) & (df['month'] == m) & ( df['number'] > 0).describe() )
# df['class'] = np.where((df['state'] == s ) & (df['month'] == m) & ( (abs(df['number']-mu)/sigma)>0.01),'high','low')
# df[(df['state'] == s ) & (df['month'] == m) & (abs(df['number']-mu)/sigma>0)]['class'] = 'high'
###Output
_____no_output_____
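###Markdown
For what it's worth, the pandas-only version hinted at in the comment above can be written with `groupby().transform()` (a sketch, not run here, assuming the same `df`, `np` and `nstd` as above):
```python
grp = df.groupby(['state', 'month'])['number']
mu = grp.transform('mean')
sigma = grp.transform('std')
# single-row groups give sigma = NaN; the comparison is then False, i.e. 'low', matching the loop above
df['class'] = np.where(df['number'] > mu + nstd * sigma, 'high', 'low')
```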
###Markdown
Data ExplorationAlways good to understand the raw data before jumping into modeling.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
df.describe(include = 'all')
#Plot the number of entries per state
df.groupby(['state'])['number'].agg('count').plot(kind='bar')
#Plot the total number of fires per state
df.groupby(['state'])['number'].agg('sum').plot(kind='bar')
#Plot the total number of fires per state, colouring the numbers that were determined to be class="high"
#Not a terribly informational plot, but useful plotting technique in general.
df.groupby(['state','class'])['number'].agg('sum').unstack().plot(kind='bar')
###Output
_____no_output_____
###Markdown
ModelingGoing to build a model to predict the `class` from the `state` and `number` features.Need to convert the `state` categorical feature into features that can be consumed by LR or RF.An ordinal encoding doesn't make sense (there's no simple ordering of the states... maybe by latitude since that could be a sensible ordering for climate/weather, but skipping that for now).Will use one-hot encoding.
###Code
# Simplest to make a copy and then deal with the one-hot encoding for the 'state' categorical columns.
lrdf = df.copy()
lrdf = pd.concat([df,pd.get_dummies(df['state'], prefix='state')],axis=1)
# Drop the columns we don't need.
lrdf.drop(['year'], axis=1, inplace=True)
lrdf.drop(['state'], axis=1, inplace=True)
lrdf.drop(['date'], axis=1, inplace=True)
lrdf.drop(['month'], axis=1, inplace=True)
lrdf.sample(5)
data = lrdf
labels = lrdf['class']
data.drop(['class'], axis=1, inplace=True)
from sklearn.model_selection import train_test_split
# Make train/test sets with a 30% test size.
data_train, data_test, labels_train, labels_test = train_test_split(data, labels, test_size=0.3)
labels_test.describe()
###Output
_____no_output_____
###Markdown
Logistic Regression Model
###Code
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
logreg.fit(data_train, labels_train)
pred_test = logreg.predict(data_test)
from sklearn.metrics import confusion_matrix
m = confusion_matrix(labels_test, pred_test)
# Assume that the class=='high' is the Positive Case (i.e. what we care about classifying)
tp, fn, fp, tn = m.ravel()
print(m)
#print('tn = {}, fp = {}, fn = {}, tp = {}'.format(tn,fp,fn,tp))
print('Using class="high" as the positive prediction (i.e. a true prediction).')
precision = tp/(tp+fp+0.)
recall = tp/(tp+fn+0.)
print('Precision = {:.2f} and Recall = {:.2f}'.format(precision,recall))
###Output
_____no_output_____
###Markdown
Varying Threshold

Rather than using the default 0.5 threshold for determining if a prediction is `high` or not, we can vary a threshold from 0 to 1 to control the precision/recall tradeoff of the classifier.
###Code
thetas = np.linspace(0.1,0.9,101)
pred_test_probs = logreg.predict_proba(data_test)
print(logreg.classes_)
#So 1st col is probability of class='high' and 2nd col is probability of class='low'
pred_test_probs[0:10,:]
precision, recall = [], []
for theta in thetas:
pred_test = np.where(pred_test_probs[:,0] >= theta, 'high','low')
m = confusion_matrix(labels_test, pred_test)
# Assume that the class=='high' is the Positive Case (i.e. what we care about classifying)
tp, fn, fp, tn = m.ravel()
precision.append(tp/(tp+fp+0.))
recall.append(tp/(tp+fn+0.))
logreg_thetas = pd.DataFrame()
logreg_thetas['threshold']=thetas
logreg_thetas['precision'] = precision
logreg_thetas['recall'] = recall
logreg_thetas.plot(x='threshold')
###Output
_____no_output_____
###Markdown
The above plot is typical of the precision/recall tradeoff as the threshold varies: you get better precision (i.e. fewer mistakes) at the cost of missing more of the true (i.e. `high`) predictions (lower recall).

Random Forest
###Code
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(data_train,labels_train)
pred_test = rf.predict(data_test)
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
m = confusion_matrix(labels_test, pred_test)
# Assume that the class=='high' is the Positive Case (i.e. what we care about classifying)
tp, fn, fp, tn = m.ravel()
print(m)
#print('tn = {}, fp = {}, fn = {}, tp = {}'.format(tn,fp,fn,tp))
print('Using class="high" as the positive prediction (i.e. a true prediction).')
precision = tp/(tp+fp+0.)
recall = tp/(tp+fn+0.)
accuracy = accuracy_score(labels_test, pred_test)
print('Precision = {:.2f} and Recall = {:.2f}'.format(precision,recall))
print('Accuracy = {:.2f}'.format(accuracy))
###Output
_____no_output_____
###Markdown
We see a great example here of why accuracy isn't a good metric when there's class imbalance. In our case we have roughly a 10-to-1 class imbalance: the model gets most of the `class='low'` cases right, but it does much worse on the `class='high'` cases, so the headline accuracy overstates how useful the classifier is.

Varying Threshold
###Code
thetas = np.linspace(0.1,0.9,101)
pred_test_probs = rf.predict_proba(data_test)
print(rf.classes_)
#If not 'high', 'low' then ensure you change [:,0] to the correct column slice to use!
precision, recall = [], []
for theta in thetas:
pred_test = np.where(pred_test_probs[:,0] >= theta, 'high','low')
m = confusion_matrix(labels_test, pred_test)
# Assume that the class=='high' is the Positive Case (i.e. what we care about classifying)
tp, fn, fp, tn = m.ravel()
precision.append(tp/(tp+fp+0.))
recall.append(tp/(tp+fn+0.))
rf_thetas = pd.DataFrame()
rf_thetas['threshold']=thetas
rf_thetas['precision'] = precision
rf_thetas['recall'] = recall
rf_thetas.plot(x='threshold')
###Output
_____no_output_____
###Markdown
Repeat RF but with k-fold Cross Validation

Thus far we have been using a single train/test split with 30% held out for testing. This section uses k-fold cross validation in order to get an estimate of the uncertainty in the model's performance. It is also an opportunity to put some error bars on our precision/recall plots!
###Code
from sklearn.model_selection import KFold
from sklearn.metrics import confusion_matrix
import numpy as np
# KFOLD just provides indexes, so we can just do it on the data (not labels) since they're the same size and share the same indices
nfolds = 5
kf = KFold(n_splits=nfolds)
kf.get_n_splits(data)
# We are going to loop thru the KFOLDS and also through the different thresholds.
# Yields a NTHRESHOLD rows x KFOLDS cols
nthresholds = 21
thetas = np.linspace(0.1, 0.9, nthresholds)
precision, recall = np.zeros(shape=(nthresholds, nfolds)), np.zeros(shape=(nthresholds, nfolds))
ifold = 0
for train_index, test_index in kf.split(data):
data_train, data_test = data.iloc[train_index], data.iloc[test_index]
labels_train, labels_test = labels.iloc[train_index], labels.iloc[test_index]
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(data_train,labels_train)
pred_test_probs = rf.predict_proba(data_test)
itheta = 0
for theta in thetas:
pred_test = np.where(pred_test_probs[:,0] >= theta, 'high','low')
m = confusion_matrix(labels_test, pred_test)
# Assume that the class=='high' is the Positive Case (i.e. what we care about classifying)
tp, fn, fp, tn = m.ravel()
precision[itheta, ifold] = tp/(tp+fp+0.)
recall[itheta, ifold] = tp/(tp+fn+0.)
itheta += 1
ifold += 1
precision_errors = np.std(precision, axis=1)
precision_line = np.mean(precision, axis=1)
recall_errors = np.std(recall, axis=1)
recall_line = np.mean(recall, axis=1)
plt.title('Precision and Recall - 1 Std Dev shown')
plt.xlabel('threshold')
plt.ylabel('Precision/Recall')
plt.errorbar(thetas, recall_line, yerr=recall_errors, c='red', capsize=3)
plt.errorbar(thetas, precision_line, yerr=precision_errors, c='blue', capsize=3)
###Output
_____no_output_____ |
saildrone-cloud-mur.ipynb | ###Markdown
This reads in the MUR SST from AWS PODAAC and collocates it with all Saildrone cruises.
###Code
import sys
import numpy as np
import matplotlib.pyplot as plt
import datetime as dt
import xarray as xr
import cartopy.crs as ccrs
from scipy import spatial
#sys.path.append('/home/jovyan/shared/users/cgentemann/notebooks/salinity/subroutines/')
#from read_routines import read_all_usv, read_one_usv, add_coll_vars
import warnings
warnings.simplefilter('ignore') # filter some warning messages
from glob import glob
# these libraries help reading cloud data
import fsspec
import s3fs
import requests
import os
warnings.simplefilter("ignore") # filter some warning messages
xr.set_options(display_style="html",keep_attrs=True) # display dataset nicely
###Output
_____no_output_____
###Markdown
Read in All Saildrone cruises downloaded from https://data.saildrone.com/data/sets

- 2017 onwards; note that earlier data is going to lack instruments and be of poorer data quality in general
- For this code I want to develop a routine that reads in all the different datasets and creates a standardized set
- It may work best to first read each of the files individually into a dictionary, then go through each dataset finding all variable names
- I decided to put all SST into TEMP_CTD_MEAN and the same for salinity so there is a single variable name (a rename sketch follows below)
- this still preserves all the dataset information
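The variable standardization mentioned above is essentially a rename of each cruise's SST/salinity variables onto a common name. A minimal sketch of that step is below; the source names in `rename_map` are hypothetical, since the actual names differ between cruises and instruments:

```python
# Illustrative only: map whatever SST/salinity names a given cruise uses
# onto the standard names, keeping all other variables untouched.
rename_map = {'SST_MEAN': 'TEMP_CTD_MEAN', 'SAL_MEAN': 'SAL_CTD_MEAN'}
ds = ds.rename({old: new for old, new in rename_map.items() if old in ds})
```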
###Code
import os
import sys
sys.path.append(os.path.join(os.environ['HOME'],'shared','users','lib'))
import ebdpy as ebd
ebd.set_credentials(profile='esip-qhub')
profile = 'esip-qhub'
region = 'us-west-2'
endpoint = f's3.{region}.amazonaws.com'
ebd.set_credentials(profile=profile, region=region, endpoint=endpoint)
worker_max = 30
client,cluster = ebd.start_dask_cluster(profile=profile,worker_max=worker_max,
region=region, use_existing_cluster=True,
adaptive_scaling=False, wait_for_cluster=False,
environment='pangeo', worker_profile='Medium Worker',
propagate_env=True)
dir_data_pattern = '/home/jovyan/data/sss_collocations_orbital_norepeat/'
dir_out = '/home/jovyan/data/sss_collocations_orbital_norepeat_mur/'
files = glob(dir_data_pattern+'*.nc')
for ifile,file in enumerate(files):
ds = xr.open_dataset(file)
ds.close()
if any(v=='ob' for v in ds.dims.keys()):
ds = ds.swap_dims({'ob':'time'})
#remove any duplicates in time, keep only first value
_, index = np.unique(ds['time'], return_index=True)
ds=ds.isel(time=index)
name = file[52:-3]
name = name.replace(" ", "_")
name = name.replace("/", "_")
if ifile==0:
data_dict = {name:ds}
else:
data_dict[name]=ds
print(name)
###Output
_____no_output_____
###Markdown
Collocate MUR
###Code
from earthdata import Auth
auth = Auth().login()
url = "https://archive.podaac.earthdata.nasa.gov/s3credentials"
response = requests.get(url).json()
%%time
# set up read
json_consolidated = "s3://esip-qhub-public/nasa/mur/murv41_consolidated_20211011.json"
s_opts = {"requester_pays": True, "skip_instance_cache": True}
r_opts = {"key": response["accessKeyId"],"secret": response["secretAccessKey"],"token": response["sessionToken"],"client_kwargs": {"region_name": "us-west-2"},}
fs = fsspec.filesystem("reference",fo=json_consolidated,
ref_storage_args=s_opts,remote_protocol="s3",
remote_options=r_opts,simple_templates=True,)
ds_sst = xr.open_dataset(fs.get_mapper(""), decode_times=False, engine="zarr", consolidated=False)
ds_sst
###Output
_____no_output_____
###Markdown
Collocate using .interp linear interpolation
###Code
ds = ds_sst
for iname,name in enumerate(data_dict):
#if iname>3:
# continue
print(iname,name)
ds_usv = data_dict[name]
#create space for new data
for var in ds_sst:
ds_usv[var]=ds_usv.BARO_PRES_MEAN.copy(deep=True)*np.nan
ds_usv[var].attrs=ds_sst[var].attrs
ilen = len(ds_usv.time)
for inc in range(0,ilen,100):
i1,i2 = inc,inc+100
if i2>ilen:
i2=ilen-1
#print(inc,inc+100)
sub = ds_usv.isel(time=slice(i1,i2))
t1,t2=sub.time.min().data-np.timedelta64(1,'D'),sub.time.max().data+np.timedelta64(1,'D')
x1,x2=sub.lon.min().data-.15,sub.lon.max().data+.15
y1,y2=sub.lat.min().data-.15,sub.lat.max().data+.15
#print(inc,t1,t2,x1,x2,y1,y2)
ds_sat = ds_sst.sel(time=slice(t1,t2),lat=slice(y1,y2),lon=slice(x1,x2))
ds_sat['time']=np.asarray(ds_sat.time.data, "datetime64[ns]")
ds_interp = ds_sat.interp(time=sub.time,lat=sub.lat,lon=sub.lon,method='linear',assume_sorted=False) #add saildrone data to interpolated sat data
#add saildrone data to interpolated sat data
ds_interp = ds_interp.reset_coords(names={'lat','lon'})
for var in ds_interp:
ds_usv[var][i1:i2]=ds_interp[var]
#output
fout = dir_out+name+'_20211116.nc'
ds_usv.to_netcdf(fout)
print('output done, start new')
for iname,name in enumerate(data_dict):
fout = dir_out+name+'_20211116.nc'
#ds_usv = xr.open_dataset(fout)
#ds_usv.close()
#ds_usv.analysed_sst.plot()
#plt.show()
#plt.clf()
###Output
_____no_output_____
###Markdown
now gridded no repeat
###Code
import sys
import numpy as np
import matplotlib.pyplot as plt
import datetime as dt
import xarray as xr
import cartopy.crs as ccrs
from scipy import spatial
#sys.path.append('/home/jovyan/shared/users/cgentemann/notebooks/salinity/subroutines/')
#from read_routines import read_all_usv, read_one_usv, add_coll_vars
import warnings
warnings.simplefilter('ignore') # filter some warning messages
from glob import glob
# these libraries help reading cloud data
import fsspec
import s3fs
import requests
import os
warnings.simplefilter("ignore") # filter some warning messages
xr.set_options(display_style="html",keep_attrs=True) # display dataset nicely
import os
import sys
sys.path.append(os.path.join(os.environ['HOME'],'shared','users','lib'))
import ebdpy as ebd
ebd.set_credentials(profile='esip-qhub')
profile = 'esip-qhub'
region = 'us-west-2'
endpoint = f's3.{region}.amazonaws.com'
ebd.set_credentials(profile=profile, region=region, endpoint=endpoint)
worker_max = 30
client,cluster = ebd.start_dask_cluster(profile=profile,worker_max=worker_max,
region=region, use_existing_cluster=True,
adaptive_scaling=False, wait_for_cluster=False,
environment='pangeo', worker_profile='Medium Worker',
propagate_env=True)
from earthdata import Auth
auth = Auth().login()
url = "https://archive.podaac.earthdata.nasa.gov/s3credentials"
response = requests.get(url).json()
%%time
# set up read
json_consolidated = "s3://esip-qhub-public/nasa/mur/murv41_consolidated_20211011.json"
s_opts = {"requester_pays": True, "skip_instance_cache": True}
r_opts = {"key": response["accessKeyId"],"secret": response["secretAccessKey"],"token": response["sessionToken"],"client_kwargs": {"region_name": "us-west-2"},}
fs = fsspec.filesystem("reference",fo=json_consolidated,
ref_storage_args=s_opts,remote_protocol="s3",
remote_options=r_opts,simple_templates=True,)
ds_sst = xr.open_dataset(fs.get_mapper(""), decode_times=False, engine="zarr", consolidated=False)
ds_sst
dir_data_pattern = '/home/jovyan/data/sss_collocations_8day_nearest_norepeat/'
dir_out = '/home/jovyan/data/sss_collocations_8day_nearest_norepeat_mur/'
files = glob(dir_data_pattern+'*.nc')
for ifile,file in enumerate(files):
ds = xr.open_dataset(file)
ds.close()
if any(v=='ob' for v in ds.dims.keys()):
ds = ds.swap_dims({'ob':'time'})
#remove any duplicates in time, keep only first value
_, index = np.unique(ds['time'], return_index=True)
ds=ds.isel(time=index)
name = file[57:-3]
name = name.replace(" ", "_")
name = name.replace("/", "_")
if ifile==0:
data_dict = {name:ds}
else:
data_dict[name]=ds
print(ifile,name)
ds = ds_sst
for iname,name in enumerate(data_dict):
if iname<12:
continue
print(iname,name)
ds_usv = data_dict[name]
#create space for new data
for var in ds_sst:
ds_usv[var]=ds_usv.BARO_PRES_MEAN.copy(deep=True)*np.nan
ds_usv[var].attrs=ds_sst[var].attrs
ilen = len(ds_usv.time)
for inc in range(0,ilen,5):
#print(inc)
i1,i2 = inc,inc+5
if i2>ilen:
i2=ilen-1
if i1==i2:
continue
#print(inc,inc+101)
sub = ds_usv.isel(time=slice(i1,i2))
t1,t2=sub.time.min().data-np.timedelta64(1,'D'),sub.time.max().data+np.timedelta64(1,'D')
x1,x2=sub.lon.min().data-.15,sub.lon.max().data+.15
y1,y2=sub.lat.min().data-.15,sub.lat.max().data+.15
#print(inc,t1,t2,x1,x2,y1,y2)
ds_sat = ds_sst.sel(time=slice(t1,t2),lat=slice(y1,y2),lon=slice(x1,x2))
ds_sat['time']=np.asarray(ds_sat.time.data, "datetime64[ns]")
ds_interp = ds_sat.interp(time=sub.time,lat=sub.lat,lon=sub.lon,method='linear',assume_sorted=False) #add saildrone data to interpolated sat data
#add saildrone data to interpolated sat data
ds_interp = ds_interp.reset_coords(names={'lat','lon'})
for var in ds_interp:
ds_usv[var][i1:i2]=ds_interp[var]
#output
fout = dir_out+name+'_20211116.nc'
ds_usv.to_netcdf(fout)
print('output done, start new')
for iname,name in enumerate(data_dict):
fout = dir_out+name+'_20211116.nc'
ds_usv = xr.open_dataset(fout)
ds_usv.close()
print(iname,ds_usv.analysed_sst.mean().data)
#plt.show()
#plt.clf()
ds_usv.analysed_sst.plot()
ds_interp = ds_sat.interp(time=sub.time).load()
#ds_interp = ds_interp.reset_coords(names={'lat','lon'})
#ds_interp.analysed_sst.plot()
#ds_interp = ds_interp.drop('ob')
ds_interp.analysed_sst[0,:,:].plot()
ds_sst
ds_sst.analysed_sst[5000,0:1000,18000:19000].plot()
ds_sst.analysed_sst[5000,9000,18000]
###Output
_____no_output_____
###Markdown
tricky bit here, .interp wasn't working
- ds_sat is being read somewhere as "datetime64[us]" rather than "datetime64[ns]"
- this is breaking the interpolation routine, which expects "datetime64[ns]"
- solution is to set ds_sat time to "datetime64[ns]"
###Code
ds_sat.time
data = np.asarray(ds_sat.time.data, "datetime64[ns]")
ds_sat['time']=data
tem2 = ds_sat.interp(time=ds_usv.time,lat=ds_usv.lat,lon=ds_usv.lon,method='linear',assume_sorted=False)
#tem2 = ds_sat.sel(time=ds_sat.time[1],method='nearest')#,lat=ds_usv.lat[0],lon=ds_usv.lon[0],method='linear',assume_sorted=False)
#tem2 = ds_sat.sel(time=ds_usv.time[0],tem2 = ds_sat.sel(time=ds_sat.time[1],method='nearest')#,lat=ds_usv.lat[0],lon=ds_usv.lon[0],method='linear',assume_sorted=False)
#tem2 = ds_sat.sel(time=data[0],method='nearest')#,lat=ds_usv.lat[0],lon=ds_usv.lon[0],method='linear',assume_sorted=False)
#lat=ds_usv.lat[0],lon=ds_usv.lon[0],method='nearest')#,method='linear',assume_sorted=False)
tem2.analysed_sst.plot()
tem2 = ds_sat.sel(time=sub.time,lat=sub.lat,lon=sub.lon,method='nearest')
tem2.analysed_sst.plot()
###Output
_____no_output_____ |
aas229_workshop/Lecture_Notebooks/gwcs/aas229_GWCS.ipynb | ###Markdown
Generalized World Coordinate System (GWCS)

Why not FITS WCS?
- Not flexible
- No distortion handling
  - distortion paper never approved
  - only one correction per axis allowed
- There's no way to represent discontiguous WCSs.
- It has all the disadvantages of the FITS format, discussed in detail in Thomas, B., Jenness, T. et al. 2015, "The Future of Astronomical Data Formats I. Learning from FITS". Astronomy & Computing, Volume 12, p. 133-145, arXiv e-print: 1502.00996. https://github.com/timj/aandc-fits

GWCS Goals
- Flexible
  - Combine transforms arbitrarily in an efficient way so that resampling is done as little as possible.
  - Execute subtransforms and their inverse.
  - Insert transforms in the WCS pipeline or change existing transforms.
  - Provide modular tools for managing WCS.
- Extensible
  - It should be easy to write new transforms.

GWCS Data Model
- A WCS pipeline is a list of steps executed in order
  - Each step defines a starting coordinate frame and the transform to the next frame in the pipeline.
  - The last step has no transform, only a frame which is the output frame of the total transform.
- As a minimum a WCS object has an *input_frame* (defaults to "detector"), an *output_frame* and the transform between them.
- The WCS has a domain attribute which defines the range of acceptable inputs. The domain is a list of dictionaries, one for each axis: *{"lower": 5, "upper": 2048, "includes_lower": True, "includes_upper": False}*
- The WCS object is written to file using the Advanced Scientific Data Format (ASDF).

ASDF
- It has a hierarchical metadata structure, made up of basic dynamic data types such as strings, numbers, lists and mappings.
- It has human-readable metadata that can be edited directly in place in the file.
- ASDF files have the version of the specification they were written to. This makes it possible to evolve the standard while retaining backwards compatibility.
- It's built on top of industry standards, such as YAML and JSON Schema.
- The structure of the data can be automatically validated using schema.

ASDF and GWCS
- The asdf package contains the schemas which define and validate GWCS. http://asdf-standard.readthedocs.io/en/latest/
- The asdf package also contains the code which serializes GWCS to disk. http://asdf.readthedocs.io/en/latest/

Example of serializing an astropy.modeling model to a file.
###Code
from asdf import AsdfFile
import numpy as np
from astropy.modeling import models
# Create a 2D rotation model
rotation = models.Rotation2D(angle=60)
print(rotation)
# Open an ASDF file object
f = AsdfFile()
# Every ASDF file object has an attribute, called "tree"
# It is a dict-like object which stores the information in YAML format
print(f.tree)
f.tree['model'] = rotation
f.write_to('rotation.asdf')
#!less rotation.asdf
###Output
_____no_output_____
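For completeness, the file written above can be read back and the model evaluated again. This is only a sketch, assuming the same astropy ASDF support used when writing the file is installed:

```python
import asdf

# Round-trip: read the file back and evaluate the restored Rotation2D model.
with asdf.open('rotation.asdf') as af:
    restored = af.tree['model']
    print(restored)
    print(restored(1.0, 0.0))  # rotate the point (1, 0) by 60 degrees
```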
###Markdown
GWCS and Astropy
- Transforms in GWCS are instances of models in [astropy.modeling](http://docs.astropy.org/en/stable/modeling/index.html)
- The celestial reference frames in gwcs.coordinate_frames are implemented in [astropy.coordinates](http://docs.astropy.org/en/stable/coordinates/index.html)
- Units and unit conversion are implemented in [astropy.units](http://docs.astropy.org/en/stable/units/index.html)

JWST and GWCS

GWCS is the software used for managing the WCS of JWST observations.
- The WCS is included in the JWST science files. It is saved in the FITS file as a separate extension with *EXTNAME=ASDF*.
- The WCS includes all transforms from detector to a standard world coordinate system.
- The WCS pipelines for different instrument modes include different intermediate coordinate frames.
- WCS reference files are in ASDF format.

Imaging - A Programmatic Example
###Code
import numpy as np
from astropy.modeling import models
from astropy import units as u
from astropy import coordinates as coord
from asdf import AsdfFile
from gwcs import wcs
from gwcs import coordinate_frames as cf
from gwcs import wcstools
from gwcs import utils as gwutils
###Output
_____no_output_____
###Markdown
First let's create two polynomial models to represent distortion.
###Code
polyx = models.Polynomial2D(4)
polyx.parameters = np.random.randn(15)
polyy = models.Polynomial2D(4)
polyy.parameters = np.random.randn(15)
distortion = (models.Mapping((0, 1, 0, 1)) | polyx & polyy).rename("distortion")
f = AsdfFile()
f.tree['model'] = distortion
f.write_to('poly.asdf', all_array_storage='inline')
#!less poly.asdf
###Output
_____no_output_____
###Markdown
Next, create a compound transform composed of offsets in x and y, followed by a rotation and scaling in x and y, followed by a tangent deprojection and a 3D sky rotation.
###Code
undist2sky = (models.Shift(-10.5) & models.Shift(-13.2) | models.Rotation2D(0.0023) | \
models.Scale(.01) & models.Scale(.04) | models.Pix2Sky_TAN() | \
models.RotateNative2Celestial(5.6, -72.05, 180)).rename("undistorted2sky")
###Output
_____no_output_____
###Markdown
Create three coordinate frames.
###Code
detector_frame = cf.Frame2D(name="detector", axes_names=("x", "y"), unit=(u.pix, u.pix))
sky_frame = cf.CelestialFrame(name="icrs", reference_frame=coord.ICRS())
focal_frame = cf.Frame2D(name="focal_frame", unit=(u.arcsec, u.arcsec))
pipeline = [(detector_frame, distortion),
(focal_frame, undist2sky),
(sky_frame, None)
]
wcsobj = wcs.WCS(pipeline)
print(wcsobj)
# Calling the WCS object like a function evaluates the transforms.
ra, dec = wcsobj(500, 600)
print(ra, dec)
# Display the frames available in the WCS pipeline
print(wcsobj.available_frames)
wcsobj.input_frame
wcsobj.output_frame
# Because the output_frame is a CoordinateFrame object we can get as output
# coordinates.SkyCoord objects.
skycoord = wcsobj(1, 2, output="numericals_plus")
print(skycoord)
print(skycoord.transform_to('galactic'))
print(wcsobj.output_frame.coordinates(ra, dec))
###Output
_____no_output_____
###Markdown
Methods for managing the transforms
###Code
# It is possible to retrieve the transform between any
# two coordinate frames in the WCS pipeline
print(wcsobj.available_frames)
det2focal = wcsobj.get_transform("detector", "focal_frame")
fx, fy = det2focal(1, 2)
print(fx, fy)
# And we can see what the units are in focal_frame
print(wcsobj.focal_frame.coordinates(fx, fy))
# It is also possible to replace a transform
# Create a transform which shifts in X and Y
new_det2focal = models.Shift(3) & models.Shift(12)
# Replace the transform between "detector" and "focal_frame"
wcsobj.set_transform("detector", "focal_frame", new_det2focal)
new_ra, new_dec = wcsobj(500, 600)
print(ra, dec)
print(new_ra, new_dec)
# We can insert a transform in the pipeline just before or after a frame
rotation = models.EulerAngleRotation(.1, 12, 180, axes_order="xyz")
wcsobj.insert_transform("focal_frame", rotation)
wcsobj.get_transform("detector", "focal_frame")(1, 2)
###Output
_____no_output_____
###Markdown
Discontiguous transforms

There are cases when different WCS transforms apply to different regions of the same image. JWST observations with the IFUs, the NIRSpec MOS and fixed slits, the NIRISS SOSS and the WFSS are all examples of discontiguous WCSs. GWCS manages this by packaging the transforms in a single WCS object. Individual WCSs are accessed using additional inputs. These non-coordinate inputs depend on the specific mode: for the NIRSpec fixed slits the input is the slit name, for the IFU the slice number, for the MOS the slitlet_id, and for NIRISS SOSS the spectral order.

NIRSpec Fixed Slit Example

This example was shown in the workshop, but the code for the jwst module may not be released yet.
###Code
from jwst import datamodels
nrs_fs = "nrs1_assign_wcs.fits.gz"
nrs = datamodels.ImageModel(nrs_fs)
from jwst.assign_wcs import nirspec
slits = nirspec.get_open_slits(nrs)
print(slits[0])
slits = nirspec.get_open_slits(nrs)
for s in slits:
print(s)
s0 = nirspec.nrs_wcs_set_input(nrs, "S200A1")
print(s0.domain)
s0.available_frames
s0.output_frame
x, y = wcstools.grid_from_domain(s0.domain)
ra, dec, lam = s0(x, y)
res = s0(1000, 200, output="numericals_plus")
print(res)
%matplotlib inline
from matplotlib import pyplot as plt
plt.imshow(lam, aspect='auto')
plt.title("lambda, microns")
plt.colorbar()
###Output
_____no_output_____ |
uhecr_model/figures/verification/figures_TA_SBG_sims_cumul.ipynb | ###Markdown
Figures for comparison of arrival direction and joint models

Here we use the output from the `arrival_vs_joint` notebook to plot the figures shown in the paper.

*This code is used to produce Figures 6, 7 and 8 (left panel) in Capel & Mortlock (2019).*
###Code
%matplotlib inline
import numpy as np
import h5py
import matplotlib as mpl
from matplotlib import pyplot as plt
import seaborn as sns
from pandas import DataFrame
from fancy import Data, Results
from fancy.plotting import Corner
from fancy.plotting.allskymap_cartopy import AllSkyMapCartopy as AllSkyMap
from fancy.plotting.colours import *
# to match paper style
plt.style.use('minimalist')
# Define output files
source_type = "SBG_23"
detector_type = "TA2015"
sim_output_file = "../../output/{0}_sim_{1}_{2}_{3}_{4}_notightB.h5".format(
"joint", source_type, detector_type, 19990308, "p")
###Output
_____no_output_____
###Markdown
Figure 6

The simulated data set and the Auger exposure.
###Code
'''set detector and detector properties'''
if detector_type == "TA2015":
from fancy.detector.TA2015 import detector_params, Eth
elif detector_type == "auger2014":
from fancy.detector.auger2014 import detector_params, Eth
elif detector_type == "auger2010":
from fancy.detector.auger2010 import detector_params, Eth
else:
raise Exception("Undefined detector type!")
from astropy.coordinates import SkyCoord
from astropy import units as u
from fancy.detector.exposure import m_dec
from fancy.interfaces.stan import Direction
###Output
_____no_output_____
###Markdown
Figure 7

Comparison of the joint and arrival direction fits.
###Code
seeds = [19990308, 16852056, 65492186, 9999999, 9953497]
F_gmf = []
F_joint = []
F_arrival = []
for seed in seeds:
# joint_gmf_output_file = "../../output/{0}_fit_{5}_{1}_{2}_{3}_{4}_{6}.h5".format(
# "joint_gmf", "SBG_23", "TA2015", seed, "p", "sim", "joint_gmf")
joint_output_file = "../../output/{0}_fit_{5}_{1}_{2}_{3}_{4}_{6}.h5".format(
"joint", "SBG_23", "TA2015", seed, "p", "sim", "joint")
arrival_output_file = "../../output/{0}_fit_{5}_{1}_{2}_{3}_{4}_{6}.h5".format(
"arrival_direction", "SBG_23", "TA2015", seed, "p", "sim", "joint")
# f_g = Results(joint_gmf_output_file).get_chain(['f'])['f']
f_j = Results(joint_output_file).get_chain(['f'])['f']
f_a = Results(arrival_output_file).get_chain(['f'])['f']
# F_gmf.append(f_g)
F_joint.append(f_j)
F_arrival.append(f_a)
# f_gmf_avg = np.mean(np.array(F_gmf), axis=0)
f_joint_avg = np.mean(np.array(F_joint), axis=0)
f_arrival_avg = np.mean(np.array(F_arrival), axis=0)
f_true = Results(sim_output_file).get_truths(['f'])['f']
fig, ax = plt.subplots()
# fig.set_size_inches((6, 4))
# sns.distplot(f_gmf_avg, hist = False,
# kde_kws = {'shade' : True, 'lw' : 2, 'zorder' : 0},
# color = grey, label = 'joint + gmf')
sns.distplot(f_joint_avg, hist = False,
kde_kws = {'shade' : True, 'lw' : 2, 'zorder' : 1},
color = purple, label = 'joint')
sns.distplot(f_arrival_avg, hist = False,
kde_kws = {'shade' : True, 'lw' : 2, 'zorder' : 0},
color = lightblue, label = 'arrival')
ax.axvline(f_true, 0, 10, color = 'k', zorder = 3, lw = 2., alpha = 0.7)
ax.set_xlim(0, 1)
# ax.set_ylim(0, 10)
ax.set_xlabel('$f$')
ax.set_ylabel('$P(f | \hat{E}, \hat{\omega})$')
ax.legend(loc="best")
fig.savefig("dist_sims_cumul.png", bbox_inches="tight")
###Output
_____no_output_____
###Markdown
Figure 8 (left panel)
###Code
keys = ['F0', 'L', 'alpha', 'B', 'f']
chain_list = []
for seed in seeds:
joint_gmf_output_file = "../../output/{0}_fit_{5}_{1}_{2}_{3}_{4}_{6}.h5".format(
"joint_gmf", "SBG_23", "TA2015", seed, "p", "sim", "joint_gmf")
# joint_output_file = "../../output/{0}_fit_{5}_{1}_{2}_{3}_{4}.h5".format(
# "joint", "SBG_23", "TA2015", seed, "p", "sim")
chain = Results(joint_gmf_output_file).get_chain(keys)
# Convert form Stan units to plot units
chain['F0'] = chain['F0'] / 1.0e3 # km^-2 yr^-1
chain['L'] = chain['L'] * 10 # 10^-38 yr^-1
chain_list.append(chain)
chain_avgs = {key:0 for key in keys}
for key in keys:
chain_sum = 0
for i in range(len(seeds)):
chain_sum += chain_list[i][key]
chain_sum /= len(seeds)
chain_avgs[key] = chain_sum
chain_avgs
# Get chains from joint fit and truths from simulation
results_sim = Results(sim_output_file)
# results_fit = Results(joint_gmf_output_file)
# keys = ['F0', 'L', 'alpha', 'B', 'f']
# chain = results_fit.get_chain(keys)
# # Convert form Stan units to plot units
# chain['F0'] = chain['F0'] / 1.0e3 # km^-2 yr^-1
# chain['L'] = chain['L'] * 10 # 10^-38 yr^-1
truth_keys = ['F0', 'L', 'alpha', 'B', 'f']
truth = results_sim.get_truths(truth_keys)
info_keys = ['Eth', 'Eth_sim']
info = results_sim.get_truths(info_keys)
# Correct for different Eth in sim and fit
# Also scale to plot units
flux_scale = (info['Eth'] / info['Eth_sim'])**(1 - truth['alpha'])
truth['F0'] = truth['F0'] * flux_scale # km^-2 yr^-1
truth['L'] = truth['L'][0] * flux_scale / 1.0e39 * 10 # 10^-38 yr^-1
labels = {}
labels['L'] = r'$L$ / $10^{38}$ $\mathrm{yr}^{-1}$'
labels['F0'] = r'$F_0$ / $\mathrm{km}^{-2} \ \mathrm{yr}^{-1}$'
labels['B'] = r'$B$ / $\mathrm{nG}$'
labels['alpha'] = r'$\alpha$'
labels['f'] = r'$f$'
params = np.column_stack([chain_avgs[key] for key in keys])
truths = [truth[key] for key in keys]
# Make nicely labelled dict
chain_for_df = {}
for key in keys:
chain_for_df[labels[key]] = chain_avgs[key]
# Make ordered dataframe
df = DataFrame(data = chain_for_df)
df = df[[labels['F0'], labels['L'], labels['alpha'], labels['B'], labels['f']]]
corner = Corner(df, truths, color=purple, contour_color=purple_contour)
corner.save("corner_sims_cumul.png")
###Output
_____no_output_____ |
lesson_notebooks/l5/beautifulsoup/children_tags_solution.ipynb | ###Markdown
TODO: Get The Children from the `<title>` Tag

In the cell below, print the contents and the number of children of the `<title>` tag in the `sample2.html` file. Start by opening the `sample2.html` file and passing the open filehandle to the BeautifulSoup constructor using the `lxml` parser. Save the BeautifulSoup object returned by the constructor in a variable called `page_content`. Then access the `<title>` tag and save the tag object in a variable called `page_title`. Then use the `.contents` attribute to print the contents and the number of children of the `<title>` tag.
###Code
# Import BeautifulSoup
from bs4 import BeautifulSoup
# Open the HTML file and create a BeautifulSoup Object
with open('./sample2.html') as f:
page_content = BeautifulSoup(f, 'lxml')
# Access the title tag
page_title = page_content.head.title
# Print the children of the title tag
print(page_title.contents)
# Print the number of children of the title tag
print('\nThe <title> contains {} children'.format(len(page_title.contents)))
###Output
['AI For Trading']
The <title> contains 1 children
###Markdown
TODO: Loop Through The Children of The `<title>` Tag

In the cell below, print the children of the `<title>` tag in the `sample2.html` file. Start by opening the `sample2.html` file and passing the open filehandle to the BeautifulSoup constructor using the `lxml` parser. Save the BeautifulSoup object returned by the constructor in a variable called `page_content`. Then create a loop that prints the children of the `<title>` tag using the `.children` attribute.
###Code
# Import BeautifulSoup
from bs4 import BeautifulSoup
# Open the HTML file and create a BeautifulSoup Object
with open('./sample2.html') as f:
page_content = BeautifulSoup(f, 'lxml')
# Loop through and print the children of the title tag
for child in page_content.head.title.children:
print(child)
###Output
AI For Trading
###Markdown
TODO: Search For The `<head>` Tag

In the cell below, search for the `<head>` tag only in the direct children of the `<html>` tag in the `sample2.html` file. Start by opening the `sample2.html` file and passing the open filehandle to the BeautifulSoup constructor using the `lxml` parser. Save the BeautifulSoup object returned by the constructor in a variable called `page_content`. Then search the html tag's direct children for the `<head>` tag using the `recursive=False` argument. Print the result using the `.prettify()` attribute.
###Code
# Import BeautifulSoup
from bs4 import BeautifulSoup
# Open the HTML file and create a BeautifulSoup Object
with open('./sample2.html') as f:
page_content = BeautifulSoup(f, 'lxml')
# Search the html tag's direct children for the head tag
for tag in page_content.html.find_all('head', recursive = False):
print(tag.prettify())
###Output
<head>
<title>
AI For Trading
</title>
<meta charset="utf-8"/>
<link href="./teststyle.css" rel="stylesheet"/>
<style>
.h2style {background-color: tomato;color: white;padding: 10px;}
</style>
</head>
|
summer/an introduction to ML.ipynb | ###Markdown
ASDRP Data Science Introduction
###Code
#Import Python Libraries
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Pandas is a Python package that deals mostly with:
- Series (1d homogeneous array)
- DataFrame (2d labeled heterogeneous array)
- Panel (general 3d array)
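(Note that `Panel` has since been removed from recent versions of pandas; only Series and DataFrame are used below.) As a quick illustration, a small DataFrame can be built directly from a dictionary of columns — the column names and values here are made up:

```python
import pandas as pd

toy = pd.DataFrame({'name': ['a', 'b', 'c'], 'score': [1.0, 2.5, 3.0]})
print(toy)
```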
###Code
# Example of creating Pandas series :
myseries = pd.Series( np.random.randn(5) )
print(myseries)
###Output
0 -0.953803
1 0.668505
2 0.454627
3 -0.000543
4 0.267883
dtype: float64
###Markdown
We did not pass any index, so by default, it assigned the indexes ranging from 0 to len(data)-1
###Code
# View index values
print(myseries.index)
# Creating Pandas series with index:
iseries = pd.Series( np.random.randn(5), index=['a', 'b', 'c', 'd', 'e'] )
print(iseries)
# View index values
print(iseries.index)
###Output
Index(['a', 'b', 'c', 'd', 'e'], dtype='object')
###Markdown
Reading Salaries data to DataFrame
###Code
df = pd.read_csv("https://raw.githubusercontent.com/philmui/algorithmic-bias-2019/master/data/salaries/Salaries.csv");
df.dtypes
df.count()
df.head()
df.describe()
###Output
_____no_output_____
###Markdown
GroupBy
###Code
df_rank = df.groupby(["rank"])
df_rank.mean()
#Calculate mean salary for each professor rank:
df.groupby('rank')[['salary']].mean()
###Output
_____no_output_____
###Markdown
Filtering
###Code
#Select only those rows where the salary is greater than 120,000:
df_sub = df[ df['salary'] > 120000 ]
df_sub.head()
#Select only those rows that contain female professors:
df_f = df[ df['sex'] == 'Female' ]
df_f.head()
###Output
_____no_output_____
###Markdown
Slicing
###Code
#Select column salary:
df['salary']
#Select the rank and salary columns:
df[['rank','salary']]
###Output
_____no_output_____
###Markdown
Row Selection
###Code
#Select rows by their position:
df[10:20]
###Output
_____no_output_____
###Markdown
Sorting
###Code
# Create a new data frame from the original sorted by the column Salary
df_sorted = df.sort_values( by ='salary', ascending=1)
df_sorted.head()
df_sorted = df.sort_values( by =['service', 'salary'], ascending = [True, False])
df_sorted.head(10)
###Output
_____no_output_____
###Markdown
Introduction to Exercises
###Code
# Create a Series from dictionary
data = {'pi': 3.1415, 'e': 2.71828} # dictionary
print(data)
s3 = pd.Series ( data )
print(s3)
# reordering the elements
s4 = pd.Series ( data, index = ['e', 'pi', 'tau'])
print(s4)
###Output
e 2.71828
pi 3.14150
tau NaN
dtype: float64
###Markdown
NaN (not a number) is used to specify a missing value in Pandas.
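A few common ways to detect and handle that missing value (using the `s4` series created above):

```python
print(s4.isna())      # boolean mask showing which entries are missing
print(s4.dropna())    # drop the missing entries
print(s4.fillna(0.0)) # or fill them with a default value
```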
###Code
# Creating a Pandas Series object from a single number:
sone = pd.Series( 1, index = range(10), name='Ones')
print(sone)
# Many ways to "slice" Pandas series (series have zero-based index by default):
print(myseries)
myseries[3] # returns 4th element
myseries[:2] # First 2 elements
print( myseries[ [2,1,0]]) # Elements out of order
# Series can be used as ndarray:
print("Median:" , myseries.median())
myseries[myseries > 0]
# vector operations:
np.exp(myseries)
###Output
_____no_output_____ |
.ipynb_checkpoints/003set_jaccard-checkpoint.ipynb | ###Markdown
Use sets to remove duplicate values, and use the Jaccard coefficient to compute the similarity between two sentences.
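For reference, the Jaccard similarity of two word sets $A$ and $B$ is

$$J(A, B) = \frac{|A \cap B|}{|A \cup B|}$$

which is exactly what the cell below computes: 2 shared words out of 5 unique words gives 0.4.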
###Code
# Initialize the two sentences
st_1 = "you are beautiful"
st_2 = "you are a beauty"
# Create a set of words for each sentence
st_1_words = set(st_1.split())
st_2_words = set(st_2.split())
# Size of each set
c_st_1_words = len(st_1_words)
c_st_2_words = len(st_2_words)
# Words common to both sets
com_words = st_1_words.intersection(st_2_words)
c_com_words = len(com_words)
# Union of the unique words in both sets
uniq_words = st_1_words.union(st_2_words)
c_uniq_words = len(uniq_words)
# Compute the Jaccard similarity
similarity = c_com_words/(1.0*c_uniq_words)
# print the result
print 'set1 is ',st_1_words
print 'set2 is ',st_2_words
print 'commen words count is ',c_com_words
print 'unique words count is ',c_uniq_words
print 'commen words is ',com_words
print 'uniq words is ',uniq_words
print 'Similarity is :',similarity
###Output
set1 is set(['beautiful', 'you', 'are'])
set2 is set(['a', 'you', 'are', 'beauty'])
commen words count is 2
unique words count is 5
commen words is set(['you', 'are'])
uniq words is set(['beautiful', 'a', 'are', 'beauty', 'you'])
Similarity is : 0.4
|
Fruit_Vegetables_clas.ipynb | ###Markdown
AI Creator Camp assignment: fruit and vegetable classification

Unzip the dataset
###Code
!unzip -q -d data/ data/data104366/Fruit-Vegetables-Dataset.zip
import matplotlib.pyplot as plt
import PIL.Image as Image
path = 'data/Fruit-Vegetables-Dataset/Banana/245_100.jpg'
img = Image.open(path)
plt.imshow(img) # draw the image from the array
plt.show() # display the image
###Output
_____no_output_____
###Markdown
Data processing
###Code
import os
import re
images_path = 'data/Fruit-Vegetables-Dataset' # dataset directory
txt_save_path = 'label_name.txt' # txt file to generate
fw = open(txt_save_path, "w")
# Reader helper that lists everything in a folder; the argument is the directory name
def read_directory(directory_name):
i = 0
for filename in os.listdir(directory_name):
        # print(filename) # just for testing
        fw.write(filename + '\t' + str(i) +'\n') # write the class name and its index
i = i + 1
read_directory(images_path) # pass in the folder path to read
# Build the mapping between sample file paths and their labels
import os
import random
data_list = [] # list holding each sample's file path and label
# Build the label dictionary
label_list = []
with open('label_name.txt') as f:
for line in f:
a, b = line.strip('\n').split('\t')
label_list.append([a, b])
label_dic = dict(label_list)
label_list_2 = []
with open('label_name.txt') as f:
for line in f:
a, b = line.strip('\n').split('\t')
label_list.append([b, a])
# Get all subdirectory (class) names under the dataset directory and save them to a list
class_list = []
for i in os.listdir('data/Fruit-Vegetables-Dataset'):
class_dir = os.path.join('data/Fruit-Vegetables-Dataset', i)
if os.path.isdir(class_dir):
class_list.append(i)
# print(class_list)
# Alternative approach
# class_list = os.listdir('img_trainA')
# class_list.remove('label_name.txt')
# print(class_list)
for i in range(0,20):
for each in class_list:
for f in os.listdir('data/Fruit-Vegetables-Dataset/'+each):
data_list.append(['data/Fruit-Vegetables-Dataset/'+ each +'/'+ f,label_dic[each]])
# Shuffle the sample order
random.shuffle(data_list)
# Print the first ten [file path, label] pairs
print(data_list[0:10])
# Print the number of samples
print('样本数量是:{}'.format(len(data_list)))
# Build the data reader and preprocessing
# Import the required modules
import paddle
from paddle.vision.transforms import Compose, ColorJitter, Resize, Transpose, Normalize
from paddle.vision import transforms
import cv2
import numpy as np
from PIL import Image
from paddle.io import Dataset
# Data preprocessing
def preprocess(img):
transform = Compose([
# Resize(size=(100, 100)),
        # transforms.RandomHorizontalFlip(), # random horizontal flip
        # transforms.ColorJitter(0.05, 0.05, 0.05, 0.05),
        # transforms.RandomRotation(8), # random rotation
        # transforms.RandomResizedCrop(size=(300,400), scale=(0.8, 1.0),), # random resized crop
# mean=[127.5, 127.5, 127.5], std=[127.5, 127.5, 127.5], data_format='HWC'
Normalize(mean=[127.5, 127.5, 127.5], std=[127.5, 127.5, 127.5], data_format='HWC'),
Transpose()
])
img = transform(img).astype('float32')
return img
# Custom dataset reader
class Reader(Dataset):
def __init__(self, data, is_val=False):
super().__init__()
self.samples = data[-int(len(data)*0.2):] if is_val else data[:-int(len(data)*0.2)]
def __getitem__(self, idx):
        # Load and preprocess the image
        img_path = self.samples[idx][0] # path of this sample
img = Image.open(img_path)
if img.mode != 'RGB':
img = img.convert('RGB')
        img = preprocess(img) # apply the preprocessing defined above
        # Process the label
        label = self.samples[idx][1] # label of this sample
label = np.array([label], dtype='int64')
return img, label
def __len__(self):
        # Return the number of images per epoch
return len(self.samples)
# Create the training dataset instance
train_dataset = Reader(data_list, is_val=False)
# Create the validation dataset instance
eval_dataset = Reader(data_list, is_val=True)
# Print one training sample
print(train_dataset[88][0].shape)
print(train_dataset[88][1])
###Output
(3, 100, 100)
[79]
###Markdown
Model configuration
###Code
# Define the model
class MyNet(paddle.nn.Layer):
def __init__(self):
super(MyNet, self).__init__()
self.layer = paddle.vision.models.vgg16(pretrained=True)
self.fc1 = paddle.nn.Linear(1000, 500)
self.fc2 = paddle.nn.Linear(500, 120)
    # Forward pass of the network
def forward(self, x):
x = self.layer(x)
x = self.fc1(x)
x = self.fc2(x)
return x
# Model configuration
# Define the input and label specs
input_define = paddle.static.InputSpec(shape=[-1,3,100,100], dtype='float32', name='img')
label_define = paddle.static.InputSpec(shape=[-1,1], dtype='int64', name='label')
# Instantiate the network and define the optimizer, loss, and metric
model = MyNet()
model = paddle.Model(model, inputs=input_define, labels=label_define) # wrap the model with the high-level Model API
optimizer = paddle.optimizer.Adam(learning_rate=0.0003, parameters=model.parameters())
model.prepare(optimizer=optimizer,
loss=paddle.nn.CrossEntropyLoss(),
metrics=paddle.metric.Accuracy())
###Output
100%|██████████| 817517/817517 [00:11<00:00, 72781.77it/s]
###Markdown
Model training
###Code
model.fit(train_data=train_dataset,
eval_data=eval_dataset,
batch_size=8,
epochs=10,
save_dir='output',
save_freq=5,
log_freq=10,
verbose=1
)
###Output
The loss value printed in the log is the current step, and the metric is the average value of previous steps.
Epoch 1/10
###Markdown
Model prediction
###Code
# Load the saved model weights
model.load('output/final')
result = model.evaluate(eval_dataset,
batch_size=128,
verbose=1)
print(result)
result = model.predict(eval_dataset)
print(len(result[0]), result[0][0].shape)
# path = ' '
# img = Image.open(path)
# plt.imshow(img) # draw the image from the array
# plt.show() # display the image
# for i in range(len(result[0])):
import matplotlib.pyplot as plt
for i in range(0,4):
img1 = eval_dataset[i][0]
img1 = img1.transpose(1, 2, 0)
    plt.imshow(img1) # draw the image from the array
    plt.show() # display the image
# print(img1.shape, eval_dataset[i][1], result[0][i])
print(img1.shape, label_list_2[0][int(eval_dataset[i][1])])
###Output
_____no_output_____ |
docs/sbm_unmatched_test.ipynb | ###Markdown
A group-based test

Next, we test bilateral symmetry by making an assumption that the left and the right hemispheres both come from a stochastic block model, which models the probability of any potential edge as a function of the groups that the source and target nodes are part of.

For now, we use some broad cell type categorizations for each neuron to determine its group. Alternatively, there are many methods for *estimating* these assignments to groups for each neuron, which we do not explore here.
###Code
from pkg.utils import set_warnings
set_warnings()
import datetime
import time
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from giskard.plot import rotate_labels
from matplotlib.transforms import Bbox
from myst_nb import glue as default_glue
from pkg.data import load_network_palette, load_node_palette, load_unmatched
from pkg.io import savefig
from pkg.perturb import remove_edges
from pkg.plot import set_theme
from pkg.stats import stochastic_block_test
from seaborn.utils import relative_luminance
DISPLAY_FIGS = False
FILENAME = "sbm_unmatched_test"
def gluefig(name, fig, **kwargs):
savefig(name, foldername=FILENAME, **kwargs)
glue(name, fig, prefix="fig")
if not DISPLAY_FIGS:
plt.close()
def glue(name, var, prefix=None):
savename = f"{FILENAME}-{name}"
if prefix is not None:
savename = prefix + ":" + savename
default_glue(savename, var, display=False)
t0 = time.time()
set_theme()
rng = np.random.default_rng(8888)
network_palette, NETWORK_KEY = load_network_palette()
node_palette, NODE_KEY = load_node_palette()
neutral_color = sns.color_palette("Set2")[2]
GROUP_KEY = "simple_group"
left_adj, left_nodes = load_unmatched(side="left")
right_adj, right_nodes = load_unmatched(side="right")
left_labels = left_nodes[GROUP_KEY].values
right_labels = right_nodes[GROUP_KEY].values
###Output
_____no_output_____
###Markdown
The stochastic block model (SBM)

A [**stochastic block model (SBM)**](https://en.wikipedia.org/wiki/Stochastic_block_model) is a popular statistical model of networks. Put simply, this model treats the probability of an edge occurring between node $i$ and node $j$ as purely a function of the *communities* or *groups* that node $i$ and $j$ belong to. Therefore, this model is parameterized by:

1. An assignment of each node in the network to a group. Note that this assignment can be considered to be deterministic or random, depending on the specific framing of the model one wants to use.
2. A set of group-to-group connection probabilities.

```{admonition} Math
Let $n$ be the number of nodes, and $K$ be the number of groups in an SBM. For a network $A$ sampled from an SBM:

$$ A \sim SBM(B, \tau)$$

We say that for all $(i,j), i \neq j$, with $i$ and $j$ both running from $1 ... n$, the probability of edge $(i,j)$ occurring is:

$$ P[A_{ij} = 1] = P_{ij} = B_{\tau_i, \tau_j} $$

where $B \in [0,1]^{K \times K}$ is a matrix of group-to-group connection probabilities and $\tau \in \{1...K\}^n$ is a vector of node-to-group assignments. Note that here we are assuming $\tau$ is a fixed vector of assignments, though other formulations of the SBM allow these assignments to themselves come from a categorical distribution.
```

Testing under the SBM model

Assuming this model, there are a few ways that one could test for differences between two networks. In our case, we are interested in comparing the group-to-group connection probability matrices, $B$, for the left and right hemispheres.

````{admonition} Math
We are interested in testing:

```{math}
:label: sbm_unmatched_null

H_0: B^{(L)} = B^{(R)}, \quad H_A: B^{(L)} \neq B^{(R)}
```
````

Rather than having to compare one proportion as in [](er_unmatched_test.ipynb), we are now interested in comparing all $K^2$ probabilities between the SBM models for the left and right hemispheres.

```{admonition} Math
The hypothesis test above can be decomposed into $K^2$ independent hypotheses. $B^{(L)}$ and $B^{(R)}$ are both $K \times K$ matrices, where each element $b_{kl}$ represents the probability of a connection from a neuron in group $k$ to one in group $l$. We also know that group $k$ for the left network corresponds with group $k$ for the right. In other words, the *groups* are matched. Thus, we are interested in testing, for $k, l$ both running from $1...K$:

$$ H_0: B_{kl}^{(L)} = B_{kl}^{(R)},\quad H_A: B_{kl}^{(L)} \neq B_{kl}^{(R)}$$
```

Thus, we will use [Fisher's exact test](https://en.wikipedia.org/wiki/Fisher%27s_exact_test) to compare each set of probabilities. To combine these multiple hypotheses into one, we will use [Fisher's method](https://en.wikipedia.org/wiki/Fisher%27s_method) for combining p-values to give us a p-value for the overall test. We can also look at the p-values for each of the individual tests after correction for multiple comparisons by the [Bonferroni-Holm method](https://en.wikipedia.org/wiki/Holm%E2%80%93Bonferroni_method).

For the current investigation, we focus on the case where $\tau$ is known ahead of time, sometimes called the **A priori SBM**. We use some broad cell type labels which were described in the paper which published the data to define the group assignments $\tau$. Here, we do not explore estimating these assignments, though many techniques exist for doing so. We note that the results presented here could change depending on the group assignments which are used. We also do not consider tests which would compare the assignment vectors, $\tau$.
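To make the procedure concrete, here is a minimal sketch of the per-block testing logic described above. It is illustrative only — the analysis below uses `pkg.stats.stochastic_block_test` — and it assumes binarized numpy adjacency matrices; it also ignores the small correction for excluding the diagonal within a group.

```python
import numpy as np
from scipy.stats import fisher_exact, combine_pvalues

def sbm_test_sketch(A1, A2, labels1, labels2):
    # Compare block connection probabilities of two networks (sketch only).
    groups = np.unique(np.concatenate((labels1, labels2)))
    pvalues = []
    for k in groups:
        for l in groups:
            # observed edges and possible edges for block (k, l) in each network
            e1 = A1[labels1 == k][:, labels1 == l].sum()
            n1 = (labels1 == k).sum() * (labels1 == l).sum()
            e2 = A2[labels2 == k][:, labels2 == l].sum()
            n2 = (labels2 == k).sum() * (labels2 == l).sum()
            if e1 == 0 and e2 == 0:
                continue  # skip blocks with no edges on either side
            _, p = fisher_exact([[e1, n1 - e1], [e2, n2 - e2]])
            pvalues.append(p)
    # combine the per-block p-values into one overall p-value
    return combine_pvalues(pvalues, method="fisher")[1]
```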
{numref}`Figure {number} ` shows the number of neurons in each group in the group assignments $\tau$ for the left and the right hemispheres. The number of neurons in each group is quite similar between the two hemispheres.
###Code
stat, pvalue, misc = stochastic_block_test(
left_adj, right_adj, labels1=left_labels, labels2=right_labels, method="fisher"
)
glue("uncorrected_pvalue", pvalue)
n_tests = misc["n_tests"]
glue("n_tests", n_tests)
set_theme(font_scale=1)
fig, ax = plt.subplots(1, 1, figsize=(10, 5))
group_counts_left = misc["group_counts1"]
group_counts_right = misc["group_counts2"]
for i in range(len(group_counts_left)):
ax.bar(i - 0.17, group_counts_left[i], width=0.3, color=network_palette["Left"])
ax.bar(i + 0.17, group_counts_right[i], width=0.3, color=network_palette["Right"])
rotate_labels(ax)
ax.set(
ylabel="Count",
xlabel="Group",
xticks=np.arange(len(group_counts_left)) + 0.2,
xticklabels=group_counts_left.index,
)
gluefig("group_counts", fig)
###Output
_____no_output_____
###Markdown
```{glue:figure} fig:sbm_unmatched_test-group_counts
:name: "fig:sbm_unmatched_test-group_counts"

The number of neurons in each group in each hemisphere. Note the similarity between
the hemispheres.
```
###Code
def plot_stochastic_block_test(misc, pvalue_vmin=None):
# get values
B1 = misc["probabilities1"]
B2 = misc["probabilities2"]
null_odds = misc["null_odds"]
B2 = B2 * null_odds
index = B1.index
p_max = max(B1.values.max(), B2.values.max())
uncorrected_pvalues = misc["uncorrected_pvalues"]
n_tests = misc["n_tests"]
K = B1.shape[0]
alpha = 0.05
hb_thresh = alpha / n_tests
# set up plot
pad = 2
width_ratios = [0.5, pad + 0.8, 10, pad - 0.4, 10, pad + 0.9, 10, 0.5]
set_theme(font_scale=1.25)
fig, axs = plt.subplots(
1,
len(width_ratios),
figsize=(30, 10),
gridspec_kw=dict(
width_ratios=width_ratios,
),
)
left_col = 2
right_col = 4
pvalue_col = 6
heatmap_kws = dict(
cmap="Blues", square=True, cbar=False, vmax=p_max, fmt="s", xticklabels=True
)
# heatmap of left connection probabilities
annot = np.full((K, K), "")
annot[B1.values == 0] = 0
ax = axs[left_col]
sns.heatmap(B1, ax=ax, annot=annot, **heatmap_kws)
ax.set(ylabel="Source group", xlabel="Target group")
ax.set_title(r"$\hat{B}$ left", fontsize="xx-large", color=network_palette["Left"])
# heatmap of right connection probabilities
annot = np.full((K, K), "")
annot[B2.values == 0] = 0
ax = axs[right_col]
im = sns.heatmap(B2, ax=ax, annot=annot, **heatmap_kws)
ax.set(ylabel="", xlabel="Target group")
text = r"$\hat{B}$ right"
if null_odds != 1:
text = r"$c$" + text
ax.set_title(text, fontsize="xx-large", color=network_palette["Right"])
# handle the colorbars
# NOTE: did it this way cause the other options weren't playing nice with auto
# constrain
# layouts.
def shrink_axis(ax, scale=0.7):
pos = ax.get_position()
mid = (pos.ymax + pos.ymin) / 2
height = pos.ymax - pos.ymin
new_pos = Bbox(
[
[pos.xmin, mid - scale * 0.5 * height],
[pos.xmax, mid + scale * 0.5 * height],
]
)
ax.set_position(new_pos)
ax = axs[0]
shrink_axis(ax, scale=0.5)
_ = fig.colorbar(
im.get_children()[0],
cax=ax,
fraction=1,
shrink=1,
ticklocation="left",
)
# plot p-values
ax = axs[pvalue_col]
annot = np.full((K, K), "")
annot[(B1.values == 0) & (B2.values == 0)] = "B"
annot[(B1.values == 0) & (B2.values != 0)] = "L"
annot[(B1.values != 0) & (B2.values == 0)] = "R"
plot_pvalues = np.log10(uncorrected_pvalues)
plot_pvalues[np.isnan(plot_pvalues)] = 0
im = sns.heatmap(
plot_pvalues,
ax=ax,
annot=annot,
cmap="RdBu",
center=0,
square=True,
cbar=False,
fmt="s",
vmin=pvalue_vmin,
)
ax.set(ylabel="", xlabel="Target group")
ax.set(xticks=np.arange(K) + 0.5, xticklabels=index)
ax.set_title(r"$log_{10}($p-value$)$", fontsize="xx-large")
colors = im.get_children()[0].get_facecolors()
significant = uncorrected_pvalues < hb_thresh
# NOTE: the x's looked bad so I did this super hacky thing...
pad = 0.2
for idx, (is_significant, color) in enumerate(
zip(significant.values.ravel(), colors)
):
if is_significant:
i, j = np.unravel_index(idx, (K, K))
# REF: seaborn heatmap
lum = relative_luminance(color)
text_color = ".15" if lum > 0.408 else "w"
xs = [j + pad, j + 1 - pad]
ys = [i + pad, i + 1 - pad]
ax.plot(xs, ys, color=text_color, linewidth=4)
xs = [j + 1 - pad, j + pad]
ys = [i + pad, i + 1 - pad]
ax.plot(xs, ys, color=text_color, linewidth=4)
# plot colorbar for the pvalue plot
# NOTE: only did it this way for consistency with the other colorbar
ax = axs[7]
shrink_axis(ax, scale=0.5)
_ = fig.colorbar(
im.get_children()[0],
cax=ax,
fraction=1,
shrink=1,
ticklocation="right",
)
fig.text(0.11, 0.85, "A)", fontweight="bold", fontsize=50)
fig.text(0.63, 0.85, "B)", fontweight="bold", fontsize=50)
# remove dummy axes
for i in range(len(width_ratios)):
if not axs[i].has_data():
axs[i].set_visible(False)
return fig, axs
fig, axs = plot_stochastic_block_test(misc)
gluefig("sbm_uncorrected", fig)
# need to save this for later for setting colorbar the same on other plot
pvalue_vmin = np.log10(np.nanmin(misc["uncorrected_pvalues"].values))
###Output
_____no_output_____
###Markdown
Next, we run the test for bilateral symmetry under the stochastic block model. {numref}`Figure {number} ` shows both the estimated group-to-group probability matrices, $\hat{B}^{(L)}$ and $\hat{B}^{(R)}$, as well as the p-values from each test comparing each element of these matrices. From a visual comparison of $\hat{B}^{(L)}$ and $\hat{B}^{(R)}$ {numref}`(Figure {number} A) `, we see that the group-to-group connection probabilities are qualitatively similar. Note also that some group-to-group connection probabilities are zero, making it nonsensical to do a comparison of binomial proportions. We highlight these elements in the $\hat{B}$ matrices with an explicit "0", noting that we did not run the corresponding test in these cases.

In {numref}`Figure {number} B `, we see the p-values from all {glue:text}`sbm_unmatched_test-n_tests` tests that were run. After Bonferroni-Holm correction, 5 tests yield p-values less than 0.05, indicating that we reject the null hypothesis that those elements of the $\hat{B}$ matrices are the same between the two hemispheres. We also combine all p-values using Fisher's method, which yields an overall p-value for the entire null hypothesis in Equation {eq}`sbm_unmatched_null` of {glue:text}`sbm_unmatched_test-uncorrected_pvalue:0.2e`.

```{glue:figure} fig:sbm_unmatched_test-sbm_uncorrected
:name: "fig:sbm_unmatched_test-sbm_uncorrected"

Comparison of stochastic block model fits for the left and right hemispheres.
**A)** The estimated group-to-group connection probabilities for the left
and right hemispheres appear qualitatively similar. Any estimated
probabilities which are zero (i.e. no edge was present between a given pair of
communities) are indicated explicitly with a "0" in that cell of the matrix.
**B)** The p-values for each hypothesis test between individual elements of
the block probability matrices. In other words, each cell represents a test for
whether a given group-to-group connection probability is the same on the left and the
right sides. "X" denotes a significant p-value after Bonferroni-Holm correction,
with $\alpha=0.05$. "B" indicates that a test was not run since the estimated
probability was zero in that cell on both the left and right. "L" indicates this was
the case on the left only, and "R" that it was the case on the right only. These
individual p-values were combined using Fisher's method, resulting in an overall
p-value (for the null hypothesis that the two group connection probability matrices
are the same) of {glue:text}`sbm_unmatched_test-uncorrected_pvalue:0.2e`.
```

Adjusting for a difference in density

From {numref}`Figure {number} `, we see that we have sufficient evidence to reject the null hypothesis of bilateral symmetry under this version of the SBM. However, we already saw in [](er_unmatched_test) that the overall densities between the two networks are different. Could it be that this rejection of the null hypothesis under the SBM can be explained purely by this difference in density? In other words, are the group-to-group connection probabilities on the right simply a "scaled up" version of those on the left, where each probability is scaled by the same amount?

In {numref}`Figure {number} `, we plot the estimated probabilities on the left and the right hemispheres (i.e. each element of $\hat{B}$), as well as the difference between them. While subtle, we note that there is a slight tendency for the left hemisphere estimated probability to be lower than the corresponding one on the right.
Specifically, we can also look at the group-to-group connection probabilities which were significantly different in {numref}`Figure {number} ` - these are plotted in {numref}`Figure {number} `. Note that in every case, the estimated probability on the right is higher than that on the left.
###Code
def plot_estimated_probabilities(misc):
B1 = misc["probabilities1"]
B2 = misc["probabilities2"]
null_odds = misc["null_odds"]
B2 = B2 * null_odds
B1_ravel = B1.values.ravel()
B2_ravel = B2.values.ravel()
arange = np.arange(len(B1_ravel))
sum_ravel = B1_ravel + B2_ravel
sort_inds = np.argsort(-sum_ravel)
B1_ravel = B1_ravel[sort_inds]
B2_ravel = B2_ravel[sort_inds]
fig, axs = plt.subplots(2, 1, figsize=(10, 10), sharex=True)
ax = axs[0]
sns.scatterplot(
x=arange,
y=B1_ravel,
color=network_palette["Left"],
ax=ax,
linewidth=0,
s=15,
alpha=0.5,
)
sns.scatterplot(
x=arange,
y=B2_ravel,
color=network_palette["Right"],
ax=ax,
linewidth=0,
s=15,
alpha=0.5,
zorder=-1,
)
ax.text(
0.7,
0.8,
"Left",
color=network_palette["Left"],
transform=ax.transAxes,
)
ax.text(
0.7,
0.7,
"Right",
color=network_palette["Right"],
transform=ax.transAxes,
)
ax.set_yscale("log")
ax.set(
ylabel="Estimated probability " + r"($\hat{p}$)",
xticks=[],
xlabel="Sorted group pairs",
)
ax.spines["bottom"].set_visible(False)
ax = axs[1]
diff = B1_ravel - B2_ravel
yscale = np.max(np.abs(diff))
yscale *= 1.05
sns.scatterplot(
x=arange, y=diff, ax=ax, linewidth=0, s=25, color=neutral_color, alpha=1
)
ax.axhline(0, color="black", zorder=-1)
ax.spines["bottom"].set_visible(False)
ax.set(
xticks=[],
ylabel=r"$\hat{p}_{left} - \hat{p}_{right}$",
xlabel="Sorted group pairs",
ylim=(-yscale, yscale),
)
n_greater = np.count_nonzero(diff > 0)
n_total = len(diff)
ax.text(
0.3,
0.8,
f"Left connection stronger ({n_greater}/{n_total})",
color=network_palette["Left"],
transform=ax.transAxes,
)
n_lesser = np.count_nonzero(diff < 0)
ax.text(
0.3,
0.15,
f"Right connection stronger ({n_lesser}/{n_total})",
color=network_palette["Right"],
transform=ax.transAxes,
)
fig.text(0.02, 0.905, "A)", fontweight="bold", fontsize=30)
fig.text(0.02, 0.49, "B)", fontweight="bold", fontsize=30)
return fig, ax
fig, ax = plot_estimated_probabilities(misc)
gluefig("probs_uncorrected", fig)
###Output
_____no_output_____
###Markdown
```{glue:figure} fig:sbm_unmatched_test-probs_uncorrected :name: "fig:sbm_unmatched_test-probs_uncorrected" Comparison of estimated connection probabilities for the left and right hemispheres. **A)** The estimated group-to-group connection probabilities ($\hat{p}$), sorted by the mean left/right connection probability. Note the very subtle tendency for the left probability to be lower than the corresponding one on the right. **B)** The differences between corresponding group-to-group connection probabilities ($\hat{p}^{(L)} - \hat{p}^{(R)}$). The trend of the left connection probabilities being slightly smaller than the corresponding probability on the right is more apparent here, as there are more negative than positive values.```
###Code
def plot_significant_probabilities(misc):
B1 = misc["probabilities1"]
B2 = misc["probabilities2"]
null_odds = misc["null_odds"]
B2 = B2 * null_odds
index = B1.index
uncorrected_pvalues = misc["uncorrected_pvalues"]
n_tests = misc["n_tests"]
alpha = 0.05
hb_thresh = alpha / n_tests
significant = uncorrected_pvalues < hb_thresh
row_inds, col_inds = np.nonzero(significant.values)
rows = []
for row_ind, col_ind in zip(row_inds, col_inds):
source = index[row_ind]
target = index[col_ind]
left_p = B1.loc[source, target]
right_p = B2.loc[source, target]
pair = source + r"$\rightarrow$" + target
rows.append(
{
"source": source,
"target": target,
"p": left_p,
"side": "Left",
"pair": pair,
}
)
rows.append(
{
"source": source,
"target": target,
"p": right_p,
"side": "Right",
"pair": pair,
}
)
sig_data = pd.DataFrame(rows)
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
sns.pointplot(
data=sig_data,
y="p",
x="pair",
ax=ax,
hue="side",
dodge=True,
join=False,
palette=network_palette,
)
ax.get_legend().set_title("Side")
rotate_labels(ax)
ax.set(xlabel="Group pair", ylabel="Connection probability")
return fig, ax
fig, ax = plot_significant_probabilities(misc)
gluefig("significant_p_comparison", fig)
###Output
_____no_output_____
###Markdown
```{glue:figure} fig:sbm_unmatched_test-significant_p_comparison :name: "fig:sbm_unmatched_test-significant_p_comparison" Comparison of estimated group-to-group connection probabilities for the group-pairs which were significantly different in {numref}`Figure {number} `. In each case, the connection probability on the right hemisphere is higher.``` These observations are consistent with the idea that perhaps the probabilities on the right are a scaled up version of those on the left, for some global scaling. We can frame this question as a new null hypothesis: ````{admonition} Math With variables defined as in Equation {eq}`sbm_unmatched_null`, we can write our new null hypothesis as: ```{math} :label: sbm_unmatched_null_adjusted H_0: B^{(L)} = c B^{(R)}, \quad H_A: B^{(L)} \neq c B^{(R)} ``` where $c$ is the ratio of the densities, $c = \frac{p^{(L)}}{p^{(R)}}$.```` Correcting by subsampling edges for one network One naive (though quite intuitive) approach to adjust our test for a difference in density is to simply make the densities of the two networks the same and then rerun our test. To do so, we calculated the number of edge removals (from the right hemisphere) required to set the network densities roughly the same. We then randomly removed that many edges from the right hemisphere network and then re-ran the SBM test procedure above. We repeated this procedure {glue:text}`sbm_unmatched_test-n_resamples` times, resulting in a p-value for each subsampling of the right network. The distribution of p-values from this process is shown in {numref}`Figure {number} `. Whereas the p-value for the original null hypothesis was {glue:text}`sbm_unmatched_test-uncorrected_pvalue:0.2e`, we see now that the p-values from our subsampled, density-adjusted test are around 0.8, indicating insufficient evidence to reject our density-adjusted null hypothesis of bilateral symmetry (Equation {eq}`sbm_unmatched_null_adjusted`).
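To make the subsampling step concrete, the sketch below shows one way such an edge-removal helper could be written; the `remove_edges` function used in the code that follows comes from the authors' package and may differ in its details. This hypothetical version simply zeroes out a random subset of existing edges in a binary adjacency matrix.

```python
# Hypothetical minimal version of the edge-subsampling step; the actual
# `remove_edges` helper used below may differ.
import numpy as np

def subsample_edges(adjacency, n_remove, rng=None):
    rng = np.random.default_rng(rng)
    adjacency = adjacency.copy()
    # Indices of all existing (nonzero) edges.
    rows, cols = np.nonzero(adjacency)
    # Pick `n_remove` of them uniformly at random, without replacement,
    # and set those entries to zero.
    chosen = rng.choice(len(rows), size=n_remove, replace=False)
    adjacency[rows[chosen], cols[chosen]] = 0
    return adjacency
```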
###Code
n_edges_left = np.count_nonzero(left_adj)
n_edges_right = np.count_nonzero(right_adj)
n_left = left_adj.shape[0]
n_right = right_adj.shape[0]
density_left = n_edges_left / (n_left ** 2)
density_right = n_edges_right / (n_right ** 2)
n_remove = int((density_right - density_left) * (n_right ** 2))
glue("density_left", density_left)
glue("density_right", density_right)
glue("n_remove", n_remove)
rows = []
n_resamples = 25
glue("n_resamples", n_resamples)
for i in range(n_resamples):
subsampled_right_adj = remove_edges(
right_adj, effect_size=n_remove, random_seed=rng
)
stat, pvalue, misc = stochastic_block_test(
left_adj,
subsampled_right_adj,
labels1=left_labels,
labels2=right_labels,
method="fisher",
)
rows.append({"stat": stat, "pvalue": pvalue, "misc": misc, "resample": i})
resample_results = pd.DataFrame(rows)
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
sns.histplot(data=resample_results, x="pvalue", ax=ax)
ax.set(xlabel="p-value", ylabel="", yticks=[])
ax.spines["left"].set_visible(False)
mean_resample_pvalue = np.mean(resample_results["pvalue"])
median_resample_pvalue = np.median(resample_results["pvalue"])
gluefig("pvalues_corrected", fig)
###Output
_____no_output_____
###Markdown
```{glue:figure} fig:sbm_unmatched_test-pvalues_corrected :name: "fig:sbm_unmatched_test-pvalues_corrected" Histogram of p-values after a correction for network density. For the observed networks, the left hemisphere has a density of {glue:text}`sbm_unmatched_test-density_left:0.4f`, and the right hemisphere has a density of {glue:text}`sbm_unmatched_test-density_right:0.4f`. Here, we randomly removed exactly {glue:text}`sbm_unmatched_test-n_remove` edges from the right hemisphere network, which makes the density of the right network match that of the left hemisphere network. Then, we re-ran the stochastic block model testing procedure from {numref}`Figure {number} `. This entire process was repeated {glue:text}`sbm_unmatched_test-n_resamples` times. The histogram above shows the distribution of p-values for the overall test. Note that the p-values are no longer small, indicating that with this density correction, we now failed to reject our null hypothesis of bilateral symmetry under the stochastic block model.``` An analytic approach to correcting for differences in density Instead of randomly resetting the density of the right hemisphere network, we can actually modify the hypothesis we are testing for each element of the $\hat{B}$ matrices to include this adjustment by some constant scale, $c$. ```{admonition} Math Fisher's exact test (used above to compare each element of the $\hat{B}$ matrices) tests the null hypotheses: $$H_0: B_{kl}^{(L)} = B_{kl}^{(R)}, \quad H_A: B_{kl}^{(L)} \neq B_{kl}^{(R)}$$ for each $(k, l)$ pair, where $k$ and $l$ are the indices of the source and target groups, respectively. Instead, we can use a test of: $$H_0: B_{kl}^{(L)} = c B_{kl}^{(R)}, \quad H_A: B_{kl}^{(L)} \neq c B_{kl}^{(R)}$$ In our case, $c$ is a constant that we fit to the entire right hemisphere network to set its density equal to the left, $c = \frac{p^{(L)}}{p^{(R)}}$. A test for the adjusted null hypothesis above is given by using [Fisher's noncentral hypergeometric distribution](https://en.wikipedia.org/wiki/Fisher%27s_noncentral_hypergeometric_distribution) and applying a procedure much like that of the traditional Fisher's exact test.``` More information about this test can be found in [](nhypergeom_sims).
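For concreteness, the sketch below shows how a single element-wise, density-adjusted comparison could be computed with `scipy.stats.nchypergeom_fisher`, treating the density ratio $c$ as the null odds ratio, as in the analysis below. The two-sided p-value convention used here (summing the probability of all outcomes no more likely than the observed count) is one common choice; the actual `stochastic_block_test` implementation may differ in its details, and the function name is assumed for illustration.

```python
# Illustrative sketch of a density-adjusted exact test for one cell of the
# group-to-group probability matrices; not the exact implementation used here.
import numpy as np
from scipy.stats import nchypergeom_fisher

def adjusted_exact_test(x_left, m_left, x_right, m_right, odds):
    # x_*: observed edge counts for this cell; m_*: number of possible edges
    # for this cell; odds: the null odds (here, the density ratio c).
    k = x_left + x_right
    # Conditional on the total edge count k, the left count follows Fisher's
    # noncentral hypergeometric distribution under the adjusted null.
    dist = nchypergeom_fisher(M=m_left + m_right, n=m_left, N=k, odds=odds)
    lo, hi = dist.support()
    support = np.arange(lo, hi + 1)
    pmf = dist.pmf(support)
    p_observed = dist.pmf(x_left)
    # Two-sided p-value: total probability of outcomes no more likely than
    # the observed one (with a small tolerance for floating point error).
    pvalue = pmf[pmf <= p_observed * (1 + 1e-12)].sum()
    return min(float(pvalue), 1.0)
```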
###Code
null_odds = density_left / density_right
stat, pvalue, misc = stochastic_block_test(
left_adj,
right_adj,
labels1=left_labels,
labels2=right_labels,
method="fisher",
null_odds=null_odds,
)
glue("corrected_pvalue", pvalue)
fig, axs = plot_stochastic_block_test(misc, pvalue_vmin=pvalue_vmin)
gluefig("sbm_corrected", fig)
###Output
_____no_output_____
###Markdown
{numref}`Figure {number} ` shows the results of running the analytic version of the density-adjusted test based on Fisher's noncentral hypergeometric distribution. Note that now only two group-to-group probability comparisons are significant after Bonferroni-Holm correction, and the overall p-value for this test of Equation {eq}`sbm_unmatched_null_adjusted` is {glue:text}`sbm_unmatched_test-corrected_pvalue:0.2f`. ```{glue:figure} fig:sbm_unmatched_test-sbm_corrected :name: "fig:sbm_unmatched_test-sbm_corrected" Comparison of stochastic block model fits for the left and right hemispheres after correcting for a difference in hemisphere density. **A)** The estimated group-to-group connection probabilities for the left and right hemispheres, after the right hemisphere probabilities were scaled by a density-adjusting constant, $c$. Any estimated probabilities which are zero (i.e. no edge was present between a given pair of communities) are indicated explicitly with a "0" in that cell of the matrix. **B)** The p-values for each hypothesis test between individual elements of the block probability matrices. In other words, each cell represents a test for whether a given group-to-group connection probability is the same on the left and the right sides. "X" denotes a significant p-value after Bonferroni-Holm correction, with $\alpha=0.05$. "B" indicates that a test was not run since the estimated probability was zero in that cell on both the left and right. "L" indicates this was the case on the left only, and "R" that it was the case on the right only. These individual p-values were combined using Fisher's method, resulting in an overall p-value (for the null hypothesis that the two group connection probability matrices are the same after adjustment by a density-normalizing constant, $c$) of {glue:text}`sbm_unmatched_test-corrected_pvalue:0.2f`.``` Taken together, these results suggest that for the unmatched networks, and using the known cell type labels, we reject the null hypothesis of bilateral symmetry under the SBM (Equation {eq}`sbm_unmatched_null`), but fail to reject the null hypothesis of bilateral symmetry under the SBM after a density adjustment (Equation {eq}`sbm_unmatched_null_adjusted`). Moreover, they highlight the insights that can be gained by considering multiple definitions of bilateral symmetry.
###Code
elapsed = time.time() - t0
delta = datetime.timedelta(seconds=elapsed)
###Output
_____no_output_____ |
13. Scikit-Learn, Statsmodel/05. statsmodels 패키지 소개.ipynb | ###Markdown
Introduction to the statsmodels package statsmodels is a Python package for statistical analysis. The main statsmodels website is: * http://www.statsmodels.org The statistical analysis functionality provided by statsmodels is quite extensive: * Statistics * various hypothesis tests * kernel density estimation * Generalized Method of Moments * Regression analysis (Linear Regression) * Linear Model * Generalized Linear Model * Robust Linear Model * Linear Mixed Effects Model * ANOVA (Analysis of Variance) * Discrete Dependent Variable (including Logistic Regression) * Time Series Analysis * ARMA/ARIMA Process * Vector ARMA Process In particular, for linear regression, statsmodels includes the patsy package, which enables R-style model specification, so users coming from R can pick up statsmodels easily. * https://patsy.readthedocs.org/en/latest/ Below is a simple example of performing a linear regression analysis with statsmodels.
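As a small aside before the regression example, the R-style formula handling mentioned above comes from patsy, which can also be called directly to build design matrices. The toy data below is invented purely for illustration and is not part of the example that follows.

```python
# Toy illustration of patsy's R-style formulas (used internally by the
# statsmodels formula API). The data here is made up for demonstration only.
import numpy as np
import pandas as pd
import patsy

toy = pd.DataFrame({"y": [1.0, 2.0, 3.0, 4.0],
                    "x1": [0.1, 0.4, 0.9, 1.6],
                    "x2": [10.0, 20.0, 30.0, 40.0]})
# An intercept is added automatically, and numpy transformations such as
# np.log can be used inside the formula, as in the OLS example below.
y, X = patsy.dmatrices("y ~ x1 + np.log(x2)", data=toy, return_type="dataframe")
```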
###Code
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf
# Load the Guerry dataset (from R's HistData package)
dat = sm.datasets.get_rdataset("Guerry", "HistData").data
dat.tail()
# Fit an OLS regression using an R-style patsy formula (np.log requires numpy)
results = smf.ols('Lottery ~ Literacy + np.log(Pop1831)', data=dat).fit()
# Print the regression results
print(results.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: Lottery R-squared: 0.348
Model: OLS Adj. R-squared: 0.333
Method: Least Squares F-statistic: 22.20
Date: Thu, 21 Apr 2016 Prob (F-statistic): 1.90e-08
Time: 03:20:40 Log-Likelihood: -379.82
No. Observations: 86 AIC: 765.6
Df Residuals: 83 BIC: 773.0
Df Model: 2
Covariance Type: nonrobust
===================================================================================
coef std err t P>|t| [95.0% Conf. Int.]
-----------------------------------------------------------------------------------
Intercept 246.4341 35.233 6.995 0.000 176.358 316.510
Literacy -0.4889 0.128 -3.832 0.000 -0.743 -0.235
np.log(Pop1831) -31.3114 5.977 -5.239 0.000 -43.199 -19.424
==============================================================================
Omnibus: 3.713 Durbin-Watson: 2.019
Prob(Omnibus): 0.156 Jarque-Bera (JB): 3.394
Skew: -0.487 Prob(JB): 0.183
Kurtosis: 3.003 Cond. No. 702.
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
|
notebooks/testing/Supervised-Spectral-Unmixing.ipynb | ###Markdown
Supervised Spectral Unmixing with Landsat 8 This code walks through the workflow of arthur-e's GitHub repo example: https://github.com/arthur-e/unmixing/blob/master/docs/Example_Spatially_Adaptive_Spectral_Mixture_Analysis_SASMA.ipynb NOTE: This workflow requires cloning the above repo, activating your environment, and running `pip install -e .` from the repo root. This is a very large file, so be prepared! The remainder of this notebook also requires a stacked Landsat 8 scene (see Landsat8-cropped-and-stacked.ipynb) and pre-determined spectral signatures. In this example, I am using spectral signatures derived from an unsupervised classification using kmeans (see Kmean-Unsupervised-Classification.ipynb).
###Code
# Import Packages
import os
from glob import glob
import geopandas as gpd
from shapely.geometry import box
import numpy as np
from matplotlib import pyplot as plt, cm
from matplotlib.colors import ListedColormap
import rasterio as rio
from rasterio.mask import mask
from rasterio.plot import plotting_extent
import earthpy as et
import earthpy.spatial as es
import earthpy.plot as ep
from unmixing.utils import as_array
from unmixing.utils import binary_mask
from unmixing.utils import subarray
from unmixing.lsma import FCLSAbundanceMapper
from unmixing.sasma import concat_endmember_arrays
from unmixing.transform import mnf_rotation
from unmixing.visualize import FeatureSpace
%matplotlib inline
# #######################NOT WORKING: INSISTING IT IS A .PY FILES###############
# # Make ONAQ site info retrievable
# %run ./kessb-NEON-scripts/data_grabber.ipynb
# Set working directory and other key paths
os.chdir(os.path.join(et.io.HOME, 'earth-analytics'))
output_dir = os.path.join("data", "Landsat", "outputs")
if not os.path.isdir(output_dir):
    os.mkdir(output_dir)
stacked_image_path = os.path.join(output_dir, 'stacked_aoi.tif')
unclassified_image_path = os.path.join(output_dir, 'classified_aoi.tif')
###Output
_____no_output_____
###Markdown
Preparing unsupervised classification raster data
###Code
# Import stacked aoi and plot NIR band
stacked_arr, gt, wkt = as_array(stacked_image_path)
preview_nir = stacked_arr[3,...]
preview_nir[preview_nir == -9999] = 0 # Remap any NoData values to zero
# Plot a preview of the NIR band
plt.figure(figsize = (10, 10))
plt.imshow(preview_nir, cmap = cm.YlGnBu_r)
plt.show()
# Transform image to Minimum Noise Fraction (MNF)
# TODO: should this happen prior to extracting endmembers / after a pre-classification?
mnf = mnf_rotation(stacked_arr, nodata = -9999)
# The MNF image is returned in HSI form (the transpose of our original raster array)
plt.figure(figsize = (10, 10))
plt.imshow(mnf.T[0,...], cmap = cm.YlGnBu_r)
plt.show()
# Selecting endmembers from pre-defined spectral signatures
pifs, gt0, wkt0 = as_array(unclassified_image_path)
# TODO: the plot below currently reflects only a single class
# Create a color map for the classified image (the k-means classification used 7 classes)
color_code_map = ListedColormap(['lightgray',
'green',
'red',
'blue',
'yellow',
'orange',
'black'])
plt.figure(figsize = (5, 5))
plt.imshow(pifs[0,...], cmap = color_code_map)
plt.show()
###Output
_____no_output_____ |