path | concatenated_notebook |
---|---|
documentation/concating_with_pandas.ipynb | ###Markdown
Concatenate multiple Excel files into one dataFrame, and exploratory analysis for DMCA. The Department of Music Concert Archive (DMCA) typically has one dataset representing one concert season. However, this time, they handed us 5 datasets for 5 non-continuous years in Google Drive. So, we have a directory with a bunch of Excel files. How can we easily combine them all without weird lists or overly complicated `glob` calls? Let's try to use as much plain Python and `pandas` as possible. Then, we should start some exploratory data analysis, to find out the characteristics of the data. We'll try to find issues with it, and perform initial cleanup.
###Code
import pandas as pd
import glob, os
df = pd.concat(map(pd.read_excel, ['UCSD Dept of Music 2013-2014 Concert Archive.xlsx', 'UCSD Dept of Music 2014-2015 Concert Archive.xlsx','UCSD Dept of Music 2015-2016 Concert Archive.xlsx', 'UCSD Dept of Music 2016-2017 Concert Archive.xlsx', 'UCSD Dept of Music 2018-2019 Concert Archive.xlsx']), sort=False)
df.describe()
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 10194 entries, 0 to 2157
Data columns (total 20 columns):
Event Date 3857 non-null datetime64[ns]
Event Title 3854 non-null object
TITLE 3822 non-null object
COMPOSER_CREATOR 3286 non-null object
LOCATION 3857 non-null object
DATE_CREATED 982 non-null object
COMMENTS 203 non-null object
Filename 3857 non-null object
Event_Code 3858 non-null object
TIME 3670 non-null object
ca_contrib::Instrument 9080 non-null object
ca_contrib::Name 9715 non-null object
Department Resources::Recording Engineer 3137 non-null object
Primary_FORMAT 3857 non-null object
Department Resources::Classification 3854 non-null object
Department Resources::Contact 3854 non-null object
Academic Quarter 2323 non-null object
Event Year 2319 non-null object
Department Resources::Quarter 1535 non-null object
Department Resources::Year 1535 non-null object
dtypes: datetime64[ns](1), object(19)
memory usage: 1.6+ MB
###Markdown
Dates are always a problem, even in `pandas`. The short version: formatting functions on a `datetime` column only change how the dates are displayed, not the values stored underneath, and for export we want plain `YYYY-MM-DD` text rather than timestamps. So, we should change the data from `datetime` to a `string` type:
###Code
df['Event Date'] = df['Event Date'].dt.strftime('%Y-%m-%d')
###Output
_____no_output_____
###Markdown
Our "NA" values (by default "NaN") will get in the way, and Excel/Refine expect nulls, so let's fill that in
###Code
df = df.fillna('')
###Output
_____no_output_____
###Markdown
Unfortunately, this didn't touch our date "NaT" values, since those are now strings! We couldn't replace these initially because then the string conversion would error out. So it's ugly, but we'll do a regular expression replacement on that column
###Code
df.replace({'Event Date': r'NaT'}, {'Event Date': ''}, regex=True, inplace=True)
df[0:19]
###Output
_____no_output_____
###Markdown
Inspecting the data, we see that there are two columns that represent the academic quarter, which probably happened because different sheets had unique column names. Let's merge those into one 'Quarter' column
###Code
df = df.assign(Quarter = df['Academic Quarter'].astype(str) + df['Department Resources::Quarter'].astype(str))
###Output
_____no_output_____
###Markdown
We'll do the same for year, which had the same issue as quarter
###Code
df = df.assign(Year = df['Event Year'].astype(str) + df['Department Resources::Year'].astype(str))
df[0:29]
###Output
_____no_output_____
###Markdown
So this data is pretty much ready for export (skip to the bottom of the notebook for that). But what if we don't want to plug in the filenames like we did at the beginning? Let's make a quick dataFrame using `glob` to pick up every Excel file in the directory instead
###Code
df1 = pd.concat(map(pd.read_excel, glob.glob('*.xlsx')), sort=False)
df1[0:29]
###Output
_____no_output_____
###Markdown
So let's go back to our original dataFrame, and write it out to one big merged Excel sheet
###Code
writer = pd.ExcelWriter('/home/zelgius/Documents/dmca_merged.xlsx', engine='xlsxwriter')
df.to_excel(writer, sheet_name='Item_description')
writer.save()
###Output
_____no_output_____ |
ts-tbt-sisl-tutorial-master/S_01/run.ipynb | ###Markdown
First Siesta example for analyzing output. This example will calculate the electronic structure of graphene using Siesta and we will post-process the data to calculate band-structures etc. We remark that settings during the Siesta/TranSiesta tutorials are not necessarily converged or proper settings. Users need to invest time in understanding the intrinsics of Siesta before performing real calculations! Exercises: As this is your first example of running Siesta there are a few things you should know: 1. Start by calculating the electronic structure using Siesta siesta RUN.fdf > RUN.out - Go through the output of Siesta to get used to it, also to assert that it has run without errors (always do this!) ((always, **always** do THIS!)) - Ensure the SCF has indeed converged; Siesta will, by default, die if it hasn't converged the SCF. 2. Use `sisl` to read in the electronic structure (the Hamiltonian), there are several ways of doing this, for now you should try these two methods: H_fdf = sisl.Hamiltonian.read('RUN.fdf') H_TSHS = sisl.Hamiltonian.read('siesta.TSHS') print the objects and note the difference between the two objects. What do you think these differences mean? (it isn't relevant for this tutorial, but it will be in later tutorials). 3. Calculate the $\Gamma$-point eigen spectrum using both above Hamiltonians (search the sisl documentation for `eigh`), are they different? If so, why; if not, why not? 4. Calculate the band-structure using the DFT electronic structure using `sisl`. Also calculate the band-structure using the tight-binding Hamiltonian ([TB 1](../TB_01/run.ipynb)) and compare the two. *HINT*: zoom in on an energy-range from $-3$ eV to $3$ eV and plot both the DFT bands and the TB bands in the same plot.
###Code
# These imports are needed by the cells below (sisl for the Hamiltonians, matplotlib for plotting)
import sisl
import matplotlib.pyplot as plt

# Read Hamiltonians using two different methods
# Calculate eigenspectrum at the \Gamma-point
eig_fdf = H_fdf.<>
eig_TSHS = H_TSHS.<>
# Calculate band-structure from DFT Hamiltonian and TB Hamiltonian
band = sisl.BandStructure(sisl.geom.graphene(), <fill-in correct points and labels>)
xtick, xtick_label = band.lineartick()
lk = band.lineark()
ax = plt.gca()
ax.xaxis.set_ticks(xtick)
ax.set_xticklabels(xtick_label)
# Define limits of the y-axis (Energy!)
# Play with this if you want! :)
ax.set_ylim(-3, 3)
ax.set_ylabel('Eigenspectrum [eV]')
# Plot x-major lines at the ticks
ymin, ymax = ax.get_ylim()
for tick in xtick:
ax.plot([tick,tick], [ymin, ymax], 'k')
# Plot band-structures
band.set_parent(<DFT Hamiltonian>)
eigs = band.eigh()
ax.plot(lk, eigs)
# You need to create the TB Hamiltonian, see e.g. example TB 1
band.set_parent(<TB Hamiltonian>)
eigs = band.eigh()
ax.plot(lk, eigs, '--')
###Output
_____no_output_____ |
HW6/notebook/.ipynb_checkpoints/HW6_2-checkpoint.ipynb | ###Markdown
Homework 2
###Code
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from collections import defaultdict
data = np.loadtxt('data/digit_test0.csv', delimiter=',')
data.shape
data[0]
img = data[0].reshape(16,16)
plt.imshow(img, cmap='gray')
plt.show()
digit = np.array([[1]] * 200)
digit.shape
# Scratch attempts at attaching a label column to the data; these don't work as written
# (np.array does not take a label array as its second argument), and the notebook switches
# to np.vstack / np.hstack inside load_train_data below.
np.array(data, digit)
np.array([data, [[0]]*200])
test = np.array([1,2,2,3,2,4,5,4])
from collections import Counter
c = Counter(test)
c.most_common(1)[0][0]
###Output
_____no_output_____
###Markdown
kNN http://blog.amedama.jp/entry/2017/03/18/140238
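In words: the classifier below memorizes the training set and, for a query $x$, returns the most common label among the $k$ training points with the smallest Euclidean distance to $x$,

$$\hat{y}(x) = \operatorname{mode}\{\, y_i : i \in \mathcal{N}_k(x) \,\},$$

where $\mathcal{N}_k(x)$ is the set of indices of the $k$ nearest training points (with $k=1$ it is simply the label of the single nearest neighbour).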
###Code
import numpy as np
from collections import Counter
class kNN(object):
def __init__(self, k=1):
self._train_data = None
self._target_data = None
self._k = k
def fit(self, train_data, target_data):
self._train_data = train_data
self._target_data = target_data
def predict(self, x):
distances = np.array([np.linalg.norm(p - x) for p in self._train_data])
nearest_indices = distances.argsort()[:self._k]
nearest_labels = self._target_data[nearest_indices]
c = Counter(nearest_labels)
return c.most_common(1)[0][0]
def load_train_data():
for i in range(10):
if i==0:
train_feature = np.loadtxt('data/digit_train{}.csv'.format(i), delimiter=',')
train_label = np.array([i]*train_feature.shape[0])
else:
temp_feature = np.loadtxt('data/digit_train{}.csv'.format(i), delimiter=',')
train_feature = np.vstack([train_feature, temp_feature])
temp_label = np.array([i]*temp_feature.shape[0])
train_label = np.hstack([train_label, temp_label])
return train_feature, train_label
def load_test_data():
for i in range(10):
if i==0:
test_feature = np.loadtxt('data/digit_test{}.csv'.format(i), delimiter=',')
test_label = np.array([i]*test_feature.shape[0])
else:
temp_feature = np.loadtxt('data/digit_test{}.csv'.format(i), delimiter=',')
test_feature = np.vstack([test_feature, temp_feature])
temp_label = np.array([i]*temp_feature.shape[0])
test_label = np.hstack([test_label, temp_label])
return test_feature, test_label
train_feature, train_label = load_train_data()
train_feature.shape
train_label.shape
train_label[1900]
test_feature, test_label = load_test_data()
test_label.shape
model = kNN()
model.fit(train_feature, train_label)
model._train_data.shape
from sklearn.metrics import accuracy_score
predicted_labels = []
for feature in test_feature:
predicted_label = model.predict(feature)
predicted_labels.append(predicted_label)
accuracy_score(test_label, predicted_labels)
len(predicted_labels)
accuracy_score(test_label, predicted_labels)
def calc_accuracy(train_feature, train_label, test_feature, test_label):
model = kNN()
model.fit(train_feature, train_label)
predicted_labels = []
for feature in test_feature:
predicted_label = model.predict(feature)
predicted_labels.append(predicted_label)
return accuracy_score(test_label, predicted_labels)
calc_accuracy(train_feature, train_label, test_feature, test_label)
###Output
_____no_output_____
###Markdown
Speeding up
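The cells in this section reload the data and keep the same `kNN` class. As a hedged aside (my own sketch, not necessarily how the author sped things up), the per-training-point distance loop in `predict` can be replaced by a single NumPy distance-matrix computation; the helper name `predict_fast` below is hypothetical:

```python
import numpy as np

def predict_fast(train_feature, train_label, test_feature, k=1):
    """Vectorized k-NN prediction: one distance matrix instead of a Python loop per training point."""
    # Squared Euclidean distances, shape (n_test, n_train), via ||a-b||^2 = ||a||^2 + ||b||^2 - 2*a.b
    d2 = ((test_feature ** 2).sum(axis=1)[:, None]
          + (train_feature ** 2).sum(axis=1)[None, :]
          - 2.0 * test_feature @ train_feature.T)
    # Indices of the k nearest training samples for every test sample
    nearest = np.argsort(d2, axis=1)[:, :k]
    # Majority vote over the neighbours' labels (ties go to the smallest label)
    votes = train_label[nearest]
    return np.array([np.bincount(row).argmax() for row in votes])
```

For the digit data loaded below, `predict_fast(train_feature, train_label, test_feature, k=1)` should reproduce the loop-based `kNN` predictions up to tie-breaking, while avoiding a Python-level loop over every training point.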
###Code
import numpy as np
from collections import Counter
def load_train_data():
for i in range(10):
if i==0:
train_feature = np.loadtxt('data/digit_train{}.csv'.format(i), delimiter=',')
train_label = np.array([i]*train_feature.shape[0])
else:
temp_feature = np.loadtxt('data/digit_train{}.csv'.format(i), delimiter=',')
train_feature = np.vstack([train_feature, temp_feature])
temp_label = np.array([i]*temp_feature.shape[0])
train_label = np.hstack([train_label, temp_label])
return train_feature, train_label
def load_test_data():
for i in range(10):
if i==0:
test_feature = np.loadtxt('data/digit_test{}.csv'.format(i), delimiter=',')
test_label = np.array([i]*test_feature.shape[0])
else:
temp_feature = np.loadtxt('data/digit_test{}.csv'.format(i), delimiter=',')
test_feature = np.vstack([test_feature, temp_feature])
temp_label = np.array([i]*temp_feature.shape[0])
test_label = np.hstack([test_label, temp_label])
return test_feature, test_label
train_feature, train_label = load_train_data()
test_feature, test_label = load_test_data()
import numpy as np
from collections import Counter
class kNN(object):
def __init__(self, k=1):
self._train_data = None
self._target_data = None
self._k = k
def fit(self, train_data, target_data):
self._train_data = train_data
self._target_data = target_data
def predict(self, x):
distances = np.array([np.linalg.norm(p - x) for p in self._train_data])
nearest_indices = distances.argsort()[:self._k]
nearest_labels = self._target_data[nearest_indices]
c = Counter(nearest_labels)
return c.most_common(1)[0][0]
predicted_labels
test_feature.shape
predicted_label
###Output
_____no_output_____
###Markdown
Summary so far
###Code
import numpy as np
from collections import Counter
from sklearn.metrics import accuracy_score
class kNN(object):
def __init__(self, k=1):
self._train_data = None
self._target_data = None
self._k = k
def fit(self, train_data, target_data):
self._train_data = train_data
self._target_data = target_data
def predict(self, x):
distances = np.array([np.linalg.norm(p - x) for p in self._train_data])
nearest_indices = distances.argsort()[:self._k]
nearest_labels = self._target_data[nearest_indices]
c = Counter(nearest_labels)
return c.most_common(1)[0][0]
def load_train_data():
for i in range(10):
if i==0:
train_feature = np.loadtxt('data/digit_train{}.csv'.format(i), delimiter=',')
train_label = np.array([i]*train_feature.shape[0])
else:
temp_feature = np.loadtxt('data/digit_train{}.csv'.format(i), delimiter=',')
train_feature = np.vstack([train_feature, temp_feature])
temp_label = np.array([i]*temp_feature.shape[0])
train_label = np.hstack([train_label, temp_label])
return train_feature, train_label
def load_test_data():
for i in range(10):
if i==0:
test_feature = np.loadtxt('data/digit_test{}.csv'.format(i), delimiter=',')
test_label = np.array([i]*test_feature.shape[0])
else:
temp_feature = np.loadtxt('data/digit_test{}.csv'.format(i), delimiter=',')
test_feature = np.vstack([test_feature, temp_feature])
temp_label = np.array([i]*temp_feature.shape[0])
test_label = np.hstack([test_label, temp_label])
return test_feature, test_label
def calc_accuracy(train_feature, train_label, test_feature, test_label, k=1):
model = kNN(k)
model.fit(train_feature, train_label)
predicted_labels = []
for feature in test_feature:
predicted_label = model.predict(feature)
predicted_labels.append(predicted_label)
return accuracy_score(test_label, predicted_labels)
train_feature, train_label = load_train_data()
test_feature, test_label = load_test_data()
calc_accuracy(train_feature, train_label, test_feature, test_label, k=1)
calc_accuracy(train_feature, train_label, test_feature, test_label, k=5)
###Output
_____no_output_____
###Markdown
Cross-validation
###Code
n_split = 5
def load_train_data_cv(n_split):
for i in range(10):
if i==0:
train_feature = np.loadtxt('data/digit_train{}.csv'.format(i), delimiter=',')
train_label = np.array([i]*train_feature.shape[0])
group_feature = np.split(train_feature, n_split)
group_label = np.split(train_label, n_split)
else:
temp_feature = np.loadtxt('data/digit_train{}.csv'.format(i), delimiter=',')
temp_group_feature = np.split(temp_feature, n_split)
temp_label = np.array([i]*temp_feature.shape[0])
temp_group_label = np.split(temp_label, n_split)
for m in range(n_split):
group_feature[m] = np.vstack([group_feature[m], temp_group_feature[m]])
group_label[m] = np.hstack([group_label[m], temp_group_label[m]])
return group_feature, group_label
group_feature, group_label = load_train_data_cv(5)
len(group_feature)
group_feature[0].shape
group_label[0].shape
group_label[0][999]
group_feature.pop(2)
temp = np.vstack(group_feature)
temp.shape
###Output
_____no_output_____
###Markdown
Using `pop` directly doesn't seem like a good idea (it removes the element from the original list), so copy the list first
###Code
temp = group_feature.copy()
temp.pop(2)
temp1 = np.vstack(temp)
print(temp1.shape)
print(len(group_feature))
def cross_validation(n_split=5, params=[1,2,3,4,5,10,20]):
n_params = len(params)
score_list = np.zeros(n_params)
group_feature, group_label = load_train_data_cv(n_split)
for j in range(n_params):
for i in range(n_split):
temp_group_feature = group_feature.copy()
temp_test_feature = temp_group_feature.pop(i)
temp_train_feature = np.vstack(temp_group_feature)
temp_group_label = group_label.copy()
temp_test_label = temp_group_label.pop(i)
temp_train_label = np.hstack(temp_group_label)
score_list[j] += calc_accuracy(temp_train_feature, temp_train_label, temp_test_feature, temp_test_label, k=params[j])
opt_param = params[np.argmax(score_list)]
print(score_list)
return opt_param
cross_validation(n_split=5, params=[1,3,5])
test = np.array([1,2,3,4,5])
np.split(test, 5)
test = [1,2,3,4]
test.pop(2)
test
test = [4.838, 4.837, 4.825]
for i in test:
print(i/5)
###Output
0.9676
0.9673999999999999
0.9650000000000001
###Markdown
Summary
###Code
import numpy as np
from collections import Counter
from sklearn.metrics import accuracy_score
class kNN(object):
def __init__(self, k=1):
self._train_data = None
self._target_data = None
self._k = k
def fit(self, train_data, target_data):
self._train_data = train_data
self._target_data = target_data
def predict(self, x):
distances = np.array([np.linalg.norm(p - x) for p in self._train_data])
nearest_indices = distances.argsort()[:self._k]
nearest_labels = self._target_data[nearest_indices]
c = Counter(nearest_labels)
return c.most_common(1)[0][0]
def load_train_data():
for i in range(10):
if i==0:
train_feature = np.loadtxt('data/digit_train{}.csv'.format(i), delimiter=',')
train_label = np.array([i]*train_feature.shape[0])
else:
temp_feature = np.loadtxt('data/digit_train{}.csv'.format(i), delimiter=',')
train_feature = np.vstack([train_feature, temp_feature])
temp_label = np.array([i]*temp_feature.shape[0])
train_label = np.hstack([train_label, temp_label])
return train_feature, train_label
def load_test_data():
for i in range(10):
if i==0:
test_feature = np.loadtxt('data/digit_test{}.csv'.format(i), delimiter=',')
test_label = np.array([i]*test_feature.shape[0])
else:
temp_feature = np.loadtxt('data/digit_test{}.csv'.format(i), delimiter=',')
test_feature = np.vstack([test_feature, temp_feature])
temp_label = np.array([i]*temp_feature.shape[0])
test_label = np.hstack([test_label, temp_label])
return test_feature, test_label
def calc_accuracy(train_feature, train_label, test_feature, test_label, k=1):
model = kNN(k)
model.fit(train_feature, train_label)
predicted_labels = []
for feature in test_feature:
predicted_label = model.predict(feature)
predicted_labels.append(predicted_label)
return accuracy_score(test_label, predicted_labels)
def load_train_data_cv(n_split=5):
for i in range(10):
if i==0:
train_feature = np.loadtxt('data/digit_train{}.csv'.format(i), delimiter=',')
train_label = np.array([i]*train_feature.shape[0])
group_feature = np.split(train_feature, n_split)
group_label = np.split(train_label, n_split)
else:
temp_feature = np.loadtxt('data/digit_train{}.csv'.format(i), delimiter=',')
temp_group_feature = np.split(temp_feature, n_split)
temp_label = np.array([i]*temp_feature.shape[0])
temp_group_label = np.split(temp_label, n_split)
for m in range(n_split):
group_feature[m] = np.vstack([group_feature[m], temp_group_feature[m]])
group_label[m] = np.hstack([group_label[m], temp_group_label[m]])
return group_feature, group_label
def cross_validation(n_split=5, params=[1,2,3,4,5,10,20]):
n_params = len(params)
score_list = np.zeros(n_params)
group_feature, group_label = load_train_data_cv(n_split)
for j in range(n_params):
for i in range(n_split):
temp_group_feature = group_feature.copy()
temp_test_feature = temp_group_feature.pop(i)
temp_train_feature = np.vstack(temp_group_feature)
temp_group_label = group_label.copy()
temp_test_label = temp_group_label.pop(i)
temp_train_label = np.hstack(temp_group_label)
score_list[j] += calc_accuracy(temp_train_feature, temp_train_label, temp_test_feature, temp_test_label, k=params[j])/n_split
opt_param = params[np.argmax(score_list)]
print(score_list)
return opt_param
def main():
k_opt = cross_validation(n_split=5, params=[1,2,3,4,5,10,20])
train_feature, train_label = load_train_data()
test_feature, test_label = load_test_data()
score = calc_accuracy(train_feature, train_label, test_feature, test_label, k=k_opt)
print(score)
###Output
_____no_output_____ |
Examples/Natural Language Processing/Feature extraction from text.ipynb | ###Markdown
CountVectorizer
###Code
from sklearn.feature_extraction.text import CountVectorizer
# list of text documents
text = ["hello, my name is Aman and I am a data scinetist."]
text1 = ["You are watching unfold data science, Aman Aman"]
# create the transform
vectorizer = CountVectorizer()
# tokenize and build vocab
vectorizer.fit(text)
# summarize
print(vectorizer.vocabulary_)
from IPython.display import Image
Image(filename='C:\\Users\\User\\Desktop\\wordvector.png')
# encode document
newvector = vectorizer.transform(text1)
# summarize encoded vector
print(newvector.toarray())
###Output
[[0 2 0 1 0 0 0 0 0]]
###Markdown
TF-IDF. The purpose of TF-IDF is to highlight words which are frequent in a document but not across documents.
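For reference, with scikit-learn's default smoothed IDF (which `TfidfVectorizer` uses unless `smooth_idf=False`), the weight of term $t$ in document $d$ is

$$\text{tfidf}(t, d) = \text{tf}(t, d)\cdot\left(\ln\frac{1 + n}{1 + \text{df}(t)} + 1\right),$$

where $n$ is the number of documents and $\text{df}(t)$ is the number of documents containing $t$; each document vector is then L2-normalized. This is why the `idf_` values printed below are largest for terms that appear in only one document.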
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
# list of text documents
text = ["Aman is a data scientist in India","This is unfold data science","Data Science is a promising career"]
# create the transform
vectorizer = TfidfVectorizer()
# tokenize and build vocab
vectorizer.fit(text)
#Focus on IDF VALUES
print(vectorizer.idf_)
# summarize
print(vectorizer.vocabulary_)
from IPython.display import Image
Image(filename='C:\\Users\\User\\Desktop\\TF-IDF.png')
text_as_input = text[2]
text_as_input
# encode document
vector = vectorizer.transform([text_as_input])
# summarize encoded vector
print(vector.toarray())
###Output
[[0. 0.55249005 0.32630952 0. 0. 0.32630952
0.55249005 0.42018292 0. 0. 0. ]]
|
R1.Intro_Regression/regression_intro.ipynb | ###Markdown
Introduction to Regression. Author: Jerónimo Arenas García ([email protected]) Jesús Cid Sueiro ([email protected]) Notebook version: 1.1 (Sep 12, 2017) Changes: v.1.0 - First version. Extracted from regression_intro_knn v.1.0. v.1.1 - Compatibility with python 2 and python 3
###Code
# Import some libraries that will be necessary for working with data and displaying plots
# To visualize plots in the notebook
%matplotlib inline
import numpy as np
import scipy.io # To read matlab files
import pandas as pd # To read data tables from csv files
# For plots and graphical results
import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import pylab
# For the student tests (only for python 2)
import sys
if sys.version_info.major==2:
from test_helper import Test
# That's default image size for this interactive session
pylab.rcParams['figure.figsize'] = 9, 6
###Output
_____no_output_____
###Markdown
1. The regression problem. The goal of regression methods is to predict the value of some *target* variable $S$ from the observation of one or more *input* variables $X_1, X_2, \ldots, X_N$ (that we will collect in a single vector $\bf X$). Regression problems arise in situations where the value of the target variable is not easily accessible, but we can measure other dependent variables, from which we can try to predict $S$. The only information available to estimate the relation between the inputs and the target is a *dataset* $\mathcal D$ containing several observations of all variables. $$\mathcal{D} = \{{\bf x}^{(k)}, s^{(k)}\}_{k=1}^K$$ The dataset $\mathcal{D}$ must be used to find a function $f$ that, for any observation vector ${\bf x}$, computes an output $\hat{s} = f({\bf x})$ that is a good prediction of the true value of the target, $s$. 2. Examples of regression problems. The scikit-learn package contains several datasets related to regression problems. * Boston dataset: the target variable contains housing values in different suburbs of Boston. The goal is to predict these values based on several social, economic and demographic variables taken from these suburbs (you can get more details in the UCI repository). * Diabetes dataset. We can load these datasets as follows:
###Code
from sklearn import datasets
# Load the dataset. Select it by uncommenting the appropriate line
D_all = datasets.load_boston()
#D_all = datasets.load_diabetes()
# Extract data and data parameters.
X = D_all.data # Complete data matrix (including input and target variables)
S = D_all.target # Target variables
n_samples = X.shape[0] # Number of observations
n_vars = X.shape[1] # Number of variables (including input and target)
###Output
_____no_output_____
###Markdown
This dataset contains
###Code
print(n_samples)
###Output
506
###Markdown
observations of the target variable and
###Code
print(n_vars)
###Output
13
###Markdown
input variables. 3. Scatter plots. 3.1. 2D scatter plots. When the instances of the dataset are multidimensional, they cannot be visualized directly, but we can get a first rough idea about the regression task if we plot the target variable versus one of the input variables. These representations are known as scatter plots. Python methods `plot` and `scatter` from the `matplotlib` package can be used for these graphical representations.
###Code
# Select a dataset
nrows = 4
ncols = 1 + (X.shape[1]-1)/nrows
# Some adjustment for the subplot.
pylab.subplots_adjust(hspace=0.2)
# Plot all variables
for idx in range(X.shape[1]):
ax = plt.subplot(nrows,ncols,idx+1)
ax.scatter(X[:,idx], S) # <-- This is the key command
ax.get_xaxis().set_ticks([])
ax.get_yaxis().set_ticks([])
plt.ylabel('Target')
###Output
_____no_output_____
###Markdown
3.2. 3D Plots. With the addition of a third coordinate, `plot` and `scatter` can be used for 3D plotting. Exercise 1: Select the `diabetes` dataset. Visualize the target versus components 2 and 4. (You can get more info about the scatter command and an example of use in the matplotlib documentation)
###Code
# <SOL>
x2 = X[:,2]
x4 = X[:,4]
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x2, x4, S, zdir=u'z', s=20, c=u'b', depthshade=True)
ax.set_xlabel('$x_2$')
ax.set_ylabel('$x_4$')
ax.set_zlabel('$s$')
plt.show()
# </SOL>
###Output
_____no_output_____
###Markdown
4. Evaluating a regression task. In order to evaluate the performance of a given predictor, we need to quantify the quality of predictions. This is usually done by means of a loss function $l(s,\hat{s})$. Two common losses are - Square error: $l(s, \hat{s}) = (s - \hat{s})^2$ - Absolute error: $l(s, \hat{s}) = |s - \hat{s}|$. Note that both the square and absolute errors are functions of the estimation error $e = s-{\hat s}$. However, this is not necessarily the case. As an example, imagine a situation in which we would like to introduce a penalty which increases with the magnitude of the estimated variable. For such case, the following cost would better fit our needs: $l(s,{\hat s}) = s^2 \left(s-{\hat s}\right)^2$.
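As a minimal sketch (the arrays below are made up purely for illustration, separate from the plot in the next cell), the magnitude-weighted cost mentioned above can be averaged over a sample like this:

```python
import numpy as np

# Hypothetical targets and predictions, only to illustrate l(s, s_hat) = s^2 * (s - s_hat)^2
s_true = np.array([0.5, 1.0, 2.0, 4.0])
s_hat = np.array([0.4, 1.2, 1.9, 3.5])

weighted_loss = np.mean(s_true**2 * (s_true - s_hat)**2)
print(weighted_loss)  # larger targets contribute more for the same estimation error
```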
###Code
# In this section we will plot together the square and absolute errors
grid = np.linspace(-3,3,num=100)
plt.plot(grid, grid**2, 'b-', label='Square error')
plt.plot(grid, np.absolute(grid), 'r--', label='Absolute error')
plt.xlabel('Error')
plt.ylabel('Cost')
plt.legend(loc='best')
plt.show()
###Output
_____no_output_____
###Markdown
The overall prediction performance is computed as the average of the loss over a set of samples: $${\bar R} = \frac{1}{K}\sum_{k=1}^K l\left(s^{(k)}, \hat{s}^{(k)}\right)$$ Exercise 2: The dataset in file `'datasets/x01.csv'`, taken from here, records the average weight of the brain and body for a number of mammal species. * Represent a scatter plot of the target variable versus the one-dimensional input. * Plot, over the same plot, the prediction function given by $S = 1.2 X$. * Compute the square error rate for the given dataset.
###Code
# Load dataset in arrays X and S
df = pd.read_csv('datasets/x01.csv', sep=',', header=None)
X = df.values[:,0]
S = df.values[:,1]
# <SOL>
fig = plt.figure()
plt.scatter(X, S)
plt.xlabel('$x_2$')
plt.ylabel('$x_4$')
xgrid = np.array([-200.0, 6900.0])
plt.plot(xgrid, 1.2*xgrid)
R = np.mean((S-1.2*X)**2)
print('The average square error is {0}'.format(R))
# </SOL>
if sys.version_info.major==2:
Test.assertTrue(np.isclose(R, 153781.943889), 'Incorrect value for the average square error')
else:
np.testing.assert_almost_equal(R, 153781.943889, decimal=4)
print("Test passed")
###Output
Test passed
|
misc/COGS108_Multiple Linear Regression and Collinearity.ipynb | ###Markdown
Scott Cole, 8 May 2017. This notebook covers how to use Python to analyze the relationship between multiple input variables and one output variable using multiple linear regression. Outline: 1. Regular (single-variable) linear regression; 2. Multiple linear regression; 3. Regressing out one variable from another. Import libraries: We will be using the library [statsmodels](http://www.statsmodels.org/stable/index.html) to run our linear regression.
###Code
import numpy as np
import statsmodels.formula.api as smf
import pandas as pd
import scipy as sp
%matplotlib notebook
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
1. Regular linear regression 1a. Fitting a line$y_i = \beta_0+\beta_1x_i+\epsilon_i$
###Code
# Define true statistics relating x and y
N_points = 10
true_beta0 = 0
true_beta1 = 2
noise_stdev = 1
# Set random seed
np.random.seed(0)
# Generate correlated data
x = np.random.randn(N_points) + 2
y = true_beta0 + true_beta1*x + np.random.randn(N_points)*noise_stdev
print('x=', x)
print('y=', y)
# Plot x and y
plt.figure(figsize=(4,4))
plt.plot(x, y, 'k.', ms=12)
plt.xlabel('x',size=15)
plt.ylabel('y',size=15)
# Fit line to data
A = np.vstack([x, np.ones(len(x))]).T
m, b = np.linalg.lstsq(A, y)[0]
print('True statistics: y =', true_beta1, '*x +', true_beta0)
print('Estimated stats: y =', m, '*x +', b)
print('R squared (fraction of variance explained) =',np.round(sp.stats.pearsonr(x,y)[0],2))
# Plot fitted line
plt.figure(figsize=(4,4))
plt.plot(x, y, 'k.', ms=12)
plt.plot([0,5], [true_beta1*x+true_beta0 for x in [0,5]], 'k--',label='True correlation')
plt.plot([0,5], [m*x+b for x in [0,5]], 'r--',label='Estimated correlation')
plt.xlabel('x',size=15)
plt.ylabel('y',size=15)
plt.xlim((0,5))
plt.legend(loc='best')
###Output
_____no_output_____
###Markdown
1b. Outliers and normality of errors. Minimizing the squared error of the line fit works well if our data is normally distributed. In this section, we simulate some data in which the error is not normally distributed, and see that the fitted line is not as accurate.
###Code
# Simulate data with non-normal distribution of error
np.random.seed(1)
N_points = 100
x = np.random.randn(N_points) + 2
y = true_beta0 + true_beta1*x + np.random.randn(N_points)**2
# Fit line to data
A = np.vstack([x, np.ones(len(x))]).T
m, b = np.linalg.lstsq(A, y)[0]
print('True statistics: y =', true_beta1, '*x +', true_beta0)
print('Estimated stats: y =', m, '*x +', b)
print('R squared (fraction of variance explained) =',np.round(sp.stats.pearsonr(x,y)[0],2))
# Plot fitted line
plt.figure(figsize=(4,4))
plt.plot(x, y, 'k.', ms=8)
plt.plot([0,5], [true_beta1*x+true_beta0 for x in [0,5]], 'k--',label='True correlation')
plt.plot([0,5], [m*x+b for x in [0,5]], 'r--', label='Estimated correlation')
plt.xlabel('x',size=15)
plt.ylabel('y',size=15)
plt.xlim((0,5))
plt.legend(loc='best')
plt.figure(figsize=(8,3))
errors = y - [m*xi+b for xi in x]
hist2 = plt.hist(np.random.randn(100000)*np.std(errors),np.arange(-8,8,.5),color='r', normed=True, alpha=.5,label='normal')
hist = plt.hist(errors,np.arange(-8,8,.5),color='k', normed=True, alpha=.3,label='True error')
plt.legend(loc='best')
plt.xlabel('Estimate error')
plt.ylabel('Probability')
plt.yticks(np.arange(0,1.2,.2),np.arange(0,.6,.1))
###Output
_____no_output_____
###Markdown
1c. Importance of independence of samples. Linear regression works well only if samples are [independent and identically distributed (IID)](https://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables). If this assumption is violated, then the computed correlation statistics are not reliable.
###Code
# Burrito information
np.random.seed(0)
burrito1_cost = 6 + np.random.randn(50)
burrito1_stars = 3.5 + np.random.randn(50)*.8
burrito_new_cost = 4
burrito_new_stars = np.arange(4,5.1,.1)
# Define cost and stars arrays
c = np.append(np.ones(len(burrito_new_stars))*burrito_new_cost,burrito1_cost)
s = np.append(burrito_new_stars,burrito1_stars)
# Compute correlation
print('Statistics of random data points')
print('R squared (fraction of variance explained) =',np.round(sp.stats.pearsonr(c,s)[0]**2,2))
print('p =',np.round(sp.stats.pearsonr(c,s)[1],3))
print('\nStatistics after adding in 10 non-independent data points')
print('R squared (fraction of variance explained) =',np.round(sp.stats.pearsonr(burrito1_cost, burrito1_stars)[0],2))
print('p =',np.round(sp.stats.pearsonr(burrito1_cost, burrito1_stars)[1],3))
# Fit line to data
A = np.vstack([c, np.ones(len(c))]).T
m, b = np.linalg.lstsq(A, s)[0]
# Plot fitted line
plt.figure(figsize=(4,4))
plt.plot(c, s, 'k.', ms=8)
plt.plot([0,10], [m*x+b for x in [0,10]], 'k--')
plt.xlabel('Burrito cost')
plt.ylabel('Stars')
plt.xlim((0,10))
###Output
_____no_output_____
###Markdown
2. Multiple linear regression Load burritos* [More information about burrito data](https://srcole.github.io/100burritos/)* [Burrito data spreadsheet](https://docs.google.com/spreadsheets/d/18HkrklYz1bKpDLeL-kaMrGjAhUM6LeJMIACwEljCgaw/editgid=1703829449)* [Contribute burrito ratings](https://bit.ly/burritorev)
###Code
# Load burrito data into pandas dataframe
url = 'https://docs.google.com/spreadsheet/ccc?key=18HkrklYz1bKpDLeL-kaMrGjAhUM6LeJMIACwEljCgaw&output=csv'
df = pd.read_csv(url)
# Delete unreliable ratings
import pandasql
df.Unreliable = df.Unreliable.map({'x':1,'X':1,1:1})
df.Unreliable = df.Unreliable.fillna(0)
q = """SELECT * FROM df WHERE unreliable == 0"""
df = pandasql.sqldf(q.lower(), locals())
# Rename meat:filling column because statsmodels sucks
df.rename(columns={'Meat:filling': 'Meatratio'}, inplace=True)
# Limit data to main features
df = df[['Location','Burrito','Yelp','Cost','Hunger', 'Volume', 'Tortilla', 'Temp', 'Meat',
'Fillings', 'Meatratio', 'Uniformity', 'Salsa', 'Synergy', 'Wrap', 'overall']]
df.tail()
###Output
_____no_output_____
###Markdown
2a. Individual linear regressions between burrito dimensions and overall satisfaction rating (BAD): * Ignores redundant information across features * Multiple comparison problem
###Code
# Define dimensions of interest
dims = ['Cost', 'Hunger', 'Tortilla', 'Temp', 'Meat',
'Fillings', 'Meatratio', 'Uniformity', 'Salsa', 'Wrap']
# Correlate each dimension to the overall satisfaction rating
results = {}
for d in dims:
model_str = 'overall ~ ' + d
results[d] = smf.ols(model_str, data=df, missing='drop').fit()
print(d,', R2 =',results[d].rsquared, ', p =',np.round(results[d].pvalues[d],4))
plt.plot(df['Fillings'],df['overall'],'k.')
plt.xlabel('Nonmeat filling flavor')
plt.ylabel('overall satisfaction')
###Output
_____no_output_____
###Markdown
2b. Multiple linear regression
###Code
model_str = 'overall ~ ' + ' + '.join(dims)
print(model_str)
results_all = smf.ols(model_str, data=df, missing='drop').fit()
print(results_all.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: overall R-squared: 0.723
Model: OLS Adj. R-squared: 0.712
Method: Least Squares F-statistic: 68.48
Date: Mon, 08 May 2017 Prob (F-statistic): 2.25e-67
Time: 12:50:15 Log-Likelihood: -115.21
No. Observations: 274 AIC: 252.4
Df Residuals: 263 BIC: 292.2
Df Model: 10
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [95.0% Conf. Int.]
------------------------------------------------------------------------------
Intercept -0.5042 0.245 -2.055 0.041 -0.987 -0.021
Cost 0.0217 0.023 0.932 0.352 -0.024 0.068
Hunger 0.0200 0.030 0.668 0.505 -0.039 0.079
Tortilla 0.0631 0.033 1.919 0.056 -0.002 0.128
Temp 0.0617 0.024 2.540 0.012 0.014 0.109
Meat 0.2816 0.035 8.085 0.000 0.213 0.350
Fillings 0.3752 0.037 10.044 0.000 0.302 0.449
Meatratio 0.1389 0.027 5.062 0.000 0.085 0.193
Uniformity 0.0749 0.025 3.023 0.003 0.026 0.124
Salsa 0.0597 0.027 2.174 0.031 0.006 0.114
Wrap 0.0349 0.022 1.619 0.107 -0.008 0.077
==============================================================================
Omnibus: 51.954 Durbin-Watson: 1.631
Prob(Omnibus): 0.000 Jarque-Bera (JB): 158.345
Skew: -0.808 Prob(JB): 4.13e-35
Kurtosis: 6.356 Cond. No. 139.
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
Add in 'flavor synergy' to model
###Code
dims = ['Cost','Hunger', 'Tortilla', 'Temp', 'Meat',
'Fillings', 'Meatratio', 'Uniformity', 'Salsa', 'Synergy', 'Wrap']
model_str = 'overall ~ ' + ' + '.join(dims)
results_all = smf.ols(model_str, data=df, missing='drop').fit()
print(results_all.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: overall R-squared: 0.781
Model: OLS Adj. R-squared: 0.772
Method: Least Squares F-statistic: 84.41
Date: Mon, 08 May 2017 Prob (F-statistic): 3.58e-79
Time: 12:50:15 Log-Likelihood: -82.726
No. Observations: 272 AIC: 189.5
Df Residuals: 260 BIC: 232.7
Df Model: 11
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [95.0% Conf. Int.]
------------------------------------------------------------------------------
Intercept -0.4903 0.219 -2.237 0.026 -0.922 -0.059
Cost 0.0317 0.021 1.523 0.129 -0.009 0.073
Hunger 0.0187 0.027 0.698 0.486 -0.034 0.071
Tortilla 0.0309 0.030 1.043 0.298 -0.027 0.089
Temp 0.0615 0.022 2.836 0.005 0.019 0.104
Meat 0.2065 0.032 6.375 0.000 0.143 0.270
Fillings 0.2486 0.037 6.789 0.000 0.177 0.321
Meatratio 0.0880 0.025 3.481 0.001 0.038 0.138
Uniformity 0.0701 0.022 3.152 0.002 0.026 0.114
Salsa 0.0126 0.025 0.495 0.621 -0.038 0.063
Synergy 0.2978 0.036 8.334 0.000 0.227 0.368
Wrap 0.0463 0.019 2.398 0.017 0.008 0.084
==============================================================================
Omnibus: 33.545 Durbin-Watson: 1.623
Prob(Omnibus): 0.000 Jarque-Bera (JB): 79.744
Skew: -0.584 Prob(JB): 4.83e-18
Kurtosis: 5.382 Cond. No. 144.
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
2c. Correlation matrix
###Code
dfcorr = df[dims].corr()
M = len(dims)
from matplotlib import cm
clim1 = (-1,1)
plt.figure(figsize=(12,10))
cax = plt.pcolor(range(M+1), range(M+1), dfcorr, cmap=cm.bwr)
cbar = plt.colorbar(cax, ticks=(-1,-.5,0,.5,1))
cbar.ax.set_ylabel('Pearson correlation (r)', size=30)
plt.clim(clim1)
cbar.ax.set_yticklabels((-1,-.5,0,.5,1),size=20)
ax = plt.gca()
ax.set_yticks(np.arange(M)+.5)
ax.set_yticklabels(dims,size=25)
ax.set_xticks(np.arange(M)+.5)
ax.set_xticklabels(dims,size=25)
plt.xticks(rotation='vertical')
plt.tight_layout()
plt.xlim((0,M))
plt.ylim((0,M))
###Output
_____no_output_____
###Markdown
3. Regressing out. If you want to know if a feature contains significantly more information about the output, beyond what is contained in another feature, you can regress out the latter from the former. 3a. Manufacture input features with redundant information about output variable
###Code
y = np.arange(1,2,.01)
x1 = y + np.random.randn(len(y))*.1
x2 = x1 + np.random.randn(len(y))*.3
plt.figure(figsize=(8,4))
plt.subplot(1,2,1)
plt.plot(x1,y,'k.')
plt.ylabel('y')
plt.xlabel('x1')
plt.subplot(1,2,2)
plt.plot(x2,y,'k.')
plt.xlabel('x2')
plt.figure(figsize=(4,4))
plt.plot(x1,x2,'k.')
plt.xlabel('x1')
plt.ylabel('x2')
print('Correlation coefficient between x1 and y: ', np.round(sp.stats.pearsonr(x1,y)[0],3))
print('Correlation coefficient between x2 and y: ', np.round(sp.stats.pearsonr(x2,y)[0],3))
print('Correlation coefficient between x1 and x2: ', np.round(sp.stats.pearsonr(x1,x2)[0],3))
###Output
Correlation coefficient between x1 and y: 0.936
Correlation coefficient between x2 and y: 0.693
Correlation coefficient between x1 and x2: 0.749
###Markdown
3b. Regress out x1 from x2
###Code
# Regress out features
def regress_out(x, y):
"""Regress x out of y to get a new y value"""
A = np.vstack([x, np.ones(len(x))]).T
m, b = np.linalg.lstsq(A, y)[0]
return y - b - x*m
x2b = regress_out(x1, x2)
# Visualize relationships with x2 after regressing out x1
plt.figure(figsize=(4,7))
plt.subplot(2,1,1)
plt.plot(x2b,x1,'k.')
plt.ylabel('x1')
plt.subplot(2,1,2)
plt.plot(x2b,y,'k.')
plt.ylabel('y')
plt.xlabel('x2 after regressing out x1')
print('After regressing out x1 from x2:')
print('Correlation coefficient between x2 and y: ', np.round(sp.stats.pearsonr(x2b,y)[0],3))
print('Correlation coefficient between x1 and x2: ', np.round(sp.stats.pearsonr(x1,x2b)[0],3))
###Output
After regressing out x1 from x2:
Correlation coefficient between x2 and y: -0.013
Correlation coefficient between x1 and x2: -0.0
###Markdown
3c. Multiple linear regression of x1 and x2 to predict y
###Code
df = pd.DataFrame.from_dict({'x1':x1, 'x2':x2, 'y':y})
results_all = smf.ols('y ~ x1 + x2', data=df).fit()
print(results_all.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.877
Model: OLS Adj. R-squared: 0.874
Method: Least Squares F-statistic: 344.3
Date: Mon, 08 May 2017 Prob (F-statistic): 8.70e-45
Time: 12:50:17 Log-Likelihood: 86.947
No. Observations: 100 AIC: -167.9
Df Residuals: 97 BIC: -160.1
Df Model: 2
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [95.0% Conf. Int.]
------------------------------------------------------------------------------
Intercept 0.0937 0.055 1.712 0.090 -0.015 0.202
x1 0.9449 0.054 17.636 0.000 0.839 1.051
x2 -0.0128 0.036 -0.353 0.725 -0.085 0.059
==============================================================================
Omnibus: 3.020 Durbin-Watson: 1.758
Prob(Omnibus): 0.221 Jarque-Bera (JB): 2.313
Skew: -0.225 Prob(JB): 0.315
Kurtosis: 2.406 Cond. No. 17.0
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
|
nbs/02_line_chart.ipynb | ###Markdown
Line Plot> Line plots
###Code
#hide
import altair as alt
import pandas as pd
import plotnine
from gapminder import gapminder
from plotnine.data import mtcars
from plotnine import *
from plotnine import ggplot, geom_point, aes, stat_smooth, facet_wrap, geom_line
from plotnine import ggplot # https://plotnine.readthedocs.io/en/stable/
from bbplot.bbplot import bbc_style
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
%load_ext autoreload
%autoreload 2
alt.renderers.enable('html')
from bbplot.custom_theme import bbc_theme # Custom top-level configuration for charts
alt.themes.register("my_custom_theme", bbc_theme)
alt.themes.enable("my_custom_theme")
line_df = gapminder.query(" country == 'Malawi' ")
###Output
_____no_output_____
###Markdown
Original plot Line Chart
###Code
(ggplot(line_df, aes(x='year', y='lifeExp')) + bbc_style() +
geom_line(colour='#1380A1', size=1) +
geom_hline(yintercept=0, size=1, colour='#333333') +
labs(title='Living longer',
subtitle='Life expectancy in Malawi 1952-2007')
)
# altair
line = (alt.Chart(line_df).mark_line().encode(
x=alt.X('year', axis=alt.Axis(values=list(range(1950, 2020, 10)))),
y=alt.Y('lifeExp', axis=alt.Axis(values=list(range(0, 60, 10)))))
.properties(title={'text': 'Living Longer',
'subtitle': ['Life expectancy in Malawi 1952-2007'],
}, )
)
# hline
overlay = pd.DataFrame({'lifeExp': [0]})
hline = (
alt.Chart(overlay)
.mark_rule(color='#333333', strokeWidth=3)
.encode(y='lifeExp')
)
(line + hline).configure_axisX(grid=False)
###Output
_____no_output_____ |
imsdb_scrapwork.ipynb | ###Markdown
GA Capstone - Matt Traquada - Webscraping notebook
###Code
import requests as r
import pandas as pd
import bs4 as bs
###Output
_____no_output_____
###Markdown
Genre level data structure exploration
###Code
url = 'http://www.imsdb.com/genre/Action'
page = r.get(url)
soup = bs.BeautifulSoup(page.text, "lxml")  # parse the genre page; later cells reference this as `soup`
print(soup)
print(soup.find_all('table')[1].find_all('a')[61]['href'])
film_links = pd.DataFrame()
film_links['title'] = [x.string for x in soup.find_all('table')[1].find_all('a')[61:]]
film_links['link'] = [x['href'] for x in soup.find_all('table')[1].find_all('a')[61:]]
film_links['genre'] = ['Action' for x in soup.find_all('table')[1].find_all('a')[61:]]
film_links.head()
len(film_links)
###Output
_____no_output_____
###Markdown
Full script listing exploration
###Code
soup.find_all('table')[1].find_all('a')[61]['href']
# `scripts` is assumed to hold the <a> tags scraped from IMSDb's full script listing page
# (the cell that builds it is not shown in this notebook excerpt)
film_links = pd.DataFrame()
film_links['title'] = [x.string for x in scripts]
film_links['link'] = [x['href'] for x in scripts]
film_links.head()
len(film_links)
###Output
_____no_output_____
###Markdown
Movie level structure exploration
###Code
film_links['link'][0]
film_url = 'http://www.imsdb.com{}'
link = film_links['link'][0]
film = r.get(film_url.format(link))
print(film.text)
info = bs.BeautifulSoup(film.text, "lxml").find("table", attrs={"class":"script-details"})
info
info.find_all("td")[2]
film_info = str(info.find_all("td")[2])
film_info.replace("\xa0","").split(sep="<br/>")
fields = film_info.replace("\xa0","").split(sep="<br/>")
fields
info.find_all("td")[2].find_all("a")[-1]['href']
data = {
'title':film_links['title'][0],
'info':film_info,
'link':info.find_all("td")[2].find_all("a")[-1]['href'],
'script':None,
}
data
###Output
_____no_output_____
###Markdown
Script retrieval
###Code
film_url = 'http://www.imsdb.com{}'
slink = data['link']
script = r.get(film_url.format(slink))
print(script.text)
text = bs.BeautifulSoup(script.text, "lxml").find("td", attrs={"class":"scrtext"}).pre.pre
len(str(text))
data['script'] = str(text)
import json
with open('data.json', 'w') as fp:
json.dump(data, fp)
with open('data.json') as f:
test = json.load(f)
print(test)
###Output
{'title': '10 Things I Hate About You', 'info': '<td>\n<b>IMSDb opinion</b><br/>\xa0\xa0A better-than-most teen film.<br/><br/>\n<b>IMSDb rating</b><br/>\xa0\xa0<img src="/images/rating/7-stars.gif"/> (7 out of 10)<br/>\n<b>Average user rating</b><br/>\xa0\xa0<img src="/images/rating/9-stars.gif"/> (8.76 out of 10)<br/>\n<!-- IMSDb Box -->\n<ins class="adsbygoogle" data-ad-client="ca-pub-9108429103930209" data-ad-slot="9761090053" style="display:inline-block;width:300px;height:250px"></ins>\n<script>(adsbygoogle = window.adsbygoogle || []).push({});</script>\n<br/>\n<b>Writers</b><br/>\xa0\xa0<a href="/writer.php?w=Karen McCullah Lutz" title="Scripts by Karen McCullah Lutz">Karen McCullah Lutz</a><br/>\xa0\xa0<a href="/writer.php?w=Kirsten Smith" title="Scripts by Kirsten Smith">Kirsten Smith</a><br/>\xa0\xa0<a href="/writer.php?w=William Shakespeare" title="Scripts by William Shakespeare">William Shakespeare</a><br/><br/>\n<b>Genres</b><br/>\xa0\xa0<a href="/genre/Comedy" title="Comedy Scripts">Comedy</a><br/>\xa0\xa0<a href="/genre/Romance" title="Romance Scripts">Romance</a><br/><br/>\n<b>Script Date</b> : November 1997<br/>\n<br/>\n<a href="/scripts/10-Things-I-Hate-About-You.html">Read "10 Things I Hate About You" Script</a>\n</td>', 'link': '/scripts/10-Things-I-Hate-About-You.html', 'script': '<pre>\n<b> TEN THINGS I HATE ABOUT YOU\n</b><b> \n</b> written by Karen McCullah Lutz & Kirsten Smith\n<b> \n</b> based on \'Taming of the Shrew" by William Shakespeare\n<b> \n</b> Revision November 12, 1997\n<b> \n</b><b> \n</b><b> PADUA HIGH SCHOOL - DAY\n</b><b> \n</b> Welcome to Padua High School,, your typical urban-suburban \n high school in Portland, Oregon. Smarties, Skids, Preppies, \n Granolas. Loners, Lovers, the In and the Out Crowd rub sleep \n out of their eyes and head for the main building.\n<b> \n</b><b> PADUA HIGH PARKING LOT - DAY\n</b><b> \n</b> KAT STRATFORD, eighteen, pretty -- but trying hard not to be \n -- in a baggy granny dress and glasses, balances a cup of \n coffee and a backpack as she climbs out of her battered, \n baby blue \'75 Dodge Dart.\n<b> \n</b> A stray SKATEBOARD clips her, causing her to stumble and \n spill her coffee, as well as the contents of her backpack.\n<b> \n</b> The young RIDER dashes over to help, trembling when he sees \n who his board has hit.\n<b> \n</b><b> RIDER\n</b> Hey -- sorry.\n<b> \n</b> Cowering in fear, he attempts to scoop up her scattered \n belongings.\n<b> \n</b><b> KAT\n</b> Leave it \n<b> \n</b> He persists.\n<b> \n</b> KAT (continuing)\n I said, leave it!\n<b> \n</b> She grabs his skateboard and uses it to SHOVE him against a \n car, skateboard tip to his throat. He whimpers pitifully \n and she lets him go. A path clears for her as she marches \n through a pack of fearful students and SLAMS open the door, \n entering school.\n<b> \n</b><b> INT. GIRLS\' ROOM - DAY\n</b><b> \n</b> BIANCA STRATFORD, a beautiful sophomore, stands facing the \n mirror, applying lipstick. Her less extraordinary, but \n still cute friend, CHASTITY stands next to her. 
\n<b> \n</b><b> BIANCA\n</b> Did you change your hair?\n<b> \n</b><b> CHASTITY \n</b> No.\n<b> \n</b><b> BIANCA\n</b> You might wanna think about it\n<b> \n</b> Leave the girls\' room and enter the hallway.\n<b> \n</b><b> HALLWAY - DAY- CONTINUOUS\n</b><b> \n</b> Bianca is immediately greeted by an admiring crowd, both \n boys\n and girls alike.\n<b> \n</b><b> BOY\n</b> (adoring)\n Hey, Bianca.\n<b> \n</b><b> GIRL\n</b> Awesome shoes.\n<b> \n</b> The greetings continue as Chastity remains wordless and \n unaddressed by her side. Bianca smiles proudly, \n acknowledging her fans.\n<b> \n</b><b> GUIDANCE COUNSELOR\'S OFFICE - DAY\n</b><b> \n</b> CAMERON JAMES, a clean-cut, easy-going senior with an open, \n farm-boy face, sits facing Miss Perky, an impossibly cheery \n guidance counselor.\n<b> \n</b><b> MISS PERKY\n</b> I\'m sure you won\'t find Padua any \n different than your old school. Same \n little asswipe mother-fuckers \n everywhere.\n<b> \n</b> Her plastic smile never leaves her face. Cameron fidgets in \n his chair uncomfortably.\n<b> \n</b><b> MISS PERKY\n</b> (continuing)\n Any questions?\n<b> \n</b><b> CAMERON\n</b> I don\'t think so, ma\'am\n<b> \n</b><b> MISS PERKY\n</b> Then go forth. Scoot I\'ve got \n deviants to see.\n<b> \n</b> Cameron rises to leave and makes eye contact with PATRICK \n VERONA, a sullen-looking bad ass senior who waits outside Ms \n Perky\'s door. His slouch and smirk let us know how cool he \n is.\n<b> \n</b> Miss Perky looks down at her file and up at Patrick\n<b> \n</b><b> MISS PERKY\n</b> (continuing)\n Patrick Verona. I see we\'re making our \n visits a weekly ritual.\n<b> \n</b> She gives him a withering glance. He answers with a charming \n smile.\n<b> \n</b><b> PATRICK\n</b> I missed you.\n<b> \n</b><b> MISS PERKY\n</b> It says here you exposed yourself to a \n group of freshmen girls.\n<b> \n</b><b> PATRICK\n</b> It was a bratwurst. I was eating \n lunch.\n<b> \n</b><b> MISS PERKY\n</b> With the teeth of your zipper?\n<b> \n</b> She motions for Patrick to enter her office and Cameron \n shuffles out the door, bumping into MICHAEL ECKMAN, a lanky, \n brainy senior who will either end up a politician or game \n show host.\n<b> \n</b><b> MICHAEL\n</b> You the new guy?\n<b> \n</b><b> CAMERON\n</b> So they tell me...\n<b> \n</b><b> MICHAEL\n</b> C\'mon. I\'m supposed to give you the \n tour.\n<b> \n</b> They head out of the office\n<b> \n</b><b> MICHAEL\n</b> (continuing)\n So -- which Dakota you from?\n<b> \n</b><b> CAMERON\n</b> North, actually. How\'d you ?\n<b> \n</b><b> MICHAEL\n</b> I was kidding. People actually live \n there?\n<b> \n</b><b> CAMERON\n</b> Yeah. A couple. We\'re outnumbered by \n the cows, though.\n<b> \n</b><b> MICHAEL\n</b> How many people were in your old \n school?\n<b> \n</b><b> CAMERON\n</b> Thirty-two.\n<b> \n</b><b> MICHAEL\n</b> Get out!\n<b> \n</b><b> CAMERON\n</b> How many people go here?\n<b> \n</b><b> MICHAEL\n</b> Couple thousand. Most of them evil\n<b> \n</b><b> INT. HALLWAY - DAY- CONTINUOUS\n</b><b> \n</b> Prom posters adorn the wall. Michael steers Cameron through \n the crowd as he points to various cliques.\n<b> \n</b><b> MICHAEL\n</b> We\'ve got your basic beautiful people. \n Unless they talk to you first, don\'t \n bother.\n<b> \n</b> The beautiful people pass, in full jock/cheerleader \n splendor.\n<b> \n</b><b> MICHAEL\n</b> (continuing)\n Those \'re your cowboys.\n<b> \n</b> Several Stetson-wearing, big belt buckle. 
Wrangler guys \n walk by.\n<b> \n</b><b> CAMERON\n</b> That I\'m used to.\n<b> \n</b><b> MICHAEL\n</b> Yeah, but these guys have never seen a \n horse. They just jack off to Clint \n Eastwood.\n<b> \n</b> They pass an espresso cart with a group of teens huddled \n around it.\n<b> \n</b><b> MICHAEL\n</b> (continuing)\n To the right, we have the Coffee Kids. \n Very edgy. Don\'t make any sudden \n movements around them.\n<b> \n</b><b> EXT. SCHOOL COURTYARD - DAY\n</b><b> \n</b> Michael continues the tour\n<b> \n</b><b> MICHAEL\n</b> And these delusionals are the White \n Rastae.\n<b> \n</b> Several white boys in dreadlocks and Jamaican knit berets \n lounge on the grass. A cloud of pot smoke hovers above them\n<b> \n</b><b> MICHAEL\n</b> (continuing)\n Big Marley fans. Think they\'re black. \n Semi-political, but mostly, they watch a \n lot of Wild Kingdom, if you know what I \n mean.\n<b> \n</b> Michael waves to DEREK, the one with the longest dreads.\n<b> \n</b><b> MICHAEL\n</b> (continuing)\n Derek - save some for after lunch, bub?\n<b> \n</b><b> DEREK\n</b> (very stoned)\n Michael, my brother, peace\n<b> \n</b> Cameron turns to follow Michael as they walk into the \n cafeteria.\n<b> \n</b><b> CAMERON\n</b> So where do you fit in all this?\n<b> \n</b><b> INT. CAFETERIA - DAY - CONTINUOUS\n</b><b> \n</b> Loud music and loud students. Michael sits with a group of \n studious-looking teens.\n<b> \n</b><b> MICHAEL\n</b> Future MBAs- We\'re all Ivy League, \n already accepted. Someday I\'ll be \n sipping Merlot while those guys --\n<b> \n</b> He points to the table of jocks, as they torture various \n passers-by.\n<b> \n</b><b> MICHAEL\n</b> (continuing)\n are fixing my Saab. Yuppie greed is \n back, my friend.\n<b> \n</b> He points proudly to the ALLIGATOR on his shirt.\n<b> \n</b> Cameron stops listening as BIANCA walks by, and we go SLO \n MO. Pure and perfect, she passes Cameron and Michael \n without a look.\n<b> \n</b> Cameron is smitten\n<b> \n</b><b> CAMERON\n</b> That girl -- I --\n<b> \n</b><b> MICHAEL\n</b> You burn, you pine, you perish?\n<b> \n</b><b> CAMERON\n</b> Who is she?\n<b> \n</b><b> MICHAEL\n</b> Bianca Stratford. Sophomore. Don\'t \n even think about it\n<b> \n</b><b> CAMERON\n</b> Why not?\n<b> \n</b><b> MICHAEL\n</b> I could start with your haircut, but it \n doesn\'t matter. She\'s not allowed to \n date until her older sister does. And \n that\'s an impossibility.\n<b> \n</b><b> ENGLISH CLASS - DAY\n</b><b> \n</b> A room full of bored seniors doodle and scare off into space \n MS. BLAISE, the one-step-away-from-medication English \n Teacher, tries to remember what she\'s talking about.\n<b> \n</b><b> MRS. BLAISE\n</b> Well, then. Oh, yes. I guess that \n does it for our analysis of The Old Man \n and the Sea. Any other comments?\n (with dread)\n Kat?\n<b> \n</b> Kat, the girl we saw as we entered the school, slowly cakes \n off her glasses and speaks up.\n<b> \n</b><b> KAT\n</b> Why didn\'t we just read the Hardy Boys?\n<b> \n</b><b> MRS. BLAISE\n</b> I\'m sorry?\n<b> \n</b><b> KAT\n</b> This book is about a guy and his \n fishing habit. Not exactly a crucial \n topic.\n<b> \n</b> The other students roll their eyes.\n<b> \n</b><b> KAT\n</b> (continuing)\n Frankly, I\'m baffled as to why we still \n revere Hemingway. 
He was an abusive, \n alcoholic misogynist who had a lot of \n cats.\n<b> \n</b> JOEY DORSEY, a well-muscled jock with great cheekbones, \n makes fun of her from his row.\n<b> \n</b><b> JOEY\n</b> As opposed to a bitter self-righteous \n hag who has no friends?\n<b> \n</b> A few giggles. Kat ignores him. A practiced gesture\n<b> \n</b><b> MRS. BLAISE\n</b> That\'s enough, Mr. Dorsey.\n<b> \n</b> Really gets fired up now\n<b> \n</b><b> KAT\n</b> I guess the school board thinks because \n Hemingway\'s male and an asshole, he\'s \n worthy of our time\n<b> \n</b> She looks up at Ms. Blaise, who is now fighting with her \n pill box.\n<b> \n</b><b> KAT\n</b> (continuing)\n What about Colette? Charlotte Bronte? \n Simone de Beauvoir?\n<b> \n</b> Patrick, lounging in his seat in the back row, elbows a \n crusty-looking crony, identified by the name SCURVY, \n embroidered on his workshirt.\n<b> \n</b><b> PATRICK\n</b> Mother Goose?\n<b> \n</b> The class titters. Kat wears an expression of intolerance\n<b> \n</b><b> INT. GUIDANCE COUNSELOR\'S OFFICE - DAY\n</b><b> \n</b> Kat now sits before Miss Perky.\n<b> \n</b><b> MISS PERKY\n</b> Katarina Stratford. My, my. You\'ve \n been terrorizing Ms. Blaise again.\n<b> \n</b><b> KAT\n</b> Expressing my opinion is not a \n terrorist action.\n<b> \n</b><b> MISS PERKY\n</b> Well, yes, compared to your other \n choices of expression this year, today\'s \n events are quite mild. By the way, \n Bobby Rictor\'s gonad retrieval operation \n went quite well, in case you\'re \n interested.\n<b> \n</b><b> KAT\n</b> I still maintain that he kicked himself \n in the balls. I was merely a spectator.\n<b> \n</b><b> MISS PERKY\n</b> The point is Kat -- people perceive you \n as somewhat ...\n<b> \n</b> Kat smiles at her, daring her to say it.\n<b> \n</b><b> KAT\n</b> Tempestuous?\n<b> \n</b><b> MISS PERKY\n</b> No ... I believe "heinous bitch" is the \n term used most often.\n<b> \n</b> She grimaces, as if she\'s referring to a medical condition.\n<b> \n</b><b> MISS PERKY\n</b> (continuing)\n You might want to work on that\n<b> \n</b> Kat rises from her chair with a plastic smile matching the \n counselor\'s.\n<b> \n</b><b> KAT\n</b> As always, thank you for your excellent \n guidance.\n<b> \n</b><b> INT. SOPHOMORE ENGLISH CLASS - DAY\n</b><b> \n</b> Bianca ignores the droning teacher as she writes a note in \n big flowing handwriting.\n<b> \n</b><b> TEACHER (0.S.)\n</b> I realize the language of Mr. \n Shakespeare makes him a bit daunting, \n but I\'m sure you\'re all doing your best.\n<b> \n</b> Bianca folds the note and passes it behind her with a flip \n of her hair to CHASTITY. Chastity opens the note and reads:\n<b> \n</b><b> INSERT - "JOEY DORSEY SAID HI TO ME IN THE HALL! OH! MY \n</b><b> GOD!"\n</b><b> \n</b> Chastity frowns to herself.\n<b> \n</b><b> TEACHER (0.S.)\n</b> (continuing)\n Ms. Stratford, do you care to comment \n on what you\'ve read so far?\n<b> \n</b> Bianca looks up and smiles the smile of Daddy\'s little girl.\n<b> \n</b><b> BIANCA\n</b> Not really.\n<b> \n</b> The teacher shakes her head, but lets it go.\n<b> \n</b> MANDELLA. a waif-like senior girl who sits off to the side \n trying to slit her wrist with the plastic spiral on her \n notebook, looks up and raises her hand.\n<b> \n</b><b> TEACHER\n</b> Mandella -- since you\'re assisting us, \n you might as well comment. 
I\'m assuming \n you read the assignment.\n<b> \n</b><b> MANDELLA\n</b> Uh, yeah, I read it all\n<b> \n</b><b> TEACHER\n</b> The whole play?\n<b> \n</b><b> MANDELLA\n</b> The whole folio. All the plays.\n<b> \n</b><b> TEACHER\n</b> (disbelieving)\n You\'ve read every play by William \n Shakespeare?\n<b> \n</b><b> MANDELLA\n</b> Haven\'t you?\n<b> \n</b> She raises a challenging eyebrow. The stunned teacher \n doesn\'t answer and goes to call on the next student.\n<b> \n</b><b> EXT. SCHOOL COURTYARD - DAY\n</b><b> \n</b> Mandella and Kat sit down in the quiet corner. They are \n eating a carton of yogurt with gusto.\n<b> \n</b><b> MANDELLA\n</b><b> \n</b> Your sister is so amazingly without. She\'ll never read him. \n She has no idea.\n<b> \n</b> Kat attacks\n<b> \n</b><b> KAT\n</b> The fact that you\'re cutting gym so you \n can T.A. Sophomore English just to hear \n his name, is a little without in itself \n if you ask me.\n<b> \n</b> Kat\'s attention is caught by Patrick as he walks by with his \n friends, lighting up a cigarette. Mandella notices her \n staring.\n<b> \n</b><b> MANDELLA\n</b> Who\'s that?\n<b> \n</b><b> KAT\n</b> Patrick Verona. Random skid.\n<b> \n</b><b> MANDELLA\n</b> That\'s Pat Verona? The one who was gone \n for a year? I heard he was doing porn \n movies.\n<b> \n</b><b> KAT\n</b> I\'m sure he\'s completely incapable of \n doing anything that interesting.\n<b> \n</b><b> MANDELLA\n</b> He always looks so\n<b> \n</b><b> KAT\n</b> Block E?\n<b> \n</b> Kat turns back to face Mandella and forces her yogurt into \n Mandella\'s hand.\n<b> \n</b><b> KAT\n</b> (continuing)\n Mandella, eat. Starving yourself is a \n very slow way to die.\n<b> \n</b><b> MANDELLA\n</b> Just a little.\n<b> \n</b> She eats. Kat sees her wrist\n<b> \n</b><b> KAT\n</b> What\'s this?\n<b> \n</b><b> MANDELLA\n</b> An attempted slit.\n<b> \n</b> Kat stares at her, expressionless.\n<b> \n</b><b> KAT\n</b> I realize that the men of this fine \n institution are severely lacking, but \n killing yourself so you can be with \n William Shakespeare is beyond the scope \n of normal teenage obsessions. You\'re \n venturing far past daytime talk show \n fodder and entering the world of those \n who need very expensive therapy.\n<b> \n</b><b> MANDELLA\n</b> But imagine the things he\'d say during \n sex.\n<b> \n</b> Thinks a minute\n<b> \n</b><b> KAT\n</b> Okay, say you do it. You kill \n yourself, you end up in wherever you end \n up and he\'s there. Do you really think \n he\'s gonna wanna date a ninety pound \n compulsive who failed volleyball?\n<b> \n</b> Mandella\'s attention is struck by Bianca\n<b> \n</b><b> ACROSS THE COURTYARD\n</b><b> \n</b> As she and Chastity parade by Joey and his COHORTS. One of \n the cohorts elbows Joey.\n<b> \n</b><b> COHORT\n</b> Virgin alert.\n<b> \n</b> Joey looks up and smiles at Bianca.\n<b> \n</b><b> JOEY\n</b> Lookin\' good, ladies.\n<b> \n</b> Bianca smiles her coyest of smiles.\n<b> \n</b> BACK TO KAT AND MANDELLA Still watching.\n<b> \n</b><b> MANDELLA\n</b> Tragic.\n<b> \n</b> Doesn\'t respond\n<b> \n</b><b> ANOTHER ANGLE\n</b><b> \n</b> Michael and Cameron observe Joey\'s leers at Bianca from \n their bench in another corner. Cowboys eating out of a can \n of beans linger on the grass behind them.\n<b> \n</b><b> CAMERON\n</b> Why do girls like that always like guys \n like that?\n<b> \n</b><b> MICHAEL\n</b> Because they\'re bred to. Their mothers \n liked guys like that, and their \n grandmothers before them. 
Their gene \n pool is rarely diluted.\n<b> \n</b><b> CAMERON\n</b> He always have that shit-eating grin?\n<b> \n</b><b> MICHAEL\n</b> Joey Dorsey? Perma-shit-grin. I wish \n I could say he\'s a moron, but he\'s \n number twelve in the class. And a \n model. Mostly regional stuff, but he\'s \n rumored to have a big tube sock ad \n coming out.\n<b> \n</b> The BELL rings, and the cowboys stand and spit into their \n empty bean cans. Cameron and Michael rise as Cameron tries \n to catch a glimpse of Bianca as she walks back inside.\n<b> \n</b><b> MICHAEL\n</b> (continuing)\n You know French?\n<b> \n</b><b> CAMERON\n</b> Sure do ... my Mom\'s from Canada\n<b> \n</b><b> MICHAEL\n</b> Guess who just signed up for a tutor?\n<b> \n</b><b> CAMERON\n</b> You mean I\'d get a chance to talk to \n her?\n<b> \n</b><b> MICHAEL\n</b> You could consecrate with her, my \n friend.\n<b> \n</b> Cameron watches as Bianca flounces back into the building.\n<b> \n</b><b> EXT. SCHOOL PARKING LOT - DAY\n</b><b> \n</b> Kat and Mandella walk toward Kat\'s car. Joey pulls up \n beside her in his Viper.\n<b> \n</b><b> JOEY\n</b> (re her dress)\n The vintage look is over, Kat. Haven\'t \n you been reading your Sassy?\n<b> \n</b><b> KAT\n</b> Yeah, and I noticed the only part of \n you featured in your big Kmart spread \n was your elbow. Tough break.\n<b> \n</b><b> JOEY\n</b> (practically \n spitting)\n They\'re running the rest of me next \n month.\n<b> \n</b> He zooms away as Kat yanks open the door of her Dart. \n Mandella ties a silk scarf around her head, as if they\'re in \n a convertible.\n<b> \n</b><b> KAT\n</b> The people at this school are so \n incredibly foul.\n<b> \n</b><b> MANDELLA\n</b> You could always go with me. I\'m sure \n William has some friends.\n<b> \n</b> They watch Joey\'s car as he slows next to Bianca and \n Chastity as they walk toward the school bus.\n<b> \n</b><b> ON BIANCA AND CHASTITY\n</b><b> \n</b><b> JOEY\n</b> Need a ride, ladies?\n<b> \n</b> Bianca and Chastity can\'t get in Joey\'s car fast enough. He \n pulls away with a smile.\n<b> \n</b><b> BACK TO KAT AND MANDELLA\n</b><b> \n</b> Mandella lowers her sunglasses to watch.\n<b> \n</b><b> MANDELLA\n</b> That\'s a charming new development\n<b> \n</b> Kat doesn\'t answer, but reaches over and puts a tape in the \n tape deck. The sounds of JOYFUL PUNK ROCK fill the car.\n<b> \n</b> As they pull out, Michael crosses in front of them on his \n moped. Kat has to SLAM the brakes to keep from hitting him\n<b> \n</b><b> KAT\n</b> (yelling)\n Remove head from sphincter! Then \n pedal!\n<b> \n</b> Michael begins fearfully, pedaling as Kat PEELS out, angry \n at the delay.\n<b> \n</b> Cameron rushes over\n<b> \n</b><b> CAMERON\n</b> You all right?\n<b> \n</b> He slows to a stop\n<b> \n</b><b> MICHAEL\n</b> Yeah, just a minor encounter with the \n shrew.\n<b> \n</b><b> CAMERON\n</b> That\'s her? Bianca\'s sister?\n<b> \n</b><b> MICHAEL\n</b> The mewling, rampalian wretch herself.\n<b> \n</b> Michael putters off, leaving Cameron dodging Patrick\'s \n grimy, grey Jeep -- a vehicle several years and many paint \n jobs away from its former glory as a REGULATION MAIL TRUCK -\n - as he sideswipes several cars on his way out of the lot.\n<b> \n</b><b> INT. STRATFORD HOUSE - DAY\n</b><b> \n</b> SHARON STRATFORD, attractive and focused, sits in front of \n her computer, typing quickly. A shelf next to her holds \n several bodice-ripper romance novels, bearing her name. 
\n<b> \n</b> Kat stands behind her, reading over her shoulder as she \n types.\n<b> \n</b><b> KAT\n</b> "Undulating with desire, Adrienne \n removes her crimson cape, revealing her \n creamy --"\n<b> \n</b> WALTER STRATFORD, a blustery, mad scientist-type \n obstetrician, enters through the front door, wearing a \n doctor\'s white jacket and carrying his black bag.\n<b> \n</b><b> WALTER\n</b><b> \n</b> I hope dinner\'s ready because I only have ten minutes before \n Mrs. Johnson squirts out a screamer.\n<b> \n</b> He grabs the mail and rifles through it, as he bends down to \n kiss Sharon on the cheek.\n<b> \n</b><b> SHARON\n</b> In the microwave.\n<b> \n</b><b> WALTER\n</b> (to Kat)\n Make anyone cry today?\n<b> \n</b><b> KAT\n</b> Sadly, no. But it\'s only four-thirty.\n<b> \n</b> Bianca walks in.\n<b> \n</b><b> KAT\n</b> (continuing)\n Where\'ve you been?\n<b> \n</b><b> BIANCA\n</b> (eyeing Walter)\n Nowhere... Hi, Daddy.\n<b> \n</b> She kisses him on the cheek\n<b> \n</b><b> WALTER\n</b> Hello, precious.\n<b> \n</b> Walter kisses Bianca back as Kat heads up the stairs\n<b> \n</b><b> KAT\n</b> How touching.\n<b> \n</b> Walter holds up a letter to Kat\n<b> \n</b><b> WALTER\n</b> What\'s this? It says Sarah Lawrence?\n<b> \n</b> Snatches it away from him.\n<b> \n</b><b> KAT\n</b> I guess I got in\n<b> \n</b> Sharon looks up from her computer.\n<b> \n</b><b> SHARON\n</b> What\'s a synonym for throbbing?\n<b> \n</b><b> WALTER\n</b> Sarah Lawrence is on the other side of \n the country.\n<b> \n</b><b> KAT\n</b> I know.\n<b> \n</b><b> WALTER\n</b> I thought we decided you were going to \n school here. At U of 0.\n<b> \n</b><b> KAT\n</b> You decided.\n<b> \n</b><b> BIANCA\n</b> Is there even a question that we want \n her to stay?\n<b> \n</b> Kat gives Bianca an evil look then smiles sweetly at\n<b> \n</b><b> KAT\n</b> Ask Bianca who drove her home\n<b> \n</b><b> SHARON\n</b> Swollen...turgid.\n<b> \n</b><b> WALTER\n</b> (to Bianca; upset)\n Who drove you home?\n<b> \n</b> Bianca glares at Kat then turns to Walter\n<b> \n</b><b> BIANCA\n</b> Now don\'t get upset. Daddy, but there\'s \n this boy... and I think he might ask...\n<b> \n</b><b> WALTER\n</b> No! You\'re not dating until your sister \n starts dating. End of discussion.\n<b> \n</b><b> BIANCA\n</b> What if she never starts dating?\n<b> \n</b><b> WALTER\n</b> Then neither will you. And I\'ll get to \n sleep at night.\n<b> \n</b><b> BIANCA\n</b> But it\'s not fair -- she\'s a mutant, \n Daddy!\n<b> \n</b><b> KAT\n</b> This from someone whose diary is \n devoted to favorite grooming tips?\n<b> \n</b><b> WALTER\n</b> Enough!\n<b> \n</b> He pulls out a small tape recorder from his black bag.\n<b> \n</b><b> WALTER\n</b> (continuing)\n Do you know what this is?\n<b> \n</b> He hits the "play\' button and SHRIEKS OF PAIN emanate from \n the tape recorder.\n<b> \n</b><b> BIANCA AND WALTER\n</b> (in unison, by \n rote)\n The sound of a fifteen-year-old in \n labor.\n<b> \n</b><b> WALTER\n</b> This is why you\'re not dating until \n your sister does.\n<b> \n</b><b> BIANCA\n</b> But she doesn\'t want to date.\n<b> \n</b><b> WALTER\n</b> Exactly my point\n<b> \n</b> His BEEPER goes off and he grabs his bag again\n<b> \n</b><b> WALTER\n</b> (continuing)\n Jesus! Can a man even grab a sandwich \n before you women start dilating?\n<b> \n</b><b> SHARON\n</b> Tumescent!\n<b> \n</b><b> WALTER\n</b> (to Sharon; as he \n leaves)\n You\'re not helping.\n<b> \n</b><b> INT. TUTORING ROOM - DAY\n</b><b> \n</b> Cameron sits with an empty chair beside him. 
Bianca arrives \n in a flurry of blonde hair.\n<b> \n</b><b> BIANCA\n</b> Can we make this quick? Roxanne \n Korrine and Andrew Barrett are having an \n incredibly horrendous public break- up \n on the quad. Again.\n<b> \n</b><b> CAMERON\n</b> Well, I thought we\'d start with \n pronunciation, if that\'s okay with you.\n<b> \n</b><b> BIANCA\n</b> Not the hacking and gagging and spitting part. Please.\n<b> \n</b><b> CAMERON\n</b> (looking down)\n Okay... then how \'bout we try out some \n French cuisine. Saturday? Night?\n<b> \n</b> Bianca smiles slowly\n<b> \n</b><b> BIANCA\n</b> You\'re asking me out. That\'s so cute. \n What\'s your name again?\n<b> \n</b><b> CAMERON\n</b> (embarrassed)\n Forget it.\n<b> \n</b> Bianca seizes an opportunity.\n<b> \n</b><b> BIANCA\n</b> No, no, it\'s my fault -- we didn\'t have \n a proper introduction ---\n<b> \n</b><b> CAMERON\n</b> Cameron.\n<b> \n</b><b> BIANCA\n</b> The thing is, Cameron -- I\'m at the \n mercy of a particularly hideous breed of \n loser. My sister. I can\'t date until \n she does.\n<b> \n</b><b> CAMERON\n</b> Seems like she could get a date easy \n enough...\n<b> \n</b> She fingers a lock of her hair. He looks on, dazzled.\n<b> \n</b><b> BIANCA\n</b><b> \n</b> The problem is, she\'s completely anti-social.\n<b> \n</b><b> CAMERON\n</b> Why?\n<b> \n</b><b> BIANCA\n</b> Unsolved mystery. She used to be \n really popular when she started high \n school, then it was just like she got \n sick of it or something.\n<b> \n</b><b> CAMERON\n</b> That\'s a shame.\n<b> \n</b> She reaches out and touches his arm\n<b> \n</b><b> BIANCA\n</b> Gosh, if only we could find Kat a \n boyfriend...\n<b> \n</b><b> CAMERON\n</b> Let me see what I can do.\n<b> \n</b> Cameron smiles, having no idea how stupid he is\n<b> \n</b><b> INT. BIOLOGY CLASS\n</b><b> \n</b> A frog is being torn asunder by several prongs and picks. \n Michael and Cameron go for the spleen.\n<b> \n</b><b> MICHAEL\n</b> You\'re in school for one day and you \n ask out the most beautiful girl? Do you \n have no concept of the high school \n social code?\n<b> \n</b> Cameron grins away\n<b> \n</b><b> CAMERON\n</b> I teach her French, get to know her, \n dazzle her with charm and she falls in \n love with me.\n<b> \n</b><b> MICHAEL\n</b> Unlikely, but even so, she still can\'t \n go out with you. So what\'s the\n point?\n<b> \n</b> Cameron motions with his head toward Patrick, a few lab \n tables away. He\'s wearing biker glasses instead of goggles \n as he tries to revive his frog.\n<b> \n</b><b> CAMERON\n</b> What about him?\n<b> \n</b><b> MICHAEL\n</b> (confused)\n You wanna go out with him?\n<b> \n</b> The others at the lab table raise their eyebrows\n<b> \n</b><b> CAMERON\n</b> (impatient)\n No - he could wrangle with the sister.\n<b> \n</b> Michael smiles. Liking the intrigue.\n<b> \n</b><b> MICHAEL\n</b> What makes you think he\'ll do it?\n<b> \n</b><b> CAMERON\n</b> He seems like he thrives on danger\n<b> \n</b><b> MICHAEL\n</b> No kidding. He\'s a criminal. I heard \n he lit a state trooper on fire. He just \n got out of Alcatraz...\n<b> \n</b><b> CAMERON\n</b> They always let felons sit in on Honors \n Biology?\n<b> \n</b><b> MICHAEL\n</b> I\'m serious, man, he\'s whacked. He \n sold his own liver on the black market \n so he could buy new speakers.\n<b> \n</b><b> CAMERON\n</b> Forget his reputation. 
Do you think \n we\'ve got a plan or not?\n<b> \n</b><b> MICHAEL\n</b> Did she actually say she\'d go out with \n you?\n<b> \n</b><b> CAMERON\n</b> That\'s what I just said\n<b> \n</b> Michael processes this.\n<b> \n</b><b> MICHAEL\n</b> You know, if you do go out with Bianca, \n you\'d be set. You\'d outrank everyone. \n Strictly A-list. With me by your side.\n<b> \n</b><b> CAMERON\n</b> I thought you hated those people.\n<b> \n</b><b> MICHAEL\n</b> Hey -- I\'ve gotta have a few clients \n when I get to Wall Street.\n<b> \n</b> A cowboy flicks the frog\'s heart into one of the Coffee \n Kid\'s latte. Cameron presses on, over the melee.\n<b> \n</b><b> CAMERON\n</b> So now all we gotta do is talk to him.\n<b> \n</b> He points to Patrick, who now makes his frog hump another \n frog, with full-on sound effects.\n<b> \n</b><b> MICHAEL\n</b> I\'ll let you handle that.\n<b> \n</b><b> INT. WOODSHOP - DAY\n</b><b> \n</b> Boys and a few stray girls nail their pieces of wood\n<b> \n</b> Michael sits next to PEPE, a Coffee Kid, who holds out his \n jacket like the men who sell watches in the subway. Inside \n several bags of coffee hang from hooks.\n<b> \n</b><b> PEPE\n</b> Some people like the Colombian, but it \n all depends on your acidity preference. \n Me? I prefer East African and \n Indonesian. You start the day with a \n Sumatra Boengie or maybe and Ethiopian \n Sidamo in your cup, you\'re that much \n farther ahead than someone drinkin\' \n Cosia Rican or Kona -- you know what I \n mean?\n<b> \n</b> Michael nods solemnly.\n<b> \n</b><b> ACROSS THE ROOM\n</b><b> \n</b> Patrick sits at a table with Scurvy, making something that \n looks like a machete out of a two-by-four.\n<b> \n</b> Cameron approaches, full of good-natured farm boy cheer\n<b> \n</b><b> CAMERON\n</b> Hey, there\n<b> \n</b> In response, Patrick brandishes a loud POWER TOOL in his \n direction.\n<b> \n</b> Cameron slinks away.\n<b> \n</b><b> CAMERON\n</b> (continuing)\n Later, then. \n<b> \n</b> Michael watches, shaking his head.\n<b> \n</b><b> INT. CAFETERIA - DAY\n</b><b> \n</b> Joey and his pals take turns drawing boobs onto a cafeteria \n tray with a magic marker.\n<b> \n</b> Michael walks up and sits between them, casual as can be\n<b> \n</b><b> MICHAEL\n</b> Hey.\n<b> \n</b><b> JOEY\n</b> Are you lost?\n<b> \n</b><b> MICHAEL\n</b> Nope - just came by to chat\n<b> \n</b><b> JOEY\n</b> We don\'t chat.\n<b> \n</b><b> MICHAEL\n</b> Well, actually, I thought I\'d run an \n idea by you. You know, just to see if \n you\'re interested.\n<b> \n</b><b> JOEY\n</b> We\'re not.\n<b> \n</b> He grabs Michael by the side of the head, and proceeds to \n draw a penis on his cheek with the magic marker. Michael \n suffers the indignity and speaks undaunted.\n<b> \n</b><b> MICHAEL\n</b> (grimacing)\n Hear me out. You want Bianca don\'t \n you?\n<b> \n</b> Joey sits back and cackles at his drawing.\n<b> \n</b><b> MICHAEL\n</b> (continuing)\n But she can\'t go out with you because \n her sister is this insane head case and \n no one will go out with her. right?\n<b> \n</b><b> JOEY\n</b> Does this conversation have a purpose?\n<b> \n</b><b> MICHAEL\n</b> So what you need to do is recruit a guy \n who\'ll go out with her. Someone who\'s \n up for the job.\n<b> \n</b> Michael points to Patrick, who makes a disgusted face at his \n turkey pot pie before he rises and throws it at the garbage \n can, rather than in it.\n<b> \n</b><b> JOEY\n</b><b> \n</b> That guy? I heard he ate a live duck once. 
Everything but \n the beak and the feet.\n<b> \n</b><b> MICHAEL\n</b> Exactly\n<b> \n</b> Joey turns to look at Michael.\n<b> \n</b><b> JOEY\n</b><b> \n</b> What\'s in it for you?\n<b> \n</b><b> MICHAEL\n</b> Oh, hey, nothin\' man Purely good will \n on my part.\n<b> \n</b> He rises to leave and turns to the others.\n<b> \n</b><b> MICHAEL\n</b> (continuing)\n I have a dick on my face, don\'t I? \n<b> \n</b><b> INT. BOY\'S ROOM - DAY\n</b><b> \n</b> Michael stands at the sink, trying to scrub Joey\'s artwork \n off his face as Cameron watches.\n<b> \n</b><b> CAMERON\n</b> You got him involved?\n<b> \n</b><b> MICHAEL\n</b> Like we had a choice? Besides -- when \n you let the enemy think he\'s \n orchestrating the battle, you\'re in a \n position of power. We let him pretend \n he\'s calling the shots, and while he\'s \n busy setting up the plan, you have time \n to woo Bianca.\n<b> \n</b> Cameron grins and puts an arm around him\n<b> \n</b><b> CAMERON\n</b> You\'re one brilliant guy\n<b> \n</b> Michael pulls back, noticing other guys filing in.\n<b> \n</b><b> MICHAEL\n</b><b> \n</b> Hey - I appreciate gratitude as much as the next guy, but \n it\'s not gonna do you any good to be known as New Kid Who \n Embraces Guys In The Bathroom. \n<b> \n</b> Cameron pulls back and attempts to posture himself in a \n manly way for the others, now watching.\n<b> \n</b><b> INT. KENNY\'S THAI FOOD DINER - DAY\n</b><b> \n</b> Kat and Mandella pick apart their pad thai. Mandella is \n smoking.\n<b> \n</b><b> KAT\n</b> So he has this huge raging fit about \n Sarah Lawrence and insists that I go to \n his male-dominated, puking frat boy, \n number one golf team school. I have no \n say at all.\n<b> \n</b><b> MANDELLA\n</b> William would never have gone to a \n state school.\n<b> \n</b><b> KAT\n</b> William didn\'t even go to high school\n<b> \n</b><b> MANDELLA\n</b> That\'s never been proven\n<b> \n</b><b> KAT\n</b> Neither has his heterosexuality.\n<b> \n</b> Mandella replies with a look of ice. Kat uses the moment to \n stub out Mandella\'s cigarette. \n<b> \n</b><b> KAT\n</b> (continuing)\n I appreciate your efforts toward a \n speedy death, but I\'m consuming.\n (pointing at her \n food)\n Do you mind?\n<b> \n</b><b> MANDELLA\n</b> Does it matter?\n<b> \n</b><b> KAT\n</b> If I was Bianca, it would be, "Any \n school you want, precious. Don\'t forget \n your tiara."\n<b> \n</b> They both look up as Patrick enters. He walks up to the \n counter to place his order.\n<b> \n</b> Mandella leans toward Kat with the glow of fresh gossip\n<b> \n</b><b> MANDELLA\n</b> Janice Parker told me he was a roadie \n for Marilyn Manson.\n<b> \n</b> Patrick nods at them as he takes his food outside.\n<b> \n</b><b> KAT \n</b> Janice Parker is an idiot\n<b> \n</b><b> INT. MISS PERKY\'S OFFICE - DAY \n</b><b> \n</b> Patrick sits before Miss Perky, eating his Thai food\n<b> \n</b><b> MISS PERKY \n</b> (looking at chart)\n I don\'t understand, Patrick. You \n haven\'t done anything asinine this week. \n Are you not feeling well?\n<b> \n</b><b> PATRICK\n</b> Touch of the flu.\n<b> \n</b><b> MISS PERKY\n</b> I\'m at a loss, then. What should we \n talk about? 
Your year of absence?\n<b> \n</b> He smiles his charming smile\n<b> \n</b><b> PATRICK\n</b> How \'bout your sex life?\n<b> \n</b> She tolerates his comment with her withering glance.\n<b> \n</b><b> MISS PERKY\n</b> Why don\'t we discuss your driving need \n to be a hemorrhoid?\n<b> \n</b><b> PATRICK \n</b> What\'s to discuss?\n<b> \n</b><b> MISS PERKY\n</b> You weren\'t abused, you aren\'t stupid, \n and as far as I can tell, you\'re only \n slightly psychotic -- so why is it that \n you\'re such a fuck-up?\n<b> \n</b><b> PATRICK\n</b> Well, you know -- there\'s the prestige \n of the job title... and the benefits \n package is pretty good...\n<b> \n</b> The bell RINGS.\n<b> \n</b><b> MISS PERKY\n</b> Fine. Go do something repugnant and \n give us something to talk about next \n week.\n<b> \n</b><b> INT. TUTORING ROOM - DAY\n</b><b> \n</b> Several pairs of tutors and students sit at the various \n desks.\n<b> \n</b> Mandella sits with TREVOR, a White Rasta. She attempts to \n get him to do geometry, but he stares at her, as if smitten\n<b> \n</b><b> MANDELLA\n</b> Look, it\'s really easy.\n<b> \n</b><b> TREVOR\n</b> You\'re a freedom fighter. Be proud, \n sister.\n<b> \n</b> Mandella sets down her pencil and closes the book.\n<b> \n</b><b> MANDELLA \n</b> (rotely)\n It\'s Mandella with two L\'s. I am not \n related to Nelson Mandela. I am not a \n political figure. I do not live in \n South Africa. My parents just spent a \n few too many acid trips thinking they \n were revolutionaries.\n<b> \n</b><b> TREVOR \n</b> But you freed our people\n<b> \n</b><b> MANDELLA\n</b> Your "people" are white, suburban high \n school boys who smoke too much hemp. I \n have not freed you, Trevor.\n (grabbing his arm \n dramatically)\n Only you can free yourself.\n<b> \n</b> ACROSS THE ROOM Bianca and Cameron sit side by side, cozy as \n can be\n<b> \n</b><b> BIANCA\n</b> C\'esc ma tete. This is my head\n<b> \n</b><b> CAMERON\n</b> Right. See? You\'re ready for the \n quiz.\n<b> \n</b><b> BIANCA\n</b> I don\'t want to know how to say that \n though. I want to know useful things. \n Like where the good stores are. How \n much does champagne cost? Stuff like \n Chat. I have never in my life had to \n point out my head to someone.\n<b> \n</b><b> CAMERON\n</b> That\'s because it\'s such a nice one.\n<b> \n</b><b> BIANCA\n</b> Forget French.\n<b> \n</b> She shuts her book and puts on a seductive smile\n<b> \n</b><b> BIANCA\n</b> (continuing)\n How is our little Find the Wench A Date \n plan progressing?\n<b> \n</b><b> CAMERON\n</b> Well, there\'s someone I think might be \n<b> --\n</b><b> \n</b> Bianca\'s eyes light up\n<b> \n</b><b> BIANCA\n</b> Show me\n<b> \n</b><b> INT. HALLWAY - DAY\n</b><b> \n</b> Cameron and Bianca lean against the wall -inconspicuously. \n Bianca plays it cool.\n<b> \n</b><b> BIANCA\n</b> Give me a sign when he walks by. And \n don\'t point.\n<b> \n</b> The bell RINGS. Kids flood past. Then Patrick saunters by \n with Scurvy. Cameron nudges Bianca.\n<b> \n</b><b> CAMERON\n</b> There.\n<b> \n</b><b> BIANCA\n</b> Where?\n<b> \n</b> Out of desperation, Cameron awkwardly lunges across \n Patrick\'s path. Patrick shoves him back against the wall \n without a thought. Cameron lands in a THUD at Bianca\'s \n feet.\n<b> \n</b><b> CAMERON\n</b> I guess he didn\'t see me \n (calling after \n Patrick) \n Some other time --\n<b> \n</b> Bianca watches Patrick, a wicked gleam in her eye.\n<b> \n</b><b> BIANCA\n</b> My God, he\'s repulsive. He\'s so \n perfect!\n<b> \n</b><b> INT. 
GYM CLASS - DAY \n</b><b> \n</b> Several volleyball games are being played.\n<b> \n</b> Joey and a member of his hulking entourage, approach \n Patrick, who still manages to look cool, even in gym \n clothes. They pull him aside roughly.\n<b> \n</b><b> PATRICK \n</b> (shrugging them \n off)\n What?\n<b> \n</b> Joey points\n<b> \n</b> JOEY See that girl?\n<b> \n</b> Patrick follows his line of vision to Kat as she spikes the \n ball into some poor cowboy\'s face.\n<b> \n</b><b> PATRICK\n</b> Yeah\n<b> \n</b><b> JOEY \n</b> What do you think?\n<b> \n</b> Kat wins the game and high fives the others, who are scared \n of her.\n<b> \n</b><b> PATRICK\n</b> Two legs, nice rack...\n<b> \n</b><b> JOEY\n</b> Yeah, whatever. I want you to go out \n with her.\n<b> \n</b><b> PATRICK\n</b> Sure, Sparky. I\'ll get right on it.\n<b> \n</b><b> JOEY \n</b> You just said\n<b> \n</b><b> PATRICK\n</b> You need money to take a girl out\n<b> \n</b><b> JOEY\n</b> But you\'d go out with her if you had \n the cake?\n<b> \n</b> Patrick stares at Joey deadpan. His dislike for the guy \n obvious.\n<b> \n</b><b> PATRICK \n</b> (sarcastic)\n Yeah, I\'d take her to Europe if I had \n the plane.\n<b> \n</b> Joey smiles.\n<b> \n</b><b> JOEY\n</b> You got it, Verona. I pick up the tab, \n you do the honors.\n<b> \n</b><b> PATRICK\n</b> You\'re gonna pay me to take out some \n girl?\n<b> \n</b><b> JOEY\n</b> I can\'t date her sister until that one \n gets a boyfriend. And that\'s the catch. \n She doesn\'t want a boyfriend.\n<b> \n</b><b> PATRICK \n</b> How much?\n<b> \n</b><b> JOEY\n</b><b> \n</b> Twenty bucks each time you take her out.\n<b> \n</b><b> PATRICK\n</b> I can\'t take a girl like that out on \n twenty bucks.\n<b> \n</b><b> JOEY \n</b> Fine, thirty.\n<b> \n</b> Patrick raises an eyebrow, urging him up\n<b> \n</b><b> JOEY \n</b> (continuing)\n Take it or leave it. This isn\'t a \n negotiation.\n<b> \n</b><b> PATRICK\n</b> Fifty, and you\'ve got your man.\n<b> \n</b> Patrick walks away with a smile\n<b> \n</b><b> EXT. FIELD HOCKEY FIELD - DAY\n</b><b> \n</b> Kat and the rest of the team go through a grueling practice \n session. Kat spares no one as she whips the ball all over \n the field.\n<b> \n</b> Patrick sits on the bleachers nearby, watching. A cigarette \n dangles from his mouth. His pal, SCURVY is next to him.\n<b> \n</b> MR. CHAPIN, the coach, blows the WHISTLE.\n<b> \n</b><b> MR. CHAPIN \n</b> (proudly) \n Good run, Stratford.\n<b> \n</b> Kat nods in response, and the girls leave the field. Patrick \n hops down to follow.\n<b> \n</b><b> PATRICK \n</b> Hey. Girlie.\n<b> \n</b> Kat stops and turns slowly to look at him.\n<b> \n</b><b> PATRICK \n</b> (continuing) \n I mean Wo-man. How ya doin\'?\n<b> \n</b><b> KAT \n</b> (smiles brightly)\n Sweating like a pig, actually. And \n yourself?\n<b> \n</b><b> PATRICK\n</b> There\'s a way to get a guy\'s attention.\n<b> \n</b><b> KAT\n</b> My mission in life.\n<b> \n</b> She stands there undaunted, hand on hip.\n<b> \n</b><b> KAT \n</b> (continuing)\n Obviously, I\'ve struck your fancy. So, \n you see, it worked. The world makes \n sense again.\n<b> \n</b> Patrick\'s eyes narrow. He steps closer.\n<b> \n</b><b> PATRICK \n</b> Pick you up Friday, then\n<b> \n</b><b> KAT\n</b> Oh, right. Friday.\n<b> \n</b> PATRICK backs up a little. He uses his most seductive tone\n<b> \n</b><b> PATRICK\n</b> The night I take you to places you\'ve \n never been before. And back.\n<b> \n</b><b> KAT\n</b> Like where? The 7-Eleven on Burnside? 
\n Do you even know my name, screwboy?\n<b> \n</b><b> PATRICK \n</b> I know a lot more than that\n<b> \n</b> Kat stares at him.\n<b> \n</b><b> KAT \n</b> Doubtful. Very doubtful.\n<b> \n</b> She walks away quickly, leaving him standing alone.\n<b> \n</b><b> PATRICK \n</b> (calling after her)\n You\'re no bargain either, sweetheart.\n<b> \n</b> Scurvy appears at his side\n<b> \n</b><b> SCURVY\n</b> So I guess the Jeep won\'t be getting a \n new Blaupunkt.\n<b> \n</b> ACROSS THE FIELD Cameron and Michael watch.\n<b> \n</b><b> MICHAEL \n</b> He took the bait.\n<b> \n</b><b> STRATFORD HOUSE/BATHROOM - NIGHT\n</b><b> \n</b> Kat washes her face at the sink. Bianca appears behind her, \n and attempts to twist Kat\'s hair into a chignon.\n<b> \n</b> She wacks Bianca away.\n<b> \n</b><b> BIANCA\n</b> Have you ever considered a new look? I \n mean, seriously, you could have some \n potential buried under all this \n hostility.\n<b> \n</b> Kat pushes past her into the hallway.\n<b> \n</b><b> KAT\n</b> I have the potential to smack the crap \n out of you if you don\'t get out of my \n way.\n<b> \n</b><b> BIANCA\n</b> Can you at least start wearing a bra?\n<b> \n</b> Kat SLAMS her door in response.\n<b> \n</b><b> INT. HALLWAY - DAY \n</b><b> \n</b> Patrick, Scurvy and some other randoms head for the exit\n<b> \n</b> SCURVY You up for a burger?\n<b> \n</b> Patrick looks in his wallet. It\'s empty.\n<b> \n</b><b> INT. HALLWAY - DAY\n</b><b> \n</b> Kat stands at her locker, gathering her books. Patrick \n appears at her side, smiling.\n<b> \n</b><b> PATRICK\n</b> Hey\n<b> \n</b> Kat doesn\'t answer\n<b> \n</b><b> PATRICK \n</b> (continuing) \n You hate me don\'t you?\n<b> \n</b><b> KAT\n</b> I don\'t really think you warrant that \n strong an emotion.\n<b> \n</b><b> PATRICK\n</b> Then say you\'ll spend Dollar Night at \n the track with me.\n<b> \n</b><b> KAT \n</b> And why would I do that?\n<b> \n</b><b> PATRICK\n</b> Come on -- the ponies, the flat beer, \n you with money in your eyes, me with my \n hand on your ass...\n<b> \n</b><b> KAT\n</b> You -- covered in my vomit.\n<b> \n</b><b> PATRICK \n</b> Seven-thirty?\n<b> \n</b> She slams her locker shut and walks away\n<b> \n</b><b> EXT. DOWNTOWN STREET - NIGHT\n</b><b> \n</b> Kat emerges from a music store carrying a bag of CDs in her \n teeth, and fumbling through her purse with both hands. She \n finds her keys and pulls them out with a triumphant tug.\n<b> \n</b> She looks up and finds Patrick sitting on the hood of her \n car\n<b> \n</b><b> PATRICK \n</b> Nice ride. Vintage fenders.\n<b> \n</b> Kat takes the bag out of her mouth.\n<b> \n</b><b> KAT \n</b> Are you following me?\n<b> \n</b><b> PATRICK\n</b> I was in the laundromat. I saw your \n car. Thought I\'d say hi.\n<b> \n</b><b> KAT \n</b> Hi\n<b> \n</b> She gets in and starts the car.\n<b> \n</b><b> PATRICK\n</b> You\'re not a big talker, are you?\n<b> \n</b><b> KAT\n</b> Depends on the topic. My fenders don\'t \n really whip me into a verbal frenzy.\n<b> \n</b> She starts to pull out, and is blocked by Joey\'s Viper, \n which pulls up perpendicular to her rear and parks.\n<b> \n</b> Joey and his groupies emerge and head for the liquor store\n<b> \n</b><b> KAT \n</b> (continuing)\n Hey -- do you mind?\n<b> \n</b><b> JOEY \n</b> Not at all\n<b> \n</b> They continue on into the store. 
Kat stares at them in \n disbelief...\n<b> \n</b> Then BACKS UP\n<b> \n</b> Her vintage fenders CRASH into the door of Joey\'s precious \n Viper.\n<b> \n</b> Patrick watches with a delighted grin Joey races out of the \n liquor store.\n<b> \n</b><b> JOEY \n</b> (continuing) \n You fucking bitch!\n<b> \n</b> Kat pulls forward and backs into his car again. Smiling \n sweetly.\n<b> \n</b><b> INT. STRATFORD HOUSE - NIGHT \n</b><b> \n</b> Walter paces as Kat sits calmly on the couch.\n<b> \n</b><b> WALTER\n</b> My insurance does not cover PMS\n<b> \n</b><b> KAT\n</b> Then tell them I had a seizure.\n<b> \n</b><b> WALTER\n</b> Is this about Sarah Lawrence? You \n punishing me?\n<b> \n</b><b> KAT\n</b> I thought you were punishing me.\n<b> \n</b><b> WALTER\n</b> Why can\'t we agree on this?\n<b> \n</b><b> KAT\n</b> Because you\'re making decisions for me.\n<b> \n</b><b> WALTER\n</b> As a parent, that\'s my right\n<b> \n</b><b> KAT\n</b> So what I want doesn\'t matter?\n<b> \n</b><b> WALTER\n</b> You\'re eighteen. You don\'t know what \n you want. You won\'t know until you\'re \n forty-five and you don\'t have it.\n<b> \n</b><b> KAT \n</b> (emphatic)\n I want to go to an East Coast school! I \n want you to trust me to make my own \n choices. I want --\n<b> \n</b> Walter\'s BEEPER goes off\n<b> \n</b><b> WALTER\n</b> Christ! I want a night to go by that \n I\'m not staring a contraction in the \n face.\n<b> \n</b> He walks out, leaving Kat stewing on the couch.\n<b> \n</b><b> INT. HALLWAY - DAY\n</b><b> \n</b> Patrick shuts his graffiti-encrusted locker, revealing \n Joey\'s angry visage, glowering next to him.\n<b> \n</b><b> JOEY\n</b> When I shell out fifty, I expect \n results.\n<b> \n</b><b> PATRICK \n</b> I\'m on it\n<b> \n</b><b> JOEY\n</b> Watching the bitch trash my car doesn\'t \n count as a date.\n<b> \n</b><b> PATRICK\n</b> I got her under control. She just acts \n crazed in public to keep up the image.\n<b> \n</b> Joey sees through the bluff\n<b> \n</b><b> JOEY\n</b> Let me put it to you this way, if you \n don\'t get any action, I don\'t get any \n action. So get your ass on hers by the \n end of the week.\n<b> \n</b> Joey starts to walk off\n<b> \n</b><b> PATRICK \n</b> I just upped my price\n<b> \n</b><b> JOEY \n</b> (turning)\n What?\n<b> \n</b><b> PATRICK\n</b> A hundred bucks a date.\n<b> \n</b><b> JOEY \n</b> Forget it.\n<b> \n</b><b> PATRICK\n</b> Forget her sister, then.\n<b> \n</b> Joey thinks for a frustrated moment, PUNCHES the locker, \n then peels another fifty out of his wallet with a menacing \n scowl.\n<b> \n</b><b> JOEY\n</b> You better hope you\'re as smooth as you \n think you are, Verona.\n<b> \n</b> Patrick takes the money with a smile.\n<b> \n</b><b> INT. TUTORING ROOM - DAY \n</b> Cameron runs a sentence past Bianca.\n<b> \n</b><b> CAMERON\n</b> La copine et I \'ami? La diferance?\n<b> \n</b> Bianca glares at him.\n<b> \n</b><b> BIANCA\n</b> A "copine" is someone you can count on. \n An "ami" is someone who makes promises \n he can\'t keep.\n<b> \n</b> Cameron closes the French book\n<b> \n</b><b> CAMERON\n</b> You got something on your mind?\n<b> \n</b><b> BIANCA\n</b> I counted on you to help my cause. You \n and that thug are obviously failing. \n Aren\'t we ever going on our date?\n<b> \n</b> He melts\n<b> \n</b><b> CAMERON\n</b> You have my word. As a gentleman\n<b> \n</b><b> BIANCA\n</b> You\'re sweet.\n<b> \n</b> She touches his hand. 
He blushes at her praise and watches \n her toss her hair back\n<b> \n</b><b> CAMERON\n</b> (appreciative)\n How do you get your hair to look like \n that?\n<b> \n</b><b> BIANCA\n</b> Eber\'s Deep Conditioner every two days. \n And I never, ever use a blowdryer \n without the diffuser attachment.\n<b> \n</b> Cameron nods with interest.\n<b> \n</b><b> CAMERON\n</b> You know, I read an article about that.\n<b> \n</b> Bianca looks surprised.\n<b> \n</b><b> BIANCA\n</b> You did?\n<b> \n</b><b> INT. BOY\'S ROOM - DAY\n</b><b> \n</b> Patrick stands at the sink, washing his hands Michael and \n Cameron cower in the corner, watching him.\n<b> \n</b><b> PATRICK \n</b> (without turning \n around) \n Say it\n<b> \n</b><b> MICHAEL \n</b> (clearing his \n throat) \n What?\n<b> \n</b><b> PATRICK\n</b> Whatever the hell it is you\'re standin\' \n there waitin\' to say.\n<b> \n</b> Cameron bravely steps forward\n<b> \n</b><b> CAMERON\n</b> We wanted to talk to you about the \n plan.\n<b> \n</b> Patrick turns toward them.\n<b> \n</b><b> PATRICK \n</b> What plan?\n<b> \n</b><b> MICHAEL\n</b> The situation is, my man Cameron here \n has a major jones for Bianca Stratford.\n<b> \n</b><b> PATRICK\n</b> What is it with this chick? She have \n three tits?\n<b> \n</b> Cameron starts to object, but Michael holds up a hand.\n<b> \n</b><b> MICHAEL\n</b> I think I speak correctly when I say \n that Cameron\'s love is pure. Purer than \n say -- Joey Dorsey\'s.\n<b> \n</b><b> PATRICK\n</b> Dorsey can plow whoever he wants. I\'m \n just in this for the cash.\n<b> \n</b> Cameron starts choking at the thought of Joey plowing his \n beloved Bianca.\n<b> \n</b><b> MICHAEL\n</b> That\'s where we can help you. With \n Kat.\n<b> \n</b><b> PATRICK \n</b> So Dorsey can get the girl?\n<b> \n</b><b> MICHAEL\n</b> Patrick, Pat, you\'re not looking at the \n big picture. Joey\'s just a pawn. We set \n this whole thing up so Cameron can get \n the girl.\n<b> \n</b> Patrick smiles. He likes the idea of Joey being a pawn in \n this game.\n<b> \n</b><b> PATRICK\n</b> You two are gonna help me tame the wild \n beast?\n<b> \n</b><b> MICHAEL \n</b> (grinning) \n We\'re your guys.\n<b> \n</b><b> CAMERON\n</b> And he means that strictly in a non- \n prison-movie type of way.\n<b> \n</b><b> PATRICK \n</b> Yeah -- we\'ll see.\n<b> \n</b> He swings the door open and exits, leaving Michael and \n Cameron grinning at each other.\n<b> \n</b><b> MICHAEL \n</b> We\'re in.\n<b> \n</b><b> INT. CLASSROOM - DAY\n</b><b> \n</b> CU on a party invitation as it gets handed out. "Future \n Princeton Grad Bogey Lowenstein proudly presents a Saturday \n night bash at his abode. Casual attire".\n<b> \n</b> Michael holds the invitation up to Cameron.\n<b> \n</b><b> CAMERON\n</b> This is it. A golden opportunity. \n Patrick can ask Katarina to the party.\n<b> \n</b><b> MICHAEL\n</b> In that case, we\'ll need to make it a \n school-wide blow out.\n<b> \n</b><b> CAMERON\n</b> Will Bogey get bent?\n<b> \n</b><b> MICHAEL\n</b> Are you kidding? He\'ll piss himself \n with joy. He\'s the ultimate kiss ass.\n<b> \n</b><b> CAFETERIA - DAY\n</b><b> \n</b> Michael hands a jock the party invite as they pass each \n other at the trash cans.\n<b> \n</b><b> INT. GYM CLASS - DAY \n</b><b> \n</b> The jock calls a fellow jock \n<b> \n</b><b> INT. 
MATH CLASS - DAY \n</b><b> \n</b> Jock whispers to a cheerleader \n<b> \n</b><b> COURTYARD - DAY\n</b><b> \n</b> The cheerleader calls a White Rasta that she\'s making out \n with, showing him the invite.\n<b> \n</b><b> TRACK - DAY\n</b><b> \n</b> The White Rasta tells a cowboy as they run laps during track \n practice.\n<b> \n</b><b> INT. SHOWERS - DAY\n</b><b> \n</b> The cowboy Cells a Coffee Kid, as he shields his java from \n the spray of the shower.\n<b> \n</b><b> INT. HALLWAY - DAY\n</b><b> \n</b> Joey stands ac his open locker with Bianca. The locker is \n an homage to Joey\'s "modeling" career. Cheesy PRINT ADS of \n him -- running in a field of daisies, petting a kitten, etc. \n -- adorn the locker door.\n<b> \n</b><b> JOEY \n</b> Which do you like better?\n<b> \n</b> INSERT - HEADSHOTS of Joey. In one, he\'s pouting in a white \n shirt. In the other, he\'s pouting in a black shirt.\n<b> \n</b><b> BIANCA\n</b> I think I like the white shirt\n<b> \n</b> Joey nods thoughtfully.\n<b> \n</b><b> JOEY \n</b> It\'s more\n<b> \n</b><b> BIANCA\n</b> Expensive?\n<b> \n</b><b> \n</b><b> JOEY\n</b> Exactly \n (beat)\n So, you going to Bogey Lowenbrau\'s \n thing on Saturday?\n<b> \n</b><b> BIANCA\n</b> Hopefully.\n<b> \n</b> He gives her his best flirtatious smile\n<b> \n</b><b> JOEY\n</b> Good, \'cause I\'m not gonna bother if \n you won\'t be there.\n<b> \n</b> He taps her on the nose and she giggles\n<b> \n</b><b> INT. TUTORING ROOM \n</b> Bianca sits across from Cameron, who\'s transfixed, as always\n<b> \n</b><b> BIANCA\n</b> Have you heard about Bogey Lowenstein\'s \n party?\n<b> \n</b><b> CAMERON\n</b> Sure have.\n<b> \n</b><b> BIANCA\n</b> (pouting)\n I really, really, really wanna go, but \n I can\'t. Not unless my sister goes.\n<b> \n</b><b> CAMERON\n</b> I\'m workin\' on it. But she doesn\'t seem \n to be goin\' for him.\n<b> \n</b> He fishes.\n<b> \n</b><b> CAMERON\n</b> (continuing) \n She\'s not a...\n<b> \n</b><b> BIANCA\n</b> Lesbian? No. I found a picture of \n Jared Leto in one of her drawers, so I\'m \n pretty sure she\'s not harboring same-sex \n tendencies.\n<b> \n</b><b> CAMERON\n</b> So that\'s the kind of guy she likes? \n Pretty ones?\n<b> \n</b><b> BIANCA\n</b> Who knows? All I\'ve ever heard her say \n is that she\'d dip before dating a guy \n that smokes.\n<b> \n</b> Cameron furiously takes notes\n<b> \n</b><b> CAMERON\n</b> All right. What else is she partial \n to?\n<b> \n</b><b> INT. DIVE BAR - NIGHT \n</b> Patrick plays pool with some random deviant cronies.\n<b> \n</b> He looks up when he hears a COMMOTION at the door. LOU the \n bouncer is in the midst of throwing Michael and Cameron out.\n<b> \n</b><b> PATRICK\n</b> Lou, it\'s okay. They\'re with me.\n<b> \n</b> Lou looks at Patrick, surprised, then reluctantly lets our \n two non-deviants pass through.\n<b> \n</b> Patrick guides them to a table and sips from a beer.\n<b> \n</b><b> PATRICK \n</b> (continuing)\n What\'ve you got for me?\n<b> \n</b><b> CAMERON\n</b> I\'ve retrieved certain pieces of \n information on Miss Katarina Stratford I \n think you\'ll find helpful.\n<b> \n</b> Cameron pulls out a piece of paper.\n<b> \n</b><b> MICHAEL \n</b> (to Patrick)\n<b> \n</b> One question before we start -- should you be drinking \n alcohol when you don\'t have a liver?\n<b> \n</b><b> PATRICK\n</b> What?!\n<b> \n</b><b> MICHAEL \n</b> Good enough.\n<b> \n</b> Cameron looks up at Patrick.\n<b> \n</b><b> CAMERON\n</b> Number one. 
She hates smokers\n<b> \n</b><b> MICHAEL \n</b> It\'s a lung cancer issue\n<b> \n</b><b> CAMERON\n</b> Her favorite uncle\n<b> \n</b><b> MICHAEL \n</b> Dead at forty-one.\n<b> \n</b> Patrick sits up\n<b> \n</b><b> PATRICK\n</b> Are you telling me I\'m a - \n (spits the word \n out)\n "non-smoker"?\n<b> \n</b><b> MICHAEL \n</b> Just for now.\n<b> \n</b><b> CAMERON\n</b> Another thing. Bianca said that Kat \n likes -- pretty guys.\n<b> \n</b> This is met with silence. Then:\n<b> \n</b><b> PATRICK\n</b> What? You don\'t think I\'m pretty?\n<b> \n</b> Michael smacks Cameron\n<b> \n</b><b> MICHAEL \n</b> He\'s pretty!\n<b> \n</b><b> CAMERON\n</b> Okay! I wasn\'t sure\n<b> \n</b> Cameron goes back to the list.\n<b> \n</b><b> CAMERON\n</b> (continuing)\n Okay -- Likes: Thai food, feminist \n prose, and "angry, stinky girl music of \n the indie-rock persuasion".\n<b> \n</b><b> PATRICK\n</b> So what does that give me? I\'m \n supposed to buy her some noodles and a \n book and sit around listening to chicks \n who can\'t play their instruments?\n<b> \n</b><b> MICHAEL\n</b> Ever been to Club Skunk?\n<b> \n</b><b> PATRICK\n</b> Yeah.\n<b> \n</b><b> CAMERON\n</b> Gigglepuss is playing there tomorrow \n night.\n<b> \n</b><b> PATRICK \n</b> Don\'t make me do it, man\n<b> \n</b><b> MICHAEL\n</b> Assail your ears for one night.\n<b> \n</b><b> CAMERON\n</b> It\'s her favorite band.\n<b> \n</b> Patrick groans\n<b> \n</b><b> MICHAEL\n</b> I also retrieved a list of her most \n recent CD purchases, courtesy of \n American Express.\n<b> \n</b> He hands it over.\n<b> \n</b><b> PATRICK \n</b> (smiling)\n Michael -- did you get this information \n "illegally"?\n<b> \n</b> Michael puts a finger to his lips.\n<b> \n</b><b> MICHAEL\n</b> I prefer to think of it simply as an \n alternative to what the law allows.\n<b> \n</b><b> PATRICK \n</b> I\'m likin\' you guys better\n<b> \n</b> He looks down at the list of CDs.\n<b> \n</b><b> PATRICK \n</b> (continuing) \n This is really music?\n<b> \n</b><b> INT. KAT\'S ROOM - NIGHT\n</b><b> \n</b> MUSIC BLARES in a room with minimalist decor splashed with \n indie rock band posters and flyers.\n<b> \n</b> Kat and Mandella dance as they dress and apply make-up \n Bianca enters, interrupting their fun.\n<b> \n</b><b> BIANCA\n</b> Can you turn down the Screaming \n Menstrual Bitches? I\'m trying to study.\n<b> \n</b> Kat doesn\'t move, so Bianca crosses to the stereo, turning \n down the volume.\n<b> \n</b><b> BIANCA\n</b> (continuing)\n Don\'t tell me you\'re actually going \n out? On a school night, no less.\n<b> \n</b> Kat shoots her a glare\n<b> \n</b><b> BIANCA\n</b> (continuing; \n excited)\n Oh my God, does this mean you\'re \n becoming normal?\n<b> \n</b><b> KAT\n</b> It means that Gigglepuss is playing at \n Club Skunk and we\'re going.\n<b> \n</b><b> BIANCA\n</b> (disappointed)\n Oh, I thought you might have a date\n (beat)\n I don\'t know why I\'m bothering to ask, \n but are you going to Bogey Lowenstein\'s \n party Saturday night?\n<b> \n</b><b> KAT\n</b> What do you think?\n<b> \n</b><b> BIANCA\n</b> I think you\'re a freak. I think you do \n this to torture me. And I think you \n suck.\n<b> \n</b> She smiles sweetly and shuts the door behind her. Kat \n doesn\'t bat an eye. She grabs her purse and opens the door\n<b> \n</b><b> KAT \n</b> Let\'s hit it.\n<b> \n</b><b> EXT. CLUB SKUNK - NIGHT\n</b><b> \n</b> A happy black and white neon skunk sprays fine mist on the \n line of kids below.\n<b> \n</b><b> INT. 
CLUB FOYER - NIGHT\n</b><b> \n</b> Kat and Mandella walk in, Mandella nervously pulling out her \n fake ID. The giant, afroed bouncer, BRUCE, looks typically \n mono-syllabic.\n<b> \n</b><b> MANDELLA \n</b> (whispering to Kat) \n You think this\'ll work?\n<b> \n</b><b> KAT \n</b> No fear.\n<b> \n</b> They approach Bruce. Kat puts on her happy, shiny face\n<b> \n</b><b> KAT \n</b> (continuing)\n Hello! We\'d like two for Gigglepuss!\n<b> \n</b> Bruce looks the girls up and down.\n<b> \n</b><b> BRUCE \n</b> I can count.\n<b> \n</b> He looks at their IDs. Mandella gently moves Kat aside, \n wearing a face that could only be described as "I AM a \n Victoria\'s Secret model."\n<b> \n</b><b> MANDELLA \n</b> I\'ll bet you can..\n<b> \n</b> She sticks out her chest and licks her lips. Bruce stares \n at her deadpan and hands her back the IDs.\n<b> \n</b><b> BRUCE \n</b> Go ahead.\n (to Mandella)\n And you\n<b> \n</b><b> MANDELLA \n</b> (all come hither)\n Yes?\n<b> \n</b><b> BRUCE\n</b> Take it easy on the guys in there.\n<b> \n</b> Mandella winks at him and sashays inside Kat: follows \n behind, shaking her head.\n<b> \n</b><b> EXT. CLUB SKUNK - NIGHT \n</b><b> \n</b> Patrick\'s mail truck clatters to a stop out front.\n<b> \n</b><b> INT. CLUB FOYER - NIGHT\n</b><b> \n</b> Patrick walks up to Bruce, who\'s frisking a badly mowhawked \n PIERCED EYEBROW BOY. Bruce pulls a SWITCHBLADE out of the \n boy\'s inside pocket.\n<b> \n</b><b> BRUCE\n</b> Next time, leave the Bic at home, \n Skippy.\n<b> \n</b><b> SKIPPY\n</b> It\'s a bottle opener.\n<b> \n</b> Bruce pushes him inside the club, then sees Patrick.\n<b> \n</b><b> BRUCE \n</b> Verona, my man.\n<b> \n</b> They shake.\n<b> \n</b><b> PATRICK \n</b> Always a pleasure, Brucie.\n<b> \n</b><b> BRUCE\n</b> Didn\'t have you pegged for a Gigglepuss \n fan. Aren\'t they a little too pre-teen \n belly-button ring for you?\n<b> \n</b><b> PATRICK\n</b> Fan of a fan. You see a couple of \n minors come in?\n<b> \n</b><b> BRUCE\n</b> Never\n<b> \n</b><b> PATRICK\n</b> Padua girls. One tall, decent body. \n The other one kinda short and \n undersexed?\n<b> \n</b><b> BRUCE \n</b> Just sent \'em through.\n<b> \n</b> Patrick starts to go in\n<b> \n</b><b> BRUCE \n</b> (continuing)\n Hey -- what happened to that chick you \n brought last time? The one with the \n snake?\n<b> \n</b> Patrick laughs and goes into the club\n<b> \n</b><b> INT. CLUB - NIGHT\n</b><b> \n</b> Onstage, the all-female band GIGGLEPUSS is parlaying their \n bad girl sass into a ripping punk number.\n<b> \n</b> Near the stage is a joyful mass of pogo-ing teens AT THE BAR\n<b> \n</b> Patrick bellies up and looks around the club. Gigglepuss \n finishes a song.\n<b> \n</b><b> LEAD SINGER\n</b> Hello, out there. We\'re Gigglepuss and \n we\'re from Olympia.\n<b> \n</b> A teenage boy in the audience takes the opportunity to \n scream.\n<b> \n</b><b> BOY (0.S.)\n</b> Pet my kitty!\n<b> \n</b><b> LEAD SINGER \n</b> Meow\n<b> \n</b> They rev into their next song. \n<b> \n</b><b> NEAR THE STAGE \n</b><b> \n</b> Mandella and Kat glow with sweat. When they hear the \n opening chords of the song, they look at each other and \n scream with glee as they begin to dance. They couldn\'t be \n having a better time.\n<b> \n</b><b> AT THE BAR\n</b><b> \n</b> Patrick signals to get the bartender\'s attention and looks \n across the bouncing surge of the crowd. He spots Kat and \n Mandella singing along.\n<b> \n</b><b> HIS POV\n</b><b> \n</b> The gleeful Kat -- dancing and looking completely at ease. 
\n None of her usual "attitude". Patrick is transfixed. And \n most definitely attracted.\n<b> \n</b> NEAR THE STAGE Kat looks at Mandella.\n<b> \n</b><b> KAT \n</b> (shouting)\n I need agua!\n<b> \n</b> She makes her way through the crowd to the bar. AT THE BAR\n<b> \n</b> She made it. She signals for the bartender and as she\'s \n waiting, looks around. She spots Patrick a few feet away\n<b> \n</b><b> KAT \n</b> (continuing to \n herself) \n Shit\n<b> \n</b> She sneaks a glance. He\'s staring, but this time he looks \n away before she can. Despite herself, she\'s miffed.\n<b> \n</b> The bartender arrives\n<b> \n</b><b> BARTENDER \n</b> (shouting) \n What can I get you?\n<b> \n</b><b> KAT \n</b> Two waters.\n<b> \n</b> She looks at Patrick again. He\'s completely absorbed in the \n band. She scowls. The bottled water arrives and she \n marches off, forgetting to pay.\n<b> \n</b> She walks up to Patrick.\n<b> \n</b><b> KAT \n</b> (continuing) \n You\'re not fooling anyone.\n<b> \n</b> Patrick looks at her, surprised\n<b> \n</b><b> PATRICK \n</b> (yelling) \n hey. Great show, huh?\n<b> \n</b><b> KAT \n</b> (yelling)\n<b> \n</b> If you\'re planning on asking me out you might as well get it \n over with.\n<b> \n</b><b> PATRICK \n</b> (yelling) \n Excuse me?\n<b> \n</b><b> KAT \n</b> (yelling)\n That\'s what you want, isn\'t it?\n<b> \n</b><b> PATRICK \n</b> (yelling; gesturing \n toward the band)\n Do you mind? You\'re sort of ruining it \n for me.\n<b> \n</b> Kat steams. And watches him watch the band\n<b> \n</b><b> KAT \n</b> (yelling)\n You\'re not surrounded by your usual \n cloud of smoke.\n<b> \n</b> The band takes a break, so they can stop yelling now\n<b> \n</b><b> PATRICK \n</b> I know. I quit.\n<b> \n</b> He leans back, making no attempt to hit on her. She moves \n closer.\n<b> \n</b><b> KAT \n</b> Oh, really?\n<b> \n</b> He motions toward the stage\n<b> \n</b><b> PATRICK\n</b> You know, these guys are no Bikini Kill \n or The Raincoats, but they\'re right up \n there.\n<b> \n</b><b> KAT\n</b> You know who The Raincoats are?\n<b> \n</b><b> PATRICK \n</b> Why, don\'t you?\n<b> \n</b> She\'s completely taken aback. He uses the moment to his \n advantage and brushes her hair back as he speaks right into \n her ear.\n<b> \n</b><b> PATRICK \n</b> (continuing)\n I watched you out there I\'ve never \n seen you look like that\n<b> \n</b> Kat steps away, brushing the hair back that he just touched \n Her cheeks pinken.\n<b> \n</b> His cocky side is back in a flash\n<b> \n</b><b> PATRICK \n</b> (continuing) \n Come to that party with me.\n<b> \n</b> At that moment, the band starts another SONG\n<b> \n</b><b> KAT \n</b> (yelling) \n What?\n<b> \n</b> The bartender approaches.\n<b> \n</b><b> BARTENDER \n</b> (to Kat, yelling) \n You forgot to pay!\n<b> \n</b><b> PATRICK \n</b> (yelling) \n I got it, Rick.\n<b> \n</b> He tosses some bills on the bar\n<b> \n</b> Rather than thank him, Kat simply watches him, trying to \n figure out his motive.\n<b> \n</b><b> PATRICK \n</b> (continuing; \n yelling) \n Nine-thirty then.\n<b> \n</b> A few people have gotten between them at the bar and she \n can\'t hear a word he\'s saying. She gives him one last look \n and heads back into the crowd.\n<b> \n</b> Patrick smiles. She didn\'t say no this time.\n<b> \n</b><b> EXT. CLUB SKUNK - NIGHT\n</b><b> \n</b> The crowd files out of the club, Kat and Mandella amongst \n them. A^ they\'re walking toward the parking lot, Patrick \n coasts by in his truck. The gears GRIND. 
He yells out the \n window.\n<b> \n</b><b> MANDELLA \n</b> What\'d he say?\n<b> \n</b><b> KAT \n</b> Who cares?\n<b> \n</b> Mandella watches Kat as she stares after Patrick\n<b> \n</b><b> MANDELLA\n</b> Has he importun\'d you with love in \n honourable fashion?\n<b> \n</b> Kat glances sharply at her.\n<b> \n</b><b> MANDELLA \n</b> (continuing; off \n her look)\n Don\'t be Cruella with me. I\'m in favor \n of romance. You\'re the one that wants \n to march on Washington every five \n minutes.\n<b> \n</b> Kat pokes her, then looks back at the club dreamily.\n<b> \n</b><b> KAT \n</b> Gigglepuss was so beyond.\n<b> \n</b> Mandella nods.\n<b> \n</b><b> MANDELLA\n</b> They were. I only wish William could \n have been here to witness the rebirth of \n punk rock with us.\n<b> \n</b> Kat links her arm through Mandella\'s and they head for the \n car.\n<b> \n</b><b> KAT\n</b> So true.\n<b> \n</b><b> INT. HALLWAY - DAY \n</b> Cameron and Michael are at Michael\'s locker.\n<b> \n</b><b> CAMERON\n</b> So, then she says that she almost \n didn\'t wear the Kenneth Coles with that \n dress because she thought she was \n mixing, you know, genres. And the fact \n that I noticed -- and I\'m quoting here -\n "really meant something."\n<b> \n</b> Cameron looks At Michael expectantly\n<b> \n</b><b> MICHAEL \n</b> You told me that part already.\n<b> \n</b><b> CAMERON\n</b> Hell, I\'ve just been going over the \n whole thing in my head and -\n<b> \n</b> Joey appears over Cameron\'s shoulder. \n<b> \n</b><b> JOEY \n</b> Hey. Dingo Boingo\n<b> \n</b> Cameron and Michael look at each other And turn around \n slowly\n<b> \n</b><b> JOEY \n</b> (continuing; to \n Michael)\n I hear you\'re helpin\' Verona.\n<b> \n</b><b> MICHAEL \n</b> Uh, yeah. We\'re old friend*\n<b> \n</b><b> JOEY \n</b> You and Verona?\n<b> \n</b><b> MICHAEL\n</b> What? We took bathes together when we \n were kids.\n<b> \n</b> It\'s incredibly obvious that he\'s lying. Joey eyes him then \n turns to Cameron.\n<b> \n</b><b> JOEY\n</b> What\'s your gig in all this?\n<b> \n</b><b> CAMERON\n</b> I\'m just the new guy.\n<b> \n</b> Joey turns back to Michael, grabbing the alligator on his \n shirt and twisting it.\n<b> \n</b><b> JOEY\n</b> You better not fuck this up. I\'m \n heavily invested.\n<b> \n</b><b> MICHAEL\n</b> Hey -- it\'s all for the higher good \n right?\n<b> \n</b> Joey lets go of Michael and SHOVES Cameron against a locker \n for good measure, as he walks away-\n<b> \n</b><b> CAMERON\n</b> Is it about me?\n<b> \n</b><b> EXT. MISS PERKY\'S OFFICE - DAY\n</b><b> \n</b> Kat sits outside waiting for her appointment, bored and \n annoyed.\n<b> \n</b> The door opens and Miss Perky escorts Patrick out\n<b> \n</b><b> MISS PERKY\n</b> You\'re completely demented.\n<b> \n</b><b> PATRICK\n</b> (cheery)\n See you next week!\n<b> \n</b> Kat stands and Patrick sees her.\n<b> \n</b> Miss Perky watches in horror\n<b> \n</b><b> MISS PERKY\n</b> You two know each other?\n<b> \n</b><b> PATRICK/KAT \n</b> Yeah/No.\n<b> \n</b> Miss Perky grabs Kat and shoves her into her office.\n<b> \n</b><b> MISS PERKY \n</b> (to Patrick)\n Dear God, stay away from her. If you \n two ever decided to breed, evil would \n truly walk the earth.\n<b> \n</b> Patrick gives Kat one last look before the door shuts, then \n smiles-\n<b> \n</b><b> EXT. STRATFORD HOUSE - NIGHT \n</b><b> \n</b> The lights are on, illuminating the yard\n<b> \n</b><b> INT. STRATFORD HOUSE/UPSTAIRS HALLWAY - NIGHT\n</b><b> \n</b> Bianca and Chastity stand outside Kat\'s room. 
MUSIC is \n blaring and the door is shut. Bianca looks at her watch\n<b> \n</b><b> BIANCA\n</b> She\'s obviously not going.\n<b> \n</b><b> INT. LIVING ROOM - NIGHT\n</b><b> \n</b> Across the carpet, two pairs of teenage girl feet sneak \n past. Bianca and Chastity, teddy bear purses in hand.\n<b> \n</b> FROM THE KITCHEN A RUSTLING is heard. The girls freeze.\n<b> \n</b> Walter emerges from the kitchen with a mile-high sandwich \n The girls are like statues. Walter jumps.\n<b> \n</b><b> BIANCA\n</b> Daddy, I --\n<b> \n</b><b> WALTER\n</b> And where\'re you going?\n<b> \n</b><b> BIANCA\n</b> If you must know, we were attempting to \n go to a small study group of friends.\n<b> \n</b><b> WALTER\n</b> Otherwise known as an orgy?\n<b> \n</b><b> BIANCA\n</b> It\'s just a party. Daddy, but I knew \n you\'d forbid me to go since "Gloria \n Steinem" over there isn\'t going --\n<b> \n</b> She points to Kat -- Walkman blaring -- who comes \n downstairs, wearing a baby tee and battered Levis. Her \n relaxing-at-home look is about 400 times sexier than her at-\n school look. She wanders toward the kitchen.\n<b> \n</b> Walter directs his attention toward Kat.\n<b> \n</b><b> WALTER\n</b> Do you know about any party? Katarina?\n<b> \n</b> Kat shrugs as she comes back out of the kitchen with an \n apple\n<b> \n</b><b> BIANCA\n</b> Daddy, people expect me to be there!\n<b> \n</b><b> WALTER\n</b> If Kat\'s not going, you\'re not going.\n<b> \n</b> Bianca turns to Kat, eyes ablaze\n<b> \n</b><b> BIANCA\n</b> You\'re ruining my life\' Because you \n won\'t be normal, I can\'t be normal.\n<b> \n</b><b> KAT \n</b> What\'s normal?\n<b> \n</b><b> BIANCA\n</b> Bogey Lowenstein\'s party is normal, but \n you\'re too busy listening to Bitches Who \n Need Prozac to know that.\n<b> \n</b><b> WALTER\n</b> What\'s a Bogey Lowenstein?\n<b> \n</b> Kat takes off her earphones, ready to do battle\n<b> \n</b><b> BIANCA\n</b> Can\'t you forget for just one night \n that you\'re completely wretched?\n<b> \n</b><b> KAT\n</b> At least I\'m not a clouted fen- sucked \n hedge-pig.\n<b> \n</b> Bianca tosses her hair.\n<b> \n</b><b> BIANCA\n</b> Like I\'m supposed to know what that \n even means.\n<b> \n</b><b> KAT\n</b> It\'s Shakespeare. Maybe you\'ve heard \n of him?\n<b> \n</b><b> BIANCA\n</b> Yeah, he\'s your freak friend Mandella\'s \n boyfriend. I guess since I\'m not \n allowed to go out, I should obsess over \n a dead guy, too.\n<b> \n</b><b> WALTER\n</b> Girls\n<b> \n</b> Kat stares Bianca down\n<b> \n</b><b> KAT\n</b> I know about the goddamn party. I\'m \n going.\n<b> \n</b> Bianca and Chastity look at each other, thrilled, and burst \n into gleeful screams.\n<b> \n</b> A startled Walter clutches Bianca in a protective hug.\n<b> \n</b><b> WALTER\n</b> Oh, God. It\'s starting.\n<b> \n</b><b> BIANCA\n</b> It\'s just a party. 
Daddy.\n<b> \n</b> Walter looks dazed.\n<b> \n</b><b> WALTER\n</b> Wear the belly before you go.\n<b> \n</b><b> BIANCA\n</b> Daddy, no!\n<b> \n</b><b> WALTER\n</b> Just for a minute\n<b> \n</b> He rushes to a cupboard and pulls out a padded faux-\n pregnancy belly.\n<b> \n</b><b> WALTER\n</b> (continuing)\n I want you to realize the weight of \n your decisions.\n<b> \n</b> He hangs the belly on her as she stands mortified.\n<b> \n</b><b> BIANCA\n</b> You are so completely unbalanced.\n<b> \n</b><b> KAT \n</b> Can we go now?\n<b> \n</b><b> WALTER\n</b> (to Bianca)\n Promise me you won\'t talk to any boys \n unless your sister is present.\n<b> \n</b><b> BIANCA\n</b> Why?\n<b> \n</b><b> WALTER\n</b> Because she\'ll scare them away.\n<b> \n</b> Kat stomps to the door, grabbing her car keys off the hall \n table and a sweater from the coat rack. She flings open the \n door and...\n<b> \n</b> There stands Patrick.\n<b> \n</b><b> PATRICK \n</b> Nine-thirty right?\n<b> \n</b> Kat\'s in shock\n<b> \n</b><b> PATRICK \n</b> (continuing)\n I\'m early.\n<b> \n</b> She holds up her keys\n<b> \n</b><b> KAT \n</b> I\'m driving.\n<b> \n</b> He peeks in behind her.\n<b> \n</b><b> PATRICK \n</b> Who knocked up your sister?\n<b> \n</b><b> INT. BOGEY LOWENSTEIN\'S HOUSE - NIGHT\n</b><b> \n</b> BOGEY, a short Future MBA in a tux, greets his guests like a \n pro, handing out cigars and martinis.\n<b> \n</b><b> BOGEY\n</b> Nice to see you. Martini bar to the \n right, shots in the kitchen.\n<b> \n</b> The house is filled to capacity with Padua High\'s finest Kat \n pushes through the crowd. Patrick saunters in behind her\n<b> \n</b><b> INT. BOGEY\'S KITCHEN - NIGHT\n</b><b> \n</b> Joey lines up a row of shots amid much whooping and \n hollering within the jock crowd.\n<b> \n</b> Kat enters, then quickly tries to make an about face. Joey \n sees her and rushes over to block her, standing in the \n doorway.\n<b> \n</b><b> JOEY\n</b> Lookin\' fresh tonight, Pussy-Kat\n<b> \n</b> Kat gives him a death look and then stops and points at his \n forehead.\n<b> \n</b><b> KAT\n</b> Wait -- was that?-- Did your hairline \n just recede?\n<b> \n</b> He panics, whipping out a handy pocket mirror She\'s \n already walking away.\n<b> \n</b><b> JOEY \n</b> Where ya goin?\n<b> \n</b><b> KAT\n</b> Away.\n<b> \n</b><b> JOEY \n</b> Your sister here?\n<b> \n</b> Kat\'s face shows utter hatred\n<b> \n</b><b> KAT \n</b> Leave my sister alone.\n<b> \n</b><b> JOEY \n</b> (smirking) \n And why would I do that?\n<b> \n</b> A RUCKUS sounds from the next room\n<b> \n</b><b> JOCK \n</b> A fight!\n<b> \n</b> The other jocks rush to watch as two Coffee Kids splash \n their cupfuls on each other.\n<b> \n</b><b> COFFEE KID #1\n</b> That was a New Guinea Peaberry, you \n Folger\'s-crystals-slurping-buttwipe.\n<b> \n</b> Caffeinated fists fly. Joey slithers away from the door to \n watch, giving Kat one last smirk, just as Bianca walks into \n the kitchen.\n<b> \n</b><b> JOEY \n</b> Just who I was looking for.\n<b> \n</b> He puts his arm around Bianca and escorts her out\n<b> \n</b><b> KAT \n</b><b> BIANCA\n</b><b> \n</b> Bianca keeps walking, ignoring Kat\n<b> \n</b> A GUY pouring shots hands Kat one She downs it and accepts \n another.\n<b> \n</b><b> GUY\n</b> Drink up, sister.\n<b> \n</b> Patrick walks up\n<b> \n</b><b> PATRICK\n</b> What\'s this?\n<b> \n</b><b> KAT \n</b> (mocking)\n "I\'m getting trashed, man." 
Isn\'t that \n what you\'re supposed to do at a party?\n<b> \n</b><b> PATRICK\n</b> I say, do what you wanna do.\n<b> \n</b><b> KAT\n</b> Funny, you\'re the only one\n<b> \n</b> She downs another.\n<b> \n</b><b> INT. BOGEY\'S LIVING ROOM - NIGHT\n</b><b> \n</b> Cameron and Michael enter. Cameron looks, around for his \n beloved, while Michael schmoozee with all in attendance and \n dishes dirt simultaneously.\n<b> \n</b><b> MICHAEL \n</b> (high-fiving a \n jock)\n Moose, my man! \n (to Cameron)\n Ranked fifth in the state. Recruiters \n have already started calling.\n<b> \n</b> Cameron nods intently\n<b> \n</b><b> MICHAEL \n</b> (continuing; \n grabbing his belt)\n Yo, Clem. \n (to Cameron)\n A Patsy Cline fan, but hates the new \n Leanne Rimes. \n (with a Jamaican \n swagger)\n Ziggy, peace, bra. \n (to Cameron)\n Prefers a water pipe, but has been \n known to use a bong.\n<b> \n</b> Michael spots Bianca and Chastity, watching the skirmish, \n and points Cameron\'s body in her direction.\n<b> \n</b><b> MICHAEL \n</b> (continuing)\n Follow the love, man\n<b> \n</b> ON BIANCA AND CHASTITY Bianca cranes her neck\n<b> \n</b><b> BIANCA\n</b> Where did he go? He was just here.\n<b> \n</b><b> CHASTITY \n</b> Who?\n<b> \n</b><b> BIANCA\n</b> Joey.\n<b> \n</b> Cameron walks over.\n<b> \n</b><b> CAMERON\n</b> Evening, ladies.\n<b> \n</b> Bianca turns and graces him with a pained smile.\n<b> \n</b><b> BIANCA\n</b> Hi.\n<b> \n</b><b> CAMERON\n</b> Looks like things worked out tonight, \n huh?\n<b> \n</b> Bianca ignores the question and tries to pawn him off\n<b> \n</b><b> BIANCA\n</b> You know Chastity?\n<b> \n</b><b> CAMERON\n</b> I believe we share an art instructor\n<b> \n</b><b> CHASTITY \n</b> Great\n<b> \n</b><b> BIANCA\n</b> Would you mind getting me a drink, \n Cameron?\n<b> \n</b><b> CAMERON\n</b> Certainly\n Pabst? Old Milwaukee? RaiJieer?\n<b> \n</b> Bianca gives him a tense smile.\n<b> \n</b><b> BIANCA\n</b> Surprise me.\n<b> \n</b> He heads for the kitchen. Joey walks up and grabs her \n around the waist.\n<b> \n</b> She giggles as he picks her up and carries her off -- just \n as Cameron returns, a beer -- complete with a napkin and \n straw -- in his hand.\n<b> \n</b> Chastity glares with a jealous fury after Bianca and Joey, \n then gives Cameron the once-over and walks away.\n<b> \n</b> Michael appears.\n<b> \n</b><b> MICHAEL\n</b> Extremely unfortunate maneuver.\n<b> \n</b><b> CAMERON\n</b> The hell is that? What kind of \'guy \n just picks up a girl and carries her \n away while you\'re talking to her?\n<b> \n</b><b> MICHAEL\n</b> Buttholus extremus. But hey, you\'re \n making progress.\n<b> \n</b><b> CAMERON\n</b> No, I \' m not.\n<b> \n</b> He smacks himself in the head\n<b> \n</b><b> CAMERON\n</b> (continuing)\n She used me! She wants to go out with \n Dorsey. Not me. I\'m an idiot!\n<b> \n</b> Michael pats him on the shoulder.\n<b> \n</b><b> MICHAEL \n</b> At least you\'re self-aware\n<b> \n</b><b> BOGEY\'S KITCHEN - NIGHT\n</b><b> \n</b> Kat and a crowd of White Rastas and Cowboys stand in a \n drunken group hug singing "I Shot the Sheriff". Kat has \n another shot glass in hand.\n<b> \n</b> Patrick is showing a scar to an inebriated, enraptured \n cheerleader. He looks up at Kat and smiles meets his eyes \n then looks away.\n<b> \n</b><b> INT. BOGEY\'S LIVING ROOM - NIGHT \n</b><b> \n</b> Bianca stands next to Joey, sipping from her beer\n<b> \n</b><b> JOEY\n</b> So yeah, I\'ve got the Sears catalog \n thing going -- and the tube sock gig " \n that\'s gonna be huge. 
And then I\'m up \n for an ad for Queen Harry next week.\n<b> \n</b><b> BIANCA\n</b> Queen Harry?\n<b> \n</b><b> JOEY\n</b> It\'s a gay cruise line, but I\'ll be, \n like, wearing a uniform and stuff.\n<b> \n</b> Bianca tries to appear impressed, but it\'s getting \n difficult.\n<b> \n</b><b> BIANCA\n</b> Neat...\n<b> \n</b><b> JOEY\n</b> My agent says I\'ve got a good shot at \n being the Prada guy next year.\n<b> \n</b> He looks over her shoulder and waves at someone. Bianca \n takes the opportunity to escape.\n<b> \n</b><b> BIANCA\n</b> I\'ll be right back.\n<b> \n</b><b> INT. BOGEY\'S BATHROOM - NIGHT\n</b><b> \n</b> Bianca shuts the door and leans on it with a sigh. Chastity \n applies lip-gloss in the mirror.\n<b> \n</b><b> BIANCA\n</b> He practically proposed when he found \n out we had the same dermatologist. I \n mean. Dr. Bonchowski is great an all, \n but he\'s not exactly relevant party \n conversation.\n<b> \n</b><b> CHASTITY \n</b> Is he oily or dry?\n<b> \n</b><b> BIANCA\n</b> Combination. I don\'t know -- I thought \n he\'d be different. More of a \n gentleman...\n<b> \n</b> Chastity rolls her eyes\n<b> \n</b><b> CHASTITY \n</b> Bianca, I don\'t think the highlights of \n dating Joey Dorsey are going to include \n door-opening and coat-holding.\n<b> \n</b><b> BIANCA\n</b> Sometimes I wonder if the guys we\'re \n supposed to want to go out with are the \n ones we actually want to go out with, \n you know?\n<b> \n</b><b> CHASTITY \n</b> All I know is -- I\'d give up my private \n line to go out with a guy like Joey.\n<b> \n</b> There\'s a KNOCK at the door. Bianca opens it to find a very \n drunken Kat.\n<b> \n</b><b> KAT\n</b> Bianca, I need to talk to you -- I need \n to tell you --\n<b> \n</b><b> BIANCA\n</b> (cutting her off)\n I really don\'t think I need any social \n advice from you right now.\n<b> \n</b> Bianca grabs Chastity\'s arm and they exit\n<b> \n</b><b> INT. BOGEY\'S KITCHEN - NIGHT - LATER \n</b><b> \n</b> Patrick tries to remove a shot glass from Kat\'s hand.\n<b> \n</b><b> PATRICK\n</b><b> \n</b> Maybe you should let me have it.\n<b> \n</b> Kat is fierce in her refusal to let go\n<b> \n</b><b> KAT\n</b> I want another one\n<b> \n</b> Joey enters, grabbing Patrick by the shoulder, distracting \n him from his task.\n<b> \n</b><b> JOEY\n</b> My man\n<b> \n</b> As Patrick turns, Kat breaks free and dives into the sea of \n dancing people in the dining room.\n<b> \n</b><b> PATRICK \n</b> (annoyed)\n It\'s about time.\n<b> \n</b><b> JOEY \n</b> A deal\'s a deal.\n<b> \n</b> He peels off some bills\n<b> \n</b><b> JOEY \n</b> (continuing)\n How\'d you do it?\n<b> \n</b><b> PATRICK\n</b> Do what?\n<b> \n</b><b> JOEY\n</b> Get her to act like a human\n<b> \n</b> A very drunken Kat jumps up onto the kitchen island and \n starts dancing by herself. She lets loose, hair flying. \n She\'s almost burlesque.\n<b> \n</b> Others form a crowd, clapping and cheering her on\n<b> \n</b> She swings her head around BANGING it on a copper pot \n hanging from the rack above the center island. She starts \n to sway, then goes down as Patrick rushes over to catch her.\n<b> \n</b> The others CLAP, thinking this is a wonderful finale. \n Patrick sets her down on her feet, holding her up\n<b> \n</b><b> PATRICK\n</b> Okay?\n<b> \n</b><b> KAT\n</b> I\'m fine. 
I\'m\n<b> \n</b> She tries to push him away, but staggers when she does grabs \n her again, bracing her.\n<b> \n</b><b> PATRICK\n</b> You\'re not okay.\n<b> \n</b><b> KAT\n</b> I just need to lie down for awhile\n<b> \n</b><b> PATRICK\n</b> Uh, uh. You lie down and you\'ll go to \n sleep\n<b> \n</b><b> KAT\n</b> I know, just let me sleep\n<b> \n</b><b> PATRICK\n</b> What if you have a concussion? My dog \n went to sleep with a concussion and woke \n up a vegetable. Not that I could tell \n the difference...\n<b> \n</b> She tries to sit on the floor\n<b> \n</b><b> KAT\n</b> Okay, I\'ll just sleep but stay awake, \n okay?\n<b> \n</b> He pulls her back to her\n<b> \n</b><b> PATRICK\n</b> C\'mon, let\'s walk\n<b> \n</b><b> INT. BOGEY\'S DINING ROOM - NIGHT\n</b><b> \n</b> As Patrick walks Kat through the dining room, Cameron grabs \n his arm.\n<b> \n</b> CAMERON We need to talk.\n<b> \n</b><b> PATRICK\n</b> Cameron, I\'m a little busy\n<b> \n</b><b> CAMERON\n</b> It\'s off. The whole thing.\n<b> \n</b> Kat slides down to the floor and Patrick struggles to get h \n back on her feet.\n<b> \n</b><b> PATRICK\n</b> What \'re you talking about?\n<b> \n</b><b> CAMERON\n</b> She\'s partial to Joey, not me\n<b> \n</b> Patrick doesn\'t have time for this.\n<b> \n</b><b> PATRICK\n</b> Cameron -- do you like the girl?\n<b> \n</b><b> CAMERON\n</b> Sure\n<b> \n</b><b> PATRICK \n</b> (impatient)\n Then, go get her\n<b> \n</b> Patrick continues walking an oblivious Kat outside. Cameron \n stands there, unsure how to make use of this advice\n<b> \n</b><b> EXT. BOGEY LOWENSTEIN\'S HOUSE - NIGHT \n</b><b> \n</b> Patrick marches Kat around the yard, holding her up\n<b> \n</b><b> KAT\n</b> This is so patronizing.\n<b> \n</b><b> PATRICK\n</b> Leave it to you to use big words when \n you\'re shitfaced.\n<b> \n</b><b> KAT\n</b> Why \'re you doing this?\n<b> \n</b><b> PATRICK\n</b> I told you\n<b> \n</b><b> KAT\n</b> You don\'t care if I die\n<b> \n</b><b> PATRICK\n</b> Sure, I do\n<b> \n</b><b> KAT\n</b> Why?\n<b> \n</b><b> PATRICK\n</b> Because then I\'d have to start taking \n out girls who like me.\n<b> \n</b><b> KAT\n</b> Like you could find one\n<b> \n</b><b> PATRICK\n</b> See that? Who needs affection when \n I\'ve got blind hatred?\n<b> \n</b><b> KAT\n</b> Just let me sit down.\n<b> \n</b> He walks her over to the swingset and plops her down in a \n swing, moving her hands to hang onto the chains.\n<b> \n</b><b> PATRICK\n</b> How\'s that?\n<b> \n</b> She sits and looks at him for a moment with a smile. Then \n FALLS over backward.\n<b> \n</b><b> PATRICK \n</b> (continuing)\n Jesus. You\'re like a weeble\n<b> \n</b> Patrick rushes to right her, then starts pushing her on the \n swing to keep her entertained.\n<b> \n</b><b> PATRICK \n</b> (continuing)\n Why\'d you let him get to you?\n<b> \n</b><b> KAT\n</b> Who? \n<b> \n</b><b> PATRICK\n</b> Dorsey.\n<b> \n</b><b> KAT\n</b> I hate him.\n<b> \n</b><b> PATRICK\n</b> I know. It\'d have to be a pretty big \n deal to get you to mainline tequila. You \n don\'t seem like the type.\n<b> \n</b><b> KAT\n</b> (holding up a \n drunken head)\n Hey man. . . You don \' t think I can \n be "cool"? You don\'t think I can be \n "laid back" like everyone else?\n<b> \n</b><b> PATRICK \n</b> (slightly \n sarcastic)\n I thought you were above all that\n<b> \n</b><b> KAT\n</b> You know what they say\n<b> \n</b> He stops the swing\n<b> \n</b><b> PATRICK\n</b> No. 
What do they say?\n<b> \n</b> Kat is asleep, her head resting against the swing\'s chains.\n<b> \n</b><b> PATRICK \n</b> (continuing)\n Shit!\n<b> \n</b> He drags her to her feet and starts singing loudly.\n<b> \n</b><b> PATRICK \n</b> (continuing)\n Jingle Bells! Jingle Belles! Wake up \n damn it!\n<b> \n</b> He sits her down on the slide and shakes her like a rag \n doll.\n<b> \n</b><b> PATRICK \n</b> (continuing)\n Kat! Wake up! \n<b> \n</b><b> KAT\n</b> (waking)\n What?\n<b> \n</b> He sighs with relief.\n<b> \n</b><b> PATRICK\n</b> I thought you were...\n<b> \n</b> They share some meaningful eye contact. And then she PUKES \n on his shoes.\n<b> \n</b><b> INT. BOGEY\'S BATHROOM - NIGHT\n</b><b> \n</b> Kat washes her face and grabs a bottle of Scope, taking a \n big swig.\n<b> \n</b> A KNOCK sounds at the door\n<b> \n</b><b> KAT\n</b> Go away\n<b> \n</b> Bianca opens the door and looks at her sister with the \n smuggest of all possible grins.\n<b> \n</b><b> BIANCA\n</b> Dinner taste better on the way out?\n<b> \n</b> Gives her a "don\'t even start" look.\n<b> \n</b><b> BIANCA\n</b> (continuing)\n I don\'t get you. You act like you\'re \n too good for any of this, and then you \n go totally apeshit when you get here.\n<b> \n</b><b> KAT\n</b> You\'re welcome.\n<b> \n</b> She pushes past her and leaves the bathroom.\n<b> \n</b><b> KAT\'S CAR - NIGHT\n</b><b> \n</b> Kat\'s in the driver\'s seat. Patrick leans in and takes the \n keys out of the ignition.\n<b> \n</b><b> PATRICK\n</b> Cute\n<b> \n</b><b> BOGEY LOWENSTEIN\'S HOUSE - NIGHT\n</b><b> \n</b> Kids loiter on the lawn. Bianca and Chastity walk outside \n Joey catches up to them.\n<b> \n</b><b> JOEY\n</b> A bunch of us are going to Jaret\'s \n house. Wanna come?\n<b> \n</b> Chastity looks at Bianca, who wears a pained expression. \n She looks at her watch.\n<b> \n</b><b> BIANCA\n</b> I have to be home in twenty minutes.\n<b> \n</b><b> CHASTITY \n</b> (eagerly, to Joey)\n I don\'t have to be home \'til two.\n<b> \n</b><b> JOEY \n</b> Then, c\'mon.\n (to Bianca)\n Maybe next time --\n<b> \n</b> They head back into the party, leaving an astonished Bianca\n<b> \n</b> Cameron exits the party and stops when he sees Bianca \n standing alone.\n<b> \n</b><b> CAMERON\n</b> (slightly \n accusatory)\n Have fun tonight?\n<b> \n</b><b> BIANCA\n</b> Tons\n<b> \n</b> He starts to walk on\n<b> \n</b><b> BIANCA\n</b> (continuing)\n Cameron?\n<b> \n</b> He stops. She gives him a helpless smile.\n<b> \n</b><b> BIANCA\n</b> (continuing)\n Do you think you could give me a ride \n home?\n<b> \n</b><b> INT. KAT\'S CAR - NIGHT\n</b><b> \n</b> Patrick drives as Kat sits in the passenger seat, fiddling \n with the radio dial. She finds a SONG she\'s happy with and \n Patrick quickly changes it.\n<b> \n</b><b> PATRICK\n</b> I\'m driving, so I get to pick the \n tunes.\n<b> \n</b> She changes it back to her song.\n<b> \n</b><b> KAT\n</b> It\'s my car.\n<b> \n</b> He changes it back.\n<b> \n</b><b> PATRICK\n</b> And I\'m in control of it.\n<b> \n</b><b> KAT\n</b> But it\'s Gigglepuss - I know you like \n them. 
I saw you there.\n<b> \n</b> Patrick doesn\'t have an answer for this, so he let\'s her \n listen to her song.\n<b> \n</b><b> KAT \n</b> (continuing)\n When you were gone last year -- where \n were you?\n<b> \n</b><b> PATRICK\n</b> Busy\n<b> \n</b><b> KAT\n</b> Were you in jail?\n<b> \n</b><b> PATRICK\n</b> Maybe.\n<b> \n</b><b> KAT\n</b> No, you weren\'t\n<b> \n</b><b> PATRICK\n</b> Then why\'d you ask?\n<b> \n</b><b> KAT\n</b> Why\'d you lie?\n<b> \n</b> He doesn\'t answer, but instead, frowns and turns up the \n music. She bobs her head drunkenly.\n<b> \n</b><b> KAT \n</b> (continuing)\n I should do this.\n<b> \n</b><b> PATRICK\n</b> Do what?\n<b> \n</b><b> KAT\n</b> This.\n<b> \n</b> She points to the radio\n<b> \n</b><b> PATRICK\n</b> Start a band?\n<b> \n</b><b> KAT \n</b> (sarcastically)\n My father wouldn\'t approve of that that\n<b> \n</b><b> PATRICK\n</b> You don\'t strike me as the type that \n would ask permission.\n<b> \n</b> She turns to look at him.\n<b> \n</b><b> KAT\n</b> Oh, so now you think you know me?\n<b> \n</b><b> PATRICK\n</b> I\'m gettin\' there\n<b> \n</b> Her voice loses it\'s venom\n<b> \n</b><b> KAT\n</b> The only thing people know about me is \n that I\'m "scary".\n<b> \n</b> He turns to look at her -- she looks anything but scary \n right now. He tries to hide his smile.\n<b> \n</b><b> PATRICK\n</b><b> \n</b> Yeah -- well, I\'m no picnic myself.\n<b> \n</b> They eye each other, sharing a moment of connection, \n realizing they\'re both created the same exterior for \n themselves.\n<b> \n</b> Patrick pulls into her driveway and shuts off the motor. He \n looks up at her house.\n<b> \n</b><b> PATRICK \n</b> (continuing)\n So what \' s up with your dad? He a \n pain in the ass?\n<b> \n</b><b> KAT\n</b> He just wants me to be someone I\'m not.\n<b> \n</b><b> PATRICK\n</b> Who?\n<b> \n</b><b> KAT\n</b><b> BIANCA\n</b><b> \n</b><b> PATRICK\n</b> No offense, but you\'re sister is \n without. I know everyone likes her and \n all, but ...\n<b> \n</b> Kat stares at him with new admiration.\n<b> \n</b><b> KAT\n</b> You know -- you\'re not as vile as I \n thought you were.\n<b> \n</b> She leans drunkenly toward him.\n<b> \n</b> Their faces grow closer as if they\'re about to kiss And then \n Patrick turns away\n<b> \n</b><b> PATRICK\n</b> So, I\'ll see you in school\n<b> \n</b> Kat stares at him, pissed. Then gets out of the car, \n SLAMMING the door shut behind her.\n<b> \n</b><b> CAMERON\'S CAR - NIGHT\n</b><b> \n</b> Bianca and Cameron ride in silence.\n He finally breaks it.\n<b> \n</b><b> CAMERON\n</b> I looked for you back at the party, but \n you always seemed to be "occupied".\n<b> \n</b><b> BIANCA\n</b> (faux-innocence )\n I was?\n<b> \n</b><b> CAMERON\n</b> You never wanted to go out with \'me, \n did you?\n<b> \n</b> Bianca bites her lip.\n<b> \n</b><b> BIANCA\n</b> (reluctant)\n Well, no...\n<b> \n</b><b> CAMERON\n</b> Then that\'s all you had to say.\n<b> \n</b><b> BIANCA\n</b> But\n<b> \n</b><b> CAMERON\n</b> You always been this selfish?\n<b> \n</b> BIANCA thinks a minute\n<b> \n</b> He pulls up in front of the house\n<b> \n</b><b> CAMERON\n</b> Just because you\'re beautiful, doesn\'t \n mean you can treat people like they \n don\'t matter.\n<b> \n</b> She looks at him for a moment -- then grabs his face and \n gives him a kiss on the lips. He draws back in surprise, \n then kisses her back. 
She smiles, then gets out of the car \n without another word.\n<b> \n</b> Cameron grins and drives away\n<b> \n</b><b> CAMERON\n</b> (continuing)\n And I\'m back in the saddle.\n<b> \n</b><b> INT. ENGLISH CLASS - DAY\n</b><b> \n</b> Kat sits at her desk, burying her face in a book as the \n others enter. The White Rastas are first.\n<b> \n</b><b> DEREK\n</b> Kat, my lady, you sway to the rhythm of \n my heart.\n<b> \n</b> He grabs her hand and kisses it as she pulls it away.\n<b> \n</b> CLEM, a cowboy, enters, high-fiving Derek with new-found \n friendliness.\n<b> \n</b><b> CLEM\n</b> Yippe kai-aye, bra. \n (to Kat)\n Dance for me, cowgirl.\n<b> \n</b> He sits next to Derek\n<b> \n</b><b> CLEM \n</b> (continuing)\n Okay, now tell me again why he didn\'t \n shoot the deputy?\n<b> \n</b><b> DEREK\n</b> Because the deputy meant him no harm, \n my friend. It was only the sheriff that \n was the oppressor.\n<b> \n</b> Joey saunters in and takes his seat.\n<b> \n</b><b> JOEY\n</b> Kat, babe, you were on fire.\n<b> \n</b> Mrs. Blaise enters and sits at her desk\n<b> \n</b><b> MRS. BLAISE\n</b> Well now, did everyone have a good \n weekend?\n<b> \n</b><b> JOEY \n</b> Maybe we should ask Verona\n<b> \n</b> Patrick enters, late, and slinks to his desk. Kat looks up, \n down and around, everywhere but at Patrick.\n<b> \n</b> Mrs. Blaise tries to remember what she\'s supposed to talk \n about.\n<b> \n</b><b> MRS. BLAISE \n</b> Okay then. Well.\n (beat)\n Oh, yes\n<b> \n</b> She clears her throat.\n<b> \n</b><b> MRS. BLAISE \n</b> (continuing)\n I\'d like you all to write your own \n version of Shakespeare\'s Sonnet #141.\n<b> \n</b> Groans.\n<b> \n</b><b> MRS. BLAISE \n</b> (continuing)\n Any form you\'d like. Rhyme, no rhyme, \n whatever. I\'d like to see you elaborate \n on his theme, however. Let\'s read it \n aloud, shall we? Anyone?\n<b> \n</b> The class is frozen in apathy.\n<b> \n</b><b> MRS. BLAISE \n</b> (continuing)\n Derek?\n<b> \n</b> Ms. Blaise hands him the sonnet. He shifts uncomfortably in \n his seat. Then grins.\n<b> \n</b><b> DEREK \n</b> (reading; in his \n Rasta stoner drawl)\n In faith, I do not love thee with mine \n eyes/ For they in thee a thousand errors \n note/ But \'tis my heart that loves what \n they despise/ Who in despite of view is \n pleas \'d to dote.\n<b> \n</b> In the back of the room Clem raises his hand\n<b> \n</b><b> CLEM\n</b> Ms. Blaise, can I get the bathroom \n pass? Damn if Shakespeare don\'t act as \n a laxative on my person.\n<b> \n</b><b> INT. KENNY\'S THAI FOOD DINER - DAY \n</b> Kat and Mandella scrape the peanuts out of their sauce.\n<b> \n</b><b> MANDELLA\n</b> You went to the party? I thought we \n were officially opposed to suburban \n social activity.\n<b> \n</b><b> KAT\n</b> I didn\'t have a choice.\n<b> \n</b><b> MANDELLA\n</b> You didn\'t have a choice? Where\'s Kat \n and what have you done with her?\n<b> \n</b><b> KAT\n</b> I did Bianca a favor and it backfired.\n<b> \n</b><b> MANDELLA \n</b> You didn\'t\n<b> \n</b><b> KAT\n</b> I got drunk. I puked. I got rejected. \n It was big fun.\n<b> \n</b> Patrick enters, walking to the counter to order. He sees Kat \n and smiles.\n<b> \n</b><b> PATRICK\n</b> Hey\n<b> \n</b> She gathers her things and bolts out the door. Patrick \n looks at Mandella, who shrugs and follows Kat.\n<b> \n</b> INT. 
BIOLOGY CLASS - DAY Cameron and Michael flank Patrick \n at his lab table\n<b> \n</b><b> MICHAEL\n</b> So you got cozy with she who stings?\n<b> \n</b><b> PATRICK\n</b> No - I\'ve got a sweet-payin\' job that \n I\'m about to lose.\n<b> \n</b><b> CAMERON\n</b> What\'d you do to her?\n<b> \n</b><b> PATRICK\n</b> I don \' t know. \n (beat)\n I decided not to nail her when she was \n too drunk to remember it.\n<b> \n</b> Michael and Cameron look at each other in realization, then \n turn back to Patrick.\n<b> \n</b><b> CAMERON\n</b><b> \n</b> You realize this puts the whole operation in peril.\n<b> \n</b><b> PATRICK\n</b><b> \n</b> No shit. She won\'t even look at me\n<b> \n</b><b> CAMERON\n</b><b> \n</b> Why can\'t you just tell her you\'re sorry?\n<b> \n</b> Patrick\'s expression says that this is not a possibility. \n Michael makes a time out sign with his hands.\n<b> \n</b><b> MICHAEL\n</b> I\'m on it\n<b> \n</b><b> INT. HALLWAY - DAY\n</b><b> \n</b> Mandella is at her locker. Drawings of William Shakespeare \n adorn the door. She looks at them with a sigh, then ties \n her silk scarf tightly around her neck, in an attempt to cut \n off her air supply.\n<b> \n</b> Michael walks up.\n<b> \n</b><b> MICHAEL\n</b> Hey there. Tired of breathing?\n<b> \n</b><b> MANDELLA \n</b> (shyly, as she \n loosens the scarf)\n Hi.\n<b> \n</b><b> MICHAEL \n</b> Cool pictures. You a fan?\n<b> \n</b><b> MANDELLA\n</b> Yeah. I guess.\n<b> \n</b> MICHAEL rocks. Very hip.\n<b> \n</b><b> MANDELLA\n</b> You think?\n<b> \n</b><b> MICHAEL \n</b> Oh yeah.\n<b> \n</b> She looks at him suspiciously\n<b> \n</b><b> MANDELLA\n</b> Who could refrain that had a heart to \n love and in that heart, courage to make \n \' B love known?\n<b> \n</b> Michael thinks for a minute.\n<b> \n</b><b> MICHAEL \n</b> Macbeth, right?\n<b> \n</b><b> MANDELLA \n</b> (happily stunned)\n Right.\n<b> \n</b><b> MICHAEL \n</b> Kat a fan, too?\n<b> \n</b><b> MANDELLA \n</b> (puzzled)\n Yeah...\n<b> \n</b> He leans in close to her, conspiratorially\n<b> \n</b><b> MICHAEL\n</b> So, listen... I have this friend\n<b> \n</b><b> EXT. FIELD HOCKEY FIELD - DAY\n</b><b> \n</b> Cameron sits next to Patrick on the bleachers as they watch \n Kat\'s practice.\n<b> \n</b><b> CAMERON\n</b> She hates you with the fire of a \n thousand suns . That\'s a direct quote\n<b> \n</b><b> PATRICK\n</b> She just needs time to cool off I\'ll \n give it a day.\n<b> \n</b> A PUCK flies at them from the field, narrowly missing their \n heads.\n<b> \n</b><b> PATRICK \n</b> (continuing)\n Maybe two.\n<b> \n</b> He looks at Cameron.\n<b> \n</b><b> PATRICK \n</b> (continuing)\n You makin\' any headway?\n<b> \n</b><b> CAMERON\n</b> She kissed me.\n<b> \n</b><b> PATRICK \n</b> (eyebrow raised)\n Where?\n<b> \n</b><b> INT. HALLWAY - DAY\n</b><b> \n</b> Chastity rounds the corner and bends down to get a drink \n from the water fountain.\n<b> \n</b><b> NEARBY\n</b><b> \n</b> Joey stands talking to two JOCK COHORTS. The guys don\'t see \n her.\n<b> \n</b><b> JOEY\n</b> Don\'t talk to me about the sweetest \n date. That little halo Bianca is gonna \n be prone and proven on prom night. Six \n virgins in a row.\n<b> \n</b> The cohorts chortle Chastity keeps drinking from the \n fountain\n<b> \n</b><b> EXT. PARKING LOT - DAY \n</b><b> \n</b> Joey leans against Patrick\'s Jeep. Patrick is inside.\n<b> \n</b><b> PATRICK\n</b> I don\'t know, Dorsey. ..the limo.-the \n flowers. Another hundred for the tux --\n<b> \n</b><b> JOEY\n</b> Enough with the Barbie n\' Ken shit. 
I \n know.\n<b> \n</b> He pulls out his wallet and hands Patrick a wad of money\n<b> \n</b><b> JOEY \n</b> (continuing)\n Take it\n<b> \n</b> Patrick does, with a smile, as he ROARS out of the parking \n lot.\n<b> \n</b><b> INT. SCHOOL COURTYARD - DAY \n</b><b> \n</b> Kat and Mandella deface a prom flyer.\n<b> \n</b><b> KAT\n</b> Can you even imagine? Who the hell \n would go to this a bastion of commercial \n excess?\n<b> \n</b><b> MANDELLA\n</b> Well, I guess we\'re not, since we don\'t \n have dates .\n<b> \n</b><b> KAT\n</b> Listen to you! You sound like Betty, \n all pissed off because Archie is taking \n Veronica.\n<b> \n</b><b> MANDELLA \n</b> Okay, okay, we won\'t go. It\'s not like \n I have a dress anyway\n<b> \n</b><b> KAT\n</b> You \' re looking at this from the wrong \n perspective. We\'re making a statement.\n<b> \n</b><b> MANDELLA \n</b> (unconvinced)\n Oh, good. Something new and different \n for us.\n<b> \n</b><b> EXT. ARCHERY FIELD - DAY \n</b><b> \n</b> Mr. Chapin patrols as boys and girls shoot arrows at targets\n<b> \n</b> Joey swaggers up to Bianca, who is taking careful aim. \n Chastity watches from across the row.\n<b> \n</b><b> JOEY \n</b> Hey, sweet cheeks.\n<b> \n</b><b> BIANCA\n</b> (not looking at \n him)\n Hi, Joey.\n<b> \n</b><b> JOEY\n</b> You\'re concentrating awfully hard \n considering it\'s gym class.\n<b> \n</b> She lets the arrow go and turns to look at him.\n<b> \n</b><b> JOEY \n</b> (continuing)\n Listen, I want to talk to you about the \n prom.\n<b> \n</b><b> BIANCA\n</b> You know the deal. I can \' t go if Kat \n doesn\'t go --\n<b> \n</b> In the background, a RASTA crumples to the ground. Hit\n A casualty of Gym. Mr. Chapin scurries over.\n<b> \n</b><b> JOEY\n</b> Your sister is going.\n<b> \n</b> Bianca looks at him, surprised\n<b> \n</b><b> BIANCA\n</b> Since when?\n<b> \n</b> Joey takes the bow and arrow from Bianca\'s hand. He draws \n back and takes aim.\n<b> \n</b><b> JOEY \n</b> I\'m taking care of it.\n<b> \n</b> Chastity looks over from her spot on the field, but keeps \n lips firmly shut.\n<b> \n</b><b> INT. BOOK STORE - DAY\n</b><b> \n</b> Kat browses through the feminist lit section\n Patrick appears, through a hole in the books.\n<b> \n</b><b> PATRICK\n</b> Excuse me, have you seen The Feminine \n Mystique? I lost my copy.\n<b> \n</b><b> KAT \n</b> (frowning)\n What are you doing here?\n<b> \n</b><b> PATRICK\n</b> I heard there was a poetry reading.\n<b> \n</b><b> KAT\n</b> You \'re so --\n<b> \n</b><b> PATRICK\n</b> Pleasant?\n<b> \n</b> Kat stares at him, deadpan.\n<b> \n</b><b> PATRICK \n</b> (continuing)\n Wholesome.\n<b> \n</b><b> KAT\n</b> Unwelcome.\n<b> \n</b><b> PATRICK\n</b> Unwelcome? I guess someone still has \n her panties in a twist.\n<b> \n</b><b> KAT\n</b> Don\'t for one minute think that you had \n any effect whatsoever on my panties.\n<b> \n</b><b> PATRICK\n</b> So what did I have an effect on ?\n<b> \n</b><b> KAT\n</b> Other than my upchuck reflex? Nothing.\n<b> \n</b> She pushes past him and heads out the\' door\n Pat looks down at the book he\'s been holding in his hand: \n Taming of the Shrew.\n<b> \n</b><b> INT. CAFETERIA - DAY\n</b><b> \n</b> Cameron and Michael flank Patrick as he shovels food into \n mouth.\n<b> \n</b><b> PATRICK\n</b> You were right. She\'s still pissed.\n<b> \n</b><b> MICHAEL\n</b> Sweet love, renew thy force!\n<b> \n</b><b> PATRICK\n</b> Man -- don\'t say shit like that to me. \n People can hear you.\n<b> \n</b><b> CAMERON\n</b> (exasperated)\n You humiliated the woman! 
Sacrifice \n yourself on the altar of dignity and \n even the score.\n<b> \n</b><b> MICHAEL\n</b> Best case scenario, you\'re back on the \n payroll for awhile.\n<b> \n</b><b> PATRICK\n</b> What\'s the worst?\n<b> \n</b><b> CAMERON\n</b> You get the girl.\n<b> \n</b> Patrick thinks for a minute\n<b> \n</b><b> PATRICK\n</b> If I go down. I\'m takin\' her with me\n<b> \n</b><b> INT. ENGLISH CLASS - DAY\n</b><b> \n</b> Kat and the other students sit at their desks, taking a quiz \n Patrick\'s seat is conspicuously empty.\n<b> \n</b> From outside, we hear the soft, unsure beginnings of a SONG. \n Kat looks up, then out the window, HORRIFIED.\n<b> \n</b> The song grows louder until we realize it\'s The Partridge \n Family\'s "I Think I Love You". Being sung by Patrick.\n<b> \n</b><b> PATRICK \n</b><b> (0. S.)\n</b> "This morning, I woke up with this \n feeling, I didn\'t know how to deal with, \n and so I just decided to myself--"\n<b> \n</b> The STUDENTS rush to the window. OUTSIDE Patrick stands \n beneath the window, crooning.\n<b> \n</b> Scurvy is next to him, keeping the beat on the bongos and \n doing backup vocal s.\n<b> \n</b><b> PATRICK\n</b> "I\'d hide it to myself. And never talk \n about it. And didn\'t I go and shout it \n when you walked into the room --"\n<b> \n</b> He makes quite a sarcastic show of it.\n<b> \n</b><b> IN THE CLASSROOM\n</b><b> \n</b> Mrs. Blaise touches her heart, as if the song is for her. \n Kat slowly walks to the window, peeking below.\n<b> \n</b><b> OUTSIDE\n</b><b> \n</b> Patrick smiles at her as he finishes the verse with a big \n finale.\n<b> \n</b><b> PATRICK \n</b> (continuing)\n " I think I love you I "\n<b> \n</b><b> INSIDE\n</b><b> \n</b> The other students laugh, clap, cheer, etc. Kat sinks down, \n mortified, but with a slight smile\n<b> \n</b><b> INT. DETENTION HALL - DAY\n</b><b> \n</b> Patrick and several other miscreants sit quietly, mulling \n over their misfortune.\n<b> \n</b><b> MISCREANT \n</b> Nice song, Verona.\n<b> \n</b><b> PATRICK\n</b> Flog me.\n<b> \n</b> He makes the appropriate hand gesture\n<b> \n</b> Mr. Chapin, the gym teacher, sits at the desk in front, \n ignoring them while he reads a girly weightlifting magazine\n<b> \n</b><b> KAT (0. S.)\n</b> Excuse me, Mr. Chapin?\n<b> \n</b> Patrick looks up at the sound of her voice and sees Kat \n standing in the doorway. She gives him a smile and he perks \n up a little.\n<b> \n</b> Kat walks into the room and addresses Mr. Chapin again. He \n turns fully to face her.\n<b> \n</b><b> KAT\n</b> Sir, I\'d like to state for the record \n that Mr. Verona \' s current \n incarceration is unnecessary. I never \n filed a complaint.\n<b> \n</b><b> MR. CHAPIN \n</b> You didn\'t have to. He disrupted a \n classroom.\n<b> \n</b> Kat glances over at Patrick and motions her head toward the \n window.\n<b> \n</b> Patrick shrugs, not knowing what she \' s talking about.\n<b> \n</b> She motions again, and looks toward the window with an \n expression that says, "Make a break for it, moron."\n<b> \n</b> Kat brings her attention back to Mr. Chapin while Patrick \n inches out of his seat toward the window.\n<b> \n</b> The other miscreants watch with glee.\n<b> \n</b><b> KAT\n</b> But, Mr. Chapin, I hardly think a \n simple serenade warrants a week of \n detention. There are far more hideous \n acts than off-key singing being \n performed by the student body on a \n regular basis.\n<b> \n</b> Patrick is halfway out the window now. 
And none too happy \n about it, considering they\'re on the second floor.\n<b> \n</b> He eyes a large TREE a few feet away from MR. CHAPIN. He \n starts to turn away from Kat\n<b> \n</b><b> MR. CHAPIN \n</b> You\'re not gonna change my mind, Kat. \n Rules stick.\n<b> \n</b> Kat starts to panic, as Patrick has yet to make the jump for \n the tree.\n<b> \n</b><b> KAT\n</b> Wait, Mr. Chapin. There\'s something \n I\'ve always wanted to show you.\n<b> \n</b> He turns back toward her again, the very second before he \n would have spotted Patrick.\n<b> \n</b> Kat glances toward the window. Patrick\'s just about to make \n the jump.\n<b> \n</b><b> MR. CHAPIN \n</b> What?\n<b> \n</b><b> KAT\n</b> These.\n<b> \n</b> From behind, we see her lift up her shirt and flash her bra \n at Mr. Chapin, just as Patrick makes the Jump.\n<b> \n</b> The miscreants cheer, for both the daring\' escape and the \n flash of skin.\n<b> \n</b> Mr. Chapin reddens and tries to be stern.\n<b> \n</b><b> MR. CHAPIN \n</b> I\'m going to let that slide, Katarina. \n But if I catch you doing that again, \n you\'ll be in here with the rest of these \n guys.\n<b> \n</b> He motions to the remaining detention prisoners, without \n noticing Patrick\'s absence.\n<b> \n</b> Kat smiles at him.\n<b> \n</b><b> KAT\n</b> Thank you, Mr. Chapin.\n<b> \n</b> Kat bolts out the door. Mr. Chapin goes back to his muscle \n mag, wiping the sweat from his brow.\n<b> \n</b><b> EXT. SCHOOL CAMPUS LAWN\n</b><b> \n</b> Kat arrives at the tree. looking around breathlessly, seeing \n no one.\n<b> \n</b><b> KAT\n</b> He left! I sprung the dickhead and he \n cruised on me.\n<b> \n</b><b> PATRICK \n</b><b> (0. S.)\n</b> Look up, sunshine\n<b> \n</b> She does. He\'s still in the tree\n<b> \n</b><b> PATRICK\n</b> I guess I never told you I\'m afraid of \n heights.\n<b> \n</b><b> KAT\n</b> (smiling)\n C\'mon. It\'s not that bad\n<b> \n</b><b> PATRICK\n</b> Try lookin\' at it from this angle\n<b> \n</b> She assesses the branch structure\n<b> \n</b><b> KAT\n</b> Put your right foot there --\n<b> \n</b><b> PATRICK\n</b> Forget it. I\'m stayin\'.\n<b> \n</b><b> KAT\n</b> You want me to climb up and show you \n how to get down?\n<b> \n</b><b> PATRICK\n</b> (voice trembling)\n Maybe.\n<b> \n</b> She sighs and dose so. When she gets to his level, she \n perches on the branch next to him. He grins at her.\n<b> \n</b> Then swings himself down with the grace and ease of a \n monkey, leaving her sitting there, realizing she\'s been \n duped.\n<b> \n</b><b> KAT\n</b> You shit!\n<b> \n</b> She climbs down after him\n<b> \n</b><b> EXT. OUTDOOR ARCADE - DAY \n</b><b> \n</b> Patrick and Kat walk amongst the games\n<b> \n</b><b> KAT\n</b> The Partridge Family?\n<b> \n</b><b> PATRICK\n</b> I figured it had to be something \n ridiculous to win your respect. And \n piss you off.\n<b> \n</b><b> KAT\n</b> Good call.\n<b> \n</b><b> PATRICK\n</b> So how\'d you get Chapin to look the \n other way?\n<b> \n</b><b> KAT\n</b> I dazzled him with my wit\n<b> \n</b> She stops and picks up a toy gun that SHOOTS water at \n giggling hyenas and wails on it. The barker hands her a \n stuffed animal as her prize. She hands it to the small KID \n next to her and they continue walking.\n<b> \n</b><b> PATRICK \n</b> (sarcastic)\n A soft side? 
Who knew?\n<b> \n</b><b> KAT\n</b> Yeah, well, don\'t let it get out\n<b> \n</b><b> PATRICK\n</b> So what\'s your excuse?\n<b> \n</b><b> KAT\n</b> Acting the way we do.\n<b> \n</b><b> PATRICK\n</b> Yes\n<b> \n</b><b> KAT\n</b> I don\'t like to do what people expect. \n Then they expect it all the time and \n they get disappointed when you change.\n<b> \n</b><b> PATRICK\n</b> So if you disappoint them from the \n start, you\'re covered?\n<b> \n</b><b> KAT\n</b> Something like that\n<b> \n</b><b> PATRICK\n</b> Then you screwed up\n<b> \n</b><b> KAT\n</b> How?\n<b> \n</b><b> PATRICK\n</b> You never disappointed me.\n<b> \n</b> She blushes under his gaze\n<b> \n</b><b> PATRICK \n</b> (continuing)\n You up for it?\n<b> \n</b><b> KAT\n</b> For. . . ?\n<b> \n</b> He motions to the SIGN for a paint-ball game. She grins \n<b> SERIES OF SHOTS:\n</b><b> \n</b> The two of them creep through the paint-ball course, \n stealthy and full of the desire to best the other.\n<b> \n</b> Patrick nails Kat in the back with a big glob of red paint \n Kat gets him in the chest with a glob of blue.\n<b> \n</b> Patrick returns fire with a big yellow splat to the side of \n her face.\n<b> \n</b> Kat squirts a green shot to his forehead After a few more \n shots, they\'re both covered in paint\n<b> \n</b> She tries to shoot him again, only to find that her gun is \n empty.\n<b> \n</b><b> KAT \n</b> (continuing)\n Damn it!\n<b> \n</b> Patrick grabs her in a victorious tackle. They land, \n laughing.\n<b> \n</b> It\'s hard to even recognize them, as their hair and faces \n are so smeared with paint globs, but they still manage to \n find each other\'s eyes.\n<b> \n</b> He wipes a smear of blue paint away from her lips, as he \n goes to kiss her.\n<b> \n</b> NEARBY The kid with the stuffed animal, points\n<b> \n</b><b> KID \n</b> Look, Mom\n<b> \n</b> His mother hurries him away. What\'s started as a tackle has \n turned into a passionate kiss\n<b> \n</b><b> EXT. STRATFORD HOUSE - NIGHT\n</b><b> \n</b> Patrick pulls up in Kat\'s driveway. Their paint wardrobe \n has dried by now and they look like refugees from some \n strange, yet colorful, war.\n<b> \n</b><b> KAT\n</b> State trooper?\n<b> \n</b><b> PATRICK\n</b> Fallacy.\n<b> \n</b><b> KAT\n</b> The duck?\n<b> \n</b><b> PATRICK\n</b> Hearsay.\n<b> \n</b><b> KAT\n</b> I know the porn career\'s a lie.\n<b> \n</b> He shuts off the car and turns to her.\n<b> \n</b><b> PATRICK\n</b> Do you?\n<b> \n</b> He kisses her neck. It tickles. She laughs.\n<b> \n</b><b> KAT\n</b> Tell me something true.\n<b> \n</b><b> PATRICK\n</b> I hate peas.\n<b> \n</b><b> KAT\n</b> No -- something real. Something no one \n else knows.\n<b> \n</b><b> PATRICK \n</b> (in-between kisses)\n You\'re sweet. And sexy. And \n completely hot for me.\n<b> \n</b><b> KAT\n</b> What?\n<b> \n</b><b> PATRICK\n</b> No one else knows\n<b> \n</b><b> KAT\n</b> You\'re amazingly self-assured. Has \n anyone ever told you that?\n<b> \n</b><b> PATRICK\n</b> Go to the prom with me\n<b> \n</b> Kat\'s smile disappears.\n<b> \n</b><b> KAT\n</b> Is that a request or a command?\n<b> \n</b><b> PATRICK\n</b> You know what I mean\n<b> \n</b><b> KAT\n</b> No.\n<b> \n</b><b> PATRICK\n</b> No what?\n<b> \n</b><b> KAT\n</b> No, I won\'t go with you\n<b> \n</b><b> PATRICK\n</b> Why not?\n<b> \n</b><b> KAT\n</b> Because I don\'t want to. It\'s a stupid \n tradition.\n<b> \n</b> Patrick sits quietly, torn. 
He can\'t very well tell her he \n being paid to take her.\n<b> \n</b><b> PATRICK\n</b> People won\'t expect you to go...\n<b> \n</b> Kat turns to him, getting angry.\n<b> \n</b><b> KAT\n</b> Why are you doing this?\n<b> \n</b><b> KAT\n</b> All of it -- what\'s in it for you?\n<b> \n</b> He sits silently, not looking at her, confirming her \n suspicions.\n<b> \n</b><b> KAT \n</b> (continuing)\n Create a little drama? Start a new \n rumor? What?\n<b> \n</b><b> PATRICK\n</b> So I have to have a motive to be with \n you?\n<b> \n</b><b> KAT\n</b> You tell me.\n<b> \n</b><b> PATRICK\n</b> You need therapy. Has anyone ever told \n you that?\n<b> \n</b><b> KAT \n</b> (quietly)\n Answer the question, Patrick\n<b> \n</b><b> PATRICK \n</b> (angry)\n Nothing! There\'s nothing in it for me. \n Just the pleasure of your company.\n<b> \n</b> He takes out a cigarette. She breaks it in half before she \n SLAMS the car door and walks into the house.\n<b> \n</b> Patrick PEELS out of the driveway. Kat turns at the front \n door and watches him go\n<b> \n</b><b> EXT. STREET - NIGHT \n</b><b> \n</b> Patrick pulls up to a stop light and waits for .the green\n<b> \n</b> He glances over at A DRUNKEN HOMELESS GUY in the median, who \n has decided that he doesn\'t need to wear pants.\n<b> \n</b> Patrick pulls out his wallet, takes the wad of money Joey \n gave him and hands it to the homeless guy.\n<b> \n</b><b> PATRICK\n</b> cover that up\n<b> \n</b> The light turns green and Patrick pulls away\n<b> \n</b><b> INT. STRATFORD HOUSE/BATHROOM - NIGHT\n</b><b> \n</b> Kat stands at the sink, scrubbing paint off of her face \n Bianca TAPS on the open door.\n<b> \n</b><b> BIANCA\n</b> Quick question -- are you going to the \n prom?\n<b> \n</b> Kat pushes the door shut with a SLAM\n<b> \n</b><b> INT. STUDY HALL - DAY\n</b><b> \n</b> Cameron and Bianca sit together at their study cubby. She \n fingers a strand of her hair.\n<b> \n</b><b> BIANCA\n</b> Then Guillermo says, "If you go any \n lighter, you\'re gonna look like an extra \n on 90210."\n<b> \n</b><b> CAMERON\n</b> No...\n<b> \n</b> Bianca stares at him for a moment.\n<b> \n</b><b> BIANCA\n</b> do you listen to this crap?\n<b> \n</b><b> CAMERON\n</b> What crap?\n<b> \n</b><b> BIANCA\n</b> Me. This endless ...blonde babble. I\'m \n like, boring myself.\n<b> \n</b><b> CAMERON\n</b> Thank God! If I had to hear one more \n story about your coiffure...\n<b> \n</b> He mock stabs himself with a pencil as she giggles and \n smacks his hand away.\n<b> \n</b><b> CAMERON\n</b> (continuing)\n I figured you\'d get to the good stuff \n eventually.\n<b> \n</b><b> BIANCA\n</b> What good stuff?\n<b> \n</b><b> CAMERON\n</b> The "real you".\n<b> \n</b><b> BIANCA\n</b> Like my fear of wearing pastels?\n<b> \n</b> He looks stricken.\n<b> \n</b><b> BIANCA\n</b> (continuing)\n I\'m kidding. \n (beat)\n You know how sometimes you just become \n this "persona"? And you don\'t know how \n to quit?\n<b> \n</b><b> CAMERON\n</b> (matter of fact)\n No\n<b> \n</b><b> BIANCA\n</b> Okay -- you\'re gonna need to learn how \n to lie.\n<b> \n</b><b> INT. HALLWAY - DAY\n</b><b> \n</b> Mandella struggles with the lock on her locker. Finally, it \n opens.\n<b> \n</b> Hanging inside is a beautiful DRESS, inspired by the 16th \n Century. Mandella slowly unpins a NOTE from the dress.\n<b> \n</b><b> INSERT - "0 FAIR ONE. JOIN ME AT THE PROM. I WILL BE \n</b><b> WAITING. LOVE, WILLIAM S."\n</b><b> \n</b> Mandella\'s agog. 
Trevor walks by and sees her holding the \n dress.\n<b> \n</b><b> TREVOR\n</b> You\'re gonna look splendiferous in \n that, Mandella.\n<b> \n</b> Mandella looks up sharply, shaken from her reverie.\n<b> \n</b><b> TREVOR \n</b> (continuing)\n that\'s cool to say.\n<b> \n</b> Mandella grins It is\n<b> \n</b><b> MANDELLA\n</b><b> \n</b><b> INT. STRATFORD HOUSE/DEN - DAY\n</b><b> \n</b> Sharon is at her computer, Walter at his exercise bike\n<b> \n</b><b> SHARON\n</b> Would you rather be ravished by a \n pirate or a British rear admiral?\n<b> \n</b><b> WALTER\n</b> Pirate -- no question.\n<b> \n</b> Bianca enters and walks over to Walter\n<b> \n</b><b> BIANCA\n</b> Daddy, I want to discuss the prom with \n you. It\'s tomorrow night --\n<b> \n</b><b> WALTER\n</b> The prom? Kat has a date?\n<b> \n</b><b> BIANCA\n</b> No, but\n<b> \n</b><b> WALTER\n</b> It\'s that hot rod Joey, right? That \' s \n who you want me to bend my rules for?\n<b> \n</b><b> BIANCA\n</b> He\'s not a "hot rod". Whatever that \n is.\n<b> \n</b><b> WALTER\n</b> You\'re not going unless your sister \n goes. End of story.\n<b> \n</b><b> BIANCA\n</b> Fine. I see that I\'m a prisoner in my \n own house. I\'m not a daughter. I\'m a \n possession!\n<b> \n</b> Bianca storms out.\n<b> \n</b><b> WALTER\n</b> (calling out)\n You know what happens at proms?\n<b> \n</b> Sharon stops her typing and looks up at Walter\n<b> \n</b><b> SHARON\n</b> They\'ll dance, they\'ll kiss, they\'ll \n come home. Let her go.\n<b> \n</b><b> WALTER\n</b> Kissing? Is that what you think \n happens? Kissing isn\'t what keeps me up \n to my elbows in placenta all day.\n<b> \n</b><b> INT. BIANCA\'S ROOM - NIGHT \n</b><b> \n</b> Bianca lies on her bed. MTV blares. A KNOCK sounds.\n<b> \n</b><b> BIANCA\n</b> Come in.\n<b> \n</b> Kat enters and sits down on the bed, muting the TV.\n<b> \n</b><b> KAT \n</b> (kindly)\n Listen, I know you hate having to sit \n home because I\'m not Susie High School.\n<b> \n</b><b> BIANCA\n</b> Like you care.\n<b> \n</b><b> KAT\n</b> I do care. But I\'m a firm believer in \n doing something for your own reasons, \n not someone else \' s .\n<b> \n</b><b> BIANCA\n</b> I wish I had that luxury. I\'m the only \n sophomore that got asked to the prom and \n I can\'t go, because you won \' t.\n<b> \n</b> Kat clears her throat\n<b> \n</b><b> KAT\n</b> Joey never told you we went out, did \n he?\n<b> \n</b><b> BIANCA\n</b> What?\n<b> \n</b><b> KAT\n</b> In 9th. For a month\n<b> \n</b><b> BIANCA\n</b> (confused)\n Why?\n<b> \n</b><b> KAT \n</b> (self-mocking)\n He was, like, a total babe\n<b> \n</b><b> BIANCA\n</b> But you hate Joey\n<b> \n</b><b> KAT\n</b> Now I do. Back then, was a different \n story.\n<b> \n</b><b> BIANCA\n</b> As in...\n<b> \n</b> Kat takes a deep breath.\n<b> \n</b><b> KAT\n</b> He said everyone was doing it. So I \n did it.\n<b> \n</b><b> BIANCA\n</b> You did what?\n<b> \n</b><b> KAT \n</b> (continuing on)\n Just once. Afterwards, I told him I \n didn\'t want to anymore. I wasn\'t ready. \n He got pissed. Then he broke up with \n me.\n<b> \n</b> Bianca stares at her, dumbfounded\n<b> \n</b><b> BIANCA\n</b> But\n<b> \n</b><b> KAT\n</b> After that, I swore I\'d never do \n anything just because "everyone else" \n was doing it. And I haven\'t since. \n Except for Bogey\'s party, and my \n stunning gastro-intestinal display --\n<b> \n</b><b> BIANCA\n</b> (stunned)\n Why didn\'t you tell me?\n<b> \n</b><b> KAT\n</b> I wanted to let you make up your own \n mind about him.\n<b> \n</b><b> BIANCA\n</b> No. you didn\'t! 
If you really thought \n I could make my own decisions, you \n would\'ve let me go out with him instead \n of helping Daddy hold me hostage.\n<b> \n</b> Kat stands up slowly\n<b> \n</b><b> KAT\n</b> That\'s not\n<b> \n</b><b> BIANCA\n</b> I\'m not stupid enough to repeat your \n mistakes.\n<b> \n</b><b> KAT\n</b> I guess I thought I was protecting you.\n<b> \n</b><b> BIANCA\n</b> God, you\'re just like him! Just keep me \n locked away in the dark, so I can\'t \n experience anything for myself\n<b> \n</b><b> KAT\n</b> Not all experiences are good, Bianca. \n You can\'t always trust the people you \n want to.\n<b> \n</b><b> BIANCA\n</b> I guess I\'ll never know, will I?\n<b> \n</b> She rises and holds the door open for Kat, then slams it \n behind her.\n<b> \n</b><b> EXT. STRATFORD HOUSE - DAY \n</b><b> \n</b> A sprinkler cruises the lawn.\n<b> \n</b><b> INT. KAT\'S ROOM - DAY\n</b><b> \n</b> Kat lies in bed, staring at the ceiling. She rolls over and \n picks up the phone.\n<b> \n</b><b> BIANCA\'S ROOM - DAY\n</b><b> \n</b> Bianca, still in her pajamas, eats a bowl of cereal while \n watching "I Love Lucy" reruns.\n<b> \n</b> A KNOCK sounds\n<b> \n</b><b> BIANCA\n</b> Come in.\n<b> \n</b> Kat opens the door and peers in with a grin\n<b> \n</b><b> KAT\n</b> Feel like shopping?\n<b> \n</b> Bianca looks up, hopefully.\n<b> \n</b><b> LIVING ROOM - NIGHT\n</b><b> \n</b> Walter and Sharon are in front of the television. Walter \n has the TV Guide in hand, glasses on.\n<b> \n</b><b> WALTER\n</b> What do you wanna watch? We\'ve got \n crap, crap, crap or crap\n<b> \n</b><b> SHARON \n</b> Dr. Ruth?\n<b> \n</b> Bianca walks into the living room. She\'s wearing a prom \n dress.\n<b> \n</b><b> BIANCA\n</b> Hi, Mommy.\n (looking away)\n<b> WALTER\n</b><b> \n</b> Walter scurries takes off his glasses and looks from Bianca \n to Sharon.\n<b> \n</b><b> SHARON \n</b> Honey, you look beautiful!\n<b> \n</b><b> BIANCA\n</b> You like? My date should be here in \n five.\n<b> \n</b><b> WALTER\n</b> I\'m missing something.\n<b> \n</b><b> BIANCA\n</b> I have a date, Daddy. And he \' s not a \n captain of oppression like some men we \n know.\n<b> \n</b> The DOORBELL RINGS. Bianca runs to open it. There stands \n CAMERON. He takes in Bianca\'s outfit.\n<b> \n</b><b> CAMERON\n</b> Wow\n<b> \n</b><b> BIANCA\n</b> Let\'s go.\n<b> \n</b> Walter rises. Sharon pulls him back down on the couch\n<b> \n</b><b> SHARON \n</b> (to Bianca)\n Have a great time, honey!\n<b> \n</b><b> WALTER\n</b> But -- who -- what --?\n<b> \n</b> The door SLAMS. As Sharon looks at Walter with a grin, a \n blur rushes down the stairs and out the door. The blur has \n Kat \' s voice.\n<b> \n</b><b> KAT\n</b> Hey, guys. I\'m going to the prom. See \n you in a few.\n<b> \n</b> The door SLAMS again. Walter and Sharon \'are alone\n<b> \n</b><b> WALTER\n</b> What just happened?\n<b> \n</b><b> SHARON\n</b> Your daughters went to the prom.\n<b> \n</b><b> WALTER\n</b> Did I have anything to say about it?\n<b> \n</b><b> SHARON \n</b> Absolutely not.\n<b> \n</b><b> WALTER\n</b> That \' s what I thought\n<b> \n</b> The DOORBELL RINGS again. Walter opens it to find Joey on \n the porch, wearing a tux.\n<b> \n</b><b> JOEY \n</b> I\'m here to pick up Bianca.\n<b> \n</b><b> WALTER\n</b> late\n<b> \n</b> He SLAMS the door shut\n<b> \n</b><b> EXT HOTEL PARKING LOT - NIGHT\n</b><b> \n</b> Kat pulls up in her car, emerging resplendent in an ice \n gown.\n<b> \n</b> Patrick sits on the steps, waiting. 
In a tux.\n<b> \n</b><b> KAT\n</b> How\'d you get a tux at the last minute?\n<b> \n</b><b> PATRICK\n</b> It\'s Scurvy\'s. His date got convicted. \n Where\'d you get the dress?\n<b> \n</b><b> KAT\n</b> It\'s just something I had. You know\n<b> \n</b><b> PATRICK \n</b> (smiling)\n Oh huh\n<b> \n</b><b> KAT\n</b> Look, I\'m -- sorry -- that I \n questioned your motives. I was wrong.\n<b> \n</b> Patrick winces slightly, but covers it with a smile\n<b> \n</b><b> PATRICK\n</b> No prob.\n<b> \n</b> He remains seated. Kat fidgets nervously.\n<b> \n</b><b> KAT\n</b> are you ready?\n<b> \n</b> He rises and stares at her, taking in her image \n appreciatively. She blushes and turns away.\n<b> \n</b><b> KAT \n</b> (continuing)\n C\'mon. Let\'s get this over with.\n<b> \n</b><b> INT. PROM - NIGHT\n</b><b> \n</b> A hotel ballroom transformed into a fantasy world. Patrick \n and Kat enter, Kat attempting to deny the romance of it.\n<b> \n</b><b> KAT\n</b> Quite the ostentatious display\n<b> \n</b> A cowboy two-steps by them, dragging some poor girl around\n<b> \n</b><b> PATRICK\n</b> Look, Clem even wore his good boots\n<b> \n</b> Kat steps forward, looking around and spots Cameron and \n Bianca dancing cheek to cheek. She smiles.\n<b> \n</b><b> ACROSS THE ROOM\n</b><b> \n</b> Mandella enters nervously, in the long Elizabethan gown, \n hair piled on top of her head. She spots Kat and hurries \n over.\n<b> \n</b><b> MANDELLA\n</b> Have you seen him?\n<b> \n</b><b> KAT\n</b> Who?\n<b> \n</b><b> MANDELLA\n</b> William - he asked me to meet him here.\n<b> \n</b><b> KAT\n</b> Oh, honey -- tell me we haven\'t\' \n progressed to full-on hallucinations.\n<b> \n</b> Patrick looks toward the door and taps Kat. She turns and \n points Mandella the same way.\n<b> \n</b> Michael - in full Shakespearean dress with a new goatee on \n his chin - bows in their direction. Mandella\'s grin couldn\'t \n be bigger.\n<b> \n</b> Michael swashbuckles over to them, taking Mandella\'s hand \n and leading her onto the dance floor.\n<b> \n</b><b> MICHAEL \n</b> Mi\' lady.\n<b> \n</b> (to Patrick)\n Good sir.\n<b> \n</b> Patrick rolls his eyes.\n<b> \n</b><b> INT. PROM - NIGHT - LATER\n</b><b> \n</b> Kat and Patrick dance to a slow SONG. Whatever he\'s \n whispering into her ear is making her laugh.\n<b> \n</b> Cam and Bianca dance nearby, glowing with happiness. She \n whispers something in his ear and heads for the ladies\' room\n<b> \n</b><b> INT. LADIES ROOM - NIGHT\n</b><b> \n</b> Bianca walks in, positively radiant. Chastity emerges from a \n stall.\n<b> \n</b><b> BIANCA\n</b> (surprised)\n What are you doing here?\n<b> \n</b> Chastity checks her hair in the mirror, aloof.\n<b> \n</b><b> CHASTITY \n</b> You think you \' re the only sophomore \n at the prom?\n<b> \n</b><b> BIANCA\n</b> I did.\n<b> \n</b> Chastity maintains her snooty tone.\n<b> \n</b><b> CHASTITY \n</b> And just so you know, my date isn\'t \n planning on spending most of the night \n in his backseat.\n<b> \n</b> BIANCA What\'re you talking about?\n<b> \n</b><b> CHASTITY \n</b> Joey Dorsey is only after one thing - - \n your cherry. He practically made a \n public announcement.\n<b> \n</b> Appalled, Bianca storms out. Chastity tries to backpedal.\n<b> \n</b><b> CHASTITY \n</b> (continuing)\n I wanted to tell you\n<b> \n</b><b> INT. PROM - NIGHT\n</b><b> \n</b> Joey, drunk, disorderly and pissed off, walks in with a few \n stray jocks - also dateless. 
He zeroes in on Cameron, now \n consoling a pissed-off Bianca.\n<b> \n</b> Patrick and Kat continue to slow dance, oblivious to the \n evil about to erupt.\n<b> \n</b><b> PATRICK\n</b> My grandmother\'s .\n<b> \n</b><b> KAT\n</b> What?\n<b> \n</b><b> PATRICK\n</b> That\'s where I was last year. She\'d \n never lived alone -- my grandfather died \n -- I stayed with her. I wasn\'t in jail, \n I don\'t know Marilyn Manson, and I\'ve \n never slept with a Spice Girl. I spent \n a year sitting next to my grandma on the \n couch watching Wheel of Fortune. End of \n story.\n<b> \n</b> He takes a breath and looks away, not meeting her eyes. Kat \n stares at him for a moment and laughs a delighted laugh\n<b> \n</b><b> KAT\n</b> That \' s completely adorable!\n<b> \n</b><b> PATRICK\n</b> It gets worse -- you still have your \n freshman yearbook?\n<b> \n</b> He\'s interrupted by Joey\'s hand on his shoulder.\n<b> \n</b><b> JOEY\n</b> What\'s Bianca doing here with that \n cheese dick? I didn\'t pay you to let \n some little punk ass snake me.\n<b> \n</b><b> ACROSS THE ROOM\n</b><b> \n</b> Michael spots the altercation and dances Mandella over to \n Cameron and Bianca.\n<b> \n</b><b> MICHAEL \n</b> (to Cameron)\n Feces hitting fan. C\'mon\n<b> \n</b> Michael takes Cameron aside, leaving Mandella and Bianca \n staring after them.\n<b> \n</b><b> ACROSS THE ROOM\n</b><b> \n</b> Michael and Cameron approach Joey as he continues to taunt \n Patrick who keeps quiet, realizing the weight of this \n situation.\n<b> \n</b><b> MICHAEL \n</b> (continuing)\n Joey, pal, compadre. Let\'s take it \n easy.\n<b> \n</b> Joey turns toward Michael and Cameron.\n<b> \n</b> JOEY You two are in big trouble\n<b> \n</b> Cameron faces Joey.\n<b> \n</b><b> CAMERON\n</b> Admit it. You lost. Be a man.\n<b> \n</b> Joey PUNCHES Cameron in the face, taking him by surprise \n Cameron holds his nose as it bleeds onto his tux\n<b> \n</b> The various cliques descend angrily and Joey is soon \n surrounded by seething Cowboys, Coffee Kids and White \n Rastas.\n<b> \n</b><b> DEREK \n</b> Very uncool, my brother\n<b> \n</b><b> JOEY\n</b> I\'m not your brother, white boy.\n<b> \n</b> The other Rastas GASP, as if stung by the realization that \n they\'re white.\n<b> \n</b> Joey turns back to Patrick and Kat.\n<b> \n</b><b> JOEY \n</b> (continuing)\n Just so you know -- she\'ll only spread \n her legs once.\n<b> \n</b> Kat looks from Joey to Patrick, not sure what she\'s hearing. \n Joey pushes through the crowd but a HAND drags him back. \n It\'s Bianca. And she BELTS the hell out of him\n<b> \n</b><b> BIANCA\n</b> That\'s for making my date bleed\n<b> \n</b> She BELTS him again\n<b> \n</b><b> BIANCA\n</b> (continuing)\n That\'s for my sister.\n<b> \n</b> And AGAIN\n<b> \n</b><b> BIANCA\n</b> (continuing)\n And that\'s for me.\n<b> \n</b> Cliques now descend on Joey, punching him wildly.\n<b> \n</b><b> COWBOY\n</b> And that\'s for the fourth grade, \n asshole.\n<b> \n</b><b> HOTEL - NIGHT \n</b><b> \n</b> KAT runs down the stairs, Patrick chasing her\n<b> \n</b><b> PATRICK\n</b> Wait I...\n<b> \n</b><b> KAT\n</b> You were paid to take me out! By -- \n the one person I truly hate. I knew it \n was a set-up!\n<b> \n</b><b> PATRICK\n</b> It wasn\'t like that.\n<b> \n</b><b> KAT\n</b> Really? What was it like? 
A down \n payment now, then a bonus for sleeping \n with me?\n<b> \n</b><b> PATRICK\n</b> I didn\'t care about the money.\n<b> \n</b> He catches up to her now\n<b> \n</b><b> PATRICK \n</b> (continuing)\n I cared about --\n<b> \n</b> She turns to face him with a countenance more in sorrow than \n in anger.\n<b> \n</b><b> KAT\n</b> You are so not what I thought you were.\n<b> \n</b> He grabs her and kisses her to shut her up. After a second, \n she jerks away and flees down the stairs and out of sight.\n<b> \n</b> Bianca stands at the top of the stairs, watching. She\'s \n never looked more guilty.\n<b> \n</b><b> INT. STRATFORD HOUSE - DAY\n</b><b> \n</b> Kat is sprawled on the couch in sweats, wrapped in a \n blanket, watching "Sixteen Candles". When Molly Ringwald \n leans across the birthday cake to get a kiss from her dream \n date, Kat changes the channel disgustedly, settling for an \n infomercial\n<b> \n</b> The phone sits next to her. Not ringing. Bianca breezes \n in, bearing a cup of tea.\n<b> \n</b><b> BIANCA\n</b> Are you sure you don\'t want to come \n with us? It\'ll be fun.\n<b> \n</b> Kat takes the tea and gives a weak smile.\n<b> \n</b><b> KAT\n</b> I \' m sure .\n<b> \n</b> Bianca sits down next to her\n<b> \n</b><b> BIANCA\n</b> You looked beautiful last night, you \n know.\n<b> \n</b><b> KAT\n</b> So did you\n<b> \n</b> Bianca gives her a squeeze, then jumps up when the DOORBELL \n rings, opening the door to a waiting Cameron. He peeks his \n head inside.\n<b> \n</b><b> CAMERON\n</b> She okay?\n<b> \n</b><b> BIANCA\n</b> I hope so.\n<b> \n</b> The door shuts behind her as Walter enters.\n<b> \n</b><b> WALTER\n</b> Was that your sister?\n<b> \n</b><b> KAT\n</b> Yeah. She left with some bikers Big \n ones. Full of sperm.\n<b> \n</b><b> WALTER\n</b> Funny.\n<b> \n</b> Walter sits down on the arm of the chair and watches the \n infomercial with Kat.\n<b> \n</b><b> WALTER\n</b> (continuing)\n I don\'t understand the allure of \n dehydrated food. Is this something I \n should be hip to?\n<b> \n</b><b> KAT\n</b> No, Daddy.\n<b> \n</b><b> WALTER\n</b> (dreading the \n answer)\n So tell me about this dance. Was it \n fun?\n<b> \n</b><b> KAT\n</b> Parts of it.\n<b> \n</b><b> WALTER\n</b> Which parts?\n<b> \n</b><b> KAT\n</b> The part where Bianca beat the hell out \n of some guy.\n<b> \n</b><b> WALTER\n</b> Bianca did what?\n<b> \n</b><b> KAT\n</b> What\'s the matter? Upset that I rubbed \n off on her?\n<b> \n</b><b> WALTER\n</b> No -- impressed.\n<b> \n</b> Kat looks up in surprise.\n<b> \n</b><b> WALTER\n</b> (continuing)\n You know, fathers don\'t like to admit \n that their daughters are capable of \n running their own lives. It means we\'ve \n become spectators. Bianca still lets me \n play a few innings. You\'ve had me on \n the bleachers for years. When you go to \n Sarah Lawrence, I won\'t even be able to \n watch the game.\n<b> \n</b><b> KAT \n</b> (hopeful)\n When I go?\n<b> \n</b><b> WALTER\n</b> Oh, Christ. Don\'t tell me you\'ve \n changed your mind. I already sent \'em a \n check.\n<b> \n</b> Kat reaches over and gives him a hug\n<b> \n</b> INT. CAFETERIA - DAY Kat stands grabs a box of cornflakes \n from the food line.\n<b> \n</b><b> CAMERON (0. 
S.)\n</b> Katarina?\n<b> \n</b> She turns and looks at him\n<b> \n</b><b> CAMERON\n</b> I\'d like to express my apologies.\n<b> \n</b><b> KAT\n</b> For what?\n<b> \n</b><b> CAMERON\n</b> (looking down)\n I didn\'t mean for you to get -- When \n Bianca asked me to find you a boyfriend, \n I had no idea it would turn out so -- \n ugly. I would never have done anything \n to compromise your - - -\n<b> \n</b> He trails off when he realizes she\'s thrown her food tray \n against the wall and marched off -- the old "kill, kill" \n look back in her eyes.\n<b> \n</b><b> INT. HALLWAY - DAY \n</b><b> \n</b> Kat stomps up the hallway, full of menace\n<b> \n</b><b> CLASSROOM - DAY\n</b><b> \n</b> Bianca\'s English teacher perches on the edge of a desk, open \n book in hand.\n<b> \n</b><b> TEACHER\n</b> Who can tell me at what point Lucentio \n admits his deception?\n<b> \n</b> The door of the classroom FLIES open and an angry Kat stalks \n in, yanking Bianca from her chair and dragging her toward \n the hallway.\n<b> \n</b><b> KAT \n</b> (to the teacher)\n Family emergency.\n<b> \n</b><b> HALLWAY - DAY\n</b><b> \n</b> Bianca tries to pull away as Kat drags her by the hair \n between two rows of lockers.\n<b> \n</b><b> BIANCA\n</b> Let go!\n<b> \n</b><b> KAT\n</b> You set me up.\n<b> \n</b><b> BIANCA\n</b> I just wanted --\n<b> \n</b><b> KAT\n</b> What? To completely damage me? To send \n me to therapy forever? What?\n<b> \n</b><b> BIANCA\n</b> No! I just wanted\n<b> \n</b> Miss Perky walks up\n<b> \n</b><b> MISS PERKY\n</b> Ladies? Shall we take a trip to my \n office?\n<b> \n</b><b> INT. MISS PERKY\'S OFFICE - DAY\n</b><b> \n</b> Miss Perky stares at both sisters as they sit before her, \n then focuses on Bianca.\n<b> \n</b><b> MISS PERKY\n</b> So you\'re the real bitch\n<b> \n</b><b> BIANCA\n</b> Yes! Okay? Yes -- I\'m the real bitch. \n I wanted her to get a boyfriend so I \n could. Apparently, this makes me a \n horrible person. I\'m sorry.\n<b> \n</b> She turns to Kat.\n<b> \n</b><b> BIANCA\n</b> (continuing)\n I swear -- I didn\'t know about the \n money. I didn\'t even know Joey was \n involved. I would never intentionally \n hurt you, Kat.\n<b> \n</b><b> MISS PERKY \n</b> (to Kat)\n Do you care to respond?\n<b> \n</b><b> KAT\n</b> Am I supposed to feel better? Like, \n right now? Or do I have some time to \n think about it?\n<b> \n</b><b> MISS PERKY \n</b> Just smack her now.\n<b> \n</b> Bianca rises, taking Kat by the arm.\n<b> \n</b><b> BIANCA\n</b> (to Miss Perky)\n We\'ll be getting back to you.\n<b> \n</b><b> MISS PERKY \n</b> What, no hug?\n<b> \n</b><b> HALLWAY - DAY \n</b><b> \n</b> And Bianca leave Miss Perky\'s office\n<b> \n</b><b> BIANCA\n</b> Is that woman a complete fruit-loop or \n is it just me?\n<b> \n</b><b> KAT\n</b> It\'s just you.\n<b> \n</b><b> ENGLISH CLASS - DAY \n</b><b> \n</b> Mrs. Blaise faces the class\n<b> \n</b><b> MRS. BLAISE\n</b> All right. I\'m assuming everyone found \n time to compose, their poems. Except for \n Mr. Dorsey, who\'s still in ICU.\n<b> \n</b> Nerds in the back high-five each other.\n<b> \n</b><b> MRS. BLAISE \n</b> (continuing)\n Would anyone care to read theirs aloud?\n<b> \n</b> No one moves. Then Kat slowly stands up.\n<b> \n</b><b> KAT\n</b> I\'11 go\n<b> \n</b> Patrick looks up.\n<b> \n</b><b> MRS. BLAISE \n</b> Oh, Lord.\n<b> \n</b> She downs a couple Prozac\n<b> \n</b><b> MRS. 
BLAISE \n</b> (continuing)\n Please proceed.\n<b> \n</b> Kat stands, puts on her glasses, and takes a deep breath \n before reading from her notebook.\n<b> \n</b><b> KAT\n</b> I hate the way you talk to me/ and the \n way you cut your hair/ I hate the way \n you drive my car/ I hate it when you \n stare.\n<b> \n</b> She pauses, then continues\n<b> \n</b><b> KAT \n</b> (continuing)\n I hate your big dumb combat boots/ and \n the way you read my mind/ I hate you so \n much it makes me sick/ it even makes me \n rhyme.\n<b> \n</b> She takes a deep breath, and looks quickly at Patrick, who \n stares at the floor.\n<b> \n</b><b> KAT \n</b> (continuing)\n I hate the way you\'re always right/ I \n hate it when you lie/ I hate it when you \n make me laugh/ even worse when you make \n me cry/ I hate it that you\'re not \n around/ and the fact that you didn\'t \n call/ But mostly I hate the way I don \' \n t hate you/ not even close, not even a \n little bit, not even any at all.\n<b> \n</b> She looks directly at Patrick. He looks back this time. \n The look they exchange says everything.\n<b> \n</b> Then she walks out of the room The rest of the class remains \n in stunned silence.\n<b> \n</b><b> EXT. PARKING LOT - MOMENTS LATER\n</b><b> \n</b> Kat walks to her car alone. When she opens the door, she\'s \n greeted with a Fender Stratocaster guitar, reclining in the \n front seat.\n<b> \n</b> She picks it up slowly, inspecting every detail, then spins \n around.\n<b> \n</b> Patrick stands there, smiling.\n<b> \n</b><b> KAT\n</b> A Fender Strat. You bought this?\n<b> \n</b><b> PATRICK\n</b> I thought you could use it. When you \n start your band.\n<b> \n</b> She doesn\'t answer, but hides a smile, so he walks closer.\n<b> \n</b><b> PATRICK \n</b> (continuing)\n Besides, I had some extra cash. Some \n asshole paid me to take out a really \n great girl.\n<b> \n</b><b> KAT\n</b> Is that right?\n<b> \n</b><b> PATRICK\n</b> Yeah, but then I fucked up. I fell for \n her.\n<b> \n</b> Blushes and looks down.\n<b> \n</b><b> PATRICK \n</b> (continuing)\n You know -- it\'s not every day you find \n a girl who\'ll flash her tits to get you \n out of detention.\n<b> \n</b> Looks up. surprised and embarrassed that he found out\n<b> \n</b> He takes her upturned face as a sign to kiss her and he does \n She lets him this time.\n<b> \n</b> Then breaks it off\n<b> \n</b><b> KAT\n</b> You can\'t just buy me a guitar every \n time you screw up, you know.\n<b> \n</b> He grimaces.\n<b> \n</b><b> PATRICK\n</b> I know\n<b> \n</b> He quiets her with another kiss Which she breaks off again.\n<b> \n</b><b> KAT\n</b> And don\'t just think you can\n<b> \n</b> He kisses her again, not letting her end it this time.\n<b> \n</b><b> STRATFORD HOUSE - SUNSET\n</b><b> \n</b> We hear the sounds of MUSIC and LAUGHTER.\n<b> \n</b><b> STRATFORD HOUSE/BACKYARD - SUNSET\n</b><b> \n</b> Patrick is at the barbecue grill, flipping burgers. Kat \n watches.\n<b> \n</b><b> KAT\n</b> Why is my veggie burger the only burnt \n object on this grill?\n<b> \n</b><b> PATRICK\n</b> Because I like to torture you.\n<b> \n</b><b> KAT\n</b> Oh, Bianca? Can you get me my freshman \n yearbook?\n<b> \n</b><b> PATRICK\n</b> Don \' t you even dare. . 
.\n<b> \n</b> ON BIANCA AND CAMERON As they argue on the patio.\n<b> \n</b><b> CAMERON\n</b> They do to!\n<b> \n</b><b> BIANCA\n</b> They do not!\n<b> \n</b> Rises to get the yearbook.\n<b> \n</b><b> CAMERON\n</b> Can someone please tell her that \n sunflower seeds come from sunflowers?\n<b> \n</b><b> ON MICHAEL AND MANDELLA\n</b><b> \n</b> Severely making-out in a lawn chair. She comes up for a \n breath.\n<b> \n</b><b> MANDELLA\n</b> I can\'t remember a word of Shakespeare \n right now. Isn\'t that weird?\n<b> \n</b> Michael pulls her back down for another round ON KAT AND \n<b> PATRICK\n</b><b> \n</b> She tries to keep him from grabbing the yearbook that Bianca \n now hands her.\n<b> \n</b><b> KAT\n</b> You\'re freaked over this, aren\'t you?\n<b> \n</b> Bianca hands her the yearbook\n<b> \n</b><b> BIANCA\n</b> He\'s more than freaked. He\'s froke\n<b> \n</b> Flips to a page.\n<b> \n</b><b> KAT\n</b> I\'d like to call your attention to \n Patrick Verona\'s stunning bad-ass look \n of 1995 ---\n<b> \n</b> INSERT - A horrifically nerdy freshman year picture Glasses, \n bad hair, headgear -- the works.\n<b> \n</b> She holds up the picture for all to view. Patrick cringes \n and throws a handful of pretzels at her.\n<b> \n</b><b> BIANCA\n</b> Patrick -- is that- a.\n<b> \n</b><b> KAT\n</b> Perm?\n<b> \n</b><b> PATRICK\n</b> Ask my attorney.\n<b> \n</b> Kat and Bianca huddle over the picture, giggling -- as we \n CRANE UP and hear a GIRLY PUNK version of The Partridge \n Family\'s "I Think I Love You".\n<b> \n</b><b> FADE OUT:\n</b><b> \n</b><b> END \n</b>\n</pre>'}
|
notebooks/covariogram.ipynb | ###Markdown
Fixed uniform intensity
###Code
# assumed imports, inferred from usage below; `plot_stationarity` is expected to be
# provided by a project helper module
import numpy as np
import torch
import matplotlib.pyplot as plt
import util_porbnet      # assumed module path
import networks_porbnet  # assumed module path

intensity = 10
s2_0 = 4
T = [-1,1]
prior_b_sig2 = 1e-16
prior_w_sig2 = 1.0
x_plot = np.linspace(-1.5,1.5,1000)
intensity_func = util_porbnet.Piecewise(np.array([T[0],T[1]]),np.array([intensity]))
dim_hidden_initial = (T[1]-T[0])*intensity
net = networks_porbnet.RBFN(dim_in=1, dim_hidden_initial=dim_hidden_initial, \
dim_hidden_max=3*dim_hidden_initial, \
dim_out=1, intensity=intensity_func, s2_0=s2_0, \
prior_w_sig2 = prior_w_sig2*np.sqrt(np.pi/s2_0), prior_b_sig2 = prior_b_sig2, \
sig2 = .01,\
prior_intensity_alpha = intensity, prior_intensity_beta = 1.0)
torch.manual_seed(0)
f_samp = net.sample_functions_prior(torch.from_numpy(x_plot).reshape(-1,1), n_samp=1000, sample_K=True, sample_intensity=False).detach().numpy()
h_list = np.array([0,.1,.2])
fig, ax = plt.subplots(figsize=(5,3))
colors = ['C0','C1','C2']
for i, h in enumerate(h_list):
cov_plot = util_porbnet.cov_porbnet_fixed_intensity(x1=x_plot-h/2, \
x2=x_plot+h/2, \
C0=T[0], \
C1=T[1], \
b_sig2=prior_b_sig2, \
w_sig2=prior_w_sig2, \
s2_0=s2_0, \
intensity=intensity)
ax.plot(x_plot, cov_plot, color=colors[i])
plot_stationarity(x_plot,f_samp,h=h_list,ax=ax)
ax.set_ylim(-.1,2.1)
fig.tight_layout()
fig.savefig('covariogram_porbnet.pdf')
fig.savefig('covariogram_porbnet.png')
###Output
_____no_output_____
###Markdown
Uniform intensity with Gamma prior
###Code
intensity = 10
s2_0 = 1
T = [-1,1]
prior_b_sig2 = 1e-16
prior_w_sig2 = 1.0
x_plot = np.linspace(-1.5,1.5,1000)
alpha = util_porbnet.alpha_for_sqrt_gamma(beta=1.0, K=intensity)
intensity_func = util_porbnet.Piecewise(np.array([T[0],T[1]]),np.array([intensity]))
dim_hidden_initial = (T[1]-T[0])*intensity
net = networks_porbnet.RBFN(dim_in=1, dim_hidden_initial=dim_hidden_initial, \
dim_hidden_max=3*dim_hidden_initial, \
dim_out=1, intensity=intensity_func, s2_0=s2_0, \
prior_w_sig2 = prior_w_sig2*np.sqrt(np.pi/s2_0), prior_b_sig2 = prior_b_sig2, \
sig2 = .01,\
prior_intensity_alpha = alpha, prior_intensity_beta = 1.0)
torch.manual_seed(0)
f_samp = net.sample_functions_prior(torch.from_numpy(x_plot).reshape(-1,1), n_samp=1000, sample_K=True, sample_intensity=True).detach().numpy()
h_list = np.array([0, .1, .2])
fig, ax = plt.subplots(figsize=(5,3))
colors = ['C0','C1','C2']
for i, h in enumerate(h_list):
cov_plot = util_porbnet.cov_porbnet_gamma_intensity(x1=x_plot-h/2,\
x2=x_plot+h/2,\
C0=T[0],\
C1=T[1],\
b_sig2=prior_b_sig2,\
w_sig2=prior_w_sig2,\
s2_0=s2_0,\
intensity_alpha=alpha,\
intensity_beta=1.0)
ax.plot(x_plot, cov_plot, color=colors[i])
plot_stationarity(x_plot,f_samp,h=h_list,ax=ax)
ax.set_ylim(-.1,2.1)
fig.tight_layout()
fig.savefig('covariogram_porbnet_sgcp.pdf')
fig.savefig('covariogram_porbnet_sgcp.png')
###Output
_____no_output_____ |
health_insurance_cross-sell/notebooks/live017_metrics.ipynb | ###Markdown
0.0. Imports
###Code
import numpy as np
import pandas as pd
import scikitplot as skplt
import seaborn as sns
from matplotlib import pyplot as plt
from sklearn import model_selection as ms
from sklearn import linear_model as lm
from sklearn import preprocessing as pp
from sklearn import ensemble as en
from sklearn import neighbors as nh
df_raw = pd.read_csv( '../data/raw/train.csv' )
###Output
_____no_output_____
###Markdown
1.0. Data Description
###Code
df1 = df_raw.copy()
###Output
_____no_output_____
###Markdown
1.1. Data Dimension
###Code
print( 'Number of Rows:{}'.format( df1.shape[0] ) )
print( 'Number of Columns:{}'.format( df1.shape[1] ) )
cols_new = ['id', 'gender', 'age', 'driving_license', 'region_code', 'previously_insured', 'vehicle_age',
'vehicle_damage', 'annual_premium', 'policy_sales_channel', 'vintage', 'response']
# rename
df1.columns = cols_new
###Output
_____no_output_____
###Markdown
1.2. Data Types
###Code
df1.dtypes
###Output
_____no_output_____
###Markdown
1.3. Check NA
###Code
df1.isna().sum()
###Output
_____no_output_____
###Markdown
1.4. Data Descriptive
###Code
num_attributes = df1.select_dtypes( include=['int64', 'float64'] )
cat_attributes = df1.select_dtypes( exclude=['int64', 'float64', 'datetime64[ns]'])
# Central Tendency - Mean, Median
ct1 = pd.DataFrame( num_attributes.apply( np.mean ) ).T
ct2 = pd.DataFrame( num_attributes.apply( np.median ) ).T
# dispersion - std, min, max, range, skew, kurtosis
d1 = pd.DataFrame( num_attributes.apply( np.std ) ).T
d2 = pd.DataFrame( num_attributes.apply( min ) ).T
d3 = pd.DataFrame( num_attributes.apply( max ) ).T
d4 = pd.DataFrame( num_attributes.apply( lambda x: x.max() - x.min() ) ).T
d5 = pd.DataFrame( num_attributes.apply( lambda x: x.skew() ) ).T
d6 = pd.DataFrame( num_attributes.apply( lambda x: x.kurtosis() ) ).T
# concatenate
m = pd.concat( [d2, d3, d4, ct1, ct2, d1, d5, d6] ).T.reset_index()
m.columns = ['attributes', 'min', 'max', 'range', 'mean', 'median', 'std', 'skew', 'kurtosis']
m
###Output
_____no_output_____
###Markdown
2.0. Feature Engineering
###Code
df2 = df1.copy()
cols_new = ['id', 'gender', 'age', 'driving_license', 'region_code', 'previously_insured', 'vehicle_age',
'vehicle_damage', 'annual_premium', 'policy_sales_channel', 'vintage', 'response']
# rename
df1.columns = cols_new
df2.head()
# Vehicle Damage
df2['vehicle_damage'] = df2['vehicle_damage'].apply( lambda x: 1 if x == 'Yes' else 0 )
# Vehicle Age
df2['vehicle_age'] = df2['vehicle_age'].apply( lambda x: 'over_2_years' if x == '> 2 Years' else 'between_1_2_year' if x == '1-2 Year' else 'below_1_year' )
###Output
_____no_output_____
###Markdown
3.0. Data Filtering
###Code
df3 = df2.copy()
###Output
_____no_output_____
###Markdown
4.0. Exploratory Data Analysis
###Code
df4 = df3.copy()
###Output
_____no_output_____
###Markdown
4.1. Univariate Analysis
###Code
# gender
sns.boxplot( x='response', y='age', data=df4 )
aux00 = df4.loc[df4['response'] == 0, 'age']
sns.histplot( aux00 )
aux00 = df4.loc[df4['response'] == 1, 'age']
sns.histplot( aux00 )
# driving license
aux = df4[['driving_license', 'response']].groupby( 'response' ).sum().reset_index()
sns.barplot( x='response', y='driving_license', data=aux )
pd.crosstab( df4['driving_license'], df4['response'] ).apply( lambda x: x / x.sum(), axis=1 )
# region_code
aux0 = df4[['id', 'region_code', 'response']].groupby( ['region_code', 'response'] ).count().reset_index()
sns.scatterplot( x='region_code', y='id', hue='response', data=aux0 )
# previously_insured
aux0 = df4[['id', 'previously_insured', 'response']].groupby( ['previously_insured', 'response'] ).count().reset_index()
sns.scatterplot( x='previously_insured', y='id', hue='response', data=aux0 )
pd.crosstab( df4['previously_insured'], df4['response'] ).apply( lambda x: x / x.sum(), axis=1 )
# vehicle_age
df4[['vehicle_age', 'response']].value_counts( normalize=True ).reset_index()
# vehicle_damage
aux = df4[['vehicle_damage', 'response']].groupby( 'response' ).sum().reset_index()
sns.barplot( x='response', y='vehicle_damage', data=aux );
# annual_premium
aux = df4[(df4['annual_premium'] > 10000) & (df4['annual_premium'] < 100000)]
sns.boxplot( x='response', y='annual_premium', data=aux );
aux00 = aux.loc[aux['response'] == 0, 'annual_premium']
sns.histplot( aux00 )
aux00 = aux.loc[aux['response'] == 1, 'annual_premium']
sns.histplot( aux00 )
# policy_sales_channel
#aux = pd.crosstab( df4['policy_sales_channel'], df4['response'] ).reset_index()
#aux.columns = ['policy_sales_channel', 'no', 'yes']
#aux.set_index( 'policy_sales_channel' );
aux = df4[['policy_sales_channel', 'response']].groupby( ['policy_sales_channel', 'response'] ).size().reset_index()
aux.columns = ['policy_sales_channel', 'response', 'number']
aux = aux[aux['number'] < 1000]
aux.head()
sns.scatterplot( x='policy_sales_channel', y='number', hue='response', data=aux )
# rebuild the crosstab so the 'no'/'yes' columns exist for the grouped bar chart below
aux = pd.crosstab( df4['policy_sales_channel'], df4['response'] ).reset_index()
aux.columns = ['policy_sales_channel', 'no', 'yes']
fig, ax = plt.subplots( 1, figsize=(16,6) )
x = np.arange( 0, len( aux.index ) )
# plot bars
plt.bar( x - 0.1, aux['yes'], color='red' )
plt.bar( x + 0.1, aux['no'], color='blue' )
# vintage
aux0 = df4.loc[df4['response'] == 0, 'vintage']
sns.histplot( aux0 )
# vintage
aux0 = df4.loc[df4['response'] == 1, 'vintage']
sns.histplot( aux0 )
###Output
_____no_output_____
###Markdown
5.0. Data Preparation
###Code
X = df4.drop( 'response', axis=1 )
y = df4['response'].copy()
x_train, x_val, y_train, y_val = ms.train_test_split( X, y, test_size=0.20 )
df5 = pd.concat( [x_train, y_train], axis=1 )
###Output
_____no_output_____
###Markdown
5.1. Standardization
###Code
ss = pp.StandardScaler()
# annual premium - Standard Scaler
df5['annual_premium'] = ss.fit_transform( df5[['annual_premium']].values )
###Output
_____no_output_____
###Markdown
5.2. Rescaling
###Code
mms_age = pp.MinMaxScaler()
mms_vintage = pp.MinMaxScaler()
# Age - MinMax Scaler
df5['age'] = mms_age.fit_transform( df5[['age']].values )
# Vintage - MinMax Scaler
df5['vintage'] = mms_vintage.fit_transform( df5[['vintage']].values )
###Output
_____no_output_____
###Markdown
5.3. Transformation
###Code
# gender - One Hot Encoding / Target Encoding
target_encode_gender = df5.groupby( 'gender' )['response'].mean()
df5.loc[:, 'gender'] = df5.loc[:, 'gender'].map( target_encode_gender )
# region_code - Frequency Encoding / Target Encoding
target_encode_region_code = df5.groupby( 'region_code' )['response'].mean()
df5.loc[:, 'region_code'] = df5.loc[:, 'region_code'].map( target_encode_region_code )
# vehicle_age - One Hot Encoding / Frequency Encoding
df5 = pd.get_dummies( df5, prefix='vehicle_age', columns=['vehicle_age'] )
# policy_sales_channel - Frequency Encoding / Target Encoding
fe_policy_sales_channel = df5.groupby( 'policy_sales_channel' ).size() / len( df5 )
df5.loc[:, 'policy_sales_channel'] = df5.loc[:, 'policy_sales_channel'].map( fe_policy_sales_channel )
###Output
_____no_output_____
###Markdown
5.4. Validation
###Code
# gender
x_val.loc[:, 'gender'] = x_val.loc[:, 'gender'].map( target_encode_gender )
# age
x_val.loc[:, 'age'] = mms_age.transform( x_val[['age']].values )
# region_code
x_val.loc[:, 'region_code'] = x_val.loc[:, 'region_code'].map( target_encode_region_code )
# vehicle_age
x_val = pd.get_dummies( x_val, prefix='vehicle_age', columns=['vehicle_age'] )
# annual_premium
x_val.loc[:, 'annual_premium'] = ss.transform( x_val[['annual_premium']].values )
# policy_sales_channel
x_val.loc[:, 'policy_sales_channel'] = x_val['policy_sales_channel'].map( fe_policy_sales_channel )
# vintage
x_val.loc[:, 'vintage'] = mms_vintage.transform( x_val[['vintage']].values )
# fillna
x_val = x_val.fillna( 0 )
###Output
/Users/meigarom.lopes/.pyenv/versions/3.8.0/envs/pa004/lib/python3.8/site-packages/pandas/core/indexing.py:1676: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self._setitem_single_column(ilocs[0], value, pi)
/Users/meigarom.lopes/.pyenv/versions/3.8.0/envs/pa004/lib/python3.8/site-packages/pandas/core/indexing.py:1738: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self._setitem_single_column(loc, value[:, i].tolist(), pi)
/Users/meigarom.lopes/.pyenv/versions/3.8.0/envs/pa004/lib/python3.8/site-packages/pandas/core/indexing.py:1676: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self._setitem_single_column(ilocs[0], value, pi)
###Markdown
6.0. Feature Selection 6.1. Boruta Algorithms 6.2. Tree-Based Model Feature Importance
###Code
# model definition
et_model = en.ExtraTreesClassifier( n_estimators=250, random_state=0, n_jobs=-1 )
# data preparation
x_train_n = df5.drop( ['id', 'response'], axis=1 )
y_train_n = y_train.values
et_model.fit( x_train_n, y_train_n )
importances = et_model.feature_importances_
indices = np.argsort( importances )[::-1]
print( 'feature ranking')
df = pd.DataFrame()
for i, j in zip( x_train_n, et_model.feature_importances_ ):
aux = pd.DataFrame( {'feature': i, 'importance': j}, index=[0] )
df = pd.concat( [df, aux], axis=0 )
print( df.sort_values( 'importance', ascending=False ) )
# Plot the impurity-based feature importances of the forest
plt.figure()
plt.title("Feature importances")
plt.bar(range(x_train_n.shape[1]), importances[indices], color="r", align="center")
plt.xticks(range(x_train_n.shape[1]), indices)
plt.xlim([-1, x_train_n.shape[1]])
plt.show()
###Output
feature ranking
feature importance
0 vintage 0.273567
0 annual_premium 0.244376
0 age 0.162889
0 region_code 0.108230
0 vehicle_damage 0.066724
0 policy_sales_channel 0.060147
0 previously_insured 0.057128
0 vehicle_age_below_1_year 0.013169
0 vehicle_age_between_1_2_year 0.006332
0 gender 0.004671
0 vehicle_age_over_2_years 0.002279
0 driving_license 0.000487
###Markdown
7.0. Machine Learning Model
###Code
cols_selected = ['annual_premium', 'vintage', 'age', 'region_code', 'vehicle_damage', 'previously_insured',
'policy_sales_channel']
x_train = df5[ cols_selected ]
x_val = x_val[ cols_selected ]
###Output
_____no_output_____
###Markdown
7.1. K-NN
###Code
# model definition
knn_model = nh.KNeighborsClassifier( n_neighbors=7 )
# model training
knn_model.fit( x_train, y_train )
# model prediction
yhat_knn = knn_model.predict_proba( x_val )
# Accumulative Gain
skplt.metrics.plot_cumulative_gain( y_val, yhat_knn )
# Lift
skplt.metrics.plot_lift_curve( y_val, yhat_knn )
###Output
_____no_output_____
###Markdown
7.2. Logistic Regression
###Code
# model definition
lr_model = lm.LogisticRegression( random_state=42 )
# model training
lr_model.fit( x_train, y_train )
# model prediction
yhat_lr = lr_model.predict_proba( x_val )
# Accumulative Gain
skplt.metrics.plot_cumulative_gain( y_val, yhat_lr )
# Lift
skplt.metrics.plot_lift_curve( y_val, yhat_lr )
###Output
_____no_output_____
###Markdown
7.3. ExtraTrees
###Code
# model definition
et_model = en.ExtraTreesClassifier( n_estimators=1000, n_jobs=-1, random_state=42 )
# model training
et_model.fit( x_train, y_train )
# model prediction
yhat_et = et_model.predict_proba( x_val )
# Accumulative Gain
skplt.metrics.plot_cumulative_gain( y_val, yhat_et )
# Lift
skplt.metrics.plot_lift_curve( y_val, yhat_et )
###Output
_____no_output_____ |
Lesson04/Lists.ipynb | ###Markdown
Lists 1. Create a list of numbers in a variable L. The numbers are 4, 6, 8, 9.
###Code
L = [4,6,8,9]
###Output
_____no_output_____
###Markdown
2. Find the length of the list.
###Code
print(len(L))
###Output
4
###Markdown
3. Add number 11 in the list.
###Code
L.append(11)
L
###Output
_____no_output_____
###Markdown
4. Concatenate the list [13, 17, 20] to L.
###Code
L += [13, 17, 20]
###Output
_____no_output_____
###Markdown
5. Sort the list in ascending order
###Code
L.sort()
L
###Output
_____no_output_____
###Markdown
6. Can a list contain values other than integers?
###Code
L = [1, "two", 3.24, [0, 3,5]]
L
###Output
_____no_output_____ |
spark_sql/5_play_with_data_frame.ipynb | ###Markdown
Working with and Observing DataFrames. Part 1: Basic operations and observation. Read file. Data source: hdfs:///tmp/ratings.csv
###Code
# assumes an active SparkSession named `spark` (the handle used by spark.stop() later)
# and a CSV with a header row; adjust the read options to your file if needed
fileDF = spark.read.csv('hdfs:///tmp/ratings.csv', header=True, inferSchema=True)
###Output
_____no_output_____
###Markdown
Inspect the DataFrame
###Code
fileDF
###Output
_____no_output_____
###Markdown
Inspect the columns
###Code
fileDF.columns
###Output
_____no_output_____
###Markdown
How many columns? Column summary statistics
###Code
print(len(fileDF.columns))
fileDF.describe().show()
###Output
_____no_output_____
###Markdown
What if we only want to look at userid?
###Code
fileDF.select('userid').show()
###Output
_____no_output_____
###Markdown
Print the schema
###Code
fileDF.printSchema()
###Output
_____no_output_____
###Markdown
Select columns. Please show the top 20 rows of userid and rating
###Code
fileDF.select('userid', 'rating').show()
### please show top 20 rows of userid and rating + 1
fileDF.select(fileDF.userid, fileDF.rating + 1).show()
### please show top 20 rows of userid, rating + 1 and turn data type of rating to double
fileDF.select(fileDF.userid, fileDF.rating.cast('double') + 1).show()
### please check if the data type of the new rating is double
fileDF.select(fileDF.rating.cast('double')).dtypes
###Output
_____no_output_____
###Markdown
Filter rows
###Code
### please select rows where userid is equal to 3
fileDF.filter(fileDF.userid == 3).show()
### please select userid and rating where userid is equal to 3
fileDF.filter(fileDF.userid == 3).select('userid', 'rating').show()
###Output
_____no_output_____
###Markdown
Count distinct values
###Code
### how many rows in the data?
fileDF.count()
### how many unique users?
fileDF.select('userid').distinct().count()
###Output
_____no_output_____
###Markdown
Exercise: how many movies were rated 5?
###Code
fileDF.filter(fileDF.rating == 5).select('movieid').distinct().count()
###Output
_____no_output_____
###Markdown
Part 2: Data Cleaning. Observe the distribution of values
###Code
fileDF.crosstab('userid', 'rating').show()
###Output
_____no_output_____
###Markdown
Handle missing values
###Code
# please fill missing value with 0 if the type of column is numeric, otherwise, please fill it with ''
fileDF.fillna(0).fillna('').show()
###Output
_____no_output_____
###Markdown
Convert column data types
###Code
from pyspark.sql.types import DoubleType
fileDF = fileDF.withColumn("rating_double", fileDF["rating"].cast(DoubleType()))
fileDF.printSchema()
fileDF.show()
fileDF.fillna(0).show()
fileDF_clean = fileDF.fillna(0)
fileDF_clean.crosstab('userid', 'rating_double').show()
fileDF.dropna().show()
###Output
_____no_output_____
###Markdown
Handle duplicate values
###Code
### How many unique rows in the dataset?
### Please delete duplicated data and verify it
fileDF.show()
fileDF.crosstab("userid", "movieid").show()
(fileDF.select("userid", "movieid", "rating").count() -
fileDF.select("userid", "movieid", "rating").distinct().count())
fileDF.dropDuplicates().orderBy(['userid', 'movieid', 'rating'], ascending=[1,0]).show()
fileDF.dropDuplicates(['userid', 'movieid', 'rating']).orderBy(['userid', 'movieid', 'rating'], ascending=[1,0]).show()
fileDF_nodup = fileDF.dropDuplicates(['userid', 'movieid', 'rating']).orderBy(['userid', 'movieid', 'rating'], ascending=[1,0])
(fileDF_nodup.select("userid", "movieid", "rating").count() -
fileDF_nodup.select("userid", "movieid", "rating").distinct().count())
spark.stop()
###Output
_____no_output_____ |
shape-classification/.ipynb_checkpoints/submission_3_shape_classification-checkpoint.ipynb | ###Markdown
Welcome to Shape Classification Import all dependencies
###Code
%pip install pydot==1.3.0
%pip install graphviz==0.10.1
import tensorflow as tf
import os
import zipfile
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
from tensorflow.keras.utils import plot_model
from tensorflow.keras import callbacks
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator
###Output
_____no_output_____
###Markdown
Import Training data
###Code
DATASET_DIR = './shape-classification-dataset/shapes'
###Output
_____no_output_____
###Markdown
Check datasets from directory
###Code
os.listdir(DATASET_DIR)
###Output
_____no_output_____
###Markdown
Check total dataset
###Code
print('total circle images :', len(os.listdir(DATASET_DIR + '/circle')))
print('total square images :', len(os.listdir(DATASET_DIR + '/square')))
print('total star images :', len(os.listdir(DATASET_DIR + '/star')))
print('total triangle images :', len(os.listdir(DATASET_DIR + '/triangle')))
###Output
total circle images : 3720
total square images : 3765
total star images : 3765
total triangle images : 3720
###Markdown
Example data from dataset
###Code
%matplotlib inline
img = image.load_img(DATASET_DIR + '/circle/0.png')
imgplot = plt.imshow(img)
###Output
_____no_output_____
###Markdown
Preprocessing Image Augmentation (Rescaling and Splitting)
###Code
train_dir = os.path.join(DATASET_DIR)
train_datagen = ImageDataGenerator(rescale=1./255,
rotation_range=20,
zoom_range=0.2,
shear_range=0.2,
fill_mode = 'nearest',
validation_split=0.2) # set validation split
###Output
_____no_output_____
###Markdown
Generate for training and validation
###Code
train_generator = train_datagen.flow_from_directory(
train_dir,
target_size=(16, 16),
batch_size=8,
class_mode='categorical',
subset='training') # set as training data
validation_generator = train_datagen.flow_from_directory(
train_dir, # same directory as training data
target_size=(16, 16),
batch_size=16,
class_mode='categorical',
subset='validation')
###Output
Found 11976 images belonging to 4 classes.
Found 2994 images belonging to 4 classes.
###Markdown
Create the model
###Code
model = tf.keras.models.Sequential([
    # Note the input shape is the desired size of the image: 16x16 with 3 color channels
tf.keras.layers.Conv2D(4, (3,3), activation='relu', input_shape=(16, 16, 3)),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Dropout(0.4),
# Flatten the results to feed into a DNN
tf.keras.layers.Flatten(),
    # 16- and 8-neuron hidden layers
tf.keras.layers.Dense(16, activation='relu'),
tf.keras.layers.Dense(8, activation='relu'),
# Give output of 4 labels
tf.keras.layers.Dense(4, activation='softmax')
])
###Output
_____no_output_____
###Markdown
Create the loss and optimizer function. 1. Adam optimizer: a replacement optimization algorithm for stochastic gradient descent. 2. The metric used for benchmarking is accuracy.
###Code
model.compile(optimizer=tf.optimizers.Adam(),
loss='categorical_crossentropy',
metrics = ['accuracy'])
print(model.summary())
plot_model(model, to_file='submission_3_shape_classification.png')
###Output
_____no_output_____
###Markdown
Create Callback
###Code
class myCallback(callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if(logs.get('accuracy')>0.92):
print("\nYour Accuracy >92%!")
self.model.stop_training = True
callbacks = myCallback()
###Output
_____no_output_____
###Markdown
Fit the model and Save the History
###Code
history = model.fit(train_generator,
validation_data=validation_generator,
epochs=50,
verbose=2,
callbacks=[callbacks])
###Output
Epoch 1/50
1497/1497 - 19s - loss: 0.6783 - accuracy: 0.6886 - val_loss: 0.2986 - val_accuracy: 0.8988
Epoch 2/50
1497/1497 - 19s - loss: 0.3830 - accuracy: 0.8365 - val_loss: 0.1558 - val_accuracy: 0.9549
Epoch 3/50
1497/1497 - 20s - loss: 0.2943 - accuracy: 0.8789 - val_loss: 0.0834 - val_accuracy: 0.9826
Epoch 4/50
1497/1497 - 18s - loss: 0.2300 - accuracy: 0.9098 - val_loss: 0.0556 - val_accuracy: 0.9850
Epoch 5/50
Your Accuracy >92%!
1497/1497 - 19s - loss: 0.1974 - accuracy: 0.9228 - val_loss: 0.0462 - val_accuracy: 0.9880
###Markdown
Plotting Plot Model Loss
###Code
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model Loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
Plot Model Accuracy
###Code
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
Convert to Tensorflow Lite
###Code
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with tf.io.gfile.GFile('model.tflite', 'wb') as f:
f.write(tflite_model)
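# Optional sanity check (illustrative addition, not required for the conversion itself):
# load the converted model back with the TFLite interpreter and confirm the 16x16x3 input shape.
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]['shape'])  # expected: [ 1 16 16  3]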
###Output
_____no_output_____ |
example/GradBoost Notebook.ipynb | ###Markdown
Gradient Boosting from (Almost) Scratch

By John Clements

Over the past month, I've been slowly working my way through Joel Grus' Data Science from Scratch 2nd Edition and I have thoroughly enjoyed building simple versions of machine learning algorithms. I've always learned better by doing and this book plays to that learning style. Unfortunately, it doesn't have a chapter for implementing gradient boosting. Although I read through Tianqi Chen's great slides (https://homes.cs.washington.edu/~tqchen/pdf/BoostedTree.pdf), I thought I'd gain a better understanding of gradient boosting by implementing it myself.

Gradient boosting is a relatively simple algorithm. It works by training weak models (most frequently CARTs) on the residuals from the preceding iteration. New predictions are generated by adding the predicted residuals, multiplied by a learning rate between 0 and 1, to the previous prediction. The learning rate prevents the model from taking too large a step towards the actual values and overshooting them. Small learning rates take longer to converge towards the best fit, so finding the right one is a balancing act. I highly recommend Tianqi Chen's slides for a more thorough treatment of gradient boosting, or look at the documentation for XGBoost, which is his amazing gradient boosting package (https://xgboost.readthedocs.io/en/latest/index.html).

I said gradient boosting from almost scratch; I use scikit-learn's models as my weak learners and use numpy for its efficiency for certain mathematical functions and for its array structure.

The gradient boosting algorithm goes:

**Step 1: Make a first guess for the training and testing y, using the average of the training y**

$$y_{train_{pred_0}} = \frac{1}{n} \sum_{i = 1}^{n} y_{train_i}$$

$$y_{test_{pred_0}} = y_{train_{pred_0}}$$

**Step 2: Calculate the residuals from the training data set**

$$r_0 = y_{train} - y_{train_{pred_0}}$$

**Step 3: Fit a weak learner to the residuals, minimizing the loss function**

$$r_0 = f_0(X_{train}), \quad \text{such that} \quad f_0 = \arg\min_{h} \Lambda(r_0, h(X_{train}))$$

Here $\Lambda$ is our loss function and $h(X)$ is a weak learner trained on the explanatory variables and the residuals from the preceding step. The first model I am using is scikit-learn's regression tree, so the potential loss functions are Mean Squared Error, Friedman's adjusted Mean Squared Error, and Mean Absolute Error. They each contain regularization terms, which is lucky for me. A more thorough explanation of each is contained in the documentation: https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html. In addition, I am using a ridge regression model whose loss function is the sum of the squared errors plus the l2 norm of the coefficients multiplied by a penalty coefficient. More information is here: https://en.wikipedia.org/wiki/Tikhonov_regularization

**Step 4: Increment the predicted y's**

$$y_{train_{pred_1}} = y_{train_{pred_0}} + \text{learning rate} \cdot f_0(X_{train})$$

$$y_{test_{pred_1}} = y_{test_{pred_0}} + \text{learning rate} \cdot f_0(X_{test})$$

**Step 5: Repeat Steps 2 through 4 until you reach the number of boosting rounds**

Calculate the residuals using $y_{train_{pred_1}}$, fit a new model, increment the predictions, and repeat for $2, 3, ..., n$, where $n$ is the number of boosting iterations.

This algorithm is implemented below.
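One quick reference on the second weak learner before the code (my notation, not from the original write-up): the ridge objective minimized at each boosting step can be written with $r_m$ as the residuals at step $m$, $\beta$ as the coefficients, and $\alpha$ as the l2 penalty coefficient that RidgeCV selects:

$$\Lambda_{ridge}(r_m, X_{train}\beta) = \sum_{i=1}^{n} \left( r_{m,i} - X_{train,i}\,\beta \right)^2 + \alpha \lVert \beta \rVert_2^2$$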
###Code
import typing
import numpy as np
def GradBoost(model,
X_test: np.array, # testing independent variables
X_train: np.array, # training independent variables
y_train: np.array, # training dependent variable
boosting_rounds: int = 100, # number of boosting rounds
learning_rate: float = 0.1, # learning rate with default of 0.1
verbose: bool = True) -> np.array: # if True, shows a tqdm progress bar
'''
Takes in a model and performs gradient boosting using that model. This allows for almost any scikit-learn
model to be used.
'''
import numpy as np
# make a first guess of our training target variable using the mean
y_hat_train = np.repeat(np.mean(y_train), len(y_train))
# initialize the out of sample prediction with the mean of the training target variable
y_hat_train_test = np.repeat(np.mean(y_train), len(X_test))
# calculate the residuals from the training data using the first guess
pseudo_resids = y_train - y_hat_train
# performs gradient boosting with a tqdm progress bar
if verbose:
from tqdm import tqdm
# iterates through the boosting round
for _ in tqdm(range(0, boosting_rounds)):
# fit the model to the pseudo residuals
model = model.fit(X_train, pseudo_resids)
# increment the predicted training y with the pseudo residual * learning rate
y_hat_train += learning_rate * model.predict(X_train)
# increment the predicted test y as well
y_hat_train_test += learning_rate * model.predict(X_test)
# calculate the pseudo resids for next round
pseudo_resids = y_train - y_hat_train
# performs gradient boosting without a progress bar
else:
# iterates through the boosting round
for _ in range(0, boosting_rounds):
# fit the model to the pseudo residuals
model = model.fit(X_train, pseudo_resids)
# increment the predicted training y with the pseudo residual * learning rate
y_hat_train += learning_rate * model.predict(X_train)
# increment the predicted test y as well
y_hat_train_test += learning_rate * model.predict(X_test)
# calculate the pseudo resids for next round
pseudo_resids = y_train - y_hat_train
# return a tuple of the predicted training y and the predicted test y
return y_hat_train, y_hat_train_test
###Output
_____no_output_____
###Markdown
Now let's generate some testing data to see if this function works. I chose to simulate data, so I know there is a relationship between some of the independent variables and the target variable. I used scikit-learn's make_regression function to generate 1,000 observations, with 20 independent variables, three-quarters of which actually contain useful information. I then split the data into training and testing data sets.
###Code
from sklearn.datasets import make_regression
X, y = make_regression(n_samples=1000,
n_features=20,
n_informative=15,
n_targets=1,
bias=0.0,
noise=20,
shuffle=True,
random_state=13)
X_train = X[0:int(len(X) / 2)]
y_train = y[0:int(len(X) / 2)]
X_test = X[int(len(X) / 2):]
y_test = y[int(len(X) / 2):]
###Output
_____no_output_____
###Markdown
When using gradient boosting, too few iterations leads to an underfit model and too many iterations leads to an overfitted one. Before we make predictions, it is helpful to see how the training mean square error evolves with more boosting iterations, so we can try to fit a well calibrated model. When actually attempting to solve a real problem, you would determine the number of boosting rounds, as well as other parameters, using a grid search and k-fold cross validation. I did something a little simpler, for ease of demonstration: I just plotted the training Mean Square Error vs. number of boosting rounds for 5 to 100 rounds, incrementing by 5, and chose a semi-arbitrary cut-off. The regression trees in this implementation have a depth of 3 and the loss function is specified as the regularized mean square error. The learning rate is set to 0.1. The ridge regressions choose the best l2-norm penalty coefficient at each step among {0.01, 0.1, 1, 10} using 3-fold cross validation.
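For reference, here is a minimal sketch of what that more rigorous tuning could look like with the GradBoost function and the `tree_model` defined in the next cell. The fold count, the candidate grid, and the use of squared error are my own assumptions for illustration, not part of the original analysis:

```python
from sklearn.model_selection import KFold

candidate_rounds = [25, 50, 100, 200]   # assumed grid of boosting-round counts
cv_mse = {}
for n_round in candidate_rounds:
    fold_mse = []
    for tr_idx, va_idx in KFold(n_splits=5, shuffle=True, random_state=13).split(X_train):
        # the held-out fold plays the role of "X_test" for this fit
        _, y_hat_va = GradBoost(tree_model,
                                X_train[va_idx],
                                X_train[tr_idx],
                                y_train[tr_idx],
                                boosting_rounds=n_round,
                                learning_rate=0.1,
                                verbose=False)
        fold_mse.append(np.mean((y_train[va_idx] - y_hat_va) ** 2))
    cv_mse[n_round] = np.mean(fold_mse)
print(cv_mse)   # pick the round count with the lowest cross-validated MSE
```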
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import RidgeCV
tree_model = DecisionTreeRegressor(criterion='mse',
max_depth=3)
ridge_model = RidgeCV(alphas=(0.01, 0.1, 1.0, 10.0),
fit_intercept=True,
normalize=True,
cv=3)
###
# Plot the training mean squared error vs. number of boosting rounds by looping through various
# numbers of boosting rounds, calculating the training mean squared error each round and
# appending it to a list.
###
from tqdm import tqdm_notebook as tqdm
tree_mse_train = []
n_rounds = np.arange(5, 101, 5)
for n_round in tqdm(n_rounds):
y_hat_train = GradBoost(tree_model,
X_test,
X_train,
y_train,
boosting_rounds=n_round,
learning_rate=0.1,
verbose=False)[0]
tree_mse_train.append(np.mean((y_train - y_hat_train) ** 2))
ridge_mse_train = []
for n_round in tqdm(n_rounds):
y_hat_train = GradBoost(ridge_model,
X_test,
X_train,
y_train,
boosting_rounds=n_round,
learning_rate=0.1,
verbose=False)[0]
ridge_mse_train.append(np.mean((y_train - y_hat_train) ** 2))
# sets the plot size to 20x8
plt.rcParams['figure.figsize'] = (20,8)
plt.subplot(1, 2, 1)
plt.plot(n_rounds, tree_mse_train)
plt.title('Training MSE vs. Boosting Rounds for Tree Model', fontsize=20)
plt.xlabel('Number of Boosting Rounds', fontsize=15)
plt.ylabel('Training Mean Squared Error', fontsize=15)
plt.show;
# sets the plot size to 20x8
plt.rcParams['figure.figsize'] = (20,8)
plt.subplot(1, 2, 2)
plt.plot(n_rounds, ridge_mse_train)
plt.title('Training MSE vs. Boosting Rounds for Ridge Regression', fontsize=20)
plt.xlabel('Number of Boosting Rounds', fontsize=15)
plt.ylabel('Training Mean Squared Error', fontsize=15)
plt.show;
###Output
_____no_output_____
###Markdown
When using the trees as our weak learner, the mean squared error falls quickly until it reaches an inflection point around 30 boosting iterations. After that point, it gently slopes down as it approaches 100 rounds. When using the ridge regression model, the training mean square error plunges until 20 rounds and it really levels off after 30. We'll use 100 as our number of boosting iterations using trees and 30 for the ridge regression boosting model. But first, let's look at the results from 0 and 10 boosting iterations.
###Code
fig=plt.figure(figsize=(20, 20), dpi= 80, facecolor='w', edgecolor='k')
n_rounds = 0
y_hat_train, y_hat_test = GradBoost(tree_model,
X_test,
X_train,
y_train,
boosting_rounds=n_rounds,
learning_rate=0.1,
verbose=False)
plt.subplot(321)
plt.scatter(y_train, y_hat_train)
plt.title('Actual vs. Predicted on Training Data (Trees)', fontsize=20)
plt.xlabel('Actual Values', fontsize=15)
plt.ylabel('Predictions', fontsize=15)
plt.plot(y_train, y_train, color='r')
plt.subplot(322)
plt.scatter(y_test, y_hat_test)
plt.title('Actual vs. Predicted on Testing Data (Trees)', fontsize=20)
plt.xlabel('Actual Values', fontsize=15)
plt.ylabel('Predictions', fontsize=15)
plt.plot(y_train, y_train, color='r')
plt.show;
y_hat_train, y_hat_test = GradBoost(ridge_model,
X_test,
X_train,
y_train,
boosting_rounds=n_rounds,
learning_rate=0.1,
verbose=False)
# sets the plot size to 20x20
plt.rcParams['figure.figsize'] = (20,20)
plt.subplot(323)
plt.scatter(y_train, y_hat_train)
plt.title('Actual vs. Predicted on Training Data (Ridge)', fontsize=20)
plt.xlabel('Actual Values', fontsize=15)
plt.ylabel('Predictions', fontsize=15)
plt.plot(y_train, y_train, color='r')
plt.subplot(324)
plt.scatter(y_test, y_hat_test)
plt.title('Actual vs. Predicted on Testing Data (Ridge)', fontsize=20)
plt.xlabel('Actual Values', fontsize=15)
plt.ylabel('Predictions', fontsize=15)
plt.plot(y_train, y_train, color='r')
plt.show;
###Output
_____no_output_____
###Markdown
In the plots above, the red line is where the predicted values equal the actual values. The closer to that line for the testing data, the more accurate our model. If the training predictions are a tight fit, but the testing predictions are all over the place, the model is overfit to the training data. With 0 boosting rounds, we are just guessing the average. As you would expect, this naive "model" underfits the data. Let's see how things evolve after 10 boosting rounds.
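(A quick numeric check of that train/test gap, added here for illustration: the two lines below can be rerun after any of the fits in this notebook; at this point they simply score the zero-round mean guess.)

```python
print(f'train MSE: {np.mean((y_train - y_hat_train) ** 2):.1f}')
print(f'test  MSE: {np.mean((y_test  - y_hat_test)  ** 2):.1f}')
```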
###Code
fig=plt.figure(figsize=(20, 20), dpi= 80, facecolor='w', edgecolor='k')
n_rounds = 10
y_hat_train, y_hat_test = GradBoost(tree_model,
X_test,
X_train,
y_train,
boosting_rounds=n_rounds,
learning_rate=0.1,
verbose=False)
plt.subplot(321)
plt.scatter(y_train, y_hat_train)
plt.title('Actual vs. Predicted on Training Data (Trees)', fontsize=20)
plt.xlabel('Actual Values', fontsize=15)
plt.ylabel('Predictions', fontsize=15)
plt.plot(y_train, y_train, color='r')
plt.subplot(322)
plt.scatter(y_test, y_hat_test)
plt.title('Actual vs. Predicted on Testing Data (Trees)', fontsize=20)
plt.xlabel('Actual Values', fontsize=15)
plt.ylabel('Predictions', fontsize=15)
plt.plot(y_train, y_train, color='r')
plt.show;
y_hat_train, y_hat_test = GradBoost(ridge_model,
X_test,
X_train,
y_train,
boosting_rounds=n_rounds,
learning_rate=0.1,
verbose=False)
# sets the plot size to 20x20
plt.rcParams['figure.figsize'] = (20,20)
plt.subplot(323)
plt.scatter(y_train, y_hat_train)
plt.title('Actual vs. Predicted on Training Data (Ridge)', fontsize=20)
plt.xlabel('Actual Values', fontsize=15)
plt.ylabel('Predictions', fontsize=15)
plt.plot(y_train, y_train, color='r')
plt.subplot(324)
plt.scatter(y_test, y_hat_test)
plt.title('Actual vs. Predicted on Testing Data (Ridge)', fontsize=20)
plt.xlabel('Actual Values', fontsize=15)
plt.ylabel('Predictions', fontsize=15)
plt.plot(y_train, y_train, color='r')
plt.show;
###Output
_____no_output_____
###Markdown
At 10 rounds, the predictions from each model are starting to rotate towards the perfect prediction line. This process is sped up with higher learning rates, but it is possible to overshoot the mark. The cloud of predictions is much tighter for the model using ridge regression as the weak learner. This is probably because the data we generated contains linear relationships and ridge regression is a linear model. Now I think it is time to make our final predictions and compare.
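(Before the final runs, if you want to see that overshoot risk for yourself, one purely illustrative option, with 0.5 as an arbitrary choice, is to rerun the same 10-round tree model with a larger learning rate and compare the scatter plots:)

```python
y_hat_train_fast, y_hat_test_fast = GradBoost(tree_model, X_test, X_train, y_train,
                                              boosting_rounds=10, learning_rate=0.5,
                                              verbose=False)
```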
###Code
fig=plt.figure(figsize=(20, 20), dpi= 80, facecolor='w', edgecolor='k')
y_hat_train_tree, y_hat_test_tree = GradBoost(tree_model,
X_test,
X_train,
y_train,
boosting_rounds=100,
learning_rate=0.1,
verbose=True)
plt.subplot(321)
plt.scatter(y_train, y_hat_train_tree)
plt.title('Actual vs. Predicted on Training Data (Trees)', fontsize=20)
plt.xlabel('Actual Values', fontsize=15)
plt.ylabel('Predictions', fontsize=15)
plt.plot(y_train, y_train, color='r')
plt.subplot(322)
plt.scatter(y_test, y_hat_test_tree)
plt.title('Actual vs. Predicted on Testing Data (Trees)', fontsize=20)
plt.xlabel('Actual Values', fontsize=15)
plt.ylabel('Predictions', fontsize=15)
plt.plot(y_train, y_train, color='r')
plt.show;
y_hat_train_ridge, y_hat_test_ridge = GradBoost(ridge_model,
X_test,
X_train,
y_train,
boosting_rounds=30,
learning_rate=0.1,
verbose=True)
# sets the plot size to 20x20
plt.rcParams['figure.figsize'] = (20,20)
plt.subplot(323)
plt.scatter(y_train, y_hat_train_ridge)
plt.title('Actual vs. Predicted on Training Data (Ridge)', fontsize=20)
plt.xlabel('Actual Values', fontsize=15)
plt.ylabel('Predictions', fontsize=15)
plt.plot(y_train, y_train, color='r')
plt.subplot(324)
plt.scatter(y_test, y_hat_test_ridge)
plt.title('Actual vs. Predicted on Testing Data (Ridge)', fontsize=20)
plt.xlabel('Actual Values', fontsize=15)
plt.ylabel('Predictions', fontsize=15)
plt.plot(y_train, y_train, color='r')
plt.show;
###Output
100%|██████████| 100/100 [00:00<00:00, 210.32it/s]
100%|██████████| 30/30 [00:01<00:00, 23.52it/s]
###Markdown
At 100 iterations, our boosted trees model is decent, but it performs worse the farther the actual values are from the mean of the target variable. Still, it is a much better fit than the 10-round model. Our model with ridge regression as the weak learner is a much better fit. Again, this is likely due to the linear structure in the simulated data. Speaking of linear structure, let's check how these models fare against a simple multiple linear regression.
###Code
from sklearn.linear_model import LinearRegression
ols_model = LinearRegression().fit(X_train, y_train)
y_hat_train_ols, y_hat_test_ols = ols_model.predict(X_train), ols_model.predict(X_test)
# sets the plot size to 20x20
plt.rcParams['figure.figsize'] = (20,20)
plt.subplot(323)
plt.scatter(y_train, y_hat_train_ols)
plt.title('Actual vs. Predicted on Training Data (OLS)', fontsize=20)
plt.xlabel('Actual Values', fontsize=15)
plt.ylabel('Predictions', fontsize=15)
plt.plot(y_train, y_train, color='r')
plt.subplot(324)
plt.scatter(y_test, y_hat_test_ols)
plt.title('Actual vs. Predicted on Testing Data (OLS)', fontsize=20)
plt.xlabel('Actual Values', fontsize=15)
plt.ylabel('Predictions', fontsize=15)
plt.plot(y_train, y_train, color='r')
plt.show;
print(f'Gradient Boosted Trees MSE on Testing Data: {np.mean((y_test - y_hat_test_tree)**2)}')
print(f'Gradient Boosted Ridge Regression MSE on Testing Data: {np.mean((y_test - y_hat_test_ridge)**2)}')
print(f'Ordinary Least Squares MSE on Testing Data: {np.mean((y_test - y_hat_test_ols)**2)}')
###Output
Gradient Boosted Trees MSE on Testing Data: 8002.433204404426
Gradient Boosted Ridge Regression MSE on Testing Data: 538.6635459552464
Ordinary Least Squares MSE on Testing Data: 433.8959640828043
###Markdown
The multiple linear regression wins. Let this be a reminder to visualize and investigate your data. Always think carefully about the data generating process that shapes its structure before throwing algorithms at problems. Or go crazy. It's your life. Let's see what would happen with more boosting rounds, say 1000 for the Boosted Trees and 100 for the Boosted Ridge Regression.
###Code
fig=plt.figure(figsize=(20, 20), dpi= 80, facecolor='w', edgecolor='k')
y_hat_train_tree, y_hat_test_tree = GradBoost(tree_model,
X_test,
X_train,
y_train,
boosting_rounds=1000,
learning_rate=0.1,
verbose=True)
plt.subplot(321)
plt.scatter(y_train, y_hat_train_tree)
plt.title('Actual vs. Predicted on Training Data (Trees)', fontsize=20)
plt.xlabel('Actual Values', fontsize=15)
plt.ylabel('Predictions', fontsize=15)
plt.plot(y_train, y_train, color='r')
plt.subplot(322)
plt.scatter(y_test, y_hat_test_tree)
plt.title('Actual vs. Predicted on Testing Data (Trees)', fontsize=20)
plt.xlabel('Actual Values', fontsize=15)
plt.ylabel('Predictions', fontsize=15)
plt.plot(y_train, y_train, color='r')
plt.show;
y_hat_train_ridge, y_hat_test_ridge = GradBoost(ridge_model,
X_test,
X_train,
y_train,
boosting_rounds=100,
learning_rate=0.1,
verbose=True)
# sets the plot size to 20x20
plt.rcParams['figure.figsize'] = (20,20)
plt.subplot(323)
plt.scatter(y_train, y_hat_train_ridge)
plt.title('Actual vs. Predicted on Training Data (Ridge)', fontsize=20)
plt.xlabel('Actual Values', fontsize=15)
plt.ylabel('Predictions', fontsize=15)
plt.plot(y_train, y_train, color='r')
plt.subplot(324)
plt.scatter(y_test, y_hat_test_ridge)
plt.title('Actual vs. Predicted on Testing Data (Ridge)', fontsize=20)
plt.xlabel('Actual Values', fontsize=15)
plt.ylabel('Predictions', fontsize=15)
plt.plot(y_train, y_train, color='r')
plt.show;
###Output
100%|██████████| 1000/1000 [00:04<00:00, 203.81it/s]
100%|██████████| 100/100 [00:04<00:00, 25.04it/s]
###Markdown
The Boosted Trees have clearly overfit the training data, but the results from the Boosted Ridge Regression are pretty good. Again, the relationships in the data are linear. Let's look at the new mean square errors.
###Code
print(f'Gradient Boosted Trees MSE on Testing Data: {np.mean((y_test - y_hat_test_tree)**2)}')
print(f'Gradient Boosted Ridge Regression MSE on Testing Data: {np.mean((y_test - y_hat_test_ridge)**2)}')
print(f'Ordinary Least Squares MSE on Testing Data: {np.mean((y_test - y_hat_test_ols)**2)}')
###Output
Gradient Boosted Trees MSE on Testing Data: 6193.426323669114
Gradient Boosted Ridge Regression MSE on Testing Data: 442.29215365523663
Ordinary Least Squares MSE on Testing Data: 433.8959640828043
|
Classification_and_Regression/Exercise Files/Ch04/04_notebook.ipynb | ###Markdown
Line Graphs Helper functions
###Code
# %load ../helper_funcs/get_df.py
import pandas as pd               # assumed imports, not shown in this excerpt
import matplotlib.pyplot as plt   # (pd and plt are used throughout the notebook)
def get_df(yr):
    return pd.read_csv("../../inputs/Environmental_Data_Deep_Moor_{}.csv".format(yr))
# %load ../helper_funcs/line_helpers.py
def monthly_avg_calc(mo,col):
    # trailing '_' keeps month 1 from also matching months 10-12 (assumes dates like '2014_01_15')
    return df[df['date'].str.contains('201[2345]_[0]?'+ str(mo) + '_')][col].mean()
def yearly_avg(category):
    return list(map(lambda m: monthly_avg_calc(m, category),range(1,13)))
###Output
_____no_output_____
###Markdown
Read date into the dataframe
###Code
df = get_df('2014')
###Output
_____no_output_____
###Markdown
Plot Wind Speed
###Code
plt.plot(yearly_avg('Wind_Speed'))
plt.show()
###Output
_____no_output_____
###Markdown
Plot Wind Speed, Wind Gust, Dew Point, and Barometric Pressure
###Code
plt.plot(yearly_avg('Wind_Speed'))
plt.plot(yearly_avg('Wind_Gust'))
plt.plot(yearly_avg('Dew_Point'))
plt.plot(yearly_avg('Barometric_Press'))
plt.show()
###Output
_____no_output_____
###Markdown
Make a few adjustments - Add labels- Add legend- Add colors- Add format string
###Code
plt.plot(yearly_avg('Wind_Speed'),label='Wind Speed',color="#2BD31444")
plt.plot(yearly_avg('Wind_Gust'),'d-',label='Wind Gust',color="#2BD314FF")
plt.plot(yearly_avg('Dew_Point'),'d-',label='Dew Point',color='#142BD3FF')
plt.plot(yearly_avg('Barometric_Press'),':',label='Barometric Pressure',color="#142BD366")
plt.ylim(0,80)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Alternate Styling Options A slightly cleaner format
###Code
plt.plot(yearly_avg('Wind_Speed'),label='Wind Speed',color="#2BD31444")
plt.plot(yearly_avg('Wind_Gust'),'d-',label='Wind Gust',color="#2BD314FF")
plt.plot(yearly_avg('Dew_Point'),'d-',label='Dew Point',color='#142BD3FF')
plt.plot(yearly_avg('Barometric_Press'),',:',label='Barometric Pressure',color="#142BD366")
plt.ylim(0,70)
plt.title('Yearly trends')
plt.axis('off')
plt.text(8,60, 'Dew Point')
plt.text(6,20, 'Wind Gust')
plt.show()
###Output
_____no_output_____ |
week3/sets_HW.ipynb | ###Markdown
**IMPORTANT:** When submitting this homework notebook, please modify only the cells that start with:

```python
# modify this cell
```

*Note:* No packages should be imported for this assignment.

Sets: The Inclusion-Exclusion Principle

Problem 1

Problem 1.1

The inclusion-exclusion principle states that for two sets $A$ and $B$, $$|A\cup B|=|A|+|B|-|A\cap B|.$$ Write the following functions to determine $|A\cup B|$ in two different ways. A function **union** that first determines $A\cup B$ and then evaluates the union's size. Output the ordered pair $(A\cup B, |A\cup B|)$.

* **Sample run:** *
```python
A = {1, 2, 3}
B = {3, -6, 2, 0}
print(union(A, B))
```

* **Expected Output:** *
```
({-6, 0, 1, 2, 3}, 5)
```
###Code
# modify this cell
def union(A, B):
# inputs: A and B are of type 'set'
# output: a tuple of the type (set, set_length)
#
# YOUR CODE HERE
#
a_b_u = A | B
return a_b_u, len(a_b_u)
A = {1,4,-3, "bob"}
B = {2,1,-3,"jill"}
assert union(A,B) == ({-3, 1, 2, 4, 'bob', 'jill'}, 6)
#
# AUTOGRADER TEST - DO NOT REMOVE
#
###Output
_____no_output_____
###Markdown
Problem 1.2 A function **inclusion_exclusion** that first determines $|A|$, $|B|$, $A\cap B$, and $|A\cap B|$, and then uses the inclusion-exclusion formula to determine $|A\cup B|$. Output the tuple $(|A|, |B|, |A\cap B|, |A\cup B|)$. * **Sample run:** *```pythonA = {1, 2, 3}B = {3, -6, 2, 0}print inclusion_exclusion(A, B)print "notice: 3 + 4 - 2 == 5"``` * **Expected Output:** *```(3, 4, 2, 5)notice: 3 + 4 - 2 == 5```
###Code
# modify this cell
def inclusion_exclusion(A, B):
# inputs: A and B are of type 'set'
# output: a tuple of four integers
#
# YOUR CODE HERE
#
return len(A), len(B), len(A & B), len(A) + len(B) - len(A & B)
# Check Function
A = {1, 2, 3, 4, 5}
B = {0, 2, -6, 5, 8, 9}
assert inclusion_exclusion(A, B) == (5, 6, 2, 9)
#
# AUTOGRADER TEST - DO NOT REMOVE
#
###Output
_____no_output_____
###Markdown
Problem 2 The inclusion-exclusion principle says that for three sets $A$, $B$ and $C$, $$|A\cup B\cup C|=|A|+|B|+|C|-|A\cap B|-|B\cap C|-|C\cap A|+|A\cap B\cap C|$$We will write the following functions to determine $|A\cup B\cup C|$ in two different ways. Problem 2.1 Write function **union3** that first determines $A\cup B\cup C$ and then evaluates the size of this union.Output the tuple $(A\cup B\cup C, |A\cup B\cup C|)$. * **Sample run:** *```pythonA = {1, 2, 3, 4, 5}B = {0, 2, -6, 5, 8, 9}C = {2, 10}union3(A, B, C)``` * **Expected Output:** *```({-6, 0, 1, 2, 3, 4, 5, 8, 9, 10}, 10)```
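For example, with the sample sets above we have $|A|=5$, $|B|=6$, $|C|=2$, $|A\cap B|=2$, $|B\cap C|=1$, $|C\cap A|=1$ and $|A\cap B\cap C|=1$, so the formula gives $$|A\cup B\cup C| = 5 + 6 + 2 - 2 - 1 - 1 + 1 = 10,$$ which matches the size returned by `union3(A, B, C)` in the expected output.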
###Code
# modify this cell
def union3(A, B, C):
# inputs: A, B and C are of type 'set'
# output: a tuple of the type (set, set_length)
#
# YOUR CODE HERE
#
union = A | B | C
return union, len(union)
# check Function
A = {1, 2, 4, 5, 10}
B = {5, 2, -6, 5, 8, 9}
C = {2, 10, 13}
assert union3(A,B,C) == ({-6, 1, 2, 4, 5, 8, 9, 10, 13}, 9)
#
# AUTOGRADER TEST - DO NOT REMOVE
#
###Output
_____no_output_____
###Markdown
Problem 2.2 A function **inclusion_exclusion3** that first determines the sizes of $A$, $B$, $C$ and their mutual intersections, and then uses the inclusion-exclusion principle to determine the size of the union. Output the tuple $(|A\cap B\cap C|, |A\cup B\cup C|)$. Note that for brevity we are asking you to output the intermediate answer just for $A\cap B\cap C$, but you need to calculate all. * **Sample run:** *```pythonA = {1, 2, 3, 4, 5}B = {0, 2, -6, 5, 8, 9}C = {2, 10}print inclusion_exclusion3(A, B, C)``` * **Expected Output:** *```(1, 10)```
###Code
# modify this cell
def inclusion_exclusion3(A, B, C):
# inputs: A, B and C are of type 'set'
# output: a tuple of two integers
#
# YOUR CODE HERE
#
len_union = len(A) + len(B) + len(C) + len(A & B & C) - len(A & B) - len(B & C) - len(C & A)
return len(A & B & C), len_union
# Check Function
A = {1, 2, 4, 5, 10}
B = {5, 2, -6, 5, 8, 9, 10}
C = {2, 10, 13}
assert inclusion_exclusion3(A,B,C) == (2, 9)
#
# AUTOGRADER TEST - DO NOT REMOVE
#
A = {-2, 0, 5, 'bob'}
B = {-2, 0, 5, 10, 'jill'}
inclusion_exclusion(A, B)
###Output
_____no_output_____ |
USGS_NED_one_meter/USGS NED DEM 1m Fractality for San Diego, California, US.ipynb | ###Markdown
MIT LicenseCopyright (c) 2019 Alexey Pechnikov, https://orcid.org/0000-0001-9626-8615 (ORCID)Source dataset: https://prd-tnm.s3.amazonaws.com/StagedProducts/Elevation/1m/IMG/USGS_NED_one_meter_x57y365_CA_E_SanDiegoCo_2016_IMG_2017.zip
###Code
import numpy as np
import xarray as xr
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
%matplotlib inline
###Output
_____no_output_____
###Markdown
Define helper functions
###Code
from geomed3dv4 import *
# select work area
def crop_area(raster):
return raster.sel(x=slice(575500-500,576000),y=slice(3642500+500,3642000))
def plot_fractality(ax, data):
from scipy.stats import linregress
import numpy as np
import matplotlib.ticker as ticker
ax.loglog(data.r, data, base=2, label='Calculated')
ax.set_xlabel('Wavelength, m', fontsize=18)
ax.axes.get_yaxis().set_visible(False)
ax.xaxis.set_major_formatter(ticker.FuncFormatter(lambda y, _: '{:g}'.format(y)))
res = linregress(np.log2(data.r), np.log2(data))
ax.plot(data.r, 2**(res.intercept + res.slope*np.log2(data.r)), 'r', label=f'Fitted R²={res.rvalue**2:.2f}', ls='--')
ax.legend(fontsize=18)
fractality = 1000*np.round((3 - (res.slope/2)),1)
return fractality
###Output
_____no_output_____
###Markdown
Load DEM
###Code
!wget -c https://prd-tnm.s3.amazonaws.com/StagedProducts/Elevation/1m/IMG/USGS_NED_one_meter_x57y365_CA_E_SanDiegoCo_2016_IMG_2017.zip
!unzip -p \
USGS_NED_one_meter_x57y365_CA_E_SanDiegoCo_2016_IMG_2017.zip \
USGS_NED_one_meter_x57y365_CA_E_SanDiegoCo_2016_IMG_2017.img > \
USGS_NED_one_meter_x57y365_CA_E_SanDiegoCo_2016_IMG_2017.img
# DEM image
dem = crop_area(xr.open_rasterio("USGS_NED_one_meter_x57y365_CA_E_SanDiegoCo_2016_IMG_2017.img")[0])
###Output
_____no_output_____
###Markdown
Calculate spatial spectrum components
###Code
# check spatial components, [m]
gammas = np.arange(2,128)
dem_power = xr.DataArray([raster_gamma_range(dem, g-1, g+1, backward=True).std() for g in gammas],
coords=[gammas],
dims=['r'])
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 6))
dem.plot(ax=ax1)
ax1.ticklabel_format(useOffset=False, style='plain')
ax1.set_title('USGS NED DEM 1m', fontsize=18)
ax1.get_yaxis().set_major_formatter(ticker.FuncFormatter(lambda x, p: int(x/1000)))
ax1.get_xaxis().set_major_formatter(ticker.FuncFormatter(lambda x, p: int(x/1000)))
ax1.set_ylabel('Y, km', fontsize=18)
ax1.set_xlabel('X, km', fontsize=18)
dem_fractality = plot_fractality(ax2, dem_power)
ax2.set_title('Fractality Index', fontsize=18)
ax2.axvline(x=90, ymin=0, ymax=1, color = 'black', ls='--', alpha=1)
plt.suptitle(f"USGS_NED_one_meter - San Diego, California, US\nFractality Density ρ={dem_fractality:.0f} kg/m³", fontsize=22)
fig.tight_layout(rect=[0.03, 0.03, .97, 0.97])
plt.savefig('USGS_NED_one_meter - San Diego, California, US.jpg', dpi=150)
plt.show()
###Output
_____no_output_____ |
Analysis of melanoma single cell data.ipynb | ###Markdown
Experimental designIsolate the imputed CD45+ cells from the melanoma dataset Perform bottom end filtering Build TSNE plot Highlight expression of immune genes based on surface markers and define cell types Relate immune profile to mutational profile Identify markers for cell types based on differential expression Using those markers subgroup TCGA melanoma samples Check for response to therapy in datasets where response is known Load Packages
###Code
#General packages
import pandas as pd
import numpy as np
import os
import time
from sklearn.manifold import TSNE
# Plotting and miscellaneous imports
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load Data
###Code
# Set Working directory
# Laptop WD
WD = '/Users/Ajit/Dropbox/pythonprojects/melanoma/'
os.chdir(WD)
# Import data
exp = pd.read_csv('exp_immune.csv', index_col=0)
###Output
_____no_output_____
###Markdown
Remove zero rows
###Code
# Remove genes with zero in all cells
nonzero = exp.join(pd.DataFrame(exp.sum(axis=1)))
nonzero = nonzero[(nonzero[0] > 0)]
nonzero = nonzero.drop([0], axis=1)
###Output
_____no_output_____
###Markdown
TSNE
###Code
n_sne = 10  # number of rows to embed; increase for a full run
time_start = time.time()
# rndperm was never defined in this notebook, so build a random permutation of the row positions
rndperm = np.random.permutation(nonzero.shape[0])
# perplexity must be smaller than the number of samples passed to TSNE
tsne = TSNE(n_components=2, verbose=1, perplexity=min(40, n_sne - 1), n_iter=300)
tsne_results = tsne.fit_transform(nonzero.iloc[rndperm[:n_sne], :].values)
print('t-SNE finished in {:.2f} seconds'.format(time.time() - time_start))
###Output
_____no_output_____ |
examples/notebooks/Issue-sim.ipynb | ###Markdown
API
###Code
enum class Severity(val penalty: Double){
MINOR(1.0),
MAJOR(2.0),
CRITICAL(3.0)
}
enum class State{
OPEN,
ASSIGNED,
RESOLVED
}
data class Issue(val id: String, val dayCreated: Int, val severity: Severity, val complexity: Int,
var state: State = State.OPEN, var dayAssigned: Int? = null, var dayResolved: Int? = null){
fun activate(day: Int){
state = State.ASSIGNED
dayAssigned = day
}
fun resolve(day: Int){
state = State.RESOLVED
dayResolved = day
}
internal fun tryResolve(day: Int){
if(state == State.ASSIGNED && day >= (dayAssigned ?: 0) + complexity ){
resolve(day)
}
}
}
class Worker(val name: String){
var currentIssue: Issue? = null
private set
fun isBusy(): Boolean = currentIssue != null
fun update(day: Int){
currentIssue?.tryResolve(day)
if(currentIssue?.state == State.RESOLVED){
currentIssue = null
}
}
fun assign(day: Int, issue: Issue){
if(currentIssue != null) error("Can't assign work to a worker which is busy")
issue.activate(day)
currentIssue = issue
}
}
interface IssueGenerator{
fun generate(day: Int): List<Issue>
}
interface Strategy{
fun selectIssue(day: Int, issues: List<Issue>): Issue?
}
class WorkResult(val issues: List<Issue>, val workers: Int, val days: Int)
@OptIn(kotlin.ExperimentalStdlibApi::class)
fun simulate(generator: IssueGenerator, strategy: Strategy, numWorkers: Int = 10, days: Int = 100): WorkResult{
val workers = (0 until numWorkers).map{Worker("worker $it")}
val issues = buildList<Issue>{
for(day in 0 until days){
//update all workers
workers.forEach { it.update(day) }
//generate new issues
val newIssues = generator.generate(day)
addAll(newIssues)
//Select all free workers
workers.filter { !it.isBusy() }.forEach { worker->
val unasigned = filter { it.state == State.OPEN }
val anIssue = strategy.selectIssue(day, unasigned) //select an issue to assign from all unassigned issues
if(anIssue != null){
worker.assign(day, anIssue)
}
}
}
}
return WorkResult(issues, numWorkers, days)
}
fun WorkResult.computeLoss(): Double = issues.sumByDouble { ((it.dayResolved ?: days) - it.dayCreated)*it.severity.penalty } / days / workers / issues.size
###Output
_____no_output_____
###Markdown
Implementations
###Code
import kotlin.random.Random
import kotlin.math.pow
/**
 * Generate `issuesPerDay` random issues per day
*/
class RandomIssueGenerator(seed: Long, val issuesPerDay: Int = 4 ) : IssueGenerator{
private val random = Random(seed)
override fun generate(day: Int): List<Issue>{
return List(issuesPerDay){
val severity = Severity.values()[random.nextInt(3)]
val complexity = random.nextInt(15)
Issue("${day}_${it}", day, severity, complexity)
}
}
}
object TakeOldest: Strategy{
override fun selectIssue(day: Int, issues: List<Issue>): Issue?{
return issues.minByOrNull { it.dayCreated }
}
}
class TakeRandom(seed: Long): Strategy{
private val random = Random(seed)
override fun selectIssue(day: Int, issues: List<Issue>): Issue?{
if(issues.isEmpty()) return null
return issues.random(random)
}
}
object TakeCritical: Strategy{
override fun selectIssue(day: Int, issues: List<Issue>): Issue?{
return issues.maxByOrNull { it.severity.penalty*(day - it.dayCreated) }
}
}
###Output
_____no_output_____
###Markdown
Simulate loss
###Code
val seed = 89L
val days = 100
val workers = 10
###Output
_____no_output_____
###Markdown
Take oldest
###Code
val result = simulate(RandomIssueGenerator(seed, workers),TakeOldest, days = days)
//result.issues.forEach { println(it)}
result.computeLoss()
###Output
_____no_output_____
###Markdown
Take random
###Code
simulate(RandomIssueGenerator(seed, workers),TakeRandom(seed), days = days).computeLoss()
###Output
_____no_output_____
###Markdown
Take critical
###Code
simulate(RandomIssueGenerator(seed, workers), TakeCritical, days = days).computeLoss()
val seeds = List(1000){Random.nextLong()}
Plotly.plot{
trace{
x.numbers = seeds.map{ seed -> simulate(RandomIssueGenerator(seed, workers), TakeOldest, days = days).computeLoss()}
name = "oldest"
type = TraceType.histogram
}
trace{
x.numbers = seeds.map{ seed -> simulate(RandomIssueGenerator(seed, workers), TakeRandom(seed), days = days).computeLoss()}
name = "random"
type = TraceType.histogram
}
trace{
x.numbers = seeds.map{ seed -> simulate(RandomIssueGenerator(seed, workers), TakeCritical, days = days).computeLoss()}
name = "critical"
type = TraceType.histogram
}
layout{
title = "Loss distribtution"
xaxis {
title = "Loss"
}
}
}
###Output
_____no_output_____ |
polire/interpolate/kriging/tests/Kriging Interpolation.ipynb | ###Markdown
* What does ok('points') really do?* Specifically, test the case where points aren't really passed, but are, say, the points of an array* Returns the diagonal matrix of all these coordinates
###Code
ordinary_kriging(data,method='points')
def make_grid(X,y,res):
y_min = y.min()-0.2
y_max = y.max()+0.2
x_min = X.min()-0.2
x_max = X.max()+0.2
x_arr = np.linspace(x_min,x_max,res)
y_arr = np.linspace(y_min,y_max,res)
xx,yy = np.meshgrid(x_arr,y_arr)
return xx,yy
x, y = make_grid(data[:,0],data[:,1],100)
###Output
_____no_output_____ |
1219-local DPP_pos.ipynb | ###Markdown
Use the operations below to obtain the feature values for the positive samples. The feature_Neg.csv file has 525 rows (the number of samples) and 120 columns (the feature values), one row for each positive sample.
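For reference, here is a compact NumPy sketch of the same block statistics computed by the loop in the next cell; the helper name `local_dpp_features` is just a placeholder added here for illustration, and the loop below is what actually writes the CSV files.

```python
import numpy as np

def local_dpp_features(pssm, n=3):
    # Split the L x 20 PSSM into n blocks: n-1 blocks of length L//n plus a final block with the remainder
    pssm = np.asarray(pssm, dtype=float)
    L1 = len(pssm) // n
    bounds = [(k * L1, (k + 1) * L1) for k in range(n - 1)] + [((n - 1) * L1, len(pssm))]
    means, msds = [], []
    for start, stop in bounds:
        block = pssm[start:stop]
        means.append(block.mean(axis=0))                          # 20 column averages per block
        msds.append((np.diff(block, axis=0) ** 2).mean(axis=0))   # mean squared successive difference per column
    # order matches the loop below: feature1, feature2, feature1_n, feature2_n -> 120 values for n=3
    return np.concatenate(means[:-1] + msds[:-1] + [means[-1], msds[-1]])
```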
###Code
n=3
features=[]
for pssm in load_data['train_pssm'][0]:
# get the submatrix
L = len(pssm)
length=int(L/n)
# the length of L1 and L2
if(L%n==0):
L1=L2=length
else:
L1=length
L2=L-L1*(n-1)
print('L,L1,L2',L,L1,L2)
# get the first block of feautre F
feature1=[]
for k in range(1,n):
temp_fea=[]
for j in range(0,20):
temp=0
for i in range((k-1)*L1,k*L1):
temp+=pssm[i][j]
temp_=temp/L1
temp_fea.append(temp_)
feature1.extend(temp_fea)
#print('feature1......',feature1)
print('len(feature1):',len(feature1)) #40
with open("feature/pos/feature1_Pos.csv","a+",newline='') as csvfile:
writer = csv.writer(csvfile)
writer.writerow(feature1)
feature2=[]
for k in range(1,n):
temp_fea=[]
for j in range(0,20):
temp=0
for i in range((k-1)*L1,k*L1-1):
temp+=(pssm[i][j]-pssm[i+1][j])**2
temp_=temp/(L1-1)
temp_fea.append(temp_)
feature2.extend(temp_fea)
#print('feature2......',feature2)
print('len(feature2):',len(feature2)) #40
with open("feature/pos/feature2_Pos.csv","a+",newline='') as csvfile:
writer = csv.writer(csvfile)
writer.writerow(feature2)
feature1_n=[]
for j in range(0,20):
temp=0
for i in range((n-1)*L1,(n-1)*L1+L2):
temp+=pssm[i][j]
temp_=temp/L2
feature1_n.append(temp_)
#print ('feature1_n......',feature1_n)
print ('len(feature1_n):',len(feature1_n)) #20
with open("feature/pos/feature1_n_Pos.csv","a+",newline='') as csvfile:
writer = csv.writer(csvfile)
writer.writerow(feature1_n)
feature2_n=[]
for j in range(0,20):
temp=0
for i in range((n-1)*L1,(n-1)*L1+L2-1):
temp+=(pssm[i][j]-pssm[i+1][j])**2
temp_=temp/(L2-1)
feature2_n.append(temp_)
#print('feature2_n......',feature2_n)
print('len(feature2_n):',len(feature2_n)) # 20
with open("feature/pos/feature2_n_Pos.csv","a+",newline='') as csvfile:
writer = csv.writer(csvfile)
writer.writerow(feature2_n)
# get all features
feature=[]
feature.extend(feature1) # 40
feature.extend(feature2) # 40
feature.extend(feature1_n) # 20
feature.extend(feature2_n) # 20
#print('feature......',feature)
print(len(feature)) # 120
with open("feature/pos/feature_Pos.csv","a+",newline='') as csvfile:
writer = csv.writer(csvfile)
writer.writerow(feature)
features.append(feature)
#print(features)
print(len(features)) #550
with open("feature/pos/features_Pos.csv","a+",newline='') as csvfile:
writer = csv.writer(csvfile)
writer.writerow(features)
###Output
L,L1,L2 61 20 21
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 146 48 50
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 69 23 23
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 217 72 73
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 164 54 56
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 81 27 27
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 89 29 31
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 111 37 37
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 81 27 27
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 319 106 107
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 266 88 90
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 80 26 28
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 154 51 52
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 100 33 34
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 91 30 31
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 454 151 152
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 183 61 61
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 52 17 18
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 132 44 44
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 131 43 45
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 103 34 35
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 81 27 27
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 246 82 82
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 334 111 112
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 64 21 22
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 144 48 48
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 70 23 24
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 294 98 98
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 55 18 19
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 258 86 86
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 206 68 70
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 609 203 203
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 565 188 189
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 121 40 41
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 99 33 33
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 78 26 26
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 495 165 165
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 123 41 41
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 212 70 72
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 274 91 92
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 152 50 52
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 89 29 31
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 259 86 87
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 227 75 77
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 91 30 31
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 421 140 141
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 182 60 62
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 198 66 66
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 86 28 30
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 93 31 31
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 97 32 33
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 155 51 53
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 77 25 27
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 87 29 29
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 291 97 97
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 97 32 33
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 230 76 78
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 136 45 46
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 282 94 94
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 85 28 29
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 429 143 143
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 99 33 33
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 226 75 76
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 195 65 65
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 217 72 73
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 142 47 48
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 67 22 23
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 72 24 24
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 70 23 24
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 221 73 75
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 187 62 63
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 483 161 161
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 261 87 87
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 55 18 19
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 171 57 57
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 282 94 94
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 152 50 52
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 293 97 99
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 126 42 42
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 301 100 101
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 75 25 25
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 64 21 22
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 276 92 92
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 54 18 18
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 93 31 31
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 122 40 42
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 174 58 58
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 141 47 47
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 236 78 80
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
120
L,L1,L2 110 36 38
len(feature1): 40
len(feature2): 40
len(feature1_n): 20
len(feature2_n): 20
|
P5-Big M Method.ipynb | ###Markdown
Min z = 4x1 + x2, subject to: 3x1 + 4x2 >= 20, x1 + 5x2 >= 15, x1, x2 >= 0
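`scipy.optimize.linprog` only accepts upper-bound constraints of the form $A_{ub}x \le b_{ub}$, so in the cell below each "$\ge$" constraint is multiplied by $-1$: $-3x_1 - 4x_2 \le -20$ and $-x_1 - 5x_2 \le -15$.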
###Code
from scipy.optimize import linprog

obj = [4, 1]                        # objective coefficients: minimize 4*x1 + 1*x2
lhs_ineq = [[-3, -4],               # left side of first constraint  (-3x1 - 4x2 <= -20)
            [-1, -5]]               # left side of second constraint (-x1 - 5x2 <= -15)
rhs_ineq = [-20,                    # right side of first constraint
            -15]                    # right side of second constraint
bnd = [(0, float("inf")),           # bounds of x1
       (0, float("inf"))]           # bounds of x2
opt = linprog(c=obj, A_ub=lhs_ineq, b_ub=rhs_ineq,
              bounds=bnd, method="interior-point")
opt
###Output
_____no_output_____ |
hmm/office_hours_hmm.ipynb | ###Markdown
betteridiot Office Hours Today's Topic: Hidden Markov Models (HMM) Taken from https://en.wikipedia.org/wiki/Hidden_Markov_model A Quick Illustration ___ Examples from the [Class GitHub](https://github.com/dcmb-courses/bioinf529-winter2019/blob/master/classes/class_7/class7.ipynb) The Setup
###Code
# The letters used for our sequences
alphabet = 'ACGT'
# The unseen states inferred by observations
hidden_states = 'IG'
# Where do we start
initial_probabilities = {
'I' : 0.1,
'G' : 0.9
}
# What are the conditions in which we change
transition_probabilities = {
'I': { 'I' : 0.6, 'G' : 0.4 },
'G': { 'I' : 0.1, 'G' : 0.9 }
}
# What do we do when we get there
emission_probabilities = {
'I': { 'A' : 0.1, 'C' : 0.4, 'G' : 0.4, 'T' : 0.1 },
'G': { 'A' : 0.4, 'C' : 0.1, 'G' : 0.1, 'T' : 0.4 }
}
###Output
_____no_output_____
###Markdown
Now the data structure to handle this all
###Code
import json
import numpy as np
class HMM:
"""Main class for HMM objects
    Attributes:
alphabet (set): The alphabet of emissions
hidden_states (set): Hidden states in the model
initial_probs (dict): A dictionary of initial state probabilities (default: None)
trans_probs (dict of dict): A dictionary of transition probabilities (default: None)
emit_probs (dict of dict): A dictionary of emission probabilities (default: None)
"""
__all__ = ['alphabet', 'hidden_states', 'trans_probs', 'initial_probs', 'emit_probs', 'viterbi']
def __init__(self, alphabet, hidden_states, β = None, trans_probs = None, emit_probs = None):
"""Instaniates the object
Args:
alphabet (str): The alphabet of emissions
hidden_states (list of str): Hidden states in the model
β (dict of float): A dictionary of initial state probabilities (default: None)
trans_probs (dict of dict): A dictionary of transition probabilities (default: None)
emit_probs (dict of dict): A dictionary of emission probabilities (default: None)
"""
self.alphabet = set(alphabet)
self.hidden_states = set(hidden_states)
self._β = β
self.initial_probs = {key: np.log10(val) for key, val in β.items()}
self._t = trans_probs
self.trans_probs = self._transform_dict(trans_probs)
self._e = emit_probs
self.emit_probs = self._transform_dict(emit_probs)
@staticmethod
def _transform_dict(nested_dict):
"""Transforms a dict of dict of floating point probabilites to log10 equivalent
Args:
nested_dict (dict of dict of floats): dictionary of probabilities wrt hidden state
Returns:
out_dict (dict of dict of floats): log10 transformed probabilities
"""
out_dict = {}
for key_outer, sub_dict in nested_dict.items():
for key_inner, val in sub_dict.items():
out_dict.setdefault(key_outer, {}).update({key_inner: np.log10(val)})
return out_dict
def __str__(self):
out_text = [f'Alphabet: {self.alphabet}',
f'Hidden States: {self.hidden_states}',
f'Initial Probabilities: {json.dumps(self._β, sort_keys = True, indent = 4)}',
f'Transition Probabilities: {json.dumps(self._t, sort_keys = True, indent = 4)}',
f'Emission Probabilities: {json.dumps(self._e, sort_keys = True, indent = 4)}']
return '\n'.join(out_text)
@classmethod
def __dir__(cls):
return cls.__all__
def viterbi(self, sequence):
""" The Viterbi algorithm for decoding a string using a HMM
Args:
sequence (str): Sequence of valid emissions from the HMM
Returns:
result (str): optimal path through HMM given the model parameters
using the Viterbi algorithm
"""
traceback = []
first_base = sequence[0]
previous = {state: self.initial_probs[state] + self.emit_probs[state][first_base] for state in self.hidden_states}
# previous = {}
# for state in self.hidden_states:
# previous.update({state: self.initial_probs[state] + self.emit_probs[state][first_base]})
for base in sequence[1:]:
update_previous, update_tb = self.update_probs(base, previous)
previous = update_previous
traceback.append(update_tb)
result = max(previous, key = previous.get)
result += self.get_traceback(traceback, result)
return result[::-1]
@staticmethod
def get_traceback(traceback, last_origin):
tb = ''
for pos in reversed(traceback):
prev_origin = pos[last_origin]
tb += prev_origin
last_origin = prev_origin
return tb
def update_probs(self, base, previous):
curr_prob = {}
tb_pos = {}
for future in self.hidden_states:
check = {current: previous[current] + self.trans_probs[current][future] for current in self.hidden_states}
origin = max(check, key = check.get)
curr_prob.update({future: self.emit_probs[future][base] + check[origin]})
tb_pos.update({future: origin})
return curr_prob, tb_pos
###Output
_____no_output_____
###Markdown
Let's see it at work
###Code
model = HMM(alphabet, hidden_states, β=initial_probabilities,
trans_probs=transition_probabilities, emit_probs= emission_probabilities)
seq = "ACGCGATCATACTATATTAGCTAAATAGATACGCGCGCGCGCGCGATATATATATATAGCTAATGATCGATTACCCCCCCCCCCAATTA"
print(model.viterbi(seq))
print('GIIIIGGGGGGGGGGGGGGGGGGGGGGGGGGIIIIIIIIIIIIIIGGGGGGGGGGGGGGGGGGGGGGGGGGGGIIIIIIIIIIIGGGGG')
# Exact example from slides
sequence = "ACGCGATC"
print(model.viterbi(sequence))
# A slightly more complex example
sequence = "ACGCGATCATACTATATTAGCTAAATAGATACGCGCGCGCGCGCGATATATATATATAGCTAATGATCGATTACCCCCCCCCCCAATTA"
print(sequence)
print(model.viterbi(sequence))
###Output
ACGCGATCATACTATATTAGCTAAATAGATACGCGCGCGCGCGCGATATATATATATAGCTAATGATCGATTACCCCCCCCCCCAATTA
GIIIIGGGGGGGGGGGGGGGGGGGGGGGGGGIIIIIIIIIIIIIIGGGGGGGGGGGGGGGGGGGGGGGGGGGGIIIIIIIIIIIGGGGG
###Markdown
___ Extended example
###Code
hidden_states = ('Ai', 'Ci', 'Gi', 'Ti', 'Ag', 'Cg', 'Gg', 'Tg')
alphabet = 'ACGT'
initial_probabilities = {
'Ai' : 0.125,
'Ci' : 0.125,
'Gi' : 0.125,
'Ti' : 0.125,
'Ag' : 0.125,
'Cg' : 0.125,
'Gg' : 0.125,
'Tg' : 0.125
}
transition_probabilities = {
'Ai': { 'Ai' : 0.2, 'Ci' : 0.36, 'Gi' : 0.2, 'Ti' : 0.2, 'Ag' : 0.01, 'Cg' : 0.01, 'Gg' : 0.01, 'Tg' : 0.01 },
'Ci': { 'Ai' : 0.1, 'Ci' : 0.1, 'Gi' : 0.66, 'Ti' : 0.1, 'Ag' : 0.01, 'Cg' : 0.01, 'Gg' : 0.01, 'Tg' : 0.01 },
'Gi': { 'Ai' : 0.1, 'Ci' : 0.39, 'Gi' : 0.1, 'Ti' : 0.1, 'Ag' : 0.1, 'Cg' : 0.01, 'Gg' : 0.1, 'Tg' : 0.1 },
'Ti': { 'Ai' : 0.2, 'Ci' : 0.36, 'Gi' : 0.2, 'Ti' : 0.2, 'Ag' : 0.01, 'Cg' : 0.01, 'Gg' : 0.01, 'Tg' : 0.01 },
'Ag': { 'Ai' : 0.01, 'Ci' : 0.1, 'Gi' : 0.01, 'Ti' : 0.01, 'Ag' : 0.2175, 'Cg' : 0.2175, 'Gg' : 0.2175, 'Tg' : 0.2175 },
'Cg': { 'Ai' : 0.01, 'Ci' : 0.1, 'Gi' : 0.01, 'Ti' : 0.01, 'Ag' : 0.2175, 'Cg' : 0.2175, 'Gg' : 0.2175, 'Tg' : 0.2175 },
'Gg': { 'Ai' : 0.01, 'Ci' : 0.1, 'Gi' : 0.01, 'Ti' : 0.01, 'Ag' : 0.2175, 'Cg' : 0.2175, 'Gg' : 0.2175, 'Tg' : 0.2175 },
'Tg': { 'Ai' : 0.01, 'Ci' : 0.1, 'Gi' : 0.01, 'Ti' : 0.01, 'Ag' : 0.2175, 'Cg' : 0.2175, 'Gg' : 0.2175, 'Tg' : 0.2175 }
}
emission_probabilities = {
'Ai': { 'A' : 1, 'C' : 0.001, 'G' : 0.001, 'T' : 0.001 },
'Ci': { 'A' : 0.001, 'C' : 1, 'G' : 0.001, 'T' : 0.001 },
'Gi': { 'A' : 0.001, 'C' : 0.001, 'G' : 1, 'T' : 0.001 },
'Ti': { 'A' : 0.001, 'C' : 0.001, 'G' : 0.001, 'T' : 1 },
'Ag': { 'A' : 1, 'C' : 0.001, 'G' : 0.001, 'T' : 0.001 },
'Cg': { 'A' : 0.001, 'C' : 1, 'G' : 0.001, 'T' : 0.001 },
'Gg': { 'A' : 0.001, 'C' : 0.001, 'G' : 1, 'T' : 0.001 },
'Tg': { 'A' : 0.001, 'C' :0.0010, 'G' : 0.001, 'T' : 1 }
}
model = HMM(alphabet, hidden_states, trans_probs=transition_probabilities,
emit_probs=emission_probabilities, β = initial_probabilities)
sequence = "ACGCGATCATACTATATTAGCTAAATAGATACGCGCGCGCGCGCGATATATATATATAGCTAATGATCGATTACCCCCCCCCCCAATTA"
print(sequence)
result = model.viterbi(sequence)
result = result.replace("A", "")
result = result.replace("C", "")
result = result.replace("G", "")
result = result.replace("T", "")
result = result.replace("i", "I")
print(result)
###Output
ACGCGATCATACTATATTAGCTAAATAGATACGCGCGCGCGCGCGATATATATATATAGCTAATGATCGATTACCCCCCCCCCCAATTA
IIIIIggggggggggggggggggggggggggIIIIIIIIIIIIIIgggggggggggggggggggggggggggggggggggggggggggg
|
Traffic_Sign_Classifier_with_keras.ipynb | ###Markdown
Self-Driving Car Engineer Nanodegree Deep Learning Project: Build a Traffic Sign Recognition ClassifierIn this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary. > **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) that can be used to guide the writing process. Completing the code template and writeup template will cover all of the [rubric points](https://review.udacity.com/!/rubrics/481/view) for this project.The [rubric](https://review.udacity.com/!/rubrics/481/view) contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file.>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. --- Step 0: Load The Data
###Code
# Load pickled data
import pickle
import os
# TODO: Fill this in based on where you saved the training and testing data
data_folder = 'traffic-signs-data'
training_file = os.path.join(data_folder, 'train.p')
validation_file= os.path.join(data_folder, 'valid.p')
testing_file = os.path.join(data_folder, 'test.p')
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
###Output
_____no_output_____
###Markdown
--- Step 1: Dataset Summary & ExplorationThe pickled data is a dictionary with 4 key/value pairs:- `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).- `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.- `'sizes'` is a list containing tuples, (width, height) representing the original width and height the image.- `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the [pandas shape method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html) might be useful for calculating some of the summary results. Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
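As a quick sanity check before computing the summary, the cell below simply lists the keys that are actually present in each pickled dictionary (some copies of the dataset ship only `'features'` and `'labels'`); `train`, `valid` and `test` were loaded in Step 0.

```python
# List the keys present in each pickled dictionary loaded in Step 0
for name, subset in [('train', train), ('valid', valid), ('test', test)]:
    print(name, sorted(subset.keys()))
```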
###Code
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
# TODO: Number of training examples
n_train = X_train.shape[0]
# TODO: Number of validation examples
n_validation = X_valid.shape[0]
# TODO: Number of testing examples.
n_test = X_test.shape[0]
# TODO: What's the shape of an traffic sign image?
image_shape = X_train.shape[1:]
# TODO: How many unique classes/labels there are in the dataset.
n_classes = len(set(y_train))
print("Number of training examples =", n_train)
print("Number of validation examples =", n_validation)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
###Output
Number of training examples = 34799
Number of validation examples = 4410
Number of testing examples = 12630
Image data shape = (32, 32, 3)
Number of classes = 43
###Markdown
Include an exploratory visualization of the dataset Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc. The [Matplotlib](http://matplotlib.org/) [examples](http://matplotlib.org/examples/index.html) and [gallery](http://matplotlib.org/gallery.html) pages are a great resource for doing visualizations in Python.**NOTE:** It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?
###Code
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from IPython.display import display
# Visualizations will be shown in the notebook.
%matplotlib inline
signnames = pd.read_csv('signnames.csv')
num_of_examples = pd.DataFrame(y_train,columns=['num of training']).apply(pd.value_counts)
signnames_with_num = pd.merge(signnames,num_of_examples,left_index=True,right_index=True)
display(signnames_with_num)
###Output
_____no_output_____
###Markdown
Show image from training set by random choice
###Code
def show_image(X, y, number):
fig, axes = plt.subplots(number//5,5,figsize=(15,6))
axes = axes.ravel()
for i in range(number):
index = np.random.randint(0,X.shape[0])
if X.shape[3]==1:
axes[i].imshow(X[index,:,:,0], cmap='gray')
else:
axes[i].imshow(X[index])
axes[i].axis('off')
axes[i].set_title(signnames.iloc[y[index]].SignName)
show_image(X_train,y_train,10)
###Output
_____no_output_____
###Markdown
Show Specific image from training set
###Code
# choose which class to show
which_class = 27
fig, axes = plt.subplots(2,5,figsize=(15,6))
axes = axes.ravel()
for i in range(10):
index = np.random.randint(0,X_train[y_train == which_class].shape[0])
img_show = X_train[y_train == which_class][index]
axes[i].imshow(img_show)
axes[i].axis('off')
print(signnames.iloc[which_class].SignName)
###Output
Pedestrians
###Markdown
Distribution of training set
###Code
plt.hist(y_train, bins=n_classes)
plt.title('Distribution of the training set')
plt.show()
###Output
_____no_output_____
###Markdown
Distribution of training set, validation set and test set
###Code
labels_count = []
for y in [y_train, y_valid, y_test]:
count = pd.DataFrame(y).apply(pd.value_counts)/y.shape[0]
count.sort_index(inplace=True)
labels_count.append(count)
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
colors = ['r', 'g', 'b']
yticks = [2, 1, 0]
for c, k, count in zip(colors, yticks,labels_count):
xs = np.arange(n_classes)
ys = count[0]
# Plot the bar graph given by xs and ys on the plane y=k with 80% opacity.
ax.bar(xs, ys, zs=k, zdir='y', color=c, alpha=0.8)
ax.set_yticks([])
ax.set_xlabel('class')
ax.set_ylabel('train validation test')
ax.set_zlabel('ratio')
plt.show()
###Output
_____no_output_____
###Markdown
We can see that the three datasets have almost identical class distributions, although the number of examples per label differs (some classes have more than others). ---- Step 2: Design and Test a Model ArchitectureDesign and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).The LeNet-5 implementation shown in the [classroom](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play! With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission. There are various aspects to consider when thinking about this problem:- Neural network architecture (is the network over or underfitting?)- Play around preprocessing techniques (normalization, rgb to grayscale, etc)- Number of examples per label (some have more than others).- Generate fake data.Here is an example of a [published baseline model on this problem](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these. Pre-process the Data Set (normalization, grayscale, etc.) Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, `(pixel - 128)/ 128` is a quick way to approximately normalize the data and can be used in this project. Other pre-processing steps are optional. You can try different techniques to see if it improves performance. Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
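For reference, a minimal sketch of the quick `(pixel - 128)/ 128` normalization mentioned above; the preprocessing actually used later in this notebook instead applies histogram equalization followed by scaling with `pixel / 255.`.

```python
import numpy as np

def quick_normalize(images):
    # Approximate zero-mean, roughly unit-variance scaling suggested by the project template
    return (images.astype(np.float32) - 128.0) / 128.0

# e.g. X_train_quick = quick_normalize(X_train)
```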
###Code
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include
### converting to grayscale, etc.
### Feel free to use as many code cells as needed.
from keras.preprocessing.image import ImageDataGenerator
from cv2 import equalizeHist
###Output
D:\Anaconda3\envs\carnd-term1\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
###Markdown
Preprocessed image Normalize the image data by `pixel/255.`
###Code
def equalize(X):
equ = np.zeros_like(X)
r, g, b = X[:,:,0], X[:,:,1], X[:,:,2]
equ[:,:,0] += equalizeHist(r)
equ[:,:,1] += equalizeHist(g)
equ[:,:,2] += equalizeHist(b)
return equ
def normalization(X):
# normalize
X_normalized = X / 255.
return X_normalized
def preprocess(X):
# normalize
X_preprocessed = np.zeros_like(X)
for i in range(len(X)):
X_preprocessed[i]=equalize(X[i])
X_normalized = normalization(X_preprocessed)
return X_normalized
X_train_processed = preprocess(X_train)
X_valid_processed = preprocess(X_valid)
X_test_processed = preprocess(X_test)
fig, axes = plt.subplots(2,5,figsize=(15,6))
axes = axes.ravel()
for i in range(5):
index = np.random.randint(0,X_train.shape[0])
axes[i].imshow(X_train[index])
axes[i].axis('off')
axes[i].set_title(signnames.iloc[y_train[index]].SignName)
axes[5+i].imshow(X_train_processed[index])
axes[5+i].axis('off')
###Output
_____no_output_____
###Markdown
Show image from preprocessed set by random choice
###Code
show_image(X_train_processed,y_train,10)
###Output
_____no_output_____
###Markdown
Generate image Generate image by1. width shift2. height shift3. rotation4. shear5. channel shift
###Code
# generate 0.5 * n_train image
generate_ratio = 0.5
train_gen = ImageDataGenerator(width_shift_range=0.1,
height_shift_range=0.1,
rotation_range=10,
shear_range=0.2,
channel_shift_range=0.1)
X_generated,y_generated = X_train_processed, y_train
batches = 0
for X_batch,y_batch in train_gen.flow(X_train_processed,y_train,batch_size=256):
X_generated = np.concatenate((X_generated,X_batch))
y_generated = np.concatenate((y_generated,y_batch))
batches += 1
if batches >= n_train / 256 * generate_ratio:
break
print("Number of training examples =", n_train)
print("Number of generated examples =", X_generated.shape[0]-X_train.shape[0])
###Output
Number of training examples = 34799
Number of generated examples = 17408
###Markdown
Show some images from generated data
###Code
fig, axes = plt.subplots(2,5,figsize=(15,6))
axes = axes.ravel()
for i in range(10):
index = np.random.randint(0,X_generated.shape[0]-X_train.shape[0])+X_train.shape[0]
axes[i].imshow(X_generated[index])
axes[i].axis('off')
axes[i].set_title(signnames.iloc[y_generated[index]].SignName)
###Output
_____no_output_____
###Markdown
Model Architecture
###Code
### Define your architecture here.
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
###Markdown
Use a LeNet-5 style architecture with dropout regularization, trained with the Adam optimizer.
###Code
from keras.layers import Conv2D,MaxPool2D,Flatten,Dense,Dropout
from keras.models import Sequential
from keras.optimizers import Adam,SGD
from keras.utils import to_categorical
from keras.initializers import truncated_normal
from keras import backend as K
from keras.regularizers import l2
from keras.models import load_model
import tensorflow as tf
EPOCHS = 60
BATCH_SIZE = 256
LOGS_PATH = 'model'
y_train_onehot = to_categorical(y_train,43)
y_generated_onehot = to_categorical(y_generated,43)
y_valid_onehot = to_categorical(y_valid,43)
y_test_onehot = to_categorical(y_test,43)
model = Sequential()
model.add(Conv2D(6,(5,5),activation='relu',input_shape=(32,32,3),kernel_initializer=truncated_normal(stddev=0.1)))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Conv2D(16,(5,5),activation='relu', kernel_initializer=truncated_normal(stddev=0.1)))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(120,activation='relu',kernel_initializer=truncated_normal(stddev=0.1)))
model.add(Dropout(0.6))
model.add(Dense(84,activation='relu', kernel_initializer=truncated_normal(stddev=0.1)))
model.add(Dropout(0.6))
model.add(Dense(43,activation='softmax', kernel_initializer=truncated_normal(stddev=0.1)))
###Output
_____no_output_____
###Markdown
Train, Validate and Test the Model A validation set can be used to assess how well the model is performing. A low accuracy on the training and validationsets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
###Code
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
###Markdown
Training and Evaluation Train the Model
###Code
model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy'])
model.fit(X_generated,y_generated_onehot,batch_size=BATCH_SIZE,epochs=EPOCHS,validation_data=(X_valid_processed,y_valid_onehot),shuffle=True)
model.save('model/lenet_5.h5')
###Output
Train on 52207 samples, validate on 4410 samples
Epoch 1/60
52207/52207 [==============================] - 7s 141us/step - loss: 2.9632 - acc: 0.1980 - val_loss: 1.8257 - val_acc: 0.4676
Epoch 2/60
52207/52207 [==============================] - 5s 94us/step - loss: 1.8879 - acc: 0.4179 - val_loss: 1.2474 - val_acc: 0.5955
Epoch 3/60
52207/52207 [==============================] - 5s 93us/step - loss: 1.5335 - acc: 0.5061 - val_loss: 0.9714 - val_acc: 0.6853
Epoch 4/60
52207/52207 [==============================] - 5s 92us/step - loss: 1.3263 - acc: 0.5689 - val_loss: 0.7669 - val_acc: 0.7726
Epoch 5/60
52207/52207 [==============================] - 5s 93us/step - loss: 1.1746 - acc: 0.6189 - val_loss: 0.6577 - val_acc: 0.8075
Epoch 6/60
52207/52207 [==============================] - 5s 93us/step - loss: 1.0663 - acc: 0.6545 - val_loss: 0.5564 - val_acc: 0.8438
Epoch 7/60
52207/52207 [==============================] - 5s 94us/step - loss: 0.9844 - acc: 0.6800 - val_loss: 0.4760 - val_acc: 0.8628
Epoch 8/60
52207/52207 [==============================] - 5s 93us/step - loss: 0.9220 - acc: 0.7012 - val_loss: 0.4724 - val_acc: 0.8578
Epoch 9/60
52207/52207 [==============================] - 5s 93us/step - loss: 0.8664 - acc: 0.7173 - val_loss: 0.3850 - val_acc: 0.8816
Epoch 10/60
52207/52207 [==============================] - 5s 93us/step - loss: 0.8243 - acc: 0.7329 - val_loss: 0.3475 - val_acc: 0.8925
Epoch 11/60
52207/52207 [==============================] - 5s 93us/step - loss: 0.7778 - acc: 0.7488 - val_loss: 0.3498 - val_acc: 0.8932
Epoch 12/60
52207/52207 [==============================] - 5s 93us/step - loss: 0.7480 - acc: 0.7595 - val_loss: 0.3022 - val_acc: 0.9138
Epoch 13/60
52207/52207 [==============================] - 5s 93us/step - loss: 0.7143 - acc: 0.7702 - val_loss: 0.2923 - val_acc: 0.9104
Epoch 14/60
52207/52207 [==============================] - 5s 92us/step - loss: 0.6870 - acc: 0.7777 - val_loss: 0.2707 - val_acc: 0.9147
Epoch 15/60
52207/52207 [==============================] - 5s 93us/step - loss: 0.6676 - acc: 0.7856 - val_loss: 0.2659 - val_acc: 0.9247
Epoch 16/60
52207/52207 [==============================] - 5s 94us/step - loss: 0.6407 - acc: 0.7912 - val_loss: 0.2602 - val_acc: 0.9215
Epoch 17/60
52207/52207 [==============================] - 5s 94us/step - loss: 0.6208 - acc: 0.8014 - val_loss: 0.2311 - val_acc: 0.9281
Epoch 18/60
52207/52207 [==============================] - 5s 93us/step - loss: 0.6038 - acc: 0.8070 - val_loss: 0.2293 - val_acc: 0.9274
Epoch 19/60
52207/52207 [==============================] - 5s 94us/step - loss: 0.5827 - acc: 0.8121 - val_loss: 0.2207 - val_acc: 0.9354
Epoch 20/60
52207/52207 [==============================] - 5s 93us/step - loss: 0.5609 - acc: 0.8186 - val_loss: 0.2183 - val_acc: 0.9361
Epoch 21/60
52207/52207 [==============================] - 5s 92us/step - loss: 0.5530 - acc: 0.8200 - val_loss: 0.2018 - val_acc: 0.9415
Epoch 22/60
52207/52207 [==============================] - 5s 92us/step - loss: 0.5421 - acc: 0.8240 - val_loss: 0.1970 - val_acc: 0.9451
Epoch 23/60
52207/52207 [==============================] - 5s 92us/step - loss: 0.5274 - acc: 0.8289 - val_loss: 0.1918 - val_acc: 0.9392
Epoch 24/60
52207/52207 [==============================] - 5s 92us/step - loss: 0.5221 - acc: 0.8326 - val_loss: 0.2118 - val_acc: 0.9358
Epoch 25/60
52207/52207 [==============================] - 5s 92us/step - loss: 0.5119 - acc: 0.8346 - val_loss: 0.1782 - val_acc: 0.9442
Epoch 26/60
52207/52207 [==============================] - 5s 93us/step - loss: 0.4990 - acc: 0.8388 - val_loss: 0.1918 - val_acc: 0.9494
Epoch 27/60
52207/52207 [==============================] - 5s 92us/step - loss: 0.4906 - acc: 0.8397 - val_loss: 0.1729 - val_acc: 0.9458
Epoch 28/60
52207/52207 [==============================] - 5s 93us/step - loss: 0.4792 - acc: 0.8435 - val_loss: 0.1741 - val_acc: 0.9417
Epoch 29/60
52207/52207 [==============================] - 5s 96us/step - loss: 0.4749 - acc: 0.8462 - val_loss: 0.1814 - val_acc: 0.9458
Epoch 30/60
52207/52207 [==============================] - 5s 93us/step - loss: 0.4608 - acc: 0.8502 - val_loss: 0.1878 - val_acc: 0.9469
Epoch 31/60
52207/52207 [==============================] - 5s 94us/step - loss: 0.4669 - acc: 0.8505 - val_loss: 0.1772 - val_acc: 0.9467
Epoch 32/60
52207/52207 [==============================] - 5s 93us/step - loss: 0.4501 - acc: 0.8544 - val_loss: 0.1692 - val_acc: 0.9494
Epoch 33/60
52207/52207 [==============================] - 5s 93us/step - loss: 0.4447 - acc: 0.8556 - val_loss: 0.1776 - val_acc: 0.9512
Epoch 34/60
52207/52207 [==============================] - 5s 93us/step - loss: 0.4253 - acc: 0.8604 - val_loss: 0.1889 - val_acc: 0.9497
Epoch 35/60
52207/52207 [==============================] - 5s 97us/step - loss: 0.4339 - acc: 0.8589 - val_loss: 0.1681 - val_acc: 0.9515
Epoch 36/60
52207/52207 [==============================] - 5s 91us/step - loss: 0.4259 - acc: 0.8612 - val_loss: 0.1654 - val_acc: 0.9503
Epoch 37/60
52207/52207 [==============================] - 5s 95us/step - loss: 0.4221 - acc: 0.8636 - val_loss: 0.1732 - val_acc: 0.9485
Epoch 38/60
52207/52207 [==============================] - 5s 100us/step - loss: 0.4131 - acc: 0.8663 - val_loss: 0.1889 - val_acc: 0.9478
Epoch 39/60
52207/52207 [==============================] - 5s 97us/step - loss: 0.4141 - acc: 0.8652 - val_loss: 0.1675 - val_acc: 0.9508
Epoch 40/60
52207/52207 [==============================] - 5s 99us/step - loss: 0.4037 - acc: 0.8681 - val_loss: 0.1566 - val_acc: 0.9551
Epoch 41/60
52207/52207 [==============================] - 6s 108us/step - loss: 0.4076 - acc: 0.8692 - val_loss: 0.1635 - val_acc: 0.9558
Epoch 42/60
52207/52207 [==============================] - 5s 98us/step - loss: 0.3928 - acc: 0.8712 - val_loss: 0.1488 - val_acc: 0.9612
Epoch 43/60
52207/52207 [==============================] - 5s 100us/step - loss: 0.3909 - acc: 0.8732 - val_loss: 0.1618 - val_acc: 0.9583
Epoch 44/60
52207/52207 [==============================] - 5s 100us/step - loss: 0.3934 - acc: 0.8742 - val_loss: 0.1451 - val_acc: 0.9580
Epoch 45/60
52207/52207 [==============================] - 5s 100us/step - loss: 0.3779 - acc: 0.8760 - val_loss: 0.1558 - val_acc: 0.9592
Epoch 46/60
52207/52207 [==============================] - 5s 98us/step - loss: 0.3869 - acc: 0.8746 - val_loss: 0.1529 - val_acc: 0.9558
Epoch 47/60
52207/52207 [==============================] - 5s 97us/step - loss: 0.3790 - acc: 0.8764 - val_loss: 0.1501 - val_acc: 0.9594
Epoch 48/60
52207/52207 [==============================] - 5s 97us/step - loss: 0.3805 - acc: 0.8773 - val_loss: 0.1570 - val_acc: 0.9556
Epoch 49/60
52207/52207 [==============================] - 5s 102us/step - loss: 0.3709 - acc: 0.8789 - val_loss: 0.1748 - val_acc: 0.9612
Epoch 50/60
52207/52207 [==============================] - 6s 106us/step - loss: 0.3618 - acc: 0.8802 - val_loss: 0.1494 - val_acc: 0.9553
Epoch 51/60
52207/52207 [==============================] - 5s 105us/step - loss: 0.3679 - acc: 0.8798 - val_loss: 0.1773 - val_acc: 0.9569
Epoch 52/60
52207/52207 [==============================] - 5s 103us/step - loss: 0.3628 - acc: 0.8798 - val_loss: 0.1483 - val_acc: 0.9594
Epoch 53/60
52207/52207 [==============================] - 5s 103us/step - loss: 0.3572 - acc: 0.8829 - val_loss: 0.1320 - val_acc: 0.9569: 0.3552
Epoch 54/60
52207/52207 [==============================] - 5s 101us/step - loss: 0.3592 - acc: 0.8832 - val_loss: 0.1388 - val_acc: 0.9580
Epoch 55/60
52207/52207 [==============================] - 5s 104us/step - loss: 0.3566 - acc: 0.8837 - val_loss: 0.1529 - val_acc: 0.9615
Epoch 56/60
52207/52207 [==============================] - 5s 105us/step - loss: 0.3499 - acc: 0.8850 - val_loss: 0.1712 - val_acc: 0.9558
Epoch 57/60
52207/52207 [==============================] - 5s 101us/step - loss: 0.3470 - acc: 0.8869 - val_loss: 0.1441 - val_acc: 0.9612
Epoch 58/60
52207/52207 [==============================] - 5s 102us/step - loss: 0.3468 - acc: 0.8863 - val_loss: 0.1472 - val_acc: 0.9605
Epoch 59/60
52207/52207 [==============================] - 5s 101us/step - loss: 0.3356 - acc: 0.8897 - val_loss: 0.1534 - val_acc: 0.9639
###Markdown
Evaluation
###Code
model = load_model('model/lenet_5.h5')
train_accuracy = model.evaluate(X_train_processed, y_train_onehot)
validation_accuracy = model.evaluate(X_valid_processed, y_valid_onehot)
test_accuracy = model.evaluate(X_test_processed, y_test_onehot)
print("Trian Accuracy = {:.3f}".format(train_accuracy[1]))
print("Validation Accuracy = {:.3f}".format(validation_accuracy[1]))
print("Test Accuracy = {:.3f}".format(test_accuracy[1]))
from sklearn.metrics import precision_score,recall_score,f1_score
y_softmax = model.predict(X_test_processed)  # use the preprocessed test set, consistent with the evaluation above
y_pred = np.argmax(y_softmax,1)
Precision = precision_score(y_test, y_pred,average=None)
Recall = recall_score(y_test, y_pred,average=None)
F1_score = f1_score(y_test, y_pred,average=None)
signnames_with_PRF1 = pd.merge(signnames_with_num,pd.DataFrame({'Precision':Precision,'Recall':Recall,'F1 score':F1_score}),left_index=True,right_index=True)
display(signnames_with_PRF1)
###Output
_____no_output_____
###Markdown
--- Step 3: Test a Model on New ImagesTo give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name. Load and Output the Images
###Code
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import cv2
image_folder = 'test_image'
image_files = os.listdir(image_folder)
images = []
for file in image_files:
image = mpimg.imread(os.path.join(image_folder,file))
resized = cv2.resize(image,(32,32),interpolation=cv2.INTER_AREA)
images.append(resized)
fig, axes = plt.subplots(5,3,figsize=(15,10))
axes = axes.ravel()
for i in range(15):
axes[i].imshow(images[i])
axes[i].axis('off')
###Output
_____no_output_____
###Markdown
Predict the Sign Type for Each Image
###Code
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
images = np.array(images)
X_test_last = preprocess(images)
y_new_softmax = model.predict(X_test_last)
y_new_predict = np.argmax(y_new_softmax,1)
fig, axes = plt.subplots(5,3,figsize=(15,10))
axes = axes.ravel()
for i in range(15):
axes[i].imshow(images[i])
axes[i].axis('off')
axes[i].set_title("{}:{}".format(i+1,signnames.iloc[y_new_predict[i]].SignName))
###Output
_____no_output_____
###Markdown
Analyze Performance
###Code
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
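# A minimal sketch of the accuracy calculation (not part of the original notebook).
# The ground-truth labels below are a placeholder assumption: replace them with the
# true class ids of the 15 downloaded images, in the same order as `images`.
y_true_new = np.array([0] * 15)  # hypothetical labels -- fill in the real class ids
new_accuracy = np.mean(y_new_predict == y_true_new)
print("Accuracy on new images = {:.1%}".format(new_accuracy))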
###Output
_____no_output_____
###Markdown
The model predicted 11 out of 15 signs correctly, so it's 73.3% accurate on these new images. The 'Children crossing' sign is misclassified as 'Road narrows on the right'. Overall, the model works well on the new images. Output Top 5 Softmax Probabilities For Each Image Found on the Web For each of the new images, print out the model's softmax probabilities to show the **certainty** of the model's predictions (limit the output to the top 5 probabilities for each image). [`tf.nn.top_k`](https://www.tensorflow.org/versions/r0.12/api_docs/python/nn.html#top_k) could prove helpful here. The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.`tf.nn.top_k` will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. `tf.nn.top_k` is used to choose the three classes with the highest probability:``` (5, 6) arraya = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497, 0.12789202], [ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401, 0.15899337], [ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 , 0.23892179], [ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 , 0.16505091], [ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137, 0.09155967]])```Running it through `sess.run(tf.nn.top_k(tf.constant(a), k=3))` produces:```TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202], [ 0.28086119, 0.27569815, 0.18063401], [ 0.26076848, 0.23892179, 0.23664738], [ 0.29198961, 0.26234032, 0.16505091], [ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5], [0, 1, 4], [0, 5, 1], [1, 3, 5], [1, 4, 3]], dtype=int32))```Looking just at the first row we get `[ 0.34763842, 0.24879643, 0.12789202]`; you can confirm these are the 3 largest probabilities in `a`. You'll also notice `[3, 0, 5]` are the corresponding indices.
###Code
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
with tf.Session() as sess:
top_k = sess.run(tf.nn.top_k(y_new_softmax,5))
for i in range(len(X_test_last)):
fig,axes = plt.subplots(1,2)
axes = axes.ravel()
axes[0].imshow(images[i])
axes[0].axis('off')
axes[1].barh(range(5),top_k[0][i])
axes[1].set_yticks(range(5))
axes[1].set_yticklabels(['%.3f No.%2d %s'%(top_k[0][i][j],top_k[1][i][j],signnames.iloc[top_k[1][i][j]].SignName) for j in range(5)])
axes[1].tick_params(labelleft='off' , labelright='on')
###Output
_____no_output_____ |
training/2019_SynergisticSymposium/05_Multi_Channel_Reconstruction_solutions.ipynb | ###Markdown
======================================================================== Copyright 2019 Science Technology Facilities Council Copyright 2019 University of Manchester This work is part of the Core Imaging Library developed by Science Technology Facilities Council and University of Manchester Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0.txt Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ========================================================================= Multi-Channel ReconstructionLearning ObjectivesBy the end of this notebook, you will be able to:- Identify the key differences in building Image/Acquisition Geometries and Operators for multi-channel datasets - Build your own reconstructions using FDK, CGLS and PDHG- Determine optimum regularisation parameters based on reconstruction method- Evaluate the effectiveness of each reconstruction routine using energy profiles and elemental maps. Prerequisites:- Acquisition/Image Geometry, Acquisition Data- AstraProjectorMC/3DMC- FDK, CGLS, PDHG, TV- BlockFramework Background:Conventional X-ray detectors only measure the variable density of objects they pass through, giving no insight into what materials are actually inside. This is because standard detectors measure only the number of photons that arrive at each point on the detector.For multi-channel imaging, one can use an energy-sensitive X-ray detector, which measures the energy of every X-ray photon that arrives at each individual pixel. This provides an additional layer of information which can provide important insight on a sample's composition or structure. However, adapted reconstruction routines are required to account for the extra energy-based dimension.The additional energy dimension is stored as a histogram of energy 'channels', indicating the number of X-ray photons detected by a pixel within a fine energy range. Typically 200+ channels are acquired, however in order to speed up computation time, we will restrict our dataset to just 40 channels, where the dominant energy signals are known to appear.
###Code
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from utilities import islicer, link_islicer
from utilities.show_utilities import channel_to_energy, show
from ccpi.framework import ImageGeometry, AcquisitionGeometry, AcquisitionData, ImageData, BlockDataContainer
import numpy as np
import matplotlib.pyplot as plt
import h5py
import os
import sys
from ccpi.io import NEXUSDataReader
from ccpi.optimisation.algorithms import PDHG, CGLS
from ccpi.optimisation.operators import BlockOperator, Gradient
from ccpi.optimisation.functions import L2NormSquared, KullbackLeibler,\
MixedL21Norm, BlockFunction, IndicatorBox
from ccpi.astra.utils import *
from ccpi.astra.processors import FBP
from ccpi.astra.operators import AstraProjectorMC, AstraProjector3DMC
from ccpi.framework.TestData import data_dir
pathname = data_dir
filename = 'sinogram_centered_channels100_140.h5'
path = os.path.join(pathname , filename)
arrays = {}
with h5py.File(path, 'r+') as f:
for k, v in f.items():
arrays[k] = np.array(v)
X = arrays['SC']
###Output
_____no_output_____
###Markdown
The sample we will look at in this notebook is an iodine-stained lizard head. The use of elemental staining is common in biology and medicine, by acting as a contrast agent to provide improved visibility of internal structures for X-ray imaging. Iodine is a popular choice in the clinical and research fields, due to its energy-based properties falling in the typical range for diagnostic X-ray imaging.The sample was scanned in a lab-based X-ray CT cone-beam system at 50keV, 0.8W, using an energy-sensitive detector to acquire a full 4D dataset. The detector consisted of an 80 x 80 pixel array, with pixel sizes of 250 $\mu$m x 250 $\mu$m. A source-sample distance of 233.0 mm and a sample-detector distance of 245.0 mm gave a geometric magnification of 2.05x. The sample was scanned for 60 projections over a full 360$^{\circ}$ rotation, with 60s exposure time per projection.A diagram of the style of setup used for spectral imaging is shown below from a paper by [C.K.Egan *et al*, 2015](https://www.nature.com/articles/srep15979): As shown by the diagram, each pixel stores its own energy channel histogram, with characteristic signals such as 'Absorption edges' (caused by photoelectric absorption of X-ray photons by the iodine in our sample) producing sharp rises in signal. These are the key features we look for when analysing spectral data. Setting up Acquisition/Image GeometryFirst we need to setup the geometries based on the acquisition system.These are currently ordered based on the 4D raw dataset.Run the code to see the current dimensions.
###Code
print(X.shape)
###Output
_____no_output_____
###Markdown
We can allocate these separately
###Code
num_channels = X.shape[0]
num_pixels_h = X.shape[3]
num_pixels_v = X.shape[1]
num_angles = X.shape[2]
###Output
_____no_output_____
###Markdown
We set the angles used based on the parameters chosen for data acquisition. An offset is also applied (this just gives us the freedom to adjust the rotation of our reconstructed images).
###Code
angles = np.linspace(-180-55,180-55,num_angles,endpoint=False)*np.pi/180
###Output
_____no_output_____
###Markdown
We encode all this information into `AcquisitionGeometry` and `ImageGeometry`.Recall that the Geometry for scanning with a cone-beam X-ray source is defined by the following parameters:(See **Notebook00_Building_Blocks** for more information)
###Code
# Define acquisition geometry from the detector system acquisition settings
# with reordering of the dimension labels to match the raw data
distance_source_center = 233.0 # [mm]
distance_center_detector = 245.0 # [mm]
detector_pixel_size = 0.25 # [mm]
ag = AcquisitionGeometry('cone',
'3D',
angles,
pixel_num_h=num_pixels_h,
pixel_size_h=detector_pixel_size,
pixel_num_v=num_pixels_v,
pixel_size_v=detector_pixel_size,
dist_source_center=distance_source_center,
dist_center_detector=distance_center_detector,
channels=num_channels,
dimension_labels = ['channel', 'vertical', 'angle', 'horizontal'])
# Create the 4D acquisition data
data = ag.allocate()
data.fill(X)
# Calculate the geometric magnification to scale the voxel size relative
# to the detector pixel size.
mag = (ag.dist_source_center + ag.dist_center_detector)/ag.dist_source_center
# Define Image Geoemtry
ig = ImageGeometry(voxel_num_x=ag.pixel_num_h,
voxel_num_y=ag.pixel_num_h,
voxel_num_z=ag.pixel_num_h,
voxel_size_x=ag.pixel_size_h/mag,
voxel_size_y=ag.pixel_size_h/mag,
voxel_size_z=ag.pixel_size_h/mag,
channels=num_channels)
print(data)
###Output
_____no_output_____
###Markdown
We can use an interactive image slicer `islicer` to provide a visualisation of the `AcquisitionData` in any dimension below. Simply by taking a data subset in a particular dimension, we can then visualise the data in any other given dimension.Run the code below to see three such examples:1) Projection radiographs for each of the 60 rotation angles acquired in a single channel2) The sinogram for each energy channel for the central slice subset3) The spectral signals acquired in each energy channel for a single projection angle **Note: You can adjust the look of your reconstructions by varying certain parameters** - by removing `cmap`, you return to the default colour map of 'gray' - the default scaling runs from 0.0 through the full data range, vary this using e.g. `minmax = (0.0 , 2.0)`
###Code
islicer(data.subset(channel=20), direction='angle', title = 'Projection Angle', cmap='inferno')
islicer(data.subset(vertical=40), direction='channel', title = 'Sinogram Channel', cmap='inferno')
islicer(data.subset(angle=40), direction='channel', title = 'Channel', cmap='inferno')
###Output
_____no_output_____
###Markdown
We setup the tomography operator for 3D multi-channel data using the `AcquisitionGeometry` and `ImageGeometry`
###Code
A3DMC = AstraProjector3DMC(ig, ag)
###Output
_____no_output_____
###Markdown
FDK ReconstructionOne of the simplest, and most common, means of image reconstruction for X-ray CT is the use of Filtered BackProjection (FBP). In the case of many lab-based X-ray sources, which utilise a cone-beam rather than parallel- or fan-beams, we use a specific case of FBP: The [Feldkamp-Davis-Kress (FDK)](https://www.osapublishing.org/josaa/abstract.cfm?uri=josaa-1-6-612) algorithm.The function `FBP` is capable of handling reconstructions for both parallel-beam and cone-beam geometries in 2D and 3D. When supplied with both Acquisition and Image Geometry (`ag`, `ig`), the function recognises and performs the appropriate form of FBP, in this case FDK.Run the code below to see a reconstruction using the FDK algorithm. Here we use a `ram-lak` filter as part of the reconstruction.
###Code
# Setup geometry for FDK
fdk = FBP(ig, ag, 'ram-lak')
# Provide dataset
fdk.set_input(data)
# Store results
recon = fdk.get_output()
# Check dimensions
print('Shape: {}'.format(recon.shape))
recon.dimension_labels
###Output
_____no_output_____
###Markdown
We can see we now have three spatial dimensions to our reconstructed data (each of size 80), along with our 40 energy channels. By selecting a single slice of a spatial dimension, we can view the resulting 80x80 reconstructed image for each energy channel using `islicer`.
###Code
# Show results
islicer(recon.subset(vertical=46),direction='channel',
title='FDK: Channel', cmap='inferno', minmax=(0.0,0.4))
###Output
_____no_output_____
###Markdown
While some features of the lizard head can be seen in the reconstructed images, much of the signal is shrouded by noise. In the next section, we will explore the first iterative algorithm - CGLS. Running the CGLS algorithm on a 4D datasetAs the next step, we will begin with a standard CGLS algorithm, applied to our 4D dataset We initialise the operator based on the dimensions of the 4D datasetLet's test the CGLS algorithm for just 10 iterations
###Code
# We initialise
x_init = A3DMC.volume_geometry.allocate()
# Run the CGLS for 10 iterations
cgls = CGLS(x_init = x_init, operator = A3DMC, data = data, max_iteration = 100)
cgls.run(10)
###Output
_____no_output_____
###Markdown
We can use custom-made functions to visualise the quality of our reconstructions. Here we are looking at three commonly used views of the reconstructed dataset (axial, coronal and sagittal), and how these vary for different energy channels. Here we use the `show` function for multi-channel datasets, providing parameters to vary: - Title, Figure/font size - Channel number you wish to view (Currently takes one value for 4D data) - Colour map - Min/max colour map scalingRun the code below to see reconstructions for the 10th, 20th and 30th channel in our truncated dataset, with the X-ray energies these correspond to.
###Code
# Plot axial, coronal and sagittal views
# Plotter automatically converts channel number to energy
show(cgls.get_output(), title='CGLS 4D Reconstruction', show_channels=10,
cmap='inferno', figure_size = [15,6], font_size=[25,20], minmax=(0.0,0.3))
show(cgls.get_output(), title='CGLS 4D Reconstruction', show_channels=20,
cmap='inferno', figure_size = [15,6], font_size=[25,20], minmax=(0.0,0.3))
show(cgls.get_output(), title='CGLS 4D Reconstruction', show_channels=30,
cmap='inferno', figure_size = [15,6], font_size=[25,20], minmax=(0.0,0.3))
###Output
_____no_output_____
###Markdown
As a result of performing our reconstruction, we change the shape and dimensions of our 4D dataset, as we now have 3 spatial dimensions along with the energy channels.Run the following code to see the new shape and corresponding dimension labels.
###Code
print('Shape = ', cgls.get_output().shape)
print('Labels = ', cgls.get_output().dimension_labels)
###Output
_____no_output_____
###Markdown
You can use these labels to once more explore visualisations with the islicer function. Try varying the subset dimension and the direction using the label names above.e.g. try: `islicer(cgls.get_output().subset(vertical=46),direction='channel',title='Axial: Channel', cmap='inferno', minmax = (0.0,0.4))`
###Code
# Examples slicer
islicer(cgls.get_output().subset(vertical=46),direction='channel',
title='Axial View: Channel', cmap='inferno', minmax = (0.0,0.4))
###Output
_____no_output_____
###Markdown
Now we can see how the visualisation changes if we increase the number of iterations. Using the `show` function, we can visualise the reconstructed image slices at chosen iteration intervals. We'll run it for 30 iterations.
###Code
# Initialise
x_init = A3DMC.volume_geometry.allocate()
# Set up max iterations and step size
max_iter = 30
step = 10
# Set up algorithm
cgls2 = CGLS(x_init = x_init, operator = A3DMC, data = data,
max_iteration = max_iter, update_objective_interval = 10)
for i in range(0, max_iter // step):
cgls2.run(step)
# get and visusualise the results
cgls_out = cgls2.get_output()
show(cgls_out, title='Iteration {},'.format((i+1) * step) + ' Objective Function: {}'.format(cgls2.loss[-1]) + '\n',\
show_channels=20,cmap='inferno', figure_size=[15,6], font_size=[25,20], minmax=(0.0,0.4))
###Output
_____no_output_____
###Markdown
These images highlight the instability of a basic CGLS algorithm, where the **number of iterations** effectively acts as the algorithm's **regularisation parameter**. As a result, too many iterations leads to a divergence from an optimum solution, and while the objective function continues to reduce, only additional noise is being contributed. In the next section, we will look at a different algorithm: Primal-Dual Hybrid Gradient (PDHG), combined with a Total Variation (TV) regularising factor, to see if this improves our reconstructed data quality. Total Variation Regularisation in 4D Volume using PDHGThe PDHG algorithm and Total Variation regularisation were covered extensively in **Notebook_04_PDHG_Tomography**, but below gives a brief recap of the basics behind each. Recap on PDHGPDHG aims to solve problems of the form:$$ \begin{equation} \min_{u} \mathcal{F}(K u) + \mathcal{G}(u) \label{min_problem} \end{equation} $$In order to setup and run PDHG, we need to define the following: - The operator $K$. - The function $\mathcal{F}$ and $\mathcal{G}$. - Step-sizes $\sigma$ and $\tau$ such that $\sigma\tau\|K\|^{2}<1$. Then we can setup PDHG:`pdhg = PDHG(f = F, g = G, operator = K, tau = tau, sigma = sigma, max_iteration = maxiter)` Applying Total Variation RegularisationFor this notebook, we will be applying Total Variation (TV) regularisation to our PDHG algorithm. TV is a non-smooth regulariser, $$ \underset{u}{\operatorname{argmin}} \alpha\,\mathrm{TV}(u) + \frac{1}{2} \| \mathcal{A} u - g\|^{2}_{2} + \mathbb{I}_{\{u>0\}}(u) $$where,- The Total Variation is taken as a `MixedL21Norm()`, such that $$\mathrm{TV}(u) = \|\nabla u \|_{2,1} = \sum \sqrt{ (\partial_{y}u)^{2} + (\partial_{x}u)^{2} }$$ - $g$ is the Acquisition data obtained from the detector - $\mathcal{A}$ is the projection operator ( Radon transform ) that maps from an image-space to an acquisition space, i.e., $\mathcal{A} : X \rightarrow Y$. - $\alpha$ is the regularising parameter that measures a trade-off between the fidelity and the regulariser terms - $\mathbb{I}_{\{u>0\}}(u) : = \begin{cases}0, \mbox u>0\\\infty, \mbox otherwise\quad\end{cases}$ is a positivity constraint for the minimiser, $u$. 
Setting up PDHG for TVWe define K as a `BlockOperator`, containing the Gradient and Projection operator:$$ K = \begin{bmatrix}\nabla\\A\end{bmatrix}$$K = `BlockOperator(Gradient, A)`The function $\mathcal{F}$, is a `BlockFunction` witha function $\alpha\|\cdot\|_{2,1}\quad\Longleftrightarrow\quad$ `MixedL21Norm()` term that represents the Total variation regularisation ,a function $\|\cdot -g \|_{2}^{2}\quad\Longleftrightarrow\quad$ `L2NormSquared(data)` term that represents the data fitting.Hence, $\mathcal{F} = [f_{1}, f_{2}] \quad\Longleftrightarrow\quad $ `F = BlockFunction(MixedL21Norm(), L2NormSquared(data))`Finally, we have the function $\mathcal{G} = \mathbb{I}_{\{u>0\}}(u) \quad\Longleftrightarrow\quad$ `G = IndicatorBox(lower=0)`Again, we can verify that with the above setting we can express our problem into this form, for $x=u$$$ \begin{align} \underset{u}{\operatorname{argmin}}\alpha\|\nabla u\|_{2,1} + \frac{1}{2}\|\mathcal{A} u - g\|^{2}_{2} + \mathbb{I}_{\{u>0\}}(u) \\= \underset{u}{\operatorname{argmin}} f_{1}(\nabla u) + f_{2}(\mathcal{A}u) + \mathbb{I}_{\{u>0\}}(u) \\ = \underset{u}{\operatorname{argmin}} \mathcal{F}( \begin{bmatrix} \nabla \\ \mathcal{A} \end{bmatrix}u) + \mathbb{I}_{\{u>0\}}(u)\\ = \underset{u}{\operatorname{argmin}} \mathcal{F}(Ku) + \mathcal{G}(u)\\ = \underset{x}{\operatorname{argmin}} \mathcal{F}(Kx) + \mathcal{G}(x) \end{align} $$
###Code
# The operator K is a Block Operator that contains the gradient and the tomography operator
op1 = Gradient(ig)
op2 = A3DMC
# Set up a BlockOperator K
K = BlockOperator(op1, op2, shape=(2,1))
###Output
_____no_output_____
###Markdown
$\mathcal{F}$ is a `BlockFunction` of a fidelity term and our TV regularising term:- $f_{1}$ = $\alpha\|\cdot\|_{2,1}\quad\Longleftrightarrow\quad$ `MixedL21Norm()` term that represents the Total variation regularisation ,- $f_{2}$ = $\|\cdot -g \|_{2}^{2}\quad\Longleftrightarrow\quad$ `L2NormSquared(data)` term that represents the data fitting.Therefore as $f_{1}$ and $f_{2}$ act on each element of $K$, we end up with $$ \mathcal{F}(Ku) = \mathcal{F}(\begin{bmatrix}\nabla u \\A u\end{bmatrix}) = ( f_{1}(\nabla u), f_{2}(Au) ) $$
###Code
alpha = 0.05
f1 = alpha * MixedL21Norm()
f2 = 0.5 * L2NormSquared(b = data)
F = BlockFunction(f1, f2)
###Output
_____no_output_____
###Markdown
Next, we have our positivity constraint function $\mathcal{G} = \mathbb{I}_{\{u>0\}}(u) \quad\Longleftrightarrow\quad$ `G = IndicatorBox(lower=0)`.
###Code
G = IndicatorBox(lower = 0)
###Output
_____no_output_____
###Markdown
Finally, we compute the operator norm of $K$ (`normK = operator.norm()`), and set our step sizes, $\sigma$ and $\tau$.
###Code
# Compute the operator norm for K
normK = K.norm()
# Define the step sizes sigma and tau
sigma = 1
tau = 1/(sigma*normK**2)
###Output
_____no_output_____
###Markdown
Now we can setup and run the PDHG algorithm. Here it will run for 100 iterations.Due to the increased complexity of the algorithm, combined with the reconstruction of a 4D dataset, the runtime for 100 iterations alone will be around **4 minutes**. However, we may require many more than 100 iterations to reach an optimum solution. Further, we may wish to reconstruct hundreds of energy channels, rather than just the 40 we use here.Therefore, consideration must be taken on the number of iterations you perform based on your choice of algorithm and size of your dataset.**Scroll past the cell below** if you wish to save yourself 4 minutes! We have provided a final reconstruction after 1000 iterations of PDHG, which required around 40 minutes of runtime.
###Code
pdhg_S_100 = PDHG(f = F, g = G, operator = K, tau = tau, sigma = sigma,
max_iteration = 100, update_objective_interval = 25)
pdhg_S_100.run()
show(pdhg_S_100.get_output(), title='PDHG TV 100 Iterations',
show_channels=20, cmap='inferno', figure_size = [15,6], font_size=[25,20], minmax=(0.0,0.4))
###Output
_____no_output_____
###Markdown
1000 IterationsWe load in the 1000 iteration result using the `NEXUSDataReader()` we saw in **Notebook 03_3D_Diamond_dataset**.
###Code
# Load in 1000 iteration dataset
reader = NEXUSDataReader()
reader.set_up(nexus_file = 'pdhg_S_1000.nxs')
pdhg_S_1000 = reader.load_data()
# Show the result
show(pdhg_S_1000, title='PDHG TV 1000 Iterations',
show_channels=20, cmap='inferno', figure_size = [15,6], font_size=[25,20], minmax=(0.0,0.4))
###Output
_____no_output_____
###Markdown
Exercise 1: Change correlation to Space-ChannelWe can make some adjustments to our framework to see how these affect the reconstruction.Our gradient component is capable of smoothing our reconstruction both in the spatial domain, but also in the energy domain, by comparing reconstructions to its neighbouring channel.Try changing the gradient component of our operator, $\nabla$, so that it is correlated in both the spatial and energy channel domain. We can do this by simply setting: `op1 = Gradient(ig, correlation='SpaceChannels')` Keep the tomography operator, $A$ the same. There are three numerical parameters that have an impact on the resulting reconstruction: alpha ($\alpha$), sigma ($\sigma$) and tau ($\tau$). For our original PDHG reconstruction, these optimum parameters were: $\alpha$ = 0.05, $\sigma$ = 1, $\tau$ = `1/(sigma*normK**2)`.For this test, we'll adjust our parameters slightly to: $\alpha$ = 0.07, $\sigma$ = 1, $\tau$ = `1/(sigma*normK**2)`. See if you notice any difference when you visualise the new reconstruction.
###Code
# Setup BlockOperator, K
op1 = Gradient(ig, correlation = "SpaceChannels")
op2 = A3DMC
K = BlockOperator(op1, op2, shape=(2,1) )
# Define regularising parameter
alpha = 0.07
# Setup BlockFunction, F
f1 = alpha * MixedL21Norm()
f2 = 0.5 * L2NormSquared(b = data)
F = BlockFunction(f1, f2)
# Positivity Constraint Function
G = IndicatorBox(lower = 0)
# Compute the operator norm for K
normK = K.norm()
# Define the step sizes sigma and tau
sigma = 1
tau = 1/(sigma*normK**2)
###Output
_____no_output_____
###Markdown
Once you've made these changes, have a go at running the reconstruction and see what influence your changes had.
###Code
pdhgSC = PDHG(f = F, g = G, operator = K, tau = tau, sigma = sigma,
max_iteration = 100, update_objective_interval = 25)
pdhgSC.run()
show(pdhgSC.get_output(), title='PDHG TV Reconstruction', show_channels=20,
cmap='inferno', figure_size = [15,6], font_size=[25,20], minmax=(0.0,0.4))
###Output
_____no_output_____
###Markdown
We can directly compare the two types of PDHG reconstruction using `islicer`. As we move across energy channels, see if there are any major differences between the reconstructions: 1) PDHG TV using a Spatial correlation for 1000 iterations 2) PDHG TV using a Space-Channel correlation for 100 iterations **Note:** If you also ran the earlier code for 100 iterations of PDHG TV using a Spatial correlation, feel free to add in an extra line to compare this too: `i3 = islicer(pdhg_S_100.get_output().subset(vertical=40),direction='channel',title='100 Iteration Space Corr.: Channel', cmap='inferno', minmax = (0.0,0.4))`
###Code
i1 = islicer(pdhg_S_1000.subset(vertical=46),direction='channel',
title='1000 Iteration Space Corr.: Channel', cmap='inferno', minmax = (0.0,0.4))
i2 = islicer(pdhgSC.get_output().subset(vertical=46),direction='channel',
title='100 Iteration Space-Channel Corr.: Channel',cmap='inferno', minmax = (0.0,0.4))
#i3 = islicer(pdhg_S_100.get_output().subset(vertical=46),direction='channel',
# title='100 Iteration Space Corr.: Channel', cmap='inferno', minmax = (0.0,0.4))
link_islicer(i1,i2)
###Output
_____no_output_____
###Markdown
Spatial/Energy ProfilesOne way we can directly observe the effect of applying Spatial/Energy Correlation to our algorithms is through the use of pixel profiles across our reconstructed images in each domain respectively.By plotting signal values across spatial pixels, or across energy channels, the smoothing effect of this correlation feature is seen. We will demonstrate each of these below. Extracting Spatial ProfilesBy identifying a region of our reconstructed slices where the Iodine signals are strong, we can plot along the pixels in this region and compare the line profiles for each algorithm covered here.Given the strong signals from the eyes of the lizard head sample, we will look across this pixel row for a single channel, given by subset values of: - `vertical = 46` - `horizontal_y = 33` - `channel = 20` **Feel free to adjust these values in the cell below if you wish to explore different regions of the reconstructed images.**
###Code
plt.figure(figsize=[12,8])
# Extract line profiles for each algorithm
plt.plot(recon.subset(vertical = 46, horizontal_y = 33, channel = 20).as_array())
plt.plot(cgls.get_output().subset(vertical = 46, horizontal_y = 33, channel = 20).as_array())
plt.plot(pdhg_S_1000.subset(vertical = 46, horizontal_y = 33, channel = 20).as_array())
plt.plot(pdhgSC.get_output().subset(vertical = 46, horizontal_y = 33, channel = 20).as_array())
# Label and add key
plt.xlabel('Pixels X', fontsize=20)
plt.ylabel('Optical Density (no units)', fontsize=20)
plt.legend(labels=['FDK', 'Simple CGLS', 'Space Corr. PDHG TV', 'Space-Channel Corr. PDHG TV'])
###Output
_____no_output_____
###Markdown
We can see the reduced noise fluctuations for the Space-correlated PDHG algorithms, compared to the significant noise seen in both FDK and CGLS reconstructions. Extracting Energy ProfilesOnce we have chosen and optimised our reconstruction routine, we can exploit this additional, energy dimension further by identifying the spectral signals that lie within.First we can plot out energy profiles for any region-of-interest in our reconstructed slices. This allows us to find, for instance, an absorption edge.Iodine has a known absorption 'K-edge' of 33.169 keV, so we can test the accuracy of reconstruction algorithms by seeing where this edge falls in each case. A plot of an idealised, theoretical Iodine K-edge is shown below, falling precisely at 33.169 keV.We start by extracting a 3x3 set of pixels from each of the 3D multi-channel datasets. We can choose the pixels with the highest signal by looking at our previous reconstructed images.
###Code
# Collect 3x3 regions of interest for each dataset
# FDK Recon
# Sum along z-direction (vertical)
fdk_ROI1 = np.mean(fdk.get_output().as_array()[:,44:47,:,:],1)
# Sum along y-direction (horizontal_y)
fdk_ROI2 = np.mean(fdk_ROI1[:,31:34,:],1)
# Sum along x-direction (horizontal_x)
fdk_ROI3 = np.mean(fdk_ROI2[:,14:17],1)
# 3D multi-channel 10 iteration CGLS
cgls_ROI1 = np.mean(cgls.get_output().as_array()[:,44:47,:,:],1)
cgls_ROI2 = np.mean(cgls_ROI1[:,31:34,:],1)
cgls_ROI3 = np.mean(cgls_ROI2[:,14:17],1)
# 3D multi-channel space correlated PDHG
pdhg_S_1000_ROI1 = np.mean(pdhg_S_1000.as_array()[:,44:47,:],1)
pdhg_S_1000_ROI2 = np.mean(pdhg_S_1000_ROI1[:,31:34],1)
pdhg_S_1000_ROI3 = np.mean(pdhg_S_1000_ROI2[:,14:17],1)
# 3D multi-channel space-channel correlated PDHG TV
pdhgSC_ROI1 = np.mean(pdhgSC.get_output().as_array()[:,44:47,:],1)
pdhgSC_ROI2 = np.mean(pdhgSC_ROI1[:,31:34],1)
pdhgSC_ROI3 = np.mean(pdhgSC_ROI2[:,14:17],1)
# Set channel numbers for reduced dataset (here 100 to 140)
channel_no = np.linspace(100,140,num_channels)
# Apply linear calibration equation to convert channels to energy
e_keV = 0.2786*channel_no + 0.8575
plt.figure(figsize=(10,8))
plt.plot(e_keV,fdk_ROI3)
plt.plot(e_keV,cgls_ROI3)
plt.plot(e_keV,pdhg_S_1000_ROI3)
plt.plot(e_keV,pdhgSC_ROI3)
plt.plot((33.169,33.169),plt.ylim(0.0,0.4))
plt.legend(labels=['FDK', 'Simple CGLS', 'Space Corr. 1000 Iter. PDHG',
'Space-Channel Corr. 100 Iter. PDHG', 'I K-edge 33.169 keV'])
plt.ylim(0.1,0.3)
plt.xlim([29,39])
plt.ylabel('Optical Density (no units)',fontsize=20)
plt.xlabel('Energy (keV)',fontsize=20)
###Output
_____no_output_____
###Markdown
From our plots, we can see all algorithms experience a sharp rise in signal value due to the iodine absorption K-edge. Compared to the theoretical value line, each of the cases match well with the expected position of the edge.An important factor to note is the increased smoothness of the signal for PDHG TV, when a 'Space-Channel' correlation is applied to our gradient operator for these methods. This correlation enforces our operator to use the previous energy channel as a reference point for reconstructing data in the next, resulting in a smoother transition across channels. Exercise 2: Move to 2D Spatial Geometry with Channels Defining 2D `Acquisition/Image Geometry` and `Projector`From our 4D dataset, we can reduce the dimensionality by taking a single slice along a particular spatial dimension. We can go from a shape of (Channel, Vertical, Angle, Horizontal) = (40, 80, 60, 80) to a shape of (Channel, Angle, Horizontal) = (40, 60, 80) by taking a vertical slice subset.Let's start by reducing our original `AcquisitionData`. For simplicity, choose a central slice subset by setting: `data2DMC = data.subset(vertical=40)` `ag2d = data2DMC.geometry` This will set our new `AcquisitionGeometry`.We must then alter the dimensionality of `ag2d`. Currently this is stored as `'3D'`, but as we now use only 2 spatial dimensions, we must change it to `'2D'`. So we can do: `ag2d.dimension = '2D'`
###Code
data2DMC = data.subset(vertical=40)
ag2d = data2DMC.geometry
ag2d.dimension = '2D'
###Output
_____no_output_____
###Markdown
We still need to manually adjust the `ImageGeometry` parameters we used for the [4D dataset](4D_Acquisition_Image_Geometry). Below we have the parameters used for the original dataset. Which parts of this geometry do we no longer need? **Delete** what's not needed and then run the code.
###Code
ig2d = ImageGeometry(voxel_num_x=ig.voxel_num_x,
voxel_num_y=ig.voxel_num_y,
voxel_size_x=ag.pixel_size_h/mag,
voxel_size_y=ag.pixel_size_h/mag,
channels = ig.channels)
###Output
_____no_output_____
###Markdown
Now we can setup the tomography operator for 2D multi-channel data using the redefined `AcquisitionGeometry` and `ImageGeometry` for 2D. Let's call it `A2DMC`.Remember: Last time we used [AstraProjector3DMC](A3DMC) as we had a 3D multi-channel dataset. Here we instead use `AstraProjectorMC()`.Note: While `AstraProjector3DMC` required the use of `gpu`, we can setup our 2D multi-channel tomography operator using either `gpu` or `cpu`. The default is the use of `gpu`.
###Code
A2DMC = AstraProjectorMC(ig2d, ag2d ,'cpu')
data2DMC.dimension_labels
###Output
_____no_output_____
###Markdown
Exercise 3: Running 2D plus channel CGLSIn this exercise, use what you've learned about setting up reconstruction routines, and perform a simple CGLS using our 2D multi-channel data. Then run it for 10 iterations.Remember, the key parameters you need are: - initialisation x_init (`A2DMC.volume_geometry.allocate()`) - operator (`A2DMC`) - data (`data2DMC`)
###Code
x_init = A2DMC.volume_geometry.allocate()
cgls2d = CGLS(x_init = x_init, operator = A2DMC,
data = data2DMC, max_iteration = 100)
cgls2d.run(10)
###Output
_____no_output_____
###Markdown
You can check the results of your reconstruction by running the function `show` below, giving you axial views for three energy channels. Alternatively, run an interactive slicer across energy channels.
###Code
# Show reconstruction
show(cgls2d.get_output(), title='2D Multi-Channel CGLS', show_channels=[10,20,30],
cmap='inferno', figure_size = [15,6], font_size=[25,20],minmax=(0.0,0.4))
islicer(cgls2d.get_output(),direction='channel',title='Channel',cmap='inferno',minmax=(0.0,0.4))
###Output
_____no_output_____
###Markdown
Exercise 4: Running 2D plus channel regularised CGLSHere we will expand upon what you learned about BlockFrameworks. In particular, we will cover both `BlockOperators` and `BlockDataContainers`, which were covered in **Notebook 02_Tikhonov_Block_Framework**.Below gives a brief definition of each type:`BlockDataContainer` holds datacontainers as a column vector.`BlockOperator` is a matrix of operators. Setting up Regularised CGLSFor our regularisation, we wish to solve:$$\underset{u}{\mathrm{argmin}}\begin{Vmatrix}\binom{\alpha \nabla}{A} u - \binom{0}{g}\end{Vmatrix}^2_2$$With the definitions:$\tilde{A} = \binom{\alpha \nabla}{A} \quad\Longleftrightarrow\quad$ `BlockOperator(alpha*Gradient(ig2d),A2DMC)` (where $\alpha$ is the regularisation parameter)$\tilde{g} = \binom{0}{g} \quad\Longleftrightarrow\quad$ `BlockDataContainer(op1.range_geometry().allocate(),data2DMC)`this can now be recognised as a least squares problem:$$\underset{u}{\mathrm{argmin}}\begin{Vmatrix}\tilde{A} u - \tilde{g}\end{Vmatrix}^2_2$$and being a least squares problem, it can be solved using CGLS with $\tilde{A}$ as operator and $\tilde{g}$ as data. Build `BlockOperator`, $\tilde{A} = \binom{\alpha \nabla}{A}$Using the 2D multi-channel data and geometry constructed above, build a `BlockOperator`, $\tilde{A}$, applying a Space-Channel correlation. Choose a regularisation value around $\alpha$ = 1.5 as a starting point.
###Code
# Setup the BlockOperator
op1 = Gradient(ig2d, correlation='SpaceChannels')
op2 = A2DMC
alpha = 1.5
A_block = BlockOperator(alpha*op1,op2)
###Output
_____no_output_____
###Markdown
Build `BlockDataContainer`Next build a `BlockDataContainer`, $\tilde{g}$, containing an array with the range of the regularising operator, $\alpha \nabla$, and our reduced `AcquisitionData` (`data2DMC`).
###Code
# Setup the BlockDataContainer
g_block = BlockDataContainer(op1.range_geometry().allocate(),data2DMC)
###Output
_____no_output_____
###Markdown
Initialise and RunNow we can initialise the `BlockOperator` and run the algorithm for 50 iterations.
###Code
# Initialise the BlockOperator
x_init = A_block.domain_geometry().allocate()
# Setup and run the regularised CGLS algorithm
cgls_reg = CGLS(x_init = x_init, operator = A_block, data = g_block,
max_iteration = 100,update_objective_interval = 10)
cgls_reg.run(50)
###Output
_____no_output_____
###Markdown
Check the reconstruction below. Does the value of $\alpha$ need changing? Click [here](Choosing_alpha) if you want to go back and vary your initial value of $\alpha$ to see if you can improve the reconstruction.
###Code
# Show reconstruction
show(cgls_reg.get_output(), show_channels=[10,20,30], title='2D Multi-Channel Regularised CGLS',
cmap='inferno', figure_size = [15,6], font_size=[25,20], minmax=(0.0,0.4))
###Output
_____no_output_____
###Markdown
While still noisy, we can see a significant improvement in contrast to the CGLS algorithm when no regularisation is applied. Next we will try the PDHG algorithm using TV regularisation to see if this fares any better. Exercise 5: 2D Multi-channel Total Variation with PDHG Now let's test our reduced dataset with the PDHG algorithm using Total Variation regularisation.Based on what our construction looked like for the 4D version, try to recreate this for our 2D multi-channel data. You'll need to adjust the `BlockOperator`, $K$ and `BlockFunction`, $\mathcal{F}$, accordingly.Play around with the numeric parameters ($\alpha$, $\sigma$ and $\tau$) as well, and see how they affect the reconstruction here. Is there a big change when moving from 3D multi-channel to 2D multi-channel?
###Code
# Setup the BlockOperator, K
op1 = Gradient(ig2d, correlation='SpaceChannels')
op2 = A2DMC
K = BlockOperator(op1,op2)
# Define the regularising parameter, alpha
alpha = 0.03
# Setup the BlockFunction, F
f1 = alpha * MixedL21Norm()
f2 = 0.5 * L2NormSquared(b=data2DMC)
F = BlockFunction(f1,f2)
# Define indicator function that provides a positivity constraint
G = IndicatorBox(lower=0)
# Compute the operator norm for K
normK = K.norm()
# Define the step sizes sigma and tau
sigma = 1
tau = 1/(sigma*normK**2)
pdhg_2d = PDHG(f=F, g=G, operator=K, tau=tau, sigma=sigma,
max_iteration = 1000, update_objective_interval = 25)
pdhg_2d.run(500)
# Show reconstruction
show(pdhg_2d.get_output(), show_channels=[10,20,30], title='2D Multi-Channel TV PDHG',
cmap='inferno', figure_size = [15,6], font_size=[25,20], minmax=(0.0,0.4))
###Output
_____no_output_____
###Markdown
Comparing 2D Multi-channel MethodsWe can bring together the 2D multi-channel reconstructions we have performed for: - Simple CGLS - Regularised CGLS - PDHG with TV Using the `islicer` tool, we can directly compare these reconstructions side-by-side and evaluate the level of noise removal and spatial/energy smoothing offered by each method.
###Code
i1 = islicer(cgls2d.get_output(),direction='channel',title='Simple CGLS: Channel',
cmap='inferno',minmax=(0.0,0.4))
i2 = islicer(cgls_reg.get_output(),direction='channel',title='Regularised CGLS: Channel',
cmap='inferno',minmax=(0.0,0.4))
i3 = islicer(pdhg_2d.get_output(),direction='channel',title='PDHG TV: Channel',
cmap='inferno',minmax = (0.0,0.4))
link_islicer(i1,i2,i3)
###Output
_____no_output_____
###Markdown
Energy profilesOnce more we can extract energy profiles, to determine the K-edge position in our 2D multi-channel datasets.In this case, we extract a 3x3 set of pixels by only summing along two spatial domains, y (`horizontal_y`) and x (`horizontal_x`).
###Code
# Collect 3x3 regions of interest for each dataset
# 2d multi-channel simple CGLS
# Sum along y-direction
cgls2d_ROI1 = np.mean(cgls2d.get_output().as_array()[:,33:36,:],1)
# Sum along x-direction
cgls2d_ROI2 = np.mean(cgls2d_ROI1[:,60:63],1)
# 2d multi-channel regularised CGLS
cgls_reg_ROI1 = np.mean(cgls_reg.get_output().as_array()[:,33:36,:],1)
cgls_reg_ROI2 = np.mean(cgls_reg_ROI1[:,60:63],1)
# 2d multi-channel PDHG TV
pdhg_2d_ROI1 = np.mean(pdhg_2d.get_output().as_array()[:,33:36,:],1)
pdhg_2d_ROI2 = np.mean(pdhg_2d_ROI1[:,60:63],1)
# Set channel numbers for reduced dataset
channel_no = np.linspace(100,140,num_channels)
# Apply linear calibration equation to convert channels to energy
e_keV = 0.2786*channel_no + 0.8575
plt.figure(figsize=(10,8))
plt.plot(e_keV,cgls2d_ROI2)
plt.plot(e_keV,cgls_reg_ROI2)
plt.plot(e_keV,pdhg_2d_ROI2)
plt.plot((33.169,33.169),plt.ylim(0.0,0.4))
plt.legend(labels=['Simple CGLS', 'Regularised (Space-Channel) CGLS',
'(Space-Channel) PDHG TV', 'I K-edge 33.169 keV'])
plt.ylim(0.1,0.4)
plt.xlim([29,39])
plt.ylabel('Optical Density (no units)',fontsize=20)
plt.xlabel('Energy (keV)',fontsize=20)
###Output
_____no_output_____
###Markdown
Constructing Elemental MapsOnce we have identified the position of this edge in our energy profile, we can narrow our dataset and produce an 'Iodine map'. That is, we can select only the energy channels occupied by the absorption edge, so all reconstructed signal is now due only to the iodine contrast agent. This method is known as **'K-edge subtraction'**, which you can read about in more detail in papers such as that by [C.K.Egan *et al*, 2015](https://www.nature.com/articles/srep15979). A basic concept is shown below for an energy profile plot. The hashed area highlights the energy range we are interested in, corresponding to the absorption edge.Based on our plots, we will estimate the start and end of the edge to occur at approximately 32.5 keV and 34 keV respectively.
###Code
# Calculate energy channels corresponding to start and end of the K-edge
e_keV = np.array([32.5,34])
channel_no = ((e_keV-0.8575)/0.2786)-100
# Display the channels corresponding to the start and end of the K-edge
print("Start of edge = channel",int(channel_no[0]))
print("End of edge = channel",int(channel_no[1]))
# 2d multi-channel simple CGLS
# Sum over all pixels for channels of interest
cgls2d_COI = np.mean(cgls2d.get_output().as_array()[int(channel_no[0]):int(channel_no[1]),:,:],0)
# 2d multi-channel regularised CGLS
cgls_reg_COI = np.mean(cgls_reg.get_output().as_array()[int(channel_no[0]):int(channel_no[1]),:,:],0)
# 2d multi-channel PDHG TV
pdhg_2d_COI = np.mean(pdhg_2d.get_output().as_array()[int(channel_no[0]):int(channel_no[1]),:,:],0)
###Output
_____no_output_____
###Markdown
By stacking our iodine maps as an array, we can use `islicer` to move between each, directly comparing the quality of feature definition and degree of background noise for each algorithm.
###Code
# Collect together iodine maps
stack_maps = (np.array([cgls2d_COI, cgls_reg_COI, pdhg_2d_COI]))
titles = ['Simple CGLS', 'Regularised CGLS', 'Space-Channel PDHG TV']
islicer(stack_maps, title=titles, direction=0, cmap='inferno',minmax = (0.0,0.4))
###Output
_____no_output_____ |
trees/binary_search_tree.ipynb | ###Markdown
Binary Search Tree  Define Node class
###Code
# this code makes the tree that we'll traverse
class Node(object):
def __init__(self,value = None):
self.value = value
self.left = None
self.right = None
def set_value(self,value):
self.value = value
def get_value(self):
return self.value
def set_left_child(self,left):
self.left = left
def set_right_child(self, right):
self.right = right
def get_left_child(self):
return self.left
def get_right_child(self):
return self.right
def has_left_child(self):
return self.left != None
def has_right_child(self):
return self.right != None
    # define __repr__ and __str__ to decide what a print statement displays for a Node object
def __repr__(self):
return f"Node({self.get_value()})"
def __str__(self):
return f"Node({self.get_value()})"
from collections import deque
class Queue():
def __init__(self):
self.q = deque()
def enq(self,value):
self.q.appendleft(value)
def deq(self):
if len(self.q) > 0:
return self.q.pop()
else:
return None
def __len__(self):
return len(self.q)
def __repr__(self):
if len(self.q) > 0:
s = "<enqueue here>\n_________________\n"
s += "\n_________________\n".join([str(item) for item in self.q])
s += "\n_________________\n<dequeue here>"
return s
else:
return "<queue is empty>"
###Output
_____no_output_____
###Markdown
Define insertLet's assume that duplicates are overridden by the new node that is to be inserted. Other options are to keep a counter of duplicate nodes, or to keep a list of duplicate nodes with the same value. Note:Insertion can be done using iteration or recursion
###Code
# Solution
class Tree():
def __init__(self):
self.root = None
def set_root(self,value):
self.root = Node(value)
def get_root(self):
return self.root
def compare(self,node, new_node):
"""
0 means new_node equals node
-1 means new node less than existing node
1 means new node greater than existing node
"""
if new_node.get_value() == node.get_value():
return 0
elif new_node.get_value() < node.get_value():
return -1 # traverse left
else: #new_node > node
return 1 # traverse right
def insert_with_loop(self,new_value):
new_node = Node(new_value)
node = self.get_root()
if node == None:
self.root = new_node
return
while(True):
comparison = self.compare(node, new_node)
if comparison == 0:
# override with new node's value
node.set_value(new_node.get_value())
break # override node, and stop looping
elif comparison == -1:
# go left
if node.has_left_child():
node = node.get_left_child()
else:
node.set_left_child(new_node)
break #inserted node, so stop looping
else: #comparison == 1
# go right
if node.has_right_child():
node = node.get_right_child()
else:
node.set_right_child(new_node)
break # inserted node, so stop looping
def insert_with_recursion(self,value):
if self.get_root() == None:
self.set_root(value)
return
#otherwise, use recursion to insert the node
self.insert_recursively(self.get_root(), Node(value))
def insert_recursively(self,node,new_node):
comparison = self.compare(node,new_node)
if comparison == 0:
# equal
node.set_value(new_node.get_value())
elif comparison == -1:
# traverse left
if node.has_left_child():
self.insert_recursively(node.get_left_child(),new_node)
else:
node.set_left_child(new_node)
else: #comparison == 1
# traverse right
if node.has_right_child():
self.insert_recursively(node.get_right_child(), new_node)
else:
node.set_right_child(new_node)
def __repr__(self):
level = 0
q = Queue()
visit_order = list()
node = self.get_root()
q.enq( (node,level) )
while(len(q) > 0):
node, level = q.deq()
if node == None:
visit_order.append( ("<empty>", level))
continue
visit_order.append( (node, level) )
if node.has_left_child():
q.enq( (node.get_left_child(), level +1 ))
else:
q.enq( (None, level +1) )
if node.has_right_child():
q.enq( (node.get_right_child(), level +1 ))
else:
q.enq( (None, level +1) )
s = "Tree\n"
previous_level = -1
for i in range(len(visit_order)):
node, level = visit_order[i]
if level == previous_level:
s += " | " + str(node)
else:
s += "\n" + str(node)
previous_level = level
return s
tree = Tree()
tree.insert_with_loop(5)
tree.insert_with_loop(6)
tree.insert_with_loop(4)
tree.insert_with_loop(2)
tree.insert_with_loop(5) # insert duplicate
print(tree)
###Output
Tree
Node(5)
Node(4) | Node(6)
Node(2) | <empty> | <empty> | <empty>
<empty> | <empty>
###Markdown

###Code
tree = Tree()
tree.insert_with_recursion(5)
tree.insert_with_recursion(6)
tree.insert_with_recursion(4)
tree.insert_with_recursion(2)
tree.insert_with_recursion(5) # insert duplicate
print(tree)
###Output
Tree
Node(5)
Node(4) | Node(6)
Node(2) | <empty> | <empty> | <empty>
<empty> | <empty>
###Markdown
SearchDefine a search function that takes a value, and returns true if a node containing that value exists in the tree, otherwise false. 
###Code
# Solution
class Tree():
def __init__(self):
self.root = None
def set_root(self,value):
self.root = Node(value)
def get_root(self):
return self.root
def compare(self,node, new_node):
"""
0 means new_node equals node
-1 means new node less than existing node
1 means new node greater than existing node
"""
if new_node.get_value() == node.get_value():
return 0
elif new_node.get_value() < node.get_value():
return -1
else:
return 1
def insert(self,new_value):
new_node = Node(new_value)
node = self.get_root()
if node == None:
self.root = new_node
return
while(True):
comparison = self.compare(node, new_node)
if comparison == 0:
                # override with the new node's value (consistent with insert_with_loop above)
                node.set_value(new_node.get_value())
break # override node, and stop looping
elif comparison == -1:
# go left
if node.has_left_child():
node = node.get_left_child()
else:
node.set_left_child(new_node)
break #inserted node, so stop looping
else: #comparison == 1
# go right
if node.has_right_child():
node = node.get_right_child()
else:
node.set_right_child(new_node)
break # inserted node, so stop looping
def search(self,value):
node = self.get_root()
s_node = Node(value)
while(True):
comparison = self.compare(node,s_node)
if comparison == 0:
return True
elif comparison == -1:
if node.has_left_child():
node = node.get_left_child()
else:
return False
else:
if node.has_right_child():
node = node.get_right_child()
else:
return False
def __repr__(self):
level = 0
q = Queue()
visit_order = list()
node = self.get_root()
q.enq( (node,level) )
while(len(q) > 0):
node, level = q.deq()
if node == None:
visit_order.append( ("<empty>", level))
continue
visit_order.append( (node, level) )
if node.has_left_child():
q.enq( (node.get_left_child(), level +1 ))
else:
q.enq( (None, level +1) )
if node.has_right_child():
q.enq( (node.get_right_child(), level +1 ))
else:
q.enq( (None, level +1) )
s = "Tree\n"
previous_level = -1
for i in range(len(visit_order)):
node, level = visit_order[i]
if level == previous_level:
s += " | " + str(node)
else:
s += "\n" + str(node)
previous_level = level
return s
tree = Tree()
tree.insert(5)
tree.insert(6)
tree.insert(4)
tree.insert(2)
print(f"""
search for 8: {tree.search(8)}
search for 2: {tree.search(2)}
""")
print(tree)
###Output
search for 8: False
search for 2: True
Tree
Node(5)
Node(4) | Node(6)
Node(2) | <empty> | <empty> | <empty>
<empty> | <empty>
###Markdown
Bonus: deletionTry implementing deletion yourself. You can also check out this explanation [here](https://www.geeksforgeeks.org/binary-search-tree-set-2-delete/)
###Code
#Have to implement delete
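# A possible sketch of deletion (not part of the original notebook), following the
# approach in the linked GeeksforGeeks article. The helper names `delete_node` and
# `delete` are made up for illustration; they only rely on the Node/Tree API above.
def delete_node(node, value):
    # delete `value` from the subtree rooted at `node`; return the (possibly new) subtree root
    if node is None:
        return None
    if value < node.get_value():
        node.set_left_child(delete_node(node.get_left_child(), value))
    elif value > node.get_value():
        node.set_right_child(delete_node(node.get_right_child(), value))
    else:
        # found the node to delete
        if not node.has_left_child():
            return node.get_right_child()   # zero children or only a right child
        if not node.has_right_child():
            return node.get_left_child()    # only a left child
        # two children: copy the in-order successor's value into this node,
        # then delete that successor from the right subtree
        successor = node.get_right_child()
        while successor.has_left_child():
            successor = successor.get_left_child()
        node.set_value(successor.get_value())
        node.set_right_child(delete_node(node.get_right_child(), successor.get_value()))
    return node
def delete(tree, value):
    # convenience wrapper that keeps the tree's root pointer up to date
    tree.root = delete_node(tree.get_root(), value)
# quick check using the tree built in the Search section above
delete(tree, 4)
print(tree)
print("search for 4:", tree.search(4))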
###Output
_____no_output_____ |
tools/sus2021/COVID19.ipynb | ###Markdown
Data Analysis - COVID-19 Hospital Bed Occupancy> The "best" or "worst" hospital assessment does not take into account the number of patients per health facility, nor does it consider the occupancy period.Source: https://opendatasus.saude.gov.br
###Code
import pandas as pd
import pandera as pa
import matplotlib.pyplot as plt
dataset_covid = "../../datasets/sus2021/esus-vepi.LeitoOcupacao.csv"
dataset_cnes = "../../datasets/sus2021/cnes_ndis_01_04_2020_banco_estab.csv"
sars_df = pd.read_csv(dataset_covid,
low_memory=False,
converters={'cnes': lambda x: str(x)},
parse_dates=['dataNotificacao'],
date_parser=lambda x: pd.to_datetime(x).tz_localize(None)
).dropna(subset=['cnes', 'saidaConfirmadaObitos'])
cnes_df = pd.read_csv(dataset_cnes,
low_memory=False,
usecols=["CO_CNES", "NO_FANTASIA"],
converters={'CO_CNES': lambda x: str(x)}
).rename(columns={'CO_CNES': 'cnes', 'NO_FANTASIA': 'nome'})
def cnes(): return cnes_df
cnes().dtypes
def df(): return sars_df
df().dtypes
schema = pa.DataFrameSchema(
columns = {
"cnes":pa.Column(pa.String),
"dataNotificacao":pa.Column(pa.DateTime),
"estado":pa.Column(pa.String, nullable=True),
"saidaConfirmadaObitos":pa.Column(pa.Float)
}
)
schema.validate(df())
def get_hospital_by_cnes(id):
while len(id) < 7: id = "0" + id
return cnes().loc[cnes().cnes == id].nome.iloc[0]
def history_by_cnes(id):
while len(id) < 7: id = "0" + id
filter = df().cnes == id
rows = df().loc[filter, ["dataNotificacao", "saidaConfirmadaObitos"]]
grouped = rows.groupby(['dataNotificacao'])
rank = grouped.saidaConfirmadaObitos.sum().sort_values()
return rank
def best_hospital_by_state(state):
filter = df().estado.str.lower() == state.lower()
rows = df().loc[filter, ["cnes", "saidaConfirmadaObitos"]]
grouped = rows.groupby(['cnes'], as_index = False)
rank = grouped.saidaConfirmadaObitos.mean().sort_values(by="saidaConfirmadaObitos")
return rank[rank.saidaConfirmadaObitos > 0].iloc[0].cnes
def worst_hospital_by_state(state):
filter = df().estado.str.lower() == state.lower()
rows = df().loc[filter, ["cnes", "saidaConfirmadaObitos"]]
grouped = rows.groupby(['cnes'], as_index = False)
result = grouped.saidaConfirmadaObitos.mean().sort_values(by="saidaConfirmadaObitos", ascending=False)
return result.iloc[0].cnes
# Best hospital in Ceará: lowest mean COVID-19 mortality
cnes_rank = best_hospital_by_state("ceará")
get_hospital_by_cnes(cnes_rank)
# 2481286 MATERNIDADE ESCOLA ASSIS CHATEAUBRIAND
# Best hospital in São Paulo: lowest mean COVID-19 mortality
cnes_rank = best_hospital_by_state("são paulo")
get_hospital_by_cnes(cnes_rank)
# 2079690 SANTA CASA DE MISERICORDIA DE SAO LUIZ DO PARA..
# Worst hospital in Ceará: highest mean COVID-19 mortality
cnes_rank = worst_hospital_by_state("ceará")
get_hospital_by_cnes(cnes_rank)
# 0086673 HOSPITAL LEONARDO DA VINCI
# Worst hospital in São Paulo: highest mean COVID-19 mortality
cnes_rank = worst_hospital_by_state("São paulo")
get_hospital_by_cnes(cnes_rank)
# 2091585 HOSPITAL ESTADUAL DE SAPOPEMBA SAO PAULO
# Mortality history of the worst hospital per state
# (variable names renamed to match the function actually being called)
worst_ce = worst_hospital_by_state("ceará")
hist_ce = history_by_cnes(worst_ce)
hist_ce.plot()
worst_sp = worst_hospital_by_state("são paulo")
hist_sp = history_by_cnes(worst_sp)
hist_sp.plot()
worst_sc = worst_hospital_by_state("santa catarina")
hist_sc = history_by_cnes(worst_sc)
hist_sc.plot()
###Output
_____no_output_____ |
3-ml-classification/Week_2-learning-linear-classifiers.ipynb | ###Markdown
Implementing logistic regression from scratchThe goal of this notebook is to implement your own logistic regression classifier. You will: * Extract features from Amazon product reviews. * Convert an SFrame into a NumPy array. * Implement the link function for logistic regression. * Write a function to compute the derivative of the log likelihood function with respect to a single coefficient. * Implement gradient ascent. * Given a set of coefficients, predict sentiments. * Compute classification accuracy for the logistic regression model. Let's get started! Fire up Turi CreateMake sure you have the latest version of Turi Create.
###Code
import turicreate
###Output
_____no_output_____
###Markdown
Load review dataset For this assignment, we will use a subset of the Amazon product review dataset. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted primarily of positive reviews.
###Code
products = turicreate.SFrame('amazon_baby_subset.sframe/')
###Output
_____no_output_____
###Markdown
One column of this dataset is 'sentiment', corresponding to the class label with +1 indicating a review with positive sentiment and -1 indicating one with negative sentiment.
###Code
products['sentiment']
###Output
_____no_output_____
###Markdown
Let us quickly explore more of this dataset. The 'name' column indicates the name of the product. Here we list the first 10 products in the dataset. We then count the number of positive and negative reviews.
###Code
products.head(10)['name']
print('# of positive reviews =', len(products[products['sentiment']==1]))
print('# of negative reviews =', len(products[products['sentiment']==-1]))
###Output
# of positive reviews = 26579
# of negative reviews = 26493
###Markdown
**Note:** For this assignment, we eliminated class imbalance by choosing a subset of the data with a similar number of positive and negative reviews. Apply text cleaning on the review dataIn this section, we will perform some simple feature cleaning using **SFrames**. The last assignment used all words in building bag-of-words features, but here we limit ourselves to 193 words (for simplicity). We compiled a list of 193 most frequent words into a JSON file. Now, we will load these words from this JSON file:
###Code
import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
important_words = json.load(f)
important_words = [str(s) for s in important_words]
print(important_words)
###Output
['baby', 'one', 'great', 'love', 'use', 'would', 'like', 'easy', 'little', 'seat', 'old', 'well', 'get', 'also', 'really', 'son', 'time', 'bought', 'product', 'good', 'daughter', 'much', 'loves', 'stroller', 'put', 'months', 'car', 'still', 'back', 'used', 'recommend', 'first', 'even', 'perfect', 'nice', 'bag', 'two', 'using', 'got', 'fit', 'around', 'diaper', 'enough', 'month', 'price', 'go', 'could', 'soft', 'since', 'buy', 'room', 'works', 'made', 'child', 'keep', 'size', 'small', 'need', 'year', 'big', 'make', 'take', 'easily', 'think', 'crib', 'clean', 'way', 'quality', 'thing', 'better', 'without', 'set', 'new', 'every', 'cute', 'best', 'bottles', 'work', 'purchased', 'right', 'lot', 'side', 'happy', 'comfortable', 'toy', 'able', 'kids', 'bit', 'night', 'long', 'fits', 'see', 'us', 'another', 'play', 'day', 'money', 'monitor', 'tried', 'thought', 'never', 'item', 'hard', 'plastic', 'however', 'disappointed', 'reviews', 'something', 'going', 'pump', 'bottle', 'cup', 'waste', 'return', 'amazon', 'different', 'top', 'want', 'problem', 'know', 'water', 'try', 'received', 'sure', 'times', 'chair', 'find', 'hold', 'gate', 'open', 'bottom', 'away', 'actually', 'cheap', 'worked', 'getting', 'ordered', 'came', 'milk', 'bad', 'part', 'worth', 'found', 'cover', 'many', 'design', 'looking', 'weeks', 'say', 'wanted', 'look', 'place', 'purchase', 'looks', 'second', 'piece', 'box', 'pretty', 'trying', 'difficult', 'together', 'though', 'give', 'started', 'anything', 'last', 'company', 'come', 'returned', 'maybe', 'took', 'broke', 'makes', 'stay', 'instead', 'idea', 'head', 'said', 'less', 'went', 'working', 'high', 'unit', 'seems', 'picture', 'completely', 'wish', 'buying', 'babies', 'won', 'tub', 'almost', 'either']
###Markdown
Now, we will perform 2 simple data transformations:1. Remove punctuation using [Python's built-in](https://docs.python.org/2/library/string.html) string functionality.2. Compute word counts (only for **important_words**)We start with *Step 1* which can be done as follows:
###Code
import string
def remove_punctuation(text):
try: # python 2.x
text = text.translate(None, string.punctuation)
except: # python 3.x
translator = text.maketrans('', '', string.punctuation)
text = text.translate(translator)
return text
products['review_clean'] = products['review'].apply(remove_punctuation)
###Output
_____no_output_____
###Markdown
Now we proceed with *Step 2*. For each word in **important_words**, we compute a count for the number of times the word occurs in the review. We will store this count in a separate column (one for each word). The result of this feature processing is a single column for each word in **important_words** which keeps a count of the number of times the respective word occurs in the review text.**Note:** There are several ways of doing this. In this assignment, we use the built-in *count* function for Python lists. Each review string is first split into individual words and the number of occurances of a given word is counted.
###Code
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
###Output
_____no_output_____
###Markdown
The SFrame **products** now contains one column for each of the 193 **important_words**. As an example, the column **perfect** contains a count of the number of times the word **perfect** occurs in each of the reviews.
###Code
products['perfect']
###Output
_____no_output_____
###Markdown
Now, write some code to compute the number of product reviews that contain the word **perfect**.**Hint**: * First create a column called `contains_perfect` which is set to 1 if the count of the word **perfect** (stored in column **perfect**) is >= 1.* Sum the number of 1s in the column `contains_perfect`.
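(If you prefer a one-liner, and assuming SArray comparisons behave like their pandas counterparts, `(products['perfect'] >= 1).sum()` gives the same count as the `apply`-based version below.)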
###Code
def if_perfect(row):
if row >= 1:
return 1
else:
return 0
products['contains_perfect'] = products['perfect'].apply(if_perfect)
products['contains_perfect'].sum()
###Output
_____no_output_____
###Markdown
**Quiz Question**. How many reviews contain the word **perfect**? Convert SFrame to NumPy arrayAs you have seen previously, NumPy is a powerful library for doing matrix manipulation. Let us convert our data to matrices and then implement our algorithms with matrices.First, make sure you can perform the following import. If it doesn't work, you need to go back to the terminal and run`pip install numpy`.
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
We now provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels. Note that the feature matrix includes an additional column 'intercept' to take account of the intercept term.
###Code
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
###Output
_____no_output_____
###Markdown
Let us convert the data into NumPy arrays.
###Code
# Warning: This may take a few minutes...
feature_matrix, sentiment = get_numpy_data(products, important_words, 'sentiment')
###Output
_____no_output_____
###Markdown
**Are you running this notebook on an Amazon EC2 t2.micro instance?** (If you are using your own machine, please skip this section)It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in acceptable amount of time. For interest of time, please refrain from running `get_numpy_data` function. Instead, download the [binary file](https://s3.amazonaws.com/static.dato.com/files/coursera/course-3/numpy-arrays/module-3-assignment-numpy-arrays.npz) containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands:```arrays = np.load('module-3-assignment-numpy-arrays.npz')feature_matrix, sentiment = arrays['feature_matrix'], arrays['sentiment']```
###Code
feature_matrix.shape
###Output
_____no_output_____
###Markdown
**Quiz Question:** How many features are there in the **feature_matrix**?**Quiz Question:** Assuming that the intercept is present, how does the number of features in **feature_matrix** relate to the number of features in the logistic regression model? Now, let us see what the **sentiment** column looks like:
###Code
sentiment
###Output
_____no_output_____
###Markdown
Estimating conditional probability with link function Recall from lecture that the link function is given by:$$P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},$$where the feature vector $h(\mathbf{x}_i)$ represents the word counts of **important_words** in the review $\mathbf{x}_i$. Complete the following function that implements the link function:
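As a quick numerical check of this formula: a score of $\mathbf{w}^T h(\mathbf{x}_i) = 4$ gives $1/(1 + e^{-4}) \approx 0.982$, while a score of $-1$ gives $1/(1 + e^{1}) \approx 0.269$; these are exactly the two values that appear in the checkpoint a few cells below.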
###Code
'''
produces a probabilistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
# YOUR CODE HERE
prod = np.dot(feature_matrix, coefficients)
# Compute P(y_i = +1 | x_i, w) using the link function
# YOUR CODE HERE
predictions = 1/(1 + np.exp(-prod))
# return predictions
return predictions
###Output
_____no_output_____
###Markdown
**Aside**. How the link function works with matrix algebraSince the word counts are stored as columns in **feature_matrix**, each $i$-th row of the matrix corresponds to the feature vector $h(\mathbf{x}_i)$:$$[\text{feature_matrix}] =\left[\begin{array}{c}h(\mathbf{x}_1)^T \\h(\mathbf{x}_2)^T \\\vdots \\h(\mathbf{x}_N)^T\end{array}\right] =\left[\begin{array}{cccc}h_0(\mathbf{x}_1) & h_1(\mathbf{x}_1) & \cdots & h_D(\mathbf{x}_1) \\h_0(\mathbf{x}_2) & h_1(\mathbf{x}_2) & \cdots & h_D(\mathbf{x}_2) \\\vdots & \vdots & \ddots & \vdots \\h_0(\mathbf{x}_N) & h_1(\mathbf{x}_N) & \cdots & h_D(\mathbf{x}_N)\end{array}\right]$$By the rules of matrix multiplication, the score vector containing elements $\mathbf{w}^T h(\mathbf{x}_i)$ is obtained by multiplying **feature_matrix** and the coefficient vector $\mathbf{w}$.$$[\text{score}] =[\text{feature_matrix}]\mathbf{w} =\left[\begin{array}{c}h(\mathbf{x}_1)^T \\h(\mathbf{x}_2)^T \\\vdots \\h(\mathbf{x}_N)^T\end{array}\right]\mathbf{w}= \left[\begin{array}{c}h(\mathbf{x}_1)^T\mathbf{w} \\h(\mathbf{x}_2)^T\mathbf{w} \\\vdots \\h(\mathbf{x}_N)^T\mathbf{w}\end{array}\right]= \left[\begin{array}{c}\mathbf{w}^T h(\mathbf{x}_1) \\\mathbf{w}^T h(\mathbf{x}_2) \\\vdots \\\mathbf{w}^T h(\mathbf{x}_N)\end{array}\right]$$ **Checkpoint**Just to make sure you are on the right track, we have provided a few examples. If your `predict_probability` function is implemented correctly, then the outputs will match:
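(On the dummy example below, `np.dot(dummy_feature_matrix, dummy_coefficients)` returns the score vector `[4., -1.]`, i.e. both row-wise dot products $\mathbf{w}^T h(\mathbf{x}_i)$ computed in one matrix-vector product.)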
###Code
dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]])
dummy_coefficients = np.array([1., 3., -1.])
correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] )
correct_predictions = np.array( [ 1./(1+np.exp(-correct_scores[0])), 1./(1+np.exp(-correct_scores[1])) ] )
print('The following outputs must match ')
print('------------------------------------------------')
print('correct_predictions =', correct_predictions)
print('output of predict_probability =', predict_probability(dummy_feature_matrix, dummy_coefficients))
###Output
The following outputs must match
------------------------------------------------
correct_predictions = [0.98201379 0.26894142]
output of predict_probability = [0.98201379 0.26894142]
###Markdown
Compute derivative of log likelihood with respect to a single coefficientRecall from lecture:$$\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)$$We will now write a function that computes the derivative of log likelihood with respect to a single coefficient $w_j$. The function accepts two arguments:* `errors` vector containing $\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})$ for all $i$.* `feature` vector containing $h_j(\mathbf{x}_i)$ for all $i$. Complete the following code block:
###Code
def feature_derivative(errors, feature):
# Compute the dot product of errors and feature
derivative = np.dot(errors, feature)
# Return the derivative
return derivative
###Output
_____no_output_____
###Markdown
In the main lecture, our focus was on the likelihood. In the advanced optional video, however, we introduced a transformation of this likelihood---called the log likelihood---that simplifies the derivation of the gradient and is more numerically stable. Due to its numerical stability, we will use the log likelihood instead of the likelihood to assess the algorithm.The log likelihood is computed using the following formula (see the advanced optional video if you are curious about the derivation of this equation):$$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) $$We provide a function to compute the log likelihood for the entire dataset.
###Code
def compute_log_likelihood(feature_matrix, sentiment, coefficients):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
logexp = np.log(1. + np.exp(-scores))
# Simple check to prevent overflow
mask = np.isinf(logexp)
logexp[mask] = -scores[mask]
lp = np.sum((indicator-1)*scores - logexp)
return lp
###Output
_____no_output_____
###Markdown
**Checkpoint**Just to make sure we are on the same page, run the following code block and check that the outputs match.
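As a hand check: with the dummy values defined below, the scores are $4$ and $-1$ and the indicators are $0$ and $1$, so the formula gives $(0-1)\cdot 4 - \ln(1+e^{-4}) + (1-1)\cdot(-1) - \ln(1+e^{1}) = -4 - 0.01815 - 1.31326 \approx -5.3314$, which matches the printed value.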
###Code
dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]])
dummy_coefficients = np.array([1., 3., -1.])
dummy_sentiment = np.array([-1, 1])
correct_indicators = np.array( [ -1==+1, 1==+1 ] )
correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] )
correct_first_term = np.array( [ (correct_indicators[0]-1)*correct_scores[0], (correct_indicators[1]-1)*correct_scores[1] ] )
correct_second_term = np.array( [ np.log(1. + np.exp(-correct_scores[0])), np.log(1. + np.exp(-correct_scores[1])) ] )
correct_ll = sum( [ correct_first_term[0]-correct_second_term[0], correct_first_term[1]-correct_second_term[1] ] )
print('The following outputs must match ')
print('------------------------------------------------')
print('correct_log_likelihood =', correct_ll)
print('output of compute_log_likelihood =', compute_log_likelihood(dummy_feature_matrix, dummy_sentiment, dummy_coefficients))
###Output
The following outputs must match
------------------------------------------------
correct_log_likelihood = -5.331411615436032
output of compute_log_likelihood = -5.331411615436032
###Markdown
Taking gradient steps Now we are ready to implement our own logistic regression. All we have to do is to write a gradient ascent function that takes gradient steps towards the optimum. Complete the following function to solve the logistic regression model using gradient ascent:
###Code
from math import sqrt
def logistic_regression(feature_matrix, sentiment, initial_coefficients, step_size, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in range(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
# YOUR CODE HERE
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in range(len(coefficients)): # loop over each coefficient
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j]. Save it in a variable called derivative
# YOUR CODE HERE
derivative = feature_derivative(errors, feature_matrix[:,j])
# add the step size times the derivative to the current coefficient
## YOUR CODE HERE
coefficients[j] = coefficients[j] + (step_size * derivative)
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood(feature_matrix, sentiment, coefficients)
print('iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp))
return coefficients
###Output
_____no_output_____
###Markdown
Now, let us run the logistic regression solver.
###Code
coefficients = logistic_regression(feature_matrix, sentiment, initial_coefficients=np.zeros(194),
step_size=1e-7, max_iter=301)
###Output
iteration 0: log likelihood of observed labels = -36780.91768478
iteration 1: log likelihood of observed labels = -36775.13434712
iteration 2: log likelihood of observed labels = -36769.35713564
iteration 3: log likelihood of observed labels = -36763.58603240
iteration 4: log likelihood of observed labels = -36757.82101962
iteration 5: log likelihood of observed labels = -36752.06207964
iteration 6: log likelihood of observed labels = -36746.30919497
iteration 7: log likelihood of observed labels = -36740.56234821
iteration 8: log likelihood of observed labels = -36734.82152213
iteration 9: log likelihood of observed labels = -36729.08669961
iteration 10: log likelihood of observed labels = -36723.35786366
iteration 11: log likelihood of observed labels = -36717.63499744
iteration 12: log likelihood of observed labels = -36711.91808422
iteration 13: log likelihood of observed labels = -36706.20710739
iteration 14: log likelihood of observed labels = -36700.50205049
iteration 15: log likelihood of observed labels = -36694.80289716
iteration 20: log likelihood of observed labels = -36666.39512033
iteration 30: log likelihood of observed labels = -36610.01327118
iteration 40: log likelihood of observed labels = -36554.19728365
iteration 50: log likelihood of observed labels = -36498.93316099
iteration 60: log likelihood of observed labels = -36444.20783914
iteration 70: log likelihood of observed labels = -36390.00909449
iteration 80: log likelihood of observed labels = -36336.32546144
iteration 90: log likelihood of observed labels = -36283.14615871
iteration 100: log likelihood of observed labels = -36230.46102347
iteration 200: log likelihood of observed labels = -35728.89418769
iteration 300: log likelihood of observed labels = -35268.51212683
###Markdown
**Quiz Question:** As each iteration of gradient ascent passes, does the log likelihood increase or decrease? Predicting sentiments Recall from lecture that class predictions for a data point $\mathbf{x}$ can be computed from the coefficients $\mathbf{w}$ using the following formula:$$\hat{y}_i = \left\{\begin{array}{ll} +1 & \mathbf{x}_i^T\mathbf{w} > 0 \\ -1 & \mathbf{x}_i^T\mathbf{w} \leq 0 \\\end{array} \right.$$Now, we will write some code to compute class predictions. We will do this in two steps:* **Step 1**: First compute the **scores** using **feature_matrix** and **coefficients** using a dot product.* **Step 2**: Using the formula above, compute the class predictions from the scores.Step 1 can be implemented as follows:
###Code
# Compute the scores as a dot product between feature_matrix and coefficients.
scores = np.dot(feature_matrix, coefficients)
###Output
_____no_output_____
###Markdown
Now, complete the following code block for **Step 2** to compute the class predictions using the **scores** obtained above:
###Code
def pred_class(scores):
scores = np.array(scores)
return np.where(scores > 0, 1, -1)
###Output
_____no_output_____
###Markdown
**Quiz Question:** How many reviews were predicted to have positive sentiment?
###Code
predictions = pred_class(scores)
np.sum(predictions == 1)
###Output
_____no_output_____
###Markdown
Measuring accuracyWe will now measure the classification accuracy of the model. Recall from the lecture that the classification accuracy can be computed as follows:$$\mbox{accuracy} = \frac{\mbox{ correctly classified data points}}{\mbox{ total data points}}$$Complete the following code block to compute the accuracy of the model.
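With the counts printed below, this works out to $(53072 - 13169)/53072 = 39903/53072 \approx 0.75$.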
###Code
sentiment == predictions
num_mistakes = np.sum(sentiment != predictions) # YOUR CODE HERE
accuracy = (len(predictions) - num_mistakes)/len(predictions) # YOUR CODE HERE
print("-----------------------------------------------------")
print('# Reviews correctly classified =', len(products) - num_mistakes)
print('# Reviews incorrectly classified =', num_mistakes)
print('# Reviews total =', len(products))
print("-----------------------------------------------------")
print('Accuracy = %.2f' % accuracy)
###Output
-----------------------------------------------------
# Reviews correctly classified = 39903
# Reviews incorrectly classified = 13169
# Reviews total = 53072
-----------------------------------------------------
Accuracy = 0.75
###Markdown
**Quiz Question**: What is the accuracy of the model on predictions made above? (round to 2 digits of accuracy) Which words contribute most to positive & negative sentiments? Recall that in Module 2 assignment, we were able to compute the "**most positive words**". These are words that correspond most strongly with positive reviews. In order to do this, we will first do the following:* Treat each coefficient as a tuple, i.e. (**word**, **coefficient_value**).* Sort all the (**word**, **coefficient_value**) tuples by **coefficient_value** in descending order.
###Code
coefficients = list(coefficients[1:]) # exclude intercept
word_coefficient_tuples = [(word, coefficient) for word, coefficient in zip(important_words, coefficients)]
word_coefficient_tuples = sorted(word_coefficient_tuples, key=lambda x:x[1], reverse=True)
###Output
_____no_output_____
###Markdown
Now, **word_coefficient_tuples** contains a sorted list of (**word**, **coefficient_value**) tuples. The first 10 elements in this list correspond to the words that are most positive. Ten "most positive" wordsNow, we compute the 10 words that have the most positive coefficient values. These words are associated with positive sentiment.
###Code
word_coefficient_tuples[:10]
###Output
_____no_output_____
###Markdown
**Quiz Question:** Which word is **not** present in the top 10 "most positive" words?- love- easy- great- perfect- cheap Ten "most negative" wordsNext, we repeat this exercise on the 10 most negative words. That is, we compute the 10 words that have the most negative coefficient values. These words are associated with negative sentiment.
###Code
word_coefficient_tuples[-10:]
###Output
_____no_output_____ |
.ipynb_checkpoints/Scanner-checkpoint.ipynb | ###Markdown
Scanner
###Code
import java.util.Scanner
val scan = Scanner(System.`in`)
val a = scan.next().trim().toInt()
val b = scan.next().trim().toInt()
val c = scan.next().trim().toInt()
println("Angka pertama: $a")
println("Angka kedua: $b")
println("Angka ketiga: $c")
###Output
_____no_output_____
###Markdown
Output Formatting
###Code
val sc = Scanner(System.`in`)
println("================================")
for (i in 0..2) {
val key: String = sc.next()
val value: Int = sc.nextInt()
System.out.printf("%-15s : %03d\n", key, value)
}
println("================================")
sc.close()
//Input: Apel 2 Semangka 3 Jambu 4
###Output
================================
stdin:Apel 2 Semangka 3 Jambu 4
Apel : 002
Semangka : 003
Jambu : 004
================================
|
hw/Day_007_HW.ipynb | ###Markdown
Homework: (Kaggle) Titanic survival prediction [Goals] - Try the three data operations on each of the three different feature types and observe the results - Think about which of the three feature types should be the most complex / hardest to handle [Key points] - Complete the remaining eight type x operation combinations (In[6]~In[13], Out[6]~Out[13]) - Think about which feature type should be the most complex
###Code
# Load the basic packages
import pandas as pd
import numpy as np
# Read the training and test data
data_path = 'data/'
df_train = pd.read_csv(data_path + 'titanic_train.csv')
df_test = pd.read_csv(data_path + 'titanic_test.csv')
df_train.shape
# Reshape the data into the format used for training / prediction
train_Y = df_train['Survived']
ids = df_test['PassengerId']
df_train = df_train.drop(['PassengerId', 'Survived'] , axis=1)
df_test = df_test.drop(['PassengerId'] , axis=1)
df = pd.concat([df_train,df_test])
df.head()
# Show the data column types and how many columns have each type
dtype_df = df.dtypes.reset_index()
dtype_df.columns = ["Count", "Column Type"]
dtype_df = dtype_df.groupby("Column Type").aggregate('count').reset_index()
dtype_df
#After confirming there are only three dtypes (int64, float64, object), store the column names in three separate lists
int_features = []
float_features = []
object_features = []
for dtype, feature in zip(df.dtypes, df.columns):
if dtype == 'float64':
float_features.append(feature)
elif dtype == 'int64':
int_features.append(feature)
else:
object_features.append(feature)
print(f'{len(int_features)} Integer Features : {int_features}\n')
print(f'{len(float_features)} Float Features : {float_features}\n')
print(f'{len(object_features)} Object Features : {object_features}')
###Output
3 Integer Features : ['Pclass', 'SibSp', 'Parch']
2 Float Features : ['Age', 'Fare']
5 Object Features : ['Name', 'Sex', 'Ticket', 'Cabin', 'Embarked']
###Markdown
Question 1 * Try running the homework code and observe which of the nine operations (applying mean / max / nunique to the int / float / object columns) run into problems, and try to explain why the code blocks that raise errors fail. Question 2 * Think of one or more data types beyond the five types covered today. Can the new types you propose be grouped under one of the three broad categories? Among the three broad feature categories, which one should therefore be the most complex to handle?
###Code
# Example: take the mean of the integer (int) features
df[int_features].mean()
# List, in order, the remaining combinations of the three feature types (int / float / object) x the three methods (mean / max / nunique)
"""
Your Code Here
"""
###Output
_____no_output_____ |
notebooks/1_1_Basic_Examples.ipynb | ###Markdown
Example 1:
###Code
import tensorflow as tf
a = 2
b = 3
c = tf.add(a, b, name='Add')
print(c)  # prints the Tensor object (not the value 5); a tf.Session is needed to evaluate it
###Output
_____no_output_____
###Markdown
Example 2:
###Code
import tensorflow as tf
x = 2
y = 3
add_op = tf.add(x, y, name='Add')
mul_op = tf.multiply(x, y, name='Multiply')
pow_op = tf.pow(add_op, mul_op, name='Power')
with tf.Session() as sess:
pow_out = sess.run(pow_op)
print(pow_out)
###Output
_____no_output_____
###Markdown
Example 3:
###Code
import tensorflow as tf
x = 2
y = 3
add_op = tf.add(x, y, name='Add')
mul_op = tf.multiply(x, y, name='Multiply')
pow_op = tf.pow(add_op, mul_op, name='Power')
useless_op = tf.multiply(x, add_op, name='Useless')
with tf.Session() as sess:
pow_out = sess.run(pow_op)
###Output
_____no_output_____
###Markdown
Example 4: _Constant_
###Code
import tensorflow as tf
a = tf.constant(2, name='A')
b = tf.constant(3, name='B')
c = tf.add(a, b, name='Add')
with tf.Session() as sess:
print(sess.run(c))
###Output
_____no_output_____
###Markdown
Example 5: _Variable_
###Code
import tensorflow as tf
# create graph
a = tf.get_variable(name="A", initializer=tf.constant([[0, 1], [2, 3]]))
b = tf.get_variable(name="B", initializer=tf.constant([[4, 5], [6, 7]]))
c = tf.add(a, b, name="Add")
# launch the graph in a session
with tf.Session() as sess:
# now we can run the desired operation
print(sess.run(c))
###Output
_____no_output_____
###Markdown
Example 6: _Placeholder_
###Code
import tensorflow as tf
a = tf.constant([5, 5, 5], tf.float32, name='A')
b = tf.placeholder(tf.float32, shape=[3], name='B')
c = tf.add(a, b, name="Add")
with tf.Session() as sess:
# create a dictionary:
d = {b: [1, 2, 3]}
# feed it to the placeholder
print(sess.run(c, feed_dict=d))
###Output
_____no_output_____
###Markdown
Example 7: _Math Equation_ 1. import the required libraries
###Code
import tensorflow as tf
import numpy as np
###Output
_____no_output_____
###Markdown
2. Create the graph
###Code
x = tf.placeholder(tf.int32, shape=[1], name='x')
y = tf.placeholder(tf.int32, shape=[1], name='y')
c = tf.constant(2, name='c')
x_2 = tf.pow(x, 2, name='x_2')
add_op = tf.add(y, c, name='Add')
mul_op = tf.multiply(x_2, y, name='Multiply')
output = tf.add(add_op, mul_op, name='Output')
###Output
_____no_output_____
###Markdown
3. Run a session
###Code
with tf.Session() as sess:
out = sess.run(output, feed_dict={x: np.array([2]), y: np.array([3])})
print('Output is: {}'.format(out))
###Output
_____no_output_____ |
bostonhouse_notebook.ipynb | ###Markdown
calculate unit price
###Code
df['unit_price'] = df['price']/df['area']
df[:10]
###Output
_____no_output_____
###Markdown
4.1
###Code
avg_unt_prc_per_year = df.groupby('built_in').mean()['unit_price']
avg_unt_prc_per_year.plot()
###Output
_____no_output_____
###Markdown
Unit prices were higher for houses built in the early 1930s.
###Code
df['unit_price'].hist()
###Output
_____no_output_____
###Markdown
Most unit prices are below $200 per square foot.
###Code
avg_unt_prc_per_type = df.groupby('house_type').mean()['unit_price']
avg_unt_prc_per_type.plot.bar()
###Output
_____no_output_____
###Markdown
4.4
###Code
num_per_type = df['house_type'].value_counts()
num_per_type.plot.pie()
###Output
_____no_output_____
###Markdown
4.5
###Code
df.plot.scatter(x='area',y='price')
###Output
_____no_output_____ |
BookpurnongResolveInversion.ipynb | ###Markdown
Inversion of Frequency Domain Electromagnetic Data at Bookpurnong https://em.geosci.xyz/content/case_histories/bookpurnong/index.htmlLindsey J. Heagy, Rowan Cockett, Seogi Kang, Gudni K. Rosenkjaer, Douglas W. Oldenburg, A framework for simulation and inversion in electromagnetics, Computers & Geosciences, Volume 107, 2017, Pages 1-19, ISSN 0098-3004, http://dx.doi.org/10.1016/j.cageo.2017.06.018.
###Code
import dask
import h5py
import matplotlib.pyplot as plt
import numpy as np
import os
from scipy.constants import mu_0
from scipy.spatial import cKDTree
import ipywidgets
import discretize
from pymatsolver import Pardiso
from SimPEG import (
data, maps, utils,
data_misfit, regularization,
optimization,
inversion, inverse_problem,
directives,
)
from SimPEG.electromagnetics import frequency_domain as fdem
from matplotlib import rcParams
rcParams["font.size"] = 14
###Output
_____no_output_____
###Markdown
Load and plot the data
###Code
data_directory = "bookpurnong-data"
# Load resolve data
resolve = h5py.File(os.path.sep.join([data_directory, "booky_resolve.hdf5"]), "r")
river_path = resolve["river_path"][()] # River path
n_sounding = resolve["data"].shape[0] # the # of soundings
# Bird height from surface
height_resolve = resolve["src_elevation"][()]
# fetch the frequencies we are considering
cpi_inds = [0, 2, 6, 8, 10] # Indices for HCP in-phase
cpq_inds = [1, 3, 7, 9, 11] # Indices for HCP quadrature
frequencies = resolve["frequency_cp"][()]
# plot observed and predicted data
def plot_data(frequency_index=0, sounding_index=40):
fig, ax = plt.subplots(1, 2, figsize=(16, 8))
for i, a in enumerate(ax):
out = utils.plot2Ddata(
resolve["xy"][:, :],
resolve["data"][:, 2*frequency_index+i],
ncontour=100,
ax=a,
)
a.plot(resolve["xy"][:, 0], resolve["xy"][:, 1], 'k.', ms="2")
a.plot(resolve["xy"][sounding_index, 0], resolve["xy"][sounding_index, 1], 'w.', ms="8")
cb = plt.colorbar(out[0], ax=a)
cb.set_label("$bz$ (ppm)")
header = str(resolve["data_header"][2*frequency_index + i])
title = f"{header[5:-3]}Hz {'real' if header[4] == 'I' else 'imag'}"
a.set_title(title)
a.plot(river_path[:, 0], river_path[:, 1], "k-", lw=0.5)
a.set_aspect(1)
a.set_xlabel("easting (m)")
a.set_ylabel("northing (m)")
plt.tight_layout()
ipywidgets.interact(
plot_data,
frequency_index=ipywidgets.IntSlider(min=0, max=len(cpi_inds), value=0),
sounding_index=ipywidgets.IntSlider(min=0, max=n_sounding, value=517)
)
# survey parameters
rx_offset = 7.86 # tx-rx separation
bp = -mu_0 / (4 * np.pi * rx_offset ** 3) # primary magnetic field
def resolve_1Dinversions(
serialized_mesh,
dobs,
src_height,
frequencies,
sigma_0,
relative_error=0.08,
floor=1e-14,
rx_offset=7.86,
beta=2.0,
alpha_s=1e-3,
alpha_x=1.0
):
from pyMKL import mkl_set_num_threads
mkl_set_num_threads(1)
# ------------------- Mesh -------------------------------- #
mesh = discretize.CylMesh.deserialize(serialized_mesh)
# ------------------- Model & Mappings --------------------- #
sigma_air = 1e-8
active = mesh.vectorCCz < 0.0
actMap = maps.InjectActiveCells(mesh, active, np.log(sigma_air), nC=mesh.nCz)
mapping = maps.ExpMap(mesh) * maps.SurjectVertical1D(mesh) * actMap
m0 = np.log(sigma_0) * np.ones(active.sum()) # starting model
# ------------------- Forward Simulation ------------------- #
# set up the receivers
receiver_list = [
fdem.receivers.PointMagneticFluxDensitySecondary(
np.array([[rx_offset, 0.0, src_height]]), orientation="z", component=component
)
for component in ["real", "imag"]
]
# source location
source_location = np.array([0.0, 0.0, src_height])
source_list = [
fdem.sources.MagDipole(receiver_list, frequency, source_location, orientation="z")
for frequency in frequencies
]
# construct a forward simulation
survey = fdem.Survey(source_list=source_list)
survey._sourceOrder = dict() # todo- this is a bug
[survey._sourceOrder.setdefault(src._uid, ii) for ii, src in enumerate(source_list)]
simulation = fdem.Simulation3DMagneticFluxDensity(
mesh, sigmaMap=mapping, solver=Pardiso
)
simulation.survey = survey
# ------------------- Inversion ------------------- #
# data misfit term
    uncertainty = abs(dobs) * relative_error + floor
observed_data = data.Data(survey=survey, dobs=dobs, standard_deviation=uncertainty)
dmisfit = data_misfit.L2DataMisfit(simulation=simulation, data=observed_data)
# regularization
regularization_mesh = discretize.TensorMesh([mesh.hz[mapping.maps[-1].indActive]])
reg = regularization.Simple(regularization_mesh, alpha_s=alpha_s, alpha_x=alpha_x)
# optimization
opt = optimization.InexactGaussNewton(maxIter=10)
# statement of the inverse problem
inv_prob = inverse_problem.BaseInvProblem(dmisfit, reg, opt, beta=beta)
# Inversion directives and parameters
target = directives.TargetMisfit()
inv = inversion.BaseInversion(inv_prob, directiveList=[target])
# run the inversion
mopt = inv.run(m0)
return mopt, inv_prob.dpred, observed_data.dobs
###Output
_____no_output_____
###Markdown
A single sounding
###Code
sounding_index = 517
cs, ncx, ncz, npad = 1., 10., 10., 20
pf = 1.3
hx = [(cs, ncx), (cs, npad,pf)]
npadz = 12
hz = np.logspace(np.log10(1.0), np.log10(12.0), npad-1)
hz_pad = hz[-1] * pf ** np.arange(npadz)
hz = np.r_[hz_pad[::-1], hz[::-1], hz, hz_pad]
mesh = discretize.CylMesh([hx, 1, hz], "00C")
active = mesh.vectorCCz < 0.0
# build starting and reference model
sigma_half = 1e-1
# set up a noise model
# 10% for the 3 lowest frequencies, 15% for the two highest
relative = np.repeat(np.r_[np.ones(3) * 0.1, np.ones(2) * 0.15], 2)
floor = abs(20 * bp * 1e-6) # floor of 20ppm
dobsi = (
np.c_[
resolve["data"][sounding_index, :][cpi_inds].astype(float),
resolve["data"][sounding_index, :][cpq_inds].astype(float),
].flatten()
* bp
* 1e-6
)
# perform the inversion
src_height = height_resolve[sounding_index].astype(float)
result = resolve_1Dinversions(
mesh.serialize(),
dobsi,
src_height,
frequencies,
sigma_0=sigma_half,
relative_error=relative,
floor=floor,
)
fig, ax = plt.subplots(1, 2, figsize=(8, 4))
ax[0].semilogx(frequencies, result[2][::2], "x", label="observed")
ax[0].semilogx(frequencies, result[1][::2], "-s", label="predicted")
ax[0].set_ylim([-6e-13, -1e-13])
ax[1].semilogx(frequencies, result[2][1::2], "x")
ax[1].semilogx(frequencies, result[1][1::2], "-s")
ax[1].set_ylim([-2e-13, 0])
ax[0].legend()
for a, t in zip(ax, ["real", "imag"]):
a.set_xlabel("frequency (Hz)")
a.set_ylabel("Bz")
plt.tight_layout()
fig, ax = plt.subplots(1, 1)
ax.semilogx(np.exp(result[0]), mesh.vectorCCz[active])
ax.set_ylim([-250, 0])
###Output
_____no_output_____
###Markdown
Invert the whole survey
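The soundings are independent, so each 1D inversion can run as its own dask task. The next cell requests workers from a PBS scheduler; if you do not have access to one, a local cluster is one possible substitute. A minimal sketch (the worker counts here are just illustrative, not tuned for this problem):
```python
# Hypothetical local alternative to the PBS cluster configured below
from dask.distributed import Client, LocalCluster

cluster = LocalCluster(n_workers=4, threads_per_worker=1)
client = Client(cluster)
```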
###Code
from dask_jobqueue import PBSCluster
cores = 12
cluster = PBSCluster(
queue='regular',
project="UCLB0022",
cores=cores,
processes=cores,
memory="109GB"
)
cluster.scale(jobs=10)
from dask.distributed import Client
client = Client(cluster)
client
# loop over the soundings and invert each
# initalize empty lists for storing inversion results
mopt = [] # recovered model
dpred = [] # predicted data
dobs = [] # observed data
for rxind in range(n_sounding):
# convert data from ppm to magnetic field (A/m^2)
dobsi = (
np.c_[
resolve["data"][rxind, :][cpi_inds].astype(float),
resolve["data"][rxind, :][cpq_inds].astype(float),
].flatten()
* bp
* 1e-6
)
# perform the inversion
src_height = height_resolve[rxind].astype(float)
result = dask.delayed(resolve_1Dinversions)(
mesh.serialize(),
dobsi,
src_height,
frequencies,
sigma_0=sigma_half,
relative_error=relative,
floor=floor,
)
mopt.append(result[0])
dpred.append(result[1])
dobs.append(result[2])
%%time
out = dask.compute(mopt, dpred, dobs);
mopt = np.vstack(out[0])
dpred = np.vstack(out[1])
dobs = np.vstack(out[2])
###Output
_____no_output_____
###Markdown
Compare predicted and observed data
###Code
fig, ax = plt.subplots(1, 2, figsize=(16, 8))
frequency_index = 0
for a, d, t in zip(ax, [dobs, dpred], ["observed", "predicted"]):
out = utils.plot2Ddata(
resolve["xy"][()],
d[:, frequency_index],
ncontour=100,
ax=a,
contourOpts={"cmap": "viridis", "vmin":dpred[:, frequency_index].min(), "vmax":dpred[:, frequency_index].max()},
)
vmin, vmax = out[0].get_clim()
cb = plt.colorbar(out[0], ax=a)
cb.set_label("Bz")
title = f"{t} Hz "
a.set_title(title)
a.plot(river_path[:, 0], river_path[:, 1], "k-", lw=0.5)
a.set_aspect(1)
a.set_xlabel("easting (m)")
a.set_ylabel("northing (m)")
plt.tight_layout()
###Output
_____no_output_____
###Markdown
View the result
###Code
sigma = np.exp(mopt)
indz = -7 # depth index
fig, ax = plt.subplots(1, 1, figsize=(10, 10))
# interpolate to grid
tree = cKDTree(list(zip(resolve["xy"][:, 0], resolve["xy"][:, 1])))
d, d_inds = tree.query(list(zip(resolve["xy"][:, 0], resolve["xy"][:, 1])), k=20)
w = 1.0 / (d + 100.0) ** 2.0
w = utils.sdiag(1.0 / np.sum(w, axis=1)) * (w)
xy = resolve["xy"]
plot_sigma = (sigma[:, indz].flatten()[d_inds] * w).sum(axis=1)
out = utils.plot2Ddata(
xy,
plot_sigma,
ncontour=100,
scale="log",
dataloc=False,
contourOpts={"cmap": "viridis", "vmin": 3e-2, "vmax": 3e0},
ax=ax,
)
ax.plot(resolve["xy"][:, 0], resolve["xy"][:, 1], "k.", alpha=0.2, ms=1)
cb = plt.colorbar(out[0], ax=ax, ticks=np.linspace(-2, 1, 6))
cb.set_label("Conductivity (S/m)")
ax.plot(river_path[:, 0], river_path[:, 1], "k-", lw=0.5)
###Output
_____no_output_____ |
vehicle_motion_and_calculus/Integrating Rate Gyro Data.ipynb | ###Markdown
Integrating Rate Gyro DataThe **yaw rate** of a vehicle can be measured by a **rate gyro**. The yaw rate gives the rate of change of the vehicle's heading in radians per second and since a vehicle's heading is usually given by the greek letter $\theta$ (theta), yaw **rate** is given by $\dot{\theta}$ (theta dot).Integrating the yaw rate gives total change in heading.
###Code
from helpers import process_data, get_derivative_from_data
from matplotlib import pyplot as plt
PARALLEL_PARK_DATA = process_data("parallel_park.pickle")
TIMESTAMPS = [row[0] for row in PARALLEL_PARK_DATA]
DISPLACEMENTS = [row[1] for row in PARALLEL_PARK_DATA]
YAW_RATES = [row[2] for row in PARALLEL_PARK_DATA]
ACCELERATIONS = [row[3] for row in PARALLEL_PARK_DATA]
plt.title("Yaw Rate vs Time")
plt.xlabel("Time (seconds)")
plt.ylabel("Yaw Rate (radians / second)")
plt.plot(TIMESTAMPS, YAW_RATES)
plt.show()
###Output
_____no_output_____
###Markdown
Here's what I make of this data**From t=0 to t=1**: The yaw rate is zero so the wheels are straight (or the car isn't moving). This is when the car is backing up straight.**From t=1 to t=2**: This is where the driver cuts the steering wheel hard to the right and keeps backing up. Since the yaw rate is non-zero, this means the vehicle is turning.**From t=2 to t=3**: This is where the driver cuts the wheel back to the left to straighten out. **After t=3**: Here the vehicle isn't turning so it's probably just adjusting its position within the spot by driving forward and/or backward slowly. Your jobIn this notebook you will write the `get_integral_from_data` function yourself and then use that function to keep track of a vehicle's heading as it drives. First, take a look at what the integrated rate gyro data should look like when you get your function working correctly
###Code
from helpers import get_integral_from_data as solution_integral
thetas = solution_integral(YAW_RATES, TIMESTAMPS)
plt.scatter(TIMESTAMPS[1:], thetas)
plt.show()
###Output
_____no_output_____
###Markdown
As you can see, the vehicle's heading is initially $\theta = 0 \text{ radians}$. From $t=1$ to $t=2$ the heading increases to a maximum of about $0.28 \text{ radians}$ (which is about 16 degrees).
###Code
def get_integral_from_data(data, times):
    # One possible implementation: a running rectangular (Riemann) sum.
    # The result has len(times) - 1 entries, matching TIMESTAMPS[1:] below.
    accumulated = 0.0
    integral = []
    for i in range(1, len(times)):
        delta_t = times[i] - times[i - 1]
        accumulated += data[i] * delta_t
        integral.append(accumulated)
    return integral
# Visual Testing - Compare the result of your
# integration code to the plot above
thetas = get_integral_from_data(YAW_RATES, TIMESTAMPS)
plt.scatter(TIMESTAMPS[1:], thetas)
plt.show()
###Output
_____no_output_____ |
Python Programs for YouTube/1_Introduction/Flow-Control/3_for-loop/for_loop.ipynb | ###Markdown
Python for Loop The for loop in Python is used to iterate over a sequence (list, tuple, string) or other iterable objects. Iterating over a sequence is called traversal. Syntax: for element in sequence : Body of for Here, element is the variable that takes the value of the item inside the sequence on each iteration.Loop continues until we reach the last item in the sequence. Flow Chart  Example
###Code
#Find product of all numbers present in a list
lst = [10, 20, 30, 40, 50]
product = 1
#iterating over the list
for ele in lst:
print(type(ele))
product *= ele
print("Product is: {}".format(product))
ele
###Output
_____no_output_____
###Markdown
range() function We can generate a sequence of numbers using the range() function. range(10) will generate numbers from 0 to 9 (10 numbers). We can also define the start, stop and step size as range(start, stop, step size); the step size defaults to 1 if not provided. This function does not store all the values in memory, as that would be inefficient. Instead, it remembers the start, stop and step size, and generates the next number on the go.
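For example, materializing a range shows the sequence it generates:
```python
list(range(1, 10, 2))   # [1, 3, 5, 7, 9]
```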
###Code
#print the numbers 5 through 8
for i in range(5, 9):
print(i)
#print numbers from 0 to 20 with a step size of 5
for i in range(0, 21, 5):
print(i)
lst = ["satish", "srinu", "murali", "naveen", "bramha"]
for ele in lst:
print(ele)
#iterate over the list using index
lst = ["satish", "srinu", "murali", "naveen", "bramha"]
for index in range(len(lst)):
print(lst[index])
###Output
satish
srinu
murali
naveen
bramha
###Markdown
for loop with else A for loop can have an optional else block as well. The else part is executed when the items in the sequence used in the for loop are exhausted. A break statement can be used to stop a for loop; in that case, the else part is skipped. Hence, a for loop's else part runs only if no break occurs.
###Code
numbers = [1, 2, 3]
#iterating over the list
for item in numbers:
print(item)
else:
print("no item left in the list")
for item in numbers:
print(item)
if item % 2 == 0:
break
else:
print("no item left in the list")
###Output
1
2
###Markdown
Python Program to display all prime numbers within an interval
###Code
index1 = 20
index2 = 50
print("Prime numbers between {0} and {1} are :".format(index1, index2))
for num in range(index1, index2+1): #default step size is 1
if num > 1:
isDivisible = False;
for index in range(2, num):
            if num % index == 0:
                isDivisible = True;
                break  # a divisor was found, so num is not prime
if not isDivisible:
print(num);
###Output
Prime numbers between 20 and 50 are :
23
29
31
37
41
43
47
|
4 Convolutional Neural Networks/5 Autonomous_driving_application_Car_detection_v3a.ipynb | ###Markdown
Autonomous driving - Car detectionWelcome to your week 3 programming assignment. You will learn about object detection using the very powerful YOLO model. Many of the ideas in this notebook are described in the two YOLO papers: [Redmon et al., 2016](https://arxiv.org/abs/1506.02640) and [Redmon and Farhadi, 2016](https://arxiv.org/abs/1612.08242). **You will learn to**:- Use object detection on a car detection dataset- Deal with bounding boxes Updates If you were working on the notebook before this update...* The current notebook is version "3a".* You can find your original work saved in the notebook with the previous version name ("v3") * To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory. List of updates* Clarified "YOLO" instructions preceding the code. * Added details about anchor boxes.* Added explanation of how score is calculated.* `yolo_filter_boxes`: added additional hints. Clarify syntax for argmax and max.* `iou`: clarify instructions for finding the intersection.* `iou`: give variable names for all 8 box vertices, for clarity. Adds `width` and `height` variables for clarity.* `iou`: add test cases to check handling of non-intersecting boxes, intersection at vertices, or intersection at edges.* `yolo_non_max_suppression`: clarify syntax for tf.image.non_max_suppression and keras.gather.* "convert output of the model to usable bounding box tensors": Provides a link to the definition of `yolo_head`.* `predict`: hint on calling sess.run.* Spelling, grammar, wording and formatting updates to improve clarity. Import librariesRun the following cell to load the packages and dependencies that you will find useful as you build the object detector!
###Code
import argparse
import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import scipy.io
import scipy.misc
import numpy as np
import pandas as pd
import PIL
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Lambda, Conv2D
from keras.models import load_model, Model
from yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes
from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body
%matplotlib inline
###Output
Using TensorFlow backend.
###Markdown
**Important Note**: As you can see, we import Keras's backend as K. This means that to use a Keras function in this notebook, you will need to write: `K.function(...)`. 1 - Problem StatementYou are working on a self-driving car. As a critical component of this project, you'd like to first build a car detection system. To collect data, you've mounted a camera to the hood (meaning the front) of the car, which takes pictures of the road ahead every few seconds while you drive around. Pictures taken from a car-mounted camera while driving around Silicon Valley. We thank [drive.ai](htps://www.drive.ai/) for providing this dataset.You've gathered all these images into a folder and have labelled them by drawing bounding boxes around every car you found. Here's an example of what your bounding boxes look like. **Figure 1** : **Definition of a box** If you have 80 classes that you want the object detector to recognize, you can represent the class label $c$ either as an integer from 1 to 80, or as an 80-dimensional vector (with 80 numbers) one component of which is 1 and the rest of which are 0. The video lectures had used the latter representation; in this notebook, we will use both representations, depending on which is more convenient for a particular step. In this exercise, you will learn how "You Only Look Once" (YOLO) performs object detection, and then apply it to car detection. Because the YOLO model is very computationally expensive to train, we will load pre-trained weights for you to use. 2 - YOLO "You Only Look Once" (YOLO) is a popular algorithm because it achieves high accuracy while also being able to run in real-time. This algorithm "only looks once" at the image in the sense that it requires only one forward propagation pass through the network to make predictions. After non-max suppression, it then outputs recognized objects together with the bounding boxes. 2.1 - Model details Inputs and outputs- The **input** is a batch of images, and each image has the shape (m, 608, 608, 3)- The **output** is a list of bounding boxes along with the recognized classes. Each bounding box is represented by 6 numbers $(p_c, b_x, b_y, b_h, b_w, c)$ as explained above. If you expand $c$ into an 80-dimensional vector, each bounding box is then represented by 85 numbers. Anchor Boxes* Anchor boxes are chosen by exploring the training data to choose reasonable height/width ratios that represent the different classes. For this assignment, 5 anchor boxes were chosen for you (to cover the 80 classes), and stored in the file './model_data/yolo_anchors.txt'* The dimension for anchor boxes is the second to last dimension in the encoding: $(m, n_H,n_W,anchors,classes)$.* The YOLO architecture is: IMAGE (m, 608, 608, 3) -> DEEP CNN -> ENCODING (m, 19, 19, 5, 85). EncodingLet's look in greater detail at what this encoding represents. **Figure 2** : **Encoding architecture for YOLO** If the center/midpoint of an object falls into a grid cell, that grid cell is responsible for detecting that object. Since we are using 5 anchor boxes, each of the 19 x19 cells thus encodes information about 5 boxes. Anchor boxes are defined only by their width and height.For simplicity, we will flatten the last two last dimensions of the shape (19, 19, 5, 85) encoding. So the output of the Deep CNN is (19, 19, 425). **Figure 3** : **Flattening the last two last dimensions** Class scoreNow, for each box (of each cell) we will compute the following element-wise product and extract a probability that the box contains a certain class. 
The class score is $score_{c,i} = p_{c} \times c_{i}$: the probability that there is an object $p_{c}$ times the probability that the object is a certain class $c_{i}$. **Figure 4** : **Find the class detected by each box** Example of figure 4* In figure 4, let's say for box 1 (cell 1), the probability that an object exists is $p_{1}=0.60$. So there's a 60% chance that an object exists in box 1 (cell 1). * The probability that the object is the class "category 3 (a car)" is $c_{3}=0.73$. * The score for box 1 and for category "3" is $score_{1,3}=0.60 \times 0.73 = 0.44$. * Let's say we calculate the score for all 80 classes in box 1, and find that the score for the car class (class 3) is the maximum. So we'll assign the score 0.44 and class "3" to this box "1". Visualizing classesHere's one way to visualize what YOLO is predicting on an image:- For each of the 19x19 grid cells, find the maximum of the probability scores (taking a max across the 80 classes, one maximum for each of the 5 anchor boxes).- Color that grid cell according to what object that grid cell considers the most likely.Doing this results in this picture: **Figure 5** : Each one of the 19x19 grid cells is colored according to which class has the largest predicted probability in that cell. Note that this visualization isn't a core part of the YOLO algorithm itself for making predictions; it's just a nice way of visualizing an intermediate result of the algorithm. Visualizing bounding boxesAnother way to visualize YOLO's output is to plot the bounding boxes that it outputs. Doing that results in a visualization like this: **Figure 6** : Each cell gives you 5 boxes. In total, the model predicts: 19x19x5 = 1805 boxes just by looking once at the image (one forward pass through the network)! Different colors denote different classes. Non-Max suppressionIn the figure above, we plotted only boxes for which the model had assigned a high probability, but this is still too many boxes. You'd like to reduce the algorithm's output to a much smaller number of detected objects. To do so, you'll use **non-max suppression**. Specifically, you'll carry out these steps: - Get rid of boxes with a low score (meaning, the box is not very confident about detecting a class; either due to the low probability of any object, or low probability of this particular class).- Select only one box when several boxes overlap with each other and detect the same object. 2.2 - Filtering with a threshold on class scoresYou are going to first apply a filter by thresholding. You would like to get rid of any box for which the class "score" is less than a chosen threshold. The model gives you a total of 19x19x5x85 numbers, with each box described by 85 numbers. It is convenient to rearrange the (19,19,5,85) (or (19,19,425)) dimensional tensor into the following variables: - `box_confidence`: tensor of shape $(19 \times 19, 5, 1)$ containing $p_c$ (confidence probability that there's some object) for each of the 5 boxes predicted in each of the 19x19 cells.- `boxes`: tensor of shape $(19 \times 19, 5, 4)$ containing the midpoint and dimensions $(b_x, b_y, b_h, b_w)$ for each of the 5 boxes in each cell.- `box_class_probs`: tensor of shape $(19 \times 19, 5, 80)$ containing the "class probabilities" $(c_1, c_2, ... c_{80})$ for each of the 80 classes for each of the 5 boxes per cell. **Exercise**: Implement `yolo_filter_boxes()`.1. Compute box scores by doing the elementwise product as described in Figure 4 ($p \times c$). 
The following code may help you choose the right operator: ```pythona = np.random.randn(19*19, 5, 1)b = np.random.randn(19*19, 5, 80)c = a * b shape of c will be (19*19, 5, 80)```This is an example of **broadcasting** (multiplying vectors of different sizes).2. For each box, find: - the index of the class with the maximum box score - the corresponding box score **Useful references** * [Keras argmax](https://keras.io/backend/argmax) * [Keras max](https://keras.io/backend/max) **Additional Hints** * For the `axis` parameter of `argmax` and `max`, if you want to select the **last** axis, one way to do so is to set `axis=-1`. This is similar to Python array indexing, where you can select the last position of an array using `arrayname[-1]`. * Applying `max` normally collapses the axis for which the maximum is applied. `keepdims=False` is the default option, and allows that dimension to be removed. We don't need to keep the last dimension after applying the maximum here. * Even though the documentation shows `keras.backend.argmax`, use `keras.argmax`. Similarly, use `keras.max`.3. Create a mask by using a threshold. As a reminder: `([0.9, 0.3, 0.4, 0.5, 0.1] < 0.4)` returns: `[False, True, False, False, True]`. The mask should be True for the boxes you want to keep. 4. Use TensorFlow to apply the mask to `box_class_scores`, `boxes` and `box_classes` to filter out the boxes we don't want. You should be left with just the subset of boxes you want to keep. **Useful reference**: * [boolean mask](https://www.tensorflow.org/api_docs/python/tf/boolean_mask) **Additional Hints**: * For the `tf.boolean_mask`, we can keep the default `axis=None`.**Reminder**: to call a Keras function, you should use `K.function(...)`.
###Code
# GRADED FUNCTION: yolo_filter_boxes
def yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = .6):
"""Filters YOLO boxes by thresholding on object and class confidence.
Arguments:
box_confidence -- tensor of shape (19, 19, 5, 1)
boxes -- tensor of shape (19, 19, 5, 4)
box_class_probs -- tensor of shape (19, 19, 5, 80)
threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
Returns:
scores -- tensor of shape (None,), containing the class probability score for selected boxes
boxes -- tensor of shape (None, 4), containing (b_x, b_y, b_h, b_w) coordinates of selected boxes
classes -- tensor of shape (None,), containing the index of the class detected by the selected boxes
Note: "None" is here because you don't know the exact number of selected boxes, as it depends on the threshold.
For example, the actual output size of scores would be (10,) if there are 10 boxes.
"""
# Step 1: Compute box scores
### START CODE HERE ### (≈ 1 line)
box_scores = box_confidence * box_class_probs
### END CODE HERE ###
# Step 2: Find the box_classes using the max box_scores, keep track of the corresponding score
### START CODE HERE ### (≈ 2 lines)
box_classes = K.argmax(box_scores, axis=-1)
box_class_scores = K.max(box_scores, axis=-1)
### END CODE HERE ###
# Step 3: Create a filtering mask based on "box_class_scores" by using "threshold". The mask should have the
# same dimension as box_class_scores, and be True for the boxes you want to keep (with probability >= threshold)
### START CODE HERE ### (≈ 1 line)
filtering_mask = ((box_class_scores) >= threshold)
### END CODE HERE ###
# Step 4: Apply the mask to box_class_scores, boxes and box_classes
### START CODE HERE ### (≈ 3 lines)
scores = tf.boolean_mask(box_class_scores, filtering_mask, name='boolean_mask')
boxes = tf.boolean_mask(boxes, filtering_mask, name='boolean_mask')
classes = tf.boolean_mask(box_classes, filtering_mask, name='boolean_mask')
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_a:
box_confidence = tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([19, 19, 5, 4], mean=1, stddev=4, seed = 1)
box_class_probs = tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = 0.5)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.shape))
print("boxes.shape = " + str(boxes.shape))
print("classes.shape = " + str(classes.shape))
###Output
scores[2] = 10.7506
boxes[2] = [ 8.42653275 3.27136683 -0.5313437 -4.94137383]
classes[2] = 7
scores.shape = (?,)
boxes.shape = (?, 4)
classes.shape = (?,)
###Markdown
**Expected Output**: **scores[2]** 10.7506 **boxes[2]** [ 8.42653275 3.27136683 -0.5313437 -4.94137383] **classes[2]** 7 **scores.shape** (?,) **boxes.shape** (?, 4) **classes.shape** (?,) **Note** In the test for `yolo_filter_boxes`, we're using random numbers to test the function. In real data, the `box_class_probs` would contain non-zero values between 0 and 1 for the probabilities. The box coordinates in `boxes` would also be chosen so that lengths and heights are non-negative. 2.3 - Non-max suppression Even after filtering by thresholding over the class scores, you still end up with a lot of overlapping boxes. A second filter for selecting the right boxes is called non-maximum suppression (NMS). **Figure 7** : In this example, the model has predicted 3 cars, but it's actually 3 predictions of the same car. Running non-max suppression (NMS) will select only the most accurate (highest probability) of the 3 boxes. Non-max suppression uses the very important function called **"Intersection over Union"**, or IoU. **Figure 8** : Definition of "Intersection over Union". **Exercise**: Implement iou(). Some hints:- In this code, we use the convention that (0,0) is the top-left corner of an image, (1,0) is the upper-right corner, and (1,1) is the lower-right corner. In other words, the (0,0) origin starts at the top left corner of the image. As x increases, we move to the right. As y increases, we move down.- For this exercise, we define a box using its two corners: upper left $(x_1, y_1)$ and lower right $(x_2,y_2)$, instead of using the midpoint, height and width. (This makes it a bit easier to calculate the intersection).- To calculate the area of a rectangle, multiply its height $(y_2 - y_1)$ by its width $(x_2 - x_1)$. (Since $(x_1,y_1)$ is the top left and $x_2,y_2$ are the bottom right, these differences should be non-negative.- To find the **intersection** of the two boxes $(xi_{1}, yi_{1}, xi_{2}, yi_{2})$: - Feel free to draw some examples on paper to clarify this conceptually. - The top left corner of the intersection $(xi_{1}, yi_{1})$ is found by comparing the top left corners $(x_1, y_1)$ of the two boxes and finding a vertex that has an x-coordinate that is closer to the right, and y-coordinate that is closer to the bottom. - The bottom right corner of the intersection $(xi_{2}, yi_{2})$ is found by comparing the bottom right corners $(x_2,y_2)$ of the two boxes and finding a vertex whose x-coordinate is closer to the left, and the y-coordinate that is closer to the top. - The two boxes **may have no intersection**. You can detect this if the intersection coordinates you calculate end up being the top right and/or bottom left corners of an intersection box. Another way to think of this is if you calculate the height $(y_2 - y_1)$ or width $(x_2 - x_1)$ and find that at least one of these lengths is negative, then there is no intersection (intersection area is zero). - The two boxes may intersect at the **edges or vertices**, in which case the intersection area is still zero. This happens when either the height or width (or both) of the calculated intersection is zero.**Additional Hints**- `xi1` = **max**imum of the x1 coordinates of the two boxes- `yi1` = **max**imum of the y1 coordinates of the two boxes- `xi2` = **min**imum of the x2 coordinates of the two boxes- `yi2` = **min**imum of the y2 coordinates of the two boxes- `inter_area` = You can use `max(height, 0)` and `max(width, 0)`
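As a worked example (the first test case below): for box1 = (2, 1, 4, 3) and box2 = (1, 2, 3, 4), the intersection corners are $(xi_1, yi_1) = (\max(2,1), \max(1,2)) = (2, 2)$ and $(xi_2, yi_2) = (\min(4,3), \min(3,4)) = (3, 3)$, so the intersection area is $1 \times 1 = 1$. Each box has area $2 \times 2 = 4$, the union area is $4 + 4 - 1 = 7$, and the IoU is $1/7 \approx 0.143$.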
###Code
# GRADED FUNCTION: iou
def iou(box1, box2):
"""Implement the intersection over union (IoU) between box1 and box2
Arguments:
box1 -- first box, list object with coordinates (box1_x1, box1_y1, box1_x2, box_1_y2)
box2 -- second box, list object with coordinates (box2_x1, box2_y1, box2_x2, box2_y2)
"""
# Assign variable names to coordinates for clarity
(box1_x1, box1_y1, box1_x2, box1_y2) = box1
(box2_x1, box2_y1, box2_x2, box2_y2) = box2
# Calculate the (yi1, xi1, yi2, xi2) coordinates of the intersection of box1 and box2. Calculate its Area.
### START CODE HERE ### (≈ 7 lines)
xi1 = np.maximum(box1_x1, box2_x1)
yi1 = np.maximum(box1_y1, box2_y1)
xi2 = np.minimum(box1_x2, box2_x2)
yi2 = np.minimum(box1_y2, box2_y2)
inter_width = np.maximum(xi2 - xi1, 0)
inter_height = np.maximum(yi2 - yi1, 0)
inter_area = inter_width * inter_height
### END CODE HERE ###
# Calculate the Union area by using Formula: Union(A,B) = A + B - Inter(A,B)
### START CODE HERE ### (≈ 3 lines)
box1_area = (box1_x2 - box1_x1) * (box1_y2 - box1_y1)
box2_area = (box2_x2 - box2_x1) * (box2_y2 - box2_y1)
union_area = box1_area + box2_area - inter_area
### END CODE HERE ###
# compute the IoU
### START CODE HERE ### (≈ 1 line)
iou = inter_area / union_area
### END CODE HERE ###
return iou
## Test case 1: boxes intersect
box1 = (2, 1, 4, 3)
box2 = (1, 2, 3, 4)
print("iou for intersecting boxes = " + str(iou(box1, box2)))
## Test case 2: boxes do not intersect
box1 = (1,2,3,4)
box2 = (5,6,7,8)
print("iou for non-intersecting boxes = " + str(iou(box1,box2)))
## Test case 3: boxes intersect at vertices only
box1 = (1,1,2,2)
box2 = (2,2,3,3)
print("iou for boxes that only touch at vertices = " + str(iou(box1,box2)))
## Test case 4: boxes intersect at edge only
box1 = (1,1,3,3)
box2 = (2,3,3,4)
print("iou for boxes that only touch at edges = " + str(iou(box1,box2)))
###Output
iou for intersecting boxes = 0.142857142857
iou for non-intersecting boxes = 0.0
iou for boxes that only touch at vertices = 0.0
iou for boxes that only touch at edges = 0.0
###Markdown
**Expected Output**:

```
iou for intersecting boxes = 0.14285714285714285
iou for non-intersecting boxes = 0.0
iou for boxes that only touch at vertices = 0.0
iou for boxes that only touch at edges = 0.0
```

YOLO non-max suppression

You are now ready to implement non-max suppression. The key steps are:

1. Select the box that has the highest score.
2. Compute the overlap of this box with all other boxes, and remove boxes that overlap significantly (iou >= `iou_threshold`).
3. Go back to step 1 and iterate until there are no more boxes with a lower score than the currently selected box.

This will remove all boxes that have a large overlap with the selected boxes. Only the "best" boxes remain. (An illustrative NumPy sketch of this loop appears just before the code cell below.)

**Exercise**: Implement yolo_non_max_suppression() using TensorFlow. TensorFlow has two built-in functions that are used to implement non-max suppression (so you don't actually need to use your `iou()` implementation):

**Reference documentation**

- [tf.image.non_max_suppression()](https://www.tensorflow.org/api_docs/python/tf/image/non_max_suppression)

```
tf.image.non_max_suppression(
    boxes,
    scores,
    max_output_size,
    iou_threshold=0.5,
    name=None
)
```

Note that in the version of TensorFlow used here, there is no parameter `score_threshold` (it's shown in the documentation for the latest version), so trying to set this value will result in an error message: *got an unexpected keyword argument 'score_threshold'*.

- [K.gather()](https://www.tensorflow.org/api_docs/python/tf/keras/backend/gather) Even though the documentation shows `tf.keras.backend.gather()`, you can use the shorter alias `K.gather()`:

```
K.gather(
    reference,
    indices
)
```
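The three steps above can also be written directly in a few lines of plain NumPy, reusing the `iou()` function implemented earlier. This is only an illustrative sketch (the helper name `nms_numpy` and the sample boxes are invented for the example); the graded function below relies on TensorFlow's built-in `tf.image.non_max_suppression` instead.

```python
import numpy as np

def nms_numpy(boxes, scores, max_boxes=10, iou_threshold=0.5):
    """Plain NumPy sketch of NMS. boxes: (N, 4) as (x1, y1, x2, y2); scores: (N,)."""
    order = list(np.argsort(scores)[::-1])   # step 1: consider the highest-scoring box first
    keep = []
    while order and len(keep) < max_boxes:
        best = order.pop(0)
        keep.append(best)
        # step 2: discard remaining boxes that overlap the selected box too much
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
        # step 3: loop back and repeat with whatever is left
    return keep

# Example:
# boxes = np.array([[0, 0, 2, 2], [0.1, 0.1, 2, 2], [3, 3, 4, 4]], dtype=float)
# scores = np.array([0.9, 0.8, 0.7])
# nms_numpy(boxes, scores)   # -> [0, 2]: the near-duplicate of box 0 is suppressed
```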
###Code
# GRADED FUNCTION: yolo_non_max_suppression
def yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5):
"""
Applies Non-max suppression (NMS) to set of boxes
Arguments:
scores -- tensor of shape (None,), output of yolo_filter_boxes()
boxes -- tensor of shape (None, 4), output of yolo_filter_boxes() that have been scaled to the image size (see later)
classes -- tensor of shape (None,), output of yolo_filter_boxes()
max_boxes -- integer, maximum number of predicted boxes you'd like
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (, None), predicted score for each box
boxes -- tensor of shape (4, None), predicted box coordinates
classes -- tensor of shape (, None), predicted class for each box
Note: The "None" dimension of the output tensors has obviously to be less than max_boxes. Note also that this
function will transpose the shapes of scores, boxes, classes. This is made for convenience.
"""
max_boxes_tensor = K.variable(max_boxes, dtype='int32') # tensor to be used in tf.image.non_max_suppression()
K.get_session().run(tf.variables_initializer([max_boxes_tensor])) # initialize variable max_boxes_tensor
# Use tf.image.non_max_suppression() to get the list of indices corresponding to boxes you keep
### START CODE HERE ### (≈ 1 line)
nms_indices = tf.image.non_max_suppression(
boxes,
scores,
max_boxes_tensor, # the int32 tensor initialized above
iou_threshold=iou_threshold, # use the function argument instead of a hard-coded 0.5
name="nms_indices"
)
### END CODE HERE ###
# Use K.gather() to select only nms_indices from scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = K.gather(scores, nms_indices)
boxes = K.gather(boxes, nms_indices)
classes = K.gather(classes, nms_indices)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
scores = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([54, 4], mean=1, stddev=4, seed = 1)
classes = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
###Output
scores[2] = 6.9384
boxes[2] = [-5.299932 3.13798141 4.45036697 0.95942086]
classes[2] = -2.24527
scores.shape = (10,)
boxes.shape = (10, 4)
classes.shape = (10,)
###Markdown
**Expected Output**:

- **scores[2]**: 6.9384
- **boxes[2]**: [-5.299932    3.13798141  4.45036697  0.95942086]
- **classes[2]**: -2.24527
- **scores.shape**: (10,)
- **boxes.shape**: (10, 4)
- **classes.shape**: (10,)

2.4 Wrapping up the filtering

It's time to implement a function that takes the output of the deep CNN (the 19x19x5x85 dimensional encoding) and filters through all the boxes using the functions you've just implemented.

**Exercise**: Implement `yolo_eval()`, which takes the output of the YOLO encoding and filters the boxes using score threshold and NMS. There's just one last implementation detail you have to know. There are a few ways of representing boxes, such as via their corners or via their midpoint and height/width. YOLO converts between a few such formats at different times, using the following functions (which we have provided):

```python
boxes = yolo_boxes_to_corners(box_xy, box_wh)
```

which converts the yolo box coordinates (x, y, w, h) to box corners' coordinates (x1, y1, x2, y2) to fit the input of `yolo_filter_boxes`.

```python
boxes = scale_boxes(boxes, image_shape)
```

YOLO's network was trained to run on 608x608 images. If you are testing this data on a different size image--for example, the car detection dataset had 720x1280 images--this step rescales the boxes so that they can be plotted on top of the original 720x1280 image.

Don't worry about these two functions; we'll show you where they need to be called. (A rough sketch of the midpoint-to-corner conversion follows, purely for intuition.)
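Purely for intuition, here is a minimal NumPy sketch of the generic midpoint-to-corner conversion. The helper name `boxes_to_corners_sketch` is invented for this example; it is not the provided `yolo_boxes_to_corners`, which also takes care of YOLO's internal tensor layout and coordinate ordering.

```python
import numpy as np

def boxes_to_corners_sketch(box_xy, box_wh):
    """Convert boxes from (center x, center y) / (width, height) to corner coordinates.

    box_xy, box_wh: arrays of shape (..., 2). Returns (..., 4) as (x1, y1, x2, y2).
    """
    box_min = box_xy - box_wh / 2.0   # top-left corner
    box_max = box_xy + box_wh / 2.0   # bottom-right corner
    return np.concatenate([box_min, box_max], axis=-1)

# A box centered at (0.5, 0.5) with width 0.2 and height 0.4:
# boxes_to_corners_sketch(np.array([[0.5, 0.5]]), np.array([[0.2, 0.4]]))
# -> [[0.4, 0.3, 0.6, 0.7]]
```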
###Code
# GRADED FUNCTION: yolo_eval
def yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5):
"""
Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes.
Arguments:
yolo_outputs -- output of the encoding model (for image_shape of (608, 608, 3)), contains 4 tensors:
box_confidence: tensor of shape (None, 19, 19, 5, 1)
box_xy: tensor of shape (None, 19, 19, 5, 2)
box_wh: tensor of shape (None, 19, 19, 5, 2)
box_class_probs: tensor of shape (None, 19, 19, 5, 80)
image_shape -- tensor of shape (2,) containing the input shape, in this notebook we use (608., 608.) (has to be float32 dtype)
max_boxes -- integer, maximum number of predicted boxes you'd like
score_threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (None, ), predicted score for each box
boxes -- tensor of shape (None, 4), predicted box coordinates
classes -- tensor of shape (None,), predicted class for each box
"""
### START CODE HERE ###
# Retrieve outputs of the YOLO model (≈1 line)
box_confidence, box_xy, box_wh, box_class_probs = yolo_outputs
# Convert boxes to be ready for filtering functions (convert boxes box_xy and box_wh to corner coordinates)
boxes = yolo_boxes_to_corners(box_xy, box_wh)
# Use one of the functions you've implemented to perform Score-filtering with a threshold of score_threshold (≈1 line)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = score_threshold)
# Scale boxes back to original image shape.
boxes = scale_boxes(boxes, image_shape)
# Use one of the functions you've implemented to perform Non-max suppression with
# maximum number of boxes set to max_boxes and a threshold of iou_threshold (≈1 line)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes, max_boxes, iou_threshold)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
yolo_outputs = (tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1))
scores, boxes, classes = yolo_eval(yolo_outputs)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
###Output
scores[2] = 138.791
boxes[2] = [ 1292.32971191 -278.52166748 3876.98925781 -835.56494141]
classes[2] = 54
scores.shape = (10,)
boxes.shape = (10, 4)
classes.shape = (10,)
###Markdown
**Expected Output**:

- **scores[2]**: 138.791
- **boxes[2]**: [ 1292.32971191  -278.52166748  3876.98925781  -835.56494141]
- **classes[2]**: 54
- **scores.shape**: (10,)
- **boxes.shape**: (10, 4)
- **classes.shape**: (10,)

Summary for YOLO:

- Input image (608, 608, 3).
- The input image goes through a CNN, resulting in a (19,19,5,85) dimensional output.
- After flattening the last two dimensions, the output is a volume of shape (19, 19, 425):
    - Each cell in a 19x19 grid over the input image gives 425 numbers.
    - 425 = 5 x 85 because each cell contains predictions for 5 boxes, corresponding to 5 anchor boxes, as seen in lecture.
    - 85 = 5 + 80, where 5 is because $(p_c, b_x, b_y, b_h, b_w)$ has 5 numbers, and 80 is the number of classes we'd like to detect.
- You then select only a few boxes based on:
    - Score-thresholding: throw away boxes that have detected a class with a score less than the threshold.
    - Non-max suppression: compute the Intersection over Union and avoid selecting overlapping boxes.
- This gives you YOLO's final output. (The shape bookkeeping is illustrated in the small sketch below.)

3 - Test YOLO pre-trained model on images

In this part, you are going to use a pre-trained model and test it on the car detection dataset. We'll need a session to execute the computation graph and evaluate the tensors.
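As a quick sanity check of the shape bookkeeping above, here is a standalone NumPy sketch (not part of the assignment code; the array is just zeros used to track shapes):

```python
import numpy as np

# Hypothetical raw CNN output for a single image: one 425-vector per grid cell
raw_output = np.zeros((19, 19, 425), dtype=np.float32)

# Un-flatten the last dimension into 5 anchor boxes x 85 numbers each
per_anchor = raw_output.reshape(19, 19, 5, 85)

# For each anchor: (p_c, b_x, b_y, b_h, b_w) followed by 80 class scores
box_params = per_anchor[..., :5]    # (19, 19, 5, 5)
class_scores = per_anchor[..., 5:]  # (19, 19, 5, 80)
print(box_params.shape, class_scores.shape)
```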
###Code
sess = K.get_session()
###Output
_____no_output_____
###Markdown
3.1 - Defining classes, anchors and image shape

* Recall that we are trying to detect 80 classes, and are using 5 anchor boxes.
* We have gathered the information on the 80 classes and 5 boxes in two files, "coco_classes.txt" and "yolo_anchors.txt".
* We'll read class names and anchors from these text files.
* The car detection dataset has 720x1280 images, which we've pre-processed into 608x608 images.
###Code
class_names = read_classes("model_data/coco_classes.txt")
anchors = read_anchors("model_data/yolo_anchors.txt")
image_shape = (720., 1280.)
###Output
_____no_output_____
###Markdown
3.2 - Loading a pre-trained model

* Training a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes.
* You are going to load an existing pre-trained Keras YOLO model stored in "yolo.h5".
* These weights come from the official YOLO website, and were converted using a function written by Allan Zelener. References are at the end of this notebook. Technically, these are the parameters of the "YOLOv2" model, but we will simply refer to it as "YOLO" in this notebook.

Run the cell below to load the model from this file.
###Code
yolo_model = load_model("model_data/yolo.h5")
###Output
/opt/conda/lib/python3.6/site-packages/keras/models.py:251: UserWarning: No training configuration found in save file: the model was *not* compiled. Compile it manually.
warnings.warn('No training configuration found in save file: '
###Markdown
This loads the weights of a trained YOLO model. Here's a summary of the layers your model contains.
###Code
yolo_model.summary()
###Output
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 608, 608, 3) 0
____________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 608, 608, 32) 864 input_1[0][0]
____________________________________________________________________________________________________
batch_normalization_1 (BatchNorm (None, 608, 608, 32) 128 conv2d_1[0][0]
____________________________________________________________________________________________________
leaky_re_lu_1 (LeakyReLU) (None, 608, 608, 32) 0 batch_normalization_1[0][0]
____________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 304, 304, 32) 0 leaky_re_lu_1[0][0]
____________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 304, 304, 64) 18432 max_pooling2d_1[0][0]
____________________________________________________________________________________________________
batch_normalization_2 (BatchNorm (None, 304, 304, 64) 256 conv2d_2[0][0]
____________________________________________________________________________________________________
leaky_re_lu_2 (LeakyReLU) (None, 304, 304, 64) 0 batch_normalization_2[0][0]
____________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D) (None, 152, 152, 64) 0 leaky_re_lu_2[0][0]
____________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 152, 152, 128) 73728 max_pooling2d_2[0][0]
____________________________________________________________________________________________________
batch_normalization_3 (BatchNorm (None, 152, 152, 128) 512 conv2d_3[0][0]
____________________________________________________________________________________________________
leaky_re_lu_3 (LeakyReLU) (None, 152, 152, 128) 0 batch_normalization_3[0][0]
____________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, 152, 152, 64) 8192 leaky_re_lu_3[0][0]
____________________________________________________________________________________________________
batch_normalization_4 (BatchNorm (None, 152, 152, 64) 256 conv2d_4[0][0]
____________________________________________________________________________________________________
leaky_re_lu_4 (LeakyReLU) (None, 152, 152, 64) 0 batch_normalization_4[0][0]
____________________________________________________________________________________________________
conv2d_5 (Conv2D) (None, 152, 152, 128) 73728 leaky_re_lu_4[0][0]
____________________________________________________________________________________________________
batch_normalization_5 (BatchNorm (None, 152, 152, 128) 512 conv2d_5[0][0]
____________________________________________________________________________________________________
leaky_re_lu_5 (LeakyReLU) (None, 152, 152, 128) 0 batch_normalization_5[0][0]
____________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D) (None, 76, 76, 128) 0 leaky_re_lu_5[0][0]
____________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, 76, 76, 256) 294912 max_pooling2d_3[0][0]
____________________________________________________________________________________________________
batch_normalization_6 (BatchNorm (None, 76, 76, 256) 1024 conv2d_6[0][0]
____________________________________________________________________________________________________
leaky_re_lu_6 (LeakyReLU) (None, 76, 76, 256) 0 batch_normalization_6[0][0]
____________________________________________________________________________________________________
conv2d_7 (Conv2D) (None, 76, 76, 128) 32768 leaky_re_lu_6[0][0]
____________________________________________________________________________________________________
batch_normalization_7 (BatchNorm (None, 76, 76, 128) 512 conv2d_7[0][0]
____________________________________________________________________________________________________
leaky_re_lu_7 (LeakyReLU) (None, 76, 76, 128) 0 batch_normalization_7[0][0]
____________________________________________________________________________________________________
conv2d_8 (Conv2D) (None, 76, 76, 256) 294912 leaky_re_lu_7[0][0]
____________________________________________________________________________________________________
batch_normalization_8 (BatchNorm (None, 76, 76, 256) 1024 conv2d_8[0][0]
____________________________________________________________________________________________________
leaky_re_lu_8 (LeakyReLU) (None, 76, 76, 256) 0 batch_normalization_8[0][0]
____________________________________________________________________________________________________
max_pooling2d_4 (MaxPooling2D) (None, 38, 38, 256) 0 leaky_re_lu_8[0][0]
____________________________________________________________________________________________________
conv2d_9 (Conv2D) (None, 38, 38, 512) 1179648 max_pooling2d_4[0][0]
____________________________________________________________________________________________________
batch_normalization_9 (BatchNorm (None, 38, 38, 512) 2048 conv2d_9[0][0]
____________________________________________________________________________________________________
leaky_re_lu_9 (LeakyReLU) (None, 38, 38, 512) 0 batch_normalization_9[0][0]
____________________________________________________________________________________________________
conv2d_10 (Conv2D) (None, 38, 38, 256) 131072 leaky_re_lu_9[0][0]
____________________________________________________________________________________________________
batch_normalization_10 (BatchNor (None, 38, 38, 256) 1024 conv2d_10[0][0]
____________________________________________________________________________________________________
leaky_re_lu_10 (LeakyReLU) (None, 38, 38, 256) 0 batch_normalization_10[0][0]
____________________________________________________________________________________________________
conv2d_11 (Conv2D) (None, 38, 38, 512) 1179648 leaky_re_lu_10[0][0]
____________________________________________________________________________________________________
batch_normalization_11 (BatchNor (None, 38, 38, 512) 2048 conv2d_11[0][0]
____________________________________________________________________________________________________
leaky_re_lu_11 (LeakyReLU) (None, 38, 38, 512) 0 batch_normalization_11[0][0]
____________________________________________________________________________________________________
conv2d_12 (Conv2D) (None, 38, 38, 256) 131072 leaky_re_lu_11[0][0]
____________________________________________________________________________________________________
batch_normalization_12 (BatchNor (None, 38, 38, 256) 1024 conv2d_12[0][0]
____________________________________________________________________________________________________
leaky_re_lu_12 (LeakyReLU) (None, 38, 38, 256) 0 batch_normalization_12[0][0]
____________________________________________________________________________________________________
conv2d_13 (Conv2D) (None, 38, 38, 512) 1179648 leaky_re_lu_12[0][0]
____________________________________________________________________________________________________
batch_normalization_13 (BatchNor (None, 38, 38, 512) 2048 conv2d_13[0][0]
____________________________________________________________________________________________________
leaky_re_lu_13 (LeakyReLU) (None, 38, 38, 512) 0 batch_normalization_13[0][0]
____________________________________________________________________________________________________
max_pooling2d_5 (MaxPooling2D) (None, 19, 19, 512) 0 leaky_re_lu_13[0][0]
____________________________________________________________________________________________________
conv2d_14 (Conv2D) (None, 19, 19, 1024) 4718592 max_pooling2d_5[0][0]
____________________________________________________________________________________________________
batch_normalization_14 (BatchNor (None, 19, 19, 1024) 4096 conv2d_14[0][0]
____________________________________________________________________________________________________
leaky_re_lu_14 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_14[0][0]
____________________________________________________________________________________________________
conv2d_15 (Conv2D) (None, 19, 19, 512) 524288 leaky_re_lu_14[0][0]
____________________________________________________________________________________________________
batch_normalization_15 (BatchNor (None, 19, 19, 512) 2048 conv2d_15[0][0]
____________________________________________________________________________________________________
leaky_re_lu_15 (LeakyReLU) (None, 19, 19, 512) 0 batch_normalization_15[0][0]
____________________________________________________________________________________________________
conv2d_16 (Conv2D) (None, 19, 19, 1024) 4718592 leaky_re_lu_15[0][0]
____________________________________________________________________________________________________
batch_normalization_16 (BatchNor (None, 19, 19, 1024) 4096 conv2d_16[0][0]
____________________________________________________________________________________________________
leaky_re_lu_16 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_16[0][0]
____________________________________________________________________________________________________
conv2d_17 (Conv2D) (None, 19, 19, 512) 524288 leaky_re_lu_16[0][0]
____________________________________________________________________________________________________
batch_normalization_17 (BatchNor (None, 19, 19, 512) 2048 conv2d_17[0][0]
____________________________________________________________________________________________________
leaky_re_lu_17 (LeakyReLU) (None, 19, 19, 512) 0 batch_normalization_17[0][0]
____________________________________________________________________________________________________
conv2d_18 (Conv2D) (None, 19, 19, 1024) 4718592 leaky_re_lu_17[0][0]
____________________________________________________________________________________________________
batch_normalization_18 (BatchNor (None, 19, 19, 1024) 4096 conv2d_18[0][0]
____________________________________________________________________________________________________
leaky_re_lu_18 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_18[0][0]
____________________________________________________________________________________________________
conv2d_19 (Conv2D) (None, 19, 19, 1024) 9437184 leaky_re_lu_18[0][0]
____________________________________________________________________________________________________
batch_normalization_19 (BatchNor (None, 19, 19, 1024) 4096 conv2d_19[0][0]
____________________________________________________________________________________________________
conv2d_21 (Conv2D) (None, 38, 38, 64) 32768 leaky_re_lu_13[0][0]
____________________________________________________________________________________________________
leaky_re_lu_19 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_19[0][0]
____________________________________________________________________________________________________
batch_normalization_21 (BatchNor (None, 38, 38, 64) 256 conv2d_21[0][0]
____________________________________________________________________________________________________
conv2d_20 (Conv2D) (None, 19, 19, 1024) 9437184 leaky_re_lu_19[0][0]
____________________________________________________________________________________________________
leaky_re_lu_21 (LeakyReLU) (None, 38, 38, 64) 0 batch_normalization_21[0][0]
____________________________________________________________________________________________________
batch_normalization_20 (BatchNor (None, 19, 19, 1024) 4096 conv2d_20[0][0]
____________________________________________________________________________________________________
space_to_depth_x2 (Lambda) (None, 19, 19, 256) 0 leaky_re_lu_21[0][0]
____________________________________________________________________________________________________
leaky_re_lu_20 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_20[0][0]
____________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 19, 19, 1280) 0 space_to_depth_x2[0][0]
leaky_re_lu_20[0][0]
____________________________________________________________________________________________________
conv2d_22 (Conv2D) (None, 19, 19, 1024) 11796480 concatenate_1[0][0]
____________________________________________________________________________________________________
batch_normalization_22 (BatchNor (None, 19, 19, 1024) 4096 conv2d_22[0][0]
____________________________________________________________________________________________________
leaky_re_lu_22 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_22[0][0]
____________________________________________________________________________________________________
conv2d_23 (Conv2D) (None, 19, 19, 425) 435625 leaky_re_lu_22[0][0]
====================================================================================================
Total params: 50,983,561
Trainable params: 50,962,889
Non-trainable params: 20,672
____________________________________________________________________________________________________
###Markdown
**Note**: On some computers, you may see a warning message from Keras. Don't worry about it if you do--it is fine.

**Reminder**: this model converts a preprocessed batch of input images (shape: (m, 608, 608, 3)) into a tensor of shape (m, 19, 19, 5, 85), as explained in Figure (2).

3.3 - Convert output of the model to usable bounding box tensors

The output of `yolo_model` is a (m, 19, 19, 5, 85) tensor that needs to pass through non-trivial processing and conversion. The following cell does that for you.

If you are curious about how `yolo_head` is implemented, you can find the function definition in the file ['keras_yolo.py'](https://github.com/allanzelener/YAD2K/blob/master/yad2k/models/keras_yolo.py). The file is located in your workspace at 'yad2k/models/keras_yolo.py'.
###Code
yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))
###Output
_____no_output_____
###Markdown
You added `yolo_outputs` to your graph. This set of 4 tensors is ready to be used as input by your `yolo_eval` function.

3.4 - Filtering boxes

`yolo_outputs` gave you all the predicted boxes of `yolo_model` in the correct format. You're now ready to perform filtering and select only the best boxes. Let's now call `yolo_eval`, which you implemented previously, to do this.
###Code
scores, boxes, classes = yolo_eval(yolo_outputs, image_shape)
###Output
_____no_output_____
###Markdown
3.5 - Run the graph on an image

Let the fun begin. You have created a graph that can be summarized as follows:

1. yolo_model.input is given to `yolo_model`. The model is used to compute the output yolo_model.output.
2. yolo_model.output is processed by `yolo_head`. It gives you yolo_outputs.
3. yolo_outputs goes through a filtering function, `yolo_eval`. It outputs your predictions: scores, boxes, classes.

**Exercise**: Implement predict(), which runs the graph to test YOLO on an image. You will need to run a TensorFlow session to have it compute `scores, boxes, classes`.

The code below also uses the following function:

```python
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
```

which outputs:

- image: a python (PIL) representation of your image, used for drawing boxes. You won't need to use it.
- image_data: a numpy array representing the image. This will be the input to the CNN.

**Important note**: when a model uses BatchNorm (as is the case in YOLO), you will need to pass an additional placeholder in the feed_dict: {K.learning_phase(): 0}.

Hint: Using the TensorFlow Session object

* Recall that above, we called `K.get_session()` and saved the Session object in `sess`.
* To evaluate a list of tensors, we call `sess.run()` like this:

```python
sess.run(fetches=[tensor1, tensor2, tensor3],
         feed_dict={yolo_model.input: the_input_variable, K.learning_phase(): 0})
```

* Notice that the variables `scores, boxes, classes` are not passed into the `predict` function, but these are global variables that you will use within the `predict` function.
###Code
def predict(sess, image_file):
"""
Runs the graph stored in "sess" to predict boxes for "image_file". Prints and plots the predictions.
Arguments:
sess -- your tensorflow/Keras session containing the YOLO graph
image_file -- name of an image stored in the "images" folder.
Returns:
out_scores -- tensor of shape (None, ), scores of the predicted boxes
out_boxes -- tensor of shape (None, 4), coordinates of the predicted boxes
out_classes -- tensor of shape (None, ), class index of the predicted boxes
Note: "None" actually represents the number of predicted boxes, it varies between 0 and max_boxes.
"""
# Preprocess your image
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
# Run the session with the correct tensors and choose the correct placeholders in the feed_dict.
# You'll need to use feed_dict={yolo_model.input: ... , K.learning_phase(): 0})
### START CODE HERE ### (≈ 1 line)
out_scores, out_boxes, out_classes = sess.run([scores, boxes, classes], feed_dict={yolo_model.input: image_data, K.learning_phase(): 0})
### END CODE HERE ###
# Print predictions info
print('Found {} boxes for {}'.format(len(out_boxes), image_file))
# Generate colors for drawing bounding boxes.
colors = generate_colors(class_names)
# Draw bounding boxes on the image file
draw_boxes(image, out_scores, out_boxes, out_classes, class_names, colors)
# Save the predicted bounding box on the image
image.save(os.path.join("out", image_file), quality=90)
# Display the results in the notebook
output_image = scipy.misc.imread(os.path.join("out", image_file))
imshow(output_image)
return out_scores, out_boxes, out_classes
###Output
_____no_output_____
###Markdown
Run the following cell on the "test.jpg" image to verify that your function is correct.
###Code
out_scores, out_boxes, out_classes = predict(sess, "test.jpg")
###Output
Found 7 boxes for test.jpg
car 0.60 (925, 285) (1045, 374)
car 0.66 (706, 279) (786, 350)
bus 0.67 (5, 266) (220, 407)
car 0.70 (947, 324) (1280, 705)
car 0.74 (159, 303) (346, 440)
car 0.80 (761, 282) (942, 412)
car 0.89 (367, 300) (745, 648)
|
minGPT_Tilde_eval_LV_EN.ipynb | ###Markdown
Text preprocessing
###Code
num_bpe_merges = 10000
vocab_size = 5500
joint_vocab_size = 2*vocab_size
!echo BPE_ops=$num_bpe_merges vocab_size=$vocab_size joint_vocab_size=$joint_vocab_size
#!pip install subword-nmt
!pip show subword-nmt
TILDE_ALL_EN = f'{TILDE_DATA}/all.norm2.en'
TILDE_ALL_LV = f'{TILDE_DATA}/all.norm2.lv'
TILDE_TOK_EN = f'{TILDE_DATA}/combined.en.tok.txt'
TILDE_TOK_LV = f'{TILDE_DATA}/combined.lv.tok.txt'
!echo $TILDE_DATA/combined.lv.tok.txt $TILDE_TOK_EN
# !git clone https://github.com/moses-smt/mosesdecoder.git
# # Read texts from previously saved files.
with open(f'{TILDE_DATA}/combined.lv.BPE.txt', 'r') as f:
text_input = f.read().splitlines()
with open(f'{TILDE_DATA}/combined.en.BPE.txt', 'r') as f:
text_output = f.read().splitlines()
with open('data/tilde/train2.lv', 'r') as f:
train_input = f.read().splitlines()
with open('data/tilde/test2.lv', 'r') as f:
test_input = f.read().splitlines()
with open('data/tilde/valid2.lv', 'r') as f:
valid_input = f.read().splitlines()
with open('data/tilde/train2.en', 'r') as f:
train_output = f.read().splitlines()
with open('data/tilde/test2.en', 'r') as f:
test_output = f.read().splitlines()
with open('data/tilde/valid2.en', 'r') as f:
valid_output = f.read().splitlines()
print("total:\t", len(text_input), len(text_output))
print(f"train:\t {len(train_input)} {len(train_output)}")
print(f"valid:\t {len(valid_input)} {len(valid_output)}")
print(f"test:\t {len(test_input)} {len(test_output)}")
# _100k = 100000
# test_inp_100k = test_input[:_100k-1]
# test_outp_100k = test_output[:_100k-1]
# print(len(test_inp_100k), len(test_outp_100k))
# with open("data/tilde/test2_100000.lv", "w") as f:
# f.write("\n".join(test_inp_100k)+"\n")
# with open("data/tilde/test2_100000.en", "w") as f:
# f.write("\n".join(test_outp_100k)+"\n")
def build_vocab(freq_file):
vocab = Counter() #['<unk>', '<pad>', '<eos>'])
with open(freq_file, 'r') as f:
for line in f.readlines():
token, num_occurs = line.split()
vocab[token] += int(num_occurs)
return vocab #[:vocab_size]
en_vocab = build_vocab(f'{TILDE_DATA}/bpe/vocab.en')
lv_vocab = build_vocab(f'{TILDE_DATA}/bpe/vocab.lv')
joint_vocab = build_vocab(f'{TILDE_DATA}/bpe/vocab.lven')
print("en_vocab:", len(en_vocab), "lv_vocab:", len(lv_vocab), "joint_vocab", len(joint_vocab))
# en_vocab: 10099 lv_vocab: 6477 joint_vocab 10519
if False:
special_tokens = ['<unk>', '<pad>', '<eos>', '<sep>'] #, '<S>', '</S>', '<bos>', '<eos>', '<sep>', '<NONE>', '<|>']
normalizer = normalizers.Sequence([Strip(), Lowercase()])
pre_tokenizer = Whitespace()
model = tokenizers.models.WordLevel(unk_token='<unk>')
# model = tokenizers.models.WordPiece()
tokenizer = tokenizers.Tokenizer(model=model)
tokenizer.add_special_tokens(special_tokens)
tokenizer.normalizer = normalizer
tokenizer.pre_tokenizer = pre_tokenizer
# filelist = glob.glob(PTHRU_DIR+"valid/*.pthru")
# filelist.extend( glob.glob(PTHRU_DIR+"test/*.pthru"))
# filelist.extend( glob.glob(PTHRU_DIR+"train/*.pthru"))
# token_strs = [tok for (tok, span) in pre_tokenizer.pre_tokenize_str(str1)]
# print(token_strs)
# filelist = glob.glob(PTHRU_DIR+"valid/*.pthru")
filelist = glob.glob(f"{TILDE_DATA}/combined.*.BPE.txt")
filelist = sorted(filelist)
print(len(filelist), filelist[:10])
# unigram_trainer = tokenizers.trainers.UnigramTrainer()
# trainer = tokenizers.trainers.WordPieceTrainer(vocab_size=vocab_size)
trainer = tokenizers.trainers.WordLevelTrainer(vocab_size=joint_vocab_size, special_tokens=special_tokens)
tokenizer.train(files=filelist, trainer=trainer)
vocab_dict = tokenizer.get_vocab(with_added_tokens=False)
print("ACTUAL VOCAB SIZE =", len(vocab_dict))
print(vocab_dict)
# ACTUAL VOCAB SIZE = 9048 #? Why is this not == len(joint_vocab) ?
###Output
_____no_output_____
###Markdown
MinGPT
###Code
import random
import numpy as np
import torch
import torch.nn as nn
from torch.nn import functional as F
def set_seed(seed):
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
def top_k_logits(logits, k):
v, ix = torch.topk(logits, k)
out = logits.clone()
out[out < v[:, [-1]]] = -float('Inf')
return out
def calculate_attention_token(attention, top_k, model):
logits = model.head(attention)
logits = logits[:, -1, :]
logits = top_k_logits(logits, top_k)
probs = F.softmax(logits, dim=-1)  # explicit dim avoids the implicit-dim deprecation warning
# greedy alternative (unused here, immediately overwritten by the sampling below):
# _, ix = torch.topk(probs, k=1, dim=-1)
ix = torch.multinomial(probs, num_samples=top_k)
return ix[0]
@torch.no_grad()
def sample(model, x, steps, temperature=1.0, sample=False, top_k=None,
output_attention=False, stop_tokidx=None):
"""
take a conditioning sequence of indices in x (of shape (b,t)) and predict the next token in
the sequence, feeding the predictions back into the model each time. Clearly the sampling
has quadratic complexity unlike an RNN that is only linear, and has a finite context window
of block_size, unlike an RNN that has an infinite context window.
"""
block_size = model.get_block_size()
model.eval()
attention_state = [[] for _ in model.blocks]
for k in range(steps):
x_cond = x if x.size(1) <= block_size else x[:, -block_size:] # crop context if needed
logits, _ = model(x_cond)
# pluck the logits at the final step and scale by temperature
logits = logits[:, -1, :] / temperature
# optionally crop probabilities to only the top k options
if top_k is not None:
logits = top_k_logits(logits, top_k)
# apply softmax to convert to probabilities
probs = F.softmax(logits, dim=-1)
# sample from the distribution or take the most likely
if sample:
ix = torch.multinomial(probs, num_samples=1)
else:
_, ix = torch.topk(probs, k=1, dim=-1)
if output_attention:
b, t = x.size()
for block_id in range(len(model.blocks)):
att = model.blocks[block_id].attn.att
attention_state[block_id].append(att)
# append to the sequence and continue
x = torch.cat((x, ix), dim=1)
if stop_tokidx is not None and ix == stop_tokidx:
break
if output_attention:
return x, attention_state
return x
"""
GPT model:
- the initial stem consists of a combination of token encoding and a positional encoding
- the meat of it is a uniform sequence of Transformer blocks
- each Transformer is a sequential combination of a 1-hidden-layer MLP block and a self-attention block
- all blocks feed into a central residual pathway similar to resnets
- the final decoder is a linear projection into a vanilla Softmax classifier
"""
import math
import logging
import torch
import torch.nn as nn
from torch.nn import functional as F
logger = logging.getLogger(__name__)
class GPTConfig:
""" base GPT config, params common to all GPT versions """
embd_pdrop = 0.1
resid_pdrop = 0.1
attn_pdrop = 0.1
def __init__(self, vocab_size, block_size, **kwargs):
self.vocab_size = vocab_size
self.block_size = block_size
for k,v in kwargs.items():
setattr(self, k, v)
class GPT1Config(GPTConfig):
""" GPT-1 like network roughly 125M params """
n_layer = 12
n_head = 12
n_embd = 768
class CausalSelfAttention(nn.Module):
"""
A vanilla multi-head masked self-attention layer with a projection at the end.
It is possible to use torch.nn.MultiheadAttention here but I am including an
explicit implementation here to show that there is nothing too scary here.
"""
def __init__(self, config):
super().__init__()
assert config.n_embd % config.n_head == 0
# key, query, value projections for all heads
self.key = nn.Linear(config.n_embd, config.n_embd)
self.query = nn.Linear(config.n_embd, config.n_embd)
self.value = nn.Linear(config.n_embd, config.n_embd)
# regularization
self.attn_drop = nn.Dropout(config.attn_pdrop)
self.resid_drop = nn.Dropout(config.resid_pdrop)
# output projection
self.proj = nn.Linear(config.n_embd, config.n_embd)
# causal mask to ensure that attention is only applied to the left in the input sequence
self.register_buffer("mask", torch.tril(torch.ones(config.block_size, config.block_size))
.view(1, 1, config.block_size, config.block_size))
self.n_head = config.n_head
self.att = None
def forward(self, x, layer_past=None):
B, T, C = x.size()
# calculate query, key, values for all heads in batch and move head forward to be the batch dim
k = self.key(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
q = self.query(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
v = self.value(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
# causal self-attention; Self-attend: (B, nh, T, hs) x (B, nh, hs, T) -> (B, nh, T, T)
att = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))
att = att.masked_fill(self.mask[:,:,:T,:T] == 0, float('-inf'))
att = F.softmax(att, dim=-1)
att = self.attn_drop(att)
y = att @ v # (B, nh, T, T) x (B, nh, T, hs) -> (B, nh, T, hs)
y = y.transpose(1, 2).contiguous().view(B, T, C) # re-assemble all head outputs side by side
# output projection
y = self.resid_drop(self.proj(y))
self.att = att
return y
class Block(nn.Module):
""" an unassuming Transformer block """
def __init__(self, config):
super().__init__()
self.ln1 = nn.LayerNorm(config.n_embd)
self.ln2 = nn.LayerNorm(config.n_embd)
self.attn = CausalSelfAttention(config)
self.mlp = nn.Sequential(
nn.Linear(config.n_embd, 4 * config.n_embd),
nn.GELU(),
nn.Linear(4 * config.n_embd, config.n_embd),
nn.Dropout(config.resid_pdrop),
)
def forward(self, x):
x = x + self.attn(self.ln1(x))
x = x + self.mlp(self.ln2(x))
return x
class GPT(nn.Module):
""" the full GPT language model, with a context size of block_size """
def __init__(self, config):
super().__init__()
# input embedding stem
self.tok_emb = nn.Embedding(config.vocab_size, config.n_embd)
self.pos_emb = nn.Parameter(torch.zeros(1, config.block_size, config.n_embd))
self.drop = nn.Dropout(config.embd_pdrop)
# transformer
self.blocks = nn.Sequential(*[Block(config) for _ in range(config.n_layer)])
# decoder head
self.ln_f = nn.LayerNorm(config.n_embd)
self.head = nn.Linear(config.n_embd, config.vocab_size, bias=False)
self.block_size = config.block_size
self.apply(self._init_weights)
logger.info("number of parameters: %e", sum(p.numel() for p in self.parameters()))
def get_block_size(self):
return self.block_size
def _init_weights(self, module):
if isinstance(module, (nn.Linear, nn.Embedding)):
module.weight.data.normal_(mean=0.0, std=0.02)
if isinstance(module, nn.Linear) and module.bias is not None:
module.bias.data.zero_()
elif isinstance(module, nn.LayerNorm):
module.bias.data.zero_()
module.weight.data.fill_(1.0)
def configure_optimizers(self, train_config):
"""
This long function is unfortunately doing something very simple and is being very defensive:
We are separating out all parameters of the model into two buckets: those that will experience
weight decay for regularization and those that won't (biases, and layernorm/embedding weights).
We are then returning the PyTorch optimizer object.
"""
# separate out all parameters to those that will and won't experience regularizing weight decay
decay = set()
no_decay = set()
whitelist_weight_modules = (torch.nn.Linear, )
blacklist_weight_modules = (torch.nn.LayerNorm, torch.nn.Embedding)
for mn, m in self.named_modules():
for pn, p in m.named_parameters():
fpn = '%s.%s' % (mn, pn) if mn else pn # full param name
if pn.endswith('bias'):
# all biases will not be decayed
no_decay.add(fpn)
elif pn.endswith('weight') and isinstance(m, whitelist_weight_modules):
# weights of whitelist modules will be weight decayed
decay.add(fpn)
elif pn.endswith('weight') and isinstance(m, blacklist_weight_modules):
# weights of blacklist modules will NOT be weight decayed
no_decay.add(fpn)
# special case the position embedding parameter in the root GPT module as not decayed
no_decay.add('pos_emb')
# validate that we considered every parameter
param_dict = {pn: p for pn, p in self.named_parameters()}
inter_params = decay & no_decay
union_params = decay | no_decay
assert len(inter_params) == 0, "parameters %s made it into both decay/no_decay sets!" % (str(inter_params), )
assert len(param_dict.keys() - union_params) == 0, "parameters %s were not separated into either decay/no_decay set!" \
% (str(param_dict.keys() - union_params), )
# create the pytorch optimizer object
optim_groups = [
{"params": [param_dict[pn] for pn in sorted(list(decay))], "weight_decay": train_config.weight_decay},
{"params": [param_dict[pn] for pn in sorted(list(no_decay))], "weight_decay": 0.0},
]
optimizer = torch.optim.AdamW(optim_groups, lr=train_config.learning_rate, betas=train_config.betas)
return optimizer
def forward(self, idx, targets=None):
b, t = idx.size()
assert t <= self.block_size, "Cannot forward, model block size is exhausted."
# forward the GPT model
token_embeddings = self.tok_emb(idx) # each index maps to a (learnable) vector
position_embeddings = self.pos_emb[:, :t, :] # each position maps to a (learnable) vector
x = self.drop(token_embeddings + position_embeddings)
x = self.blocks(x)
x = self.ln_f(x)
logits = self.head(x)
# if we are given some desired targets also calculate the loss
loss = None
if targets is not None:
loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
return logits, loss
"""
Simple training loop; Boilerplate that could apply to any arbitrary neural network,
so nothing in this file really has anything to do with GPT specifically.
"""
import sacrebleu
import math
import logging
from random import choice
from tqdm import tqdm
import numpy as np
import torch
import torch.optim as optim
from torch.optim.lr_scheduler import LambdaLR
from torch.utils.data.dataloader import DataLoader
logger = logging.getLogger(__name__)
def clean_tokens(sentence):
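# undo the subword-nmt BPE segmentation markers ('@@ ') so sentences are compared on whole words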
return sentence.replace('@@ ', '').replace(' @', '').replace('@ ', '')
class TrainerConfig:
# optimization parameters
max_epochs = 10
batch_size = 64
learning_rate = 3e-4
betas = (0.9, 0.95)
grad_norm_clip = 1.0
weight_decay = 0.1 # only applied on matmul weights
# learning rate decay params: linear warmup followed by cosine decay to 10% of original
lr_decay = False
warmup_tokens = 375e6 # these two numbers come from the GPT-3 paper, but may not be good defaults elsewhere
final_tokens = 260e9 # (at what point we reach 10% of original LR)
# checkpoint settings
ckpt_path = None
num_workers = 0 # for DataLoader
def __init__(self, **kwargs):
for k,v in kwargs.items():
setattr(self, k, v)
class Trainer:
def __init__(self, model, train_dataset, test_dataset, valid_dataset, config):
self.model = model
self.train_dataset = train_dataset
self.test_dataset = test_dataset
self.valid_dataset = valid_dataset
self.config = config
# take over whatever gpus are on the system
self.device = 'cpu'
if torch.cuda.is_available():
self.device = torch.cuda.current_device()
self.model = torch.nn.DataParallel(self.model).to(self.device)
def save_checkpoint(self, postfix=''):
# DataParallel wrappers keep raw model object in .module attribute
raw_model = self.model.module if hasattr(self.model, "module") else self.model
checkpoint_path = self.config.ckpt_path + postfix + '.pt'
logger.info("saving %s", checkpoint_path)
torch.save(raw_model.state_dict(), checkpoint_path)
def train(self):
model, config = self.model, self.config
raw_model = model.module if hasattr(self.model, "module") else model
optimizer = raw_model.configure_optimizers(config)
def run_epoch(split):
is_train = split == 'train'
model.train(is_train)
data = self.train_dataset
if split == 'test':
data = self.test_dataset
elif split == 'valid':
data = self.valid_dataset
model.eval()
loader = DataLoader(data, shuffle=True, pin_memory=True,
batch_size=config.batch_size, # if is_train else 8,
num_workers=config.num_workers)
losses = []
pbar = tqdm(enumerate(loader), total=len(loader)) if is_train else enumerate(loader)
# predicted_tokids = None
context_list = []
translation_results = []
eval_results = []
x_total = None
y_total = None
for it, (x, y) in pbar:
# place data on the correct device
x = x.to(self.device)
y = y.to(self.device)
# forward the model
with torch.set_grad_enabled(is_train):
logits, loss = model(x, y)
loss = loss.mean() # collapse all losses if they are scattered on multiple gpus
losses.append(loss.item())
if split == 'valid':
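# locate the <eos> separator between the (Latvian) source and the (English) target in each sequence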
intent = (x == valid_dataset.tokenizer_input.encode(['<eos>'])[0]).nonzero(as_tuple=True) #[0]
#print(valid_dataset.tokenizer_input.encode(['<eos>']))
#print(intent)
#print(x.shape, y.shape, logits.shape)
#for i in range(len(intent[0])):
# print(x[i][intent[1][i]], end=", ")
#print()
probs = F.softmax(logits, dim=-1)
#print(probs.shape)
for i in range(len(probs)):
# sample from the distribution or take the most likely
_, predicted = torch.topk(probs[i], k=1, dim=-1)
if len(predicted.shape) > 1:
# print("PREDICTED:", predicted.shape, predicted)
predicted = predicted.squeeze()
if len(predicted.shape) > 1:
print("AFTER predicted.squeeze(1):", predicted.shape)
sep = intent[1][i]
# print("sep=", sep)
#print("***CONTEXT")
context = clean_tokens(data.tokenizer_input.decode(x[i][:sep - 1], True))
#print(context)
#print("***COMPLETION")
completion = clean_tokens(data.tokenizer_output.decode(predicted[sep:], True))
#print(completion)
#print("***REAL")
real = clean_tokens(data.tokenizer_output.decode(y[i][sep:], True))
#print(real)
context_list.append(context)
translation_results.append(completion)
eval_results.append(real)
# probs = F.softmax(logits, dim=-1)
# # sample from the distribution or take the most likely
# _, predicted = torch.topk(probs, k=1, dim=-1)
# if predicted_tokids is None:
# predicted_tokids = [predicted]
# x_total = x
# y_total = y
# else:
# predicted_tokids.append(predicted)
# x_total = torch.cat((x_total, x), dim=0)
# y_total = torch.cat((y_total, y), dim=0)
if is_train:
# backprop and update the parameters
model.zero_grad()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), config.grad_norm_clip)
optimizer.step()
# decay the learning rate based on our progress
if config.lr_decay:
self.tokens += (y >= 0).sum() # number of tokens processed this step (i.e. label is not -100)
if self.tokens < config.warmup_tokens:
# linear warmup
lr_mult = float(self.tokens) / float(max(1, config.warmup_tokens))
else:
# cosine learning rate decay
progress = float(self.tokens - config.warmup_tokens) / float(max(1, config.final_tokens - config.warmup_tokens))
lr_mult = max(0.1, 0.5 * (1.0 + math.cos(math.pi * progress)))
lr = config.learning_rate * lr_mult
for param_group in optimizer.param_groups:
param_group['lr'] = lr
else:
lr = config.learning_rate
# report progress
pbar.set_description(f"epoch {epoch+1} iter {it}: train loss {loss.item():.5f}. mean loss: {float(np.mean(losses)):.5f}. lr {lr:e}")
if split == 'train':
train_loss = float(np.mean(losses))
print(f"train loss: {train_loss}")
return train_loss
if split == 'test':
test_loss = float(np.mean(losses))
print(f"test loss: {test_loss}")
return test_loss
if split == 'valid':
test_loss = float(np.mean(losses))
print(f"valid loss: {test_loss}")
# eval_results = []
# translation_results = []
# context_list = []
# for idx in range(len(logits_total)):
# intent = (x_total[idx] == valid_dataset.tokenizer_input.encode(['<eos>'])[0]).nonzero(as_tuple=True)[0][0]
# probs = F.softmax(logits_total[idx], dim=-1)
# # sample from the distribution or take the most likely
# _, predicted = torch.topk(probs, k=1, dim=-1)
# for idx in range(len(predicted_tokids)):
# intent = (x_total[idx] == valid_dataset.tokenizer_input.encode(['<eos>'])[0]).nonzero(as_tuple=True)[0][0]
# predicted = predicted_tokids[idx]
# print("***CONTEXT")
# context = clean_tokens(data.tokenizer_input.decode(x_total[idx][:intent - 1], True))
# print("***COMPLETION")
# completion = clean_tokens(data.tokenizer_output.decode(predicted[intent:], True))
# print("***REAL")
# real = clean_tokens(data.tokenizer_output.decode(y_total[idx][intent:], True))
# context_list.append(context)
# translation_results.append(completion)
# eval_results.append(real)
with open('valid.txt', 'w') as f:
f.write("\n".join(translation_results))
with open('eval.txt', 'w') as f:
f.write("\n".join(eval_results))
with open('context.txt', 'w') as f:
f.write("\n".join(context_list))
!cat valid.txt | mosesdecoder/scripts/tokenizer/detokenizer.perl -l en > valid.detok.txt
!cat eval.txt | mosesdecoder/scripts/tokenizer/detokenizer.perl -l en > eval.detok.txt
!cat context.txt | mosesdecoder/scripts/tokenizer/detokenizer.perl -l lv > context.detok.txt
with open('eval.detok.txt', 'r') as f:
eval_results = [l.strip() for l in f.readlines()]
with open('valid.detok.txt', 'r') as f:
translation_results = [l.strip() for l in f.readlines()]
with open('context.detok.txt', 'r') as f:
context_list = [l.strip() for l in f.readlines()]
# idx = choice(range(len(context_list)))
valid_sentences = ['the driver wore a cap and his face was thin and very tanned.',
'outside it was getting dark.',
'the two girls were asleep.',
'I would like to have had the uniform off although I did not care much about the outward forms.',
'I watched the flashes on San Gabriele.',
'I asked.',
'"no.']
idx_list = [i for i, sentence in enumerate(eval_results) if sentence in valid_sentences]
for idx in idx_list:
print(f'Input: {context_list[idx]}')
print(f'Predicted output: {translation_results[idx]}')
print(f'Real output: {eval_results[idx]}')
print('--------------------------------------------------')
refs = [eval_results]
sys = translation_results
bleu = sacrebleu.corpus_bleu(sys, refs)
print(f'BLEU: {bleu.score}')
print('##############################################################')
return test_loss, bleu.score
train_loss_list = []
test_loss_list = []
valid_loss_list = []
valid_bleu_list = []
best_loss = float('inf')
best_bleu = 0.0
bleu_score = -1.0
self.tokens = 0 # counter used for learning rate decay
for epoch in range(config.max_epochs):
train_loss = run_epoch('train')
train_loss_list.append(train_loss)
if self.test_dataset is not None:
test_loss = run_epoch('test')
test_loss_list.append(test_loss)
if self.valid_dataset is not None:
valid_loss, bleu_score = run_epoch('valid')
valid_loss_list.append(valid_loss)
valid_bleu_list.append(bleu_score)
# supports early stopping based on the test loss, or just save always if no test set is provided
# good_model = self.test_dataset is None or test_loss < best_loss
good_model = self.valid_dataset is None or bleu_score > best_bleu
if self.config.ckpt_path is not None and good_model:
best_loss = test_loss
best_bleu = bleu_score
self.save_checkpoint("_best")
if epoch % 10 == 0:
self.save_checkpoint(f"_{epoch}")
self.save_checkpoint("_last")
return train_loss_list, test_loss_list, valid_loss_list, valid_bleu_list
###Output
_____no_output_____
###Markdown
Training
###Code
class Tokenizer:
def __init__(self, data, vocab_size, vocab):
self.vocab_size = vocab_size
self.vocab = set(vocab)
self.vocab_size = len(vocab)
if self.vocab_size != vocab_size:
logger.warn(f"Tokenizer len(vocab) != vocab_size: {len(self.vocab)} {vocab_size}")
print(f"Tokenizer vocab_size={vocab_size} len(vocab)={len(self.vocab)}")
self.stoi = {ch: i for i, ch in enumerate(vocab)}
self.itos = {i: ch for i, ch in enumerate(vocab)}
def tokenize(self, data, block_size):
tokenized_text = data.split()
# Filter empty strings
tokenized_text = [x for x in tokenized_text if x]
result = []
for tokenized in tokenized_text:
# Replace out-of-vocabulary tokens with the <unk> special token, marking the element as unknown
if tokenized in self.vocab:
result.append(tokenized)
else:
logger.warn(f"Tokenizer UNKNOWN TOKEN: |{tokenized}|")
result.append('<unk>')
# in case the sentence is longer, than block_size, we trim the sentence
return result[:block_size]
def encode(self, data):
return [self.stoi[s] for s in data]
def decode(self, data, clean_paddings=False):
if hasattr(data, "shape") and len(data.shape) > 1:
print(data.shape)
print(data)
text = ' '.join([self.itos[int(i)] for i in data if int(i) >= 0])
if not clean_paddings:
return text
return text.replace('<pad>', '').replace('  ', ' ').strip()  # drop padding tokens and the extra spaces they leave behind
# vocab_size = 10000
# vocab_input = None
# if os.path.exists('vocab_input.pkl'):
# with open('vocab_input.pkl', 'rb') as f:
# vocab_input = pickle.load(f)
# vocab_output = None
# if os.path.exists('vocab_output.pkl'):
# with open('vocab_output.pkl', 'rb') as f:
# vocab_output = pickle.load(f)
# building the vocabulary can take some time: ~5 minutes for 10_000 tokens for each tokenizer.
tokenizer_input = Tokenizer(text_input, vocab_size, list(joint_vocab))
tokenizer_output = Tokenizer(text_output, vocab_size, list(joint_vocab))
print(text_input[:2])
print(f"{len(text_input)}")
# def separate_lines(text, train_idxs, valid_idxs, test_idxs):
# text_lines = text.splitlines()
# train_lines = [text_lines[idx] for idx in train_idxs]
# valid_lines = [text_lines[idx] for idx in valid_idxs]
# test_lines = [text_lines[idx] for idx in test_idxs]
# return train_lines, valid_lines, test_lines
# train_input, valid_input, test_input = separate_lines(text_input, train_idxs, valid_idxs, test_idxs)
# train_output, valid_output, test_output = separate_lines(text_output, train_idxs, valid_idxs, test_idxs)
print(len(train_input), len(valid_input), len(test_input))
assert len(train_input) == len(train_output)
assert len(valid_input) == len(valid_output)
assert len(test_input) == len(test_output)
from torch.utils.data import Dataset
class WordDataset(Dataset):
def __init__(self, output_text, input_text, tokenizer_output, tokenizer_input, block_size):
self.tokenizer_output = tokenizer_output
self.tokenizer_input = tokenizer_input
self.block_size = block_size * 2 + 1
self.output_text = [tokenizer_output.tokenize(t, block_size) for t in output_text]
self.input_text = [tokenizer_input.tokenize(t, block_size) for t in input_text]
def __len__(self):
return len(self.output_text)
def __getitem__(self, idx):
"""
The idea is to get the input sentence
and translate it to output sentence (sentences could be on any language).
In the init method we already split a sentence into tokens and filled with spaces,
to have an equal sentence size. In this method we just encode the tokens to
ids (a list of numbers), and we're trying to map ids sequences
"""
tokenized_input_text = self.tokenizer_input.encode(self.input_text[idx])
tokenized_output_text = self.tokenizer_output.encode(self.output_text[idx])
dix = tokenized_input_text + self.tokenizer_output.encode(['<eos>']) + tokenized_output_text
if len(dix) < self.block_size:
dix += self.tokenizer_output.encode(['<pad>']) * (self.block_size - len(dix))
x = torch.tensor(dix[:-1], dtype=torch.long)
y = torch.tensor(dix[1:], dtype=torch.long)
y[:len(tokenized_input_text) - 1] = -100
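# -100 is the default ignore_index of PyTorch's cross-entropy loss, so the loss
# effectively ignores the source-side positions and is computed only on the target tokens.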
return x, y
block_size = 100 # the estimate how long lines the text could be (token count)
import datetime
start_time = datetime.datetime.now()
print(f"================ encode Datasets - Start time: {start_time}")
# for faster debugging of Out of Memory during validation
_train_limit = len(train_output) # 10000 # len(train_output)
_eval_limit = 10000 # -1 # 5000
train_dataset = WordDataset(train_output[:_train_limit], train_input[:_train_limit],
tokenizer_output, tokenizer_input, block_size)
if _eval_limit > 0:
test_dataset = WordDataset(test_output[:_eval_limit], test_input[:_eval_limit],
tokenizer_output, tokenizer_input, block_size)
valid_dataset = WordDataset(valid_output[:_eval_limit], valid_input[:_eval_limit],
tokenizer_output, tokenizer_input, block_size)
else:
test_dataset = WordDataset(test_output, test_input,
tokenizer_output, tokenizer_input, block_size)
valid_dataset = WordDataset(valid_output, valid_input,
tokenizer_output, tokenizer_input, block_size)
finish_time = datetime.datetime.now()
print(f"================ encode Datasets - Finished : {finish_time} -- elapsed: {finish_time-start_time}")
# NOTE: fixed, no longer shows UNKNOWN TOKEN
# joint_vocab -s 10000
# UNKNOWN TOKEN
# |;@@| (2040) # I &@@ apos@@ ;@@ m
# |q@@| (148)
# |R| (40)
# |v| (409)
len(train_dataset)
number_of_heads = 8
number_of_layers = 6
# from mingpt.model import GPT, GPTConfig
embd_pdrop = 0.1
resid_pdrop = 0.1
attn_pdrop = 0.1
max_vocab = max(tokenizer_input.vocab_size, tokenizer_output.vocab_size)
mconf = GPTConfig(max_vocab, train_dataset.block_size,
n_layer=number_of_layers, n_head=number_of_heads, n_embd=512,
embd_pdrop=embd_pdrop, resid_pdrop=resid_pdrop, attn_pdrop=attn_pdrop)
model = GPT(mconf)
# from mingpt.trainer import Trainer, TrainerConfig
# tokens_per_epoch = len(train_dataset) * block_size
# train_epochs = 100
# _batch_size = 128
# # initialize a trainer instance and kick off training
# tconf = TrainerConfig(max_epochs=train_epochs,
# batch_size=_batch_size, learning_rate=3e-4,
# lr_decay=True, warmup_tokens=tokens_per_epoch, final_tokens=train_epochs*tokens_per_epoch,
# ckpt_path='minGPT-Tilde-LV-EN-translator_model',
# num_workers=1, weight_decay=0.0001, betas=(0.9, 0.98))
# trainer = Trainer(model, train_dataset, test_dataset, valid_dataset, tconf)
param_count = sum([param.nelement() for param in model.parameters()])
print(f'Parameters count: {param_count}')
# Parameters count: 29789696
# train_loss_list, test_loss_list, valid_loss_list, valid_bleu_list = trainer.train()
# epochs = range(len(test_loss_list))
# # plt.subplots(nrows=number_of_layers, ncols=number_of_heads, figsize=(30, 20))
# fig, axs = plt.subplots(nrows=4, ncols=1, figsize=(20, 10))
# axs[0].plot(epochs, train_loss_list)
# axs[0].set_title('Train loss')
# axs[0].set_xlabel('Epochs')
# axs[0].set_ylabel('Loss')
# axs[0].plot(epochs, test_loss_list)
# axs[0].set_title('Test loss')
# axs[0].set_xlabel('Epochs')
# axs[0].set_ylabel('Loss')
# axs[1].plot(epochs, valid_loss_list)
# axs[1].set_title('Validation loss')
# axs[1].set_xlabel('Epochs')
# axs[1].set_ylabel('Loss')
# axs[2].plot(epochs, valid_bleu_list)
# axs[2].set_title('Validation BLEU')
# axs[2].set_xlabel('Epochs')
# axs[2].set_ylabel('BLEU')
# plt.show()
###Output
_____no_output_____
###Markdown
Evaluate
###Code
cuda1 = torch.device('cuda:1')
checkpoint = torch.load('minGPT-Tilde-LV-EN-translator_model_best.pt', map_location=cuda1)
model.load_state_dict(checkpoint)
print(model)
model.to(cuda1)
from random import choice
pad_tokidx = tokenizer_output.encode(['<pad>'])[0]
for _ in range(5):
idx = choice(range(len(valid_output)))
context = valid_input[idx]
encoded_input = tokenizer_input.encode(tokenizer_input.tokenize(context, block_size))
x = torch.tensor(encoded_input, dtype=torch.long)[None,...].to(cuda1) #trainer.device)
y = sample(model, x, block_size, temperature=1.0, sample=False, top_k=10,
stop_tokidx=pad_tokidx)[0]
intent = len(encoded_input) + 1
predicted = y[intent:]
completion = tokenizer_output.decode(predicted, True)
print(f'Input: {context}')
print(f'\nPredicted output: {completion}')
print(f'\nReal output: {valid_output[idx]}')
print('--------------------------------------------------')
print(context)
print(encoded_input)
idx = choice(range(len(valid_output)))
context = valid_input[idx]
encoded_input = tokenizer_input.encode(tokenizer_input.tokenize(context, block_size))
x = torch.tensor(encoded_input, dtype=torch.long)[None,...].to(cuda1)
y, attention_state = sample(model, x, block_size, temperature=1.0, sample=False, top_k=10,
output_attention=True) #, stop_tokidx=pad_tokidx)
intent = len(encoded_input) + 1
predicted = y[0][intent:]
completion = tokenizer_output.decode(predicted,)
print(f'Input: {context}')
print(f'\nPredicted output: {completion}')
print(f'\nReal output: {valid_output[idx]}')
print('--------------------------------------------------')
# fig, plots = plt.subplots(nrows=number_of_layers, ncols=number_of_heads, figsize=(30, 20))
# axis_text = tokenizer_input.decode(encoded_input, True).split()
# axis_text.append('<eos>')
# axis_text += tokenizer_input.decode(predicted, True).split()
# limit = len(axis_text)
# for bi in range(number_of_layers):
# for hi in range(number_of_heads):
# attetion_plot = torch.zeros(limit, limit)
# for di in range(limit):
# attetion_plot[:di, :di] = attention_state[bi][di][0,hi,:di,:di].data
# ax = plots[bi][hi]
# ax.matshow(attetion_plot.numpy(), cmap='bone')
# # Set up axes
# ax.set_xticklabels([''] + axis_text, rotation=90)
# ax.set_yticklabels([''] + axis_text)
# # Show label at every tick
# ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
# ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
# # Set up a title
# ax.set_title(f'Block {bi + 1} Head {hi + 1}', size=25, pad=30)
# plt.show()
# In case the previous cell is not plotting anything, uncomment the code below and execute. After that, the plotting should be fine.
# %matplotlib inline
# import numpy as np
# x = np.linspace(0, 10, 100)
# fig = plt.figure()
# plt.plot(x, np.sin(x), '-')
# plt.plot(x, np.cos(x), '--');
###Output
_____no_output_____
###Markdown
Calculate BLEU
###Code
def clean_tokens(sentence):
return sentence.replace('@@ ', '').replace(' @', '').replace('@ ', '')
# import sys
# import time
# from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
# smooth = SmoothingFunction().method7
translation_results = []
eval_text = []
bleu_results = []
num_validation_recs = 100000 #len(valid_input) 161,361 -- would take 55 hours!
pad_tokidx = tokenizer_output.encode(['<pad>'])[0]
for idx, (context,target) in tqdm(enumerate(zip(test_input, test_output)),
total=num_validation_recs):
# sys.stdout.write('\r'+str(idx)+' / '+str(num_validation_recs))
# time.sleep(0.1)
## if (idx+1) % 50 == 0:
## print(idx, end=" ")
## if (idx+1) % 100 == 0:
## print(idx)
if (idx+1) % (num_validation_recs+1) == 0:
break
encoded_input = tokenizer_input.encode(tokenizer_input.tokenize(context, block_size))
x = torch.tensor(encoded_input, dtype=torch.long)[None,...].to(cuda1)
y = sample(model, x, block_size, temperature=1.0, sample=False, top_k=10,
stop_tokidx=pad_tokidx)[0]
intent = len(encoded_input) + 1
predicted = y[intent:]
completion = clean_tokens(tokenizer_output.decode(predicted, True))
translation_results.append(completion)
eval_text.append(clean_tokens(target))
# bleu = sentence_bleu([eval], completion, smoothing_function=smooth)
# bleu_results.append(bleu)
with open('tilde_valid.predicted', 'w') as f:
f.write("\n".join(translation_results))
with open('tilde_valid.gtruth', 'w') as f:
f.write("\n".join(eval_text))
# print(f"Averare BLEU: {np.mean(bleu_results)}")
# max_recs = 2000
# eval_text = []
# for idx, (context,target) in tqdm(enumerate(zip(test_input, test_output)),total=max_recs):
# if (idx+1) % (max_recs+1) == 0:
# break
# eval_text.append(clean_tokens(target))
# print(len(eval_text))
# with open('tilde_valid.gtruth', 'w') as f:
# f.write("\n".join(eval_text))
!perl mosesdecoder/scripts/generic/multi-bleu.perl tilde_valid.gtruth < tilde_valid.predicted
# BLEU = 7.92, 38.4/12.4/4.2/2.0 (BP=1.000, ratio=1.021, hyp_len=9711, ref_len=9509)
# joint_vocab -s 10,000
# BLEU = 8.61, 44.4/15.1/5.5/2.8 (BP=0.852, ratio=0.862, hyp_len=8198, ref_len=9509)
# full joint_vocab
# BLEU = 9.18, 41.7/14.1/5.4/2.8 (BP=0.948, ratio=0.950, hyp_len=9030, ref_len=9509)
# model_best.pt
# BLEU = 13.47, 48.0/19.6/9.4/5.5 (BP=0.908, ratio=0.912, hyp_len=8670, ref_len=9509)
# ---- train_input/output[:2000] (epoch 44, validation BLEU: 48.494189)
# ---- 49.96, 75.5/55.5/44.2/36.1 (BP=0.982, ratio=0.983, hyp_len=56089, ref_len=57081)
# ---- test_input/output[:100000] (epoch >=44, validation BLEU: ? >=48.49)
# ---- 43.82, 72.3/49.9/37.7/29.4 (BP=0.980, ratio=0.980, hyp_len=2757982, ref_len=2813764)
# --- Epoch 59, validation BLEU:48.80495715291374
# ---> 43.94, 72.3/50.1/37.8/29.6 (BP=0.980, ratio=0.980, hyp_len=2757389, ref_len=2813764)
!cat tilde_valid.predicted | mosesdecoder/scripts/tokenizer/detokenizer.perl -l lv > tilde_valid.detok.predicted
!cat tilde_valid.gtruth | mosesdecoder/scripts/tokenizer/detokenizer.perl -l lv > tilde_valid.detok.gtruth
#!pip install sacrebleu
#!pip show sacrebleu
import sacrebleu
with open('tilde_valid.detok.gtruth', 'r') as f:
eval_ref = [l.strip() for l in f.readlines()]
with open('tilde_valid.detok.predicted', 'r') as f:
translation_results = [l.strip() for l in f.readlines()]
bleu = sacrebleu.corpus_bleu(translation_results, [eval_ref])
print(bleu) #print(bleu.score)
# 7.918993465381516
# joint_vocab -s 10000 8.534786641173136
# full joint_vocab 9.174070997058795
# model_best.pt
#13.481896471451254
# ---- train_input/output[:2000] 50.07499125333281
# ---- test_input/output[:100000] 43.94 72.2/50.0/37.8/29.5 (BP = 0.980 ratio = 0.981 hyp_len = 2762939 ref_len = 2817588)
# --- model_best: Epoch 59, validation BLEU:48.80495715291374
#BLEU = 44.06 72.3/50.1/37.9/29.7 (BP = 0.980 ratio = 0.980 hyp_len = 2762448 ref_len = 2817588)
###Output
_____no_output_____ |
Automate the Boring Stuff with Python Ch2.ipynb | ###Markdown
In this lecture we study control flow statements and conditional execution. When it comes to the word 'control', we first need a notion of 'true' and 'false'. In Python, this kind of Boolean logic serves as the foundation for conditional execution. Logical equality and inequality are denoted by "==" and "!=", respectively.
###Code
spam=False
email=True
print(spam, email)
###Output
False True
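###Markdown
Here are a few quick checks of the equality and inequality operators:
###Code
print(42 == 42)            # True
print(42 == 99)            # False
print(2 != 3)              # True
print('hello' == 'hello')  # True
print(42 == '42')          # False: an int never equals a str
###Output
_____no_output_____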
###Markdown
The 'and' and 'or' operators always take two Boolean values (or expressions), so they're considered binary operators. The 'and' operator evaluates an expression to 'True' if both Boolean values are True; otherwise, it evaluates to 'False'. Below is a truth table, which serves as a foundation for Boolean logic in Python.
###Code
print(True or True) # True
print(True or False) # True
print(False or True) # True
print(False or False) # False
print(True and False) # False
print(False and True) # False
print(not True) # False
print(not False) # True
###Output
True
True
True
False
False
False
False
True
###Markdown
There is one caveat about Boolean comparisons. Besides 'True' and 'False', there is also 'None'. In Python, the value 'None' represents the absence of a value; it is the only value of the 'NoneType' data type (other programming languages might call this value null, nil, or undefined). The 'or' operator returns the value of its first operand if that value is truthy in the Pythonic Boolean sense; otherwise it returns the value of its second operand. So when the 'or' operator meets 'None', things become interesting.
###Code
print(None or False) # False
print(False or None) # None
###Output
False
None
###Markdown
We now go over the usual control statements: the if-else statements, the 'for' loop, the 'while' loop, the 'break' statement and the 'continue' statement. Indentation is crucial in Pythonic programming. When it comes to control statements, make sure each block of logic is indicated with an indentation of 4 spaces. Below are some examples:
###Code
name='Alice'
age=0
if name == 'Alice':
print('Hi, Alice.')
elif age < 12:
print('You are not Alice, kiddo.')
elif age > 2000:
print('Unlike you, Alice is not an undead, immortal vampire.')
elif age > 100:
print('You are not Alice, grannie.')
name='Peter'
age=2532.43
if name == 'Alice':
print('Hi, Alice.')
elif age < 12:
print('You are not Alice, kiddo.')
elif age > 2000:
print('Unlike you, Alice is not an undead, immortal vampire.')
elif age > 100:
print('You are not Alice, grannie.')
name='Emily'
age=14
if name == 'Alice':
print('Hi, Alice.')
elif age < 12:
print('You are not Alice, kiddo.')
else:
print('You are neither Alice nor a little kid.')
spam = 0
while spam < 3:
print('Hello, world.')
spam = spam + 1
print('spam='+str(spam))
print('My name is:')
for i in range(3):
print('Jimmy: 3 Times (' + str(i) + ')')
###Output
My name is:
Jimmy: 3 Times (0)
Jimmy: 3 Times (1)
Jimmy: 3 Times (2)
###Markdown
The 'break' statement, like in C language, breaks out of the smallest enclosing 'for' or 'while' loop statements. Loop statements may have an 'else' clause; it is executed when the loop terminates through exhaustion of the list (with 'for') or when the condition becomes false (with 'while'), but not when the loop is terminated by a 'break' statement. Below are two examples:
###Code
var=9
while var > 0:
print('Current variable value is: ', var)
var = var -1
if var == 5:
break
print ("Goodbye!")
for n in range(2, 9):
for x in range(2, n):
if n % x == 0:
print(n, 'equals', x, '*', n//x)
break
else: # loop fell through without finding a factor
print(n, 'is a prime number')
###Output
2 is a prime number
3 is a prime number
4 equals 2 * 2
5 is a prime number
6 equals 2 * 3
7 is a prime number
8 equals 2 * 4
###Markdown
The 'continue' statement, also borrowed from the C language, continues with the next iteration of the loop.
###Code
var=7
while var > 0:
var = var -1
if var == 5:
continue
print ('Current variable value: ', var)
if var==2:
break
print ("Goodbye!")
a=1
b=10
import random # importing a module
for i in range(6):
print("random integer draw: ", random.randint(a, b)) # the random.randint() function finds a random integer value between the two integers specified
###Output
random integer draw: 4
random integer draw: 1
random integer draw: 2
random integer draw: 10
random integer draw: 10
random integer draw: 2
###Markdown
There is one caveat about the range() function used in a loop. When we use this kind of program logic, the most important thing is to pay attention to the starting and ending values. By default, range(n) for any natural number n starts at 0 and stops at n-1, i.e. the end value n itself is excluded.
###Code
for i in range(4): # the value starts on the value 0 and ends on the value 3
print(i)
###Output
0
1
2
3
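###Markdown
The range() function also accepts an explicit start value and a step, which is handy when the loop should not begin at 0:
###Code
for i in range(2, 10, 3): # starts at 2, stops before 10, steps by 3
    print(i)
###Output
_____no_output_____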
|
_doc/notebooks/python/tarabiscote.ipynb | ###Markdown
Explained programming exercisesA few exercises about copying lists, computation time and inheritance.
###Code
from jyquickhelper import add_notebook_menu
add_notebook_menu()
###Output
_____no_output_____
###Markdown
Copying listsThe function ``somme`` is supposed to concatenate all the lists contained in ``ens``. The returned result is indeed the desired one, but the function also modifies the list ``ens``. Why?
###Code
def somme(tab):
l = tab[0]
for i in range(1, len(tab)):
l += tab [i]
return l
ens = [[0,1],[2,3]]
print(somme(ens))
print(ens)
###Output
[0, 1, 2, 3]
[[0, 1, 2, 3], [2, 3]]
###Markdown
The problem comes from the fact that an assignment in *python* (second line of the function ``somme``) does not make a copy but creates a second identifier that refers to the same object. Here, ``l`` and ``tab[0]`` refer to the same list: modifying one modifies the other. This explains the result. To fix it, an explicit copy of ``tab[0]`` had to be made:
###Code
import copy ###### line added
def somme(tab):
l = copy.copy (tab[0]) ###### line modified
for i in range(1, len (tab)):
l += tab[i]
return l
ens = [[0,1],[2,3]]
print(somme(ens))
print(ens)
###Output
[0, 1, 2, 3]
[[0, 1], [2, 3]]
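###Markdown
Note that ``copy.copy`` only makes a shallow copy: the new list still shares its elements with the original one. When the elements are themselves mutable, for example lists of lists, ``copy.deepcopy`` may be needed. A small sketch:
###Code
import copy

a = [[0, 1], [2, 3]]
b = copy.copy(a)       # shallow copy: b is a new list, but b[0] is still a[0]
c = copy.deepcopy(a)   # deep copy: the inner lists are duplicated as well

b[0].append(99)
print(a)  # the inner list of a is modified too
print(c)  # the deep copy is unaffected
###Output
_____no_output_____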
###Markdown
In this case, it was possible to do without the copy by writing:
###Code
def somme(tab) :
l = [] ###### line modified
for i in range (0, len (tab)) : ###### line modified
l += tab [i]
return l
ens = [[0,1],[2,3]]
print(somme(ens))
print(ens)
###Output
[0, 1, 2, 3]
[[0, 1], [2, 3]]
###Markdown
Logic errorThe following program runs, but the result is not the expected one.
###Code
l = ["un", "deux", "trois", "quatre", "cinq"]
for i in range (0,len (l)) :
mi = i
for j in range (i, len (l)) :
if l[mi] < l [j] : mi = j
e = l [i]
l [mi] = l [i]
l [i] = e
l
###Output
_____no_output_____
###Markdown
This program is supposed to sort in **descending** alphabetical order. The problem occurs during the swap of the element ``l[i]`` with the element ``l[mi]``. We must therefore write:
###Code
l = ["un", "deux", "trois", "quatre", "cinq"]
for i in range (0,len (l)) :
mi = i
for j in range (i, len (l)) :
if l[mi] < l [j] : mi = j
e = l [mi] ######## line modified
l [mi] = l [i]
l [i] = e
l
###Output
_____no_output_____
###Markdown
Cost of an algorithmThe cost of an algorithm or of a program is the number of operations (additions, multiplications, tests, ...) it performs. It is expressed as a multiple of a function of the size of the data the program handles. For example: $O(n)$, $O(n^2)$, $O(n\ln n)$, ...
###Code
def moyenne (tab) :
s = 0.0
for x in tab :
s += x
return s / len (tab)
def variance (tab) :
s = 0.0
for x in tab :
t = x - moyenne (tab)
s += t * t
return s / len (tab)
l = [ 0,1,2, 2,3,1,3,0]
print(moyenne (l))
print(variance (l))
###Output
1.5
1.25
###Markdown
First of all, the cost of an algorithm is very often expressed as a multiple of the size of the data it processes. Here, that size is the length of the array ``tab``. For example, if we write ``n = len(tab)``, the cost of the function ``moyenne`` is $O(n)$ because this function sums the $n$ elements of the array.The function ``variance`` contains a small trap. It also contains a loop, but each of the $n$ passes through this loop calls the function ``moyenne``. The cost of the function ``variance`` is therefore $O(n^2)$.It is possible to speed up the program because the function ``moyenne`` returns the same result at every pass through the loop. It is enough to store its result in a variable before entering the loop, as follows:
###Code
def variance (tab) :
s = 0.0
m = moyenne (tab)
for x in tab :
t = x - m
s += t * t
return s / len (tab)
variance(l)
###Output
_____no_output_____
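###Markdown
To see the difference in practice, the two versions can be timed on a larger list. The sketch below reuses the functions ``moyenne`` and ``variance`` defined above; ``variance_n2`` is an illustrative name for the first, quadratic version, and the exact timings depend on the machine:
###Code
import timeit

def variance_n2(tab):
    s = 0.0
    for x in tab:
        t = x - moyenne(tab)   # moyenne is recomputed at every pass: O(n^2) overall
        s += t * t
    return s / len(tab)

big = list(range(2000))
print(timeit.timeit(lambda: variance_n2(big), number=1))  # quadratic version
print(timeit.timeit(lambda: variance(big), number=1))     # linear version
###Output
_____no_output_____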
###Markdown
The cost of the function ``variance`` is then $O(n)$. The cost of an algorithm can be evaluated more precisely and require a result such as $n^2 + 3n + 2$, but this level of detail is rarely useful for languages like *python*. The expression ``for x in tab:`` necessarily hides a test that would have to be taken into account if more precision were required. One would also have to turn to another programming language with a more precise syntax. For example, when designing a program in C or C++, two executable programs can be built from the same source code. The first one (the *debug* version), slow, is used for development: it includes extra checks that verify at each step that no error occurred (a division by zero, for example). Once we are sure the program works, the second version (the *release* version), faster, is built, with all these now useless development checks removed. *python* produces a slow program that includes many checks invisible to the programmer, but which detect errors sooner and favor quick development. It is not suited to processing information in large volumes and performs a multitude of hidden operations. Double inheritanceIn a program we need to create a class ``carre`` (square) and a class ``rectangle``. But we do not know which class should inherit from the other. In the first program, ``rectangle`` inherits from ``carre``.
###Code
class carre:
def __init__ (self, a):
self.a = a
def surface (self):
return self.a ** 2
class rectangle(carre):
def __init__ (self, a,b) :
carre.__init__(self,a)
self.b = b
def surface (self):
return self.a * self.b
rectangle(3, 4).surface()
###Output
_____no_output_____
###Markdown
In the second program, it is the class ``carre`` that inherits from the class ``rectangle``.
###Code
class rectangle :
def __init__ (self, a,b) :
self.a = a
self.b = b
def surface (self) :
return self.a * self.b
class carre (rectangle) :
def __init__ (self, a) :
rectangle.__init__ (self, a,a)
def surface (self) :
return self.a ** 2
carre(3).surface()
###Output
_____no_output_____
###Markdown
* In the second program, is it necessary to redefine the method ``surface`` in the class ``carre``?* Which direction of inheritance seems the most sensible to you, ``class rectangle(carre)`` or ``class carre(rectangle)``?* We want to add a class ``losange`` (rhombus). Is it simpler for ``rectangle`` to inherit from the class ``carre`` or the other way around in order to introduce the class ``losange``? Which additional attribute or attributes must be introduced in the class ``losange``? The principle of inheritance is that a class ``carre`` inheriting from the class ``rectangle`` inherits its attributes and methods. The area of a square is equal to that of a rectangle whose sides are equal; consequently, the method ``surface`` of the class returns the same value as that of the class ``rectangle``. It is therefore not necessary to redefine it.* Following the answer to the first question, it seems more logical to consider that ``carre`` inherits from ``rectangle``.* A rhombus is defined by a side and an angle, or by a side and the length of one of its diagonals, that is, two parameters in both cases. In the first question it seemed more logical for the most specific class to inherit from the most general one in order to benefit from its methods. To introduce the rhombus, it seems more logical to go from the most specific to the most general so that each class contains only the information it needs.
###Code
import math
class carre :
def __init__ (self, a) :
self.a = a
def surface (self) :
return self.a ** 2
class rectangle (carre) :
def __init__ (self, a,b) :
carre.__init__(self,a)
self.b = b
def surface (self) :
return self.a * self.b
class losange (carre) :
def __init__ (self, a,theta) :
carre.__init__(self,a)
self.theta = theta
def surface (self) :
return self.a * math.cos (self.theta) * self.a * math.sin (self.theta) * 2
losange(3, 1).surface()
###Output
_____no_output_____
###Markdown
The direction of inheritance depends on your needs. If the inheritance mainly concerns methods, it is preferable to go from the most general to the most specific; the first class then serves as an interface for all its children. If the inheritance mainly concerns attributes, it is preferable to go from the most specific to the most general. In the general case there is no inheritance direction more sensible than another, but for a given problem there often is one. Precision of computationsHere is an overview of the precision of computations for the calculation $1 - 10^{-n}$. The goal of the exercise is to show that the computer only performs approximate computations and that the precision of the result depends on the numerical method used.
###Code
x = 1.0
for i in range (0,19) :
x = x / 10
print(i, "\t", 1.0 - x, "\t", x, "\t", x **(0.5))
###Output
0 0.9 0.1 0.31622776601683794
1 0.99 0.01 0.1
2 0.999 0.001 0.03162277660168379
3 0.9999 0.0001 0.01
4 0.99999 1e-05 0.0031622776601683794
5 0.999999 1.0000000000000002e-06 0.001
6 0.9999999 1.0000000000000002e-07 0.000316227766016838
7 0.99999999 1.0000000000000002e-08 0.0001
8 0.999999999 1.0000000000000003e-09 3.1622776601683795e-05
9 0.9999999999 1.0000000000000003e-10 1e-05
10 0.99999999999 1.0000000000000003e-11 3.1622776601683796e-06
11 0.999999999999 1.0000000000000002e-12 1.0000000000000002e-06
12 0.9999999999999 1.0000000000000002e-13 3.1622776601683797e-07
13 0.99999999999999 1.0000000000000002e-14 1.0000000000000001e-07
14 0.999999999999999 1e-15 3.162277660168379e-08
15 0.9999999999999999 1.0000000000000001e-16 1e-08
16 1.0 1e-17 3.1622776601683795e-09
17 1.0 1e-18 1e-09
18 1.0 1.0000000000000001e-19 3.1622776601683795e-10
###Markdown
The program shows that the computer displays ``1`` when it computes $1-10^{-17}$. This means that the precision of computations in *python* is at best $10^{-16}$. It is even worse with *float*, the single-precision real coded on 4 bytes instead of 8 for *double*.
###Code
import numpy
x = numpy.float32(1.0)
for i in range (0,19) :
x = x / numpy.float32(10)
print(i, "\t", 1.0 - x, "\t", x, "\t", x **(0.5))
###Output
0 0.8999999985098839 0.1 0.3162277683729184
1 0.9900000002235174 0.01 0.0999999988824129
2 0.9990000000689179 0.0009999999 0.03162277551199656
3 0.9999000000098022 9.999999e-05 0.009999999509891484
4 0.9999900000011621 9.999999e-06 0.0031622774764217087
5 0.9999990000001162 9.999999e-07 0.0009999999418942008
6 0.999999900000013 9.999999e-08 0.0003162277453952373
7 0.999999990000001 9.999999e-09 9.999999525523424e-05
8 0.9999999990000001 9.999999e-10 3.162277439909038e-05
9 0.9999999999 9.999999e-11 9.99999937286775e-06
10 0.99999999999 9.999999e-12 3.162277516708525e-06
11 0.999999999999 9.999999e-13 9.999999437919884e-07
12 0.9999999999999 9.999999e-14 3.162277525279896e-07
13 0.99999999999999 9.999999e-15 9.999999488741863e-08
14 0.999999999999999 9.999999e-16 3.162277498494361e-08
15 0.9999999999999999 9.999999e-17 9.999999422567411e-09
16 1.0 9.999999e-18 3.162277503725911e-09
17 1.0 9.999999e-19 9.999999712080637e-10
18 1.0 1e-19 3.1622776099917643e-10
###Markdown
We write a class ``matrice_carree_2`` that represents a square matrix of dimension 2.
###Code
class matrice_carree_2 :
def __init__ (self, a,b,c,d) :
self.a, self.b, self.c, self.d = a,b,c,d
def determinant (self) :
return self.a * self.d - self.b * self.c
m1 = matrice_carree_2 (1.0,1e-6,1e-6,1.0)
m2 = matrice_carree_2 (1.0,1e-9,1e-9,1.0)
print(m1.determinant())
print(m2.determinant())
###Output
0.999999999999
1.0
###Markdown
The second value is therefore wrong. We now consider the matrix $M = \left(\begin{array}{cc} 1 & 10^{-9} \\ 10^{-9} & 1 \end{array} \right)$.We set $D = \det(M) = 1 - 10^{-18}$ and $T = tr(M) = 2$: $D$ is the determinant of $M$ and $T$ its trace. We know that the eigenvalues of $M$, denoted $\lambda_1, \lambda_2$, satisfy:$$\begin{array}{lll}D &=& \lambda_1 \lambda_2 \\T &=& \lambda_1 + \lambda_2\end{array}$$One checks that $(x - \lambda_1)(x - \lambda_2) = x^2 - x (\lambda_1 + \lambda_2) + \lambda_1 \lambda_2$. The eigenvalues of $M$ are therefore the solutions of the equation $x^2 - T x + D = 0$. The discriminant of this polynomial is $\Delta = T^2 - 4 D$. The eigenvalues of the matrix $M$ can therefore be expressed as: $$\begin{array}{lll}\lambda_1 &=& \frac{T - \sqrt{\Delta}}{2} \\\lambda_2 &=& \frac{T + \sqrt{\Delta}}{2} \end{array}$$We therefore add the following method to the class ``matrice_carree_2``:
###Code
class matrice_carree_2 :
def __init__ (self, a,b,c,d) :
self.a, self.b, self.c, self.d = a,b,c,d
def determinant (self) :
return self.a * self.d - self.b * self.c
def valeurs_propres (self) :
det = self.determinant ()
trace = self.a + self.d
delta = trace ** 2 - 4 * det
l1 = 0.5 * (trace - (delta ** (0.5)) )
l2 = 0.5 * (trace + (delta ** (0.5)) )
return l1,l2
m1 = matrice_carree_2 (1.0,1e-6,1e-6,1.0)
m2 = matrice_carree_2 (1.0,1e-9,1e-9,1.0)
print(m1.valeurs_propres())
print(m2.valeurs_propres())
###Output
(0.9999990000110609, 1.000000999988939)
(1.0, 1.0)
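###Markdown
For comparison, ``numpy.linalg.eigvals`` works directly on the matrix and does not go through the ill-conditioned expression $T^2 - 4D$, so it should recover eigenvalues close to $1 \pm 10^{-9}$ (a quick check, assuming ``numpy`` is available):
###Code
import numpy

M = numpy.array([[1.0, 1e-9],
                 [1e-9, 1.0]])
print(numpy.linalg.eigvals(M))  # expected: approximately 1 - 1e-9 and 1 + 1e-9
###Output
_____no_output_____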
|
homework/10_Probability_And_Simulations_Solutions.ipynb | ###Markdown
**Exercise 1 (20 points)**Implement a function that returns $n$ samples from a multivariate Gaussian distribution in C++ and wrap it for use in Python using `pybind11`. Use only standard C++ and the `Eigen` library. The function signature in Python is```pythondef mvnorm(mu, Sigma, n): """Returns n random samples from a multivariate Gaussian distribution. mu is a mean vector Sigma is a covariance matrix Returns an n by p matrix, where p is the dimension of the distribution. """```
###Code
%%file rng.cpp
<%
cfg['compiler_args'] = ['-std=c++11']
cfg['include_dirs'] = ['eigen']
setup_pybind11(cfg)
%>
#include <pybind11/pybind11.h>
#include <pybind11/eigen.h>
#include <Eigen/Cholesky>
#include <random>
namespace py = pybind11;
Eigen::MatrixXd mvn(Eigen::VectorXd mu, Eigen::MatrixXd sigma, int n) {
std::default_random_engine generator;
std::normal_distribution<double> distribution(0, 1);
Eigen::MatrixXd A(sigma.llt().matrixL());
int p = mu.size();
Eigen::MatrixXd Z(n, p);
for (int i=0; i<n; i++) {
Eigen::VectorXd v(p);
for (int j=0; j<p; j++) {
v[j] = distribution(generator);
}
Z.row(i) = mu + A * v;
}
return Z;
}
PYBIND11_PLUGIN(rng) {
pybind11::module m("rng", "auto-compiled c++ extension");
m.def("mvn", &mvn);
return m.ptr();
}
import cppimport
import numpy as np
rng = cppimport.imp("rng")
mu = 4.0*np.ones(2)
sigma = np.array([[1,0.6], [0.6, 1]])
n = 1000
x, y = rng.mvn(mu, sigma, n).T
sns.jointplot(x, y, kind='scatter')
pass
###Output
_____no_output_____
###Markdown
**Exercise 2 (20 points)**- Consider a sequence of $n$ Bernoulli trials with success probability $p$ per trial. A string of consecutive successes is known as a success *run*. Write a function that returns the counts for runs of length $k$ for each $k$ observed in a dictionary.For example: if the trials were [0, 1, 0, 1, 1, 0, 0, 0, 0, 1], the function should return ```{1: 2, 2: 1}```- What is the probability of observing at least one run of length 5 or more when $n=100$ and $p=0.5$? Estimate this from 100,000 simulated experiments. Is this more, less or equally likely than finding runs of length 7 or more when $p=0.7$?
###Code
from collections import Counter
def count_runs(xs):
"""Count number of success runs of length k."""
ys = []
count = 0
for x in xs:
if x == 1:
count += 1
else:
if count:
ys.append(count)
count = 0
if count:
ys.append(count)
return Counter(ys)
count_runs([0, 1, 0, 1, 1, 0, 0, 0, 0, 1])
def count_runs_alt(x):
"""Returns the counts for runs of length k for each observed in x.
This works but is slower.
"""
return Counter(len(s) for s in ''.join(map(str, x)).split('0') if s)
count_runs_alt([0, 1, 0, 1, 1, 0, 0, 0, 0, 1])
%timeit count_runs([0, 1, 0, 1, 1, 0, 0, 0, 0, 1])
%timeit count_runs_alt([0, 1, 0, 1, 1, 0, 0, 0, 0, 1])
def run_prob(expts, n, k, p):
xxs = np.random.choice([0,1], (expts, n), p=(1-p, p))
return sum(max(d.keys()) >= k for d in map(count_runs, xxs))/expts
run_prob(expts=100000, n=100, k=5, p=0.5)
run_prob(expts=100000, n=100, k=7, p=0.7)
###Output
_____no_output_____
###Markdown
**Exercise 3 (20 points)**.- Consider an unbiased random walk of length $n$ as simulated with a sequence of -1 or +1 values. If we start from 0, plot the distribution of *last* return times for 100,000 simulations with $n = 100$. The last return time is the last time the cumulative sum of the random walk is zero - this may be the starting point if the walk never returns to zero in 100 steps. - Do a maximum likelihood fit of a beta distribution to the set of last return times using the `beta.fit()` function from `scipy.stats`. Set the lower bound (loc) = 0 and the upper bound (scale) = 100 for plotting. Superimpose the fitted beta PDF on the normalized histogram of last return times.
###Code
n = 100
k = 100000
returns = np.zeros(k).astype('int')
for i in range(k):
x = np.random.choice([-1,1], n)
y = np.r_[0, np.cumsum(x)]
returns[i] = np.nonzero(y == 0)[0][-1]
plt.hist(returns, normed=True)
pass
from scipy.stats import beta
a, b, loc, scale = beta.fit(returns)
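# For a symmetric random walk the last return time follows the arcsine law,
# so the fitted shape parameters should come out close to a = b = 0.5.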
x = np.linspace(0, 100, 100)
plt.plot(x, beta(a=a, b=b, loc=0, scale=100).pdf(x), linestyle='dashed', color='blue')
plt.hist(returns, histtype='step', normed=True, linewidth=1)
pass
###Output
_____no_output_____
###Markdown
**Exercise 4 (20 points)**The Cauchy distribution is given by $$f(x) = \frac{1}{\pi (1 + x^2)}, \ \ -\infty \lt x \lt \infty $$- Integrate the tail probability $P(X > 2)$ using Monte Carlo 1. Sampling from the Cauchy distribution directly 2. Sampling from the uniform distribution using an appropriate change of variables- Plot the 95% CI for the Monte Carlo estimates for n = 1 to 1000 1. For sampling from the Cauchy distribution using mulitple Monte Carlo sequences 2. For sampling from the uniform distribution using bootstrap samples of a single Monte Carlo sequence
###Code
from scipy import stats
# Direct
n = 1000
sum(stats.cauchy().rvs(n) > 2)/n
# After change of variables
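# Substituting x = 2/u in P(X > 2) = integral_2^inf dx / (pi * (1 + x^2)) gives
# integral_0^1 2 / (pi * (u^2 + 4)) du, i.e. the mean of 2 / (pi * (u^2 + 4)) for u ~ Uniform(0, 1).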
x = stats.uniform().rvs(n)
np.mean(2/(np.pi*(x**2+4)))
# Check (not required)
1 - stats.cauchy.cdf(2)
###Output
_____no_output_____
###Markdown
Sampling from the Cauchy distribution using mulitple Monte Carlo sequences
###Code
n=1000
reps = 1000
samples = stats.cauchy().rvs((n, reps))
# repeat multiple Monte Carlo sequences
ys = np.zeros((n, reps))
for k in range(1, n+1):
ys[k-1] = np.sum(samples[:k, :] > 2, axis=0)/k
upper, lower = np.percentile(ys, [2.5, 97.5], axis=1)
plt.plot(np.arange(1, n+1)[:, None], ys, c='grey', alpha=0.02)
plt.plot(np.arange(1, n+1), ys[:, 0], c='red', linewidth=1) # one path
plt.plot(np.arange(1, n+1), upper, 'b', np.arange(1, n+1), lower, 'b')
pass
###Output
_____no_output_____
###Markdown
Sampling from the uniform distribution using bootstrap samples of a single Monte Carlo sequence
###Code
n=1000
reps = 1000
samples = stats.uniform().rvs(n)
samples = 2/(np.pi*(samples**2+4))
# generate bootsrap samples
xb = np.random.choice(samples, (n, reps), replace=True)
yb = 1/np.arange(1, n+1)[:, None] * np.cumsum(xb, axis=0)
upper, lower = np.percentile(yb, [2.5, 97.5], axis=1)
plt.plot(np.arange(1, n+1)[:, None], yb, c='grey', alpha=0.02)
plt.plot(np.arange(1, n+1), yb[:, 0], c='red', linewidth=1)
plt.plot(np.arange(1, n+1), upper, 'b', np.arange(1, n+1), lower, 'b')
pass
###Output
_____no_output_____
###Markdown
**Exercise 5 (20 points)**.Estimate the following integral using Monte Carlo integration$$\int_{-\infty}^{\infty} x^2 \frac{1}{2}e^{-|x|} dx$$Hint: See notes on importance sampling and figure.
###Code
# Use importance sampling with a normal distribution as the proposal
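# The integrand is x^2 * p(x) with p the double exponential (Laplace) density, and
# E_p[x^2] = E_q[x^2 * p(x) / q(x)] for any proposal q that covers the support;
# here q = Normal(0, scale=2), giving the sample-average estimator below.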
def p(x):
"""Double exponential density."""
return 0.5*np.exp(-np.abs(x))
n = 1000000
x = stats.norm(0, 2).rvs(n)
np.mean(x**2 * p(x)/stats.norm(0, 2).pdf(x))
# Check (not required)
from sympy import symbols, integrate, exp, oo
x = symbols('x')
integrate(x**2 * exp(-x), (x, 0, oo))
###Output
_____no_output_____ |
docs/geemap_complete/webapps/template.ipynb | ###Markdown
[](https://colab.research.google.com/github/giswqs/GEE-Courses/blob/master/docs/geemap_complete/webapps/template.ipynb)Uncomment and execute the following code block to install geemap if needed.
###Code
# !pip install geemap
###Output
_____no_output_____ |
Arquivo aulas/Modulo 6/MOD6-Aula3(for e if).ipynb | ###Markdown
for + if Structure:
###Code
for item in lista:
    if condicao:
        do something
    else:
        do something else
###Output
_____no_output_____
###Markdown
Let's say we are analyzing the sales target of several employees of a company. The sales target is 1000 reais in 1 day.We have a list with the sales of all the employees, and we want to calculate the percentage of people who hit the target.
###Code
vendas = [1200, 300, 800, 1500, 1900, 2750, 400, 20, 23, 70, 90, 80, 1100, 999, 900, 880, 870, 50, 1111, 120, 300, 450, 800]
meta = 1000
qtd_meta = 0
for venda in vendas:
if venda >= meta:
qtd_meta += 1
qtd_funci = len(vendas)
print('O percentual de colaboradores que bateram a meta foi {:.2%}'.format(qtd_meta/qtd_funci))
###Output
O percentual de colaboradores que bateram a meta foi 26.09%
|
course-1/week-2/my1stNN.ipynb | ###Markdown
my1stNN.ipynb (or MNIST digits classification with TensorFlow) This task will be submitted for peer review, so make sure it contains all the necessary outputs!
###Code
import numpy as np
from sklearn.metrics import accuracy_score
from matplotlib import pyplot as plt
%matplotlib inline
import tensorflow as tf
print("We're using TF", tf.__version__)
import sys
sys.path.append("..")
import grading
import matplotlib_utils
from importlib import reload
reload(matplotlib_utils)
import keras_utils
from keras_utils import reset_tf_session
###Output
_____no_output_____
###Markdown
Look at the dataIn this task we have 50000 28x28 images of digits from 0 to 9.We will train a classifier on this data.
###Code
import preprocessed_mnist
X_train, y_train, X_val, y_val, X_test, y_test = preprocessed_mnist.load_dataset()
# X contains rgb values divided by 255
print("X_train [shape %s] sample patch:\n" % (str(X_train.shape)), X_train[1, 15:20, 5:10])
print("A closeup of a sample patch:")
plt.imshow(X_train[1, 15:20, 5:10], cmap="Greys")
plt.show()
print("And the whole sample:")
plt.imshow(X_train[1], cmap="Greys")
plt.show()
print("y_train [shape %s] 10 samples:\n" % (str(y_train.shape)), y_train[:10])
###Output
_____no_output_____
###Markdown
Linear modelYour task is to train a linear classifier $\vec{x} \rightarrow y$ with SGD using TensorFlow.You will need to calculate a logit (a linear transformation) $z_k$ for each class: $$z_k = \vec{x} \cdot \vec{w_k} + b_k \quad k = 0..9$$And transform logits $z_k$ to valid probabilities $p_k$ with softmax: $$p_k = \frac{e^{z_k}}{\sum_{i=0}^{9}{e^{z_i}}} \quad k = 0..9$$We will use a cross-entropy loss to train our multi-class classifier:$$\text{cross-entropy}(y, p) = -\sum_{k=0}^{9}{\log(p_k)[y = k]}$$ where $$[x]=\begin{cases} 1, \quad \text{if $x$ is true} \\ 0, \quad \text{otherwise} \end{cases}$$Cross-entropy minimization pushes $p_k$ close to 1 when $y = k$, which is what we want.Here's the plan:* Flatten the images (28x28 -> 784) with `X_train.reshape((X_train.shape[0], -1))` to simplify our linear model implementation* Use a matrix placeholder for flattened `X_train`* Convert `y_train` to one-hot encoded vectors that are needed for cross-entropy* Use a shared variable `W` for all weights (a column $\vec{w_k}$ per class) and `b` for all biases.* Aim for ~0.93 validation accuracy
###Code
X_train_flat = X_train.reshape((X_train.shape[0], -1))
print(X_train_flat.shape)
X_val_flat = X_val.reshape((X_val.shape[0], -1))
print(X_val_flat.shape)
import keras # we use keras only for keras.utils.to_categorical
y_train_oh = keras.utils.to_categorical(y_train, 10)
y_val_oh = keras.utils.to_categorical(y_val, 10)
print(y_train_oh.shape)
print(y_train_oh[:3], y_train[:3])
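# A quick NumPy illustration of the softmax / cross-entropy formulas above
# (a sketch on made-up logits; z and p below are illustrative names only):
z = np.random.randn(10)                  # fake logits z_k for 10 classes
p = np.exp(z) / np.sum(np.exp(z))        # softmax probabilities p_k
print("softmax sums to:", p.sum())       # should be 1.0
print("cross-entropy if the true class is 3:", -np.log(p[3]))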
# run this again if you remake your graph
s = reset_tf_session()
# Model parameters: W and b
W = ### YOUR CODE HERE ### tf.get_variable(...) with shape[0] = 784
b = ### YOUR CODE HERE ### tf.get_variable(...)
# Placeholders for the input data
input_X = ### YOUR CODE HERE ### tf.placeholder(...) for flat X with shape[0] = None for any batch size
input_y = ### YOUR CODE HERE ### tf.placeholder(...) for one-hot encoded true labels
# Compute predictions
logits = ### YOUR CODE HERE ### logits for input_X, resulting shape should be [input_X.shape[0], 10]
probas = ### YOUR CODE HERE ### apply tf.nn.softmax to logits
classes = ### YOUR CODE HERE ### apply tf.argmax to find a class index with highest probability
# Loss should be a scalar number: average loss over all the objects with tf.reduce_mean().
# Use tf.nn.softmax_cross_entropy_with_logits on top of one-hot encoded input_y and logits.
# It is identical to calculating cross-entropy on top of probas, but is more numerically friendly (read the docs).
loss = ### YOUR CODE HERE ### cross-entropy loss
# Use a default tf.train.AdamOptimizer to get an SGD step
step = ### YOUR CODE HERE ### optimizer step that minimizes the loss
s.run(tf.global_variables_initializer())
BATCH_SIZE = 512
EPOCHS = 40
# for logging the progress right here in Jupyter (for those who don't have TensorBoard)
simpleTrainingCurves = matplotlib_utils.SimpleTrainingCurves("cross-entropy", "accuracy")
for epoch in range(EPOCHS): # we finish an epoch when we've looked at all training samples
batch_losses = []
for batch_start in range(0, X_train_flat.shape[0], BATCH_SIZE): # data is already shuffled
_, batch_loss = s.run([step, loss], {input_X: X_train_flat[batch_start:batch_start+BATCH_SIZE],
input_y: y_train_oh[batch_start:batch_start+BATCH_SIZE]})
# collect batch losses, this is almost free as we need a forward pass for backprop anyway
batch_losses.append(batch_loss)
train_loss = np.mean(batch_losses)
val_loss = s.run(loss, {input_X: X_val_flat, input_y: y_val_oh}) # this part is usually small
train_accuracy = accuracy_score(y_train, s.run(classes, {input_X: X_train_flat})) # this is slow and usually skipped
valid_accuracy = accuracy_score(y_val, s.run(classes, {input_X: X_val_flat}))
simpleTrainingCurves.add(train_loss, val_loss, train_accuracy, valid_accuracy)
###Output
_____no_output_____ |
2_Octubre_1358_2.ipynb | ###Markdown
Meteorology in MexicoThe Sistema Meteorológico Nacional has kept rainfall records since 1985 and makes them available to the public through the site datos.gob.mx.At the following link there are 2 comma-separated (CSV) files corresponding to the monthly and annual rainfall records for the years 2017 and 2018. There are 13 columns, corresponding to the monthly averages and the annual average. There are 33 rows, corresponding to each of the 32 states and to the national level.https://drive.google.com/file/d/1lamkxgq2AsXRu81Y4JTNXLVld4og7nxt/view?usp=sharing Problem statementDesign an algorithm and program it so that it:1. Asks for the year, the state and the month from the keyboard and, based on that information:- displays on screen the average of that month in that state for the selected year.- displays on screen the annual average of the selected state.- displays the sum of the 12 months of that state for the selected year.2. Finds the month with the most rainfall across all states during those two years. Print the year, state and month.3. Finds the month with the least rainfall over the two years. Print the year, state and month.
###Code
import csv
import pandas as pd
anio = int(input("Seleccione el año: "))
mes = input ("Seleccione el mes(Con tres letras): ")
mes = mes.upper()
estado = input("Seleccione el estado: ")
estado= estado.upper()
m=0
e=0
dat=[]
bandera = False
def prom_mes(dat):
for i in range (0,14):
if mes == dat[1][i]:
m = i
for j in range (0, 35):
if estado == dat [j][0]:
e =j
bandera = True
if bandera == True:
print("Promedio del mes: "+ str (estado) + " " +str (dat[e][m]))
else:
print("El estado no se encuentra")
def prom_an(dat):
aux =0
for i in range (0,35):
if estado == dat[i][0]:
aux =i
bandera = True
if bandera == True:
print("Promedio anual de: "+" "+ str (estado) + " " +str (dat[aux][13]))
else:
print("El estado no se encuentra")
def sum(dat):
aux =0
auxEs=0
for i in range (0,35):
if estado == dat[i][0]:
auxEs=i
for j in range (1,13):
aux += float(dat[auxEs][j])
bandera = True
if bandera == True:
print("Suma por mes: "+ str (estado) + " " +str (aux))
else:
print("El estado no se encuentra")
def mas(dat):
dataux=[]
aux=0
aux2=0
mes=''
mes2=''
est=''
est2=''
if anio == 2018:
with open('/content/2017Precip.csv', 'r', encoding='latin') as file:
reader = csv.reader(file)
for row in reader:
dataux.append(row)
elif anio == 2017:
with open('/content/2018Precip.csv', 'r', encoding='latin') as file:
reader = csv.reader(file)
for row in reader:
dataux.append(row)
for i in range (1,13):
for j in range(2, 34):
if float(aux) < float(dat[j][i]):
aux = dat[j][i]
mes = dat[1][i]
est = dat[j][0]
if float(aux2) < float(dataux[j][i]):
aux2 = dataux[j][i]
mes2 = dataux[1][i]
est2 = dataux[j][0]
if aux>aux2:
print("El valor mayor es: " +dat[0][0]+" " + est + " "+ mes + " "+ aux)
else:
print("El valor mayor es: " +dataux[0][0]+" " + est2 + " "+ mes2 + " "+ aux2)
def menos(dat):
dataux=[]
aux=100
aux2=100
mes=''
mes2=''
est=''
est2=''
if anio == 2018:
with open('/content/2017Precip.csv', 'r', encoding='latin') as file:
reader = csv.reader(file)
for row in reader:
dataux.append(row)
elif anio == 2017:
with open('/content/2018Precip.csv', 'r', encoding='latin') as file:
reader = csv.reader(file)
for row in reader:
dataux.append(row)
for i in range (1,13):
for j in range(2, 34):
if float(aux) > float(dat[j][i]):
aux = dat[j][i]
mes = dat[1][i]
est = dat[j][0]
if float(aux2) > float(dataux[j][i]):
aux2 = dataux[j][i]
mes2 = dataux[1][i]
est2 = dataux[j][0]
if aux<aux2:
print("El valor menor es: " +dat[0][0]+ " " + est + " "+ mes + " "+ aux)
else:
print("El valor menor es: " +dataux[0][0]+ " " + est2 + " "+ mes2 + " "+ aux2)
if anio == 2017:
with open('/content/2017Precip.csv', 'r', encoding='latin') as file:
reader = csv.reader(file)
for row in reader:
dat.append(row)
if anio == 2018:
with open('/content/2018Precip.csv', 'r', encoding='latin') as file:
reader = csv.reader(file)
for row in reader:
dat.append(row)
prom_mes(dat)
prom_an(dat)
sum(dat)
mas(dat)
menos(dat)
###Output
Seleccione el año: 2017
Seleccione el mes(Con tres letras): sep
Seeccione el estado: morelos
Promedio del mes: MORELOS 441.1
Promedio anual de: MORELOS 1,949.5
Suma por mes: MORELOS 1949.6000000000001
El valor mayor es: 2018 MORELOS JUN 565.0
El valor menor es: 2018 AGUASCALIENTES MAR 0.0
|
Auto Insurance EDA/Python-Insurance_EDA.ipynb | ###Markdown
**Auto Insurance; Fraud Claim Prediction Model**Insurance fraud is a deliberate deception perpetrated against or by an insurance company or agent for the purpose of financial gain. Fraud may be committed at different points in the transaction by applicants, policyholders, third-party claimants, or professionals who provide services to claimants. Insurance agents and company employees may also commit insurance fraud. Common frauds include “padding,” or inflating claims; misrepresenting facts on an insurance application; submitting claims for injuries or damage that never occurred; and staging accidents.Auto insurance fraud ranges from misrepresenting facts on insurance applications and inflating insurance claims to staging accidents and submitting claim forms for injuries or damage that never occurred, to false reports of stolen vehicles.***Source***: https://www.iii.org/article/background-on-insurance-fraud **Problem Statement**“Our objective is to create an interface for an insurance company, with a machine learning model in the backend, to identify fraudulent claims in the automobile industry.”**H0**: "*Given data is genuine*"**H1**: "*Given data is fraud*"
###Code
import numpy as np #Linear Algebra library
import pandas as pd #Data Analytical library
import matplotlib.pyplot as plt #Data Visualisation Library
import seaborn as sns #Statistical Visualisation Library
###Output
_____no_output_____
###Markdown
**Loading the Data**Using the pandas library, we import the insurance dataset.***Source***: https://www.kaggle.com/roshansharma/insurance-claim
###Code
df=pd.read_csv("/Users/manu/SJC/Sem 2/MVS-Python/insurance_claims.csv")
df_copy = df.copy()
pd.set_option('display.max_columns', 100)
df_copy
df_copy.shape
df_copy.info()
df_copy.describe().transpose()
df_copy.select_dtypes(include='object').describe().transpose()
###Output
_____no_output_____
###Markdown
Data Pre Processing- Data cleaning- Data Transformation- Data Integration 1. Data Cleaning
###Code
#replacing '?' with NAN
df_copy = df_copy[df_copy != '?']
df_copy.shape
df_copy.isnull().sum()
#replacing null values with mode of the respective columns
df_copy['collision_type'].fillna(df_copy['collision_type'].mode()[0],inplace = True)
df_copy['property_damage'].fillna(df_copy['property_damage'].mode()[0],inplace = True)
df_copy['police_report_available'].fillna(df_copy['police_report_available'].mode()[0],inplace = True)
df_copy.isnull().sum()
df_copy.isna().sum()
###Output
_____no_output_____
###Markdown
2. Data Transformation
###Code
df_copy['policy_bind_date'] = pd.to_datetime(df_copy['policy_bind_date'])
df_copy['incident_date'] = pd.to_datetime(df_copy['incident_date'])
###Output
_____no_output_____
###Markdown
EDA - Descriptive Statistics- Outlier Analysis- Data Visualisation 1. Descriptive Statistics
###Code
df_copy.describe().transpose()
df_copy.select_dtypes(include='object').describe().transpose()
###Output
_____no_output_____
###Markdown
2. Outlier Analysis
###Code
df_copy.plot.box(figsize = (16,6))
plt.xticks(rotation = 90)
#import sklearn.preprocessing as pre
#lb=pre.LabelEncoder()
#df_copy['fraud_reported']=lb.fit_transform(df_copy['fraud_reported'])
#correlation matrix for numerical variables
plt.figure(figsize = (10, 10))
sns.heatmap(df_copy.corr(), annot = True, cmap = 'Blues')
plt.title('Correlation matrix for numerical features')
###Output
_____no_output_____
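###Markdown
The box plot above shows several numeric columns with points beyond the whiskers. As a rough follow-up, the usual 1.5 * IQR rule gives a quick count of potential outliers per numeric column (a sketch; the threshold is a common convention, not tuned to this dataset):
###Code
# count values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] for every numeric column
num_cols = df_copy.select_dtypes(include='number')
q1 = num_cols.quantile(0.25)
q3 = num_cols.quantile(0.75)
iqr = q3 - q1
outlier_mask = (num_cols < (q1 - 1.5 * iqr)) | (num_cols > (q3 + 1.5 * iqr))
print(outlier_mask.sum().sort_values(ascending=False))
###Output
_____no_output_____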
###Markdown
3. Data Visualisation
###Code
#function for crosstabs
def cross_tab(x,y):
crtab = pd.crosstab(df_copy[x], df_copy[y])
return crtab
#Number of fraud claim
p = df_copy['fraud_reported'].value_counts()
print(p)
df_copy['fraud_reported'].value_counts().plot.bar()
sns.countplot(x=df_copy['fraud_reported'])
#Age v/s fraud reported
table=pd.crosstab(df_copy.age,df_copy.fraud_reported)
stacked_data = table.apply(lambda x: x*100/sum(x), axis=1)
stacked_data.plot(kind="bar", stacked=True)
#insured sex v/s fraud reported
cross_tab('insured_sex','fraud_reported')
sns.catplot(data=df_copy,x='insured_sex',hue='fraud_reported',kind='count')
#policy state v/s fraud reported
cross_tab('policy_state','fraud_reported')
sns.catplot(data=df_copy,x='policy_state',hue='fraud_reported',kind='count')
#Hour of the day in which the incident happened
table1=pd.crosstab(df_copy['incident_hour_of_the_day'],df_copy['fraud_reported'])
stacked_data = table1.apply(lambda x: x*100/sum(x), axis=1)
stacked_data.plot(kind="bar", stacked=True)
#Hour of the day in which the incident happened
cross_tab('incident_hour_of_the_day','fraud_reported')
sns.countplot(data = df_copy,x ='incident_hour_of_the_day',hue='fraud_reported')
plt.xticks(rotation = 90)
#incident type v/s fraud reported
cross_tab('incident_type','fraud_reported')
sns.catplot(data=df_copy,x='incident_type',hue='fraud_reported',kind='count')
plt.xticks(rotation = 90)
#insured education level v/s fraud reported
cross_tab('insured_education_level','fraud_reported')
sns.catplot(data=df_copy,x='insured_education_level',hue='fraud_reported',kind='count')
plt.xticks(rotation = 90)
#insured occupation v/s fraud reported
cross_tab('insured_occupation','fraud_reported')
sns.catplot(data=df_copy,x='insured_occupation',hue='fraud_reported',kind='count')
plt.xticks(rotation = 90)
#insured hobbies v/s fraud reported
cross_tab('insured_hobbies','fraud_reported')
sns.catplot(data=df_copy,x='insured_hobbies',hue='fraud_reported',kind='count')
plt.xticks(rotation = 90)
#insured relationship v/s fraud reported
cross_tab('insured_relationship','fraud_reported')
sns.catplot(data=df_copy,x='insured_relationship',hue='fraud_reported',kind='count')
plt.xticks(rotation = 90)
#incident type v/s fraud reported
cross_tab('incident_type','fraud_reported')
#incident type v/s fraud reported
sns.catplot(data=df_copy,x='incident_type',hue='fraud_reported',kind='count')
plt.xticks(rotation = 90)
#collision type v/s fraud reported
cross_tab('collision_type','fraud_reported')
sns.catplot(data=df_copy,x='collision_type',hue='fraud_reported',kind='count')
plt.xticks(rotation = 90)
#incident severity v/s fraud reported
cross_tab('incident_severity','fraud_reported')
sns.catplot(data=df_copy,x='incident_severity',hue='fraud_reported',kind='count')
plt.xticks(rotation = 90)
sns.catplot(data =df_copy, x="incident_severity", y="property_claim",hue='fraud_reported',kind='box')
plt.xticks(rotation = 90)
#authorities contacted v/s fraud reported
cross_tab('authorities_contacted','fraud_reported')
sns.catplot(data=df_copy,x='authorities_contacted',hue='fraud_reported',kind='count')
plt.xticks(rotation = 90)
cross_tab('police_report_available','fraud_reported')
sns.catplot(data=df_copy,x='police_report_available',hue='fraud_reported',kind='count')
plt.xticks(rotation = 90)
#incident state v/s fraud reported
cross_tab('incident_state','fraud_reported')
sns.catplot(data=df_copy,x='incident_state',hue='fraud_reported',kind='count')
#incident witnesses v/s fraud reported
cross_tab('witnesses','fraud_reported')
sns.catplot(data=df_copy,x='witnesses',hue='fraud_reported',kind='count')
plt.xticks(rotation = 90)
#incident type v/s total claim amount WRT fraud reported
sns.catplot(data=df_copy,x='incident_type',y='total_claim_amount',hue='fraud_reported',kind='bar')
plt.xticks(rotation = 90)
plt.figure(figsize = (15, 5))
df_temp = df_copy[df_copy.fraud_reported == 'Y']
sns.set_style('darkgrid')
sns.countplot(x = 'auto_year', data = df_temp)
plt.ylabel('No. of fraud reported')
plt.title('Fraud reported VS vechile year')
###Output
_____no_output_____ |
notebooks/3_Distributions_in_text.ipynb | ###Markdown
Distributions in Text >- [Corpora in NLTK](#Corpora-in-NLTK)>>>- [Word Frequencies](#Word-Frequencies)>>>- [The Zipf's Law](#The-Zipf's-Law) ---
###Code
%matplotlib inline
import matplotlib as plt
###Output
_____no_output_____
###Markdown
Corpora in NLTKThe `nltk.corpus` package defines a collection of `corpus reader` classes, which can be used to access the contents of a diverse set of corpora:
###Code
import nltk.corpus
###Output
_____no_output_____
###Markdown
Some of the `corpus reader` classes:
###Code
# The Brown corpus
print(nltk.corpus.brown)
# The Penn Treebank Corpus
print(nltk.corpus.treebank)
# The Name Genders Corpus
print(nltk.corpus.names)
# The Gutenberg Corpus
print(nltk.corpus.gutenberg)
###Output
<CategorizedTaggedCorpusReader in '.../corpora/brown' (not loaded yet)>
<BracketParseCorpusReader in '.../corpora/treebank/combined' (not loaded yet)>
<WordListCorpusReader in '.../corpora/names' (not loaded yet)>
<PlaintextCorpusReader in '.../corpora/gutenberg' (not loaded yet)>
###Markdown
Corpus ReadersEach corpus reader provides a variety of methods implementing a wide range of functionalities, depending on the format of the corpus. - Want to know more about a corpus? Use the method `readme()` to access the corpus `ReadMe`
###Code
# Do you remember the Brown corpus?
print(nltk.corpus.brown.readme())
###Output
BROWN CORPUS
A Standard Corpus of Present-Day Edited American
English, for use with Digital Computers.
by W. N. Francis and H. Kucera (1964)
Department of Linguistics, Brown University
Providence, Rhode Island, USA
Revised 1971, Revised and Amplified 1979
http://www.hit.uib.no/icame/brown/bcm.html
Distributed with the permission of the copyright holder,
redistribution permitted.
###Markdown
- Most plaintext and tagged corpora support methods to read the corpus as raw text, a list of words, a list of sentences, or a list of paragraphs.
###Code
# `nltk.corpus.gutenberg` is a subset of the full Project Gutenberg corpus, starting with Jane Austen's 'Emma'
# Accessing corpus as raw text
print(nltk.corpus.gutenberg.raw()[:289])
# list of words
print(nltk.corpus.gutenberg.words()[:60])
# list of sentences
print(nltk.corpus.gutenberg.sents()[:4])
# list of paragraphs
print(nltk.corpus.gutenberg.paras()[:4])
###Output
[[['[', 'Emma', 'by', 'Jane', 'Austen', '1816', ']']], [['VOLUME', 'I']], [['CHAPTER', 'I']], [['Emma', 'Woodhouse', ',', 'handsome', ',', 'clever', ',', 'and', 'rich', ',', 'with', 'a', 'comfortable', 'home', 'and', 'happy', 'disposition', ',', 'seemed', 'to', 'unite', 'some', 'of', 'the', 'best', 'blessings', 'of', 'existence', ';', 'and', 'had', 'lived', 'nearly', 'twenty', '-', 'one', 'years', 'in', 'the', 'world', 'with', 'very', 'little', 'to', 'distress', 'or', 'vex', 'her', '.']]]
###Markdown
- Most corpora are composed of a set of files, whose ids can be retrieved by using the `fileids()` method
###Code
print(nltk.corpus.gutenberg.fileids())
###Output
['austen-emma.txt', 'austen-persuasion.txt', 'austen-sense.txt', 'bible-kjv.txt', 'blake-poems.txt', 'bryant-stories.txt', 'burgess-busterbrown.txt', 'carroll-alice.txt', 'chesterton-ball.txt', 'chesterton-brown.txt', 'chesterton-thursday.txt', 'edgeworth-parents.txt', 'melville-moby_dick.txt', 'milton-paradise.txt', 'shakespeare-caesar.txt', 'shakespeare-hamlet.txt', 'shakespeare-macbeth.txt', 'whitman-leaves.txt']
###Markdown
- The above methods accept a single file name (or a list of file names) to restrict their scope:
###Code
# the first 5 sentences of Alice in Wonderland
print(nltk.corpus.gutenberg.sents("carroll-alice.txt")[:5])
###Output
[['[', 'Alice', "'", 's', 'Adventures', 'in', 'Wonderland', 'by', 'Lewis', 'Carroll', '1865', ']'], ['CHAPTER', 'I', '.'], ['Down', 'the', 'Rabbit', '-', 'Hole'], ['Alice', 'was', 'beginning', 'to', 'get', 'very', 'tired', 'of', 'sitting', 'by', 'her', 'sister', 'on', 'the', 'bank', ',', 'and', 'of', 'having', 'nothing', 'to', 'do', ':', 'once', 'or', 'twice', 'she', 'had', 'peeped', 'into', 'the', 'book', 'her', 'sister', 'was', 'reading', ',', 'but', 'it', 'had', 'no', 'pictures', 'or', 'conversations', 'in', 'it', ',', "'", 'and', 'what', 'is', 'the', 'use', 'of', 'a', 'book', ",'", 'thought', 'Alice', "'", 'without', 'pictures', 'or', 'conversation', "?'"], ['So', 'she', 'was', 'considering', 'in', 'her', 'own', 'mind', '(', 'as', 'well', 'as', 'she', 'could', ',', 'for', 'the', 'hot', 'day', 'made', 'her', 'feel', 'very', 'sleepy', 'and', 'stupid', '),', 'whether', 'the', 'pleasure', 'of', 'making', 'a', 'daisy', '-', 'chain', 'would', 'be', 'worth', 'the', 'trouble', 'of', 'getting', 'up', 'and', 'picking', 'the', 'daisies', ',', 'when', 'suddenly', 'a', 'White', 'Rabbit', 'with', 'pink', 'eyes', 'ran', 'close', 'by', 'her', '.']]
###Markdown
- Categorized corpora support the `categories()` method, and the category labels can be used to restrict the scope of the text accessing methods:
###Code
print(nltk.corpus.brown.categories())
# fileids of category 'adventure'
print(nltk.corpus.brown.fileids('adventure'))
# the editorial section of the Brown corpus is composed by fewer words than the news one
print(len(nltk.corpus.brown.words(categories = "editorial")))
print(len(nltk.corpus.brown.words(categories = "news")))
###Output
61604
100554
###Markdown
- Some corpora may have overlapping categories, so being able to map between category names and filenames can be useful:
###Code
# categories of the Reuters corpus
print(nltk.corpus.reuters.categories())
# which files of the Reuters corpus belong to category "gold"?
print(nltk.corpus.reuters.fileids("gold"))
# what are the topics of the file called "test/16009"?
print(nltk.corpus.reuters.categories("test/16009"))
# the raw file
nltk.corpus.reuters.raw("test/16009")
###Output
_____no_output_____
###Markdown
Loading your Corpus

If you want to access a corpus that is not part of the NLTK distribution, or to access an existing corpus by using a customized reader (e.g. a customized tokenizer), you may want to create a new corpus reader. Different corpus readers have different constructor signatures. 

For instance, the folder `./data/gutenberg-extension` contains a selection of 5 (additional) books from the Gutenberg collection, plus a readme file. We can treat these text files as an NLTK corpus by using the `PlaintextCorpusReader()` method to import them. Arguments of the `PlaintextCorpusReader()` method are the root of the corpus folder plus a list of files (e.g. `["austen-pride.txt", "doyle-sherlock.txt"]`) or a pattern matching fileids.
###Code
# all (txt) files
gutenberg_extension = nltk.corpus.PlaintextCorpusReader("./data/gutenberg-extension", '.*.txt')
## all files containing 'austen'
# gutenberg_extension = nltk.corpus.PlaintextCorpusReader("./data/gutenberg-extension", 'austen.*')
# note that the README file is not part of the corpus...
gutenberg_extension.fileids()
# ... yet it has been handled in a special way
print(gutenberg_extension.readme())
###Output
Project Gutenberg Extension
http://gutenberg.net/
This corpus contains etexts from from Project Gutenberg,
by the following authors:
* Jane Austen
* Arthur Conan Doyle
* James Joyce
* Mary Wollstonecraft (Godwin) Shelley
* Bram Stoker
This is just of toy-example for educational purposes.
The same legal conditions as the gutenberg corpus in the NLTK
distribution apply.
###Markdown
> For the full list of corpus reader methods see the [Corpus Readers HowTo](http://www.nltk.org/howto/corpus.html) or the official documentation (i.e. `help(nltk.corpus.reader)`). 

--- 

Text Objects

The NLTK `Text` class is a wrapper around a sequence of simple (string) tokens that offers a series of useful methods supporting the **initial exploration** of text. A `Text` is typically initialized from a given document or corpus:
###Code
us_inaugural_addresses = nltk.text.Text(nltk.corpus.inaugural.words())
###Output
_____no_output_____
###Markdown
The `concordance(self, word, width=79, lines=25)` method allows you to visually inspect the occurrences of a given "`word`", returned in the so-called **KWIC** (Keyword in Context) format. 

Optional arguments: 
- "`lines`": number of returned occurrences
- "`width`": width of the context of presentation (i.e. "line width")
###Code
us_inaugural_addresses.concordance("citizen", width = 80, lines = 10)
# Note: we're matching tokens, so "citizens" != "citizen"
us_inaugural_addresses.concordance("citizens", width = 80, lines = 10)
###Output
Displaying 10 of 247 matches:
Fellow - Citizens of the Senate and of the House of R
wisest and most experienced of her citizens a distrustful scrutiny into his qua
roof of the confidence of my fellow citizens , and have thence too little consul
han my own , nor those of my fellow citizens at large less than either . No peop
which can win the affections of its citizens and command the respect of the worl
his Government must depend . Fellow citizens , I am again called upon by the voi
ffrage , in common with my fellow - citizens , in the adoption or rejection of a
the Legislature , are exercised by citizens selected at regular periods by thei
rited the gratitude of his fellow - citizens , commanded the highest praises of
to be more friendly to us , and our citizens to be more friendly to them ; if an
###Markdown
The `.similar()` method allows to look for tokens that appear in similar contexts:
###Code
# show 5 other words appearing in similar contexts as "citizen"
us_inaugural_addresses.similar("citizen", 5)
###Output
people states country nation executive
###Markdown
The method `.common_contexts()` can give us an idea of a frequently encountered context of a given word:
###Code
us_inaugural_addresses.common_contexts(["citizens"], 10)
###Output
fellow_of her_a fellow_and fellow_at its_and fellow_i fellow_in
by_selected fellow_commanded our_to
###Markdown
Or we can use it to find contexts shared by **two** words:
###Code
us_inaugural_addresses.common_contexts(["citizen", "president"])
###Output
the_of a_may the_is the_by
###Markdown
We can easily create a dispersion plot to have a rough idea of where in the corpus our word are used:
###Code
# Lexical Dispersion Plot for Words in U.S. Presidential Inaugural Addresses
us_inaugural_addresses.dispersion_plot(["citizens", "democracy", "freedom", "duties", "America"])
###Output
_____no_output_____
###Markdown
--- 

Word Frequencies

> The frequency of words and other linguistic units plays a central role in all branches of corpus linguistics. Indeed, the use of frequency information is what distinguishes corpus-based methodologies from other approaches to language (Baroni, 2009). 

- The **ABSOLUTE FREQUENCY** of a *type* $v_i$ word, $f(v_i)$, is its number of occurrences (i.e. *tokens*) in a corpus.

We already know how to compute frequencies by using the `Counter()` method from the `collections` module:
###Code
from collections import Counter
# case sensitive counts
fdist_raw = Counter(nltk.corpus.brown.words())
# top-10 most frequent words
fdist_raw.most_common(10)
# let's ignore non-alphanumeric words
fdist = Counter([word for word in nltk.corpus.brown.words() if word.isalnum()])
fdist.most_common(10)
# If we want the case insensitive counts of the results above...
fdist_insensitive = Counter([word.lower() for word in nltk.corpus.brown.words() if word.isalnum()])
fdist_insensitive.most_common(10)
###Output
_____no_output_____
###Markdown
--- 

How about a fancy frequency list that is easy to manipulate? There are many modules in the `pandas` package that can help you do that.
###Code
# convert our dictionary into a dataframe
import pandas as pd
df = pd.DataFrame.from_dict(fdist, orient='index')
df.columns = ['fq']
# let's sort our frequencies in descending order
dfs = df.sort_values('fq', ascending = False)
dfs.head(10)
###Output
_____no_output_____
###Markdown
A useful piece of information we want to visualize is the **RANK** of each item, i.e. its position in our sorted frequency list
###Code
df["rank"] = df['fq'].rank(ascending = False, method = 'first') # add column 'rank' with values of 'fq'
df.sort_values('rank', ascending = True, inplace=True) # sorting our frequencies IN PLACE
df.head(10)
###Output
_____no_output_____
###Markdown
- The **CONDITIONAL FREQUENCY** of a *type* $v_i$ word in the condition $X$, $f(v_i|X)$, is its number of occurrences (i.e. *tokens*) in the corpus sections where the target condition is satisfied. 

The NLTK `nltk.ConditionalFreqDist()` builds a conditional frequency object by counting the instances in a list of pairs (each pair being a given occurrence in a given condition)
###Code
# let's build our (condition, word) pairings
cond_word_pairs = [(genre, word) for genre in nltk.corpus.brown.categories() for word in nltk.corpus.brown.words(categories = genre)]
cond_word_pairs[:10]
cond_word_pairs[100000:100010]
# create our conditional frequency object
cfd = nltk.ConditionalFreqDist(cond_word_pairs)
# check the conditions in the conditional frequency object
cfd.conditions()
###Output
_____no_output_____
###Markdown
Condition values can be accessed individually, and the return objects are simple frequency distributions
###Code
cfd['editorial'].most_common(10)
###Output
_____no_output_____
###Markdown
The `plot()` and `tabulate()` methods can be used to plot the frequency distributions in the different conditions and to create a contingency table (a.k.a. a two-way table). Optional parameters `conditions` and `samples` can be used to focus on a given set of condition values or samples. This makes it possible to load a large quantity of data once, and then to focus only on meaningful portions of it. For instance, let's contrast how frequently modal verbs are used in some of the Brown corpus sections.
###Code
genres = ['news', 'religion', 'science_fiction', 'romance', 'humor']
modals = ['can', 'could', 'may', 'might', 'must', 'will']
cfd.tabulate(conditions = genres, samples = modals)
###Output
can could may might must will
news 93 86 66 38 50 389
religion 82 59 78 12 54 71
science_fiction 16 49 4 12 8 16
romance 74 193 11 51 45 43
humor 16 30 8 8 9 13
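###Markdown
The `plot()` method mentioned above works the same way; as a minimal sketch (re-using the `cfd`, `genres` and `modals` objects defined above), the counts in the table can also be drawn as line plots:
###Code
# line plot of the modal verb counts, restricted to the selected genres and samples
cfd.plot(conditions = genres, samples = modals)
###Output
_____no_output_____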
###Markdown
--- 

- The **CORPUS SIZE** is the total number of occurrences (tokens) in the text: $$|C| = f(v_1) + f(v_2) + f(v_3) + ... + f(v_n)$$
###Code
# Recall: keys = words, values = their counts
corpus_size = sum(fdist.values())
# note: %timeit runs its statement in a temporary scope, so we time the expression and assign separately
%timeit sum(fdist.values())
corpus_size
# equivalent, fully explicit...
%timeit len([word for word in nltk.corpus.brown.words() if word.isalnum()])
corpus_size
###Output
2.3 s ± 54.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
- The **RELATIVE FREQUENCY** of a *type* $v_i$ word is its absolute frequency divided by the corpus size:$$f_{rel}(v_i) = \dfrac{f(v_i)}{|C|}$$
###Code
for word, abs_freq in fdist.most_common(10):
print( (word, abs_freq, round(abs_freq / corpus_size, 6)))
###Output
('the', 62713, 0.063453)
('of', 36080, 0.036506)
('and', 27915, 0.028245)
('to', 25732, 0.026036)
('a', 21881, 0.022139)
('in', 19536, 0.019767)
('that', 10237, 0.010358)
('is', 10011, 0.010129)
('was', 9777, 0.009892)
('for', 8841, 0.008945)
###Markdown
- The **VOCABULARY**, $V_c$, is the total number of *types* instantiated in the corpus (instead of *tokens* above)
###Code
vocabulary = fdist.keys()
print(len(vocabulary))
###Output
46969
###Markdown
- The **FREQUENCY CLASS** $V_i$ is the set of *types* occurring $i$ times
###Code
from collections import defaultdict
frequency_classes = defaultdict(set)
for atype, freq in fdist.items():
frequency_classes[freq].add(atype)
# a dictionary, where a frequency maps to a set of words occuring that often
print(frequency_classes[100])
print(frequency_classes[62713])
###Output
{'figures', 'Thomas', 'meant', 'price', 'race', 'rates'}
{'the'}
###Markdown
$V_1$ is the set of items occurring just one time; they are called **hapax legomena**
###Code
import random
print(random.sample(frequency_classes[1], 20))
print(len(frequency_classes[1]))
###Output
['override', 'Krupa', 'Lyman', 'Kaplan', 'nonsystematic', 'Retail', 'acrobats', 'lopsidedly', 'sighs', 'Dilys', 'ludicrousness', 'Eckart', 'halter', 'Mumford', 'Theodosius', 'unshaved', 'boomed', 'Desperately', 'twittered', 'Salesmanship']
19052
###Markdown
A **frequency spectrum** reports all the frequency classes of a corpus
###Code
frequency_spectrum = Counter(fdist.values())
frequency_spectrum[1]
###Output
_____no_output_____
###Markdown
A frequency spectrum can be visually inspected by plotting the class cardinality as a function of the increasing class frequency
###Code
import matplotlib.pyplot as plt
sorted_classes_freqs_tuples = sorted(frequency_spectrum.items())
# zip() returns a list of tuples, where each tuple contains the i-th element from each of the argument sequences
# the single star * unpacks a sequence/collection into positional arguments
x, y = zip(*sorted_classes_freqs_tuples) # unpack a list of pairs into two tuples
plt.plot(x, y, "o", color = "black", markerfacecolor='None')
plt.xscale('log') # log transform the x axis (but not the x values, note the tick values)
plt.ylabel("$|V_i|$")
plt.xlabel("$i$ (i.e. type frequency)")
plt.title("Brown Corpus Frequency Spectrum")
# try this
#plt.loglog()
plt.show()
###Output
_____no_output_____
###Markdown
The sum of the cardinality of each frequency class equals the vocabulary size: $|V_c| = |V_1| + |V_2| + |V_3| + ... + |V_{max(f)}|$
###Code
print (sum(frequency_spectrum.values()))
print (len(vocabulary))
###Output
46969
46969
###Markdown
- When dealing with datatypes that can be **meaningfully** ordered (e.g. age, weekdays, length), the **CUMULATIVE FREQUENCY** for the category $i$ is obtained by summing the absolute frequency of $i$ together with the absolute frequencies of all the events below it: $$f^c(v_i) = f(v_1) + f(v_2) + f(v_3) + ... + f(v_i)$$ 

For instance, let's count how many words of different length are used in some of the translations of the "Universal Declaration of Human Rights":
###Code
# let's calculate the frequency of each word length in each language
languages = ['Chickasaw', 'English', 'German_Deutsch', 'Dutch_Nederlands', 'Italian']
cfd = nltk.ConditionalFreqDist((lang, len(word))
for lang in languages
for word in nltk.corpus.udhr.words(lang + '-Latin1'))
###Output
_____no_output_____
###Markdown
It is easy to see how the contingency table reporting the cumulative frequencies gives us different information than the one reporting the absolute frequencies:
###Code
cfd.tabulate(cumulative = False, samples = range(1, 16))
cfd.tabulate(cumulative = True, samples = range(1, 16))
###Output
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Chickasaw 411 510 551 619 710 799 876 946 995 1028 1044 1072 1117 1127 1133
Dutch_Nederlands 173 494 781 922 1088 1213 1331 1417 1496 1552 1618 1644 1664 1677 1686
English 185 525 883 997 1166 1283 1440 1558 1638 1701 1751 1763 1774 1780 1781
German_Deutsch 171 263 614 717 894 1013 1110 1213 1275 1333 1386 1418 1445 1474 1489
Italian 336 601 743 862 1028 1157 1327 1434 1543 1611 1650 1683 1698 1714 1723
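###Markdown
As a cross-check on the cumulative table above, the cumulative counts can also be computed by hand from the absolute counts with `itertools.accumulate` (a minimal sketch, re-using the `cfd` object defined above, here for English only):
###Code
from itertools import accumulate
lengths = range(1, 16)
abs_freqs = [cfd['English'][l] for l in lengths]
print(abs_freqs)
print(list(accumulate(abs_freqs)))
###Output
_____no_output_____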
###Markdown
--- 

Zipf's Law

To have a look at the frequency distribution in the Brown corpus we can plot word frequencies as a function of word ranks.
###Code
y = sorted(fdist.values(), reverse = True)
x = range(1, len(y) + 1)
plt.plot(x, y, "o", color = "black", markerfacecolor='None')
plt.yscale('log') # log transform the y axis (but not the y values, note the tick values)
plt.ylabel("frequency")
plt.xlabel("rank")
plt.title("Brown Corpus Rank/Frequency Profile")
plt.show()
###Output
_____no_output_____
###Markdown
[Zipf's Law] The FREQUENCY of a word is inversely proportional to its RANK 

$$f(z) = \dfrac{C}{z^a} $$

- $f(z)$: frequency of the rank-$z$ word
- $C$ is the frequency of the top-frequent word (it depends on the corpus length and on the vocabulary size)
- $a$ is an index that is inversely proportional to the corpus richness (the richer the corpus, the lower its $a$ value) 
  - according to Zipf, $a \approx 1$ 

The difference $C/n - C/(n-1)$ between the frequencies of the rank $n$ and the rank $n-1$ words decreases **progressively** as a function of the rank

- $a = 1 \ \Rightarrow \ f(1) = C \ \Rightarrow\ f(2) = \dfrac{C}{2}\ \Rightarrow\ f(3) = \dfrac{C}{3}\ ...$ 
  - that is: the rank 1 word should occur twice as frequently as the rank 2 word
- words in the lower tail tend to have similar frequencies
- Zipf’s curve tail is populated by a lot of very rare words: the **hapax legomena** 

Scale Invariance (optional)

- Zipf’s law is a power law distribution with integer values: $y = ax^{-k}$
- one attribute of power laws is their scale invariance: 
  - scaling the argument by a constant factor $c$ causes only a proportionate scaling of the function itself by a $c^{-k}$ factor 
  - $f(cx) = a(cx)^{-k} = c^{-k}f(x) \propto f(x)$ 
  - as a consequence, **a change in corpus size does not affect the shape of the distribution, but only its scale**.
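
As a rough sanity check of this prediction on the Brown counts computed earlier (a minimal sketch re-using `fdist` from above): taking $C = f(1)$ and assuming $a = 1$, the predicted frequency of the rank-$z$ word is simply $C/z$.
###Code
ranked_freqs = sorted(fdist.values(), reverse = True)
C = ranked_freqs[0]
for z in [1, 2, 3, 10, 100]:
    # observed frequency vs. the C/z predicted by Zipf's law with a = 1
    print(z, ranked_freqs[z - 1], round(C / z))
###Output
_____no_output_____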
###Code
corpora = [nltk.corpus.brown, nltk.corpus.genesis, nltk.corpus.inaugural, nltk.corpus.switchboard]
corpora_titles = ["Brown","Genesis","Inaugural","Switchboard"]
for i, corpus in enumerate(corpora):
plt.subplot(2, 2, i+1)
y = sorted(Counter([word for word in corpus.words() if word.isalnum()]).values(), reverse = True)
x = range(1, len(y) + 1)
plt.plot(x, y, "o", color = "black", markerfacecolor='None')
plt.yscale('log')
plt.ylabel("frequency")
plt.xlabel("rank")
plt.title(corpora_titles[i])
plt.tight_layout() # adjust spacing between subplots to minimize the overlaps.
plt.show()
###Output
_____no_output_____
###Markdown
Log-log Space (optional)

If we plot our data so that both axes are on a logarithmic scale, Zipf’s law can be reformulated as a **straight line**: 

$$log(f(z)) = log(C) - a\ log(z)$$

- the intercept is the log-frequency of the most frequent word
- the slope is a function of the corpus richness (i.e. $a$)
###Code
import numpy as np
log_y = np.log(sorted(fdist.values(), reverse = True))
log_x = np.log(range(1, len(fdist.values()) + 1))
plt.plot(log_x, log_y, "o", color = "black", markerfacecolor='None');
# we use the least squares method to estimate the slope (i.e. Zipf’s law’s parameters a)
import scipy.optimize
c = np.log(max(fdist.values())) # intercept is the log-frequency of the most frequent word
def func(x, a):
return c - a * x
a = scipy.optimize.curve_fit(func, log_x, log_y)[0] # estimated Zipf's law's a coefficient
zipf_fit = [func(x, a) for x in log_x]
plt.plot(log_x, log_y, "o", color = "blue", markerfacecolor='None');
plt.plot(log_x, zipf_fit, linestyle = "--", color = "red")
plt.ylabel("log frequency")
plt.xlabel("log rank")
plt.title("Brown Corpus Log Rank/Log Frequency Plot")
plt.show()
###Output
_____no_output_____ |
notebooks/lgb-baseline-trxn-payments.ipynb | ###Markdown
Features
###Code
target['feature_5_0'] = target.feature_5 < 1e-10
target['feature_5_1'] = target.feature_5 > 1e-10
target['feature_4_0'] = target.feature_4 < 1e-10
target['feature_4_1'] = target.feature_4 > 1e-10
for col in ['feature_7', 'feature_8', 'feature_9', 'feature_10']:
target[col] = target[col].fillna(target[col].mode()[0])
client = pd.read_csv(CLIENTS, sep=',')
client.loc[client.education.isna(), 'education'] = 'MISSING'
client.loc[(client.city > 1000) | (client.city == -1), 'city'] = 1001
client.loc[(client.region > 60) | (client.region == -1), 'region'] = 61
client['gender'] = client['gender'].fillna(value='F')
client['age'] = client['age'].fillna(client['age'].mode()[0])
client.head()
client = pd.get_dummies(client, columns=['education', 'job_type', 'citizenship', 'region', 'city', 'gender'])
target = target.set_index('client_id').sort_index()
client = client.set_index('client_id').sort_index()
pd_train = target.join(client)
pd_train.shape
pd_trxn_features = pd.read_csv('trxn_features_2.csv', sep=',', index_col='client_id')
pd_trxn_features.shape
pd_train['has_trxn_features'] = pd_train.index.isin(set(pd_trxn_features.index))
pd_train = pd_train.join(pd_trxn_features)
pd_payments_features = pd.read_csv('payments_features.csv', sep=',', index_col='client_id')
pd_payments_features.shape
pd_payments_features.columns
pd_train['has_payment_features'] = pd_train.index.isin(set(pd_payments_features.index))
pd_train = pd_train.join(pd_payments_features)
pd_train = pd_train.fillna(0)
pd_train.shape
pd_train.head()
###Output
_____no_output_____
###Markdown
Train
###Code
class KFoldGenerator:
def __init__(self, path, df):
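        # the fold_<i>_train.txt / fold_<i>_test.txt files are assumed to contain one client_id per line;
        # they are mapped below to positional row indices of the dataframe so lgb.cv can consume them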
        locs = {v: k for k, v in enumerate(df.index)}
folds = []
for i in range(5):
with open(os.path.join(path, f'fold_{i}_train.txt'), mode='r') as inp:
tr = np.array([*map(int, inp)])
with open(os.path.join(path, f'fold_{i}_test.txt'), mode='r') as inp:
te = np.array([*map(int, inp)])
folds.append((tr, te))
folds = [
([locs[e] for e in fold_train],
[locs[e] for e in fold_valid], )
for fold_train, fold_valid in folds
]
self.folds = folds
def __iter__(self):
yield from self.folds
kfold = KFoldGenerator(path='../folds/', df=pd_train)
# I'd better use :)
# from sklearn.model_selection import StratifiedKFold
# kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=72)
import lightgbm as lgb
X_train = lgb.Dataset(
data=pd_train.drop(['sale_flg', 'sale_amount', 'contacts', 'region_cd'], axis=1),
label=pd_train['sale_flg'].to_numpy(),
)
from sklearn.metrics import (
roc_auc_score,
precision_recall_fscore_support,
accuracy_score,
)
params = {
'objective': 'binary',
'metric': 'auc',
'learning_rate': 0.05,
'subsample': 0.7,
'class_weight': 'balanced',
'colsample_bytree': 0.7,
'max_depth': 5,
'num_leaves': 256,
}
def update_learning_rate(num_rounds):
if num_rounds <= 550:
return 0.05
return 0.03
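# note: update_learning_rate is not wired into the training call below; to apply the schedule it could
# be passed to lgb.cv, e.g. via callbacks=[lgb.reset_parameter(learning_rate=update_learning_rate)]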
%%time
trees = 1000
cv = lgb.cv(params, X_train, show_stdv=False, verbose_eval=True,
num_boost_round=trees, early_stopping_rounds=50,
return_cvbooster=True, folds=kfold)
cvbooster = cv.pop('cvbooster', None)
cv = pd.DataFrame(cv)
cv[10:].plot(figsize=(6, 6), y=['auc-mean'])
print(cv.loc[cv['auc-mean'].values.argmax()])
trees = cv['auc-mean'].values.argmax()
trees
feature_importance = []
for booster in cvbooster.boosters:
feature_importance_ = pd.Series(
data=booster.feature_importance('split'),
index=booster.feature_name(),
)
feature_importance.append(feature_importance_)
feature_importance = pd.concat(feature_importance, axis=1)
feature_importance_mean = feature_importance.median(axis=1).astype(int).rename('mean')
feature_importance_std = feature_importance.std(axis=1).rename('std')
indices = feature_importance_mean.argsort()
feature_importance = feature_importance.iloc[indices]
feature_importance_mean = feature_importance_mean[indices]
feature_importance_std = feature_importance_std[indices]
mask = feature_importance_mean > 0
feature_importance_mean = feature_importance_mean[mask]
feature_importance_std = feature_importance_std[mask]
feature_importance_mean[::-1]
feature_importance_mean.shape
feature_importance = []
for booster in cvbooster.boosters:
feature_importance_ = pd.Series(
data=booster.feature_importance('gain'),
index=booster.feature_name(),
)
feature_importance.append(feature_importance_)
feature_importance = pd.concat(feature_importance, axis=1)
feature_importance_mean = feature_importance.mean(axis=1).rename('mean')
feature_importance_std = feature_importance.std(axis=1).rename('std')
indices = feature_importance_mean.argsort()
feature_importance = feature_importance.iloc[indices]
feature_importance_mean = feature_importance_mean[indices]
feature_importance_std = feature_importance_std[indices]
mask = feature_importance_mean > 0
feature_importance_mean = feature_importance_mean[mask]
feature_importance_std = feature_importance_std[mask]
feature_importance_mean[::-1]
pd.concat([
feature_importance_mean,
feature_importance_std,
], axis=1).iloc[-30:].plot.barh(y='mean', figsize=(10, 15), xerr='std')
feature_importance.iloc[-30:].T.boxplot(figsize=(10, 15), vert=False)
def eval_metrics(y_true, y_score, earnings, contacts_cnt, thrsh=0.5):
auc = roc_auc_score(y_true, y_score)
y_pred = y_score > thrsh
acc = accuracy_score(y_true, y_pred)
pre, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred, average='binary')
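    # net value per client: earnings minus a cost of 4000 per contact, counted only for
    # clients predicted positive, then averaged over all clients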
anic = (y_pred * (earnings - 4000 * contacts_cnt)).mean()
return auc, acc, pre, rec, f1, anic
submission_pull = []
for (_, valid_idx), booster in zip(kfold, tqdm(cvbooster.boosters)):
clients_valid, X_valid, y_valid = (
pd_train.iloc[valid_idx].index,
pd_train.loc[:, X_train.get_feature_name()].iloc[valid_idx],
pd_train.loc[:, 'sale_flg'].iloc[valid_idx],
)
submission = pd.DataFrame(index=clients_valid)
submission.index = submission.index.rename('client_id')
submission['scores'] = booster.predict(X_valid)
submission_pull.append(submission)
submission = pd.concat(submission_pull, axis=0)
submission.head()
submission = submission.join(pd_train[['sale_flg', 'sale_amount', 'contacts']])
submission['sale_amount'] = submission['sale_amount'].fillna(0)
submission.head()
submission.shape, pd_train.shape
thresholds = np.linspace(0, 1, 1000)
scores = [
eval_metrics(submission['sale_flg'], submission['scores'],
submission['sale_amount'], submission['contacts'], thrsh)
for thrsh in tqdm(thresholds, position=0)
]
scores = np.asarray(scores)
fig, ax = plt.subplots(figsize=(8, 8))
_ = ax.plot(thresholds, scores[:, -1])
_ = ax.grid()
thrsh_best = thresholds[np.argmax(scores[:, -1])]
metrics_best = eval_metrics(
submission['sale_flg'], submission['scores'],
submission['sale_amount'], submission['contacts'],
thrsh_best,
)
print("ROC AUC: {:.6f}\n"
"Accuarcy: {:.6f}\n"
"Precision: {:.6f}\n"
"Recall: {:.6f}\n"
"F1-score: {:.6f}\n"
"ANIC: {:.6f}\n".format(*metrics_best))
###Output
ROC AUC: 0.986633
Accuracy: 0.941428
Precision: 0.759963
Recall: 0.934384
F1-score: 0.838196
ANIC: 6340.690993
|
demos/01_teaching/NB02c_Isotopic_distribution.ipynb | ###Markdown
NB02c Isotopic Distribution
[](https://mybinder.org/v2/gh/CSBiology/BIO-BTE-06-L-7/gh-pages?filepath=NB02c_Isotopic_distribution.ipynb)
[Download Notebook](https://github.com/CSBiology/BIO-BTE-06-L-7/releases/download/NB02a_NB02b_NB02c/NB02c_Isotopic_distribution.ipynb)
1. Isotopic Distribution
    1. Simulating Isotopic Clusters for peptides
    2. Simulating Isotopic Clusters for peptides with stable isotope labeled variant
2. References
Isotopic Distribution
Peptide signals exhibit a characteristic shape in the mass spectrum that depends on their isotopic profile, which is defined by
the number of naturally occurring isotopes in the peptide. The occurrence probabilities of natural isotopes are reflected in the mass
spectrum by the relative heights of the peak series belonging to the respective peptide. The frequency at which natural isotopes occur
is known and can be used to compute the isotope distribution of a molecule. The isotopic distribution for a given peptide molecule
C(v)H(w)N(x)O(y)S(z) is described by the following product of polynomials:
$$({}^{12}\textrm{C}+{}^{13}\textrm{C})^{v} \times ({}^{1}\textrm{H}+{}^{2}\textrm{H})^{w} \times ({}^{14}\textrm{N}+{}^{15}\textrm{N})^{x} \times ({}^{16}\textrm{O}+{}^{17}\textrm{O}+{}^{18}\textrm{O})^{y} \times ({}^{32}\textrm{S}+{}^{33}\textrm{S}+{}^{34}\textrm{S}+{}^{36}\textrm{S})^{z}$$
Symbolic expansion of the polynomials results in many product terms, which correspond to different isotopic variants of a molecule.
Even for molecules of a medium size, the straightforward expansion of the polynomials leads to an explosion regarding the number of product terms.
Due to this complexity, there was a need to develop algorithms for efficient computation. The different strategies comprise pruning the
polynomials to discard terms with coefficients below a threshold (Yergey 1983) combined with a recursive
computation (Claesen et al. 2012), and Fourier Transformation for a more efficient convolution of the isotope distributions of
individual elements (Rockwood et al. 1995), or rely on dynamic programming (Snider 2007).
> MIDAs (Alves and Yu 2005) is one of the more elaborate algorithms to predict an isotope cluster based on a given peptide sequence.
> Simulate the isotopic cluster of the peptide sequence ‘PEPTIDES’ and ‘PEPTIDEPEPTIDEPEPTIDEPEPTIDES’ with natural occurring isotope abundances.
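
As a small illustrative example (using the ~1.1% natural abundance of 13C quoted further below, i.e. roughly 98.9% 12C), expanding just the carbon factor for two carbon atoms already shows how the isotopic variants and their probabilities arise: $$({}^{12}\textrm{C}+{}^{13}\textrm{C})^{2} = {}^{12}\textrm{C}\,{}^{12}\textrm{C} + 2\,{}^{12}\textrm{C}\,{}^{13}\textrm{C} + {}^{13}\textrm{C}\,{}^{13}\textrm{C}$$ with relative weights of about $0.989^2 \approx 0.978$, $2 \cdot 0.989 \cdot 0.011 \approx 0.022$ and $0.011^2 \approx 0.0001$, respectively.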
###Code
#r "nuget: BioFSharp, 2.0.0-beta5"
#r "nuget: BioFSharp.IO, 2.0.0-beta5"
#r "nuget: Plotly.NET, 2.0.0-beta6"
#r "nuget: Plotly.NET, 2.0.0-preview.6"
#r "nuget: Plotly.NET.Interactive, 2.0.0-preview.6"
open Plotly.NET
open BioFSharp
###Output
_____no_output_____
###Markdown
Simulating Isotopic Clusters for peptides

We will use two artificial peptide sequences and translate them into their elemental composition to simulate their isotopic clusters. Therefore, we first define a function that maps from a peptide sequence to its formula:
###Code
// Code-Block 1
// create chemical formula for amino acid and add water to reflect hydrolysed state in mass spectrometer
let toFormula bioseq =
bioseq
|> BioSeq.toFormula
// peptides are hydrolysed in the mass spectrometer, so we add H2O
|> Formula.add Formula.Table.H2O
###Output
_____no_output_____
###Markdown
Next, we will apply our function to receive the elemental composition or chemical formula of the peptides.
###Code
// Code-Block 2
// translate single letter code into amino acids and create chemical formula of it.
let peptide_short =
"PEPTIDES"
|> BioSeq.ofAminoAcidString
|> toFormula
let peptide_long =
"PEPTIDEPEPTIDEPEPTIDEPEPTIDES"
|> BioSeq.ofAminoAcidString
|> toFormula
let peptide_shortString =
peptide_short
|> Formula.toString
let peptide_longString =
peptide_long
|> Formula.toString
###Output
_____no_output_____
###Markdown
Additionally, we need a function that maps from Formula (and charge) to the isotopic distribution. Here, we can use `IsotopicDistribution.MIDA.ofFormula` from the BioFSharp library. However, for convenience (to use the same parameter twice), we define our function `generateIsotopicDistribution`:
###Code
// Code-Block 3
// Predicts an isotopic distribution of the given formula at the given charge,
// normalized by the sum of probabilities, using the MIDAs algorithm
let generateIsotopicDistribution (charge:int) (f:Formula.Formula) =
IsotopicDistribution.MIDA.ofFormula
IsotopicDistribution.MIDA.normalizeByMaxProb
0.01
0.005
charge
f
// create pattern for peptide_short
let isoPattern_peptide_short =
generateIsotopicDistribution 1 peptide_short
// create pattern for peptide_long
let isoPattern_peptide_long =
generateIsotopicDistribution 1 peptide_long
isoPattern_peptide_long
// Code-Block 4
// create one chart for both, short and long peptide isotopic patterns.
let isoPatternChart =
[
Chart.Column(isoPattern_peptide_short,Name= "peptide_short" )
|> Chart.withX_AxisStyle ("m/z",MinMax=(885.,895.))
Chart.Column(isoPattern_peptide_long,Name= "peptide_long" )
|> Chart.withX_AxisStyle ("m/z",MinMax=(3230., 3240.))
]
|> Chart.Stack 2
|> Chart.withSize (900.,600.)
|> Chart.withTitle "Isotopeclusters"
|> Chart.withY_AxisStyle "intensity"
|> Chart.withTemplate ChartTemplates.darkMirrored
isoPatternChart
###Output
_____no_output_____
###Markdown
Simulating Isotopic Clusters for peptides with stable isotope labeled variant

In addition to the naturally occurring isotopic distribution, the field of proteomics has benefited greatly from the ability to introduce stable isotopes into peptide sequences. So-called isotopic labeling refers to the introduction of a naturally low-abundance isotope of carbon, nitrogen, hydrogen and, in some cases, oxygen, into a peptide sequence. The isotopes commonly used are 13C, 15N, 2H (deuterium) and 18O with natural abundances of 1.10%, 0.366%, 0.015% and 0.200%, respectively (Becker 2008). Therefore, the introduction of these isotopes into a peptide sequence can be detected by most modern mass spectrometers, leading to a respective mass shift and the ability to separate the same peptide species within the same run.

> MIDAs (Alves and Yu 2005) is also able to predict isotope clusters with altered isotope abundances. Simulate the isotopic cluster of the peptide sequence ‘PEPTIDES’ and ‘PEPTIDEPEPTIDEPEPTIDEPEPTIDES’ with stable isotope 15N labeling. 

Therefore, we define a function called `label`. The function maps from a formula to a formula with exchanged nitrogen isotopes. (Attention: don't get confused, a formula is just an FSharpMap.)
###Code
// Code-Block 5
/// returns a function that replaces the nitrogen atoms in a formula
/// with the 15N isotope
let label formula =
Formula.replaceElement formula Elements.Table.N Elements.Table.Heavy.N15
// Code-Block 6
let N15_peptide_short =
"PEPTIDES"
|> BioSeq.ofAminoAcidString
|> toFormula
|> label
let N15_peptide_long =
"PEPTIDEPEPTIDEPEPTIDEPEPTIDES"
|> BioSeq.ofAminoAcidString
|> toFormula
|> label
//result: N15_peptide_short
N15_peptide_short
//result: N15_peptide_long
N15_peptide_long
// Code-Block 7
// create pattern for N15_peptide_short
let N15_isoPattern_peptide_short =
generateIsotopicDistribution 1 N15_peptide_short
// create pattern for N15_peptide_long
let N15_isoPattern_peptid_long =
generateIsotopicDistribution 1 N15_peptide_long
// Code-Block 8
// Create two charts. Each with the related 14N and 15N isotopic clusters. Then stack them two one unit.
let isoPatternChart2 =
[
[
Chart.Column(isoPattern_peptide_short,Name= "peptide_short" )
Chart.Column(N15_isoPattern_peptide_short,Name= "N15_peptide_short" )
]
|> Chart.Combine
|> Chart.withX_AxisStyle ("m/z",MinMax=(885., 905.0))
[
Chart.Column(isoPattern_peptide_long,Name= "peptide_long" )
Chart.Column(N15_isoPattern_peptid_long,Name= "N15_peptide_long" )
]
|> Chart.Combine
|> Chart.withX_AxisStyle ("m/z",MinMax=(3230.0, 3270.0))
]
|> Chart.Stack 2
|> Chart.withTitle "Isotopeclusters"
|> Chart.withY_AxisStyle "intensity"
isoPatternChart2
###Output
_____no_output_____ |
analysis/golden/cogsci19_analysis.ipynb | ###Markdown
Creating spline and stroke level dataframes for further analysis
###Code
## get the list of unique labels applied to sketches
unique_labels = np.unique(D.label.values)
## Removing Nones and obviously wrong super long lables
unique_labels = [i for i in unique_labels if i is not None]
unique_labels = [i for i in unique_labels if len(i)<900]
print 'we have {} unique labels'.format(len(unique_labels))
unique_cats= np.unique(D['category'])
##Create empty dictionary with categories as keys. We will use this to store part occurrence data for our categories
label_vect_dict = {unique_cats[0]:None,unique_cats[1]:None,unique_cats[2]:None,unique_cats[3]:None}
##Create vectors that contain the number of part instances in each sketch
num_annots=3
for category in unique_cats:
DS= D[D['category']==category]
unique_sketches_in_cat = np.unique(DS['sketch_id'])
unique_labels_in_cat = np.unique(DS['label'])
## initialize matrix that has the correct dimensions
Label_Vec = np.zeros((len(unique_sketches_in_cat),len(unique_labels_in_cat)), dtype=int)
unique_labels_in_cat= np.array(unique_labels_in_cat)
for s,this_sketch in enumerate(unique_sketches_in_cat):
label_vec = np.zeros(len(unique_labels_in_cat),dtype=int)
DSS = DS[DS['sketch_id']==this_sketch]
annotation_ids = np.unique(DSS['annotation_id'].values)
for this_annotation in annotation_ids:
DSA = DSS[DSS['annotation_id']==this_annotation]
label_list = DSA.label.values
for this_label in label_list:
label_ind = unique_labels_in_cat==this_label
label_vec[label_ind] += 1
Label_Vec[s,:]=label_vec/num_annots
label_vect_dict[category]= Label_Vec
valid_labels=[]
valid_labels_dict={}
for category in unique_cats:
vect = label_vect_dict[category]
thresh = 50
#print 'These are the labels that appear at least {} times:'.format(thresh)
#print unique_labels[np.sum(Label_Vec,0)>thresh]
unique_labels_in_cat = np.unique(D[D['category']==category]['label'])
plot_labels= unique_labels_in_cat[np.sum(vect,0)>thresh]
valid_labels_dict[category]=plot_labels
valid_labels.append(plot_labels)
prop_labels=[]
for part in plot_labels:
DS=D[D['category']==category]
prop_labels.append(DS[DS['label']==part]['annotation_id'].nunique()/DS['annotation_id'].nunique())
# sns.set_context('talk')
# plt.figure(figsize=(12,7))
# plt.ylim(0,1)
# h = plt.bar(plot_labels,prop_labels)
# plt.title('Proportion of {} annotations with labels'.format(category))
# plt.ylabel('proportion of annotations')
# plt.xlabel('Part')
##flattening valid labels
valid_labels = [item for sublist in valid_labels for item in sublist]
#Creating a spline-level df where the modal label is set as the 'true' label for any given spline
spline_df= D.groupby('spline_id').agg(lambda x: Counter(x).most_common(1)[0][0])
spline_df.reset_index(level=0, inplace=True)
##Creating a stroke-level dataframe that takes the mode value of annotation for its children splines to set as its
##label value
from collections import Counter
from collections import OrderedDict
stroke_svgs=OrderedDict()
for category in unique_cats:
DS=D[D['category']==category]
for sketch in np.unique(DS['sketch_id']):
DSS=DS[DS['sketch_id']==sketch]
for stroke in np.unique(DSS['stroke_num']):
DSA=DSS[DSS['stroke_num']==stroke]
DSA=DSA.reset_index()
stroke_svgs[DSA['stroke_id'][0]] = DSA['sketch_svg_string'][0][stroke]
stroke_svg_df= pd.DataFrame.from_dict(stroke_svgs, orient='index')
stroke_group_data= D.groupby('stroke_id').agg(lambda x: Counter(x).most_common(1)[0][0])
labels= pd.DataFrame(stroke_group_data[['sketch_id','label','stroke_num','condition','target','category','outcome']])
stroke_df=pd.merge(stroke_svg_df,labels,left_index=True, right_index =True)
stroke_df.reset_index(level=0, inplace=True)
stroke_df=stroke_df.rename(index=str, columns={"index": "stroke_id", 0: "svg"})
##Adding total arclength information to stroke dataframe
def calculate_arclength(svg):
try:
arclength= parse_path(svg).length()
except ZeroDivisionError:
print 'zero div error'
arclength = 0
return arclength
stroke_df['arc_length'] = stroke_df['svg'].apply(calculate_arclength)
stroke_df[stroke_df['condition']=='closer'].nunique()
D.annotation_id.nunique()
###Output
_____no_output_____
###Markdown
Inter-annotator reliability
###Code
## Getting the number of unique labels assigned to a given spline across annotations
a=[]
num_diff_annots = []
for this_cat in unique_cats:
DS=D[D['category']==this_cat]
labels = valid_labels_dict[this_cat]
unique_sketches_in_cat=np.unique(DS['sketch_id'])
for this_sketch_id in unique_sketches_in_cat:
DSA=DS[DS['sketch_id']==this_sketch_id]
unique_splines = np.unique(DSA['cumulative_spline_num'])
for i,this_spline in enumerate(unique_splines):
DSB =DSA[DSA['cumulative_spline_num']==this_spline]
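            ## map the number of distinct labels this spline received across the 3 annotations to an agreement code:
            ## 3 distinct labels -> 1 ('0/3' agree), 2 distinct -> 2 ('2/3' agree), 1 distinct -> 3 ('3/3' agree)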
numannots= 4-len(np.unique(DSB['label']))
if len(np.unique(DSB['label'])) == 3:
a.append(this_sketch_id)
if numannots==0:
numannots=1
num_diff_annots.append(numannots)
#plotting variability in spline annots
plt.figure(figsize=(8,8))
plt.hist(num_diff_annots, bins= range(1,5), align='left', density='True')
plt.title('Inter-annotator reliability')
plt.ylabel('proportion of splines')
plt.xlabel('Annotator agreement on label')
plt.xticks([1,2,3],['0/3','2/3','3/3'])
plt.show()
###Output
_____no_output_____
###Markdown
Stroke-part relationships
###Code
spline_dfs = spline_df
stroke_dfs = stroke_df
spline_annots_per_stroke = []
for this_cat in unique_cats:
labels = valid_labels_dict[this_cat]
DS=spline_dfs[spline_dfs['category']==this_cat]
unique_sketches_in_cat= np.unique(DS['sketch_id'])
for this_sketch_id in unique_sketches_in_cat:
DSA=DS[DS['sketch_id']==this_sketch_id]
unique_strokes = np.unique(DSA['stroke_num'])
for i,this_stroke in enumerate(unique_strokes):
DSB =DSA[DSA['stroke_num']==this_stroke]
numlabels= DSB['label'].nunique()
spline_annots_per_stroke.append(numlabels)
h= plt.hist(spline_annots_per_stroke, bins =[1,2,3,4,5,6], align='left', density="True", color='grey')
pps_series = pd.Series(np.array([h[0][0],h[0][1],h[0][2:].sum()]), index=['1', '2', '3+'], \
)
strokes_per_part = []
for this_cat in unique_cats:
DS=stroke_dfs[stroke_dfs['category']==this_cat]
unique_sketches_in_cat= np.unique(DS['sketch_id'])
for this_sketch_id in unique_sketches_in_cat:
DSA=DS[DS['sketch_id']==this_sketch_id]
parts_in_sketch = np.unique(DSA['label'])
for i,this_part in enumerate(parts_in_sketch):
DSB =DSA[DSA['label']==this_part]
numstrokes= DSB['stroke_num'].nunique()
strokes_per_part.append(numstrokes)
h= plt.hist(strokes_per_part, bins =[1,2,3,4,5,6,7,8,9,10], align='left', density="True", color ='grey')
spp_series = pd.Series(np.array([h[0][0],h[0][1],h[0][2:].sum()]), index=['1', '2', '3+'], \
)
plt.close()
fig = plt.figure(figsize=(25,15))
colors = sns.color_palette('tab20c')
ax1 = fig.add_subplot(212) # Create matplotlib axes
ax2 = fig.add_subplot(211)
# pd.DataFrame(spp_series).T.plot(ax=axes[0,1]).bar(stacked=True,legend=False, width =0.2)
# pd.DataFrame(pps_series).T.plot(ax=axes[0,0]).bar(stacked=True,legend=False, width =0.2)
b1=pd.DataFrame(spp_series).T.plot.barh(stacked=True,legend=False, width =0.25,ax=ax1, color=[colors[0],colors[1],colors[2]])
b2=pd.DataFrame(pps_series).T.plot.barh(stacked=True,legend=False, width =0.25,ax=ax2, color=[colors[4],colors[5],colors[6]])
for item in b1.get_xticklabels():
item.set_rotation(0)
for item in b2.get_xticklabels():
item.set_rotation(0)
ax1.spines['right'].set_visible(False)
ax1.spines['top'].set_visible(False)
ax2.spines['right'].set_visible(False)
ax2.spines['top'].set_visible(False)
ax1.set_ylabel('')
ax2.set_ylabel('')
ax1.set_xlabel('',labelpad = 15)
ax2.set_xlabel('',labelpad= 15)
ax1.set_yticks([])
ax2.set_yticks([])
plt.subplots_adjust(wspace=1)
#plt.savefig(os.path.join(plot_dir,'stroke_part_relationship'),edgecolor='w',bbox_inches='tight',dpi=500)
from collections import Counter
iters=1
c_pps=[]
f_pps=[]
c_spp=[]
f_spp=[]
for this_cond in np.unique(D.condition):
for i in range(iters):
if i%100==0:
print("iteration {}".format(i))
spline_dfs = spline_df[spline_df['condition']==this_cond]
stroke_dfs = stroke_df[stroke_df['condition']==this_cond]
spline_annots_per_stroke = []
unique_sketches_in_cond= np.unique(spline_dfs['sketch_id'])
sample_sketches = np.random.choice(unique_sketches_in_cond,len(unique_sketches_in_cond),replace=True)
for this_sketch_id in sample_sketches:
DSA=spline_dfs[spline_dfs['sketch_id']==this_sketch_id]
unique_strokes = np.unique(DSA['stroke_num'])
for i,this_stroke in enumerate(unique_strokes):
DSB =DSA[DSA['stroke_num']==this_stroke]
numlabels= DSB['label'].nunique()
spline_annots_per_stroke.append(numlabels)
h= np.array(Counter(spline_annots_per_stroke).values())
pps_series = np.array([h[0],h[1],h[2:].sum()])
if this_cond=='closer':
c_pps.append(pps_series)
elif this_cond == 'further':
f_pps.append(pps_series)
strokes_per_part = []
unique_sketches_in_cond= np.unique(stroke_dfs['sketch_id'])
sample_sketches = np.random.choice(unique_sketches_in_cond,len(unique_sketches_in_cond),replace=True)
for this_sketch_id in sample_sketches:
DSA=stroke_dfs[stroke_dfs['sketch_id']==this_sketch_id]
parts_in_sketch = np.unique(DSA['label'])
for i,this_part in enumerate(parts_in_sketch):
DSB =DSA[DSA['label']==this_part]
numstrokes= DSB['stroke_num'].nunique()
strokes_per_part.append(numstrokes)
h= np.array(Counter(strokes_per_part).values())
spp_series = np.array([h[0],h[1],h[2:].sum()])
if this_cond=='closer':
c_spp.append(spp_series)
elif this_cond == 'further':
f_spp.append(spp_series)
c_pps=np.vstack(c_pps)
f_pps=np.vstack(f_pps)
c_spp=np.vstack(c_spp)
f_spp=np.vstack(f_spp)
#load arrays for 1000 bootstrap iters
c_pps= np.load(os.path.join(features_dir,'c_pps.npy'))
c_spp= np.load(os.path.join(features_dir,'c_spp.npy'))
f_pps= np.load(os.path.join(features_dir,'f_pps.npy'))
f_spp= np.load(os.path.join(features_dir,'f_spp.npy'))
#compute proportions from counts
# c_pps= np.load(os.path.join(features_dir,'c_pps.npy'))
c_pps = c_pps/c_pps.sum(axis=1)[:,None]
# c_spp= np.load(os.path.join(features_dir,'c_spp.npy'))
c_spp = c_spp/c_spp.sum(axis=1)[:,None]
# f_pps= np.load(os.path.join(features_dir,'f_pps.npy'))
f_pps = f_pps/f_pps.sum(axis=1)[:,None]
# f_spp= np.load(os.path.join(features_dir,'f_spp.npy'))
f_spp = f_spp/f_spp.sum(axis=1)[:,None]
## do some collapsing
c_pps[:,1]=c_pps[:,1]+c_pps[:,2]
c_pps=np.delete(c_pps,2,1)
f_pps[:,1]=f_pps[:,1]+f_pps[:,2]
f_pps=np.delete(f_pps,2,1)
c_spp[:,1]=c_spp[:,1]+c_spp[:,2]
c_spp=np.delete(c_spp,2,1)
f_spp[:,1]=f_spp[:,1]+f_spp[:,2]
f_spp=np.delete(f_spp,2,1)
pps_diff = f_pps[:,1]-c_pps[:,1]
spp_diff = c_spp[:,1]-f_spp[:,1]
###pval for pps diff
(sum(pps_diff<0)/len(pps_diff))*2
###pval for spp diff
(sum(spp_diff<0)/len(spp_diff))*2
len(spp_diff)
#Parts per stroke
c_pps_CI = np.percentile(c_pps[:,1],2.5).round(3),np.percentile(c_pps[:,1],97.5).round(3)
f_pps_CI = np.percentile(f_pps[:,1],2.5).round(3),np.percentile(f_pps[:,1],97.5).round(3)
print c_pps_CI, f_pps_CI
#Strokes per part
c_spp_CI = np.percentile(c_spp[:,1],2.5).round(3),np.percentile(c_spp[:,1],97.5).round(3)
f_spp_CI = np.percentile(f_spp[:,1],2.5).round(3),np.percentile(f_spp[:,1],97.5).round(3)
print c_spp_CI, f_spp_CI
###Output
(0.537, 0.586) (0.499, 0.546)
###Markdown
Part-streak analysis
###Code
dataset = 'normalized'
##Creating a dictionary of sketch_id with associated part sequences
seq_dict={}
for this_sketch in np.unique(stroke_df['sketch_id']):
parts_list=[]
DS=stroke_df[stroke_df['sketch_id']==this_sketch]
for i, row in DS.iterrows():
parts_list.append(stroke_df['label'][i])
seq_dict[this_sketch]=parts_list
##functions for getting 'mean streak_length' from a particular sketch for ground truth and scrambled part orders
import random
def get_mean_streak(sketch_id):
parts = seq_dict[sketch_id]
streak_counter=1
list_of_streaks=[]
for obj in range(len(parts)-1):
if parts[obj]==parts[obj+1]:
streak_counter+=1
else:
list_of_streaks.append(streak_counter)
streak_counter=1
list_of_streaks.append(streak_counter)
return np.mean(list_of_streaks)
def get_scramble_mean_streak(sketch_id):
parts = seq_dict[sketch_id]
scram_parts=random.sample(parts,len(parts))
streak_counter=1
list_of_streaks=[]
for obj in range(len(scram_parts)-1):
if scram_parts[obj]==scram_parts[obj+1]:
streak_counter+=1
else:
list_of_streaks.append(streak_counter)
streak_counter=1
list_of_streaks.append(streak_counter)
return np.mean(list_of_streaks)
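## quick toy check of get_mean_streak on a made-up (hypothetical) part sequence:
## ['leg','leg','seat','leg'] contains streaks of length [2, 1, 1], so the mean streak length should be ~1.33
seq_dict['toy_example'] = ['leg', 'leg', 'seat', 'leg']
print get_mean_streak('toy_example')
del seq_dict['toy_example']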
#Iterating over all sketches to get mean streakiness for each sketch_id
gt_streak_mean={}
for this_cat in unique_cats:
DS= stroke_df[stroke_df['category']==this_cat]
streak_mean_list=[]
for this_sketch in np.unique(DS['sketch_id']):
streak_mean_list.append(get_mean_streak(this_sketch))
gt_streak_mean[this_cat]=np.mean(streak_mean_list)
##Creating a list of exception sketches
single_stroke_sketches=[]
single_label_sketches=[]
strokes_equal_labels_sketches=[]
for this_sketch in stroke_df.sketch_id.unique():
stroke_df_s= stroke_df[stroke_df['sketch_id']==this_sketch]
if stroke_df_s.stroke_num.nunique()==1:
single_stroke_sketches.append(this_sketch)
if stroke_df_s.label.nunique()==1:
single_label_sketches.append(this_sketch)
if stroke_df_s.label.nunique()== stroke_df_s.stroke_num.nunique():
strokes_equal_labels_sketches.append(this_sketch)
ss_sketches_labels={}
sl_sketches_numstrokes={}
sel_sketches_labels={}
for this_sketch in single_stroke_sketches:
ss_sketches_labels[this_sketch] = stroke_df[stroke_df['sketch_id']==this_sketch].label
for this_sketch in single_label_sketches:
sl_sketches_numstrokes[this_sketch]=stroke_df[stroke_df['sketch_id']==this_sketch].stroke_num.nunique()
for this_sketch in strokes_equal_labels_sketches:
sel_sketches_labels[this_sketch]=stroke_df[stroke_df['sketch_id']==this_sketch].label.unique()
_donotpermute=single_stroke_sketches + single_label_sketches + strokes_equal_labels_sketches
donotpermute=np.unique(_donotpermute).tolist()
##z-score of gt
#scrambled_higher_prop={}
gt_streak_zscore={}
true_streak_means = {}
permuted_streak_means = {}
for this_target in stroke_df.target.unique():
DA=stroke_df[stroke_df['target']==this_target]
for this_sketch in DA.sketch_id.unique():
if this_sketch not in donotpermute:
prop_counter=0
intact_mean_streak = get_mean_streak(this_sketch)
permuted_streak_list = []
for i in range(1000):
scrambled_mean_streak=get_scramble_mean_streak(this_sketch)
permuted_streak_list.append(scrambled_mean_streak)
# if intact_mean_streak<scrambled_mean_streak:
# prop_counter+=1
try:
assert np.isnan((get_mean_streak(this_sketch)-np.mean(permuted_streak_list))/np.std(permuted_streak_list)) == False
true_streak_means[this_sketch] = get_mean_streak(this_sketch)
permuted_streak_means[this_sketch] = np.mean(permuted_streak_list)
gt_streak_zscore[this_sketch]=(get_mean_streak(this_sketch)-np.mean(permuted_streak_list))/np.std(permuted_streak_list)
except AssertionError:
print stroke_df[stroke_df.sketch_id==this_sketch].stroke_num.nunique(),stroke_df[stroke_df.sketch_id==this_sketch].label.nunique()
# scrambled_higher_prop[this_sketch]=prop_counter/1000
tls=[]
objs=[]
cond=[]
cat=[]
for this_target in stroke_df.target.unique():
DA=stroke_df[stroke_df['target']==this_target]
_sketch_ids = DA.sketch_id.unique()
_sketch_ids = [x for x in _sketch_ids if x not in donotpermute]
true_streaks_sub=dict((k, true_streak_means[k]) for k in _sketch_ids)
perm_streaks_sub = dict((k, permuted_streak_means[k]) for k in _sketch_ids)
tls.append(true_streaks_sub.values())
cond.append(["Intact"]*len(true_streaks_sub.values()))
objs.append([this_target]*len(true_streaks_sub.values()))
cat.append([OBJECT_TO_CATEGORY[this_target]]*len(true_streaks_sub.values()))
tls.append(perm_streaks_sub.values())
cond.append(["Scrambled"]*len(true_streaks_sub.values()))
objs.append([this_target]*len(true_streaks_sub.values()))
cat.append([OBJECT_TO_CATEGORY[this_target]]*len(true_streaks_sub.values()))
tls = [item for sublist in tls for item in sublist]
objs= [item for sublist in objs for item in sublist]
cond= [item for sublist in cond for item in sublist]
cat= [item for sublist in cat for item in sublist]
assert len(tls)==len(objs)==len(cond)==len(cat)
_data= { 'objects':objs,'Mean Streak Length':tls, "Condition":cond, "category":cat}
data= pd.DataFrame(data = _data)
colors = sns.color_palette("husl", 5)
C0=colors[0]
C1=colors[1]
C2=colors[2]
C3=colors[3]
from matplotlib.lines import Line2D
palette= {
'basset': C3, 'beetle': C1, 'bloodhound': C3, 'bluejay': C0,
'bluesedan': C1, 'bluesport': C1, 'brown': C1, 'bullmastiff': C3,
'chihuahua': C3, 'crow': C0, 'cuckoo': C0, 'doberman': C3,
'goldenretriever': C3, 'hatchback': C1, 'inlay': C2, 'knob': C2,
'leather': C2, 'nightingale': C0, 'pigeon': C0, 'pug': C3,
'redantique': C1, 'redsport': C1, 'robin': C0, 'sling': C2,
'sparrow': C0, 'squat': C2, 'straight': C2, 'tomtit': C0,
'waiting': C2, 'weimaraner': C3, 'white': C1, 'woven': C2,
}
plt.figure(figsize=(12,12))
p = sns.pointplot(x="Condition", hue="objects", y= "Mean Streak Length",data=data,ci=95\
,dodge= 0.2, palette = palette)
p.set(ylim=(1, 3.5))
plt.setp([p.get_children()[0],p.get_children()],alpha=0.4)
leg_elements = [Line2D([0], [0], marker='o', color='w', label='bird',
markerfacecolor=C0, markersize=15),
Line2D([0], [0], marker='o', color='w', label='car',
markerfacecolor=C1, markersize=15),
Line2D([0], [0], marker='o', color='w', label='chair',
markerfacecolor=C2, markersize=15),
Line2D([0], [0], marker='o', color='w', label='dog',
markerfacecolor=C3, markersize=15),
]
plt.legend(handles= leg_elements, prop={'size': 35})
plt.tick_params(labelsize=35)
plt.xlabel('', fontsize=35)
plt.ylabel('', fontsize=35)
#plt.savefig(os.path.join(plot_dir,'streak_length_pp'),edgecolor='w',bbox_inches='tight',dpi=500)
#plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.0)
DA=D
_sketch_ids= DA.sketch_id.unique()
_sketch_ids = [x for x in _sketch_ids if x not in donotpermute]
z_scores_sub=dict((k, gt_streak_zscore[k]) for k in _sketch_ids)
plt.figure()
plt.title('Intact mean streak Z-score Distribution for all sketches')
h=sns.distplot(z_scores_sub.values(),kde=False,hist=True,norm_hist=False)
plt.close()
print 'mean and CI for all objs', calculate_CI(z_scores_sub.values())
##broken out by condition
for this_cond in stroke_df.condition.unique():
DA=stroke_df[stroke_df['condition']==this_cond]
_sketch_ids= DA.sketch_id.unique()
_sketch_ids = [x for x in _sketch_ids if x not in donotpermute]
z_scores_sub=dict((k, gt_streak_zscore[k]) for k in _sketch_ids)
plt.figure()
plt.title('Intact mean streak Z-score Distribution for {}'.format(this_cond))
h=sns.distplot(z_scores_sub.values(),kde=False,hist=True,norm_hist=False)
#plt.close()
print 'Intact and CI for {} condition'.format(this_cond), calculate_CI(z_scores_sub.values())
###Output
mean and CI for all objs (2.068, 1.899, 2.236)
Intact and CI for closer condition (2.591, 2.299, 2.883)
Intact and CI for further condition (1.614, 1.437, 1.791)
###Markdown
Creating feature vectors and normalizing
###Code
###This is where we make a num unique labels * 2 X number of sketches vector
feature_vec = np.zeros((len(stroke_df.sketch_id.unique()),len(valid_labels)*2), dtype=int)
ind=0
start_pos=0
end_pos=0
meta_list=[]
cols = ['sketch_id','target','condition','category','outcome']
for cat in unique_cats:
DS= stroke_df[stroke_df['category']==cat]
unique_labels_in_cat=valid_labels_dict[cat]
unique_sketches_in_cat=DS['sketch_id'].unique()
start_pos = end_pos
end_pos+= len(unique_labels_in_cat)
print start_pos, end_pos
clear_output(wait=True)
Label_Vec = np.zeros((len(unique_sketches_in_cat),len(unique_labels_in_cat)*2), dtype=int)
arc_length_vec = np.zeros((len(unique_sketches_in_cat),len(valid_labels_dict[cat])), dtype=int)
for s,sketch in enumerate(unique_sketches_in_cat):
label_vec = np.zeros(len(unique_labels_in_cat),dtype=int)
arc_vec = np.zeros(len(unique_labels_in_cat),dtype=int)
DSA=DS[DS['sketch_id']==sketch]
meta_list.append(pd.Series([DSA['sketch_id'].unique(),DSA['target'].unique(),DSA['condition'].unique(),DSA['category'].unique(),DSA['outcome'].unique()], index=cols))
label_list = DSA.label.values
for label in label_list:
if label in unique_labels_in_cat:
label_ind = unique_labels_in_cat==label
label_vec[label_ind] += 1
for label in unique_labels_in_cat:
DSB=DSA[DSA['label']==label]
label_ind = unique_labels_in_cat==label
arc_vec[label_ind] = DSB['arc_length'].sum()
feature_vec[ind,start_pos:end_pos]=label_vec
feature_vec[ind,start_pos+len(valid_labels):end_pos+len(valid_labels)]=arc_vec
ind+=1
meta_df = pd.DataFrame(meta_list, columns=cols)
##Changing column values from np arrays to strings/boolean
def arr_to_str(arr):
return (arr[0])
meta_df['sketch_id']=meta_df['sketch_id'].apply(arr_to_str)
meta_df['target']=meta_df['target'].apply(arr_to_str)
meta_df['condition']=meta_df['condition'].apply(arr_to_str)
meta_df['category']=meta_df['category'].apply(arr_to_str)
meta_df['outcome']=meta_df['outcome'].apply(arr_to_str)
feature_df= pd.DataFrame(feature_vec, columns=[s + '_numstrokes' for s in valid_labels]+[s + '_total_arclength' for s in valid_labels])
##creating a compressed version of the feature df with no duplicates for parts
labs_numstrokes=[]
labs_total_arclength=[]
for lab in np.unique(valid_labels):
labs_numstrokes.append(lab +'_numstrokes')
labs_total_arclength.append(lab+'_total_arclength')
feature_df_labs=labs_numstrokes+labs_total_arclength
feature_df_final= pd.DataFrame(columns=feature_df_labs)
for this_lab in feature_df_labs:
duplicates=[col for col in feature_df if col.startswith(this_lab)]
feature_df_final[this_lab]= feature_df[duplicates].sum(axis=1)
feature_df = feature_df_final
##Check to make sure the df looks okay
assert len(feature_df.columns)==len(np.unique(feature_df.columns))
feature_df.head()
## sanity check: make sure that the numstrokes and arclength features each add up to 1
numstrokes_cols = [i for i in feature_df.columns if i.split('_')[-1]=='numstrokes']
arclength_cols = [i for i in feature_df.columns if i.split('_')[-1]=='arclength']
feat_cols = numstrokes_cols + arclength_cols
if dataset=='rawcounts':
assert len(np.unique(feature_df[arclength_cols].sum(axis=1).round(10)))==1
assert len(np.unique(feature_df[numstrokes_cols].sum(axis=1).round(10)))==1
## normalize feature_df (apply whitening)?
## Warning, this will make it so numstrokes and arclength features DO NOT add up to 1
whitening = True
if whitening:
feature_df = normalize(feature_df)
print 'Applied whitening to raw feature matrix.'
else:
print 'Did not apply whitening to raw feature matrix.'
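## For reference only: `normalize` is defined elsewhere in this notebook, so the helper
## below is just an illustrative sketch (an assumption) of a typical per-column z-score
## whitening step; it is not called anywhere.
def zscore_whiten_example(df):
    ## subtract each column's mean and divide by its standard deviation
    return (df - df.mean()) / df.std()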
## concatenate meta and features to enable easy subsetting of dataframe
F = pd.concat((meta_df,feature_df),axis=1)
## add category to F dataframe so we can subset on that later
F['category'] = F['target'].apply(lambda x: OBJECT_TO_CATEGORY[x])
# hacky way of guarding against accidentally over-writing F, have a copy here called F0
F0 = F.copy()
## aggregate by target and condition and take the mean across rows within each group
F2 = F.groupby(['target','condition']).mean().reset_index()
F2['category'] = F2['target'].apply(lambda x: OBJECT_TO_CATEGORY[x])
## get ordered list of all objects
obj_list = np.unique(F.target.values)
ordered_obj_list = ordered_objs = get_ordered_objs_list_by_category(F2)
###Output
_____no_output_____
###Markdown
Feature vector correlation
###Code
##Empirical matrix
c_means=[]
f_means=[]
for this_obj in ordered_objs:
c_obj,f_obj,c_obj_list,f_obj_list = subset_dataframe_by_condition(F0,to_inspect='object', this_object=this_obj )
c_mean = np.array(c_obj.mean())
c_means.append(c_mean)
f_mean = np.array(f_obj.mean())
f_means.append(f_mean)
# c_means = np.apply_along_axis(softmax,1,np.vstack(c_means))
# f_means = np.apply_along_axis(softmax,1,np.vstack(f_means))
all_means = np.vstack((c_means,f_means))
#dmat = pdist(all_sample_means, 'correlation')
#dmat = squareform(dmat)
dmat = np.corrcoef(all_means)
plt.rcParams["axes.grid"] = False
plt.figure(figsize=(8,8))
plt.matshow(dmat, cmap=plt.cm.Spectral,vmin=-1.,vmax=1.)
plt.colorbar(fraction=0.045)
#t = plt.xticks(range(len(ordered_objs)*2), close_far_labels, fontsize=10,rotation='vertical')
#t = plt.yticks(range(len(ordered_objs)*2), close_far_labels, fontsize=10)
plt.xticks([])
plt.yticks([])
plt.xlabel('bird car chair dog bird car chair dog\n close far ',\
fontsize=25)
plt.ylabel(' far close\ndog chair car bird dog chair car bird',\
fontsize=25)
plt.tick_params(axis='x',bottom=False,top=False,labelbottom=False)
###Output
_____no_output_____
###Markdown
Bootstrapping for CI
###Code
mean_close_dists = []
mean_far_dists = []
mean_within_dists = []
mean_between_dists = []
cf_mean_diff = []
wb_mean_diff=[]
num_iters = 1000 #Temporary low sampling number
for i in range(num_iters):
c_sample_means=[]
f_sample_means=[]
for this_obj in ordered_objs:
c_obj,f_obj,c_obj_list,f_obj_list = subset_dataframe_by_condition(F0,to_inspect='object', this_object=this_obj )
c_indices = np.random.choice(c_obj.shape[0],size=c_obj.shape[0],replace=True) #sample with replacement
c_sample = c_obj.iloc[c_indices]
c_sample.reset_index(drop=True)
f_indices= np.random.choice(f_obj.shape[0],size=f_obj.shape[0],replace=True) #sample with replacement
f_sample =f_obj.iloc[f_indices]
f_sample.reset_index(drop=True)
c_mean = np.array(c_sample.mean())
c_sample_means.append(c_mean)
f_mean = np.array(f_sample.mean())
f_sample_means.append(f_mean)
# c_sample_means = np.apply_along_axis(softmax,1,np.vstack(c_sample_means))
# f_sample_means = np.apply_along_axis(softmax,1,np.vstack(f_sample_means))
c_sample_means = np.apply_along_axis(minmaxscale,1,np.vstack(c_sample_means))
f_sample_means = np.apply_along_axis(minmaxscale,1,np.vstack(f_sample_means))
all_sample_means = np.vstack((c_sample_means,f_sample_means))
#dmat = pdist(all_sample_means, 'correlation')
#dmat = squareform(dmat)
dmat = np.corrcoef(all_sample_means)
# plt.rcParams["axes.grid"] = False
# plt.figure(figsize(8,8))
# plt.matshow(dmat, cmap=plt.cm.Spectral,vmin=-1.,vmax=1.)
# plt.colorbar(fraction=0.05)
# t = plt.xticks(range(len(ordered_objs)*2), close_far_labels, fontsize=10,rotation='vertical')
# t = plt.yticks(range(len(ordered_objs)*2), close_far_labels, fontsize=10)
# plt.tick_params(axis='x',bottom=False,top=False,labelbottom=False)
half_dim = int(dmat.shape[0]/2)
cf_dmat= dmat[:half_dim,half_dim:]
cc_dmat = dmat[:half_dim,:half_dim]
ff_dmat = dmat[half_dim:,half_dim:]
cat_dim = half_dim/4
close_dists = []
far_dists = []
within_dists = []
between_dists = []
for catnum in range(len(unique_cats)):
start_ind = int(cat_dim*catnum)
end_ind = int(cat_dim*(catnum+1))
f_cat_dmat = ff_dmat[start_ind:end_ind,start_ind:end_ind]
c_cat_dmat = cc_dmat[start_ind:end_ind,start_ind:end_ind]
cf_cat_dmat = cf_dmat[start_ind:end_ind,start_ind:end_ind]
triu_inds = np.triu_indices(cat_dim,k=1)
c_cat_dist = np.mean(c_cat_dmat[triu_inds])
f_cat_dist = np.mean(f_cat_dmat[triu_inds])
close_dists.append(c_cat_dist)
far_dists.append(f_cat_dist)
within_dists.append(np.mean(np.diag(cf_cat_dmat)))
od_inds = np.where(~np.eye(cf_cat_dmat.shape[0],dtype=bool))
between_dists.append(np.mean(cf_cat_dmat[od_inds]))
mean_close_dists.append(np.mean(close_dists))
mean_far_dists.append(np.mean(far_dists))
cf_mean_diff.append(np.mean(far_dists)-np.mean(close_dists))
mean_within_dists.append(np.mean(within_dists))
mean_between_dists.append(np.mean(between_dists))
wb_mean_diff.append(np.mean(within_dists)-np.mean(between_dists))
c_obj.shape[0]
sum(np.array(cf_mean_diff)<0)/float(len(cf_mean_diff))*2
len(cf_mean_diff)
sum(np.array(wb_mean_diff)<0)/float(len(wb_mean_diff))*2
y_vals = [np.array(mean_close_dists).mean(),np.array(mean_far_dists).mean()]
spreadclose= np.percentile(mean_close_dists, 2.5),np.percentile(mean_close_dists, 97.5)
spreadfar = np.percentile(mean_far_dists,2.5),np.percentile(mean_far_dists, 97.5)
lower_err = np.array(mean_far_dists).mean()-spreadfar[0],np.array(mean_close_dists).mean()-spreadclose[0]
upper_err = spreadfar[1]-np.array(mean_far_dists).mean(),spreadclose[1]-np.array(mean_close_dists).mean()
errs= np.vstack((lower_err, upper_err))
y_pos = np.arange(2)
fig = plt.figure(figsize=(4,8))
sns.set_context('poster')
colors = sns.color_palette('tab20c')
color_list = [colors[4],colors[6]]
plt.bar(y_pos,y_vals, yerr=errs, width= 0.8, capsize=0,color=color_list)
plt.ylim((0.,1.))
plt.xlim((-0.5,1.5))
plt.ylabel('correlation')
plt.xticks(y_pos,['close','far'])
#plt.savefig(os.path.join(plot_dir,'close_far_dispersion.pdf'))
spreaddiff= np.percentile(cf_mean_diff,2.5),np.percentile(cf_mean_diff,97.5)
lower_err = np.array(cf_mean_diff).mean()-spreaddiff[0]
upper_err = spreaddiff[1]-np.array(cf_mean_diff).mean()
differrs = np.vstack((lower_err, upper_err))
y_pos = 1
fig = plt.figure(figsize=(2,8))
plt.bar(y_pos,np.mean(cf_mean_diff), yerr= differrs, width= 0.5,capsize=0)
plt.ylim((0,0.4))
plt.xlim(0.5, 1.5)
plt.xticks([])
plt.ylabel('close-far difference')
spreadwithin= np.percentile(mean_within_dists, 2.5),np.percentile(mean_within_dists, 97.5)
spreadbetween = np.percentile(mean_between_dists,2.5),np.percentile(mean_between_dists, 97.5)
lower_err = np.array(mean_within_dists).mean()-spreadwithin[0],np.array(mean_between_dists).mean()-spreadbetween[0]
upper_err = spreadwithin[1]-np.array(mean_within_dists).mean(),spreadbetween[1]-np.array(mean_between_dists).mean()
errs= np.vstack((lower_err, upper_err))
fig = plt.figure(figsize=(5,8))
sns.set_context('poster')
colors = sns.color_palette('tab20c')
color_list = [colors[4],colors[6]]
y_vals = [np.array(mean_within_dists).mean(),np.array(mean_between_dists).mean()]
print y_vals
print errs
y_pos = np.arange(2)
plt.bar(y_pos,y_vals, yerr= errs, width= 0.8, capsize=0,color=color_list)
plt.ylim((0,1))
plt.xlim((-0.5,1.5))
plt.ylabel('correlation')
plt.xticks(y_pos,['within \nobject','between \nobjects'])
plt.tight_layout()
#plt.savefig(os.path.join(plot_dir,'within_object_btw_context_similarity.pdf'))
spreaddiff= np.percentile(wb_mean_diff,2.5),np.percentile(wb_mean_diff,97.5)
lower_err = np.array(wb_mean_diff).mean()-spreaddiff[0]
upper_err = spreaddiff[1]-np.array(wb_mean_diff).mean()
differrs = np.vstack((lower_err, upper_err))
y_pos = 1
plt.bar(y_pos,np.mean(wb_mean_diff), yerr= differrs, width= 0.5,capsize=20)
plt.ylim((0,0.5))
plt.xlim(0, 2)
plt.xticks([])
plt.ylabel('Within-between difference')
###get empirical sparsity difference for close and far
##Get close and far vectors:
co,fa,obc,obf = subset_dataframe_by_condition(F2,to_inspect='all')
cs = [] #close sparsity
fs = [] #far sparsity
cs = co.apply(get_sparsity, axis = 1)
fs = fa.apply(get_sparsity, axis = 1)
print 'difference in sparsity between close and far = {}'.format(np.mean(cs.values) - np.mean(fs.values))
###Output
_____no_output_____
###Markdown
close 0.3460397682282061 far 0.2769144895890592
###Code
### bootstrap resample to get 95% CIs
nIter = 5000
sdiff=[]
for currIter in np.arange(nIter):
print 'Running bootstrap iteration {} of {}'.format(currIter+1,nIter)
clear_output(wait=True)
Fboot = resample_sketches(F0,random_state=currIter)
F2boot = aggregate_sketches(Fboot,OBJECT_TO_CATEGORY=OBJECT_TO_CATEGORY)
c_boot,f_boot,obc,obf = subset_dataframe_by_condition(F2boot,to_inspect='all')
csboot = c_boot.apply(get_sparsity,axis=1)
fsboot = f_boot.apply(get_sparsity,axis=1)
sdiff.append(np.mean(csboot.values)-np.mean(fsboot.values))
print np.mean(sdiff),np.percentile(sdiff,2.5), np.percentile(sdiff,97.5)
sdiffdf = pd.DataFrame(sdiff)
sdiffdf.columns=['sparsity']
colors = sns.color_palette('tab20c')
sns.set_context('poster')
fig = plt.figure(figsize=(4,8))
mu = np.mean(sdiff)
lb = np.percentile(sdiff,2.5)
ub = np.percentile(sdiff,97.5)
plt.bar(0,mu,color=colors[4],width=0.3)
plt.errorbar(0,mu,
yerr=np.vstack((mu-lb,ub-mu)),
color='black',elinewidth=3)
wid = 0.3
plt.xlim(-wid,wid)
plt.ylim(0,1.)
plt.ylabel('vector sparsity difference',fontsize=22)
plt.xlabel(' ')
plt.xticks([])
plt.tight_layout()
#plt.savefig(os.path.join(plot_dir,'difference_vector_sparsity.pdf'))
np.mean(sdiff).round(3)
print lb,ub
###Output
_____no_output_____
###Markdown
Apply PCA and visualize MDS plot
###Code
## aggregate by target and condition and take the mean across rows within each group
F2 = F.groupby(['target','condition']).mean().reset_index()
## re-add category back to the F dataframe so we can subset on that later
##( taking mean above removes it b/c it is a string)
F2['category'] = F2['target'].apply(lambda x: OBJECT_TO_CATEGORY[x])
## sort into standard order
F2 = F2.sort_values(['condition','category','target']).reset_index(drop=True)
## extract just the feature columns and store as np array
PF = np.array(F2[feat_cols])
## do the same for the meta
PM = F2.loc[:,['condition','category','target']]
# optionally apply PCA
apply_pca = True
num_pcs = 3
if apply_pca:
from sklearn.decomposition import PCA
pca = PCA(n_components=num_pcs)
pca.fit(PF)
print('Applying PCA and transforming data, using {} components'.format(num_pcs))
PF = pca.fit_transform(PF)
PF = pd.DataFrame(PF)
## join into single dataframe for plotting
P = pd.concat([PF,PM],axis=1)
sns.set_style('white')
sns.set_context('talk')
colors = sns.color_palette("husl", 5)
sns.scatterplot(data=P,
x=0,
y=1,
hue='category',
style='condition',
palette=colors[:4])
plt.xlabel(' ')
plt.ylabel(' ')
axlim = 7
plt.xlim(-axlim,axlim)
# plt.xticks(np.arange(-axlim,axlim), 1.)
plt.ylim(-axlim,axlim)
plt.legend(bbox_to_anchor=(1.,1.))
plt.tick_params(axis='both', which='major', labelsize=20)
plt.tight_layout()
#plt.savefig(os.path.join(plot_dir,'MDS_part_vectors.pdf'))
###Output
_____no_output_____
###Markdown
Some exploratory stuff
###Code
aggs = {'arc_length':{
'Mean':'mean',
'Std Dev':'std',
'95CI': lambda x:([np.percentile([np.random.choice(x,size=len(x),replace=True)]*1000,[2.5,97.5])])
}
}
#'95CI':lambda x:([np.percentile([np.random.choice(x,size=len(x),replace=True)]*1000,[2.5,97.5])])
stroke_df.groupby(['label','condition']).agg(aggs)
###Output
_____no_output_____ |
Workspace/ChrisMartinModeling.ipynb | ###Markdown
Modeling
###Code
X = train_enc
y = trainlabel['damage_grade']
X_train, X_test, y_train, y_test = train_test_split(X,y,stratify=y, random_state=123)
X_train.head()
###Output
_____no_output_____
###Markdown
DECISION TREE CLASSIFIER
###Code
# Decision Tree with max_depth features adjusted
pipe_forest = make_pipeline(StandardScaler(), DecisionTreeClassifier())
params = {'decisiontreeclassifier__max_depth' : [2, 3, 4, 5]
}
grid_forest = GridSearchCV(pipe_forest, param_grid = params)
grid_forest.fit(X_train,y_train)
grid_forest.score(X_test,y_test)
pred = grid_forest.predict(X_test)
f1_score(y_test,pred, average='micro')
grid_forest.best_estimator_
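# The fit / score / predict / f1 pattern repeats for every grid search below; a small
# helper like this one is an optional refactor sketch (not used by the original cells):
from sklearn.metrics import f1_score
def evaluate_grid(grid, X_tr, y_tr, X_te, y_te):
    # fit the grid search, then return accuracy, micro-averaged F1 and the best estimator
    grid.fit(X_tr, y_tr)
    preds = grid.predict(X_te)
    return grid.score(X_te, y_te), f1_score(y_te, preds, average='micro'), grid.best_estimator_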
# # Decision Tree with min_samples_split and max_depth adjusted
pipe_forest = make_pipeline(StandardScaler(), DecisionTreeClassifier())
params = {'decisiontreeclassifier__max_depth' : [2, 3, 4, 5],
'decisiontreeclassifier__random_state' : [123]
,'decisiontreeclassifier__min_samples_split' : [2, 3, 4]
}
grid_forest = GridSearchCV(pipe_forest, param_grid = params)
grid_forest.fit(X_train,y_train)
grid_forest.score(X_test,y_test)
pred = grid_forest.predict(X_test)
f1_score(y_test,pred, average='micro')
grid_forest.best_estimator_
# # Decision Tree with max_depth adjusted AGAIN
pipe_forest = make_pipeline(StandardScaler(), DecisionTreeClassifier())
params = {'decisiontreeclassifier__max_depth' : [5,6,7,8],
'decisiontreeclassifier__random_state' : [123]
,'decisiontreeclassifier__min_samples_split' : [2, 3, 4]
}
grid_forest = GridSearchCV(pipe_forest, param_grid = params)
grid_forest.fit(X_train,y_train)
grid_forest.score(X_test,y_test)### best DT model
pred = grid_forest.predict(X_test)
f1_score(y_test,pred, average='micro')
grid_forest.best_estimator_
# Clean Decision Tree pipeline
pipe_forest = make_pipeline(StandardScaler(), DecisionTreeClassifier())
params = {
'decisiontreeclassifier__random_state' : [123]}
grid_forest = GridSearchCV(pipe_forest, param_grid = params)
grid_forest.fit(X_train,y_train)
grid_forest.score(X_test,y_test)
pred = grid_forest.predict(X_test)
f1_score(y_test,pred, average='micro')
grid_forest.best_estimator_
# # Decision Tree with min_samples_split adjusted again
pipe_forest = make_pipeline(StandardScaler(), DecisionTreeClassifier())
params = {'decisiontreeclassifier__max_depth' : [5,6,7,8],
'decisiontreeclassifier__random_state' : [123]
,'decisiontreeclassifier__min_samples_split' : [4,5,6,7]
}
grid_forest = GridSearchCV(pipe_forest, param_grid = params)
grid_forest.fit(X_train,y_train)
grid_forest.score(X_test,y_test)
pred = grid_forest.predict(X_test)
f1_score(y_test,pred, average='micro')
grid_forest.best_estimator_
# # Decision Tree with min_samples_split and max_depth adjusted ONCE AGAIN
pipe_forest = make_pipeline(StandardScaler(), DecisionTreeClassifier())
params = {'decisiontreeclassifier__max_depth' : [7,8,9,10,11],
'decisiontreeclassifier__random_state' : [123]
,'decisiontreeclassifier__min_samples_split' : [6,7,8,9,10]
}
grid_forest = GridSearchCV(pipe_forest, param_grid = params)
grid_forest.fit(X_train,y_train)
grid_forest.score(X_test,y_test)
pred = grid_forest.predict(X_test)
f1_score(y_test,pred, average='micro')
grid_forest.best_estimator_
###Output
_____no_output_____
###Markdown
BAGGING ESTIMATOR
###Code
# Bagging estimator with n_estimators, max_features, max_samples adjusted
pipe_bagged = make_pipeline(StandardScaler(), BaggingClassifier())
params = {'baggingclassifier__n_estimators' : [10,20,30,40,50],
'baggingclassifier__random_state' : [123]
,'baggingclassifier__max_features' : [1,6,7,8,9,10],
'baggingclassifier__max_samples' : [1, 6,7,8,9,10]
}
grid_bagged = GridSearchCV(pipe_bagged, param_grid = params)
grid_bagged.fit(X_train,y_train)
grid_bagged.score(X_test,y_test)
pred = grid_bagged.predict(X_test)
f1_score(y_test,pred, average='micro')
grid_bagged.best_estimator_
# Bagging with max_features adjusted AGAIN
pipe_bagged = make_pipeline(StandardScaler(), BaggingClassifier())
params = {
'baggingclassifier__random_state' : [123]
,'baggingclassifier__max_features' : [9,10, 12, 14, 16, 20]
}
grid_bagged = GridSearchCV(pipe_bagged, param_grid = params)
grid_bagged.fit(X_train,y_train)
############### best score ######################
grid_bagged.score(X_test,y_test)
pred = grid_bagged.predict(X_test)
f1_score(y_test,pred, average='micro')
grid_bagged.best_estimator_
#Clean Bagging Estimator
pipe_bagged = make_pipeline(StandardScaler(), BaggingClassifier())
params = {
'baggingclassifier__random_state' : [123]
}
grid_bagged = GridSearchCV(pipe_bagged, param_grid = params)
grid_bagged.fit(X_train,y_train)
grid_bagged.score(X_test,y_test)
pred = grid_bagged.predict(X_test)
f1_score(y_test,pred, average='micro')
grid_bagged.best_estimator_
###Output
_____no_output_____
###Markdown
AdaBoost
###Code
#Clean Adaboost Estimator
pipe_ada = make_pipeline(StandardScaler(), AdaBoostClassifier())
params = {
'adaboostclassifier__random_state' : [123]
}
grid_ada = GridSearchCV(pipe_ada, param_grid = params)
grid_ada.fit(X_train,y_train)
grid_ada.score(X_test,y_test)
pred = grid_ada.predict(X_test)
f1_score(y_test,pred, average='micro')
grid_ada.best_estimator_
# Adaboost with n_estimators, learning_rate, algorithm adjusted
pipe_ada = make_pipeline(StandardScaler(), AdaBoostClassifier())
params = {'adaboostclassifier__n_estimators':[30,40,50,60,70],
'adaboostclassifier__learning_rate':[.1,.3,1.0,1.3,3],
'adaboostclassifier__algorithm':['SAMME.R','SAMME'],
'adaboostclassifier__random_state' : [123]
}
grid_ada = GridSearchCV(pipe_ada, param_grid = params)
grid_ada.fit(X_train,y_train)
grid_ada.score(X_test,y_test)
pred = grid_ada.predict(X_test)
f1_score(y_test,pred, average='micro')
grid_ada.best_estimator_
# Adaboost with n_estimators, learning_rate adjusted AGAIN
pipe_ada = make_pipeline(StandardScaler(), AdaBoostClassifier())
params = {'adaboostclassifier__n_estimators':[70,80,90,100,110],
'adaboostclassifier__learning_rate':[1.3,2, 2.3, 3],
'adaboostclassifier__random_state' : [123]
}
grid_ada = GridSearchCV(pipe_ada, param_grid = params)
grid_ada.fit(X_train,y_train)
grid_ada.score(X_test,y_test)
pred = grid_ada.predict(X_test)
f1_score(y_test,pred, average='micro')
grid_ada.best_estimator_
# MY BEST MODEL: using ALL variables....
# Pipeline(steps=[('standardscaler', StandardScaler()),
# ('baggingclassifier', < with base estimator of Decision Tree Classifier>
# BaggingClassifier(max_features=9, random_state=123))])
# Accuracy score: .668
###Output
_____no_output_____ |
notebooks/wall-crack-detection.ipynb | ###Markdown
Set up environment
###Code
import math, re, os
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math
import random
import cv2
import urllib
from functools import partial
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers.experimental import preprocessing
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow.keras.preprocessing import image
import warnings
warnings.filterwarnings('ignore')
print("Tensorflow version " + tf.__version__)
###Output
_____no_output_____
###Markdown
Set up environment and variables
###Code
AUTOTUNE = tf.data.experimental.AUTOTUNE
BATCH_SIZE = 64
CLASSES = ['crack', 'non crack']
EPOCHS = 100
###Output
_____no_output_____
###Markdown
Define data loading methods
###Code
TRAINING_FILENAMES = tf.io.gfile.glob('../input/vgg-skz-crack-dataset/train/*/*.jpg')
VALID_FILENAMES = tf.io.gfile.glob('../input/vgg-skz-crack-dataset/validation/*/*.jpg')
TEST_FILENAMES = tf.io.gfile.glob('../input/vgg-skz-crack-dataset/test/*/*.jpg')
###Output
_____no_output_____
###Markdown
Augmentation methods (adding random noise to the image)
###Code
def add_noise(image):
VARIABILITY = 60
deviation = VARIABILITY*random.random()
noise = np.random.normal(0, deviation, image.shape)
image += noise
np.clip(image, 0., 255.)
return image
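## Note: `add_noise` above is not wired into the generators below. If noise augmentation
## is wanted, one option (an assumption, not part of the original pipeline) is to chain it
## with the ResNet50 preprocessing in a single callable, e.g.
## ImageDataGenerator(preprocessing_function=noisy_preprocess):
def noisy_preprocess(image):
    ## add random Gaussian noise first, then apply the standard ResNet50 preprocessing
    return preprocess_input(add_noise(image))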
train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
AUTOTUNE = tf.data.experimental.AUTOTUNE
def get_training_dataset():
dataset = train_datagen.flow_from_directory(
'../input/vgg-skz-crack-dataset/train',
class_mode='categorical',
target_size=[256, 256],
batch_size=BATCH_SIZE,
shuffle=True,
)
return dataset
valid_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
def get_validation_dataset():
dataset = valid_datagen.flow_from_directory(
'../input/vgg-skz-crack-dataset/validation',
class_mode='categorical',
target_size=[256, 256],
batch_size=BATCH_SIZE,
)
return dataset
test_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
def get_test_dataset():
dataset = test_datagen.flow_from_directory(
'../input/vgg-skz-crack-dataset/test',
target_size=[256, 256],
batch_size=BATCH_SIZE,
)
return dataset
test2_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
def count_data_items(filenames):
return len(filenames)
NUM_TRAINING_IMAGES = count_data_items(TRAINING_FILENAMES)
NUM_VALIDATION_IMAGES = count_data_items(VALID_FILENAMES)
NUM_TEST_IMAGES = count_data_items(TEST_FILENAMES)
print('Dataset: {} training images, {} validation images, {} test images'.format(
NUM_TRAINING_IMAGES, NUM_VALIDATION_IMAGES, NUM_TEST_IMAGES))
# numpy and matplotlib defaults
np.set_printoptions(threshold=15, linewidth=80)
def title_from_label_and_target(label, correct_label):
if correct_label is None:
return CLASSES[label], True
correct = (label == correct_label)
return "{} [{}{}{}]".format(CLASSES[int(label)], 'OK' if correct else 'NO', u"\u2192" if not correct else '',
CLASSES[correct_label] if not correct else ''), correct
def display_one_image(image, title, subplot, red=False, titlesize=16):
plt.subplot(*subplot)
plt.axis('off')
plt.imshow(image.astype('uint8'))
if len(title) > 0:
plt.title(title, fontsize=int(titlesize) if not red else int(titlesize/1.2), color='red' if red else 'black', fontdict={'verticalalignment':'center'}, pad=int(titlesize/1.5))
return (subplot[0], subplot[1], subplot[2]+1)
def display_batch_of_images(directory_iterator, predictions=None):
"""This will work with:
display_batch_of_images(images)
display_batch_of_images(images, predictions)
display_batch_of_images((images, labels))
display_batch_of_images((images, labels), predictions)
"""
# data
images, labels = directory_iterator.next()
labels = np.argmax(labels, axis=-1)
if labels is None:
labels = [None for _ in enumerate(images)]
# auto-squaring: this will drop data that does not fit into square or square-ish rectangle
rows = int(math.sqrt(len(images)))
cols = len(images)//rows
# size and spacing
FIGSIZE = 13.0
SPACING = 0.1
subplot=(rows,cols,1)
if rows < cols:
plt.figure(figsize=(FIGSIZE,FIGSIZE/cols*rows))
else:
plt.figure(figsize=(FIGSIZE/rows*cols,FIGSIZE))
# display
for i, (image, label) in enumerate(zip(images[:rows*cols], labels[:rows*cols])):
title = '' if label is None else CLASSES[int(label)]
correct = True
if predictions is not None:
title, correct = title_from_label_and_target(predictions[i], int(label))
dynamic_titlesize = FIGSIZE*SPACING/max(rows,cols)*40+3 # magic formula tested to work from 1x1 to 10x10 images
subplot = display_one_image(image, title, subplot, not correct, titlesize=dynamic_titlesize)
#layout
plt.tight_layout()
if label is None and predictions is None:
plt.subplots_adjust(wspace=0, hspace=0)
else:
plt.subplots_adjust(wspace=SPACING, hspace=SPACING)
plt.show()
ds_train = get_training_dataset()
ds_valid = get_validation_dataset()
display_batch_of_images(ds_train)
display_batch_of_images(ds_valid)
###Output
_____no_output_____
###Markdown
Building the model
Learning rate schedule
###Code
lr_scheduler = tf.keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate=1e-5,
decay_steps=10000,
decay_rate=0.9)
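## With these settings the learning rate decays smoothly as
## lr(step) = 1e-5 * 0.9 ** (step / 10000)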
###Output
_____no_output_____
###Markdown
Building the model with pretrained ResNet50 model
###Code
# img_adjust_layer = tf.keras.layers.Lambda(tf.keras.applications.resnet50.preprocess_input, input_shape=[0, *IMAGE_SIZE, 3])
from tensorflow.python.ops.numpy_ops import np_config
np_config.enable_numpy_behavior()
base_model = tf.keras.applications.ResNet50(weights='imagenet', include_top=False, pooling='avg')
base_model.trainable = False
model = tf.keras.Sequential([
# Base
base_model,
# Head
# tf.keras.layers.Flatten()
tf.keras.layers.Dense(128, activation='linear'),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(2, activation='softmax')
])
model.summary()
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=lr_scheduler, epsilon=0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the model
###Code
early_stopping_callbacks = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)
STEPS_PER_EPOCH = NUM_TRAINING_IMAGES // BATCH_SIZE
VALID_STEPS = NUM_VALIDATION_IMAGES // BATCH_SIZE
history = model.fit(ds_train,
steps_per_epoch=STEPS_PER_EPOCH,
epochs=EPOCHS,
validation_data=ds_valid,
validation_steps=VALID_STEPS,
callbacks=[early_stopping_callbacks]
)
###Output
_____no_output_____
###Markdown
Evaluating model
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.figure()
plt.plot(epochs, loss, 'r', label='Training Loss')
plt.plot(epochs, val_loss, 'b', label='Validation Loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Save the model
###Code
model.save('./crack_detection_resnet50_model')
###Output
_____no_output_____
###Markdown
Load the model
###Code
# model = tf.keras.models.load_model("../input/resnet-new-crack-detection/crack_detection_resnet50_model")
###Output
_____no_output_____
###Markdown
Run predictions
###Code
ds_test = get_test_dataset()
STEP_SIZE_TEST = ds_test.n // ds_test.batch_size
ds_test.reset()
probabilities = model.predict(ds_test, steps=STEP_SIZE_TEST, verbose=1)
predictions = np.argmax(probabilities, axis=-1)
display_batch_of_images(ds_test, predictions)
###Output
_____no_output_____
###Markdown
Run prediction on whole image
###Code
def predict_on_crops(input_image, https=False, height=256, width=256, save_crops = False):
if https:
req = urllib.request.urlopen(input_image)
arr = np.asarray(bytearray(req.read()), dtype=np.uint8)
im = cv2.imdecode(arr, -1)
else:
im = cv2.imread(input_image)
try:
imgheight, imgwidth, channels = im.shape
except:
im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
imgheight, imgwidth, channels = im.shape
k=0
output_image = np.zeros_like(im)
for i in range(0,imgheight,height):
for j in range(0,imgwidth,width):
a = im[i:i+height, j:j+width]
            a_batch = np.expand_dims(a, axis=0)  # batch dimension is only needed for the generator/model
            processed_a = test2_datagen.flow(a_batch).next()
            ## discard image crops that are not full size
predicted_class = CLASSES[int(np.argmax(model.predict(processed_a), axis=-1))]
## save image
file, ext = os.path.splitext(input_image)
image_name = file.split('/')[-1]
folder_name = 'out_' + image_name
## Put predicted class on the image
if predicted_class == 'crack':
color = (0,0, 255)
else:
color = (0, 255, 0)
cv2.putText(a, predicted_class, (50,50), cv2.FONT_HERSHEY_SIMPLEX , 0.7, color, 1, cv2.LINE_AA)
b = np.zeros_like(a, dtype=np.uint8)
b[:] = color
add_img = cv2.addWeighted(a, 0.9, b, 0.1, 0, dtype=cv2.CV_64F)
## Save crops
if save_crops:
if not os.path.exists(os.path.join('predictions', folder_name)):
os.makedirs(os.path.join('predictions', folder_name))
filename = os.path.join('predictions', folder_name,'img_{}.png'.format(k))
cv2.imwrite(filename, add_img)
output_image[i:i+height, j:j+width,:] = add_img
k+=1
## Save output image
cv2.imwrite(os.path.join('predictions', folder_name+ '.jpg'), output_image)
plt.figure(figsize=(10,10))
plt.imshow(cv2.cvtColor(output_image, cv2.COLOR_BGR2RGB))
predict_on_crops('../input/crack-test/test_big/00001.jpg')
###Output
_____no_output_____ |
08 Build Model.ipynb | ###Markdown
Model Build (part 1)
###Code
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold, cross_val_score, cross_val_predict
from sklearn import metrics
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.preprocessing import StandardScaler
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
from keras.layers import Dropout
import csv
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
model_data = pd.read_csv('model_data.csv')
model_data.columns
###Output
_____no_output_____
###Markdown
Feature Engineering
###Code
#feature engineering
#engineer a number out of stage
model_data.stage.value_counts()
#Closed Won, Deal Signed, Invoice Sent = 1; otherwise 0
def translate_stage(stage):
if stage in ['Closed Won', 'Deal Signed', 'Invoice Sent']:
return(1)
else:
return (0)
model_data['y'] = model_data['stage'].apply(translate_stage)
#feature engineering
y = model_data['y']
X = model_data[['lat',
'lng',
'mobility_score',
'carshare',
'bikeshare',
'ridehailing',
'masstransit',
'closest_ts',
'within_one_tenth',
'within_one_half',
'within_one',
#'within_five' #this was taken out because it caused scores to decrease
]]
###Output
_____no_output_____
###Markdown
Split Data
###Code
#may need to delete outliers here, or put a max on closest_ts
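# Illustrative helper (an assumption, not applied below): winsorize extreme closest_ts
# values, e.g. at the 99th percentile, before modeling.
def cap_closest_ts(features, q=0.99):
    capped = features.copy()
    capped['closest_ts'] = capped['closest_ts'].clip(upper=capped['closest_ts'].quantile(q))
    return capped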
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=14)
#calculate baseline
print (y.value_counts())
print (1 - (456 / (1227+456)))
###Output
0 1227
1 456
Name: y, dtype: int64
0.7290552584670231
###Markdown
Run Models
###Code
#random forest (for grid search code, see notebook 09)
model = RandomForestClassifier(n_estimators=500
,max_depth=5
,min_samples_split=2
,min_samples_leaf=1
,max_features=3)
scores = cross_val_score(model, X_train, y_train, cv=3)
print(scores)
print(np.mean(scores))
model.fit(X_train, y_train)
features = pd.DataFrame(list(zip(X.columns,model.feature_importances_))
,columns=['feature','importance'])
features.plot(kind='bar', title='Random Forest Feature Importance',
x='feature', y='importance', fontsize='large', legend=False
,sort_columns=True)
plt.xticks(rotation = 90)
plt.xlabel('Features', fontsize='large')
plt.ylabel('Feature importance', fontsize='large')
#######GRADIENT BOOSTING model
model = GradientBoostingClassifier(max_features = 6, max_depth = 50)
scores = cross_val_score(model, X_train, y_train, cv=3)
print(scores)
print(np.mean(scores))
model.fit(X_train, y_train)
features = pd.DataFrame(list(zip(X.columns,model.feature_importances_))
,columns=['feature','importance'])
features.plot(kind='bar', title='Gradient Boost Feature Importance',
x='feature', y='importance', fontsize='large', legend=False
,sort_columns=True)
plt.xticks(rotation = 90)
plt.xlabel('Features', fontsize='large')
plt.ylabel('Feature importance', fontsize='large')
####### ADABoost model
model = AdaBoostClassifier(n_estimators=100)
scores = cross_val_score(model, X_train, y_train, cv=3)
print(scores)
print(np.mean(scores))
model.fit(X_train, y_train)
features = pd.DataFrame(list(zip(X.columns,model.feature_importances_))
,columns=['feature','importance'])
features.plot(kind='bar', title='AdaBoost Feature Importance',
x='feature', y='importance', fontsize='large', legend=False
,sort_columns=True)
plt.xticks(rotation = 90)
plt.xlabel('Features', fontsize='large')
plt.ylabel('Feature importance', fontsize='large')
#Create keras Model
#X_train, X_test, y_train, y_test = train_test_split(Xtr, ytr, test_size=0.30, random_state=11)
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)
ss = StandardScaler()
X_train = ss.fit_transform(X_train) #the scaler is fit only to the training data
X_test = ss.transform(X_test)
model = Sequential()
input_units = X_train.shape[1] #number of features in training set
hidden_units = input_units #hidden layer has the same number of nodes as input
#first input layer
model.add(Dense(hidden_units
,input_dim=input_units
,activation='relu'
#uncomment this to add L2 regularization
#,kernel_regularizer=regularizers.l2(0.0001)
))
#hidden layer (try with and without)
node_reduction = 0
model.add(Dense(hidden_units - node_reduction
,input_dim=input_units
,activation='tanh'
#,kernel_regularizer=regularizers.l2(0.0001)
))
#model.add(Dropout(0.8))
#final layer
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy'
,optimizer='adam'
#added later
,metrics=['binary_accuracy']
)
#Run Keras model
history = model.fit(X_train, y_train, validation_data=(X_test, y_test),
epochs=60, batch_size=None, verbose=1)
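# Optional (illustrative sketch): stop training when validation loss stops improving, e.g.
# from keras.callbacks import EarlyStopping
# model.fit(..., callbacks=[EarlyStopping(monitor='val_loss', patience=5)])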
###Output
Train on 1178 samples, validate on 505 samples
Epoch 1/60
1178/1178 [==============================] - 1s 799us/step - loss: 0.6671 - binary_accuracy: 0.6927 - val_loss: 0.6187 - val_binary_accuracy: 0.7505
Epoch 2/60
1178/1178 [==============================] - 0s 55us/step - loss: 0.6278 - binary_accuracy: 0.7173 - val_loss: 0.5969 - val_binary_accuracy: 0.7505
Epoch 3/60
1178/1178 [==============================] - 0s 56us/step - loss: 0.6030 - binary_accuracy: 0.7216 - val_loss: 0.5821 - val_binary_accuracy: 0.7545
Epoch 4/60
1178/1178 [==============================] - 0s 52us/step - loss: 0.5853 - binary_accuracy: 0.7258 - val_loss: 0.5712 - val_binary_accuracy: 0.7505
Epoch 5/60
1178/1178 [==============================] - 0s 58us/step - loss: 0.5716 - binary_accuracy: 0.7343 - val_loss: 0.5650 - val_binary_accuracy: 0.7564
Epoch 6/60
1178/1178 [==============================] - 0s 57us/step - loss: 0.5619 - binary_accuracy: 0.7394 - val_loss: 0.5593 - val_binary_accuracy: 0.7604
Epoch 7/60
1178/1178 [==============================] - 0s 56us/step - loss: 0.5539 - binary_accuracy: 0.7479 - val_loss: 0.5560 - val_binary_accuracy: 0.7604
Epoch 8/60
1178/1178 [==============================] - 0s 58us/step - loss: 0.5485 - binary_accuracy: 0.7589 - val_loss: 0.5534 - val_binary_accuracy: 0.7663
Epoch 9/60
1178/1178 [==============================] - 0s 58us/step - loss: 0.5436 - binary_accuracy: 0.7581 - val_loss: 0.5513 - val_binary_accuracy: 0.7663
Epoch 10/60
1178/1178 [==============================] - 0s 55us/step - loss: 0.5401 - binary_accuracy: 0.7640 - val_loss: 0.5501 - val_binary_accuracy: 0.7683
Epoch 11/60
1178/1178 [==============================] - 0s 54us/step - loss: 0.5381 - binary_accuracy: 0.7674 - val_loss: 0.5489 - val_binary_accuracy: 0.7663
Epoch 12/60
1178/1178 [==============================] - 0s 56us/step - loss: 0.5352 - binary_accuracy: 0.7674 - val_loss: 0.5487 - val_binary_accuracy: 0.7683
Epoch 13/60
1178/1178 [==============================] - 0s 52us/step - loss: 0.5339 - binary_accuracy: 0.7683 - val_loss: 0.5472 - val_binary_accuracy: 0.7683
Epoch 14/60
1178/1178 [==============================] - 0s 56us/step - loss: 0.5330 - binary_accuracy: 0.7691 - val_loss: 0.5470 - val_binary_accuracy: 0.7683
Epoch 15/60
1178/1178 [==============================] - 0s 56us/step - loss: 0.5315 - binary_accuracy: 0.7683 - val_loss: 0.5466 - val_binary_accuracy: 0.7683
Epoch 16/60
1178/1178 [==============================] - 0s 56us/step - loss: 0.5301 - binary_accuracy: 0.7708 - val_loss: 0.5461 - val_binary_accuracy: 0.7683
Epoch 17/60
1178/1178 [==============================] - 0s 55us/step - loss: 0.5282 - binary_accuracy: 0.7683 - val_loss: 0.5450 - val_binary_accuracy: 0.7683
Epoch 18/60
1178/1178 [==============================] - 0s 54us/step - loss: 0.5273 - binary_accuracy: 0.7691 - val_loss: 0.5442 - val_binary_accuracy: 0.7703
Epoch 19/60
1178/1178 [==============================] - 0s 58us/step - loss: 0.5263 - binary_accuracy: 0.7691 - val_loss: 0.5434 - val_binary_accuracy: 0.7703
Epoch 20/60
1178/1178 [==============================] - 0s 57us/step - loss: 0.5251 - binary_accuracy: 0.7708 - val_loss: 0.5425 - val_binary_accuracy: 0.7703
Epoch 21/60
1178/1178 [==============================] - 0s 55us/step - loss: 0.5241 - binary_accuracy: 0.7699 - val_loss: 0.5427 - val_binary_accuracy: 0.7703
Epoch 22/60
1178/1178 [==============================] - 0s 55us/step - loss: 0.5238 - binary_accuracy: 0.7708 - val_loss: 0.5417 - val_binary_accuracy: 0.7723
Epoch 23/60
1178/1178 [==============================] - 0s 57us/step - loss: 0.5231 - binary_accuracy: 0.7716 - val_loss: 0.5414 - val_binary_accuracy: 0.7723
Epoch 24/60
1178/1178 [==============================] - 0s 55us/step - loss: 0.5226 - binary_accuracy: 0.7716 - val_loss: 0.5412 - val_binary_accuracy: 0.7723
Epoch 25/60
1178/1178 [==============================] - 0s 58us/step - loss: 0.5213 - binary_accuracy: 0.7708 - val_loss: 0.5421 - val_binary_accuracy: 0.7723
Epoch 26/60
1178/1178 [==============================] - 0s 63us/step - loss: 0.5197 - binary_accuracy: 0.7716 - val_loss: 0.5399 - val_binary_accuracy: 0.7723
Epoch 27/60
1178/1178 [==============================] - 0s 60us/step - loss: 0.5193 - binary_accuracy: 0.7716 - val_loss: 0.5403 - val_binary_accuracy: 0.7703
Epoch 28/60
1178/1178 [==============================] - 0s 59us/step - loss: 0.5187 - binary_accuracy: 0.7708 - val_loss: 0.5400 - val_binary_accuracy: 0.7723
Epoch 29/60
1178/1178 [==============================] - 0s 59us/step - loss: 0.5185 - binary_accuracy: 0.7716 - val_loss: 0.5400 - val_binary_accuracy: 0.7723
Epoch 30/60
1178/1178 [==============================] - 0s 59us/step - loss: 0.5176 - binary_accuracy: 0.7716 - val_loss: 0.5392 - val_binary_accuracy: 0.7723
Epoch 31/60
1178/1178 [==============================] - 0s 60us/step - loss: 0.5173 - binary_accuracy: 0.7716 - val_loss: 0.5401 - val_binary_accuracy: 0.7703
Epoch 32/60
1178/1178 [==============================] - 0s 59us/step - loss: 0.5159 - binary_accuracy: 0.7725 - val_loss: 0.5391 - val_binary_accuracy: 0.7703
Epoch 33/60
1178/1178 [==============================] - 0s 58us/step - loss: 0.5150 - binary_accuracy: 0.7725 - val_loss: 0.5383 - val_binary_accuracy: 0.7743
Epoch 34/60
1178/1178 [==============================] - 0s 58us/step - loss: 0.5153 - binary_accuracy: 0.7725 - val_loss: 0.5389 - val_binary_accuracy: 0.7703
Epoch 35/60
1178/1178 [==============================] - 0s 63us/step - loss: 0.5145 - binary_accuracy: 0.7742 - val_loss: 0.5386 - val_binary_accuracy: 0.7683
Epoch 36/60
1178/1178 [==============================] - 0s 63us/step - loss: 0.5134 - binary_accuracy: 0.7742 - val_loss: 0.5385 - val_binary_accuracy: 0.7723
Epoch 37/60
1178/1178 [==============================] - 0s 61us/step - loss: 0.5123 - binary_accuracy: 0.7742 - val_loss: 0.5390 - val_binary_accuracy: 0.7723
Epoch 38/60
1178/1178 [==============================] - 0s 58us/step - loss: 0.5118 - binary_accuracy: 0.7742 - val_loss: 0.5383 - val_binary_accuracy: 0.7703
Epoch 39/60
1178/1178 [==============================] - 0s 60us/step - loss: 0.5122 - binary_accuracy: 0.7742 - val_loss: 0.5379 - val_binary_accuracy: 0.7723
Epoch 40/60
1178/1178 [==============================] - 0s 61us/step - loss: 0.5116 - binary_accuracy: 0.7742 - val_loss: 0.5383 - val_binary_accuracy: 0.7723
Epoch 41/60
1178/1178 [==============================] - 0s 58us/step - loss: 0.5108 - binary_accuracy: 0.7742 - val_loss: 0.5389 - val_binary_accuracy: 0.7723
Epoch 42/60
1178/1178 [==============================] - 0s 58us/step - loss: 0.5102 - binary_accuracy: 0.7742 - val_loss: 0.5381 - val_binary_accuracy: 0.7723
Epoch 43/60
1178/1178 [==============================] - 0s 62us/step - loss: 0.5096 - binary_accuracy: 0.7750 - val_loss: 0.5383 - val_binary_accuracy: 0.7723
Epoch 44/60
1178/1178 [==============================] - 0s 60us/step - loss: 0.5095 - binary_accuracy: 0.7759 - val_loss: 0.5383 - val_binary_accuracy: 0.7723
Epoch 45/60
1178/1178 [==============================] - 0s 59us/step - loss: 0.5088 - binary_accuracy: 0.7733 - val_loss: 0.5382 - val_binary_accuracy: 0.7723
Epoch 46/60
1178/1178 [==============================] - 0s 61us/step - loss: 0.5081 - binary_accuracy: 0.7750 - val_loss: 0.5383 - val_binary_accuracy: 0.7723
Epoch 47/60
1178/1178 [==============================] - 0s 57us/step - loss: 0.5078 - binary_accuracy: 0.7742 - val_loss: 0.5382 - val_binary_accuracy: 0.7723
Epoch 48/60
1178/1178 [==============================] - 0s 58us/step - loss: 0.5079 - binary_accuracy: 0.7742 - val_loss: 0.5383 - val_binary_accuracy: 0.7743
Epoch 49/60
1178/1178 [==============================] - 0s 57us/step - loss: 0.5078 - binary_accuracy: 0.7767 - val_loss: 0.5379 - val_binary_accuracy: 0.7723
Epoch 50/60
1178/1178 [==============================] - 0s 63us/step - loss: 0.5072 - binary_accuracy: 0.7759 - val_loss: 0.5378 - val_binary_accuracy: 0.7743
Epoch 51/60
1178/1178 [==============================] - 0s 61us/step - loss: 0.5065 - binary_accuracy: 0.7759 - val_loss: 0.5389 - val_binary_accuracy: 0.7723
|
codebase/mental-health-data-exploration.ipynb | ###Markdown
The available tables are therefore: Answer, Question and Survey
###Code
cursor.execute("SELECT * FROM Answer")
for row in cursor:
print(row)
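# A convenient alternative for exploration (illustrative sketch): load a whole table into
# pandas with read_sql_query. This assumes the sqlite3 connection object created earlier
# in the notebook (called `conn` here; the actual variable name may differ):
# import pandas as pd
# answers = pd.read_sql_query("SELECT * FROM Answer", conn)
# answers.head()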
###Output
('37', 2014, 1, 1)
('44', 2014, 2, 1)
('32', 2014, 3, 1)
('31', 2014, 4, 1)
('31', 2014, 5, 1)
('33', 2014, 6, 1)
('35', 2014, 7, 1)
('39', 2014, 8, 1)
('42', 2014, 9, 1)
('23', 2014, 10, 1)
('31', 2014, 11, 1)
('29', 2014, 12, 1)
('42', 2014, 13, 1)
('36', 2014, 14, 1)
('27', 2014, 15, 1)
('29', 2014, 16, 1)
('23', 2014, 17, 1)
('32', 2014, 18, 1)
('46', 2014, 19, 1)
('36', 2014, 20, 1)
('29', 2014, 21, 1)
('31', 2014, 22, 1)
('46', 2014, 23, 1)
('41', 2014, 24, 1)
('33', 2014, 25, 1)
('35', 2014, 26, 1)
('33', 2014, 27, 1)
('35', 2014, 28, 1)
('34', 2014, 29, 1)
('37', 2014, 30, 1)
('32', 2014, 31, 1)
('31', 2014, 32, 1)
('30', 2014, 33, 1)
('42', 2014, 34, 1)
('40', 2014, 35, 1)
('27', 2014, 36, 1)
('29', 2014, 37, 1)
('38', 2014, 38, 1)
('50', 2014, 39, 1)
('35', 2014, 40, 1)
('24', 2014, 41, 1)
('35', 2014, 42, 1)
('27', 2014, 43, 1)
('18', 2014, 44, 1)
('30', 2014, 45, 1)
('38', 2014, 46, 1)
('28', 2014, 47, 1)
('34', 2014, 48, 1)
('26', 2014, 49, 1)
('30', 2014, 50, 1)
('22', 2014, 51, 1)
('33', 2014, 52, 1)
('31', 2014, 53, 1)
('32', 2014, 54, 1)
('28', 2014, 55, 1)
('27', 2014, 56, 1)
('32', 2014, 57, 1)
('24', 2014, 58, 1)
('26', 2014, 59, 1)
('33', 2014, 60, 1)
('44', 2014, 61, 1)
('26', 2014, 62, 1)
('27', 2014, 63, 1)
('26', 2014, 64, 1)
('35', 2014, 65, 1)
('40', 2014, 66, 1)
('23', 2014, 67, 1)
('36', 2014, 68, 1)
('31', 2014, 69, 1)
('34', 2014, 70, 1)
('28', 2014, 71, 1)
('34', 2014, 72, 1)
('23', 2014, 73, 1)
('38', 2014, 74, 1)
('33', 2014, 75, 1)
('19', 2014, 76, 1)
('25', 2014, 77, 1)
('31', 2014, 78, 1)
('32', 2014, 79, 1)
('28', 2014, 80, 1)
('38', 2014, 81, 1)
('23', 2014, 82, 1)
('30', 2014, 83, 1)
('27', 2014, 84, 1)
('33', 2014, 85, 1)
('31', 2014, 86, 1)
('39', 2014, 87, 1)
('34', 2014, 88, 1)
('29', 2014, 89, 1)
('32', 2014, 90, 1)
('31', 2014, 91, 1)
('40', 2014, 92, 1)
('34', 2014, 93, 1)
('18', 2014, 94, 1)
('25', 2014, 95, 1)
('29', 2014, 96, 1)
('24', 2014, 97, 1)
('31', 2014, 98, 1)
('33', 2014, 99, 1)
('30', 2014, 100, 1)
('26', 2014, 101, 1)
('44', 2014, 102, 1)
('25', 2014, 103, 1)
('33', 2014, 104, 1)
('29', 2014, 105, 1)
('35', 2014, 106, 1)
('35', 2014, 107, 1)
('28', 2014, 108, 1)
('34', 2014, 109, 1)
('32', 2014, 110, 1)
('22', 2014, 111, 1)
('28', 2014, 112, 1)
('45', 2014, 113, 1)
('32', 2014, 114, 1)
('28', 2014, 115, 1)
('26', 2014, 116, 1)
('21', 2014, 117, 1)
('27', 2014, 118, 1)
('18', 2014, 119, 1)
('35', 2014, 120, 1)
('29', 2014, 121, 1)
('25', 2014, 122, 1)
('33', 2014, 123, 1)
('36', 2014, 124, 1)
('27', 2014, 125, 1)
('27', 2014, 126, 1)
('27', 2014, 127, 1)
('32', 2014, 128, 1)
('31', 2014, 129, 1)
('19', 2014, 130, 1)
('33', 2014, 131, 1)
('32', 2014, 132, 1)
('27', 2014, 133, 1)
('38', 2014, 134, 1)
('24', 2014, 135, 1)
('39', 2014, 136, 1)
('28', 2014, 137, 1)
('39', 2014, 138, 1)
('29', 2014, 139, 1)
('22', 2014, 140, 1)
('38', 2014, 141, 1)
('37', 2014, 142, 1)
('35', 2014, 143, 1)
('-29', 2014, 144, 1)
('30', 2014, 145, 1)
('37', 2014, 146, 1)
('24', 2014, 147, 1)
('23', 2014, 148, 1)
('30', 2014, 149, 1)
('29', 2014, 150, 1)
('19', 2014, 151, 1)
('32', 2014, 152, 1)
('28', 2014, 153, 1)
('36', 2014, 154, 1)
('37', 2014, 155, 1)
('25', 2014, 156, 1)
('27', 2014, 157, 1)
('26', 2014, 158, 1)
('27', 2014, 159, 1)
('25', 2014, 160, 1)
('36', 2014, 161, 1)
('25', 2014, 162, 1)
('31', 2014, 163, 1)
('26', 2014, 164, 1)
('33', 2014, 165, 1)
('27', 2014, 166, 1)
('34', 2014, 167, 1)
('42', 2014, 168, 1)
('23', 2014, 169, 1)
('24', 2014, 170, 1)
('26', 2014, 171, 1)
('31', 2014, 172, 1)
('22', 2014, 173, 1)
('23', 2014, 174, 1)
('34', 2014, 175, 1)
('31', 2014, 176, 1)
('28', 2014, 177, 1)
('32', 2014, 178, 1)
('45', 2014, 179, 1)
('33', 2014, 180, 1)
('29', 2014, 181, 1)
('26', 2014, 182, 1)
('28', 2014, 183, 1)
('45', 2014, 184, 1)
('43', 2014, 185, 1)
('37', 2014, 186, 1)
('24', 2014, 187, 1)
('26', 2014, 188, 1)
('23', 2014, 189, 1)
('35', 2014, 190, 1)
('38', 2014, 191, 1)
('28', 2014, 192, 1)
('28', 2014, 193, 1)
('35', 2014, 194, 1)
('32', 2014, 195, 1)
('31', 2014, 196, 1)
('35', 2014, 197, 1)
('26', 2014, 198, 1)
('27', 2014, 199, 1)
('28', 2014, 200, 1)
('27', 2014, 201, 1)
('34', 2014, 202, 1)
('41', 2014, 203, 1)
('37', 2014, 204, 1)
('34', 2014, 205, 1)
('32', 2014, 206, 1)
('21', 2014, 207, 1)
('30', 2014, 208, 1)
('24', 2014, 209, 1)
('26', 2014, 210, 1)
('40', 2014, 211, 1)
('37', 2014, 212, 1)
('26', 2014, 213, 1)
('32', 2014, 214, 1)
('32', 2014, 215, 1)
('27', 2014, 216, 1)
('30', 2014, 217, 1)
('31', 2014, 218, 1)
('29', 2014, 219, 1)
('41', 2014, 220, 1)
('34', 2014, 221, 1)
('33', 2014, 222, 1)
('28', 2014, 223, 1)
('28', 2014, 224, 1)
('23', 2014, 225, 1)
('24', 2014, 226, 1)
('32', 2014, 227, 1)
('34', 2014, 228, 1)
('24', 2014, 229, 1)
('26', 2014, 230, 1)
('36', 2014, 231, 1)
('41', 2014, 232, 1)
('38', 2014, 233, 1)
('38', 2014, 234, 1)
('30', 2014, 235, 1)
('25', 2014, 236, 1)
('37', 2014, 237, 1)
('34', 2014, 238, 1)
('37', 2014, 239, 1)
('28', 2014, 240, 1)
('22', 2014, 241, 1)
('34', 2014, 242, 1)
('33', 2014, 243, 1)
('25', 2014, 244, 1)
('27', 2014, 245, 1)
('40', 2014, 246, 1)
('21', 2014, 247, 1)
('29', 2014, 248, 1)
('32', 2014, 249, 1)
('29', 2014, 250, 1)
('23', 2014, 251, 1)
('28', 2014, 252, 1)
('31', 2014, 253, 1)
('27', 2014, 254, 1)
('24', 2014, 255, 1)
('29', 2014, 256, 1)
('23', 2014, 257, 1)
('42', 2014, 258, 1)
('24', 2014, 259, 1)
('25', 2014, 260, 1)
('27', 2014, 261, 1)
('27', 2014, 262, 1)
('30', 2014, 263, 1)
('29', 2014, 264, 1)
('43', 2014, 265, 1)
('32', 2014, 266, 1)
('41', 2014, 267, 1)
('32', 2014, 268, 1)
('37', 2014, 269, 1)
('32', 2014, 270, 1)
|
_build/jupyter_execute/notebooks/qua/01bis Computing integral with quasi-Monte Carlo methods.ipynb | ###Markdown
Computing integral with quasi-Monte Carlo methods
**Randall Romero Aguilar, PhD**
This demo is based on the original Matlab demo accompanying the Computational Economics and Finance 2001 textbook by Mario Miranda and Paul Fackler.
Original (Matlab) CompEcon file: **demqua01bis.m**
Running this file requires the Python version of CompEcon. This can be installed with pip by running !pip install compecon --upgrade
Last updated: 2021-Oct-01
About
To seven significant digits,
\begin{align*}
A &=\int_{-1}^1\int_{-1}^1 e^{-x_1}\cos^2(x_2)dx_1 dx_2\\
&=\int_{-1}^1 e^{-x_1} dx_1 \times \int_{-1}^1 \cos^2(x_2) dx_2\\
&=\left(e - \tfrac{1}{e}\right) \times \left(1+\tfrac{1}{2}\sin(2)\right)\\
&\approx 3.4190098
\end{align*}
Initial tasks
###Code
import numpy as np
from compecon import qnwequi
import pandas as pd
###Output
_____no_output_____
###Markdown
Make support function
###Code
f1 = lambda x1: np.exp(-x1)
f2 = lambda x2: np.cos(x2)**2
f = lambda x1, x2: f1(x1) * f2(x2)
def quad(method, n):
(x1, x2), w = qnwequi(n,[-1, -1], [1, 1],method)
return w.dot(f(x1, x2))
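## quick check (illustrative): with 10**4 Weyl nodes the estimate should already be close
## to the analytic value (e - 1/e) * (1 + sin(2)/2), approximately 3.4190098
approx_weyl = quad('W', 10**4)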
###Output
_____no_output_____
###Markdown
Compute the approximation errors
###Code
nlist = range(3,7)
quadmethods = ['Random', 'Niederreiter','Weyl']
f_quad = np.array([[quad(qnw[0], 10**ni) for qnw in quadmethods] for ni in nlist])
f_true = (np.exp(1) - np.exp(-1)) * (1+0.5*np.sin(2))
f_error = np.log10(np.abs(f_quad/f_true - 1))
###Output
_____no_output_____
###Markdown
Make table with results
###Code
results = pd.DataFrame(f_error, columns=quadmethods)
results['Nodes'] = ['$10^%d$' % n for n in nlist]
results.set_index('Nodes', inplace=True)
results
#results.to_latex('demqua01bis.tex', escape=False, float_format='%.1f')
###Output
_____no_output_____ |
MyProjects/15. Universities Clustering/Clustering( K Means).ipynb | ###Markdown
Universities-Clustering Project using K-Means Clustering
For this project we will attempt to use KMeans Clustering to cluster Universities into two groups, Private and Public.
___
**It is very important to note, we actually have the labels for this data set, but we will NOT use them for the KMeans clustering algorithm, since that is an unsupervised learning algorithm.**
When using the Kmeans algorithm under normal circumstances, it is because you don't have labels. In this case we will use the labels to try to get an idea of how well the algorithm performed, but you won't usually do this for Kmeans, so the classification report and confusion matrix at the end of this project don't truly make sense in a real world setting!
___
The Data
We will use a data frame with 777 observations on the following 18 variables.
* Private A factor with levels No and Yes indicating private or public university
* Apps Number of applications received
* Accept Number of applications accepted
* Enroll Number of new students enrolled
* Top10perc Pct. new students from top 10% of H.S. class
* Top25perc Pct. new students from top 25% of H.S. class
* F.Undergrad Number of fulltime undergraduates
* P.Undergrad Number of parttime undergraduates
* Outstate Out-of-state tuition
* Room.Board Room and board costs
* Books Estimated book costs
* Personal Estimated personal spending
* PhD Pct. of faculty with Ph.D.’s
* Terminal Pct. of faculty with terminal degree
* S.F.Ratio Student/faculty ratio
* perc.alumni Pct. alumni who donate
* Expend Instructional expenditure per student
* Grad.Rate Graduation rate
Import Libraries
**Import the libraries you usually use for data analysis.**
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
Get the Data **Read in the College_Data file using read_csv. Figure out how to set the first column as the index.**
###Code
df = pd.read_csv('College_Data', index_col=0)
###Output
_____no_output_____
###Markdown
**Check the head of the data**
###Code
df
###Output
_____no_output_____
###Markdown
**Check the info() and describe() methods on the data.**
###Code
df.info()
df.describe()
###Output
_____no_output_____
###Markdown
EDA (Exploratory Data Analysis)
It's time to create some data visualizations!
**Create a scatterplot of Grad.Rate versus Room.Board where the points are colored by the Private column.**
###Code
sns.set_style('whitegrid')
sns.lmplot(x='Room.Board', y='Grad.Rate', data=df, hue='Private', fit_reg=False, palette='Set1', height=10, aspect=1.4)
###Output
_____no_output_____
###Markdown
**Create a scatterplot of F.Undergrad versus Outstate where the points are colored by the Private column.**
###Code
sns.set_style('whitegrid')
sns.lmplot(x='Outstate', y='F.Undergrad', data=df, hue='Private', fit_reg=False, palette='Set1', height=10, aspect=1.4)
###Output
_____no_output_____
###Markdown
**Create a stacked histogram showing Out of State Tuition based on the Private column. Try doing this using [sns.FacetGrid](https://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.FacetGrid.html). If that is too tricky, see if you can do it just by using two instances of pandas.plot(kind='hist').**
###Code
sns.set_style('darkgrid')
g = sns.FacetGrid(df, hue='Private', palette='Set1', height=8, aspect=1.8)
g = g.map(plt.hist, 'Outstate', bins=40, alpha=0.7)
###Output
_____no_output_____
###Markdown
**Create a similar histogram for the Grad.Rate column.**
###Code
sns.set_style('darkgrid')
g = sns.FacetGrid(df, hue='Private', palette='Set1', height=8, aspect=1.8)
g = g.map(plt.hist, 'Grad.Rate', bins=40, alpha=0.7)
###Output
_____no_output_____
###Markdown
**Notice how there seems to be a private school with a graduation rate of higher than 100%. What is the name of that school?**
###Code
# Now find out the name of that school
df[df['Grad.Rate']>100]
###Output
_____no_output_____
###Markdown
**Set that school's graduation rate to 100 so it makes sense. You may get a warning (not an error) when doing this operation, so use dataframe operations or just re-do the histogram visualization to make sure it actually went through.**
###Code
df['Grad.Rate']['Cazenovia College'] = 100
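# The same assignment written with .loc avoids the chained-indexing SettingWithCopyWarning:
df.loc['Cazenovia College', 'Grad.Rate'] = 100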
# Now find out the name of that school
df[df['Grad.Rate']>100]
sns.set_style('darkgrid')
g = sns.FacetGrid(df, hue='Private', palette='Set1', height=8, aspect=1.8)
g = g.map(plt.hist, 'Grad.Rate', bins=40, alpha=0.7)
###Output
_____no_output_____
###Markdown
K Means Cluster Creation
Now it is time to create the Cluster labels!
**Import KMeans module from SciKit Learn.**
###Code
from sklearn.cluster import KMeans
###Output
_____no_output_____
###Markdown
**Create an instance of a K Means model with 2 clusters.**
###Code
kmeans = KMeans(n_clusters=2)
###Output
_____no_output_____
###Markdown
**Fit the model to all the data except for the Private label.**
###Code
kmeans.fit(df.drop('Private', axis=1))
###Output
_____no_output_____
###Markdown
**What are the cluster center vectors?**
###Code
kmeans.cluster_centers_
kc = kmeans.cluster_centers_
# One row per cluster and one column per feature: shape is (n_clusters, n_features)
kc.shape
###Output
_____no_output_____
###Markdown
Evaluation
There is no perfect way to evaluate clustering if you don't have the labels. However, since this is just a simple project, we do have the labels, so we take advantage of this to evaluate our clusters. Keep in mind that we usually won't have this luxury in the real world.
**Create a new column for df called 'Cluster', which is a 1 for a Private school, and a 0 for a public school.**
###Code
# Define a method for converting clusters
def converter(private):
if private == 'Yes':
return 1
else:
return 0
# Now make a new column called 'Cluster'
df['Cluster'] = df['Private'].apply(converter)
# Now check our dataframe
df
###Output
_____no_output_____
###Markdown
**Create a confusion matrix and classification report to see how well the Kmeans clustering worked without being given any labels.**
###Code
from sklearn.metrics import confusion_matrix,classification_report
# Print confusion matrix
print(confusion_matrix(df['Cluster'],kmeans.labels_))
# Print Classification report
print(classification_report(df['Cluster'],kmeans.labels_))
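# Note: KMeans numbers its clusters arbitrarily, so its 0/1 labels may be flipped relative
# to the 'Cluster' column; a quick (illustrative) check is to score the inverted labels too:
# print(confusion_matrix(df['Cluster'], 1 - kmeans.labels_))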
###Output
precision recall f1-score support
0 0.21 0.65 0.31 212
1 0.31 0.06 0.10 565
accuracy 0.22 777
macro avg 0.26 0.36 0.21 777
weighted avg 0.29 0.22 0.16 777
|
src/exploratory_py/notebook_s3/py_exp_comp_s3_sol.ipynb | ###Markdown
Exploratory Computing with Python
*Developed by Mark Bakker*
Statistics Notebook 3: Distribution of the mean, hypothesis tests, and the central limit theorem
In this notebook we first investigate the distribution of the mean of a dataset, we simulate several hypothesis tests, and finish with exploring the central limit theorem.
###Code
import numpy as np
import matplotlib.pyplot as plt
import numpy.random as rnd
%matplotlib inline
###Output
_____no_output_____
###Markdown
Consider a dataset of 100 points. The data are drawn from a normal distribution with mean 4 and standard deviation 2. As we noticed before, the sample mean of the 100 data points almost always differs from 4. And every time we generate a new set of 100 points, the mean will be somewhat different.
###Code
for i in range(5):
a = 2 * rnd.standard_normal(100) + 4
print 'mean a: ', np.mean(a)
###Output
mean a: 4.45605572363
mean a: 3.79381345101
mean a: 3.87726457786
mean a: 3.88379101479
mean a: 4.2115754024
###Markdown
In fact, the mean of the dataset itself can be considered as a random variable with a distribution of its own.
Sample standard deviation
The sample standard deviation $s_n$ of a dataset of $n$ values is defined as
$s_n = \sqrt{ \frac{1}{n-1} \sum_{i=1}^n (x_i - \overline{x}_n)^2 }$
and can be computed with the `std` function of the `numpy` package. By default, the `std` function divides the sum by $n$ rather than by $n-1$. To divide by $n-1$, as we want for an unbiased estimate of the standard deviation, specify the keyword argument `ddof=1` in the `std` function.
Exercise 1. Histogram of the means of datasets with 100 values
Generate 1000 datasets each with 100 values drawn from a normal distribution with mean 4 and standard deviation 2; use a seed of 22. Compute the mean of each dataset and store them in an array of length 1000. Compute the mean of the means and the standard deviation of the means, and print them to the screen. Draw a boxplot of the means. In a separate figure, draw a histogram of the means and make sure the horizontal axis extends from 3 to 5. Recall that you can start a new figure with the `figure()` function. Answers to Exercise 1
Exercise 2. Histogram of the means of datasets with 1000 values
Repeat exercise 1 but now generate 1000 datasets each with 1000 values (rather than 100 values) drawn from the same normal distribution with mean 4 and standard deviation 2, and again with a seed of 22. Make sure that the limits of the horizontal axis of the histogram go from 3 to 5, so that the histogram can be compared to the histogram you created above. Is the spread of the mean much smaller now as compared to the datasets consisting of only 100 values? Answers to Exercise 2
Sample standard deviation of the sample mean
The histogram of the means looks like the bell-shaped curve of a Normal distribution, but you may recall that it is actually a Student's $t$-distribution, also simply called the $t$-distribution. A $t$-distribution arises when estimating the mean of a normally distributed variable in situations where the sample size is relatively small and the standard deviation is unknown (as it pretty much always is in practice) and needs to be estimated from the data. The sample mean of a dataset of $n$ values is commonly written as $\overline{x}_n$, while the sample standard deviation is written as $s_n$ (as defined above). Here, we are computing the sample standard deviation of the sample means, which we write as $\hat{s}_n$ for a dataset of size $n$. Theoretically, the value of the standard deviation of the sample mean $\hat{s}_n$ is related to the sample standard deviation as (see [here](http://en.wikipedia.org/wiki/Standard_deviation#Standard_deviation_of_the_mean))
$\hat{s}_n = s_n / \sqrt{n}$
Percentiles of $t$-distribution
You may recall that the 90% interval around the mean for a Normally distributed variable runs from $\mu-1.64\sigma$ to $\mu+1.64\sigma$. In other words, 5% of the data is expected to lie below $\mu-1.64\sigma$ and 5% of the data is expected to lie above $\mu+1.64\sigma$. What now if you forgot it is $1.64\sigma$ to the left and right of the mean? Or what if you want to know the value for some other percentile. You may look that up in a table in a Statistics book (or on the web), or use the percent point function `ppf`, which is part of any statistical distribution function defined in the `scipy.stats` package. The `ppf` function is the inverse of the cumulative distribution function. 
For example, `ppf(0.05)` returns the value of the data such that the cumulative distribution function is equal to 0.05 at the returned value. To find the 5% and 95% values, type (recall that by default the `norm` distribution has mean zero and standard deviation 1; you can specify different values with the `loc` and `scale` keyword arguments, respectively).
###Code
from scipy.stats import norm
xvalue_05 = norm.ppf(0.05)
xvalue_95 = norm.ppf(0.95)
print '5% limit: ',xvalue_05
print '95% limit: ',xvalue_95
print 'check if it works for 5%: ',norm.cdf( xvalue_05 )
print 'check if it works for 95%: ',norm.cdf( xvalue_95 )
# Next, specify a mean and standard deviation
xvalue_05_musig = norm.ppf(0.05, loc = 20, scale = 10) # mu = 20, sigma = 10
print '5% limit with mu=20, sig=10: ',xvalue_05_musig
print 'check: ',norm.cdf(xvalue_05_musig, loc = 20, scale = 10)
###Output
5% limit: -1.64485362695
95% limit: 1.64485362695
check if it works for 5%: 0.05
check if it works for 95%: 0.95
5% limit with mu=20, sig=10: 3.55146373049
check: 0.05
###Markdown
A similar function exists for the $t$ distribution. The $t$-distribution takes one additional argument: the number of degrees of freedom, which is equal to the number of data points minus 1. For example, consider a sample with 40 data points, a sample mean of 20, and a sample standard deviation of the mean of 2, then the 5 and 95 percentiles are
###Code
from scipy.stats import t
xvalue_05 = t.ppf(0.05, 39, loc=20, scale=2)
xvalue_95 = t.ppf(0.95, 39, loc=20, scale=2)
print '5% limit: ',xvalue_05
print '95% limit: ',xvalue_95
print 'check if it works for 5%: ',t.cdf( xvalue_05, 39, loc=20, scale=2 )
print 'check if it works for 95%: ',t.cdf( xvalue_95, 39, loc=20, scale=2 )
###Output
5% limit: 16.630249761
95% limit: 23.369750239
check if it works for 5%: 0.0500000002153
check if it works for 95%: 0.949999999785
###Markdown
Exercise 3. Count the number of means below the 5 percentileGo back to Exercise 1. Generate 1000 datasets each with 100 values drawn from a normal distribution with mean 4 and standard deviation 2; use a seed of 22. For each dataset, evaluate whether the sample mean is within the 95 percentile of the $t$-distribution around the true mean of 4 (the standard deviation of the sample mean is different every time, of course). Count how many times the sample mean is so low that it is below the 5 percentile of the $t$ distribution around the true mean. If the theory is correct, it should, of course, be the case for about 5% of the datasets. Try a few different seeds. Answers to Exercise 3 Exercise 4. $t$ test on dataset of 20 valuesGenerate 20 datapoints from a Normal distribution with mean 39 and standard deviation 4. Use a seed of 2. Compute and report the mean and standard deviation of the dataset and the standard deviation of the mean. If you computed it correctly, the mean of the 20 data points generated above is 38.16. Somebody now claims that the 20 datapoints are taken from a distribution with a mean of 40. You are asked to decide whether the true underlying mean could indeed be 40. In statistical terms, you are asked to perform a Hypothesis test, testing the null hypothesis that the mean is 40 against the alternative hypothesis that the mean is not 40 at significance level 5%. Hence, you are asked to do a two-sided $t$-test. All you can do in Hypothesis testing is trying to reject the null hypothesis, so let's try that. Most statistics books give a cookbook recipe for performing a $t$-test. Here we will visualize the $t$-test. We reject the null hypothesis if the sample mean is outside the 95% interval around the mean of the corresponding $t$-distribution. If the mean is inside the 95% interval we can only conclude that there is not enough evidence to reject the null hypothesis. Draw the probability density function of a $t$-distribution with mean 40 and standard deviation equal to the standard deviation of the sample mean you computed above. Draw red vertical lines indicating the left and right limits of the 95% interval around the mean. Draw a heavy black vertical line at the position of the sample mean you computed above. Decide whether you can reject the null hypothesis that the mean is 40 and add that as a title to the figure. Answers to Exercise 4 Exercise 5. Hypothesis tests on Wooden beam dataLoad the data set of experiments on wooden beams stored in the file `douglas_data.csv`. First, consider the first 20 measurements of the bending strength. Compute the sample mean and the standard deviation of the sample mean. The manufacturer claims that the mean bending strength is only 50 Pa. Perform a $t$-test (significance level 5%) with null hypothesis that the mean is indeed 50 Pa and alternative hypothesis that the mean is not 50 Pa using the approach applied in Exercise 4. Repeat the $t$-test above but now with all the measurements of the bending strength. Do you reach the same conclusion? Answers to Exercise 5 Central limit theoremSo far we looked at the distribution of the sample mean of a dataset while we knew that the data was taken from a normal distribution (except for the wooden beam data, but that looked very much like a Normal distribution). Such a sample mean has a Student $t$-distribution, which approaches the Normal distribution when the dataset is large. Actually, 100 datapoints is already enough to approach the Normal distribution fairly closely. 
You may check this by comparing, for example, the percent point function `ppf` of a Normal distribution with a $t$-distribution with 99 degrees of freedom, or by simply plotting the pdf of both distributions:
###Code
print '95 percentile Standard Normal: ',norm.ppf(0.95)
print '95 percentile t-dist with n=99: ',t.ppf(0.95,99)
x = np.linspace(-4,4,100)
y1 = norm.pdf(x)
y2 = t.pdf(x,99)
plt.plot(x,y1,'b',label='Normal')
plt.plot(x,y2,'r',label='t-dist')
plt.legend()
###Output
95 percentile Standard Normal: 1.64485362695
95 percentile t-dist with n=99: 1.660391156
###Markdown
The Central limit theorem now states that the distribution of the sample mean approaches the Normal distribution in the limit even if the dataset is drawn from an entirely different distribution! We are going to test this theorem by drawing numbers from a Gamma distribution. The Gamma distribution is a skewed distribution and takes a shape parameter $k$ and a scale parameter $\theta$, and is defined for $x>0$. Details on the Gamma distribution can be found, for example [here](http://en.wikipedia.org/wiki/Gamma_distribution). Let's choose the shape parameter equal to 2 and the scale parameter equal to 1 (which happens to be the default). When the scale parameter is equal to 1, the mean is equal to the shape parameter. The pdf of the Gamma distribution for these values is shown below. The mean is indicated with the red vertical line.
###Code
from scipy.stats import gamma
x = np.linspace(1e-6,10,100)
y = gamma.pdf(x,2,scale=1)
plt.plot(x,y)
plt.axvline(2,color='r')
###Output
_____no_output_____
###Markdown
Random numbers may be drawn from any distribution in the `scipy.stats` package with the `rvs` function. Here, we draw 1000 numbers and add the histogram to the previous figure
###Code
x = np.linspace(1e-6,10,100)
y = gamma.pdf(x,2)
plt.plot(x,y)
plt.axvline(2, color='r')
data = gamma.rvs(2, size=1000)
plt.hist(data, bins=20, normed=True)
###Output
_____no_output_____
###Markdown
Exercise 6. Explore Central Limit Theorem for Gamma DistributionGenerate $N$ datasets of 20 numbers randomly drawn from a Gamma distribution with shape parameter equal to 2 and scale equal to 1. Draw a histogram of the means of the $N$ datasets using 20 bins. On the same graph, draw the pdf of the Normal distribution using the mean of means and sample standard deviation of the mean; choose the limits of the $x$-axis between 0 and 4. Make 3 graphs, for $N=100,1000,10000$ and notice that the distribution starts to approach a Normal distribution. Add a title to each graph stating the number of datasets. Answers to Exercise 6 Answers to the exercises Answers to Exercise 1
###Code
rnd.seed(22)
mean_of_data = np.mean( 2.0 * rnd.standard_normal((1000,100)) + 4.0, 1 )
print 'The mean of the means is: ', np.mean(mean_of_data)
print 'The standard deviation of the means is: ', np.std(mean_of_data, ddof=1)
plt.figure()
plt.boxplot(mean_of_data)
plt.figure()
plt.hist(mean_of_data, normed=True)
plt.xlim(3,5)
###Output
The mean of the means is: 4.00474211854
The standard deviation of the means is: 0.190485481767
###Markdown
Back to Exercise 1Answers to Exercise 2
###Code
rnd.seed(22)
mean_of_data = np.mean( 2.0 * rnd.standard_normal((1000,1000)) + 4.0, 1 )
print 'The mean of the means is: ', np.mean(mean_of_data)
print 'The standard deviation of the means is: ', np.std(mean_of_data, ddof=1)
plt.figure()
plt.boxplot(mean_of_data)
plt.figure()
plt.hist(mean_of_data)
plt.xlim(3,5)
###Output
The mean of the means is: 4.00128131235
The standard deviation of the means is: 0.0654148250988
###Markdown
Back to Exercise 2Answers to Exercise 3
###Code
from scipy.stats import t
for s in [22,32,42,52,62]:
rnd.seed(s)
data = 2.0 * rnd.standard_normal((1000,100)) + 4.0
mean_of_data = np.mean( data, 1 )
std_of_mean_of_data = np.std( data, 1, ddof = 1 ) / np.sqrt(100)
fivepercentile = t.ppf(0.05, 99)
outside = mean_of_data < 4.0 + std_of_mean_of_data * fivepercentile
    print 'number of datasets where sample mean is below 5 percentile: ', np.sum( outside )
###Output
number of datasets where sample mean is below 5 percentile: 37
number of datasets where sample mean is below 5 percentile: 42
number of datasets where sample mean is below 5 percentile: 56
number of datasets where sample mean is below 5 percentile: 60
number of datasets where sample mean is below 5 percentile: 48
###Markdown
Back to Exercise 3Answers to Exercise 4
###Code
rnd.seed(2)
data = 4 * rnd.standard_normal(20) + 39
mu = np.mean(data)
sig = np.std(data, ddof=1)
sighat = np.std(data, ddof=1) / np.sqrt(20)
print 'mean of the data: ', mu
print 'std of the data: ', sig
print 'std of the mean: ', sighat
x = np.linspace(37,43,100)
y = t.pdf(x, 19, loc=40, scale=sighat)
plt.plot(x,y)
perc025 = t.ppf(0.025, 19, loc = 40, scale = sighat)
perc975 = t.ppf(0.975, 19, loc = 40, scale = sighat)
plt.axvline(perc025,color='r')
plt.axvline(perc975,color='r')
plt.axvline(mu,color='k',lw=5)
plt.title('H0 cannot be rejected')
###Output
_____no_output_____
###Markdown
Back to Exercise 4Answers to Exercise 5
###Code
from pandas import read_csv
w = read_csv('douglas_data.csv',skiprows=[1],skipinitialspace=True)
mu20 = np.mean(w.bstrength[:20])
sig20 = np.std(w.bstrength[:20], ddof=1) / np.sqrt(20)
print 'sample mean, standard deviation of sample mean: ', mu20, sig20
x = np.linspace(30,70,100)
y = t.pdf(x, 19, loc = 50, scale = sig20)
plt.plot(x,y)
perc025 = t.ppf(0.025, 19, loc = 50, scale = sig20)
perc975 = t.ppf(0.975, 19, loc = 50, scale = sig20)
plt.axvline(perc025,color='r')
plt.axvline(perc975,color='r')
plt.axvline(mu20,color='k',lw=4)
plt.title('H0 is rejected: mean is not 50 Pa')
from pandas import read_csv
w = read_csv('douglas_data.csv',skiprows=[1],skipinitialspace=True)
N = len(w.bstrength)
mu = np.mean(w.bstrength)
sig = np.std(w.bstrength, ddof=1) / np.sqrt(N)
print 'sample mean, standard deviation of sample mean: ', mu, sig
x = np.linspace(30,70,100)
y = t.pdf(x, N-1, loc=50, scale=sig)
plt.plot(x,y)
perc025 = t.ppf(0.025, N-1, loc = 50, scale = sig)
perc975 = t.ppf(0.975, N-1, loc = 50, scale = sig)
plt.axvline(perc025,color='r')
plt.axvline(perc975,color='r')
plt.axvline(mu,color='k',lw=4)
plt.title('Not enough evidence to reject H0: mean may very well be 50')
###Output
sample mean, standard deviation of sample mean: 48.650505618 0.903543631702
###Markdown
Back to Exercise 5Answers to Exercise 6
###Code
from scipy.stats import norm, gamma
for N in [100, 1000, 10000]:
data = gamma.rvs(2,size=(N,20))
mean_of_data = np.mean(data,1)
mu = np.mean(mean_of_data)
sig = np.std(mean_of_data,ddof=1)
plt.figure()
plt.hist(mean_of_data,bins=20,normed=True)
x = np.linspace(0,4,100)
y = norm.pdf(x,loc=mu,scale=sig)
plt.plot(x,y,'r')
plt.title('N='+str(N))
###Output
_____no_output_____ |
colab_repoclone_testbed.ipynb | ###Markdown
colab-repoclone-testbed===================A Google Colab notebook for verifying that the `colab-repoclone` module works as intended in the real world! First you'll have to install the latest version of the library on [PyPI](https://pypi.org/project/colab-repoclone/) using `pip`.
###Code
!pip install colab-repoclone -qU
###Output
Building wheel for colab-repoclone (setup.py) ... done
Building wheel for colab-env (setup.py) ... done
###Markdown
Importing the library should set everything up. If your Google Drive isn't already mounted, you'll have to complete an authentication flow. Otherwise it'll probably warn you that the drive is already mounted.The `import` statement ought to have loaded everything in `colab_repoclone/__init__.py`. This includes a method, `colab_repoclone.local_repository()`, for either cloning a specified GitHub repo into your Google Colab environment or initializing a new GitHub repo *from* your Google Colab environment.If you don't have `vars.env` in your Google Drive, an empty one will be created at this point by the `colab_env` package. Otherwise, the variables will be loaded into the environment.
###Code
import colab_repoclone
###Output
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=email%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdocs.test%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.photos.readonly%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fpeopleapi.readonly&response_type=code
Enter your authorization code:
··········
Mounted at /content/gdrive
###Markdown
If all has gone to plan we ought to be able to access the module's `__version__`:
###Code
colab_repoclone.__version__
###Output
_____no_output_____
###Markdown
Now, we can try to clone or create a new repository using the `local_repository()` method. This method takes one input and three keywords.The input is the link to the GitHub repository for cloning or the empty GitHub repo for initializing (you *must* have proper permissions if you wish to push to a repo).The first keyword, `clone`, defaults to **True**, and the package will thus assume you are cloning an existing repository. If it is changed to **False** you must have a local directory that you wish to initialize and upload to GitHub as a new repository.The second keyword, `branch`, allows the user to specify a branch to clone and defaults to "master" if none is indicated or if `clone=False`.The third keyword, `auth_method`, specifies the method of authentication. This defaults to "env" which will look in the `vars.env` file loaded by `colab_env` for the following environment variables: `GITHUB_KEY`, `USER_NAME`, `USER_EMAIL` - i.e. your GitHub personal access token that gives you permission to edit the specified repo, your GitHub username and the email linked to your GitHub account. We can check the existence of these environment variables as follows:
###Code
import os
print(os.getenv("USER_NAME"),os.getenv("USER_EMAIL"))
# NB: I have not retrieved the GITHUB_KEY as Google Colab stores the output of
# cells and GitHub will not allow pushing of a file that contains a visible and
# valid GitHub access code in it.
###Output
jordanml7 [email protected]
###Markdown
If these environment variables do not exist it will throw an error indicating so. Alternatively, the `auth_method` keyword can be set to anything other than "env" and the user will instead be prompted to enter their valid GitHub username, email and authorization token. Cloning an existing repositoryLet's try cloning an existing repository first. How about the `testbed` branch of this very repository? When prompted to enter your various authentication strings you can just enter garbage - the repository will still clone but you will not have `push` permissions. Alternatively, replace the repo link with your own to confirm you can `push` to it!
###Code
my_clone = colab_repoclone.local_repository("https://github.com/apolitical/colab-repoclone",clone=True,branch="testbed",auth_method="input")
###Output
GitHub Username :: jordanml7
GitHub Email :: [email protected]
GitHub Authorization Token :: ··········
###Markdown
You can clone as many repositories as you want by calling the above command for each! The repo directories will be placed into the Google Colab `/content` directory and the methods for each can be accessed by referencing their respective object, such as `my_clone.push()` or `my_clone.pull()`.Let's confirm that our repo name got set correctly:
###Code
my_clone.repo_dir
###Output
_____no_output_____
###Markdown
You can also confirm that the link to the repo on GitHub is correct by calling `my_clone.access_repo`, however this link contains our access token so I will not do so here.Let's check that we properly pulled the `testbed` branch:
###Code
my_clone.branch
###Output
_____no_output_____
###Markdown
And while we're at it, let's confirm that our username and email got set correctly:
###Code
print(my_clone.github_user, my_clone.github_email)
###Output
jordanml7 [email protected]
###Markdown
Finally, although you've probably seen it in the `Files` sidebar by now (if not, click `REFRESH`), let's confirm that our repo is now in our Colab environment:
###Code
!ls /content/
###Output
colab-repoclone gdrive sample_data
###Markdown
There it is, `colab-repoclone` (or your own repo if you've changed it)! Wooo!Now you can make any changes you want to the files in `colab-repoclone`, including adding and deleting files. Since this is an empty directory, let's create a dummy file that we'll throw in there:
###Code
dummy = open("dummy.txt","w+")
for i in range(10):
dummy.write("This is test!\n")
dummy.close()
###Output
_____no_output_____
###Markdown
And now that we've made that *very* important file, let's move it into our repository:
###Code
!mv /content/dummy.txt /content/colab-repoclone/.
# Obviously you may have to change the path here if you're using your own repo
###Output
_____no_output_____
###Markdown
Let's confirm that it's in there:
###Code
!ls /content/colab-repoclone/
###Output
dummy.txt
###Markdown
There it is! We can now push our changes using `colab_repoclone`'s own `push` method!If you're using the `colab-repoclone` repository, this will not work for you, as you do not have permission to push to it. Instead, you'll likely get a message saying:`Command: failed. Check your permissions.`But that's not a problem for me, because I *do* have permissions! The `push` method allows for two keywords: `commit_msg`, which is pretty self explanatory (if you do not supply one, you will be prompted for one), and `file_path`, which can be used to specify that you only want to push files in a certain directory *within* your repo. This defaults to all files, i.e. `"."`Let's try it:
###Code
my_clone.push(commit_msg="Pushing to a repo from INSIDE the repo, meta...")
###Output
*************************************************************
* Are you sure you want to push your changes? *
* *
* Press "q" to abort. Press any other key to continue... *
*************************************************************
###Markdown
Well, I didn't get any error messages, but you may have!Now, because I don't want a stupid dummy file in my lovely empty repository (and because this tutorial relies on that file not existing initially), let's move `dummy.txt` out of this repo and push again!
###Code
!mv /content/colab-repoclone/dummy.txt /content/.
!ls /content/colab-repoclone/
###Output
_____no_output_____
###Markdown
And voila, it's gone.
###Code
my_clone.push(commit_msg="Annnddddd now its gone.")
###Output
*************************************************************
* Are you sure you want to push your changes? *
* *
* Press "q" to abort. Press any other key to continue... *
*************************************************************
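###Markdown
As an aside, neither push above used the `file_path` keyword mentioned earlier. A call that restricts the push to a single subdirectory of the repo would look like the sketch below (`docs/` is just an illustrative name here - substitute a directory that actually exists in your clone, and the usual permissions caveat applies):
###Code
# Illustrative sketch only: push just the changes made under one subdirectory of the repo
my_clone.push(commit_msg="Update docs only", file_path="docs/")
###Output
_____no_output_____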
###Markdown
Initializing a new repositoryNow let's explore creating a new local directory, initializing it as a Git repo, and pushing it to GitHub. First we have to create a local folder & put that dummy file from earlier into it.
###Code
!mkdir /content/my_new_repo
!mv /content/dummy.txt /content/my_new_repo/.
###Output
_____no_output_____
###Markdown
As before, we will use the `local_repository` method. The input is the link to the *existing* and yet empty repository on GitHub. I've made one on my personal GitHub account called `repo_init_testing`, which, again, you will not have push permissions for.When you run the following command, you will be prompted for the name of the directory you wish to initialize. In this case, it's `my_new_repo`.Since you do not have push permissions for my repo, unless you change it to one of your own, you will likely receive the following warning upon executing this code:
###Code
my_init = colab_repoclone.local_repository("https://github.com/jordanml7/repo_init_testing.git",clone=False)
# NB: This time I have not included the "branch" and "auth_method" keywords as
# branch will always default to "master" for initializing a new repo and I
# already have my credentials set in my environment
###Output
Local Repository :: my_new_repo
###Markdown
Running the above will commit and push whatever files are in your new directory, `my_new_repo`, with the commit message "First Commit from Google Colab".As above, you can now create new files, delete files, and modify existing files within this directory at will, then use the `push` method to upload any changes!For now, let's remove the `dummy.txt` file we just added to our repository and push that change.
###Code
!mv /content/my_new_repo/dummy.txt /content/.
my_init.push(commit_msg="Wooosshhh!")
###Output
*************************************************************
* Are you sure you want to push your changes? *
* *
* Press "q" to abort. Press any other key to continue... *
*************************************************************
###Markdown
Other featuresThe latest version of this package includes four methods not described above - let's explore those. The first is `pull`, which allows you to pull changes to your repo if any were made elsewhere while you were running your notebook. Since all runtimes reset upon each load of a notebook, you will need to "reclone" your repo each time you open and run your Colab notebook, so pulling will rarely be necessary.We can try it on our clone `colab-repoclone` from earlier, but it won't do much since no new files were added to the `testbed` branch externally...
###Code
my_clone.pull()
###Output
_____no_output_____
###Markdown
We can also create a new branch after cloning or initializing a new repository, using the `new_branch` method.This method takes a single keyword, the name of the branch you wish to create. If you do not enter one you will be prompted to do so. The specified branch is then created and checked-out. The correct upstream for future pushes from this branch is then set.Let's try it:
###Code
my_clone.new_branch()
###Output
New Branch :: exciting_new_branch
###Markdown
Let's confirm that we're now in our new branch:
###Code
my_clone.branch
###Output
_____no_output_____
###Markdown
*Very* exciting!Now, because we love our `dummy.txt` file so much, let's add it to our new branch:
###Code
!mv /content/dummy.txt /content/colab-repoclone/.
my_clone.push(commit_msg="Branches 4 Dayz")
###Output
*************************************************************
* Are you sure you want to push your changes? *
* *
* Press "q" to abort. Press any other key to continue... *
*************************************************************
###Markdown
That seemed to work for me! Again, the whole permissions mumbo-jumbo applies, so either accept your fate or replace all these repositories with your own!Now let's get rid of it just to keep things clean...
###Code
!rm /content/colab-repoclone/dummy.txt
my_clone.push(commit_msg="Anybody feel like we have been here before?")
###Output
*************************************************************
* Are you sure you want to push your changes? *
* *
* Press "q" to abort. Press any other key to continue... *
*************************************************************
###Markdown
This leads us nicely into the third cool new method, `checkout`. Pretty self explanatory. This method also takes a single keyword, the name of the *existing* branch you wish to checkout.Let's checkout our `testbed` branch again:
###Code
my_clone.checkout()
###Output
Checkout Branch :: testbed
###Markdown
Let's confirm it worked...
###Code
my_clone.branch
###Output
_____no_output_____
###Markdown
Bazinga! Last, but not least, we have `reset`. This method will perform a *hard* reset of your repository to a specific previous commit, passed as a keyword. If no commit ID is passed, it will default to rolling-back to the last commit.Be careful with this one! It will wipe any changes you have made since the specified commit. And not just locally, but on GitHub as well :OFor demonstration purposes, I'm gonna roll our `testbed` repo waaayyyy back. A bunch of files will appear, old versions of what's currently in `master`.
###Code
my_clone.reset(commit="ca0ec747dbbed7da681b58d4c9fa6ca6cfd8c818")
!ls /content/colab-repoclone/
###Output
CHANGELOG.md dummy.txt MANIFEST.in README.md
colab_repoclone LICENSE Pipfile setup.py
###Markdown
Look at all those files! Where'd they come from? We're still in `testbed`...
###Code
my_clone.branch
###Output
_____no_output_____
###Markdown
... we've just rolled back to before all the bajillion pushes we made above *were* made!Let's do some cleaning up of `testbed`:
###Code
!rm -r /content/colab-repoclone/*
###Output
rm 'CHANGELOG.md'
rm 'LICENSE'
rm 'MANIFEST.in'
rm 'Pipfile'
rm 'README.md'
rm 'colab_repoclone/__init__.py'
rm 'colab_repoclone/git_access.py'
rm 'setup.py'
###Markdown
And let's push our changes so `testbed` is all nice & empty:
###Code
my_clone.push(commit_msg="What a wild ride its been")
###Output
*************************************************************
* Are you sure you want to push your changes? *
* *
* Press "q" to abort. Press any other key to continue... *
*************************************************************
###Markdown
And as some final cleaning up, let's just wipe all the new directories we've created:
###Code
!rm -rf colab-repoclone
!rm -rf my_new_repo
###Output
_____no_output_____ |
docs/pages/partmc_sample_data.ipynb | ###Markdown
format of PartMC raw output [Here](../data/urban_plume_0001_00000002.nc) is a sample of PartMC raw output
###Code
import xarray as xr
p = "../data/urban_plume_0001_00000002.nc"
ds = xr.open_dataset(p)
ds
###Output
_____no_output_____ |
Chapter07/50_Training_RCNN.ipynb | ###Markdown
###Code
!pip install -q --upgrade selectivesearch torch_snippets
from torch_snippets import *
import selectivesearch
from google.colab import files
files.upload() # upload kaggle.json file which you can get
# by clicking on Create New API token in your personal account
!mkdir -p ~/.kaggle
!mv kaggle.json ~/.kaggle/
!ls ~/.kaggle
!chmod 600 /root/.kaggle/kaggle.json
!kaggle datasets download -d sixhky/open-images-bus-trucks/
!unzip -qq open-images-bus-trucks.zip
from torchvision import transforms, models, datasets
from torch_snippets import Report
from torchvision.ops import nms
device = 'cuda' if torch.cuda.is_available() else 'cpu'
IMAGE_ROOT = 'images/images'
DF_RAW = pd.read_csv('df.csv')
print(DF_RAW.head())
class OpenImages(Dataset):
def __init__(self, df, image_folder=IMAGE_ROOT):
self.root = image_folder
self.df = df
self.unique_images = df['ImageID'].unique()
def __len__(self): return len(self.unique_images)
def __getitem__(self, ix):
image_id = self.unique_images[ix]
image_path = f'{self.root}/{image_id}.jpg'
        image = cv2.imread(image_path, 1)[...,::-1] # convert BGR to RGB
h, w, _ = image.shape
df = self.df.copy()
df = df[df['ImageID'] == image_id]
boxes = df['XMin,YMin,XMax,YMax'.split(',')].values
boxes = (boxes * np.array([w,h,w,h])).astype(np.uint16).tolist()
classes = df['LabelName'].values.tolist()
return image, boxes, classes, image_path
ds = OpenImages(df=DF_RAW)
im, bbs, clss, _ = ds[9]
show(im, bbs=bbs, texts=clss, sz=10)
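# Region-proposal helper: selective-search regions, discarding any smaller than 5% or larger than 100% of the image area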
def extract_candidates(img):
img_lbl, regions = selectivesearch.selective_search(img, scale=200, min_size=100)
img_area = np.prod(img.shape[:2])
candidates = []
for r in regions:
if r['rect'] in candidates: continue
if r['size'] < (0.05*img_area): continue
if r['size'] > (1*img_area): continue
x, y, w, h = r['rect']
candidates.append(list(r['rect']))
return candidates
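# Intersection-over-union between two boxes given as (x1, y1, x2, y2)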
def extract_iou(boxA, boxB, epsilon=1e-5):
x1 = max(boxA[0], boxB[0])
y1 = max(boxA[1], boxB[1])
x2 = min(boxA[2], boxB[2])
y2 = min(boxA[3], boxB[3])
width = (x2 - x1)
height = (y2 - y1)
if (width<0) or (height <0):
return 0.0
area_overlap = width * height
area_a = (boxA[2] - boxA[0]) * (boxA[3] - boxA[1])
area_b = (boxB[2] - boxB[0]) * (boxB[3] - boxB[1])
area_combined = area_a + area_b - area_overlap
iou = area_overlap / (area_combined+epsilon)
return iou
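# Build training targets: for every image, compute each candidate's IoU with the ground-truth boxes, assign the best-matching class (IoU > 0.3) or 'background', and store the ROIs and box offsets (deltas), both normalized by the image size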
FPATHS, GTBBS, CLSS, DELTAS, ROIS, IOUS = [], [], [], [], [], []
N = 500
for ix, (im, bbs, labels, fpath) in enumerate(ds):
if(ix==N):
break
H, W, _ = im.shape
candidates = extract_candidates(im)
candidates = np.array([(x,y,x+w,y+h) for x,y,w,h in candidates])
ious, rois, clss, deltas = [], [], [], []
ious = np.array([[extract_iou(candidate, _bb_) for candidate in candidates] for _bb_ in bbs]).T
for jx, candidate in enumerate(candidates):
cx,cy,cX,cY = candidate
candidate_ious = ious[jx]
best_iou_at = np.argmax(candidate_ious)
best_iou = candidate_ious[best_iou_at]
best_bb = _x,_y,_X,_Y = bbs[best_iou_at]
if best_iou > 0.3: clss.append(labels[best_iou_at])
else : clss.append('background')
delta = np.array([_x-cx, _y-cy, _X-cX, _Y-cY]) / np.array([W,H,W,H])
deltas.append(delta)
rois.append(candidate / np.array([W,H,W,H]))
FPATHS.append(fpath)
IOUS.append(ious)
ROIS.append(rois)
CLSS.append(clss)
DELTAS.append(deltas)
GTBBS.append(bbs)
FPATHS = [f'{IMAGE_ROOT}/{stem(f)}.jpg' for f in FPATHS]
FPATHS, GTBBS, CLSS, DELTAS, ROIS = [item for item in [FPATHS, GTBBS, CLSS, DELTAS, ROIS]]
targets = pd.DataFrame(flatten(CLSS), columns=['label'])
label2target = {l:t for t,l in enumerate(targets['label'].unique())}
target2label = {t:l for l,t in label2target.items()}
background_class = label2target['background']
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
def preprocess_image(img):
img = torch.tensor(img).permute(2,0,1)
img = normalize(img)
return img.to(device).float()
def decode(_y):
_, preds = _y.max(-1)
return preds
class RCNNDataset(Dataset):
def __init__(self, fpaths, rois, labels, deltas, gtbbs):
self.fpaths = fpaths
self.gtbbs = gtbbs
self.rois = rois
self.labels = labels
self.deltas = deltas
def __len__(self): return len(self.fpaths)
def __getitem__(self, ix):
fpath = str(self.fpaths[ix])
image = cv2.imread(fpath, 1)[...,::-1]
H, W, _ = image.shape
sh = np.array([W,H,W,H])
gtbbs = self.gtbbs[ix]
rois = self.rois[ix]
bbs = (np.array(rois)*sh).astype(np.uint16)
labels = self.labels[ix]
deltas = self.deltas[ix]
crops = [image[y:Y,x:X] for (x,y,X,Y) in bbs]
return image, crops, bbs, labels, deltas, gtbbs, fpath
def collate_fn(self, batch):
input, rois, rixs, labels, deltas = [], [], [], [], []
for ix in range(len(batch)):
image, crops, image_bbs, image_labels, image_deltas, image_gt_bbs, image_fpath = batch[ix]
crops = [cv2.resize(crop, (224,224)) for crop in crops]
crops = [preprocess_image(crop/255.)[None] for crop in crops]
input.extend(crops)
labels.extend([label2target[c] for c in image_labels])
deltas.extend(image_deltas)
input = torch.cat(input).to(device)
labels = torch.Tensor(labels).long().to(device)
deltas = torch.Tensor(deltas).float().to(device)
return input, labels, deltas
n_train = 9*len(FPATHS)//10
train_ds = RCNNDataset(FPATHS[:n_train], ROIS[:n_train], CLSS[:n_train], DELTAS[:n_train], GTBBS[:n_train])
test_ds = RCNNDataset(FPATHS[n_train:], ROIS[n_train:], CLSS[n_train:], DELTAS[n_train:], GTBBS[n_train:])
from torch.utils.data import TensorDataset, DataLoader
train_loader = DataLoader(train_ds, batch_size=2, collate_fn=train_ds.collate_fn, drop_last=True)
test_loader = DataLoader(test_ds, batch_size=2, collate_fn=test_ds.collate_fn, drop_last=True)
vgg_backbone = models.vgg16(pretrained=True)
vgg_backbone.classifier = nn.Sequential()
for param in vgg_backbone.parameters():
param.requires_grad = False
vgg_backbone.eval().to(device)
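# R-CNN head: frozen VGG16 features feed a linear classifier plus a small regression network that predicts bounding-box offsets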
class RCNN(nn.Module):
def __init__(self):
super().__init__()
feature_dim = 25088
self.backbone = vgg_backbone
self.cls_score = nn.Linear(feature_dim, len(label2target))
self.bbox = nn.Sequential(
nn.Linear(feature_dim, 512),
nn.ReLU(),
nn.Linear(512, 4),
nn.Tanh(),
)
self.cel = nn.CrossEntropyLoss()
self.sl1 = nn.L1Loss()
def forward(self, input):
feat = self.backbone(input)
cls_score = self.cls_score(feat)
bbox = self.bbox(feat)
return cls_score, bbox
def calc_loss(self, probs, _deltas, labels, deltas):
detection_loss = self.cel(probs, labels)
ixs, = torch.where(labels != 0)
_deltas = _deltas[ixs]
deltas = deltas[ixs]
self.lmb = 10.0
if len(ixs) > 0:
regression_loss = self.sl1(_deltas, deltas)
return detection_loss + self.lmb * regression_loss, detection_loss.detach(), regression_loss.detach()
else:
regression_loss = 0
return detection_loss + self.lmb * regression_loss, detection_loss.detach(), regression_loss
def train_batch(inputs, model, optimizer, criterion):
input, clss, deltas = inputs
model.train()
optimizer.zero_grad()
_clss, _deltas = model(input)
loss, loc_loss, regr_loss = criterion(_clss, _deltas, clss, deltas)
accs = clss == decode(_clss)
loss.backward()
optimizer.step()
return loss.detach(), loc_loss, regr_loss, accs.cpu().numpy()
@torch.no_grad()
def validate_batch(inputs, model, criterion):
input, clss, deltas = inputs
with torch.no_grad():
model.eval()
_clss,_deltas = model(input)
loss, loc_loss, regr_loss = criterion(_clss, _deltas, clss, deltas)
_, _clss = _clss.max(-1)
accs = clss == _clss
return _clss, _deltas, loss.detach(), loc_loss, regr_loss, accs.cpu().numpy()
rcnn = RCNN().to(device)
criterion = rcnn.calc_loss
optimizer = optim.SGD(rcnn.parameters(), lr=1e-3)
n_epochs = 5
log = Report(n_epochs)
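# Training loop: one pass over the training batches and one over the validation batches per epoch, logging losses and accuracy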
for epoch in range(n_epochs):
_n = len(train_loader)
for ix, inputs in enumerate(train_loader):
loss, loc_loss, regr_loss, accs = train_batch(inputs, rcnn,
optimizer, criterion)
pos = (epoch + (ix+1)/_n)
log.record(pos, trn_loss=loss.item(), trn_loc_loss=loc_loss,
trn_regr_loss=regr_loss,
trn_acc=accs.mean(), end='\r')
_n = len(test_loader)
for ix,inputs in enumerate(test_loader):
_clss, _deltas, loss, \
loc_loss, regr_loss, accs = validate_batch(inputs,
rcnn, criterion)
pos = (epoch + (ix+1)/_n)
log.record(pos, val_loss=loss.item(), val_loc_loss=loc_loss,
val_regr_loss=regr_loss,
val_acc=accs.mean(), end='\r')
# Plotting training and validation metrics
log.plot_epochs('trn_loss,val_loss'.split(','))
def test_predictions(filename, show_output=True):
img = np.array(cv2.imread(filename, 1)[...,::-1])
candidates = extract_candidates(img)
candidates = [(x,y,x+w,y+h) for x,y,w,h in candidates]
input = []
for candidate in candidates:
x,y,X,Y = candidate
crop = cv2.resize(img[y:Y,x:X], (224,224))
input.append(preprocess_image(crop/255.)[None])
input = torch.cat(input).to(device)
with torch.no_grad():
rcnn.eval()
probs, deltas = rcnn(input)
probs = torch.nn.functional.softmax(probs, -1)
confs, clss = torch.max(probs, -1)
candidates = np.array(candidates)
confs, clss, probs, deltas = [tensor.detach().cpu().numpy() for tensor in [confs, clss, probs, deltas]]
ixs = clss!=background_class
confs, clss, probs, deltas, candidates = [tensor[ixs] for tensor in [confs, clss, probs, deltas, candidates]]
bbs = (candidates + deltas).astype(np.uint16)
ixs = nms(torch.tensor(bbs.astype(np.float32)), torch.tensor(confs), 0.05)
confs, clss, probs, deltas, candidates, bbs = [tensor[ixs] for tensor in [confs, clss, probs, deltas, candidates, bbs]]
if len(ixs) == 1:
confs, clss, probs, deltas, candidates, bbs = [tensor[None] for tensor in [confs, clss, probs, deltas, candidates, bbs]]
if len(confs) == 0 and not show_output:
return (0,0,224,224), 'background', 0
if len(confs) > 0:
best_pred = np.argmax(confs)
best_conf = np.max(confs)
best_bb = bbs[best_pred]
x,y,X,Y = best_bb
_, ax = plt.subplots(1, 2, figsize=(20,10))
show(img, ax=ax[0])
ax[0].grid(False)
ax[0].set_title('Original image')
if len(confs) == 0:
ax[1].imshow(img)
ax[1].set_title('No objects')
plt.show()
return
ax[1].set_title(target2label[clss[best_pred]])
show(img, bbs=bbs.tolist(), texts=[target2label[c] for c in clss.tolist()], ax=ax[1], title='predicted bounding box and class')
plt.show()
return (x,y,X,Y),target2label[clss[best_pred]],best_conf
image, crops, bbs, labels, deltas, gtbbs, fpath = test_ds[7]
test_predictions(fpath)
###Output
_____no_output_____ |
notebooks/01_Building_datasets.ipynb | ###Markdown
Building a datasetA labeled dataset was needed to train and validate a model. Using the MNIST dataset was tempting; however, it does not contain math operators. Since math operators are required in the dataset, it was probably better to generate the digits in the same process. The same 20 by 20 "white" character framed inside a 28 by 28 black frame was chosen for compatibility with MNIST samples (no harm in some future-proofing the design). The major difference is that centering the character within the frame was not done based on the mass-weighted center of the character but rather by padding the non-black area evenly on all four sides. Design processA simple 3-phase approach was chosen for generating a labeled dataset. Phase 1: generating initial pages of 4 styles of "handwriting"The goal was to have:- 1 page per style- 16 many-character lines per page, 1 class of characters per lineThe characters within each style and class **could be identical** as long as there were many of them per line. The procedure was:1. some sans-serif fonts were chosen for digits and operators that resemble a very, very clean handwriting - 3 different mixed-font sets of characters were chosen and called "style A", "style B" and "style C"2. for each style, the 16 characters from set { 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, +, -, ×, /, (, ) } were typed each on its own new line of text in Inkscape3. each character was copy/pasted in increasing block sizes within its line to extend the number of total same-class characters of the same style - generous spacing between characters and lines was set to avoid any horizontal or vertical overlap, especially after augmentation4. the 3 blocks of 16 lines were exported as separate PNG images at a resolution that resulted in about 75 px line height for the tallest characters like "/" and "(" to as little as 5 px line height for a "-" - this concluded the character manipulations in vector format using Inkscape - the resulting large images are provided as [page_A.png](images/page_A.png), [page_B.png](images/page_B.png) and [page_C.png](images/page_C.png)5. the fourth style was added, which started as a [photograph](images/page_D.jpg) of characters hand-written on a sheet of paper - note: switched to using GIMP for manipulating bitmap images - the hand-written sample was small, so it was horizontally stacked with its own copies to extend to the dimensions of other styles - note: some augmentation was already done here while copy/pasting, namely flipping and/or rotating long lines of characters from { 0, 6, 8, 9, +, -, x, /, (, ) } before hstacking them together - the resulting large image is provided as [page_D.png](images/page_D.png) Phase 2: data augmentationInstead of programming image augmentation on a per-character basis, a different approach was chosen. The goal was to efficiently diversify the members of each class in huge batches all at once while not losing their class label information. The procedure was:1. all 4 images were imported as separate layers into a multi-layer image2. characters in all 4 layers were randomly distorted by: - generating a displacement map: - generating [simplex noise](images/simplex_noise_settings.png) - applying [Gaussian blur](images/gaussian_blur_settings.png) - adjusting the [white level](images/curves_settings.png) - a _highly_ compressed JPEG [sample](images/xy_displacement_map.jpg) is provided for reference - applying displacement mapping to all 4 layers of the image ([parameters](images/xy_displacement_settings.png) for reference)3. 
distorted characters in all 4 layers were squeezed and stretched along the y-axis by: - generating another displacement map using a gradient tool - a _highly_ compressed JPEG [sample](images/y_displacement_map.jpg) is provided for reference - applying displacement mapping to all 4 layers of the image ([parameters](images/y_displacement_settings.png) for reference)4. all 4 layers of the image were duplicated multiple times and the copies were skewed left and right to various degrees - all the resulting layers of the image were batch-exported to separate PNG images featuring: - 11 levels of "skewiness" (none, 5 left, 5 right) for each of the styles A, B and C - 7 levels of "skewiness" (none, 2 left, 4 right) for handwritten style D (inherently more diverse) - all 40 pages are provided in the [images](images) subfolder (names starting with "base_")This kind of batch-level data augmentation (at least for the purpose of simulating handwriting by applying distortions) has proved to be time-efficient and, most importantly, yielded results that were quick and easy to verify visually. Phase 3: data processingThis phase is very similar to the preprocessing that an image of an expression will go through in order to extract the characters from the image and pass them to a classifier. The only difference here is that the 40 pages of characters generated in phase 2 all have single-class lines in the same known order. Attaching a true class label to each extracted character thus becomes a trivial task. The rest of the notebook does just that, with some data analysis of the resulting labeled dataset.
###Code
import os
import re
import sys
import pickle
import hashlib
import numpy as np
import pandas as pd
import cv2 as cv
import seaborn as sns
from matplotlib import pyplot as plt
%matplotlib inline
plt.style.use('seaborn')
sns.set(rc={'figure.dpi': 72, 'savefig.dpi': 72})
sns.set_context('notebook')
###Output
_____no_output_____
###Markdown
Original dataset-building functionsWhat follows are the original functions used to (pre)process the 40 pages of synthetic data. Some of them are prototypes of the functions meant for real data preprocessing. The later need more robustness but given that some crucial properties of synthetic data are well defined, these simple prototypes will serve the purpose for now.Documenting prototypes was not of a great concern since they were to be used only once in this notebook. Those that were further tailored to real-world data requirements are well documented in a separate module.
###Code
WHITE_SPREAD = 5.5
MAX_INK_VALUE = 102_000  # 20 * 20 * 255, i.e. the maximum possible pixel sum for a fully filled 20x20 character
def showhist(img):
plt.hist(img.ravel(), 256, (0, 256))
plt.show()
def desaturate(img):
return img.min(axis=2)
def autostretch(img, black=None, white=None):
img_out = img.astype(float)
if black is None:
black = img_out.min()
if white is None:
white = img_out.max()
img_out = np.round((img_out - black) / (white - black) * 255)
img_out[img_out < 0] = 0
return img_out.astype(np.uint8)
def autothresh(img):
img_out = autostretch(img).astype(float)
whitepoint = np.median(img_out) - 3 * img_out.std()
mask = img_out <= whitepoint
blackpoint = np.median(img_out[mask]) * 2
thresh = (blackpoint + whitepoint) * 0.5
img_out[img_out > thresh] = 255
img_out[img_out <= thresh] = 0
return img_out.astype(np.uint8)
def get_hmask(img):
return (img != 255).any(axis=1)
def get_vmask(img):
return (img != 255).any(axis=0)
def split_mask(mask):
# extend by a single False
mask1 = np.array([*mask, False], dtype=bool)
# prepend by a single False
mask2 = np.array([False, *mask], dtype=bool)
# offset by one and XOR to detect transitions from False to True and vice versa
transitions = np.logical_xor(mask1, mask2)
transitions = np.flatnonzero(transitions)
# extract the submasks
submasks = []
for idx in range(0, len(transitions), 2):
submask = np.zeros(mask.shape)
submask[transitions[idx]:transitions[idx + 1]] = 1
submasks.append(submask.astype(bool))
return submasks
def invert_bw(img):
return 255 - img
def framechar(char):
max_dim = max(char.shape)
scale = 20 / max_dim
char = cv.resize(char, (0, 0), fx=scale, fy=scale)
char = invert_bw(char)
height, width = char.shape
framed = np.zeros((28, 28), dtype=np.uint8)
top = (28 - height) // 2
left = (28 - width) // 2
framed[top:(top + height), left:(left + width)] = char
return framed
def get_filenames(folder, regex):
pagename_re = re.compile(regex)
pages = []
script_styles = []
script_substyles = []
for filename in os.listdir(folder):
if (match := pagename_re.match(filename)):
script_styles.append(match.group(2))
script_substyles.append(match.group(1))
pagefilename = os.path.join(folder, filename)
pages.append(pagefilename)
return script_styles, script_substyles, pages
def get_img_sig(img):
img_string = ' '.join(img.ravel().astype(str))
img_string_hashed = hashlib.md5(img_string.encode('utf-8')).hexdigest()
return img_string_hashed
def get_ink_fraction(img):
return img.sum() / MAX_INK_VALUE
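# Flag ink-fraction outliers within each (script substyle, label) group using the standard 1.5 * IQR fences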
def get_outliers(df):
idxs = []
for script_substyle in df.script_substyle.unique():
for label in df.label.unique():
subdf = df.loc[(df.script_substyle == script_substyle) & (df.label == label)]
Q1 = subdf.ink_fraction.quantile(0.25)
Q3 = subdf.ink_fraction.quantile(0.75)
IQR = Q3 - Q1
low = Q1 - 1.5 * IQR
high = Q3 + 1.5 * IQR
idxs.extend(list(subdf.loc[(subdf.ink_fraction < low) | (subdf.ink_fraction > high)].index))
return idxs
base_images_dir = 'images'
dataset_dir = os.path.join('..', '.nogit', 'dataset')
# a list of character MD5 digests to be used in filenames
char_sigs = []
dataset = {'filename': [],
'array' : [],
'label': [],
'label_desc': [],
'script_style': [],
'script_substyle': [],
'ink_fraction': [],
}
labels = ['1', '2', '3', '4', '5', '6', '7', '8', '9', '0', '+', '-', 'x', '/', '(', ')']
labels_POSIX = ['1', '2', '3', '4', '5', '6', '7', '8', '9', '0', 'plus', 'minus', 'x', 'slash', 'lpar', 'rpar']
script_styles, script_substyles, pages = get_filenames(base_images_dir, r'^base_(([ABCD]).*)\.png$')
for script_style, script_substyle, filename in zip (script_styles, script_substyles, pages):
print(f'{script_substyle = }')
img = cv.imread(filename, 1)
if img is None:
sys.exit(f'Could not read: {filename}')
img2 = desaturate(img)
hsubmasks = split_mask(get_hmask(img2))
# this part of code is the skeleton for the extractor.py module
for line_idx, hsubmask in enumerate(hsubmasks):
label = labels[line_idx]
label_POSIX = labels_POSIX[line_idx]
label_dir = os.path.join(dataset_dir, label_POSIX)
os.makedirs(label_dir, exist_ok=True)
line = img2[hsubmask]
vsubmasks = split_mask(get_vmask(line))
for vsubmask in vsubmasks:
char = line[:, vsubmask]
charmasks = split_mask(get_hmask(char))
charmask = sorted(charmasks, key=lambda x: x.sum(), reverse=True)[0]
char = char[charmask]
# get the unique signature for the char (MD5 digest of the caracter array values)
char_sig = get_img_sig(char)
char_sigs.append(char_sig)
char = framechar(char)
filename = os.path.join(label_dir, f'{char_sig}.png')
cv.imwrite(filename, char)
dataset['filename'].append(filename)
dataset['array'].append(char)
dataset['label'].append(line_idx)
dataset['label_desc'].append(label)
dataset['script_style'].append(script_style)
dataset['script_substyle'].append(script_substyle)
dataset['ink_fraction'].append(get_ink_fraction(char))
df = pd.DataFrame(dataset)
df
print(f'duplicates count = {df.filename.duplicated().sum()}')
# find outliers based on ink fraction and move them to a subfolder for closer inspection
outliers = get_outliers(df)
for path1 in df.loc[outliers, 'filename'].values:
path2 = re.sub(dataset_dir, os.path.join(dataset_dir, 'outliers'), path1)
os.renames(path1, path2)
print(f'outliers count = {len(outliers)}')
print(f'outliers fraction = {len(outliers)/len(df):.2%}')
plt.figure(figsize=(14, 8))
_ = sns.boxenplot(data=df, x='label_desc', y='ink_fraction')
###Output
_____no_output_____
###Markdown
Chart 1The chart clearly shows different ink fraction ranges between character classes.
###Code
# drop outliers
df = df.loc[~df.index.isin(outliers)].reset_index(drop=True)
plt.figure(figsize=(14, 8))
_ = sns.boxenplot(data=df, x='label_desc', y='ink_fraction')
###Output
_____no_output_____
###Markdown
Chart 2After dropping the outliers one can see that some upper extreme values shrank down a bit but not by much. This is in accordance with the very low fraction of outliers detected and removed from an otherwise massive dataset.Among the 734 outliers removed from the dataset, visual inspection revealed only 12 were truly unrecognizable smudges. The rest could easily have passed as valid samples. Since the images were stored in PNG format, all 12 true outliers had either the largest or the smallest size (in bytes) of any other "outlier" in their class. This made it easy to inspect a few of the smallest and largest files among each of the labels in the rest of the dataset. No smudges or otherwise unrecognizable "characters" were found in the remaining dataset.
###Code
df.to_csv(os.path.join(dataset_dir, 'math_dataset_1.0.csv'))
plt.figure(figsize=(8, 8))
_ = sns.boxenplot(data=df.sort_values('script_style'), x='script_style', y='ink_fraction')
###Output
_____no_output_____
###Markdown
Chart 3It seems that styles B and C match the ink fraction range of this particular handwritten style D pretty closely. However, there are certainly thicker pens and heavier hands out there that could find a closer match in style A.
###Code
plt.figure(figsize=(20, 8))
_ = sns.boxenplot(data=df.sort_values('script_substyle'), x='script_substyle', y='ink_fraction')
###Output
_____no_output_____
###Markdown
Chart 4To no surprise, no major difference in ink fraction can be distinguished among substyles of any style.
###Code
plt.figure(figsize=(24, 8))
_ = sns.boxenplot(data=df, x='label_desc', y='ink_fraction', hue="script_style", hue_order=['A', 'B', 'C', 'D'])
###Output
_____no_output_____
###Markdown
Chart 5Compared to Chart 3, no big surprises were found by controlling for the class label. Yes, some classes show discrepancies with the aggregate trend, but nothing worth investigating. Saving datasetsSeveral subsets are exported from the dataframe and labeled according to which styles of "handwriting" they contain.
###Code
X = np.concatenate(df.array).reshape(len(df), 28, 28)
y = df.label.values
print(X.shape, y.shape)
custom_math_dataset = (X, y)
with open(os.path.join(dataset_dir, 'math_dataset_ABCD_1.0.pkl'), 'wb') as file:
pickle.dump(custom_math_dataset, file)
df_ABC = df.loc[df.script_style.isin(['A', 'B', 'C'])].reset_index(drop=True)
X = np.concatenate(df_ABC.array).reshape(len(df_ABC), 28, 28)
y = df_ABC.label.values
print(X.shape, y.shape)
custom_math_dataset = (X, y)
with open(os.path.join(dataset_dir, 'math_dataset_ABC_1.0.pkl'), 'wb') as file:
pickle.dump(custom_math_dataset, file)
df_D = df.loc[df.script_style == 'D'].reset_index(drop=True)
X = np.concatenate(df_D.array).reshape(len(df_D), 28, 28)
y = df_D.label.values
print(X.shape, y.shape)
custom_math_dataset = (X, y)
with open(os.path.join(dataset_dir, 'math_dataset_D_1.0.pkl'), 'wb') as file:
pickle.dump(custom_math_dataset, file)
###Output
(22614, 28, 28) (22614,)
|
Exercises/Supervised_Learning_Regression/5a - Evaluating Performance - House Prices.ipynb | ###Markdown
1) Load the houseprices data
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import statsmodels.api as sm
df = pd.read_csv('houseprices.cvs')
num_col = ['overallqual', 'grlivarea', 'garagecars', 'garagearea', 'totalbsmtsf', 'firstflrsf']
cat_col = ['exterqual', 'kitchenqual']
df2 = pd.concat([df[num_col], df[cat_col], df['saleprice']], axis = 1)
for col in cat_col:
df2 = pd.concat([df2, pd.get_dummies(df[col], drop_first=True, prefix = col)], axis = 1)
df2.head()
###Output
_____no_output_____
###Markdown
2) Run your house prices model again and assess the goodness of fit of your model using F-test, R-squared, adjusted R-squared, AIC and BIC. The R-squared and adjusted R-squared values are similar, at 0.792 and 0.790.F-statistic is 459 and has a p-value near 0, telling us that the features explain something about the target variable.Lastly, the AIC and BIC are 3.482e+04 and 3.489e+04.
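As a quick sanity check on these numbers, the adjusted R-squared follows from the R-squared via $\bar{R}^2 = 1 - (1 - R^2)\,\frac{n-1}{n-p-1}$; with $n = 1460$ observations and $p = 12$ predictors this gives $1 - 0.208 \cdot \frac{1459}{1447} \approx 0.790$, matching the value in the summary below.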
###Code
Y = df2['saleprice']
X = df2.drop(['saleprice', 'exterqual', 'kitchenqual'], axis = 1)
X = sm.add_constant(X)
results = sm.OLS(Y,X).fit()
print(results.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: saleprice R-squared: 0.792
Model: OLS Adj. R-squared: 0.790
Method: Least Squares F-statistic: 459.0
Date: Mon, 09 Sep 2019 Prob (F-statistic): 0.00
Time: 22:23:36 Log-Likelihood: -17398.
No. Observations: 1460 AIC: 3.482e+04
Df Residuals: 1447 BIC: 3.489e+04
Df Model: 12
Covariance Type: nonrobust
==================================================================================
coef std err t P>|t| [0.025 0.975]
----------------------------------------------------------------------------------
const 3.758e+04 1.12e+04 3.346 0.001 1.55e+04 5.96e+04
overallqual 1.609e+04 1225.717 13.131 0.000 1.37e+04 1.85e+04
grlivarea 43.0657 2.516 17.116 0.000 38.130 48.001
garagecars 1.466e+04 2857.554 5.129 0.000 9051.993 2.03e+04
garagearea 7.0534 9.848 0.716 0.474 -12.264 26.371
totalbsmtsf 19.2251 4.074 4.719 0.000 11.234 27.216
firstflrsf 8.9155 4.745 1.879 0.060 -0.393 18.224
exterqual_Fa -4.953e+04 1.29e+04 -3.851 0.000 -7.48e+04 -2.43e+04
exterqual_Gd -3.435e+04 6421.345 -5.349 0.000 -4.69e+04 -2.17e+04
exterqual_TA -4.598e+04 7092.098 -6.483 0.000 -5.99e+04 -3.21e+04
kitchenqual_Fa -4.695e+04 8338.150 -5.631 0.000 -6.33e+04 -3.06e+04
kitchenqual_Gd -3.634e+04 4793.439 -7.581 0.000 -4.57e+04 -2.69e+04
kitchenqual_TA -4.707e+04 5374.308 -8.759 0.000 -5.76e+04 -3.65e+04
==============================================================================
Omnibus: 621.769 Durbin-Watson: 1.992
Prob(Omnibus): 0.000 Jarque-Bera (JB): 68903.106
Skew: -0.985 Prob(JB): 0.00
Kurtosis: 36.597 Cond. No. 3.96e+04
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 3.96e+04. This might indicate that there are
strong multicollinearity or other numerical problems.
###Markdown
3) Do you think your model is satisfactory? If so, why?As a first model, getting an R-squared value of 0.792 isn't bad. This means that ~79% of the variance is explained. Of course, there is room for improvement, as that means 21% of the variance is not explained. 4) In order to improve the goodness of fit of your model, try different model specifications by adding or removing some variables.I've created new features that add up or multiply the external and kitchen qualities, as well as one summing up the total square footage around the property. However, none of these really made much difference in the R-squared and adjusted R-squared values. The biggest difference in improving these values came from taking the log of sale prices.
###Code
df2['sumqual'] = df2['exterqual_Fa'] + df2['exterqual_Gd'] + df2['exterqual_TA'] + df2['kitchenqual_Fa'] + df2['kitchenqual_Gd'] + df2['kitchenqual_TA']
df2['prodqual'] = df2['exterqual_Fa'] + df2['exterqual_Gd'] * df2['exterqual_TA'] * df2['kitchenqual_Fa'] * df2['kitchenqual_Gd'] * df2['kitchenqual_TA']  # note: the dummy product is always 0, so prodqual ends up identical to exterqual_Fa (see the matching coefficients below)
df2['totalarea'] = df2['grlivarea'] + df2['totalbsmtsf'] + df2['firstflrsf'] + df2['garagearea']
# The two interaction terms below appear in the regression summary; their definitions are inferred here from the column names
df2['overallXtotalarea'] = df2['overallqual'] * df2['totalarea']
df2['overallXsum'] = df2['overallqual'] * df2['sumqual']
Y2 = np.log(df2['saleprice'])
X2 = pd.concat([df2.drop(['saleprice', 'exterqual', 'kitchenqual'], axis = 1)], axis = 1)
X2 = sm.add_constant(X2)
results2 = sm.OLS(Y2,X2).fit()
print(results2.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: saleprice R-squared: 0.822
Model: OLS Adj. R-squared: 0.820
Method: Least Squares F-statistic: 475.2
Date: Mon, 09 Sep 2019 Prob (F-statistic): 0.00
Time: 22:29:10 Log-Likelihood: 526.73
No. Observations: 1460 AIC: -1023.
Df Residuals: 1445 BIC: -944.2
Df Model: 14
Covariance Type: nonrobust
=====================================================================================
coef std err t P>|t| [0.025 0.975]
-------------------------------------------------------------------------------------
const 10.4310 0.224 46.516 0.000 9.991 10.871
overallqual 0.1896 0.028 6.737 0.000 0.134 0.245
grlivarea 0.0001 1.42e-05 9.923 0.000 0.000 0.000
garagecars 0.0726 0.014 5.367 0.000 0.046 0.099
garagearea 1.686e-05 3.81e-05 0.443 0.658 -5.78e-05 9.15e-05
totalbsmtsf 5.895e-05 2.11e-05 2.790 0.005 1.75e-05 0.000
firstflrsf -1.614e-05 2.37e-05 -0.680 0.497 -6.27e-05 3.04e-05
exterqual_Fa -0.0651 0.030 -2.155 0.031 -0.124 -0.006
exterqual_Gd 0.0505 0.036 1.389 0.165 -0.021 0.122
exterqual_TA -0.0002 0.038 -0.006 0.995 -0.074 0.074
kitchenqual_Fa -0.0855 0.035 -2.445 0.015 -0.154 -0.017
kitchenqual_Gd 0.0373 0.023 1.640 0.101 -0.007 0.082
kitchenqual_TA -0.0418 0.024 -1.769 0.077 -0.088 0.005
sumqual -0.1048 0.085 -1.239 0.216 -0.271 0.061
prodqual -0.0651 0.030 -2.155 0.031 -0.124 -0.006
totalarea 0.0002 1.62e-05 12.415 0.000 0.000 0.000
overallXtotalarea -2.116e-05 2.3e-06 -9.205 0.000 -2.57e-05 -1.67e-05
overallXsum -0.0038 0.012 -0.319 0.749 -0.027 0.019
==============================================================================
Omnibus: 463.279 Durbin-Watson: 1.986
Prob(Omnibus): 0.000 Jarque-Bera (JB): 2995.159
Skew: -1.318 Prob(JB): 0.00
Kurtosis: 9.503 Cond. No. 1.21e+21
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The smallest eigenvalue is 9.3e-31. This might indicate that there are
strong multicollinearity problems or that the design matrix is singular.
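###Markdown
The warning above points at strong multicollinearity, which is expected here since some of the engineered features (e.g. `sumqual`, `totalarea`) are linear combinations of columns already in the model. As an optional follow-up, not part of the original analysis, variance inflation factors can show which columns are responsible; this is a minimal sketch assuming `X2` and `pandas` (as `pd`) are available as defined above.
###Code
from statsmodels.stats.outliers_influence import variance_inflation_factor
# VIF for each column of the design matrix; very large or infinite values
# flag the collinear engineered features (and the constant term).
vif = pd.Series([variance_inflation_factor(X2.values, i) for i in range(X2.shape[1])],
                index=X2.columns)
print(vif.sort_values(ascending=False))
###Output
_____no_output_____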
|
notebooks/timeSeries-processing ~ Examples.ipynb | ###Markdown
PREREQUISITES In the examples shown in this document, the EEA air pollution datasets will be used. For this reason, the [EEA-datasets-handler](https://github.com/EnricoPittini/EEA-datasets-handler) library will be used. In particular, the PM10 mean concentrations in Italy in 2020 are considered.
###Code
import EEA_datasets_handler as eea
# Download the datasets
# IT'S NECESSARY ONLY IF THEY HAVEN'T BEEN DOWNLOADED YET
dest_path = "C:\\Datasets"
countries_cities_dict = {"IT": "all"}
pollutants = ["PM10"]
years = [2020]
eea.download_datasets(dest_path, countries_cities_dict, pollutants, years)
# Load the datasets
source_path = "C:\\Datasets\\EEA"
countries_cities_dict = {"IT":"all"}
pollutants = ["PM10"]
years = [2020]
df = eea.load_datasets(source_path,countries_cities_dict,pollutants,years)
df.info()
df
# Process the datasets
df_mean, _, _ = eea.preprocessing(df, fill=True, fill_n_days=10 ,fill_aggr="mean")
df_mean
###Output
C:\Users\Enrico\anaconda3\lib\site-packages\EEA_datasets_handler.py:806: UserWarning: Missing days: ['2020-01-31', '2020-02-01', '2020-02-02', '2020-02-03', '2020-02-04', '2020-02-05', '2020-02-06', '2020-02-07', '2020-02-08', '2020-02-10', '2020-02-11']
warnings.warn("Missing days: "+str(list(missing_days.strftime('%Y-%m-%d'))))
###Markdown
In addition, a meteorological dataset will also be used (i.e. `meteorological_data_2020`). It will be loaded later on. These data have been obtained from the [ILMETEO](https://www.ilmeteo.it/) website. FUNCTIONS TO MANIPULATE DATES Auxiliary functions which work with dates.
###Code
import pandas as pd
import timeSeries_processing as tsp  # library demonstrated in this notebook; needed for the tsp.* calls below
days = pd.date_range("2020-01-01","2020-01-04").append(pd.date_range("2020-01-11","2020-03-14"))
print(days)
tsp.find_missing_days(days)
print(tsp.group_days_by(days,criterion="year"))
print()
print(tsp.group_days_by(days,criterion="month"))
print()
print(tsp.group_days_by(days,criterion="season"))
day = pd.Timestamp("2020-04-15")
print(tsp.find_same_month_days(day))
print()
print(tsp.find_same_season_days(day))
print(tsp.find_k_years_ago_days(day,k=4,n_days=11))
print()
print(tsp.find_k_years_ago_days(day,k=4,n_days="month"))
print()
print(tsp.find_k_years_ago_days(day,k=4,n_days="season"))
print()
print(tsp.find_current_year_days(day, n_days=11))
print()
print(tsp.find_current_year_days(day, n_days=11, current_day=True))
print()
print(tsp.find_current_year_days(day, n_days="month", current_day=True))
print()
print(tsp.find_current_year_days(day, n_days="season", current_day=False))
print()
###Output
DatetimeIndex(['2020-04-04', '2020-04-05', '2020-04-06', '2020-04-07',
'2020-04-08', '2020-04-09', '2020-04-10', '2020-04-11',
'2020-04-12', '2020-04-13', '2020-04-14'],
dtype='datetime64[ns]', freq='D')
DatetimeIndex(['2020-04-05', '2020-04-06', '2020-04-07', '2020-04-08',
'2020-04-09', '2020-04-10', '2020-04-11', '2020-04-12',
'2020-04-13', '2020-04-14', '2020-04-15'],
dtype='datetime64[ns]', freq='D')
DatetimeIndex(['2020-04-01', '2020-04-02', '2020-04-03', '2020-04-04',
'2020-04-05', '2020-04-06', '2020-04-07', '2020-04-08',
'2020-04-09', '2020-04-10', '2020-04-11', '2020-04-12',
'2020-04-13', '2020-04-14', '2020-04-15'],
dtype='datetime64[ns]', freq='D')
DatetimeIndex(['2020-03-01', '2020-03-02', '2020-03-03', '2020-03-04',
'2020-03-05', '2020-03-06', '2020-03-07', '2020-03-08',
'2020-03-09', '2020-03-10', '2020-03-11', '2020-03-12',
'2020-03-13', '2020-03-14', '2020-03-15', '2020-03-16',
'2020-03-17', '2020-03-18', '2020-03-19', '2020-03-20',
'2020-03-21', '2020-03-22', '2020-03-23', '2020-03-24',
'2020-03-25', '2020-03-26', '2020-03-27', '2020-03-28',
'2020-03-29', '2020-03-30', '2020-03-31', '2020-04-01',
'2020-04-02', '2020-04-03', '2020-04-04', '2020-04-05',
'2020-04-06', '2020-04-07', '2020-04-08', '2020-04-09',
'2020-04-10', '2020-04-11', '2020-04-12', '2020-04-13',
'2020-04-14'],
dtype='datetime64[ns]', freq='D')
###Markdown
FUNCTION TO PLOT A TIME SERIES
###Code
tsp.plot_timeSeries(df_mean, col_name="mean", figsize=(6,6))
tsp.plot_timeSeries(df_mean, col_name="mean", divide="month", figsize=(6,6))
tsp.plot_timeSeries(df_mean, col_name="mean", divide="season", line=False, figsize=(6,6))
###Output
_____no_output_____
###Markdown
add_timeSeries_dataframe Load `meteorological_data_2020.csv`, which contains meteorological data about Italy in 2020. (It has been obtained using [ILMETEO](https://www.ilmeteo.it/)).
###Code
import pandas as pd
df_meteo = pd.read_csv("meteorological_data_2020.csv",index_col=0)
df_meteo = df_meteo.set_index(pd.DatetimeIndex(df_meteo.index))
df_meteo.info()
df_meteo
df_meteo = df_meteo[["TMEDIA °C","TMIN °C"]] # Focus on two features
df_mean_met, X, y = tsp.add_timeSeries_dataframe(df=df_mean, df_other=df_meteo, y_col="mean")
df_mean_met
###Output
_____no_output_____
###Markdown
The `add_timeSeries_dataframe` function, like all the other following processing functions, also returns the `X` and `y` numpy arrays.
###Code
X # Contains the explanatory features
y # Contains the response feature. It has been scaled
###Output
_____no_output_____
###Markdown
*With the input parameter `y_scale`, the user can decide whether or not to scale the response feature.* add_k_previous_days
###Code
df_mean_temp, X, y = tsp.add_k_previous_days(df=df_mean, col_name="mean", k=5, y_col="mean")
df_mean_temp.info()
df_mean_temp
###Output
_____no_output_____
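###Markdown
A quick sanity check of the lagged values (the column name `mean_1`, and the assumption that it holds the value of the previous day, follow from the naming scheme described just below):
###Code
# the lag-1 column is expected to match the original series shifted by one day
check_day = pd.Timestamp("2020-04-15")
print(df_mean_temp["mean_1"].loc[check_day])                  # assumed name of the lag-1 column
print(df_mean["mean"].loc[check_day - pd.Timedelta(days=1)])  # value of the previous day
###Output
_____no_output_____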
###Markdown
*The names of the added columns have the structure "ColumnName_i", with i from 1 to k* add_k_years_ago_statistics Get the EEA datasets about the PM10 mean concentrations in Italy, with respect to 2019 (i.e. one year before 2020).
###Code
# Download the datasets
# IT'S NECESSARY ONLY IF THEY HAVEN'T BEEN DOWNLOADED YET
dest_path = "C:\\Datasets"
countries_cities_dict = {"IT": "all"}
pollutants = ["PM10"]
years = [2019]
eea.download_datasets(dest_path, countries_cities_dict, pollutants, years)
# Load the datasets
source_path = "C:\\Datasets\\EEA"
countries_cities_dict = {"IT": "all"}
pollutants = ["PM10"]
years = [2019]
df_prev_year = eea.load_datasets(source_path, countries_cities_dict, pollutants, years)
df_prev_year.info()
# Process the datasets
df_prev_year_mean, df_prev_year_min, df_prev_year_max = eea.preprocessing(df_prev_year)
###Output
_____no_output_____
###Markdown
First example: **days_to_select is an odd integer**
###Code
df_mean_temp, X, y = tsp.add_k_years_ago_statistics(df=df_mean, df_k_years_ago=df_prev_year_mean, k=1, days_to_select=11,
y_col="mean")
###Output
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:657: UserWarning: For the day 2020-01-01 only these 1 years ago days have been found: ['2019-01-01', '2019-01-02', '2019-01-03', '2019-01-04', '2019-01-05', '2019-01-06']
warnings.warn(("For the day "+day.strftime('%Y-%m-%d')+
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:657: UserWarning: For the day 2020-01-02 only these 1 years ago days have been found: ['2019-01-01', '2019-01-02', '2019-01-03', '2019-01-04', '2019-01-05', '2019-01-06', '2019-01-07']
warnings.warn(("For the day "+day.strftime('%Y-%m-%d')+
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:657: UserWarning: For the day 2020-01-03 only these 1 years ago days have been found: ['2019-01-01', '2019-01-02', '2019-01-03', '2019-01-04', '2019-01-05', '2019-01-06', '2019-01-07', '2019-01-08']
warnings.warn(("For the day "+day.strftime('%Y-%m-%d')+
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:657: UserWarning: For the day 2020-01-04 only these 1 years ago days have been found: ['2019-01-01', '2019-01-02', '2019-01-03', '2019-01-04', '2019-01-05', '2019-01-06', '2019-01-07', '2019-01-08', '2019-01-09']
warnings.warn(("For the day "+day.strftime('%Y-%m-%d')+
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:657: UserWarning: For the day 2020-01-05 only these 1 years ago days have been found: ['2019-01-01', '2019-01-02', '2019-01-03', '2019-01-04', '2019-01-05', '2019-01-06', '2019-01-07', '2019-01-08', '2019-01-09', '2019-01-10']
warnings.warn(("For the day "+day.strftime('%Y-%m-%d')+
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:657: UserWarning: For the day 2020-12-27 only these 1 years ago days have been found: ['2019-12-22', '2019-12-23', '2019-12-24', '2019-12-25', '2019-12-26', '2019-12-27', '2019-12-28', '2019-12-29', '2019-12-30', '2019-12-31']
warnings.warn(("For the day "+day.strftime('%Y-%m-%d')+
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:657: UserWarning: For the day 2020-12-28 only these 1 years ago days have been found: ['2019-12-23', '2019-12-24', '2019-12-25', '2019-12-26', '2019-12-27', '2019-12-28', '2019-12-29', '2019-12-30', '2019-12-31']
warnings.warn(("For the day "+day.strftime('%Y-%m-%d')+
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:657: UserWarning: For the day 2020-12-29 only these 1 years ago days have been found: ['2019-12-24', '2019-12-25', '2019-12-26', '2019-12-27', '2019-12-28', '2019-12-29', '2019-12-30', '2019-12-31']
warnings.warn(("For the day "+day.strftime('%Y-%m-%d')+
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:657: UserWarning: For the day 2020-12-30 only these 1 years ago days have been found: ['2019-12-25', '2019-12-26', '2019-12-27', '2019-12-28', '2019-12-29', '2019-12-30', '2019-12-31']
warnings.warn(("For the day "+day.strftime('%Y-%m-%d')+
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:657: UserWarning: For the day 2020-12-31 only these 1 years ago days have been found: ['2019-12-26', '2019-12-27', '2019-12-28', '2019-12-29', '2019-12-30', '2019-12-31']
warnings.warn(("For the day "+day.strftime('%Y-%m-%d')+
###Markdown
*A warning is given:* - *for each day for which fewer days than expected are selected;* - *for each day for which no day has been selected.*
###Code
df_mean_temp
###Output
_____no_output_____
###Markdown
*The names of the added columns have the structure "k_years_ago_ColumnName"* In order to compute the value for a 2020 day, the 11 days of 2019 that are centered on that day are selected.
###Code
# 2020-04-15
import pandas as pd
print(df_mean_temp["1_years_ago_mean"].loc[pd.Timestamp("2020-04-15")])
df_prev_year_mean["mean"].loc[pd.date_range('2019-04-10','2019-04-20')].mean()
# 2020-10-22
print(df_mean_temp["1_years_ago_mean"].loc[pd.Timestamp("2020-10-22")])
df_prev_year_mean["mean"].loc[pd.date_range('2019-10-17','2019-10-27')].mean()
# 2020-01-01: example with less than 11 days
print(df_mean_temp["1_years_ago_mean"].loc[pd.Timestamp("2020-01-01")])
df_prev_year_mean["mean"].loc[pd.date_range('2019-01-01','2019-01-06')].mean()
# 2020-12-27: example with less than 11 days
print(df_mean_temp["1_years_ago_mean"].loc[pd.Timestamp("2020-12-27")])
df_prev_year_mean["mean"].loc[pd.date_range('2019-12-22','2019-12-31')].mean()
###Output
24.0776657907545
###Markdown
*With the input parameter `scale`, the user can decide which statistical aggregation is used.* **days_to_select is "month" (/"season")**
###Code
df_mean_temp, _, _ = tsp.add_k_years_ago_statistics(df = df_mean, df_k_years_ago=df_prev_year_mean, k=1,
days_to_select="month", y_col="mean")
df_mean_temp
###Output
_____no_output_____
###Markdown
In order to compute the value for a 2020 day, the days of the same month in 2019 are selected.
###Code
# January
import pandas as pd
df_prev_year_mean["mean"].loc[pd.date_range('2019-01-01','2019-01-31')].mean()
###Output
_____no_output_____
###Markdown
**More columns** The same logic is applied individually to each column of `df_prev_year_mean`
###Code
df_prev_year_mean["COLUMN"] = df_prev_year_mean["mean"]*2 # Put a second column
df_mean_temp, _ ,_ = tsp.add_k_years_ago_statistics(df_mean, df_k_years_ago=df_prev_year_mean, k=1, days_to_select=11,
y_col="mean")
df_mean_temp
###Output
_____no_output_____
###Markdown
With the input parameter `columns_to_select`, the user can specify which columns are taken into account.
###Code
df_mean_temp,_,_ = tsp.add_k_years_ago_statistics(df_mean, df_k_years_ago=df_prev_year_mean, k=1, days_to_select=11,
columns_to_select = ["mean"], y_col="mean", )
df_mean_temp
###Output
_____no_output_____
###Markdown
**days_to_select is a function** It's a predicate that decides which days have to be selected
###Code
# Predicate of selection
f = lambda day, df, prev_day, prev_df : abs(df["mean"][day]-prev_df["mean"][prev_day])<3
df_mean_temp, _, _ = tsp.add_k_years_ago_statistics(df=df_mean, df_k_years_ago=df_prev_year_mean, k=1,
days_to_select=f, y_col="mean")
df_mean_temp
###Output
_____no_output_____
###Markdown
For each 2020 day, the 2019 days with a similar PM10 concentration are selected (i.e. the 2019 days whose PM10 concentration differs from that of the 2020 day by less than 3). **replace_miss** By default, the parameter `replace_miss` is True, i.e. the missing days are filled.
###Code
# replace_miss True
f = lambda day,df,prev_day,prev_df : abs(df["mean"][day]-prev_df["mean"][prev_day])<3
df_mean_temp, _, _ = tsp.add_k_years_ago_statistics(df=df_mean, df_k_years_ago=df_prev_year_mean, k=1,
days_to_select=f, replace_miss=True, y_col="mean")
df_mean_temp
###Output
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:654: UserWarning: No 1 years ago days have been found for the day 2020-01-01
warnings.warn("No "+str(k)+" years ago days have been found for the day " + day.strftime('%Y-%m-%d'))
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:654: UserWarning: No 1 years ago days have been found for the day 2020-01-16
warnings.warn("No "+str(k)+" years ago days have been found for the day " + day.strftime('%Y-%m-%d'))
###Markdown
*2020-01-01 is a missing day, and it has been filled.*
###Code
# replace_miss False
f = lambda day,df,prev_day,prev_df : abs(df["mean"][day]-prev_df["mean"][prev_day])<3
df_mean_temp, _, _ = tsp.add_k_years_ago_statistics(df=df_mean, df_k_years_ago=df_prev_year_mean, k=1,
days_to_select=f, replace_miss=False, y_col="mean")
df_mean_temp
###Output
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:654: UserWarning: No 1 years ago days have been found for the day 2020-01-01
warnings.warn("No "+str(k)+" years ago days have been found for the day " + day.strftime('%Y-%m-%d'))
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:654: UserWarning: No 1 years ago days have been found for the day 2020-01-16
warnings.warn("No "+str(k)+" years ago days have been found for the day " + day.strftime('%Y-%m-%d'))
###Markdown
add_current_year_statistics First example: **days_to_select is an integer.**
###Code
df_mean_temp, _, _ = tsp.add_current_year_statistics(df_mean, df_current_year=df_mean, days_to_select=11, y_col="mean")
###Output
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:871: UserWarning: No current year days have been found for the day 2020-01-01
warnings.warn("No current year days have been found for the day " + day.strftime('%Y-%m-%d'))
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:874: UserWarning: For the day 2020-01-02 only these current year days have been found: ['2020-01-01']
warnings.warn(("For the day "+day.strftime('%Y-%m-%d')+
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:874: UserWarning: For the day 2020-01-03 only these current year days have been found: ['2020-01-01', '2020-01-02']
warnings.warn(("For the day "+day.strftime('%Y-%m-%d')+
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:874: UserWarning: For the day 2020-01-04 only these current year days have been found: ['2020-01-01', '2020-01-02', '2020-01-03']
warnings.warn(("For the day "+day.strftime('%Y-%m-%d')+
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:874: UserWarning: For the day 2020-01-05 only these current year days have been found: ['2020-01-01', '2020-01-02', '2020-01-03', '2020-01-04']
warnings.warn(("For the day "+day.strftime('%Y-%m-%d')+
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:874: UserWarning: For the day 2020-01-06 only these current year days have been found: ['2020-01-01', '2020-01-02', '2020-01-03', '2020-01-04', '2020-01-05']
warnings.warn(("For the day "+day.strftime('%Y-%m-%d')+
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:874: UserWarning: For the day 2020-01-07 only these current year days have been found: ['2020-01-01', '2020-01-02', '2020-01-03', '2020-01-04', '2020-01-05', '2020-01-06']
warnings.warn(("For the day "+day.strftime('%Y-%m-%d')+
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:874: UserWarning: For the day 2020-01-08 only these current year days have been found: ['2020-01-01', '2020-01-02', '2020-01-03', '2020-01-04', '2020-01-05', '2020-01-06', '2020-01-07']
warnings.warn(("For the day "+day.strftime('%Y-%m-%d')+
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:874: UserWarning: For the day 2020-01-09 only these current year days have been found: ['2020-01-01', '2020-01-02', '2020-01-03', '2020-01-04', '2020-01-05', '2020-01-06', '2020-01-07', '2020-01-08']
warnings.warn(("For the day "+day.strftime('%Y-%m-%d')+
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:874: UserWarning: For the day 2020-01-10 only these current year days have been found: ['2020-01-01', '2020-01-02', '2020-01-03', '2020-01-04', '2020-01-05', '2020-01-06', '2020-01-07', '2020-01-08', '2020-01-09']
warnings.warn(("For the day "+day.strftime('%Y-%m-%d')+
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:874: UserWarning: For the day 2020-01-11 only these current year days have been found: ['2020-01-01', '2020-01-02', '2020-01-03', '2020-01-04', '2020-01-05', '2020-01-06', '2020-01-07', '2020-01-08', '2020-01-09', '2020-01-10']
warnings.warn(("For the day "+day.strftime('%Y-%m-%d')+
###Markdown
*The same warnings as for the previous function are given.*
###Code
df_mean_temp
###Output
_____no_output_____
###Markdown
*The names of the added columns have the structure "current_year_ColumnName"*
###Code
df_mean
###Output
_____no_output_____
###Markdown
In order to compute the value for a day, the 11 days preceding that day are selected.
###Code
# 2020-04-15
import pandas as pd
print(df_mean_temp["current_year_mean"].loc[pd.Timestamp("2020-04-15")])
df_mean["mean"].loc[pd.date_range('2020-04-04','2020-04-14')].mean()
# 2020-10-22
print(df_mean_temp["current_year_mean"].loc[pd.Timestamp("2020-10-22")])
df_mean["mean"].loc[pd.date_range('2020-10-11','2020-10-21')].mean()
# 2020-01-03: example with less than 11 days
print(df_mean_temp["current_year_mean"].loc[pd.Timestamp("2020-01-03")])
df_mean["mean"].loc[pd.date_range('2020-01-01','2020-01-02')].mean()
# 2020-01-05: example with less than 11 days
print(df_mean_temp["current_year_mean"].loc[pd.Timestamp("2020-01-05")])
df_mean["mean"].loc[pd.date_range('2020-01-01','2020-01-04')].mean()
###Output
60.938575393321
###Markdown
*With the input parameter `scale`, the user can decide which statistical aggregation is used.* **replace_miss** By default, the parameter `replace_miss` is True, i.e. the missing days are filled. The days for which no preceding day is found are deleted from the dataset.
###Code
df_mean_temp, _, _ = tsp.add_current_year_statistics(df_mean, df_current_year=df_mean, days_to_select=11, y_col="mean")
df_mean_temp
###Output
_____no_output_____
###Markdown
*No preceding day is present for the day 2020-01-01: this day has been removed.* If `replace_miss` is False, the missing values are kept as NaN.
###Code
df_mean_temp, _, _ = tsp.add_current_year_statistics(df_mean, df_current_year=df_mean, days_to_select=11, replace_miss=False,
y_col="mean")
df_mean_temp
###Output
_____no_output_____
###Markdown
**current_day** By default, the parameter `current_day` is False, i.e. each day is not itself selected: only the preceding days are selected. Otherwise, if it's True, the day itself is selected as well.
###Code
df_mean_temp,_,_ = tsp.add_current_year_statistics(df_mean, df_current_year=df_mean, days_to_select=11, current_day=True,
y_col="mean")
df_mean_temp
###Output
_____no_output_____
###Markdown
In order to compute the value for a day, the day itself and the 10 days preceding that day are selected.
###Code
# 2020-04-15
print(df_mean_temp["current_year_mean"].loc[pd.Timestamp("2020-04-15")])
df_mean["mean"].loc[pd.date_range('2020-04-05','2020-04-15')].mean()
# 2020-10-22
print(df_mean_temp["current_year_mean"].loc[pd.Timestamp("2020-10-22")])
df_mean["mean"].loc[pd.date_range('2020-10-12','2020-10-22')].mean()
# 2020-01-03: example with less than 11 days
print(df_mean_temp["current_year_mean"].loc[pd.Timestamp("2020-01-03")])
df_mean["mean"].loc[pd.date_range('2020-01-01','2020-01-03')].mean()
# 2020-01-05: example with less than 11 days
print(df_mean_temp["current_year_mean"].loc[pd.Timestamp("2020-01-05")])
df_mean["mean"].loc[pd.date_range('2020-01-01','2020-01-05')].mean()
###Output
54.38927215262471
###Markdown
It's important to notice that, despite the fact that `replace_miss` is True, the day 2020-01-01 hasn't been removed: this is because it isn't a missing day anymore (at least one day has been selected for it, namely the day itself). **days_to_select is "month" (/"season")**
###Code
df_mean_temp, _, _ = tsp.add_current_year_statistics(df_mean, df_current_year=df_mean, days_to_select="month", y_col="mean")
df_mean_temp
###Output
_____no_output_____
###Markdown
In order to compute the value for a day, the preceding days that are in the same month are selected.
###Code
# 2020-04-15
print(df_mean_temp["current_year_mean"].loc[pd.Timestamp("2020-04-15")])
df_mean["mean"].loc[pd.date_range('2020-04-01','2020-04-14')].mean()
# 2020-10-22
print(df_mean_temp["current_year_mean"].loc[pd.Timestamp("2020-10-22")])
df_mean["mean"].loc[pd.date_range('2020-10-01','2020-10-21')].mean()
###Output
16.255132480170204
###Markdown
*If `current_day` is True, the day itself is also taken* **days_to_select is a function** It's a predicate that decides which preceding days have to be selected. (*If `current_day` is True, the day itself can also be taken*).
###Code
# Predicate of selection
f = lambda day, df, current_day ,current_df: abs(df["mean"].loc[day]-current_df["mean"].loc[current_day])<3
df_mean_temp,_,_ = tsp.add_current_year_statistics(df=df_mean, df_current_year=df_mean, days_to_select=f, y_col="mean")
df_mean_temp
###Output
_____no_output_____
###Markdown
For each day, the preceding days with a similar PM10 concentration are selected (i.e. the preceding days whose PM10 concentration differs from that of the current day by less than 3).
###Code
df_mean[:10]
# 2020-01-04: 2020-01-03 and 2020-01-02 are selected
print(df_mean_temp["current_year_mean"].loc[pd.Timestamp("2020-01-04")])
(56.675791+55.216906)/2
# 2020-01-09: 2020-01-08 and 2020-01-07 are selected
print(df_mean_temp["current_year_mean"].loc[pd.Timestamp("2020-01-09")])
(40.759729+38.646087)/2
# 2020-01-05 is a missing day. The missing value is filled taking into account all the previous days.
print(df_mean_temp["current_year_mean"].loc[pd.Timestamp("2020-01-05")])
(76.974569+56.675791+55.216906+54.887035)/4
###Output
60.938575393321
###Markdown
**More columns** The same logic is applied individually to each column of `df_current_year`. With the parameter `columns_to_select`, the user can specify which columns have to be used. This is shown in the examples for the previous function, `add_k_years_ago_statistics`. **EXAMPLE WITH METEOROLOGICAL DATA** Load `meteorological_data_2020.csv`, which contains meteorological data about Italy in 2020 (obtained from [ILMETEO](https://www.ilmeteo.it/)).
###Code
import pandas as pd
df_meteo = pd.read_csv("meteorological_data_2020.csv",index_col=0)
df_meteo = df_meteo.set_index(pd.DatetimeIndex(df_meteo.index))
df_meteo.info()
###Output
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 366 entries, 2020-01-01 to 2020-12-31
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 TMEDIA °C 366 non-null float64
1 TMIN °C 366 non-null float64
2 TMAX °C 366 non-null float64
3 PUNTORUGIADA °C 366 non-null float64
4 UMIDITA % 366 non-null float64
5 VISIBILITA km 366 non-null float64
6 VENTOMEDIA km/h 366 non-null float64
7 VENTOMAX km/h 366 non-null float64
8 RAFFICA km/h 366 non-null float64
9 PRESSIONESLM mb 366 non-null float64
10 PRESSIONEMEDIA mb 366 non-null float64
11 PIOGGIA mm 366 non-null float64
dtypes: float64(12)
memory usage: 37.2 KB
###Markdown
Dataset with both the PM10 mean concentrations and the meteorological features
###Code
df_mean_met, _, _ = tsp.add_timeSeries_dataframe(df_mean, df_meteo, y_col="mean")
df_mean_met
f = lambda day,df,d,df_current_day: abs(df_current_day["TMEDIA °C"].loc[day]-df_current_day["TMEDIA °C"].loc[d])<0.5
df_mean_temp,_,_ = tsp.add_current_year_statistics(df=df_mean, df_current_year=df_mean_met,
days_to_select=f, replace_miss=False, # Miss values are not replaced
columns_to_select=["mean"], # Only the column "mean" is taken into account
y_col="mean")
###Output
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:871: UserWarning: No current year days have been found for the day 2020-01-01
warnings.warn("No current year days have been found for the day " + day.strftime('%Y-%m-%d'))
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:871: UserWarning: No current year days have been found for the day 2020-01-04
warnings.warn("No current year days have been found for the day " + day.strftime('%Y-%m-%d'))
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:871: UserWarning: No current year days have been found for the day 2020-01-07
warnings.warn("No current year days have been found for the day " + day.strftime('%Y-%m-%d'))
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:871: UserWarning: No current year days have been found for the day 2020-01-08
warnings.warn("No current year days have been found for the day " + day.strftime('%Y-%m-%d'))
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:871: UserWarning: No current year days have been found for the day 2020-01-28
warnings.warn("No current year days have been found for the day " + day.strftime('%Y-%m-%d'))
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:871: UserWarning: No current year days have been found for the day 2020-02-03
warnings.warn("No current year days have been found for the day " + day.strftime('%Y-%m-%d'))
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:871: UserWarning: No current year days have been found for the day 2020-02-04
warnings.warn("No current year days have been found for the day " + day.strftime('%Y-%m-%d'))
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:871: UserWarning: No current year days have been found for the day 2020-02-11
warnings.warn("No current year days have been found for the day " + day.strftime('%Y-%m-%d'))
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:871: UserWarning: No current year days have been found for the day 2020-02-24
warnings.warn("No current year days have been found for the day " + day.strftime('%Y-%m-%d'))
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:871: UserWarning: No current year days have been found for the day 2020-03-13
warnings.warn("No current year days have been found for the day " + day.strftime('%Y-%m-%d'))
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:871: UserWarning: No current year days have been found for the day 2020-04-06
warnings.warn("No current year days have been found for the day " + day.strftime('%Y-%m-%d'))
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:871: UserWarning: No current year days have been found for the day 2020-04-10
warnings.warn("No current year days have been found for the day " + day.strftime('%Y-%m-%d'))
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:871: UserWarning: No current year days have been found for the day 2020-05-01
warnings.warn("No current year days have been found for the day " + day.strftime('%Y-%m-%d'))
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:871: UserWarning: No current year days have been found for the day 2020-05-02
warnings.warn("No current year days have been found for the day " + day.strftime('%Y-%m-%d'))
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:871: UserWarning: No current year days have been found for the day 2020-05-14
warnings.warn("No current year days have been found for the day " + day.strftime('%Y-%m-%d'))
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:871: UserWarning: No current year days have been found for the day 2020-05-18
warnings.warn("No current year days have been found for the day " + day.strftime('%Y-%m-%d'))
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:871: UserWarning: No current year days have been found for the day 2020-05-21
warnings.warn("No current year days have been found for the day " + day.strftime('%Y-%m-%d'))
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:871: UserWarning: No current year days have been found for the day 2020-06-21
warnings.warn("No current year days have been found for the day " + day.strftime('%Y-%m-%d'))
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:871: UserWarning: No current year days have been found for the day 2020-06-22
warnings.warn("No current year days have been found for the day " + day.strftime('%Y-%m-%d'))
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:871: UserWarning: No current year days have been found for the day 2020-06-23
warnings.warn("No current year days have been found for the day " + day.strftime('%Y-%m-%d'))
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:871: UserWarning: No current year days have been found for the day 2020-07-28
warnings.warn("No current year days have been found for the day " + day.strftime('%Y-%m-%d'))
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:871: UserWarning: No current year days have been found for the day 2020-07-29
warnings.warn("No current year days have been found for the day " + day.strftime('%Y-%m-%d'))
C:\Users\Enrico\anaconda3\lib\site-packages\timeSeries_processing.py:871: UserWarning: No current year days have been found for the day 2020-07-30
warnings.warn("No current year days have been found for the day " + day.strftime('%Y-%m-%d'))
###Markdown
For each day, the preceding days which have similar mean temperatures are selected (i.e. the difference between the temperatures is less than 0.5).
###Code
df_mean_temp
df_mean_met[["TMEDIA °C"]][:10]
df_mean_temp[:10]
# 2020-01-03: 2020-01-02 and 2020-01-01 are selected
(56.675791+76.974569)/2
# 2020-01-05: only 2020-01-04 is selected
54.887035
# 2020-01-09: only 2020-01-06 is selected
26.099399
###Output
_____no_output_____
###Markdown
add_upTo_k_years_ago_statistics Get the EEA datasets about the PM10 mean concentrations in Italy, with respect to 2018-2019-2020 (i.e. up to two years before 2020).
###Code
# Download the datasets
# IT'S NECESSARY ONLY IF THEY HAVEN'T BEEN DOWNLOADED YET
dest_path = "C:\\Datasets"
countries_cities_dict = {"IT": "all"}
pollutants = ["PM10"]
years = [2018, 2019, 2020]
eea.download_datasets(dest_path, countries_cities_dict, pollutants, years)
# Load the datasets
source_path = "C:\\Datasets\\EEA"
countries_cities_dict = {"IT":"all"}
pollutants = ["PM10"]
years = [2018, 2019, 2020]
df_full = eea.load_datasets(source_path,countries_cities_dict,pollutants,years)
df_full.info()
df_full_mean,df_full_min,df_full_max = eea.preprocessing(df_full)
df_mean_temp,_,_ = tsp.add_upTo_k_years_ago_statistics(df_mean, df_upTo_k_years_ago=df_full_mean, k=2, days_to_select=11,
y_col="mean")
df_mean_temp
###Output
_____no_output_____
###Markdown
*The names of the added columns have the structure "upTo_k_years_ago_ColumnName".* Basically, the function `add_k_years_ago_statistics` is applied for each of the specified previous years (i.e. for 2019 and for 2018) and then the mean of the computed values is calculated: in this way, for each day of 2020 a unique value is obtained, which sums up the two previous years (a quick check of this averaging is sketched in the next cell). The semantics of the parameters `days_to_select`, `stat`, `columns_to_select` and `replace_miss` are the same as for the function `add_k_years_ago_statistics`. **current_year** By default, the parameter `current_year` is False, i.e. the current year (2020 in our case) is not considered. Otherwise, if `current_year` is True, the current year is considered as well: the function `add_k_years_ago_statistics` is applied twice and, in addition, the function `add_current_year_statistics` is applied once. The final values in the DataFrame are computed by taking the mean of the values produced by these three calls.
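###Markdown
A quick check of the averaging just described, in the same style as the earlier verifications (the column name `upTo_2_years_ago_mean` and the 11-day centered windows are assumptions based on the stated naming scheme and on `days_to_select=11`):
###Code
# average of the 2018 and 2019 window means should reproduce the added column for 2020-04-15
mean_2018 = df_full_mean["mean"].loc[pd.date_range('2018-04-10','2018-04-20')].mean()
mean_2019 = df_full_mean["mean"].loc[pd.date_range('2019-04-10','2019-04-20')].mean()
print((mean_2018 + mean_2019)/2)
print(df_mean_temp["upTo_2_years_ago_mean"].loc[pd.Timestamp("2020-04-15")])  # assumed column name
###Output
_____no_output_____
###Markdown
The next cell shows the `current_year` option in action.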
###Code
df_mean_temp,_,_ = tsp.add_upTo_k_years_ago_statistics(df_mean, df_upTo_k_years_ago=df_full_mean, k=2, current_year=True,
days_to_select=11, current_day=False, y_col="mean")
df_mean_temp
###Output
_____no_output_____ |
Indian Liver Patient Dataset Analysis.ipynb | ###Markdown
Indian Liver Patient Records  Description The number of patients with liver disease has been continuously increasing because of excessive consumption of alcohol, inhalation of harmful gases, and intake of contaminated food, pickles and drugs. This dataset was used to evaluate prediction algorithms in an effort to reduce the burden on doctors. Content This data set contains 416 liver patient records and 167 non-liver-patient records collected from the North East of Andhra Pradesh, India. The "Dataset" column is a class label used to divide groups into liver patient (liver disease) or not (no disease). This data set contains 441 male patient records and 142 female patient records. Acknowledgements This dataset was downloaded from the UCI ML Repository: Lichman, M. (2013). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science. Problem Statement Use these patient records to determine which patients have liver disease and which ones do not. Data Description Any patient whose age exceeded 89 is listed as being of age "90". Columns: Age of the patient Gender of the patient Total Bilirubin Bilirubin is an orange-yellow pigment that occurs normally when part of your red blood cells break down. A bilirubin test measures the amount of bilirubin in your blood. It’s used to help find the cause of health conditions like jaundice, anemia, and liver disease. Direct Bilirubin Bilirubin attached by the liver to glucuronic acid, a glucose-derived acid, is called direct, or conjugated, bilirubin. Bilirubin not attached to glucuronic acid is called indirect. Alkaline Phosphotase Alkaline phosphatase (ALP) is an enzyme in a person's blood that helps break down proteins. Using an ALP test, it is possible to measure how much of this enzyme is circulating in a person’s blood. Alamine Aminotransferase Alanine aminotransferase (ALT) is an enzyme found primarily in the liver and kidney. ALT is increased with liver damage and is used to screen for and/or monitor liver disease. Aspartate Aminotransferase AST (aspartate aminotransferase) is an enzyme that is found mostly in the liver, but also in muscles. When your liver is damaged, it releases AST into your bloodstream. An AST blood test measures the amount of AST in your blood. The test can help your health care provider diagnose liver damage or disease. Total Protiens Albumin and globulin are two types of protein in your body. The total protein test measures the total amount of albumin and globulin in your body. Albumin Albumin and Globulin Ratio Dataset: field used to split the data into two sets (patient with liver disease, or no disease) Business objectives and constraints 1. The cost of a mis-classification can be very high. 2. There are no strict latency concerns. Importing Libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from sklearn.preprocessing import LabelEncoder
import warnings
warnings.filterwarnings('ignore')
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix
from sklearn import linear_model
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
###Output
_____no_output_____
###Markdown
Mounting the GDrive
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
_____no_output_____
###Markdown
Reading the Data from the CSV file
###Code
liver_df = pd.read_csv("/content/drive/My Drive/Machine Learning Projects/Classification/Indian Liver Patient Records/indian_liver_patient.csv")
liver_df.head()
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis (EDA)
###Code
# Total number of columns in the dataset
liver_df.columns
# Information about the dataset
liver_df.info()
# To know more about the dataset
liver_df.describe()
# Checking whether there are any null values
liver_df.isnull()
# Counting the null values in each column
liver_df.isnull().sum()
###Output
_____no_output_____
###Markdown
Data Visualization
###Code
# Plotting the Number of patients with liver disease vs Number of patients with no liver disease
sns.countplot(data=liver_df, x = 'Dataset', label='Count')
LD, NLD = liver_df['Dataset'].value_counts()
print('Number of patients diagnosed with liver disease: ',LD)
print('Number of patients not diagnosed with liver disease: ',NLD)
# Plotting the Number of Male and Female patients
sns.countplot(data=liver_df, x = 'Gender',color="salmon", facecolor=(0, 0, 0, 0),linewidth=5, label='Count')
M, F = liver_df['Gender'].value_counts()
print('Number of patients that are male: ',M)
print('Number of patients that are female: ',F)
# Plotting patient Age vs Gender
sns.catplot(x="Age", y="Gender", hue="Dataset", data=liver_df)
liver_df[['Gender', 'Dataset','Age']].groupby(['Dataset','Gender'], as_index=False).mean().sort_values(by='Dataset', ascending=False)
# Plotting Age vs Gender
g = sns.FacetGrid(liver_df, col="Dataset", row="Gender", margin_titles=True)
g.map(plt.hist, "Age", color="red")
plt.subplots_adjust(top=0.9)
g.fig.suptitle('Disease by Gender and Age')
# Plotting Gender(Male/Female) along with Total_Bilirubin and Direct_Bilirubin
g = sns.FacetGrid(liver_df, col="Gender", row="Dataset", margin_titles=True)
g.map(plt.scatter,"Direct_Bilirubin", "Total_Bilirubin", edgecolor="w")
plt.subplots_adjust(top=0.9)
# Plotting Total_Bilirubin vs Direct_Bilirubin
sns.jointplot("Total_Bilirubin", "Direct_Bilirubin", data=liver_df, kind="reg")
# Plotting Gender(Male/Female) along with Aspartate Aminotransferase, Alamine Aminotransferase
g = sns.FacetGrid(liver_df, col="Gender", row="Dataset", margin_titles=True)
g.map(plt.scatter,"Aspartate_Aminotransferase", "Alamine_Aminotransferase", edgecolor="w")
plt.subplots_adjust(top=0.9)
# Plotting Aspartate_Aminotransferase vs Alamine_Aminotransferase
sns.jointplot("Aspartate_Aminotransferase", "Alamine_Aminotransferase", data=liver_df, kind="reg")
# Plotting Gender(Male/Female) along with Alkaline_Phosphotase and Alamine_Aminotransferase
g = sns.FacetGrid(liver_df, col="Gender", row="Dataset", margin_titles=True)
g.map(plt.scatter,"Alkaline_Phosphotase", "Alamine_Aminotransferase", edgecolor="w")
plt.subplots_adjust(top=0.9)
# Plotting Alkaline_Phosphotase vs Alamine_Aminotransferase
sns.jointplot("Alkaline_Phosphotase", "Alamine_Aminotransferase", data=liver_df, kind="reg")
# Plotting Gender(Male/Female) along with Total_Protiens and Albumin
g = sns.FacetGrid(liver_df, col="Gender", row="Dataset", margin_titles=True)
g.map(plt.scatter,"Total_Protiens", "Albumin", edgecolor="w")
plt.subplots_adjust(top=0.9)
# Plotting Total_Protiens vs Albumin
sns.jointplot("Total_Protiens", "Albumin", data=liver_df, kind="reg")
# Plotting Gender(Male/Female) along with Albumin and Albumin_and_Globulin_Ratio
g = sns.FacetGrid(liver_df, col="Gender", row="Dataset", margin_titles=True)
g.map(plt.scatter,"Albumin", "Albumin_and_Globulin_Ratio", edgecolor="w")
plt.subplots_adjust(top=0.9)
# Plotting Albumin vs Albumin_and_Globulin_Ratio
sns.jointplot("Albumin_and_Globulin_Ratio", "Albumin", data=liver_df, kind="reg")
# Plotting Gender(Male/Female) along with Albumin and Globulin Ratio and Total Protiens
g = sns.FacetGrid(liver_df, col="Gender", row="Dataset", margin_titles=True)
g.map(plt.scatter,"Albumin_and_Globulin_Ratio", "Total_Protiens", edgecolor="w")
plt.subplots_adjust(top=0.9)
###Output
_____no_output_____
###Markdown
Feature Engineering
###Code
liver_df.head(3)
pd.get_dummies(liver_df['Gender'], prefix = 'Gender').head()
# Concatenation: appending the one-hot encoded Gender columns to the dataframe
liver_df = pd.concat([liver_df,pd.get_dummies(liver_df['Gender'], prefix = 'Gender')], axis=1)
liver_df.head()
liver_df.describe()
liver_df[liver_df['Albumin_and_Globulin_Ratio'].isnull()]
liver_df["Albumin_and_Globulin_Ratio"] = liver_df.Albumin_and_Globulin_Ratio.fillna(liver_df['Albumin_and_Globulin_Ratio'].mean())
X = liver_df.drop(['Gender','Dataset'], axis=1)
X.head(3)
# In the 'Dataset' column, 1 means the patient has liver disease; 2 means the patient does not have liver disease
y = liver_df['Dataset']
###Output
_____no_output_____
###Markdown
Correlation between all the features
###Code
liver_corr = X.corr()
liver_corr
# Plotting Heatmaps for Correlations between all the features
plt.figure(figsize=(12,12))
sns.heatmap(liver_corr, cbar = True, square = True, annot=True, fmt= '.2f', annot_kws={'size': 12}, cmap= 'coolwarm')
plt.title('Correlation between all the features')
###Output
_____no_output_____
###Markdown
Splitting the data into Train and Test
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
print (X_train.shape)
print (y_train.shape)
print (X_test.shape)
print (y_test.shape)
###Output
(390, 11)
(390,)
(193, 11)
(193,)
###Markdown
Model Building 1. Logistic Regression
###Code
logreg = LogisticRegression()
# Train the model using the training sets and check score
logreg.fit(X_train, y_train)
# Predict Output
log_predicted= logreg.predict(X_test)
logreg_score = round(logreg.score(X_train, y_train) * 100, 2)
logreg_score_test = round(logreg.score(X_test, y_test) * 100, 2)
# Equation coefficient and Intercept
print('Logistic Regression Training Score: \n', logreg_score)
print('Logistic Regression Test Score: \n', logreg_score_test)
print('Accuracy: \n', accuracy_score(y_test,log_predicted))
print('Confusion Matrix: \n', confusion_matrix(y_test,log_predicted))
print('Classification Report: \n', classification_report(y_test,log_predicted))
###Output
Logistic Regression Training Score:
70.77
Logistic Regression Test Score:
73.06
Accuracy:
0.7305699481865285
Confusion Matrix:
[[131 10]
[ 42 10]]
Classification Report:
precision recall f1-score support
1 0.76 0.93 0.83 141
2 0.50 0.19 0.28 52
accuracy 0.73 193
macro avg 0.63 0.56 0.56 193
weighted avg 0.69 0.73 0.68 193
###Markdown
Confusion Matrix
###Code
sns.heatmap(confusion_matrix(y_test,log_predicted),annot=True,fmt="d")
###Output
_____no_output_____
###Markdown
2. Gaussian Naive Bayes
###Code
gaussian = GaussianNB()
gaussian.fit(X_train, y_train)
# Predict Output
gauss_predicted = gaussian.predict(X_test)
gauss_score = round(gaussian.score(X_train, y_train) * 100, 2)
gauss_score_test = round(gaussian.score(X_test, y_test) * 100, 2)
print('Gaussian Score: \n', gauss_score)
print('Gaussian Test Score: \n', gauss_score_test)
print('Accuracy: \n', accuracy_score(y_test, gauss_predicted))
print(confusion_matrix(y_test,gauss_predicted))
print(classification_report(y_test,gauss_predicted))
###Output
Gaussian Score:
53.59
Gaussian Test Score:
57.51
Accuracy:
0.5751295336787565
[[60 81]
[ 1 51]]
precision recall f1-score support
1 0.98 0.43 0.59 141
2 0.39 0.98 0.55 52
accuracy 0.58 193
macro avg 0.68 0.70 0.57 193
weighted avg 0.82 0.58 0.58 193
###Markdown
Confusion Matrix
###Code
sns.heatmap(confusion_matrix(y_test,gauss_predicted),annot=True,fmt="d")
###Output
_____no_output_____
###Markdown
3. Random Forest
###Code
random_forest = RandomForestClassifier(n_estimators=100)
random_forest.fit(X_train, y_train)
# Predict Output
rf_predicted = random_forest.predict(X_test)
random_forest_score = round(random_forest.score(X_train, y_train) * 100, 2)
random_forest_score_test = round(random_forest.score(X_test, y_test) * 100, 2)
print('Random Forest Score: \n', random_forest_score)
print('Random Forest Test Score: \n', random_forest_score_test)
print('Accuracy: \n', accuracy_score(y_test,rf_predicted))
print(confusion_matrix(y_test,rf_predicted))
print(classification_report(y_test,rf_predicted))
###Output
Random Forest Score:
100.0
Random Forest Test Score:
73.06
Accuracy:
0.7305699481865285
[[124 17]
[ 35 17]]
precision recall f1-score support
1 0.78 0.88 0.83 141
2 0.50 0.33 0.40 52
accuracy 0.73 193
macro avg 0.64 0.60 0.61 193
weighted avg 0.70 0.73 0.71 193
###Markdown
Confusion Matrix
###Code
sns.heatmap(confusion_matrix(y_test,rf_predicted),annot=True,fmt="d")
###Output
_____no_output_____
###Markdown
Model Evaluation
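In addition to the single train/test split compared below, a cross-validated comparison is worth a look here, since the Random Forest fits the training set perfectly (training score 100.0 above). The following cell is an optional sketch, not part of the original analysis; it assumes `X` and `y` as defined in the feature-engineering section and uses scikit-learn's `cross_val_score` with plain accuracy.
###Code
from sklearn.model_selection import cross_val_score
# 5-fold cross-validated accuracy for each of the three models
for name, model in [('Logistic Regression', LogisticRegression()),
                    ('Gaussian Naive Bayes', GaussianNB()),
                    ('Random Forest', RandomForestClassifier(n_estimators=100))]:
    cv_scores = cross_val_score(model, X, y, cv=5, scoring='accuracy')
    print(name, 'CV accuracy: %.3f +/- %.3f' % (cv_scores.mean(), cv_scores.std()))
###Output
_____no_output_____
###Markdown
The original comparison of training and test scores: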
###Code
# Comparing all the models
models = pd.DataFrame({
'Model': [ 'Logistic Regression', 'Gaussian Naive Bayes','Random Forest'],
'Score': [ logreg_score, gauss_score, random_forest_score],
'Test Score': [ logreg_score_test, gauss_score_test, random_forest_score_test]})
models.sort_values(by='Test Score', ascending=False)
###Output
_____no_output_____ |
dataset/process_yt_bb.ipynb | ###Markdown
Visualize one image
###Code
# normalized bounding-box coordinates for this frame
xmin, xmax, ymin, ymax = 0.482, 0.540, 0.371667, 0.616667
# frame of video 'AAB6lO-XiKE' stored under key 238000
shape = dataset_dic['AAB6lO-XiKE'][238000].shape
plt.imshow(dataset_dic['AAB6lO-XiKE'][238000])
# scale the normalized coordinates to pixel values and draw the box edges
plt.plot([shape[1]*xmin,shape[1]*xmin,shape[1]*xmax, shape[1]*xmax],[shape[0]*ymin, shape[0]*ymax, shape[0]*ymax, shape[0]*ymin])
###Output
_____no_output_____ |
ipynb/bac_genome/fullCyc/trimDataset/.ipynb_checkpoints/rep4_DBL-comm_bw_ALL5-checkpoint.ipynb | ###Markdown
Goal* Trying varying levels of bandwidth and DBL scaling with pre-fractionation abundances ('DBL-comm')* Varying parameters * bandwidth (bw) * 0.006, 0.02, 0.2, 0.4, 0.6, 0.8 * diffusive boundary layer (DBL) scaling (DBL scaling by abundance) * 0, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8* **NOTE:** using default bandwidth for DBL & isotope incorporation steps* **NOTE:** using less replicate Monte Carlo replicates to save on memory* **Note:** using 4 kb size selection (simulated in validation notebook) Init
###Code
import os
import glob
import re
import nestly
%load_ext rpy2.ipython
%load_ext pushnote
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(gridExtra)
library(phyloseq)
###Output
_____no_output_____
###Markdown
BD min/max
###Code
## min G+C cutoff
min_GC = 13.5
## max G+C cutoff
max_GC = 80
## max G+C shift
max_13C_shift_in_BD = 0.036
min_BD = min_GC/100.0 * 0.098 + 1.66
max_BD = max_GC/100.0 * 0.098 + 1.66
max_BD = max_BD + max_13C_shift_in_BD
print 'Min BD: {}'.format(min_BD)
print 'Max BD: {}'.format(max_BD)
###Output
Min BD: 1.67323
Max BD: 1.7744
###Markdown
Nestly* assuming fragments already simulated
###Code
workDir = '/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/'
buildDir = os.path.join(workDir, 'rep4_DBL-comm_bw_ALL5')
R_dir = '/home/nick/notebook/SIPSim/lib/R/'
fragFile = '/home/nick/notebook/SIPSim/dev/bac_genome1147/validation/ampFrags_kde.pkl'
commFile = '/home/nick/notebook/SIPSim/dev/fullCyc/fullCyc_12C-Con_trm_comm.txt'
# emperical data for validation
emp_shan_file = '/home/nick/notebook/SIPSim/dev/fullCyc_trim/SIP-core_unk_shan.txt'
emp_BDspan_file = '/home/nick/notebook/SIPSim/dev/fullCyc_trim/SIP-core_unk_trm_BD-span.txt'
emp_corr_file = '/home/nick/notebook/SIPSim/dev/fullCyc_trim/SIP-core_unk_trm_corr.txt'
nreps = 4
# building tree structure
nest = nestly.Nest()
# varying params
nest.add('DBL_scaling', [0, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
nest.add('bandwidth', [0.002, 0.02, 0.2, 0.4, 0.6, 0.8])
nest.add('rep', [x + 1 for x in xrange(nreps)])
## set params
nest.add('abs', ['1e9'], create_dir=False)
nest.add('percIncorp', [0], create_dir=False)
nest.add('percTaxa', [0], create_dir=False)
nest.add('np', [8], create_dir=False)
nest.add('subsample_dist', ['lognormal'], create_dir=False)
nest.add('subsample_mean', [9.432], create_dir=False)
nest.add('subsample_scale', [0.5], create_dir=False)
nest.add('subsample_min', [10000], create_dir=False)
nest.add('subsample_max', [30000], create_dir=False)
nest.add('min_BD', [min_BD], create_dir=False)
nest.add('max_BD', [max_BD], create_dir=False)
### input/output files
nest.add('buildDir', [buildDir], create_dir=False)
nest.add('R_dir', [R_dir], create_dir=False)
nest.add('fragFile', [fragFile], create_dir=False)
nest.add('commFile', [commFile], create_dir=False)
# building directory tree
nest.build(buildDir)
# bash file to run
bashFile = os.path.join(buildDir, 'SIPSimRun.sh')
%%writefile $bashFile
#!/bin/bash
export PATH={R_dir}:$PATH
echo '#-- SIPSim pipeline --#'
echo '# shuffling taxa in comm file'
comm_shuffle_taxa.r {commFile} > comm.txt
echo '# simulating gradient fractions'
SIPSim gradient_fractions \
--BD_min {min_BD} \
--BD_max {max_BD} \
comm.txt \
> fracs.txt
echo '# adding diffusion'
SIPSim diffusion \
{fragFile} \
-n 100000 \
--bw {bandwidth} \
--np {np} \
> ampFrags_KDE_dif.pkl
echo '# adding DBL contamination; abundance-weighted smearing'
SIPSim DBL \
ampFrags_KDE_dif.pkl \
-n 100000 \
--comm comm.txt \
--commx {DBL_scaling} \
--np {np} \
> ampFrags_KDE_dif_DBL.pkl
echo '# making incorp file'
SIPSim incorpConfigExample \
--percTaxa {percTaxa} \
--percIncorpUnif {percIncorp} \
> {percTaxa}_{percIncorp}.config
echo '# adding isotope incorporation to BD distribution'
SIPSim isotope_incorp \
ampFrags_KDE_dif_DBL.pkl \
{percTaxa}_{percIncorp}.config \
-n 100000 \
--comm comm.txt \
--np {np} \
> ampFrags_KDE_dif_DBL_inc.pkl
echo '# simulating an OTU table'
SIPSim OTU_table \
ampFrags_KDE_dif_DBL_inc.pkl \
comm.txt \
fracs.txt \
--abs {abs} \
--np {np} \
> OTU_abs{abs}.txt
#-- w/ PCR simulation --#
echo '# simulating PCR'
SIPSim OTU_PCR \
OTU_abs{abs}.txt \
> OTU_abs{abs}_PCR.txt
echo '# subsampling from the OTU table (simulating sequencing of the DNA pool)'
SIPSim OTU_subsample \
--dist {subsample_dist} \
--dist_params mean:{subsample_mean},sigma:{subsample_scale} \
--min_size {subsample_min} \
--max_size {subsample_max} \
OTU_abs{abs}_PCR.txt \
> OTU_abs{abs}_PCR_sub.txt
echo '# making a wide-formatted table'
SIPSim OTU_wideLong -w \
OTU_abs{abs}_PCR_sub.txt \
> OTU_abs{abs}_PCR_sub_w.txt
echo '# making metadata (phyloseq: sample_data)'
SIPSim OTU_sampleData \
OTU_abs{abs}_PCR_sub.txt \
> OTU_abs{abs}_PCR_sub_meta.txt
#-- w/out PCR simulation --#
echo '# subsampling from the OTU table (simulating sequencing of the DNA pool)'
SIPSim OTU_subsample \
--dist {subsample_dist} \
--dist_params mean:{subsample_mean},sigma:{subsample_scale} \
--min_size {subsample_min} \
--max_size {subsample_max} \
OTU_abs{abs}.txt \
> OTU_abs{abs}_sub.txt
echo '# making a wide-formatted table'
SIPSim OTU_wideLong -w \
OTU_abs{abs}_sub.txt \
> OTU_abs{abs}_sub_w.txt
echo '# making metadata (phyloseq: sample_data)'
SIPSim OTU_sampleData \
OTU_abs{abs}_sub.txt \
> OTU_abs{abs}_sub_meta.txt
#-- making summary tables --#
# PCR
shannon_calc.r OTU_abs{abs}_PCR_sub.txt > OTU_abs{abs}_PCR_sub_shan.txt
BD_span_calc.r OTU_abs{abs}_PCR_sub.txt comm.txt > OTU_abs{abs}_PCR_sub_BD-span.txt
correlogram_make.r OTU_abs{abs}_PCR_sub.txt > OTU_abs{abs}_PCR_sub_corr.txt
# no PCR
shannon_calc.r OTU_abs{abs}_sub.txt > OTU_abs{abs}_sub_shan.txt
BD_span_calc.r OTU_abs{abs}_sub.txt comm.txt > OTU_abs{abs}_sub_BD-span.txt
correlogram_make.r OTU_abs{abs}_sub.txt > OTU_abs{abs}_sub_corr.txt
#-- removing large intermediate files --#
rm -f ampFrags_KDE_dif.pkl
rm -f ampFrags_KDE_dif_DBL.pkl
rm -f ampFrags_KDE_dif_DBL_inc.pkl
!chmod 777 $bashFile
!cd $workDir; \
nestrun --template-file $bashFile -d rep4_DBL-comm_bw_ALL5 --log-file log.txt -j 2
%pushnote rep4_DBL-comm_bw_ALL5 complete
###Output
_____no_output_____
###Markdown
Comparing to emperical data* correlation/regression analyses of metrics on community composition
###Code
%%R
# function for loading dataset files
load.data.files = function(sim.files, emp.file){
# loading
## simulations
df = list()
for(x in sim.files){
# simulation
tmp = read.delim(x, sep='\t')
xx = strsplit(x, '/')[[1]]
tmp$DBL_scale = xx[10] %>% as.numeric
tmp$bw = xx[11] %>% as.numeric
tmp$SIM_rep = xx[12] %>% as.numeric
tmp$dataset = 'Simulation'
df[[x]] = tmp
# emperical (matched for each simulation)
if(xx[12] %>% as.numeric == 1){
tmp = read.delim(emp.file, sep='\t')
tmp$DBL_scale = xx[10] %>% as.numeric
tmp$bw = xx[11] %>% as.numeric
tmp$SIM_rep = 1
tmp$dataset = 'Emperical'
xy = paste0(x, '_EMP')
df[[xy]] = tmp
}
}
df = do.call(rbind, df) %>% as.data.frame
rownames(df) = 1:nrow(df)
# return
return(df)
}
###Output
_____no_output_____
###Markdown
Shannon index
###Code
sim_shan_files = !find $buildDir -name "OTU_abs1e9_PCR_sub_shan.txt"
print len(sim_shan_files)
print emp_shan_file
# checking for empty files
for x in sim_shan_files:
ret = !ls -thlc $x
if ret[0].split(' ')[4] == '0':
print ret
%%R -i sim_shan_files -i emp_shan_file
df.shan = load.data.files(sim_shan_files, emp_shan_file)
df.shan %>% tail(n=3)
%%R -w 800 -h 600
# summarizing
df.shan.s = df.shan %>%
group_by(dataset, bw, DBL_scale, BD_bin = ntile(Buoyant_density, 24)) %>%
summarize(mean_shannon = mean(shannon),
sd_shannon = sd(shannon),
mean_BD = mean(Buoyant_density))
ggplot(df.shan.s, aes(mean_BD, mean_shannon, color=dataset,
ymin=mean_shannon-sd_shannon, ymax=mean_shannon+sd_shannon)) +
geom_pointrange() +
facet_grid(DBL_scale ~ bw) +
labs(x='Buoyant density (binned; 24 bins)', y='Shannon index') +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_text(angle=45, hjust=1)
)
%%R -w 650 -h 600
# pairwise correlations for each dataset
df.shan.bin = df.shan %>%
group_by(BD_bin = ntile(Buoyant_density, 24))
calc.pearson = function(x){
cor(x[,'shannon.x'], x['shannon.y'], method='pearson')[1,1]
}
df.shan.corr = inner_join(df.shan.bin, df.shan.bin, c('BD_bin' = 'BD_bin',
'bw' = 'bw',
'DBL_scale' = 'DBL_scale')) %>%
group_by(bw, DBL_scale, dataset.x, dataset.y) %>%
nest() %>%
mutate(model = purrr::map(data, calc.pearson)) %>%
unnest(pearson = model %>% purrr::map(function(x) x)) %>%
ungroup() %>%
select(-data, -model) %>%
mutate(pearson_txt = round(pearson, 2))
# plotting
ggplot(df.shan.corr, aes(dataset.x, dataset.y, fill=pearson)) +
geom_tile() +
geom_text(aes(label=pearson_txt), color='white', size=5) +
scale_fill_gradient(low='black', high='red') +
labs(title='Shannon index') +
facet_grid(DBL_scale ~ bw) +
theme(
text = element_text(size=16),
axis.text.x = element_text(angle=45, hjust=1)
)
%%R -w 500 -h 250
# getting empirical-empirical corr
emp.val = df.shan.corr %>%
filter((dataset.x == 'Emperical' &
dataset.y == 'Emperical')) %>%
group_by() %>%
summarize(max_value = max(pearson)) %>%
ungroup() %>%
select(max_value) %>% as.matrix %>% as.vector
emp.val = emp.val[1]
# filtering
df.shan.corr.f = df.shan.corr %>%
filter((dataset.x == 'Simulation' &
dataset.y == 'Emperical')) %>%
mutate(DBL_scale = DBL_scale %>% as.character,
bw = bw %>% as.character,
gt_emp = ifelse(round(pearson,2) >= round(emp.val, 2), 'bold.italic', 'plain')) %>%
complete(DBL_scale, bw)
df.shan.corr.f %>% head(n=3)
# plotting
ggplot(df.shan.corr.f, aes(DBL_scale,bw, fill=pearson)) +
geom_tile() +
geom_text(aes(label=pearson_txt,fontface=gt_emp), color='white', size=5.5) +
scale_color_manual(values=c('white', 'black')) +
scale_fill_gradient('Pearson', low='black', high='red') +
labs(title='Shannon index', x='DBL scaling', y='KDE Bandwidth') +
theme(
text = element_text(size=15)
)
###Output
_____no_output_____
###Markdown
BD spans
###Code
sim_BDspan_files = !find $buildDir -name "OTU_abs1e9_PCR_sub_BD-span.txt"
print len(sim_BDspan_files)
print emp_BDspan_file
%%R -i sim_BDspan_files -i emp_BDspan_file
df.BDspan = load.data.files(sim_BDspan_files, emp_BDspan_file)
df.BDspan %>% head
%%R -w 700 -h 600
# plotting
ggplot(df.BDspan, aes(mean_preFrac_abund, BD_range_perc, fill=dataset)) +
geom_hex(alpha=0.5) +
scale_x_log10() +
facet_grid(DBL_scale ~ bw) +
labs(x='Pre-fractionation abundance', y='BD span') +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_text(angle=45, hjust=1)
)
%%R -i sim_BDspan_files -i emp_BDspan_file
# binning by pre-fractionation abundances
n.tile = 20
df.BDspan = df.BDspan %>%
group_by(dataset, library, DBL_scale, bw, preFrac_abund_bin = ntile(mean_preFrac_abund, n.tile)) %>%
summarize(mean_preFrac_abund = mean(mean_preFrac_abund),
var_BD_range = var(BD_range),
sd_BD_range = sd(BD_range))
df.BDspan %>% tail(n=3)
%%R -w 675 -h 600
calc.spearman = function(x){
cor(x[,'var_BD_range.x'], x['var_BD_range.y'], method='spearman')[1,1]
}
df.BDspan.corr = inner_join(df.BDspan, df.BDspan, c('preFrac_abund_bin' = 'preFrac_abund_bin',
'DBL_scale' = 'DBL_scale',
'bw' = 'bw')) %>%
group_by(DBL_scale, bw, dataset.x, dataset.y) %>%
nest() %>%
mutate(model = purrr::map(data, calc.spearman)) %>%
unnest(spearman = model %>% purrr::map(function(x) x)) %>%
ungroup() %>%
select(-data, -model) %>%
mutate(spearman_txt = round(spearman, 2))
# plotting
ggplot(df.BDspan.corr, aes(dataset.x, dataset.y, fill=spearman)) +
geom_tile() +
geom_text(aes(label=spearman_txt), color='white', size=5) +
scale_fill_gradient(low='black', high='red') +
labs(title='BD span') +
facet_grid(DBL_scale ~ bw) +
theme(
text = element_text(size=16),
axis.text.x = element_text(angle=45, hjust=1)
)
%%R -w 500 -h 250
# getting empirical-empirical corr
emp.val = df.BDspan.corr %>%
filter((dataset.x == 'Emperical' &
dataset.y == 'Emperical')) %>%
group_by() %>%
summarize(max_value = max(spearman, na.rm=TRUE)) %>%
ungroup() %>%
select(max_value) %>% as.matrix %>% as.vector
emp.val = emp.val[1]
# filtering
df.BDspan.corr.f = df.BDspan.corr %>%
filter((dataset.x == 'Simulation' &
dataset.y == 'Emperical')) %>%
mutate(DBL_scale = DBL_scale %>% as.character,
bw = bw %>% as.character,
gt_emp = ifelse(round(spearman,2) >= round(emp.val,2), 'bold.italic', 'plain')) %>%
complete(DBL_scale, bw)
# plotting
ggplot(df.BDspan.corr.f, aes(DBL_scale, bw, fill=spearman)) +
geom_tile() +
geom_text(aes(label=spearman_txt, fontface=gt_emp), color='white', size=5.5) +
scale_color_manual(values=c('white', 'black')) +
scale_fill_gradient('Spearman', low='black', high='red') +
labs(title='BD span', x='DBL scaling', y='KDE Bandwidth') +
theme(
text = element_text(size=16)
)
###Output
_____no_output_____
###Markdown
Correlograms (Jaccard ~ BD)
###Code
sim_corr_files = !find $buildDir -name "OTU_abs1e9_PCR_sub_corr.txt"
print len(sim_corr_files)
print emp_corr_file
# checking for empty files
for x in sim_corr_files:
ret = !ls -thlc $x
if ret[0].split(' ')[4] == '0':
print ret
%%R -i sim_corr_files -i emp_corr_file
df.corr = load.data.files(sim_corr_files, emp_corr_file)
# binning
df.corr = df.corr %>%
filter(!is.na(Mantel.corr)) %>%
group_by(DBL_scale, bw, dataset, library, class.index.bin = ntile(class.index, 12))
df.corr %>% tail(n=3) %>% as.data.frame
%%R -w 800 -h 600
# plotting
df.corr.s = df.corr %>%
group_by(DBL_scale, bw, dataset, class.index.bin) %>%
summarize(mean_Mantel.corr = mean(Mantel.corr),
sd_Mantel.corr = sd(Mantel.corr),
mean_class.index = mean(class.index))
ggplot(df.corr.s, aes(mean_class.index, mean_Mantel.corr, color=dataset,
ymin=mean_Mantel.corr-sd_Mantel.corr,
ymax=mean_Mantel.corr+sd_Mantel.corr)) +
geom_pointrange(size=0.2) +
labs(x='Class index (binned; 12 bins)', y='Mantel correlation coef.') +
facet_grid(DBL_scale ~ bw) +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_text(angle=45, hjust=1)
)
%%R -w 700 -h 600
# pairwise correlations for each dataset
df.shan.bin = df.shan %>%
group_by(BD_bin = ntile(Buoyant_density, 24))
calc.pearson = function(x){
cor(x[,'Mantel.corr.x'], x['Mantel.corr.y'], method='pearson')[1,1]
}
df.corr.lm = inner_join(df.corr, df.corr, c('class.index.bin' = 'class.index.bin',
'bw' = 'bw',
'DBL_scale' = 'DBL_scale')) %>%
group_by(bw, DBL_scale, dataset.x, dataset.y) %>%
nest() %>%
mutate(model = purrr::map(data, calc.pearson)) %>%
unnest(pearson = model %>% purrr::map(function(x) x)) %>%
ungroup() %>%
select(-data, -model) %>%
mutate(pearson_txt = round(pearson, 2))
# plotting
ggplot(df.corr.lm, aes(dataset.x, dataset.y, fill=pearson)) +
geom_tile() +
geom_text(aes(label=pearson_txt), color='white', size=5) +
scale_fill_gradient(low='black', high='red') +
labs(title='Beta diversity correlogram') +
facet_grid(DBL_scale ~ bw) +
theme(
text = element_text(size=16),
axis.text.x = element_text(angle=45, hjust=1)
)
%%R -w 500 -h 250
# getting empirical-empirical corr
emp.val = df.corr.lm %>%
filter((dataset.x == 'Emperical' &
dataset.y == 'Emperical')) %>%
group_by() %>%
summarize(max_value = max(pearson)) %>%
ungroup() %>%
select(max_value) %>% as.matrix %>% as.vector
emp.val = emp.val[1]
print(emp.val)
# filtering
df.corr.lm.f = df.corr.lm %>%
filter((dataset.x == 'Simulation' &
dataset.y == 'Emperical')) %>%
mutate(DBL_scale = DBL_scale %>% as.character,
bw = bw %>% as.character,
gt_emp = ifelse(round(pearson,2) >= round(emp.val,2), 'bold.italic', 'plain')) %>%
complete(DBL_scale, bw)
df.corr.lm.f %>% head(n=3)
# plotting
ggplot(df.corr.lm.f, aes(DBL_scale,bw, fill=pearson)) +
geom_tile() +
geom_text(aes(label=pearson_txt,fontface=gt_emp), color='white', size=5.5) +
scale_color_manual(values=c('white', 'black')) +
scale_fill_gradient('Pearson', low='black', high='red') +
labs(title='Beta diversity correlogram', x='DBL scaling', y='KDE Bandwidth') +
theme(
text = element_text(size=16)
)
###Output
_____no_output_____ |
Database-Design/A4.ipynb | ###Markdown
Assignment 4. Database Design

Objectives

This assignment has two parts.
* In Part 1, you will be trained to draw an E/R diagram (Task 1) and transform it into relational schemas (Task 2).
* In Part 2, you will be trained to master important techniques related to database normalization (Tasks 3-5).

Download [A4.zip](A4.zip). Answer the questions in A4.ipynb.

Part 1. Entity-Relationship Model (10 points)

You will design a database for SFU. This database will include information about departments, students, and courses (and their offerings):
* Information about **students** includes their SID, name, and age. The SID of a student is assumed to be unique, not shared by any other student. Each student is either a **graduate** or an **undergraduate**.
  - Each student must be in one category or the other, and cannot be in both categories simultaneously.
  - For graduate students, we record what their research field is.
  - For undergraduate students, we record their concentration.
* Information about **departments** includes their name and address. The name of a department is assumed to be unique, not shared by any other department.
* We need to be able to associate students with the departments with which they are affiliated. Each student has to be affiliated with exactly one department.
* Information about a course includes its number (e.g., "354"), name (e.g., "Introduction to Databases"), and capacity (e.g., 110). We also need to be able to know the unique department that owns each course: no cross-listing of courses across departments is allowed, and every course is owned by exactly one department.
  - Note: you cannot assume that course number uniquely identifies a course; in fact, you cannot assume even that course number together with course name uniquely identifies a course. However, course number uniquely identifies courses within a department.
* Finally, we need to record all terms -- identified as semester (e.g., "fall") and year (e.g., "2018") -- in which each course has been offered in the history of the university.
* Assume that for a course to be offered during a term, it has at least one student enrolled. Also, a course is offered at most once during each term. In other words, a course cannot have multiple sections during one term.
* Finally, assume that a student can take courses “owned” by departments with which the student is not affiliated. And a student should be enrolled in at least one course.

Task 1: E/R Diagram (5 points)

Render the SFU database in the version of the E/R model that we studied in class, with *exactly* the constraints and requirements specified above.

Task 2: From E/R Diagram to Relational Schemas (5 points)

Please follow the above E/R diagram and write SQL queries to create the required tables in `sfu.db`
###Code
%load_ext sql
%sql sqlite:///sfu.db
%%sql
CREATE TABLE students
(
SID integer,
name char(20),
age integer,
primary key (SID)
);
%%sql
CREATE TABLE undergraduate
(
studentid integer,
concentration char(10),
primary key (studentid),
foreign key (studentid) references students(SID)
);
%%sql
CREATE TABLE graduate
(
studentid integer,
research char(10),
primary key (studentid),
foreign key (studentid) references students(SID)
);
%%sql
CREATE TABLE departments
(
name char(10),
address char(20),
primary Key (name)
);
%%sql
CREATE TABLE affiliated
(
name char(10),
studentid integer,
primary key (studentid, name),
foreign key (studentid) references students(SID),
foreign key (name) references departments(name)
);
%%sql
CREATE TABLE courses
(
dname char(10),
name char(20),
number integer,
capacity integer,
primary Key (number, dname),
foreign key (dname) references departments(name)
);
%%sql
CREATE TABLE terms
(
semester char(10),
year integer,
primary Key (semester, year)
);
%%sql
CREATE TABLE offered
(
department_name char(10),
course_number integer,
semester char(10),
year integer,
primary key (department_name, course_number, semester, year),
foreign key (department_name) references departments(name),
foreign key (course_number, department_name) references courses(number, dname),
foreign key (semester, year) references terms(semester, year)
);
%%sql
CREATE TABLE enrolled
(
department_name char(10),
course_number integer,
studentid integer,
primary key (department_name, course_number, studentid),
foreign key (department_name) references departments(name),
foreign key (course_number, department_name) references courses(number, dname),
foreign key (studentid) references students(SID)
);
###Output
* sqlite:///sfu.db
Done.
###Markdown
Part 2. Normalization (10 points)

Task 3. Decompose a relational schema into BCNF

Consider a relational schema and a set of functional dependencies:
* $R(A,B,C,D,E)$ with functional dependencies $A \rightarrow E$, $BC \rightarrow A$, $DE \rightarrow B$

**Decompose $R(A,B,C,D,E)$ into BCNF. Show all of your work and explain, at each step, which dependency violations you are correcting. You have to write down a description of your decomposition steps. (2 points)**

* Closures from the given FDs:
  * {A}+ = {A,E}
  * {B,C}+ = {B,C,A,E}
  * {D,E}+ = {D,E,B}
* Taking the bad FD A → E (its closure {A}+ = {A,E} shows that A is not a key), R(A,B,C,D,E) is decomposed into R1(A,E) and R2(A,B,C,D).
* Taking the bad FD BC → A (its closure {B,C}+ = {B,C,A,E} shows that BC is not a key), R2(A,B,C,D) is decomposed into R21(B,C,A) and R22(B,C,D).
* Now we have the tables R1(A,E), R21(B,C,A), and R22(B,C,D), and there are no more non-trivial bad FDs.
* Therefore, R(A,B,C,D,E) is decomposed into R1(A,E), R21(B,C,A), and R22(B,C,D).

Task 4. Find a set of FDs that is consistent with a closed attribute set

A set of attributes $X$ is called closed (with respect to a given set of functional dependencies) if $X^+=X$. Consider a relation with schema $R(A,B,C,D)$ and an unknown set of functional dependencies. For each closed attribute set below, give a set of functional dependencies that is consistent with it.

**a. All sets of attributes are closed (1 point)**
* A -> A
* B -> B
* C -> C
* D -> D

**b. The only closed sets are $\{\}$ and $\{A,B,C,D\}$ (1 point)**
* A -> B
* B -> C
* C -> D
* D -> A

**c. The only closed sets are $\{\}$, $\{A,B\}$, and $\{A,B,C,D\}$ (1 point)**
* A -> B
* B -> A
* C -> DAB
* D -> CAB

Task 5. Normalize a database

Suppose Mike is the owner of a small store. He uses the following database ([mike.db](mike.db)) to store monthly sales of his store.
* `Sales`(name, discount, month, price)
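The closures quoted in Task 3 can be checked mechanically. Below is a small, hedged Python sketch (my own illustration, not part of the graded answer): it computes the closure of an attribute set under a list of FDs and flags FDs whose left-hand side is not a superkey, i.e. BCNF violations. The encoding of FDs as pairs of attribute sets is an assumption of this sketch.

```python
# Illustration only: closure computation for the FDs of Task 3.
def closure(attrs, fds):
    """Return the closure of `attrs` under the functional dependencies `fds`."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

R = {'A', 'B', 'C', 'D', 'E'}
fds = [({'A'}, {'E'}), ({'B', 'C'}, {'A'}), ({'D', 'E'}, {'B'})]
for lhs, rhs in fds:
    cl = closure(lhs, fds)
    # An FD violates BCNF when its left-hand side is not a superkey of R.
    print(sorted(lhs), '->', sorted(rhs), '| closure:', sorted(cl), '| BCNF violation:', cl != R)
```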
###Code
%load_ext sql
%sql sqlite:///mike.db
%sql select * from Sales limit 5
###Output
* sqlite:///mike.db
sqlite:///sfu.db
Done.
###Markdown
However, Mike finds that the database is difficult to update (i.e., when inserting new data into the database). Your job is to help Mike normalize his database. You should do the following steps (a-d):

**a.** Find all *nontrivial* functional dependencies in the database.

This is a reverse-engineering task, so expect to proceed in a trial-and-error fashion. Search first for the simple dependencies, say $name \rightarrow discount$, then try the more complex ones, like $name, discount \rightarrow month$, as needed. To check each functional dependency you have to write a SQL query. Your challenge is to write this SQL query for every candidate functional dependency that you check, such that:
- the query's answer is always short (say: no more than ten lines -- remember that 0 results can be instructive as well)
- you can determine whether the FD holds or not by looking at the query's answer.

Try to be clever in order not to check too many dependencies, but don't miss potentially relevant dependencies. For example, if you have A → B and C → D, you do not need to derive AC → BD as well.

**Write down all FDs that you found. (1 point)**
* Name -> Price
* Month -> Discount

**For each FD above, write down the SQL query that discovered it (remember short queries are preferred). (1 point)**
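As a cross-check of the SQL queries used for part (a), here is a hedged pandas sketch (my own, not required by the task) that tests every single-column candidate FD by checking whether each left-hand-side value maps to exactly one right-hand-side value. It assumes the `Sales` table and the lowercase column names shown above.

```python
import sqlite3
import pandas as pd

with sqlite3.connect('mike.db') as conn:
    sales = pd.read_sql('SELECT * FROM Sales', conn)

cols = list(sales.columns)
for lhs in cols:
    for rhs in cols:
        if lhs == rhs:
            continue
        # lhs -> rhs holds iff no lhs value is associated with more than one rhs value.
        if (sales.groupby(lhs)[rhs].nunique() <= 1).all():
            print(f'{lhs} -> {rhs}')
```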
###Code
%%sql
Select *
From Sales S1, Sales S2
Where S1.name = s2.name and S1.price != s2.price
%%sql
select *
from Sales S1, Sales S2
where S1.month = s2.month and S1.discount != s2.discount
###Output
* sqlite:///mike.db
sqlite:///sfu.db
Done.
###Markdown
**b. Decompose the `Sales` table into BCNF. Like Task 3, show a description of your decomposition steps. (1 point)**

* Closures for the FDs:
  * {Name}+ = {Name, Price}
  * {Month}+ = {Month, Discount}
* Taking the bad FD Month → Discount (its closure {Month}+ = {Month, Discount} shows that Month is not a key), R(Name, Discount, Month, Price) is decomposed into R1(Month, Discount) and R2(Month, Name, Price).
* Taking the bad FD Name → Price (its closure {Name}+ = {Name, Price} shows that Name is not a key), R2(Month, Name, Price) is decomposed into R21(Month, Name) and R22(Name, Price).
* Now we have the tables R1(Month, Discount), R21(Month, Name), and R22(Name, Price), and there are no more non-trivial bad FDs.
* Therefore, R(Name, Discount, Month, Price) is decomposed into R1(Month, Discount), R21(Month, Name), and R22(Name, Price).

**c. Write down SQL queries to create the BCNF tables in [mike.db](mike.db). Create keys and foreign keys where appropriate. (1 point)**
###Code
%%sql
create table MonthDiscount
(
month varchar(3),
discount float,
primary key (month)
);
%%sql
create table NamePrice
(
name varchar(50),
price int,
primary key (name)
);
%%sql
create table MonthName
(
month varchar(3),
name varchar(50),
foreign key (month) references MonthDiscount(month),
foreign key (name) references NamePrice(name)
);
###Output
* sqlite:///mike.db
sqlite:///sfu.db
Done.
###Markdown
**d. Populate the BCNF tables using the data from the sales table. (1 point)**

*Hint:* see [SQL INSERT INTO SELECT Statement](https://www.w3schools.com/sql/sql_insert_into_select.asp)
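Once the three BCNF tables are populated by the statements below, the decomposition can be sanity-checked for losslessness by natural-joining them and comparing against the original `Sales` table. This is a hedged pandas sketch (not required by the assignment) and assumes the lowercase column names used throughout this notebook.

```python
import sqlite3
import pandas as pd

with sqlite3.connect('mike.db') as conn:
    sales = pd.read_sql('SELECT * FROM Sales', conn)
    month_discount = pd.read_sql('SELECT * FROM MonthDiscount', conn)
    name_price = pd.read_sql('SELECT * FROM NamePrice', conn)
    month_name = pd.read_sql('SELECT * FROM MonthName', conn)

cols = ['name', 'discount', 'month', 'price']
# Natural join of the three projections should reproduce the original rows.
rejoined = (month_name.merge(month_discount, on='month')
                      .merge(name_price, on='name'))[cols]
a = rejoined.sort_values(cols).reset_index(drop=True)
b = sales[cols].sort_values(cols).reset_index(drop=True)
print('Lossless join:', a.equals(b))
```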
###Code
%%sql
insert into MonthDiscount (month, discount)
select distinct month, discount
from Sales
%%sql
insert into NamePrice (name, price)
select distinct name, price
from Sales
%%sql
insert into MonthName (month,name)
select month, name
from Sales
###Output
* sqlite:///mike.db
sqlite:///sfu.db
426 rows affected.
|
notebooks/10.0-gather_data_w_annotations.ipynb | ###Markdown
Algorithm to plot annotated accelerations and speed:
- For each row in sync.csv:
  - Read the accelerometer raw-data path and fetch the corresponding data from raw.h5.
  - Read the subject.start time and correct it by the offset (to ensure that the fetched raw accelerations are synced with all derived tables).
  - Generate an array of acceleration 'stamps' such that the starting time of the array equals the corrected time above, then increment each stamp by 0.01 s.
  - Look up average speed data (i.e. speed) from wheel.csv based on SubjectId + RunId + the wheel MID time nearest to the corresponding stamp.
  - Look up annotation data (i.e. annotation and annotation type) from annotation.csv based on SubjectId + RunId + the annotation START time nearest to the corresponding stamp.
- Save the gathered data to annot_gathered.csv
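The two nearest-time lookups described above can also be written with `pandas.merge_asof`, which matches each stamp to the closest wheel/annotation row within a subject and run. The sketch below is only an illustration of that alternative: `stamps_df` is a hypothetical frame with columns `['SubjectId', 'RunId', 'stamp']`, and the explicit loop in the cells below remains the notebook's actual implementation.

```python
import pandas as pd

def nearest_lookup(stamps_df, wheel_df, annot_df):
    # merge_asof needs both sides sorted by the time keys used for matching.
    stamps_df = stamps_df.sort_values('stamp')
    out = pd.merge_asof(
        stamps_df,
        wheel_df.sort_values('mid')[['SubjectId', 'RunId', 'mid', 'speed']],
        left_on='stamp', right_on='mid',
        by=['SubjectId', 'RunId'], direction='nearest')
    out = pd.merge_asof(
        out,
        annot_df.sort_values('start')[['SubjectId', 'RunId', 'start', 'Annotation', 'AnnotationType']],
        left_on='stamp', right_on='start',
        by=['SubjectId', 'RunId'], direction='nearest')
    return out
```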
###Code
annotation.head()
wheel.head()
sync.head()
len(sync)
def convert_cols_to_time(dataframe, columns_to_convert):
""" A function that converts columns specified by 'columns_to_convert' in the 'dataframe' into epoch time and returns the converted dataframe"""
for col in columns_to_convert:
dataframe[col]=dataframe[col].apply(lambda x: pd.Timestamp(x).timestamp())
return dataframe
sync=convert_cols_to_time(sync,['subject.start','subject.end','wheel.start','wheel.end','video.start'])
wheel=convert_cols_to_time(wheel,['start','end'])
wheel['mid']=wheel['start']+0.5
annotation=convert_cols_to_time(annotation,['start','end'])
# Construct expanded sync dataframe with omitted unnecessary fields
data_columns=["stamp","SubjectId","RunId","x","y","z","speed", "Annotation","AnnotationType"]
data=[]
for idx, row in sync.iterrows():
subject_id=row['SubjectId']
run_id=row['RunId']
subject_start=row['subject.start']
delta=row['subject.delta']/100
# Fetch subject raw accelerometer data from raw.h5
path_to_subject=row["subject.path"]
subject_data=raw.get(path_to_subject)
subject_data=list(subject_data)
subject_data=np.array(subject_data).T
# Generate corrected stamps corresponding to subject sensor entries
stamps=0.01*np.arange(len(subject_data))+subject_start-delta
speeds=np.zeros_like(stamps)
subjects=subject_id*np.ones_like(stamps)
runs=run_id*np.ones_like(stamps)
annotations=[]
annotation_types=[]
# lookup average speed data from wheel.csv based on SubjectId, RunId, and nearest number of wheel MID time to corrected subject start time to stamp
run_wheel_data=wheel.loc[(wheel['SubjectId']==subject_id ) & (wheel['RunId']==run_id)]
first_speed_stamp=run_wheel_data["start"].iloc[0]
last_speed_stamp=run_wheel_data["end"].iloc[-1]
for i in range(len(stamps)):
time_idx=np.argmin(np.abs(run_wheel_data['mid'].values-stamps[i]))
speeds[i]=run_wheel_data['speed'].values[time_idx]
# lookup annotation data from annotation.csv based on SubjectId, RunId, and nearest number of annotation START time to stamp
run_annot_data=annotation.loc[(annotation['SubjectId']==subject_id ) & (annotation['RunId']==run_id)]
first_annot_stamp=run_annot_data["start"].iloc[0]
last_annot_stamp=run_annot_data["end"].iloc[-1]
for i in range(len(stamps)):
time_idx=np.argmin(np.abs(run_annot_data['start'].values-stamps[i]))
annotations.append(run_annot_data['Annotation'].values[time_idx])
annotation_types.append(run_annot_data['AnnotationType'].values[time_idx])
# Construct table for this subject run
annotations=np.array(annotations)
annotation_types=np.array(annotation_types)
run_gathered_data=np.concatenate([stamps.reshape(-1,1),subjects.reshape(-1,1),runs.reshape(-1,1),subject_data[:,0:-1], speeds.reshape(-1,1), annotations.reshape(-1,1), annotation_types.reshape(-1,1)], axis=1) #added stamps and dropped last KSS column
# discard all samples that started before initial_speed_stamp and continued after last_speed_stamp (This step will NOT generate discontinuity in the generated time series)
run_gathered_data=run_gathered_data[run_gathered_data[:,0].astype(float)>first_speed_stamp]
run_gathered_data=run_gathered_data[run_gathered_data[:,0].astype(float)<last_speed_stamp]
# discard samples with inf speed (This action will generate discontinuities in the collection of time series)
#run_gathered_data=run_gathered_data[run_gathered_data[:,-1]!=np.inf]
data.append(run_gathered_data)
# # Fetch wheel raw accelerometer data from raw.h5 (no correction needed since wheel is reference)
# path_to_wheel=row["wheel.path"]
# wheel_data=raw.get(path_to_wheel)
# wheel_data=list(wheel_data)
# wheel_data=np.array(wheel_data).T
isExist = os.path.exists("../data/processed")
if not isExist:
os.makedirs("../data/processed")
data=np.concatenate(data)
data=pd.DataFrame(data, columns=data_columns)
data.to_csv("../data/processed/{}annot_gathered.csv".format(ref), index=False)
###Output
_____no_output_____ |
other/Readme.ipynb | ###Markdown
Online JupyterHub

http://intro.syzygy.ca/getting-started/
* uvic.syzygy.ca
* https://cybera.syzygy.ca Cybera (via Google Authentication)

Your “university credentials” will be something like your campus-wide login ID, student or employee number, etc. You will also need to enter the password connected to your campus-wide login ID or number. This information is held privately by your university IT group, so privacy is assured by your university.

Anyone with Google credentials can use https://cybera.syzygy.ca for research, learning or innovation (including post-secondary institutions, K-12 and business incubators).

You can use any web browser that you like. However, some experience suggests the following:
- Firefox is better at rendering math formulas, but cut/copy/paste does not work in the terminal.
- Chrome is better at cut/copy/paste in the terminal, which is important when using GIT, for instance.
- Safari works fine, including on iPads and iPhones.
- Internet Explorer is not something to use, for many good reasons.

* https://pims.jupyter.ca
* try.jupyter.org

Python for scientific computation and optimization

https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html
https://github.com/PlamenStilyianov/Python3/tree/master/Learning-SciPy-for-Numerical-and-Scientific-Computing-2nd
http://people.duke.edu/~ccc14/sta-663-2016/11_OptimizationOneDimension.html
http://people.duke.edu/~ccc14/sta-663-2016/index.html
Computational Statistics in Python
https://github.com/unpingco/Python-for-Probability-Statistics-and-Machine-Learning/
MATH3076/3976 Mathematical Computing
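Since the optimization links above all point at `scipy.optimize`, here is a minimal example of the kind of call those tutorials cover: minimizing the Rosenbrock function with Nelder-Mead. The choice of test function and starting point is arbitrary and only illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def rosen(x):
    # Rosenbrock function, a standard unconstrained-optimization test problem.
    return sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2)

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
res = minimize(rosen, x0, method='Nelder-Mead')
print(res.x)
```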
###Code
from scipy import optimize
help(scipy.optimize)
!pip install --upgrade --user pip
!pip install --user quantecon
from quantecon import quad
from numba import *
?generated_jit
###Output
Object `generated_jit` not found.
###Markdown
Textbook
* http://www.oreilly.com/programming/free/files/python-for-scientists.pdf
* A Primer on Scientific Programming in Python, H.P. Langtangen https://hplgit.github.io/primer.html/doc/pub/half/book.pdf
* https://github.com/hplgit/scipro-primer

Plot

There are many, many more plot types available. One useful way to explore these is by looking at the [matplotlib gallery](http://matplotlib.org/gallery.html). You can test these examples out easily in the notebook: simply copy the ``Source Code`` link on each page, and put it in a notebook using the ``%load`` magic. For example:
###Code
# %load http://matplotlib.org/mpl_examples/pylab_examples/ellipse_collection.py
###Output
_____no_output_____ |
Day01/DB-WS01a.ipynb | ###Markdown
Welcome to the IPython notebook!
-------------------------------------
Let's write `"Hello world!"` in the environment you have just opened.

To run the contents of a cell, select the cell and press ```SHIFT+ENTER```. This executes the cell and then moves on to the next one.

Check the menu for other useful commands. If issuing commands from the keyboard is not yet familiar, you can use the menu instead.
###Code
print "Hello world!"
###Output
Hello world!
###Markdown
If you are a more experienced user, try running the following.
###Code
import sys, time
message = "HELLO WORLD!"
L = len(message)
for i in range(100):
time.sleep(0.1)
j = i % L
if j > 0:
sys.stdout.write("\r" + message[-j:] + " " + message[:L-j])
else:
sys.stdout.write("\r" + message)
sys.stdout.flush()
###Output
LD! HELLO WOR |
transfer_learning/Emergency_Or_Not/Emergency_vehicle_Image_Classification_using_transfer_learning_V2.ipynb | ###Markdown
This is our solution for the [Analytics Vidhya Hackathon](https://datahack.analyticsvidhya.com/contest/janatahack-computer-vision-hackathon), using an image data generator and `flow_from_dataframe`.
###Code
# Comment this if you are not running in colab
from google.colab import drive
drive.mount('/content/drive')
###Output
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
###Markdown
Import Modules
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
from os import path, listdir
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras import layers
from tensorflow.keras.applications import resnet_v2
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from sklearn.model_selection import train_test_split
from google.colab import files
###Output
_____no_output_____
###Markdown
Define reusable constants
###Code
# Data files:
# Google drive: https://drive.google.com/file/d/1jnhIjWUL7B4K4iJ665f7JgWbODmGAMAn/view
# Download the above data files from our google drive
# to your google drive or local computer and update the below paths accordingly
project_folder = '/content/drive/MyDrive/ML_Projects/Emergency_Or_Not'
data_dir = path.join(project_folder, 'data')
models_path =path.join(project_folder, 'models')
models_dir = path.join(models_path, "base")
train_csv_path = path.join(data_dir, 'train.csv')
test_csv_path = path.join(data_dir, 'test.csv')
label_col = 'emergency_or_not'
image_idx_col = 'image_idx'
image_names_col = 'image_names'
images_dir = path.join(data_dir, "images")
imgy = imgx = 224
submission_csv_path = path.join(data_dir, 'submission.csv')
###Output
_____no_output_____
###Markdown
Define common functions
###Code
from keras_preprocessing.image import dataframe_iterator
def get_image_id(img_name):
return int(img_name[:-4])
def get_image(img_name):
img_path = path.join(images_dir, img_name)
return Image.open(img_path)
# Shows a randomly selected images from data
def showImages(df, n=1):
cols = 5
rows = max(1, n//cols)
sample = df.sample(n=n).reset_index(drop=True)
for idx, row in sample.iterrows():
img = get_image(row[image_names_col])
plt.subplot(rows, cols, idx+1)
plt.title(row[label_col])
plt.imshow(img)
plt.figure(figsize=(10, 20))
plt.show()
# Creates tensorflow data generator for images
def create_image_generator(df, shuffle=False, class_mode="binary", batch_size=32):
image_gen = ImageDataGenerator()
dataframe_iterator = image_gen.flow_from_dataframe(df, images_dir, x_col=image_names_col, y_col=label_col, batch_size=batch_size,
class_mode=class_mode, shuffle=shuffle, target_size=(imgy, imgx), validate_filenames=False)
return dataframe_iterator
###Output
_____no_output_____
###Markdown
Load data
###Code
train_df = pd.read_csv(train_csv_path)
train_df[label_col] = np.where(train_df[label_col]==0, "No", "Yes")
test_df = pd.read_csv(test_csv_path)
###Output
_____no_output_____
###Markdown
Prepare datasets for training and validation
###Code
train_data, val_data = train_test_split(train_df, test_size=0.2)
train_gen = create_image_generator(train_data, shuffle=True)
val_gen = create_image_generator(val_data)
test_gen = create_image_generator(test_df, class_mode=None)
###Output
Found 1316 non-validated image filenames belonging to 2 classes.
Found 330 non-validated image filenames belonging to 2 classes.
Found 706 non-validated image filenames.
###Markdown
Training a deep neural network using TensorFlow

**Transfer learning**: We are using a ResNet model for image classification.
1. In the 1st phase, we train our model while keeping the pre_trained_model frozen (i.e. setting trainable=False).
2. In the 2nd phase, we train our model with the pre_trained_model unfrozen (i.e. setting trainable=True), but with a very small learning_rate.
###Code
def create_model(trainable=False):
pre_trained_model = resnet_v2.ResNet50V2(include_top=False, pooling='max', weights='imagenet', input_shape=(imgy, imgx, 3))
input = tf.keras.Input(shape=(imgy, imgx, 3))
output = resnet_v2.preprocess_input(input)
output = pre_trained_model(output, training=False)
output = layers.Dropout(0.2)(output)
output = layers.Dense(1, activation="sigmoid")(output)
pre_trained_model.trainable=trainable
model = tf.keras.Model(input, output)
return model
def compile_and_fit(model, train_gen, val_gen, model_path, learning_rate, epochs=5, patience=3):
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=patience, mode='min')
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau( monitor='val_loss', factor=0.1, patience=patience,
mode='min', min_delta=0.0001, cooldown=0,
min_lr=min(0.0001, learning_rate))
model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(filepath=model_path, monitor='val_loss',
mode='min', save_weights_only=True,
save_best_only=True)
model.compile(loss=tf.keras.losses.BinaryCrossentropy(), metrics=['accuracy'],
optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate))
return model.fit(train_gen, epochs=epochs, validation_data=val_gen,
callbacks=[early_stopping, model_checkpoint_callback, reduce_lr])
###Output
_____no_output_____
###Markdown
Fast learning phase by freezing the pre-trained-model
###Code
model = create_model()
print(model.summary())
model_path = path.join(models_dir, "best")
model.load_weights(tf.train.latest_checkpoint(models_dir))
compile_and_fit(model, train_gen, val_gen, model_path, learning_rate=0.01, epochs=10, patience=3)
###Output
Model: "model_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_6 (InputLayer) [(None, 224, 224, 3)] 0
tf.math.truediv_2 (TFOpLamb (None, 224, 224, 3) 0
da)
tf.math.subtract_2 (TFOpLam (None, 224, 224, 3) 0
bda)
resnet50v2 (Functional) (None, 2048) 23564800
dropout_2 (Dropout) (None, 2048) 0
dense_2 (Dense) (None, 1) 2049
=================================================================
Total params: 23,566,849
Trainable params: 2,049
Non-trainable params: 23,564,800
_________________________________________________________________
None
Epoch 1/10
42/42 [==============================] - 9s 160ms/step - loss: 0.0455 - accuracy: 0.9916 - val_loss: 0.0388 - val_accuracy: 0.9970 - lr: 1.0000e-05
Epoch 2/10
42/42 [==============================] - 6s 141ms/step - loss: 0.0433 - accuracy: 0.9901 - val_loss: 0.0379 - val_accuracy: 0.9970 - lr: 1.0000e-05
Epoch 3/10
42/42 [==============================] - 5s 130ms/step - loss: 0.0493 - accuracy: 0.9894 - val_loss: 0.0386 - val_accuracy: 0.9970 - lr: 1.0000e-05
Epoch 4/10
42/42 [==============================] - 5s 129ms/step - loss: 0.0551 - accuracy: 0.9863 - val_loss: 0.0381 - val_accuracy: 0.9970 - lr: 1.0000e-05
Epoch 5/10
42/42 [==============================] - 5s 130ms/step - loss: 0.0635 - accuracy: 0.9901 - val_loss: 0.0387 - val_accuracy: 0.9970 - lr: 1.0000e-05
###Markdown
Slow learning phase by unfreezing the pre-trained model
###Code
# Create best model from saved weights
best_model = create_model()
best_model.load_weights(tf.train.latest_checkpoint(models_dir))
model = create_model(trainable=True)
model.set_weights(best_model.get_weights())
compile_and_fit(model, train_gen, val_gen, model_path, learning_rate=1e-5, epochs=10, patience=3)
###Output
Epoch 1/10
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_1
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_2
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.decay
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.learning_rate
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-1.kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-1.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-1.kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-1.bias
WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_1
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_2
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.decay
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.learning_rate
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-1.kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-1.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-1.kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-1.bias
WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.
42/42 [==============================] - 17s 261ms/step - loss: 0.0896 - accuracy: 0.9901 - val_loss: 0.0980 - val_accuracy: 0.9939 - lr: 1.0000e-05
Epoch 2/10
42/42 [==============================] - 10s 238ms/step - loss: 0.1267 - accuracy: 0.9863 - val_loss: 0.0248 - val_accuracy: 0.9939 - lr: 1.0000e-05
Epoch 3/10
42/42 [==============================] - 10s 246ms/step - loss: 0.0382 - accuracy: 0.9932 - val_loss: 0.0217 - val_accuracy: 0.9909 - lr: 1.0000e-05
Epoch 4/10
42/42 [==============================] - 9s 212ms/step - loss: 0.0865 - accuracy: 0.9878 - val_loss: 0.0795 - val_accuracy: 0.9879 - lr: 1.0000e-05
Epoch 5/10
42/42 [==============================] - 9s 215ms/step - loss: 0.0767 - accuracy: 0.9894 - val_loss: 0.0637 - val_accuracy: 0.9909 - lr: 1.0000e-05
Epoch 6/10
42/42 [==============================] - 8s 200ms/step - loss: 0.0252 - accuracy: 0.9954 - val_loss: 0.0405 - val_accuracy: 0.9939 - lr: 1.0000e-05
###Markdown
Check results on test dataset
###Code
# Create best model from saved weights
best_model = create_model()
best_model.load_weights(tf.train.latest_checkpoint(models_dir))
test_predictions = best_model.predict(test_gen)
test_df[label_col] = (test_predictions>0.5).astype(np.uint0)
showImages(test_df, n=10)
###Output
_____no_output_____
###Markdown
**Create Submission file**
###Code
test_df[[image_names_col, label_col]].to_csv(submission_csv_path, index=False)
files.download(submission_csv_path)
###Output
_____no_output_____ |
exercises/exercises_ch11.ipynb | ###Markdown
Chapter 11 - Training Deep Neural Networks

Problem 1

Is it OK to initialize all the weights to the same value as long as that value is selected randomly using He initialization?

No. If all weights are initialized with the same value and backpropagation via Gradient Descent is used to train the network, then all weights in a layer receive identical gradient updates and remain equal to one another, so the symmetry is never broken. It is highly unlikely that a good solution could be found this way.

Problem 2

Is it OK to initialize the bias terms to 0?

Yes, which is also commonly done. The initialization of the bias terms is not very important.

Problem 3

Name three advantages of the SELU activation function over ReLU.

1. SELU has a smooth exponential shape for negative inputs, so its gradient does not jump abruptly to zero the way ReLU's does, which makes learning less prone to oscillation.
2. SELU does not run the risk of dead nodes.
3. SELU produces values that are on average closer to zero, as it can return negative values. This stabilizes the variance across layers and helps with vanishing or exploding gradients.

Problem 4

In which cases would you want to use each of the following activation functions: SELU, leaky ReLU (and its variants), ReLU, tanh, logistic, and softmax?

* tanh, logistic, and softmax are primarily useful for the output layer: tanh if the output range is bounded, logistic for binary classification of bounded ranges, and softmax for multiclass classification.
* ReLU is preferable if prediction speed is critical.
* Leaky ReLU allows for learning everywhere, prevents dead nodes, and is faster than SELU.
* SELU tends to perform best, but requires more compute.

Problem 5

What may happen if you set the momentum hyperparameter too close to 1 (e.g., 0.99999) when using an SGD optimizer?

When momentum is very high, the step direction is updated very slowly. This risks that the parameter search overshoots the minimum and takes a long time to correct itself.

Problem 6

Name three ways you can produce a sparse model.

1. L1 regularization
2. Pruning, i.e. setting sufficiently small weights to zero.
3. Using the TensorFlow Model Optimization Toolkit

(A small illustrative sketch of the first two options appears after Problem 8's description below.)

Problem 7

Does dropout slow down training? Does it slow down inference (i.e., making predictions on new instances)? What about MC Dropout?

1. Yes, dropout slows down training -- in general -- as in each training step any given node may or may not learn.
2. Dropout does not slow down inference, as all nodes are always included at prediction time.
3. MC Dropout does slow down inference, as multiple predictions need to be made and averaged.

Problem 8

Practice training a deep neural network on the CIFAR10 image dataset:
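To make the first two answers to Problem 6 concrete, here is a minimal, hedged Keras sketch (not part of the original exercise code below): a Dense layer with an l1 kernel regularizer to push weights toward zero during training, followed by explicit post-hoc pruning of small weights. The layer sizes and the 1e-3 pruning threshold are arbitrary illustration choices.

```python
import numpy as np
from tensorflow import keras

# Sketch for Problem 6: l1 regularization encourages sparse weights during training.
sparse_model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[32, 32, 3]),
    keras.layers.Dense(100, activation="elu",
                       kernel_initializer="he_normal",
                       kernel_regularizer=keras.regularizers.l1(1e-4)),
    keras.layers.Dense(10, activation="softmax"),
])

# After training, small weights can be pruned (set exactly to zero);
# the 1e-3 threshold is an arbitrary example value.
for layer in sparse_model.layers:
    if isinstance(layer, keras.layers.Dense):
        kernel, bias = layer.get_weights()
        kernel[np.abs(kernel) < 1e-3] = 0.0
        layer.set_weights([kernel, bias])
```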
###Code
import os
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow import keras
from sklearn.model_selection import train_test_split
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train, X_valid, y_train, y_valid = train_test_split(X_train_full, y_train_full, test_size=5000)
###Output
_____no_output_____
###Markdown
Problem 8a

Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.
###Code
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=X_train.shape[1:]))
for i in range(20):
model.add(keras.layers.Dense(100, activation='elu', kernel_initializer='he_normal'))
###Output
_____no_output_____
###Markdown
Problem 8b

Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with keras.datasets.cifar10.load_data(). The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.
###Code
model.add(keras.layers.Dense(10, activation='softmax'))
early_stopping_cb = keras.callbacks.EarlyStopping(patience=10)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
%load_ext tensorboard
%tensorboard --logdir=./my_cifar10_logs --port=6006
index_run = 0
Wsave = model.get_weights()
for lr in range(4,11):
lr = 10**(-lr/2)
index_run += 1
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(index_run))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
optimizer = keras.optimizers.Nadam(lr=lr)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
model.fit(X_train, y_train, epochs=25, validation_data=(X_valid, y_valid), callbacks=callbacks)
model.set_weights(Wsave)
best_lr = 10**(-8/2)
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=X_train.shape[1:]))
for i in range(20):
model.add(keras.layers.Dense(100, activation='elu', kernel_initializer='he_normal'))
model.add(keras.layers.Dense(10, activation="softmax"))
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
lr = best_lr
index_run += 1
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(index_run))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
optimizer = keras.optimizers.Nadam(lr=lr)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
model.fit(X_train, y_train, epochs=100, validation_data=(X_valid, y_valid), callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
5000/5000 [==============================] - 0s 71us/sample - loss: 1.4946 - accuracy: 0.4866
###Markdown
Problem 8c

Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=X_train.shape[1:]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=best_lr)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
index_run += 1
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(index_run))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
5000/5000 [==============================] - 1s 143us/sample - loss: 1.3619 - accuracy: 0.5352
###Markdown
Problem 8d

Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=X_train.shape[1:]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
index_run += 1
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(index_run))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
5000/5000 [==============================] - 0s 75us/sample - loss: 1.4792 - accuracy: 0.4870
###Markdown
Problem 8e

Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=X_train.shape[1:]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
index_run += 1
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(index_run))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
Problem 8f

Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=X_train.shape[1:]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=X_train.shape[1:]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=best_lr)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 50
onecycle = OneCycleScheduler(len(X_train_scaled) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/50
45000/45000 [==============================] - 2s 40us/sample - loss: 2.0565 - accuracy: 0.2812 - val_loss: 1.7915 - val_accuracy: 0.3670
Epoch 2/50
45000/45000 [==============================] - 1s 29us/sample - loss: 1.7969 - accuracy: 0.3672 - val_loss: 1.7063 - val_accuracy: 0.4140
Epoch 3/50
45000/45000 [==============================] - 1s 29us/sample - loss: 1.6658 - accuracy: 0.4100 - val_loss: 1.6782 - val_accuracy: 0.4264
Epoch 4/50
45000/45000 [==============================] - 1s 29us/sample - loss: 1.5736 - accuracy: 0.4419 - val_loss: 1.6218 - val_accuracy: 0.4374
Epoch 5/50
45000/45000 [==============================] - 1s 28us/sample - loss: 1.5085 - accuracy: 0.4650 - val_loss: 1.6376 - val_accuracy: 0.4468
Epoch 6/50
45000/45000 [==============================] - 1s 29us/sample - loss: 1.4545 - accuracy: 0.4797 - val_loss: 1.5893 - val_accuracy: 0.4544
Epoch 7/50
45000/45000 [==============================] - 1s 29us/sample - loss: 1.4160 - accuracy: 0.4977 - val_loss: 1.6604 - val_accuracy: 0.4548
Epoch 8/50
45000/45000 [==============================] - 1s 29us/sample - loss: 1.3723 - accuracy: 0.5126 - val_loss: 1.5565 - val_accuracy: 0.4644
Epoch 9/50
45000/45000 [==============================] - 1s 28us/sample - loss: 1.3383 - accuracy: 0.5237 - val_loss: 1.5426 - val_accuracy: 0.4838
Epoch 10/50
45000/45000 [==============================] - 1s 29us/sample - loss: 1.3114 - accuracy: 0.5320 - val_loss: 1.5346 - val_accuracy: 0.4822
Epoch 11/50
45000/45000 [==============================] - 1s 29us/sample - loss: 1.2824 - accuracy: 0.5409 - val_loss: 1.6596 - val_accuracy: 0.4506
Epoch 12/50
45000/45000 [==============================] - 1s 29us/sample - loss: 1.2573 - accuracy: 0.5528 - val_loss: 1.6213 - val_accuracy: 0.4756
Epoch 13/50
45000/45000 [==============================] - 1s 29us/sample - loss: 1.2297 - accuracy: 0.5605 - val_loss: 1.6002 - val_accuracy: 0.4644
Epoch 14/50
45000/45000 [==============================] - 1s 29us/sample - loss: 1.2083 - accuracy: 0.5698 - val_loss: 1.6343 - val_accuracy: 0.4718
Epoch 15/50
45000/45000 [==============================] - 1s 29us/sample - loss: 1.1852 - accuracy: 0.5778 - val_loss: 1.6518 - val_accuracy: 0.4504
Epoch 16/50
45000/45000 [==============================] - 1s 29us/sample - loss: 1.1690 - accuracy: 0.5835 - val_loss: 1.6003 - val_accuracy: 0.4880
Epoch 17/50
45000/45000 [==============================] - 1s 29us/sample - loss: 1.1498 - accuracy: 0.5901 - val_loss: 1.7596 - val_accuracy: 0.4606
Epoch 18/50
45000/45000 [==============================] - 1s 29us/sample - loss: 1.1308 - accuracy: 0.5977 - val_loss: 1.6584 - val_accuracy: 0.4850
Epoch 19/50
45000/45000 [==============================] - 1s 29us/sample - loss: 1.1145 - accuracy: 0.6011 - val_loss: 1.6920 - val_accuracy: 0.4804
Epoch 20/50
45000/45000 [==============================] - 1s 29us/sample - loss: 1.0990 - accuracy: 0.6073 - val_loss: 1.6873 - val_accuracy: 0.4858
Epoch 21/50
45000/45000 [==============================] - 1s 29us/sample - loss: 1.0868 - accuracy: 0.6144 - val_loss: 1.6541 - val_accuracy: 0.4944
Epoch 22/50
45000/45000 [==============================] - 1s 29us/sample - loss: 1.0723 - accuracy: 0.6176 - val_loss: 1.6689 - val_accuracy: 0.4702
Epoch 23/50
45000/45000 [==============================] - 1s 29us/sample - loss: 1.0578 - accuracy: 0.6239 - val_loss: 1.6247 - val_accuracy: 0.4906
Epoch 24/50
45000/45000 [==============================] - 1s 29us/sample - loss: 1.0133 - accuracy: 0.6406 - val_loss: 1.7307 - val_accuracy: 0.4722
Epoch 25/50
45000/45000 [==============================] - 1s 29us/sample - loss: 0.9761 - accuracy: 0.6485 - val_loss: 1.7375 - val_accuracy: 0.4910
Epoch 26/50
45000/45000 [==============================] - 1s 29us/sample - loss: 0.9387 - accuracy: 0.6614 - val_loss: 1.7813 - val_accuracy: 0.4854
Epoch 27/50
45000/45000 [==============================] - 1s 29us/sample - loss: 0.9012 - accuracy: 0.6778 - val_loss: 1.7525 - val_accuracy: 0.4982
Epoch 28/50
45000/45000 [==============================] - 1s 29us/sample - loss: 0.8671 - accuracy: 0.6882 - val_loss: 1.8503 - val_accuracy: 0.4922
Epoch 29/50
45000/45000 [==============================] - 1s 29us/sample - loss: 0.8243 - accuracy: 0.7010 - val_loss: 1.8298 - val_accuracy: 0.5010
Epoch 30/50
45000/45000 [==============================] - 1s 29us/sample - loss: 0.7880 - accuracy: 0.7145 - val_loss: 1.8489 - val_accuracy: 0.5034
Epoch 31/50
45000/45000 [==============================] - 1s 29us/sample - loss: 0.7517 - accuracy: 0.7287 - val_loss: 2.1339 - val_accuracy: 0.4764
Epoch 32/50
45000/45000 [==============================] - 1s 29us/sample - loss: 0.7219 - accuracy: 0.7379 - val_loss: 2.1026 - val_accuracy: 0.4924
Epoch 33/50
45000/45000 [==============================] - 1s 29us/sample - loss: 0.6777 - accuracy: 0.7556 - val_loss: 2.0699 - val_accuracy: 0.5100
Epoch 34/50
45000/45000 [==============================] - 1s 29us/sample - loss: 0.6502 - accuracy: 0.7638 - val_loss: 2.1437 - val_accuracy: 0.4882
Epoch 35/50
45000/45000 [==============================] - 1s 29us/sample - loss: 0.6097 - accuracy: 0.7790 - val_loss: 2.2004 - val_accuracy: 0.4902
Epoch 36/50
45000/45000 [==============================] - 1s 29us/sample - loss: 0.5712 - accuracy: 0.7923 - val_loss: 2.3103 - val_accuracy: 0.4918
Epoch 37/50
45000/45000 [==============================] - 1s 29us/sample - loss: 0.5295 - accuracy: 0.8095 - val_loss: 2.2783 - val_accuracy: 0.4990
Epoch 38/50
45000/45000 [==============================] - 1s 29us/sample - loss: 0.4967 - accuracy: 0.8202 - val_loss: 2.4972 - val_accuracy: 0.4904
Epoch 39/50
45000/45000 [==============================] - 1s 29us/sample - loss: 0.4511 - accuracy: 0.8375 - val_loss: 2.4949 - val_accuracy: 0.4926
Epoch 40/50
45000/45000 [==============================] - 1s 29us/sample - loss: 0.4050 - accuracy: 0.8546 - val_loss: 2.6705 - val_accuracy: 0.5052
Epoch 41/50
45000/45000 [==============================] - 1s 29us/sample - loss: 0.3632 - accuracy: 0.8703 - val_loss: 2.7178 - val_accuracy: 0.4990
Epoch 42/50
45000/45000 [==============================] - 1s 29us/sample - loss: 0.3179 - accuracy: 0.8872 - val_loss: 2.8916 - val_accuracy: 0.5024
Epoch 43/50
45000/45000 [==============================] - 1s 29us/sample - loss: 0.2778 - accuracy: 0.9041 - val_loss: 3.1210 - val_accuracy: 0.5034
Epoch 44/50
45000/45000 [==============================] - 1s 29us/sample - loss: 0.2369 - accuracy: 0.9186 - val_loss: 3.2258 - val_accuracy: 0.4984
Epoch 45/50
45000/45000 [==============================] - 1s 29us/sample - loss: 0.2019 - accuracy: 0.9334 - val_loss: 3.4177 - val_accuracy: 0.4978
Epoch 46/50
45000/45000 [==============================] - 1s 29us/sample - loss: 0.1748 - accuracy: 0.9454 - val_loss: 3.5481 - val_accuracy: 0.4966
Epoch 47/50
45000/45000 [==============================] - 1s 29us/sample - loss: 0.1579 - accuracy: 0.9519 - val_loss: 3.6350 - val_accuracy: 0.4934
Epoch 48/50
45000/45000 [==============================] - 1s 29us/sample - loss: 0.1428 - accuracy: 0.9578 - val_loss: 3.7192 - val_accuracy: 0.4958
Epoch 49/50
45000/45000 [==============================] - 1s 29us/sample - loss: 0.1327 - accuracy: 0.9619 - val_loss: 3.7710 - val_accuracy: 0.4954
Epoch 50/50
45000/45000 [==============================] - 1s 29us/sample - loss: 0.1252 - accuracy: 0.9651 - val_loss: 3.7836 - val_accuracy: 0.4968
|
notebooks/bp_5_final.ipynb | ###Markdown
1 Load Packages and Obtain Data
Here are the packages we'll need.
###Code
import os
import tensorflow as tf
from tensorflow.keras import utils, layers, models
from matplotlib import pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
Now we'll get our data, which is images of cats and dogs. We're using Datasets, which are a way to deal with lots of data without loading all of it into memory.
###Code
# location of data
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
# download the data and extract it
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
# construct paths
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
# parameters for datasets
BATCH_SIZE = 32
IMG_SIZE = (160, 160)
# construct train and validation datasets
train_dataset = utils.image_dataset_from_directory(train_dir,
shuffle=True,
batch_size=BATCH_SIZE,
image_size=IMG_SIZE)
validation_dataset = utils.image_dataset_from_directory(validation_dir,
shuffle=True,
batch_size=BATCH_SIZE,
image_size=IMG_SIZE)
# construct the test dataset by taking every 5th observation out of the validation dataset
val_batches = tf.data.experimental.cardinality(validation_dataset)
test_dataset = validation_dataset.take(val_batches // 5)
validation_dataset = validation_dataset.skip(val_batches // 5)
class_names = train_dataset.class_names
###Output
Downloading data from https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip
68608000/68606236 [==============================] - 1s 0us/step
68616192/68606236 [==============================] - 1s 0us/step
Found 2000 files belonging to 2 classes.
Found 1000 files belonging to 2 classes.
###Markdown
This next code uses prefetching so that reading the data doesn't become a bottleneck during training.
###Code
AUTOTUNE = tf.data.AUTOTUNE
train_dataset = train_dataset.prefetch(buffer_size=AUTOTUNE)
validation_dataset = validation_dataset.prefetch(buffer_size=AUTOTUNE)
test_dataset = test_dataset.prefetch(buffer_size=AUTOTUNE)
###Output
_____no_output_____
###Markdown
Visualizing the data
Next we're going to create a function that will show us some cats and then some dogs.
###Code
def visualize():
#Set up a figure
plt.figure(figsize=(10,7))
#Get the first batch of data from the dataset
#This comes in images and labels
for images, labels in train_dataset.take(1):
#Get a list of images of cats and dogs
cats = images[labels == 0]
dogs = images[labels == 1]
#For each index referring to spot in plot
for i in range(6):
#Get the axis
ax = plt.subplot(2,3,i+1)
#If it's in the first row
if i < 3:
#Plot the ith image from cats
plt.imshow(cats[i].numpy().astype('uint8'))
#Label cat
plt.xlabel("cat")
            #If it's in the second row
else:
#Plot the ith image from dogs
plt.imshow(dogs[i].numpy().astype('uint8'))
#Label dog
plt.xlabel("dog")
#Remove the axes
plt.axis("off")
###Output
_____no_output_____
###Markdown
Let's see it in action.
###Code
visualize()
###Output
_____no_output_____
###Markdown
Checking the label frequencies
We'll create an iterator that will cycle through the labels and use it to get an array of labels.
###Code
labels_iterator= train_dataset.unbatch().map(lambda image, label: label).as_numpy_iterator()
#Use a list comprehension over the iterator to convert it to an array
labels = np.array([x for x in labels_iterator])
###Output
_____no_output_____
###Markdown
We can then use the array to count the number with each label.
###Code
#Cats have a label of 0
n_cats = sum(labels == 0)
#Dogs have a label of 1
n_dogs = sum(labels == 1)
n_cats, n_dogs
###Output
_____no_output_____
###Markdown
As the data has an equal number of cats and dogs, a baseline model that always predicts one class would only be 50% accurate.
First Model
Our first model will use several layers:
- Conv2D performs convolution to extract features from the images
- MaxPooling2D summarizes the data to reduce its size at each step
- Dropout drops some units during training to reduce overfitting
- Flatten flattens the data from 2D to 1D
- Dense is the simplest layer, at the heart of TensorFlow and neural networks
###Code
model1 = models.Sequential([
#Alternate Conv2D and MaxPooling2D layers
layers.Conv2D(64, (3, 3), activation='relu', ),
layers.MaxPooling2D((2, 2)),
#Add in a Dropout layer to reduce overfitting
layers.Dropout(0.2),
layers.Conv2D(64, (3, 3), activation='relu'),
layers.MaxPooling2D((2, 2)),
layers.Dropout(0.2),
layers.Conv2D(64, (3, 3), activation='relu'),
#Flatten the data to prepare to use Dense
layers.Flatten(),
layers.Dropout(0.2),
layers.Dense(64, activation='relu'),
layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Next we'll compile the model using adam as an optimizer.
###Code
model1.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Next we'll train the model on the dataset for twenty epochs, printing the results each time.
###Code
history1 = model1.fit(train_dataset,
epochs=20,
validation_data=validation_dataset)
###Output
Epoch 1/20
63/63 [==============================] - 9s 127ms/step - loss: 0.7230 - accuracy: 0.5265 - val_loss: 2.2136 - val_accuracy: 0.5050
Epoch 2/20
63/63 [==============================] - 12s 180ms/step - loss: 0.7079 - accuracy: 0.5340 - val_loss: 2.0323 - val_accuracy: 0.5309
Epoch 3/20
63/63 [==============================] - 8s 122ms/step - loss: 0.7072 - accuracy: 0.5295 - val_loss: 2.0843 - val_accuracy: 0.5149
Epoch 4/20
63/63 [==============================] - 8s 116ms/step - loss: 0.6846 - accuracy: 0.5590 - val_loss: 1.7832 - val_accuracy: 0.5347
Epoch 5/20
63/63 [==============================] - 8s 113ms/step - loss: 0.6688 - accuracy: 0.5955 - val_loss: 1.5376 - val_accuracy: 0.5730
Epoch 6/20
63/63 [==============================] - 8s 115ms/step - loss: 0.6543 - accuracy: 0.6105 - val_loss: 1.5109 - val_accuracy: 0.5569
Epoch 7/20
63/63 [==============================] - 7s 114ms/step - loss: 0.6215 - accuracy: 0.6345 - val_loss: 1.2644 - val_accuracy: 0.5507
Epoch 8/20
63/63 [==============================] - 8s 114ms/step - loss: 0.6158 - accuracy: 0.6545 - val_loss: 1.2523 - val_accuracy: 0.5532
Epoch 9/20
63/63 [==============================] - 7s 114ms/step - loss: 0.5433 - accuracy: 0.6965 - val_loss: 1.2825 - val_accuracy: 0.5594
Epoch 10/20
63/63 [==============================] - 8s 114ms/step - loss: 0.5264 - accuracy: 0.7120 - val_loss: 0.9881 - val_accuracy: 0.5681
Epoch 11/20
63/63 [==============================] - 8s 115ms/step - loss: 0.5002 - accuracy: 0.7375 - val_loss: 1.2785 - val_accuracy: 0.5606
Epoch 12/20
63/63 [==============================] - 8s 115ms/step - loss: 0.4901 - accuracy: 0.7495 - val_loss: 1.0352 - val_accuracy: 0.6002
Epoch 13/20
63/63 [==============================] - 8s 116ms/step - loss: 0.4266 - accuracy: 0.7900 - val_loss: 1.3286 - val_accuracy: 0.5854
Epoch 14/20
63/63 [==============================] - 7s 114ms/step - loss: 0.4166 - accuracy: 0.8220 - val_loss: 1.3362 - val_accuracy: 0.5755
Epoch 15/20
63/63 [==============================] - 8s 129ms/step - loss: 0.3593 - accuracy: 0.8360 - val_loss: 1.4885 - val_accuracy: 0.5941
Epoch 16/20
63/63 [==============================] - 9s 126ms/step - loss: 0.3348 - accuracy: 0.8420 - val_loss: 1.3269 - val_accuracy: 0.5990
Epoch 17/20
63/63 [==============================] - 8s 115ms/step - loss: 0.3294 - accuracy: 0.8435 - val_loss: 1.5562 - val_accuracy: 0.5854
Epoch 18/20
63/63 [==============================] - 8s 127ms/step - loss: 0.2842 - accuracy: 0.8715 - val_loss: 1.8595 - val_accuracy: 0.6040
Epoch 19/20
63/63 [==============================] - 8s 115ms/step - loss: 0.2749 - accuracy: 0.8830 - val_loss: 1.7500 - val_accuracy: 0.6448
Epoch 20/20
63/63 [==============================] - 7s 114ms/step - loss: 0.2567 - accuracy: 0.8855 - val_loss: 1.9522 - val_accuracy: 0.6337
###Markdown
Now let's visualize the results
###Code
plt.plot(history1.history["accuracy"], label = "training")
plt.plot(history1.history["val_accuracy"], label = "validation")
plt.gca().set(xlabel = "epoch", ylabel = "accuracy", title = "Model 1")
plt.legend()
###Output
_____no_output_____
###Markdown
The validation accuracy of model 1 stabilized **between 59% and 63%** during training. This is better than the 50% baseline, but still not very useful. The model does appear to be overfit: training accuracy climbs to nearly 89% by the final epoch, while validation accuracy rises only slightly.
3 Data Augmentation
The RandomFlip layer flips an image in different orientations. Let's apply it a few times to an image and see what it looks like.
###Code
#Make the layer
flip = tf.keras.layers.RandomFlip()
#Make a figure
plt.figure(figsize=(12,5))
#Get the images and labels for the first batch of data
for images, labels in train_dataset.take(1):
#We'll use the first image in the dataset
image = images[0]
#For 5 indicies
for i in range(5):
#Get the axis
ax = plt.subplot(2,5,i+1)
#If it's not the first (which we're keeping the same)
if i > 0:
#Flip it
image = flip(image)
#Display the image
plt.imshow(image.numpy().astype('uint8'))
plt.axis("off")
###Output
_____no_output_____
###Markdown
The RandomRotation layer rotates the image by a random angle. We'll create a similar display for it.
###Code
#Make a rotate layer
#The factor gives the rotation range as a fraction of a full circle (2*pi), so (-1,1) allows any rotation
rotate = tf.keras.layers.RandomRotation((-1,1))
#Make another figure and do the same thing
plt.figure(figsize=(12,5))
for images, labels in train_dataset.take(1):
image = images[0]
for i in range(5):
ax = plt.subplot(2,5,i+1)
if i > 0:
#Use rotate this time
image = rotate(image)
plt.imshow(image.numpy().astype('uint8'))
plt.axis("off")
###Output
_____no_output_____
###Markdown
Now we'll make another model incorporating these layers and compile it
###Code
model2 = models.Sequential([
#Add data augmentation layers
layers.RandomFlip(),
layers.RandomRotation((-1,1)),
#The rest of the model is the same
layers.Conv2D(64, (3, 3), activation='relu', ),
layers.MaxPooling2D((2, 2)),
layers.Dropout(0.2),
layers.Conv2D(64, (3, 3), activation='relu'),
layers.MaxPooling2D((2, 2)),
layers.Dropout(0.2),
layers.Conv2D(64, (3, 3), activation='relu'),
layers.Flatten(),
layers.Dropout(0.2),
layers.Dense(64, activation='relu'),
layers.Dense(10)
])
model2.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Now we'll train it, again for 20 epochs.
###Code
history2 = model2.fit(train_dataset,
epochs=20,
validation_data=validation_dataset)
###Output
Epoch 1/20
63/63 [==============================] - 21s 149ms/step - loss: 27.4879 - accuracy: 0.5090 - val_loss: 1.9692 - val_accuracy: 0.5124
Epoch 2/20
63/63 [==============================] - 10s 147ms/step - loss: 0.7463 - accuracy: 0.4895 - val_loss: 1.8112 - val_accuracy: 0.4790
Epoch 3/20
63/63 [==============================] - 7s 113ms/step - loss: 0.7405 - accuracy: 0.4975 - val_loss: 1.8224 - val_accuracy: 0.4901
Epoch 4/20
63/63 [==============================] - 7s 111ms/step - loss: 0.7192 - accuracy: 0.5185 - val_loss: 1.6957 - val_accuracy: 0.4864
Epoch 5/20
63/63 [==============================] - 7s 112ms/step - loss: 0.7252 - accuracy: 0.5000 - val_loss: 1.7348 - val_accuracy: 0.5124
Epoch 6/20
63/63 [==============================] - 7s 112ms/step - loss: 0.7091 - accuracy: 0.5335 - val_loss: 1.6756 - val_accuracy: 0.5433
Epoch 7/20
63/63 [==============================] - 7s 113ms/step - loss: 0.7018 - accuracy: 0.5285 - val_loss: 1.5446 - val_accuracy: 0.5681
Epoch 8/20
63/63 [==============================] - 7s 112ms/step - loss: 0.7077 - accuracy: 0.5270 - val_loss: 1.3831 - val_accuracy: 0.5520
Epoch 9/20
63/63 [==============================] - 9s 131ms/step - loss: 0.6874 - accuracy: 0.5420 - val_loss: 1.5037 - val_accuracy: 0.5743
Epoch 10/20
63/63 [==============================] - 10s 143ms/step - loss: 0.6860 - accuracy: 0.5510 - val_loss: 1.4514 - val_accuracy: 0.5879
Epoch 11/20
63/63 [==============================] - 10s 159ms/step - loss: 0.6853 - accuracy: 0.5460 - val_loss: 1.1610 - val_accuracy: 0.5903
Epoch 12/20
63/63 [==============================] - 10s 144ms/step - loss: 0.6671 - accuracy: 0.5755 - val_loss: 1.2077 - val_accuracy: 0.6089
Epoch 13/20
63/63 [==============================] - 9s 139ms/step - loss: 0.6793 - accuracy: 0.5910 - val_loss: 1.0984 - val_accuracy: 0.6101
Epoch 14/20
63/63 [==============================] - 10s 142ms/step - loss: 0.6431 - accuracy: 0.6115 - val_loss: 0.8950 - val_accuracy: 0.6460
Epoch 15/20
63/63 [==============================] - 9s 130ms/step - loss: 0.6643 - accuracy: 0.6035 - val_loss: 0.8793 - val_accuracy: 0.5507
Epoch 16/20
63/63 [==============================] - 7s 112ms/step - loss: 0.6636 - accuracy: 0.6190 - val_loss: 0.7137 - val_accuracy: 0.6262
Epoch 17/20
63/63 [==============================] - 7s 113ms/step - loss: 0.6537 - accuracy: 0.6095 - val_loss: 0.9494 - val_accuracy: 0.6040
Epoch 18/20
63/63 [==============================] - 7s 112ms/step - loss: 0.6456 - accuracy: 0.6355 - val_loss: 0.7276 - val_accuracy: 0.6337
Epoch 19/20
63/63 [==============================] - 7s 113ms/step - loss: 0.6458 - accuracy: 0.6315 - val_loss: 0.7690 - val_accuracy: 0.6114
Epoch 20/20
63/63 [==============================] - 7s 113ms/step - loss: 0.6394 - accuracy: 0.6330 - val_loss: 0.6679 - val_accuracy: 0.6658
###Markdown
And display our results.
###Code
plt.plot(history2.history["accuracy"], label = "training")
plt.plot(history2.history["val_accuracy"], label = "validation")
plt.gca().set(xlabel = "epoch", ylabel = "accuracy", title = "Model 2")
plt.legend()
###Output
_____no_output_____
###Markdown
The validation accuracy is **between 60% and 66%** during training. This is slightly higher than, but still similar to, what we achieved with model 1. This time the model does not seem to be overfit, as the training and validation accuracies stay close to each other.
4 Data Preprocessing
Now we're going to properly scale our data before training. Instead of leaving the RGB values in the 0-255 range, we'll run them through MobileNetV2's preprocessing, which rescales them to a small, centered range ([-1, 1]).
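For reference, a hand-rolled alternative to MobileNetV2's `preprocess_input` could use a `Rescaling` layer. This is only a sketch (it assumes a recent Keras where `layers.Rescaling` is available); the cell below uses `preprocess_input` instead.

```python
from tensorflow.keras import layers

# [0, 255] -> [0, 1], the simple rescaling often used for image inputs
to_unit_range = layers.Rescaling(1./255)

# [0, 255] -> [-1, 1], which matches what MobileNetV2's preprocess_input does to pixel values
to_symmetric_range = layers.Rescaling(1./127.5, offset=-1)
```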
###Code
i = tf.keras.Input(shape=(160, 160, 3))
x = tf.keras.applications.mobilenet_v2.preprocess_input(i)
#Create a preprocessing layer
preprocessor = tf.keras.Model(inputs = [i], outputs = [x])
###Output
_____no_output_____
###Markdown
The next model will incorporate this layer along with everything else we had before.
###Code
model3 = models.Sequential([
#We add the preprocessing layer
preprocessor,
#But much of the rest is the same
layers.RandomFlip(),
layers.RandomRotation((-1,1)),
layers.Conv2D(64, (3, 3), activation='relu', ),
layers.MaxPooling2D((2, 2)),
layers.Dropout(0.2),
layers.Conv2D(64, (3, 3), activation='relu'),
layers.MaxPooling2D((2, 2)),
layers.Dropout(0.2),
layers.Conv2D(64, (3, 3), activation='relu'),
layers.Flatten(),
layers.Dropout(0.2),
layers.Dense(64, activation='relu'),
#With an additional dense layer to increase accuracy
layers.Dense(64, activation='relu'),
layers.Dense(10)
])
#Compile model
model3.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Now we'll train our third model.
###Code
history3 = model3.fit(train_dataset,
epochs=20,
validation_data=validation_dataset)
###Output
Epoch 1/20
63/63 [==============================] - 9s 116ms/step - loss: 0.8453 - accuracy: 0.5055 - val_loss: 0.6683 - val_accuracy: 0.6200
Epoch 2/20
63/63 [==============================] - 7s 114ms/step - loss: 0.6724 - accuracy: 0.5715 - val_loss: 0.6708 - val_accuracy: 0.5495
Epoch 3/20
63/63 [==============================] - 7s 113ms/step - loss: 0.6566 - accuracy: 0.6015 - val_loss: 0.6310 - val_accuracy: 0.6572
Epoch 4/20
63/63 [==============================] - 7s 114ms/step - loss: 0.6421 - accuracy: 0.6130 - val_loss: 0.6198 - val_accuracy: 0.6634
Epoch 5/20
63/63 [==============================] - 7s 113ms/step - loss: 0.6247 - accuracy: 0.6435 - val_loss: 0.6292 - val_accuracy: 0.6250
Epoch 6/20
63/63 [==============================] - 8s 115ms/step - loss: 0.6058 - accuracy: 0.6545 - val_loss: 0.6353 - val_accuracy: 0.6324
Epoch 7/20
63/63 [==============================] - 7s 113ms/step - loss: 0.6034 - accuracy: 0.6735 - val_loss: 0.6004 - val_accuracy: 0.6671
Epoch 8/20
63/63 [==============================] - 7s 114ms/step - loss: 0.5828 - accuracy: 0.6810 - val_loss: 0.5959 - val_accuracy: 0.6819
Epoch 9/20
63/63 [==============================] - 7s 113ms/step - loss: 0.5776 - accuracy: 0.6855 - val_loss: 0.6740 - val_accuracy: 0.6287
Epoch 10/20
63/63 [==============================] - 7s 113ms/step - loss: 0.6288 - accuracy: 0.6280 - val_loss: 0.5918 - val_accuracy: 0.6894
Epoch 11/20
63/63 [==============================] - 7s 113ms/step - loss: 0.5801 - accuracy: 0.6945 - val_loss: 0.5855 - val_accuracy: 0.6819
Epoch 12/20
63/63 [==============================] - 7s 112ms/step - loss: 0.5728 - accuracy: 0.6800 - val_loss: 0.6041 - val_accuracy: 0.6832
Epoch 13/20
63/63 [==============================] - 7s 114ms/step - loss: 0.5614 - accuracy: 0.7050 - val_loss: 0.6576 - val_accuracy: 0.6621
Epoch 14/20
63/63 [==============================] - 7s 113ms/step - loss: 0.5751 - accuracy: 0.7035 - val_loss: 0.5757 - val_accuracy: 0.7191
Epoch 15/20
63/63 [==============================] - 7s 113ms/step - loss: 0.5459 - accuracy: 0.7210 - val_loss: 0.5566 - val_accuracy: 0.7228
Epoch 16/20
63/63 [==============================] - 7s 113ms/step - loss: 0.5477 - accuracy: 0.7180 - val_loss: 0.5677 - val_accuracy: 0.7092
Epoch 17/20
63/63 [==============================] - 7s 113ms/step - loss: 0.5457 - accuracy: 0.7260 - val_loss: 0.5623 - val_accuracy: 0.7092
Epoch 18/20
63/63 [==============================] - 8s 114ms/step - loss: 0.5304 - accuracy: 0.7325 - val_loss: 0.5976 - val_accuracy: 0.6720
Epoch 19/20
63/63 [==============================] - 7s 113ms/step - loss: 0.5453 - accuracy: 0.7220 - val_loss: 0.5797 - val_accuracy: 0.7067
Epoch 20/20
63/63 [==============================] - 9s 146ms/step - loss: 0.5219 - accuracy: 0.7430 - val_loss: 0.5625 - val_accuracy: 0.7277
###Markdown
And display our results
###Code
plt.plot(history3.history["accuracy"], label = "training")
plt.plot(history3.history["val_accuracy"], label = "validation")
plt.gca().set(xlabel = "epoch", ylabel = "accuracy", title = "Model 3")
plt.legend()
###Output
_____no_output_____
###Markdown
The validation accuracy of our model mostly stabilizes between 70% and 72%. This is higher than model 2, but we would still like it to be more accurate. There does not seem to be overfitting, as the training and validation accuracy closely follow each other.
Transfer Learning
Previously we've used models that we've trained from scratch. Now we're going to use a model for image recognition that has already been built, as a base model. We'll download that here.
###Code
IMG_SHAPE = IMG_SIZE + (3,)
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
include_top=False,
weights='imagenet')
base_model.trainable = False
i = tf.keras.Input(shape=IMG_SHAPE)
x = base_model(i, training = False)
base_model_layer = tf.keras.Model(inputs = [i], outputs = [x])
###Output
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/mobilenet_v2/mobilenet_v2_weights_tf_dim_ordering_tf_kernels_1.0_160_no_top.h5
9412608/9406464 [==============================] - 0s 0us/step
9420800/9406464 [==============================] - 0s 0us/step
###Markdown
For this model we'll use a simplfied version of our previous model and add in the layer for the base model.
###Code
model4 = models.Sequential([
#Use preprocessing layers
preprocessor,
#Data Augmentation layer
layers.RandomFlip(),
layers.RandomRotation((-1,1)),
    #Base model layer
base_model_layer,
#Flatten the data to avoid an error
layers.Flatten(),
#And just one Dense layer to train the model for our purpose
layers.Dense(2)
])
#And compile it
model4.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Let's look at the structure of our model.
###Code
model4.summary()
###Output
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
model (Functional) (None, 160, 160, 3) 0
random_flip_3 (RandomFlip) (None, 160, 160, 3) 0
random_rotation_3 (RandomRo (None, 160, 160, 3) 0
tation)
model_1 (Functional) (None, 5, 5, 1280) 2257984
flatten_2 (Flatten) (None, 32000) 0
dense_4 (Dense) (None, 2) 64002
=================================================================
Total params: 2,321,986
Trainable params: 64,002
Non-trainable params: 2,257,984
_________________________________________________________________
###Markdown
Now we'll train this model.
###Code
history4 = model4.fit(train_dataset,
epochs=20,
validation_data=validation_dataset)
###Output
Epoch 1/20
63/63 [==============================] - 13s 119ms/step - loss: 0.8487 - accuracy: 0.8640 - val_loss: 0.1467 - val_accuracy: 0.9703
Epoch 2/20
63/63 [==============================] - 6s 90ms/step - loss: 0.6197 - accuracy: 0.9085 - val_loss: 0.3898 - val_accuracy: 0.9542
Epoch 3/20
63/63 [==============================] - 6s 89ms/step - loss: 0.6667 - accuracy: 0.9110 - val_loss: 0.3032 - val_accuracy: 0.9629
Epoch 4/20
63/63 [==============================] - 6s 89ms/step - loss: 0.6914 - accuracy: 0.9190 - val_loss: 0.2712 - val_accuracy: 0.9715
Epoch 5/20
63/63 [==============================] - 6s 88ms/step - loss: 0.5833 - accuracy: 0.9255 - val_loss: 0.3050 - val_accuracy: 0.9567
Epoch 6/20
63/63 [==============================] - 6s 89ms/step - loss: 0.4787 - accuracy: 0.9405 - val_loss: 0.2995 - val_accuracy: 0.9592
Epoch 7/20
63/63 [==============================] - 6s 89ms/step - loss: 0.8115 - accuracy: 0.9060 - val_loss: 0.2668 - val_accuracy: 0.9703
Epoch 8/20
63/63 [==============================] - 6s 89ms/step - loss: 0.7204 - accuracy: 0.9325 - val_loss: 0.4704 - val_accuracy: 0.9530
Epoch 9/20
63/63 [==============================] - 6s 90ms/step - loss: 0.6673 - accuracy: 0.9335 - val_loss: 0.3548 - val_accuracy: 0.9678
Epoch 10/20
63/63 [==============================] - 6s 89ms/step - loss: 0.6656 - accuracy: 0.9355 - val_loss: 0.3654 - val_accuracy: 0.9691
Epoch 11/20
63/63 [==============================] - 6s 89ms/step - loss: 0.6762 - accuracy: 0.9300 - val_loss: 0.3984 - val_accuracy: 0.9604
Epoch 12/20
63/63 [==============================] - 6s 89ms/step - loss: 0.7155 - accuracy: 0.9415 - val_loss: 0.6922 - val_accuracy: 0.9418
Epoch 13/20
63/63 [==============================] - 6s 89ms/step - loss: 0.6312 - accuracy: 0.9485 - val_loss: 0.4733 - val_accuracy: 0.9629
Epoch 14/20
63/63 [==============================] - 6s 89ms/step - loss: 0.4126 - accuracy: 0.9540 - val_loss: 0.3215 - val_accuracy: 0.9752
Epoch 15/20
63/63 [==============================] - 6s 89ms/step - loss: 0.3864 - accuracy: 0.9610 - val_loss: 0.4246 - val_accuracy: 0.9666
Epoch 16/20
63/63 [==============================] - 6s 89ms/step - loss: 0.4272 - accuracy: 0.9545 - val_loss: 0.4216 - val_accuracy: 0.9703
Epoch 17/20
63/63 [==============================] - 6s 89ms/step - loss: 0.4698 - accuracy: 0.9505 - val_loss: 0.6562 - val_accuracy: 0.9579
Epoch 18/20
63/63 [==============================] - 6s 91ms/step - loss: 0.5730 - accuracy: 0.9505 - val_loss: 0.4798 - val_accuracy: 0.9691
Epoch 19/20
63/63 [==============================] - 7s 114ms/step - loss: 0.5984 - accuracy: 0.9405 - val_loss: 0.3949 - val_accuracy: 0.9691
Epoch 20/20
63/63 [==============================] - 6s 89ms/step - loss: 0.4480 - accuracy: 0.9540 - val_loss: 0.4958 - val_accuracy: 0.9641
###Markdown
Now let's visualize the accuracy
###Code
plt.plot(history4.history["accuracy"], label = "training")
plt.plot(history4.history["val_accuracy"], label = "validation")
plt.gca().set(xlabel = "epoch", ylabel = "accuracy", title = "Model 4")
plt.legend()
###Output
_____no_output_____
###Markdown
The validation accuracy of our model is **between 95% and 97%**. This is by far the highest, and it is high from the very first epoch. The model is not overfit; in fact, the accuracy on the training data is below that of the validation data, because the random flip and rotation layers are only active during training.
6 Score on Test Data
Our best model was model 4, so I'll use that to evaluate on the test data.
###Code
model4.evaluate(test_dataset)
###Output
6/6 [==============================] - 1s 104ms/step - loss: 0.8994 - accuracy: 0.9688
|
feature-prep/spark/SparkMailingListFeaturePrep.ipynb | ###Markdown
We could make this a transformer stage, but I'm lazy so we'll just use a UDF directly.
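The UDFs referenced in the next cell (extract_links_udf, contains_python_stack_trace_udf, and so on) are defined earlier in the notebook and aren't shown in this excerpt. As a rough sketch of what such helpers might look like (the regex and the traceback heuristic here are assumptions, not the notebook's actual definitions):

```python
import re
from pyspark.sql.functions import udf
from pyspark.sql.types import ArrayType, BooleanType, StringType

link_regex = re.compile(r"https?://[^\s>]+")

@udf(returnType=ArrayType(StringType()))
def extract_links_udf(body):
    # Pull every http(s) URL out of the message body
    return link_regex.findall(body) if body else []

@udf(returnType=BooleanType())
def contains_python_stack_trace_udf(body):
    # Crude heuristic for a Python traceback in the message body
    return bool(body) and "Traceback (most recent call last)" in body
```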
###Code
annotated_spark_mailing_list_data = posts_with_labels.select(
"*",
extract_links_udf(posts_with_labels["body"]).alias("links_in_email"),
contains_python_stack_trace_udf(posts_with_labels.body).alias("contains_python_stack_trace").cast("double"),
contains_probably_java_stack_trace_udf(posts_with_labels.body).alias("contains_java_stack_trace").cast("double"),
contains_exception_in_task_udf(posts_with_labels.body).alias("contains_exception_in_task").cast("double"))
annotated_spark_mailing_list_data.cache()
annotated_spark_mailing_list_data.show()
further_annotated = annotated_spark_mailing_list_data.withColumn(
"domain_links",
extract_domains_udf(annotated_spark_mailing_list_data.links_in_email))
# Long story, allow mixed UDF types
further_annotated.cache()
further_annotated.count()
#tag::make_features[]
tokenizer = Tokenizer(inputCol="body", outputCol="body_tokens")
body_hashing = HashingTF(
inputCol="body_tokens", outputCol="raw_body_features",
numFeatures=10000)
body_idf = IDF(
inputCol="raw_body_features", outputCol="body_features")
body_word2Vec = Word2Vec(
vectorSize=5, minCount=0, numPartitions=10,
inputCol="body_tokens", outputCol="body_vecs")
assembler = VectorAssembler(
inputCols=[
"body_features", "body_vecs", "contains_python_stack_trace",
"contains_java_stack_trace", "contains_exception_in_task"],
outputCol="features")
#end::make_features[]
featureprep_pipeline = Pipeline(
stages=[tokenizer, body_hashing, body_idf, body_word2Vec, assembler])
featureprep_pipeline_transformer = featureprep_pipeline.fit(further_annotated)
preped_data = featureprep_pipeline_transformer.transform(further_annotated)
featureprep_pipeline_transformer.write().save(fs_prefix+"/feature_prep-2")
preped_data.write.format("parquet").mode("overwrite").save(fs_prefix+"/prepared_data")
###Output
_____no_output_____ |
Data Science with Jupyter.ipynb | ###Markdown
The Boston Housing Dataset
###Code
from sklearn import datasets
boston = datasets.load_boston()
type(boston)
from sklearn.utils import Bunch
# A Bunch is basically a dictionary and can be treated as such
Bunch?
boston['DESCR']
import pandas as pd
pd.DataFrame?
# What does the data look like
boston['data']
# The data is a 2D numpy array
boston['data'].shape
# The feature names to be used as column headers
boston['feature_names']
# Load the data into a pandas dataframe
df = pd.DataFrame(data=boston['data'], columns=boston['feature_names'])
# We need to confirm and view the 'target' variable, which will be used in the ML model
# The target feature will be MEDV, which is the median house value in $1,000 units
boston['target'].shape
# Set the target to 'MEDV'
df['MEDV'] = boston['target']
# Move the target feature to the 'front' of the dataframe for easy identification
y = df['MEDV'].copy() # dummy variable to hold a copy of the target column
del df['MEDV'] # delete the old column
df = pd.concat((y, df), axis=1) # use pandas concat to put the target back as the first column (axis=1 concatenates along columns)
# Display the 'head' or start of the dataframe
df.head()
# Display the 'tail' or end of the dataframe
df.tail()
# Check how many records are in the dataframe
len(df)
# Investigate the dataframe
df.info()
# Investigate the datatypes in each feature
df.dtypes
# Identify null values in the features
df.isnull()
# Get the sum of null values in each feature
df.isnull().sum()
# Results - there are no null values in any feature
# There are some extra columns that we don't need for this project, so we can remove those from the dataframe
for col in ['ZN', 'NOX', 'RAD', 'PTRATIO', 'B']:
    del df[col]
###Output
_____no_output_____ |
BCDVGG16.ipynb | ###Markdown
###Code
import cv2
import pickle
import os.path
import matplotlib.pyplot as plt
from imutils import paths
from sklearn.preprocessing import LabelBinarizer
import numpy as np
from sklearn.model_selection import train_test_split
from keras.layers import Dense,GlobalAveragePooling2D,Dropout
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Model
from keras.models import model_from_json
from sklearn import metrics
from sklearn.metrics import roc_curve,auc,accuracy_score,classification_report
from keras.optimizers import SGD
from keras import models
from sklearn import svm
!git clone https://github.com/magus96/BCDProj.git
cd '/content/BCDProj/'
from helpers import resize_to_fit
path='/content/BCDProj/Test/'
data=[]
labels=[]
for image_file in paths.list_images(path):
image=cv2.imread(image_file)
image=resize_to_fit(image,224,224)
label=image_file.split(os.path.sep)[-2]
data.append(image)
labels.append(label)
data = np.array(data, dtype="float") / 255.0
labels=np.array(labels)
print(data.shape)
print(labels.shape)
print(labels)
datagen = ImageDataGenerator(
featurewise_center=True,
featurewise_std_normalization=True,
rotation_range=20,
width_shift_range=0.2,
height_shift_range=0.2,
horizontal_flip=True)
vgg_conv = VGG16(weights='imagenet',include_top=False,input_shape=(224, 224, 3))
nTrain=1844
train_features = np.zeros(shape=(nTrain, 7, 7, 512))
datagen.fit(data)
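# Note: datagen is fitted on the data here, but the features below are extracted with
# vgg_conv.predict(data) on the raw rescaled images, so the generator's augmentation and
# featurewise normalization are not actually applied to these features.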
train_features=vgg_conv.predict(data)
print(train_features.shape)
train_features = np.reshape(train_features, (nTrain, 7 * 7 * 512))
from sklearn.linear_model import LogisticRegression
clf=LogisticRegression()
clf.fit(train_features,labels)
from helpers import resize_to_fit
path='/content/BCDProj/Validation/'
data1=[]
labels1=[]
for image_file in paths.list_images(path):
image=cv2.imread(image_file)
image=resize_to_fit(image,224,224)
label=image_file.split(os.path.sep)[-2]
data1.append(image)
labels1.append(label)
data1 = np.array(data1, dtype="float") / 255.0
labels1 = np.array(labels1)
print(data1.shape)
test_features=np.zeros(shape=(221,7,7,512))
test_features=vgg_conv.predict(data1)
print(test_features.shape)
test_features = np.reshape(test_features, (221, 7 * 7 * 512))
preds=clf.predict(test_features)
print(preds)
print(accuracy_score(preds,labels1))
filename = 'log_model.sav'
pickle.dump(clf, open(filename, 'wb'))
print(classification_report(labels1,preds))
###Output
_____no_output_____ |
chapter08_intro_to_dl_for_computer_vision_FIXED_i.ipynb | ###Markdown
This is a companion notebook for the book [Deep Learning with Python, Second Edition](https://www.manning.com/books/deep-learning-with-python-second-edition?a_aid=keras&a_bid=76564dff). For readability, it only contains runnable code blocks and section titles, and omits everything else in the book: text paragraphs, figures, and pseudocode. **If you want to be able to follow what's going on, I recommend reading the notebook side by side with your copy of the book.** This notebook was generated for TensorFlow 2.6.

Introduction to deep learning for computer vision
1. First, a convolutional network is shown to have superior performance on MNIST data compared to the densely connected networks of chapter 2.
2. To understand why convnets work better on image data, a deep dive is taken into the nuts and bolts of convolutional networks: the convolutional layers and the MaxPooling layers.
3. A convolutional neural network is trained from scratch on a dataset of cat and dog images.
4. The final section shows how to use **pretrained models**, which is a highly effective approach for deep learning on small image datasets.

Introduction to convnets
- The convnets trained on the MNIST handwritten digits dataset are compared with the results obtained using a densely connected network (chapter 2). It will be seen below that even simple convnets have superior performance on image data relative to densely connected networks.
- The convnet below is a stack of Conv2D and MaxPooling2D layers.
- The Functional API of Keras will be used to build the model.

**Instantiating a small convnet**
The Convolution and MaxPooling layers will be explained in the next subsection. First look at the full Keras convnet model architecture defined below (we will measure performance after the convnet is trained).
- The input shape is (image_width x image_height x image_channels). MNIST handwritten-digit images are of size (28 x 28) with only one channel (grayscale), and the batch dimension is not included, so the input shape is (28 x 28 x 1).
- The different types of layers and keywords such as *kernel_size* and *pool_size* will be explained later. But first run the model and look at the shapes of the output layers.
###Code
from tensorflow import keras
from tensorflow.keras import layers
inputs = keras.Input(shape=(28, 28, 1)) # allowing for only 1 color channel, i.e. grayscale
x = layers.Conv2D(filters=32, kernel_size=3, activation="relu")(inputs)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=64, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=128, kernel_size=3, activation="relu")(x)
x = layers.Flatten()(x)
outputs = layers.Dense(10, activation="softmax")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
###Output
_____no_output_____
###Markdown
**Displaying the model's summary**
###Code
model.summary()
###Output
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 28, 28, 1)] 0
conv2d (Conv2D) (None, 26, 26, 32) 320
max_pooling2d (MaxPooling2D (None, 13, 13, 32) 0
)
conv2d_1 (Conv2D) (None, 11, 11, 64) 18496
max_pooling2d_1 (MaxPooling (None, 5, 5, 64) 0
2D)
conv2d_2 (Conv2D) (None, 3, 3, 128) 73856
flatten (Flatten) (None, 1152) 0
dense (Dense) (None, 10) 11530
=================================================================
Total params: 104,202
Trainable params: 104,202
Non-trainable params: 0
_________________________________________________________________
###Markdown
- Notice that we start from an input with dimensions (28x28x1). After each Conv2D layer the height and width dimensions shrink, and the number of channels grows. The number of channels is specified by the first argument passed to the Conv2D layer (i.e. filters=).
- After the last Conv2D layer the output shape is 3x3x128 -- a 3x3 feature map with 128 channels.
- We want to feed this output into a Dense layer, but Dense layers accept 1D input while the output from the last convolutional layer is still a 3D tensor.
- So the 3D output is flattened with layers.Flatten(), giving a vector of 3 x 3 x 128 = 1,152 values, which matches the Flatten output in the summary above.
- Finally, the flattened input is passed into a Dense layer with 10 units, which corresponds to the 10-way classification needed for the digits 0-9.
- Training the network is next.

**Training the convnet on MNIST images**
- Because this is a 10-way classification, the output layer is softmax, and the loss function is sparse_categorical_crossentropy.
###Code
from tensorflow.keras.datasets import mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images.reshape((60000, 28, 28, 1))
train_images = train_images.astype("float32") / 255
test_images = test_images.reshape((10000, 28, 28, 1))
test_images = test_images.astype("float32") / 255
model.compile(optimizer="rmsprop",
loss="sparse_categorical_crossentropy",
metrics=["accuracy"])
model.fit(train_images, train_labels, epochs=5, batch_size=64)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
11501568/11490434 [==============================] - 0s 0us/step
Epoch 1/5
938/938 [==============================] - 18s 9ms/step - loss: 0.1535 - accuracy: 0.9523
Epoch 2/5
938/938 [==============================] - 7s 8ms/step - loss: 0.0438 - accuracy: 0.9868
Epoch 3/5
938/938 [==============================] - 7s 8ms/step - loss: 0.0309 - accuracy: 0.9905
Epoch 4/5
938/938 [==============================] - 7s 8ms/step - loss: 0.0228 - accuracy: 0.9930
Epoch 5/5
938/938 [==============================] - 7s 8ms/step - loss: 0.0177 - accuracy: 0.9943
###Markdown
**Evaluating the convnet**
Now that the convnet model is trained, we can evaluate it to measure its performance and compare it to the performance of the model with densely connected layers (chapter 2).
###Code
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(f"Test accuracy: {test_acc:.3f}")
###Output
313/313 [==============================] - 1s 4ms/step - loss: 0.0277 - accuracy: 0.9919
Test accuracy: 0.992
###Markdown
The test accuracy is about 99.2%, which is higher than the roughly 97.8% obtained using densely connected layers (chapter 2). The error rate (1 - accuracy) is roughly 60% lower!

The convolution operation
To understand why the simple convnet works so well compared to a densely connected network, let's dive into what the Conv2D and MaxPooling layers do.
- Dense layers learn global patterns in the input feature space.
- Convolutional layers learn local patterns, e.g. in a 2D window of size 3x3 in the case of MNIST images. They have two properties:
  - *The patterns they learn are translation invariant*: a pattern learnt at a specific location in the picture will be recognized anywhere.
  - *They can learn spatial hierarchies of patterns*: the first convolutional layer could learn small local patterns such as edges, and the second convolutional layer could learn larger patterns made up of features of the first layer.
- Convolutions operate on 3D tensors, called feature maps. There are two spatial axes -- height and width -- and a depth (or channel) axis representing the color channels. With an RGB image, this dimension is 3.
- The convolution operation extracts 3D patches and transforms them into an output feature map (still a 3D tensor). It has height, width and depth (channel axis). However, the channel axis is no longer limited to RGB (i.e. dimension 3); it can have an arbitrary number of channels, defined by the number of filters specified for the layer.
- Filters encode specific aspects of the input data, which can build up to higher-level concepts such as the 'presence of a face'.
- The first convolutional layer (26x26x32) has 32 output channels, each of which is the response obtained by applying one of the filters over the 26x26 grid of positions.
- Convolutions have two key parameters:
  - the size of the patches extracted (3x3 or 5x5) -- remember each patch is also a 3D tensor, and
  - the depth of the output feature map (32 for the first convolution in our model, 128 for the last).
- Convolution involves sliding 3x3 windows over the entire 3D input feature map. Each of the extracted 3D patches is **converted into a 1D vector** by taking its tensor product with the **convolution kernel**, which is **a matrix of learned weights** that remains the same (i.e. is re-used) for each patch. All these 1D vectors are reassembled to produce a 3D output map with shape (height, width, output_depth).

Understanding border effects and padding
Padding becomes necessary because you cannot center a 3x3 (or 5x5) window around every cell in the feature map. A 5x5 feature map would allow you to center a 3x3 window on only 9 of its 25 tiles.
- Padding adds an appropriate number of rows and columns to allow centering a convolution window around each tile in the feature map.
- If padding is not done, then after each convolutional layer the output shrinks a little. In the MNIST example, the input was 28x28 and the convolution window was 3x3, which caused the output to shrink to 26x26 after the first convolutional layer.

Understanding convolution strides
- As we slide the convolution window (say 3x3) over the input feature map, it is not necessary to center it on every input tile. With the default stride of 1 the window is taken at every position; with stride 2 every other position is skipped.
- Stride 2 means we are downsampling the feature map by a factor of 2.
- In classification models we rarely use strided convolutions; they are more common in other models in the next chapter.
- In classification problems, we use max-pooling operations instead. A small NumPy sketch of the convolution mechanics (including padding and stride) is given below.
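To make the sliding-window description concrete, here is a minimal NumPy sketch of a 2D convolution with optional padding and stride. It is purely illustrative (this is not how Keras implements Conv2D), and the random input and the filter count are assumptions chosen to match the first Conv2D layer above.

```python
import numpy as np

def naive_conv2d(inputs, kernel, stride=1, padding=0):
    """inputs: (h, w, in_ch); kernel: (kh, kw, in_ch, out_ch)."""
    if padding:
        inputs = np.pad(inputs, ((padding, padding), (padding, padding), (0, 0)))
    h, w, _ = inputs.shape
    kh, kw, _, out_ch = kernel.shape
    out_h = (h - kh) // stride + 1
    out_w = (w - kw) // stride + 1
    output = np.zeros((out_h, out_w, out_ch))
    for i in range(out_h):
        for j in range(out_w):
            patch = inputs[i * stride:i * stride + kh, j * stride:j * stride + kw, :]
            # each output channel is the sum of elementwise products (tensor product)
            # of the patch with one filter; the same kernel is reused for every patch
            output[i, j, :] = np.tensordot(patch, kernel, axes=([0, 1, 2], [0, 1, 2]))
    return output

x = np.random.rand(28, 28, 1)                # one MNIST-sized grayscale image
w = np.random.rand(3, 3, 1, 32)              # 32 filters of size 3x3
print(naive_conv2d(x, w).shape)              # (26, 26, 32) -- matches the first Conv2D layer
print(naive_conv2d(x, w, padding=1).shape)   # (28, 28, 32) -- padding preserves the size
print(naive_conv2d(x, w, stride=2).shape)    # (13, 13, 32) -- stride 2 downsamples by 2
```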
The max-pooling operation
Another way of downsampling the feature space is max pooling. This is done by extracting a window and outputting the max value of each channel. Unlike the convolution kernel, this does not involve learned weights; the max is a hardcoded tensor operation.
- The window size in max pooling here is 2x2.
- The stride in max pooling is 2, which downsamples the input features. Note that after the first MaxPooling2D layer the output size is halved to 13x13 (the input size was 26x26).

**An incorrectly structured convnet missing its max-pooling layers**
###Code
inputs = keras.Input(shape=(28, 28, 1))
x = layers.Conv2D(filters=32, kernel_size=3, activation="relu")(inputs)
x = layers.Conv2D(filters=64, kernel_size=3, activation="relu")(x)
x = layers.Conv2D(filters=128, kernel_size=3, activation="relu")(x)
x = layers.Flatten()(x)
outputs = layers.Dense(10, activation="softmax")(x)
model_no_max_pool = keras.Model(inputs=inputs, outputs=outputs)
model_no_max_pool.summary()
###Output
Model: "model_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 28, 28, 1)] 0
conv2d_3 (Conv2D) (None, 26, 26, 32) 320
conv2d_4 (Conv2D) (None, 24, 24, 64) 18496
conv2d_5 (Conv2D) (None, 22, 22, 128) 73856
flatten_1 (Flatten) (None, 61952) 0
dense_1 (Dense) (None, 10) 619530
=================================================================
Total params: 712,202
Trainable params: 712,202
Non-trainable params: 0
_________________________________________________________________
###Markdown
Training a convnet from scratch on a small dataset
Not having enough data is a common situation in computer vision work in professional settings.
- Naive convnets trained on small datasets are prone to overfitting. Accuracy is only about 70% on the dogs & cats dataset (2500 dogs & 2500 cats).
- **Data augmentation** is a strategy used for mitigating overfitting in computer vision. With this strategy applied to the dogs and cats data, the accuracy improves to 80-85%.
- In the next section two more strategies are presented that improve performance to 98.5%: feature extraction with a pretrained model, and fine-tuning a pretrained model.

The relevance of deep learning for small-data problems
Convnets learn *local, translation-invariant features*, which makes them highly data-efficient on perceptual data. Training a convnet from scratch on a very small image dataset can therefore still yield reasonable results.

Downloading the data
**Before running the cell below, read the instructions in section 8.2.2 of the Chollet book.** You need to use the Kaggle API to download the dataset to Google Colab. Read the instructions in the **Sidebar box** in the book. Before running the code cell below:
1. Go to the Kaggle website.
2. Navigate to your account page and scroll down to the API section on this page.
3. Click on the "Create a new API token" tab. This will download a kaggle.json file to your downloads folder.
4. Run the cell below. It will ask you to **Choose Files**; select the kaggle.json file.
###Code
from google.colab import files
files.upload()
###Output
_____no_output_____
###Markdown
The cell below will make a directory ~/.kaggle, copy the kaggle.json file into it, and perform chmod as a security precaution.
###Code
!mkdir ~/.kaggle
!cp kaggle.json ~/.kaggle/
!chmod 600 ~/.kaggle/kaggle.json
###Output
_____no_output_____
###Markdown
The following cell will download the dataset dogs-vs-cats
###Code
!kaggle competitions download -c dogs-vs-cats # can use --force to force a re-download replacing the existing copy
!unzip -qq dogs-vs-cats.zip
!unzip -qq train.zip
!unzip -qq test1.zip
!rm -rf cats_vs_dogs_small/
!rm -rf train.zip test1.zip sampleSubmission.csv
###Output
_____no_output_____
###Markdown
**Copying images to training, validation, and test directories**
We want to have three directories (**train, validation, test**), each having subdirectories (cat, dog):
**Train**: cats_vs_dogs_small/train/cat (1000 cat images), cats_vs_dogs_small/train/dog (1000 dog images)
**Validation**: cats_vs_dogs_small/validation/cat (500 cat images), cats_vs_dogs_small/validation/dog (500 dog images)
**Test**: cats_vs_dogs_small/test/cat (1000 cat images), cats_vs_dogs_small/test/dog (1000 dog images)

The following are imported: os, shutil, pathlib. The code below is annotated as follows (1, 2, ...):
1. Path to the directory where the original data was uncompressed.
2. Directory where the smaller datasets will be stored.
3. Utility function to copy cat (respectively dog) images from index *start_index* to index *end_index* (e.g. 1000 to 1500) to the subdirectory new_base_dir/{subset_name}/cat (respectively dog). The "subset_name" will be either "train", "validation", or "test".
4. Create the training subset with the first 1000 images of each category.
5. Create the validation subset with the next 500 images of each category.
6. Create the test subset with the next 1000 images of each category.
###Code
import os, shutil, pathlib
original_dir = pathlib.Path("train") #1
new_base_dir = pathlib.Path("cats_vs_dogs_small") #2
def make_subset(subset_name, start_index, end_index): #3
for category in ("cat", "dog"):
dir = new_base_dir / subset_name / category
os.makedirs(dir)
fnames = [f"{category}.{i}.jpg" for i in range(start_index, end_index)]
for fname in fnames:
shutil.copyfile(src=original_dir / fname,
dst=dir / fname)
make_subset("train", start_index=0, end_index=1000) #4
make_subset("validation", start_index=1000, end_index=1500) #5
make_subset("test", start_index=1500, end_index=2500) #6
ls /content
###Output
cats_vs_dogs_small/  kaggle.json  test1/
dogs-vs-cats.zip  sample_data/  train/
###Markdown
Building the model **Instantiating a small convnet for dogs vs. cats classification**
###Code
from tensorflow import keras
from tensorflow.keras import layers
inputs = keras.Input(shape=(180, 180, 3))
x = layers.Rescaling(1./255)(inputs)
x = layers.Conv2D(filters=32, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=64, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=128, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=256, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=256, kernel_size=3, activation="relu")(x)
x = layers.Flatten()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.summary()
###Output
Model: "model_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 180, 180, 3)] 0
rescaling (Rescaling) (None, 180, 180, 3) 0
conv2d_3 (Conv2D) (None, 178, 178, 32) 896
max_pooling2d_2 (MaxPooling (None, 89, 89, 32) 0
2D)
conv2d_4 (Conv2D) (None, 87, 87, 64) 18496
max_pooling2d_3 (MaxPooling (None, 43, 43, 64) 0
2D)
conv2d_5 (Conv2D) (None, 41, 41, 128) 73856
max_pooling2d_4 (MaxPooling (None, 20, 20, 128) 0
2D)
conv2d_6 (Conv2D) (None, 18, 18, 256) 295168
max_pooling2d_5 (MaxPooling (None, 9, 9, 256) 0
2D)
conv2d_7 (Conv2D) (None, 7, 7, 256) 590080
flatten_1 (Flatten) (None, 12544) 0
dense_1 (Dense) (None, 1) 12545
=================================================================
Total params: 991,041
Trainable params: 991,041
Non-trainable params: 0
_________________________________________________________________
###Markdown
**Configuring the model for training**
###Code
model.compile(loss="binary_crossentropy",
optimizer="rmsprop",
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Data preprocessing
The images are in the form of JPEG files. We need to build a **data pipeline** to convert the JPEG images to RGB-based floating-point tensors, which can be fed into the deep learning model. The data pipeline can be built with the TensorFlow utility **image_dataset_from_directory**. The utility performs the following tasks:
1. Read the picture files.
2. Decode the JPEG content to RGB grids of pixels.
3. Convert these into floating-point tensors.
4. Resize them to a shared size (we'll use 180x180).
5. Pack them into batches (we'll use batches of 32 images).
Calling the utility *image_dataset_from_directory(directory)* will create and return a tf.data.Dataset object. See the book's SIDEBAR "Understanding TensorFlow Dataset objects", reproduced below.

**Using `image_dataset_from_directory` to read images**
###Code
# TensorFlow utility image_dataset_from_directory()
# --converts images to tensors (RGB)
from tensorflow.keras.utils import image_dataset_from_directory
train_dataset = image_dataset_from_directory(
new_base_dir / "train",
image_size=(180, 180),
batch_size=32)
validation_dataset = image_dataset_from_directory(
new_base_dir / "validation",
image_size=(180, 180),
batch_size=32)
test_dataset = image_dataset_from_directory(
new_base_dir / "test",
image_size=(180, 180),
batch_size=32)
###Output
Found 2000 files belonging to 2 classes.
Found 1000 files belonging to 2 classes.
Found 2000 files belonging to 2 classes.
###Markdown
---**SIDEBAR: Understanding TensorFlow Dataset objects**---
It is demonstrated below:
- How to create a Dataset object from a numpy array.
- That a Dataset object is iterable.
- That the batch() method called on a Dataset object can be used to batch data.
- That the map() method called on a Dataset object can be used to perform arbitrary transformations of each element of the Dataset. Moreover, this method used together with tf.reshape() can be used to reshape the Dataset.
###Code
# Creating a dataset object from numpy array
import numpy as np
import tensorflow as tf
random_numbers = np.random.normal(size=(1000, 16))
dataset = tf.data.Dataset.from_tensor_slices(random_numbers)
# dataset object is an iterable
# This is demonstrated by iterating over it to print shapes
for i, element in enumerate(dataset):
print(element.shape)
if i >= 2:
break
# batch() method can use used to batch data
batched_dataset = dataset.batch(32)
for i, element in enumerate(batched_dataset):
print(element.shape)
if i >= 2:
break
# The map() method applies an arbitrary transformation
# to each element of the dataset
# Here map() is used along with tf.reshape() to reshape the data
reshaped_dataset = dataset.map(lambda x: tf.reshape(x, (4, 4)))
for i, element in enumerate(reshaped_dataset):
print(element.shape)
if i >= 2:
break
###Output
(4, 4)
(4, 4)
(4, 4)
###Markdown
**Displaying the shapes of the data and labels yielded by the `Dataset`**
The Dataset yields batches of 32 images, each of size 180x180 with 3 RGB channels (the original image size was 180x180). So the data tensors have shape (32, 180, 180, 3). The labels shape is (32,) because there is one label per image in a batch of 32.
###Code
for data_batch, labels_batch in train_dataset:
print("data batch shape:", data_batch.shape)
print("labels batch shape:", labels_batch.shape)
break
###Output
data batch shape: (32, 180, 180, 3)
labels batch shape: (32,)
###Markdown
---**END of SIDEBAR**---

**Fitting the model using a `Dataset`**
The ModelCheckpoint callback is used to save the model during training. The callback is configured to:
- save only the best model,
- monitor validation loss (and determine the best model on that basis),
- provide a filepath for saving the model.
###Code
# setting up the callback for saving model
callbacks = [
keras.callbacks.ModelCheckpoint(
filepath="convnet_from_scratch.keras",
save_best_only=True,
monitor="val_loss")
]
# training the model with fit(). Note the callback.
history = model.fit(
train_dataset,
epochs=30,
validation_data=validation_dataset,
callbacks=callbacks)
###Output
Epoch 1/30
63/63 [==============================] - 10s 122ms/step - loss: 0.7414 - accuracy: 0.5180 - val_loss: 0.6895 - val_accuracy: 0.5670
Epoch 2/30
63/63 [==============================] - 7s 110ms/step - loss: 0.6959 - accuracy: 0.5530 - val_loss: 0.6870 - val_accuracy: 0.5580
Epoch 3/30
63/63 [==============================] - 7s 110ms/step - loss: 0.7282 - accuracy: 0.6050 - val_loss: 0.6765 - val_accuracy: 0.5600
Epoch 4/30
63/63 [==============================] - 7s 111ms/step - loss: 0.6314 - accuracy: 0.6460 - val_loss: 0.6151 - val_accuracy: 0.6580
Epoch 5/30
63/63 [==============================] - 7s 110ms/step - loss: 0.5881 - accuracy: 0.6870 - val_loss: 0.7394 - val_accuracy: 0.5870
Epoch 6/30
63/63 [==============================] - 7s 111ms/step - loss: 0.5643 - accuracy: 0.7195 - val_loss: 0.6299 - val_accuracy: 0.6670
Epoch 7/30
63/63 [==============================] - 9s 141ms/step - loss: 0.5082 - accuracy: 0.7605 - val_loss: 0.6393 - val_accuracy: 0.6920
Epoch 8/30
63/63 [==============================] - 8s 121ms/step - loss: 0.4780 - accuracy: 0.8020 - val_loss: 0.5654 - val_accuracy: 0.7170
Epoch 9/30
63/63 [==============================] - 7s 110ms/step - loss: 0.4026 - accuracy: 0.8225 - val_loss: 0.6511 - val_accuracy: 0.7010
Epoch 10/30
63/63 [==============================] - 7s 110ms/step - loss: 0.3697 - accuracy: 0.8295 - val_loss: 0.7293 - val_accuracy: 0.7020
Epoch 11/30
63/63 [==============================] - 7s 111ms/step - loss: 0.2859 - accuracy: 0.8765 - val_loss: 0.7338 - val_accuracy: 0.7130
Epoch 12/30
63/63 [==============================] - 7s 113ms/step - loss: 0.2535 - accuracy: 0.8960 - val_loss: 0.9070 - val_accuracy: 0.7060
Epoch 13/30
63/63 [==============================] - 7s 111ms/step - loss: 0.2067 - accuracy: 0.9135 - val_loss: 0.8728 - val_accuracy: 0.7270
Epoch 14/30
63/63 [==============================] - 7s 111ms/step - loss: 0.1597 - accuracy: 0.9370 - val_loss: 1.0037 - val_accuracy: 0.6800
Epoch 15/30
63/63 [==============================] - 7s 110ms/step - loss: 0.1110 - accuracy: 0.9605 - val_loss: 1.2856 - val_accuracy: 0.6900
Epoch 16/30
63/63 [==============================] - 7s 110ms/step - loss: 0.1218 - accuracy: 0.9540 - val_loss: 0.9910 - val_accuracy: 0.7170
Epoch 17/30
63/63 [==============================] - 7s 112ms/step - loss: 0.0782 - accuracy: 0.9690 - val_loss: 1.2041 - val_accuracy: 0.7230
Epoch 18/30
63/63 [==============================] - 7s 110ms/step - loss: 0.0823 - accuracy: 0.9710 - val_loss: 1.5432 - val_accuracy: 0.7220
Epoch 19/30
63/63 [==============================] - 8s 123ms/step - loss: 0.0629 - accuracy: 0.9775 - val_loss: 1.5555 - val_accuracy: 0.7330
Epoch 20/30
63/63 [==============================] - 7s 110ms/step - loss: 0.0628 - accuracy: 0.9770 - val_loss: 1.6687 - val_accuracy: 0.7390
Epoch 21/30
63/63 [==============================] - 7s 110ms/step - loss: 0.0673 - accuracy: 0.9820 - val_loss: 1.6557 - val_accuracy: 0.7450
Epoch 22/30
63/63 [==============================] - 7s 110ms/step - loss: 0.0410 - accuracy: 0.9870 - val_loss: 1.8786 - val_accuracy: 0.7090
Epoch 23/30
63/63 [==============================] - 7s 110ms/step - loss: 0.0629 - accuracy: 0.9830 - val_loss: 1.9537 - val_accuracy: 0.6970
Epoch 24/30
63/63 [==============================] - 7s 110ms/step - loss: 0.0549 - accuracy: 0.9855 - val_loss: 2.4985 - val_accuracy: 0.7120
Epoch 25/30
63/63 [==============================] - 7s 110ms/step - loss: 0.0658 - accuracy: 0.9830 - val_loss: 2.0476 - val_accuracy: 0.7110
Epoch 26/30
63/63 [==============================] - 7s 110ms/step - loss: 0.0522 - accuracy: 0.9795 - val_loss: 2.3405 - val_accuracy: 0.6890
Epoch 27/30
63/63 [==============================] - 7s 111ms/step - loss: 0.0425 - accuracy: 0.9915 - val_loss: 2.4205 - val_accuracy: 0.7020
Epoch 28/30
63/63 [==============================] - 7s 110ms/step - loss: 0.0650 - accuracy: 0.9815 - val_loss: 2.2558 - val_accuracy: 0.7270
Epoch 29/30
63/63 [==============================] - 7s 111ms/step - loss: 0.0324 - accuracy: 0.9890 - val_loss: 2.3857 - val_accuracy: 0.7340
Epoch 30/30
63/63 [==============================] - 7s 110ms/step - loss: 0.0473 - accuracy: 0.9875 - val_loss: 2.3048 - val_accuracy: 0.7340
###Markdown
**Displaying curves of loss and accuracy during training**
###Code
import matplotlib.pyplot as plt
accuracy = history.history["accuracy"]
val_accuracy = history.history["val_accuracy"]
loss = history.history["loss"]
val_loss = history.history["val_loss"]
epochs = range(1, len(accuracy) + 1)
plt.plot(epochs, accuracy, "bo", label="Training accuracy")
plt.plot(epochs, val_accuracy, "b", label="Validation accuracy")
plt.title("Training and validation accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, loss, "bo", label="Training loss")
plt.plot(epochs, val_loss, "b", label="Validation loss")
plt.title("Training and validation loss")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The model is overfitting: the training accuracy keeps increasing while the validation accuracy peaks at about 75%. Similarly, the training loss keeps falling while the validation loss starts rising again after roughly 10 epochs.

**Evaluating the model on the test set**
- To evaluate the model on the test data, we load the best saved model and evaluate it on `test_dataset`.
- The test accuracy is printed.
###Code
test_model = keras.models.load_model("convnet_from_scratch.keras")
test_loss, test_acc = test_model.evaluate(test_dataset)
print(f"Test accuracy: {test_acc:.3f}")
###Output
63/63 [==============================] - 4s 49ms/step - loss: 0.5856 - accuracy: 0.7185
Test accuracy: 0.719
###Markdown
Using data augmentation

Some data augmentation layers are described below:
- *RandomFlip("horizontal")* applies a horizontal flip to a random 50% of the images that pass through it.
- *RandomRotation(0.1)* rotates input images by a random amount in the range [-10%, +10%] of a full circle (i.e. up to about ±36 degrees).
- *RandomZoom(0.2)* zooms in or out of the image by a random factor in the range [-20%, +20%].

See the Keras documentation for the other available data augmentation layers.

**Define a data augmentation stage to add to an image model**
###Code
data_augmentation = keras.Sequential(
[
layers.RandomFlip("horizontal"),
layers.RandomRotation(0.1),
layers.RandomZoom(0.2),
]
)
###Output
_____no_output_____
###Markdown
**Displaying some randomly augmented training images**
###Code
plt.figure(figsize=(10, 10))
for images, _ in train_dataset.take(1):
for i in range(9):
augmented_images = data_augmentation(images)
ax = plt.subplot(3, 3, i + 1)
plt.imshow(augmented_images[0].numpy().astype("uint8"))
plt.axis("off")
###Output
_____no_output_____
###Markdown
**Defining a new convnet that includes image augmentation and dropout**Note that there is a dropout layer before the densely connected layer.
###Code
inputs = keras.Input(shape=(180, 180, 3))
x = data_augmentation(inputs)
x = layers.Rescaling(1./255)(x)
x = layers.Conv2D(filters=32, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=64, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=128, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=256, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=256, kernel_size=3, activation="relu")(x)
x = layers.Flatten()(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(loss="binary_crossentropy",
optimizer="rmsprop",
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
**Training the regularized convnet**
- A callback is created that saves the best model based on the validation loss.
- A filepath is also provided.
- **If you are using Colab, download the file saved at that filepath**, which in this case is "convnet_from_scratch_with_augmentation.keras". This file will be used later.
###Code
# creating callback
callbacks = [
keras.callbacks.ModelCheckpoint(
filepath="convnet_from_scratch_with_augmentation.keras",
save_best_only=True,
monitor="val_loss")
]
# training the model with fit()
history = model.fit(
train_dataset,
epochs=100,
validation_data=validation_dataset,
callbacks=callbacks)
###Output
Epoch 1/100
63/63 [==============================] - 9s 120ms/step - loss: 0.7564 - accuracy: 0.4960 - val_loss: 0.6926 - val_accuracy: 0.5000
Epoch 2/100
63/63 [==============================] - 8s 117ms/step - loss: 0.6940 - accuracy: 0.5265 - val_loss: 0.6857 - val_accuracy: 0.6140
Epoch 3/100
63/63 [==============================] - 8s 116ms/step - loss: 0.6794 - accuracy: 0.5620 - val_loss: 0.7044 - val_accuracy: 0.5840
Epoch 4/100
63/63 [==============================] - 8s 118ms/step - loss: 0.6888 - accuracy: 0.6235 - val_loss: 0.6348 - val_accuracy: 0.6290
Epoch 5/100
63/63 [==============================] - 8s 118ms/step - loss: 0.6407 - accuracy: 0.6440 - val_loss: 0.6565 - val_accuracy: 0.6270
Epoch 6/100
63/63 [==============================] - 8s 117ms/step - loss: 0.6344 - accuracy: 0.6310 - val_loss: 0.7813 - val_accuracy: 0.6080
Epoch 7/100
63/63 [==============================] - 8s 117ms/step - loss: 0.6296 - accuracy: 0.6605 - val_loss: 0.5765 - val_accuracy: 0.6980
Epoch 8/100
63/63 [==============================] - 8s 117ms/step - loss: 0.6029 - accuracy: 0.6820 - val_loss: 0.5991 - val_accuracy: 0.6660
Epoch 9/100
63/63 [==============================] - 8s 117ms/step - loss: 0.5884 - accuracy: 0.6930 - val_loss: 0.6809 - val_accuracy: 0.6740
Epoch 10/100
63/63 [==============================] - 8s 116ms/step - loss: 0.5953 - accuracy: 0.6845 - val_loss: 0.6593 - val_accuracy: 0.6340
Epoch 11/100
63/63 [==============================] - 8s 118ms/step - loss: 0.5758 - accuracy: 0.7195 - val_loss: 0.6710 - val_accuracy: 0.6380
Epoch 12/100
63/63 [==============================] - 7s 114ms/step - loss: 0.5657 - accuracy: 0.7205 - val_loss: 0.5900 - val_accuracy: 0.7100
Epoch 13/100
63/63 [==============================] - 8s 115ms/step - loss: 0.5507 - accuracy: 0.7140 - val_loss: 0.8465 - val_accuracy: 0.6200
Epoch 14/100
63/63 [==============================] - 8s 117ms/step - loss: 0.5610 - accuracy: 0.7360 - val_loss: 0.5313 - val_accuracy: 0.7450
Epoch 15/100
63/63 [==============================] - 8s 119ms/step - loss: 0.5331 - accuracy: 0.7460 - val_loss: 0.4920 - val_accuracy: 0.7670
Epoch 16/100
63/63 [==============================] - 8s 116ms/step - loss: 0.5237 - accuracy: 0.7475 - val_loss: 0.5304 - val_accuracy: 0.7310
Epoch 17/100
63/63 [==============================] - 8s 117ms/step - loss: 0.4899 - accuracy: 0.7595 - val_loss: 0.4958 - val_accuracy: 0.7730
Epoch 18/100
63/63 [==============================] - 8s 116ms/step - loss: 0.5169 - accuracy: 0.7485 - val_loss: 0.4919 - val_accuracy: 0.7620
Epoch 19/100
63/63 [==============================] - 8s 115ms/step - loss: 0.4993 - accuracy: 0.7635 - val_loss: 0.5196 - val_accuracy: 0.7630
Epoch 20/100
63/63 [==============================] - 8s 116ms/step - loss: 0.4744 - accuracy: 0.7755 - val_loss: 0.5148 - val_accuracy: 0.7400
Epoch 21/100
63/63 [==============================] - 8s 116ms/step - loss: 0.4716 - accuracy: 0.7845 - val_loss: 0.5136 - val_accuracy: 0.7450
Epoch 22/100
63/63 [==============================] - 8s 116ms/step - loss: 0.4543 - accuracy: 0.7840 - val_loss: 0.7303 - val_accuracy: 0.6550
Epoch 23/100
63/63 [==============================] - 8s 116ms/step - loss: 0.4559 - accuracy: 0.7790 - val_loss: 0.4758 - val_accuracy: 0.7750
Epoch 24/100
63/63 [==============================] - 8s 117ms/step - loss: 0.4503 - accuracy: 0.7890 - val_loss: 0.4376 - val_accuracy: 0.8060
Epoch 25/100
63/63 [==============================] - 8s 116ms/step - loss: 0.4336 - accuracy: 0.7970 - val_loss: 0.4680 - val_accuracy: 0.7820
Epoch 26/100
63/63 [==============================] - 8s 115ms/step - loss: 0.4251 - accuracy: 0.8040 - val_loss: 0.4409 - val_accuracy: 0.8150
Epoch 27/100
63/63 [==============================] - 8s 117ms/step - loss: 0.4149 - accuracy: 0.8135 - val_loss: 0.4378 - val_accuracy: 0.8040
Epoch 28/100
63/63 [==============================] - 8s 116ms/step - loss: 0.4008 - accuracy: 0.8165 - val_loss: 0.6141 - val_accuracy: 0.7520
Epoch 29/100
63/63 [==============================] - 8s 118ms/step - loss: 0.4057 - accuracy: 0.8205 - val_loss: 0.4534 - val_accuracy: 0.8100
Epoch 30/100
63/63 [==============================] - 8s 117ms/step - loss: 0.4035 - accuracy: 0.8190 - val_loss: 0.6854 - val_accuracy: 0.7700
Epoch 31/100
63/63 [==============================] - 8s 116ms/step - loss: 0.3822 - accuracy: 0.8295 - val_loss: 0.7218 - val_accuracy: 0.7510
Epoch 32/100
63/63 [==============================] - 8s 117ms/step - loss: 0.3948 - accuracy: 0.8310 - val_loss: 0.4175 - val_accuracy: 0.8300
Epoch 33/100
63/63 [==============================] - 8s 116ms/step - loss: 0.3832 - accuracy: 0.8360 - val_loss: 0.4286 - val_accuracy: 0.8190
Epoch 34/100
63/63 [==============================] - 8s 116ms/step - loss: 0.3687 - accuracy: 0.8390 - val_loss: 0.4998 - val_accuracy: 0.7800
Epoch 35/100
63/63 [==============================] - 8s 116ms/step - loss: 0.3582 - accuracy: 0.8410 - val_loss: 0.4836 - val_accuracy: 0.8000
Epoch 36/100
63/63 [==============================] - 8s 115ms/step - loss: 0.3565 - accuracy: 0.8480 - val_loss: 0.4520 - val_accuracy: 0.8260
Epoch 37/100
63/63 [==============================] - 8s 115ms/step - loss: 0.3609 - accuracy: 0.8435 - val_loss: 0.5423 - val_accuracy: 0.7710
Epoch 38/100
63/63 [==============================] - 8s 115ms/step - loss: 0.3410 - accuracy: 0.8550 - val_loss: 0.5650 - val_accuracy: 0.7740
Epoch 39/100
63/63 [==============================] - 8s 117ms/step - loss: 0.3366 - accuracy: 0.8595 - val_loss: 0.3937 - val_accuracy: 0.8380
Epoch 40/100
63/63 [==============================] - 7s 114ms/step - loss: 0.3213 - accuracy: 0.8685 - val_loss: 0.4568 - val_accuracy: 0.8240
Epoch 41/100
63/63 [==============================] - 8s 115ms/step - loss: 0.3264 - accuracy: 0.8600 - val_loss: 0.4269 - val_accuracy: 0.8280
Epoch 42/100
63/63 [==============================] - 8s 116ms/step - loss: 0.3269 - accuracy: 0.8610 - val_loss: 0.4023 - val_accuracy: 0.8260
Epoch 43/100
63/63 [==============================] - 8s 116ms/step - loss: 0.3005 - accuracy: 0.8785 - val_loss: 0.4696 - val_accuracy: 0.8190
Epoch 44/100
63/63 [==============================] - 8s 116ms/step - loss: 0.3156 - accuracy: 0.8715 - val_loss: 0.3732 - val_accuracy: 0.8570
Epoch 45/100
63/63 [==============================] - 8s 116ms/step - loss: 0.2974 - accuracy: 0.8725 - val_loss: 0.9257 - val_accuracy: 0.7240
Epoch 46/100
63/63 [==============================] - 7s 114ms/step - loss: 0.2870 - accuracy: 0.8800 - val_loss: 1.0313 - val_accuracy: 0.7290
Epoch 47/100
63/63 [==============================] - 8s 115ms/step - loss: 0.2823 - accuracy: 0.8825 - val_loss: 0.4960 - val_accuracy: 0.8350
Epoch 48/100
63/63 [==============================] - 8s 116ms/step - loss: 0.2739 - accuracy: 0.8860 - val_loss: 0.4429 - val_accuracy: 0.8120
Epoch 49/100
63/63 [==============================] - 8s 116ms/step - loss: 0.2978 - accuracy: 0.8825 - val_loss: 0.4384 - val_accuracy: 0.8500
Epoch 50/100
63/63 [==============================] - 8s 117ms/step - loss: 0.2873 - accuracy: 0.8870 - val_loss: 0.4562 - val_accuracy: 0.8270
Epoch 51/100
63/63 [==============================] - 8s 119ms/step - loss: 0.2639 - accuracy: 0.8970 - val_loss: 0.4601 - val_accuracy: 0.8070
Epoch 52/100
63/63 [==============================] - 8s 115ms/step - loss: 0.2544 - accuracy: 0.8970 - val_loss: 0.4522 - val_accuracy: 0.8310
Epoch 53/100
63/63 [==============================] - 8s 115ms/step - loss: 0.2959 - accuracy: 0.8805 - val_loss: 0.5118 - val_accuracy: 0.8360
Epoch 54/100
63/63 [==============================] - 8s 116ms/step - loss: 0.2756 - accuracy: 0.8885 - val_loss: 0.3953 - val_accuracy: 0.8380
Epoch 55/100
63/63 [==============================] - 7s 115ms/step - loss: 0.2453 - accuracy: 0.9000 - val_loss: 0.5220 - val_accuracy: 0.8440
Epoch 56/100
63/63 [==============================] - 8s 116ms/step - loss: 0.2727 - accuracy: 0.8880 - val_loss: 0.4098 - val_accuracy: 0.8280
Epoch 57/100
63/63 [==============================] - 8s 117ms/step - loss: 0.2396 - accuracy: 0.9060 - val_loss: 0.4870 - val_accuracy: 0.8240
Epoch 58/100
63/63 [==============================] - 8s 117ms/step - loss: 0.2250 - accuracy: 0.9120 - val_loss: 1.1309 - val_accuracy: 0.7270
Epoch 59/100
63/63 [==============================] - 8s 116ms/step - loss: 0.2448 - accuracy: 0.9100 - val_loss: 0.5785 - val_accuracy: 0.8350
Epoch 60/100
63/63 [==============================] - 8s 116ms/step - loss: 0.2260 - accuracy: 0.9120 - val_loss: 0.5335 - val_accuracy: 0.8420
Epoch 61/100
63/63 [==============================] - 8s 115ms/step - loss: 0.2399 - accuracy: 0.9055 - val_loss: 0.5407 - val_accuracy: 0.8160
Epoch 62/100
63/63 [==============================] - 8s 118ms/step - loss: 0.2321 - accuracy: 0.9090 - val_loss: 0.4975 - val_accuracy: 0.8310
Epoch 63/100
63/63 [==============================] - 8s 117ms/step - loss: 0.2454 - accuracy: 0.9050 - val_loss: 0.5527 - val_accuracy: 0.8340
Epoch 64/100
63/63 [==============================] - 8s 117ms/step - loss: 0.2144 - accuracy: 0.9235 - val_loss: 0.6625 - val_accuracy: 0.8220
Epoch 65/100
63/63 [==============================] - 8s 115ms/step - loss: 0.2284 - accuracy: 0.9150 - val_loss: 0.3950 - val_accuracy: 0.8590
Epoch 66/100
63/63 [==============================] - 8s 115ms/step - loss: 0.2071 - accuracy: 0.9180 - val_loss: 0.4905 - val_accuracy: 0.8390
Epoch 67/100
63/63 [==============================] - 8s 116ms/step - loss: 0.2181 - accuracy: 0.9190 - val_loss: 0.5827 - val_accuracy: 0.8220
Epoch 68/100
63/63 [==============================] - 8s 116ms/step - loss: 0.2318 - accuracy: 0.9065 - val_loss: 0.4254 - val_accuracy: 0.8500
Epoch 69/100
63/63 [==============================] - 8s 115ms/step - loss: 0.1980 - accuracy: 0.9255 - val_loss: 0.4544 - val_accuracy: 0.8500
Epoch 70/100
63/63 [==============================] - 8s 118ms/step - loss: 0.2146 - accuracy: 0.9195 - val_loss: 0.4506 - val_accuracy: 0.8500
Epoch 71/100
63/63 [==============================] - 8s 115ms/step - loss: 0.2083 - accuracy: 0.9225 - val_loss: 0.6118 - val_accuracy: 0.7980
Epoch 72/100
63/63 [==============================] - 8s 117ms/step - loss: 0.2166 - accuracy: 0.9185 - val_loss: 0.4995 - val_accuracy: 0.8180
Epoch 73/100
63/63 [==============================] - 8s 119ms/step - loss: 0.2030 - accuracy: 0.9205 - val_loss: 0.5015 - val_accuracy: 0.8430
Epoch 74/100
63/63 [==============================] - 8s 116ms/step - loss: 0.1933 - accuracy: 0.9305 - val_loss: 0.7025 - val_accuracy: 0.8220
Epoch 75/100
63/63 [==============================] - 8s 116ms/step - loss: 0.2065 - accuracy: 0.9215 - val_loss: 0.4971 - val_accuracy: 0.8690
Epoch 76/100
63/63 [==============================] - 8s 118ms/step - loss: 0.1904 - accuracy: 0.9275 - val_loss: 0.6255 - val_accuracy: 0.8450
Epoch 77/100
63/63 [==============================] - 8s 118ms/step - loss: 0.2010 - accuracy: 0.9280 - val_loss: 0.9257 - val_accuracy: 0.8270
Epoch 78/100
63/63 [==============================] - 8s 118ms/step - loss: 0.1999 - accuracy: 0.9235 - val_loss: 0.4872 - val_accuracy: 0.8550
Epoch 79/100
63/63 [==============================] - 8s 116ms/step - loss: 0.1833 - accuracy: 0.9290 - val_loss: 0.4927 - val_accuracy: 0.8680
Epoch 80/100
63/63 [==============================] - 8s 116ms/step - loss: 0.1671 - accuracy: 0.9385 - val_loss: 0.4637 - val_accuracy: 0.8680
Epoch 81/100
63/63 [==============================] - 8s 115ms/step - loss: 0.2064 - accuracy: 0.9255 - val_loss: 0.5293 - val_accuracy: 0.8560
Epoch 82/100
63/63 [==============================] - 8s 118ms/step - loss: 0.1818 - accuracy: 0.9350 - val_loss: 0.6902 - val_accuracy: 0.8380
Epoch 83/100
63/63 [==============================] - 8s 120ms/step - loss: 0.2303 - accuracy: 0.9195 - val_loss: 0.5409 - val_accuracy: 0.8660
Epoch 84/100
63/63 [==============================] - 8s 117ms/step - loss: 0.1805 - accuracy: 0.9390 - val_loss: 0.5587 - val_accuracy: 0.8610
Epoch 85/100
63/63 [==============================] - 8s 118ms/step - loss: 0.1976 - accuracy: 0.9395 - val_loss: 0.5352 - val_accuracy: 0.8680
Epoch 86/100
63/63 [==============================] - 8s 119ms/step - loss: 0.1952 - accuracy: 0.9400 - val_loss: 0.6428 - val_accuracy: 0.8400
Epoch 87/100
63/63 [==============================] - 8s 118ms/step - loss: 0.1994 - accuracy: 0.9305 - val_loss: 0.5359 - val_accuracy: 0.8390
Epoch 88/100
63/63 [==============================] - 8s 118ms/step - loss: 0.1648 - accuracy: 0.9390 - val_loss: 0.5993 - val_accuracy: 0.8450
Epoch 89/100
63/63 [==============================] - 8s 120ms/step - loss: 0.1837 - accuracy: 0.9335 - val_loss: 0.5069 - val_accuracy: 0.8750
Epoch 90/100
63/63 [==============================] - 8s 120ms/step - loss: 0.1873 - accuracy: 0.9365 - val_loss: 0.6310 - val_accuracy: 0.8260
Epoch 91/100
63/63 [==============================] - 8s 119ms/step - loss: 0.1736 - accuracy: 0.9415 - val_loss: 0.6080 - val_accuracy: 0.8570
Epoch 92/100
63/63 [==============================] - 8s 117ms/step - loss: 0.2199 - accuracy: 0.9265 - val_loss: 0.5427 - val_accuracy: 0.8690
Epoch 93/100
63/63 [==============================] - 8s 118ms/step - loss: 0.1851 - accuracy: 0.9355 - val_loss: 0.5865 - val_accuracy: 0.8600
Epoch 94/100
63/63 [==============================] - 8s 119ms/step - loss: 0.1913 - accuracy: 0.9325 - val_loss: 0.9268 - val_accuracy: 0.7900
Epoch 95/100
63/63 [==============================] - 8s 118ms/step - loss: 0.1662 - accuracy: 0.9355 - val_loss: 0.8398 - val_accuracy: 0.8580
Epoch 96/100
63/63 [==============================] - 8s 119ms/step - loss: 0.2118 - accuracy: 0.9270 - val_loss: 0.7377 - val_accuracy: 0.8380
Epoch 97/100
63/63 [==============================] - 8s 119ms/step - loss: 0.2228 - accuracy: 0.9230 - val_loss: 0.6878 - val_accuracy: 0.8480
Epoch 98/100
63/63 [==============================] - 8s 118ms/step - loss: 0.1702 - accuracy: 0.9400 - val_loss: 0.7351 - val_accuracy: 0.8330
Epoch 99/100
63/63 [==============================] - 8s 118ms/step - loss: 0.2009 - accuracy: 0.9265 - val_loss: 0.8133 - val_accuracy: 0.8320
Epoch 100/100
63/63 [==============================] - 8s 117ms/step - loss: 0.2066 - accuracy: 0.9250 - val_loss: 1.8321 - val_accuracy: 0.7570
###Markdown
**Evaluating the model on the test set**

Before performing inference on the test data, the model saved via the callback to the filepath "convnet_from_scratch_with_augmentation.keras" is reloaded.
###Code
test_model = keras.models.load_model(
"convnet_from_scratch_with_augmentation.keras")
test_loss, test_acc = test_model.evaluate(test_dataset)
print(f"Test accuracy: {test_acc:.3f}")
###Output
63/63 [==============================] - 3s 46ms/step - loss: 0.4415 - accuracy: 0.8365
Test accuracy: 0.836
###Markdown
We get a test accuracy of about 84% (0.836 above) for the model with data augmentation and dropout. Recall that the earlier model trained from scratch without data augmentation and dropout reached only about 72% test accuracy. So with data augmentation there is a significant improvement in performance on the test dataset.

Leveraging a pretrained model

Feature extraction with a pretrained model

**Instantiating the VGG16 convolutional base**
###Code
conv_base = keras.applications.vgg16.VGG16(
weights="imagenet",
include_top=False,
input_shape=(180, 180, 3))
conv_base.summary()
###Output
Model: "vgg16"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_4 (InputLayer) [(None, 180, 180, 3)] 0
block1_conv1 (Conv2D) (None, 180, 180, 64) 1792
block1_conv2 (Conv2D) (None, 180, 180, 64) 36928
block1_pool (MaxPooling2D) (None, 90, 90, 64) 0
block2_conv1 (Conv2D) (None, 90, 90, 128) 73856
block2_conv2 (Conv2D) (None, 90, 90, 128) 147584
block2_pool (MaxPooling2D) (None, 45, 45, 128) 0
block3_conv1 (Conv2D) (None, 45, 45, 256) 295168
block3_conv2 (Conv2D) (None, 45, 45, 256) 590080
block3_conv3 (Conv2D) (None, 45, 45, 256) 590080
block3_pool (MaxPooling2D) (None, 22, 22, 256) 0
block4_conv1 (Conv2D) (None, 22, 22, 512) 1180160
block4_conv2 (Conv2D) (None, 22, 22, 512) 2359808
block4_conv3 (Conv2D) (None, 22, 22, 512) 2359808
block4_pool (MaxPooling2D) (None, 11, 11, 512) 0
block5_conv1 (Conv2D) (None, 11, 11, 512) 2359808
block5_conv2 (Conv2D) (None, 11, 11, 512) 2359808
block5_conv3 (Conv2D) (None, 11, 11, 512) 2359808
block5_pool (MaxPooling2D) (None, 5, 5, 512) 0
=================================================================
Total params: 14,714,688
Trainable params: 14,714,688
Non-trainable params: 0
_________________________________________________________________
###Markdown
Fast feature extraction without data augmentation **Extracting the VGG16 features and corresponding labels**
###Code
import numpy as np
def get_features_and_labels(dataset):
all_features = []
all_labels = []
for images, labels in dataset:
preprocessed_images = keras.applications.vgg16.preprocess_input(images)
features = conv_base.predict(preprocessed_images)
all_features.append(features)
all_labels.append(labels)
return np.concatenate(all_features), np.concatenate(all_labels)
train_features, train_labels = get_features_and_labels(train_dataset)
val_features, val_labels = get_features_and_labels(validation_dataset)
test_features, test_labels = get_features_and_labels(test_dataset)
train_features.shape
###Output
_____no_output_____
###Markdown
Note that the convolutional base gave us the features it has learnt. The shape of the features tensor is (2000, 5, 5, 512): 2000 training samples, each reduced to a 5x5 spatial map with 512 channels after VGG16's five pooling stages (180 → 90 → 45 → 22 → 11 → 5, as shown in the summary above). The per-sample shape (5, 5, 512) is therefore the **shape of input** for the model below.

**Defining and training the densely connected classifier**
###Code
inputs = keras.Input(shape=(5, 5, 512)) # same as shape of features from VGG16
x = layers.Flatten()(inputs)
x = layers.Dense(256)(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)
model.compile(loss="binary_crossentropy",
optimizer="rmsprop",
metrics=["accuracy"])
callbacks = [
keras.callbacks.ModelCheckpoint(
filepath="feature_extraction.keras",
save_best_only=True,
monitor="val_loss")
]
history = model.fit(
train_features, train_labels,
epochs=20,
validation_data=(val_features, val_labels),
callbacks=callbacks)
###Output
Epoch 1/20
63/63 [==============================] - 1s 13ms/step - loss: 20.3694 - accuracy: 0.9200 - val_loss: 7.4042 - val_accuracy: 0.9480
Epoch 2/20
63/63 [==============================] - 1s 11ms/step - loss: 2.7788 - accuracy: 0.9795 - val_loss: 5.8277 - val_accuracy: 0.9700
Epoch 3/20
63/63 [==============================] - 1s 10ms/step - loss: 3.0905 - accuracy: 0.9805 - val_loss: 5.1285 - val_accuracy: 0.9680
Epoch 4/20
63/63 [==============================] - 1s 9ms/step - loss: 2.0267 - accuracy: 0.9865 - val_loss: 9.3945 - val_accuracy: 0.9610
Epoch 5/20
63/63 [==============================] - 1s 9ms/step - loss: 0.9435 - accuracy: 0.9925 - val_loss: 8.5390 - val_accuracy: 0.9590
Epoch 6/20
63/63 [==============================] - 1s 9ms/step - loss: 1.4407 - accuracy: 0.9935 - val_loss: 6.2487 - val_accuracy: 0.9710
Epoch 7/20
63/63 [==============================] - 1s 9ms/step - loss: 0.8608 - accuracy: 0.9940 - val_loss: 6.2173 - val_accuracy: 0.9750
Epoch 8/20
63/63 [==============================] - 1s 9ms/step - loss: 0.6160 - accuracy: 0.9945 - val_loss: 5.6379 - val_accuracy: 0.9740
Epoch 9/20
63/63 [==============================] - 1s 9ms/step - loss: 0.1290 - accuracy: 0.9985 - val_loss: 5.4740 - val_accuracy: 0.9710
Epoch 10/20
63/63 [==============================] - 1s 9ms/step - loss: 0.4196 - accuracy: 0.9980 - val_loss: 7.9394 - val_accuracy: 0.9660
Epoch 11/20
63/63 [==============================] - 1s 10ms/step - loss: 0.0017 - accuracy: 0.9995 - val_loss: 6.3928 - val_accuracy: 0.9800
Epoch 12/20
63/63 [==============================] - 1s 9ms/step - loss: 0.1406 - accuracy: 0.9990 - val_loss: 5.5832 - val_accuracy: 0.9800
Epoch 13/20
63/63 [==============================] - 1s 9ms/step - loss: 0.4975 - accuracy: 0.9980 - val_loss: 6.7393 - val_accuracy: 0.9770
Epoch 14/20
63/63 [==============================] - 1s 9ms/step - loss: 0.2177 - accuracy: 0.9980 - val_loss: 6.5312 - val_accuracy: 0.9790
Epoch 15/20
63/63 [==============================] - 1s 9ms/step - loss: 0.1267 - accuracy: 0.9980 - val_loss: 5.5131 - val_accuracy: 0.9760
Epoch 16/20
63/63 [==============================] - 1s 9ms/step - loss: 1.5695e-17 - accuracy: 1.0000 - val_loss: 5.5131 - val_accuracy: 0.9760
Epoch 17/20
63/63 [==============================] - 1s 9ms/step - loss: 0.0650 - accuracy: 0.9990 - val_loss: 6.3715 - val_accuracy: 0.9740
Epoch 18/20
63/63 [==============================] - 1s 9ms/step - loss: 0.2470 - accuracy: 0.9990 - val_loss: 8.8890 - val_accuracy: 0.9720
Epoch 19/20
63/63 [==============================] - 1s 9ms/step - loss: 0.2202 - accuracy: 0.9980 - val_loss: 7.0301 - val_accuracy: 0.9730
Epoch 20/20
63/63 [==============================] - 1s 10ms/step - loss: 0.1346 - accuracy: 0.9990 - val_loss: 6.2289 - val_accuracy: 0.9730
###Markdown
**Plotting the results**
###Code
import matplotlib.pyplot as plt
acc = history.history["accuracy"]
val_acc = history.history["val_accuracy"]
loss = history.history["loss"]
val_loss = history.history["val_loss"]
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, "bo", label="Training accuracy")
plt.plot(epochs, val_acc, "b", label="Validation accuracy")
plt.title("Training and validation accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, loss, "bo", label="Training loss")
plt.plot(epochs, val_loss, "b", label="Validation loss")
plt.title("Training and validation loss")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The model achieves a high validation accuracy of about 97%. Also note that it is overfitting almost from the start, because this technique does not use data augmentation. So the next model adds data augmentation.

Feature extraction together with data augmentation

We first exclude the top layer of VGG16 and set the convolutional base to be non-trainable.

**Instantiating and freezing the VGG16 convolutional base**
###Code
conv_base = keras.applications.vgg16.VGG16(
weights="imagenet",
include_top=False)
conv_base.trainable = False
###Output
_____no_output_____
###Markdown
**Printing the list of trainable weights before and after freezing**
###Code
conv_base.trainable = True
print("This is the number of trainable weights "
"before freezing the conv base:", len(conv_base.trainable_weights))
conv_base.trainable = False
print("This is the number of trainable weights "
"after freezing the conv base:", len(conv_base.trainable_weights))
###Output
This is the number of trainable weights after freezing the conv base: 0
###Markdown
**Adding a data augmentation stage and a classifier to the convolutional base**

For the model to take advantage of data augmentation, the following workflow is implemented:
1. Perform data augmentation through the Keras Sequential API.
2. Pass the augmented input through the convolutional base of VGG16. Because the base is frozen, its weights are not updated during training.
3. Flatten the output of the convolutional base and pass it to a Dense layer, followed by a dropout layer.
4. Finally, pass the output of the previous layer to the sigmoid output layer.
###Code
data_augmentation = keras.Sequential(
[
layers.RandomFlip("horizontal"),
layers.RandomRotation(0.1),
layers.RandomZoom(0.2),
]
)
inputs = keras.Input(shape=(180, 180, 3))
x = data_augmentation(inputs)
x = keras.applications.vgg16.preprocess_input(x)
x = conv_base(x)
x = layers.Flatten()(x)
x = layers.Dense(256)(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)
model.compile(loss="binary_crossentropy",
optimizer="rmsprop",
metrics=["accuracy"])
callbacks = [
keras.callbacks.ModelCheckpoint(
filepath="feature_extraction_with_data_augmentation.keras",
save_best_only=True,
monitor="val_loss")
]
history = model.fit(
train_dataset,
epochs=50,
validation_data=validation_dataset,
callbacks=callbacks)
###Output
Epoch 1/50
63/63 [==============================] - 23s 335ms/step - loss: 23.1446 - accuracy: 0.8890 - val_loss: 5.5547 - val_accuracy: 0.9650
Epoch 2/50
63/63 [==============================] - 21s 324ms/step - loss: 6.8423 - accuracy: 0.9410 - val_loss: 6.7516 - val_accuracy: 0.9590
Epoch 3/50
63/63 [==============================] - 21s 329ms/step - loss: 6.9933 - accuracy: 0.9445 - val_loss: 3.6921 - val_accuracy: 0.9730
Epoch 4/50
63/63 [==============================] - 21s 330ms/step - loss: 4.0176 - accuracy: 0.9595 - val_loss: 3.3034 - val_accuracy: 0.9780
Epoch 5/50
63/63 [==============================] - 21s 330ms/step - loss: 4.5250 - accuracy: 0.9670 - val_loss: 2.1708 - val_accuracy: 0.9790
Epoch 6/50
63/63 [==============================] - 21s 326ms/step - loss: 4.5487 - accuracy: 0.9620 - val_loss: 2.6840 - val_accuracy: 0.9850
Epoch 7/50
63/63 [==============================] - 21s 326ms/step - loss: 3.7369 - accuracy: 0.9650 - val_loss: 3.2376 - val_accuracy: 0.9810
Epoch 8/50
63/63 [==============================] - 21s 326ms/step - loss: 2.2774 - accuracy: 0.9805 - val_loss: 2.3795 - val_accuracy: 0.9830
Epoch 9/50
63/63 [==============================] - 21s 326ms/step - loss: 3.0672 - accuracy: 0.9720 - val_loss: 2.6991 - val_accuracy: 0.9780
Epoch 10/50
63/63 [==============================] - 21s 330ms/step - loss: 2.2022 - accuracy: 0.9725 - val_loss: 1.9371 - val_accuracy: 0.9860
Epoch 11/50
63/63 [==============================] - 21s 326ms/step - loss: 2.8785 - accuracy: 0.9705 - val_loss: 2.3254 - val_accuracy: 0.9820
Epoch 12/50
63/63 [==============================] - 21s 326ms/step - loss: 1.5899 - accuracy: 0.9805 - val_loss: 3.1616 - val_accuracy: 0.9830
Epoch 13/50
63/63 [==============================] - 21s 326ms/step - loss: 2.1030 - accuracy: 0.9760 - val_loss: 2.2809 - val_accuracy: 0.9850
Epoch 14/50
63/63 [==============================] - 21s 326ms/step - loss: 1.9406 - accuracy: 0.9760 - val_loss: 2.6590 - val_accuracy: 0.9840
Epoch 15/50
63/63 [==============================] - 21s 326ms/step - loss: 1.5356 - accuracy: 0.9845 - val_loss: 2.2023 - val_accuracy: 0.9850
Epoch 16/50
63/63 [==============================] - 21s 327ms/step - loss: 1.0789 - accuracy: 0.9805 - val_loss: 2.0041 - val_accuracy: 0.9830
Epoch 17/50
63/63 [==============================] - 21s 325ms/step - loss: 1.7267 - accuracy: 0.9780 - val_loss: 2.5062 - val_accuracy: 0.9800
Epoch 18/50
63/63 [==============================] - 21s 326ms/step - loss: 1.7041 - accuracy: 0.9820 - val_loss: 2.4909 - val_accuracy: 0.9790
Epoch 19/50
63/63 [==============================] - 21s 326ms/step - loss: 1.2493 - accuracy: 0.9810 - val_loss: 2.6772 - val_accuracy: 0.9810
Epoch 20/50
63/63 [==============================] - 21s 326ms/step - loss: 1.0369 - accuracy: 0.9855 - val_loss: 3.7604 - val_accuracy: 0.9730
Epoch 21/50
63/63 [==============================] - 21s 326ms/step - loss: 1.1502 - accuracy: 0.9810 - val_loss: 2.9044 - val_accuracy: 0.9810
Epoch 22/50
63/63 [==============================] - 21s 326ms/step - loss: 1.6317 - accuracy: 0.9785 - val_loss: 2.8907 - val_accuracy: 0.9760
Epoch 23/50
63/63 [==============================] - 21s 326ms/step - loss: 1.1472 - accuracy: 0.9805 - val_loss: 2.5181 - val_accuracy: 0.9820
Epoch 24/50
63/63 [==============================] - 21s 326ms/step - loss: 0.9553 - accuracy: 0.9840 - val_loss: 2.2885 - val_accuracy: 0.9800
Epoch 25/50
63/63 [==============================] - 21s 326ms/step - loss: 1.0631 - accuracy: 0.9790 - val_loss: 2.0014 - val_accuracy: 0.9850
Epoch 26/50
63/63 [==============================] - 21s 331ms/step - loss: 0.5987 - accuracy: 0.9860 - val_loss: 1.5438 - val_accuracy: 0.9880
Epoch 27/50
63/63 [==============================] - 21s 327ms/step - loss: 1.3545 - accuracy: 0.9765 - val_loss: 4.2572 - val_accuracy: 0.9660
Epoch 28/50
63/63 [==============================] - 21s 332ms/step - loss: 1.1571 - accuracy: 0.9795 - val_loss: 1.4851 - val_accuracy: 0.9800
Epoch 29/50
63/63 [==============================] - 21s 327ms/step - loss: 0.8404 - accuracy: 0.9840 - val_loss: 1.4852 - val_accuracy: 0.9830
Epoch 30/50
63/63 [==============================] - 21s 327ms/step - loss: 0.7207 - accuracy: 0.9860 - val_loss: 1.5758 - val_accuracy: 0.9820
Epoch 31/50
63/63 [==============================] - 21s 325ms/step - loss: 0.7835 - accuracy: 0.9840 - val_loss: 1.9768 - val_accuracy: 0.9810
Epoch 32/50
63/63 [==============================] - 21s 326ms/step - loss: 0.7884 - accuracy: 0.9850 - val_loss: 1.9138 - val_accuracy: 0.9830
Epoch 33/50
63/63 [==============================] - 21s 327ms/step - loss: 0.6945 - accuracy: 0.9890 - val_loss: 2.1211 - val_accuracy: 0.9760
Epoch 34/50
63/63 [==============================] - 21s 327ms/step - loss: 0.4336 - accuracy: 0.9885 - val_loss: 1.5729 - val_accuracy: 0.9820
Epoch 35/50
63/63 [==============================] - 21s 326ms/step - loss: 0.5791 - accuracy: 0.9875 - val_loss: 3.1754 - val_accuracy: 0.9710
Epoch 36/50
63/63 [==============================] - 21s 326ms/step - loss: 0.4920 - accuracy: 0.9875 - val_loss: 1.5946 - val_accuracy: 0.9860
Epoch 37/50
63/63 [==============================] - 21s 331ms/step - loss: 0.4658 - accuracy: 0.9860 - val_loss: 1.4392 - val_accuracy: 0.9850
Epoch 38/50
63/63 [==============================] - 21s 331ms/step - loss: 0.7250 - accuracy: 0.9860 - val_loss: 1.1589 - val_accuracy: 0.9860
Epoch 39/50
63/63 [==============================] - 21s 326ms/step - loss: 0.6321 - accuracy: 0.9865 - val_loss: 1.2583 - val_accuracy: 0.9880
Epoch 40/50
63/63 [==============================] - 21s 328ms/step - loss: 0.5552 - accuracy: 0.9905 - val_loss: 1.4410 - val_accuracy: 0.9870
Epoch 41/50
63/63 [==============================] - 21s 328ms/step - loss: 0.5256 - accuracy: 0.9910 - val_loss: 1.8861 - val_accuracy: 0.9810
Epoch 42/50
63/63 [==============================] - 21s 327ms/step - loss: 0.5623 - accuracy: 0.9890 - val_loss: 1.8363 - val_accuracy: 0.9840
Epoch 43/50
63/63 [==============================] - 21s 327ms/step - loss: 0.4686 - accuracy: 0.9900 - val_loss: 1.5380 - val_accuracy: 0.9830
Epoch 44/50
63/63 [==============================] - 21s 326ms/step - loss: 0.6298 - accuracy: 0.9870 - val_loss: 1.8972 - val_accuracy: 0.9800
Epoch 45/50
63/63 [==============================] - 21s 326ms/step - loss: 0.5874 - accuracy: 0.9870 - val_loss: 1.5208 - val_accuracy: 0.9810
Epoch 46/50
63/63 [==============================] - 21s 327ms/step - loss: 0.4917 - accuracy: 0.9880 - val_loss: 3.0650 - val_accuracy: 0.9740
Epoch 47/50
63/63 [==============================] - 21s 325ms/step - loss: 0.3981 - accuracy: 0.9925 - val_loss: 1.6642 - val_accuracy: 0.9810
Epoch 48/50
63/63 [==============================] - 21s 326ms/step - loss: 0.4791 - accuracy: 0.9895 - val_loss: 1.7643 - val_accuracy: 0.9810
Epoch 49/50
63/63 [==============================] - 21s 326ms/step - loss: 0.6439 - accuracy: 0.9845 - val_loss: 2.4101 - val_accuracy: 0.9730
Epoch 50/50
63/63 [==============================] - 21s 328ms/step - loss: 0.4878 - accuracy: 0.9880 - val_loss: 3.0994 - val_accuracy: 0.9700
###Markdown
**Evaluating the model on the test set**
###Code
test_model = keras.models.load_model(
"feature_extraction_with_data_augmentation.keras")
test_loss, test_acc = test_model.evaluate(test_dataset)
print(f"Test accuracy: {test_acc:.3f}")
###Output
_____no_output_____
###Markdown
Fine-tuning a pretrained model
###Code
conv_base.summary()
###Output
_____no_output_____
###Markdown
**Freezing all layers until the fourth from the last**
###Code
conv_base.trainable = True
for layer in conv_base.layers[:-4]:
layer.trainable = False
###Output
_____no_output_____
###Markdown
**Fine-tuning the model**
###Code
model.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.RMSprop(learning_rate=1e-5),
metrics=["accuracy"])
callbacks = [
keras.callbacks.ModelCheckpoint(
filepath="fine_tuning.keras",
save_best_only=True,
monitor="val_loss")
]
history = model.fit(
train_dataset,
epochs=30,
validation_data=validation_dataset,
callbacks=callbacks)
model = keras.models.load_model("fine_tuning.keras")
test_loss, test_acc = model.evaluate(test_dataset)
print(f"Test accuracy: {test_acc:.3f}")
###Output
_____no_output_____ |
user_queries/notebooks/Mini-hackathon Workflow A.ipynb | ###Markdown
Workflow A: Chemical ---[]---- Gene (EGFR)
###Code
#Loading the functions from a notebook
%run /Users/priyash/Documents/GitHub/robogallery/user_queries/proj_tools/template.ipynb
# query = {"message":
# {"query_graph":
# {
# "nodes": {
# "n0": {
# "categories": [
# "biolink:ChemicalSubstance"
# ],
# "name": "Chemical Substance"
# },
# "n1": {
# "name": "EGFR",
# "ids": ["NCBIGene:1956"]
# }
# },
# "edges": {
# "e0": {
# "subject": "n0",
# "object": "n1",
# "predicates": [
# "biolink:decreases_abundance_of",
# "biolink:decreases_activity_of",
# "biolink:decreases_expression_of",
# "biolink:decreases_synthesis_of",
# "biolink:increases_degradation_of",
# "biolink:disrupts",
# "biolink:entity_negatively_regulates_entity"
# ]
# }
# }
# }
# }
# }
query = {"message":
{"query_graph":
{
"nodes": {
"n0": {
"categories": [
"biolink:Gene"
],
"name": "EGFR",
"ids": ["NCBIGene:1956"]
},
"n1": {
"name": "Chemical Substance",
"categories": ["biolink:ChemicalSubstance"]
}
},
"edges": {
"e0": {
"subject": "n0",
"object": "n1",
"predicates": [
"biolink:decreases_abundance_of",
"biolink:decreases_activity_of",
"biolink:decreases_expression_of",
"biolink:decreases_synthesis_of",
"biolink:increases_degradation_of",
"biolink:disrupts",
"biolink:entity_negatively_regulates_entity"
]
}
}
}
}
}
printjson(query)
###Output
{
"message": {
"query_graph": {
"nodes": {
"n0": {
"categories": [
"biolink:Gene"
],
"name": "EGFR",
"ids": [
"NCBIGene:1956"
]
},
"n1": {
"name": "Chemical Substance",
"categories": [
"biolink:ChemicalSubstance"
]
}
},
"edges": {
"e0": {
"subject": "n0",
"object": "n1",
"predicates": [
"biolink:decreases_abundance_of",
"biolink:decreases_activity_of",
"biolink:decreases_expression_of",
"biolink:decreases_synthesis_of",
"biolink:increases_degradation_of",
"biolink:disrupts",
"biolink:entity_negatively_regulates_entity"
]
}
}
}
}
}
###Markdown
Strider Direct
###Code
start = dt.now()
strider_result = strider(query)
end = dt.now()
print(f"Strider produced {len(strider_result['message']['results'])} results in {end-start}.")
prov = get_provenance(strider_result)
display(prov)
###Output
_____no_output_____
###Markdown
**One observation: when swapping the subject and object nodes, the result outputs changed.** **A "contacting KP" error came up once in the error logs but disappeared on several later runs.**
###Code
for logmessage in strider_result['logs']:
if logmessage['level'] == 'ERROR':
print(logmessage['message'])
strider_result['logs']
view = GammaViewer(props={"data":strider_result})
display(view)
pred = get_predicate(strider_result)
display(pred)
node =which_nodes(strider_result, 'biolink:decreases_expression_of')
display(node)
###Output
_____no_output_____ |
Harsh's Contributions/School Dropout Model/School Dropout Model.ipynb | ###Markdown
Custom Input:
###Code
import joblib
import numpy as np

# load the previously trained model and predict the dropout outcome for one custom input row
filename = 'schoolModels/' + 'try1.pkl'
model = joblib.load(filename)
input_val = [1, 1, 0, 3, 20, 20, 10, 20, 10]  # feature vector in the order the model was trained on
model.predict(np.array(input_val).reshape(1, -1))
###Output
_____no_output_____ |
notebooks/1c_nearest_neighbour_cluster_maps.ipynb | ###Markdown
QuaSaR: Identifying EEW Rings - Cluster into Coarser Topology

[Quake Safe Rings](./1a_stations_faultlnes_plot.ipynb) - in our efforts to understand the station fault topology, we make use of the International Federation of Digital Seismograph Networks (FDSN), the global standard and a [data service](http://www.fdsn.org/services/) for sharing seismic sensor waveform data. The ObsPy libraries support FDSN. The resources and services used for retrieving station inventory and waveform data are listed below.
###Code
'''
WARNING CONTROL to display or ignore all warnings
'''
import warnings; warnings.simplefilter('ignore') # switch between 'default' and 'ignore'
###Output
_____no_output_____
###Markdown
OBJECTIVE 1.C - PLOT STATION FAULT METRICS

DEFINE data services and software modules

TODO Write about all this below

1. FDSN station service
   1. FDSN as Client data sources; both (i) the FDSN client service and (ii) the FDSN-compliant GeoNet API webservice
   1. retrieve station metadata information in FDSN StationXML or text format for all the channels of the CECS station with no time limitations: https://service.geonet.org.nz/fdsnws/station/1/query?network=NZ&station=CECS&level=channel&format=text
1. ObsPy
   1. wavePicker is no longer supported by ObsPy; instead, the [Pyrocko](http://pyrocko.org) Snuffler is recommended for seismic data inspection and picking

Define station types and channels

To learn about sensor type and channel code definitions [see the section in the ipynb](./stations_faultlnes_plot_1a.ipynb#sensor_code_desc)

Class of station data processing methods

The class is defined to manage all functions for retrieving, parsing, and preparing station data in an easily usable form.
* Class _station_data()_
  * _get_channels()_ returns abbreviated channel codes
  * _get_types()_ returns a list of all seismic station types with abbreviation and description
  * _get_stations()_ returns a list of all stations with code, type abbreviation, and lat/lon pair

___TODO REMOVE BELOW CELL___ Use stations.py FIRST; fix all the dependencies before removing
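For orientation, here is a minimal usage sketch of the `station_data` class defined in the next cell. It is a hypothetical, non-executed example: it assumes network access to the GeoNet FDSN service and that the station and channel filters above return data.

```python
# Hypothetical usage sketch of the station_data class defined below (assumes GEONET FDSN connectivity).
sd = station_data()                                   # sets the ~6-day retrieval window and prints it
client = sd.get_client()                              # ObsPy FDSN client for the "GEONET" URL code
st_list, invalid_st_list = sd.get_stations(client)    # [[code, type, lat, lon], ...] plus rejected stations

print(len(st_list), "stations in/around NZ;", len(invalid_st_list), "outside the lat/lon filter")
print(st_list[:3])
print(sd.get_types().get(st_list[0][1], "unknown sensor type"))  # describe the first station's type code

# optionally fetch recent waveforms for one station:
# st_wf = sd.get_station_waveform(client, st_list[0][0])
```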
###Code
''' DEPRECATED .... Use stations.py FIRST; fix all the dependencies before removing '''
''' CLASS for defining station meta data and functions '''
''' All weak & strong motion, low gain, and mass possion sensor types '''
class station_data():
def __init__(self, name: str = 'station_metadata'):
'''
DICTIONARY for defining the data source (make connection) and global parameters
'''
# import glob
from obspy import read_inventory, UTCDateTime
# from obspy.clients.fdsn import Client
#from datetime import date
self.name = name
'''
Establish start and end time for retrieving waveform data
'''
self.t_start = UTCDateTime.now()-518400 #6 days ago = 60s x 60m x 24h x 6d
self.t_end = UTCDateTime.now()+86400 #1 day in the future = 60s x 60m x 24h
print('Retrieving active stations with a \nstart-time: {} \n & end-time: {}'.format(self.t_start, self.t_end))
st_test_data = "" # TODO create a test set with corresponding faults
'''
use either or GeoNet station service webservice URL or Obspy FDSN Client protocol to retrieve station data
'''
self.s_fdsn_url_code = "GEONET" # FDSN client URL code
#st_ws = 'https://service.geonet.org.nz/fdsnws/station/1/query?network=NZ&station=CECS&level=channel'
st_ws = 'https://service.geonet.org.nz/fdsnws/station/1/query?network=NZ&level=station&endafter=2020-12-31&format=xml'
'''
NZ faults
'''
# s_flt_full_data = "../data/NZAFD/JSON/NZAFD_Oct_2020_WGS84.json" # Downloaded and unzipped data
# s_flt_test_data = "../data/NZAFD/JSON/NZAFD_WGS84-test.json" # Sample of 6-10 fault attribs & features
# s_flt_new_data = "https://data.gns.cri.nz/af/" # Active Faults GeoNet database
# return name
def get_client(self):
from obspy.clients.fdsn import Client
try:
client = Client(self.s_fdsn_url_code)
print(client)
except Exception as err:
print("Error message: [get_client]", err)
return client
''' Define channel codes '''
def get_channels(self):
channels = "UH*,VH*,LH*,BH*,SH*,HH*,EH*,UN*,VN*,LN*,BN*,SN*,HN*,EN*"
return channels
'''
All combinations with definition of the first and second letter to define identify each station type
'''
def get_types(self):
dict_st_types = {"UH" : "Weak motion sensor, e.g. measuring velocity\nUltra Long Period sampled at 0.01Hz, or SOH sampled at 0.01Hz",
"VH" : "Weak motion sensor, e.g. measuring velocity\nVery Long Period sampled at 0.1Hz, or SOH sampled at 0.1Hz",
"LH" : "Weak motion sensor, e.g. measuring velocity\nBroad band sampled at 1Hz, or SOH sampled at 1Hz",
"BH" : "Weak motion sensor, e.g. measuring velocity\nBroad band sampled at between 10 and 80 Hz, usually 10 or 50 Hz",
"SH" : "Weak motion sensor, e.g. measuring velocity\nShort-period sampled at between 10 and 80 Hz, usually 50 Hz",
"HH" : "Weak motion sensor, e.g. measuring velocity\nHigh Broad band sampled at or above 80Hz, generally 100 or 200 Hz",
"EH" : "Weak motion sensor, e.g. measuring velocity\nExtremely Short-period sampled at or above 80Hz, generally 100 Hz",
"UN" : "Strong motion sensor, e.g. measuring acceleration\nUltra Long Period sampled at 0.01Hz, or SOH sampled at 0.01Hz",
"VN" : "Strong motion sensor, e.g. measuring acceleration\nVery Long Period sampled at 0.1Hz, or SOH sampled at 0.1Hz",
"LN" : "Strong motion sensor, e.g. measuring acceleration\nBroad band sampled at 1Hz, or SOH sampled at 1Hz",
"BN" : "Strong motion sensor, e.g. measuring acceleration\nBroad band sampled at between 10 and 80 Hz, usually 10 or 50 Hz",
"SN" : "Strong motion sensor, e.g. measuring acceleration\nShort-period sampled at between 10 and 80 Hz, usually 50 Hz",
"HN" : "Strong motion sensor, e.g. measuring acceleration\nHigh Broad band sampled at or above 80Hz, generally 100 or 200 Hz",
"EN" : "Strong motion sensor, e.g. measuring acceleration\nExtremely Short-period sampled at or above 80Hz, generally 100 Hz"}
return dict_st_types
'''
TODO Ranking of the station types by their EEW capacity and capabilities
currently simply enumerating them for testing
'''
def get_st_type_rank(self):
l_enum_st_types = []
try:
for idx_st_type, val_st_type in enumerate(list(self.get_types())):
l_enum_st_types.append([idx_st_type, val_st_type])
except Exception as err:
print("Error message: [get_st_type_rank]", err)
sys.exit(1)
return l_enum_st_types
'''Prepare an array of station data: (i) station code as a unique identifier,
(ii) coordinates longitude & latitude, and
(iii) elevation in meters above mean sea level
return the construct as a list of stations including the list of invalid stations
'''
def get_stations(self, client):
st_list = []
invalid_st_list = []
try:
st_inv = client.get_stations(network='NZ', location="1?,2?", station='*', channel=self.get_channels(), level='channel', starttime=self.t_start, endtime = self.t_end)
except Exception as err:
print("Error message: [get_stations]", err)
'''run through stations to parse code, type, and location'''
try:
for each_st in range(len(st_inv[0].stations)):
''' use lat/lon paris only in and around NZ remove all others '''
if(st_inv[0].stations[each_st].latitude < 0 and st_inv[0].stations[each_st].longitude > 0):
each_st_type_dict = st_inv[0].stations[each_st].get_contents()
''' get the second character representing the station type '''
# st_type_dict["st_type"].append(each_st_type_dict["channels"][0][-3:-1])
''' list of corresponding station locations (lat / lon) '''
st_list.append([st_inv[0].stations[each_st].code, each_st_type_dict["channels"][0][-3:-1], st_inv[0][each_st].latitude, st_inv[0][each_st].longitude])
else:
                    ''' list of all stations not in the NZ vicinity '''
invalid_st_list.append([st_inv[0].stations[each_st].code,st_inv[0][each_st].latitude, st_inv[0][each_st].longitude])
except Exception as err:
print("Error message: [get_stations]", err)
return st_list, invalid_st_list
'''
GET WAVE FORMS
'''
def get_station_waveform(self, client, station_code: str):
try:
st_wf = client.get_waveforms("NZ", station_code,"*", "H??", self.t_start, self.t_end, attach_response=True)
except Exception as err:
print(err)
return st_wf
###Output
_____no_output_____
###Markdown
___TODO MOVE TO 1B___ Define fault lines

Class of fault line methods

We have completed objective 1.A. However, we will also include a mapping of the fault lines to give a perception of the station distribution relative to the map of fault lines.
* Class _fault_data()_
  * _get_paths()_ converts the WGS84 JSON file into a list
  * _interpolate_paths()_ takes the result of get_paths() and a specified interpolation distance (e.g. distance=2.5)
###Code
'''
CLASS of functions for offering various fault-line data filters, clensing, and structuring procedures
'''
class fault_data():
'''
TODO at initiatlization download latest ZIP'd datasets from GeoNet then extract the *.json
'''
def __init__(self, name: str = 'Fault Metadata'):
self.name = name
''' NZ fault datasets '''
self.s_flt_full_data = "../data/NZAFD/JSON/NZAFD_Oct_2020_WGS84.json" # Downloaded and unzipped data
self.s_flt_test_data = "../data/NZAFD/JSON/NZAFD_WGS84-test.json" # Sample of 6-10 fault attribs & features
self.s_flt_new_data = "https://data.gns.cri.nz/af/" # Active Faults GeoNet database
# pass
''' Return the fault file name '''
def get_fault_file(self, s_type: str = 'test'):
return self.s_flt_test_data
'''
Extract nested values from a JSON tree to build a list of fault lines
containing the fault name and lat / lon pairs of the path
'''
def get_paths(self, s_file: str = None):
import json
from dictor import dictor
self.s_file = self.s_flt_test_data
print(self.s_file)
try:
# with open('../data/NZAFD/JSON/NZAFD_Oct_2020_WGS84.json') as json_file:
# data = json.load(json_file)
# with open('../data/NZAFD/JSON/NZAFD_WGS84-test.json') as json_file:
# data = json.load(json_file)
''' change parameter to switch between test, full downloaded, and latest data sets
test: s_flt_test_data
full: s_flt_full_data
new: s_flt_new_data
'''
with open(s_file) as json_file:
data = json.load(json_file)
faults = []
fault_path_count = 1
for each_feature in range(len(data['features'])):
s_flt_id = dictor(data,('features.{0}.attributes.FID').format(each_feature))
s_flt_name = dictor(data,('features.{0}.attributes.NAME').format(each_feature))
s_flt_uid = str(s_flt_id) + " " + s_flt_name
if s_flt_uid==" ":
s_flt_uid = 'Unnamed fault '+ str(fault_path_count)
fault_path_count += 1
points = []
path = dictor(data,'features.{}.geometry.paths.0'.format(each_feature))
for each_coordinate in range(len(path)):
points.append([path[each_coordinate][0],path[each_coordinate][1]])
faults.append([s_flt_uid,points])
'''
faults = []
fault_path_count = 1
for each_feature in range(len(data['features'])):
flt = dictor(data,('features.{}.attributes.FID'+' '+'features.{}.attributes.NAME').format(each_feature))
if flt==" ":
flt = 'Unnamed fault '+ str(fault_path_count)
fault_path_count += 1
points = []
path = dictor(data,'features.{}.geometry.paths.0'.format(each_feature))
for each_coordinate in range(len(path)):
points.append([path[each_coordinate][0],path[each_coordinate][1]])
faults.append([flt,points])
'''
except Exception as err:
print("Error message:", err)
return faults
'''
Interpolate more points for each fault line; if the distance between points > 1.5Km @ 0.5Km intervals
Otherwise, fit a single halfway point
'''
def interpolate_paths(self, paths, distance=float(2.5)):
from shapely.geometry import LineString
interp_paths = []
try:
''' loop through each fault path to breakdown into line segments; i.e. coordinate pairs '''
for path in range(len(paths)):
path_index = 0
''' add the two line segment coordinates to begin with
now loop through each path line segment to add interpolated points '''
while (path_index < len(paths[path][1])-1):
ip = [] # interpolated point
rel_origin_coord = paths[path][1][path_index] # relative starting point of the path
rel_nn_coord = paths[path][1][path_index+1]
''' change to a while loop until all distances between consecutive points < delta_distance'''
while LineString([rel_origin_coord, rel_nn_coord]).length*6371.0 > distance:
ip = LineString([rel_origin_coord,rel_nn_coord]).interpolate((10.0**3)/6371.0, normalized=True).wkt
# convertion needs to happen otherwise throws an exception
ip_lat = float(ip[ip.find("(")+1:ip.find(")")].split()[0])
ip_lon = float(ip[ip.find("(")+1:ip.find(")")].split()[1])
rel_nn_coord = list([ip_lat,ip_lon])
''' If you want to add the already interpolated coordinates to the path to possibly speedup
and use those points to create a denser path; note that it may will results in uniequal
distant between consecutive points in the path. Comment the instruction below to disable.
'''
paths[path][1].insert(path_index+1,rel_nn_coord) # interpolated coordinates closest to the relative origin
path_index += 1
interp_paths.append([paths[path][0], paths[path][1]])
except Exception as err:
print("Error message:", err)
return interp_paths
###Output
_____no_output_____
###Markdown
___TODO MOVE TO 1B___ OBJECTIVE 1.B - STATION FAULT METRIC

Data preparation for analysis

The steps below build a set of list and array metrics for the stations and fault lines:
1. Interpolate points between fault line path coordinates
1. Calculate the station to fault line perpendicular distances

Why interpolate more coordinates?

The fault line paths might have been reduced by applying the [Ramer-Douglas-Peucker algorithm](https://pypi.org/project/rdp/) before publishing, so the GeoNet fault line paths carry an optimal set of coordinates sufficient for mapping - _Edward Lee pointed out that instead of using the "perpendicular distance" from a point to a line, the algorithm should use the 'Shortest Distance' from a point to a line segment._ Therefore, we are essentially inverting the rdp PyPi algorithm, interpolating more coordinates to reduce the line segment lengths to ~1.0 Km.

Interpolate coordinates at ~1.0 Km separations

The average distance between consecutive coordinates (latitude and longitude pairs) in each fault line path ranges from 2.0 - 30.0 Km. Therefore, we use the [shapely interpolation](https://shapely.readthedocs.io/en/latest/manual.html#linear-referencing-methods) techniques to synthesize coordinates such that the distance between consecutive coordinates is ~1.0 Km.
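As a minimal illustration of the shapely call used in the cell below (the coordinates here are hypothetical; the actual paths come from the NZAFD data), interpolating a point along one two-point fault segment looks like this:

```python
# Hypothetical two-point fault segment, [lon, lat] ordering as in the NZAFD paths.
from shapely.geometry import LineString

seg = LineString([(175.30, -41.20), (175.35, -41.15)])
halfway = seg.interpolate(0.5, normalized=True)  # a point 50% of the way along the segment
print(halfway)                                   # POINT (175.325 -41.175), approximately

# interpolate_paths() below repeats such calls, inserting each new coordinate into the
# fault path until consecutive coordinates are closer than the requested distance
# (segment lengths in planar degrees are scaled by the Earth radius, ~6371 km).
```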
###Code
'''
METHOD for interpolating lat/lon coordinates along the fault line segments
'''
#def interpolate_fault_path_coord(fault_paths: list):
import sys
from shapely.geometry import LineString
from obspy.core import UTCDateTime
try:
faults = fault_data() # declare fault lines class
s_flt_file = faults.get_fault_file('test')
print(f'\nReceiving all fault line path data from {s_flt_file}')
#with s_flt_file:
original_paths = faults.get_paths(s_flt_file) # get all fault line paths
''' analyse the distance between fault line path coordinates '''
print("[interpolate_fault_path_coord] Begin {0} calculating inter-coordinate disance of {1} original fault lines\n".format(UTCDateTime.now(), len(original_paths)))
for path in range(len(original_paths)):
sum_lengths = float(0.0)
for coords in range(len(original_paths[path][1])-1):
sum_lengths += LineString([original_paths[path][1][coords],
original_paths[path][1][coords+1]]).length*6371.0
sys.stdout.write("\r"+"Processing fault {3} of {4}: {0} with {1} coordinates & inter-coordinate distance: {2} Km".format(original_paths[path][0], len(original_paths[path][1]), str(sum_lengths/len(original_paths[path][1])), str(path+1), str(len(original_paths))))
sys.stdout.flush()
str_flt_coord_desc= "Fault {3} of {4}: {0} has {1} coordinates with an average inter-coordinate distance: {2} Km".format(original_paths[path][0], len(original_paths[path][1]), str(sum_lengths/len(original_paths[path][1])), str(path+1), str(len(original_paths)))
print("\nInitializing interpolation ...")
interpolated_paths = faults.interpolate_paths(paths=original_paths,distance=2.5)
print("Wait until interpolation is complete ...")
print("Begin interpolating coordinates for {} fault lines\n".format(len(interpolated_paths)))
for path in range(len(interpolated_paths)):
sum_lengths = float(0)
for coords in range(len(interpolated_paths[path][1])-1):
sum_lengths += LineString([interpolated_paths[path][1][coords],
interpolated_paths[path][1][coords+1]]).length*6371.0
sys.stdout.write("\r"+"Interpolation for fault {3} of {4}: {0} with {1} coordinates & inter-coordinate distance: {2} Km".format(interpolated_paths[path][0], len(interpolated_paths[path][1]), str(sum_lengths/len(interpolated_paths[path][1])), str(path+1), str(len(interpolated_paths))))
sys.stdout.flush()
str_flt_coord_desc= "Fault {3} of {4}: {0} has {1} coordinates with an average inter-coordinate distance: {2} Km".format(interpolated_paths[path][0], len(interpolated_paths[path][1]), str(sum_lengths/len(interpolated_paths[path][1])), str(path+1), str(len(interpolated_paths)))
'''TODO change output to give numbers only; e.g. mean, median, and variance of fault coordinate distances'''
'''TODO write the non-empty interpolated dataset to a file'''
# if :
# with open('../data/NZAFD/JSON/interpolated_NZAFD_Oct_2020_WGS84.json', 'w') as outfile:
# json.dump(interpolated_paths, outfile)
print("\nInterpolation complete!")
except Exception as err:
print("Error message:", err)
###Output
Receiving all fault line path data from ../data/NZAFD/JSON/NZAFD_WGS84-test.json
../data/NZAFD/JSON/NZAFD_WGS84-test.json
[interpolate_fault_path_coord] Begin 2021-02-06T09:25:25.884151Z calculating inter-coordinate distance of 6 original fault lines
Processing fault 6 of 6: 9502 Dirty Spur Fault with 2 coordinates & inter-coordinate distance: 5.392093066302016 Km
Initializing interpolation ...
Wait until interpolation is complete ...
Begin interpolating coordinates for 6 fault lines
Interpolation for fault 6 of 6: 9502 Dirty Spur Fault with 11 coordinates & inter-coordinate distance: 0.9803805575094575 Km
Interpolation complete!
###Markdown
Station to nearest fault line distance metricEstimate the distance from each station to the nearest fault line segment. Thereafter, associate each station with the nearest-neighbour fault line segments. We have a station with coordinates _A=\[s_lat, s_lon\]_ and two coordinates _B=\[f1_lat,f1_lon\]_ and _C=\[f2_lat, f2_lon\]_, and want to project A onto the arc between B and C, and find the length of the projection arc. 1. __Loop through stations and faults__ to build a distance metric that can be used to determine the station sequence that might be triggered by a particular earthquake from a location along a fault line1. Ideally __calculate perpendicular distance__ from the station to the line segment; i.e. [shortest arc length](https://math.stackexchange.com/questions/993236/calculating-a-perpendicular-distance-to-a-line-when-using-coordinates-latitude) 1. _Compute_ `n=B×C` ("×" the cross product) and `N=n/√n⋅n` ("⋅" the dot product) 1. _Convert the coordinates_ A, B, & C to _\[x,y,z\]_ triples with `x=sinucosv; y=sinv; z=cosucosv` 1. _Compute_ the angular distance between 1. a ray from the earth's center to A and the plane of _n_ described above `s=90∘−|arccos(A⋅N)|` 1. the "distance" between A and B as `s′=arccos(A⋅B)`; assuming working in degrees (range from 0 to 180)1. For now, we defer __calculating the shortest distance__ recommended by Edward Lee, discussed in [why we interpolate?](#Why-interpolate-more-coordinates?) 1. \[ERROR grumbling about lat / lon attributes\] The __Obspy geodetics__ [inside_geobounds](https://docs.obspy.org/packages/autogen/obspy.geodetics.base.inside_geobounds.html#obspy.geodetics.base.inside_geobounds) function can confirm whether the fault line segment endpoints B-C are within a given radius of the station A.
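The cross-track ("perpendicular") distance described above can be sketched with a few lines of NumPy, assuming a spherical Earth of radius 6371 km and (lat, lon) inputs in degrees. The station and fault coordinates below are hypothetical, and note this measures the distance to the full great circle through B and C rather than clamping to the segment (the refinement Edward Lee suggests).

```python
import numpy as np

R_EARTH_KM = 6371.0

def to_unit_vector(lat_deg, lon_deg):
    """Geographic coordinates (degrees) -> 3D unit vector on the unit sphere."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def cross_track_distance_km(station, seg_start, seg_end):
    """Cross-track distance (km) from station A to the great circle through B and C."""
    a = to_unit_vector(*station)
    n = np.cross(to_unit_vector(*seg_start), to_unit_vector(*seg_end))
    n = n / np.linalg.norm(n)                      # unit normal N of the B-C plane
    return R_EARTH_KM * abs(np.arcsin(np.clip(np.dot(a, n), -1.0, 1.0)))

# hypothetical station A and fault-segment endpoints B, C as (lat, lon) in degrees
print(cross_track_distance_km((-36.60, 174.83), (-36.97, 175.02), (-37.18, 175.15)))
```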
###Code
''' REMOVE - already moved to cell below [get_station_fault_metric_list]'''
import sys
sys.path.insert(1, '/home/nuwan/workspace/quasar/lib')
import station
''' TODO send time window '''
cls_st = station.station_data()
__client = cls_st.get_client()
#lst_val_st, lst_inval_st = cls_st.get_stations(cls_st.get_client())
lst_val_st, lst_inval_st = cls_st.get_stations(__client)
###Output
Retrieving active stations with a
start-time: 2021-01-31T09:27:26.023645Z
& end-time: 2021-02-07T09:27:26.023686Z
FDSN Webservice Client (base url: http://service.geonet.org.nz)
Available Services: 'dataselect' (v1.1), 'event' (v1.1), 'station' (v1.1), 'available_event_catalogs', 'available_event_contributors'
Use e.g. client.help('dataselect') for the
parameter description of the individual services
or client.help() for parameter description of
all webservices.
###Markdown
___TODO MOVE TO 1B___
###Code
'''
LIST construction of the station fault metric
'''
from obspy.geodetics import base
from obspy.core import UTCDateTime
from shapely.geometry import LineString
import sys
def get_station_fault_metric_list():
'''
get a clean version of the active stations, attributes, and values
'''
import sys
sys.path.insert(1, '/home/nuwan/workspace/quasar/lib')
import station
cls_st_meta = station.station_data()
__client = cls_st.get_client()
# l = cl_st.get_stations(cl_st.get_client())
# cls_st_meta = station_data()
try:
print("[get_station_fault_metric_list] Begin buldinging a list of the station fault metric elements ",UTCDateTime.now())
print("[get_station_fault_metric_list] Fetching station list ...")
st_list, invalid_st_list = cls_st.get_stations(__client)
if not st_list:
raise TypeError
else:
print('[get_station_fault_metric_list] There are {0} active valid stations and {1} invalid station(s)'.format(len(st_list),len(invalid_st_list)))
print('[get_station_fault_metric_list] The invalid stations are:{0}'.format(invalid_st_list))
print('[get_station_fault_metric_list] Unique station types 1st & 2nd letters of station codes are: {})'.format(set(item[1] for item in st_list)))
except Exception as err:
print("Error message: [get_station_fault_metric_list]", err)
sys.exit(1)
'''
build the metric
'''
try:
st_flt_metric = []
short_dist_ub = float(10**4)
null_nearest_flt_coord = [0.0000, 0.0000]
'''
loop through each fault line's coordinates to find a station closest to it within an epsilon radius.
'''
print("[get_station_fault_metric_list] Wait for a moment to build the metric comprising {} stations and {} faults...".format(len(st_list), len(interpolated_paths)))
for indx, each_station in enumerate(st_list):
sys.stdout.write("\r" + "[get_station_fault_metric_list] Building {0} of {1} calculating faults closest to Station {2}.".format(indx+1, len(st_list), each_station[0]))
''' TODO move interpolated paths to a function and check if not null then process the loop '''
for each_fault in interpolated_paths:
st_coord = [each_station[3],each_station[2]]
shortest_distance = short_dist_ub
nearest_fault_coord = null_nearest_flt_coord
for flt_coord in range(len(each_fault[1])):
st_to_flt = LineString([each_fault[1][flt_coord], st_coord]).length*6371.0
''' TODO make the correct projection
st_to_flt = LineString([each_fault[1][flt_coord], st_coord])
st_to_flt.srid = 4326
st_to_flt.transform(3857)
st_to_flt.length
'''
if st_to_flt < shortest_distance:
shortest_distance = st_to_flt
nearest_fault_coord = each_fault[1][flt_coord]
if shortest_distance < short_dist_ub :
''' station type rank, code, coordinates, nearest fault, coordinates, and distance '''
st_rank = [row[0] for row in cls_st_meta.get_st_type_rank() if row[1] == each_station[1]]
st_flt_metric.append([each_station[1], each_station[0], st_coord, each_fault[0],
nearest_fault_coord, shortest_distance,st_rank[0]])
# shortest_distance = short_dist_ub
# shortest_distance = shortest_distance
'''
TODO fix the error on the lat / lon attributes
if base.inside_geobounds(each_fault[1], minlatitude=None, maxlatitude=None,
minlongitude=None, maxlongitude=None,
latitude=36, longitude=174,
minradius=1/6378137.0, maxradius=30.0/6378137):
print(each_fault[0],"yes")
else:
print(each_fault[0],"no")
'''
print("\n[get_station_fault_metric_list[] Done building the list metric size {}".format(len(st_flt_metric)) if len(st_flt_metric) > 0 else "Empty metric; no data was built")
# min_distance_to_fault = calc_vincenty_inverse(lat1, lon1, lat2, lon2, a=6378137.0, f=0.0033528106647474805)
# statio_faults.append[interpolated_paths[each_fault]]
except Exception as err:
print("Error message: [get_station_fault_metric_list]", err)
return st_flt_metric
'''
ARRAY augment station fault metric with kmeans cluster labels
'''
def get_augm_cluster_label_list(l_st_flt_metric: list, l_cluster_labels: list):
'''
check if dimensions match and then combine the two lists with matching elements
'''
if not isinstance(l_st_flt_metric, list) or not isinstance(l_cluster_labels, list) :
print("[get_augm_cluster_label_list] ERROR function requires propoer list inputs.")
raise TypeError
sys.exit(1)
# print("l_st_flt_metric = ",str(len(l_st_flt_metric)), "l_cluster_labels=", str(len(l_cluster_labels)))
if len(l_st_flt_metric) != len(l_cluster_labels) :
print("[get_augm_cluster_label_list] ERROR input list lengths don't match.")
raise TypeError
sys.exit(1)
print("[get_augm_cluster_label_list] Begin combining station fault list with cluster labels")
# l_augm_st_flt_clust = l_st_flt_metric
for indx, value in enumerate(l_st_flt_metric):
l_st_flt_metric[indx].insert(len(l_st_flt_metric[indx]),l_cluster_labels[indx])
return l_st_flt_metric
###Output
_____no_output_____
###Markdown
___TODO MOVE TO 1B___ Build a 2D array station-fault metric Begins with the non-empty station fault list comprising the station code and coordinates, fault name and coordinates, and the distance between them. With that we create an adjacency matrix by transforming a subset of the list into an n_station by n_fault 2D array; with element values:* _r\_station\_type_ - a ranking of the [station types](#Class-of-station-data-processing-methods) based on their contribution to earthquake detection* ~~_d\_station\_fault_ - distance between the station coordinates and the nearest interpolated fault path coordinate~~ (couldn't get this to work; thought k-means can handle tuples; might have to do with the array declaration)[Issue 11](https://github.com/waidyanatha/quasar/issues/11) For the K-means clustering, consider the approach for [mixed categorical and numerical data](https://datascience.stackexchange.com/questions/22/k-means-clustering-for-mixed-numeric-and-categorical-data), where the _station type, code,_ and _fault name_ are categorical while the distance, which is based on the station and fault lat/lon coordinates, is numerical; decimal to be precise (a small sketch of building such an array follows after the reference links below).* [Extensions to the k-Means Algorithm for Clustering Large Data Sets with Categorical Values](http://www.cs.ust.hk/~qyang/Teaching/537/Papers/huang98extensions.pdf)* [Approximation algorithm for k-modes clustering](https://arxiv.org/ftp/cs/papers/0603/0603120.pdf)
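For illustration only, the sketch below (with made-up station codes, fault names, and distances) shows one way to pivot a station-fault distance list into an n_stations × n_faults array with pandas; missing pairs become 0.0, mirroring the zero-initialised array used in the function in the next cell.

```python
import pandas as pd

# hypothetical (station code, fault name, distance in km) triples
rows = [
    ("ABAZ", "Waikopua Fault", 2621.6),
    ("ABAZ", "Kerepehi Fault", 7752.7),
    ("ALRZ", "Kerepehi Fault", 7995.1),
    ("ANWZ", "Ohakea Fault",   7108.2),
]
df = pd.DataFrame(rows, columns=["station", "fault", "distance_km"])

# stations in rows, faults in columns; pairs with no entry are filled with 0.0
metric = df.pivot_table(index="station", columns="fault",
                        values="distance_km", fill_value=0.0)
arr_st_flt = metric.to_numpy(dtype=float)
print(metric)
print(arr_st_flt.shape)
```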
###Code
'''
ARRAY construction of the station fault metric
'''
def get_station_fault_metric_array(list_st_flt_metric: list, max_separation: float = 30000.0):
import numpy as np
import sys
if not isinstance(list_st_flt_metric, list):
raise TypeError
sys.exit(1)
# return list_st_flt_metric[index]
'''
Initialize station types and numpy array
'''
count = 0
arr_st_flt_met = np.array([])  # initialized empty; re-created with shape (n_stations, n_faults) below
st_flt_ub = max_separation # 30Km distance between station and fault line
cls_st = station_data() # from the class 'station data' get dictionary of 'station types'
l_tmp_st_types = []
print("Begin building 2D array station fault type & distance metric")
try:
for idx_st_type, val_st_type in enumerate(list(cls_st.get_types())):
l_tmp_st_types.append([idx_st_type, val_st_type])
print("[get_station_fault_metric_array] temp list of station types\n {}".format(l_tmp_st_types))
except Exception as err:
print("Error message:", err)
sys.exit(1)
'''
filter the list to reflect the maximum-separation upper bound for the distance between
faults and stations
'''
try:
''' list metric elements: [0] station type, [1] code [2] coordinates
[3] fault name [4] coordinates and [5] distance
filter all station fault distance rows such that [5] distance < epsilon (st_flt_ub)
'''
# l_epsFilter_st_flt_met = list([idx for idx,
# element in enumerate([row[5] for row in list_st_flt_metric])
# if element <= st_flt_ub])
unique_stations = set([row[1] for row in list_st_flt_metric])
unique_faults = set([row[3] for row in list_st_flt_metric])
if not unique_stations or not unique_faults:
raise TypeError
else:
print("[get_station_fault_metric_array] {1} number of stations and faults within {0}m distance".format(st_flt_ub,len(list_st_flt_metric)))
'''
Build the input array with rows = station and columns = faults
'''
arr_st_flt_met = np.zeros([len(unique_stations),len(unique_faults)], dtype = float)
except Exception as err:
print("Error message:", err)
sys.exit(1)
'''
TODO set the array element as a tuple with [station-type-ranking, station-fault-distance].
At the moment it is using distance only
'''
try:
# import time
print("[get_station_fault_metric_array] Wait a moment while we construct an array with shape {} for stations in rows and faults in columns".format(arr_st_flt_met.shape))
for st_indx, st_val in enumerate(unique_stations):
for flt_indx, flt_val in enumerate(unique_faults):
''' filter by data elements: [0] station type, [1] code, [3] fault name, and [5] distance
for the particular station and fault combination from the list to construct a new list
[0] station type, [1] code, [2]fault name, and [3] distance
'''
l_filter_tmp_st_flt = [[row[0],row[1], row[3], row[5]] for row in list_st_flt_metric if (row[1] == st_val and row[3] == flt_val)]
if not l_filter_tmp_st_flt:
pass
else:
for tmp_indx, row in enumerate(l_filter_tmp_st_flt):
s_trunc_flt_name = (row[2][:10] + '...') if len(row[2]) > 10 else row[2]
s_stdout = "[get_station_fault_metric_array] inserting {3} {0} of {4} "
s_stdout +="stations with neigbouring fault {5} {1} of {6} distance {2} into the array"
sys.stdout.write("\r"+s_stdout.format(row[1], s_trunc_flt_name, round(row[3],4),
st_indx+1, len(unique_stations),flt_indx+1,
len(l_filter_tmp_st_flt)))
sys.stdout.flush()
arr_st_flt_met[st_indx,flt_indx] = row[3]
# arr_st_flt_met[st_indx,flt_indx] = [s_tmp_st_type,row[3]]
# time.sleep(2)
''' TODO remove all zero rows and columns '''
#arr_st_flt_met[~np.all(arr_st_flt_met == 0, axis=0)]
#arr_st_flt_met[~np.all(arr_st_flt_met[..., :] == 0, axis=0)]
s_stdout = "\n[get_station_fault_metric_array] station fault {1}D array shape {0} has {2} elements and an itemsize {3}"
print(s_stdout.format(arr_st_flt_met.shape, arr_st_flt_met.ndim,
arr_st_flt_met.size, arr_st_flt_met.itemsize))
print("[get_station_fault_metric_array] and it looks like this \n",arr_st_flt_met[0:9])
except Exception as err:
print("Error message:", err)
sys.exit(1)
return arr_st_flt_met
###Output
_____no_output_____
###Markdown
OBJECTIVE 1.C - STATION FAULT COARSEST TOPOGRAPHY Define clustering methods[Learn about clustering methods](https://realpython.com/k-means-clustering-python/) Class of Clustering algorithms1. _get_dbscan_labels()_ 1. Compute the cluster property measures to estimate the acceptability 1. Dump the output to a file including cluster label, lat/lon, station code, and so on1. _get_nn_labels()_ 1. Compute the mean distance between [nearest neighbours](https://scikit-learn.org/stable/modules/neighbors.html) of a minimum 3 points 1. Also consider [mean nearest neighbour distance](https://pysal.org/notebooks/explore/pointpats/distance_statistics.html#Mean-Nearest-Neighbor-Distance-Statistics)1. _get_kmean_labels()_ 1. separates the station fault distances into _n\_clusters_ with similar variances from the mean centroid 1. returns the cluster labels associated with the station fault metric__Note 1:__ - Apply DBSCAN to cluster stations with an epsilon < 30Km. DBSCAN is preferred over K-means clustering because K-means clustering considers the variance while DBSCAN considers a distance function. It gives the capacity to build clusters satisfying the criterion of < 30Km distance between stations.__Note 2:__ - An inherent __problem of DBSCAN__ is that it places data points in the same cluster if pair-wise data points satisfy the epsilon condition. This would not adequately satisfy the required condition that all data points in a cluster are within the desired epsilon distance.
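As a minimal illustration of the 30 Km epsilon in Note 1 (separate from the `clustering` class defined in the next cell), scikit-learn's haversine metric expects coordinates in radians and `eps` as an angle, so a 30 Km radius becomes 30/6371 radians; the station coordinates below are hypothetical.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# hypothetical station coordinates as (lat, lon) in degrees
stations_deg = np.array([
    [-36.60, 174.83], [-36.61, 174.85], [-36.62, 174.86],   # a tight northern group
    [-41.29, 174.78], [-41.30, 174.80],                     # a tight southern group
])

eps_km = 30.0
db = DBSCAN(eps=eps_km / 6371.0,          # km -> radians on the unit sphere
            min_samples=2,
            metric="haversine",
            algorithm="ball_tree").fit(np.radians(stations_deg))
print(db.labels_)                          # stations within ~30 Km of a neighbour share a label
```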
###Code
'''
CLUSTERING of data, both spatial and temporal, functions necessary for the station-fault analysis
'''
class clustering():
def __init__(self):
pass
'''
TODO consider OPTICS (Ordering Points To Identify the Clustering Structure)
'''
'''
DBSCAN clustering - lat/lon pairs
'''
def get_dbscan_labels(self,st_arr):
from sklearn.cluster import DBSCAN
from sklearn import metrics
import sklearn.utils
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_blobs
err="0"
# try:
X, labels_true = make_blobs(n_samples=len(st_arr), centers=st_arr, cluster_std=0.4,random_state=0)
db = DBSCAN(eps=30.0/6371.0, min_samples=3, algorithm='ball_tree', metric='haversine').fit(np.radians(X))
print('DBSCAN epsilon:',db.eps,'algorithm:', db.algorithm, 'metric: ', db.metric)
core_samples_mask = np.zeros_like(db.labels_, dtype=bool)
core_samples_mask[db.core_sample_indices_] = True
# print('core samples mask', len(core_samples_mask),core_samples_mask)
labels = db.labels_
# print("DBSCAN found %0.3f labels" % labels )
# except Exception as err:
# print("Error message:", err)
# labels = ""
return labels, labels_true, core_samples_mask
'''
K nearest neighbour clustering
'''
def get_nn_labels(self,st_flt_list):
from sklearn.neighbors import NearestNeighbors
# Augment station array with cluster number
# Start a new station coordinates and details tuple
st_list = []
i=0
for i in range(len(labels)):
st_row = [tmp_arr[i,0],labels[i],tmp_arr[i,1],tmp_arr[i,2],tmp_arr[i,3]]
st_list.append(list(st_row))
clusters = list({item[1] for item in st_list})
for each_cluster in clusters:
cluster_list = list(st_list[j] for j in range(len(st_list)) if st_list[j][1] == each_cluster)
cluster_arr = np.delete(cluster_list, [0,1,4],axis=1).astype(np.float)
nbrs = NearestNeighbors(n_neighbors=3, algorithm='brute', metric='haversine').fit(cluster_arr)
distances, indices = nbrs.kneighbors(cluster_arr)
print(nbrs.kneighbors_graph(cluster_arr).toarray())
each_cluster_clique = client.get_stations(latitude=-42.693,longitude=173.022,maxradius=30.0/6371.0, starttime = "2016-11-13 11:05:00.000",endtime = "2016-11-14 11:00:00.000")
print(each_cluster_clique)
_=inventory.plot(projection="local")
break
sorted_rank = sorted(st_list, key=lambda i: (int(i[1])), reverse=True)
#print('Code, Cluster, Latitude, Longitude, Elevation')
#print(sorted_rank)
return sorted_rank
'''
K Means clustering - station-fault distance metric
Parameters:
number of clusters = 5 gives optimal Homogeneity, V-measure, and Silhouette Coefficient
maximum number of iterations = 300 to minimize the clustering cost; i.e. the sum of the squared error (SSE)
'''
def get_kmean_labels(self, st_flt_arr, n_clusters=5):
from sklearn.cluster import KMeans
# import sklearn.utils
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_blobs
import numpy as np
''' make station-faults blob with shape of X being 6 features and len(st-flt-arr) '''
# X, labels_true = make_blobs(n_samples=len(st_flt_arr), centers=st_flt_arr, cluster_std=0.4,random_state=0)
scaler = StandardScaler()
# scaled_features = scaler.fit_transform(X)
scaled_features = scaler.fit_transform(st_flt_arr)
''' init = "random", "k-means++" "'''
kmeans = KMeans(init='k-means++', n_clusters=n_clusters, n_init=5,max_iter=300, random_state=5)
''' use either fit_predict or fit - using fit because it works with scaled features '''
#label = kmeans.fit_predict(scaled_features)
kmeans.fit(scaled_features)
y_kmeans = kmeans.predict(scaled_features)
s_stdout = "Statistics from the initialization run with the lowest SSE;\n"
s_stdout += "Inertia {0} with {1} iterations befor staturation and {3} \ncenters\n {2}"
print(s_stdout.format(kmeans.inertia_, kmeans.n_iter_, kmeans.cluster_centers_, len(kmeans.cluster_centers_)))
labels = kmeans.labels_
print("\nThe station and fault K-means clustering {1} labels \n{0}".format(kmeans.labels_, len(kmeans.labels_)))
# core_samples_mask = np.zeros_like(kmeans.labels_, dtype=bool)
# core_samples_mask[kmeans.core_sample_indices_] = True
# return kmeans
# return labels, labels_true, core_samples_mask
# return labels, labels_true, kmeans.cluster_centers_, scaled_features, y_kmeans
return kmeans.labels_, kmeans.cluster_centers_, scaled_features, y_kmeans
###Output
_____no_output_____
###Markdown
Cluster Stations and faults by distance Apply K-means clustering We use the k-means function defined in the [clustering class](#Class-of-Clustering-algorithms). There are several [caveats that scikit-learn notes](https://scikit-learn.org/stable/modules/clustering.html#k-means) that have been considered, namely that k-means assumes the clusters are convex and isotropic and that a principal component analysis should be applied prior to the clustering.
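A brief sketch of the scale-then-PCA-then-k-means pipeline that the caveat above refers to, run on random stand-in data; the notebook's own run below standardises the distances but does not apply PCA, so this is only an illustration of the recommended preprocessing.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 4))          # stand-in for a station-fault feature matrix

pipeline = make_pipeline(
    StandardScaler(),                  # put features on a comparable scale
    PCA(n_components=2),               # decorrelate / reduce before clustering
    KMeans(n_clusters=8, n_init=5, max_iter=300, random_state=5),
)
labels = pipeline.fit_predict(X)
print(labels[:20])
```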
###Code
'''
METHOD for applying k-means clustering of the station fault metric
'''
from sklearn import metrics
import numpy as np
n_clusters = 8
arr_st_flt = np.array([])
#arr_st_flt = np.array([],[])
'''
Get the station fault metric list in the form of an array
'''
try:
print("[Clustering] Wait a moment to construct the station fault metric list ...\n")
''' n_clusters defines the number of k-means clusters to build (set to 8 above) '''
st_flt_list = []
''' [0] station type, [1] code, [2] coordinates, [3] nearest fault, [4] coordinates, and [5] distance '''
st_flt_list = get_station_fault_metric_list()
if not isinstance(st_flt_list, list):
err = "[Clustering] invalid station fault metric list"
raise TypeError
else:
print("\n[Clustering] Received station fault list with distance metric and it looks like this with")
print("station type, code, st-coordinates, fault name, flt-coordinates, distance:\n{}".format(st_flt_list[0:5]))
# arr_st_flt = get_station_fault_metric_array(st_flt_list)
# arr_st_flt = np.array([[row[0], row[1], row[2], row[3], row[4], row[5]] for row in st_flt_list])
arr_st_flt = np.array([[row[5]] for row in st_flt_list])
print("[Clustering] Received array with {0} dimensions of shape {1} and it looks like this:\n{2}".format(arr_st_flt.ndim, arr_st_flt.shape,arr_st_flt[0:9]))
except Exception as err:
print("[Clustering] Error message:", err)
'''
Apply k-means clustering on the 2D array metric
'''
try:
cls_cluster = clustering()
# Run k means to get the cluster labels
print("[Clustering] Begin k means clustering ...")
# arr_labels, labels_true, cluster_centers, scaled_features, y_kmeans = cls_cluster.get_kmean_labels(
# arr_st_flt, n_clusters)
''' reshape to (-1, 1) for data with single feature or (1, -1) if it contains a single sample. '''
arr_st_flt = arr_st_flt.reshape(-1, 1)
arr_labels, cluster_centers, scaled_features, y_kmeans = cls_cluster.get_kmean_labels(
arr_st_flt, n_clusters)
#print('core samples mask', len(core_samples_mask),core_samples_mask)
print("[Clustering] complete!")
except Exception as err:
print("Error message:", err)
###Output
[Clustering] Wait a moment to construct the station fault metric list ...
Retrieving active stations with a
start-time: 2021-01-31T09:32:09.200130Z
& end-time: 2021-02-07T09:32:09.200203Z
FDSN Webservice Client (base url: http://service.geonet.org.nz)
Available Services: 'dataselect' (v1.1), 'event' (v1.1), 'station' (v1.1), 'available_event_catalogs', 'available_event_contributors'
Use e.g. client.help('dataselect') for the
parameter description of the individual services
or client.help() for parameter description of
all webservices.
[get_station_fault_metric_list] Begin building a list of the station fault metric elements 2021-02-06T09:32:11.046985Z
[get_station_fault_metric_list] Fetching station list ...
[get_station_fault_metric_list] There are 450 active valid stations and 3 invalid station(s)
[get_station_fault_metric_list] The invalid stations are:[['CTZ', -43.73549, -176.61719], ['GLKZ', -29.26068, -177.918038], ['RIZ', -29.2449, -177.9289]]
[get_station_fault_metric_list] Unique station types 1st & 2nd letters of station codes are: {'EH', 'LN', 'VH', 'HH', 'HN', 'LH'})
[get_station_fault_metric_list] Wait for a moment to build the metric comprising 450 stations and 6 faults...
[get_station_fault_metric_list] Building 450 of 450 calculating faults closest to Station WVZ..
[get_station_fault_metric_list] Done building the list metric of size 530
[Clustering] Received station fault list with distance metric and it looks like this with
station type, code, st-coordinates, fault name, flt-coordinates, distance:
[['EH', 'ABAZ', [174.832332909, -36.600224003], '0 Waikopua Fault', [175.01746888200012, -36.96771368799995], 2621.602611793905, 6], ['EH', 'ABAZ', [174.832332909, -36.600224003], '1 Wairoa South Fault', [175.14850826000009, -37.17744763299993], 4193.037586927532, 6], ['EH', 'ABAZ', [174.832332909, -36.600224003], '2 Kerepehi Fault', [175.6209514190001, -37.52698099199995], 7752.744408176556, 6], ['EH', 'ALRZ', [176.343014052, -38.562043299], '2 Kerepehi Fault', [175.62638571500008, -37.53186966399994], 7995.069136880622, 6], ['EH', 'ALRZ', [176.343014052, -38.562043299], '9501 Dirty Spur Fault', [176.0680322820001, -39.861194103999935], 8460.265306127993, 6]]
[Clustering] Received array with 2 dimensions of shape (530, 1) and it looks like this:
[[ 2621.60261179]
[ 4193.03758693]
[ 7752.74440818]
[ 7995.06913688]
[ 8460.26530613]
[ 8518.61536248]
[ 7108.15022562]
[ 4596.48422029]
[ 4594.17630058]]
[Clustering] Begin k means clustering ...
Statistics from the initialization run with the lowest SSE;
Inertia 10.383517299095361 with 9 iterations before saturation and 8
centers
[[ 0.751789 ]
[-1.12222031]
[-0.57610419]
[ 0.29283451]
[ 1.38836194]
[-2.25156721]
[-0.17628675]
[-1.72354007]]
The station and fault K-means clustering 530 labels
[7 1 0 0 0 0 3 2 2 0 0 0 3 3 3 7 1 3 6 2 2 6 7 7 0 4 2 2 3 0 6 3 3 3 2 2 3
3 3 3 3 3 3 4 2 7 7 2 7 7 0 1 6 6 7 1 3 5 7 6 3 2 2 5 1 1 3 0 6 4 4 6 2 2
7 3 3 4 6 4 4 2 6 0 4 4 6 6 7 1 3 2 1 7 4 2 2 4 6 4 6 3 4 0 2 3 4 4 4 6 6
4 0 0 3 2 2 7 3 3 2 3 3 3 4 4 6 4 4 3 0 3 0 4 2 2 7 1 3 4 2 4 0 6 6 5 7 6
0 1 1 6 4 4 2 4 4 0 3 7 3 7 7 4 3 6 6 4 6 4 4 2 2 2 0 1 1 3 3 3 3 4 6 0 3
7 1 3 3 6 4 4 6 2 2 5 1 3 4 2 2 4 0 5 5 1 0 3 0 3 6 7 1 7 7 6 1 1 4 3 3 1
6 6 0 0 0 2 2 2 6 4 4 4 4 4 6 1 1 3 0 3 1 6 6 6 6 6 6 3 6 6 4 2 3 2 2 3 3
3 6 6 6 6 4 3 6 6 4 1 0 0 5 2 2 4 6 4 6 2 2 2 1 0 0 3 2 2 2 4 4 3 3 0 2 2
3 3 0 6 6 6 0 5 1 1 6 5 5 5 2 2 7 1 1 0 0 0 3 3 0 7 1 1 4 0 2 2 0 4 4 1 2
2 5 7 6 3 3 3 4 2 2 4 2 2 4 4 0 4 3 3 4 0 6 6 4 6 3 4 4 0 4 4 1 2 0 0 3 3
2 2 3 3 6 1 5 3 3 0 3 1 4 0 0 4 0 1 1 7 7 1 0 0 4 6 4 4 2 6 6 0 3 6 4 0 2
6 4 4 1 7 7 3 2 2 3 0 0 6 1 5 0 0 0 2 2 2 0 6 2 2 0 1 5 5 4 0 6 6 4 4 4 0
0 0 6 2 2 4 3 3 3 6 6 4 6 0 4 4 0 0 0 0 6 5 5 3 0 4 0 0 1 3 3 6 1 1 4 7 3
3 0 4 3 1 1 1 0 0 4 4 4 0 0 0 6 2 2 5 7 6 4 4 0 0 0 6 2 2 3 7 7 3 0 0 0 1
1 2 3 3 4 7 1 0 3 6 6 0]
[Clustering] complete!
###Markdown
Performance indicatorsJustifying the clustering on the basis of:* Noise - * Silhouette Coefficient - Other cluster performance measures such as Homogeneity, Completeness, V-measure, Adjusted Rand Index, and Adjusted Mutual Information cannot be calculated without a _ground truth_ matrix. The _cluster center_ is the arithmetic mean of the points belonging to the cluster. A positive __Silhouette Coefficient__ indicates that, on average, each point is closer to its own cluster center than to the other cluster centers. The scatter plot shows the cluster centers and the clusters.
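Because the ground-truth-based scores above are unavailable, one common way to justify the choice of `n_clusters` is to sweep k and keep the silhouette score, sketched below on random stand-in distances of the same shape as `arr_st_flt`; this is an aside, not part of the notebook's pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
distances = rng.uniform(0, 10_000, size=(530, 1))   # stand-in for station-fault distances

for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=5, random_state=5).fit_predict(distances)
    print(f"k={k}  silhouette={silhouette_score(distances, labels):.3f}")
```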
###Code
'''
Performance indicators for the clustering
'''
try:
# Number of clusters in labels, ignoring noise if present.
n_clusters_ = len(set(arr_labels)) - (1 if -1 in arr_labels else 0)
n_noise_ = list(arr_labels).count(-1)
print('Performance evaluation ...')
print('Total number of stations: %d' % len(arr_labels))
print('Estimated number of clusters: %d' % n_clusters_)
print('Estimated number of noise points: %d' % n_noise_)
print("Silhouette Coefficient: %0.3f"
% metrics.silhouette_score(arr_st_flt, arr_labels))
# print("Homogeneity: %0.3f" % metrics.homogeneity_score(labels_true, arr_labels))
# print("Completeness: %0.3f" % metrics.completeness_score(labels_true, arr_labels))
# print("V-measure: %0.3f" % metrics.v_measure_score(labels_true, arr_labels))
# print("Adjusted Rand Index: %0.3f"
# % metrics.adjusted_rand_score(labels_true, arr_labels))
# print(f"Adjusted Mutual Information: %0.3f" % metrics.adjusted_mutual_info_score(labels_true, arr_labels))
except Exception as err:
print("Error message:", err)
###Output
Performance evaluation ...
Total number of stations: 530
Estimated number of clusters: 8
Estimated number of noise points: 0
Silhouette Coefficient: 0.578
###Markdown
Plot the clustering results Augment station fault and cluster labels lists
###Code
l_st_flt_clust = []
l_labels = arr_labels.tolist()
l_st_flt = get_station_fault_metric_list()
l_st_flt_clust = get_augm_cluster_label_list(l_st_flt, l_labels)
print(l_st_flt_clust[0:9])
###Output
Retrieving active stations with a
start-time: 2021-01-31T09:32:29.971978Z
& end-time: 2021-02-07T09:32:29.972026Z
FDSN Webservice Client (base url: http://service.geonet.org.nz)
Available Services: 'dataselect' (v1.1), 'event' (v1.1), 'station' (v1.1), 'available_event_catalogs', 'available_event_contributors'
Use e.g. client.help('dataselect') for the
parameter description of the individual services
or client.help() for parameter description of
all webservices.
[get_station_fault_metric_list] Begin building a list of the station fault metric elements 2021-02-06T09:32:31.224102Z
[get_station_fault_metric_list] Fetching station list ...
[get_station_fault_metric_list] There are 450 active valid stations and 3 invalid station(s)
[get_station_fault_metric_list] The invalid stations are:[['CTZ', -43.73549, -176.61719], ['GLKZ', -29.26068, -177.918038], ['RIZ', -29.2449, -177.9289]]
[get_station_fault_metric_list] Unique station types 1st & 2nd letters of station codes are: {'EH', 'LN', 'VH', 'HH', 'HN', 'LH'})
[get_station_fault_metric_list] Wait for a moment to build the metric comprising 450 stations and 6 faults...
[get_station_fault_metric_list] Building 450 of 450 calculating faults closest to Station WVZ..
[get_station_fault_metric_list] Done building the list metric of size 530
[get_augm_cluster_label_list] Begin combining station fault list with cluster labels
[['EH', 'ABAZ', [174.832332909, -36.600224003], '0 Waikopua Fault', [175.01746888200012, -36.96771368799995], 2621.602611793905, 6, 7], ['EH', 'ABAZ', [174.832332909, -36.600224003], '1 Wairoa South Fault', [175.14850826000009, -37.17744763299993], 4193.037586927532, 6, 1], ['EH', 'ABAZ', [174.832332909, -36.600224003], '2 Kerepehi Fault', [175.6209514190001, -37.52698099199995], 7752.744408176556, 6, 0], ['EH', 'ALRZ', [176.343014052, -38.562043299], '2 Kerepehi Fault', [175.62638571500008, -37.53186966399994], 7995.069136880622, 6, 0], ['EH', 'ALRZ', [176.343014052, -38.562043299], '9501 Dirty Spur Fault', [176.0680322820001, -39.861194103999935], 8460.265306127993, 6, 0], ['EH', 'ALRZ', [176.343014052, -38.562043299], '9502 Dirty Spur Fault', [176.06084879900004, -39.86902406499996], 8518.615362483364, 6, 0], ['EH', 'ANWZ', [176.475058802, -40.459742927], '9500 Ohakea Fault', [175.3838380520001, -40.22729356699995], 7108.150225624606, 6, 3], ['EH', 'ANWZ', [176.475058802, -40.459742927], '9501 Dirty Spur Fault', [176.06084879900004, -39.86902406499996], 4596.484220292747, 6, 2], ['EH', 'ANWZ', [176.475058802, -40.459742927], '9502 Dirty Spur Fault', [176.05970438100007, -39.87027127699997], 4594.17630058067, 6, 2]]
###Markdown
Scatter plot[Scatter plot](https://jakevdp.github.io/PythonDataScienceHandbook/05.11-k-means.html) of the fault lines to show the closest sensor in each cluster to the fault line
###Code
'''
SCATTER PLOT of the clusters and cluster centers
TODO colour code the clusters
'''
#import seaborn as sns
import matplotlib.pyplot as plt
plt.figure(figsize=(15, 15))
coords = np.array([[row[6],row[7]] for row in l_st_flt_clust])
#plt.scatter(scaled_features[:,0], scaled_features[:,1])
'''
Calculate the size of each cluster and station type combination in the list: [6] station rank [7] cluster label
'''
unique_labels = set(row[7] for row in l_st_flt_clust)   # row[7] holds the cluster label (see the list layout printed above)
unique_st_ranks = set(row[6] for row in l_st_flt_clust) # row[6] holds the station-type rank
l_clust_size = []
for cl_label in range(len(unique_labels)):
l_st_ranks =[row[6] for row in l_st_flt_clust if row[7] == cl_label]
for st_rank in range(len(l_st_ranks)):
size = 100*len([row for row in l_st_ranks if row == st_rank])
if size > 0:
l_clust_size.append([st_rank,cl_label,size])
arr_plot = np.array(l_clust_size)
'''
Scatter plot axis and labeling
TODO fix the colours and axis labels
'''
plt.scatter(arr_plot[:,0], arr_plot[:,1], alpha=0.4, c=arr_plot[:,2], s=arr_plot[:,2], label=set(y_kmeans))
#plt.scatter(coords[:,0], coords[:,1], alpha=0.4, c=y_kmeans, s=300, label=set(y_kmeans))
plt.title('Scatter plot of clusters labels and station type ranking')
plt.xlabel('Ranking (enumeration)')
plt.yticks(range(0,len(unique_labels)),unique_labels)
#plt.xticks(range(0,len(unique_st_ranks)),unique_st_ranks)
plt.ylabel('Cluster label')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
___TODO Move to 1B___ Proximity MapWe use [LineCollection in Matplotlib](https://stackoverflow.com/questions/21352580/matplotlib-plotting-numerous-disconnected-line-segments-with-different-colors) to construct the station to fault distance line segments. The colour coding represents the cluster.
###Code
import numpy as np
import pylab as pl
from matplotlib import collections as mc
'''
Create a list of station and fault coordinate tuples
'''
l_st = list(tuple(row[2]) for row in l_st_flt_clust)
l_flt = list(tuple(row[4]) for row in l_st_flt_clust)
l_coords = [[tuple(row[0]),tuple(row[1])] for row in list(zip(l_st, l_flt))]
'''
Build the colour scheme corresponding with the cluster labels
'''
unique_labels = set([row[6] for row in l_st_flt_clust])
colors = [plt.cm.Spectral(each)
for each in np.linspace(0, 1, len(unique_labels))]
l_colours = list(row[6] for row in l_st_flt_clust)
for k, col in zip(unique_labels, colors):
if k == -1:
# Black used for noise.
col = [0, 0, 0, 1]
for col_indx, col_label in enumerate(l_colours):
if l_colours[col_indx] == k:
l_colours[col_indx] = col
'''
Plot the line collection
'''
lc = mc.LineCollection(l_coords, colors=l_colours, linewidths=2)
fig, ax = pl.subplots()
fig.set_size_inches(15, 25)
fig.legend()
ax.add_collection(lc)
ax.margins(0.1)
ax.legend([lc],[unique_labels])
###Output
No handles with labels found to put in legend.
###Markdown
Voronoi diagramPlot clusters as [Voronoi Cells](https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.voronoi_plot_2d.html) with varied colors unique to each cluster and also displaying the centroid. Applications of Voronoi diagrams are many and often include [determining which feature is closest to any given point](https://towardsdatascience.com/how-to-create-voronoi-regions-with-geospatial-data-in-python-adbb6c5f2134) -- here, determining which station is nearest to a given fault path coordinate in a neighbourhood.
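The nearest-station lookup that a Voronoi partition encodes can also be answered directly with a KD-tree, as in the small sketch below; the station codes and coordinates are hypothetical, and the query uses plain lon/lat degrees (a planar approximation that is fine for an illustration at this scale).

```python
import numpy as np
from scipy.spatial import cKDTree

# hypothetical station coordinates (lon, lat) and codes
station_xy = np.array([[174.83, -36.60], [176.34, -38.56], [176.48, -40.46]])
station_codes = ["ABAZ", "ALRZ", "ANWZ"]
tree = cKDTree(station_xy)

# which station is nearest to each fault-path coordinate?
fault_coords = np.array([[175.02, -36.97], [176.06, -39.87]])
dist_deg, idx = tree.query(fault_coords)
for (lon, lat), d, i in zip(fault_coords, dist_deg, idx):
    print(f"fault point ({lon:.2f}, {lat:.2f}) -> {station_codes[i]} (~{d:.2f} deg away)")
```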
###Code
'''
PLOT Voronoi diagram of the stations
TODO put inside geographic boundary
'''
from scipy.spatial import Voronoi, voronoi_plot_2d
#arr_clust_coords = np.array([[row[0],row[6]] for row in l_st_flt_clust])
#print([labels[:],scaled_features[:, 0], scaled_features[:, 1]])
arr_coord = np.array(list([row[2][0],row[2][1]] for row in l_st_flt_clust))
vor = Voronoi(arr_coord)
#fig = voronoi_plot_2d(vor)
fig = voronoi_plot_2d(vor, show_vertices=False, line_colors='orange',
line_width=3, line_alpha=0.6, point_size=5)
fig.set_size_inches(15,15)
#plt.axis("equal")
#plt.xlim
# #############################################################################
# Plot result
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
plt.figure(figsize=(30, 40))
#nz_map = Basemap(width=15000,height=15000,projection='merc',
# resolution='l',lat_0=-40,lon_0=176.)
#nz_map.drawcoastlines()
l_coords = [row[2] for row in l_st_flt_clust]
print(coords)
# Black removed and is used for noise instead.
unique_labels = set(l_labels)
colors = [plt.cm.Spectral(each)
for each in np.linspace(0, 1, len(unique_labels))]
for k, col in zip(unique_labels, colors):
if k == -1:
# Black used for noise.
col = [0, 0, 0, 1]
class_member_mask = (l_labels == k)
xy = station_coordinates[class_member_mask & core_samples_mask]
plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),
markeredgecolor='k', markersize=14)
# uncomment to plot the noise
#xy = station_coordinates[class_member_mask & ~core_samples_mask]
#plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),
# markeredgecolor='k', markersize=6)
plt.title('Estimated number of clusters: %d' % n_clusters_)
plt.legend(loc='upper left', fontsize=20)
plt.xlabel('Latitude')
plt.ylabel('Longitude')
plt.show()
###Output
[[ 6 7]
[ 6 1]
[ 6 0]
...,
[ 6 6]
[ 6 6]
[12 0]]
|
notebooks/consistant_atom_order.ipynb | ###Markdown
Below is a playground
###Code
mol4 = Chem.MolFromMol2Block(open("example2.mol2", "r").read(), removeHs=False)
mol4
atoms = mol4.GetAtoms()
for a in atoms:
print(a.GetIdx(), a.GetSymbol())
list(order)
conformer = mol4.GetConformer(0)
pos = conformer.GetAtomPosition(6)
pos.x, pos.y, pos.z
a6 = mol4.GetAtomWithIdx(6)
a6.GetSymbol()
a6.SetAtomMapNum(20)
a6.GetAtomMapNum()
a5 = mol4.GetAtomWithIdx(5)
print(a5.GetSymbol())
print(a5.GetAtomMapNum())
new_mol = Chem.rdchem.RWMol(Chem.Mol())
for idx in order:
new_mol.AddAtom(mol4.GetAtomWithIdx(idx))
for a in new_mol.GetAtoms():
print(a.GetIdx(), a.GetSymbol())
bonds = mol4.GetBonds()
for b in bonds:
new_mol.AddBond(order[b.GetBeginAtomIdx()], order[b.GetEndAtomIdx()], b.GetBondType())
mol5 = Chem.MolFromSmiles(Chem.MolToSmiles(mol4, canonical=True))
mol5
mol5 = Chem.MolFromMolBlock(Chem.rdmolfiles.MolToMolBlock(new_mol), removeHs=False)
new_mol
###Output
_____no_output_____ |
examples/6_p_scale_test_Tange_MgO.ipynb | ###Markdown
For high dpi displays.
###Code
%config InlineBackend.figure_format = 'retina'
###Output
_____no_output_____
###Markdown
0. General note This example compares pressures calculated with `pytheos` against the original publication for the MgO (periclase) scale by Tange 2009. 1. Global setup
###Code
import matplotlib.pyplot as plt
import numpy as np
from uncertainties import unumpy as unp
import pytheos as eos
###Output
_____no_output_____
###Markdown
3. Compare
###Code
eta = np.linspace(1., 0.65, 36)
print(eta)
tange_mgo = eos.periclase.Tange2009()
tange_mgo.print_equations()
tange_mgo.print_equations()
tange_mgo.print_parameters()
v0 = 74.698
tange_mgo.three_r
v = v0 * (eta)
temp = 3000.
p = tange_mgo.cal_p(v, temp * np.ones_like(v))
###Output
_____no_output_____
###Markdown
###Code
print('for T = ', temp)
for eta_i, p_i in zip(eta, p):
print("{0: .3f} {1: .2f}".format(eta_i, p_i))
###Output
for T = 3000.0
1.000 16.76+/-0.18
0.990 18.44+/-0.18
0.980 20.22+/-0.18
0.970 22.10+/-0.19
0.960 24.09+/-0.21
0.950 26.18+/-0.22
0.940 28.40+/-0.23
0.930 30.74+/-0.25
0.920 33.21+/-0.26
0.910 35.82+/-0.28
0.900 38.58+/-0.29
0.890 41.48+/-0.30
0.880 44.55+/-0.31
0.870 47.79+/-0.33
0.860 51.20+/-0.34
0.850 54.81+/-0.35
0.840 58.61+/-0.36
0.830 62.63+/-0.37
0.820 66.87+/-0.38
0.810 71.34+/-0.39
0.800 76.07+/-0.40
0.790 81.06+/-0.41
0.780 86.33+/-0.42
0.770 91.90+/-0.43
0.760 97.79+/-0.44
0.750 104.01+/-0.46
0.740 110.59+/-0.47
0.730 117.55+/-0.48
0.720 124.91+/-0.50
0.710 132.70+/-0.52
0.700 140.95+/-0.54
0.690 149.69+/-0.56
0.680 158.94+/-0.58
0.670 168.75+/-0.61
0.660 179.16+/-0.64
0.650 190.19+/-0.67
###Markdown
Make comparison with the Vinet column in the table.
###Code
v = tange_mgo.cal_v(p, temp * np.ones_like(p), min_strain=0.6)
print(1.-(v/v0))
###Output
[0. 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 0.1 0.11 0.12 0.13
0.14 0.15 0.16 0.17 0.18 0.19 0.2 0.21 0.22 0.23 0.24 0.25 0.26 0.27
0.28 0.29 0.3 0.31 0.32 0.33 0.34 0.35]
|
tirgul/tirgul2.ipynb | ###Markdown
Tirgul 2 A deeper dive into *Pandas* and *NumPy*
###Code
# shell (cmd) commands are denoted by the ! sign
!pip install numpy
!pip install pandas
# read file using pandas
# pandas is a fast, powerful, flexible and easy-to-use open source data analysis library for Python
import pandas as pd
data = pd.read_csv('titanic_short.csv')
### The data read is of type dataframe, which is a 2D data structure where columns can be of different types
type(data)
###Output
_____no_output_____
###Markdown
[Tutorial for python data frames](https://stackabuse.com/beginners-tutorial-on-the-pandas-python-library/)
###Code
# print using python system print
print(data)
# print using pandas display
data
data.head() # we can set the number of first rows to be displayed.
# guess what the command for the tail is
# Same goes for tail
data.tail()
#### Select a specific range of rows:
data[10:19]
###Output
_____no_output_____
###Markdown
Self exercise: Suggest how to display only survivors (denoted by 1)
###Code
data[data['survived'] == 1]
###Output
_____no_output_____
###Markdown
Self exercise: Suggest how to display only female passengers
###Code
data[data['sex']=='female']
###Output
_____no_output_____
###Markdown
Self exercise: Suggest how to display only female survivors
###Code
data[(data['survived'] == 1) & (data['sex']=='female')]
#### Display some of the columns by passing a list of column names
data[ [ 'pclass','age'] ]
#### Number of rows and columns via the shape attribute:
data.shape
#### Get more information, such as the column names, via the info() method
data.info()
#### The type of the read data is a DataFrame
type(data)
###Output
_____no_output_____
###Markdown
Add nice equations using LaTeX syntax in Markdown Sphere or Quadratic equations: $ x^2 + y^2 = z^2 $ $ x_{1,2} = \frac{-b \pm \sqrt{b^2-4ac}}{2a} $ Numpy Matrices and much more, this is where the fun begins.. [Numpy official docs](https://numpy.org/doc/stable/reference/index.html)
###Code
import numpy as np
arr = np.array([1, 2, 3, 4, 5])
print(arr*arr)
#### vector multiplication (AKA dot product)
print("arr.dot(arr)\t=",arr.dot(arr))
# or by
print("arr@arr\t\t=",arr@arr)
# create ones matrix of specific size:
np.ones((2,3))
# self exercise: create a zeros matrix of a specific size:
np.zeros((2,3))
# we can create any constant array
c = np.full((2,2), 7)
print(c)
d = np.eye(3)
print(d)
# self exercise: create a 3x3 matrix where the diagonal has a value of 8
8*np.eye(3)
e = np.random.random((2,4))
print(e)
###Output
[[0.21763861 0.91912039 0.64856454 0.68660837]
[0.37050684 0.37090226 0.96263439 0.99146131]]
###Markdown
Self learning at home:+ 1. Read the tutorials in links+ 2. Write a NumPy program to test whether none of the elements of a given array is zero (create a sample array).
###Code
import numpy as np
x = np.array([1, 2, 3, 4])
print("Original array:")
print(x)
print("Test if none of the elements of the said array is zero:")
print(np.all(x))
x = np.array([0, 1, 2, 3])
print("Original array:")
print(x)
print("Test if none of the elements of the said array is zero:")
print(np.all(x))
###Output
Original array:
[1 2 3 4]
Test if none of the elements of the said array is zero:
True
Original array:
[0 1 2 3]
Test if none of the elements of the said array is zero:
False
|
0_developed_draft/legend_choreography.ipynb | ###Markdown
The code above has been moved to chor_legend.py as the get_chor_legend function. June 14, 2020
###Code
for c in chor_legend_df['name'].values:
print(f"'{c}',", end=' ')
chor_legend_df['category'] = ['time', 'time', 'time', 'id',
'sample_size', 'sample_size',
'area',
'width', 'width', 'width', 'width',
'length', 'length',
'angle', 'angle', 'angle', 'angle',
'speed', 'speed',
'dir', 'pathlen', 'dir',
'location', 'location', 'speed', 'speed',
'angle', 'speed',
'stim', 'stim', 'stim', 'stim',
'stats', 'stats', 'stats', 'stats',
'stats', 'stats', 'stats', 'stats',
'stats', 'stats', 'stats']
###Output
_____no_output_____
###Markdown
time measures:* time* persistence size measures:* area width measures:* midline* morphwidth* width* relwidth length measures:* length* rellength angle measures:* aspect* relaspect* kink* curve* orient movement speed measures:* speed* crab* angular movement dir measures: * bias* dir* vel_x* vel_y
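As an aside, the same category assignment can be made order-independent by mapping measure names to categories with a dictionary instead of the positional list used above; the stand-in frame and the fallback category 'other' below are illustrative only.

```python
import pandas as pd

# stand-in for chor_legend_df['name']; the real frame is built earlier in this notebook
legend = pd.DataFrame({'name': ['time', 'area', 'midline', 'kink', 'speed', 'bias', 'pathlen']})

category_by_name = {
    'time': 'time', 'persistence': 'time',
    'area': 'area',
    'midline': 'width', 'morphwidth': 'width', 'width': 'width', 'relwidth': 'width',
    'length': 'length', 'rellength': 'length',
    'aspect': 'angle', 'relaspect': 'angle', 'kink': 'angle', 'curve': 'angle', 'orient': 'angle',
    'speed': 'speed', 'crab': 'speed', 'angular': 'speed',
    'bias': 'dir', 'dir': 'dir', 'vel_x': 'dir', 'vel_y': 'dir',
}
legend['category'] = legend['name'].map(category_by_name).fillna('other')
print(legend)
```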
###Code
# save
path_save = os.path.join(dir_save, 'legend_choreography.pickle')
with open(path_save,'wb') as f:
pickle.dump(chor_legend_df,f)
# save as csv
path_save = os.path.join(dir_save, 'legend_choreography.csv')
chor_legend_df.to_csv(path_save, index=False)
###Output
_____no_output_____ |
LoadPostDisasterData.ipynb | ###Markdown
This notebook loads post-disaster damage, loss, health data for the 2017 flood in Bangladesh
###Code
import os
import sys
import numpy as np
import pandas as pd
import geopandas as gpd
from geopandas.tools import sjoin
import rasterio
from shapely.geometry import Point, Polygon
from functools import reduce
import fhv
# ADMINISTRATIVE SHAPEFILE
# ------------------------------------------------- #
shape = gpd.read_file('./data/admin_boundary/bgd_admbnda_adm3_bbs_20180410.shp')
# Convert ADM3_PCODE of Mymensingh (45) division (total 378 unions) (45 -> 30)
f45t30 = '30' + shape.loc[shape['ADM1_PCODE'] == '45', 'ADM3_PCODE'].str[2:]
shape.loc[shape['ADM1_PCODE'] == '45', 'ADM3_PCODE'] = f45t30.values
shape['ADM3_PCODE'] = shape['ADM3_PCODE'].astype(int)
f45t30 = '30' + shape.loc[shape['ADM1_PCODE'] == '45', 'ADM2_PCODE'].str[2:]
shape.loc[shape['ADM1_PCODE'] == '45', 'ADM2_PCODE'] = f45t30.values
shape['ADM2_PCODE'] = shape['ADM2_PCODE'].astype(int)
ADM2 = shape[['ADM2_EN','ADM2_PCODE']].copy().drop_duplicates()
ADM2['ADM2_PCODE'] = ADM2['ADM2_PCODE'].astype(int)
if False:
shape[['ADM2_PCODE','ADM2_EN','ADM3_PCODE','ADM3_EN']].sort_values(
by='ADM3_PCODE').reset_index(drop=True).to_excel('./data/upazila_list.xlsx')
# ------------------------------------------------- #
# POPULATION DATA
# ------------------------------------------------- #
# BGD Census total population in 2011: 144,043,697
# BGD World Bank population in 2011: 149,273,778
# BGD World Bank population in 2017: 159,670,593
# ------------------------------------------------- #
df = fhv.LoadCensusBBS('./data/census2011/age 5 years group.xls')
popu2011 = df.sum(axis=1)
popu2017 = (popu2011/popu2011.sum()*159670593).astype(int)
popu2017_adm2 = popu2017.copy()
popu2017_adm2.index = (popu2017_adm2.index / 100).astype(int)
popu2017_adm2 = popu2017_adm2.groupby(popu2017_adm2.index).sum()
popu2017_adm2.index.name = 'ADM2_PCODE'; popu2017_adm2.name = 'Population'
###Output
_____no_output_____
###Markdown
Post-flood disaster damage and loss dataThe impacts (damage and loss) of the August 2017 flood are obtained from Shelter Cluster, DDM, MoDMR, NIRAPAD, etc.- [Bangladesh Monsoon Floods 2017, data table on Sep-3](https://www.sheltercluster.org/bangladesh-monsoon-floods-2017/documents/assessment-flood-damage-data-government-03092017)- [Bangladesh Monsoon Floods 2017, data table on Aug-30, from NIRAPAD Monthly Hazard Incident Report](https://www.nirapad.org.bd/home/resources/monthlyHazard)- [72 hours Rapid Assessment Report NAWG V1](https://www.sheltercluster.org/bangladesh-monsoon-floods-2017/documents/nawg-72-hours-rapid-assessment-report-v1)Here we use the following variables to represent the flood impacts:- Distress: Percent of affected population, Percent of displaced people, Number of deaths- Damage: Number of damaged houses, Number of damaged roads (km), Number of damaged crop land (Hect)- Disruption: Number of affected institutions, Number of damaged tubewells The latest DDM (MoDMR) report was published Sep-3-2017, but it doesn't have the affected population. The report published on Aug-30 has it, but only as a percentage. Here, we calibrated the affected population based on the reports published on Aug-20, Aug-21, and Aug-30.
###Code
damage_table = [['PAFFCPOPU','Distress','Percentage of affected population','MinMax'],
['NDISPPOPU','Distress','Number of displaced population','MinMax'],
['NDEATH','Distress','Number of death','MinMax'],
['NDAMGHOUS','Damage','Number of damaged houses', 'Quantile'],
['DAMGROAD','Damage','Damaged roads (Km)', 'Quantile'],
['DAMGCLAND','Damage','Damaged crop land (Hect)', 'Quantile'],
['NAFFCINST','Disruption','Number of affected educational institutions','Quantile'],
['NDAMGTUBE','Disruption','Number of damaged tubewell','Quantile']]
damage_table = pd.DataFrame(damage_table, columns=['Name','Domain','Description','Normalization'])
damage_table['Scale'] = 'District'
damage_table
# DDM and NDRCC published in Sep-03-2017
df = pd.read_excel('./data/disaster_records/damagedata_DDM.xlsx',
sheet_name='DDM, Sep 3',skiprows=1,skipfooter=1).drop('SL', axis=1).fillna(0)
df = df[['Name of affected Districts','No of Total affecetd Families','No of Total damaged Houses','No of Total damaged Crops land (Hect)','No of Death People','No of Damaged Water point (Tube well)']]
df = df.rename(columns={'Name of affected Districts': 'ADM2_EN',
'No of Total affecetd Families': 'Affected families',
'No of Total damaged Houses': 'Affected houses',
'No of Total damaged Crops land (Hect)': 'Affected crops land (Hect)',
'No of Death People': 'Death',
'No of Damaged Water point (Tube well)': 'Damaged tube well'})
# - Change the district names to be consistent with shapefile
df['ADM2_EN'] = df['ADM2_EN'].replace({'Rajshahi Dist': 'Rajshahi',
'Moulvibazar': 'Maulvibazar',
'Sunamjanj': 'Sunamganj',
'Netrokona': 'Netrakona'})
# - Merge with ADM2 (Name and Code) just like join
df = pd.merge(df,ADM2,how='inner',left_on='ADM2_EN',right_on='ADM2_EN').drop('ADM2_EN',axis=1)
df0903 = df[df.columns[[-1,0,1,2,3,4]]]
# We will remove the Rajshahi district (ADM2_PCODE: 5081)
df0903 = df0903.loc[df0903['ADM2_PCODE'] != 5081].reset_index(drop=True)
# DDM and NDRCC published in Aug-30-2017
df = pd.read_excel('./data/disaster_records/damagedata_DDM.xlsx',
sheet_name='DDM, Aug 30 (modified)',skiprows=0,skipfooter=1).fillna(0)
df = df.rename(columns={'Affected Districts': 'ADM2_EN',
'Affected People ( %. of total population)':'Affected population',
'No. of Damaged House':'Damaged houses',
'Affected Crops land (Hec.)':'Affected crops land (Hect)',
'No. of Death':'Death',
'No. of Displaced':'Displaced',
'No. of Affected Tube well':'Damaged tube well'})
# - Change the district names to be consistent with shapefile
df['ADM2_EN'] = df['ADM2_EN'].replace({'Brahmanbaria': 'Brahamanbaria',
'Chadpur': 'Chandpur',
'Moulvibazar': 'Maulvibazar',
'Munsiganj': 'Munshiganj',
'Netrokona': 'Netrakona',
'Panchaghar': 'Panchagarh'})
# - Merge with ADM2 (Name and Code) just like join
df = pd.merge(df,ADM2,how='inner',left_on='ADM2_EN',right_on='ADM2_EN').drop('ADM2_EN',axis=1)
df0830 = df[df.columns[[-1,4,5,6,7,8,9,10,11,12,13]]]
# Merge to single DataFrame
temp1 = df0830[['ADM2_PCODE', 'Affected Institution', 'Affected Road (km)', 'Displaced']]
temp2 = df0903[['ADM2_PCODE', 'Affected houses', 'Affected crops land (Hect)', 'Death', 'Damaged tube well']]
damage = pd.merge(temp1, temp2, how='outer', left_on='ADM2_PCODE', right_on='ADM2_PCODE')
# Manual editing ---------------- #
# - Add calibrated population
df = pd.read_excel('./data/disaster_records/damagedata_DDM.xlsx',
sheet_name='calibrated',skiprows=0,skipfooter=0).fillna(0)
damage = damage.merge(df[['ADM2_PCODE','Calibrated (%)']], on='ADM2_PCODE').set_index('ADM2_PCODE')
damage['Calibrated (%)'] = damage['Calibrated (%)'] / 100
# ------------------------------- #
# Reorder and Rename DataFrame
damage = damage.rename(columns={'Calibrated (%)': 'PAFFCPOPU',
'Affected Institution':'NAFFCINST',
'Affected Road (km)':'DAMGROAD',
'Displaced':'NDISPPOPU',
'Affected houses':'NDAMGHOUS',
'Affected crops land (Hect)':'DAMGCLAND',
'Death':'NDEATH',
'Damaged tube well':'NDAMGTUBE'})
damage = damage[damage_table.Name]
###Output
_____no_output_____
###Markdown
Post-flood health dataThe post-flood health data is obtained from the [Directorate General of Health Services (DGHS)](https://dghs.gov.bd/index.php/en/home) - [Health Dashboard](http://103.247.238.81/webportal/pages/index.php). In the dashboard, the DGHS made a page for the public health situation during/after the 2017 (July-August) flood in the Tableau format.- By removing the duplicate rows, the numbers are similar to the [dashboards](http://103.247.238.81/webportal/pages/flood_affected.php).- There are some differences between different periods of data. Here we use the following variables to represent the impacts on public health:- Injuries Trauma Affected- Diarrhoea Affected- RTI, Drowning, snake bite, injury, eye disease, skin disease, and other cases
###Code
phealth_table = [['NTRAUMA','Health','Number of people with trauma from injuries','Quantile'],
['NDIARRHEA','Health','Number of diarrhea cases','Quantile'],
['NODIEASE','Health','Number of other diseases','Quantile']]
phealth_table = pd.DataFrame(phealth_table, columns=['Name','Domain','Description','Normalization'])
phealth_table['Scale'] = 'District'
phealth_table
# (1) Health records from Jul-01-2017 to Aug-15-2017 (Upazila scale)
cols = ['Division Code', 'Division Name',
'District Code', 'District Name',
'Upazila Code', 'Upazila Name',
'Diarrhoea Affected', 'Diarrhoea Death',
'Drowning Affected', 'Drowning Death',
'Injuries Trauma Affected','Injuries Trauma Death',
'Skin Disease Affected',
'Snakebite Affected','Snakebite Death',
'Is This Upazilla Currently Flood Affected',
# 'Latitude', 'Longitude',
'Period']
df = pd.read_excel('./data/health_impact_2017Flood/dhis2_flood_affected (dbmis).xlsx',usecols=cols)[cols]
# - Convert Mymensingh (45) to Dhaka (30)
adm1_pcode = df['Division Code'].copy(); adm1_pcode.loc[adm1_pcode == 45] = 30
# - Assign ADM2_PCODE and ADM3_PCODE
df['ADM2_PCODE'] = adm1_pcode*10**2 + df['District Code']
assert np.isin(df['ADM2_PCODE'].unique(), shape.ADM2_PCODE).sum() == len(df['ADM2_PCODE'].unique())
df['ADM3_PCODE'] = adm1_pcode*10**4 + df['District Code']*10**2 + df['Upazila Code']
assert np.isin(df['ADM3_PCODE'].unique(), shape.ADM3_PCODE).sum() == len(df['ADM3_PCODE'].unique())
# - Reorder the DataFrame
df['Date'] = pd.DatetimeIndex(pd.to_datetime(df['Period'],format='%Y%m%d'))
df = df[['ADM2_PCODE','Date',*cols[6:-1]]]
# - Remove duplicate rows
df = df.drop_duplicates()
# - Group by ADM2_PCODE
health_trauma = df.groupby('ADM2_PCODE')['Injuries Trauma Affected'].sum()
health_trauma.name = 'Trauma'
# (2) Health data from Jul-23-2017 to Aug-26-2017 (District scale)
df = pd.read_excel('./data/health_impact_2017Flood/controlroom_flood (dbmis).xlsx')
df = df[['District',*df.columns[6:-1]]]
df = df.groupby('District').sum().reset_index()
df = df.rename(columns={'District':'ADM2_EN'})
df['ADM2_EN'] = df['ADM2_EN'].replace({'Brahmanbaria': 'Brahamanbaria'})
df = pd.merge(df,ADM2,how='inner',left_on='ADM2_EN',right_on='ADM2_EN').drop('ADM2_EN',axis=1)
df = df.set_index('ADM2_PCODE')
# - Diarrhea and Sum of other disease cases
col_other = ['No of RTI cases','No Of Eye Disease cases','No Of Drowning cases',
'No Of Eye Disease Deaths','No Of Injury cases','No Of Injury Deaths',
'No Of Snake Bite cases','No Of Snake Bite Deaths','No Of Skin Disease cases','No Of Other cases']
health_diarrhea = df['No Of Diarrhea Cases']
health_diarrhea.name = 'Diarrhea'
health_other = df[col_other].sum(1)
health_other.name = 'Other'
health_disease = pd.merge(health_diarrhea,health_other, how='inner', left_index=True, right_index=True)
# Merged DataFrame
phealth = pd.merge(health_trauma,health_disease,how='outer',left_index=True,right_index=True).fillna(0)
phealth = phealth[phealth.sum(1) > 0]
# Reorder and Rename Dataframe
phealth = phealth.rename(columns={'Trauma':'NTRAUMA',
'Diarrhea':'NDIARRHEA',
'Other':'NODIEASE'})
###Output
_____no_output_____
###Markdown
Save the data
###Code
impact_table = pd.concat([damage_table, phealth_table]).reset_index(drop=True)
impact = pd.merge(damage, phealth, how='outer',left_index=True, right_index=True).fillna(0)
# Save data
if True:
fn = './data/impact.hdf'
impact.to_hdf(fn, 'data'); print('%s is saved.' % fn)
fn = './data/impact_table.hdf'
impact_table.to_hdf(fn, 'table'); print('%s is saved.' % fn)
impact_table
###Output
./data/impact.hdf is saved.
./data/impact_table.hdf is saved.
|
notebook/preprocessing.ipynb | ###Markdown
Quora preprocessing **(~1:30h gpu run time)**
###Code
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import RandomOverSampler
###Output
_____no_output_____
###Markdown
Load data
###Code
df_raw = pd.read_csv("../data/raw/train.csv", low_memory=False)
df_raw.head()
df_raw.shape
sampled_df = df_raw[df_raw.target == 0].sample(n=480_000, random_state=42)
df = pd.concat(
[sampled_df, df_raw[df_raw.target == 1]]
)
df.head()
# Target samples
df.shape[0] - 200_000
###Output
_____no_output_____
###Markdown
Preprocessing
###Code
# Train/Test
X, y = df.drop('target', axis=1), df.target
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.10, random_state=42
)
train_df = X_train
train_df['target'] = y_train
test_df = X_test
test_df['target'] = y_test
###Output
_____no_output_____
###Markdown
Random Oversampling
###Code
# Oversampling
ros = RandomOverSampler(random_state=42)
X_resampled, y_resampled = ros.fit_resample(X_train, y_train)
resampled_df = X_resampled
resampled_df['target'] = y_resampled
###Output
_____no_output_____
###Markdown
nlpaug

Using:
* KeyboardAug
* ContextualWordEmbsAug
* SynonymAug
* BackTranslationAug
* SpellingAug

80_000 new examples for each augment
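`BackTranslationAug` is listed above but no call to it appears in the cells below. A hedged sketch of what such a call might look like, assuming nlpaug's word-level augmenter API and the `facebook/wmt19` translation model pair (both are assumptions; `texts` is the list of positive-class questions built below):

```python
import nlpaug.augmenter.word as naw

# translate EN -> DE and back to EN to paraphrase the questions
back_translation_aug = naw.BackTranslationAug(
    from_model_name='facebook/wmt19-en-de',
    to_model_name='facebook/wmt19-de-en',
    device='cuda')
back_translation_texts = back_translation_aug.augment(texts)
```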
###Code
import nlpaug.augmenter.char as nac
import nlpaug.augmenter.word as naw
import nlpaug.augmenter.sentence as nas
df_1 = train_df[train_df.target==1].copy()
texts = list(df_1.question_text)
texts[:5]
key_board_aug = nac.KeyboardAug(aug_char_max=2, aug_word_max=2)
key_board_texts = key_board_aug.augment(texts)
key_board_texts[:5]
synonym_aug = naw.SynonymAug(aug_max=2)
synonym_texts = synonym_aug.augment(texts)
synonym_texts[:5]
spelling_aug = naw.SpellingAug(aug_max=1)
spelling_texts = spelling_aug.augment(texts)
spelling_texts[:5]
# Load contextual words
contextual_words_aug = naw.ContextualWordEmbsAug(model_path='distilbert-base-uncased', aug_max=4, device='cuda')
contextual_words_aug
contextual_words = []
for i in range(0, len(texts), 64):
if i%1024==0:
print(f"{i}/{len(texts)}")
contextual_words += contextual_words_aug.augment(texts[i: i + 64])
contextual_words[:5]
new_texts = key_board_texts + synonym_texts + contextual_words + spelling_texts
new_texts[:5]
df_nlpaug = pd.DataFrame({'question_text': new_texts, 'target': np.ones(len(new_texts))})
df_nlpaug.head()
train_nlpaug = pd.concat([train_df, df_nlpaug])
train_nlpaug.head()
train_nlpaug.shape
train_nlpaug.target.value_counts()
import pandas as pd
t = pd.read_csv("../data/nlpaug/train.csv")
t.shape
x1 = t[t.target==1]
x2 = t[t.target==0]
x2 = x2.sample(n=364145)
df = pd.concat([x1, x2])
df.shape
df.target.value_counts()
###Output
_____no_output_____
###Markdown
Save
###Code
# train datasets
resampled_df.to_csv("../data/ros/train.csv")
train_nlpaug.to_csv("../data/nlpaug/train.csv")
# nlpaug
test_df.to_csv("../data/processed/test.csv")
###Output
_____no_output_____
###Markdown
1. Missing value analysis
###Code
# Function to calculate missing values by column
def missing_values_table(df):
# Total missing values
mis_val = df.isnull().sum()
# Percentage of missing values
mis_val_percent = 100 * df.isnull().sum() / len(df)
# Make a table with the results
mis_val_table = pd.concat([mis_val, mis_val_percent], axis=1)
# Rename the columns
mis_val_table_ren_columns = mis_val_table.rename(
columns = {0 : 'Missing Values', 1 : '% of Total Values'})
# Sort the table by percentage of missing descending
mis_val_table_ren_columns = mis_val_table_ren_columns[
mis_val_table_ren_columns.iloc[:,1] != 0].sort_values(
'% of Total Values', ascending=False).round(1)
# Print some summary information
print ("Your selected dataframe has " + str(df.shape[1]) + " columns.\n"
"There are " + str(mis_val_table_ren_columns.shape[0]) +
" columns that have missing values.")
# Return the dataframe with missing information
return mis_val_table_ren_columns
# Missing values statistics
missing_values = missing_values_table(df)
missing_values.head(20)
for i in df.columns:
print(len(df[i].dropna())/len(df))
###Output
0.997080291970803
0.992992700729927
0.9924087591240875
0.9921167883211679
0.9912408759124087
0.005255474452554744
###Markdown
2. Correlation analysis
###Code
from sklearn.preprocessing import StandardScaler
ss = StandardScaler()
df[:] = ss.fit_transform(df)
df.corr()
###Output
_____no_output_____
###Markdown
3. Line chart analysis
###Code
import matplotlib.pyplot as plt
n = 7
fig = plt.figure(figsize=(20,10))
for i in df.columns:
plt.plot(df[i].rolling(n).median(),label=i)
plt.legend()
plt.show()
###Output
_____no_output_____ |
NoSQL/NetworkX/plot_expected_degree_sequence.ipynb | ###Markdown
Expected Degree Sequence

Random graph from given degree sequence.
###Code
# Author: Aric Hagberg ([email protected])
# Copyright (C) 2006-2019 by
# Aric Hagberg <[email protected]>
# Dan Schult <[email protected]>
# Pieter Swart <[email protected]>
# All rights reserved.
# BSD license.
import networkx as nx
from networkx.generators.degree_seq import expected_degree_graph
# make a random graph of 500 nodes with expected degrees of 50
n = 500 # n nodes
p = 0.1
w = [p * n for i in range(n)] # w = p*n for all nodes
G = expected_degree_graph(w) # configuration model
print("Degree histogram")
print("degree (#nodes) ****")
dh = nx.degree_histogram(G)
for i, d in enumerate(dh):
print("%2s (%2s) %s" % (i, d, '*'*d))
###Output
_____no_output_____ |
Pandas and MatPlotlib.ipynb | ###Markdown
Pandas Tutorial
###Code
#Pandas Tutorial
import pandas as pd
import numpy as np
data = pd.read_csv('Desktop/panda.csv')
data
###Output
_____no_output_____
###Markdown
Working with dates
###Code
#Assignment
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
%matplotlib inline
person = pd.DataFrame({'Name':['Henry' , 'Kofi' , 'John', 'Joe','Felix','Ama','King','Josh','Louis','Cindy'],
'DOB': ['6/20/1960', '11/19/1981', '1/12/1999' , '7/11/1967','5/10/1992','9/8/1994','12/22/1995','8/21/2000','6/14/1997','4/27/1998',],
'EmpID': ['E275', 'E983', 'E675','E120','E409','E7654','E345','E453','E234','E443',]})
person
person.dtypes
# Change the datatype of the column to Datetime
person['DOB']=pd.to_datetime(person['DOB'])
person.dtypes
person
# Extracting Month , Day , Year,......from the Date field
person['Month'] = person.DOB.dt.month
person['Day'] = person.DOB.dt.day
person['Year'] = person.DOB.dt.year
person['Week Number'] =person.DOB.dt.isocalendar().week
person['Day Of Week'] = person.DOB.dt.dayofweek
person['Day Name']=pd.to_datetime(person['DOB']).dt.day_name()
person['Month Name']=pd.to_datetime(person['DOB']).dt.month_name()
person
# Changing Datetime format to '%d/%m/%Y' using strftime()
person['DOB']=pd.to_datetime(person['DOB']).dt.strftime('%d/%m/%Y') # Note : This will change the datatype back to object
person
person.dtypes
# Changing Datetime format to ''%m-%d-%Y' using strftime()
person['DOB']=pd.to_datetime(person['DOB']).dt.strftime('%m-%d-%Y') # Note : This will change the datatype back to object
person
person.dtypes
# Find employees who are born after 12-20-1980
from datetime import date
person[pd.to_datetime(person['DOB']) > pd.Timestamp(date(1980,12,20))]
# Find all records where DOB is between "12-20-1980" - "12-20-2000"
from datetime import date
person[(pd.to_datetime(person['DOB']) > pd.Timestamp(date(1980,12,20))) &
(pd.to_datetime(person['DOB']) < pd.Timestamp(date(2000,12,20)))]
# Min Date in a dataframe column
pd.to_datetime(person['DOB']).min()
# Max Date in a dataframe column
pd.to_datetime(person['DOB']).max()
# Current timestamp
timestamp = pd.to_datetime('now')
print('Timestamp :{}'.format(timestamp))
# Current Date (Today)
current_date=pd.to_datetime('now').date()
print('Current Date : {}'.format(current_date))
# Yesterday
yesterday = pd.to_datetime('now').date()- pd.Timedelta('1 day')
print('Yesterday: {}'.format(yesterday))
# tomorrow
tomorrow = pd.to_datetime('now').date() + pd.Timedelta('1 day')
print('Tomorrow: {}'.format(tomorrow))
#OR
tomorrow = pd.to_datetime('now').date() + pd.DateOffset(days=1)
print('Tomorrow: {}'.format(tomorrow))
#Add Business Day to current date
add_buss_day=pd.to_datetime('now').date()+pd.offsets.BDay(1)
print('Date after adding Business Day: {}'.format(add_buss_day)) # Saturday & Sunday will be excluded
#Add 1 month to current date
add_month=pd.to_datetime('now').date()+pd.DateOffset(months=1)
print('Date after adding 1 month to current date: {}'.format(add_month))
# Date Difference in hours
diff_hrs= (pd.to_datetime('2021-03-26 21:11:13') - pd.to_datetime('2021-03-01 11:11:13')).total_seconds()//3600
print('Date Difference in hours: {}'.format(diff_hrs))
# Age of the person (Extract year from current time and subtract from Year column)
person['Age'] = pd.to_datetime('now').year - person['Year']
person
# OR
person['Age'] = pd.to_datetime('now').year - pd.to_datetime(person['DOB']).dt.year
person
# Let's work on a simple dataset (Female Births Dataset)
# The source of the dataset is credited to Newton (1988).
# The file is comma-separated, so the default separator is used
female = pd.read_csv('https://raw.githubusercontent.com/jbrownlee/Datasets/master/daily-total-female-births.csv')
female.head(10)
# Find min & max date to get the date range
pd.to_datetime(female['Date']).max()-pd.to_datetime(female['Date']).min() # This is one year of dataset that we need
# Change datatype of Date column to Datetime
female['Date'] = pd.to_datetime(female['Date'])
# Create helper columns
female['Month'] = female.Date.dt.month
female['Day'] = female.Date.dt.day
female['Year'] = female.Date.dt.year
female['Week Number'] =female.Date.dt.isocalendar().week
female['Day Of Week'] = female.Date.dt.dayofweek
female['Day Name']=pd.to_datetime(female['Date']).dt.day_name()
female['Month Name']=pd.to_datetime(female['Date']).dt.month_name()
# OR We can use below lines of code as well
female['Month'] = female.Date.apply(lambda x:x.month)
female['Day'] = female.Date.apply(lambda x:x.day)
female['Year'] = female.Date.apply(lambda x:x.year)
female['Week Number'] =female.Date.apply(lambda x:x.week)
female['Day Of Week'] = female.Date.apply(lambda x:x.dayofweek)
female['Day Name']=pd.to_datetime(female['Date']).apply(lambda x:x.day_name())
female['Month Name']=pd.to_datetime(female['Date']).apply(lambda x:x.month_name())
female.head()
# Total female births in the month of January
female[female['Month Name'] =='January']['Births'].sum()
# Total female births in each month using for loop
for i in female['Month Name'].unique():
print('Female births in {0} : {1}'.format(i,female[female['Month Name'] ==i]['Births'].sum()))
# Using "group by" to get female births in each month
female.groupby('Month Name').sum()[['Births']] # Month Name column data is not in ascending order.
# Use Pivot table to get female births in each month
pd.pivot_table(female,values=['Births'],index=['Month Name'],aggfunc=np.sum) # Month Name data is not in proper order.
pd.pivot_table(female,values=['Births'],index=['Month Name'],aggfunc=np.sum).plot.bar()
# We will convert "Month Name" column into Categorical variable and specify the ordering
order = ['January','February','March','April','May','June',
'July','August','September','October','November','December']
female['Month Name']=pd.Categorical(female['Month Name'],order)
female.groupby('Month Name').sum()[['Births']] # Now the output is much better after custom ordering
# Bar plot to get monthly female births using matplotlib library
plt.figure(figsize=(14,6))
plt.bar(female.groupby('Month Name').sum().index,female.groupby('Month Name').sum()['Births'])
plt.show()
# Bar plot to get monthly female births using Pandas
pd.pivot_table(female,values=['Births'],index=['Month Name'],aggfunc=np.sum).plot.bar()
# Same way we can implement custom ordering for Day Name field
order=[ 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
female['Day Name']=pd.Categorical(female['Day Name'],order)
female.groupby('Day Name').sum()[['Births']]
# Plot Bar Graph to show female births on day basis.
plt.figure(figsize=(14,6))
plt.bar(female.groupby('Day Name').sum().index,female.groupby('Day Name').sum()['Births'])
plt.show()
# Daily female births
plt.figure(figsize=(15,5))
plt.plot(female['Date'],female['Births'])
# Get all records for the month of January (1959-01-01 - 1959-01-31).
# Boolean indexing like this is not a great method when we are dealing with large datasets.
female[(pd.to_datetime(female['Date']) > pd.Timestamp(date(1959,1,1))) &
(pd.to_datetime(female['Date']) < pd.Timestamp(date(1959,1,31)))]
# Convert date column into Datetime index for faster selection.
female = female.set_index(['Date'])
female
female.index # DatetimeIndex
# Now lets select the data
female.loc['1959'] # Get all data for year 1959
###Output
_____no_output_____ |
note/fig3_model_3.ipynb | ###Markdown
Corner and traceplot of the model

* This is an alternative `A.S.A.P` model
* Three-stage burn-in with 256 walkers using the "Snooker" moves; each stage has 250 steps.
* The walker position with the best likelihood is used as the initial position for the next stage.
* The final sampling process has 400 steps.
* **SMF**: use the covariance matrix of the SMFs
* **DeltaSigma** profiles: fit the radius range between 0.05 and 15 Mpc.
    - Including the innermost data point
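The staged burn-in described above (re-seeding each stage from the best walker) is not shown in this notebook, which only loads the saved chains. A minimal sketch of one such stage, assuming emcee 3's `EnsembleSampler` and `DESnookerMove`, with `log_prob` standing in for the posterior built from `cfg`, `obs_data` and `um_data` in this repository:

```python
import numpy as np
import emcee

def run_burnin_stage(log_prob, p0, nsteps=250):
    """Run one burn-in stage and return a tight ball around the best walker position."""
    nwalkers, ndim = p0.shape
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob,
                                    moves=emcee.moves.DESnookerMove())
    sampler.run_mcmc(p0, nsteps)
    chain = sampler.get_chain(flat=True)       # shape: (nsteps * nwalkers, ndim)
    lnp = sampler.get_log_prob(flat=True)      # matching log-probabilities
    best = chain[np.argmax(lnp)]               # walker position with the best likelihood
    # initial positions for the next stage: a small Gaussian ball around the best position
    return best + 1e-4 * np.random.randn(nwalkers, ndim)
```

Chaining three such calls and then switching to `emcee.moves.KDEMove()` for a final 400-step run would mirror the setup printed in the configuration output below.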
###Code
test_dir = '../model/'
# The configuration file
config_file = os.path.join(test_dir, 'asap_test_3.yaml')
# The results of the 3-stage burn-in
burnin_file_1 = os.path.join(test_dir, 'asap_test_3_burnin_1.npz')
burnin_file_2 = os.path.join(test_dir, 'asap_test_3_burnin_2.npz')
burnin_file_3 = os.path.join(test_dir, 'asap_test_3_burnin_3.npz')
# The results of the final sampling process
result_file = os.path.join(test_dir, 'asap_test_3_sample.npz')
# Initialize the model, load the data
cfg, params, obs_data, um_data = fitting.initial_model(config_file, verbose=True)
# Load the burn-in results
(mod_burnin_samples_1,
mod_burnin_chains_1,
mod_burnin_lnprob_1,
mod_burnin_best_1, _, _) = io.load_npz_results(burnin_file_1)
(mod_burnin_samples_2,
mod_burnin_chains_2,
mod_burnin_lnprob_2,
mod_burnin_best_2, _, _) = io.load_npz_results(burnin_file_2)
(mod_burnin_samples_3,
mod_burnin_chains_3,
mod_burnin_lnprob_3,
mod_burnin_best_3, _, _) = io.load_npz_results(burnin_file_3)
mod_burnin_chains = np.concatenate([
mod_burnin_chains_1, mod_burnin_chains_2, mod_burnin_chains_3], axis=1)
# Load in the final sampling results
(mod_result_samples,
mod_result_chains,
mod_result_lnprob,
mod_result_best, _, _) = io.load_npz_results(result_file)
print(np.nanmax(mod_burnin_lnprob_1), mod_burnin_best_1)
print(np.nanmax(mod_burnin_lnprob_2), mod_burnin_best_2)
print(np.nanmax(mod_burnin_lnprob_3), mod_burnin_best_3)
print(np.nanmax(mod_result_lnprob), mod_result_best)
###Output
# Running model: asap_test_3
# Will use emcee as sampler ...
# Use 256 walkers with snooker moves for 160 x 3 steps of burn-in
# Use 256 walkers with kde moves for 400 steps of sampling
# Observations:
# Galaxy catalog: s16a_wide2_massive_fsps1_imgsub_use_short.fits
# DSigma results: s16a_wide2_dsigma_logm11.6_12_bins.npy
# SMF of inner Mstar: s16a_wide2_massive_smf_m10_11.6.npy
# SMF of total Mstar: s16a_wide2_massive_smf_mmax_11.6.npy
# Covariances for SMFs: s16a_wide2_massive_smf_mmax_m10_cov.npy
# Reference SMF: primus_smf_z0.3_0.4.fits
# Column of inner Mstar: logm_10
# Column of total Mstar: logm_max
# UniverseMachine:
# Galaxy catalog : um_smdpl_insitu_exsitu_0.7124_basic_logmp_11.5_short.npy
# DSigma results : um_smdpl_insitu_exsitu_0.7124_basic_logmp_11.5_50m_r_0.08_50_22bins.npy
# Volumn of the simulation: 205348196.23 Mpc^3
# Halo mass : logmh_host
# Stellar mass : logms_tot
# There are 12 DSigma profiles in this sample
# SMF for total stellar mass:
11.6000 -- 12.3000 in 7 bins
# SMF for inner stellar mass:
10.8000 -- 11.8000 in 10 bins
# For inner stellar mass:
10 bins at 10.80 < logMinn < 11.80
# For total stellar mass:
7 bins at 11.60 < logMtot < 12.30
# The volume of the HSC data is 101441428.40 Mpc^3
# 1124544 out of 1557939 galaxies are central
-261.7930168591053 [ 6.30709026e-01 1.18404857e+01 7.57081339e-03 2.65721456e-04
6.75567084e-01 -1.62304532e-01 3.70918744e-01]
-237.9634815026266 [ 6.06118521e-01 1.18478104e+01 1.43361544e-03 1.41163218e-03
6.66396363e-01 -1.75136085e-01 3.74419544e-01]
-231.66976542380434 [ 6.05805802e-01 1.18468676e+01 -7.51333067e-03 1.83844621e-03
6.58512378e-01 -1.73458293e-01 3.72979417e-01]
-229.80105303983714 [ 6.01549974e-01 1.18474434e+01 -1.52924450e-02 1.89000651e-03
6.50239451e-01 -1.71091394e-01 3.73623679e-01]
###Markdown
Corner plot
###Code
params_label = [r'$a$', r'$b$', r'$c$', r'$d$',
r'$f_{\rm ins}$', r'$A_{\rm exs}$', r'$B_{\rm exs}$']
params_range = [(0.594, 0.615), (11.831, 11.854),
(0.059, 0.180), (-0.01, 0.119),
(0.64, 0.695),
(-0.21, -0.15), (0.02, 0.096)]
title_fmt = '.3f'
mod_corner = plotting.plot_mcmc_corner(
mod_result_samples, params_label, truths=mod_result_best, truth_color='skyblue',
**{'title_fmt': title_fmt, 'plot_datapoints': False})
###Output
_____no_output_____
###Markdown
Trace plot
###Code
mod_trace = plotting.plot_mcmc_trace(
mod_result_chains, params_label,
mcmc_best=mod_result_best, mcmc_burnin=mod_burnin_chains,
burnin_alpha=0.15, trace_alpha=0.12)
###Output
_____no_output_____
###Markdown
Quick checks of the SMFs
###Code
# Predict the stellar mass in inner and outer apertures
logms_inn, logms_tot, sig_logms, mask_use = predict_mstar_basic(
um_data['um_mock'], mod_result_best, min_logms=10.6,
logmh_col=cfg['um']['logmh_col'], min_scatter=cfg['um']['min_scatter'],
pivot=cfg['um']['pivot_logmh'])
# Predict the SMFs and DeltaSigma profiles
um_smf_tot, um_smf_inn, um_dsigma = make_model_predictions(
mod_result_best, cfg, obs_data, um_data)
# Check the likelihood for SMF and DeltaSigma profiles
lnlike_smf, lnlike_dsigma = ln_likelihood(
mod_result_best, cfg, obs_data, um_data, sep_return=True)
print("# ln(Likelihood) for SMFs : %8.4f" % lnlike_smf)
print("# ln(Likelihood) for DSigma : %8.4f" % lnlike_dsigma)
um_smf_tot_all = smf.get_smf_bootstrap(logms_tot, cfg['um']['volume'],
10, 10.9, 12.4, n_boots=1)
mod_smf_plot = plotting.plot_mtot_minn_smf(
obs_data['smf_tot'], obs_data['smf_inn'], obs_data['mtot'], obs_data['minn'],
um_smf_tot, um_smf_inn, logms_tot, logms_inn, obs_smf_full=obs_data['smf_full'],
um_smf_tot_all=um_smf_tot_all, not_table=True)
###Output
_____no_output_____
###Markdown
Save the figures
###Code
mod_corner.savefig('fig/fig3_corner_model_3.pdf', dpi=120)
mod_trace.savefig('fig/fig3_trace_model_3.png', dpi=120)
mod_smf_plot.savefig('fig/fig3_smf_model_3.png', dpi=120)
###Output
_____no_output_____ |
_notebooks/2020-09-04-l1_trend_filter.ipynb | ###Markdown
"l1 Trend Filtering"> "My reference notebook for l1 trend filtering."- author: Christopher Thiemann- toc: true- branch: master- badges: true- comments: true- categories: [time series, lasso, trend-filtering]- hide: false- search_exclude: true- image: images/l1_trend_filtering.png
###Code
#hide
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.linear_model import Lasso
import cvxpy as cp
from scipy.sparse import dia_matrix
import seaborn as sns; sns.set()
from sklearn.metrics import mean_squared_error
###Output
_____no_output_____
###Markdown
Motivation

When working with time series data $y_1, y_2,...y_t, ...y_T$ one decomposes the time series into different components, most often a trend, a seasonal and a random component: $y_t = x_t + s_t + \varepsilon_t$. During my economics studies I first encountered the Hodrick-Prescott filter (HP) in a macro course. As we will see, the HP filter produces a non-linear trend component while the l1 trend filter provides a piecewise linear trend component. This makes l1 trend filtering more interpretable, because we can interpret the kink points of the piecewise linear solution as change points in the trend. The corresponding paper is {% cite l1trendfilter %}

Hodrick-Prescott Filter

The HP filter is defined as the solution to the following problem

$x^{HP} = argmin_{x \in R^T} 1 / 2 \sum_{t=1}^T (y_t-x_t)^2 + \lambda \sum_{t=2}^{T-1}(x_{t-1}-2x_t+x_{t+1})^2$

The second term penalizes the roughness of the solution. For $\lambda = 0$ we have $x_t^{HP} = y_t$, and for $\lambda \to \infty$ one can show that the solution is given by $x_t^{HP} = \alpha + \beta t$. An alternative formulation of the problem is

$1 / 2 ||y-x||_2^2 + \lambda ||Dx||_2^2$

where $D$ is a $(T-2) \times T$ second-order difference matrix. See below an example with $T = 10$.
###Code
#hide_input
calc_D(10)
###Output
_____no_output_____
###Markdown
The solution can then be written as $x^{HP} = (I+2\lambda D' D)^{-1}y$.

l1 Trend Filtering

The problem is similar to the HP filter:

$1 / 2 \sum_{t=1}^T (y_t-x_t)^2 + \lambda \sum_{t=2}^{T-1}|x_{t-1}-2x_t+x_{t+1}|$

It is possible to rewrite the problem as a lasso problem

$1 / 2 || A \theta - y||_2^2 + \lambda \sum_{t = 3}^T |\theta_t|$

Note that we are **not** penalizing the first two coefficients. $A$ is a square $T$-dimensional matrix of the following form ($T = 10$):
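A quick way to read this reformulation (added here for clarity; it follows directly from the definition of `calc_A` in the helper functions below): writing out $x = A\theta$ gives

$x_t = \theta_1 + \sum_{j=2}^{t} (t - j + 1)\,\theta_j,$

so $\theta_1$ is the starting level, $\theta_2$ the initial slope, and every $\theta_j$ with $j \ge 3$ is a change of slope at time $j$. Penalizing only $\theta_3,\dots,\theta_T$ therefore leaves the baseline straight line free and charges the objective only for kinks.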
###Code
#hide_input
calc_A(10)
###Output
_____no_output_____
###Markdown
To recover the solution of the original l1 trend filtering problem we transform the solution as $x = A \theta$. Since the solution to the lasso problem is piecewise linear, the solution to the l1 trend filter is also piecewise linear.

Reproducing the S&P 500 example
###Code
#data
# downloaded from https://finance.yahoo.com/quote/%5EGSPC/history?period1=922320000&period2=1172880000&interval=1d&filter=history&frequency=1d
ts = pd.read_csv("sandp.csv")
ts.rename( columns = {'Close': 'S&P 500'}, inplace = True)
log_price = np.log(ts['S&P 500'])
log_price.index = ts.Date
# apply trend filtering
solution_hp = HP_filter(log_price, 2000000)
solution_l1 = l1_trend_filter_lasso(log_price, 100)
#plotting
fig, axes = plt.subplots(figsize = (10, 10))
log_price.plot(ax = axes,alpha = .3)
fig.autofmt_xdate()
axes.plot(solution_l1, label = 'l1 Trend Filtering', linewidth = 2)
axes.set_ylabel("log-price")
axes.set_xlabel(" ")
axes.legend()
fig.savefig("l1_trend_filtering.png");
###Output
_____no_output_____
###Markdown
Extension

It is possible to extend the trend filtering problem to account for seasonality, outliers or level shifts. The problem is then

$1/2 ||y-x-u-w-\alpha \sin(\omega t) -\beta \cos(\omega t)||_2^2 + \lambda||Dx||_1+\rho||u||_1+\eta \sum_{t=2}^T|w_t-w_{t-1}|$

where $\sin$ and $\cos$ are understood element-wise. $u$ models *spikes*, $w$ models *level shifts*, and $\alpha$ and $\beta$ measure the sinusoidal periodic component. $\omega$ is the frequency and $x$ is the trend component. For an implementation look at the helper functions section below; an illustrative call is sketched as a comment after the `l1_trend_filter` definition.

Helper Functions
###Code
def calc_A(n):
A = np.zeros((n, n))
A[:, 0] = 1
for i in range(1, n):
A[i:, i] = list(range(1, n - i + 1))
return A
def calc_D(n, return_sparse = False):
D = np.zeros((n - 2, n))
dif = np.array([1, -2, 1])
for i in range(n-2):
D[ i, i : (i+3)] = dif
if return_sparse:
return dia_matrix(D)
else:
return D
def HP_filter(ts, lam):
n = len(ts)
D = calc_D(n)
return np.linalg.inv(np.eye(n) + 2 * lam * (D.T @ D)) @ ts
def l1_trend_filter_lasso(ts, lambda_value):
n = len(ts)
A = calc_A(n)
beta = cp.Variable(n)
lambd = cp.Parameter(nonneg=True)
u = cp.Variable(n)
lambd.value = lambda_value
problem = cp.Problem(cp.Minimize(cp.norm2(A @ beta - ts)**2 + lambd * cp.norm1(beta[2 : ])))
problem.solve(verbose = True, solver=cp.CVXOPT)
solution = A @ beta.value
return solution
def l1_trend_filter(ts, freq, lam, rho, eta):
if isinstance(ts, pd.Series):
ts = ts.to_numpy()
n = len(ts)
sin_comp = np.sin(freq * np.array(range(1, n + 1)))
cosine_com = np.cos(freq * np.array(range(1, n + 1)))
D = calc_D(n, return_sparse = True)
#define variables
a = cp.Variable(1)
b = cp.Variable(1)
w = cp.Variable(n)
x = cp.Variable(shape=n)
u = cp.Variable(shape = n)
eta = eta
rho = rho
lam = lam
obj = cp.Minimize(0.5 * cp.sum_squares( ts - x - a * sin_comp - b * cosine_com)
+ lam * cp.norm(D @ x, 1)
+ rho * cp.norm(u, 1)
+ eta * cp.norm(w[1:] - w[0:n-1],
1) )
prob = cp.Problem(obj)
prob.solve(verbose=False)
return x.value, u.value, w.value, a.value, b.value, mean_squared_error(ts, x.value + a.value * sin_comp + b.value * cosine_com)
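# Illustrative usage of the extended filter on the log-price series above (not executed here);
# the values chosen for freq, lam, rho and eta are assumptions, not tuned settings:
# x_hat, u_hat, w_hat, a_hat, b_hat, mse = l1_trend_filter(
#     log_price, freq=2 * np.pi / 252, lam=100, rho=1.0, eta=10.0)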
###Output
_____no_output_____ |
raw/exploratory_computing_with_python/notebook_s3/py_exp_comp_s3.ipynb | ###Markdown
Exploratory Computing with Python

*Developed by Mark Bakker*

Statistics Notebook 3: Distribution of the mean, hypothesis tests, and the central limit theorem

In this notebook we first investigate the distribution of the mean of a dataset, we simulate several hypothesis tests, and finish with exploring the central limit theorem.
###Code
import numpy as np
import matplotlib.pyplot as plt
import numpy.random as rnd
%matplotlib inline
###Output
_____no_output_____
###Markdown
Consider a dataset of 100 points. The data are drawn from a normal distribution with mean 4 and standard deviation 2. As we noticed before, the sample mean of the 100 data points almost always differs from 4. And every time we generate a new set of 100 points, the mean will be somewhat different.
###Code
for i in range(5):
a = 2 * rnd.standard_normal(100) + 4
print('mean a: ', np.mean(a))
###Output
mean a: 3.6708281248223487
mean a: 3.648112863635938
mean a: 3.90391769311422
mean a: 4.060602447346081
mean a: 3.5643489099636914
###Markdown
In fact, the mean of the dataset itself can be considered as a random variable with a distribution of its own.

Sample standard deviation

The sample standard deviation $s_n$ of a dataset of $n$ values is defined as $s_n = \sqrt{ \frac{1}{n-1} \sum_{i=1}^n (x_i - \overline{x}_n)^2 }$ and can be computed with the `std` function of the `numpy` package. By default, the `std` function divides the sum by $n$ rather than by $n-1$. To divide by $n-1$, as we want for an unbiased estimate of the standard deviation, specify the keyword argument `ddof=1` in the `std` function.

Exercise 1. Histogram of the means of datasets with 100 values

Generate 1000 datasets each with 100 values drawn from a normal distribution with mean 4 and standard deviation 2; use a seed of 22. Compute the mean of each dataset and store them in an array of length 1000. Compute the mean of the means and the standard deviation of the means, and print them to the screen. Draw a boxplot of the means. In a separate figure, draw a histogram of the means and make sure the horizontal axis extends from 3 to 5. Recall that you can start a new figure with the `figure()` function. Answers to Exercise 1

Exercise 2. Histogram of the means of datasets with 1000 values

Repeat exercise 1 but now generate 1000 datasets each with 1000 values (rather than 100 values) drawn from the same normal distribution with mean 4 and standard deviation 2, and again with a seed of 22. Make sure that the limits of the horizontal axis of the histogram go from 3 to 5, so that the histogram can be compared to the histogram you created above. Is the spread of the mean much smaller now as compared to the datasets consisting of only 100 values? Answers to Exercise 2

Sample standard deviation of the sample mean

The histogram of the means looks like the bell-shaped curve of a Normal distribution, but you may recall that it is actually a Student's $t$-distribution, also simply called the $t$-distribution. A $t$-distribution arises when estimating the mean of a normally distributed variable in situations where the sample size is relatively small and the standard deviation is unknown (as it pretty much always is in practice) and needs to be estimated from the data. The sample mean of a dataset of $n$ values is commonly written as $\overline{x}_n$, while the sample standard deviation is written as $s_n$ (as defined above). Here, we are computing the sample standard deviation of the sample means, which we write as $\hat{s}_n$ for a dataset of size $n$. Theoretically, the value of the standard deviation of the sample mean $\hat{s}_n$ is related to the sample standard deviation as (see [here](http://en.wikipedia.org/wiki/Standard_deviation#Standard_deviation_of_the_mean)) $\hat{s}_n = s_n / \sqrt{n}$

Percentiles of $t$-distribution

You may recall that the 90% interval around the mean for a Normally distributed variable runs from $\mu-1.64\sigma$ to $\mu+1.64\sigma$. In other words, 5% of the data is expected to lie below $\mu-1.64\sigma$ and 5% of the data is expected to lie above $\mu+1.64\sigma$. What now if you forgot it is $1.64\sigma$ to the left and right of the mean? Or what if you want to know the value for some other percentile? You may look that up in a table in a Statistics book (or on the web), or use the percent point function `ppf`, which is part of any statistical distribution function defined in the `scipy.stats` package. The `ppf` function is the inverse of the cumulative distribution function.
For example, `ppf(0.05)` returns the value of the data such that the cumulative distribution function is equal to 0.05 at the returned value. To find the 5% and 95% values, type (recall that by default the `norm` distribution has mean zero and standard deviation 1; you can specify different values with the `loc` and `scale` keyword arguments, respectively).
###Code
from scipy.stats import norm
xvalue_05 = norm.ppf(0.05)
xvalue_95 = norm.ppf(0.95)
print('5% limit: ', xvalue_05)
print('95% limit: ', xvalue_95)
print('check if it works for 5%: ', norm.cdf(xvalue_05))
print('check if it works for 95%: ', norm.cdf(xvalue_95))
# Next, specify a mean and standard deviation
xvalue_05_musig = norm.ppf(0.05, loc = 20, scale = 10) # mu = 20, sigma = 10
print('5% limit with mu=20, sig=10: ', xvalue_05_musig)
print('check: ', norm.cdf(xvalue_05_musig, loc = 20, scale = 10))
###Output
_____no_output_____
###Markdown
A similar function exists for the $t$ distribution. The $t$-distribution takes one additional argument: the number of degrees of freedom, which is equal to the number of data points minus 1. For example, consider a sample with 40 data points, a sample mean of 20, and a sample standard deviation of the mean of 2, then the 5 and 95 percentiles are
###Code
from scipy.stats import t
xvalue_05 = t.ppf(0.05, 39, loc=20, scale=2)
xvalue_95 = t.ppf(0.95, 39, loc=20, scale=2)
print('5% limit: ', xvalue_05)
print('95% limit: ', xvalue_95)
print('check if it works for 5%: ', t.cdf(xvalue_05, 39, loc=20, scale=2))
print('check if it works for 95%: ', t.cdf(xvalue_95, 39, loc=20, scale=2))
###Output
_____no_output_____
###Markdown
Exercise 3. Count the number of means outside 95 percentile

Go back to Exercise 1. Generate 1000 datasets each with 100 values drawn from a normal distribution with mean 4 and standard deviation 2; use a seed of 22. For each dataset, evaluate whether the sample mean is within the 95 percentile of the $t$-distribution around the true mean of 4 (the standard deviation of the sample mean is different every time, of course). Count how many times the sample mean is so low that it is below the 5 percentile of the $t$ distribution around the true mean. If the theory is correct, it should, of course, be the case for about 5% of the datasets. Try a few different seeds. Answers to Exercise 3

Exercise 4. $t$ test on dataset of 20 values

Generate 20 datapoints from a Normal distribution with mean 39 and standard deviation 4. Use a seed of 2. Compute and report the mean and standard deviation of the dataset and the standard deviation of the mean. If you computed it correctly, the mean of the 20 data points generated above is 38.16. Somebody now claims that the 20 datapoints are taken from a distribution with a mean of 40. You are asked to decide whether the true underlying mean could indeed be 40. In statistical terms, you are asked to perform a Hypothesis test, testing the null hypothesis that the mean is 40 against the alternative hypothesis that the mean is not 40 at significance level 5%. Hence, you are asked to do a two-sided $t$-test. All you can do in Hypothesis testing is trying to reject the null hypothesis, so let's try that. Most statistics books give a cookbook recipe for performing a $t$-test. Here we will visualize the $t$-test. We reject the null hypothesis if the sample mean is outside the 95% interval around the mean of the corresponding $t$-distribution. If the mean is inside the 95% interval we can only conclude that there is not enough evidence to reject the null hypothesis. Draw the probability density function of a $t$-distribution with mean 40 and standard deviation equal to the standard deviation of the sample mean you computed above. Draw red vertical lines indicating the left and right limits of the 95% interval around the mean. Draw a heavy black vertical line at the position of the sample mean you computed above. Decide whether you can reject the null hypothesis that the mean is 40 and add that as a title to the figure. Answers to Exercise 4

Exercise 5. Hypothesis tests on Wooden beam data

Load the data set of experiments on wooden beams stored in the file `douglas_data.csv`. First, consider the first 20 measurements of the bending strength. Compute the sample mean and the standard deviation of the sample mean. The manufacturer claims that the mean bending strength is only 50 Pa. Perform a $t$-test (significance level 5%) with null hypothesis that the mean is indeed 50 Pa and alternative hypothesis that the mean is not 50 Pa using the approach applied in Exercise 4. Repeat the $t$-test above but now with all the measurements of the bending strength. Do you reach the same conclusion? Answers to Exercise 5

Central limit theorem

So far we looked at the distribution of the sample mean of a dataset while we knew that the data was taken from a normal distribution (except for the wooden beam data, but that looked very much like a Normal distribution). Such a sample mean has a Student $t$-distribution, which approaches the Normal distribution when the dataset is large. Actually, 100 datapoints is already enough to approach the Normal distribution fairly closely.
You may check this by comparing, for example, the percent point function `ppf` of a Normal distribution with a $t$-distribution with 99 degrees of freedom, or by simply plotting the pdf of both distributions:
###Code
print('95 percentile Standard Normal: ', norm.ppf(0.95))
print('95 percentile t-dist with n=99: ', t.ppf(0.95, 99))
x = np.linspace(-4,4,100)
y1 = norm.pdf(x)
y2 = t.pdf(x,99)
plt.plot(x,y1,'b',label='Normal')
plt.plot(x,y2,'r',label='t-dist')
plt.legend()
###Output
_____no_output_____
###Markdown
The Central limit theorem now states that the distribution of the sample mean approaches the Normal distribution in the limit even if the dataset is drawn from an entirely different distribution! We are going to test this theorem by drawing numbers from a Gamma distribution. The Gamma distribution is a skewed distribution and takes a shape parameter $k$ and a scale parameter $\theta$, and is defined for $x>0$. Details on the Gamma distribution can be found, for example [here](http://en.wikipedia.org/wiki/Gamma_distribution). Let's choose the shape parameter equal to 2 and the scale parameter equal to 1 (which happens to be the default). When the scale parameter is equal to 1, the mean is equal to the shape parameter. The pdf of the Gamma distribution for these values is shown below. The mean is indicated with the red vertical line.
###Code
from scipy.stats import gamma
x = np.linspace(1e-6,10,100)
y = gamma.pdf(x,2,scale=1)
plt.plot(x,y)
plt.axvline(2,color='r')
###Output
_____no_output_____
###Markdown
Random numbers may be drawn from any distribution in the `scipy.stats` package with the `rvs` function. Here, we draw 1000 numbers and add the histogram to the previous figure
###Code
x = np.linspace(1e-6,10,100)
y = gamma.pdf(x,2)
plt.plot(x,y)
plt.axvline(2, color='r')
data = gamma.rvs(2, size=1000)
plt.hist(data, bins=20, normed=True)
###Output
_____no_output_____
###Markdown
Exercise 6. Explore Central Limit Theorem for Gamma Distribution

Generate $N$ datasets of 20 numbers randomly drawn from a Gamma distribution with shape parameter equal to 2 and scale equal to 1. Draw a histogram of the means of the $N$ datasets using 20 bins. On the same graph, draw the pdf of the Normal distribution using the mean of means and sample standard deviation of the mean; choose the limits of the $x$-axis between 0 and 4. Make 3 graphs, for $N=100,1000,10000$ and notice that the distribution starts to approach a Normal distribution. Add a title to each graph stating the number of datasets. Answers to Exercise 6

Answers to the exercises

Answers to Exercise 1
###Code
rnd.seed(22)
mean_of_data = np.mean( 2.0 * rnd.standard_normal((1000,100)) + 4.0, 1 )
print('The mean of the means is: ', np.mean(mean_of_data))
print('The standard deviation of the means is: ', np.std(mean_of_data, ddof=1))
plt.figure()
plt.boxplot(mean_of_data)
plt.figure()
plt.hist(mean_of_data, normed=True)
plt.xlim(3,5)
###Output
_____no_output_____
###Markdown
Back to Exercise 1Answers to Exercise 2
###Code
rnd.seed(22)
mean_of_data = np.mean( 2.0 * rnd.standard_normal((1000,1000)) + 4.0, 1 )
print('The mean of the means is: ', np.mean(mean_of_data))
print('The standard deviation of the means is: ', np.std(mean_of_data, ddof=1))
plt.figure()
plt.boxplot(mean_of_data)
plt.figure()
plt.hist(mean_of_data)
plt.xlim(3,5)
###Output
_____no_output_____
###Markdown
Back to Exercise 2Answers to Exercise 3
###Code
from scipy.stats import t
for s in [22,32,42,52,62]:
rnd.seed(s)
data = 2.0 * rnd.standard_normal((1000,100)) + 4.0
mean_of_data = np.mean( data, 1 )
std_of_mean_of_data = np.std( data, 1, ddof = 1 ) / np.sqrt(100)
fivepercentile = t.ppf(0.05, 99)
outside = mean_of_data < 4.0 + std_of_mean_of_data * fivepercentile
    print('number of datasets where the sample mean is below the 5 percentile: ', np.sum(outside))
###Output
_____no_output_____
###Markdown
Back to Exercise 3Answers to Exercise 4
###Code
rnd.seed(2)
data = 4 * rnd.standard_normal(20) + 39
mu = np.mean(data)
sig = np.std(data, ddof=1)
sighat = np.std(data, ddof=1) / np.sqrt(20)
print('mean of the data: ', mu)
print('std of the data: ', sig)
print('std of the mean: ', sighat)
x = np.linspace(37,43,100)
y = t.pdf(x, 19, loc=40, scale=sighat)
plt.plot(x,y)
perc025 = t.ppf(0.025, 19, loc = 40, scale = sighat)
perc975 = t.ppf(0.975, 19, loc = 40, scale = sighat)
plt.axvline(perc025,color='r')
plt.axvline(perc975,color='r')
plt.axvline(mu,color='k',lw=5)
plt.title('H0 cannot be rejected')
###Output
_____no_output_____
###Markdown
Back to Exercise 4Answers to Exercise 5
###Code
from pandas import read_csv
w = read_csv('douglas_data.csv',skiprows=[1],skipinitialspace=True)
mu20 = np.mean(w.bstrength[:20])
sig20 = np.std(w.bstrength[:20], ddof=1) / np.sqrt(20)
print('sample mean, standard deviation of sample mean: ', mu20, sig20)
x = np.linspace(30,70,100)
y = t.pdf(x, 19, loc = 50, scale = sig20)
plt.plot(x,y)
perc025 = t.ppf(0.025, 19, loc = 50, scale = sig20)
perc975 = t.ppf(0.975, 19, loc = 50, scale = sig20)
plt.axvline(perc025,color='r')
plt.axvline(perc975,color='r')
plt.axvline(mu20,color='k',lw=4)
plt.title('H0 is rejected: mean is not 50 Pa')
from pandas import read_csv
w = read_csv('douglas_data.csv',skiprows=[1],skipinitialspace=True)
N = len(w.bstrength)
mu = np.mean(w.bstrength)
sig = np.std(w.bstrength, ddof=1) / np.sqrt(N)
print('sample mean, standard deviation of sample mean: ', mu, sig)
x = np.linspace(30,70,100)
y = t.pdf(x, N-1, loc=50, scale=sig)
plt.plot(x,y)
perc025 = t.ppf(0.025, N-1, loc = 50, scale = sig)
perc975 = t.ppf(0.975, N-1, loc = 50, scale = sig)
plt.axvline(perc025,color='r')
plt.axvline(perc975,color='r')
plt.axvline(mu,color='k',lw=4)
plt.title('Not enough evidence to reject H0: mean may very well be 50')
###Output
_____no_output_____
###Markdown
Back to Exercise 5Answers to Exercise 6
###Code
from scipy.stats import norm, gamma
for N in [100, 1000, 10000]:
data = gamma.rvs(2,size=(N,20))
mean_of_data = np.mean(data,1)
mu = np.mean(mean_of_data)
sig = np.std(mean_of_data,ddof=1)
plt.figure()
plt.hist(mean_of_data,bins=20,normed=True)
x = np.linspace(0,4,100)
y = norm.pdf(x,loc=mu,scale=sig)
plt.plot(x,y,'r')
plt.title('N='+str(N))
###Output
_____no_output_____ |
Deep-Learning-Notes/3 Foundation/4-L2Penalty-and-Dropout.ipynb | ###Markdown
L2 Regularization and Dropout --- L2 Norm Penalty
###Code
import torch
import torch.nn as nn
import numpy as np
import sys
sys.path.append('..')
import d2lzh_pytorch as d2l
n_train, n_test , num_input = 20, 100, 200
true_w, true_b = torch.ones(num_input, 1) * 0.01, 0.05
true_w[:10]
features = torch.randn((n_train + n_test, num_input))
labels = torch.matmul(features, true_w) + true_b
labels += torch.tensor(np.random.normal(0, 0.01, size=labels.size()), dtype=torch.float)
train_features, test_features = features[:n_train, :], features[n_train:, :]
train_labels, test_labels = labels[:n_train], labels[n_train:]
print(train_features.shape)
print(test_features.shape)
print(train_labels.shape)
print(test_labels.shape)
# [Initialize the model parameters]
def init_params():
w = torch.randn((num_input, 1),requires_grad=True)
b = torch.zeros(1, requires_grad=True)
return [w, b]
# [L2 norm penalty]
def l2_penalty(w):
return (w**2).sum() / 2
# [Define training and testing]
batch_size, num_epochs, lr = 1, 100, 0.003
net, loss = d2l.linreg, d2l.squared_loss
dataset = torch.utils.data.TensorDataset(train_features, train_labels)
train_iter = torch.utils.data.DataLoader(dataset, batch_size, shuffle=True)
def fit_and_plot(lambd):
w, b = init_params()
train_ls, test_ls = [], []
for _ in range(num_epochs):
for X, y in train_iter:
l = loss(net(X, w, b), y) + lambd * l2_penalty(w)
l = l.sum()
if w.grad is not None:
w.grad.data.zero_()
b.grad.data.zero_()
l.backward()
d2l.sgd([w,b], lr, batch_size)
train_ls.append(loss(net(train_features, w, b), train_labels).mean().item())
test_ls.append(loss(net(test_features, w, b), test_labels).mean().item())
d2l.semilogy(range(1, num_epochs + 1), train_ls, 'epochs', 'loss',
range(1, num_epochs + 1), test_ls, ['train', 'test'])
print('L2 norm of w: ', w.norm().item())
# without weight decay
fit_and_plot(lambd=0)
# with weight decay
fit_and_plot(lambd=3)
fit_and_plot(lambd=20)
###Output
L2 norm of w: 0.017819780856370926
###Markdown
Implementing L2 regularization with the PyTorch library
###Code
def fit_and_plot_pytorch(wd):
net = nn.Linear(num_input, 1)
nn.init.normal_(net.weight, mean=0, std=1)
nn.init.normal_(net.bias, mean=0, std=1)
    optimizer_w = torch.optim.SGD(params=[net.weight], lr=lr, weight_decay=wd) # weight decay on the weight parameters
    optimizer_b = torch.optim.SGD(params=[net.bias], lr=lr)  # no weight decay on the bias parameter
train_ls, test_ls = [], []
for _ in range(num_epochs):
for X, y in train_iter:
l = loss(net(X), y).mean()
optimizer_w.zero_grad()
optimizer_b.zero_grad()
l.backward()
            # call step() on both optimizer instances to update the weights and the bias separately
optimizer_w.step()
optimizer_b.step()
train_ls.append(loss(net(train_features), train_labels).mean().item())
test_ls.append(loss(net(test_features), test_labels).mean().item())
d2l.semilogy(range(1, num_epochs + 1), train_ls, 'epochs', 'loss',
range(1, num_epochs + 1), test_ls, ['train', 'test'])
print('L2 norm of w: ', net.weight.data.norm().item())
fit_and_plot_pytorch(0)
fit_and_plot_pytorch(3)
###Output
L2 norm of w: 0.06084102764725685
###Markdown
Dropout (implemented from scratch)
###Code
def dropout(X, drop_prob):
X = X.float()
assert 0 <= drop_prob <= 1
keep_prob = 1- drop_prob
if keep_prob == 0:
return torch.zeros_like(X)
mask = (torch.rand(X.shape) < keep_prob).float()
return mask * X / keep_prob
X = torch.arange(16).view(2, 8)
dropout(X, 0)
dropout(X, 0.5)
dropout(X, 1.0)
num_inputs, num_outputs, num_hiddens1, num_hiddens2 = 784, 10, 256, 256
W1 = torch.tensor(np.random.normal(0, 0.01, size=(num_inputs, num_hiddens1)), dtype=torch.float, requires_grad=True)
b1 = torch.zeros(num_hiddens1, requires_grad=True)
W2 = torch.tensor(np.random.normal(0, 0.01, size=(num_hiddens1, num_hiddens2)), dtype=torch.float, requires_grad=True)
b2 = torch.zeros(num_hiddens2, requires_grad=True)
W3 = torch.tensor(np.random.normal(0, 0.01, size=(num_hiddens2, num_outputs)), dtype=torch.float, requires_grad=True)
b3 = torch.zeros(num_outputs, requires_grad=True)
params = [W1, b1, W2, b2, W3, b3]
drop_prob1, drop_prob2 = 0.2, 0.5
def net(X, is_training=True):
X = X.view(-1, num_inputs)
H1 = (torch.matmul(X, W1) + b1).relu()
    if is_training:  # only apply dropout while training the model
        H1 = dropout(H1, drop_prob1)  # add a dropout layer after the first fully connected layer
H2 = (torch.matmul(H1, W2) + b2).relu()
if is_training:
        H2 = dropout(H2, drop_prob2)  # add a dropout layer after the second fully connected layer
return torch.matmul(H2,W3) + b3
num_epochs, lr, batch_size = 5, 100.0, 256
loss = torch.nn.CrossEntropyLoss()
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size, params, lr)
###Output
d:\anaconda3\envs\pytorch_gpu_1\lib\site-packages\torchvision\datasets\mnist.py:498: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:180.)
return torch.from_numpy(parsed.astype(m[2], copy=False)).view(*s)
###Markdown
Concise implementation of Dropout
###Code
net = nn.Sequential(
d2l.FlattenLayer(),
nn.Linear(num_inputs, num_hiddens1),
nn.ReLU(),
nn.Dropout(),
nn.Linear(num_hiddens1, num_hiddens2),
nn.ReLU(),
nn.Dropout(),
nn.Linear(num_hiddens2, 10)
)
for param in net.parameters():
nn.init.normal_(param, mean=0, std=0.1)
optimizer = torch.optim.SGD(net.parameters(), lr=0.5)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size, None, None, optimizer)
###Output
epoch 1, loss 0.0038, train acc 0.653, test acc 0.750
epoch 2, loss 0.0025, train acc 0.769, test acc 0.809
epoch 3, loss 0.0022, train acc 0.793, test acc 0.829
epoch 4, loss 0.0021, train acc 0.807, test acc 0.823
epoch 5, loss 0.0020, train acc 0.818, test acc 0.837
|
hw5/hw5_report.ipynb | ###Markdown
P1: Compare the results with and without normalization (applied to the ratings), and explain how the normalization is done. (1%)
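For reference, the normalization on the ratings appears to be standardization with the training-set mean and standard deviation; the constants 3.58171208 and 1.116897661 are the ones reused inside the `rmse` metric later in this report. A minimal sketch (the `MF` model object, `train_rating` and the test arrays are the names used elsewhere in this notebook):

```python
import numpy as np

rating_mean, rating_std = 3.58171208, 1.116897661   # mean / std of the training ratings

# standardize the ratings before fitting ...
train_rating_norm = (train_rating - rating_mean) / rating_std

# ... and map predictions back to the original 1-5 scale afterwards
pred_rating = MF.predict([test_user, test_movie]) * rating_std + rating_mean
pred_rating = np.clip(pred_rating, 1.0, 5.0)
```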
###Code
plt.plot(hist.history["val_rmse"],label="Non-normalized RMSE")
plt.plot(hist_normal.history["val_rmse"],label="Normalized RMSE")
plt.title("Nomalization comparison - Validaton")
plt.xlabel("Epochs")
plt.ylabel("RMSE")
plt.legend()
plt.show()
plt.plot(hist.history["rmse"],label="Non-normalized RMSE")
plt.plot(hist_normal.history["rmse"],label="Normalized RMSE")
plt.title("Nomalization comparison - Training")
plt.xlabel("Epochs")
plt.ylabel("RMSE")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
P2: Compare the results for different latent dimensions. (1%)
###Code
train_rmse = []
valid_rmse = []
batch_size = 1024
epochs =30
for lant in [8,16,32,64,128,256]:
MF = MF_model(num_user,num_movie,latent_dim=lant,bias=False)
train_user_, train_movie_, train_rating_ = shuffle(train_user, train_movie, train_rating)
hist_latent = History()
early_stop = EarlyStopping(monitor="val_rmse", patience=3)
MF.fit([train_user_, train_movie_], train_rating_,
batch_size=batch_size,
epochs=epochs,
validation_split=0.05,callbacks=[ hist_latent])
train_rmse.append(hist_latent.history["rmse"])
valid_rmse.append(hist_latent.history["val_rmse"])
for i,j in enumerate([8,16,32,64,128,256]):
plt.plot(train_rmse[i],label=str(j))
plt.title("Latent comparison - Training")
plt.xlabel("Epochs")
plt.ylabel("RMSE")
plt.legend()
plt.show()
for i,j in enumerate([8,16,32,64,128,256]):
plt.plot(valid_rmse[i],label=str(j))
plt.title("Latent comparison - Validation")
plt.ylim(0.75,2.2)
plt.xlabel("Epochs")
plt.ylabel("RMSE")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
P3: Compare the results with and without the bias terms. (1%)
###Code
batch_size = 256
epochs = 10
MF = MF_model(num_user,num_movie,latent_dim=15,bias=True,normalize=False)
train_user_, train_movie_, train_rating_ = shuffle(train_user, train_movie, train_rating)
hist_bias = History()
early_stop = EarlyStopping(monitor="val_rmse", patience=3)
MF.fit([train_user_, train_movie_], train_rating_,
batch_size=batch_size,
epochs=epochs,
validation_split=0.1,callbacks=[ hist_bias])
plt.plot(hist.history["val_rmse"],label="No bias RMSE")
plt.plot(hist_bias.history["val_rmse"],label="Bias RMSE")
plt.xlabel("Epochs")
plt.ylabel("RMSE")
plt.title("Bias term comparison - Validation")
plt.legend()
plt.show()
plt.plot(hist.history["rmse"],label="No bias RMSE")
plt.plot(hist_bias.history["rmse"],label="Bias RMSE")
plt.title("Bias term comparison - Training")
plt.xlabel("Epochs")
plt.ylabel("RMSE")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
P4: Try to solve this problem with a DNN (slide p.28) and explain the implementation (any method is allowed). Compare the MF and NN results and discuss the differences. (1%)
###Code
def NN_model(n_users, n_items, latent_dim=128,normalize=False):
if normalize:
def rmse(y_true, y_pred):
y_true = y_true*1.116897661+3.58171208
y_pred = y_pred*1.116897661+3.58171208
y_pred = K.clip(y_pred, 1.0, 5.0)
return K.sqrt(K.mean(K.pow(y_true - y_pred, 2)))
else:
def rmse(y_true, y_pred):
y_pred = K.clip(y_pred, 1.0, 5.0)
return K.sqrt(K.mean(K.pow(y_true - y_pred, 2)))
get_custom_objects().update({"rmse": rmse})
user_input = Input(shape=[1])
item_input = Input(shape=[1])
user_vec = Embedding(n_users, latent_dim, embeddings_initializer="random_normal")(user_input)
user_vec = Flatten()(user_vec)
item_vec = Embedding(n_items, latent_dim, embeddings_initializer="random_normal")(item_input)
item_vec = Flatten()(item_vec)
merge_vec = concatenate([item_vec,user_vec])
hidden = Dense(128,activation="relu")(merge_vec)
hidden = Dense(64,activation="relu")(hidden)
output = Dense(1)(hidden)
model = Model([user_input,item_input],output)
model.compile(loss="mse", optimizer="adam", metrics=[rmse])
return model
batch_size = 1024
epochs = 30
NN = NN_model(num_user,num_movie,latent_dim=20,normalize=False)
train_user_, train_movie_, train_rating_ = shuffle(train_user, train_movie, train_rating)
hist_NN = History()
early_stop = EarlyStopping(monitor="val_rmse", patience=3)
NN.fit([train_user_, train_movie_], train_rating_,
batch_size=batch_size,
epochs=epochs,
validation_split=0.05,callbacks=[hist_NN])
plt.plot(hist.history["val_rmse"],label="MF RMSE")
plt.plot(hist_NN.history["val_rmse"],label="NN RMSE")
plt.title("NN comparison- Validation")
plt.xlabel("Epochs")
plt.ylabel("RMSE")
plt.legend()
plt.show()
plt.plot(hist.history["rmse"],label="MF RMSE")
plt.plot(hist_NN.history["rmse"],label="NN RMSE")
plt.title("NN comparison - Training")
plt.xlabel("Epochs")
plt.ylabel("RMSE")
plt.legend()
plt.show()
test_user = np.array([user2id[i] for i in test_data["UserID"]])
test_movie = np.array([movie2id[i] for i in test_data["MovieID"]])
pred_rating = NN.predict([test_user,test_movie])
print(pred_rating)
###Output
[[ 4.29356718]
[ 4.12751293]
[ 4.39508724]
...,
[ 1.47380805]
[ 3.7565589 ]
[ 3.99380565]]
###Markdown
P5: Reduce the movie embeddings to 2D with t-SNE and plot them using the movie category as the label (as on slide p.29). (1%)
###Code
movie_df = pd.read_csv("movies.csv",sep="::")
movie_df["Genres"] = movie_df["Genres"].apply(lambda x:x.split("|")[0])
print(movie_df.head())
user_emb = np.array(MF.layers[2].get_weights()).squeeze()
print("user embeddign shape:", user_emb.shape)
movie_emb = np.array(MF.layers[3].get_weights()).squeeze()
print("movie embedding shape:", movie_emb.shape)
np.save("user_emb.npy", user_emb)
np.save("movie_emb.npy",movie_emb)
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0, perplexity=80)
vis_data = tsne.fit_transform(movie_emb)
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
inv_map = {v: k for k, v in movie2id.items()}
y = []
for i in range(len(vis_data)):
movie_id = inv_map[i]
genres = movie_df.loc[movie_df["movieID"]==movie_id,"Genres"].tolist()
y+=genres
y_c = le.fit_transform(y)
plt.figure(figsize=(16,9))
cm = plt.cm.get_cmap("tab20", 18)
sc = plt.scatter(vis_data[:,0], vis_data[:,1], c=y_c, cmap=cm)
plt.colorbar(ticks=range(18))
plt.clim(-0.5, 17.5)
plt.show()
pd.DataFrame({"class":list(range(18)),"genres":list(le.classes_)})
len(le.classes_)
###Output
_____no_output_____
###Markdown
BONUS: Try using features other than the rating, and explain your approach and results; the quality of the result will not affect the grade. (1%)
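The cell below only compares validation curves against the submitted scores. As an illustration of what a feature other than the rating could look like, here is a generic sketch that feeds a per-movie genre id through an extra embedding; this is not necessarily the approach used for the submission above, and `n_genres` plus the genre ids are hypothetical:

```python
from keras.layers import Input, Embedding, Flatten, Dense, concatenate
from keras.models import Model

def MF_with_genre(n_users, n_items, n_genres, latent_dim=15):
    # three inputs: user id, movie id and a (hypothetical) genre id per movie
    user_in, item_in, genre_in = Input(shape=[1]), Input(shape=[1]), Input(shape=[1])
    u = Flatten()(Embedding(n_users, latent_dim)(user_in))
    v = Flatten()(Embedding(n_items, latent_dim)(item_in))
    g = Flatten()(Embedding(n_genres, 4)(genre_in))          # small genre embedding
    hidden = Dense(64, activation="relu")(concatenate([u, v, g]))
    output = Dense(1)(hidden)
    model = Model([user_in, item_in, genre_in], output)
    model.compile(loss="mse", optimizer="adam")
    return model
```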
###Code
kaggle_val = [0.8617,0.8464,0.8381,0.8353,0.8352,0.8338,0.8340,0.8349,0.8354,0.8350]
plt.plot(hist_bias.history["val_rmse"],label="MF with bias RMSE")
plt.plot(kaggle_val,label="best RMSE")
plt.title("Best comparison - Validation")
plt.xlabel("Epochs")
plt.ylabel("RMSE")
plt.legend()
plt.show()
###Output
_____no_output_____ |
Notebooks/Archive/Bike_Demand_Prediction_3.ipynb | ###Markdown
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
import warnings
warnings.filterwarnings('ignore')
!wget http://archive.ics.uci.edu/ml/machine-learning-databases/00560/SeoulBikeData.csv
df = pd.read_csv('./SeoulBikeData.csv', index_col='Date', encoding='unicode_escape')
data = df.copy()
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(data, data['Seasons']):
strat_train_set = data.iloc[train_index]
strat_test_set = data.iloc[test_index]
strat_train_set.shape
strat_test_set.shape
data = strat_train_set.drop('Rented Bike Count', axis=1)
data_labels = strat_train_set['Rented Bike Count'].copy()
data
def num_pipeline_transformer(data):
"""
Function to process numerical transformations
Argument:
data: original dataframe
Returns:
num_attrs: numerical dataframe
num_pipeline: numerical pipeline object
"""
numerics = ['float64', 'int64']
num_attrs = data.select_dtypes(include=numerics)
num_pipeline = Pipeline([
('std_scaler', StandardScaler()),
])
return num_attrs, num_pipeline
def pipeline_transformer(data):
"""
Complete transformation pipeline for both
numerical and categorical data.
Argument:
data:original dataframe
Returns:
prepared_data: transformed, ready to use
"""
cat_attrs = ['Seasons', 'Holiday', 'Functioning Day']
num_attrs, num_pipeline = num_pipeline_transformer(data)
full_pipeline = ColumnTransformer([
('num', num_pipeline, list(num_attrs)),
('cat', OneHotEncoder(), cat_attrs),
])
prepared_data = full_pipeline.fit_transform(data)
return prepared_data
prepared_data = pipeline_transformer(data)
prepared_data
prepared_data[0]
prepared_data.shape
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(prepared_data, data_labels)
sample_data = data.iloc[:5]
sample_labels = data_labels.iloc[:5]
sample_data_prepared = pipeline_transformer(sample_data)
print ("Prediction of samples:", lin_reg.predict(sample_data_prepared))
###Output
_____no_output_____ |
duel/duel-of-sorcerers.ipynb | ###Markdown
Duel of SorcerersYou are witnessing an epic battle between two powerful sorcerers: Gandalf and Saruman. Each sorcerer has 10 spells of variable power in their mind and they are going to throw them one after the other. The winner of the duel will be the one who wins more of those clashes between spells. Spells are represented as a list of 10 integers whose value equals the power of the spell.```gandalf = [10, 11, 13, 30, 22, 11, 10, 33, 22, 22]saruman = [23, 66, 12, 43, 12, 10, 44, 23, 12, 17]```For example:- The first clash is won by Saruman: 10 against 23.- The second clash is won by Saruman: 11 against 66.- ...You will create two variables, one for each sorcerer, where the sum of clashes won will be stored. Depending on which variable is greater at the end of the duel, you will show one of the following three results on the screen:* Gandalf wins* Saruman wins* Tie ToolsYou don't necessarily need to use all the tools. Maybe you opt to use some of them or completely different ones, they are given to help you shape the exercise. Programming exercises can be solved in many different ways.1. Data structures: **lists, dictionaries**2. Loop: **for loop**3. Conditional statements: **if-elif-else**4. Functions: **range(), len(), print()** Tasks 1. Create two variables called `gandalf` and `saruman` and assign them the spell power lists. Create a variable called `spells` to store the number of spells that the sorcerers cast.
###Code
gandalf = [10, 11, 13, 30, 22, 11, 10, 33, 22, 22]
saruman = [23, 66, 12, 43, 12, 10, 44, 23, 12, 17]
spells = 10
###Output
_____no_output_____
###Markdown
2. Create two variables called `gandalf_wins` and `saruman_wins`. Set both of them to 0. You will use these variables to count the number of clashes each sorcerer wins.
###Code
gandalf_wins = 0
saruman_wins = 0
###Output
_____no_output_____
###Markdown
3. Using the lists of spells of both sorcerers, update variables `gandalf_wins` and `saruman_wins` to count the number of times each sorcerer wins a clash.
###Code
for i in range(len(gandalf)):
if gandalf[i]>saruman[i]:
gandalf_wins += 1
elif saruman[i]>gandalf[i]:
saruman_wins += 1
print("Total gandalf wins:", gandalf_wins)
print("Total saruman wins:", saruman_wins)
###Output
Total gandalf wins: 6
Total saruman wins: 4
###Markdown
4. Who won the battle?Print `Gandalf wins`, `Saruman wins` or `Tie` depending on the result.
###Code
if gandalf_wins > saruman_wins:
    print("Gandalf wins !")
elif saruman_wins > gandalf_wins:
    print("Saruman wins !")
else:
    print("Tie")
###Output
Gandalf wins !
###Markdown
BonusIn this bonus challenge, you'll need to check the winner of the battle but this time, a sorcerer wins if he succeeds in winning 3 spell clashes in a row.Also, the spells now have a name and there is a dictionary that associates that name to a power.```POWER = { 'Fireball': 50, 'Lightning bolt': 40, 'Magic arrow': 10, 'Black Tentacles': 25, 'Contagion': 45}gandalf = ['Fireball', 'Lightning bolt', 'Lightning bolt', 'Magic arrow', 'Fireball', 'Magic arrow', 'Lightning bolt', 'Fireball', 'Fireball', 'Fireball']saruman = ['Contagion', 'Contagion', 'Black Tentacles', 'Fireball', 'Black Tentacles', 'Lightning bolt', 'Magic arrow', 'Contagion', 'Magic arrow', 'Magic arrow']``` 1. Create variables `POWER`, `gandalf` and `saruman` as seen above. Create a variable called `spells` to store the number of spells that the sorcerers cast.
###Code
POWER = {
'Fireball': 50,
'Lightning bolt': 40,
'Magic arrow': 10,
'Black Tentacles': 25,
'Contagion': 45
}
gandalf = ['Fireball', 'Lightning bolt', 'Lightning bolt', 'Magic arrow', 'Fireball',
'Magic arrow', 'Lightning bolt', 'Fireball', 'Fireball', 'Fireball']
saruman = ['Contagion', 'Contagion', 'Black Tentacles', 'Fireball', 'Black Tentacles',
'Lightning bolt', 'Magic arrow', 'Contagion', 'Magic arrow', 'Magic arrow']
spells = 10
###Output
_____no_output_____
###Markdown
2. Create two variables called `gandalf_wins` and `saruman_wins`. Set both of them to 0.
###Code
gandalf_wins = 0
saruman_wins = 0
###Output
_____no_output_____
###Markdown
3. Create two variables called `gandalf_power` and `saruman_power` to store the list of spell powers of each sorcerer.
###Code
gandalf_power = []
saruman_power = []
for i in gandalf:
gandalf_power.append(POWER[i])
for i in saruman:
saruman_power.append(POWER[i])
print('List of spells of Gandalf:',gandalf_power, '\n List of spells of Saruman:', saruman_power)
###Output
List of spells of Gandalf: [50, 40, 40, 10, 50, 10, 40, 50, 50, 50]
List of spells of Saruman: [45, 45, 25, 50, 25, 40, 10, 45, 10, 10]
###Markdown
4. The battle starts! Using the variables you've created above, code the execution of spell clashes. Remember that a sorcerer wins if he succeeds in winning 3 spell clashes in a row. If a clash ends up in a tie, the counter of wins in a row is not restarted to 0. Remember to print who is the winner of the battle.
###Code
for i in range(len(gandalf_power)):
if gandalf_power[i]>saruman_power[i]:
gandalf_wins += 1
saruman_wins = 0
if gandalf_wins == 3:
print("Gandalf wins!")
break
elif saruman_power[i]>gandalf_power[i]:
saruman_wins +=1
gandalf_wins = 0
if saruman_wins == 3:
print("Saruman wins.")
break
    else:
        # A tie: the counters of consecutive wins are not reset
        pass
print("Total gandalf wins:", gandalf_wins)
print("Total saruman wins:", saruman_wins)
###Output
Gandalf wins!
Total gandalf wins: 3
Total saruman wins: 0
###Markdown
5. Find the average spell power of Gandalf and Saruman.
###Code
import numpy as np

print(sum(gandalf_power)/len(gandalf_power))
print(sum(saruman_power)/len(saruman_power))
np.mean(gandalf_power)
###Output
_____no_output_____
###Markdown
6. Find the standard deviation of the spell power of Gandalf and Saruman.
###Code
import numpy as np
np.std(gandalf_power)
np.std(saruman_power)
###Output
_____no_output_____ |
KG_02_Mathematicians.ipynb | ###Markdown
Black Mathematicians have made significant contributions to American life.This notebook creates a dataset that tells their stories through a lens of data analysis.
###Code
from IPython.display import Image
Image("Banneker.jpg")
# Benjamin Banneker
from IPython.display import Image
Image("KatherineJ.jpg")
# Katherine Johnson
import pandas as pd
import numpy as np
import seaborn as sns
import nltk
import requests
from bs4 import BeautifulSoup as bs
print("Pandas Version", pd.__version__)
#Requests
r = requests.get("https://en.wikipedia.org/wiki/List_of_African-American_mathematicians")
#Convert to bs object
soup = bs(r.content)
soup.title.text
#Scraping the table of Math PhDs
table = soup.find_all("table", attrs = {'class': 'wikitable'})[2]
phd = pd.read_html(str(table))[0]
phd['Degree'] = 'Phd'
phd.head()
#Find those with degrees in Math Education
table = soup.find_all("table", attrs = {'class': 'wikitable'})[4]
math_ed = pd.read_html(str(table))[0]
math_ed['Degree'] = 'Math Ed'
math_ed.head()
#Reporting out findings
print(f'There were {len(phd)} African Americans who earned their PhDs in Mathematics between 1925 and 1975.')
print(f'There were {len(math_ed)} African Americans who earned their PhDs in Mathematics Education between 1925 and 1975.')
#Concatenating the two dataframes
df = pd.concat([phd, math_ed])
# df = df.drop(columns = ['Ref.'])
df.head()
df.to_csv("African_American_Mathematicians.csv")
#Checking for names that appear more than once
#Socrates Walter Saunders earned 2 Math PhDs; one in 1942 and one in 1962
df['Name'].value_counts().head(5)
#How many were Math Education degrees vs. PhDs
df['Degree'].value_counts()
#Doctoral degree by gender
df.groupby(['Gender'])['Degree'].count()
#What were the universities who awarded the most degrees to AA mathematicians
most_common_univ = df['Awarded by'].value_counts().head(20)
most_common_univ
###Output
_____no_output_____
###Markdown
The analysis below only includes advanced degrees and is not meant to ignore the important and fundamental work done by HBCUs, where an overwhelming number of the people on this list received their undergraduate training.
###Code
#Which universities awarded the most degrees to Black Mathematicians
for i, j in zip(most_common_univ.index, most_common_univ):
    if j == 1:
        print(f"{i} awarded {j} advanced degree to Black Mathematicians.\n")
    else:
        print(f"{i} awarded {j} advanced degrees to Black Mathematicians.\n")
###Output
Oklahoma State University awarded 10 advanced degrees to Black Mathematicians.
University of Michigan awarded 10 advanced degrees to Black Mathematicians.
University of Illinois awarded 9 advanced degrees to Black Mathematicians.
University of Pittsburgh awarded 7 advanced degrees to Black Mathematicians.
Cornell University awarded 6 advanced degrees to Black Mathematicians.
University of Chicago awarded 6 advanced degrees to Black Mathematicians.
Ohio State University awarded 6 advanced degrees to Black Mathematicians.
University of California, Berkeley awarded 6 advanced degrees to Black Mathematicians.
University of Pennsylvania awarded 5 advanced degrees to Black Mathematicians.
Pennsylvania State University awarded 5 advanced degrees to Black Mathematicians.
University of Texas awarded 5 advanced degrees to Black Mathematicians.
Columbia University awarded 5 advanced degrees to Black Mathematicians.
Purdue University awarded 5 advanced degrees to Black Mathematicians.
New York University awarded 5 advanced degrees to Black Mathematicians.
Syracuse University awarded 4 advanced degrees to Black Mathematicians.
Catholic University of America awarded 3 advanced degrees to Black Mathematicians.
University of Maryland awarded 3 advanced degrees to Black Mathematicians.
Harvard University awarded 3 advanced degrees to Black Mathematicians.
Case Western University awarded 2 advanced degrees to Black Mathematicians.
University of North Carolina awarded 2 advanced degrees to Black Mathematicians.
|
python-statatics-tutorial/basic-theme/python-language/MultiThread.ipynb | ###Markdown
Python Multithreading 1 The Python Global Interpreter Lock (GIL) Access to the Python virtual machine is controlled by the Global Interpreter Lock (GIL); it is this lock that guarantees only one thread runs at any given moment. In a multithreaded environment, the Python virtual machine proceeds as follows: 1. Set the GIL 2. Switch to a thread and run it 3. Run either + a specified number of bytecode instructions, or + until the thread voluntarily yields control (for example by calling `time.sleep(0)`) 4. Put the thread back to sleep 5. Release the GIL 6. Repeat all of the steps above (a short timing sketch of the GIL's practical effect is added after the next example) 2 Without multithreading (single-threaded)
###Code
from time import sleep, ctime
def loop0():
print 'start loop 0 at: ',ctime()
sleep(4)
print 'loop 0 done at: ', ctime()
def loop1():
print 'start loop 1 at: ', ctime()
sleep(2)
print 'loop 1 done at: ', ctime()
def main():
print 'starting at: ', ctime()
loop0()
loop1()
print 'all Done at:', ctime()
main()
###Output
starting at: Wed Feb 1 15:45:34 2017
start loop 0 at: Wed Feb 1 15:45:34 2017
loop 0 done at: Wed Feb 1 15:45:38 2017
start loop 1 at: Wed Feb 1 15:45:38 2017
loop 1 done at: Wed Feb 1 15:45:40 2017
all Done at: Wed Feb 1 15:45:40 2017
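Aside (not part of the original notebook): the `sleep()` calls above release the GIL while they wait, but pure Python computation does not, so CPU-bound work gains essentially nothing from threads under CPython. A minimal timing sketch of the effect described in section 1:

```python
# Minimal sketch (assumes CPython): counting down is CPU-bound, so splitting it
# across two threads does not make it faster, because only the thread holding
# the GIL executes bytecode at any moment.
import threading
from time import time

def count(n):
    while n > 0:
        n -= 1

N = 5000000

start = time()
count(N); count(N)                                   # run twice, sequentially
print 'sequential: %.2fs' % (time() - start)

start = time()
t1 = threading.Thread(target=count, args=(N,))
t2 = threading.Thread(target=count, args=(N,))
t1.start(); t2.start()
t1.join(); t2.join()
print 'two threads: %.2fs (similar or slower)' % (time() - start)
```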
###Markdown
3 The thread module 1. start_new_thread(function, args, kwargs=None) spawns a new thread and calls the function with the given positional and optional keyword arguments 2. allocate_lock() allocates a lock object of type LockType 3. exit() makes the thread exit 3.1 Multithreading, version 1
###Code
import thread
from time import ctime, sleep
def loop0():
print 'start loop 0 at: ',ctime()
sleep(4)
print 'loop 0 done at: ', ctime()
def loop1():
print 'start loop 1 at: ', ctime()
sleep(2)
print 'loop 1 done at: ', ctime()
def main():
print 'start at: ', ctime()
thread.start_new_thread(loop0, ())
thread.start_new_thread(loop1, ())
sleep(6)
print 'all Done at: ', ctime()
main()
###Output
start at: Wed Feb 1 15:45:26 2017
start loop 1 at: Wed Feb 1 15:45:26 2017start loop 0 at:
Wed Feb 1 15:45:26 2017
loop 1 done at: Wed Feb 1 15:45:28 2017
loop 0 done at: Wed Feb 1 15:45:30 2017
all Done at: Wed Feb 1 15:45:32 2017
###Markdown
3.2 Multithreading, version 2: using locks
###Code
import thread
from time import ctime, sleep
loops = [4, 2]
def loop(nloop, nsec, lock):
print 'start loop', nloop, 'at', ctime()
sleep(nsec)
print 'loop', nloop, 'done at:', ctime()
lock.release()
def main():
print 'starting at: ', ctime()
locks = []
nloops = range(len(loops))
for i in nloops:
lock= thread.allocate_lock()
lock.acquire()
locks.append(lock)
for i in nloops:
thread.start_new_thread(loop,(i, loops[i], locks[i]))
for i in nloops:
while locks[i].locked():
pass
print 'all Done at: ', ctime()
main()
###Output
starting at: Wed Feb 1 15:55:39 2017
start loop 1 atstart loop Wed Feb 1 15:55:39 2017
0 at Wed Feb 1 15:55:39 2017
loop 1 done at: Wed Feb 1 15:55:41 2017
loop 0 done at: Wed Feb 1 15:55:43 2017
all Done at: Wed Feb 1 15:55:43 2017
###Markdown
4 The threading module 1. Thread represents a thread of execution 2. Lock a lock object 3. Condition a condition-variable object that lets one thread pause and wait until another thread satisfies some condition 4. Event a general-purpose event variable that multiple threads can wait on 5. Timer runs a callable only after a given amount of time has elapsed. (Event and Timer are sketched briefly after the Thread examples below.) 4.1 Using the Thread class There are three ways to use it: 1. Create a Thread instance and pass it a function 2. Create a Thread instance and pass it a callable class instance 3. Derive a subclass from Thread and create an instance of that subclass (recommended) 4.1.1 Passing a function
###Code
import threading
from time import ctime, sleep
loops = [4, 2]
def loop(nloop, nsec):
print 'start loop', nloop, 'at: ', ctime()
sleep(nsec)
print 'loop', nloop, 'done at: ', ctime()
def main():
print 'starting at: ', ctime()
threads = []
nloops = range(len(loops))
for i in nloops:
t = threading.Thread(target=loop, args=(i, loops[i]))
threads.append(t)
for i in nloops:
threads[i].start()
for i in nloops:
threads[i].join()
print 'all Done at: ', ctime()
main()
###Output
starting at: Wed Feb 1 16:16:45 2017
start loop 0 at: Wed Feb 1 16:16:45 2017
start loop 1 at: Wed Feb 1 16:16:45 2017
loop 1 done at: Wed Feb 1 16:16:47 2017
loop 0 done at: Wed Feb 1 16:16:49 2017
all Done at: Wed Feb 1 16:16:49 2017
###Markdown
4.1.2 Passing a callable object
###Code
import threading
from time import ctime, sleep
loops = [4, 2]
class ThreadFunc(object):
def __init__(self, func, args, name=''):
self.name = name
self.func = func
self.args = args
def __call__(self):
apply(self.func, self.args)
def loop(nloop, nsec):
print 'start loop', nloop, 'at: ', ctime()
sleep(nsec)
print 'loop', nloop, 'done at: ',ctime()
def main():
print 'starting at: ', ctime()
threads = []
nloops = range(len(loops))
for i in nloops:
t = threading.Thread(target=ThreadFunc(loop, (i, loops[i]), loop.__name__))
threads.append(t)
for i in nloops:
threads[i].start()
for i in nloops:
threads[i].join()
main()
###Output
starting at: Wed Feb 1 16:37:20 2017
start loop 0 at: Wed Feb 1 16:37:20 2017
start loop 1 at: Wed Feb 1 16:37:20 2017
loop 1 done at: Wed Feb 1 16:37:22 2017
loop 0 done at: Wed Feb 1 16:37:24 2017
###Markdown
4.1.3 Deriving a subclass of Thread
###Code
import threading
from time import ctime, sleep
class MyThread(threading.Thread):
def __init__(self, func, args, name =''):
threading.Thread.__init__(self)
self.name = name
self.func = func
self.args = args
def getResult(self):
return self.result
def run(self):
print 'starting', self.name, 'at: ', ctime()
self.result = apply(self.func, self.args)
print self.name, 'finished at: ', ctime()
def fib(x):
sleep(0.005)
if x < 2: return x
return fib(x-1) + fib(x-2)
def fac(x):
sleep(0.1)
if x < 2: return x
return fac(x-1) * x
def sum(x):
sleep(0.1)
if x<2: return x
return x+sum(x-1)
funcs = [fib, fac, sum]
n = 12
def main():
nfuncs = range(len(funcs))
print '*** SINGLE THREAD'
for i in nfuncs:
print 'starting', funcs[i].__name__, 'at: ', ctime()
print funcs[i](n)
print funcs[i].__name__, 'finished at: ', ctime()
print '\n *** MULTI THREAD'
threads = []
for i in nfuncs:
t = MyThread(funcs[i], (n,), funcs[i].__name__)
threads.append(t)
for i in nfuncs:
threads[i].start()
for i in nfuncs:
threads[i].join()
print threads[i].getResult()
print 'all done'
main()
###Output
*** SINGLE THREAD
starting fib at: Wed Feb 1 16:59:09 2017
144
fib finished at: Wed Feb 1 16:59:11 2017
starting fac at: Wed Feb 1 16:59:11 2017
479001600
fac finished at: Wed Feb 1 16:59:13 2017
starting sum at: Wed Feb 1 16:59:13 2017
78
sum finished at: Wed Feb 1 16:59:14 2017
*** MULTI THREAD
starting fib at: Wed Feb 1 16:59:14 2017
starting fac at: Wed Feb 1 16:59:14 2017
starting sum at: Wed Feb 1 16:59:14 2017
facsum finished at: finished at: Wed Feb 1 16:59:15 2017Wed Feb 1 16:59:15 2017
fib finished at: Wed Feb 1 16:59:17 2017
144
479001600
78
all done
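Of the threading classes listed in section 4, only Thread is exercised above. Below is a minimal sketch (not part of the original notebook) of Event and Timer: a worker blocks on an Event, and a Timer sets that Event about one second later.

```python
# Minimal sketch of threading.Event and threading.Timer
import threading
from time import ctime

ev = threading.Event()

def waiter():
    print 'waiting for the event at: ', ctime()
    ev.wait()                            # block until another thread calls ev.set()
    print 'event received at: ', ctime()

t = threading.Thread(target=waiter)
t.start()
threading.Timer(1.0, ev.set).start()     # schedule ev.set() to run ~1 second from now
t.join()
```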
###Markdown
5 The Queue module Use a Queue to share data between threads.
Function | Description
--- | ---
Queue(size) | constructor
qsize() | returns the (approximate) size of the queue
empty() | returns whether the queue is empty
full() | returns whether the queue is full
put(item, block=0) | puts item into the queue; if block is non-zero, the call blocks until there is room in the queue
get(block=0) | gets an object from the queue; if block is non-zero, the call blocks until there is an object in the queue
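Before the full producer/consumer example in the next cell, here is a minimal sketch (not part of the original notebook) of the non-blocking behaviour described in the table above: with block set to false, put and get raise Full and Empty instead of waiting.

```python
# Minimal sketch of non-blocking Queue calls (Python 2 Queue module, as in this notebook)
from Queue import Queue, Empty, Full

q = Queue(2)              # a queue that holds at most 2 items
q.put('a')
q.put('b')
print 'full?', q.full()   # True
try:
    q.put('c', False)     # non-blocking put on a full queue
except Full:
    print 'queue is full, item not added'
print q.get(), q.get()
try:
    q.get(False)          # non-blocking get on an empty queue
except Empty:
    print 'queue is empty, nothing to get'
```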
###Code
from random import randint
from time import sleep
from Queue import Queue
import threading
class MyThread(threading.Thread):
def __init__(self, func, args, name =''):
threading.Thread.__init__(self)
self.name = name
self.func = func
self.args = args
def getResult(self):
return self.result
def run(self):
print 'starting', self.name, 'at: ', ctime()
self.result = apply(self.func, self.args)
print self.name, 'finished at: ', ctime()
def writeQ(queue):
    print 'producing object for Q...',queue.put('xxx', 1)
print 'size now', queue.qsize()
def readQ(queue):
val = queue.get(1)
print 'consumed object from Q...', queue.qsize()
def writer(queue, loops):
for i in range(loops):
writeQ(queue)
sleep(randint(1, 3))
def reader(queue, loops):
for i in range(loops):
readQ(queue)
sleep(randint(2, 5))
funcs = [writer, reader]
nfuncs = range(len(funcs))
def main():
nloops = randint(2,5)
q = Queue(32)
threads = []
for i in nfuncs:
t= MyThread(funcs[i], (q, nloops), funcs[i].__name__)
threads.append(t)
for i in nfuncs:
threads[i].start()
for i in nfuncs:
threads[i].join()
print 'all Done'
main()
import Tkinter
top = Tkinter.Tk()
hello = Tkinter.Label(top, text = 'hello world!')
hello.pack()
quit = Tkinter.Button(top, text='quit', command=top.quit, bg='red', fg='white')
quit.pack(fill = Tkinter.X, expand=1)
Tkinter.mainloop()
###Output
_____no_output_____ |
Lessons&CourseWorks/1. IntroToComputerVision/4.FeatureVector/2.HOG/1. HOG.ipynb | ###Markdown
<div style = "font-family:Georgia; font-size:2.5vw; color:lightblue; font-style:bold; text-align:center; background:url('./Animations/Title Background.gif') no-repeat center; background-size:cover)"> Histograms of Oriented Gradients (HOG) IntroductionAs we saw with the ORB algorithm, we can use keypoints in images to do keypoint-based matching to detect objects in images. These type of algorithms work great when you want to detect objects that have a lot of consistent internal features that are not affected by the background. For example, these algorithms work well for facial detection because faces have a lot of consistent internal features that don’t get affected by the image background, such as the eyes, nose, and mouth. However, these type of algorithms don’t work so well when attempting to do more general object recognition, say for example, pedestrian detection in images. The reason is that people don’t have consistent internal features, like faces do, because the body shape and style of every person is different (see Fig. 1). This means that every person is going to have a different set of internal features, and so we need something that can more generally describe a person. Fig. 1. - Pedestrians. One option is to try to detect pedestrians by their contours instead. Detecting objects in images by their contours (boundaries) is very challenging because we have to deal with the difficulties brought about by the contrast between the background and the foreground. For example, suppose you wanted to detect a pedestrian in an image that is walking in front of a white building and she is wearing a white coat and black pants (see Fig. 2). We can see in Fig. 2, that since the background of the image is mostly white, the black pants are going to have a very high contrast, but the coat, since it is white as well, is going to have very low contrast. In this case, detecting the edges of pants is going to be easy but detecting the edges of the coat is going to be very difficult. This is where **HOG** comes in. HOG stands for **Histograms of Oriented Gradients** and it was first introduced by Navneet Dalal and Bill Triggs in 2005. Fig. 2. - High and Low Contrast. The HOG algorithm works by creating histograms of the distribution of gradient orientations in an image and then normalizing them in a very special way. This special normalization is what makes HOG so effective at detecting the edges of objects even in cases where the contrast is very low. These normalized histograms are put together into a feature vector, known as the HOG descriptor, that can be used to train a machine learning algorithm, such as a Support Vector Machine (SVM), to detect objects in images based on their boundaries (edges). Due to its great success and reliability, HOG has become one of the most widely used algorithms in computer vison for object detection.In this notebook, you will learn:* How the HOG algorithm works* How to use OpenCV to create a HOG descriptor* How to visualize the HOG descriptor. The HOG AlgorithmAs its name suggests, the HOG algorithm, is based on creating histograms from the orientation of image gradients. The HOG algorithm is implemented in a series of steps:1. Given the image of particular object, set a detection window (region of interest) that covers the entire object in the image (see Fig. 3).2. Calculate the magnitude and direction of the gradient for each individual pixel in the detection window.3. 
Divide the detection window into connected *cells* of pixels, with all cells being of the same size (see Fig. 3). The size of the cells is a free parameter and it is usually chosen so as to match the scale of the features that want to be detected. For example, in a 64 x 128 pixel detection window, square cells 6 to 8 pixels wide are suitable for detecting human limbs.4. Create a Histogram for each cell, by first grouping the gradient directions of all pixels in each cell into a particular number of orientation (angular) bins; and then adding up the gradient magnitudes of the gradients in each angular bin (see Fig. 3). The number of bins in the histogram is a free parameter and it is usually set to 9 angular bins.5. Group adjacent cells into *blocks* (see Fig. 3). The number of cells in each block is a free parameter and all blocks must be of the same size. The distance between each block (known as the stride) is a free parameter but it is usually set to half the block size, in which case you will get overlapping blocks (*see video below*). The HOG algorithm has been shown empirically to work better with overlapping blocks.6. Use the cells contained within each block to normalize the cell histograms in that block (see Fig. 3). If you have overlapping blocks this means that most cells will be normalized with respect to different blocks (*see video below*). Therefore, the same cell may have several different normalizations.7. Collect all the normalized histograms from all the blocks into a single feature vector called the HOG descriptor.8. Use the resulting HOG descriptors from many images of the same type of object to train a machine learning algorithm, such as an SVM, to detect those type of objects in images. For example, you could use the HOG descriptors from many images of pedestrians to train an SVM to detect pedestrians in images. The training is done with both positive a negative examples of the object you want detect in the image.9. Once the SVM has been trained, a sliding window approach is used to try to detect and locate objects in images. Detecting an object in the image entails finding the part of the image that looks similar to the HOG pattern learned by the SVM. Fig. 3. - HOG Diagram. Vid. 1. - HOG Animation. Why The HOG Algorithm WorksAs we learned above, HOG creates histograms by adding the magnitude of the gradients in particular orientations in localized portions of the image called *cells*. By doing this we guarantee that stronger gradients will contribute more to the magnitude of their respective angular bin, while the effects of weak and randomly oriented gradients resulting from noise are minimized. In this manner the histograms tell us the dominant gradient orientation of each cell. Dealing with contrast Now, the magnitude of the dominant orientation can vary widely due to variations in local illumination and the contrast between the background and the foreground.To account for the background-foreground contrast differences, the HOG algorithm tries to detect edges locally. In order to do this, it defines groups of cells, called **blocks**, and normalizes the histograms using this local group of cells. By normalizing locally, the HOG algorithm can detect the edges in each block very reliably; this is called **block normalization**.In addition to using block normalization, the HOG algorithm also uses overlapping blocks to increase its performance. 
By using overlapping blocks, each cell contributes several independent components to the final HOG descriptor, where each component corresponds to a cell being normalized with respect to a different block. This may seem redundant but, it has been shown empirically that by normalizing each cell several times with respect to different local blocks, the performance of the HOG algorithm increases dramatically. Loading Images and Importing ResourcesThe first step in building our HOG descriptor is to load the required packages into Python and to load our image. We start by using OpenCV to load an image of a triangle tile. Since, the `cv2.imread()` function loads images as BGR we will convert our image to RGB so we can display it with the correct colors. As usual we will convert our BGR image to Gray Scale for analysis.
###Code
import cv2
import numpy as np
import matplotlib.pyplot as plt
# Set the default figure size
plt.rcParams['figure.figsize'] = [17.0, 7.0]
# Load the image
image = cv2.imread('./images/triangle_tile.jpeg')
# Convert the original image to RGB
original_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# Convert the original image to gray scale
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Print the shape of the original and gray scale images
print('The original image has shape: ', original_image.shape)
print('The gray scale image has shape: ', gray_image.shape)
# Display the images
plt.subplot(121)
plt.imshow(original_image)
plt.title('Original Image')
plt.subplot(122)
plt.imshow(gray_image, cmap='gray')
plt.title('Gray Scale Image')
plt.show()
###Output
The original image has shape: (250, 250, 3)
The gray scale image has shape: (250, 250)
###Markdown
Creating The HOG DescriptorWe will be using OpenCV’s `HOGDescriptor` class to create the HOG descriptor. The parameters of the HOG descriptor are setup using the `HOGDescriptor()` function. The parameters of the `HOGDescriptor()` function and their default values are given below:`cv2.HOGDescriptor(win_size = (64, 128), block_size = (16, 16), block_stride = (8, 8), cell_size = (8, 8), nbins = 9, win_sigma = DEFAULT_WIN_SIGMA, threshold_L2hys = 0.2, gamma_correction = true, nlevels = DEFAULT_NLEVELS)`Parameters:* **win_size** – *Size* Size of detection window in pixels (*width, height*). Defines the region of interest. Must be an integer multiple of cell size.* **block_size** – *Size* Block size in pixels (*width, height*). Defines how many cells are in each block. Must be an integer multiple of cell size and it must be smaller than the detection window. The smaller the block the finer detail you will get.* **block_stride** – *Size* Block stride in pixels (*horizontal, vertical*). It must be an integer multiple of cell size. The `block_stride` defines the distance between adjecent blocks, for example, 8 pixels horizontally and 8 pixels vertically. Longer `block_strides` makes the algorithm run faster (because less blocks are evaluated) but the algorithm may not perform as well.* **cell_size** – *Size* Cell size in pixels (*width, height*). Determines the size fo your cell. The smaller the cell the finer detail you will get.* **nbins** – *int* Number of bins for the histograms. Determines the number of angular bins used to make the histograms. With more bins you capture more gradient directions. HOG uses unsigned gradients, so the angular bins will have values between 0 and 180 degrees.* **win_sigma** – *double* Gaussian smoothing window parameter. The performance of the HOG algorithm can be improved by smoothing the pixels near the edges of the blocks by applying a Gaussian spatial window to each pixel before computing the histograms.* **threshold_L2hys** – *double* L2-Hys (Lowe-style clipped L2 norm) normalization method shrinkage. The L2-Hys method is used to normalize the blocks and it consists of an L2-norm followed by clipping and a renormalization. The clipping limits the maximum value of the descriptor vector for each block to have the value of the given threshold (0.2 by default). After the clipping the descriptor vector is renormalized as described in *IJCV*, 60(2):91-110, 2004.* **gamma_correction** – *bool* Flag to specify whether the gamma correction preprocessing is required or not. Performing gamma correction slightly increases the performance of the HOG algorithm.* **nlevels** – *int* Maximum number of detection window increases.As we can see, the `cv2.HOGDescriptor()`function supports a wide range of parameters. The first few arguments (`block_size, block_stride, cell_size`, and `nbins`) are probably the ones you are most likely to change. The other parameters can be safely left at their default values and you will get good results. In the code below, we will use the `cv2.HOGDescriptor()`function to set the cell size, block size, block stride, and the number of bins for the histograms of the HOG descriptor. We will then use `.compute(image)`method to compute the HOG descriptor (feature vector) for the given `image`.
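As a quick aside before using the OpenCV API, the sketch below (not part of the original notebook) hand-builds the 9-bin histogram of unsigned gradient orientations for a single 8 x 8 cell of the image loaded above, which is what step 4 of the algorithm accumulates. Each pixel's gradient magnitude is simply added to its angular bin; a full HOG implementation additionally interpolates each magnitude between neighbouring bins.

```python
# Minimal sketch: a single cell's histogram of unsigned gradient orientations
import cv2
import numpy as np

patch = gray_image[:8, :8].astype(np.float32)          # one 8 x 8 "cell"

gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0, ksize=1)       # horizontal gradient ([-1, 0, 1])
gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1, ksize=1)       # vertical gradient

magnitude, angle = cv2.cartToPolar(gx, gy, angleInDegrees=True)
angle = angle % 180                                    # unsigned gradients: fold 0-360 into 0-180

num_bins = 9
bin_width = 180.0 / num_bins
hist = np.zeros(num_bins)
for m, a in zip(magnitude.ravel(), angle.ravel()):
    hist[int(a // bin_width) % num_bins] += m          # add the magnitude to its angular bin

print('Cell histogram (9 bins):', hist)
```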
###Code
# Specify the parameters for our HOG descriptor
# Cell Size in pixels (width, height). Must be smaller than the size of the detection window
# and must be chosen so that the resulting Block Size is smaller than the detection window.
cell_size = (6, 6)
# Number of cells per block in each direction (x, y). Must be chosen so that the resulting
# Block Size is smaller than the detection window
num_cells_per_block = (2, 2)
# Block Size in pixels (width, height). Must be an integer multiple of Cell Size.
# The Block Size must be smaller than the detection window
block_size = (num_cells_per_block[0] * cell_size[0],
num_cells_per_block[1] * cell_size[1])
# Calculate the number of cells that fit in our image in the x and y directions
x_cells = gray_image.shape[1] // cell_size[0]
y_cells = gray_image.shape[0] // cell_size[1]
# Horizontal distance between blocks in units of Cell Size. Must be an integer and it must
# be set such that (x_cells - num_cells_per_block[0]) / h_stride = integer.
h_stride = 1
# Vertical distance between blocks in units of Cell Size. Must be an integer and it must
# be set such that (y_cells - num_cells_per_block[1]) / v_stride = integer.
v_stride = 1
# Block Stride in pixels (horizontal, vertical). Must be an integer multiple of Cell Size
block_stride = (cell_size[0] * h_stride, cell_size[1] * v_stride)
# Number of gradient orientation bins
num_bins = 9
# Specify the size of the detection window (Region of Interest) in pixels (width, height).
# It must be an integer multiple of Cell Size and it must cover the entire image. Because
# the detection window must be an integer multiple of cell size, depending on the size of
# your cells, the resulting detection window might be slightly smaller than the image.
# This is perfectly ok.
win_size = (x_cells * cell_size[0] , y_cells * cell_size[1])
# Print the shape of the gray scale image for reference
print('\nThe gray scale image has shape: ', gray_image.shape)
print()
# Print the parameters of our HOG descriptor
print('HOG Descriptor Parameters:\n')
print('Window Size:', win_size)
print('Cell Size:', cell_size)
print('Block Size:', block_size)
print('Block Stride:', block_stride)
print('Number of Bins:', num_bins)
print()
# Set the parameters of the HOG descriptor using the variables defined above
hog = cv2.HOGDescriptor(win_size, block_size, block_stride, cell_size, num_bins)
# Compute the HOG Descriptor for the gray scale image
hog_descriptor = hog.compute(gray_image)
###Output
The gray scale image has shape: (250, 250)
HOG Descriptor Parameters:
Window Size: (246, 246)
Cell Size: (6, 6)
Block Size: (12, 12)
Block Stride: (6, 6)
Number of Bins: 9
###Markdown
Number of Elements In The HOG DescriptorThe resulting HOG Descriptor (feature vector), contains the normalized histograms from all cells from all blocks in the detection window concatenated in one long vector. Therefore, the size of the HOG feature vector will be given by the total number of blocks in the detection window, multiplied by the number of cells per block, times the number of orientation bins:\begin{equation}\mbox{total_elements} = (\mbox{total_number_of_blocks})\mbox{ } \times \mbox{ } (\mbox{number_cells_per_block})\mbox{ } \times \mbox{ } (\mbox{number_of_bins})\end{equation}If we don’t have overlapping blocks (*i.e.* the `block_stride`equals the `block_size`), the total number of blocks can be easily calculated by dividing the size of the detection window by the block size. However, in the general case we have to take into account the fact that we have overlapping blocks. To find the total number of blocks in the general case (*i.e.* for any `block_stride` and `block_size`), we can use the formula given below:\begin{equation}\mbox{Total}_i = \left( \frac{\mbox{block_size}_i}{\mbox{block_stride}_i} \right)\left( \frac{\mbox{window_size}_i}{\mbox{block_size}_i} \right) - \left [\left( \frac{\mbox{block_size}_i}{\mbox{block_stride}_i} \right) - 1 \right]; \mbox{ for } i = x,y\end{equation}Where Total$_x$, is the total number of blocks along the width of the detection window, and Total$_y$, is the total number of blocks along the height of the detection window. This formula for Total$_x$ and Total$_y$, takes into account the extra blocks that result from overlapping. After calculating Total$_x$ and Total$_y$, we can get the total number of blocks in the detection window by multiplying Total$_x$ $\times$ Total$_y$. The above formula can be simplified considerably because the `block_size`, `block_stride`, and `window_size`are all defined in terms of the `cell_size`. By making all the appropriate substitutions and cancelations the above formula reduces to:\begin{equation}\mbox{Total}_i = \left(\frac{\mbox{cells}_i - \mbox{num_cells_per_block}_i}{N_i}\right) + 1\mbox{ }; \mbox{ for } i = x,y\end{equation}Where cells$_x$ is the total number of cells along the width of the detection window, and cells$_y$, is the total number of cells along the height of the detection window. And $N_x$ is the horizontal block stride in units of `cell_size` and $N_y$ is the vertical block stride in units of `cell_size`. Let's calculate what the number of elements for the HOG feature vector should be and check that it matches the shape of the HOG Descriptor calculated above.
###Code
# Calculate the total number of blocks along the width of the detection window
tot_bx = np.uint32(((x_cells - num_cells_per_block[0]) / h_stride) + 1)
# Calculate the total number of blocks along the height of the detection window
tot_by = np.uint32(((y_cells - num_cells_per_block[1]) / v_stride) + 1)
# Calculate the total number of elements in the feature vector
tot_els = (tot_bx) * (tot_by) * num_cells_per_block[0] * num_cells_per_block[1] * num_bins
# Print the total number of elements the HOG feature vector should have
print('\nThe total number of elements in the HOG Feature Vector should be: ',
tot_bx, 'x',
tot_by, 'x',
num_cells_per_block[0], 'x',
num_cells_per_block[1], 'x',
num_bins, '=',
tot_els)
# Print the shape of the HOG Descriptor to see that it matches the above
print('\nThe HOG Descriptor has shape:', hog_descriptor.shape)
print()
###Output
The total number of elements in the HOG Feature Vector should be: 40 x 40 x 2 x 2 x 9 = 57600
The HOG Descriptor has shape: (57600, 1)
###Markdown
Visualizing The HOG DescriptorWe can visualize the HOG Descriptor by plotting the histogram associated with each cell as a collection of vectors. To do this, we will plot each bin in the histogram as a single vector whose magnitude is given by the height of the bin and its orientation is given by the angular bin that its associated with. Since any given cell might have multiple histograms associated with it, due to the overlapping blocks, we will choose to average all the histograms for each cell to produce a single histogram for each cell.OpenCV has no easy way to visualize the HOG Descriptor, so we have to do some manipulation first in order to visualize it. We will start by reshaping the HOG Descriptor in order to make our calculations easier. We will then compute the average histogram of each cell and finally we will convert the histogram bins into vectors. Once we have the vectors, we plot the corresponding vectors for each cell in an image. The code below produces an interactive plot so that you can interact with the figure. The figure contains:* the grayscale image, * the HOG Descriptor (feature vector), * a zoomed-in portion of the HOG Descriptor, and * the histogram of the selected cell. **You can click anywhere on the gray scale image or the HOG Descriptor image to select a particular cell**. Once you click on either image a *magenta* rectangle will appear showing the cell you selected. The Zoom Window will show you a zoomed in version of the HOG descriptor around the selected cell; and the histogram plot will show you the corresponding histogram for the selected cell. The interactive window also has buttons at the bottom that allow for other functionality, such as panning, and giving you the option to save the figure if desired. The home button returns the figure to its default value.**NOTE**: If you are running this notebook in the Udacity workspace, there is around a 2 second lag in the interactive plot. This means that if you click in the image to zoom in, it will take about 2 seconds for the plot to refresh.
###Code
%matplotlib notebook
import copy
import matplotlib.patches as patches
# Set the default figure size
plt.rcParams['figure.figsize'] = [9.8, 9]
# Reshape the feature vector to [blocks_y, blocks_x, num_cells_per_block_x, num_cells_per_block_y, num_bins].
# The blocks_x and blocks_y will be transposed so that the first index (blocks_y) refers to the row number
# and the second index to the column number. This will be useful later when we plot the feature vector, so
# that the feature vector indexing matches the image indexing.
hog_descriptor_reshaped = hog_descriptor.reshape(tot_bx,
tot_by,
num_cells_per_block[0],
num_cells_per_block[1],
num_bins).transpose((1, 0, 2, 3, 4))
# Print the shape of the feature vector for reference
print('The feature vector has shape:', hog_descriptor.shape)
# Print the reshaped feature vector
print('The reshaped feature vector has shape:', hog_descriptor_reshaped.shape)
# Create an array that will hold the average gradients for each cell
ave_grad = np.zeros((y_cells, x_cells, num_bins))
# Print the shape of the ave_grad array for reference
print('The average gradient array has shape: ', ave_grad.shape)
# Create an array that will count the number of histograms per cell
hist_counter = np.zeros((y_cells, x_cells, 1))
# Add up all the histograms for each cell and count the number of histograms per cell
for i in range (num_cells_per_block[0]):
for j in range(num_cells_per_block[1]):
ave_grad[i:tot_by + i,
j:tot_bx + j] += hog_descriptor_reshaped[:, :, i, j, :]
hist_counter[i:tot_by + i,
j:tot_bx + j] += 1
# Calculate the average gradient for each cell
ave_grad /= hist_counter
# Calculate the total number of vectors we have in all the cells.
len_vecs = ave_grad.shape[0] * ave_grad.shape[1] * ave_grad.shape[2]
# Create an array of num_bins angles equally spaced between 0 and 180 degrees, expressed in radians.
deg = np.linspace(0, np.pi, num_bins, endpoint = False)
# Each cell will have a histogram with num_bins. For each cell, plot each bin as a vector (with its magnitude
# equal to the height of the bin in the histogram, and its angle corresponding to the bin in the histogram).
# To do this, create rank 1 arrays that will hold the (x,y)-coordinate of all the vectors in all the cells in the
# image. Also, create the rank 1 arrays that will hold all the (U,V)-components of all the vectors in all the
# cells in the image. Create the arrays that will hold all the vector positons and components.
U = np.zeros((len_vecs))
V = np.zeros((len_vecs))
X = np.zeros((len_vecs))
Y = np.zeros((len_vecs))
# Set the counter to zero
counter = 0
# Use the cosine and sine functions to calculate the vector components (U,V) from their magnitudes. Remember the
# cosine and sine functions take angles in radians. Calculate the vector positions and magnitudes from the
# average gradient array
for i in range(ave_grad.shape[0]):
for j in range(ave_grad.shape[1]):
for k in range(ave_grad.shape[2]):
U[counter] = ave_grad[i,j,k] * np.cos(deg[k])
V[counter] = ave_grad[i,j,k] * np.sin(deg[k])
X[counter] = (cell_size[0] / 2) + (cell_size[0] * i)
Y[counter] = (cell_size[1] / 2) + (cell_size[1] * j)
counter = counter + 1
# Create the bins in degrees to plot our histogram.
angle_axis = np.linspace(0, 180, num_bins, endpoint = False)
angle_axis += ((angle_axis[1] - angle_axis[0]) / 2)
# Create a figure with 4 subplots arranged in 2 x 2
fig, ((a,b),(c,d)) = plt.subplots(2,2)
# Set the title of each subplot
a.set(title = 'Gray Scale Image\n(Click to Zoom)')
b.set(title = 'HOG Descriptor\n(Click to Zoom)')
c.set(title = 'Zoom Window', xlim = (0, 18), ylim = (0, 18), autoscale_on = False)
d.set(title = 'Histogram of Gradients')
# Plot the gray scale image
a.imshow(gray_image, cmap = 'gray')
a.set_aspect(aspect = 1)
# Plot the feature vector (HOG Descriptor)
b.quiver(Y, X, U, V, color = 'white', headwidth = 0, headlength = 0, scale_units = 'inches', scale = 5)
b.invert_yaxis()
b.set_aspect(aspect = 1)
b.set_facecolor('black')
# Define function for interactive zoom
def onpress(event):
#Unless the left mouse button is pressed do nothing
if event.button != 1:
return
# Only accept clicks for subplots a and b
if event.inaxes in [a, b]:
# Get mouse click coordinates
x, y = event.xdata, event.ydata
# Select the cell closest to the mouse click coordinates
cell_num_x = np.uint32(x / cell_size[0])
cell_num_y = np.uint32(y / cell_size[1])
# Set the edge coordinates of the rectangle patch
edgex = x - (x % cell_size[0])
edgey = y - (y % cell_size[1])
    # Create a rectangle patch that matches the cell selected above
rect = patches.Rectangle((edgex, edgey),
cell_size[0], cell_size[1],
linewidth = 1,
edgecolor = 'magenta',
facecolor='none')
# A single patch can only be used in a single plot. Create copies
# of the patch to use in the other subplots
rect2 = copy.copy(rect)
rect3 = copy.copy(rect)
# Update all subplots
a.clear()
a.set(title = 'Gray Scale Image\n(Click to Zoom)')
a.imshow(gray_image, cmap = 'gray')
a.set_aspect(aspect = 1)
a.add_patch(rect)
b.clear()
b.set(title = 'HOG Descriptor\n(Click to Zoom)')
b.quiver(Y, X, U, V, color = 'white', headwidth = 0, headlength = 0, scale_units = 'inches', scale = 5)
b.invert_yaxis()
b.set_aspect(aspect = 1)
b.set_facecolor('black')
b.add_patch(rect2)
c.clear()
c.set(title = 'Zoom Window')
c.quiver(Y, X, U, V, color = 'white', headwidth = 0, headlength = 0, scale_units = 'inches', scale = 1)
c.set_xlim(edgex - cell_size[0], edgex + (2 * cell_size[0]))
c.set_ylim(edgey - cell_size[1], edgey + (2 * cell_size[1]))
c.invert_yaxis()
c.set_aspect(aspect = 1)
c.set_facecolor('black')
c.add_patch(rect3)
d.clear()
d.set(title = 'Histogram of Gradients')
d.grid()
d.set_xlim(0, 180)
d.set_xticks(angle_axis)
d.set_xlabel('Angle')
d.bar(angle_axis,
ave_grad[cell_num_y, cell_num_x, :],
180 // num_bins,
align = 'center',
alpha = 0.5,
linewidth = 1.2,
edgecolor = 'k')
fig.canvas.draw()
# Create a connection between the figure and the mouse click
fig.canvas.mpl_connect('button_press_event', onpress)
plt.show()
###Output
The feature vector has shape: (57600, 1)
The reshaped feature vector has shape: (40, 40, 2, 2, 9)
The average gradient array has shape: (41, 41, 9)
|
End-to-end Stegnography-Working.ipynb | ###Markdown
Paper Implementation END-TO-END TRAINED CNN ENCODER-DECODER NETWORKS FOR IMAGE STEGANOGRAPHY - Atique *et al.* Tensorflow 2.0 Notebook Author: Saad Zia
###Code
import numpy as np
import tensorflow as tf
import pickle
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
from IPython import display
%load_ext autoreload
%autoreload 2
import tensorflow as tf
# For process to not allocate entire GPU memory
physical_devices = tf.config.experimental.list_physical_devices('GPU')
assert len(physical_devices) > 0, "Not enough GPU hardware devices available"
tf.config.experimental.set_memory_growth(physical_devices[0], True)
###Output
_____no_output_____
###Markdown
Setting up Data Pipeline
###Code
(x, y), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x = x.astype(np.float32)
x_test = x_test.astype(np.float32)
# for when payload is grayscale
# payload_train = np.mean(x, axis=-1)[:5000, :, :, np.newaxis]
# for when payload is rgb
payload_train = x[:5000]
host_train = x[np.random.choice(np.arange(x.shape[0]), size=payload_train.shape[0])][:5000]
# for when payload is grayscale
payload_test = np.mean(x_test, axis=-1)[:500, :, :, np.newaxis]
# for when payload is rgb
payload_test = x_test[:500]
host_test = x_test[np.random.choice(np.arange(x_test.shape[0]), size=payload_test.shape[0])][:500]
# Instantiate the Dataset class
train_dataset = tf.data.Dataset.from_tensor_slices((payload_train, host_train))
# Normalization function
def normalize(payload, host):
payload = tf.image.per_image_standardization(payload)
host = tf.image.per_image_standardization(host)
return payload, host
# Adding shuffle, normalization and batching operations to the dataset object
train_dataset = train_dataset.map(normalize).shuffle(5000).batch(256, drop_remainder=True)
# Instantiate the test Dataset class
test_dataset = tf.data.Dataset.from_tensor_slices((payload_test, host_test))
test_dataset = (test_dataset.map(normalize).batch(128, drop_remainder=True)).shuffle(500)
###Output
_____no_output_____
###Markdown
Setting up tf.keras Model
###Code
from tensorflow.keras import Model
from tensorflow.keras.layers import Input
tf.keras.backend.set_floatx('float32')
tf.keras.backend.floatx()
from encoder import EncoderNetwork
from decoder import DecoderNetwork
carrier_image_shape=(32, 32, 3)
payload_image_shape=(32, 32, 3)
encoder_network = EncoderNetwork(carrier_shape=carrier_image_shape, payload_shape=payload_image_shape)
decoder_network = DecoderNetwork(target_image_shape=payload_image_shape)
input_carrier = Input(shape=carrier_image_shape, name='input_carrier')
input_payload = Input(shape=payload_image_shape, name='input_payload')
encoded_output = encoder_network.get_network(input_carrier, input_payload)
decoded_output = decoder_network.get_network(encoded_output)
steganography_model = Model(inputs=[input_carrier, input_payload], outputs=[encoded_output, decoded_output])
from tensorflow.keras.utils import plot_model
plot_model(steganography_model, show_shapes=True)
steganography_model.summary()
# Defining Loss Function
@tf.function
def loss_function(payload, host, encoder_output, decoder_output):
loss = tf.math.reduce_mean(tf.math.squared_difference(payload, decoder_output)\
+ tf.math.squared_difference(host, encoder_output))
return loss
def custom_loss(input_):
def loss(y_true, y_pred):
return tf.math.reduce_mean(tf.math.squared_difference(y_true, y_pred))
return loss
optimizer = tf.keras.optimizers.Adam(0.0001)
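# Note (added comment): the next few lines pull one large group of host batches out of
# train_dataset for quick inspection; train_dataset is already batched, so .batch(5000)
# here groups whole batches, and the variable `a` does not appear to be used again below.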
a = None
for payload, host in train_dataset.batch(5000):
a = host
break
test_loss = tf.keras.metrics.Mean(name='test_loss')
train_loss = tf.keras.metrics.Mean(name='train_loss')
@tf.function
def train_step(payload, host):
with tf.GradientTape() as tape:
encoded_host, decoded_payload = steganography_model([host, payload])
loss = loss_function(payload, host, encoded_host, decoded_payload)
train_loss(loss)
gradients = tape.gradient(loss, steganography_model.trainable_variables)
optimizer.apply_gradients(zip(gradients, steganography_model.trainable_variables))
train_host_psnr = tf.reduce_mean(tf.image.psnr(host, encoded_host, 18))
train_payload_psnr = tf.reduce_mean(tf.image.psnr(payload, decoded_payload, 18))
return train_host_psnr, train_payload_psnr
@tf.function
def test_step(payload, host):
encoded_host, decoded_payload = steganography_model([host, payload])
t_loss = loss_function(payload, host, encoded_host, decoded_payload)
test_loss(t_loss)
test_host_psnr = tf.reduce_mean(tf.image.psnr(host, encoded_host, 18))
test_payload_psnr = tf.reduce_mean(tf.image.psnr(payload, decoded_payload, 18))
return test_host_psnr, test_payload_psnr
EPOCHS = 250
SUMMARY_DIR = './summary'
import time
for epoch in range(EPOCHS):
start = time.time()
for payload, host in train_dataset:
train_host_psnr, train_payload_psnr = train_step(payload, host)
for payload, host in test_dataset:
test_host_psnr, test_payload_psnr = test_step(payload, host)
elapsed = time.time() - start
print('elapsed: %f' % elapsed)
template = 'Epoch {}, Train Loss: {}, Test Loss: {}, TrainH PSNR: {}, TrainP PSNR: {}, TestH PSNR: {}, TestP PSNR: {}'
print(template.format(epoch+1, train_loss.result(), test_loss.result(), train_host_psnr,\
train_payload_psnr, test_host_psnr, test_payload_psnr))
# Reset the metrics for the next epoch
test_loss.reset_states()
print('Training Finished.')
###Output
elapsed: 2.904781
Epoch 1, Train Loss: 0.5700900554656982, Test Loss: 0.5686190128326416, TrainH PSNR: 42.55171203613281, TrainP PSNR: 27.74254035949707, TestH PSNR: 43.034645080566406, TestP PSNR: 27.80188751220703
elapsed: 2.047589
Epoch 2, Train Loss: 0.5700501799583435, Test Loss: 0.568868100643158, TrainH PSNR: 42.33446502685547, TrainP PSNR: 27.764963150024414, TestH PSNR: 42.30915451049805, TestP PSNR: 27.78251075744629
elapsed: 2.076320
Epoch 3, Train Loss: 0.5700225830078125, Test Loss: 0.5694487690925598, TrainH PSNR: 42.24988555908203, TrainP PSNR: 27.692031860351562, TestH PSNR: 42.394630432128906, TestP PSNR: 27.775386810302734
elapsed: 2.055980
Epoch 4, Train Loss: 0.5697759389877319, Test Loss: 0.567550003528595, TrainH PSNR: 42.33718490600586, TrainP PSNR: 27.789186477661133, TestH PSNR: 42.5048942565918, TestP PSNR: 27.84157943725586
elapsed: 2.006746
Epoch 5, Train Loss: 0.5695685744285583, Test Loss: 0.5706283450126648, TrainH PSNR: 42.53901290893555, TrainP PSNR: 27.865520477294922, TestH PSNR: 42.41766357421875, TestP PSNR: 27.819034576416016
elapsed: 2.016329
Epoch 6, Train Loss: 0.569511353969574, Test Loss: 0.5670080780982971, TrainH PSNR: 42.509727478027344, TrainP PSNR: 27.795785903930664, TestH PSNR: 43.078426361083984, TestP PSNR: 27.815031051635742
elapsed: 1.991798
Epoch 7, Train Loss: 0.5693581104278564, Test Loss: 0.5670785307884216, TrainH PSNR: 42.46376037597656, TrainP PSNR: 27.83668327331543, TestH PSNR: 43.078941345214844, TestP PSNR: 27.814918518066406
elapsed: 2.046209
Epoch 8, Train Loss: 0.5691951513290405, Test Loss: 0.56670743227005, TrainH PSNR: 42.37964630126953, TrainP PSNR: 27.835439682006836, TestH PSNR: 43.011505126953125, TestP PSNR: 27.819629669189453
elapsed: 2.115230
Epoch 9, Train Loss: 0.5690739750862122, Test Loss: 0.5667250156402588, TrainH PSNR: 42.4371452331543, TrainP PSNR: 27.84261131286621, TestH PSNR: 42.446502685546875, TestP PSNR: 27.850296020507812
elapsed: 2.111316
Epoch 10, Train Loss: 0.5689347982406616, Test Loss: 0.5660011768341064, TrainH PSNR: 42.525577545166016, TrainP PSNR: 27.807525634765625, TestH PSNR: 42.50043487548828, TestP PSNR: 27.801349639892578
elapsed: 2.085870
Epoch 11, Train Loss: 0.568825364112854, Test Loss: 0.5670145750045776, TrainH PSNR: 42.70795822143555, TrainP PSNR: 27.886693954467773, TestH PSNR: 42.48849105834961, TestP PSNR: 27.84592628479004
elapsed: 2.042556
Epoch 12, Train Loss: 0.568669855594635, Test Loss: 0.5658660531044006, TrainH PSNR: 42.65060043334961, TrainP PSNR: 27.846529006958008, TestH PSNR: 43.040672302246094, TestP PSNR: 27.826095581054688
elapsed: 2.038682
Epoch 13, Train Loss: 0.5685498118400574, Test Loss: 0.5653240084648132, TrainH PSNR: 42.571044921875, TrainP PSNR: 27.777149200439453, TestH PSNR: 43.130767822265625, TestP PSNR: 27.82933235168457
elapsed: 2.061020
Epoch 14, Train Loss: 0.5683814287185669, Test Loss: 0.5650483965873718, TrainH PSNR: 42.58977127075195, TrainP PSNR: 27.823362350463867, TestH PSNR: 42.54198455810547, TestP PSNR: 27.860301971435547
elapsed: 2.018467
Epoch 15, Train Loss: 0.5682369470596313, Test Loss: 0.564622163772583, TrainH PSNR: 42.5848503112793, TrainP PSNR: 27.895578384399414, TestH PSNR: 43.139060974121094, TestP PSNR: 27.835142135620117
elapsed: 2.035024
Epoch 16, Train Loss: 0.568117618560791, Test Loss: 0.5649442672729492, TrainH PSNR: 42.629913330078125, TrainP PSNR: 27.83990478515625, TestH PSNR: 42.38909912109375, TestP PSNR: 27.865022659301758
elapsed: 2.067375
Epoch 17, Train Loss: 0.5679454207420349, Test Loss: 0.5646426677703857, TrainH PSNR: 42.58247375488281, TrainP PSNR: 27.887094497680664, TestH PSNR: 42.50730514526367, TestP PSNR: 27.814655303955078
elapsed: 2.053176
Epoch 18, Train Loss: 0.5677905678749084, Test Loss: 0.5642651915550232, TrainH PSNR: 42.48470687866211, TrainP PSNR: 27.817481994628906, TestH PSNR: 42.57563018798828, TestP PSNR: 27.86417579650879
elapsed: 2.035991
Epoch 19, Train Loss: 0.567659854888916, Test Loss: 0.5641142725944519, TrainH PSNR: 42.5151481628418, TrainP PSNR: 27.84213638305664, TestH PSNR: 42.51837921142578, TestP PSNR: 27.868379592895508
elapsed: 2.056058
Epoch 20, Train Loss: 0.5675429701805115, Test Loss: 0.5665355324745178, TrainH PSNR: 42.66448211669922, TrainP PSNR: 27.817665100097656, TestH PSNR: 42.39888000488281, TestP PSNR: 27.8070125579834
elapsed: 2.028533
Epoch 21, Train Loss: 0.5677026510238647, Test Loss: 0.5779115557670593, TrainH PSNR: 42.0799560546875, TrainP PSNR: 27.84222412109375, TestH PSNR: 41.964317321777344, TestP PSNR: 27.774124145507812
elapsed: 2.054151
Epoch 22, Train Loss: 0.5677467584609985, Test Loss: 0.5647170543670654, TrainH PSNR: 42.58464431762695, TrainP PSNR: 27.781890869140625, TestH PSNR: 43.008270263671875, TestP PSNR: 27.839149475097656
elapsed: 2.080784
Epoch 23, Train Loss: 0.5676438808441162, Test Loss: 0.5636320114135742, TrainH PSNR: 42.797096252441406, TrainP PSNR: 27.76034164428711, TestH PSNR: 42.467864990234375, TestP PSNR: 27.823007583618164
elapsed: 2.053850
Epoch 24, Train Loss: 0.5674973726272583, Test Loss: 0.5629985332489014, TrainH PSNR: 42.49690628051758, TrainP PSNR: 27.82455825805664, TestH PSNR: 42.55951690673828, TestP PSNR: 27.87533950805664
elapsed: 2.044751
Epoch 25, Train Loss: 0.5673487782478333, Test Loss: 0.5627654194831848, TrainH PSNR: 42.513526916503906, TrainP PSNR: 27.93730354309082, TestH PSNR: 42.583282470703125, TestP PSNR: 27.876657485961914
elapsed: 2.115146
Epoch 26, Train Loss: 0.5672092437744141, Test Loss: 0.5627568364143372, TrainH PSNR: 42.55811309814453, TrainP PSNR: 27.969768524169922, TestH PSNR: 42.562015533447266, TestP PSNR: 27.828285217285156
elapsed: 2.073499
Epoch 27, Train Loss: 0.5670627951622009, Test Loss: 0.5624329447746277, TrainH PSNR: 42.555458068847656, TrainP PSNR: 27.812793731689453, TestH PSNR: 42.60110092163086, TestP PSNR: 27.878334045410156
elapsed: 2.086973
Epoch 28, Train Loss: 0.5669166445732117, Test Loss: 0.5621957182884216, TrainH PSNR: 42.80897903442383, TrainP PSNR: 27.81426239013672, TestH PSNR: 42.578216552734375, TestP PSNR: 27.833648681640625
elapsed: 2.080438
Epoch 29, Train Loss: 0.5667811632156372, Test Loss: 0.5619715452194214, TrainH PSNR: 42.4637336730957, TrainP PSNR: 27.887500762939453, TestH PSNR: 43.23027038574219, TestP PSNR: 27.85623550415039
elapsed: 2.119122
Epoch 30, Train Loss: 0.566641628742218, Test Loss: 0.5618396401405334, TrainH PSNR: 42.559844970703125, TrainP PSNR: 27.892126083374023, TestH PSNR: 42.590946197509766, TestP PSNR: 27.83584213256836
elapsed: 2.085240
Epoch 31, Train Loss: 0.5665139555931091, Test Loss: 0.5616241097450256, TrainH PSNR: 42.30091857910156, TrainP PSNR: 27.790149688720703, TestH PSNR: 43.25395202636719, TestP PSNR: 27.858673095703125
elapsed: 2.032907
Epoch 32, Train Loss: 0.5663843154907227, Test Loss: 0.5615147948265076, TrainH PSNR: 42.268898010253906, TrainP PSNR: 27.89167594909668, TestH PSNR: 42.61086654663086, TestP PSNR: 27.83905029296875
elapsed: 2.075691
Epoch 33, Train Loss: 0.5662670135498047, Test Loss: 0.5615694522857666, TrainH PSNR: 42.377113342285156, TrainP PSNR: 27.903911590576172, TestH PSNR: 43.22393798828125, TestP PSNR: 27.860881805419922
elapsed: 2.101246
Epoch 34, Train Loss: 0.5661524534225464, Test Loss: 0.5613484382629395, TrainH PSNR: 42.62338638305664, TrainP PSNR: 27.88452911376953, TestH PSNR: 43.21470642089844, TestP PSNR: 27.86187744140625
elapsed: 2.091501
Epoch 35, Train Loss: 0.5660479664802551, Test Loss: 0.561028778553009, TrainH PSNR: 42.66416931152344, TrainP PSNR: 27.977018356323242, TestH PSNR: 42.61514663696289, TestP PSNR: 27.843019485473633
elapsed: 2.031250
Epoch 36, Train Loss: 0.5659225583076477, Test Loss: 0.5608827471733093, TrainH PSNR: 42.690948486328125, TrainP PSNR: 27.831554412841797, TestH PSNR: 43.28003692626953, TestP PSNR: 27.865062713623047
elapsed: 2.064609
Epoch 37, Train Loss: 0.5658117532730103, Test Loss: 0.5609762072563171, TrainH PSNR: 42.2972526550293, TrainP PSNR: 27.87787437438965, TestH PSNR: 43.18972396850586, TestP PSNR: 27.867341995239258
elapsed: 2.053963
Epoch 38, Train Loss: 0.5657029151916504, Test Loss: 0.5606653690338135, TrainH PSNR: 42.90315628051758, TrainP PSNR: 27.90570831298828, TestH PSNR: 42.66483688354492, TestP PSNR: 27.890947341918945
###Markdown
Inference
###Code
example_ids = np.arange(len(host_test))[:100]
example_id = np.random.choice(example_ids)
# show the original host and payload images
fig, axs = plt.subplots(ncols=2)
host_example = host_test.astype(int)[example_id]
payload_example = payload_test.astype(int)[example_id]
# payload_example = np.concatenate((payload_example, np.zeros_like(payload_example), np.zeros_like(payload_example)), axis=-1)
axs[0].imshow(host_example)
axs[1].imshow(payload_example)
# show the encoded host and the decoded payload
fig, axs = plt.subplots(ncols=2)
inference_dataset = tf.data.Dataset.from_tensor_slices((host_test[example_ids], payload_test[example_ids])).batch(len(example_ids))
for host, payload in inference_dataset:
encoded_host, decoded_payload = steganography_model([host, payload])
host_outputs = encoded_host.numpy()
payload_output = decoded_payload.numpy()
host_output = host_outputs.astype(int)[example_id]
payload_output = payload_output.astype(int)[example_id]
# payload_output = np.concatenate((payload_output, np.zeros_like(payload_output), np.zeros_like(payload_output)), axis=-1)
axs[0].imshow(host_output)
axs[1].imshow(payload_output)
###Output
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
|
data_cleaning_and_analysis/exploratory_data_analysis.ipynb | ###Markdown
Exploratory Data Analysis
###Code
# import libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px
from plotly.subplots import make_subplots
import plotly.graph_objects as go
import plotly.figure_factory as ff
from path import Path
#load data
data = Path('../Resources/cleaned_stroke_dataset.csv')
stroke_df = pd.read_csv(data)
stroke_df.head()
stroke_df['gender'].value_counts()
stroke_df['hypertension'].value_counts()
stroke_df['smoking_status'].value_counts()
stroke_df['heart_disease'].value_counts()
# plotting the output (stroke) column in a pie chart
sizes = stroke_df['stroke'].value_counts(sort = True)
plt.figure(figsize=(7,7),facecolor='white')
plt.pie(sizes, labels=['No stroke','Stroke'], autopct='%1.2f%%', explode=[0, 0.2],textprops={'fontsize': 13})
plt.title("Stroke Outcome Pie Chart", fontdict={'fontsize': 20,'weight':'bold'})
plt.text(-1,-1.2, "Note: The data is highly imbalanced ", {'font':'Arial', 'size':14, 'color':'black', 'weight':'bold'})
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Review attributes: age, gender, marital status, type of work and residence, hypertension, heart disease, average glucose level measured after a meal, Body Mass Index (BMI), and smoking status. Gender attribute
###Code
# gender count
sns.countplot(data=stroke_df, x='gender')
print(stroke_df['gender'].value_counts())
# view the distribution between stroke and gender
len_data = len(stroke_df)
# counts per gender group
len_m = len(stroke_df[stroke_df["gender"]=="Male"])
len_w = len(stroke_df[stroke_df["gender"]=="Female"])
len_o = len_data - (len_w + len_m)
men_stroke = len(stroke_df.loc[(stroke_df["stroke"]==1)&(stroke_df['gender']=="Male")])
men_no_stroke = len_m - men_stroke
women_stroke = len(stroke_df.loc[(stroke_df["stroke"]==1) & (stroke_df['gender']=="Female")])
women_no_stroke = len_w - women_stroke
other_stroke = len(stroke_df.loc[(stroke_df["stroke"]==1) & (stroke_df['gender']=="Other")])
other_no_stroke = len_o - other_stroke
labels = ['Men with stroke','Men healthy','Women with stroke','Women healthy']
values = [men_stroke, men_no_stroke, women_stroke, women_no_stroke]
fig = go.Figure(data=[go.Pie(labels=labels, values=values,textinfo='label+percent',hole=0.4)])
fig.update_layout(
title_text="Distribution of stroke event according to their gender")
fig.show()
###Output
_____no_output_____
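###Markdown
The same breakdown can be read more directly as a stroke rate per gender; a short sketch, reusing the column names above:
###Code
# Share of stroke==1 within each gender, as a cross-check of the pie chart above.
gender_stroke_rate = stroke_df.groupby('gender')['stroke'].mean().sort_values(ascending=False)
print(gender_stroke_rate)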
###Markdown
Marriage attribute
###Code
# marriage and stroke
sns.catplot(y="ever_married", hue="stroke", kind="count",
palette="bright", edgecolor=".6",
data=stroke_df)
plt.figure(figsize = (30,10), dpi = 60)
plt.subplot(1,3,(1,1))
stroke_married = len(stroke_df.loc[(stroke_df["stroke"]==1)&(stroke_df['ever_married']=="Yes")])
stroke_not_married = len(stroke_df.loc[(stroke_df["stroke"]==1)&(stroke_df['ever_married']=="No")])
healthy_married = len(stroke_df.loc[(stroke_df["stroke"]==0)&(stroke_df['ever_married']=="Yes")])
healthy_not_married = len(stroke_df.loc[(stroke_df["stroke"]==0)&(stroke_df['ever_married']=="No")])
temp=pd.Series([healthy_married,healthy_not_married,stroke_not_married,stroke_married],
index=['Married and Healthy ','Not married and healthy','Not Married but has stroke','Married and has stroke'])
plt.pie(temp,labels=['Married and Healthy ','Not married and healthy','Not Married but has stroke','Married and has stroke'], explode=[0.1, 0.1,0.2,0.1], autopct='%1.2f%%',textprops={'fontsize': 17})
plt.title("Marriage distribution: Pie Chart", fontdict={'fontsize': 25,'weight':'bold'})
plt.legend( bbox_to_anchor=(0.75, 0.5, 0.5, 0.5))
###Output
_____no_output_____
###Markdown
Work type attribute
###Code
# Compare work_type with stroke
sns.catplot(y="work_type", hue="stroke", kind="count",
palette="bright", edgecolor=".6",
data=stroke_df)
print(stroke_df['work_type'].value_counts())
###Output
Private 24834
Self-employed 6793
children 6156
Govt_job 5440
Never_worked 177
Name: work_type, dtype: int64
###Markdown
Residence type attribute
###Code
plt.figure(figsize = (30,10), dpi = 60)
plt.subplot(1,3,(1,2))
stroke_urban = len(stroke_df.loc[(stroke_df["stroke"]==1)&(stroke_df['Residence_type']=="Urban")])
stroke_rural = len(stroke_df.loc[(stroke_df["stroke"]==1)&(stroke_df['Residence_type']=="Rural")])
healthy_urban = len(stroke_df.loc[(stroke_df["stroke"]==0)&(stroke_df['Residence_type']=="Urban")])
healthy_rural = len(stroke_df.loc[(stroke_df["stroke"]==0)&(stroke_df['Residence_type']=="Rural")])
temp=pd.Series([healthy_rural,healthy_urban,stroke_rural,stroke_urban],
               index=['Rural and healthy','Urban and healthy','Rural and has stroke','Urban and has stroke'])
plt.pie(temp,labels=['Rural and healthy','Urban and healthy','Rural and has stroke','Urban and has stroke']
,explode=[0.0, 0.0,0.2,0.2], autopct='%1.2f%%',textprops={'fontsize': 15},startangle=30)
plt.title("Residence Type distribution: Pie Chart", fontdict={'fontsize': 25,'weight':'bold'})
plt.legend( bbox_to_anchor=(0.7, 0.5, 0.5, 0.5))
###Output
_____no_output_____
###Markdown
Hypertension attribute
###Code
plt.figure(figsize = (30,10), dpi = 60)
plt.subplot(1,3,(1,2))
stroke_hypertension = len(stroke_df.loc[(stroke_df["stroke"]==1)&(stroke_df['hypertension']==1)])
stroke_no_hypertension = len(stroke_df.loc[(stroke_df["stroke"]==1)&(stroke_df['hypertension']==0)])
healthy_hypertension = len(stroke_df.loc[(stroke_df["stroke"]==0)&(stroke_df['hypertension']==1)])
healthy_no_hypertension = len(stroke_df.loc[(stroke_df["stroke"]==0)&(stroke_df['hypertension']==0)])
temp=pd.Series([stroke_hypertension,stroke_no_hypertension,healthy_hypertension,healthy_no_hypertension],
index=['Hypertension and stroke ','No hypertension and stroke','Hypertension and healthy','No hypertension and healthy'])
plt.pie(temp,labels=['Hypertension and stroke ','No hypertension and stroke','Hypertension and healthy','No hypertension and healthy']
,explode=[0.1,0.2,0.2,0.2], autopct='%1.2f%%',textprops={'fontsize': 12},startangle=-60)
plt.title("Hypertension Pie Chart", fontdict={'fontsize': 20,'weight':'bold'})
plt.legend( bbox_to_anchor=(0.75, 0.5, 0.5, 0.5))
###Output
_____no_output_____
###Markdown
Bi/Multivariate
###Code
# charts on hypertension, heart disease, and marital status
fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(15, 5))
sns.countplot(data=stroke_df, x='hypertension', hue='stroke', ax=ax[0]);
sns.countplot(data=stroke_df, x='heart_disease', hue='stroke', ax=ax[1]);
sns.countplot(data=stroke_df, x='ever_married', hue='stroke', ax=ax[2]);
# work type and residence type
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(17, 6))
sns.countplot(data=stroke_df, x='work_type', hue='stroke', ax=ax[0]);
sns.countplot(data=stroke_df, x='Residence_type', hue='stroke', ax=ax[1]);
# hypertension and heart disease broken down by work type
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(19, 5))
sns.countplot(data=stroke_df, x='hypertension', hue='work_type', ax=ax[0]);
sns.countplot(data=stroke_df, x='heart_disease', hue='work_type', ax=ax[1]);
###Output
_____no_output_____
###Markdown
A significant number of people who have heart disease or hypertension work in the private sector.
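This partly reflects the fact that the private sector is by far the largest work-type group in the data. A row-normalized crosstab (a sketch, reusing the same columns) shows the rate of each condition within each work type instead of raw counts:
###Code
# Share of hypertension and heart disease within each work type,
# controlling for the very different group sizes.
print(pd.crosstab(stroke_df['work_type'], stroke_df['hypertension'], normalize='index'))
print(pd.crosstab(stroke_df['work_type'], stroke_df['heart_disease'], normalize='index'))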
###Code
# examine bmi and average glucose level against stroke outcome
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(15, 5))
sns.countplot(data=stroke_df, x='bmi', hue='stroke', ax=ax[0]);
sns.countplot(data=stroke_df, x='avg_glucose_level', hue='stroke', ax=ax[1]);
###Output
_____no_output_____
###Markdown
Stacked bar charts
###Code
# divide data into bins
stroke_df['age_binned'] = pd.cut(stroke_df['age'], np.arange(0, 91, 5))
stroke_df['bmi_binned'] = pd.cut(stroke_df['bmi'], np.arange(0, 101, 5))
stroke_df['avg_glucose_level_binned'] = pd.cut(stroke_df['avg_glucose_level'], np.arange(0, 301, 10))
def get_stacked_bar_chart(column):
# Get the count of records by column and stroke
stroke_stack = stroke_df.groupby([column, 'stroke'])['age'].count()
# Create proper DataFrame's format
stroke_stack = stroke_stack.unstack()
return stroke_stack.plot.bar(stacked=True, figsize=(6,6), width=1);
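# Sketch: a row-normalized variant of the chart above can be easier to read
# given the class imbalance: it shows the share of stroke cases within each
# bin instead of raw counts (reuses the binned columns created above).
def get_normalized_stacked_bar_chart(column):
    counts = stroke_df.groupby([column, 'stroke'])['age'].count().unstack()
    shares = counts.div(counts.sum(axis=1), axis=0)
    return shares.plot.bar(stacked=True, figsize=(6,6), width=1)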
#age and stroke
get_stacked_bar_chart('age_binned')
# bmi and stroke
get_stacked_bar_chart('bmi_binned')
#stroke and average glucose level
get_stacked_bar_chart('avg_glucose_level_binned')
###Output
_____no_output_____ |
modelling/stroke_model.ipynb | ###Markdown
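###Markdown
The cells below rely on a `data` frame plus seaborn, scikit-learn, and imbalanced-learn objects that are not set up anywhere in this excerpt. A minimal setup cell along the following lines is assumed; the CSV path is a placeholder, not taken from the original notebook:
###Code
# Assumed setup: imports and data loading reconstructed from what the later
# cells use. The file name below is a placeholder.
import pickle

import pandas as pd
import seaborn as sns
from sklearn.base import BaseEstimator
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, recall_score
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, PolynomialFeatures
from sklearn.svm import SVC
from imblearn.ensemble import BalancedRandomForestClassifier
from imblearn.over_sampling import RandomOverSampler, SMOTE
from imblearn.under_sampling import NearMiss, RandomUnderSampler

data = pd.read_csv('stroke_dataset.csv')  # placeholder path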
EDA
###Code
data.head()
sns.countplot(x = data['stroke'])
sns.pairplot(data)
sns.heatmap(data.corr())
data.describe()
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5110 entries, 0 to 5109
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 gender 5110 non-null object
1 age 5110 non-null float64
2 hypertension 5110 non-null int64
3 heart_disease 5110 non-null int64
4 ever_married 5110 non-null object
5 work_type 5110 non-null object
6 Residence_type 5110 non-null object
7 avg_glucose_level 5110 non-null float64
8 bmi 4909 non-null float64
9 smoking_status 5110 non-null object
10 stroke 5110 non-null int64
dtypes: float64(3), int64(3), object(5)
memory usage: 439.3+ KB
###Markdown
Some BMI values are missing. Feature Engineering
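A quick check of how much BMI data is actually missing (a sketch, using the frame loaded above):
###Code
# Count and share of missing BMI values; the info() output above shows
# 4909 of 5110 entries are non-null.
missing_bmi = data['bmi'].isna().sum()
print(missing_bmi, f"missing ({missing_bmi / len(data):.1%} of rows)")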
###Code
X = data.drop("stroke", axis="columns")
y = data["stroke"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2, stratify=y, random_state=123)
def evaluate_model(model: BaseEstimator, X_test = X_test, y_test = y_test, transform = None):
    # Apply the optional preprocessing transform once, then score the predictions.
    X_eval = X_test if transform is None else transform.transform(X_test)
    y_pred = model.predict(X_eval)
    print("Accuracy: " + str(accuracy_score(y_test, y_pred)))
    print("F1 score: " + str(f1_score(y_test, y_pred)))
    print("Recall score: " + str(recall_score(y_test, y_pred)))
column_transform = ColumnTransformer([
('imputer', SimpleImputer(strategy = "most_frequent", add_indicator=True), ['bmi']),
('OHEncoder', OneHotEncoder(handle_unknown='ignore'), ['gender', 'ever_married', 'work_type', 'Residence_type', 'smoking_status'])
],
remainder='passthrough')
column_transform.fit(X_train)
poly_f = PolynomialFeatures()
poly_f.fit(
column_transform.transform(X_train))
###Output
_____no_output_____
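###Markdown
The ColumnTransformer plus PolynomialFeatures combination expands the feature space considerably; a quick way to see the resulting widths, reusing the fitted objects above:
###Code
# Inspect how many columns come out of the encoding and the polynomial expansion.
encoded = column_transform.transform(X_train)
expanded = poly_f.transform(encoded)
print("after ColumnTransformer:", encoded.shape)
print("after PolynomialFeatures:", expanded.shape)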
###Markdown
First models. Logistic Regression
###Code
lin_reg = make_pipeline(
column_transform,
poly_f,
LogisticRegression(max_iter=10000, random_state=123))
lin_reg.fit(X_train, y_train)
evaluate_model(lin_reg)
###Output
Accuracy: 0.9510763209393346
F1 score: 0.038461538461538464
Recall score: 0.02
###Markdown
Logistic Regression with class weights
###Code
lin_reg = make_pipeline(
column_transform,
poly_f,
LogisticRegression(max_iter=10000, random_state=123, class_weight='balanced'))
lin_reg.fit(X_train, y_train)
evaluate_model(lin_reg)
###Output
Accuracy: 0.7622309197651663
F1 score: 0.23343848580441642
Recall score: 0.74
###Markdown
Random Forest
###Code
rf_pipe = make_pipeline(
column_transform,
poly_f,
RandomForestClassifier(random_state=123))
rf_pipe.fit(X_train, y_train)
evaluate_model(rf_pipe)
###Output
Accuracy: 0.9510763209393346
F1 score: 0.038461538461538464
Recall score: 0.02
###Markdown
Under sampling (reference: https://www.analyticsvidhya.com/blog/2020/07/10-techniques-to-deal-with-class-imbalance-in-machine-learning/)
###Code
rus = RandomUnderSampler(random_state=42, replacement=True)
# resample the predictors and the target variable
X_rus, y_rus = rus.fit_resample(X_train, y_train)
lin_reg_rus = make_pipeline(
column_transform,
LogisticRegression(max_iter=1000, random_state=123))
lin_reg_rus.fit(X_rus, y_rus)
evaluate_model(lin_reg_rus)
rf_pipe_rus = make_pipeline(
column_transform,
RandomForestClassifier(random_state=123))
rf_pipe_rus.fit(X_rus, y_rus)
evaluate_model(rf_pipe_rus)
###Output
Accuracy: 0.7113502935420744
F1 score: 0.20485175202156336
Recall score: 0.76
###Markdown
Over sampling
###Code
ros = RandomOverSampler(random_state=42, sampling_strategy=0.5)
X_ros, y_ros = ros.fit_resample(X_train, y_train)
lin_reg_ros = make_pipeline(
column_transform,
LogisticRegression(max_iter=1000, random_state=123))
lin_reg_ros.fit(X_ros, y_ros)
evaluate_model(lin_reg_ros)
rf_pipe_ros = make_pipeline(
column_transform,
RandomForestClassifier(random_state=123))
rf_pipe_ros.fit(X_ros, y_ros)
evaluate_model(rf_pipe_ros)
###Output
Accuracy: 0.9481409001956947
F1 score: 0.07017543859649124
Recall score: 0.04
###Markdown
SMOTE
###Code
smote = SMOTE()
X_smote, y_smote = smote.fit_resample(column_transform.transform(X_train), y_train)
lin_reg_smote = LogisticRegression(max_iter=1000, random_state=123)
lin_reg_smote.fit(X_smote, y_smote)
evaluate_model(lin_reg_smote, transform=column_transform)
rf_smote = RandomForestClassifier(random_state=123)
rf_smote.fit(X_smote, y_smote)
evaluate_model(rf_smote, transform=column_transform)
###Output
Accuracy: 0.9461839530332681
F1 score: 0.06779661016949154
Recall score: 0.04
###Markdown
NearMiss
###Code
nm = NearMiss(version=3)
X_nm, y_nm = nm.fit_resample(column_transform.transform(X_train), y_train)
lin_reg_nm = LogisticRegression(max_iter=1000, random_state=123)
lin_reg_nm.fit(X_nm, y_nm)
evaluate_model(lin_reg_nm, transform=column_transform)
rf_nm = RandomForestClassifier(random_state=123)
rf_nm.fit(X_nm, y_nm)
evaluate_model(rf_nm, transform=column_transform)
###Output
Accuracy: 0.6203522504892368
F1 score: 0.11818181818181818
Recall score: 0.52
###Markdown
SVM
###Code
svc_pipe = make_pipeline(
column_transform,
SVC(class_weight='balanced', probability=True, random_state=123))
svc_pipe.fit(X_train, y_train)
evaluate_model(svc_pipe)
###Output
Accuracy: 0.7299412915851272
F1 score: 0.22905027932960895
Recall score: 0.82
###Markdown
Balanced Random Forest
###Code
brf = make_pipeline(
column_transform,
poly_f,
BalancedRandomForestClassifier()
)
brf.fit(X_train, y_train)
evaluate_model(brf)
###Output
Accuracy: 0.7211350293542075
F1 score: 0.2359249329758713
Recall score: 0.88
###Markdown
Hyper-param optimization
###Code
grid = {
'columntransformer__imputer__strategy': ['mean', 'most_frequent', 'median'],
'balancedrandomforestclassifier__n_estimators' : [50, 100, 200, 300, 400, 500, 600, 800],
'balancedrandomforestclassifier__max_depth' : [2, 3, 4, 6, 8, 10]
}
clf = RandomizedSearchCV(brf, grid, scoring='recall', random_state=123, n_iter=20)
search = clf.fit(X_train, y_train)
brf.get_params().keys()
print(search.best_score_)
print(search.best_params_)
estimator = search.best_estimator_
evaluate_model(estimator)
brf = make_pipeline(
column_transform,
poly_f,
BalancedRandomForestClassifier(n_estimators=500, max_depth=4)
)
# Refit on the full dataset with the tuned hyperparameters before saving.
brf.fit(X, y)
# Note: X_test is a subset of X here, so this evaluation is optimistic.
evaluate_model(brf)
with open('model.bin', "wb") as f:
pickle.dump(brf, f)
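# Round-trip check (sketch): reload the pickled pipeline and evaluate it again.
# Assumes the same scikit-learn / imbalanced-learn versions at load time.
with open('model.bin', "rb") as f:
    loaded_model = pickle.load(f)
evaluate_model(loaded_model)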
###Output
_____no_output_____ |