path | concatenated_notebook
---|---
Dense_Associative_Memory_training.ipynb | ###Markdown
This code illustrates the learning algorithm for Dense Associative Memories from [Dense Associative Memory for Pattern Recognition](https://arxiv.org/abs/1606.01164) on the MNIST data set. If you want to learn more about Dense Associative Memories, check out a [NIPS 2016 talk](https://channel9.msdn.com/Events/Neural-Information-Processing-Systems-Conference/Neural-Information-Processing-Systems-Conference-NIPS-2016/Dense-Associative-Memory-for-Pattern-Recognition) or a [research seminar](https://www.youtube.com/watch?v=lvuAU_3t134). This cell loads the data and normalizes it to the [-1,1] range.
###Code
import scipy.io
import numpy as np
import matplotlib.pyplot as plt
mat = scipy.io.loadmat('mnist_all.mat')
N=784
Nc=10
Ns=60000
NsT=10000
M=np.zeros((0,N))
Lab=np.zeros((Nc,0))
for i in range(Nc):
M=np.concatenate((M, mat['train'+str(i)]), axis=0)
lab1=-np.ones((Nc,mat['train'+str(i)].shape[0]))
lab1[i,:]=1.0
Lab=np.concatenate((Lab,lab1), axis=1)
M=2*M/255.0-1
M=M.T
MT=np.zeros((0,N))
LabT=np.zeros((Nc,0))
for i in range(Nc):
MT=np.concatenate((MT, mat['test'+str(i)]), axis=0)
lab1=-np.ones((Nc,mat['test'+str(i)].shape[0]))
lab1[i,:]=1.0
LabT=np.concatenate((LabT,lab1), axis=1)
MT=2*MT/255.0-1
MT=MT.T
###Output
_____no_output_____
###Markdown
To draw a heatmap of the weights together with the errors on the training set (blue) and the test set (red), a helper function is created:
###Code
def draw_weights(synapses, Kx, Ky, err_tr, err_test):
fig.clf()
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
plt.sca(ax1)
yy=0
HM=np.zeros((28*Kx,28*Ky))
for y in range(Ky):
for x in range(Kx):
HM[y*28:(y+1)*28,x*28:(x+1)*28]=synapses[yy,:].reshape(28,28)
yy += 1
nc=np.amax(np.absolute(HM))
im=plt.imshow(HM,cmap='bwr',vmin=-nc,vmax=nc)
cbar=fig.colorbar(im,ticks=[np.amin(HM), 0, np.amax(HM)])
plt.axis('off')
cbar.ax.tick_params(labelsize=30)
plt.sca(ax2)
plt.ylim((0,100))
plt.xlim((0,len(err_tr)+1))
ax2.plot(np.arange(1, len(err_tr)+1, 1), err_tr, color='b', linewidth=4)
ax2.plot(np.arange(1, len(err_test)+1, 1), err_test, color='r',linewidth=4)
ax2.set_xlabel('Number of epochs', size=30)
ax2.set_ylabel('Training and test error, %', size=30)
ax2.tick_params(labelsize=30)
plt.tight_layout()
fig.canvas.draw()
###Output
_____no_output_____
###Markdown
This cell defines the parameters of the algorithm: `n` - power of the rectified polynomial in [Eq 3](https://arxiv.org/abs/1606.01164); `m` - power of the loss function in [Eq 14](https://arxiv.org/abs/1606.01164); `K` - number of memories that are displayed as a `Ky` by `Kx` array by the helper function defined above; `eps0` - initial learning rate that is exponentially annealed during training with the damping parameter `f`, as explained in [Eq 12](https://arxiv.org/abs/1606.01164); `p` - momentum as defined in [Eq 13](https://arxiv.org/abs/1606.01164); `mu` - the mean of the Gaussian distribution that initializes the weights; `sigma` - the standard deviation of that Gaussian; `Nep` - number of epochs; `Num` - size of the training minibatch; `NumT` - size of the test minibatch; `prec` - parameter that controls the numerical precision of the weight updates. The parameter `beta` used in [Eq 9](https://arxiv.org/abs/1606.01164) is defined as `beta=1/Temp**n`. The choice of the temperatures `Temp` and the duration of the annealing `thresh_pret` are discussed in [Appendix A](https://arxiv.org/abs/1606.01164).
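For reference, the schedules implemented in the training cell below can be written compactly as (this restates the parameters defined here, no new quantities are introduced):
$$\varepsilon_{\text{epoch}} = \varepsilon_0\, f^{\,\text{epoch}}, \qquad \beta = \frac{1}{T^{\,n}}, \qquad T_{\text{epoch}} = \begin{cases} T_{\text{in}} + (T_{\text{f}} - T_{\text{in}})\,\dfrac{\text{epoch}}{\text{thresh\_pret}}, & \text{epoch} \le \text{thresh\_pret} \\ T_{\text{f}}, & \text{otherwise.} \end{cases}$$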
###Code
Kx=10 # Number of memories per row on the weights plot
Ky=10 # Number of memories per column on the weights plot
K=Kx*Ky # Number of memories
n=20 # Power of the interaction vertex in the DAM energy function
m=30 # Power of the loss function
eps0=4.0e-2 # Initial learning rate
f=0.998 # Damping parameter for the learning rate
p=0.6 # Momentum
Nep=300 # Number of epochs
Temp_in=540. # Initial temperature
Temp_f=540. # Final temperature
thresh_pret=200 # Length of the temperature ramp
Num=1000 # Size of training minibatch
NumT=5000 # Size of test minibatch
mu=-0.3 # Weights initialization mean
sigma=0.3 # Weights initialization std
prec=1.0e-30 # Precision of weight update
###Output
_____no_output_____
###Markdown
This cell defines the main code. The outer loop runs over epochs `nep`; the inner loop runs over minibatches. The weights are updated after each minibatch so that the largest update is equal to the learning rate `eps` at that epoch, see [Eq 13](https://arxiv.org/abs/1606.01164). The weights are displayed by the helper function after each epoch.
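In equations, each minibatch update below accumulates a momentum-smoothed update direction and then rescales it row by row so that the largest change per memory equals the current learning rate (a restatement of the code, with $V$ denoting `VKS`, $W$ the weights `KS`, and $d$ the direction computed as `d_KS`):
$$V \leftarrow p\,V + d, \qquad W_{ij} \leftarrow \operatorname{clip}\!\left(W_{ij} + \varepsilon\,\frac{V_{ij}}{\max_{j'}|V_{ij'}|},\,-1,\,1\right).$$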
###Code
%matplotlib inline
%matplotlib notebook
fig=plt.figure(figsize=(12,10))
KS=np.random.normal(mu, sigma, (K, N+Nc))
VKS=np.zeros((K, N+Nc))
aux=-np.ones((Nc,Num*Nc))
for d in range(Nc):
aux[d,d*Num:(d+1)*Num]=1.
auxT=-np.ones((Nc,NumT*Nc))
for d in range(Nc):
auxT[d,d*NumT:(d+1)*NumT]=1.
err_tr=[]
err_test=[]
for nep in range(Nep):
eps=eps0*f**nep
# Temperature ramp
if nep<=thresh_pret:
Temp=Temp_in+(Temp_f-Temp_in)*nep/thresh_pret
else:
Temp=Temp_f
beta=1./Temp**n
perm=np.random.permutation(Ns)
M=M[:,perm]
Lab=Lab[:,perm]
num_correct = 0
for k in range(Ns//Num):
v=M[:,k*Num:(k+1)*Num]
t_R=Lab[:,k*Num:(k+1)*Num]
t=np.reshape(t_R,(1,Nc*Num))
u=np.concatenate((v, -np.ones((Nc,Num))),axis=0)
uu=np.tile(u,(1,Nc))
vv=np.concatenate((uu[:N,:],aux),axis=0)
KSvv=np.maximum(np.dot(KS,vv),0)
KSuu=np.maximum(np.dot(KS,uu),0)
Y=np.tanh(beta*np.sum(KSvv**n-KSuu**n, axis=0)) # Forward path, Eq 9
Y_R=np.reshape(Y,(Nc,Num))
#Gradients of the loss function
d_KS=np.dot(np.tile((t-Y)**(2*m-1)*(1-Y)*(1+Y), (K,1))*KSvv**(n-1),vv.T) - np.dot(np.tile((t-Y)**(2*m-1)*(1-Y)*(1+Y), (K,1))*KSuu**(n-1),uu.T)
VKS=p*VKS+d_KS
nc=np.amax(np.absolute(VKS),axis=1).reshape(K,1)
nc[nc<prec]=prec
ncc=np.tile(nc,(1,N+Nc))
KS += eps*VKS/ncc
KS=np.clip(KS, a_min=-1., a_max=1.)
correct=np.argmax(Y_R,axis=0)==np.argmax(t_R,axis=0)
num_correct += np.sum(correct)
err_tr.append(100.*(1.0-num_correct/Ns))
num_correct = 0
for k in range(NsT//NumT):
v=MT[:,k*NumT:(k+1)*NumT]
t_R=LabT[:,k*NumT:(k+1)*NumT]
u=np.concatenate((v, -np.ones((Nc,NumT))),axis=0)
uu=np.tile(u,(1,Nc))
vv=np.concatenate((uu[:N,:],auxT),axis=0)
KSvv=np.maximum(np.dot(KS,vv),0)
KSuu=np.maximum(np.dot(KS,uu),0)
Y=np.tanh(beta*np.sum(KSvv**n-KSuu**n, axis=0)) # Forward path, Eq 9
Y_R=np.reshape(Y,(Nc,NumT))
correct=np.argmax(Y_R,axis=0)==np.argmax(t_R,axis=0)
num_correct += np.sum(correct)
errr=100.*(1.0-num_correct/NsT)
err_test.append(errr)
draw_weights(KS[:,:N], Kx, Ky, err_tr, err_test)
###Output
_____no_output_____
###Markdown
This code illustrates the learning algorithm for Dense Associative Memories from [Dense Associative Memory for Pattern Recognition](https://arxiv.org/abs/1606.01164), here applied to the EMNIST letters data set. If you want to learn more about Dense Associative Memories, check out a [NIPS 2016 talk](https://channel9.msdn.com/Events/Neural-Information-Processing-Systems-Conference/Neural-Information-Processing-Systems-Conference-NIPS-2016/Dense-Associative-Memory-for-Pattern-Recognition) or a [research seminar](https://www.youtube.com/watch?v=lvuAU_3t134). This cell loads the data and normalizes it to the [-1,1] range.
###Code
import scipy.io
import numpy as np
import matplotlib.pyplot as plt
!pip install emnist
from emnist import extract_training_samples, extract_test_samples
images, labels = extract_training_samples('letters')
images_test, labels_test = extract_test_samples('letters')
chars = ['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z']
chars_idx = []
for char in chars:
char_num = ord(char) - 96
chars_idx.append(char_num)
def get_indices_per_char(is_train, chars_str):
indices_per_char_test = []
for char_str in chars_str:
char_num = ord(char_str) - 96
c_labels = labels if is_train else labels_test
char_indices = [i for i, x in enumerate(c_labels) if x == char_num]
indices_per_char_test.append(char_indices)
return indices_per_char_test
indices_per_char = get_indices_per_char(True, chars)
indices_per_char_test = get_indices_per_char(False, chars)
N=784 # total neurons
Nc=len(chars) # classifier neurons
training_size=4800*len(chars)
test_size=800*len(chars)
M=np.zeros((0,N))
Lab=np.zeros((Nc,0))
for i in range(Nc):
flat_images = images[indices_per_char[i]].reshape((4800, 784))
M=np.concatenate((M, flat_images), axis=0)
lab1=-np.ones((Nc,4800))
lab1[i,:]=1.0
Lab=np.concatenate((Lab,lab1), axis=1)
M=2*M/255.0-1
M=M.T
MT=np.zeros((0,N))
LabT=np.zeros((Nc,0))
for i in range(Nc):
flat_images = images_test[indices_per_char_test[i]].reshape((800, 784))
MT=np.concatenate((MT, flat_images), axis=0)
lab1=-np.ones((Nc,800))
lab1[i,:]=1.0
LabT=np.concatenate((LabT,lab1), axis=1)
MT=2*MT/255.0-1
MT=MT.T
# print(indices_per_char_test[0][2])
# plt.imshow(np.reshape(MT[:,800], (28,28)))
# plt.show()
import scipy.io
mat = scipy.io.loadmat('mnist_all.mat')
print("hello")
print(mat['train'+str(i)].shape)
###Output
hello
(5958, 784)
###Markdown
To draw a heatmap of the weights together with the errors on the training set (blue) and the test set (red), a helper function is created:
###Code
def draw_weights(synapses, Kx, Ky, err_tr, err_test):
fig.clf()
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
plt.sca(ax1)
yy=0
HM=np.zeros((28*Kx,28*Ky))
for y in range(Ky):
for x in range(Kx):
HM[y*28:(y+1)*28,x*28:(x+1)*28]=synapses[yy,:].reshape(28,28)
yy += 1
nc=np.amax(np.absolute(HM))
im=plt.imshow(HM,cmap='bwr',vmin=-nc,vmax=nc)
cbar=fig.colorbar(im,ticks=[np.amin(HM), 0, np.amax(HM)])
plt.axis('off')
cbar.ax.tick_params(labelsize=30)
plt.sca(ax2)
plt.ylim((0,100))
plt.xlim((0,len(err_tr)+1))
ax2.plot(np.arange(1, len(err_tr)+1, 1), err_tr, color='b', linewidth=4)
ax2.plot(np.arange(1, len(err_test)+1, 1), err_test, color='r',linewidth=4)
ax2.set_xlabel('Number of epochs', size=30)
ax2.set_ylabel('Training and test error, %', size=30)
ax2.tick_params(labelsize=30)
plt.tight_layout()
fig.canvas.draw()
###Output
_____no_output_____
###Markdown
This cell defines the parameters of the algorithm: `n` - power of the rectified polynomial in [Eq 3](https://arxiv.org/abs/1606.01164); `m` - power of the loss function in [Eq 14](https://arxiv.org/abs/1606.01164); `K` - number of memories that are displayed as a `Ky` by `Kx` array by the helper function defined above; `eps0` - initial learning rate that is exponentially annealed during training with the damping parameter `f`, as explained in [Eq 12](https://arxiv.org/abs/1606.01164); `p` - momentum as defined in [Eq 13](https://arxiv.org/abs/1606.01164); `mu` - the mean of the Gaussian distribution that initializes the weights; `sigma` - the standard deviation of that Gaussian; `epochs` - number of epochs; `training_batch_size` - size of the training minibatch; `test_batch_size` - size of the test minibatch; `prec` - parameter that controls the numerical precision of the weight updates. The parameter `beta` used in [Eq 9](https://arxiv.org/abs/1606.01164) is defined as `beta=1/Temp**n`. The choice of the temperatures `Temp` and the duration of the annealing `thresh_pret` are discussed in [Appendix A](https://arxiv.org/abs/1606.01164).
###Code
Kx=23 # Number of memories per row on the weights plot
Ky=23 # Number of memories per column on the weights plot
K=Kx*Ky # Number of memories
n=20 # Power of the interaction vertex in the DAM energy function
m=30 # Power of the loss function
eps0=4.0e-2 # Initial learning rate
f=0.998 # Damping parameter for the learning rate
p=0.6 # Momentum
epochs=300 # Number of epochs
Temp_in=540. # Initial temperature
Temp_f=540. # Final temperature
thresh_pret=200 # Length of the temperature ramp
training_batch_size=600 # Size of training minibatch
test_batch_size=1200 # Size of test minibatch
mu=-0.3 # Weights initialization mean
sigma=0.3 # Weights initialization std
prec=1.0e-30 # Precision of weight update
has_starting_memories = True
n_starting_memories = 26*15
###Output
_____no_output_____
###Markdown
This cell defines the main code. The outer loop runs over epochs `epoch`; the inner loop runs over minibatches. The weights are updated after each minibatch so that the largest update is equal to the learning rate at that epoch, see [Eq 13](https://arxiv.org/abs/1606.01164). The weights are displayed by the helper function after each epoch.
###Code
%matplotlib inline
%matplotlib notebook
fig=plt.figure(figsize=(12,10))
memories=np.random.normal(mu, sigma, (K, N+Nc)) # random starting memories
if has_starting_memories:
for char_counter, char_idx in enumerate(chars_idx):
char_image_idxs = np.random.randint(0, len(indices_per_char[char_counter]), n_starting_memories // len(chars))  # randint excludes the upper bound, keeping indices in range
image_idxs = np.array(indices_per_char[char_counter])[char_image_idxs]
for image_counter, image_idx in enumerate(image_idxs):
image_data = images[image_idx].flatten()
classifier_data = -np.ones((Nc,))
classifier_data[char_counter] = 1
memory = np.concatenate((image_data, classifier_data))
memories[char_counter * (n_starting_memories // len(chars)) + image_counter] = memory
VKS=np.zeros((K, N+Nc))
# plt.imshow(np.reshape(memories[50][0:784], (28, 28)))
# print(memories[50][784:784+Nc])
print("here")
aux=-np.ones((Nc,training_batch_size*Nc))
for d in range(Nc):
aux[d,d*training_batch_size:(d+1)*training_batch_size]=1.
auxT=-np.ones((Nc,test_batch_size*Nc))
for d in range(Nc):
auxT[d,d*test_batch_size:(d+1)*test_batch_size]=1.
err_tr=[]
err_test=[]
for epoch in range(epochs):
learning_rate=eps0*f**epoch
# Temperature ramp
if epoch<=thresh_pret:
Temp=Temp_in+(Temp_f-Temp_in)*epoch/thresh_pret
else:
Temp=Temp_f
beta=1./Temp**n
# Training
perm=np.random.permutation(training_size) # random order
M=M[:,perm] # change memory order
Lab=Lab[:,perm] # change label order
num_correct = 0
# for every batch
for k in range(training_size//training_batch_size): # floor division
batch_memories = M[:,k*training_batch_size:(k+1)*training_batch_size]
batch_labels = Lab[:,k*training_batch_size:(k+1)*training_batch_size]
t=np.reshape(batch_labels,(1,Nc*training_batch_size))
# u = memories in column form with classifier neurons all -1
u=np.concatenate((batch_memories, -np.ones((Nc,training_batch_size))),axis=0)
# uu = Nc * every memory
uu=np.tile(u,(1,Nc))
# vv = memories in column form with classifier neurons with one +1
vv=np.concatenate((uu[:N,:],aux),axis=0)
KSvv=np.maximum(np.dot(memories,vv),0) # memories with positive classifier
KSuu=np.maximum(np.dot(memories,uu),0) # memories with negative classifier
# Diff F(postive classifier) and F(negative classifier)
Y=np.tanh(beta*np.sum(KSvv**n-KSuu**n, axis=0)) # Forward path, Eq 9
pred_labels=np.reshape(Y,(Nc,training_batch_size))
# Gradients of the loss function
d_KS=np.dot(np.tile((t-Y)**(2*m-1)*(1-Y)*(1+Y), (K,1))*KSvv**(n-1),vv.T) - np.dot(np.tile((t-Y)**(2*m-1)*(1-Y)*(1+Y), (K,1))*KSuu**(n-1),uu.T)
VKS=p*VKS+d_KS
nc=np.amax(np.absolute(VKS),axis=1).reshape(K,1)
nc[nc<prec]=prec
ncc=np.tile(nc,(1,N+Nc))
memories += learning_rate*VKS/ncc
memories=np.clip(memories, a_min=-1., a_max=1.)
correct=np.argmax(pred_labels,axis=0)==np.argmax(batch_labels,axis=0)
num_correct += np.sum(correct)
err_tr.append(100.*(1.0-num_correct/training_size))
# Testing
num_correct = 0
for k in range(test_size//test_batch_size):
v=MT[:,k*test_batch_size:(k+1)*test_batch_size]
t_R=LabT[:,k*test_batch_size:(k+1)*test_batch_size]
u=np.concatenate((v, -np.ones((Nc,test_batch_size))),axis=0)
uu=np.tile(u,(1,Nc))
vv=np.concatenate((uu[:N,:],auxT),axis=0)
KSvv=np.maximum(np.dot(memories,vv),0)
KSuu=np.maximum(np.dot(memories,uu),0)
Y=np.tanh(beta*np.sum(KSvv**n-KSuu**n, axis=0)) # Forward path, Eq 9
Y_R=np.reshape(Y,(Nc,test_batch_size))
correct=np.argmax(Y_R,axis=0)==np.argmax(t_R,axis=0)
num_correct += np.sum(correct)
print(np.argmax(Y_R,axis=0))
print(np.argmax(t_R,axis=0))
print(num_correct)
errr=100.*(1.0-num_correct/test_size)
err_test.append(errr)
draw_weights(memories[:,:N], Kx, Ky, err_tr, err_test)
print("test")
print(memories[99])
from scipy.io import savemat
mdic = {"memories": memories, "label": "23l86c"}
savemat("matlab_matrix.mat", mdic)
###Output
_____no_output_____ |
pandas/Demos/dataframe.ipynb | ###Markdown
pandas DataFrame Setup
###Code
import numpy as np
import pandas as pd
pd.set_option('display.max_columns', 10)
pd.set_option('display.max_rows', 10)
###Output
_____no_output_____
###Markdown
Create Simple DataFrame
###Code
mda = np.array([
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
])
df1 = pd.DataFrame(mda)
df1
df1.columns = ['A', 'B', 'C']
df1.index = np.arange(1, len(df1) + 1)
df1
df2 = pd.DataFrame(mda, columns=['A', 'B', 'C'], index=np.arange(1, len(mda) + 1))
df2
###Output
_____no_output_____
###Markdown
Create DataFrame using Series as Rows
###Code
people = pd.Series(['Adam', 'Bob', 'Carl', 'Dave'], index=['A', 'B', 'C', 'D'])
places = pd.Series(['Africa', 'Berlin', 'Canada'], index=['A', 'B', 'C'])
things = pd.Series(['Apple', 'Baseball bat', 'Car'], index=['A', 'B', 'C'])
df3 = pd.DataFrame([people, places, things])
df3
df3.index = ['People', 'Place', 'Thing']
df3
df4 = pd.DataFrame(
[people, places, things],
index = ['People', 'Place', 'Thing'],
columns = ['A', 'B', 'D']
)
df4
###Output
_____no_output_____
###Markdown
Create Series
###Code
# Create NumPy arrays with random grades.
rng = np.random.default_rng(seed=42)
ar1 = rng.choice(['A', 'B', 'C', 'D', 'F'], 100, p=[.2, .4, .3, .08, .02])
ar2 = rng.choice(['A', 'B', 'C', 'D', 'F'], 50, p=[.3, .4, .2, .1, 0])
ar3 = rng.choice(['a', 'b', 'c', 'd', 'f'], 200, p=[.15, .45, .25, .13, .02])
# Create pandas Series from arrays.
s1 = pd.Series(ar1)
s2 = pd.Series(ar2)
s3 = pd.Series(ar3)
###Output
_____no_output_____
###Markdown
Create DataFrame from Dictionary
###Code
d = {
'grades1': s1,
'grades2': s2,
'grades3': s3
}
df1_grades = pd.DataFrame(d)
df1_grades
###Output
_____no_output_____
###Markdown
Create DataFrame from CSV
###Code
csv = '../csvs/nc-est2019-agesex-res.csv'
pops = pd.read_csv(csv, usecols=[0, 1, 10, 11])
pops
###Output
_____no_output_____
###Markdown
Column Names and Row index
###Code
pops.index
pops.columns
###Output
_____no_output_____
###Markdown
Number of Columns and Rows
###Code
num_rows = len(pops)
num_cols = len(pops.columns)
num_rows, num_cols
###Output
_____no_output_____
###Markdown
Shape
###Code
pops.shape
###Output
_____no_output_____
###Markdown
head() and tail()
###Code
pops.head()
pops.tail()
pops.describe()
###Output
_____no_output_____
###Markdown
Accessing Columns
###Code
pops['POPESTIMATE2019']
pops.POPESTIMATE2019
pops[['AGE', 'SEX', 'POPESTIMATE2019']]
cols = ['AGE', 'SEX', 'POPESTIMATE2019']
pops[cols]
###Output
_____no_output_____
###Markdown
Changing Values in a Column
###Code
pops.SEX.unique() # Current values of pops.SEX
def fix_sex_values(sex):
if sex == 0:
return 'T'
elif sex == 1:
return 'M'
else: # 2
return 'F'
pops.SEX = pops.SEX.apply(fix_sex_values)
pops.SEX.unique() # New values of pops.SEX
###Output
_____no_output_____
###Markdown
Setting the index when creating the `DataFrame`
###Code
csv ='../csvs/mantle.csv'
mantle = pd.read_csv(csv, usecols=range(7), index_col='Year')
mantle.head()
###Output
_____no_output_____
###Markdown
Changing the index of an Existing `DataFrame`
###Code
pops['SEX_AGE'] = pops['SEX'] + pops['AGE'].apply(str)
pops.set_index('SEX_AGE', inplace=True)
pops
###Output
_____no_output_____
###Markdown
Accessing Rows
###Code
pops.loc['T25']
pops.loc[['F25', 'M25', 'T25']]
pops.loc['T20':'T25']
pops.iloc[4]
pops.iloc[[0, 1, 2, -3, -2, -1]]
pops.iloc[:5]
###Output
_____no_output_____
###Markdown
Combining Row and Column Selection
###Code
first5rows = pops.iloc[:5]
type(first5rows)
###Output
_____no_output_____
###Markdown
Two Steps - Rows First
###Code
first5rows = pops.iloc[:5]
first5rows[['POPESTIMATE2018']]
###Output
_____no_output_____
###Markdown
Two Steps - Columns First
###Code
pop2018 = pops[['POPESTIMATE2018']]
pop2018.iloc[:5]
###Output
_____no_output_____
###Markdown
One Step - Rows First
###Code
pops.iloc[:5][['POPESTIMATE2018']]
###Output
_____no_output_____
###Markdown
One Step - Columns First
###Code
pops[['POPESTIMATE2018']].iloc[:5]
###Output
_____no_output_____
###Markdown
Getting a Series (Rows then Column)
###Code
pops.iloc[:5]['POPESTIMATE2018']
pops.iloc[:5].POPESTIMATE2018
###Output
_____no_output_____
###Markdown
Getting a Series (Column then Rows)
###Code
pops['POPESTIMATE2018'].iloc[:5]
pops.POPESTIMATE2018.iloc[:5]
###Output
_____no_output_____
###Markdown
Math on a DataFrame vs. a Series
###Code
pops.iloc[:5][['POPESTIMATE2018']].mean()
pops.iloc[:5]['POPESTIMATE2018'].mean()
pops.iloc[:5][['POPESTIMATE2018', 'POPESTIMATE2019']].mean()
###Output
_____no_output_____
###Markdown
Getting Specific Data
###Code
pops.at['F40', 'POPESTIMATE2019']
pops.iat[41, 3]
###Output
_____no_output_____
###Markdown
Boolean Selection
###Code
pops[pops.SEX == 'F']
pops[(pops.SEX == 'M') | (pops.SEX == 'F')]
pops[(pops.SEX == 'F') & (pops.AGE > 65)]
###Output
_____no_output_____
###Markdown
Filtering a DataFrame with a Boolean Series
###Code
(pops.POPESTIMATE2018 > pops.POPESTIMATE2019) & (pops.SEX == 'T')
pops[(pops.POPESTIMATE2018 > pops.POPESTIMATE2019) & (pops.SEX == 'T')]
###Output
_____no_output_____
###Markdown
Pivoting a DataFrame
###Code
pops2 = pops.pivot(index='AGE', columns = 'SEX', values='POPESTIMATE2019')
pops2
###Output
_____no_output_____
###Markdown
For which ages were there more females than males in 2019?
###Code
pops2['DIFF'] = pops2.F - pops2.M
pops2[pops2.F > pops2.M]
(pops2['F'] / pops2['M']).iloc[range(0, 101, 10)]
(pops2['F'] / pops2['M']).iloc[-10:]
###Output
_____no_output_____
###Markdown
Common Gotcha
###Code
type(pops2['F']), type(pops2['M']), type(pops2['T'])
type(pops2.F), type(pops2.M), type(pops2.T)
pd.set_option('display.max_columns', 6)
pd.set_option('display.max_rows', 6)
pops2['T']
pops2.T
? pops2.T
###Output
_____no_output_____ |
docs/07.02-Stepper_Motor_Control_via_I2C.ipynb | ###Markdown
*This notebook contains material from [cbe61622](https://jckantor.github.io/cbe61622); content is available [on Github](https://github.com/jckantor/cbe61622.git).*
7.2 Stepper Motor Control via I2C
7.2.1 Particle CLI
7.2.1.1 Installation
###Code
%%capture
!bash <( curl -sL https://particle.io/install-cli )
# path to the particle cli. May be environment dependent.
particle_cli = "/root/bin/particle"
###Output
_____no_output_____
###Markdown
7.2.1.2 Utility functions
###Code
import re
import subprocess
# regular expression to strip ansi control characters
ansi = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')
# decode byte string and strip ansi control characters
def decode_bytes(byte_string):
if isinstance(byte_string, bytes):
result = byte_string.decode("utf-8")
return ansi.sub("", result)
# streamline call to the particle-cli
def particle(args):
process = subprocess.run(["/root/bin/particle"] + args,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
process.stdout = decode_bytes(process.stdout)
process.stderr = decode_bytes(process.stderr)
return process
###Output
_____no_output_____
###Markdown
7.2.1.3 Login to Particle
###Code
import getpass
# prompt for username and password
username = getpass.getpass(prompt="Username: ")
password = getpass.getpass(prompt="Password: ")
# attempt login
output = particle(["login", "--username", username, "--password", password])
# report results
if output.returncode:
print(f"Return code = {output.returncode}")
print(output.stderr)
else:
print(output.stdout)
###Output
Username: ··········
Password: ··········
> Successfully completed login!
###Markdown
7.2.1.4 Select a device
The following cell downloads a list of all user devices and creates a list of device names. Here we choose the first name in the list for the rest of this notebook. If this is not the device to be used, then modify this cell accordingly.
###Code
devices = [line.split()[0] for line in particle(["list"]).stdout.splitlines()]
device_name = devices[0]
print(particle(["list", device_name]).stdout)
###Output
jck_argon_01 [e00fce68eaceb1faa7cf7193] (Argon) is online
###Markdown
7.2.2 Project: Motor Control
7.2.2.1 Grove I2C Motor Driver V1.3
[SeeedStudio Documentation](https://wiki.seeedstudio.com/Grove-I2C_Motor_Driver_V1.3/)
[Github repository](https://github.com/Seeed-Studio/Grove_I2C_Motor_Driver_v1_3)
Note the default address ``0x0f``.
**It turns out this motor driver requires 5 volt logic. The Particle Argon is capable of 3.3 V only and is thus not electrically compatible. This is confirmed by the absence of a code library supporting this motor driver on the Particle Argon.**
**New motor drivers are on order.**
7.2.3 Prototype
7.2.3.1 Create Project
###Code
print(particle(["project", "create", "--name", "myproject", "."]).stdout)
###Output
Initializing project in directory myproject...
> A new project has been initialized in directory myproject
###Markdown
7.2.3.2 Change working directory
The Particle CLI assumes one is working in the top project directory.
###Code
%cd myproject
###Output
/content/myproject
###Markdown
7.2.3.3 Add relevant libraries
###Code
print(particle(["library", "search", "motor"]).stdout)
print(particle(["library", "add", "Grove_I2C_Motor_Driver_v1_3"]).stdout)
###Output
Library Grove_I2C_Motor_Driver_v1_3 not found
###Markdown
7.2.3.4 Create source file
###Code
%%writefile src/myproject.ino
###Output
Overwriting src/myproject.ino
###Markdown
7.2.3.5 Compiling
###Code
print(particle(["compile", "argon", "--saveTo", "myproject.bin"]).stdout)
###Output
Compiling code for argon
Including:
src/myproject.ino
project.properties
attempting to compile firmware
downloading binary from: /v1/binaries/5f91c30f9c09c651a428aa51
saving to: myproject.bin
Memory use:
text data bss dec hex filename
6588 108 1112 7808 1e80 /workspace/target/workspace.elf
Compile succeeded.
Saved firmware to: /content/myproject/myproject.bin
###Markdown
7.2.3.6 Flash firmware
###Code
print(particle(["flash", device_name, "myproject.bin"]).stdout)
###Output
Including:
myproject.bin
attempting to flash firmware to your device jck_argon_01
Flash device OK: Update started
Flash success!
|
AI_text_to_speech.ipynb | ###Markdown
1. Install the dependencies
###Code
!pip install ibm_watson
###Output
Collecting ibm_watson
  Downloading ibm-watson-5.3.0.tar.gz (412 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing wheel metadata ... done
Collecting websocket-client==1.1.0
  Downloading websocket_client-1.1.0-py2.py3-none-any.whl (68 kB)
Collecting ibm-cloud-sdk-core==3.*,>=3.3.6
  Downloading ibm-cloud-sdk-core-3.13.2.tar.gz (49 kB)
Requirement already satisfied: requests<3.0,>=2.0 in /usr/local/lib/python3.7/dist-packages (from ibm_watson) (2.23.0)
Requirement already satisfied: python-dateutil>=2.5.3 in /usr/local/lib/python3.7/dist-packages (from ibm_watson) (2.8.2)
Collecting requests<3.0,>=2.0
  Downloading requests-2.27.1-py2.py3-none-any.whl (63 kB)
Collecting urllib3<2.0.0,>=1.26.0
  Downloading urllib3-1.26.8-py2.py3-none-any.whl (138 kB)
Collecting PyJWT<3.0.0,>=2.0.1
  Downloading PyJWT-2.3.0-py3-none-any.whl (16 kB)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.5.3->ibm_watson) (1.15.0)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3.0,>=2.0->ibm_watson) (2021.10.8)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3.0,>=2.0->ibm_watson) (2.10)
Requirement already satisfied: charset-normalizer~=2.0.0 in /usr/local/lib/python3.7/dist-packages (from requests<3.0,>=2.0->ibm_watson) (2.0.10)
Building wheels for collected packages: ibm-watson, ibm-cloud-sdk-core
  Building wheel for ibm-watson (PEP 517) ... done
  Created wheel for ibm-watson: filename=ibm_watson-5.3.0-py3-none-any.whl size=408872 sha256=e2d0490dde4f631700ee5d6a256f82b3a0609af76fbb34c76e738d2ebbb43532
  Stored in directory: /root/.cache/pip/wheels/21/d9/82/4ce5b94730bc4f1f7b4c6384f72964b9b8f79fcc125bb8085c
  Building wheel for ibm-cloud-sdk-core (setup.py) ... done
  Created wheel for ibm-cloud-sdk-core: filename=ibm_cloud_sdk_core-3.13.2-py3-none-any.whl size=83241 sha256=773c1f8dd95f07c5845c2710263e34534a9e7d3c0326a595c5dcabfab115c40f
  Stored in directory: /root/.cache/pip/wheels/f0/0d/5c/0c26fcc2db712e8d270e52f7c9f6d8abe33ca79ec29438aa14
Successfully built ibm-watson ibm-cloud-sdk-core
Installing collected packages: urllib3, requests, PyJWT, websocket-client, ibm-cloud-sdk-core, ibm-watson
  Attempting uninstall: urllib3
    Found existing installation: urllib3 1.24.3
    Uninstalling urllib3-1.24.3:
      Successfully uninstalled urllib3-1.24.3
  Attempting uninstall: requests
    Found existing installation: requests 2.23.0
    Uninstalling requests-2.23.0:
      Successfully uninstalled requests-2.23.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
google-colab 1.0.0 requires requests~=2.23.0, but you have requests 2.27.1 which is incompatible.
datascience 0.10.6 requires folium==0.2.1, but you have folium 0.8.3 which is incompatible.
Successfully installed PyJWT-2.3.0 ibm-cloud-sdk-core-3.13.2 ibm-watson-5.3.0 requests-2.27.1 urllib3-1.26.8 websocket-client-1.1.0
###Markdown
2. Authentication
We use IBM's Text to Speech service: https://cloud.ibm.com/catalog/services/text-to-speech. The API key and the URL can be found under the Manage section.
###Code
url = ''
api_key = ''
from ibm_watson import TextToSpeechV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
authenticator = IAMAuthenticator(api_key) # authentication
tts = TextToSpeechV1(authenticator=authenticator) # service instance
tts.set_service_url(url)
###Output
_____no_output_____
###Markdown
Speech from a string
###Code
text = "I am Groot !"
with open('./speech.mp3', 'wb') as audio_file:
res = tts.synthesize(text, accept='audio/mp3', voice='en-US_AllisonV3Voice').get_result()
audio_file.write(res.content)
###Output
_____no_output_____
###Markdown
Speech from a text file
###Code
with open('./text_AI.txt', 'r') as f:
    text = f.readlines()
print(text)
# Turn the list of lines into plain text.
# Adding periods slows down the reading speed.
text = [l.replace('\n', '') for l in text]
text = ''.join(str(l) for l in text)
print(text)
with open('./speech_joker.mp3', 'wb') as audio_file:
res = tts.synthesize(text, accept='audio/mp3', voice='en-US_AllisonV3Voice').get_result()
audio_file.write(res.content)
###Output
_____no_output_____
###Markdown
List of voices
###Code
import json
#List all the voices
all_voices = tts.list_voices().get_result()
print(json.dumps(all_voices, indent=2))
get_a_voice =tts.get_voice('de-DE_ErikaV3Voice').get_result()
print(json.dumps(get_a_voice, indent=2))
###Output
{
"name": "de-DE_ErikaV3Voice",
"language": "de-DE",
"gender": "female",
"description": "Erika: Standard German (Standarddeutsch) female voice. Dnn technology.",
"customizable": true,
"supported_features": {
"custom_pronunciation": true,
"voice_transformation": false
},
"url": "https://api.au-syd.text-to-speech.watson.cloud.ibm.com/instances/5dfd87e4-bca3-49c7-9226-2612478bc9d0/v1/voices/de-DE_ErikaV3Voice"
}
|
Chapter1/1.5-Control.ipynb | ###Markdown
1.5.1-Statements
###Code
#1.5.1-Statements
###Output
_____no_output_____ |
Jupyter/EDA/EDA.ipynb | ###Markdown
Exploratory Data Analysis of Palladium Spot Price
Data is provided free for non-commercial use by Perth Mint (www.perthmint.com)
The Task
Extract-Load-Transform (ELT) process of transforming noisy data in CSV files (e.g. inconsistent data types, spurious values, etc.) into cleaned data suitable for increasingly sophisticated analysis.
###Code
import locale
import re
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from datetime import datetime
from dateutil.parser import parse
###Output
_____no_output_____
###Markdown
This function ensures pandas will not throw exceptions without a good reason.
###Code
def locale_string_to_float(value):
"""On a best efforts basis converts the string argument into a floating point number"""
returnVal = np.nan
try:
returnVal = locale.atof(value)
except ValueError as error:
pass
return returnVal
###Output
_____no_output_____
###Markdown
A specific problem with the CSV files occurs when dates cross from the 10th to the 11th of some months.
###Code
def clean_data(filename):
"""Keeping the original file named in the argument, creates a new CSV file with contiguous ISO-correct dates"""
with open(filename, 'r') as in_file:
old_date = datetime(1900, 1, 1)
with open(filename + '.cleaned.csv', 'w') as out_file:
for line in in_file:
if re.match('^"?\d+', line):
data_list = line.split(',')
matched = re.search('^"?(\d+)/(\d+)/(\d+)"?\s*$',
data_list[0])
year_string = matched[3]
maybe_month_string = matched[2]
maybe_day_string = matched[1]
current_date = parse('{}/{}/{}'.format(
maybe_day_string, maybe_month_string, year_string))
if abs((current_date - old_date).days) > 1:
current_date = parse('{}/{}/{}'.format(
maybe_month_string, maybe_day_string, year_string))
print('{},{}'.format(current_date,','.join(data_list[1:])),
end='', file=out_file)
old_date = current_date
###Output
_____no_output_____
###Markdown
A function to remove outliers from a given pandas dataframe. A valid column name is given as well as a floating point threshold factor. The index labels of all rows whose value differs from the preceding value by more than the threshold factor are returned as a list.
###Code
def abnormal_value_indices(df, column_name, threshold_factor):
"""Given a valid dataframe, a column in it and a threshold, returns rows that exceed the threshold"""
abnormals_list = []
previous_value = df[column_name][0]
for i in range(1, len(df[column_name])):
current_value = df[column_name][i]
if (previous_value > current_value * threshold_factor) or (current_value > previous_value * threshold_factor):
abnormals_list.append(df.index[i])
else:
previous_value = current_value
print(abnormals_list)
return(abnormals_list)
###Output
_____no_output_____
###Markdown
Create CSV files filled with clean (albeit sparse) data based on the original CSV files.
###Code
clean_data(r'Data\part_1.csv')
clean_data(r'Data\part_2.csv')
###Output
_____no_output_____
###Markdown
Data is split between the 2 CSV files. There are prices from 1968 to 2015 in one, and from 2016 onwards in the other.
###Code
df_london_fixes_daily_1968_2015 = pd.read_csv(
r'Data\part_1.csv.cleaned.csv',
header=None,
names=['Date', 'Au AM', 'Au PM', 'Ag', 'Pt AM', 'Pt PM', 'Pd AM', 'Pd PM'],
index_col=0,
skiprows=[0,1,2,3,4],
usecols=[0,1,2,3,4,5,6,7],
parse_dates=True,
converters={1: locale_string_to_float,
2: locale_string_to_float,
3: locale_string_to_float,
4: locale_string_to_float,
5: locale_string_to_float,
6: locale_string_to_float,
7: locale_string_to_float})
###Output
_____no_output_____
###Markdown
The dataframe deliberately holds NaN values as compaction will be done just before the data is needed.
###Code
df_london_fixes_daily_1968_2015.head()
df_london_fixes_daily_from_2016_on = pd.read_csv(
r'Data\part_2.csv.cleaned.csv',
header=None,
names=['Date', 'Au AM', 'Au PM', 'Ag', 'Pt AM', 'Pt PM', 'Pd AM', 'Pd PM'],
index_col=0,
skiprows=[0,1,2,3,4],
usecols=[0,1,2,3,4,5,6,7],
parse_dates=True,
converters={1: locale_string_to_float,
2: locale_string_to_float,
3: locale_string_to_float,
4: locale_string_to_float,
5: locale_string_to_float,
6: locale_string_to_float,
7: locale_string_to_float})
df_london_fixes_daily_from_2016_on.head()
###Output
_____no_output_____
###Markdown
Ready for basic analysis, we drop NaN entries in specific columns and then drop outliers. Here the outliers are values that are bigger or smaller than their preceding value by some factor.
###Code
data_d1 = {'Pd AM': df_london_fixes_daily_1968_2015['Pd AM'],
'Pd PM': df_london_fixes_daily_1968_2015['Pd PM']}
df_palladium_1 = pd.DataFrame(data_d1).dropna(axis=0)
df_palladium_1 = df_palladium_1.drop(abnormal_value_indices(df_palladium_1, 'Pd AM', 1.5))
df_palladium_1.describe()
data_d2 = {'Pd AM': df_london_fixes_daily_from_2016_on['Pd AM'],
'Pd PM': df_london_fixes_daily_from_2016_on['Pd PM']}
df_palladium_2 = pd.DataFrame(data_d2).dropna(axis=0)
df_palladium_2 = df_palladium_2.drop(abnormal_value_indices(df_palladium_2, 'Pd AM', 1.5))
df_palladium_2.describe()
###Output
[]
###Markdown
Combine the compacted dataframes to analyze the data contiguously from 1968 to 2016 onwards.
###Code
df_palladium = df_palladium_1.append(df_palladium_2)
df_palladium.describe()
fig, ax = plt.subplots()
ax.plot(df_palladium['Pd AM'])
ax.grid()
fig.set_size_inches(15, 5)
plt.title("Historical Daily Palladium Spot Prices (AM)")
plt.xlabel("Date")
plt.ylabel("Price / USD")
plt.show()
###Output
_____no_output_____
###Markdown
Appendix
Visualizations to see the data transformations.
1. Raw data
###Code
df_london_fixes_daily_1968_2015_raw = pd.read_csv(
r'Data\part_1.csv',
header=None,
names=['Date', 'Au AM', 'Au PM', 'Ag', 'Pt AM', 'Pt PM', 'Pd AM', 'Pd PM'],
index_col=0,
skiprows=[0,1,2,3,4],
usecols=[0,1,2,3,4,5,6,7],
parse_dates=True,
converters={1: locale_string_to_float,
2: locale_string_to_float,
3: locale_string_to_float,
4: locale_string_to_float,
5: locale_string_to_float,
6: locale_string_to_float,
7: locale_string_to_float})
fig, ax = plt.subplots()
ax.plot(df_london_fixes_daily_1968_2015_raw['Pd AM'])
ax.grid()
fig.set_size_inches(15, 5)
plt.title("Palladium Spot Price (AM) - Raw Data (part_1.csv) [1968-2015]")
plt.xlabel("Date")
plt.ylabel("Price / USD")
plt.show()
df_london_fixes_daily_2016_on_raw = pd.read_csv(
r'Data\part_2.csv',
header=None,
names=['Date', 'Au AM', 'Au PM', 'Ag', 'Pt AM', 'Pt PM', 'Pd AM', 'Pd PM'],
index_col=0,
skiprows=[0,1,2,3,4],
usecols=[0,1,2,3,4,5,6,7],
parse_dates=True,
converters={1: locale_string_to_float,
2: locale_string_to_float,
3: locale_string_to_float,
4: locale_string_to_float,
5: locale_string_to_float,
6: locale_string_to_float,
7: locale_string_to_float})
fig, ax = plt.subplots()
ax.plot(df_london_fixes_daily_2016_on_raw['Pd AM'])
ax.grid()
fig.set_size_inches(15, 5)
plt.title("Palladium Spot Price (AM) - Raw Data (part_2.csv) [2016+]")
plt.xlabel("Date")
plt.ylabel("Price / USD")
plt.show()
###Output
_____no_output_____
###Markdown
2. Initial data cleaning
###Code
fig, ax = plt.subplots()
ax.plot(df_london_fixes_daily_1968_2015['Pd AM'])
ax.grid()
fig.set_size_inches(15, 5)
plt.title("Palladium Spot Price (AM) - Initial Clean Data (part_1.csv.cleaned.csv) [1968-2015]")
plt.xlabel("Date")
plt.ylabel("Price / USD")
plt.show()
###Output
_____no_output_____
###Markdown
3 extreme values are seen in the plot above: 2005-05-18, 2012-02-29, and 2012-10-04.
###Code
fig, ax = plt.subplots()
ax.plot(df_london_fixes_daily_from_2016_on['Pd AM'])
ax.grid()
fig.set_size_inches(15, 5)
plt.title("Palladium Spot Price (AM) - Initial Clean Data (part_2.csv.cleaned.csv) [2016+]")
plt.xlabel("Date")
plt.ylabel("Price / USD")
plt.show()
###Output
_____no_output_____
###Markdown
No extreme values are seen in the plot above. The same data will still be run through the outlier isolation function.
3. Outlier removal
###Code
fig, ax = plt.subplots()
ax.plot(df_palladium_1['Pd AM'])
ax.grid()
fig.set_size_inches(15, 5)
plt.title("Palladium Spot Price (AM) - Outliers Removed [1968-2015]")
plt.xlabel("Date")
plt.ylabel("Price / USD")
plt.show()
fig, ax = plt.subplots()
ax.plot(df_palladium_2['Pd AM'])
ax.grid()
fig.set_size_inches(15, 5)
plt.title("Palladium Spot Price (AM) - Outliers Removed [2016+]")
plt.xlabel("Date")
plt.ylabel("Price / USD")
plt.show()
###Output
_____no_output_____ |
_Hands-On-Deep-Learning-for-Games.ipynb | ###Markdown
Chapter 1: Neural Networks
A perceptron takes a set of inputs, sums them all up, and passes the sum through an activation function. The activation function determines whether to send output, and at what level to send it, when activated.
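In equation form, the perceptron implemented below computes a weighted sum plus a bias term (the first weight) and thresholds it:
$$a = w_0 + \sum_{i} w_{i+1}\, x_i, \qquad y = \begin{cases} 1, & a \ge 0 \\ 0, & a < 0. \end{cases}$$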
###Code
inputs = [1,2]
weights = [1,1,1]
def perceptron_predict(inputs, weights):
activation = weights[0]
for i in range(len(inputs)-1):
activation += weights[i+1] * inputs[i]
return 1.0 if activation >= 0.0 else 0.0
print(perceptron_predict(inputs,weights))
train = [[1,2],[2,3],[1,1],[2,2],[3,3],[4,2],[2,5],[5,5],[4,1],[4,4]]
weights = [1,1,1]
def perceptron_predict(inputs, weights):
activation = weights[0]
for i in range(len(inputs)-1):
activation += weights[i+1] * inputs[i]
return 1.0 if activation >= 0.0 else 0.0
for inputs in train:
print(perceptron_predict(inputs,weights))
###Output
1.0
1.0
1.0
1.0
1.0
1.0
1.0
1.0
1.0
1.0
###Markdown
Using Gradient Descent to adjust weights
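The training loop below performs the classic per-example stochastic gradient descent update, where $t$ is the target (the last entry of each training row), $y$ the prediction, and $\eta$ the learning rate:
$$e = t - y, \qquad w_0 \leftarrow w_0 + \eta\, e, \qquad w_{i+1} \leftarrow w_{i+1} + \eta\, e\, x_i.$$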
###Code
def perceptron_predict(inputs, weights):
activation = weights [0]
for i in range(len(inputs)-1):
activation += weights [i + 1] * inputs [i]
return 1.0 if activation >= 0.0 else 0.0
def train_weights(train, learning_rate, epochs):
weights = [0.0 for i in range(len(train[0]))]
for epoch in range(epochs):
sum_error = 0.0
for inputs in train:
prediction = perceptron_predict(inputs, weights)
error = inputs[-1] - prediction
sum_error += error**2
weights[0] = weights[0] + learning_rate * error
for i in range(len(inputs)-1):
weights[i + 1] = weights[i + 1] + learning_rate * error * inputs[i]
print('>epoch=%d, learning_rate=%.3f, error=%.3f' % (epoch, learning_rate, sum_error))
return weights
train = [[1.5,2.5,0],[2.5,3.5,0],[1.0,11.0,1],[2.3,2.3,1],[3.6,3.6,1],[4.2,2.4,0],[2.4,5.4,0],[5.1,5.1,1],[4.3,1.3,0],[4.8,4.8,1]]
learning_rate = 0.1
#Epoch is a passthrough training data
#Helps to converage at global minimum rather than local
epochs = 10
weights = train_weights(train, learning_rate, epochs)
print(weights)
###Output
>epoch=0, learning_rate=0.100, error=5.000
>epoch=1, learning_rate=0.100, error=6.000
>epoch=2, learning_rate=0.100, error=4.000
>epoch=3, learning_rate=0.100, error=7.000
>epoch=4, learning_rate=0.100, error=6.000
>epoch=5, learning_rate=0.100, error=6.000
>epoch=6, learning_rate=0.100, error=4.000
>epoch=7, learning_rate=0.100, error=5.000
>epoch=8, learning_rate=0.100, error=6.000
>epoch=9, learning_rate=0.100, error=6.000
[-0.8999999999999999, -0.3900000000000005, 0.6899999999999998]
###Markdown
Multi-layer Perceptron in Tensorflow
###Code
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
import tensorflow as tf
# Parameters
learning_rate = 0.001
training_epochs = 15
batch_size = 100
display_step = 1
# Network Parameters
n_hidden_1 = 256 # 1st layer number of neurons
n_hidden_2 = 256 # 2nd layer number of neurons
n_input = 784 # MNIST data input (img shape: 28*28)
n_classes = 10 # MNIST total classes (0-9 digits)
# tf Graph input
X = tf.placeholder("float", [None, n_input])
Y = tf.placeholder("float", [None, n_classes])
# Store layers weight & bias
weights = {
'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
}
biases = {
'b1': tf.Variable(tf.random_normal([n_hidden_1])),
'b2': tf.Variable(tf.random_normal([n_hidden_2])),
'out': tf.Variable(tf.random_normal([n_classes]))
}
# Create model
def multilayer_perceptron(x):
# Hidden fully connected layer with 256 neurons
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
# Hidden fully connected layer with 256 neurons
layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
# Output fully connected layer with a neuron for each class
out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
return out_layer
# Construct model
logits = multilayer_perceptron(X)
# Define loss and optimizer
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)
# Initializing the variables
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
# Training cycle
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(mnist.train.num_examples/batch_size)
# Loop over all batches
for i in range(total_batch):
batch_x, batch_y = mnist.train.next_batch(batch_size)
# Run optimization op (backprop) and cost op (to get loss value)
c, _ = sess.run([loss_op, train_op], feed_dict={X: batch_x,Y: batch_y})
# Compute average loss
avg_cost += c / total_batch
# Display logs per epoch step
if epoch % display_step == 0:
print("Epoch:", '%04d' % (epoch+1), "cost={:.9f}".format(avg_cost))
print("Optimization Finished!")
# Test model
pred = tf.nn.softmax(logits) # Apply softmax to logits
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(Y, 1))
# Calculate accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print("Accuracy:", accuracy.eval({X: mnist.test.images, Y: mnist.test.labels}))
###Output
C:\Users\Watson Turbo\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
C:\Users\Watson Turbo\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
C:\Users\Watson Turbo\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
C:\Users\Watson Turbo\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
C:\Users\Watson Turbo\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
C:\Users\Watson Turbo\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
###Markdown
Training Neural Networks with Backpropagation
- Sum up all of the errors across the output layer
- Backpropagate the error back through the network
- Update each weight based on its contribution to the total error
Cost Function
- Describes the average sum of errors for a batch over the entire network: C(w1, w2, ..., wn) = C(w)
Building an Autoencoder with Keras
###Code
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
# this is the size of our encoded representations
encoding_dim = 64 # 32 floats -> compression of factor 24.5, assuming the input is 784 floats
# this is our input placeholder
input_img = Input(shape=(784,))
# "encoded" is the encoded representation of the input
encoded = Dense(encoding_dim, activation='relu')(input_img)
# this model maps an input to its encoded representation
encoder = Model(input_img, encoded)
# "decoded" is the lossy reconstruction of the input
decoded = Dense(784, activation='sigmoid')(encoded)
# this model maps an input to its reconstruction
autoencoder = Model(input_img, decoded)
# create a placeholder for an encoded (32-dimensional) input
encoded_input = Input(shape=(encoding_dim,))
# retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# create the decoder model
decoder = Model(encoded_input, decoder_layer(encoded_input))
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
from tensorflow.keras.datasets import mnist
import numpy as np
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
print( x_train.shape)
print( x_test.shape)
autoencoder.fit(x_train, x_train,
epochs=15,
batch_size=256,
shuffle=True,
validation_data=(x_test, x_test))
# encode and decode some digits
# note that we take them from the *test* set
encoded_imgs = encoder.predict(x_test)
decoded_imgs = decoder.predict(encoded_imgs)
# use Matplotlib (don't ask)
import matplotlib.pyplot as plt
n = 10 # how many digits we will display
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i + 1)
plt.imshow(x_test[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
from keras.layers import Input, Dense
from keras.models import Model
# this is the size of our encoded representations
encoding_dim = 64 # 32 floats -> compression of factor 24.5, assuming the input is 784 floats
# this is our input placeholder
input_img = Input(shape=(784,))
# "encoded" is the encoded representation of the input
encoded = Dense(encoding_dim, activation='relu')(input_img)
# this model maps an input to its encoded representation
encoder = Model(input_img, encoded)
# "decoded" is the lossy reconstruction of the input
decoded = Dense(784, activation='sigmoid')(encoded)
# this model maps an input to its reconstruction
autoencoder = Model(input_img, decoded)
# create a placeholder for an encoded (32-dimensional) input
encoded_input = Input(shape=(encoding_dim,))
# retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# create the decoder model
decoder = Model(encoded_input, decoder_layer(encoded_input))
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
from keras.datasets import mnist
import numpy as np
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
print( x_train.shape)
print( x_test.shape)
autoencoder.fit(x_train, x_train,
epochs=15,
batch_size=256,
shuffle=True,
validation_data=(x_test, x_test))
# encode and decode some digits
# note that we take them from the *test* set
encoded_imgs = encoder.predict(x_test)
decoded_imgs = decoder.predict(encoded_imgs)
# use Matplotlib (don't ask)
import matplotlib.pyplot as plt
n = 10 # how many digits we will display
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i + 1)
plt.imshow(x_test[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
# Make a prediction with weights
def perceptron_predict(inputs, weights):
activation = weights[0]
for i in range(len(inputs)-1):
activation += weights[i + 1] * inputs[i]
return 1.0 if activation >= 0.0 else 0.0
# Estimate Perceptron weights using stochastic gradient descent
def train_weights(train, learning_rate, n_epoch):
weights = [0.0 for i in range(len(train[0]))]
for epoch in range(n_epoch):
sum_error = 0.0
for inputs in train:
prediction = perceptron_predict(inputs, weights)
error = inputs[-1] - prediction
sum_error += error**2
weights[0] = weights[0] + learning_rate * error
for i in range(len(inputs)-1):
weights[i + 1] = weights[i + 1] + learning_rate * error * inputs[i]
print('>epoch=%d, lrate=%.3f, error=%.3f' % (epoch, learning_rate, sum_error))
return weights
train = [[1.5,2.5,0],[2.5,3.5,0],[1.0,11.0,1],[2.3,2.3,1],[3.6,3.6,1],[4.2,2.4,0],[2.4,5.4,0],[5.1,5.1,1],[4.3,1.3,0],[4.8,4.8,1]]
learning_rate = 0.1
epochs = 10
weights = train_weights(train, learning_rate, epochs)
print(weights)
###Output
>epoch=0, lrate=0.100, error=5.000
>epoch=1, lrate=0.100, error=6.000
>epoch=2, lrate=0.100, error=4.000
>epoch=3, lrate=0.100, error=7.000
>epoch=4, lrate=0.100, error=6.000
>epoch=5, lrate=0.100, error=6.000
>epoch=6, lrate=0.100, error=4.000
>epoch=7, lrate=0.100, error=5.000
>epoch=8, lrate=0.100, error=6.000
>epoch=9, lrate=0.100, error=6.000
[-0.8999999999999999, -0.3900000000000005, 0.6899999999999998]
|
Notebooks/Week 4/Week4_Sections.ipynb | ###Markdown
**Table of Contents**
**Announcements**:
* Join Piazza and get these notes.
* Remember to study HW 1 and 2 for the exams, both the concepts and the code involved.
* Any questions on HW2?
**This Section**
* Short exercises about programming in general:
  * Fibonacci numbers with for loops
  * Fibonacci numbers with for loops and arrays
* Exercises on iterative methods and SVD solving of equations
* HW3 guide
###Code
import numpy as np
import scipy.linalg
###Output
_____no_output_____
###Markdown
**Exercise 1: For-Loop Fibonacci Number**
The Fibonacci numbers form a sequence whose first 2 terms are 1, and the sequence satisfies:
$$\begin{cases} a_{n + 2} = a_{n + 1} + a_n \\ a_{1} = a_0 = 1 \end{cases}$$
**Objective**: Write a function that accepts an integer $n$, assumed non-negative, and returns $a_n$. For example:
```python
FibboTermAt(0)  # 1
FibboTermAt(1)  # 1
FibboTermAt(4)  # 5
```
**Question**: How would you write the code inside the function to build up the sequence? How many iterations should there be? What if the input has $0\le N\le 1$?
Hints
* Use 3 variables, two of which keep track of the last two terms of the sequence.
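One possible solution sketch following the hint (an added example; try writing it yourself before reading):

```python
def FibboTermAt(n):
    # a holds the current term, b the next one; both start at 1.
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```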
###Code
def FibboTermAt(n):
pass
Sequence = []
for I in range(21):
Sequence.append(FibboTermAt(I))
print(Sequence)
###Output
[None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None]
###Markdown
**Exercise 2: For-Loop Fibonacci Sequence**
**Objective**
Write a function that accepts an integer, assumed non-negative, and returns the leading terms of the Fibonacci sequence in a numpy array.
**Example**
```
FibboSeq(11)
[1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
```
Hints
* Build an empty array using the input `n`, and put in the first 2 terms.
* Use a for loop and update the `n+2` term using the `n+1` and `n`-th terms.
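A possible sketch using a pre-allocated numpy array (an added example; the length convention can be adjusted to match the worked example above):

```python
import numpy as np

def FibboSeq(n):
    # Seed the whole array with ones, then fill in terms 2, 3, ... iteratively.
    seq = np.ones(n, dtype=int)
    for i in range(2, n):
        seq[i] = seq[i - 1] + seq[i - 2]
    return seq
```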
###Code
def FibboSeq(n):
pass
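# A minimal sketch of one possible solution (our own illustration, not the official answer;
# the name FibboSeq_sketch is ours). It assumes "first n terms" means a_0 .. a_{n-1};
# if a_0 .. a_n is wanted instead (the prompt's example lists 12 numbers for n = 11),
# make the array one element longer and loop up to n + 1.
def FibboSeq_sketch(n):
    seq = np.ones(n, dtype=np.int64)            # the first two entries are already 1
    for i in range(2, n):
        seq[i] = seq[i - 1] + seq[i - 2]        # a_i = a_{i-1} + a_{i-2}
    return seq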
print(FibboSeq(20))
###Output
_____no_output_____
###Markdown
**Exercise 3: Eigenvalues and Fibonacci Numbers**We can use eigenvalue decomposition to speed up computing the n-th Fibonacci number. Observe the following: $$\begin{bmatrix} 0 & 1 \\ 1 & 1\end{bmatrix}\begin{bmatrix} a_{n}\\a_{n + 1}\end{bmatrix}= \begin{bmatrix} a_{n + 1} \\ a_{n + 1} + a_{n}\end{bmatrix}=\begin{bmatrix} a_{n + 1} \\ a_{n + 2}\end{bmatrix}$$For example, n = 5: $$\begin{bmatrix} 0 & 1 \\ 1 & 1\end{bmatrix}\begin{bmatrix} 5 \\8\end{bmatrix}= \begin{bmatrix} 8 \\ 13\end{bmatrix}$$If we started with the numbers "8, 13" and we want to compute the next 2 numbers in the sequence, how would we do that with linear algebra in numpy? **Discussion with peers*** What is the matrix that we multiply the vector `[8; 13]` by so that we get the next 2 terms, `[21; 34]`? Answer* Multiply by the matrix a second time, i.e. apply$$\begin{bmatrix} 0 & 1 \\ 1 & 1\end{bmatrix}^2$$
###Code
v = np.array([8, 13]).reshape(-1, 1)
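# Illustration of the answer above (our own addition): applying the step matrix once
# advances the pair by one term, and applying it twice yields [21; 34].
A = np.array([[0, 1], [1, 1]])
one_step = A @ v           # [[13], [21]]
two_steps = A @ (A @ v)    # [[21], [34]]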
###Output
_____no_output_____
###Markdown
**Prompt:** What is the general pattern here? And how would we make use of the eigenvalue decomposition for solving this problem (computing the n-th Fibonacci number)? Discuss with your peers. **Objective**Fill in the function below, so that when it gets the input `n`, it computes the `n`-th term in the Fibonacci sequence just like the first exercise we did. **Helpful Functions Python**|Function |Description||-----|--------||[np.linalg.eig](https://numpy.org/doc/stable/reference/generated/numpy.linalg.eig.html)| Compute eigenvalues & eigenvectors of a non-symmetric matrix. ||[np.diag](https://numpy.org/doc/stable/reference/generated/numpy.diag.html)| If the input is a vector of size `n`, then the vector is filled into the diagonal of a zero matrix with size `n x n`, and the matrix is returned. If the input is a matrix, then the diagonal elements of the matrix are extracted and returned as a vector. ||[np.linalg.solve](https://numpy.org/doc/stable/reference/generated/numpy.linalg.solve.html)| Solve a system Ax = b|**Helpful Function Matlab**|Function| Description||-------|---------||[eig](https://www.mathworks.com/help/matlab/ref/eig.html)|Returns the eigenvalues & eigenvectors of a general matrix, both as matrices|Hints* First, we need to realize that repeatedly multiplying by the matrix is involved. * Second, we need to remember how to use `np.linalg.eig`, `np.linalg.solve`, and the values they return to code it up. Answer* Observe that, to get from `a_0, a_1` to `a_{n}, a_{n + 1}`, we need to multiply `n` times, and assuming that the matrix has eigendecomposition $X\Lambda X^{-1}$ where $X$ is the matrix whose columns are the eigenvectors, then:$$\begin{bmatrix} 0 & 1 \\ 1 & 1\end{bmatrix}^n = \underbrace{X\Lambda X^{-1}X\Lambda X^{-1} \dots X\Lambda X^{-1}}_{\text{n times}} = X\Lambda^nX^{-1}$$Then we just raise the diagonal matrix to the n-th power and multiply on the left by the matrix $X$ and on the right by the inverse of $X$.
###Code
def EigenComputeFibbTo(n):
pass
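# A minimal sketch of the eigendecomposition approach described above (our own
# illustration, not the official answer; the name EigenComputeFibbTo_sketch is ours).
# Since A = X diag(lam) X^{-1}, we have A^n [a_0, a_1]^T = X diag(lam^n) X^{-1} [1, 1]^T,
# whose first entry is a_n.
def EigenComputeFibbTo_sketch(n):
    A = np.array([[0, 1], [1, 1]], dtype=float)
    lam, X = np.linalg.eig(A)                        # eigenvalues and eigenvectors of A
    c = np.linalg.solve(X, np.array([1.0, 1.0]))     # coordinates of [a_0, a_1] in the eigenbasis
    a_n = (X @ (lam**n * c))[0]                      # first entry of A^n [1, 1]^T
    return int(round(a_n))                           # round away floating-point error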
for II in range(20):
print(EigenComputeFibbTo(II))
###Output
_____no_output_____
###Markdown
**Use SVD to Find Solutions to a Linear System** **SVD Decomposition****Python Relevant Function**|Python Function |Description||-----|-----||[np.linalg.svd](https://numpy.org/doc/stable/reference/generated/numpy.linalg.svd.html)|Returns $U, \sigma, V^H$; notice that the singular values are returned as a vector. ||[np.diag](https://numpy.org/doc/stable/reference/generated/numpy.diag.html)| If the input is a vector of size `n`, then the vector is filled into the diagonal of a zero matrix with size `n x n`, and the matrix is returned. If the input is a matrix, then the diagonal elements are extracted and returned as a vector. |**Matlab Relevant Functions**|Matlab Function|Description||------|-----||[svd](https://www.mathworks.com/help/matlab/ref/double.svd.html)|Returns $U, \Sigma, V$, all as matrices. |Suppose that the SVD is performed on $A$, where $A$ is $m\times n$. The decomposition is: $$A = U\Sigma V^T$$Here $U$ can be $m\times m$ or $m\times \min(m, n)$; which shape it takes depends on the parameters passed into the function. $\Sigma$ is a diagonal matrix containing the singular values of $A$, which are all non-negative real numbers; $\Sigma$ can be of shape $m\times n$ or $\min(m, n)\times \min(m, n)$, and $V^T$ is a matrix of shape $n\times n$ or $\min(m, n)\times n$. The matrices $U, V$ are called unitary/orthogonal; their inverses are their conjugate transposes. Then one can assert that the inverse of $A$, when $A$ happens to be square and invertible, is: $$A^{-1} = (U\Sigma V^T)^{-1} = V(\Sigma)^{-1}U^T$$Because $\Sigma$ is diagonal, its inverse is obtained by taking the reciprocal of each element on the diagonal of $\Sigma$. **Exercise*** Implement the method `SolveViaSVD` below, using the $U, \Sigma, V^T$ matrices from `np.linalg.svd`. Execute the cell and compare the output from both of these solvers. * Discuss: can the matrix equation be solved using an iterative method such as Gauss-Seidel or Jacobi, and WHY?
###Code
def JacobiIterate(A, b, x_guess=None, tol=1e-8):
"""
    Performs Jacobi iteration on the system Ax = b until successive guesses differ by less than tol (default 1e-8) in norm.
    (Please read through this method after class and understand how iterative methods are implemented in real life.)
"""
x_guess = b if x_guess is None else x_guess
d = np.diag(A) # d: Vector of diagonal elements of A.
D = np.diag(d) # D: Gotten by setting all the non-diagonal part of the matrix A to zero.
LpU = A - np.diag(d) # The non diagonal part of the matrix A.
guesses = [x_guess] # All the guesses are stored in python native array
while True:
        guesses.append(np.linalg.solve(D, (b - LpU.dot(guesses[-1])))) # Accumulate the results.
if np.linalg.norm(guesses[-1] - guesses[-2]) < tol: # Break conditions.
break
return guesses[-1]
def SolveViaSVD(M, c):
# your code here, return the inverse of the matrix X using SVD
pass
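# A minimal sketch of the SVD-based solve described above (our own illustration, kept
# under a separate name so the original stub and its printed output stay unchanged):
# M = U diag(s) V^T, so M^{-1} c = V diag(1/s) U^T c for a square, invertible M.
def SolveViaSVD_sketch(M, c):
    U, s, Vt = np.linalg.svd(M)                  # s comes back as a 1-D vector of singular values
    return Vt.T @ np.diag(1.0 / s) @ U.T @ c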
N = 10
b = np.random.rand(N, 1)
X = np.random.randn(N, N)
X += np.diag(np.sum(np.abs(X), axis=1)) # add each row's absolute sum to the diagonal so X is diagonally dominant
print(f"Solution from iterative method is: \n{JacobiIterate(X, b)}")
print(f"Solution from your method is: \n{SolveViaSVD(X, b)}")
###Output
Solution from iterative method is:
[[ 0.1343723 ]
[ 0.00110638]
[ 0.1276749 ]
[ 0.02813489]
[-0.00155135]
[ 0.08361342]
[ 0.100144 ]
[ 0.13483284]
[ 0.16942361]
[ 0.08490875]]
Solution from your method is:
None
|
Past/DSS/Neural_Network/05.rnn_nlp.ipynb | ###Markdown
((6923, 188), (6923, 1205)) 188 - length of the longest sentence; 6923 - number of sentences
###Code
from keras.preprocessing import sequence
from keras.datasets import imdb
from keras import layers, models
class RNN_LSTM(models.Sequential): # built in the sequential (chained) style
def __init__(self, max_features, maxlen):
super().__init__()
self.add(layers.Embedding(max_features, 128, input_length=maxlen))
self.add(layers.LSTM(128, dropout=0.2, recurrent_dropout=0.2))
self.add(layers.Dense(1, activation='sigmoid'))
self.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
a = RNN_LSTM
class Data:
def __init__(self, max_features=20000, maxlen=80):
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
        # max_features : maximum vocabulary size (keep only the most frequent words)
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
        # sequences longer than maxlen are truncated by pad_sequences(); shorter ones are padded with zeros
self.x_train, self.y_train = x_train, y_train
self.x_test, self.y_test = x_test, y_test
class RNN_LSTM(models.Sequential): # built in the sequential (chained) style
def __init__(self, max_features, maxlen):
super().__init__()
self.add(layers.Embedding(max_features, 128, input_length=maxlen))
self.add(layers.LSTM(128, dropout=0.2, recurrent_dropout=0.2))
self.add(layers.Dense(1, activation='sigmoid'))
self.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
class Machine:
def __init__(self,
max_features=20000,
maxlen=80):
self.data = Data(max_features, maxlen)
self.model = RNN_LSTM(max_features, maxlen)
def run(self, epochs=3, batch_size=32):
data = self.data
model = self.model
print('Training stage')
print('==============')
model.fit(data.x_train, data.y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(data.x_test, data.y_test))
score, acc = model.evaluate(data.x_test, data.y_test,
batch_size=batch_size)
print('Test performance: accuracy={0}, loss={1}'.format(acc, score))
def main():
m = Machine()
m.run()
main()
###Output
Downloading data from https://s3.amazonaws.com/text-datasets/imdb.npz
17465344/17464789 [==============================] - 15s 1us/step
Training stage
==============
Train on 25000 samples, validate on 25000 samples
Epoch 1/3
24416/25000 [============================>.] - ETA: 6s - loss: 0.4586 - acc: 0.7835 |
0326.ipynb | ###Markdown
###Code
!pip install gradio
!pip install keras
import tensorflow as tf
import numpy as np
import requests
inception_net = tf.keras.applications.MobileNetV2()
# Download human-readable labels for ImageNet.
response = requests.get("https://git.io/JJkYN")
labels = response.text.split("\n")
len(labels)
import tensorflow as tf
import numpy as np
import requests
import gradio as gr
inception_net = tf.keras.applications.MobileNetV2()
# Download human-readable labels for ImageNet.
response = requests.get("https://git.io/JJkYN")
labels = response.text.split("\n")
def classify(img):
img = np.expand_dims(img, 0)
img = tf.keras.applications.mobilenet_v2.preprocess_input(img)
prediction = inception_net.predict(img).flatten()
return {labels[i]:float(prediction[i]) for i in range(len(prediction))}
image = gr.inputs.Image(shape=(224, 224))
label = gr.outputs.Label(num_top_classes=3, label='Prediction result')
grobj = gr.Interface(fn=classify, inputs=image, outputs=label, title='Image Classification')
grobj.launch()
###Output
Colab notebook detected. To show errors in colab notebook, set `debug=True` in `launch()`
Running on public URL: https://21937.gradio.app
This share link expires in 72 hours. For free permanent hosting, check out Spaces (https://huggingface.co/spaces)
|
Regression/Linear Models/LassoRegression_RobustScaler_PowerTransformer.ipynb | ###Markdown
LassoRegression with Robust Scaler & Power Transformer This code template is for regression analysis using the Lasso regressor, where the rescaling method used is RobustScaler and feature transformation is done via PowerTransformer. Required Packages
###Code
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as se
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler, PowerTransformer
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
InitializationFilepath of CSV file
###Code
#filepath
file_path= ""
###Output
_____no_output_____
###Markdown
List of features which are required for model training.
###Code
#x_values
features=[]
###Output
_____no_output_____
###Markdown
Target feature for prediction.
###Code
#y_value
target=''
###Output
_____no_output_____
###Markdown
Data FetchingPandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use the pandas library to read the CSV file from its storage path, and the head function to display the first few rows.
###Code
df=pd.read_csv(file_path)
df.head()
###Output
_____no_output_____
###Markdown
Feature SelectionIt is the process of reducing the number of input variables when developing a predictive model, used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to X and the target/outcome to Y.
###Code
X=df[features]
Y=df[target]
###Output
_____no_output_____
###Markdown
Data PreprocessingSince the majority of the machine learning models in the sklearn library don't handle string category data and null values, we have to explicitly remove or replace null values. The snippet below has functions which replace null values if any exist, and convert string class data in the dataset by one-hot encoding it (via pd.get_dummies).
###Code
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
###Output
_____no_output_____
###Markdown
Calling preprocessing functions on the feature and target set.
###Code
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
###Output
_____no_output_____
###Markdown
Correlation MapIn order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns
###Code
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
###Output
_____no_output_____
###Markdown
Data SplittingThe train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
###Code
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
###Output
_____no_output_____
###Markdown
ModelLinear Model trained with L1 prior as regularizer (aka the Lasso)The optimization objective for Lasso is:(1 / (2 n_samples)) ||y - Xw||^2_2 + alpha * ||w||_1 Technically the Lasso model is optimizing the same objective function as the Elastic Net with l1_ratio=1.0 (no L2 penalty). Parameters: 1. alpha: float, default=1.0 > Constant that multiplies the L1 term. Defaults to 1.0. alpha = 0 is equivalent to an ordinary least square, solved by the LinearRegression object. For numerical reasons, using alpha = 0 with the Lasso object is not advised. Given this, you should use the LinearRegression object. 2. fit_intercept: bool, default=True> Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (i.e. data is expected to be centered). 3. normalize: bool, default=False> This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False. 4. precompute: bool or array-like of shape (n_features, n_features), default=False> Whether to use a precomputed Gram matrix to speed up calculations. The Gram matrix can also be passed as argument. For sparse input this option is always False to preserve sparsity. 5. copy_X: bool, default=True > If True, X will be copied; else, it may be overwritten. 6. max_iter: int, default=1000 > The maximum number of iterations. 7. tol: float, default=1e-4 > The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol. 8. warm_start: bool, default=False > When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary. 9. positive: bool, default=False > When set to True, forces the coefficients to be positive. 10. random_state: int, RandomState instance, default=None > The seed of the pseudo random number generator that selects a random feature to update. Used when selection == ‘random’. Pass an int for reproducible output across multiple function calls. See Glossary. 11. selection: {‘cyclic’, ‘random’}, default=’cyclic’ > If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to ‘random’) often leads to significantly faster convergence especially when tol is higher than 1e-4. Data Rescaling Robust ScalerThis Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range). The IQR is the range between the 1st quartile (25th quantile) and the 3rd quartile (75th quantile).Centering and scaling happen independently on each feature by computing the relevant statistics on the samples in the training set. Median and interquartile range are then stored to be used on later data using the transform method. Data Transformation Power Transformer Apply a power transform featurewise to make data more Gaussian-like.Power transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired.[Power Transformer API](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PowerTransformer.html)
###Code
model=make_pipeline(RobustScaler(), PowerTransformer(), Lasso())
model.fit(x_train,y_train)
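# Hedged aside (not part of the original template): with make_pipeline the fitted Lasso
# step is available as model.named_steps['lasso'], so its learned coefficients can be
# inspected to see which features were shrunk to zero by the L1 penalty.
lasso_coefficients = model.named_steps['lasso'].coef_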
###Output
_____no_output_____
###Markdown
Model AccuracyWe will use the trained model to make predictions on the test set, then use the predicted values to measure the accuracy of our model. score: The score function returns the coefficient of determination R2 of the prediction.
###Code
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
###Output
Accuracy score 91.89 %
###Markdown
r2_score: The r2_score function computes the percentage of variability in the target that is explained by our model. mae: The mean absolute error function calculates the average absolute distance between the real data and the predicted data. mse: The mean squared error function averages the squared errors, which penalizes the model more heavily for large errors.
###Code
y_pred=model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
###Output
R2 Score: 91.89 %
Mean Absolute Error 82740.66
Mean Squared Error 10794070709.17
###Markdown
Prediction PlotWe plot the actual test-set values (green) and the model's predictions on the same records (red) against the record number, which makes it easy to see how closely the predictions track the true values.
###Code
n=len(x_test) if len(x_test)<20 else 20
plt.figure(figsize=(14,10))
plt.plot(range(n),y_test[0:n], color = "green")
plt.plot(range(n),model.predict(x_test[0:n]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
###Output
_____no_output_____ |
k1lib/tests/moparse.ipynb | ###Markdown
Recap:
###Code
[names(), passedMain(), passed(), passedStrict()] | shape(0).all() | deref()
groupSamples = k1lib.Wrapper(passedStrict() | apply(mo.parseM.recognizeInit, 2) | cut(2) | cut(1) | joinStreams() | op().strip().all() | toSet() | sort(numeric=False) | deref())
mainSamples = k1lib.Wrapper(passedStrict() | apply(mo.parseM.recognizeInit, 2) | cut(2) | cut(0) | apply(lambda x: x or "").all() | deref().all() | unique(0) | deref())
[passedStrict(), groupSamples(), mainSamples()] | shape(0).all() | deref()
###Output
_____no_output_____
###Markdown
Quality assurance:
###Code
assert passedStrict() | apply(mo.parseM.recognizeInit, 2) | instanceOf(str, 2) | shape(0) == 0 # every passedStrict MUST run through recognize fine
assert groupSamples() | apply(lambda s: re.fullmatch(mo.parseM.branchC, s)) | filt(op()==None) | shape(0) == 0
assert mainSamples() | apply(lambda s: re.fullmatch(mo.parseM.mainC, s[0])) | filt(op()==None) | shape(0) == 0
###Output
_____no_output_____
###Markdown
92% chance of main recognition! Other cases are sort of weird, but that's quite outstanding!
###Code
failedRecMain = mainSamples() | apply(mo.parseM.recognizeMain) | ~instanceOf(mo.Atom) | deref()
ratio = 100 - (failedRecMain | shape(0)) / (mainSamples() | shape(0)) * 100; ratio, failedRecMain
assert ratio > 90
###Output
_____no_output_____
###Markdown
98% chance of group recognition!
###Code
failedRecGroup = groupSamples() | apply(partial(mo.parseM.recognizeGroup, mainA=None)) | ~instanceOf(mo.Atom) | deref(); failedRecGroup | headOut()
ratio = 100 - (failedRecGroup | shape(0)) / (groupSamples() | shape(0)) * 100; ratio
assert ratio > 98
###Output
_____no_output_____
###Markdown
73% chance of end-to-end recognition. Not the greatest in the world, but still quite awesome!
###Code
fullyRecognized = k1lib.Wrapper(passedStrict() | apply(partial(mo.parse, quiet=True), 2) | instanceOf(mo.Atom, 2) | deref())
ratio = (fullyRecognized() | shape(0)) / (passedStrict() | shape(0)) * 100; ratio, fullyRecognized() | shape()
assert ratio > 73
###Output
_____no_output_____
###Markdown
39% recognition chance over the entire dataset.
###Code
ratio = (fullyRecognized() | shape(0)) / (t() | shape(0)) * 100; ratio
assert ratio > 38
###Output
_____no_output_____
###Markdown
Self-consistency check:
###Code
for _i, _a, _b in zip(*(fullyRecognized() | apply(lambda a: a.empirical(), 2) | transpose() | deref())):
if not mo.sameEmpirical(_a, _b):
raise RuntimeError(f"{_i}, {_a}, {_b}")
###Output
_____no_output_____
###Markdown
Everything else not recognized:
###Code
t() | ~inSet(passedStrict()|cut(0)|toSet(), 0) | display(None)
###Output
No Formula Name Molwt Tfp (K) Tb (K) Tc (K) Pc (bar) Vc Zc Omega Dipm CpA CpB CpC CpD dHf dGf Eq VpA VpB VpC VpD Tmin Tmax Lden Tden
544 C10H12 1,2,3,4-tetrahydronaphthalene 132.206 242 480.7 719 35.1 0.303 2760 167100 3 9.5883 4009.49 -64.89 365 500 0.973 293
471 C8H16 1,2,4-trimethylcyclopentane-c,c,t 112.216 391 579 29 0.277 3 9.1341 3073.95 -54.2 283 418
472 C8H16 1,2,4-trimethylcyclopentane-c,t,c 112.216 382.4 571 28 0.246 3 9.1554 3009.7 -53.23 282 417
267 C4H10O2 1,2-dimethoxyethane 90.123 202 358 536 38.7 271 0.235 0.358 0 32.23 0.3567 -0.0001336 8.399E-09 3 9.4039 2869.79 -53.15 262 393 0.867 293
211 C3H8O2 1,2-propanediol 76.096 213 460.5 625 60.7 237 0.28 3.6 0.6322 0.4212 -0.0002981 8.951E-08 -424200 3 13.9122 6091.95 -22.46 357 483 1.036 293
199 C3H6O 1,2-propylene oxide 58.08 161 308 482.2 49.2 186 0.229 0.269 2 -8.457 0.3257 -0.0001989 4.823E-08 -92820 -25800 1 -6.97569 0.6365 -1.49187 -6.37743 249 482.2 0.829 293
212 C3H8O2 1,3-propanediol 76.096 246.4 487.6 724 89.5 3.7 8.269 0.3676 -0.0002162 5.053E-08 -409100 1 -10.20156 2.93938 -6.69889 5.49989 332 724 1.053 293
248 C4H8O2 1,4-dioxane 88.107 285 374.6 587 52.1 238 0.254 0.281 0.4 -53.57 0.5987 -0.0004085 1.062E-07 -315300 -180900 3 9.5125 2966.88 -62.15 275 410 1.033 293
396 C7H35 2,3,4,5,6-pentafluorotoluene 182.091 390.7 566.5 31.3 384 0.255 0.415 1 -8.05688 1.46673 -3.82439 -2.78727 313 566.5
279 C5H6O 2-methyl furan 82.102 338 527 47.2 247 0.266 0.27 0.7 0 0.913 293
278 C5H6N2 2-methyl pyrazine 94.117 410 634.3 50.1 283 0.268 0.315 0 1.044 273
301 C5H10O 2-methyl tetrahydrofuran 86.134 351 537 37.6 267 0.225 0.264 0 0.855 293
160 C2H4O acetaldehyde 44.054 150.2 294 461 55.7 154 0.22 0.303 2.5 7.716 0.1823 -0.0001007 2.38E-08 -164400 -133400 1 -7.04687 0.12142 -0.0266037 -5.903 273 461 0.778 293
162 C2H4O2 acetic acid 60.052 289.8 391.1 592.7 57.9 171 0.201 0.447 1.3 4.84 0.2549 -0.0001753 4.949E-08 -435100 -376900 1 -7.83183 0.000551929 0.24709 -8.50462 304 592.7 1.049 293
231 C4H6O3 acetic anhydride 102.089 199 413.2 569 46.8 0.908 3 -23.13 0.5087 -0.000358 9.835E-08 -576100 -477000 1 -18.1529 18.3036 -20.0953 16.697 336 569 1.087 293
196 C3H6O acetone 58.08 178.2 329.2 508.1 47 209 0.232 0.304 2.9 6.301 0.2606 -0.0001253 2.038E-08 -217700 -153200 1 -7.45514 1.202 -2.43926 -3.3559 259 508.1 0.79 293
153 C2H3N acetonitrile 41.053 229.3 354.8 545.5 48.3 173 0.184 0.327 3.5 20.48 0.1196 -0.00004492 3.203E-09 87920 105700 2 40.774 5392.43 -4.357 2615 300 545.5 0.782 293
148 C2H3ClO acetyl chloride 78.498 160.2 323.9 508 58.7 204 0.28 0.344 2.4 25.02 0.1711 -0.00009856 2.219E-08 -244100 -206400 1 -7.94455 1.81437 -2.09194 -1.98959 267 508 1.104 293
140 C2H2 acetylene 26.038 188.4 308.3 61.4 112.7 0.27 0.19 0 26.82 0.07578 -0.00005007 1.412E-08 226900 209300 1 -6.90128 1.26873 -2.09113 -2.75601 192 308.3 0.615 189
187 C3H4O acrolein 56.064 186 326 506 51.6 0.33 2.9 11.97 0.2106 -0.0001071 1.906E-08 -70920 -65190 3 9.2855 2606.53 -45.15 235 360 0.839 293
188 C3H4O2 acrylic acid 72.064 285 414 615 56.7 210 0.23 0.56 1.742 0.3191 -0.0002352 6.975E-08 -336500 -286300 3 9.9415 3319.18 -80.15 315 450 1.051 293
183 C3H3N acrylonitrile 53.064 189.5 350.5 536 45.6 210 0.21 0.35 3.5 10.69 0.2208 -0.0001565 4.601E-08 185100 195400 3 9.3051 2782.21 -51.15 255 385 0.806 293
197 C3H6O allyl alcohol 58.08 144 370.2 543 -1.105 0.3146 -0.0002032 5.321E-08 -132100 -71300 3 10.2864 2928.2 -85.15 286 400 0.855 288
190 C3H5Cl allyl chloride 76.526 138.7 318.3 514 47.6 234 0.26 0.13 2 2.529 0.3047 -0.0002278 7.293E-08 -628 43630 1 -6.76334 2.5073 -7.64033 11.6666 286 514 0.937 293
224 C4H5N allyl cyanide 67.091 186.7 392 585 39.5 265 0.22 0.39 3.4 21.7 0.2572 -0.0001192 1.229E-08 3 9.3817 3128.75 -58.15 400 430 0.835 293
513 C9H10 alpha-methylstyrene 118.179 438.5 654 34 -24.33 0.6933 -0.000453 1.181E-07 3 9.7106 3644.3 -67.15 348 493 0.911 293
1 AlBr3 aluminum tribromide 266.694 370.7 528 763 28.9 310 0.141 0.399 5 64.9 0.06098 -0.00007306 2.978E-08 -423300 -452200 0
2 AlCl3 aluminum trichloride 133.341 467 453 620 26.3 259 0.132 0.66 2 50.54 0.1037 -0.0001202 4.793E-08 -584900 -570400 0 1.31 473
3 AlI3 aluminum triiodide 407.697 464 655 983 408 2.3 62.7 0.06802 -0.00008113 3.298E-08 -205200 -253100 0
80 H3N ammonia 17.031 195.4 239.8 405.5 113.5 72.5 0.244 0.25 1.5 27.31 0.02383 0.00001707 -1.185E-08 -45730 -16160 2 45.327 4104.67 -5.146 615 220 405.5 0.639 273
82 H4ClN ammonium chloride 53.492 793 882 16.4 3.92 0
377 C6H12O2 amyl formate 116.16 199.7 403.6 576 34.6 0.538 0 0.902 273
345 C6H7N aniline 93.129 267 457.6 699 53.1 274 0.25 0.384 1.6 -40.52 0.6385 -0.0005133 1.633E-07 86920 166800 1 -7.65517 0.85386 -2.51602 -5.96795 376 699 1.022 293
592 C14H10 anthracene 178.234 489.7 613.1 869.3 554 0 -58.98 1.006 -0.0006594 1.606E-07 224800 3 11.0499 6492.44 -26.13 490 655
4 Ar argon 39.948 83.8 87.3 150.8 48.7 74.9 0.291 0.001 0 20.8 0 0 1 -5.90501 1.12627 -0.76787 -1.62721 84 150.8 1.373 90
5 As arsenic 74.922 888 1673 223 34.9 0.056 0.121 0
6 AsCl3 arsenic trichloride 181.281 264.7 403 654 252 1.6 0 2.163 293
79 H3AS arsine 77.946 159.7 218 373.1 0.2 182500 157800 0 1.604 209
399 C7H6O2 benzoic acid 122.124 395.6 523 752 45.6 341 0.25 0.62 1.7 -51.29 0.6293 -0.0004237 1.062E-07 -290400 -210600 3 10.5432 4190.7 -125.2 405 560 1.075 403
397 C7H5N benzonitrile 103.124 260 464.3 699.4 42.2 0.362 3.5 -26.05 0.5732 -0.000443 1.349E-07 219000 261000 2 53.154 7912.31 -5.881 4898 340 699.4 1.01 288
402 C7H8O benzyl alcohol 108.14 257.8 478.6 720.2 44 1.7 -7.398 0.5481 -0.0003357 7.771E-08 -94080 1 -7.09506 1.18389 -9.14255 5.56311 303 720.2 1.041 298
7 BBr3 boron tribromide 250.568 227 364 581 272 0 43.31 0.116 -0.0001267 4.849E-08 -204300 -231200 0 2.643 291
8 BCl3 boron trichloride 117.191 165.9 285.8 455 38.7 239.5 0.245 0.14 0 32.61 0.139 -0.0001461 5.439E-08 -403200 -388200 2 46.103 4443.16 -5.404 2228 230 455 1.349 284
9 BF3 boron trifluoride 67.805 146.5 172 260.8 49.9 114.7 0.264 0.393 0 18.58 0.1399 -0.0001217 3.916E-08 -1136000 -1120000 2 61.138 3481.19 -7.963 576 160 260.8 2.811 293
10 BI3 boron triiodide 391.55 323.1 483 773 356 49.37 0.1028 -0.0001159 4.529E-08 71180 20890 0 3.35 323
11 Br2 bromine 159.808 266 331.9 588 103 127.2 0.268 0.108 0.2 33.86 0.01125 -0.00001192 4.534E-09 30930 3136 3 9.2239 2582.32 -51.56 259 354 3.119 293
574 C11H14O2 butyl benzoate 178.232 251 523 723 26 561 0.25 0.58 -17.37 0.8657 -0.000461 7.253E-08 3 9.7161 4158.47 -94.15 390 570 1.006 293
234 C4H7N butyronitrile 69.107 161 391.1 582.2 37.9 0.373 3.8 15.21 0.3206 -0.0001638 2.982E-08 34100 108700 2 49.985 6476.68 -5.599 3770 320 582.2 0.792 293
352 C6H11N capronitrile 97.161 194 436.8 622 32.5 0.524 3.5 3 9.7814 3677.63 -60.4 363 438 0.809 288
560 C10H19N caprylonitrile 153.269 255.3 516 622 32.5 0 0.82 293
99 CO2 carbon dioxide 44.01 216.6 304.1 73.8 93.9 0.274 0.239 0 19.8 0.07344 -0.00005602 1.715E-08 -393800 -394600 1 -6.95626 1.19695 -3.12614 2.99448 217 304.1
100 CS2 carbon disulfide 76.131 161.3 319 552 79 160 0.276 0.109 0 27.44 0.08127 -0.00007666 2.673E-08 117100 66950 1 -6.63896 1.20395 -0.37653 -4.3282 277 552 1.293 273
97 CO carbon monoxide 28.01 68.1 81.7 132.9 35 93.2 0.295 0.066 0.1 30.87 -0.01285 0.00002789 -1.272E-08 -110600 -137400 1 -6.20798 1.27885 -1.34533 -2.56842 71 132.9 0.803 81
94 CCl4 carbon tetrachloride 153.823 250 349.9 556.4 45.6 275.9 0.272 0.193 0 40.72 0.2049 -0.000227 8.843E-08 -100500 -58280 1 -7.07139 1.71497 -2.8993 -2.49466 250 556.4 1.584 298
96 CF4 carbon tetrafluoride 88.005 86.4 145.1 227.6 37.4 139.6 0.276 0.177 0 13.98 0.2026 -0.0001625 4.513E-08 -933700 -889000 3 9.4341 1244.55 -13.06 93 148 1.33 193
98 COS carbonyl sulfide 60.07 134.3 223 378.8 63.5 136.3 0.275 0.105 0.7 23.57 0.07984 -0.00007017 2.453E-08 -138500 -165800 1 -6.40952 1.21015 -1.54976 -2.10074 162 378.8 1.274 174
22 Cl2 chlorine 70.906 172.2 239.2 416.9 79.8 123.8 0.285 0.09 0 26.93 0.03384 -0.00003869 1.547E-08 0 0 1 -6.34074 1.15037 -1.40416 -2.2322 206 416.9 1.563 239
20 ClF5 chlorine pentafluoride 130.433 260 416 52.7 233 0.355 0.216 30.98 0.3203 -0.0003685 1.462E-07 -238600 -146900 0
103 CHCl3 chloroform 119.378 209.6 334.3 536.4 53.7 238.9 0.293 0.218 1.1 24 0.1893 -0.0001841 6.657E-08 -101300 -68580 1 -6.95546 1.16625 -2.1397 -3.44421 215 536.4 1.489 293
178 C3ClF5O chloropentafluoroacetone 182.475 281 410.6 28.8 0.347 0
557 C10H18 cis-decalin 138.254 230 468.9 702.3 32 0.286 0 -112.5 1.118 -0.0006607 1.437E-07 -169100 85870 3 9.211 3671.61 -69.74 368 495 0.897 293
33 F2N2 cis-difluorodiazine 66.01 167.5 272 70.9 0.252 11.21 0.1745 -0.0001688 5.898E-08 68660 108800 0
134 C2N2 cyanogen 52.035 245.3 252 400 59.8 0.278 0.2 35.94 0.09253 -0.00008148 2.95E-08 309200 297400 2 51.703 4390.8 -6.185 1130 250 400 0.954 252
28 D2 deuterium (equilibrium) 4.032 18.7 23.6 38.2 16.5 60.3 0.313 -0.137 0 30.25 -0.006615 0.0000117 -3.684E-09 0 0 3 6.6752 157.89 0 19 25 0.165 22.7
29 D2 deuterium (normal) 4.032 18.6 23.5 38.4 16.6 -0.16 0 0 0 0
30 D2O deuterium oxide 20.031 277 374.6 644 216.6 56.6 0.225 0.351 1.9 31.82 0.003045 0.00002033 -9.737E-09 -249400 -234800 0 1.105 298
95 CD4 deutromethane 20.071 111.7 189.2 46.6 98.2 0.291 0.032 0 12.49 0.101 -0.00002199 -8.458E-09 -88300 -59540 0
86 H6B2 diborane 27.668 108 185.6 289.8 40.5 0.217 0 31400 83320 3 8.039 1200.78 -31.22 118 181 0.47 153
528 C9H18O dibutyl ketone 142.242 267.3 461.6 640 2.7 0 0.827 286
600 C16H22O4 dibutyl-o-phthalate 278.35 238 608 1.88 1.254 -0.0006121 6.971E-08 3 10.3337 4852.47 -138.1 469 657 1.047 293
270 C4H10S2 diethyl disulfide 122.244 171.7 427.1 642 2 26.9 0.4601 -0.000271 5.97E-08 -74690 22270 3 9.4405 3421.57 -64.19 312 455 0.998 293
300 C5H10O diethyl ketone 86.134 234.2 375.1 561 37.3 336 0.269 0.344 2.7 30.01 0.3939 -0.0001907 3.398E-08 -258800 -135400 1 -7.70542 1.44422 -3.60173 -2.8814 330 561 0.814 293
269 C4H10S diethyl sulfide 90.184 169.2 365.3 557 39.6 318 0.272 0.292 1.6 13.59 0.3959 -0.000178 2.649E-08 -83530 17800 3 9.3329 2896.27 -54.49 260 390 0.837 293
268 C4H10O3 diethylene glycol 106.122 265 519 681 47 73.06 0.3461 -0.0001468 1.846E-08 -571500 3 10.4124 4122.52 -122.5 402 560 1.116 293
460 C8H14O4 diethylsuccinate 174.196 251.9 490.9 663 2.3 0 1.041 293
288 C5H8O dihydropyran 84.118 359 561.7 45.6 268 0.262 0.247 1.4 0
232 C4H6O4 dimethyl oxalate 118.09 327 436.5 628 39.8 0.556 0 1.15 288
173 C2H6S dimethyl sulphide 62.13 174.9 310.5 503 55.3 201 0.266 0.191 1.5 24.3 0.1875 -0.00006875 4.099E-09 -37560 6950 1 -6.94973 1.43646 -2.51444 -2.47611 222 503 0.848 293
579 C12H10 diphenyl 154.212 342.2 529.3 789 38.5 502 0.295 0.372 -97.07 1.106 -0.0008855 0.000000279 182200 280300 1 -7.674 1.23008 -3.67908 -2.29172 342 789 0.99 347
249 C4H8O2 ethyl acetate 88.107 189.6 350.3 523.2 38.3 286 0.252 0.362 1.9 7.235 0.4072 -0.0002092 2.855E-08 -443200 -327600 1 -7.68521 1.36511 -4.0898 -1.75342 289 523.2 0.901 293
289 C5H8O2 ethyl acrylate 100.118 201 373 552 37.4 320 0.261 0.4 16.81 3.69 -0.0001382 -5.732E-09 3 9.4688 2974.94 -58.15 274 409 0.921 293
514 C9H10O2 ethyl benzoate 150.178 238.3 485.9 668.7 23.2 0.48 -9.32936 20.67 0.6887 -0.0003608 5.062E-08 1 -9.32936 2.89807 -6.54758 5.56703 317 668.7 1.046 293
164 C2H5Br ethyl bromide 108.966 154.6 311.5 503.9 62.3 215 0.32 0.229 2 6.657 0.2348 -0.0001472 3.804E-08 -64060 -26330 1 -9.14807 5.49831 -6.68657 6.27287 301 503.9 1.451 298
374 C6H12O2 ethyl butyrate 116.16 180 394.7 569 29.6 421 0.263 0.461 1.8 21.51 0.4928 -0.0001938 3.559E-09 1 -8.00073 1.34045 -3.99843 -3.74347 290 569 0.879 293
165 C2H5Cl ethyl chloride 64.515 136.8 285.5 460.4 52.7 199 0.274 0.191 2 -0.5527 0.2606 -0.000184 5.545E-08 -111800 -60040 1 -7.23667 2.11017 -3.53882 0.34775 217 460.4 0.896 293
166 C2H5F ethyl fluoride 48.06 129.9 235.5 375.3 50.2 169 0.272 0.215 2 4.346 0.218 -0.0001166 2.41E-08 -261700 -209700 1 -6.82738 0.59267 -0.73934 -3.69185 266 375.3
202 C3H602 ethyl formate 74.08 193.8 327.5 508.5 47.4 229 0.257 0.285 2 24.67 0.2316 -0.0000212 -5.359E-08 -371500 1 -7.16968 1.13188 -3.37309 -3.53058 277 508.5 0.927 289
167 C2H5I ethyl iodide 155.967 165 345.6 554 47 1.7 10.11 0.2253 -0.0001382 3.531E-08 -8370 21350 1 -6.50172 1.05321 -3.16148 -0.64188 290 554 1.95 293
375 C6H12O2 ethyl isobutyrate 116.16 185 383.2 555 29.7 421 0.271 0.431 2.1 1 -8.08582 1.61436 -4.14816 -3.8072 280 555 0.869 293
172 C2H6S ethyl mercaptan 62.134 125.3 308.2 499 54.9 207 0.274 0.191 1.5 14.92 0.2351 -0.0001356 3.162E-08 -46140 -4670 1 -6.96578 1.5097 -2.7374 -1.73828 273 499 0.839 293
451 C8H10O ethyl pheny ether 122.167 243 443 647 34.2 0.418 1.2 1 -8.50867 2.56997 -5.78999 0.10899 371 647 0.979 277
369 C6H12O ethyl propyl ketone 100.16 396.6 582.8 33.2 0.378 3 9.5 3144.85 -65.19 347 408 0.813 295
155 C2H4 ethylene 28.054 104 169.3 282.4 50.4 130.4 0.28 0.089 0 3.806 0.1566 -0.00008348 1.755E-08 52340 68160 1 -6.32055 1.16819 -1.55935 -1.83552 105 282.4 0.577 163
171 C2H6O2 ethylene glycol 62.069 260.2 470.5 645 77 2.2 35.7 0.2483 -0.0001497 3.01E-08 -389600 -304700 3 13.6299 6022.18 -28.25 364 494 1.114 293
161 C2H4O ethylene oxide 44.054 161 283.7 469 71.9 140 0.259 0.202 1.9 -7.519 0.2222 -0.0001256 2.592E-08 -52670 -13100 1 -6.56234 0.42696 -1.25638 -3.18133 238 469 0.899 273
177 C2H8N2 ethylenediamine 60.099 284 390.4 593 62.8 206 0.26 0.51 1.9 38.3 0.2407 -0.00004338 -3.948E-08 1 -8.82254 2.27867 -3.52636 -6.97579 285 593 0.896 293
32 F2 fluorine 37.997 53.5 85 144.3 52.2 66.3 0.288 0.054 0 23.22 0.03657 -0.00003613 1.204E-08 0 0 1 -6.18224 1.18062 -1.16555 -1.50167 64 144.3 1.51 85
104 CHF3 fluoroform 70.013 110 191 299.3 48.6 132.7 0.259 0.26 1.6 8.156 0.1813 -0.0001379 3.938E-08 -697500 -662800 1 -7.41994 1.65884 -3.14962 -0.84938 25 299.3 1.246 239
109 CH2O formaldehyde 30.026 156 254 408 65.9 0.253 2.3 23.48 0.03157 0.00002985 -0.000000023 -116000 -110000 1 -7.29343 1.08395 -1.63882 -2.30677 184 408 0.815 253
110 CH2O2 formic acid 46.025 281.5 373.8 580 1.5 11.71 0.1358 -0.00008411 2.017E-08 -378900 -351200 3 10.368 3599.58 -26.09 271 409 1.226 288
222 C4H40 furan 68.075 187.5 304.5 490.2 55 218 0.295 0.209 0.7 -35.53 0.4321 -0.0003455 1.074E-07 -34700 879 3 9.441 2442.7 -45.41 238 363 0.938 293
276 C5H4O2 furfural 96.085 234.5 434.9 670 58.9 0.383 3.6 3 8.5214 2760.09 -110.4 328 434 1.159 293
213 C3H8O3 glycerol 92.095 291 563 726 66.8 255 0.28 3 8.424 0.4442 -0.0003159 9.378E-08 -585300 3 10.619 4487.04 -140.2 440 600 1.261 293
47 He helium-3 3.017 3.19 3.31 1.14 72.9 0.302 -0.473 0 20.8 0 0 0
48 He helium-4 4.003 4.25 5.19 2.27 57.4 0.302 -0.365 0 20.8 0 0 1 -3.97466 1.00074 1.50056 -0.4302 2 5.19 0.123 4.3
275 C5H2F6O2 hexafluoroacetylacetone 208.059 327.3 485.1 27.7 0.278 0
84 H4N2 hydrazine 32.045 274.7 386.7 653 147 0.316 3 9.768 0.1895 -0.0001657 6.025E-08 95250 158600 2 49.476 6951.84 -5.286 1222 350 653 1.008 293
75 H2 hydrogen (equilib) 2.016 14 20.3 33 12.9 64.3 0.303 -0.216 0 27.14 0.009274 -0.00001381 7.645E-09 0 0 1 -5.57929 2.60012 -0.85506 1.70503 14 33 0.071 20
76 H2 hydrogen (normal) 2.016 14 20.4 33.2 13 65.1 0.306 -0.218 0 0
69 HBr hydrogen bromide 80.912 187.1 206.8 363.2 85.5 0.088 0.8 30.65 -0.009462 0.00001722 -6.238E-09 -36260 -53300 2 21.482 2394.35 -1.843 653 200 363.2 2.16 216
70 HCl hydrogen chloride 36.461 159 188.1 324.7 83.1 80.9 0.249 0.133 1.1 30.67 -0.007201 0.00001246 -3.898E-09 -92360 -95330 2 31.994 2626.67 -3.443 538 180 324.7 1.193 188
105 HCN hydrogen cyanide 27.026 259.9 298.9 456.7 53.9 138.8 0.197 0.388 3 21.86 0.06062 -0.00004961 1.815E-08 130600 120200 2 31.122 4183.37 -3.004 1635 280 456.7 0.688 293
71 HD hydrogen deuteride 3.023 16.6 22.1 36 14.8 62.7 0.31 -0.179 0 29.47 -0.001329 0.000001311 1.279E-09 322 -1465 0
72 HF hydrogen fluoride 20.006 190 293 461 64.8 69.2 0.117 0.329 1.9 29.06 0.0006611 -0.000002032 2.504E-09 -271300 -273400 1 -9.74369 4.68946 -2.98358 9.65825 273 461 0.967 293
73 HI hydrogen iodide 127.912 222.4 237.6 424 83.1 0.049 0.5 31.16 -0.01428 0.00002972 -1.353E-08 26380 1591 2 27.264 3013.08 -2.673 923 235 424 2.8 237
78 H2S hydrogen sulfide 34.08 189.6 213.5 373.2 89.4 98.6 0.284 0.081 0.9 31.94 0.001436 0.00002432 -1.176E-08 -20180 -33080 2 36.067 3132.31 -3.985 653 205 373.2 0.993 214
512 C9H10 indane 118.179 451.1 684.9 39.5 0.308 0
50 I2 iodine 253.82 386.8 457.5 819 155 1.3 35.59 0.006515 -0.000006988 2.834E-09 62470 19380 3 9.5395 3709.23 -68.16 383 487 3.74 453
12 BrI iodine bromide 206.813 315 389 719 139 1.2 34.02 0.01229 -0.0000142 5.847E-09 409100 3714 0
427 C7H14O2 isoamyl acetate 130.187 194.7 415.7 599 1.8 3 10.5011 3699.29 -57.54 311 369 0.876 288
378 C6H12O2 isoamyl formate 116.16 396.7 578 0 0.882 293
479 C8H16O2 isoamyl propionate 144.214 433.4 611 0 0.87 293
373 C6H12O2 isobutyl acetate 116.16 174.3 389.7 564 30.2 414 0.267 0.455 1.9 7.31 0.574 -0.0002576 1.101E-08 -495500 1 -8.12456 1.66934 -4.20511 -3.72813 290 564 0.875 293
480 C8H16O2 isobutyl butyrate 144.214 430.1 603 24.5 1 -8.32597 1.4235 -4.25376 -3.09772 310 603 0.863 291
305 C5H10O2 isobutyl formate 102.134 178 371.4 554 37.3 352 0.285 0.396 1.9 19.85 0.4034 -0.0001436 -7.402E-09 1 -8.01454 2.05091 -4.38201 -2.8558 270 554 0.885 293
481 C8H16O2 isobutyl isobutyrate 144.214 421.8 594 24.6 1 -8.18677 1.322 -3.94343 -3.68833 310 594 0.875 273
428 C7H14O2 isobutyl propionate 130.187 201.8 410 583 27.7 1 -8.32761 1.56574 -3.97739 -4.71845 300 583 0.888 273
240 C4H8 isobutylene 56.108 132.8 266.2 417.9 40 239 0.275 0.194 0.5 16.05 0.2804 -0.0001091 9.098E-09 -16910 58110 1 -6.95542 1.35673 -2.45222 -1.4611 170 417.9 0.594 293
242 C4H8O isobutyraldehyde 72.107 208.2 337 513 41.5 274 0.27 0.35 24.46 0.3356 -0.0002057 6.368E-08 -215900 -121400 1 -7.53679 1.08548 -1.52929 -8.48589 286 513 0.789 293
247 C4H8O2 isobutyric acid 88.107 227.2 427.9 609 40.5 292 0.234 0.623 1.3 9.814 0.4668 -0.000372 0.000000135 -484200 2 76.037 9222.72 -8.986 3863 320 609 0.968 293
208 C3H8O isopropyl alcohol 60.096 184.7 355.4 508.3 47.6 220 0.248 0.665 1.7 32.43 0.1885 0.00006406 -9.261E-08 -272600 -173500 1 -8.16927 -0.0943213 -8.1004 7.85 250 508.3 0.786 293
205 C3H7Cl isopropyl chloride 78.542 156 308.9 485 47.2 230 0.269 0.232 2.1 1.842 0.3488 -0.0002244 5.862E-08 -146500 -62550 3 9.4182 2490.48 -43.15 225 340 0.862 293
511 C9H7N isoquinoline 129.162 300 516.4 803 3 9.2957 3968.37 -88.94 437 517 1.091 303
304 C5H10O2 isovaleric acid 102.134 449.7 634 1 3 2.4671 588.09 -261.9 359 378 0.925 293
184 C3H3NO isoxazole 69.063 368 552 2.8 0 1.078 293
145 C2H2O ketene 42.038 138 232 380 65 145 0.3 0.21 1.4 6.385 0.1638 -0.0001084 2.698E-08 -61130 -60330 3 9.3995 1849.21 -35.15 170 255
53 Kr krypton 83.8 115.8 119.9 209.4 55 91.2 0.288 0.005 0 20.8 0 0 2 24.097 1408.77 -2.579 336 115 209.4 2.42 120
404 C7H8O m-cresol 108.14 285.4 475.4 705.8 45.6 309 0.24 0.454 1.8 -45.01 0.7264 -0.0006029 2.077E-07 -132400 -40570 1 -8.58506 2.82624 -8.57418 8.74822 423 705.8 1.034 293
566 C10H20O menthol 156.269 316 489.5 694 0
49 Hg mercury 200.61 234.3 630 1765 1510 42.7 0.439 -0.167 20.8 61340 31860 0 13.594 293
203 C3H6O2 methyl acetate 74.08 175 330.4 506.8 46.9 228 0.254 0.326 1.7 16.55 0.2245 -0.00004342 2.914E-08 -409700 1 -8.05406 2.56375 -5.12994 0.16125 275 506.8 0.934 293
186 C3H4 methyl acetylene 40.065 170.5 249.9 402.4 56.3 164 0.275 0.215 0.7 14.71 0.1864 -0.0001174 3.224E-08 185600 194600 1 -7.4386 2.62026 -5.76535 7.55261 178 402.4 0.706 223
235 C4H7O2 methyl acrylate 86.091 196.7 353.5 536 43 265 0.25 0.35 15.16 0.2796 -0.00008805 -1.66E-08 3 9.4886 2788.43 -59.15 260 390 0.956 293
387 C6H14O methyl amyl ether 102.77 372 546.5 30.4 392 0.262 0.347 0 0.75 298
424 C7H14O methyl amyl ketone 114.188 424.2 611.5 34.4 0.483 0 0.82 288
442 C8H8O2 methyl benzoate 136.151 260.8 472.2 692 36.4 396 0.25 0.43 1.9 -21.21 0.5501 -0.0001799 4.425E-08 -254000 3 9.607 3751.83 -81.15 350 516 1.086 293
111 CH3Br methyl bromide 94.939 179.5 276.6 464 66.1 1.8 14.43 0.1091 -0.00005401 0.00000001 -37680 -28180 1 -7.43951 3.15408 -4.67922 2.33796 184 464 1.737 268
370 C6H120 methyl butyl ketone 100.16 216 400.7 587 33.2 0.392 0 0.816 288
308 C5H10O2 methyl butyrate 102.134 188.4 375.9 554.4 34.8 340 0.257 0.38 1.7 1 -7.776 1.32028 -3.93963 -3.5311 275 554.4 0.898 293
112 CH3Cl methyl chloride 50.488 175.4 249.1 416.3 67 138.9 0.269 0.153 1.9 13.88 0.1014 -0.00003889 2.567E-09 -86370 -62930 1 -6.86672 1.52273 -1.92919 -2.61459 175 416.3 0.915 293
243 C4H8O methyl ethyl ketone 72.107 186.5 352.7 536.8 42.1 267 0.252 0.32 3.3 10.94 0.3559 -0.00019 3.92E-08 -238500 -146200 1 -7.71476 1.71061 -3.6877 -0.75169 255 536.8 0.805 293
214 C3H8S methyl ethyl sulfide 76.157 167.2 339.8 533 42.6 0.216 19.53 0.2891 -0.0001209 1.287E-08 -59660 11400 3 9.3563 2722.95 -48.37 250 360 0.837 293
113 CH3F methyl fluoride 34.033 131.4 194.7 315 56 113.2 0.24 0.187 1.8 13.82 0.08616 -0.00002071 -1.985E-09 -234000 -210100 1 -6.78099 0.828379 -1.41137 -2.417 135 315 0.843 213
163 C2H4O2 methyl formate 60.052 174.2 304.9 487.2 60 172 0.255 0.257 1.8 1.432 0.27 -0.0001949 5.702E-08 -350000 -297400 1 -6.99601 0.89328 -2.52294 -3.16636 220 487.2 0.974 293
120 CH6N2 methyl hydrazine 46.072 362 567 82.4 271.2 0.474 0.425 1.7 85410 178000 3 8.5222 2319.84 -91.7 270 400
114 CH3I methyl iodide 141.939 206.7 315.7 528 65.9 1.6 10.81 0.1389 -0.0001041 3.486E-08 13980 15660 1 -6.51125 0.888786 -1.36624 -3.03652 259 528 2.279 293
371 C6H12O methyl isobutyl ketone 100.16 189 389.6 571 32.7 0.385 2.8 3.894 0.5656 -0.0003318 8.231E-08 -284000 1 -8.54349 2.92801 -5.27311 -2.54507 295 571 0.801 293
309 C5H10O2 methyl isobutyrate 102.134 185.4 365.5 540.8 34.3 339 0.259 0.362 2 1 -7.65814 1.29248 -3.85632 -3.4985 270 540.8 0.891 293
154 C2H3NO methyl isocyanate 57.052 312 491 55.7 0.278 35.76 0.104 -0.00000582 -1.687E-08 -90000 3 9.7056 2480.37 -56.31 230 340 0.958 293
299 C5H10O methyl isopropyl ketone 86.134 181 367.5 553.4 38.5 310 0.259 0.331 2.8 -2.914 0.4991 -0.0002935 6.665E-08 3 7.5577 1993.12 -103.2 271 406 0.803 293
118 CH4S methyl mercaptan 48.107 150 279.1 470 72.3 144.8 0.268 0.153 1.3 13.27 0.1457 -0.00008545 2.075E-08 -22990 -9923 1 -6.793 1.52687 -2.45989 -1.34839 222 470 0.866 293
298 C5H10O methyl n-propyl ketone 86.134 196 375.4 561.1 36.9 301 0.238 0.346 2.5 1.147 0.4802 -0.0002818 6.661E-08 -258800 -137200 3 9.3829 2934.87 -62.25 275 410 0.806 293
441 C8H8O methyl phenyl ketone 120.151 292.8 474.9 714 40.6 376 0.257 0.42 3 -29.58 0.641 -0.0004071 9.722E-08 -86900 1840 1 -7.63896 1.20432 -3.60753 -1.55754 298 714 1.032 288
250 C4H8O2 methyl propionate 88.107 185.7 352.8 530.6 40 282 0.256 0.35 1.7 18.2 0.314 -0.00009353 -1.828E-08 1 -8.23756 2.71406 -5.35097 -2.34114 294 530.6 0.915 293
307 C5H10O2 methyl propionate 102.134 199.3 372.2 546 33.6 345 0.256 0.391 1.8 19.85 0.4034 -0.0001437 -7.394E-09 -470200 -323700 1 -8.55094 3.10067 -6.99241 3.45112 307 546 0.895 289
443 C8H8O3 methyl salicylate 152.149 264.6 496.1 709 2.4 3 9.6897 3943.86 -86.19 350 495 1.182 298
121 CH6Si methyl silane 46.145 116.7 215.6 352.5 0.7 0
210 C3H8O2 methylal 76.096 168 315 480.6 39.5 213 0.211 0.286 1 3 9.2035 2415.92 -52.58 270 315 0.888 291
449 C8H10O m-ethylphenol 122.167 269 491.6 718.8 -147000 3 10.5753 4272.77 -86.08 370 500 1.025 273
176 C2H7NO monoethanolamine 61.084 283.5 443.5 614 44.5 196 0.17 2.6 9.311 0.3009 -0.0001818 4.656E-08 -201700 1 -10.8842 3.303743 -7.21939 -2.99322 379 614 1.016 293
257 C4H9NO morpholine 87.122 268.4 401.4 618 54.7 253 0.27 0.37 1.5 -42.8 0.5388 -0.0002666 4.199E-08 3 9.6162 3171.35 -71.15 300 440 1 293
608 C18H14 m-terphenyl 230.31 360 638 924.9 35.1 768 0.358 0.449 0
414 C7H9N m-toluidine 107.156 242.8 476.6 709 41.5 0.41 1.5 -15.99 0.5681 -0.0003033 4.643E-08 1 -8.43741 2.58101 -6.00776 -1.52856 395 709 0.989 293
445 C8H10 m-xylene 106.168 225.3 412.3 617.1 35.4 376 0.259 0.325 0.3 -29.17 0.6297 -0.0003747 8.478E-08 17250 118900 1 -7.59222 1.39441 -3.22746 -2.40376 332 617.1 0.864 293
458 C8H11N n,n-dimethylaniline 121.183 275.6 467.3 687 36.3 0.411 1.6 84150 231400 3 10.3445 4276.08 -52.8 345 480 0.956 293
523 C9H13N n,n-dimethyl-o-toluidine 135.21 212 467.3 668 31.2 0.484 0.9 0 0.929 293
372 C6H12O2 n-butyl acetate 116.16 199.7 399.3 579 31.4 400 0.26 0.417 1.8 13.62 0.5489 -0.0002278 7.91E-10 -486800 1 -8.36658 2.40985 -6.42511 4.85939 333 579 0.898 273
556 C10H15N n-butylaniline 149.236 259 513.9 721 28.3 -34.07 0.9144 -0.000556 1.287E-07 3 9.7792 4079.72 -96.15 385 560 0.932 293
241 C4H8O n-butyraldehyde 72.107 176.8 348 545.4 53.8 0.352 2.6 14.08 0.3457 -0.0001723 2.887E-08 -205200 -114800 1 -7.01403 0.12265 -0.00073 -8.50911 304 545.4 0.802 293
246 C4H8O2 n-butyric acid 88.107 267.9 437.2 628 52.7 290 0.292 0.683 1.5 11.74 0.4137 -0.000243 5.531E-08 -476200 1 -10.0392 3.15679 -7.72604 5.2763 364 628 0.958 293
617 C20H42 n-eicosane 282.556 310 617 767 11.1 0.907 -22.38 1.939 -0.001117 2.528E-07 -456100 117400 3 9.8483 4680.46 -141.1 471 652 0.775 313
618 C20H420 n-eicosanol 298.555 339 629 770 12 -12.58 1.95 -0.001118 2.516E-07 -608100 -19430 3 9.2031 3912.1 -203.1 492 679
58 Ne neon 20.183 24.5 27.1 44.4 27.6 41.6 0.311 -0.029 0 20.8 0 0 1 -6.07686 1.59402 -1.06092 4.06656 25 44.4 1.204 27
459 C8H11N n-ethylaniline 121.183 207.4 476.2 698 1.7 3 10.4715 4382.63 -58.88 321 481 0.963 293
576 C11H22 n-hexylcylopentane 154.297 200.2 476.3 660.1 21.3 0.476 -58.32 1.128 -0.0006536 1.473E-07 -209600 78250 3 9.3939 3702.56 -81.55 351 507 0.797 394
54 NO nitric oxide 30.006 109.5 121.4 180 64.8 57.7 0.25 0.588 0.2 29.35 -0.0009378 0.000009747 -4.187E-09 90430 86750 2 54.894 2465.78 -7.211 209 115 180 1.28 121
56 N2 nitrogen 28.013 63.3 77.4 126.2 33.9 89.8 0.29 0.039 0 31.15 -0.01357 0.0000268 -1.168E-08 0 0 1 -6.09676 1.1367 -1.04072 -1.93306 63 126.2 0.804 78
17 ClF2N nitrogen chloride difluoride 87.456 207 337.5 51.5 0.154 0
55 NO2 nitrogen dioxide 46.006 261.9 294.3 431 101 167.8 0.473 0.834 0.4 24.23 0.04836 -0.00002081 2.93E-10 33870 52000 2 55.242 6073.34 -6.094 780 270 431 1.45 293
37 F3N nitrogen trifluoride 71.002 66.4 144.4 234 45.3 0.135 0.2 11.41 0.1948 -0.0002023 7.454E-08 -131600 -90100 2 32.599 1970.37 -3.81 509 130 234 1.54 144
21 ClNO nitrosyl chloride 65.459 213.5 267.7 440 1.8 34.1 0.04472 -0.0000334 1.015E-08 52630 66990 2 29.76 3748.59 -2.819 900 230 440 1.42 261
57 N2O nitrous oxide 44.013 182.3 184.7 309.6 72.4 97.4 0.274 0.165 0.2 21.62 0.07281 -0.00005778 1.83E-08 81600 10370 2 39.824 2867.98 -4.655 557 190 309.6 1.226 184
31 FNO2 nitryl fluoride 65.003 231.2 349.5 0.5 17.78 0.1416 -0.0001245 4.14E-08 -108900 -66490 0
412 C7H9N n-methylaniline 107.156 216 469.4 701 52 0.475 1.7 85410 199300 3 9.6864 3756.28 -80.71 320 480 0.989 293
306 C5H10O2 n-propyl acetate 102.134 178 374.7 549.4 33.3 345 0.252 0.391 1.8 15.42 0.4501 -0.0001686 -1.439E-08 -466000 1 -7.85524 1.43936 -4.30187 -3.0483 312 549.4 0.887 293
425 C7H14O2 n-propyl butyrate 130.187 176 416.2 590 27.1 1.8 1 -8.28062 1.40511 -4.19323 -3.70158 300 590 0.879 288
251 C4H8O2 n-propyl formate 88.107 180.3 354.1 538 40.6 285 0.259 0.314 1.9 1 -7.48563 1.7126 -5.16404 1.6429 299 538 0.911 289
426 C7H14O2 n-propyl isobutyrate 130.187 408.6 581 28.3 1 -8.52052 2.1066 -4.44053 -3.9042 300 581 0.884 273
482 C8H16O2 n-propyl isovalerate 144.214 429.1 609 0 0.863 293
376 C6H12O2 n-propyl propionate 116.16 197.3 395.8 571 30.2 1.8 1 -8.00913 1.33297 -3.97513 -3.83674 290 571 0.881 293
611 C18H36 n-tridecylcylopenyane 252.486 278 598.6 761 12 0.755 -64.21 1.79 -0.001032 2.309E-07 -354000 137100 3 9.6068 4483.13 -131.3 453 634 0.818 293
303 C5H10O n-valeric acid 102.134 239 459.5 651 13.39 0.5033 -0.0002931 6.619E-08 -490700 -357400 3 11.0104 4092.15 -86.55 350 495 0.939 293
403 C7H8O o-cresol 108.14 304.1 464.2 697.6 50.1 0.433 1.6 -32.28 0.7005 -0.0005924 2.214E-07 -128700 -33000 1 -8.82061 3.14197 -6.63041 -0.84857 393 697.6 1.028 313
448 C8H10O o-ethylphenol 122.167 269.8 477.7 703 -146000 3 11.3408 4928.36 -45.75 350 500 1.037 273
607 C18H14 o-terphenyl 230.31 330 605 891 39 753 0.396 0.431 0
413 C7H9N o-toluidine 107.156 258.4 473.5 694 37.5 0.438 1.6 1 -8.68458 2.72553 -5.9462 -1.09185 392 694 0.998 293
59 O2 oxygen 31.999 54.4 90.2 154.6 50.4 73.4 0.288 0.025 0 28.11 -0.00000368 0.00001746 -1.065E-08 0 0 1 -6.28275 1.73619 -1.81349 -0.0253645 54 154.6 1.149 90
35 F2O oxygen difluoride 53.995 50 128.4 215 49.6 0.2 22.07 0.09875 -0.0001028 3.796E-08 24530 41780 0 1.521 128
444 C8H10 o-xylene 106.168 248 417.6 630.3 37.3 369 0.262 0.31 0.5 -15.85 0.5962 -0.0003443 7.528E-08 19000 122200 1 -7.53357 1.40968 -3.10985 -2.85992 337 630.3 0.88 293
61 O3 ozone 47.998 80.5 181.2 261.1 55.7 88.9 0.228 0.691 0.6 20.54 0.08009 -0.00006243 1.697E-08 142800 162900 3 9.1225 1272.18 -22.16 109 174 1.356 161
405 C7H8O p-cresol 108.14 307.9 475.1 704.6 51.5 0.505 1.6 -40.63 0.7055 -0.0005757 1.967E-07 -125500 -30900 1 -9.23951 3.2988 -7.17725 -0.48 401 704.6 1.019 313
333 C6HF5O pentafluorophenol 184.063 418.8 609 40 348 0.275 0.502 1 -8.69734 2.03071 -5.32619 -3.28915 379 609
16 ClFO3 perchloryl fluoride 102.448 125.5 226.4 368.4 53.7 160.8 0.282 0.17 0 12.45 0.239 -0.0002346 8.321E-08 -21440 50620 0 2.003 399
331 C6F14 perfluoro-2,3-dimethlybutane 338.044 332.9 463 18.7 525 0.256 0.394 3 9.9846 2933.85 -38.7 262 333
179 C3F60 perfluoroacetone 166.02 245.7 357.1 28.4 329 0.314 0.365 0
542 C10F18 perfluorodecalin 462.074 415 566 15.2 0.392 0
393 C7F8 perfluorotoluene 236.061 377.7 534.5 27.1 428 0.26 0.475 0
450 C8H10O p-ethylphenol 122.167 318 491.1 716.4 -144700 3 12.4703 5579.62 -44.15 370 500
593 C14H10 phenanthrene 178.234 373.7 613 873 554 0 -58.98 1.006 -0.0006594 1.606E-07 3 10.0985 5477.94 -69.39 450 655
344 C6H6O phenol 94.113 314 455 694.2 61.3 229 0.24 0.438 1.6 -35.84 0.5983 -0.0004827 1.527E-07 -96420 -32900 1 -8.7555 2.92651 -6.31601 -1.36889 380 694.2 1.059 313
92 CCL2O phosgene 98.916 145 281 455 56.7 190.1 0.285 0.205 1.1 28.09 0.1361 -0.0001374 5.07E-08 -221100 -206900 1 -7.08177 1.60461 -2.57153 -1.88377 216 455 1.381 293
81 H3P phosphine 33.998 140 185.4 324.5 65.4 0.038 0.6 23.23 0.04401 0.00001303 -1.598E-08 22900 25410 0 1.529 298
83 H4ClP phosphonium chloride 70.459 246 322.3 73.7 1.64 0
63 P phosphorus 30.974 553 994 20.8 334100 292200 0
18 ClF2P phosphorus chloride difluoride 104.423 225.9 362.4 45.2 0.164 0
23 Cl2FP phosphorus dichloride fluoride 120.878 287 463 49.6 0.174 0
27 Cl5P phosphorus pentachloride 208.26 148 433 646 0.8 69.46 0.2079 -0.0002455 9.914E-08 -342900 -278500 0
13 Br3P phosphorus tribromide 270.723 233 446.1 711 300 0.5 61.02 0.07421 -0.00008899 3.631E-08 -128500 -157500 0 2.852 288
24 Cl3P phosphorus trichloride 137.333 161 349.1 563 264 0.9 48.49 0.1131 -0.0001334 5.38E-08 -271300 -257700 0 1.574 294
39 F3P phosphorus trifluoride 87.968 178 271.2 43.3 0.326 21.79 0.1733 -0.0001852 6.974E-08 -937800 -925300 0 3.1 172
439 C8H4O3 phthalic anhydride 148.118 404 560 810 47.6 368 0.26 5.3 -4.455 0.654 -0.0004283 1.009E-07 -372000 3 9.3782 4467.01 -83.15 409 615
310 C5H11N piperidine 85.15 262.7 379.6 594 47.6 289 0.28 0.251 1.2 -53.07 0.6289 -0.0003358 6.427E-08 -49030 1 -7.56707 2.15002 -3.8903 -3.7036 316 594 0.862 293
198 C3H6O propionaldehyde 58.08 193 321 515.3 63.3 0.313 2.7 11.72 0.2614 -0.00013 2.126E-08 -192200 -130500 1 -7.18479 1.00298 -1.49247 -5.13288 235 515.3 0.797 293
201 C3H6O2 propionic acid 74.08 252.5 414.5 612 54 222 0.183 0.52 1.5 5.669 0.3689 -0.0002865 9.877E-08 -455400 -369600 1 -8.69958 1.4946 -4.50355 1.06898 345 612 0.993 293
192 C3H5N propionitrile 55.08 180.3 370.3 564.4 41.8 229 0.205 0.313 3.7 15.4 0.2245 -0.00011 1.954E-08 50660 96210 1 -7.27719 0.46035 -0.45714 -10.1636 309 564.4 0.782 293
204 C3H7Cl propyl chloride 78.542 150.4 320.4 503 45.8 254 0.278 0.235 2 -3.345 0.3626 -0.0002508 7.448E-08 -130200 -50700 1 -7.55764 2.60153 -5.06041 3.31163 248 503 0.891 293
194 C3H6 propylene 42.081 87.9 225.5 364.9 46 181 0.274 0.144 0.4 3.71 0.2345 -0.000116 2.205E-08 20430 62760 1 -6.64231 1.21857 -1.81005 -2.48212 140 364.9 0.612 223
609 C18H14 p-terphenyl 230.31 485 649 926 33.2 763 0.329 0.523 0.7 0
415 C7H9N p-toluidine 107.156 316.9 473.7 667 23.8 0.443 1.6 3 10.0766 4041.04 -72.15 350 500 0.964 323
446 C8H10 p-xylene 106.168 286.4 411.5 616.2 35.1 379 0.26 0.32 0.1 -25.09 0.6042 -0.0003374 6.82E-08 17960 121200 1 -7.63495 1.50724 -3.19678 -2.7871 331 616.2 0.861 293
225 C4H5N pyrrole 67.091 403 639.8 1.8 108400 3 10.1764 3457.47 -62.73 330 440 0.967 294
256 C4H9N pyrrolidine 71.123 359.6 568.6 56.1 249 0.295 0.274 1.6 -51.53 0.5338 -0.000324 7.528E-08 -3600 114800 1 -7.73658 2.33495 -4.20213 -3.71251 316 568.6 0.852 295
510 C9H7N quinoline 129.162 258 510.8 782 3 9.0779 3842.4 -86.94 437 515 1.095 293
64 Rn radon 222 202 211.4 377 62.8 -0.008 20.8 0 0 0 4.4 211
66 Se selenium 78.96 1010 1766 272 0.346 0
85 H4Si silane 32.122 88.2 161 269.7 48.4 0.068 0 11.18 0.122 -0.00005548 6.84E-09 32660 55180 0 0.68 88
14 Br4SI silicon tetrabromide 347.702 278.6 427 663 382 0 74.66 0.1097 -0.0001298 5.246E-08 -415900 -432400 0 2.772 298
25 Cl4Si silicon tetrachloride 169.898 204.3 330.8 508.1 35.9 325.7 0.277 0.232 0 56.58 0.1636 -0.0001897 7.565E-08 -657700 -617800 3 9.1817 2634.16 -43.15 238 364 1.48 293
43 F4Si silicon tetrafluoride 104.09 183 187 259 37.2 0.753 0 26.78 0.2157 -0.0002204 8.031E-08 -1616000 -1574000 0
51 I4Si silicon tetraiodide 535.706 393.7 560.5 944 558 84.79 0.078 -0.0000934 3.806E-08 -110500 -159800 0
440 C8H8 styrene 104.152 242.5 418.3 647 39.9 0.257 0.1 -28.25 0.6159 -0.0004023 9.935E-08 147000 213900 1 -7.15981 1.78861 -5.10359 1.63749 303 647 0.906 293
233 C4H6O4 succinic acid 118.09 456 508 2.2 15.07 0.4689 -0.0003143 7.938E-08 0
65 S sulfur 32.066 717.8 1314 207 0.171 279200 238600 0
60 O2S sulfur dioxide 64.063 197.7 263.2 430.8 78.8 122.2 0.269 0.256 1.6 23.85 0.06699 -0.00004961 1.328E-08 -297100 -300400 2 48.882 4552.5 -5.666 990 235 430.8 1.455 263
45 F6S sulfur hexafluoride 146.054 222.5 209.6 318.7 37.6 198.8 0.282 0.286 0 -0.6599 0.4639 -0.0005089 1.953E-07 -1222000 -1118000 3 12.7583 2524.78 -11.16 159 220 1.83 223
42 F4S sulfur tetrafluoride 108.058 152 232.7 364 1 25.42 0.242 -0.0002653 1.017E-07 -781300 -740600 3 7.4561 1218.59 -73.24 161 224 1.936 195
62 O3S sulfur trioxide 80.058 290 318 491 82.1 127.3 0.256 0.481 0 19.21 0.1374 -0.0001176 0.000000037 -396000 -371700 2 132.94 10420.1 -17.38 1200 300 491 1.78 318
255 C4H9Cl tert-butyl chloride 92.569 247.8 324 507 39.5 295 0.28 0.19 2.1 -3.931 0.4652 -0.2886 7.871E-08 -183400 -64140 3 9.1919 2567.15 -44.14 235 360 0.842 293
41 F4N2 tetrafluorohydrazine 104.016 105 199 309 37.5 0.206 0.3 3.553 0.3509 -0.0003637 1.338E-07 -8374 79880 0 1.5 163
244 C4H8O tetrahydrofuran 72.107 164.7 338 540.1 51.9 224 0.259 0.217 1.7 19.1 0.5162 -0.0004132 1.454E-07 -184300 3 9.4867 2768.38 -46.9 270 370 0.889 293
302 C5H10O tetrahydropyran 86.134 361 572.2 47.7 263 0.263 0.218 1.6 0 0.886 293
252 C4H8S tetrahydrothiophene 88.172 177 394.2 632 1.9 3 9.387 3160.1 -57.2 308 473 1 293
223 C4H4S thiophene 84.136 234.9 357.2 579.4 56.9 219 0.258 0.196 0.5 -30.61 0.448 -0.0003772 1.253E-07 115800 126900 1 -7.05208 1.6964 -3.17778 -1.57742 312 579.4 1.071 289
19 ClF2PS thiophosphoryl chloride difluoride 136.489 279 439.2 41.4 0.202 0
40 F3PS thiophosphoryl trifluoride 120.034 220.9 346 38.2 0.187 0.6 24.92 0.2326 -0.0002472 9.275E-08 -992300 -974300 0
555 C10H14O thymol 150.221 323 505.7 698 0
15 Br4Ti titanium tetrabromide 367.536 312 503 795.7 391 84.49 0.07785 -0.00009361 3.826E-08 -550600 -569500 0
26 Cl4Ti titanium tetrachloride 189.712 243 409.6 638 46.6 339.2 0.298 0.268 0 70.64 0.1224 -0.0001443 5.819E-08 -763700 -727200 0 1.7 298
52 I4Ti titanium tetraiodide 555.52 423 650 1040 505 95.75 0.04253 -0.00005179 2.135E-08 -277500 -328200 0
400 C7H8 toluene 92.141 178 383.8 591.8 41 316 0.263 0.263 0.4 -24.35 0.5125 -0.0002765 4.911E-08 50030 122100 1 -7.28607 1.38091 -2.83433 -2.79168 309 591.8 0.867 293
558 C10H18 trans-decalin 138.254 242.8 460.5 687.1 31.4 0.27 0 -97.67 1.045 -0.0005476 8.981E-08 -182400 73480 3 9.1787 3610.66 -66.49 363 470 0.87 293
34 F2N2 trans-difluorodiazine 66.01 161.7 260 55.7 0.217 22.54 0.1377 -0.0001258 4.232E-08 81220 120500 0
139 C2CHF302 trifluoroacetic acid 14.024 257.9 346 491.3 32.6 204 0.163 0.54 2.3 3 7.5356 2828.94 -57.11 285 345 1.535 273
131 C2F3N trifluoroacetonitrile 95.023 205.5 311.1 36.2 202 0.283 0.267 22.13 0.2519 -0.0002361 8.207E-08 -49570 -46220 3 9.7917 1781.77 -23.28 142 206
38 F3NO trifluoroamine oxide 87.001 186 303 64.3 146.9 0.375 0.212 15.13 0.2446 -0.0002528 9.375E-08 -163300 -96460 0
215 C3H9BO3 trimethyl borate 103.912 342 501.7 35.9 0.415 0.8 0 0.915 293
67 T2 tritium 6.32 25 40 58 0 0
46 F6U uranium hexafluoride 352.018 337 329 505.8 46.6 250 0.277 0.318 0 -2139000 -2060000 0
297 C5H10O valeraldehyde 86.134 182 376 554 35.4 333 0.26 0.4 2.6 14.24 0.4329 -0.0002107 3.162E-08 -228000 -108400 3 9.5421 3030.2 -58.15 277 412 0.81 293
230 C4H6O2 vinyl acetate 86.091 173 346 525 43.5 265 0.26 0.34 1.7 15.16 0.2795 -0.00008805 -1.66E-08 -316000 1 -7.80478 1.80668 -4.4816 1.70357 295 525 0.932 293
146 C2H3Cl vinyl chloride 62.499 119.4 259.8 425 51.5 169 0.265 0.122 1.5 5.949 0.2019 -0.0001536 4.773E-08 35170 51540 1 -6.50008 1.21422 -2.57867 -2.00937 208 425 0.969 259
245 C4H8O vinyl ethyl ether 72.107 157.9 308.7 475 40.7 0.268 1.3 17.28 0.3236 -0.0001471 2.15E-08 -140300 1 -7.33727 1.50878 -3.30376 -1.10728 256 475 0.793 293
151 C2H3F vinyl fluoride 46.044 130 201 327.9 52.4 144 0.277 0.157 1.4 -117200 1 -6.80471 1.67182 -3.29094 -0.69493 114 327.9 0.681 263
189 C3H4O2 vinyl formate 72.064 215.5 319.6 475 57.7 210 0.31 0.55 27.81 0.1839 -0.0000356 -2.335E-07 3 10.0329 2569.68 -63.15 240 350 0.963 293
200 C3H6O vinyl methyl ether 58.08 151.5 278 436 47.6 205 0.27 0.34 15.63 0.2341 -0.00009697 1.062E-08 3 7.84 1980.22 -25.15 190 315 0.75 293
221 C4H4 vinylacetylene 52.076 227.6 278.1 455 49.6 202 0.26 0.092 6.757 0.2841 -0.0002265 7.461E-08 304800 306200 3 9.3898 2203.57 -43.15 200 305 0.71 273
77 H2O water 18.015 273.2 373.2 647.3 221.2 57.1 0.235 0.344 1.8 32.24 0.001924 0.00001055 -3.596E-09 -242000 -228800 1 -7.76451 1.45838 -2.7758 -1.23303 275 647.3 0.998 293
68 Xe xenon 131.3 161.3 165 289.7 58.4 118.4 0.287 0.008 0 20.8 0 0 2 24.809 1951.76 -2.544 603 170 289.7 3.06 165
36 F2Xe xenon difluoride 169.296 387.5 631 93.2 148.6 0.264 0.317 0
44 F4Xe xenon tetrafluoride 207.292 387 388.9 612 70.4 188.6 0.261 0.357 -187600 0
|
Final Codes/.ipynb_checkpoints/LR -- F -- 3rd Phase -- WIth modification -- Done-checkpoint.ipynb | ###Markdown
Preprocessing
###Code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# training_data is assumed to have been loaded earlier (the APS failure training CSV,
# read the same way as the test set further below)
plt.figure(figsize=(20,12))
sns.heatmap(training_data.isnull(),yticklabels=False,cbar=False,cmap = 'viridis')
###Output
_____no_output_____
###Markdown
Missing value handling We are going to use different approaches for the missing values: 1. Removing columns having more than 80% missing values (self-intuition) 2. Keeping all the features 3. Later, we will try to implement some feature engineering. For the rest of the missing values, we replace them with their mean() for now. Second Approach
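As a rough sketch of the first approach (assuming ``training_data`` is the APS failure DataFrame already loaded above), columns with more than 80% missing values could be dropped like this:

```python
# Hypothetical sketch: drop columns whose missing-value ratio exceeds 80%
missing_ratio = training_data.isnull().mean()
cols_to_drop = missing_ratio[missing_ratio > 0.8].index
reduced_training_data = training_data.drop(columns=cols_to_drop)
```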
###Code
sample_training_data = training_data
sample_training_data.fillna(sample_training_data.mean(),inplace=True)
#after replacing with mean()
plt.figure(figsize=(20,12))
sns.heatmap(sample_training_data.isnull(),yticklabels=False,cbar=False,cmap='viridis')
#as all the other values are numerical except Class column so we can replace them with 1 and 0
sample_training_data = sample_training_data.replace('neg',0)
sample_training_data = sample_training_data.replace('pos',1)
sample_training_data.head()
###Output
_____no_output_____
###Markdown
Testing Data preprocessing
###Code
testing_data = pd.read_csv("../Data/aps_failure_test_set.csv",na_values="na")
testing_data.head()
sample_testing_data = testing_data
sample_testing_data.fillna(sample_testing_data.mean(),inplace=True)
#after replacing with mean()
plt.figure(figsize=(20,12))
sns.heatmap(sample_testing_data.isnull(),yticklabels=False,cbar=False,cmap='viridis')
#as all the other values are numerical except Class column so we can replace them with 1 and 0
sample_testing_data = sample_testing_data.replace('neg',0)
sample_testing_data = sample_testing_data.replace('pos',1)
sample_testing_data.head()
###Output
_____no_output_____ |
DA&DDD project.ipynb | ###Markdown
The MNIST database was constructed from NIST's Special Database 3 and Special Database 1 which contain binary images of handwritten digits. SD-3 was collected among Census Bureau employees, while SD-1 was collected among high-school students.The MNIST training set is composed of ``30 000`` patterns from SD-3 and ``30 000`` patterns from SD-1. The test set was composed of ``5 000`` patterns from SD-3 and ``5 000`` patterns from SD-1. The ``60 000`` pattern training set contained examples from approximately ``250`` writers. The writers of the training set and test set were disjoint.The MNIST dataset contains images presented in efficient way in numpy ndarrays with ``28 * 28 = 784`` pixels in a single MNIST image. The dataset contains images of handwritten digits. So, the domain is 10 digits ``[0...9]``
###Code
import gzip
import pickle
import random

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

class_names = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
###Output
_____no_output_____
###Markdown
Downloading the MNIST data as a tuple containing the training data, the validation data, and the test data.The ``tr_d`` is returned as a tuple with two entries.The first entry contains the actual training images. This is a numpy ndarray with 50,000 entries. The second entry in the ``tr_d`` tuple is a numpy ndarray containing 50,000 entries. Those entries are just the digit values ``[0...9]`` for the corresponding images contained in the first entry of the tuple.The ``te_d`` is similar, except it contains only 10,000 images.
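A quick way to check this structure (a small sketch, assuming the cell below has already populated ``tr_d``, ``va_d`` and ``te_d``):

```python
# Shapes of the images and labels in each split (assumes tr_d/va_d/te_d exist)
print(tr_d[0].shape, tr_d[1].shape)   # (50000, 784) images, (50000,) labels
print(va_d[0].shape, va_d[1].shape)   # (10000, 784), (10000,)
print(te_d[0].shape, te_d[1].shape)   # (10000, 784), (10000,)
```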
###Code
f = gzip.open('data/mnist.pkl.gz', 'rb')
u = pickle._Unpickler(f)
u.encoding = 'latin1'
tr_d, va_d, te_d = u.load()
f.close()
###Output
_____no_output_____
###Markdown
For use in neural networks, it's helpful to modify the format of the ``training_outputs`` (labels) a little.``Example:``- ``input: 3``- ``output: [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]``
###Code
def vector_output(j):
e = np.zeros((10, 1))
e[j] = 1.0
return e
###Output
_____no_output_____
###Markdown
Reshaping into a format which is more convenient for use in a neural network.The ``training_data`` is a list containing 50,000 2-tuples ``(x, y)``. ``x`` is a 784-dimensional numpy.ndarray containing the input image. ``y`` is a 10-dimensional numpy.ndarray representing the unit vector corresponding to the correct digit for ``x``.``test_data`` is a list containing 10,000 2-tuples ``(x, y)``. In each case, ``x`` is a 784-dimensional numpy.ndarray containing the input image, and ``y`` is the corresponding classification, i.e., the digit value (an integer) corresponding to ``x``.
###Code
training_inputs = [np.reshape(x, (784, 1)) for x in tr_d[0]]
training_outputs = [vector_output(y) for y in tr_d[1]]
training_data = list(zip(training_inputs, training_outputs))
test_inputs = [np.reshape(x, (784, 1)) for x in te_d[0]]
test_data = list(zip(test_inputs, te_d[1]))
###Output
_____no_output_____
###Markdown
Raw format of the data: for example, the second "image" in the training set.
###Code
print(np.reshape(tr_d[0][1], (28, 28)))
###Output
[[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0.19921875 0.62109375 0.98828125
0.62109375 0.1953125 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0.1875 0.9296875 0.984375 0.984375
0.984375 0.92578125 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0.2109375 0.88671875 0.98828125 0.984375 0.93359375
0.91015625 0.984375 0.22265625 0.0234375 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.0390625
0.234375 0.875 0.984375 0.98828125 0.984375 0.7890625
0.328125 0.984375 0.98828125 0.4765625 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.63671875
0.984375 0.984375 0.984375 0.98828125 0.984375 0.984375
0.375 0.73828125 0.98828125 0.65234375 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0.19921875 0.9296875
0.98828125 0.98828125 0.7421875 0.4453125 0.98828125 0.890625
0.18359375 0.30859375 0.99609375 0.65625 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0.1875 0.9296875 0.984375
0.984375 0.69921875 0.046875 0.29296875 0.47265625 0.08203125
0. 0. 0.98828125 0.94921875 0.1953125 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0.1484375 0.64453125 0.98828125 0.91015625
0.8125 0.328125 0. 0. 0. 0.
0. 0. 0.98828125 0.984375 0.64453125 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0.02734375 0.6953125 0.984375 0.9375 0.27734375
0.07421875 0.109375 0. 0. 0. 0.
0. 0. 0.98828125 0.984375 0.76171875 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0.22265625 0.984375 0.984375 0.24609375 0.
0. 0. 0. 0. 0. 0.
0. 0. 0.98828125 0.984375 0.76171875 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0.7734375 0.98828125 0.7421875 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0.99609375 0.98828125 0.765625 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0.296875 0.9609375 0.984375 0.4375 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0.98828125 0.984375 0.578125 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0.33203125 0.984375 0.8984375 0.09765625 0. 0.
0. 0. 0. 0. 0. 0.
0.02734375 0.52734375 0.98828125 0.7265625 0.046875 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0.33203125 0.984375 0.87109375 0. 0. 0.
0. 0. 0. 0. 0. 0.02734375
0.51171875 0.984375 0.87890625 0.27734375 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0.33203125 0.984375 0.56640625 0. 0. 0.
0. 0. 0. 0. 0.1875 0.64453125
0.984375 0.67578125 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0.3359375 0.98828125 0.87890625 0. 0. 0.
0. 0. 0. 0.4453125 0.9296875 0.98828125
0.6328125 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0.33203125 0.984375 0.97265625 0.5703125 0.1875 0.11328125
0.33203125 0.6953125 0.87890625 0.98828125 0.87109375 0.65234375
0.21875 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0.33203125 0.984375 0.984375 0.984375 0.89453125 0.83984375
0.984375 0.984375 0.984375 0.765625 0.5078125 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0.109375 0.77734375 0.984375 0.984375 0.98828125 0.984375
0.984375 0.91015625 0.56640625 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0.09765625 0.5 0.984375 0.98828125 0.984375
0.55078125 0.14453125 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]]
###Markdown
Plotting the second image from the training dataset. We can see this image represents a zero.
###Code
plt.figure()
plt.imshow(np.reshape(tr_d[0][1], (28, 28)), cmap=plt.cm.binary)
plt.colorbar()
plt.grid(False)
plt.show()
###Output
_____no_output_____
###Markdown
First ``25`` images from the training set and their labels
###Code
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(np.reshape(tr_d[0][i], (28, 28)), cmap=plt.cm.binary)
plt.xlabel(class_names[tr_d[1][i]])
plt.show()
###Output
_____no_output_____
###Markdown
Sigmoid activation function for normalization and its derivative
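In formula form, the activation and its derivative implemented below are:

$$\sigma(z) = \frac{1}{1+e^{-z}}, \qquad \sigma'(z) = \sigma(z)\,\bigl(1-\sigma(z)\bigr)$$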
###Code
def sigmoid(z):
return 1.0/(1.0+np.exp(-z))
def sigmoid_prime(z):
return sigmoid(z)*(1-sigmoid(z))
###Output
_____no_output_____
###Markdown
Feed Forward Neural Network class. Learning method: stochastic gradient descent (randomly selected mini-batches). Accuracy evaluation: on the ``10 000``-example test set. The constructor allows different combinations of topologies for the hidden layers.
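Each mini-batch update in the class below follows the usual gradient-descent rule, with learning rate $\eta$ and mini-batch size $m$:

$$w \rightarrow w - \frac{\eta}{m}\sum_{x} \nabla_w C_x, \qquad b \rightarrow b - \frac{\eta}{m}\sum_{x} \nabla_b C_x$$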
###Code
class Network:
def __init__(self, sizes):
self.num_layers = len(sizes)
self.sizes = sizes
self.biases = []
for y in sizes[1:]:
self.biases.append(np.random.randn(y, 1))
#[30][1] [10][1]
self.weights = []
for x, y in zip(sizes[:-1], sizes[1:]):
self.weights.append(np.random.randn(y, x))
#[784][30] [30][10]
# self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
# self.weights = [np.random.randn(y, x)
# for x, y in zip(sizes[:-1], sizes[1:])]
def feed_forward(self, a):
for b, w in zip(self.biases, self.weights):
a = sigmoid(np.dot(w, a)+b)
return a
def stochastic_gradient_descent(self, training_data, epochs, mini_batch_size, eta, test_data=None):
results = []
if test_data:
n_test = len(test_data)
n = len(training_data)
for j in range(epochs):
random.shuffle(training_data)
mini_batches = []
for k in range(0, n, mini_batch_size):
mini_batches.append(training_data[k:k + mini_batch_size])
for mini_batch in mini_batches:
self.update_mini_batch(mini_batch, eta)
            if test_data:
                evaluation = self.evaluate(test_data)
                print("Epoch {0} completed. Accuracy: {1} %".format(j, str(evaluation/100)))
                results.append(str(evaluation/100) + " %")
else:
print("Epoch {0} complete".format(j))
return results
def update_mini_batch(self, mini_batch, eta):
nabla_b = [np.zeros(b.shape) for b in self.biases]
nabla_w = [np.zeros(w.shape) for w in self.weights]
for x, y in mini_batch:
delta_nabla_b, delta_nabla_w = self.back_propagation(x, y)
nabla_b = [nb + dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
nabla_w = [nw + dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
self.weights = [w - (eta / len(mini_batch)) * nw
for w, nw in zip(self.weights, nabla_w)]
self.biases = [b - (eta / len(mini_batch)) * nb
for b, nb in zip(self.biases, nabla_b)]
def back_propagation(self, x, y):
nabla_b = [np.zeros(b.shape) for b in self.biases]
nabla_w = [np.zeros(w.shape) for w in self.weights]
activation = x
activations = [x]
zs = []
for b, w in zip(self.biases, self.weights):
z = np.dot(w, activation) + b
zs.append(z)
activation = sigmoid(z)
activations.append(activation)
delta = self.cost_derivative(activations[-1], y) * \
sigmoid_prime(zs[-1])
nabla_b[-1] = delta
nabla_w[-1] = np.dot(delta, activations[-2].transpose())
for l in range(2, self.num_layers):
z = zs[-l]
sp = sigmoid_prime(z)
delta = np.dot(self.weights[-l + 1].transpose(), delta) * sp
nabla_b[-l] = delta
nabla_w[-l] = np.dot(delta, activations[-l - 1].transpose())
return (nabla_b, nabla_w)
def evaluate(self, test_data):
# test_results = []
# for (x, y) in test_data:
# test_results.append(np.argmax(self.feed_forward(x)), y)
test_results = [(np.argmax(self.feed_forward(x)), y) for (x, y) in test_data]
return sum(int(x == y) for (x, y) in test_results)
def cost_derivative(self, output_activations, y):
return (output_activations - y)
###Output
_____no_output_____
###Markdown
For now, to test the effectiveness of the feed-forward neural network, I am using 2 topologies trained for 5 epochs each. Each topology has an input layer of 784 neurons (1 neuron for each pixel) and an output layer of 10 neurons (1 neuron for each digit). First topology: - ``input layer: 784 neurons``- ``hidden layer: 15 neurons``- ``output layer: 10 neurons`` Second topology: - ``input layer: 784 neurons``- ``hidden layer: 30 neurons``- ``output layer: 10 neurons``
###Code
print("1st network")
net1 = Network([784, 15, 10])
res1 = net1.stochastic_gradient_descent(training_data, 5, 10, 3.0, test_data=test_data)
print("2nd network")
net2 = Network([784, 30, 10])
res2 = net2.stochastic_gradient_descent(training_data, 5, 10, 3.0, test_data=test_data)
# net3 = Network([784, 25, 15, 10])
# res3 = net3.stochastic_gradient_descent(training_data, 2, 10, 3.0, test_data=test_data)
df = pd.DataFrame(list(zip(res1, res2)), columns = ["15 neurons in 1 hidden layer", "30 neurons in 1 hidden layer"])
df
###Output
_____no_output_____ |
CoursewareHub/notebooks/531-IdP-proxyを学認へ申請する.ipynb | ###Markdown
Registering the IdP-proxy with GakuNin---Register the IdP-proxy with GakuNin as an SP. Introduction: the components of CoursewareHub are shown below. This notebook describes the applications and registrations needed to federate the `IdP-proxy` with GakuNin. Applying to GakuNin for a new SP: before applying, check the federation participation flow on the GakuNin "[Join](https://www.gakunin.jp/join)" page. Depending on the federation you are joining, submit a "New SP application" from the "GakuNin application system" described under "[Test federation procedure](https://www.gakunin.jp/join/test)" or "[Production federation procedure](https://www.gakunin.jp/join/production)". Selecting "New SP application" displays a screen like the one below.> The captured screens are from the test federation. The [IdP-proxy](https://github.com/NII-cloud-operation/CoursewareHub-LC_idp-proxy) uses [SimpleSAMLphp](https://simplesamlphp.org/), so instead of using the application system's template, upload the metadata of the IdP-proxy you built via "non-template metadata" and apply with that. To upload the metadata to the application system, first download it from the IdP-proxy you built. Specify the hostname (FQDN) of the target IdP-proxy in the next cell.
###Code
# (example)
# auth_fqdn = 'idpproxy.example.org'
auth_fqdn =
###Output
_____no_output_____
###Markdown
Download the metadata from the IdP-proxy. Running the next cell prints a link; opening that link downloads the IdP-proxy metadata.
###Code
print(f'https://{auth_fqdn}/simplesaml/module.php/saml/sp/metadata.php/default-sp')
###Output
_____no_output_____
###Markdown
When you upload the downloaded metadata to the GakuNin application system, the metadata-related fields of the "SP metadata information" form are filled in; complete the remaining fields and then submit the application. Note that CoursewareHub uses the `mail` attribute, so add `mail` as a **required** item under "attributes to receive". Integration with GakuNin mAP: CoursewareHub uses [GakuNin mAP](https://meatwiki.nii.ac.jp/confluence/display/gakuninmappublic/Home) to manage user groups. This section describes the procedure for connecting the IdP-proxy to GakuNin mAP. Application: through the contact point listed on the GakuNin mAP "[Inquiries](https://meatwiki.nii.ac.jp/confluence/pages/viewpage.action?pageId=8716731)" page, request that the SP you built (the IdP-proxy) be connected to GakuNin mAP. Registering the metadata (test federation): in the test federation, the metadata of the mAP (SP validation environment) must be registered with the IdP-proxy. After you apply to use mAP, GakuNin Cloud Gateway Service support sends you the metadata of the mAP (SP validation environment); register it with the IdP-proxy. Place the metadata file in this notebook environment and specify its path in the next cell.
###Code
# (example)
# cgidp_metadata = './sptestcgidp-metadata.xml'
cgidp_metadata =
###Output
_____no_output_____
###Markdown
Specify the Ansible group name of the IdP-proxy to operate on. To check the value specified when the VC nodes were created, list the `group_vars` file names.
###Code
!ls -1 group_vars/
###Output
_____no_output_____
###Markdown
Specify the Ansible group name, referring to the output of the cell above.
###Code
# (example)
# target_auth = 'IdPproxy'
target_auth =
###Output
_____no_output_____
###Markdown
Place the SP validation environment's metadata in the IdP-proxy environment.
###Code
!ansible {target_auth} \
-b -m copy -a 'src={cgidp_metadata} backup=yes \
dest={{{{idp_proxy_metadata_dir}}}}/cgidp-metadata.xml'
###Output
_____no_output_____
###Markdown
Confirm on the SimpleSAMLphp configuration page that the SP validation environment's metadata has been registered. Run the next cell and open the link it prints.
###Code
print(f'https://{auth_fqdn}/simplesaml/module.php/core/frontpage_federation.php')
###Output
_____no_output_____ |
Copy_of_LS_DS12_222.ipynb | ###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 2*--- Random Forests - use scikit-learn for **random forests**- do **ordinal encoding** with high-cardinality categoricals- understand how categorical encodings affect trees differently compared to linear models- understand how tree ensembles reduce overfitting compared to a single decision tree with unlimited depth Today's lesson has two take-away messages: Try Tree Ensembles when you do machine learning with labeled, tabular data- "Tree Ensembles" means Random Forest or Gradient Boosting models. - [Tree Ensembles often have the best predictive accuracy](https://arxiv.org/abs/1708.05070) with labeled, tabular data.- Why? Because trees can fit non-linear, non-[monotonic](https://en.wikipedia.org/wiki/Monotonic_function) relationships, and [interactions](https://christophm.github.io/interpretable-ml-book/interaction.html) between features.- A single decision tree, grown to unlimited depth, will [overfit](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/). We solve this problem by ensembling trees, with bagging (Random Forest) or boosting (Gradient Boosting).- Random Forest's advantage: may be less sensitive to hyperparameters. Gradient Boosting's advantage: may get better predictive accuracy. One-hot encoding isn’t the only way, and may not be the best way, of categorical encoding for tree ensembles.- For example, tree ensembles can work with arbitrary "ordinal" encoding! (Randomly assigning an integer to each category.) Compared to one-hot encoding, the dimensionality will be lower, and the predictive accuracy may be just as good or even better. SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- **category_encoders** - **graphviz**- ipywidgets- matplotlib- numpy- pandas- seaborn- scikit-learn
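As a minimal toy sketch of that one-hot vs. ordinal point (a hypothetical column, not the lesson data):

```python
# Hypothetical toy example: one-hot vs. ordinal encoding of one categorical column
import pandas as pd
import category_encoders as ce

toy = pd.DataFrame({'city': ['Arusha', 'Dodoma', 'Mwanza', 'Arusha', 'Tanga']})
onehot = ce.OneHotEncoder(use_cat_names=True).fit_transform(toy)   # one column per category
ordinal = ce.OrdinalEncoder().fit_transform(toy)                   # a single integer column
print(onehot.shape, ordinal.shape)  # (5, 4) vs (5, 1)
```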
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Use scikit-learn for random forests. Overview: Let's fit a Random Forest! Solution example: First, read & wrangle the data.> Define a function to wrangle train, validate, and test sets in the same way. Clean outliers and engineer features. (For example, [what other columns have zeros and shouldn't?](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values) What other columns are duplicates, or nearly duplicates? Can you extract the year from date_recorded? Can you engineer new features, such as the number of years from waterpump construction to waterpump inspection?)
###Code
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val
train, val = train_test_split(train, train_size=0.80, test_size=0.20,
stratify=train['status_group'], random_state=42)
def wrangle(X):
"""Wrangle train, validate, and test sets in the same way"""
# Prevent SettingWithCopyWarning
X = X.copy()
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these values like zero.
X['latitude'] = X['latitude'].replace(-2e-08, 0)
# When columns have zeros and shouldn't, they are like null values.
# So we will replace the zeros with nulls, and impute missing values later.
# Also create a "missing indicator" column, because the fact that
# values are missing may be a predictive signal.
cols_with_zeros = ['longitude', 'latitude', 'construction_year',
'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
X[col+'_MISSING'] = X[col].isnull()
# Drop duplicate columns
duplicates = ['quantity_group', 'payment_type']
X = X.drop(columns=duplicates)
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
X['years_MISSING'] = X['years'].isnull()
# return the wrangled dataframe
return X
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
# The status_group column is the target
target = 'status_group'
# Get a dataframe with all train columns except the target
train_features = train.drop(columns=[target])
# Get a list of the numeric features
numeric_features = train_features.select_dtypes(include='number').columns.tolist()
# Get a series with the cardinality of the nonnumeric features
cardinality = train_features.select_dtypes(exclude='number').nunique()
# Get a list of all categorical features with cardinality <= 50
categorical_features = cardinality[cardinality <= 50].index.tolist()
# Combine the lists
features = numeric_features + categorical_features
# Arrange data into X features matrix and y target vector
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_test = test[features]
train.describe(exclude='number').T.unique.sum()
###Output
_____no_output_____
###Markdown
Follow Along: [Scikit-Learn User Guide: Random Forests](https://scikit-learn.org/stable/modules/ensemble.html#random-forests)
###Code
# let's take a small sample for comparison purposes
region = X_train[['region']].copy()
region = region.sample(30)
region.head()
# how many regions are there?
region['region'].value_counts()
# what does one-hot encoding look like?
# region_ohe.head()
###Output
_____no_output_____
###Markdown
Put that into a pipeline.
###Code
%%time
# WARNING: the %%time command sometimes has quirks/bugs
import category_encoders as ce
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(
ce.OneHotEncoder(use_cat_names=True),
SimpleImputer(strategy='median'),
RandomForestClassifier(random_state=0, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
print('Validation Accuracy', pipeline.score(X_val, y_val))
print('X_train shape before encoding', X_train.shape)
pipeline.named_steps
encoder = pipeline.named_steps['onehotencoder']
encoded = encoder.transform(X_train)
print('X_train shape after encoding', encoded.shape)
%matplotlib inline
import matplotlib.pyplot as plt
# Get feature importances
rf = pipeline.named_steps['randomforestclassifier']
importances = pd.Series(rf.feature_importances_, encoded.columns)
# Plot top n feature importances
n = 20
plt.figure(figsize=(10,n/2))
plt.title(f'Top {n} features')
importances.sort_values()[-n:].plot.barh();
###Output
_____no_output_____
###Markdown
Do ordinal encoding with high-cardinality categoricals. Overview: http://contrib.scikit-learn.org/categorical-encoding/ordinal.html Follow Along
###Code
# Re-arrange data into X features matrix and y target vector, so
# we use *all* features, including the high-cardinality categoricals
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
###Output
_____no_output_____
###Markdown
There are a couple of ways to encode categorical variables:
###Code
# let's take a small sample for comparison purposes
region = X_train[['region']].copy()
region = region.sample(30)
region.head()
# Method 1
# Encode the categorical columns using the pandas "category" data type
region['region_cats']=region['region'].astype('category')
region['region_codes']= region['region_cats'].cat.codes
region.dtypes
# show a few
region.head()
# distribution (note that the values start at 0)
region['region_codes'].value_counts().sort_index()
# Method 2
# Encode the categorical columns using scikit learn's ordinal encoding class
myencoder=ce.OrdinalEncoder()
region['region_encoded'] = myencoder.fit_transform(region['region'])
region.sample(5)
# they are the same distribution but starting at 1 not 0
region['region_encoded'].value_counts().sort_index()
###Output
_____no_output_____
###Markdown
Okay back to our scheduled programming
###Code
%%time
# This pipeline is identical to the example cell above,
# except we're replacing one-hot encoder with "ordinal" encoder
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'),
RandomForestClassifier(random_state=0, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
print('Validation Accuracy', pipeline.score(X_val, y_val))
print('X_train shape before encoding', X_train.shape)
encoder = pipeline.named_steps['ordinalencoder']
encoded = encoder.transform(X_train)
print('X_train shape after encoding', encoded.shape)
###Output
X_train shape after encoding (47520, 45)
###Markdown
Understand how categorical encodings affect trees differently compared to linear models. Follow Along: Categorical exploration, 1 feature at a time. Change `feature`, then re-run these cells!
###Code
print(features)
feature = 'extraction_type_class'
X_train[feature].value_counts()
import seaborn as sns
plt.figure(figsize=(16,9))
sns.barplot(
x=train[feature],
y=train['status_group']=='functional',
color='grey'
);
X_train[feature].head(20)
###Output
_____no_output_____
###Markdown
[One Hot Encoding](http://contrib.scikit-learn.org/categorical-encoding/onehot.html)> Onehot (or dummy) coding for categorical features produces one feature per category, each binary. Warning: May run slow, or run out of memory, with high-cardinality categoricals!
###Code
encoder = ce.OneHotEncoder(use_cat_names=True)
encoded = encoder.fit_transform(X_train[[feature]])
print(f'{len(encoded.columns)} columns')
encoded.head(20)
###Output
7 columns
###Markdown
One-Hot Encoding, Logistic Regression, Validation Accuracy
###Code
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler
lr = make_pipeline(
ce.OneHotEncoder(use_cat_names=True),
SimpleImputer(),
StandardScaler(),
LogisticRegressionCV(multi_class='auto', solver='lbfgs', cv=5, n_jobs=-1)
)
lr.fit(X_train[[feature]], y_train)
score = lr.score(X_val[[feature]], y_val)
print('Logistic Regression, Validation Accuracy', score)
###Output
Logistic Regression, Validation Accuracy 0.6202861952861953
###Markdown
One-Hot Encoding, Decision Tree, Validation Accuracy
###Code
from sklearn.tree import DecisionTreeClassifier
dt = make_pipeline(
ce.OneHotEncoder(use_cat_names=True),
SimpleImputer(),
DecisionTreeClassifier(random_state=42)
)
dt.fit(X_train[[feature]], y_train)
score = dt.score(X_val[[feature]], y_val)
print('Decision Tree, Validation Accuracy', score)
###Output
Decision Tree, Validation Accuracy 0.6202861952861953
###Markdown
One-Hot Encoding, Logistic Regression, Model Interpretation
###Code
model = lr.named_steps['logisticregressioncv']
encoder = lr.named_steps['onehotencoder']
encoded_columns = encoder.transform(X_val[[feature]]).columns
coefficients = pd.Series(model.coef_[0], encoded_columns)
coefficients.sort_values().plot.barh(color='grey');
###Output
_____no_output_____
###Markdown
One-Hot Encoding, Decision Tree, Model Interpretation
###Code
# Plot tree
# https://scikit-learn.org/stable/modules/generated/sklearn.tree.export_graphviz.html
import graphviz
from sklearn.tree import export_graphviz
model = dt.named_steps['decisiontreeclassifier']
encoder = dt.named_steps['onehotencoder']
encoded_columns = encoder.transform(X_val[[feature]]).columns
dot_data = export_graphviz(model,
out_file=None,
max_depth=7,
feature_names=encoded_columns,
class_names=model.classes_,
impurity=True,
filled=False,
proportion=True,
rounded=False)
display(graphviz.Source(dot_data))
pipeline
pipeline.named_steps['randomforestclassifier'].feature_importances_
###Output
_____no_output_____
###Markdown
[Ordinal Encoding](http://contrib.scikit-learn.org/categorical-encoding/ordinal.html)> Ordinal encoding uses a single column of integers to represent the classes. An optional mapping dict can be passed in; in this case, we use the knowledge that there is some true order to the classes themselves. Otherwise, the classes are assumed to have no true order and integers are selected at random.
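For the optional mapping mentioned above, a hedged sketch (using the `mapping` argument of `ce.OrdinalEncoder`; the exact format may vary by category_encoders version, and the chosen order here is illustrative only):

```python
# Hedged sketch: an explicit category order for one column
encoder = ce.OrdinalEncoder(
    mapping=[{'col': 'quantity',
              'mapping': {'dry': 0, 'insufficient': 1, 'seasonal': 2, 'enough': 3, 'unknown': -1}}]
)
encoded_quantity = encoder.fit_transform(X_train[['quantity']])
```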
###Code
# what does the original column look like?
X_train[[feature]].head()
# let's apply ordinal encoding to that column!
encoder = ce.OrdinalEncoder()
encoded = encoder.fit_transform(X_train[[feature]])
print(f'1 column, {encoded[feature].nunique()} unique values')
encoded.head(5)
# what are the new values?
encoded['extraction_type_class'].value_counts().sort_index()
###Output
_____no_output_____
###Markdown
Beware! Ordinal encoding works well with Tree Models but will ruin your linear models. Ordinal Encoding, Logistic Regression, Validation Accuracy
###Code
lr = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(),
StandardScaler(),
LogisticRegressionCV(multi_class='auto', solver='lbfgs', cv=5, n_jobs=-1)
)
lr.fit(X_train[[feature]], y_train)
score = lr.score(X_val[[feature]], y_val)
print('Logistic Regression, Validation Accuracy', score)
###Output
Logistic Regression, Validation Accuracy 0.5417508417508418
###Markdown
Ordinal Encoding, Decision Tree, Validation Accuracy
###Code
dt = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(),
DecisionTreeClassifier(random_state=42)
)
dt.fit(X_train[[feature]], y_train)
score = dt.score(X_val[[feature]], y_val)
print('Decision Tree, Validation Accuracy', score)
###Output
Decision Tree, Validation Accuracy 0.6202861952861953
###Markdown
Ordinal Encoding, Logistic Regression, Model Interpretation
###Code
model = lr.named_steps['logisticregressioncv']
encoder = lr.named_steps['ordinalencoder']
encoded_columns = encoder.transform(X_val[[feature]]).columns
coefficients = pd.Series(model.coef_[0], encoded_columns)
coefficients.sort_values().plot.barh(color='grey');
###Output
_____no_output_____
###Markdown
Ordinal Encoding, Decision Tree, Model Interpretation
###Code
model = dt.named_steps['decisiontreeclassifier']
encoder = dt.named_steps['ordinalencoder']
encoded_columns = encoder.transform(X_val[[feature]]).columns
dot_data = export_graphviz(model,
out_file=None,
max_depth=5,
feature_names=encoded_columns,
class_names=model.classes_,
impurity=False,
filled=True,
proportion=True,
rounded=True)
display(graphviz.Source(dot_data))
dt.named_steps['decisiontreeclassifier'].feature_importances_
###Output
_____no_output_____
###Markdown
Understand how tree ensembles reduce overfitting compared to a single decision tree with unlimited depth Overview What's "random" about random forests?1. Each tree trains on a random bootstrap sample of the data. (In scikit-learn, for `RandomForestRegressor` and `RandomForestClassifier`, the `bootstrap` parameter's default is `True`.) This type of ensembling is called Bagging. (Bootstrap AGGregatING.)2. Each split considers a random subset of the features. (In scikit-learn, when the `max_features` parameter is not `None`.) For extra randomness, you can try ["extremely randomized trees"](https://scikit-learn.org/stable/modules/ensemble.htmlextremely-randomized-trees)!>In extremely randomized trees (see [ExtraTreesClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.ExtraTreesClassifier.html) and [ExtraTreesRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.ExtraTreesRegressor.html) classes), randomness goes one step further in the way splits are computed. As in random forests, a random subset of candidate features is used, but instead of looking for the most discriminative thresholds, thresholds are drawn at random for each candidate feature and the best of these randomly-generated thresholds is picked as the splitting rule. This usually allows to reduce the variance of the model a bit more, at the expense of a slightly greater increase in bias  Follow Along Example: [predicting golf putts](https://statmodeling.stat.columbia.edu/2008/12/04/the_golf_puttin/)(1 feature, non-linear, regression)
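A small sketch of both sources of randomness (a pandas bootstrap sample of rows, and scikit-learn's extremely randomized variant):

```python
# Sketch: a bootstrap sample (rows drawn with replacement), like each tree in a forest sees
bootstrap_sample = train.sample(n=len(train), replace=True, random_state=0)

# Sketch: extremely randomized trees add random split thresholds on top of random feature subsets
from sklearn.ensemble import ExtraTreesClassifier
et = ExtraTreesClassifier(n_estimators=100, random_state=0, n_jobs=-1)
```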
###Code
putts = pd.DataFrame(
columns=['distance', 'tries', 'successes'],
data = [[2, 1443, 1346],
[3, 694, 577],
[4, 455, 337],
[5, 353, 208],
[6, 272, 149],
[7, 256, 136],
[8, 240, 111],
[9, 217, 69],
[10, 200, 67],
[11, 237, 75],
[12, 202, 52],
[13, 192, 46],
[14, 174, 54],
[15, 167, 28],
[16, 201, 27],
[17, 195, 31],
[18, 191, 33],
[19, 147, 20],
[20, 152, 24]]
)
putts['rate of success'] = putts['successes'] / putts['tries']
putts_X = putts[['distance']]
putts_y = putts['rate of success']
%matplotlib inline
import matplotlib.pyplot as plt
from ipywidgets import interact
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
def putt_trees(max_depth=1, n_estimators=1):
models = [DecisionTreeRegressor(max_depth=max_depth),
RandomForestRegressor(max_depth=max_depth, n_estimators=n_estimators)]
for model in models:
name = model.__class__.__name__
model.fit(putts_X, putts_y)
ax = putts.plot('distance', 'rate of success', kind='scatter', title=name)
ax.step(putts_X, model.predict(putts_X), where='mid')
plt.show()
interact(putt_trees, max_depth=(1,6,1), n_estimators=(10,40,10));
###Output
_____no_output_____
###Markdown
Bagging demo, with golf putts data: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sample.html
###Code
# Do-it-yourself Bagging Ensemble of Decision Trees (like a Random Forest)
def diy_bagging(max_depth=1, n_estimators=1):
y_preds = []
for i in range(n_estimators):
title = f'Tree {i+1}'
bootstrap_sample = putts.sample(n=len(putts), replace=True).sort_values(by='distance')
bootstrap_X = bootstrap_sample[['distance']]
bootstrap_y = bootstrap_sample['rate of success']
tree = DecisionTreeRegressor(max_depth=max_depth)
tree.fit(bootstrap_X, bootstrap_y)
y_pred = tree.predict(bootstrap_X)
y_preds.append(y_pred)
ax = bootstrap_sample.plot('distance', 'rate of success', kind='scatter', title=title)
ax.step(bootstrap_X, y_pred, where='mid')
plt.show()
ensembled = np.vstack(y_preds).mean(axis=0)
title = f'Ensemble of {n_estimators} trees, with max_depth={max_depth}'
ax = putts.plot('distance', 'rate of success', kind='scatter', title=title)
ax.step(putts_X, ensembled, where='mid')
plt.show()
interact(diy_bagging, max_depth=(1,6,1), n_estimators=(2,5,1));
###Output
_____no_output_____
###Markdown
Go back to Tanzania Waterpumps ... Helper function to visualize predicted probabilities
###Code
import itertools
import seaborn as sns
def pred_heatmap(model, X, features, class_index=-1, title='', num=100):
"""
Visualize predicted probabilities, for classifier fit on 2 numeric features
Parameters
----------
model : scikit-learn classifier, already fit
X : pandas dataframe, which was used to fit model
features : list of strings, column names of the 2 numeric features
class_index : integer, index of class label
title : string, title of plot
num : int, number of grid points for each feature
Returns
-------
y_pred_proba : numpy array, predicted probabilities for class_index
"""
feature1, feature2 = features
min1, max1 = X[feature1].min(), X[feature1].max()
min2, max2 = X[feature2].min(), X[feature2].max()
x1 = np.linspace(min1, max1, num)
x2 = np.linspace(max2, min2, num)
combos = list(itertools.product(x1, x2))
y_pred_proba = model.predict_proba(combos)[:, class_index]
pred_grid = y_pred_proba.reshape(num, num).T
table = pd.DataFrame(pred_grid, columns=x1, index=x2)
sns.heatmap(table, vmin=0, vmax=1)
plt.xticks([])
plt.yticks([])
plt.xlabel(feature1)
plt.ylabel(feature2)
plt.title(title)
plt.show()
return y_pred_proba
###Output
_____no_output_____
###Markdown
Compare Decision Tree, Random Forest, Logistic Regression
###Code
# remind me, what were the features in our models?
print(np.unique(features))
# Instructions
# 1. Choose two features
# 2. Run this code cell
# 3. Interact with the widget sliders
feature1 = 'longitude'
feature2 = 'quantity'
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
def get_X_y(df, feature1, feature2, target):
features = [feature1, feature2]
X = df[features]
y = df[target]
X = X.fillna(X.median())
X = ce.OrdinalEncoder().fit_transform(X)
return X, y
def compare_models(max_depth=1, n_estimators=1):
models = [DecisionTreeClassifier(max_depth=max_depth),
RandomForestClassifier(max_depth=max_depth, n_estimators=n_estimators),
LogisticRegression(solver='lbfgs', multi_class='auto')]
for model in models:
name = model.__class__.__name__
model.fit(X, y)
pred_heatmap(model, X, [feature1, feature2], class_index=0, title=name)
X, y = get_X_y(train, feature1, feature2, target='status_group')
interact(compare_models, max_depth=(1,6,1), n_estimators=(10,40,10));
###Output
_____no_output_____
###Markdown
Review Try Tree Ensembles when you do machine learning with labeled, tabular data- "Tree Ensembles" means Random Forest or Gradient Boosting models. - [Tree Ensembles often have the best predictive accuracy](https://arxiv.org/abs/1708.05070) with labeled, tabular data.- Why? Because trees can fit non-linear, non-[monotonic](https://en.wikipedia.org/wiki/Monotonic_function) relationships, and [interactions](https://christophm.github.io/interpretable-ml-book/interaction.html) between features.- A single decision tree, grown to unlimited depth, will [overfit](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/). We solve this problem by ensembling trees, with bagging (Random Forest) or boosting (Gradient Boosting).- Random Forest's advantage: may be less sensitive to hyperparameters. Gradient Boosting's advantage: may get better predictive accuracy. One-hot encoding isn’t the only way, and may not be the best way, of categorical encoding for tree ensembles.- For example, tree ensembles can work with arbitrary "ordinal" encoding! (Randomly assigning an integer to each category.) Compared to one-hot encoding, the dimensionality will be lower, and the predictive accuracy may be just as good or even better.
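As a hedged sketch of the Gradient Boosting alternative mentioned above (same pipeline style as earlier in this notebook, using scikit-learn's implementation; it may train slowly on this dataset):

```python
# Sketch: swap the Random Forest for scikit-learn's Gradient Boosting in the same pipeline
from sklearn.ensemble import GradientBoostingClassifier

gb_pipeline = make_pipeline(
    ce.OrdinalEncoder(),
    SimpleImputer(strategy='median'),
    GradientBoostingClassifier(random_state=0)
)
gb_pipeline.fit(X_train, y_train)
print('Validation Accuracy', gb_pipeline.score(X_val, y_val))
```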
###Code
###Output
_____no_output_____ |
python/deep_learning/NOTEBOOK/My_DEEP.ipynb | ###Markdown
Linear Regression
###Code
import numpy
import matplotlib.pyplot as plt
import tensorflow as tf  # required for the tf.* calls below (TensorFlow 1.x API)

rng = numpy.random
# Parameters
learning_rate = 0.01
training_epochs = 1000
display_step = 50
# Training Data
train_X = numpy.asarray([3.3,4.4,5.5,6.71,6.93,4.168,9.779,6.182,7.59,2.167,
7.042,10.791,5.313,7.997,5.654,9.27,3.1])
train_Y = numpy.asarray([1.7,2.76,2.09,3.19,1.694,1.573,3.366,2.596,2.53,1.221,
2.827,3.465,1.65,2.904,2.42,2.94,1.3])
n_samples = train_X.shape[0]
# tf Graph Input
X = tf.placeholder("float")
Y = tf.placeholder("float")
# Set model weights
W = tf.Variable(rng.randn(),name="weight")
b = tf.Variable(rng.randn(), name="bias")
pred = tf.add(tf.multiply(X,W),b)
cost = tf.reduce_sum(tf.pow(pred-Y,2))/(2*n_samples)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
# Start training
with tf.Session() as sess:
sess.run(init)
# Fit all training data
for epoch in range(training_epochs):
for (x, y) in zip(train_X, train_Y):
sess.run(optimizer, feed_dict={X: x, Y: y})
#Display logs per epoch step
if (epoch+1) % display_step == 0:
c = sess.run(cost, feed_dict={X: train_X, Y:train_Y})
print "Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(c), \
"W=", sess.run(W), "b=", sess.run(b)
print "Optimization Finished!"
training_cost = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
print "Training cost=", training_cost, "W=", sess.run(W), "b=", sess.run(b), '\n'
#Graphic display
plt.plot(train_X, train_Y, 'ro', label='Original data')
plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Fitted line')
plt.legend()
plt.show()
###Output
Epoch: 0050 cost= 0.190514207 W= 0.437989 b= -0.553835
Epoch: 0100 cost= 0.177396372 W= 0.426784 b= -0.473228
Epoch: 0150 cost= 0.165793598 W= 0.416246 b= -0.397416
Epoch: 0200 cost= 0.155530766 W= 0.406334 b= -0.326113
Epoch: 0250 cost= 0.146453157 W= 0.397012 b= -0.25905
Epoch: 0300 cost= 0.138424039 W= 0.388244 b= -0.195976
Epoch: 0350 cost= 0.131322324 W= 0.379998 b= -0.136654
Epoch: 0400 cost= 0.125040904 W= 0.372242 b= -0.0808593
Epoch: 0450 cost= 0.119485080 W= 0.364948 b= -0.0283832
Epoch: 0500 cost= 0.114571117 W= 0.358087 b= 0.0209719
Epoch: 0550 cost= 0.110224813 W= 0.351634 b= 0.0673917
Epoch: 0600 cost= 0.106380664 W= 0.345566 b= 0.111051
Epoch: 0650 cost= 0.102980696 W= 0.339858 b= 0.152113
Epoch: 0700 cost= 0.099973619 W= 0.334489 b= 0.190733
Epoch: 0750 cost= 0.097314022 W= 0.32944 b= 0.227056
Epoch: 0800 cost= 0.094961822 W= 0.324691 b= 0.261219
Epoch: 0850 cost= 0.092881501 W= 0.320225 b= 0.29335
Epoch: 0900 cost= 0.091041602 W= 0.316024 b= 0.32357
Epoch: 0950 cost= 0.089414448 W= 0.312073 b= 0.351993
Epoch: 1000 cost= 0.087975383 W= 0.308357 b= 0.378725
Optimization Finished!
Training cost= 0.0879754 W= 0.308357 b= 0.378725
|
TensorFlow_18.ipynb | ###Markdown
Recommender System collaborative filtering * user-based filtering * item-based filtering. Used when there are too few users or too few items to extract meaningful data otherwise.
###Code
from math import sqrt
import matplotlib as mpl
mpl.rcParams['axes.unicode_minus'] = False
from matplotlib import font_manager, rc
import matplotlib.pyplot as plt
font_name = font_manager.FontProperties(fname = 'C:/Windows/fonts/malgun.ttf' ).get_name()
rc('font',family=font_name)
critics = {
'BTS' : {'암수살인' : 5, '바울' : 4, '할로윈' : 1.5},
'손흥민' : {'바울' : 5, '할로윈' : 2},
'조용필' : {'암수살인' : 2.5, '바울' : 2, '할로윈' : 1},
'나훈아' : {'암수살인' : 3.5, '바울' : 4, '할로윈' : 5}
}
print(critics.get('BTS').get('바울'))
def sim(i, j) : # returns the distance measure used for the similarity between two users
    return sqrt(pow(i,2)+pow(j,2)) # i = x2 - x1 and j = y2 - y1 are passed in
# we want the distance between Son Heung-min (손흥민) and Na Hoon-a (나훈아)
# Pythagorean theorem -> the shorter the distance, the higher the similarity
var1 = critics['손흥민']['바울']-critics['나훈아']['바울']
var2 = critics['손흥민']['할로윈']-critics['나훈아']['할로윈']
print(sim(var1, var2))
###Output
3.1622776601683795
###Markdown
**Measuring the similarity between Son Heung-min (손흥민) and the other users**
###Code
for i in critics:
    # print(i)  i is the key (the user's name)
if i !='손흥민':
var1 = critics['손흥민']['바울']-critics[i]['바울']
var2 = critics['손흥민']['할로윈']-critics[i]['할로윈']
print(i, "- 손흥민의 유사도 : ", 1/(1+sim(var1, var2))) # 거리가 짧을수록 유사도가 높다
###Output
BTS - 손흥민의 유사도 : 0.4721359549995794
조용필 - 손흥민의 유사도 : 0.2402530733520421
나훈아 - 손흥민의 유사도 : 0.2402530733520421
###Markdown
Distance between two points. If there are only two kinds of item (movie) data (two movies): the Pythagorean formula. If there are several kinds of item (movie) data (several movies): Euclidean distance. **Distance between two users' ratings based on Euclidean distance**
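In formula form, the similarity computed by ``sim_distance`` below is

$$\text{sim}(a,b) = \frac{1}{1+\sqrt{\sum_{i}\bigl(r_{a,i}-r_{b,i}\bigr)^2}}$$

where $r_{a,i}$ is user $a$'s rating of movie $i$, summing only over movies both users rated.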
###Code
def sim_distance(data, name1, name2):
sum = 0
for i in data[name1] :
if i in data[name2]:
sum += pow(data[name1][i]-data[name2][i], 2)
return 1/(1+sqrt(sum))
print(sim_distance(critics, '손흥민', '나훈아'))
###Output
0.2402530733520421
###Markdown
**Distance between Son Heung-min's ratings and every other viewer's ratings: Euclidean**
###Code
def matchf(data, name, idx = 3, sim = sim_distance):
myList = []
for i in data :
        if i != name: # only consider users other than the reference user
            myList.append((sim(data, name, i),i)) # (similarity, other user's name)
    myList.sort()
    myList.reverse() # descending: the most similar user comes first
print("역순: ", myList)
return myList[:idx]
# descending order by similarity (the first person listed is the most similar to Son Heung-min)
li = matchf(critics, '손흥민')
print(li)
def barchart(data, labels): # similarity scores to Son Heung-min, names of the other users
position = range(len(data))
    plt.barh(position, data, height=0.5, color = 'r') # y positions, bar lengths, bar height
plt.yticks(position, labels)
plt.xlabel('simlarity')
plt.ylabel('name')
plt.show()
score = []
names = []
for i in li :
score.append(i[0])
names.append(i[1])
barchart(score, names)
plt.figure(figsize = (14,4))
plt.plot([1,2,3], [1,2,3], 'g^')
plt.text(1,1,'자동차')
plt.text(2,2,'버스')
plt.text(3,3,'열차')
plt.axis([0,5,0,5]) # reset the x and y axis ranges
plt.show()
###Output
_____no_output_____
###Markdown
* * * **Correlation analysis** Using correlation analysis to overcome the limitation of the Euclidean distance formula: if a particular person's scores are extremely high or low, it is hard to derive proper results. Correlation analysis looks at the linear relationship between two variables.
###Code
critics = {
'조용필': {'택시운전사': 2.5,'겨울왕국': 3.5,'리빙라스베가스': 3.0,'넘버3': 3.5,'사랑과전쟁': 2.5,'세계대전': 3.0},
'BTS': {'택시운전사': 1.0,'겨울왕국': 4.5,'리빙라스베가스': 0.5,'넘버3': 1.5,'사랑과전쟁': 4.5,'세계대전': 5.0},
'강감찬': {'택시운전사': 3.0,'겨울왕국': 3.5,'리빙라스베가스': 1.5,'넘버3': 5.0,'세계대전': 3.0,'사랑과전쟁': 3.5},
'을지문덕': {'택시운전사': 2.5,'겨울왕국': 3.0,'넘버3': 3.5,'세계대전': 4.0},
'김유신': {'겨울왕국': 3.5,'리빙라스베가스': 3.0,'세계대전': 4.5,'넘버3': 4.0,'사랑과전쟁': 2.5},
'유성룡': {'택시운전사': 3.0,'겨울왕국': 4.0,'리빙라스베가스': 2.0,'넘버3': 3.0,'세계대전': 3.5,'사랑과전쟁': 2.0},
'이황': {'택시운전사': 3.0,'겨울왕국': 4.0,'세계대전': 3.0,'넘버3': 5.0,'사랑과전쟁': 3.5},
'이이': {'겨울왕국': 4.5, '사랑과전쟁': 1.0,'넘버3': 4.0}
}
def drawGraph(data, name1, name2):
plt.figure(figsize = (10,10))
    # define the lists that hold the coordinates to plot
    li = [] # stores name1's ratings
    li2 = [] # stores name2's ratings
for i in critics[name1]:
        if i in data[name2]: # if both users rated the same movie
            li.append(critics[name1][i]) # name1's rating for movie i
li2.append(critics[name2][i])
plt.text(critics[name1][i],critics[name2][i], i)
plt.plot(li, li2, 'ro')
plt.axis([0,6,0,6])
plt.xlabel(name1)
plt.ylabel(name2)
plt.show()
drawGraph(critics, 'BTS', '유성룡')
drawGraph(critics, '이황', '조용필')
###Output
_____no_output_____
###Markdown
* * * **Pearson correlation coefficient** how much x and y vary together (covariance) / (how much x varies * how much y varies)
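In the computational form used by ``sim_pearson`` below,

$$r = \frac{\sum xy - \frac{\sum x \sum y}{n}}{\sqrt{\left(\sum x^2 - \frac{(\sum x)^2}{n}\right)\left(\sum y^2 - \frac{(\sum y)^2}{n}\right)}}$$

where the sums run over the $n$ movies rated by both users.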
###Code
def sim_pearson(data, name1, name2):
    sumX = 0 # sum of x
    sumY = 0 # sum of y
    sumPowX = 0 # sum of x squared
    sumPowY = 0 # sum of y squared
    sumXY = 0 # sum of x*y
    count = 0 # number of movies rated by both users (n)
for i in data[name1]:
        if i in data[name2]: # movies that both users (e.g. BTS and Yu Seong-ryong) rated
            sumX += data[name1][i] # name1's rating for movie i
            sumY += data[name2][i] # name2's rating for movie i
sumPowX += pow(data[name1][i],2)
sumPowY += pow(data[name2][i],2)
sumXY += data[name1][i]*data[name2][i]
count += 1
    return (sumXY-((sumX*sumY)/count)) / sqrt((sumPowX-(pow(sumX,2)/count))*(sumPowY-(pow(sumY,2)/count)))
print("BTS와 유성룡의 피어슨 상관계수: ", sim_pearson(critics,'BTS', '유성룡'))
print("이황고 조용필의 피어슨 상관계수 : ",sim_pearson(critics, '이황', '조용필'))
###Output
BTS와 유성룡의 피어슨 상관계수: 0.16399999999999987
이황고 조용필의 피어슨 상관계수 : 0.4464285714285719
###Markdown
**Iterating over the dictionary to compute the correlation coefficient between the reference user (BTS) and every other user** (sorted in descending order)
###Code
def top_match(data, name, index=2, sim_function = sim_pearson):
    # (ratings dictionary, name of the reference user, how many of the most similar users to return, similarity function to use)
    li = []
    for i in data: # iterate over every user in the data
        if name != i: # skip the reference user (e.g. BTS)
li.append((sim_function(critics, name, i), i))
li.sort()
li.reverse()
return li[:index]
top_match(critics, 'BTS', 3) # extract the 3 users whose tastes are most similar to BTS
###Output
_____no_output_____
###Markdown
* * * **Building the movie recommendation system and printing the expected ratings** Steps: * compute the similarity between the target user and everyone else based on their ratings; a guessed rating (e.g. BTS's rating guessed from Gang Gam-chan) = similarity * (the other user's) movie rating * sum up the guessed ratings * the expected rating based on all users = total of the guessed ratings / total of the similarities * compute expected ratings only for movies the target user has not yet seen, and recommend the movie with the highest expected rating
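The expected rating computed below is

$$\hat{r}_{u,m} = \frac{\sum_{v} \text{sim}(u,v)\, r_{v,m}}{\sum_{v} \text{sim}(u,v)}$$

summing over the users $v$ who rated movie $m$ and have non-negative similarity to the target user $u$.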
###Code
def getRecommendatation(data, person, sim_function = sim_pearson):
    li = [] # list holding the final result
    score = 0
    score_dic = {} # accumulates the similarity-weighted rating totals per movie
    sim_dic = {} # accumulates the similarity totals per movie
result = top_match(data, person, len(data))
# print("중간 : ",result)
    for sim, name in result : # similarity, user name
        if sim < 0 : continue # skip users with negative similarity
        for movie in data[name]:
            if movie not in data[person]: # only movies the target user (e.g. Yi I) has not seen
                score += sim*data[name][movie] # accumulate similarity * the other user's rating
                score_dic.setdefault(movie, 0)
                score_dic[movie] += score # total weighted rating for this movie
                sim_dic.setdefault(movie, 0)
                sim_dic[movie] += sim # cumulative sum of similarities for this movie
score = 0
# print(name,"movie : ", movie)
# print("========================")
for key in score_dic:
score_dic[key]=score_dic[key]/sim_dic[key]
li.append((score_dic[key], key))
li.sort()
li.reverse()
return li[0][1]
print("이이님에게는 ", getRecommendatation(critics, '이이'), "영화를 가장 추천합니다.")
###Output
이이님에게는 세계대전 영화를 가장 추천합니다.
|
in_development/bruker_markpoints_video.ipynb | ###Markdown
NAPE_Py3_7 environment
###Code
import cv2
import imageio_ffmpeg
import imageio
import h5py
import os
import numpy as np
import matplotlib.pyplot as plt
import skimage
def find_nearest(array, value):
array = np.asarray(array)
idx = (np.abs(array - value)).argmin()
return idx, array[idx]
fdir = r'D:\20200410_gcamp_chrmine\vj_ofc_imageactivate_01_300_stim-006'
fname = 'vj_ofc_imageactivate_01_300_stim-006'
fname_sima = fname + '_sima_mc.h5'
h5_file = h5py.File(os.path.join(fdir, fname_sima), 'r')
im = np.squeeze(np.array(h5_file.get(list(h5_file)[0]).value))
print(im.dtype)
im.shape
# initialize how many frames to average
nframes_avg = 2 # number of frames on one side of reference frame (1 would be 3 frame avg)
interval = nframes_avg + 1
n_samples = im.shape[0]
n_ypix = im.shape[1]
n_xpix = im.shape[2]
start_time = 0 # seconds
end_time = 90 # seconds
tvec = np.round(np.linspace(start_time, end_time, n_samples), decimals = 1)
fs = np.round( float(n_samples)/(end_time - start_time), decimals = 2)/interval
fs
stim_locs = [ (220, 250),
(180, 300),
(220, 350)]
stim_locs = [
(180, 300),
]
stim_initial_delay = 10 # s
stim_pulse_dur = 0.01 # s
stim_IPI = 0.49 # s
stim_reps = 5
stim_onset = []
cumulative_time = stim_initial_delay
for irep in range(stim_reps):
if irep == 0:
stim_onset.append(stim_initial_delay)
else:
stim_onset.append(cumulative_time)
cumulative_time += stim_pulse_dur + stim_IPI
stim_onset
# convert data to appropriate type
# https://stackoverflow.com/questions/25485886/how-to-convert-a-16-bit-to-an-8-bit-image-in-opencv
if np.issubdtype(np.uint16, im[0,0,0]) | np.issubdtype(np.int16, im[0,0,0]):
ratio = np.amax(im) / 255
data = np.squeeze(im/ratio) # USER DEFINE!!!
elif np.issubdtype(np.float32, im[0,0,0]):
data = im / np.max(im) # normalize the data to 0 - 1
data = 255 * data # Now scale by 255
else:
ratio = 1
im_uint8 = data.astype(np.uint8)
# define if want to trim borders and create save array
trim_border = 0
trim_pix = 0
if trim_border == 1:
ypix_include = [trim_pix,n_ypix-trim_pix]
xpix_include = [trim_pix,n_xpix-trim_pix]
else:
ypix_include = [0,n_ypix]
xpix_include = [0,n_xpix]
ypix_total = len(range(ypix_include[0],ypix_include[1]))
xpix_total = len(range(xpix_include[0],xpix_include[1]))
im_final = np.empty([len(range(interval,n_samples-interval,interval)),ypix_total,xpix_total], dtype='uint8')
im_final.shape
# vid params
vid_fps = 10.0
contrast = 5 # half of brightness typically works
brightness = 0 # 40 is good
fout_path = os.path.join(fdir, fname + '_movie.avi')
# text params
font = cv2.FONT_HERSHEY_SIMPLEX
text_color = (200,255,155)
# arrow params
arrow_color = (200,255,155)
thickness = 7
looping_frames = range(nframes_avg,n_samples-interval,interval)
num_video_frames = len(looping_frames)
im_final = np.empty([num_video_frames, ypix_total, xpix_total], dtype='uint8')
# loop through each block of frames to average
frame_count = 0
for iframe in looping_frames:
if nframes_avg == 0:
frames = iframe
im_avg = im_uint8[frames,ypix_include[0]:ypix_include[1],xpix_include[0]:xpix_include[1]]
else:
frames = slice(iframe-nframes_avg,iframe+nframes_avg)
im_avg = np.mean(
im_uint8[frames,ypix_include[0]:ypix_include[1],xpix_include[0]:xpix_include[1]], axis = 0 )
# add time in seconds
frame_time = np.round(frame_count/fs, decimals = 2)
img_annote = cv2.putText(im_avg, str(frame_time) + ' sec' ,(30,50),
font, 1, text_color, 2, cv2.LINE_AA)
# add arrows indicating stim event
if frame_count in np.round(np.multiply(stim_onset, fs)):
for stim_loc in stim_locs:
arrow_start = tuple(np.subtract(stim_loc, (30, 30))) # get arrow start loc by subtracting 30 pixels for x and y
arrow_end = stim_loc
img_annote = cv2.arrowedLine(img_annote, arrow_start, arrow_end,
arrow_color, thickness, tipLength = 0.3)
# change brightness/contrast
img_annote = contrast*img_annote + brightness
img_annote = np.clip(img_annote, 0, 255)
im_final[frame_count,:,:] = img_annote
frame_count += 1
# save the movie
print(fout_path)
imageio.mimwrite(fout_path, im_final , fps = 15.0)
plt.imshow(np.mean(im_final, axis = 0))
#plt.axis([100, 400, 200, 400])
#plt.gca().invert_yaxis()
plt.clim([50, 70])
plt.imshow(im_final[50,:,:])
plt.clim([50, 70])
###Output
_____no_output_____ |
Marketing_Digital(E_COMMERCE).ipynb | ###Markdown
 **Marketing Analytics :**
* Data preparation:
Analyze the online behaviour data.
Clean the database.
Parse the JSON-formatted columns.
* Feature Engineering:
Analyze the user-level variables.
Create, treat and group the variables.
* Training the model:
Train/test split.
Train a linear regression.
Graphical analysis of the results.
* Improving the Feature Engineering:
Create qualitative variables.
Clean the database.
Create several additional variables.
Slicing.
* Applying Gradient Boosting to predict how much a user will spend:
Identify the column types.
Label encoder.
Categorical variables.
Train a linear regression.
Train a gradient boosting model.
* Conclusion:
We were able to predict how much a user spent during their visit to the site.
Data Prep
###Code
import pandas as pd
df = pd.read_csv('train.csv')
df.head()
df.shape
len(df.fullVisitorId.unique())
df.dtypes
df = pd.read_csv('train.csv', dtype={'date':object,'fullVisitorId':object,'VisitId':object})
df.dtypes
df.head()
df.device.iloc[0]
type(df.device.iloc[0])
import json
type(json.loads(df.device.iloc[0]))
json.loads(df.device.iloc[0])
pd.DataFrame([json.loads(linha) for linha in df.device])
dicionarios = ['device','geoNetwork','trafficSource','totals']
for coluna in dicionarios:
df = df.join(pd.DataFrame([json.loads(linha) for linha in df[coluna]]))
df.head()
df.drop(dicionarios, axis=1, inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
Cleaning the data
###Code
df.drop('adwordsClickInfo',axis=1,inplace=True)
coluna_na = []
for coluna in df.columns:
print(coluna + ': ' + str(len(df[coluna].unique())))
if len(df[coluna].unique()) == 1:
coluna_na.append(coluna)
coluna_na
len(coluna_na)
df.drop(coluna_na,axis=1,inplace=True)
df.head()
df.shape
df.dtypes
quant = ['bounces', 'hits','newVisits','pageviews', 'transactionRevenue']
for coluna in quant:
df[coluna] = pd.to_numeric(df[coluna])
df.dtypes
df.head()
df.transactionRevenue.fillna(0, inplace=True)
df.transactionRevenue = df.transactionRevenue / 1000000
df.shape
len(set(df.fullVisitorId))
###Output
_____no_output_____
###Markdown
Feature Engineering
###Code
df_quant = df.groupby('fullVisitorId',as_index=False)[quant].sum()
df_quant.head()
df_quant.shape
###Output
_____no_output_____
###Markdown
Splitting the datasets
###Code
y = df_quant.transactionRevenue.copy()
X = df_quant.drop('transactionRevenue',axis=1)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.3, random_state=42)
X_train.head()
y_train.head()
###Output
_____no_output_____
###Markdown
Linear Regression
###Code
from sklearn.linear_model import LinearRegression
reg = LinearRegression()
reg.fit(X_train, y_train)
reg_predict = reg.predict(X_test)
reg_predict
X_test.head()
###Output
_____no_output_____
###Markdown
Evaluating the results
###Code
resultados = pd.DataFrame()
resultados['revenue'] = y_test
resultados['predict'] = reg_predict
resultados['erro'] = reg_predict - y_test
resultados.head()
resultados[resultados.revenue > 0]
import numpy as np
###Output
_____no_output_____
###Markdown
MSE
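As a reminder, the cell below computes the mean squared error of the predictions, $MSE = \frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i - y_i)^2$, where $\hat{y}_i$ are the predicted revenues (`reg_predict`) and $y_i$ the observed ones (`y_test`).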
###Code
np.mean((reg_predict - y_test)**2)
###Output
_____no_output_____
###Markdown
RMSE
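The RMSE below is simply the square root of the MSE, $RMSE = \sqrt{MSE}$, which puts the error back on the same scale as the revenue itself.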
###Code
np.sqrt(np.mean((reg_predict - y_test)**2))
from sklearn.metrics import mean_squared_error
np.sqrt(mean_squared_error(y_test,reg_predict))
np.mean(df_quant.transactionRevenue)
np.std(df_quant.transactionRevenue)
import seaborn as sns
sns.boxplot(reg_predict)
sns.boxplot(y_test)
sns.distplot(reg_predict - y_test)
###Output
C:\Users\Alura Preto\Anaconda3\lib\site-packages\scipy\stats\stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
###Markdown
Improving the Feature Engineering
###Code
visitas_ultima = df.groupby('fullVisitorId',as_index=False)
visitas_ultima = visitas_ultima['visitNumber'].max()
visitas_ultima.head()
df.head()
usuarios_visitas_unicos = df.drop_duplicates(subset=['fullVisitorId','visitNumber'])
usuarios_visitas_unicos.head()
usuarios_visitas_unicos.shape
visitas = pd.merge(visitas_ultima,usuarios_visitas_unicos,left_on=['fullVisitorId','visitNumber'],
right_on=['fullVisitorId','visitNumber'],how='left')
visitas.head()
visitas.shape
visitas_primeira = df.groupby('fullVisitorId',as_index=False)
visitas_primeira = visitas_primeira['visitNumber'].min()
visitas_primeira.head()
visitas_primeira.set_index('fullVisitorId',inplace=True)
visitas_primeira.head()
visitas = visitas.join(visitas_primeira,how='left',on='fullVisitorId',rsuffix='_primeira')
visitas.head()
visitas = pd.merge(visitas,usuarios_visitas_unicos,left_on=['fullVisitorId','visitNumber_primeira'],
right_on=['fullVisitorId','visitNumber'],how='left', suffixes=['_ultima','_primeira'])
visitas.head()
###Output
_____no_output_____
###Markdown
Cleaning the dataset
###Code
quant
for coluna in quant:
visitas.drop(coluna + '_ultima',axis=1,inplace=True)
visitas.drop(coluna + '_primeira',axis=1,inplace=True)
visitas.head()
ids = ['sessionId_ultima','visitId_ultima','sessionId_primeira', 'visitId_primeira']
visitas.drop(ids,axis=1,inplace=True)
visitas.head()
visitas.columns
geo = ['city_primeira','continent_primeira','country_primeira','metro_primeira','region_primeira',
'networkDomain_primeira','subContinent_primeira']
visitas.drop(geo,axis=1,inplace=True)
visitas.head()
###Output
_____no_output_____
###Markdown
Creating new variables
###Code
df_quant.head()
visitas = pd.merge(visitas,df_quant,left_on=['fullVisitorId'],
right_on=['fullVisitorId'],how='left')
visitas.head()
visitas['tempo_dif'] = visitas.visitStartTime_ultima - visitas.visitStartTime_primeira
visitas.head()
visits = df.groupby('fullVisitorId',as_index=False).count().visitNumber.values
visits
visitas['visits'] = visits
visitas.head()
data = '20160904'
data
data[0:4]
data[4:6]
data[6:8]
visitas['ano_ultima'] = pd.to_numeric([data[0:4] for data in visitas.date_ultima])
visitas['mes_ultima'] = pd.to_numeric([data[4:6] for data in visitas.date_ultima])
visitas['dia_ultima'] = pd.to_numeric([data[6:8] for data in visitas.date_ultima])
visitas['ano_primeira'] = pd.to_numeric([data[0:4] for data in visitas.date_primeira])
visitas['mes_primeira'] = pd.to_numeric([data[4:6] for data in visitas.date_primeira])
visitas['dia_primeira'] = pd.to_numeric([data[6:8] for data in visitas.date_primeira])
visitas.head()
visitas.dtypes
###Output
_____no_output_____
###Markdown
Splitting the dataset
###Code
visitas.drop('fullVisitorId',axis=1,inplace=True)
y = visitas.transactionRevenue.copy()
X = visitas.drop('transactionRevenue',axis=1)
X.head()
quali = visitas.dtypes[visitas.dtypes == object].keys()
quali
###Output
_____no_output_____
###Markdown
Label encoder
###Code
from sklearn.preprocessing import LabelEncoder
strings = list(X.operatingSystem_ultima.values.astype('str'))
lbl = LabelEncoder()
lbl.fit(strings)
lbl.transform(strings)
for coluna in quali:
lbl = LabelEncoder()
strings = list(X[coluna].values.astype('str'))
lbl.fit(strings)
X[coluna] = lbl.transform(strings)
X.head()
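# Added note (a sketch, not part of the original flow): LabelEncoder assigns arbitrary integer
# codes to categories, which implicitly orders nominal features. Tree-based models such as the
# gradient boosting used later cope with this; for linear models a one-hot encoding is usually
# safer, e.g. (kept commented out because high-cardinality columns would create a very wide frame):
# X_onehot = pd.get_dummies(visitas.drop('transactionRevenue', axis=1), columns=list(quali))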
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.3, random_state=42)
###Output
_____no_output_____
###Markdown
Linear regression
###Code
reg = LinearRegression()
reg.fit(X_train,y_train)
reg_predict = reg.predict(X_test)
reg_predict[reg_predict < 0] = 0
resultados = pd.DataFrame()
resultados['revenue'] = y_test
resultados['predict'] = reg_predict
resultados['erro'] = reg_predict - y_test
resultados.head()
resultados[resultados.revenue > 0]
np.sqrt(mean_squared_error(y_test,reg_predict))
sns.boxplot(reg_predict)
###Output
_____no_output_____
###Markdown
Gradient Boosting
###Code
from sklearn.ensemble import GradientBoostingRegressor
gb = GradientBoostingRegressor(random_state=42)
gb.fit(X_train,y_train)
gb_predict = gb.predict(X_test)
gb_predict
gb_predict[gb_predict < 0 ] = 0
gb_predict
resultados = pd.DataFrame()
resultados['revenue'] = y_test
resultados['predict'] = gb_predict
resultados['erro'] = gb_predict - y_test
resultados[resultados.revenue > 0]
np.sqrt(mean_squared_error(y_test,gb_predict))
###Output
_____no_output_____ |
cnn_2017.ipynb | ###Markdown
Importing the required libraries. We need to import a library called [captcha](https://github.com/lepture/captcha/) to generate CAPTCHA images. The characters in the generated CAPTCHAs consist of digits and uppercase letters.
###Code
from captcha.image import ImageCaptcha
import matplotlib.pyplot as plt
import numpy as np
import random
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import string
characters = string.digits + string.ascii_uppercase
print(characters)
width, height, n_len, n_class = 170, 80, 4, len(characters)
###Output
0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ
###Markdown
Defining the data generator
###Code
from keras.utils.np_utils import to_categorical
def gen(batch_size=32):
X = np.zeros((batch_size, height, width, 3), dtype=np.uint8)
y = [np.zeros((batch_size, n_class), dtype=np.uint8) for i in range(n_len)]
generator = ImageCaptcha(width=width, height=height)
while True:
for i in range(batch_size):
random_str = ''.join([random.choice(characters) for j in range(4)])
X[i] = generator.generate_image(random_str)
for j, ch in enumerate(random_str):
y[j][i, :] = 0
y[j][i, characters.find(ch)] = 1
yield X, y
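# Note: gen() yields X of shape (batch_size, height=80, width=170, 3) together with y, a list of
# n_len=4 one-hot arrays of shape (batch_size, n_class=36), one per captcha character.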
###Output
Using TensorFlow backend.
###Markdown
Testing the generator
###Code
def decode(y):
y = np.argmax(np.array(y), axis=2)[:,0]
return ''.join([characters[x] for x in y])
X, y = next(gen(1))
plt.imshow(X[0])
plt.title(decode(y))
###Output
_____no_output_____
###Markdown
Defining the network architecture
###Code
from keras.models import *
from keras.layers import *
input_tensor = Input((height, width, 3))
x = input_tensor
for i in range(4):
x = Convolution2D(32*2**i, 3, 3, activation='relu')(x)
x = Convolution2D(32*2**i, 3, 3, activation='relu')(x)
x = MaxPooling2D((2, 2))(x)
x = Flatten()(x)
x = Dropout(0.25)(x)
x = [Dense(n_class, activation='softmax', name='c%d'%(i+1))(x) for i in range(4)]
model = Model(input=input_tensor, output=x)
model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Visualizing the network architecture
###Code
from keras.utils.visualize_util import plot
from IPython.display import Image
plot(model, to_file="model.png", show_shapes=True)
Image('model.png')
###Output
_____no_output_____
###Markdown
Training the model
###Code
model.fit_generator(gen(), samples_per_epoch=51200, nb_epoch=5,
validation_data=gen(), nb_val_samples=1280)
###Output
Epoch 1/5
51200/51200 [==============================] - 127s - loss: 11.7949 - c1_loss: 3.0152 - c2_loss: 2.8573 - c3_loss: 2.9428 - c4_loss: 2.9797 - c1_acc: 0.1671 - c2_acc: 0.2028 - c3_acc: 0.1860 - c4_acc: 0.1785 - val_loss: 3.0524 - val_c1_loss: 0.7782 - val_c2_loss: 0.5730 - val_c3_loss: 0.8024 - val_c4_loss: 0.8988 - val_c1_acc: 0.7594 - val_c2_acc: 0.8109 - val_c3_acc: 0.7742 - val_c4_acc: 0.7586
Epoch 2/5
51200/51200 [==============================] - 125s - loss: 1.7097 - c1_loss: 0.4014 - c2_loss: 0.2837 - c3_loss: 0.4612 - c4_loss: 0.5634 - c1_acc: 0.8728 - c2_acc: 0.9081 - c3_acc: 0.8657 - c4_acc: 0.8464 - val_loss: 0.5816 - val_c1_loss: 0.0885 - val_c2_loss: 0.0693 - val_c3_loss: 0.2037 - val_c4_loss: 0.2201 - val_c1_acc: 0.9680 - val_c2_acc: 0.9805 - val_c3_acc: 0.9328 - val_c4_acc: 0.9297
Epoch 3/5
51200/51200 [==============================] - 125s - loss: 0.7000 - c1_loss: 0.1352 - c2_loss: 0.1087 - c3_loss: 0.2107 - c4_loss: 0.2456 - c1_acc: 0.9587 - c2_acc: 0.9636 - c3_acc: 0.9367 - c4_acc: 0.9312 - val_loss: 0.3853 - val_c1_loss: 0.0716 - val_c2_loss: 0.0714 - val_c3_loss: 0.1000 - val_c4_loss: 0.1423 - val_c1_acc: 0.9773 - val_c2_acc: 0.9773 - val_c3_acc: 0.9656 - val_c4_acc: 0.9609
Epoch 4/5
51168/51200 [============================>.] - ETA: 0s - loss: 0.4766 - c1_loss: 0.0872 - c2_loss: 0.0821 - c3_loss: 0.1389 - c4_loss: 0.1684 - c1_acc: 0.9718 - c2_acc: 0.9705 - c3_acc: 0.9551 - c4_acc: 0.9512
###Markdown
Testing the model
###Code
X, y = next(gen(1))
y_pred = model.predict(X)
plt.title('real: %s\npred:%s'%(decode(y), decode(y_pred)))
plt.imshow(X[0], cmap='gray')
plt.axis('off')
###Output
_____no_output_____
###Markdown
Computing the overall model accuracy
###Code
from tqdm import tqdm
def evaluate(model, batch_num=20):
batch_acc = 0
generator = gen()
for i in tqdm(range(batch_num)):
X, y = generator.next()
y_pred = model.predict(X)
batch_acc += np.mean(map(np.array_equal, np.argmax(y, axis=2).T, np.argmax(y_pred, axis=2).T))
return batch_acc / batch_num
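# Note: evaluate() as written assumes Python 2 / an older stack (generator.next() and a
# list-returning map); under Python 3 one would use next(generator) and list(map(...)).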
evaluate(model)
###Output
100%|██████████| 20/20 [00:01<00:00, 11.01it/s]
###Markdown
Saving the model
###Code
model.save('cnn.h5')
###Output
_____no_output_____ |
Experiment_03.ipynb | ###Markdown
Load dataset
###Code
import os
import wandb
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from os.path import join, exists
from os import makedirs
run_name = "verysmall_balanced_cyclical"
wandb.init(project="binary_search_optimization", name=run_name)
DATASET_DIR = "./datasets/dataset_verysmall_balanced.pkl"
WEIGHTS_DIR = join("./save_weights", run_name)
if not exists(WEIGHTS_DIR):
makedirs(WEIGHTS_DIR)
MAX_MONSTER_NUM = 1000
MONSTER_HPS_COL = ["monster_hp_" + str(num) for num in range(1, MAX_MONSTER_NUM + 1)]
FEATURES_COL = ["focus_damage", "aoe_damage", *MONSTER_HPS_COL]
TARGET_COL = ["attack_num"]
dataset = pd.read_pickle(DATASET_DIR)
# Log data distribution
for col in ["focus_damage", "aoe_damage", "attack_num"]:
plt.hist(dataset[col])
wandb.log({col: plt})
###Output
/anaconda3/envs/dev/lib/python3.7/site-packages/plotly/matplotlylib/mpltools.py:368: MatplotlibDeprecationWarning:
The is_frame_like function was deprecated in Matplotlib 3.1 and will be removed in 3.3.
###Markdown
Train test split
###Code
from sklearn.model_selection import train_test_split
bins = np.linspace(dataset[TARGET_COL].to_numpy().min(), dataset[TARGET_COL].to_numpy().max(), 100, dtype=int)
Y_bin = np.digitize(dataset[TARGET_COL].to_numpy(), bins)
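# Bin the continuous target into 100 buckets so train_test_split can stratify on it; the
# try/except below falls back to an unstratified split if any bin has too few samples.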
try:
train_set, test_set = train_test_split(dataset, random_state=42, shuffle=True, stratify=Y_bin)
except:
train_set, test_set = train_test_split(dataset, random_state=42, shuffle=True)
X_train, Y_train = train_set[FEATURES_COL].to_numpy(), train_set[TARGET_COL].to_numpy()
X_test, Y_test = test_set[FEATURES_COL].to_numpy(), test_set[TARGET_COL].to_numpy()
len(X_train), len(X_test)
###Output
_____no_output_____
###Markdown
Normalization
###Code
import joblib
from sklearn.preprocessing import MinMaxScaler
X_scaler = MinMaxScaler() if not exists(join(WEIGHTS_DIR, "X_scaler.pkl")) else joblib.load(join(WEIGHTS_DIR, "X_scaler.pkl"))
X_train_scaled = X_scaler.fit_transform(X_train.astype(np.float32))
X_test_scaled = X_scaler.transform(X_test.astype(np.float32))
Y_scaler = MinMaxScaler() if not exists(join(WEIGHTS_DIR, "Y_scaler.pkl")) else joblib.load(join(WEIGHTS_DIR, "Y_scaler.pkl"))
Y_train_scaled = Y_scaler.fit_transform(Y_train.astype(np.float32))
Y_test_scaled = Y_scaler.transform(Y_test.astype(np.float32))
# Save parameters for scalers
joblib.dump(X_scaler, join(WEIGHTS_DIR, "X_scaler.pkl"))
joblib.dump(Y_scaler, join(WEIGHTS_DIR, "Y_scaler.pkl"))
###Output
_____no_output_____
###Markdown
Model
###Code
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras import Model
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LeakyReLU
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import MAE
from tensorflow.keras.callbacks import LearningRateScheduler
from utilities import LearningRateFinder, Cosine
wandb.config.network_depth = 1
wandb.config.network_width = 16
wandb.config.activation = "LeakyReLU"
wandb.config.optimizer = "Adam"
wandb.config.loss = "MAE"
wandb.config.stepsize = 20
wandb.config.epochs = 160
wandb.config.batch_size = 32
wandb.config.validation_split = 0.2
class SequenceDense(Model):
def __init__(self):
super().__init__()
self.hidden_layers = []
for _ in range(wandb.config.network_depth):
self.hidden_layers.append(Dense(wandb.config.network_width, activation=LeakyReLU()))
self.output_layer = Dense(1, activation="relu")
def call(self, inputs):
output = inputs
for layer in self.hidden_layers:
output = layer(output)
output = self.output_layer(output)
return output
model_name = "model_{}_{}".format(wandb.config.network_depth, wandb.config.network_width)
model = SequenceDense()
model.build(input_shape=X_train_scaled.shape)
model.compile(optimizer=Adam(learning_rate=0.0001), loss="mae")
try:
model.load_weights(join("./save_weights", model_name))
except:
model.save_weights(join("./save_weights", model_name))
lr_finder = LearningRateFinder(model)
lr_finder.find((X_train_scaled, Y_train_scaled), start_lr=1e-10, epochs=20)
lr_finder.plot()
plt.plot(lr_finder.losses)
wandb.log({"lr_finder": plt})
base_lr = lr_finder.lrs[15000]
max_lr = lr_finder.lrs[30000]
wandb.log({"learning_rate_base": base_lr})
wandb.log({"learning_rate_max": max_lr})
wandb.log({"learning_rate_base_index": 15000})
wandb.log({"learning_rate_max_index": 30000})
def history_plot(history):
loss = history.history["loss"]
val_loss = history.history["val_loss"]
plt.subplot(2, 1, 1)
plt.title("loss")
plt.plot(loss)
plt.subplot(2, 1, 2)
plt.title("val_loss")
plt.plot(val_loss)
lr_cosine = Cosine(max_lr, base_lr, wandb.config.stepsize)
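# Note: Cosine and LearningRateFinder come from the local `utilities` module (not shown here);
# Cosine is assumed to be a cyclical schedule oscillating between max_lr and base_lr with a
# period set by wandb.config.stepsize epochs.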
callbacks = [LearningRateScheduler(lr_cosine)]
model = SequenceDense()
model.build(input_shape=X_train_scaled.shape)
model.compile(optimizer=Adam(learning_rate=base_lr), loss="mae")
model.load_weights(join("./save_weights", model_name))
history = model.fit(X_train_scaled, Y_train_scaled,
epochs=wandb.config.epochs,
batch_size=wandb.config.batch_size,
validation_split=wandb.config.validation_split,
verbose=1,
callbacks=callbacks)
model.save_weights(join(WEIGHTS_DIR, "model_trained", model_name))
history_plot(history)
loss = history.history["loss"]
plt.title("train_loss")
plt.plot(loss)
wandb.log({"train_loss": plt})
val_loss = history.history["val_loss"]
plt.title("val_loss")
plt.plot(val_loss)
wandb.log({"val_loss": plt})
test_loss = model.evaluate(X_test_scaled, Y_test_scaled, verbose=0)
wandb.log({"test_loss": test_loss})
lr_history = [lr_cosine(epoch) for epoch in range(1, 161)]
plt.title("learning_rate_history")
plt.plot(lr_history)
wandb.log({"test_loss": plt})
pred = model.predict(X_train_scaled)
loss = MAE(Y_train_scaled, pred)
plt.scatter(Y_train_scaled, loss)
wandb.log({"loss_dist": wandb.Image(plt, caption="loss_dist")})
print("Average loss: {}".format(np.mean(loss.numpy())))
###Output
Average loss: 0.010599975474178791
|
data-science-master/Section-2-Basics-of-Python-Programming/Lec-2.03-Variables-Number-Operators/03-operators.ipynb | ###Markdown
---
Department of Data Science
Course: Tools and Techniques for Data Science
---
Instructor: Muhammad Arif Butt, Ph.D.
Lecture 2.3 _03-operators.ipynb_
[Learn more about Python Operators](https://docs.python.org/3/library/stdtypes.html#numeric-types-int-float-complex)
Learning agenda of this notebook
1. Arithmetic operators in Python
- Python supports the following arithmetic operators:

| Operator | Purpose | Example | Result |
|------------|-------------------|-------------|-----------|
| `+` | Addition | `2 + 3` | `5` |
| `-` | Subtraction | `3 - 2` | `1` |
| `*` | Multiplication | `8 * 12` | `96` |
| `/` | Division | `100 / 7` | `14.28..` |
| `//` | Floor Division | `100 // 7` | `14` |
| `%` | Modulus/Remainder | `100 % 7` | `2` |
| `**` | Exponent | `5 ** 3` | `125` |
###Code
x = 10
y = 3
# Output: x + y
print('x + y =',x+y)
# Output: x - y
print('x - y =',x-y)
# Output: x * y
print('x * y =',x*y)
# Output: x / y
print('x / y =',x/y)
# Floor Division: Output: x // y
print('x // y =',x//y)
# Output: x ^ y
print('x ** y =',x**y)
# Output: x % y
print('x % y =',x%y)
###Output
x + y = 13
x - y = 7
x * y = 30
x / y = 3.3333333333333335
x // y = 3
x ** y = 1000
x % y = 1
###Markdown
2. Assignment Operators in Python
###Code
#Assignment operators in Python
x = 4
x += 5 # <-> x = x + 5
print(x)
# x = x - 5
# x -= 5
# x = x * 5
# x *= 5
# x = x / 5
# x /= 5
# x = x % 5
# x %= 5
# x = x // 5
# x //= 5
# x = x ** 5
# x **= 5
# x = x & 5
# x &= 5
# x = x | 5
# x |= 5
# x = x ^ 5
# x ^= 5
# x = x >> 5
# x >>= 5
# x = x << 5
x <<= 5
###Output
_____no_output_____
###Markdown
3. Comparison Operators in Python- Comparison operators compare the contents in a field to either the contents in another field or a constant.
###Code
#Comparison operators in Python
x = 10
y = 12
# Output: x > y
print('x > y is',x>y)
# Output: x < y
print('x < y is',x<y)
# Output: x == y
print('x == y is',x==y)
# Output: x != y
print('x != y is',x!=y)
# Output: x >= y
print('x >= y is',x>=y)
# Output: x <= y
print('x <= y is',x<=y)
a= x<=y
print(a)
###Output
_____no_output_____
###Markdown
4. Logical operators in Python
- The logical operators `and`, `or` and `not` operate upon conditions and `True` & `False` values. The `and` and `or` operate on two conditions, whereas `not` operates on a single condition.
- The `and` operator returns `True` when both the conditions evaluate to `True`. Otherwise, it returns `False`.

| `a` | `b` | `a and b` |
|---------|--------|-----------|
| `True` | `True` | `True` |
| `True` | `False`| `False` |
| `False`| `True` | `False` |
| `False`| `False`| `False` |

- The `or` operator returns `True` if at least one of the conditions evaluates to `True`. It returns `False` only if both conditions are `False`.

| `a` | `b` | `a or b` |
|---------|--------|-----------|
| `True` | `True` | `True` |
| `True` | `False`| `True` |
| `False`| `True` | `True` |
| `False`| `False`| `False` |

- The `not` operator returns `False` if a condition is `True` and `True` if the condition is `False`.
###Code
x = True
y = False
print('x and y is',x and y)
print('x or y is',x or y)
print('not x is',not x)
###Output
_____no_output_____
###Markdown
Logical operators can be combined to form complex conditions. Use round brackets or parentheses `(` and `)` to indicate the order in which logical operators should be applied.
###Code
numb = 3
(2 > 3 and 4 <= 5) or not (numb < 0 and True)
###Output
_____no_output_____
###Markdown
- Short Circuit Evaluation of Logical Expressions
- When Python is processing a logical expression such as `x >= 2 and (x/y) > 2`, it evaluates the expression from left to right.
- The evaluation of a logical expression stops when the overall value is already known. This is called short-circuiting the evaluation.
###Code
# A short circuit happens in 'and' operation, when the first condition evaluates to False
x = 3
y = 0
z = ((x>=6) and (x/y))
z
x = 8
y = 0
z = ((x>=6) and (x/y))
z
# A short circuit happens in 'or' operation, when the first condition evaluates to True
x = 7
y = 0
z = ((x>=6) or (x/y))
z
# Now short circuit will not happen
x = 3
y = 0
z = ((x<=6) and (x/y))
z
# to overcome the above scenario use guard evaluation
x = 3
y = 0
z = ((x <= 6) and (y != 0) and (x/y))
z
###Output
_____no_output_____
###Markdown
5. Bitwise Operators in Python- A bitwise operator is an operator used to perform bitwise operations on bit patterns or binary numerals that involve the manipulation of individual bits.
###Code
a = -5
b = a >> 1
b
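# Right-shifting a negative integer in Python is an arithmetic shift (floor division by 2),
# so b evaluates to -3 here.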
#Bitwise operators in Python
x = 10 # 00001010
y = 4 # 00000100
# Bitwise and
print('x & y is',x&y)
# Bitwise or
print('x | y is',x|y)
# Bitwise not
print('~x is',~x)
# Bitwise XOR
print('x^y is',x^y)
# Bitwise right shift
print('x>>3 is',x>>3)
# Bitwise left shift
print('x<<3 is',x<<3)
#Bitwise operators in Python
x = -10
y = 4
print(~x)
print(x^y)
print(x>>3)
###Output
9
-14
-2
###Markdown
6. Identity Operators in Python- Identity operators are used to compare the ID of objects (not their values)- Returns True if both objects refer to same memory location- The two identity operators in Python are `is` and `is not`
###Code
#Identity operators in Python
a = 5
b = 5.0
print(a is b)
print(a==b)
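# `a is b` checks object identity (the same object in memory), while `a == b` compares values:
# 5 and 5.0 are equal in value but are distinct objects of different types.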
a = 'Hello'
b = 'Hello'
# Output: False
print(a is not b)
# Output: True
print(a is b)
###Output
_____no_output_____
###Markdown
7. Membership Operators in Python- Python’s membership operators test for membership in a sequence, such as strings, lists, or tuples.- The two membership operators in Python are `in` and `not in`
###Code
a = 10
b = 4
list = [1, 2, 3, 4, 5 ]
rv = a in list
print(rv)
rv = b in list
print(rv)
###Output
_____no_output_____
###Markdown
8. Operators Precedence and Associativity:
- The **precedence of operators** determines which operator is executed first if there is more than one operator in an expression.
- Certain operators have higher precedence than others; for example, the multiplication operator has a higher precedence than the addition operator.
- Operators with the highest precedence appear at the top of the table, those with the lowest appear at the bottom.
- The **associativity of operators** is the order in which Python evaluates an expression containing multiple operators of the same precedence.
- Almost all the operators have left-to-right associativity. An operator:
 >- may be associative (meaning the operations can be grouped arbitrarily)
 >- left-associative (meaning the operations are grouped from the left)
 >- right-associative (the exponent operator ** has right-to-left associativity in Python)
 >- non-associative (meaning the operations cannot be chained, often because the output type is incompatible with the operand types)
###Code
# Run interactive help and type "OPERATORS" to get information about precedence
help('OPERATORS')
print(5 + 3 * 2)
print((5 + 3) * 2)
print(2 ** 3 ** 2)
print((2 ** 3) ** 2)
num1, num2, num3 = 2, 3, 4
print ((num1 + num2) * num3)
num1, num2, num3 = 2, 3, 4
print (num1 ** num2 + num3)
num1, num2 = 15, 3
print (~num1 + num2)
###Output
-13
|
quiz/q04/q5/Q4-q5.ipynb | ###Markdown
Download the adjective-noun data here. Create a queue of 10 RDDs using this data set and feed it into a Spark Streaming program. Your Spark Streaming algorithm should maintain a state that keeps track of the longest noun seen so far associated with each distinct adjective. After each RDD, print any 5 adjectives and their associated longest nouns, as well as the longest noun associated with the adjective 'good'. Note that not every line in the data set contains exactly two words, so make sure to clean the data as they are fed into the streaming program. The skeleton code is provided below:
###Code
from pyspark.streaming import StreamingContext
ssc = StreamingContext(sc, 5)
# Provide a checkpointing directory. Required for stateful transformations
ssc.checkpoint("checkpoint")
numPartitions = 8
rdd = sc.textFile('adj_noun_pairs.txt', numPartitions)
rddQueue = rdd.randomSplit([1]*10, 123)
lines = ssc.queueStream(rddQueue)
# FILL IN YOUR CODE
pairs = lines.map(lambda l: tuple(l.split())).filter(lambda p: len(p) == 2)
# Use transform() to access any rdd transformations not directly available in SparkStreaming
topWords = pairs.transform(lambda rdd: rdd.sortBy(lambda x: len(x[1]), False))
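# Added sketch (an assumption, not the original solution): a fully stateful version that keeps
# the longest noun seen so far per adjective across batches could use updateStateByKey:
# def update_longest(new_nouns, current):
#     candidates = list(new_nouns) + ([current] if current is not None else [])
#     return max(candidates, key=len) if candidates else current
# longest_so_far = pairs.updateStateByKey(update_longest)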
def printResults(rdd):
print(rdd.take(5))
print('good\'s the longest noun:', rdd.lookup('good')[0])
print()
topWords.foreachRDD(printResults)
ssc.start()
ssc.awaitTermination(50)
ssc.stop(False)
print("Finished")
###Output
[('medical', 'pneumonoultramicroscopicsilicovolcanoconiosis'), ('result', 'ortho-nitro-para-diamino-triphenylmethane'), ('comma-free', '00101110100001100010111010000110'), ('at-taqafah', 'wad-dimuqratiyyah/rassemblement'), ('delete', 'atlasshrugged/passengernumber4')]
good's the longest noun: morning/afternoon
[('detective-ghost-horror-who', 'travel-romantic-musical-comedy-epic'), ('dunnit-time', 'travel-romantic-musical-comedy-epic'), ('Upper', 'famennian/chautauquan/canadaway'), ('encyclopedia_list_links_solicited', 'building_wikipedia_membership'), ('big_traffic_links_solicited', 'building_wikipedia_membership')]
good's the longest noun: morning/afternoon
[('natural', '5-aminoimidazole-4-carboxamide-1-ß-d-ribofuranoside'), ('automatic', 'transmission/gearbox/transmission'), ('such', 'quarantine-for-decontamination'), ('delete', 'HoldMoreStubbornlyAtLeastTalk'), ('true', 'what-you-see-is-what-you-get')]
good's the longest noun: passage-planning
[('former', 'dmsb-produktionswagen-meisterschaft'), ('=', '9e107d9d372bb6826bd81d3542a419d6'), ('sexual', 'memories/complaints/experiences'), ('unknown', 'mathematician-turned-politician'), ('delete', 'atlasshrugged/reardenlimestone')]
good's the longest noun: nahi-anil-munkar
[('delete', 'whichwikishouldweuse/naminglinkingdiscussion'), ('delete', 'BusinessSchools/KingstonUniversityLondon'), ('kungurian/irenian/filippovian', 'artinskian/baigendzinian/aktastinian'), ('=', 'grizzly_giant_mariposa_grove'), ('german', 'list_of_controversial_issues')]
good's the longest noun: competitiveness
[('synthetic', '5-aminoimidazole-4-carboxamide-1-ß-d-ribofuranoside'), ('stack-oriented', 'non-english-based_programming_languages'), ('delete', 'atlasshrugged/passengernumber2'), ('delete', 'atlasshrugged/passengernumber1'), ('oral', 'trimethoprim/sulfamethoxazole')]
good's the longest noun: characterization
[('longer', 'Pneumonoultramicroscopicsilicovolcanoconiosis'), ('delete', 'BusinessSchools/UniversityOfMichiganAnnArbor'), ('delete', 'businessschools/sheffieldhallamuniversity'), ('delete', 'BusinessSchools/UniversityOfWashington'), ('advanced', 'electronic-counter-counter-measure')]
good's the longest noun: neighbourliness
[('=', '37f332f68db77bd9d7edd4969571ad671cf9dd3b'), ('changxingian/lopingian/djulfian', 'wujiapingian/lopingian/dorashamian'), ('delete', 'BryceSorryAndConfusedHarrington'), ('orient', 'Friedrich-Wilhelms-Universität'), ('embedded', 'encodingssynchronizationvideo')]
good's the longest noun: neighbourliness
[('delete', 'businessschools/norwegianschoolofmanagement'), ('sure', 'anti-the-wrong-guy-getting-executed'), ('net', 'iran/din/traditionaldateofzoroaster'), ('delete', 'wikishouldoffersimplifieduseoftable'), ('continued', 'symptoms/infection/complications')]
good's the longest noun: newspaper-seller
Finished
[('famous', 'llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch'), ('delete', 'atlasshrugged/nationalcouncilofmetalindustries'), ('dideoxy-β-d-fructo-furanosyl', '4-chloro-4-deoxy-α-d-galactopyranoside'), ('own', 'personality/assumptions/habits'), ('delete', 'thingsthatdonotexistforareason')]
good's the longest noun: infrastructure
|
Principal Component Analysis in Python.ipynb | ###Markdown
Principal Component Analysis in Python
https://plot.ly/ipython-notebooks/principal-component-analysis/shortcut--pca-in-scikitlearn
Principal Component Analysis in 3 Simple Steps
PCA is a simple yet popular and useful transformation that is used in numerous applications, such as stock market predictions, the analysis of gene expression data, and many more. In this tutorial, we will see that PCA is not just a "black box", and we are going to unravel its internals in 3 basic steps.
A Summary of the PCA Approach
1. Standardize the data.
2. Obtain the Eigenvectors and Eigenvalues from the covariance matrix or correlation matrix, or perform Singular Value Decomposition.
3. Sort eigenvalues in descending order and choose the $k$ eigenvectors that correspond to the $k$ largest eigenvalues, where $k$ is the number of dimensions of the new feature subspace ($k <= d$).
4. Construct the projection matrix $W$ from the selected $k$ eigenvectors.
5. Transform the original dataset $X$ via $W$ to obtain a $k$-dimensional feature subspace $Y$.
Preparing the Iris Dataset
The Iris dataset contains measurements for 150 iris flowers from three different species.
The three classes in the Iris dataset are:
1. Iris-setosa (n = 50)
2. Iris-versicolor (n = 50)
3. Iris-virginica (n = 50)
And the four features in the Iris dataset are:
1. sepal length in cm
2. sepal width in cm
3. petal length in cm
4. petal width in cm
Loading the Dataset
In order to load the Iris data directly from the UCI repository, we are going to use the superb pandas library.
###Code
import pandas as pd
df = pd.read_csv(filepath_or_buffer = 'http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data',
header=None,
sep = ',')
df.columns = ['sepal_len', 'sepal_wid', 'petal_len', 'petal_wid', 'class']
df.dropna(how = 'all', inplace = True) # drops the empty line at file-end
df.tail()
# split data table into data X and class labels y
X = df.iloc[:,0:4].values
y = df.iloc[:,4].values
###Output
_____no_output_____
###Markdown
Our iris dataset is now stored in form of a 150 x 4 matrix where the columns are the different features, and every row represents a separate flower sample. Each sample row x can be pictured as a 4-dimensional vector
$$\mathbf{x^T} = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} \text{sepal length} \\ \text{sepal width} \\ \text{petal length} \\ \text{petal width} \end{pmatrix}$$
Exploratory Visualization
To get a feeling for how the 3 different flower classes are distributed along the 4 different features, let us visualize them via histograms.
###Code
import plotly.plotly as py
#plotly.tools.set_credentials_file(username = 'plotlytrial', api_key = 'ylEQjU9b3czju6uknga5')
from plotly.graph_objs import *
import plotly.tools as tls
# plotting histograms
traces = []
legend = {0:False, 1:False, 2:False, 3:True}
colors = {'Iris-setosa': 'rgb(31, 119, 180)',
'Iris-versicolor': 'rgb(255, 127, 14)',
'Iris-virginica': 'rgb(44, 160, 44)'}
for col in range(4):
for key in colors:
traces.append(Histogram(x=X[y==key, col],
opacity=0.75,
xaxis='x%s' %(col+1),
marker=Marker(color=colors[key]),
name=key,
showlegend=legend[col]))
data = Data(traces)
layout = Layout(barmode='overlay',
xaxis=XAxis(domain=[0, 0.25], title='sepal length (cm)'),
xaxis2=XAxis(domain=[0.3, 0.5], title='sepal width (cm)'),
xaxis3=XAxis(domain=[0.55, 0.75], title='petal length (cm)'),
xaxis4=XAxis(domain=[0.8, 1], title='petal width (cm)'),
yaxis=YAxis(title='count'),
title='Distribution of the different Iris flower features')
fig = Figure(data=data, layout=layout)
py.iplot(fig)
###Output
_____no_output_____
###Markdown
Standardizing
Whether to standardize the data prior to a PCA on the covariance matrix depends on the measurement scales of the original features. Since PCA yields a feature subspace that maximizes the variance along the axes, it makes sense to standardize the data, especially if it was measured on different scales. Although all features in the Iris dataset were measured in centimeters, let us continue with the transformation of the data onto unit scale (mean = 0 and variance = 1), which is a requirement for the optimal performance of many machine learning algorithms.
###Code
from sklearn.preprocessing import StandardScaler
X_std = StandardScaler().fit_transform(X)
###Output
_____no_output_____
###Markdown
1 - Eigendecomposition - Computing Eigenvectors and Eigenvalues
The eigenvectors and eigenvalues of a covariance (or correlation) matrix represent the "core" of a PCA: The eigenvectors (principal components) determine the directions of the new feature space, and the eigenvalues determine their magnitude. In other words, the eigenvalues explain the variance of the data along the new feature axes.
Covariance Matrix
The classic approach to PCA is to perform the eigendecomposition on the covariance matrix $\Sigma$, which is a $d \times d$ matrix where each element represents the covariance between two features. The covariance between two features is calculated as follows:
$$\sigma_{jk} = \frac{1}{n-1}\sum_{i=1}^{N}\left( x_{ij}-\bar{x}_j \right) \left( x_{ik}-\bar{x}_k \right).$$
We can summarize the calculation of the covariance matrix via the following matrix equation:
$$\Sigma = \frac{1}{n-1} \left( (\mathbf{X} - \mathbf{\bar{x}})^T\;(\mathbf{X} - \mathbf{\bar{x}}) \right)$$
where $\mathbf{\bar{x}}$ is the mean vector $\mathbf{\bar{x}} = \frac{1}{n}\sum\limits_{i=1}^n x_{i}$.
The mean vector is a $d$-dimensional vector where each value in this vector represents the sample mean of a feature column in the dataset.
###Code
import numpy as np
mean_vec = np.mean(X_std, axis = 0)
cov_mat = (X_std - mean_vec).T.dot((X_std - mean_vec)) / (X_std.shape[0]-1)
print('Covariance matrix \n%s' %cov_mat)
print('Numpy covariance matrix: \n%s' %np.cov(X_std.T))
###Output
Numpy covariance matrix:
[[ 1.00671141 -0.11010327 0.87760486 0.82344326]
[-0.11010327 1.00671141 -0.42333835 -0.358937 ]
[ 0.87760486 -0.42333835 1.00671141 0.96921855]
[ 0.82344326 -0.358937 0.96921855 1.00671141]]
###Markdown
Next, we perform an eigendecomposition on the covariance matrix:
###Code
cov_mat = np.cov(X_std.T)
eig_vals, eig_vecs = np.linalg.eig(cov_mat)
print('Eigenvectors \n%s' %eig_vecs)
print('\nEigenvalues \n%s' %eig_vals)
###Output
Eigenvectors
[[ 0.52237162 -0.37231836 -0.72101681 0.26199559]
[-0.26335492 -0.92555649 0.24203288 -0.12413481]
[ 0.58125401 -0.02109478 0.14089226 -0.80115427]
[ 0.56561105 -0.06541577 0.6338014 0.52354627]]
Eigenvalues
[ 2.93035378 0.92740362 0.14834223 0.02074601]
###Markdown
Correlation Matrix
Especially in the field of finance, the correlation matrix is typically used instead of the covariance matrix. However, the eigendecomposition of the covariance matrix (if the input data was standardized) yields the same results as an eigendecomposition of the correlation matrix, since the correlation matrix can be understood as the normalized covariance matrix.
Eigendecomposition of the standardized data based on the correlation matrix:
###Code
cor_mat1 = np.corrcoef(X_std.T)
eig_vals, eig_vecs = np.linalg.eig(cor_mat1)
print('Eigenvectors \n%s' %eig_vecs)
print('\nEigenvalues \n%s' %eig_vals)
###Output
Eigenvectors
[[ 0.52237162 -0.37231836 -0.72101681 0.26199559]
[-0.26335492 -0.92555649 0.24203288 -0.12413481]
[ 0.58125401 -0.02109478 0.14089226 -0.80115427]
[ 0.56561105 -0.06541577 0.6338014 0.52354627]]
Eigenvalues
[ 2.91081808 0.92122093 0.14735328 0.02060771]
###Markdown
Eigendecomposition of the raw data based on the correlation matrix:
###Code
cor_mat2 = np.corrcoef(X.T)
eig_vals, eig_vecs = np.linalg.eig(cor_mat2)
print('Eigenvectors \n%s' %eig_vecs)
print('\nEigenvalues \n%s' %eig_vals)
###Output
Eigenvectors
[[ 0.52237162 -0.37231836 -0.72101681 0.26199559]
[-0.26335492 -0.92555649 0.24203288 -0.12413481]
[ 0.58125401 -0.02109478 0.14089226 -0.80115427]
[ 0.56561105 -0.06541577 0.6338014 0.52354627]]
Eigenvalues
[ 2.91081808 0.92122093 0.14735328 0.02060771]
###Markdown
We can clearly see that all three approaches yield the same eigenvector and eigenvalue pairs:
* Eigendecomposition of the covariance matrix after standardizing the data.
* Eigendecomposition of the correlation matrix.
* Eigendecomposition of the correlation matrix after standardizing the data.
Singular Value Decomposition
While the eigendecomposition of the covariance or correlation matrix may be more intuitive, most PCA implementations perform a Singular Value Decomposition (SVD) to improve the computational efficiency. So, let us perform an SVD to confirm that the results are indeed the same:
###Code
u,s,v = np.linalg.svd(X_std.T)
u
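# Added sanity check (a sketch, not part of the original tutorial): the singular values s relate
# to the covariance-matrix eigenvalues via s**2 / (n - 1), and the columns of u match the
# eigenvectors up to sign (True here because eig happened to return components in descending order).
n_samples = X_std.shape[0]
print(s**2 / (n_samples - 1))                                      # compare with the eigenvalues of cov_mat
print(np.allclose(np.abs(u), np.abs(np.linalg.eig(cov_mat)[1])))   # sign-insensitive comparison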
###Output
_____no_output_____
###Markdown
2 - Selecting Principal Components
The typical goal of a PCA is to reduce the dimensionality of the original feature space by projecting it onto a smaller subspace, where the eigenvectors will form the axes. However, the eigenvectors only define the directions of the new axes, since they all have the same unit length 1, which can be confirmed by the following two lines of code:
###Code
for ev in eig_vecs:
np.testing.assert_array_almost_equal(1.0, np.linalg.norm(ev))
print('Everything ok!')
###Output
Everything ok!
###Markdown
In order to decide which eigenvector(s) can be dropped without losing too much information for the construction of the lower-dimensional subspace, we need to inspect the corresponding eigenvalues: The eigenvectors with the lowest eigenvalues bear the least information about the distribution of the data; those are the ones that can be dropped.
In order to do so, the common approach is to rank the eigenvalues from highest to lowest and choose the top k eigenvectors.
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eig_pairs = [(np.abs(eig_vals[i]), eig_vecs[:,i]) for i in range(len(eig_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eig_pairs.sort()
eig_pairs.reverse()
# Visually confirm that the list is correctly sorted by decreasing eigenvalues
print('Eigenvalues in descending order:')
for i in eig_pairs:
print(i[0])
###Output
Eigenvalues in descending order:
2.91081808375
0.921220930707
0.147353278305
0.0206077072356
###Markdown
After sorting the eigenpairs, the next question is "how many principal components are we going to choose for our new feature subspace?" A useful measure is the so-called "explained variance," which can be calculated from the eigenvalues. The explained variance tells us how much information (variance) can be attributed to each of the principal components.
###Code
tot = sum(eig_vals)
var_exp = [(i / tot)*100 for i in sorted(eig_vals, reverse = True)]
cum_var_exp = np.cumsum(var_exp)
trace1 = Bar(x = ['PC %s' %i for i in range(1,5)], y = var_exp, showlegend = False)
trace2 = Scatter(x = ['PC %s' %i for i in range(1,5)], y = cum_var_exp, name = 'cumulative explained variance')
data = Data([trace1, trace2])
layout = Layout(yaxis = YAxis(title = 'Explained variance in percent'),
title = 'Explained variance by different principal components')
fig = Figure(data = data, layout = layout)
py.iplot(fig)
###Output
_____no_output_____
###Markdown
The plot above clearly shows that most of the variance (72.77% of the variance to be precise) can be explained by the first principal component alone. The second principal component still bears some information (23.03%) while the third and fourth components can safely be dropped without losing too much information. Together, the first two principal components contain 95.8% of the information.
It's about time to get to the really interesting part: the construction of the projection matrix that will be used to transform the Iris data onto the new feature subspace. Although the name "projection matrix" has a nice ring to it, it is basically just a matrix of our concatenated top k eigenvectors.
Here, we are reducing the 4-dimensional feature space to a 2-dimensional feature subspace, by choosing the "top 2" eigenvectors with the highest eigenvalues to construct our $d \times k$-dimensional eigenvector matrix $W$.
###Code
matrix_w = np.hstack((eig_pairs[0][1].reshape(4,1),
eig_pairs[1][1].reshape(4,1)))
print('Matrix W:\n', matrix_w)
###Output
Matrix W:
[[ 0.52237162 -0.37231836]
[-0.26335492 -0.92555649]
[ 0.58125401 -0.02109478]
[ 0.56561105 -0.06541577]]
###Markdown
3 - Projection Onto the New Feature Space
In this last step we will use the $4 \times 2$-dimensional projection matrix $W$ to transform our samples onto the new subspace via the equation $Y = X \times W$, where $Y$ is a $150 \times 2$ matrix of our transformed samples.
###Code
Y = X_std.dot(matrix_w)
traces = []
for name in ('Iris-setosa', 'Iris-versicolor', 'Iris-virginica'):
trace = Scatter(
x=Y[y==name,0],
y=Y[y==name,1],
mode='markers',
name=name,
marker=Marker(
size=12,
line=Line(
color='rgba(217, 217, 217, 0.14)',
width=0.5),
opacity=0.8))
traces.append(trace)
data = Data(traces)
layout = Layout(showlegend=True,
scene=Scene(xaxis=XAxis(title='PC1'),
yaxis=YAxis(title='PC2'),))
fig = Figure(data=data, layout=layout)
py.iplot(fig)
###Output
_____no_output_____ |
Other/public_cloud_computing/guides/G1/G1.ipynb | ###Markdown
Guide 1: Public Cloud with Microsoft Azure Public Cloud services are useful tools to store and analyze documents' contents as well as to extract and process data. If you have never used a public cloud, you've probably already used: uploading documents to remote platform (some popular ones are DropBox, Google Drive, etc), transcribing an audio (after a set of interviews), copying and pasting data from webpage' images (who did not do that, right?) are some example of services that have been automated on public clouds to offer an on demand service.Microsoft Azure is a platform that automates and enhances a lot of the tasks that arise when dealing with storing data, machine learning techniques and cluster computing (check major platforms' websites for a more comprehensive overview, here is a link to [Microsoft Azure](https://azure.microsoft.com/en-us/)). Azure is one of the top competitors for investments on public cloud services, (see [The Top 5 Cloud Computing Vendors](https://www.forbes.com/sites/bobevans1/2017/11/07/the-top-5-cloud-computing-vendors-1-microsoft-2-amazon-3-ibm-4-salesforce-5-sap/3f1473c56f2e)) being the one with greater investments as estimated recentely.In this tutorial we are going to explain: a) cloud computing basics, b) how to sign in Microsoft Azure free services, c) how to deploy some cloud services (e.g. storage account, speech recognition, text analytics, etc.) both using REST APIs and web interface application (e.g. storage explorer). While some cognitive services are accessible from the studio, some are not so we have to poke around to exploit the most interisting services available for social scientists. We estimate that completing this guide will take you around 15-30 minutes for reading, Microsoft Azure set up and deploying services. We'll go over the basics of some Microsoft Azure services, but we should point out that a *lot* of talented people have given tutorials, and we won't do any better than they have. Table of Contents* [Guide 1: Public Cloud with Microsoft Azure](Guide-1:-Public-Cloud-with-Microsoft-Azure) * [Why should you use public cloud services?](Why-should-you-use-public-cloud-services?) * [Public cloud basics](Public-cloud-basics) * [Cloud pros and cons](Cloud-pros-and-cons) * [Getting started: first access to Azure](Getting-started:-first-access-to-Azure) * [Create your Azure free account](Create-your-Azure-free-account) * [Login to Azure Dashboard and create your first service](Login-to-Azure-Dashboard-and-create-your-first-service) * [Create API key to use Microsoft Azure service](Create-API-key-to-use-Microsoft-Azure-service) * [Script: create a set of keys for using Azure services](Script:-create-a-set-of-keys-for-using-Azure-services) * [Recap](Recap) * [What you have learnt](What-you-have-learnt) * [What you will learn next guide](What-you-will-learn-next-guide) Why should you use public cloud services?If you ask different people, you'll get different answers, but one of the commonalities is that most people don't realize is that eventhough these services come with costs (i.e. both monetary and training), they provides great resources that social scientists should start exploring themself. Here are some highlights:- ** You can use them “anytime, anywhere”: ** public cloud users can access, barely always, cloud services and keep their data stored safely in the infrastructure.- ** You won't *need* to plan far ahead for provisioning: ** public cloud users can use infinite computing and storaging resources available on demand. 
In this way, the user can offload some problems to the service provider, such as maintaining both hardware and software.
Cloud pros and consOverall, the advantages of using a cloud outperforms the disadvantage. When deciding to build your own application, which an example in this series of guides is the experiment, it is important to consider a couple of aspects: cloud security, and cloud service accuracy. In respect to the first, we reccomend making sure that data being stored in the cloud meet the requirements of your institution/research as well as checking the warranty of the chosen provider. In respect to the second, we suggest to consider the accuracy of the algorithm provided by the service for the purpose of the research but this fall outside the purpose of this workshop. Naeem et al.[[3](Footnotes)] discusses some of the advantages and disadvantages when using cloud computing. We report them from their study in the chart below:| Disadvantages | Advantages ||:---:|:---:|| Security in the Cloud | Almost Unlimited Storage || Technical Issues | Quick Development || Non-Interoperability | Cost Efficient || Dependency and vendor lock-in | Automatic Software Integration || Internet Required | Shared Resources || Less Reliability | Easy Access to Information || Less management | Mobility || Raised Vulnerability | Better Hardware Management || Prone to Attack | Backup and Recovery | Getting started: first access to Azure Create your Azure free accountTo access Azure cloud computing services, you will have to sign up for an Azure free account, if you don’t already have one. If you do not have a Microsoft account either you will be asked to create one, otherwise insert your outlook account (e.g. [email protected]). To create your Azure account you will be asked to add your credentials as well as a credit card account. This will not be charged unless you exceed the credit provided with the one month trial version [[4](Footnotes)]. We reccomend to cancel the account after the first month in case you are not interested in the service.Follow the next steps to set up a free account: - Go to https://azure.microsoft.com/en-us/ and click on **`free account`**- Click on **`start free`**- Create an Azure account: choose name, set password, add security info, add credit/debit card information  Login to Azure Dashboard and create your first serviceOnce you have created your Azure free account, you just need to go to the Azure portal and login using the credentials.- Go to https://portal.azure.com/ and sign-in to your account to access Azure Dashboard (note: familiarize with the relevant services?)You are now ready to deploy your first public cloud service! Follow the next steps:- Click on **`create a resource`**- Write on the bar the name of the service you want to subscribe to. For convinience we show the Storage Account that will we use in the next guide. Type **`storage account`** in the bar and then press **`enter`**- You will be directed to the a view containing a short description of the service, as well as links to the documentation and pricing information. Click on **`create`** to start deploying the services.- Next, complete entering the following intormation and click on **`create`** once finished: - Account name: enter lowercase and globally unique name (e.g. "mycloudstorageplayground") - Deployment model: click on Resource manager - Account kind: Storage v1 - Location: East US - Replication: Locally-reduntant storage (LRS) - Performance: click on Standard - Secure transfer required: Enabled - Subscription: Free Trial - Select server region: Eastus - Resource Group: create a new entering name (e.g. 
myresourcegroup) - Virtual networks: click on Enabled - Pin to dashboard (optional): [x] - Once the service is deployed, you will see on your Dashbord a white box with your storage account's name. Click on the box with your **`storage_account_name`** to access the storage account interface.- The storage account interface shows a summary of the settings defined in the previous steps and other utilities. On the top right box you can see the region from which your service is deployied, the type of storage you have choosen as well as the type of contents you decided to store (i.e. Locally Reduntant Storage which stands for data you might use a lot and the server will know). You can also find the id of your subrscition below erased for privacy purposes. Create API key to use Microsoft Azure serviceWe have shown you how to login to the Azure portal and how to create a Storage Account, now it is time to retrieve the key necessary to use it. We will use the key in the next guide to make requests to Microsoft Azure using Application Programming Interface (API). The API functions as an intermediary that allows two applications to talk to each other, in our case our software and Azure SaaS. The API key allows Azure to identifies your subscription account and to bill it (unless you switch your free account to a pay as you go account your account will not be billed).- To retrieve your Storage Account key, start from going to the dashboard and clicking on the box with your **`storage_account_name`**. Then, click on on **`Access keys`** on the side bar.- Copy storage account name and key1, clicking on the icon in the left, and paste them in the script below.  Script: create a set of keys for using Azure servicesIn the next guides, we are going to poke around with several Azure services. We reccomend you to create all the services on the list below and to save the name you will give to each service and primary key in the cell below. Here is a complete list of the services that you should create:- **Storage Account**- **Face**- **Computer Vision**- **Bing Speech Recognition**- **Text Analytics**When looking for a service, we recommend to click on the **`Create a Resource`** button and to copy each service name on the finder bar as shown before. This will avoid you to look for service at the time, and some headache from navigating yourself through the myriad of services available. Run the cell when you are done, and a file with your key will be automatically generated and stored into the folder public_cloud_computing/guides/keys.
###Code
###############################################################
# copy and paste your services' account name and primary key #
###############################################################
# STORAGE_ACCOUNT
STORAGE_ACCOUNT_NAME = '' #add your account name
STORAGE_ACCOUNT_API_KEY = '' #add your account key1
# COGNITIVE_SCIENCE_FACE_ACCOUNT
FACE_ACCOUNT_NAME = ''
FACE_API_KEY = ''
# COGNITIVE_SCIENCE_COMPUTER_VISION_ACCOUNT
COMPUTER_VISION_NAME = ''
COMPUTER_VISION_API_KEY = ''
# SPEECH_RECOGNITION_ACCOUNT
SPEECH_RECOGNITION_NAME = ''
SPEECH_RECOGNITION_KEY = ''
# TEXT_ANALYTICS_ACCOUNT
TEXT_ANALYTICS_NAME = ''
TEXT_ANALYTICS_API_KEY = ''
#run this cell to write a copy of your Azure services information (NAME and API's key)
#write a dictionary
azure_services_keys = {'STORAGE': {'NAME': STORAGE_ACCOUNT_NAME, 'API_KEY': STORAGE_ACCOUNT_API_KEY},
'FACE': {'NAME': FACE_ACCOUNT_NAME, 'API_KEY': FACE_API_KEY},
'COMPUTER_VISION': {'NAME': COMPUTER_VISION_NAME, 'API_KEY': COMPUTER_VISION_API_KEY},
'SPEECH_RECOGNITION': {'NAME': SPEECH_RECOGNITION_NAME, 'API_KEY': SPEECH_RECOGNITION_KEY},
'TEXT_ANALYTICS': {'NAME': TEXT_ANALYTICS_NAME, 'API_KEY': TEXT_ANALYTICS_API_KEY}}
#dump the dictionary on a file and saved in the folder < /guides/keys >
#import modules
import pickle
import json
#open a .json file and copy the dictionary with all your keys
with open("../keys/azure_services_keys.json", 'wb') as f:
pickle.dump(azure_services_keys, f)
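# note: despite the .json extension, the file above is written with pickle (binary mode);
# later guides are assumed to read it back with pickle.load rather than json.load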
################################
# run this cell once completed #
################################
###Output
_____no_output_____
###Markdown
Recap
What you have learnt
- What is cloud and its advantages
- Access the Azure portal
- How to deploy a public cloud service
What you will learn next guide
- How to use public cloud services:
 - What is a cloud storage
 - Access Azure cloud storage using Storage Explorer UI and with Python SDK
 - Create a BLOB container and upload BLOBs (Binary Large Objects, i.e. images, audio, etc.)
Question for you
- Now that you know more about cloud, what do you think about it?
- When would it be useful in your work, research?
Footnotes
- [1] Armbrust et al, 2009. Above the Clouds: A Berkeley View of Cloud Computing
- [2] Peter Mell and Timothy Grance, 2011. The NIST Definition of Cloud Computing: recommendations of the National Institute of Standards and Technology
- [3] Naeem et al, 2016. Cluster Computing vs Cloud Computing: a comparison and overview
- [4] At subscription of a free account you will receive 200 dollars for 30 days to try pay-as-you-go cloud services and a free account for a year. Once you exceed 200 dollars or the 30-day free trial expires, you will be asked to upgrade your subscription.
###Code
#import library to display notebook as HTML
import os
from IPython.core.display import HTML
#path to .css style sheet
cur_path = os.path.dirname(os.path.abspath("__file__"))
new_path = os.path.relpath('../../styles/custom_styles_public_cloud_computing.css', cur_path)
#function to display notebook
def css():
style = open(new_path, "r").read()
return HTML(style)
#run this cell to apply HTML style
css()
###Output
_____no_output_____ |
cs109b_lec4-5_clustering/cs109b_clustering.ipynb | ###Markdown
Old Faithful and Clustering
###Code
#imports used throughout this notebook
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import display
from sklearn.decomposition import PCA

faithful = pd.read_csv("faithful.csv")
display(faithful.head())
display(faithful.describe())
import seaborn as sns
plt.figure(figsize=(10,5))
plt.scatter(faithful["eruptions"], faithful["waiting"])
plt.xlabel("eruptions")
plt.ylabel("waiting")
plt.xlim(0,6)
plt.ylim(30,100)
plt.figure(figsize=(10,5))
sns.kdeplot(faithful["eruptions"], faithful["waiting"])
plt.scatter(faithful["eruptions"], faithful["waiting"])
plt.xlim(0,6)
plt.ylim(30,100)
plt.show()
###Output
_____no_output_____
###Markdown
There are two distinct modes to the data: one with eruption values (volumes?) of 1 to 3 and low waiting times, and a second cluster with larger eruptions and longer waiting times. Notably, there are very few eruptions in the middle. Review: PCA First, we import data on different types of crime in each US state
###Code
USArrests = pd.read_csv("USArrests.csv")
USArrests['StateAbbrv'] = ["AL", "AK", "AZ", "AR", "CA", "CO", "CT","DE", "FL", "GA", "HI", "ID", "IL", "IN", "IA", "KS", "KY", "LA", "ME", "MD", "MA", "MI", "MN", "MS", "MO", "MT", "NE", "NV", "NH", "NJ","NM", "NY", "NC", "ND", "OH", "OK", "OR", "PA", "RI", "SC","SD", "TN", "TX", "UT", "VT", "VA", "WA", "WV", "WI", "WY"]
display(USArrests.head())
display(USArrests.describe())
###Output
_____no_output_____
###Markdown
The data has more dimensions than we can easily visualize, so we use PCA to condense it. As usual, we scale the data before applying PCA. (Note that we scale everything, rather than fitting the scaler on a training set and carrying that scaling over to future data; we won't be using a test set here, so it's correct to use all the data for scaling.)
###Code
from sklearn import preprocessing
df = USArrests[['Murder','Assault','UrbanPop','Rape']]
scaled_df = pd.DataFrame(preprocessing.scale(df), index=USArrests['State'], columns = df.columns)
fitted_pca = PCA().fit(scaled_df)
USArrests_pca = fitted_pca.transform(scaled_df)
###Output
_____no_output_____
###Markdown
The biplot function plots the first two PCA components, and provides some helpful annotations
###Code
def biplot(scaled_data, fitted_pca, original_dim_labels, point_labels):
pca_results = fitted_pca.transform(scaled_data)
pca1_scores = pca_results[:,0]
pca2_scores = pca_results[:,1]
# plot each point in 2D post-PCA space
plt.scatter(pca1_scores,pca2_scores)
# label each point
for i in range(len(pca1_scores)):
plt.text(pca1_scores[i],pca2_scores[i], point_labels[i])
#for each original dimension, plot what an increase of 1 in that dimension means in this space
for i in range(fitted_pca.components_.shape[1]):
raw_dims_delta_on_pca1 = fitted_pca.components_[0,i]
raw_dims_delta_on_pca2 = fitted_pca.components_[1,i]
plt.arrow(0, 0, raw_dims_delta_on_pca1, raw_dims_delta_on_pca2 ,color = 'r',alpha = 1)
plt.text(raw_dims_delta_on_pca1*1.1, raw_dims_delta_on_pca2*1.1, original_dim_labels[i], color = 'g', ha = 'center', va = 'center')
plt.figure(figsize=(8.5,8.5))
plt.xlim(-3.5,3.5)
plt.ylim(-3.5,3.5)
plt.xlabel("PC{}".format(1))
plt.ylabel("PC{}".format(2))
plt.grid()
biplot(scaled_df, fitted_pca,
original_dim_labels=scaled_df.columns,
point_labels=USArrests['State'])
###Output
_____no_output_____
###Markdown
The red arrows and green text give us a sense of direction. If any state had 'murder' increase by one (scaled) unit, it would move in the direction of the 'murder' line by that amount. An increase by one (scaled) unit of both 'murder' and 'Urban Pop' would apply both moves. We can also make inferences about what combination of crimes and population puts California at its observed point. Extra: Variance Captured As usual, we want to know what proportion of the variance each PC captures
###Code
plt.figure(figsize=(11,8.5))
plt.plot(range(1,5),fitted_pca.explained_variance_ratio_,"-o")
plt.xlabel("Principal Component")
plt.ylabel("Proportion of Variance Explained")
plt.ylim(0,1)
plt.show()
print("Proportion of variance explained by each PC:")
print(fitted_pca.explained_variance_ratio_)
###Output
_____no_output_____
###Markdown
Even more usefully, we can plot how much of the total variation we'd capture by using the first N PCs. The two-component plot above captures 86.7% of the total variance.
###Code
plt.figure(figsize=(11,8.5))
plt.plot(range(1,5),np.cumsum(fitted_pca.explained_variance_ratio_),"-o")
plt.xlabel("Principal Component")
plt.ylabel("Cumulative Proportion of Variance Explained")
plt.ylim(0,1.1)
plt.show()
print("Total variance capturted when using N PCA components:")
print(np.cumsum(fitted_pca.explained_variance_ratio_))
###Output
_____no_output_____
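###Markdown
Scikit-learn's PCA can also be asked directly for the smallest number of components that reaches a target fraction of variance, by passing a float between 0 and 1 as `n_components`. A minimal sketch (the 0.90 threshold is chosen purely for illustration):
###Code
#minimal sketch: let PCA pick the number of components covering at least 90% of the variance
pca_90 = PCA(n_components=0.90, svd_solver='full').fit(scaled_df)
print("Components needed for 90% of the variance:", pca_90.n_components_)
###Output
_____no_output_____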
###Markdown
Scaling and DistancesReturning to the arrest/crime data, we again inspect the data and its PCA plot
###Code
np.random.seed(123)
arrests_sample = USArrests.sample(6)
arrests_sample
np.random.seed(123)
np.round(scaled_df.sample(6),2)
plt.figure(figsize=(10,5))
biplot(scaled_df, fitted_pca,
original_dim_labels=scaled_df.columns,
point_labels=USArrests['State'])
###Output
_____no_output_____
###Markdown
Distances One of the key ideas in clustering is the distance or dissimilarity between points. Euclidean distance is common, though one is free to define domain-specific measures of how similar/distant two observations are.
###Code
from scipy.spatial.distance import pdist
from scipy.spatial.distance import squareform
###Output
_____no_output_____
###Markdown
The `pdist` function computes the distances between all pairs of data points (which can be quite expensive for large data). `squareform` turns the result into a numpy array (the raw format avoids storing redundant values)The distances between a handful of states are shown below. Hawaii and Indiana are relatively similar on these variables, while Maine and New Mexico are relatively different.
###Code
dist_eucl = pdist(scaled_df,metric="euclidean")
distances = pd.DataFrame(squareform(dist_eucl), index=USArrests["State"].values, columns=USArrests["State"].values)
sample_distances = distances.loc[arrests_sample["State"], arrests_sample["State"]]
sample_distances
###Output
_____no_output_____
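###Markdown
The metric is easy to swap out; for instance, a minimal sketch of the same pairwise computation using Manhattan (cityblock) distance on the same scaled data:
###Code
#minimal sketch: same pairwise-distance computation with a different metric
dist_manh = pdist(scaled_df, metric="cityblock")
distances_manh = pd.DataFrame(squareform(dist_manh), index=USArrests["State"].values, columns=USArrests["State"].values)
distances_manh.loc[arrests_sample["State"], arrests_sample["State"]]
###Output
_____no_output_____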
###Markdown
For visualization, we can make a heatmap of the sample states' distances
###Code
plt.figure(figsize=(11,8.5))
sns.heatmap(sample_distances,cmap="mako")
plt.show()
###Output
_____no_output_____
###Markdown
We can likewise heatmap all the states.
###Code
import seaborn as sns
plt.figure(figsize=(11,8.5))
sns.heatmap(distances)
plt.show()
###Output
_____no_output_____
###Markdown
KmeansKmeans is a classical, workhorse clustering algorithm, and a common place to start. It assumes there are K centers and, starting from random guesses, algorithmically improves its guess about where the centers must be.
###Code
from sklearn.cluster import KMeans
#random_state parameter sets seed for random number generation
arrests_km = KMeans(n_clusters=3,n_init=25,random_state=123).fit(scaled_df)
arrests_km.cluster_centers_
###Output
_____no_output_____
###Markdown
We can read off where the 3 cluster centers are. (The value 3 is chosen arbitrarily; soon we'll see how to tell what number of clusters seems to work best.)
###Code
pd.DataFrame(arrests_km.cluster_centers_,columns=['Murder','Assault','UrbanPop','Rape'])
###Output
_____no_output_____
###Markdown
The `.labels_` attribute tells us which cluster each point was assigned to
###Code
scaled_df_cluster = scaled_df.copy()
scaled_df_cluster['Cluster'] = arrests_km.labels_
scaled_df_cluster.head()
###Output
_____no_output_____
###Markdown
The mean of the points in each cluster is the cluster center found by K-means
###Code
scaled_df_cluster.groupby('Cluster').mean()
###Output
_____no_output_____
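###Markdown
The fitted KMeans object can also assign new observations (scaled the same way) to the nearest learned center via `.predict`; a minimal sketch that simply reuses a few rows of the scaled data as stand-ins for new points:
###Code
#minimal sketch: assign already-scaled observations to the nearest cluster center
new_points = scaled_df.head(3)
arrests_km.predict(new_points)
###Output
_____no_output_____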
###Markdown
Silhouette Plots Silhouette plots give rich information on the quality of a clustering: for each point, the silhouette coefficient compares its average distance to points in its own cluster with its average distance to points in the nearest other cluster, and ranges from -1 to 1 (higher is better).
###Code
from sklearn.metrics import silhouette_samples, silhouette_score
import matplotlib.cm as cm
#modified code from http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html
def silplot(X, clusterer, pointlabels=None):
cluster_labels = clusterer.labels_
n_clusters = clusterer.n_clusters
# Create a subplot with 1 row and 2 columns
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.set_size_inches(11,8.5)
# The 1st subplot is the silhouette plot
# The silhouette coefficient can range from -1, 1 but in this example all
# lie within [-0.1, 1]
ax1.set_xlim([-0.1, 1])
# The (n_clusters+1)*10 is for inserting blank space between silhouette
# plots of individual clusters, to demarcate them clearly.
ax1.set_ylim([0, len(X) + (n_clusters + 1) * 10])
# The silhouette_score gives the average value for all the samples.
# This gives a perspective into the density and separation of the formed
# clusters
silhouette_avg = silhouette_score(X, cluster_labels)
print("For n_clusters = ", n_clusters,
", the average silhouette_score is ", silhouette_avg,".",sep="")
# Compute the silhouette scores for each sample
sample_silhouette_values = silhouette_samples(X, cluster_labels)
y_lower = 10
for i in range(0,n_clusters+1):
# Aggregate the silhouette scores for samples belonging to
# cluster i, and sort them
ith_cluster_silhouette_values = \
sample_silhouette_values[cluster_labels == i]
ith_cluster_silhouette_values.sort()
size_cluster_i = ith_cluster_silhouette_values.shape[0]
y_upper = y_lower + size_cluster_i
color = cm.nipy_spectral(float(i) / n_clusters)
ax1.fill_betweenx(np.arange(y_lower, y_upper),
0, ith_cluster_silhouette_values,
facecolor=color, edgecolor=color, alpha=0.7)
# Label the silhouette plots with their cluster numbers at the middle
ax1.text(-0.05, y_lower + 0.5 * size_cluster_i, str(i))
# Compute the new y_lower for next plot
y_lower = y_upper + 10 # 10 for the 0 samples
ax1.set_title("The silhouette plot for the various clusters.")
ax1.set_xlabel("The silhouette coefficient values")
ax1.set_ylabel("Cluster label")
# The vertical line for average silhouette score of all the values
ax1.axvline(x=silhouette_avg, color="red", linestyle="--")
ax1.set_yticks([]) # Clear the yaxis labels / ticks
ax1.set_xticks([-0.1, 0, 0.2, 0.4, 0.6, 0.8, 1])
# 2nd Plot showing the actual clusters formed
colors = cm.nipy_spectral(cluster_labels.astype(float) / n_clusters)
ax2.scatter(X[:, 0], X[:, 1], marker='.', s=200, lw=0, alpha=0.7,
c=colors, edgecolor='k')
xs = X[:, 0]
ys = X[:, 1]
if pointlabels is not None:
for i in range(len(xs)):
plt.text(xs[i],ys[i],pointlabels[i])
# Labeling the clusters
centers = clusterer.cluster_centers_
# Draw white circles at cluster centers
ax2.scatter(centers[:, 0], centers[:, 1], marker='o',
c="white", alpha=1, s=200, edgecolor='k')
for i, c in enumerate(centers):
ax2.scatter(c[0], c[1], marker='$%d$' % int(i), alpha=1,
s=50, edgecolor='k')
ax2.set_title("The visualization of the clustered data.")
ax2.set_xlabel("Feature space for the 1st feature")
ax2.set_ylabel("Feature space for the 2nd feature")
plt.suptitle(("Silhouette analysis for KMeans clustering on sample data "
"with n_clusters = %d" % n_clusters),
fontsize=14, fontweight='bold')
fitted_km = KMeans(n_clusters=4,n_init=25,random_state=123).fit(scaled_df)
silplot(scaled_df.values, fitted_km)
# Objects with negative silhouette
sil = silhouette_samples(scaled_df, fitted_km.labels_)
USArrests.loc[sil<=0,:]
###Output
_____no_output_____
###Markdown
Elbow plots The elbow method plots the within-cluster sum of squares (inertia) against the number of clusters $k$ and looks for the "elbow" where adding more clusters stops reducing inertia by much.
###Code
wss = []
for i in range(1,11):
fitx = KMeans(n_clusters=i, init='random', n_init=5, random_state=109).fit(scaled_df)
wss.append(fitx.inertia_)
plt.figure(figsize=(11,8.5))
plt.plot(range(1,11), wss, 'bx-')
plt.xlabel('Number of clusters $k$')
plt.ylabel('Inertia')
plt.title('The Elbow Method showing the optimal $k$')
plt.show()
###Output
_____no_output_____
###Markdown
Silhouette Score We can also compute the average silhouette score for a range of $k$ values and prefer the $k$ that maximizes it.
###Code
from sklearn.metrics import silhouette_score
scores = [0]
for i in range(2,11):
fitx = KMeans(n_clusters=i, init='random', n_init=5, random_state=109).fit(scaled_df)
score = silhouette_score(scaled_df, fitx.labels_)
scores.append(score)
plt.figure(figsize=(11,8.5))
plt.plot(range(1,11), np.array(scores), 'bx-')
plt.xlabel('Number of clusters $k$')
plt.ylabel('Average Silhouette')
plt.title('The Silhouette Method showing the optimal $k$')
plt.show()
!pip install setuptools-rust
!pip install gap-stat
from gap_statistic import OptimalK
###Output
_____no_output_____
###Markdown
Gap Statistic The gap statistic compares the observed within-cluster dispersion for each $k$ with what we would expect under a reference distribution with no cluster structure, and favors the $k$ with the largest gap.
###Code
from gap_statistic import OptimalK
from sklearn.datasets import make_blobs
gs_obj = OptimalK()
n_clusters = gs_obj(scaled_df.values, n_refs=50, cluster_array=np.arange(1, 15))
print('Optimal clusters: ', n_clusters)
gs_obj.gap_df.head()
gs_obj.plot_results()
###Output
_____no_output_____
###Markdown
Hierarchical Clustering K-means is a very 'hard' clustering: points belong to exactly one cluster, no matter what. A hierarchical clustering instead creates a nesting of clusters as existing clusters are merged or split. Dendrograms (literally: tree diagrams) can show the pattern of splits/merges.
###Code
import scipy.cluster.hierarchy as hac
from scipy.spatial.distance import pdist
plt.figure(figsize=(11,8.5))
dist_mat = pdist(scaled_df, metric="euclidean")
ward_data = hac.ward(dist_mat)
hac.dendrogram(ward_data, labels=USArrests["State"].values);
plt.show()
###Output
_____no_output_____
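###Markdown
If we want flat cluster assignments from the hierarchy, we can cut the dendrogram at a chosen number of clusters; a minimal sketch using scipy's `fcluster` (3 clusters picked purely for illustration):
###Code
from scipy.cluster.hierarchy import fcluster
#minimal sketch: cut the ward linkage into 3 flat clusters and peek at the assignments
flat_labels = fcluster(ward_data, t=3, criterion='maxclust')
pd.Series(flat_labels, index=USArrests["State"].values).head(10)
###Output
_____no_output_____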
###Markdown
DBSCAN DBSCAN is a more modern clustering approach that allows points to not be part of any cluster (such points are treated as noise), and determines the number of clusters by itself. First, let's look at our data
###Code
multishapes = pd.read_csv("multishapes.csv")
ms = multishapes[['x','y']]
msplot = ms.plot.scatter(x='x',y='y',c='Black',title="Multishapes data",figsize=(11,8.5))
msplot.set_xlabel("X")
msplot.set_ylabel("Y")
plt.show()
###Output
_____no_output_____
###Markdown
To the eye, there's a pretty clear structure to the data. However, K-means struggles to find a good clustering
###Code
shape_km = KMeans(n_clusters=5,n_init=25,random_state=123).fit(ms)
plt.figure(figsize=(10,10))
plt.scatter(ms['x'],ms['y'], c=shape_km.labels_);
plt.scatter(shape_km.cluster_centers_[:,0],shape_km.cluster_centers_[:,1], c='r', marker='h', s=100);
#todo: labels? different markers?
###Output
_____no_output_____
###Markdown
DBSCAN uses a handful of parameters, including the number of neighbors a point must have to be considered 'core' (`min_samples`) and the distance within which neighbors must fall (`epsilon`). Most reasonable values of min_samples yield the same results, but tuning epsilon is important. The function below implements the authors' suggestion for setting epsilon: look at the nearest-neighbor distances and find a level where they begin to grow rapidly.
###Code
from sklearn.neighbors import NearestNeighbors
def plot_epsilon(df, min_samples):
fitted_neigbors = NearestNeighbors(n_neighbors=min_samples).fit(df)
distances, indices = fitted_neigbors.kneighbors(df)
dist_to_nth_nearest_neighbor = distances[:,-1]
plt.plot(np.sort(dist_to_nth_nearest_neighbor))
plt.xlabel("Index\n(sorted by increasing distances)")
plt.ylabel("{}-NN Distance (epsilon)".format(min_samples-1))
plt.tick_params(right=True, labelright=True)
plot_epsilon(ms, 3)
###Output
_____no_output_____
###Markdown
The major increase in slope occurs around eps=0.15 when min_samples is set to 3.
###Code
from sklearn.cluster import DBSCAN
fitted_dbscan = DBSCAN(eps=0.15).fit(ms)
plt.figure(figsize=(10,10))
plt.scatter(ms['x'],ms['y'], c=fitted_dbscan.labels_);
###Output
_____no_output_____
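###Markdown
For comparison, here is a minimal sketch of the same fit with a slightly smaller epsilon (0.12), plotted the same way:
###Code
#minimal sketch: DBSCAN on the same multishapes data with a smaller epsilon
fitted_dbscan_smaller = DBSCAN(eps=0.12).fit(ms)
plt.figure(figsize=(10,10))
plt.scatter(ms['x'], ms['y'], c=fitted_dbscan_smaller.labels_);
###Output
_____no_output_____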
###Markdown
We see good results with the suggested epsilon; a lower epsilon (0.12) won't quite merge all the clusters. DBSCAN on crime data Returning to the crime data, let's tune epsilon and see what clusters are returned
###Code
plot_epsilon(scaled_df, 5)
###Output
_____no_output_____
###Markdown
The optimal value is either around 1.67 or 1.4
###Code
fitted_dbscan = DBSCAN(eps=1.4).fit(scaled_df)
fitted_dbscan.labels_
###Output
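_____no_output_____
###Markdown
To summarize the assignment, we can count how many states land in each cluster; scikit-learn's DBSCAN labels unclustered (noise) points with -1. A minimal sketch:
###Code
#minimal sketch: cluster sizes, with label -1 counting any noise points
pd.Series(fitted_dbscan.labels_).value_counts()
###Output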
_____no_output_____ |
notebooks/test_active_learning_template_least_confidence.ipynb | ###Markdown
###Code
#!git clone --single-branch --branch pytorch_lab_structure https://github.com/ravindrabharathi/fsdl-active-learning.git # add your own username and pw/pat here
!git clone https://github.com/ravindrabharathi/fsdl-active-learning2.git
%cd fsdl-active-learning2
from google.colab import drive
drive.mount('/gdrive')
!mkdir './data/processed/'
!cp -R '/gdrive/MyDrive/Active-learning/droughtwatch/' './data/processed/'
# alternative way: if you cloned the repository to your GDrive account, you can mount it here
#from google.colab import drive
#drive.mount('/content/drive', force_remount=True)
#%cd /content/drive/MyDrive/fsdl-active-learning
!pip3 install PyYAML==5.3.1
!pip3 install boltons wandb pytorch_lightning==1.2.8
!pip3 install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 torchtext==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html # general lab / pytorch installs
!pip3 install modAL tensorflow # active learning project
!pip install hdbscan
%env PYTHONPATH=.:$PYTHONPATH
#!wandb init # initialize w&b
#import wandb
#wandb.login()
'''
run=wandb.init(name='fsdl_active_learning',
project='Active_learning_Wandb_Drought_Watch_random_sampling',
notes='Random Sampling Drought Watch dataset with all Bands, pretrained ResNet50',
tags=['DroughtWatch', 'Active-Learning','ResNet','PyTorch','Random Sampling'])
'''
#!python training/run_experiment.py --wandb --gpus=1 --max_epochs=1 --num_workers=4 --data_class=DroughtWatch --model_class=ResnetClassifier --batch_size=32 --sampling_method="random"
!python training/run_experiment.py --gpus=1 --max_epochs=10 --num_workers=4 --batch_size=128 --sampling_method="least_confidence"
###Output
2021-05-05 06:58:59.687266: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
Processed DroughtWatch dataset on disk does not have correct size, initiating reprocessing
Downloading raw dataset from https://storage.googleapis.com/wandb_datasets/dw_train_86K_val_10K.zip to /content/fsdl-active-learning2/data/downloaded/droughtwatch/dw_data.zip...
2.00GB [00:47, 45.0MB/s]
Computing SHA-256...
tcmalloc: large alloc 2150301696 bytes == 0x558484788000 @ 0x7f5215e5b1e7 0x55847b764f48 0x55847b72f9c7 0x55847b8ae655 0x55847b848828 0x55847b733292 0x55847b8116ae 0x55847b732ee9 0x55847b82499d 0x55847b7a6fe9 0x55847b73469a 0x55847b7a6e50 0x55847b73469a 0x55847b7a2a45 0x55847b73469a 0x55847b7a2a45 0x55847b73469a 0x55847b7a2a45 0x55847b7a1e0d 0x55847b734e11 0x55847b775d39 0x55847b772c84 0x55847b825178 0x55847b735231 0x55847b7a41e6 0x55847b7a1b0e 0x55847b734e11 0x55847b778029 0x55847b7337f2 0x55847b7a6d75 0x55847b73469a
Unzipping DroughtWatch file...
Loading train/validation datasets as TF tensor
2021-05-05 07:00:35.780899: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-05-05 07:00:35.791807: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2021-05-05 07:00:35.848609: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-05-05 07:00:35.849285: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:00:04.0 name: Tesla P100-PCIE-16GB computeCapability: 6.0
coreClock: 1.3285GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-05-05 07:00:35.849333: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-05-05 07:00:35.928049: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2021-05-05 07:00:35.928194: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2021-05-05 07:00:36.089451: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2021-05-05 07:00:36.109665: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2021-05-05 07:00:36.537043: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2021-05-05 07:00:36.552923: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2021-05-05 07:00:36.557994: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2021-05-05 07:00:36.558138: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-05-05 07:00:36.558830: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-05-05 07:00:36.563952: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0
2021-05-05 07:00:36.579322: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-05-05 07:00:36.579459: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-05-05 07:00:36.580108: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:00:04.0 name: Tesla P100-PCIE-16GB computeCapability: 6.0
coreClock: 1.3285GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-05-05 07:00:36.580155: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-05-05 07:00:36.580198: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2021-05-05 07:00:36.580214: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2021-05-05 07:00:36.580227: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2021-05-05 07:00:36.580242: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2021-05-05 07:00:36.580259: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2021-05-05 07:00:36.580274: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2021-05-05 07:00:36.580289: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2021-05-05 07:00:36.580351: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-05-05 07:00:36.581002: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-05-05 07:00:36.581507: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0
2021-05-05 07:00:36.590088: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-05-05 07:00:45.989007: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-05-05 07:00:45.989067: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267] 0
2021-05-05 07:00:45.989082: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1280] 0: N
2021-05-05 07:00:45.995384: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-05-05 07:00:45.996075: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-05-05 07:00:45.996625: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-05-05 07:00:45.997371: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
2021-05-05 07:00:45.997462: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1406] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14413 MB memory) -> physical GPU (device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:00:04.0, compute capability: 6.0)
2021-05-05 07:00:46.575746: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
2021-05-05 07:00:46.580765: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2199995000 Hz
2021-05-05 07:00:56.575621: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:177] Filling up shuffle buffer (this may take a while): 27952 of 90000
2021-05-05 07:01:06.575777: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:177] Filling up shuffle buffer (this may take a while): 55887 of 90000
2021-05-05 07:01:16.575834: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:177] Filling up shuffle buffer (this may take a while): 84073 of 90000
2021-05-05 07:01:17.382304: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:230] Shuffle buffer filled.
2021-05-05 07:01:17.395925: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 929500000 exceeds 10% of free system memory.
2021-05-05 07:01:21.728033: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 500907550 exceeds 10% of free system memory.
Converting train/valildation TF tensors to Numpy
Saving train/validation to HDF5 in compressed format...
Saving all remaining labeled images to separate HDF5 pool...
2021-05-05 07:01:39.499629: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 929500000 exceeds 10% of free system memory.
2021-05-05 07:02:12.847924: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 929500000 exceeds 10% of free system memory.
2021-05-05 07:02:46.141313: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 929500000 exceeds 10% of free system memory.
Cleaning up...
INIT SETUP DATA CALLED
-------------
tcmalloc: large alloc 3082084352 bytes == 0x558684aae000 @ 0x7f5215e5b1e7 0x7f52139db46e 0x7f5213a2bc7b 0x7f52139dece8 0x55847b772c25 0x55847b7338e9 0x55847b7a7ade 0x55847b673d14 0x7f51509100a4 0x55847b734d1d 0x55847b7332ff 0x55847b777630 0x55847b77755c 0x55847b81ae59 0x55847b7a2fad 0x55847b73469a 0x55847b7a2c9e 0x55847b7a1e0d 0x55847b734e11 0x55847b775d39 0x55847b772c84 0x55847b825178 0x55847b735231 0x55847b7a41e6 0x55847b7a1b0e 0x55847b734e11 0x55847b778029 0x55847b7337f2 0x55847b7a6d75 0x55847b73469a 0x55847b7a2a45
tcmalloc: large alloc 4011589632 bytes == 0x55873c5fc000 @ 0x7f5215e5b1e7 0x7f52139db46e 0x7f5213a2bc7b 0x7f5213a2bd18 0x7f5213ad3010 0x7f5213ad373c 0x7f5213ad385d 0x55847b7352f8 0x7f5213a18ef7 0x55847b732fd7 0x55847b732de0 0x55847b7a6ac2 0x55847b7a1b0e 0x55847b73477a 0x55847b7a6e50 0x55847b73469a 0x55847b7a2c9e 0x55847b7a1e0d 0x55847b734e11 0x55847b775d39 0x55847b772c84 0x55847b825178 0x55847b735231 0x55847b7a41e6 0x55847b7a1b0e 0x55847b734e11 0x55847b778029 0x55847b7337f2 0x55847b7a6d75 0x55847b73469a 0x55847b7a2a45
tcmalloc: large alloc 3082084352 bytes == 0x55882b7bc000 @ 0x7f5215e5b1e7 0x7f52139db46e 0x7f5213a2bc7b 0x7f5213a2bd18 0x7f5213abe3a9 0x7f5213ac0ab5 0x55847b81ae59 0x55847b7a2fad 0x55847b73469a 0x55847b7a2c9e 0x55847b7a1e0d 0x55847b734e11 0x55847b775d39 0x55847b772c84 0x55847b825178 0x55847b735231 0x55847b7a41e6 0x55847b7a1b0e 0x55847b734e11 0x55847b778029 0x55847b7337f2 0x55847b7a6d75 0x55847b73469a 0x55847b7a2a45 0x55847b7a1b0e 0x55847b7a1813 0x55847b86b592 0x55847b86b90d 0x55847b86b7b6 0x55847b843103 0x55847b842dac
Scenario:
- Multi-class classification
- All channels
Initial training set size: 20000
Initial unlabelled pool size: 66317
Validation set size: 10778
Downloading: "https://download.pytorch.org/models/resnet50-19c8e357.pth" to /root/.cache/torch/hub/checkpoints/resnet50-19c8e357.pth
100% 97.8M/97.8M [00:00<00:00, 319MB/s]
Adapting first convolutional layer to additional channels
wandb: (1) Create a W&B account
wandb: (2) Use an existing W&B account
wandb: (3) Don't visualize my results
wandb: Enter your choice: 2
wandb: You chose 'Use an existing W&B account'
wandb: You can find your API key in your browser here: https://wandb.ai/authorize
wandb: Paste an API key from your profile and hit enter:
wandb: Appending key for api.wandb.ai to your netrc file: /root/.netrc
2021-05-05 07:04:53.123928: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
wandb: Tracking run with wandb version 0.10.29
wandb: Syncing run fsdl-active-learning_least_confidence
wandb: ⭐️ View project at https://wandb.ai/ravindra/fsdl-active-learning
wandb: 🚀 View run at https://wandb.ai/ravindra/fsdl-active-learning/runs/yi7zqgmh
wandb: Run data is saved locally in /content/fsdl-active-learning2/wandb/run-20210505_070451-yi7zqgmh
wandb: Run `wandb offline` to turn off syncing.
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 0: 66%|█████▎ | 160/242 [01:37<00:49, 1.64it/s, loss=0.941, v_num=qgmh]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 0: 74%|█████▉ | 180/242 [01:42<00:35, 1.76it/s, loss=0.941, v_num=qgmh]
Epoch 0: 83%|██████▌ | 200/242 [01:45<00:22, 1.89it/s, loss=0.941, v_num=qgmh]
Epoch 0: 91%|███████▎| 220/242 [01:49<00:10, 2.00it/s, loss=0.941, v_num=qgmh]
Epoch 0: 99%|███████▉| 240/242 [01:53<00:00, 2.11it/s, loss=0.941, v_num=qgmh]
Epoch 0: 100%|████████| 242/242 [01:54<00:00, 2.11it/s, loss=0.905, v_num=qgmh]
Epoch 1: 66%|█████▎ | 160/242 [01:37<00:50, 1.64it/s, loss=0.913, v_num=qgmh]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 1: 74%|█████▉ | 180/242 [01:42<00:35, 1.76it/s, loss=0.913, v_num=qgmh]
Epoch 1: 83%|██████▌ | 200/242 [01:46<00:22, 1.88it/s, loss=0.913, v_num=qgmh]
Epoch 1: 91%|███████▎| 220/242 [01:50<00:11, 2.00it/s, loss=0.913, v_num=qgmh]
Epoch 1: 99%|███████▉| 240/242 [01:54<00:00, 2.10it/s, loss=0.913, v_num=qgmh]
Epoch 1: 100%|████████| 242/242 [01:55<00:00, 2.10it/s, loss=0.864, v_num=qgmh]
Epoch 2: 66%|█████▎ | 160/242 [01:37<00:50, 1.64it/s, loss=0.842, v_num=qgmh]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 2: 74%|█████▉ | 180/242 [01:42<00:35, 1.76it/s, loss=0.842, v_num=qgmh]
Epoch 2: 83%|██████▌ | 200/242 [01:46<00:22, 1.88it/s, loss=0.842, v_num=qgmh]
Epoch 2: 91%|███████▎| 220/242 [01:50<00:11, 2.00it/s, loss=0.842, v_num=qgmh]
Epoch 2: 99%|███████▉| 240/242 [01:54<00:00, 2.10it/s, loss=0.842, v_num=qgmh]
Epoch 2: 100%|████████| 242/242 [01:55<00:00, 2.10it/s, loss=0.844, v_num=qgmh]
Epoch 3: 66%|█████▎ | 160/242 [01:37<00:50, 1.64it/s, loss=0.834, v_num=qgmh]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 3: 74%|█████▉ | 180/242 [01:42<00:35, 1.76it/s, loss=0.834, v_num=qgmh]
Epoch 3: 83%|██████▌ | 200/242 [01:46<00:22, 1.89it/s, loss=0.834, v_num=qgmh]
Epoch 3: 91%|███████▎| 220/242 [01:49<00:10, 2.00it/s, loss=0.834, v_num=qgmh]
Epoch 3: 99%|███████▉| 240/242 [01:53<00:00, 2.11it/s, loss=0.834, v_num=qgmh]
Epoch 3: 100%|████████| 242/242 [01:54<00:00, 2.11it/s, loss=0.837, v_num=qgmh]
Epoch 4: 66%|█████▎ | 160/242 [01:37<00:49, 1.64it/s, loss=0.782, v_num=qgmh]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 4: 74%|█████▉ | 180/242 [01:41<00:35, 1.76it/s, loss=0.782, v_num=qgmh]
Epoch 4: 83%|██████▌ | 200/242 [01:45<00:22, 1.89it/s, loss=0.782, v_num=qgmh]
Epoch 4: 91%|███████▎| 220/242 [01:49<00:10, 2.00it/s, loss=0.782, v_num=qgmh]
Epoch 4: 99%|███████▉| 240/242 [01:53<00:00, 2.11it/s, loss=0.782, v_num=qgmh]
Epoch 4: 100%|████████| 242/242 [01:54<00:00, 2.11it/s, loss=0.785, v_num=qgmh]
Epoch 5: 66%|█████▎ | 160/242 [01:37<00:50, 1.64it/s, loss=0.795, v_num=qgmh]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 5: 74%|█████▉ | 180/242 [01:42<00:35, 1.76it/s, loss=0.795, v_num=qgmh]
Epoch 5: 83%|██████▌ | 200/242 [01:46<00:22, 1.89it/s, loss=0.795, v_num=qgmh]
Epoch 5: 91%|███████▎| 220/242 [01:50<00:11, 2.00it/s, loss=0.795, v_num=qgmh]
Epoch 5: 99%|███████▉| 240/242 [01:53<00:00, 2.11it/s, loss=0.795, v_num=qgmh]
Epoch 5: 100%|█████████| 242/242 [01:54<00:00, 2.10it/s, loss=0.76, v_num=qgmh]
Epoch 6: 66%|█████▎ | 160/242 [01:37<00:49, 1.64it/s, loss=0.753, v_num=qgmh]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 6: 74%|█████▉ | 180/242 [01:42<00:35, 1.76it/s, loss=0.753, v_num=qgmh]
Epoch 6: 83%|██████▌ | 200/242 [01:45<00:22, 1.89it/s, loss=0.753, v_num=qgmh]
Epoch 6: 91%|███████▎| 220/242 [01:49<00:10, 2.00it/s, loss=0.753, v_num=qgmh]
Epoch 6: 99%|███████▉| 240/242 [01:53<00:00, 2.11it/s, loss=0.753, v_num=qgmh]
Epoch 6: 100%|████████| 242/242 [01:54<00:00, 2.11it/s, loss=0.736, v_num=qgmh]
Epoch 7: 66%|█████▎ | 160/242 [01:37<00:50, 1.64it/s, loss=0.718, v_num=qgmh]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 7: 74%|█████▉ | 180/242 [01:42<00:35, 1.76it/s, loss=0.718, v_num=qgmh]
Epoch 7: 83%|██████▌ | 200/242 [01:46<00:22, 1.88it/s, loss=0.718, v_num=qgmh]
Epoch 7: 91%|███████▎| 220/242 [01:50<00:11, 2.00it/s, loss=0.718, v_num=qgmh]
Epoch 7: 99%|███████▉| 240/242 [01:54<00:00, 2.10it/s, loss=0.718, v_num=qgmh]
Epoch 7: 100%|████████| 242/242 [01:55<00:00, 2.10it/s, loss=0.691, v_num=qgmh]
Epoch 8: 66%|█████▎ | 160/242 [01:37<00:50, 1.64it/s, loss=0.666, v_num=qgmh]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 8: 74%|█████▉ | 180/242 [01:42<00:35, 1.76it/s, loss=0.666, v_num=qgmh]
Epoch 8: 83%|██████▌ | 200/242 [01:46<00:22, 1.89it/s, loss=0.666, v_num=qgmh]
Epoch 8: 91%|███████▎| 220/242 [01:49<00:10, 2.00it/s, loss=0.666, v_num=qgmh]
Epoch 8: 99%|███████▉| 240/242 [01:53<00:00, 2.11it/s, loss=0.666, v_num=qgmh]
Epoch 8: 100%|████████| 242/242 [01:54<00:00, 2.11it/s, loss=0.706, v_num=qgmh]
Epoch 9: 66%|█████▎ | 160/242 [01:37<00:49, 1.64it/s, loss=0.644, v_num=qgmh]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 74%|█████▉ | 180/242 [01:41<00:35, 1.77it/s, loss=0.644, v_num=qgmh]
Epoch 9: 83%|██████▌ | 200/242 [01:45<00:22, 1.89it/s, loss=0.644, v_num=qgmh]
Epoch 9: 91%|███████▎| 220/242 [01:49<00:10, 2.00it/s, loss=0.644, v_num=qgmh]
Epoch 9: 99%|███████▉| 240/242 [01:53<00:00, 2.11it/s, loss=0.644, v_num=qgmh]
Epoch 9: 100%|████████| 242/242 [01:54<00:00, 2.11it/s, loss=0.661, v_num=qgmh]
Epoch 9: 100%|████████| 242/242 [01:54<00:00, 2.11it/s, loss=0.661, v_num=qgmh]
Total Unlabelled Pool Size 66317
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|████████████████████████████████| 519/519 [01:42<00:00, 5.04it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.44352126121520996, 'train_size': 20000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[55153 5856 44724 ... 48623 20058 40793]
-----------------
x-train_new size 2000
len of sample ids 2000
tcmalloc: large alloc 2989137920 bytes == 0x558684aae000 @ 0x7f5215e5b1e7 0x7f52139db46e 0x7f5213a2bc7b 0x7f5213a2bd18 0x7f5213abe3a9 0x7f5213ac0ab5 0x55847b81ae59 0x55847b7a2fad 0x55847b73469a 0x55847b7a2c9e 0x55847b73469a 0x55847b7a2a45 0x55847b7a1b0e 0x55847b7a1813 0x55847b86b592 0x55847b86b90d 0x55847b86b7b6 0x55847b843103 0x55847b842dac 0x7f5214c45bf7 0x55847b842c8a
New train set size 22000
New unlabelled pool size 64317
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 9: 70%|██████▎ | 180/257 [01:47<00:45, 1.68it/s, loss=0.59, v_num=qgmh]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 78%|███████ | 200/257 [01:51<00:31, 1.79it/s, loss=0.59, v_num=qgmh]
Epoch 9: 86%|███████▋ | 220/257 [01:55<00:19, 1.90it/s, loss=0.59, v_num=qgmh]
Epoch 9: 93%|████████▍| 240/257 [01:59<00:08, 2.00it/s, loss=0.59, v_num=qgmh]
Validating: 94%|█████████████████████████████▏ | 80/85 [00:16<00:01, 4.85it/s][A
Epoch 9: 100%|████████| 257/257 [02:04<00:00, 2.06it/s, loss=0.577, v_num=qgmh]
[ALOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Total Unlabelled Pool Size 64317
Query Sample size 2000
Resetting Predictions
Testing: 100%|████████████████████████████████| 503/503 [01:39<00:00, 5.03it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.6516162157058716, 'train_size': 22000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[ 5150 31285 4499 ... 14510 51622 26324]
-----------------
x-train_new size 2000
len of sample ids 2000
tcmalloc: large alloc 2896183296 bytes == 0x558773c6e000 @ 0x7f5215e5b1e7 0x7f52139db46e 0x7f5213a2bc7b 0x7f5213a2bd18 0x7f5213abe3a9 0x7f5213ac0ab5 0x55847b81ae59 0x55847b7a2fad 0x55847b73469a 0x55847b7a2c9e 0x55847b73469a 0x55847b7a2a45 0x55847b7a1b0e 0x55847b7a1813 0x55847b86b592 0x55847b86b90d 0x55847b86b7b6 0x55847b843103 0x55847b842dac 0x7f5214c45bf7 0x55847b842c8a
New train set size 24000
New unlabelled pool size 62317
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 9: 73%|█████▊ | 200/273 [01:57<00:42, 1.71it/s, loss=0.505, v_num=qgmh]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 81%|██████▍ | 220/273 [02:01<00:29, 1.81it/s, loss=0.505, v_num=qgmh]
Epoch 9: 88%|███████ | 240/273 [02:05<00:17, 1.91it/s, loss=0.505, v_num=qgmh]
Epoch 9: 95%|███████▌| 260/273 [02:09<00:06, 2.01it/s, loss=0.505, v_num=qgmh]
Validating: 94%|█████████████████████████████▏ | 80/85 [00:16<00:01, 4.82it/s][A
Epoch 9: 100%|████████| 273/273 [02:14<00:00, 2.03it/s, loss=0.517, v_num=qgmh]
[ATotal Unlabelled Pool Size 62317
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|████████████████████████████████| 487/487 [01:36<00:00, 5.03it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.4100646674633026, 'train_size': 24000.001953125}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[32609 62016 23582 ... 47854 29146 48389]
-----------------
x-train_new size 2000
len of sample ids 2000
tcmalloc: large alloc 2803236864 bytes == 0x558684aae000 @ 0x7f5215e5b1e7 0x7f52139db46e 0x7f5213a2bc7b 0x7f5213a2bd18 0x7f5213abe3a9 0x7f5213ac0ab5 0x55847b81ae59 0x55847b7a2fad 0x55847b73469a 0x55847b7a2c9e 0x55847b73469a 0x55847b7a2a45 0x55847b7a1b0e 0x55847b7a1813 0x55847b86b592 0x55847b86b90d 0x55847b86b7b6 0x55847b843103 0x55847b842dac 0x7f5214c45bf7 0x55847b842c8a
New train set size 26000
New unlabelled pool size 60317
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 9: 76%|██████ | 220/289 [02:06<00:39, 1.74it/s, loss=0.498, v_num=qgmh]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 83%|██████▋ | 240/289 [02:11<00:26, 1.83it/s, loss=0.498, v_num=qgmh]
Epoch 9: 90%|███████▏| 260/289 [02:15<00:15, 1.92it/s, loss=0.498, v_num=qgmh]
Epoch 9: 97%|███████▊| 280/289 [02:19<00:04, 2.01it/s, loss=0.498, v_num=qgmh]
Validating: 94%|█████████████████████████████▏ | 80/85 [00:16<00:01, 4.85it/s][A
Epoch 9: 100%|████████| 289/289 [02:24<00:00, 2.01it/s, loss=0.506, v_num=qgmh]
[ALOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Total Unlabelled Pool Size 60317
Query Sample size 2000
Resetting Predictions
Testing: 100%|████████████████████████████████| 472/472 [01:33<00:00, 5.04it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.34772950410842896, 'train_size': 25999.998046875}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[27117 2532 16500 ... 17849 41000 35155]
-----------------
x-train_new size 2000
len of sample ids 2000
tcmalloc: large alloc 2710290432 bytes == 0x558773c6e000 @ 0x7f5215e5b1e7 0x7f52139db46e 0x7f5213a2bc7b 0x7f5213a2bd18 0x7f5213abe3a9 0x7f5213ac0ab5 0x55847b81ae59 0x55847b7a2fad 0x55847b73469a 0x55847b7a2c9e 0x55847b73469a 0x55847b7a2a45 0x55847b7a1b0e 0x55847b7a1813 0x55847b86b592 0x55847b86b90d 0x55847b86b7b6 0x55847b843103 0x55847b842dac 0x7f5214c45bf7 0x55847b842c8a
New train set size 28000
New unlabelled pool size 58317
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 9: 72%|█████▊ | 220/304 [02:16<00:52, 1.61it/s, loss=0.482, v_num=qgmh]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 79%|██████▎ | 240/304 [02:21<00:37, 1.70it/s, loss=0.482, v_num=qgmh]
Epoch 9: 86%|██████▊ | 260/304 [02:25<00:24, 1.79it/s, loss=0.482, v_num=qgmh]
Epoch 9: 92%|███████▎| 280/304 [02:29<00:12, 1.88it/s, loss=0.482, v_num=qgmh]
Epoch 9: 99%|███████▉| 300/304 [02:32<00:02, 1.96it/s, loss=0.482, v_num=qgmh]
Epoch 9: 100%|████████| 304/304 [02:33<00:00, 1.97it/s, loss=0.503, v_num=qgmh]
[ATotal Unlabelled Pool Size 58317
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|████████████████████████████████| 456/456 [01:30<00:00, 5.04it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.67198246717453, 'train_size': 28000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[ 4534 1138 12131 ... 33492 10542 5170]
-----------------
x-train_new size 2000
len of sample ids 2000
New train set size 30000
New unlabelled pool size 56317
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 9: 75%|██████ | 240/320 [02:26<00:48, 1.64it/s, loss=0.434, v_num=qgmh]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 81%|██████▌ | 260/320 [02:30<00:34, 1.73it/s, loss=0.434, v_num=qgmh]
Epoch 9: 88%|███████ | 280/320 [02:34<00:22, 1.81it/s, loss=0.434, v_num=qgmh]
Epoch 9: 94%|███████▌| 300/320 [02:38<00:10, 1.89it/s, loss=0.434, v_num=qgmh]
Epoch 9: 100%|████████| 320/320 [02:42<00:00, 1.97it/s, loss=0.434, v_num=qgmh]
Epoch 9: 100%|████████| 320/320 [02:43<00:00, 1.96it/s, loss=0.402, v_num=qgmh]
[ATotal Unlabelled Pool Size 56317
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|████████████████████████████████| 440/440 [01:27<00:00, 5.02it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.6371610760688782, 'train_size': 30000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[22539 39119 6053 ... 44667 38811 40102]
-----------------
x-train_new size 2000
len of sample ids 2000
New train set size 32000
New unlabelled pool size 54317
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 9: 78%|██████▏ | 260/335 [02:35<00:44, 1.67it/s, loss=0.381, v_num=qgmh]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 84%|██████▋ | 280/335 [02:40<00:31, 1.75it/s, loss=0.381, v_num=qgmh]
Epoch 9: 90%|███████▏| 300/335 [02:44<00:19, 1.83it/s, loss=0.381, v_num=qgmh]
Epoch 9: 96%|███████▋| 320/335 [02:48<00:07, 1.90it/s, loss=0.381, v_num=qgmh]
Validating: 94%|█████████████████████████████▏ | 80/85 [00:16<00:01, 4.87it/s][A
Epoch 9: 100%|████████| 335/335 [02:53<00:00, 1.94it/s, loss=0.376, v_num=qgmh]
[ALOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Total Unlabelled Pool Size 54317
Query Sample size 2000
Resetting Predictions
Testing: 100%|████████████████████████████████| 425/425 [01:24<00:00, 5.04it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.63285893201828, 'train_size': 32000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[ 2898 29994 28614 ... 47842 44631 35914]
-----------------
x-train_new size 2000
len of sample ids 2000
New train set size 34000
New unlabelled pool size 52317
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 9: 80%|██████▍ | 280/351 [02:45<00:41, 1.69it/s, loss=0.361, v_num=qgmh]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 85%|██████▊ | 300/351 [02:49<00:28, 1.77it/s, loss=0.361, v_num=qgmh]
Epoch 9: 91%|███████▎| 320/351 [02:53<00:16, 1.84it/s, loss=0.361, v_num=qgmh]
Epoch 9: 97%|███████▋| 340/351 [02:57<00:05, 1.91it/s, loss=0.361, v_num=qgmh]
Validating: 94%|█████████████████████████████▏ | 80/85 [00:16<00:01, 4.85it/s][A
Epoch 9: 100%|████████| 351/351 [03:02<00:00, 1.92it/s, loss=0.385, v_num=qgmh]
[ATotal Unlabelled Pool Size 52317
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|████████████████████████████████| 409/409 [01:21<00:00, 5.03it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.5813788771629333, 'train_size': 34000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[49572 48120 9952 ... 4526 31523 48684]
-----------------
x-train_new size 2000
len of sample ids 2000
New train set size 36000
New unlabelled pool size 50317
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 9: 82%|██████▌ | 300/367 [02:54<00:39, 1.72it/s, loss=0.384, v_num=qgmh]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 87%|██████▉ | 320/367 [02:59<00:26, 1.78it/s, loss=0.384, v_num=qgmh]
Epoch 9: 93%|███████▍| 340/367 [03:03<00:14, 1.85it/s, loss=0.384, v_num=qgmh]
Epoch 9: 98%|███████▊| 360/367 [03:07<00:03, 1.92it/s, loss=0.384, v_num=qgmh]
Validating: 94%|█████████████████████████████▏ | 80/85 [00:16<00:01, 4.86it/s][A
Epoch 9: 100%|████████| 367/367 [03:12<00:00, 1.91it/s, loss=0.402, v_num=qgmh]
[ATotal Unlabelled Pool Size 50317
Query Sample size 2000
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Resetting Predictions
Testing: 100%|████████████████████████████████| 394/394 [01:18<00:00, 5.03it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.5972732901573181, 'train_size': 36000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[33004 35164 19561 ... 48091 35591 8369]
-----------------
x-train_new size 2000
len of sample ids 2000
New train set size 38000
New unlabelled pool size 48317
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Validation sanity check: 0%| | 0/2 [00:00<?, ?it/s]Traceback (most recent call last):
File "/usr/lib/python3.7/multiprocessing/queues.py", line 242, in _feed
send_bytes(obj)
File "/usr/lib/python3.7/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/usr/lib/python3.7/multiprocessing/connection.py", line 404, in _send_bytes
self._send(header + buf)
File "/usr/lib/python3.7/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
Epoch 9: 79%|██████▎ | 300/382 [03:04<00:50, 1.62it/s, loss=0.314, v_num=qgmh]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 84%|██████▋ | 320/382 [03:09<00:36, 1.69it/s, loss=0.314, v_num=qgmh]
Epoch 9: 89%|███████ | 340/382 [03:13<00:23, 1.76it/s, loss=0.314, v_num=qgmh]
Epoch 9: 94%|███████▌| 360/382 [03:17<00:12, 1.83it/s, loss=0.314, v_num=qgmh]
Epoch 9: 99%|███████▉| 380/382 [03:20<00:01, 1.89it/s, loss=0.314, v_num=qgmh]
Epoch 9: 100%|████████| 382/382 [03:22<00:00, 1.89it/s, loss=0.369, v_num=qgmh]
[ATotal Unlabelled Pool Size 48317
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|████████████████████████████████| 378/378 [01:15<00:00, 5.02it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.512097179889679, 'train_size': 38000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[40737 39070 8211 ... 36428 2021 24403]
-----------------
x-train_new size 2000
len of sample ids 2000
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
New train set size 40000
New unlabelled pool size 46317
Epoch 9: 80%|██████▍ | 320/398 [03:14<00:47, 1.65it/s, loss=0.309, v_num=qgmh]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 85%|██████▊ | 340/398 [03:18<00:33, 1.71it/s, loss=0.309, v_num=qgmh]
Epoch 9: 90%|███████▏| 360/398 [03:22<00:21, 1.78it/s, loss=0.309, v_num=qgmh]
Epoch 9: 95%|███████▋| 380/398 [03:26<00:09, 1.84it/s, loss=0.309, v_num=qgmh]
Validating: 94%|█████████████████████████████▏ | 80/85 [00:16<00:01, 4.87it/s][A
Epoch 9: 100%|████████| 398/398 [03:31<00:00, 1.88it/s, loss=0.353, v_num=qgmh]
[ALOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Total Unlabelled Pool Size 46317
Query Sample size 2000
Resetting Predictions
Testing: 100%|████████████████████████████████| 362/362 [01:11<00:00, 5.03it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.5777144432067871, 'train_size': 40000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[12695 45665 29523 ... 16475 38832 31728]
-----------------
x-train_new size 2000
len of sample ids 2000
New train set size 42000
New unlabelled pool size 44317
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 9: 82%|██████▌ | 340/414 [03:24<00:44, 1.66it/s, loss=0.312, v_num=qgmh]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 87%|██████▉ | 360/414 [03:28<00:31, 1.72it/s, loss=0.312, v_num=qgmh]
Epoch 9: 92%|███████▎| 380/414 [03:32<00:19, 1.79it/s, loss=0.312, v_num=qgmh]
Epoch 9: 97%|███████▋| 400/414 [03:36<00:07, 1.85it/s, loss=0.312, v_num=qgmh]
Validating: 94%|█████████████████████████████▏ | 80/85 [00:16<00:01, 4.84it/s][A
Epoch 9: 100%|████████| 414/414 [03:41<00:00, 1.87it/s, loss=0.365, v_num=qgmh]
[ALOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Total Unlabelled Pool Size 44317
Query Sample size 2000
Resetting Predictions
Testing: 100%|████████████████████████████████| 347/347 [01:09<00:00, 5.01it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.5899993181228638, 'train_size': 42000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[23005 31809 38612 ... 40825 39922 33505]
-----------------
x-train_new size 2000
len of sample ids 2000
New train set size 44000
New unlabelled pool size 42317
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 9: 84%|██████▋ | 360/429 [03:33<00:40, 1.69it/s, loss=0.301, v_num=qgmh]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 89%|███████ | 380/429 [03:38<00:28, 1.74it/s, loss=0.301, v_num=qgmh]
Epoch 9: 93%|███████▍| 400/429 [03:42<00:16, 1.80it/s, loss=0.301, v_num=qgmh]
Epoch 9: 98%|███████▊| 420/429 [03:45<00:04, 1.86it/s, loss=0.301, v_num=qgmh]
Validating: 94%|█████████████████████████████▏ | 80/85 [00:16<00:01, 4.86it/s][A
Epoch 9: 100%|█████████| 429/429 [03:50<00:00, 1.86it/s, loss=0.31, v_num=qgmh]
[ATotal Unlabelled Pool Size 42317
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|████████████████████████████████| 331/331 [01:05<00:00, 5.02it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.6124252676963806, 'train_size': 44000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[15878 36263 32241 ... 25267 19100 42294]
-----------------
x-train_new size 2000
len of sample ids 2000
New train set size 46000
New unlabelled pool size 40317
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 9: 81%|██████▍ | 360/445 [03:43<00:52, 1.61it/s, loss=0.283, v_num=qgmh]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 90%|███████▏| 400/445 [03:48<00:25, 1.75it/s, loss=0.283, v_num=qgmh]
Validating: 47%|██████████████▌ | 40/85 [00:08<00:09, 4.63it/s][A
Epoch 9: 99%|███████▉| 440/445 [03:55<00:02, 1.86it/s, loss=0.283, v_num=qgmh]
Validating: 94%|█████████████████████████████▏ | 80/85 [00:16<00:01, 4.85it/s][A
Epoch 9: 100%|████████| 445/445 [04:00<00:00, 1.85it/s, loss=0.283, v_num=qgmh]
[ATotal Unlabelled Pool Size 40317
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|████████████████████████████████| 315/315 [01:03<00:00, 5.00it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.5514547228813171, 'train_size': 46000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[15173 548 15349 ... 5020 29857 24139]
-----------------
x-train_new size 2000
len of sample ids 2000
New train set size 48000
New unlabelled pool size 38317
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 9: 83%|██████▌ | 380/460 [03:53<00:49, 1.63it/s, loss=0.286, v_num=qgmh]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 87%|██████▉ | 400/460 [03:57<00:35, 1.68it/s, loss=0.286, v_num=qgmh]
Epoch 9: 91%|███████▎| 420/460 [04:01<00:22, 1.74it/s, loss=0.286, v_num=qgmh]
Epoch 9: 96%|███████▋| 440/460 [04:05<00:11, 1.79it/s, loss=0.286, v_num=qgmh]
Epoch 9: 100%|████████| 460/460 [04:09<00:00, 1.85it/s, loss=0.286, v_num=qgmh]
Epoch 9: 100%|████████| 460/460 [04:10<00:00, 1.84it/s, loss=0.295, v_num=qgmh]
[ATotal Unlabelled Pool Size 38317
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|████████████████████████████████| 300/300 [00:59<00:00, 5.01it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.5812824368476868, 'train_size': 48000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[ 4969 1536 11036 ... 15549 26376 29152]
-----------------
x-train_new size 2000
len of sample ids 2000
New train set size 50000
New unlabelled pool size 36317
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 9: 4%|█▍ | 20/476 [00:01<00:27, 16.73it/s]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 8%|██▊ | 40/476 [00:05<01:03, 6.89it/s]
Epoch 9: 13%|████▏ | 60/476 [00:09<01:07, 6.17it/s]
Epoch 9: 17%|█████▌ | 80/476 [00:13<01:07, 5.86it/s]
Epoch 9: 21%|██████▋ | 100/476 [00:17<01:06, 5.69it/s]
Epoch 9: 25%|██ | 120/476 [00:18<00:55, 6.44it/s, loss=0.285, v_num=qgmh]
Total Unlabelled Pool Size 36317
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Query Sample size 2000
Resetting Predictions
Testing: 100%|████████████████████████████████| 284/284 [00:56<00:00, 5.01it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.585565984249115, 'train_size': 50000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[ 7048 35300 34933 ... 32359 14479 12314]
-----------------
x-train_new size 2000
len of sample ids 2000
New train set size 52000
New unlabelled pool size 34317
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 9: 4%|█▎ | 20/492 [00:01<00:27, 17.19it/s]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 8%|██▋ | 40/492 [00:05<01:03, 7.12it/s]
Epoch 9: 12%|████ | 60/492 [00:09<01:08, 6.29it/s]
Epoch 9: 16%|█████▎ | 80/492 [00:13<01:09, 5.93it/s]
Epoch 9: 20%|██████▌ | 100/492 [00:17<01:08, 5.74it/s]
Epoch 9: 24%|██▏ | 120/492 [00:18<00:57, 6.51it/s, loss=0.29, v_num=qgmh]
Total Unlabelled Pool Size 34317
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|████████████████████████████████| 269/269 [00:53<00:00, 5.02it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.5476877093315125, 'train_size': 52000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[30939 30655 33714 ... 4653 21395 22098]
-----------------
x-train_new size 2000
len of sample ids 2000
New train set size 54000
New unlabelled pool size 32317
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 9: 4%|█▎ | 20/507 [00:01<00:28, 17.31it/s]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 8%|██▌ | 40/507 [00:05<01:05, 7.13it/s]
Epoch 9: 12%|███▉ | 60/507 [00:09<01:11, 6.29it/s]
Epoch 9: 16%|█████▏ | 80/507 [00:13<01:11, 5.94it/s]
Epoch 9: 20%|██████▎ | 100/507 [00:17<01:10, 5.75it/s]
Epoch 9: 24%|██▏ | 120/507 [00:18<00:59, 6.51it/s, loss=0.29, v_num=qgmh]
Total Unlabelled Pool Size 32317
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Query Sample size 2000
Resetting Predictions
Testing: 100%|████████████████████████████████| 253/253 [00:50<00:00, 5.01it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.5192623138427734, 'train_size': 54000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[24681 7488 14631 ... 19713 28840 9917]
-----------------
x-train_new size 2000
len of sample ids 2000
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
New train set size 56000
New unlabelled pool size 30317
Epoch 9: 4%|█▎ | 20/523 [00:01<00:29, 16.79it/s]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 8%|██▌ | 40/523 [00:05<01:08, 7.09it/s]
Epoch 9: 11%|███▊ | 60/523 [00:09<01:13, 6.27it/s]
Epoch 9: 15%|█████ | 80/523 [00:13<01:14, 5.91it/s]
Epoch 9: 19%|██████ | 100/523 [00:17<01:13, 5.73it/s]
Epoch 9: 23%|█▊ | 120/523 [00:18<01:02, 6.49it/s, loss=0.288, v_num=qgmh]
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Total Unlabelled Pool Size 30317
Query Sample size 2000
Resetting Predictions
Testing: 100%|████████████████████████████████| 237/237 [00:47<00:00, 5.01it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.4511660039424896, 'train_size': 56000.00390625}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[22584 23741 16193 ... 2916 538 9971]
-----------------
x-train_new size 2000
len of sample ids 2000
New train set size 58000
New unlabelled pool size 28317
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 9: 4%|█▏ | 20/539 [00:01<00:31, 16.32it/s]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 7%|██▍ | 40/539 [00:05<01:11, 7.02it/s]
Epoch 9: 11%|███▋ | 60/539 [00:09<01:17, 6.17it/s]
Epoch 9: 15%|████▉ | 80/539 [00:13<01:18, 5.86it/s]
Epoch 9: 19%|█████▉ | 100/539 [00:17<01:17, 5.69it/s]
Epoch 9: 22%|█▊ | 120/539 [00:18<01:05, 6.44it/s, loss=0.291, v_num=qgmh]
Total Unlabelled Pool Size 28317
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|████████████████████████████████| 222/222 [00:44<00:00, 5.01it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.4401596188545227, 'train_size': 57999.99609375}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[18051 13864 25737 ... 5954 3743 17955]
-----------------
x-train_new size 2000
len of sample ids 2000
tcmalloc: large alloc 2788507648 bytes == 0x558727176000 @ 0x7f5215e5b1e7 0x7f52139db46e 0x7f5213a2bc7b 0x7f5213a2bd18 0x7f5213ad3010 0x7f5213ad373c 0x7f5213ad385d 0x55847b7352f8 0x7f5213a18ef7 0x55847b732fd7 0x55847b732de0 0x55847b7a6ac2 0x55847b7a1b0e 0x55847b73477a 0x55847b7a6e50 0x55847b73469a 0x55847b7a2c9e 0x55847b73469a 0x55847b7a2a45 0x55847b7a1b0e 0x55847b7a1813 0x55847b86b592 0x55847b86b90d 0x55847b86b7b6 0x55847b843103 0x55847b842dac 0x7f5214c45bf7 0x55847b842c8a
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
New train set size 60000
New unlabelled pool size 26317
Epoch 9: 4%|█▏ | 20/554 [00:01<00:31, 16.73it/s]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 7%|██▍ | 40/554 [00:05<01:12, 7.09it/s]
Epoch 9: 11%|███▌ | 60/554 [00:09<01:18, 6.28it/s]
Epoch 9: 14%|████▊ | 80/554 [00:13<01:20, 5.91it/s]
Epoch 9: 18%|█████▊ | 100/554 [00:17<01:19, 5.73it/s]
Epoch 9: 22%|█▋ | 120/554 [00:18<01:06, 6.49it/s, loss=0.305, v_num=qgmh]
Total Unlabelled Pool Size 26317
Query Sample size 2000
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Resetting Predictions
Testing: 100%|████████████████████████████████| 206/206 [00:41<00:00, 4.99it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.489873468875885, 'train_size': 60000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[12763 4106 18957 ... 22801 10265 23163]
-----------------
x-train_new size 2000
len of sample ids 2000
New train set size 62000
New unlabelled pool size 24317
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 9: 4%|█▏ | 20/570 [00:01<00:32, 16.88it/s]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 7%|██▎ | 40/570 [00:05<01:15, 7.04it/s]
Epoch 9: 11%|███▍ | 60/570 [00:09<01:22, 6.21it/s]
Epoch 9: 14%|████▋ | 80/570 [00:13<01:23, 5.89it/s]
Epoch 9: 18%|█████▌ | 100/570 [00:17<01:22, 5.71it/s]
Epoch 9: 21%|█▋ | 120/570 [00:18<01:09, 6.47it/s, loss=0.316, v_num=qgmh]
Total Unlabelled Pool Size 24317
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|████████████████████████████████| 190/190 [00:38<00:00, 4.99it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.5035983324050903, 'train_size': 62000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[11526 7844 9930 ... 22994 8071 12350]
-----------------
x-train_new size 2000
len of sample ids 2000
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
New train set size 64000
New unlabelled pool size 22317
Epoch 9: 3%|█▏ | 20/585 [00:01<00:33, 16.90it/s]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 7%|██▎ | 40/585 [00:05<01:16, 7.12it/s]
Epoch 9: 10%|███▍ | 60/585 [00:09<01:23, 6.29it/s]
Epoch 9: 14%|████▌ | 80/585 [00:13<01:24, 5.95it/s]
Epoch 9: 17%|█████▍ | 100/585 [00:17<01:24, 5.76it/s]
Epoch 9: 21%|█▋ | 120/585 [00:18<01:11, 6.52it/s, loss=0.326, v_num=qgmh]
Total Unlabelled Pool Size 22317
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|████████████████████████████████| 175/175 [00:35<00:00, 5.00it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.40359365940093994, 'train_size': 64000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[14416 4065 16845 ... 11960 12501 6349]
-----------------
x-train_new size 2000
len of sample ids 2000
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
New train set size 66000
New unlabelled pool size 20317
Epoch 9: 3%|█ | 20/601 [00:01<00:34, 16.77it/s]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 7%|██▏ | 40/601 [00:05<01:19, 7.10it/s]
Epoch 9: 10%|███▎ | 60/601 [00:09<01:26, 6.28it/s]
Epoch 9: 13%|████▍ | 80/601 [00:13<01:27, 5.92it/s]
Epoch 9: 17%|█████▎ | 100/601 [00:17<01:27, 5.74it/s]
Epoch 9: 20%|█▌ | 120/601 [00:18<01:14, 6.50it/s, loss=0.336, v_num=qgmh]
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Total Unlabelled Pool Size 20317
Query Sample size 2000
Resetting Predictions
Testing: 100%|████████████████████████████████| 159/159 [00:32<00:00, 4.96it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.4016340970993042, 'train_size': 66000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[10394 10431 4750 ... 6308 2375 16787]
-----------------
x-train_new size 2000
len of sample ids 2000
tcmalloc: large alloc 3160301568 bytes == 0x5586fac50000 @ 0x7f5215e5b1e7 0x7f52139db46e 0x7f5213a2bc7b 0x7f5213a2bd18 0x7f5213ad3010 0x7f5213ad373c 0x7f5213ad385d 0x55847b7352f8 0x7f5213a18ef7 0x55847b732fd7 0x55847b732de0 0x55847b7a6ac2 0x55847b7a1b0e 0x55847b73477a 0x55847b7a6e50 0x55847b73469a 0x55847b7a2c9e 0x55847b73469a 0x55847b7a2a45 0x55847b7a1b0e 0x55847b7a1813 0x55847b86b592 0x55847b86b90d 0x55847b86b7b6 0x55847b843103 0x55847b842dac 0x7f5214c45bf7 0x55847b842c8a
New train set size 68000
New unlabelled pool size 18317
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 9: 3%|█ | 20/617 [00:01<00:35, 16.65it/s]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 6%|██▏ | 40/617 [00:05<01:22, 7.03it/s]
Epoch 9: 10%|███▏ | 60/617 [00:09<01:29, 6.24it/s]
Epoch 9: 13%|████▎ | 80/617 [00:13<01:30, 5.91it/s]
Epoch 9: 16%|█████▏ | 100/617 [00:17<01:30, 5.73it/s]
Epoch 9: 19%|█▌ | 120/617 [00:18<01:16, 6.49it/s, loss=0.336, v_num=qgmh]
Total Unlabelled Pool Size 18317
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|████████████████████████████████| 144/144 [00:28<00:00, 4.98it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.3880002200603485, 'train_size': 68000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[18269 7700 5737 ... 13537 13187 11143]
-----------------
x-train_new size 2000
len of sample ids 2000
New train set size 70000
New unlabelled pool size 16317
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 9: 3%|█ | 20/632 [00:01<00:36, 16.54it/s]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 6%|██ | 40/632 [00:05<01:23, 7.06it/s]
Epoch 9: 9%|███▏ | 60/632 [00:09<01:31, 6.24it/s]
Epoch 9: 13%|████▏ | 80/632 [00:13<01:34, 5.85it/s]
Epoch 9: 16%|█████ | 100/632 [00:17<01:33, 5.67it/s]
Epoch 9: 19%|█▌ | 120/632 [00:18<01:19, 6.43it/s, loss=0.345, v_num=qgmh]
Total Unlabelled Pool Size 16317
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|████████████████████████████████| 128/128 [00:25<00:00, 4.95it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.49267634749412537, 'train_size': 70000.0078125}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[ 440 15763 2801 ... 203 9161 3745]
-----------------
x-train_new size 2000
len of sample ids 2000
New train set size 72000
New unlabelled pool size 14317
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 9: 3%|█ | 20/648 [00:01<00:37, 16.53it/s]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 6%|██ | 40/648 [00:05<01:28, 6.87it/s]
Epoch 9: 9%|███ | 60/648 [00:09<01:35, 6.16it/s]
Epoch 9: 12%|████ | 80/648 [00:13<01:37, 5.85it/s]
Epoch 9: 15%|████▉ | 100/648 [00:17<01:36, 5.68it/s]
Epoch 9: 19%|█▍ | 120/648 [00:18<01:22, 6.44it/s, loss=0.354, v_num=qgmh]
Total Unlabelled Pool Size 14317
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|████████████████████████████████| 112/112 [00:22<00:00, 4.91it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.4723755121231079, 'train_size': 72000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[2726 2483 987 ... 5080 4883 357]
-----------------
x-train_new size 2000
len of sample ids 2000
New train set size 74000
New unlabelled pool size 12317
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 9: 3%|▉ | 20/664 [00:01<00:39, 16.35it/s]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 6%|█▉ | 40/664 [00:05<01:29, 6.99it/s]
Epoch 9: 9%|██▉ | 60/664 [00:09<01:37, 6.22it/s]
Epoch 9: 12%|███▉ | 80/664 [00:13<01:39, 5.89it/s]
Epoch 9: 15%|████▊ | 100/664 [00:17<01:38, 5.71it/s]
Epoch 9: 18%|█▍ | 120/664 [00:18<01:24, 6.47it/s, loss=0.369, v_num=qgmh]
Total Unlabelled Pool Size 12317
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|██████████████████████████████████| 97/97 [00:19<00:00, 4.93it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.4438580870628357, 'train_size': 74000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[ 7201 586 5755 ... 843 12130 9976]
-----------------
x-train_new size 2000
len of sample ids 2000
tcmalloc: large alloc 3532103680 bytes == 0x5588191b4000 @ 0x7f5215e5b1e7 0x7f52139db46e 0x7f5213a2bc7b 0x7f5213a2bd18 0x7f5213ad3010 0x7f5213ad373c 0x7f5213ad385d 0x55847b7352f8 0x7f5213a18ef7 0x55847b732fd7 0x55847b732de0 0x55847b7a6ac2 0x55847b7a1b0e 0x55847b73477a 0x55847b7a6e50 0x55847b73469a 0x55847b7a2c9e 0x55847b73469a 0x55847b7a2a45 0x55847b7a1b0e 0x55847b7a1813 0x55847b86b592 0x55847b86b90d 0x55847b86b7b6 0x55847b843103 0x55847b842dac 0x7f5214c45bf7 0x55847b842c8a
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
New train set size 76000
New unlabelled pool size 10317
Epoch 9: 3%|▉ | 20/679 [00:01<00:39, 16.71it/s]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 6%|█▉ | 40/679 [00:05<01:30, 7.04it/s]
Epoch 9: 9%|██▉ | 60/679 [00:09<01:39, 6.24it/s]
Epoch 9: 12%|███▉ | 80/679 [00:13<01:41, 5.90it/s]
Epoch 9: 15%|████▋ | 100/679 [00:17<01:41, 5.72it/s]
Epoch 9: 18%|█▍ | 120/679 [00:18<01:26, 6.48it/s, loss=0.383, v_num=qgmh]
Total Unlabelled Pool Size 10317
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|██████████████████████████████████| 81/81 [00:16<00:00, 4.86it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.40321800112724304, 'train_size': 76000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[ 5072 9813 10312 ... 4691 9156 4645]
-----------------
x-train_new size 2000
len of sample ids 2000
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
New train set size 78000
New unlabelled pool size 8317
Epoch 9: 3%|▉ | 20/695 [00:01<00:41, 16.36it/s]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 6%|█▉ | 40/695 [00:05<01:33, 6.98it/s]
Epoch 9: 9%|██▊ | 60/695 [00:09<01:42, 6.21it/s]
Epoch 9: 12%|███▊ | 80/695 [00:13<01:44, 5.89it/s]
Epoch 9: 14%|████▌ | 100/695 [00:17<01:44, 5.71it/s]
Epoch 9: 17%|█▍ | 120/695 [00:18<01:28, 6.47it/s, loss=0.393, v_num=qgmh]
Total Unlabelled Pool Size 8317
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|██████████████████████████████████| 65/65 [00:13<00:00, 4.81it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.4406636953353882, 'train_size': 78000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[5353 1742 2013 ... 3726 5712 1849]
-----------------
x-train_new size 2000
len of sample ids 2000
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
New train set size 80000
New unlabelled pool size 6317
Epoch 9: 3%|▉ | 20/710 [00:01<00:41, 16.58it/s]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 6%|█▊ | 40/710 [00:05<01:35, 7.00it/s]
Epoch 9: 8%|██▊ | 60/710 [00:09<01:44, 6.22it/s]
Epoch 9: 11%|███▋ | 80/710 [00:13<01:46, 5.89it/s]
Epoch 9: 14%|████▌ | 100/710 [00:17<01:46, 5.71it/s]
Epoch 9: 17%|█▎ | 120/710 [00:18<01:31, 6.47it/s, loss=0.412, v_num=qgmh]
Total Unlabelled Pool Size 6317
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|██████████████████████████████████| 50/50 [00:10<00:00, 4.76it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.41127118468284607, 'train_size': 80000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[3848 6273 4953 ... 3032 817 2596]
-----------------
x-train_new size 2000
len of sample ids 2000
Epoch 9: 17%|█▎ | 120/710 [00:34<02:48, 3.51it/s, loss=0.412, v_num=qgmh]
New train set size 82000
New unlabelled pool size 4317
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 9: 3%|▉ | 20/726 [00:01<00:42, 16.52it/s]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 6%|█▊ | 40/726 [00:05<01:37, 7.05it/s]
Epoch 9: 8%|██▋ | 60/726 [00:09<01:46, 6.25it/s]
Epoch 9: 11%|███▋ | 80/726 [00:13<01:49, 5.91it/s]
Epoch 9: 14%|████▍ | 100/726 [00:17<01:49, 5.73it/s]
Epoch 9: 17%|█▎ | 120/726 [00:18<01:33, 6.48it/s, loss=0.428, v_num=qgmh]
Total Unlabelled Pool Size 4317
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Query Sample size 2000
Resetting Predictions
Testing: 100%|██████████████████████████████████| 34/34 [00:07<00:00, 4.57it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.37317582964897156, 'train_size': 82000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[3099 1364 3894 ... 790 1019 1195]
-----------------
x-train_new size 2000
len of sample ids 2000
New train set size 84000
New unlabelled pool size 2317
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 9: 3%|▉ | 20/742 [00:01<00:44, 16.23it/s]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/85 [00:00<?, ?it/s][A
Epoch 9: 17%|█▎ | 120/726 [00:37<03:09, 3.20it/s, loss=0.428, v_num=qgmh]
Epoch 9: 8%|██▋ | 60/742 [00:09<01:52, 6.08it/s]
Epoch 9: 11%|███▌ | 80/742 [00:13<01:54, 5.80it/s]
Epoch 9: 13%|████▎ | 100/742 [00:17<01:53, 5.65it/s]
Epoch 9: 16%|█▎ | 120/742 [00:18<01:37, 6.39it/s, loss=0.446, v_num=qgmh]
Total Unlabelled Pool Size 2317
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|██████████████████████████████████| 19/19 [00:04<00:00, 4.30it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.3569270670413971, 'train_size': 84000.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "least_confidence":
-----------------
[ 387 384 925 ... 218 1494 1871]
-----------------
x-train_new size 2000
len of sample ids 2000
tcmalloc: large alloc 3996852224 bytes == 0x55876d5bc000 @ 0x7f5215e5b1e7 0x7f52139db46e 0x7f5213a2bc7b 0x7f5213a2bd18 0x7f5213ad3010 0x7f5213ad373c 0x7f5213ad385d 0x55847b7352f8 0x7f5213a18ef7 0x55847b732fd7 0x55847b732de0 0x55847b7a6ac2 0x55847b7a1b0e 0x55847b73477a 0x55847b7a6e50 0x55847b73469a 0x55847b7a2c9e 0x55847b73469a 0x55847b7a2a45 0x55847b7a1b0e 0x55847b7a1813 0x55847b86b592 0x55847b86b90d 0x55847b86b7b6 0x55847b843103 0x55847b842dac 0x7f5214c45bf7 0x55847b842c8a
New train set size 86000
New unlabelled pool size 317
wandb: Waiting for W&B process to finish, PID 555
wandb: Program ended successfully.
wandb:
wandb: Find user logs for this run at: /content/fsdl-active-learning2/wandb/run-20210505_070451-yi7zqgmh/logs/debug.log
wandb: Find internal logs for this run at: /content/fsdl-active-learning2/wandb/run-20210505_070451-yi7zqgmh/logs/debug-internal.log
wandb: Run summary:
wandb: train_loss 0.68934
wandb: train_acc 0.80469
wandb: train_size 84000.0
wandb: epoch 9
wandb: trainer/global_step 5422
wandb: _runtime 6013
wandb: _timestamp 1620204304
wandb: _step 116
wandb: val_loss 1.08278
wandb: val_acc 0.5771
wandb: test_acc 0.35693
wandb: Run history:
wandb: train_loss █▇▇▇▇▆▆▆▅▅▅▄▄▃▃▃▂▂▂▂▂▂▂▁▂▂▁▃▄▄▃▄▃▅▄▄▄▅▅▆
wandb: train_acc ▁▁▂▂▂▃▃▃▄▄▄▅▅▆▆▆▇▇▇▇▇▇▇█▇██▇▆▆▆▆▇▅▆▆▅▅▄▅
wandb: train_size ▁▁▁▁▁▁▁▁▁▁▂▂▂▂▃▃▃▃▃▄▄▄▄▅▅▅▅▅▆▆▆▆▆▇▇▇▇███
wandb: epoch ▁▂▃▄▅▆▇█████████████████████████████████
wandb: trainer/global_step ▁▁▁▂▂▂▃▃▃▃▄▄▄▅▅▅▆▆▇▇▇███████████████████
wandb: _runtime ▁▁▁▂▂▂▂▂▃▃▃▄▄▄▄▅▅▅▆▆▆▇▇▇▇▇▇▇▇███████████
wandb: _timestamp ▁▁▁▂▂▂▂▂▃▃▃▄▄▄▄▅▅▅▆▆▆▇▇▇▇▇▇▇▇███████████
wandb: _step ▁▁▁▁▂▂▂▂▂▃▃▃▃▃▃▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇▇███
wandb: val_loss ▂▂▂█▃▂▁▂▂▃▁▃▄▁▂▂▂▄▂▂▁▄▂▂▂▂▅▇▄▂▅▆▇▂▁▁▁▁▂▂
wandb: val_acc ▆▆▆▁▄▇▇▇▇▄▇▄▃█▇█▇▇▆▇█▇████▇▇▇█▇▇▇██▇▇▇▇▆
wandb: test_acc ▃█▂▁█▇▇▆▆▅▆▆▇▅▆▆▅▅▃▃▄▄▂▂▂▄▄▃▂▃▂▂▁
wandb:
wandb: Synced 5 W&B file(s), 1 media file(s), 0 artifact file(s) and 1 other file(s)
wandb:
wandb: Synced fsdl-active-learning_least_confidence: https://wandb.ai/ravindra/fsdl-active-learning/runs/yi7zqgmh
Epoch 9: 18% 120/679 [03:01<14:06, 1.51s/it, loss=0.383, v_num=qgmh]
Epoch 9: 23% 120/523 [12:15<41:09, 6.13s/it, loss=0.288, v_num=qgmh]
Epoch 9: 100% 460/460 [21:14<00:00, 2.77s/it, loss=0.295, v_num=qgmh]
Epoch 9: 25% 120/476 [16:01<47:32, 8.01s/it, loss=0.285, v_num=qgmh]
Epoch 9: 100% 351/351 [54:44<00:00, 9.36s/it, loss=0.385, v_num=qgmh]
Epoch 9: 100% 320/320 [1:03:19<00:00, 11.87s/it, loss=0.402, v_num=qgmh]
Epoch 9: 100% 445/445 [26:22<00:00, 3.56s/it, loss=0.283, v_num=qgmh]
Epoch 9: 100% 429/429 [31:22<00:00, 4.39s/it, loss=0.31, v_num=qgmh]
Epoch 9: 22% 120/554 [09:59<36:09, 5.00s/it, loss=0.305, v_num=qgmh]
Epoch 9: 19% 120/617 [06:06<25:17, 3.05s/it, loss=0.336, v_num=qgmh]
Epoch 9: 17% 120/710 [01:47<08:50, 1.11it/s, loss=0.412, v_num=qgmh]
Epoch 9: 19% 120/632 [05:15<22:27, 2.63s/it, loss=0.345, v_num=qgmh]
Epoch 9: 100% 335/335 [59:05<00:00, 10.58s/it, loss=0.376, v_num=qgmh]
Epoch 9: 100% 273/273 [1:15:25<00:00, 16.58s/it, loss=0.517, v_num=qgmh]
Epoch 9: 100% 382/382 [45:43<00:00, 7.18s/it, loss=0.369, v_num=qgmh]
Epoch 9: 100% 367/367 [50:17<00:00, 8.22s/it, loss=0.402, v_num=qgmh]
Epoch 9: 22% 120/539 [11:06<38:45, 5.55s/it, loss=0.291, v_num=qgmh]
Epoch 9: 24% 120/507 [13:27<43:24, 6.73s/it, loss=0.29, v_num=qgmh]
Epoch 9: 100% 414/414 [36:16<00:00, 5.26s/it, loss=0.365, v_num=qgmh]
Epoch 9: 100% 304/304 [1:07:29<00:00, 13.32s/it, loss=0.503, v_num=qgmh]
Epoch 9: 100% 289/289 [1:11:30<00:00, 14.85s/it, loss=0.506, v_num=qgmh]
Epoch 9: 100% 257/257 [1:19:14<00:00, 18.50s/it, loss=0.577, v_num=qgmh]
Epoch 9: 100% 398/398 [41:03<00:00, 6.19s/it, loss=0.353, v_num=qgmh]
Epoch 9: 21% 120/585 [07:56<30:47, 3.97s/it, loss=0.326, v_num=qgmh]
Epoch 9: 21% 120/570 [08:56<33:32, 4.47s/it, loss=0.316, v_num=qgmh]
Epoch 9: 20% 120/601 [07:00<28:04, 3.50s/it, loss=0.336, v_num=qgmh]
Epoch 9: 18% 120/664 [03:43<16:51, 1.86s/it, loss=0.369, v_num=qgmh]
Epoch 9: 19% 120/648 [04:27<19:37, 2.23s/it, loss=0.354, v_num=qgmh]
Epoch 9: 24% 120/492 [14:42<45:37, 7.36s/it, loss=0.29, v_num=qgmh]
Epoch 9: 17% 120/695 [02:23<11:25, 1.19s/it, loss=0.393, v_num=qgmh]
Epoch 9: 17% 120/726 [01:01<05:09, 1.96it/s, loss=0.428, v_num=qgmh]
Epoch 9: 16% 120/742 [00:31<02:44, 3.79it/s, loss=0.446, v_num=qgmh]
|
1.1 PCA and K-Means Decipher Genome.ipynb | ###Markdown
**Clustering Genomic Data**
In this case study we take data from a genome with the goal of separating useful genes from noise. Unfortunately, we don't know in advance which sequences are useful, so we have to use unsupervised methods to infer this. In this notebook we walk through the following series of steps:
1. The data is imported and prepared. The sequence, a single string, is split into non-overlapping substrings of length 300, and we then count how often each distinct "word" of 1, 2, 3, and 4 base pairs appears in each substring.
2. PCA is performed to try to identify the internal structure of the data.
3. Finally, if PCA reveals some internal structure, we apply clustering to it.
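To make step 1 concrete, here is a minimal sketch on a made-up 12-letter toy sequence (not the real data) of what "counting words of length 3" means; the real preparation below does the same thing with regular expressions over 300-character substrings.

```python
toy = "gccgatagcctt"                          # made-up toy sequence
counts = {}
for i in range(0, len(toy) - 2, 3):           # non-overlapping 3-letter words
    word = toy[i:i + 3]
    counts[word] = counts.get(word, 0) + 1
print(counts)                                 # {'gcc': 1, 'gat': 1, 'agc': 1, 'ctt': 1}
```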
###Code
import pandas as pd
from tqdm import tqdm
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
###Output
_____no_output_____
###Markdown
**The Data Preparation**
The data preparation is done fairly easily with regular expressions: the FASTA file is read, the lines are joined into one long string, and that string is broken up into non-overlapping 300-character substrings.
###Code
with open('data/ccrescentus.fa') as file:
data = file.readlines()
cleanarray = [string.replace('\n', '') for string in data[1:]]
cleanstring = ''.join(cleanarray)
import re
arrays = re.findall('.{300}', cleanstring)
arrays
tbls = []
# Build one frequency table per word length: count the non-overlapping words of
# length 1-4 in every 300-character substring.
for wordlen in range(1, 5):
    counts = []
    for array in tqdm(arrays):
        worddict = {}
        for word in re.findall(''.join(['.{', str(wordlen), '}']), array):
            worddict[word] = worddict.get(word, 0) + 1
        counts.append(worddict)
    tbls.append(pd.DataFrame(counts))
###Output
100%|████████████████████████████████████████████████████████████████████████████████████| 1018/1018 [00:00<00:00, 3351.77it/s]
100%|████████████████████████████████████████████████████████████████████████████████████| 1018/1018 [00:00<00:00, 8876.19it/s]
100%|████████████████████████████████████████████████████████████████████████████████████| 1018/1018 [00:00<00:00, 9195.68it/s]
100%|███████████████████████████████████████████████████████████████████████████████████| 1018/1018 [00:00<00:00, 12601.34it/s]
###Markdown
**Principal Component Analysis**
One reason why we try multiple word lengths is that, without additional domain knowledge, it isn't clear whether there are more meaningful units to work with than individual letters. So we calculate frequency tables to see whether certain combinations occur more frequently (and more frequently together) than others, and thus might be meaningful. PCA helps here: combinations that frequently co-occur form natural clusters, and reducing the data to two dimensions gives us a way to visualize the dataset that is otherwise not possible.
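The original analysis only inspects the 2-D scatter plots. As an optional check that is not part of the notebook's code, one could also look at how much variance the first two components actually capture for each word length, for example:

```python
for wordlen, t in enumerate(tbls, start=1):
    pca = PCA(n_components=2)
    pca.fit(t.fillna(0))
    print(wordlen, pca.explained_variance_ratio_)   # share of variance per component
```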
###Code
components = PCA(n_components=2)
components.fit(tbls[0])
import matplotlib.pyplot as plt
%matplotlib inline
tbls_pca = []
for i in range(4):
    print(f"PCA Words of length: {i+1}")
    t = tbls[i].fillna(0)
    components = PCA(n_components=2)
    df = pd.DataFrame(components.fit_transform(t))
    tbls_pca.append(df)
    plt.scatter(df[0], df[1])
    plt.show()
###Output
PCA Words of length: 1
###Markdown
**Clustering**
For words of length 1, 2, and 4 there appears to be only one cluster, so k-means isn't going to yield anything interesting. Words of length 3, however, clearly exhibit clustering behaviour. It turns out that this is related to the shifted reading frames of valid codons, but at the moment it isn't necessary to know that. Being able to tell the clusters apart will end up being very useful for determining which three-base-pair sequences are valid code, so let's cluster!
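As a quick sanity check on the choice of k = 7 used below (this check is not in the original notebook), one could look at how the k-means inertia drops as k grows, and at how many points land in each cluster:

```python
for k in range(2, 10):
    km = KMeans(n_clusters=k).fit(tbls_pca[2])
    print(k, round(km.inertia_, 1))            # look for an "elbow" around k = 7

labels = KMeans(n_clusters=7).fit_predict(tbls_pca[2])
print(pd.Series(labels).value_counts())        # points per cluster
```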
###Code
triplets = tbls_pca[2]
model = KMeans(n_clusters = 7)
model.fit(triplets)
predictions = model.predict(triplets)
###Output
_____no_output_____
###Markdown
**Results in a Pretty Graph**
Using the clustering results, we can colour each point by its predicted cluster and visualize the groups!
###Code
plt.scatter(triplets[0], triplets[1], c=predictions)
plt.show()
###Output
_____no_output_____ |
GAN Attack.ipynb | ###Markdown
---
###Code
model = build_cnn_mnist()
#model.load_weights('./tmp/weights/mnist_cnn_hinge.h5')
model.load_weights('./tmp/weights/mnist_cnn_smxe.h5')
#model.load_weights('./tmp/mnist_cnn_margin_C1_L1/model.h5')
model.compile(loss=keras.losses.sparse_categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
from lib.gan.model_acgan_mnist import *
latent_dim = 100
d = build_discriminator()
g = build_generator(latent_dim)
d.load_weights('./tmp/acgan_mnist/weight_d_epoch049.h5')
g.load_weights('./tmp/acgan_mnist/weight_g_epoch049.h5')
# x_g, _ = generate_random(g, (100, latent_dim))
x_g, _ = generate_ten(g, (100, latent_dim))
plt.imshow(collage(x_g), cmap='gray')
plt.axis('off')
#plt.savefig('acgan_latent.png', bbox_inches='tight')
plt.show()
y_pred = []
fake = []
for i, x in enumerate(x_g):
tmp = d.predict(x.reshape(1, 28, 28, 1))
fake.append(round(float(tmp[0]), 2))
y_pred.append(np.argmax(tmp[1]))
if (i + 1) % 10 == 0:
print("{} {}".format(y_pred, fake))
y_pred = []
fake = []
latent = Input(shape=(latent_dim, ))
image_class = Input(shape=(1, ), dtype='int32')
img = g([latent, image_class])
y = model(img)
combine = Model(inputs=[latent, image_class], outputs=y)
combine.trainable=False
combine.compile(loss=keras.losses.sparse_categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
z, y = random_sample((50, latent_dim))
y_cat = keras.utils.to_categorical(y, NUM_LABELS)
grad_fn = grad_acgan_cross_entropy(combine)
# grad_fn = grad_acgan_hinge(combine)
x_adv = PGD(combine, z, y, grad_fn=grad_fn, norm="2", n_step=200,
step_size=0.01, target=False, init_rnd=0.)
x = g.predict([x_adv, y])
y_pred = np.argmax(combine.predict([x_adv, y]), axis=1)
x_adv = x[y != y_pred]
y_adv = y[y != y_pred]
for i, (x_cur, y_cur) in enumerate(zip(x_adv, y_adv)):
plt.imshow(x_cur.reshape(28, 28), cmap='gray')
plt.axis('off')
plt.savefig('./tmp/gan/pgd/{}.png'.format(i), bbox_inches='tight')
plt.show()
print(y_cur)
print(np.argmax(model.predict(x_cur.reshape(1, 28, 28, 1))))
np.max(x_adv[1])
# plt.subplot() takes no figsize argument and the returned handle was unused;
# show each generated digit and print the classifier's prediction for it instead.
for x_cur in x:
    plt.figure(figsize=(2, 2))
    plt.imshow(x_cur.reshape(28, 28))
    plt.axis('off')
    plt.show()
    print(model.predict(x_cur.reshape(1, 28, 28, 1)))
###Output
_____no_output_____
###Markdown
---
Simply iterate until a misclassification is found.
###Code
adv_found = False
i = 0
while not adv_found:
i += 1
z, y = random_sample((100, latent_dim))
y_pred = np.argmax(combine.predict([z, y]), axis=1)
if np.sum(y_pred == y) != 100:
z_adv, y_adv = z[y_pred != y], y[y_pred != y]
break
print(i * 100)
np.sum(y_pred == y)
x = g.predict([z_adv, y_adv])
plt.imshow(x[5].reshape(28, 28))
np.argmax(model.predict(x), axis=1)
x_adv = g.predict([z_adv, y_adv])
for i, (x_cur, y_cur) in enumerate(zip(x_adv, y_adv)):
plt.imshow(x_cur.reshape(28, 28), cmap='gray')
plt.axis('off')
plt.savefig('./tmp/gan/rnd/{}.png'.format(i), bbox_inches='tight')
plt.show()
print(y_cur)
print(np.argmax(model.predict(x_cur.reshape(1, 28, 28, 1))))
# try different loss function
###Output
_____no_output_____
###Markdown
---
###Code
from lib.OptCarlini_GAN import *
z, y = random_sample((50, latent_dim))
y_cat = keras.utils.to_categorical(y, NUM_LABELS)
opt = OptCarlini_GAN(combine, target=False, c=1, lr=0.01, init_scl=1e-6,
loss_op=0, k=0, use_mask=False, decay=True)
x_adv = np.zeros_like(z)
norm = np.zeros(len(z))
for i, (zi, yi) in enumerate(zip(z, y_cat)):
x_adv[i], norm[i] = opt.optimize(zi, yi,
'./tmp/gan/acgan_cnn_smxe_mnist.h5',
n_step=1000, prog=True)
x = g.predict([x_adv, y])
y_pred = np.argmax(combine.predict([x_adv, y]), axis=1)
x_adv = x[y != y_pred]
y_adv = y[y != y_pred]
for i, (x_cur, y_cur) in enumerate(zip(x_adv, y_adv)):
plt.imshow(x_cur.reshape(28, 28), cmap='gray')
plt.axis('off')
plt.savefig('./tmp/gan/cw/{}.png'.format(i), bbox_inches='tight')
plt.show()
print(y_cur)
print(np.argmax(model.predict(x_cur.reshape(1, 28, 28, 1))))
combine.save_weights('temp.h5')
latent = Input(shape=(latent_dim, ))
image_class = Input(shape=(1, ), dtype='int32')
img = g([latent, image_class])
y = model(img)
combine = Model(inputs=[latent, image_class], outputs=y)
combine.load_weights('temp.h5')
combine.input[0].shape[1].value
combine.get_input_at(0)
###Output
_____no_output_____
###Markdown
---
###Code
x_train, y_train, x_test, y_test = load_dataset_fmnist()
len(x_test)
# PGD: 915 178 0.19453551912568307
# CW: 922 402 0.4360086767895879
###Output
_____no_output_____ |
deep_learning/new-intro-to-pytorch/1. Tensors in PyTorch (Exercises).ipynb | ###Markdown
Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
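Before building anything, here is a tiny worked example of the weighted-sum formula above, using made-up numbers that are unrelated to the exercise data generated below:

```python
import torch
x = torch.tensor([[1.0, 2.0]])     # made-up inputs
w = torch.tensor([[0.2, -0.5]])    # made-up weights
b = torch.tensor([[0.1]])          # made-up bias
h = (x * w).sum() + b              # 0.2*1 + (-0.5)*2 + 0.1 = -0.7
print(torch.sigmoid(h))            # the activation squashes this to roughly 0.33
```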
###Code
# First, import PyTorch
import torch
# Define our activation function that we are going to use within the layers
def activation(x):
    """ Sigmoid activation function

        Arguments
        ---------
        x: torch.Tensor
    """
    return 1/(1+torch.exp(-x))
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable and reproducible
# Features are 3 random normal variables
features = torch.randn((1, 5))
print(features)
# True weights for our data, random normal variables again
weights = torch.randn_like(features)
print(weights)
# and a true bias term
bias = torch.randn((1, 1))
print(bias)
###Output
tensor([[-0.1468, 0.7861, 0.9468, -1.1143, 1.6908]])
tensor([[-0.8948, -0.3556, 1.2324, 0.1382, -1.6822]])
tensor([[0.3177]])
###Markdown
--- Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data so that they all being in roughly the same place and we don't have divergence purely based on the starting point of the weights. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, *one row* and *five columns*, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution; we only require one for the entire layer, so no need to create more than a single value. PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.
###Code
## Calculate the output of this network using the weights and bias tensors
# There are 2 ways in which we can calculate the output
sum_first_layer = torch.sum(weights * features)
alternate_sum_first_layer = (weights * features).sum()
plus_bias = sum_first_layer + bias
output = activation(plus_bias)
print(output)
###Output
tensor([[0.1595]])
###Markdown
---
You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.

Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error

```python
>> torch.mm(features, weights)

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 torch.mm(features, weights)

RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033
```

As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is that our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.

**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.

There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).

* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.
* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.
* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.

I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.

> **Exercise**: Calculate the output of our little network using matrix multiplication.
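Before attempting the exercise, it may help to see the reshaping methods in action; this is just an illustration of the shapes involved, not the solution:

```python
print(weights.shape)                # torch.Size([1, 5])
print(weights.view(5, 1).shape)     # torch.Size([5, 1]) - same data, new view
print(weights.reshape(5, 1).shape)  # torch.Size([5, 1])
# weights.resize_(5, 1) would change weights itself (in-place), so we avoid it here
```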
###Code
## Calculate the output of this network using matrix multiplication
activation(torch.mm(features, weights.view(5, 1)) + bias)
###Output
_____no_output_____
###Markdown
Stack them up - adding multiple layers That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$
###Code
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable
# Features are 3 random normal variables
features = torch.randn((1, 3))
# Define the size of each layer in our network
n_input = features.shape[1] # Number of input units, must match number of input features
n_hidden = 2 # Number of hidden units
n_output = 1 # Number of output units
# Weights for inputs to hidden layer
W1 = torch.randn(n_input, n_hidden)
# Weights for hidden layer to output layer
W2 = torch.randn(n_hidden, n_output)
# and bias terms for hidden and output layers
B1 = torch.randn((1, n_hidden))
B2 = torch.randn((1, n_output))
print(W1.shape)
print(W2.shape)
print(B1.shape)
print(B2.shape)
###Output
torch.Size([3, 2])
torch.Size([2, 1])
torch.Size([1, 2])
torch.Size([1, 1])
###Markdown
> **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`.
###Code
# Calculate the output of the hidden layer
hidden = activation(torch.mm(features, W1) + B1)
# Use the output from the hidden layer and feed into the output layer
output = activation(torch.mm(hidden, W2) + B2)
print(output)
###Output
tensor([[0.3171]])
###Markdown
---
If you did this correctly, you should see the output `tensor([[ 0.3171]])`.

The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions.

Numpy to Torch and back

Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code
import numpy as np
a = np.random.rand(4,3)
a
b = torch.from_numpy(a)
b
b.numpy()
###Output
_____no_output_____
###Markdown
The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
###Code
# Multiply PyTorch Tensor by 2, in place
b.mul_(2)
# Numpy array matches new values from Tensor
a
###Output
_____no_output_____ |
Day 1 /0_Python_Intro.ipynb | ###Markdown
Introduction to Python

Python is a scripting language: the interpreter executes the code from top to bottom. In this course we will use interactive Python (IPython), which consists of code cells that can be treated as a series of Python scripts executed one after the other. These code cells are run by pressing Shift-Enter or using the play button in the toolbar. Every line will be interpreted as a new command. After executing, the little number in between the [] indicates the order of execution.

Recap the code yourself

First let's talk about the data types:

1) Integers (int) --> whole numbers
###Code
a = 10
type(a)
###Output
_____no_output_____
###Markdown
2) Floating Numbers
###Code
b=0.5
type(b)
###Output
_____no_output_____
###Markdown
3) Strings
###Code
c = "Flick Kurs 2020"
type(c)
###Output
_____no_output_____
###Markdown
4) Boolean
###Code
Iamsmart = False
type(Iamsmart)
###Output
_____no_output_____
###Markdown
Let's do some quick math!

We can check our variables here again with the print() command, and get everything into one line of code by using ';', which tells Python to treat what follows as a new statement.
###Code
print(a);print(b);print(c)
###Output
10
0.5
Flick Kurs 2020
###Markdown
To display the results of several arithmetic expressions from a single line we separate them with " , " (which collects them into a tuple) instead of " ; ":
###Code
a+b,a*b,a/b
a**2
a>b
Iamsmart==True
###Output
_____no_output_____
###Markdown
Ok, I get it, I am not smart. Of course I can't "add" our string c with the mathematical operation +. But there are workarounds (a short example follows below) that we may need later.

Let's try something more complicated! To do so we will need our first package of today. The 'math' package brings us many useful mathematical functions and operations, so we don't have to define these functions with basic Python instructions.
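Here is that quick aside on the string workaround before we move on to the math package; this snippet is illustrative only and is not needed later:

```python
greeting = c + " rocks!"            # string + string concatenates
summary = f"{c} has {a} lessons"    # an f-string mixes the string c with the int a
print(greeting)
print(summary)
```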
###Code
import math
###Output
_____no_output_____
###Markdown
Now we are able to use all of the functions defined in the 'math' package like....
###Code
x = math.cos(2 * math.pi)
print(x)
###Output
1.0
###Markdown
Unfortunately we are not able to use the cos() function or the predefined number pi without "browsing" the contents of the math package via math.'function'. The following should produce an error:
###Code
#### expect an error! ###
cos(2 * pi)
###Output
_____no_output_____
###Markdown
But if we tell Python to import not the whole library but just the specific functions we need, we can suddenly use these commands directly, without browsing the library via math.'function'.
###Code
from math import cos, pi
cos(2 * pi)
x = cos(2 * pi)
print(x)
###Output
1.0
###Markdown
Functions

Functions are extremely important due to the fact that any code you write should be as reusable as possible. This keeps your coding clean and the number of written lines sane.

Task 1: Write a function named 'adding_numbers' that adds two given numbers!

Your code:

Example solution:
###Code
def adding_numbers(A, B):
    C = A + B
    return C
###Output
_____no_output_____
###Markdown
Now we can use this function over and over again!
###Code
adding_numbers(5,10)
###Output
_____no_output_____
###Markdown
We could also have used a function from a predifined package like numpy to do this **... but ¯\\ _(ツ)_/¯**.
###Code
from numpy import add
add(5,10)
###Output
_____no_output_____
###Markdown
Loops

Recap the loop section of our tutorial! Suppose we have a sentence of words as a string 'sentence' and want to count its length by writing a function. How are we going to do that?

Task 2: Write a function named 'counting' that solves the problem!

Your code:
###Code
sentence= 'Please count me!'
###Output
_____no_output_____
###Markdown
Example solution:
###Code
sentence = 'Please count me!'

def counting(string):
    element_count = 0
    for element in string:
        element_count = element_count + 1
    return element_count

counting(sentence)
###Output
_____no_output_____
###Markdown
The lazy way is again to use a predefined function. This time we use Python's built-in function **len()**.
###Code
len(sentence)
###Output
_____no_output_____
###Markdown
As you can see, these functions count every character in the string - even ' ' is counted. We can enhance our function to ignore ' ' by specifically searching for that character with the **count()** function:
###Code
def counting_letters(word):
    return len(word) - word.count(' ')
counting_letters(sentence)
###Output
_____no_output_____
###Markdown
Classes In this section we will create our very first class "Vehicle". From this Vehicle class we will have other classes like car and bike inheriting functions! 
###Code
class Vehicle():
    def description(self):
        return print("I'm a Vehicle!")

Vehicle.description()
## This should force an error
###Output
_____no_output_____
###Markdown
Task 3: Delete the "self" from description(self) and try to explain the outcome!

---

To use the function described in the class, we first have to create an object:
###Code
v = Vehicle()
v.description()
###Output
I'm a Vehicle!
###Markdown
Now we will let the Car class inherit from Vehicle.
###Code
class Car(Vehicle):
    # class attribute
    wheels = 4

    # initializer with instance attributes
    def __init__(self, color, style):
        self.color = color
        self.style = style

    def description_car(self):
        print("I'm a", self.color, self.style)

    def error():
        return ok
c = Car('black', 'Cabrio')
c.description()
c.description_car()
c.error()
###Output
_____no_output_____
###Markdown
Local/Global Variables

Outside of the class we cannot access the 'wheels' parameter unless we define it to be a "global" variable.
###Code
### expected error until ONCE defined global see below ###
wheels
class Car(Vehicle):
    # class attribute
    global wheels
    wheels = 4

    # initializer with instance attributes
    def __init__(self, color, style):
        self.color = color
        self.style = style
wheels
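# Added sketch (not in the original notebook): the more common way to expose a
# class attribute is to read it via the class or an instance, without "global".
# Bike is a hypothetical class introduced only for this illustration.
class Bike(Vehicle):
    wheels = 2

print(Bike.wheels)     # access through the class ...
print(Bike().wheels)   # ... or through an instance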
###Output
_____no_output_____ |
SuperHackers.ipynb | ###Markdown
**Loading the data**
###Code
import pandas as pd

news = pd.read_csv('https://github.com/EvidenceN/Hacker_News_Trolls/blob/master/top_hacker_authors_dataset/top_hacker_authors.csv?raw=true')
news.head()
comment = news['text']
comment[:5]
author = news['author']
author[:5]
###Output
_____no_output_____
###Markdown
**Tokenizing the data**
###Code
import spacy
from spacy.tokenizer import Tokenizer

nlp = spacy.load("en_core_web_lg")
tokenizer = Tokenizer(nlp.vocab)
###Output
_____no_output_____
###Markdown
**Using Vader Sentiment Analysis**
###Code
import vaderSentiment
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
import vaderSentiment.vaderSentiment as vv
sample = comment[0]
sample
score = SentimentIntensityAnalyzer()
help(score)
###Output
Help on SentimentIntensityAnalyzer in module vaderSentiment.vaderSentiment object:
class SentimentIntensityAnalyzer(builtins.object)
| SentimentIntensityAnalyzer(lexicon_file='vader_lexicon.txt', emoji_lexicon='emoji_utf8_lexicon.txt')
|
| Give a sentiment intensity score to sentences.
|
| Methods defined here:
|
| __init__(self, lexicon_file='vader_lexicon.txt', emoji_lexicon='emoji_utf8_lexicon.txt')
| Initialize self. See help(type(self)) for accurate signature.
|
| make_emoji_dict(self)
| Convert emoji lexicon file to a dictionary
|
| make_lex_dict(self)
| Convert lexicon file to a dictionary
|
| polarity_scores(self, text)
| Return a float for sentiment strength based on the input text.
| Positive values are positive valence, negative value are negative
| valence.
|
| score_valence(self, sentiments, text)
|
| sentiment_valence(self, valence, sentitext, item, i, sentiments)
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
###Markdown
Robert's notebook: https://github.com/BrokenShell/SaltyHacker/blob/master/nlp.py
Vader documentation: https://pypi.org/project/vaderSentiment/
###Code
score.polarity_scores(comment[4])
comment[4]
score.polarity_scores(comment[10])
comment[10]
help(vv)
cleaned = pd.read_csv("https://raw.githubusercontent.com/buildweek-saltiest-hacker/data-engineering-api/master/hacker-comments.csv")
cleaned.head()
text = cleaned['hacker_comment']
sample_text = text[:5]
sample_text
sample_list = []
for i in sample_text:
    a = score.polarity_scores(i)
    b = a['compound']
    c = round(b*10, 2)
    sample_list.append(c)
sample_list
###Output
_____no_output_____
###Markdown
Creating the ranking for each comment.
###Code
# creating a new dataframe that just has the information needed
text = cleaned['hacker_comment']
name = cleaned['hacker_name']
salty_hackers = pd.DataFrame({
'Name':name,
'Comment': text
})
salty_hackers.head()
comment = salty_hackers['Comment']
ranking = []
for i in comment:
    scores = score.polarity_scores(i)
    final_score = scores['compound']
    rounded_score = round(final_score*10, 2)
    ranking.append(rounded_score)
salty_hackers['comment_ranking'] = ranking
salty_hackers.head()
sample_data = salty_hackers.iloc[:10]
sample_data
salty_hackers['comment_ranking'].describe()
average = salty_hackers.groupby(by='Name').mean()
average
average[:10]
average['comment_ranking']
average_dict = average['comment_ranking'].to_dict()
average_dict
all_users = average_dict.keys()
all_users
user_list = list(all_users)
sample_users = user_list[:10]
sample_users
for user in sample_users:
    sample_rank = average_dict[user]
    print(sample_rank)
users = salty_hackers['Name']
user_ranking = []
for user in users:
    user_rank = average_dict[user]
    round_user_rank = round(user_rank, 2)
    user_ranking.append(round_user_rank)
user_ranking[:10]
salty_hackers['user_ranking'] = user_ranking
salty_hackers.head()
salty_hackers['user_ranking'].describe()
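# Alternative sketch (added by the editor, not part of the original notebook):
# the per-user average can be attached in one vectorized step, without building
# average_dict and looping over every row.
salty_hackers['user_ranking_alt'] = (
    salty_hackers.groupby('Name')['comment_ranking'].transform('mean').round(2)
)
salty_hackers[['user_ranking', 'user_ranking_alt']].head()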
###Output
_____no_output_____
###Markdown
Exporting Final Data Set
###Code
compression_opts = dict(method='zip',archive_name='salty_hackers.csv')
salty_hackers.to_csv('salty_hackers.zip', index=False, compression=compression_opts)
###Output
_____no_output_____ |
20200603_hss_section_meeting/introduction_gpu_programming.ipynb | ###Markdown
**Introduction To GPU Programming**
Martin Schwinzerl, Riccardo de Maria
HSS Section Meeting, CERN, June 3rd, 2020

**Goals**
* Overview about the GPU computing landscape (Concepts, Hardware)
* Introduce the (currently) two most established frameworks
* Provide simple examples to get started
* Outline the typical workflow within a GPU enhanced program
* Performance analysis & constraints
* Motivate design decisions & implementation of SixTrackLib

Note: This is an interactive Jupyter Notebook - all presented examples are designed to work and allow you to experiment with them. Notebook available from:

**GPU Hardware Categories**
* Low-End Gaming: ≈ 100 EUR
  * Still equal or better FP performance than a typical CPU in SP and DP, but DP performance usually much poorer than SP!
  * AMD RX 560 TI: 4 GByte, 1024 Cores, SP Peak Performance 2406 GFLOP/s, DP Peak Performance 163 GFLOP/s ≈ 1/16 SP
  * NVidia GTX 1050 Ti: 4 GByte, 768 Cores, SP Peak Performance 1981 GFLOP/s, DP Peak Performance 62 GFLOP/s = 1/32 SP
* High-End Gaming: ≤ 1000 EUR
  * Substantial performance in SP, DP performance varies!
  * NVidia GeForce RTX 2080: 8 GByte, 3072 Cores, SP Peak Performance 8920 GFLOP/s, DP Peak Performance 279 GFLOP/s = 1/32 SP
  * AMD RX Vega 64: 8 GByte, 3840 Cores, SP Peak Performance 11518 GFLOP/s, DP Peak Performance 720 GFLOP/s = 1/16 SP
* Hybrid HPC: ≈ 3 kEUR - 8 kEUR
  * Substantial performance in SP, DP performance ≈ 1/2 SP
  * NVidia Titan V: 12 GBytes, 3072 Cores, hybrid card with video outputs available, SP Peak Performance 12288 GFLOP/s, DP Peak Performance 6144 GFLOP/s = 1/2 SP
* Server HPC: ≈ 6 kEUR - 8 kEUR
  * Similar performance characteristics as with hybrid HPC, but no video outputs, i.e. "Accelerator Card"
  * AMD Radeon Instinct MI50: 16 GBytes, 3840 Cores, SP Peak Performance 13400 GFLOP/s, DP Peak Performance 6700 GFLOP/s = 1/2 SP
* AI Server HPC: ≈ 6 kEUR - 10 kEUR
  * Substantial Integer, SP and Half-Precision performance
  * TOPS ... Trillion operations per second of dedicated neural network computes
  * Task-specific hybrid systems with FPGA and/or CPU enhancements
  * No classical GPU, no video outputs ("Accelerator Card")

SP ... IEEE754 32Bit single precision floating point numbers
DP ... IEEE754 64Bit double precision floating point numbers

**CPUs versus GPUs**

|                         | CPU                 | GPU                |
| ----------------------- | ------------------- | ------------------ |
| Cores / Compute Units   | 2 - 64 (128 HT/SMT) | 16 - 80            |
| Arithmetic Units / Core | 2-8                 | 64                 |
| Peak Performance SP     | 24-3200 GFLOP/s     | 1000-19500 GFLOP/s |
| Peak Performance DP     | 12-1600 GFLOP/s     | 50-9700 GFLOP/s    |

Compared to CPUs, GPUs have:
* More arithmetic units (in particular SP)
* Less logic for control flow
* Fewer registers per arithmetic unit
* Less memory, but larger memory bandwidth

For CPUs: DP performance is roughly 1/2 of SP performance (YMMV).

**Introducing Parallelism**
* Task Parallelism: perform multiple tasks on the same dataset. Example: check if a number is a prime number by checking divisibility in parallel.
* Data Parallelism: perform the same task on multiple datasets. Example: vector addition.

**Example: Vector Addition on the GPU using CuPy**
Cf. https://github.com/cupy/cupy and https://cupy.chainer.org/ for reference
###Code
# If you have a NVidia GPU, one of the simplest ways to offload calculations to the GPU is to use CuPy
import cupy
import numpy as np
N = 10000000 # 1.0e7 elements
x_host = np.random.rand( N )
y_host = np.random.rand( N )
# Create vectors of random numbers. CuPy handles all the required steps behind the scenes
x = cupy.asarray( x_host )
y = cupy.asarray( y_host )
# x and y are cupy entities that "live" on the GPU while
# x_host and y_host "live" in regular memory
# The same applies to the result of the calculation z:
z = x + y # Vector addition is actually performed on the GPU
# in order to make use of the result outside of the GPU,
# we have to convert z into a regular numpy array
z_host = cupy.asnumpy( z )
# Compare result to calculation on CPU
print( f"calculation on host and device yield same result: {np.allclose( z_host, x_host + y_host, rtol=0.0, atol=1e-16)}" )
del x, y, z, x_host, y_host, z_host
###Output
calculation on host and device yield same result: True
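Since the peak GFLOP/s figures quoted above only tell part of the story, it can be instructive to time the device computation yourself. A minimal, hypothetical timing sketch (not part of the original notebook) — note that GPU work is launched asynchronously, so an explicit synchronization is needed before stopping the clock:

```python
import time
import cupy

x = cupy.random.rand(10_000_000)
y = cupy.random.rand(10_000_000)

start = time.perf_counter()
z = x + y                          # the kernel launch returns immediately
cupy.cuda.Device().synchronize()   # wait until the GPU has actually finished
print(f"GPU add: {time.perf_counter() - start:.4f} s")

x_host, y_host = cupy.asnumpy(x), cupy.asnumpy(y)
start = time.perf_counter()
z_host = x_host + y_host
print(f"CPU add: {time.perf_counter() - start:.4f} s")
```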
###Markdown
Overview: Frameworks, Libraries, Toolboxes for GPU Programming High-Level Libraries and Applications CuPy: numpy-like python module for NVIDIA GPUs Tensorflow: mostly support for NVIDIA GPUs, some opportunities to use other cards via SYCL Matlab (currently only NVIDIA GPUs are supported) Julia Mathematica .... HPC frameworks & libraries OpenMP: ≥ version 4.5, allows offloading to GPU targets (for AMD, a recent gcc or clang is required) OpenACC: similar to OpenMP. Traditionally NVIDIA focused but recently got capabilities to offload to AMD Intel OneAPI ArrayFire: optimized functions and data-structures to exploit parallelism on both GPUs and CPUs Overview: Frameworks, Libraries, Toolboxes for GPU Programming Low-Level Programming Frameworks ROCm: collection of libraries and tools to target AMD GPUs, provides an OpenCL backend HIP: abstract language to write GPU programs by AMD (open-source), allows "automated" translation of CUDA programs into HIP SyCL: "modernized" abstraction on top of OpenCL, uses SPIR clang + SPIR-V: compile code fragments to an intermediate format consumable by modern OpenCL implementations clang + PTX: compile code fragments to an intermediate format consumable by CUDA (NVRTC) .... CUDA C99/C++1x based framework for NVIDIA GPUs (some parts are open-source, drivers etc. are proprietary) OpenCL: Vendor neutral computing language, allows targeting CPUs, GPUs, FPGAs, Signal Processors Code-Transformation tools and intermediate code representations play an increasingly central role PTX: Parallel Thread Execution language, used by NVIDIA for CUDA SPIR, SPIR-V: Standard Portable Intermediate Representation, cross-platform open standard maintained by the Khronos Group; Central in newer graphic standards and newer iterations of OpenCL Comparison: OpenCL versus CUDA OpenCL Hardware Neutral: GPUs, CPUs, FPGAs, Signal Processors Common software stack standardized by Khronos Group + vendor specific implementations Programs should work across implementations → "Programming to the least common denominator" OpenCL 1.2: Still most commonly used, C99 kernel language, limited set of features, but works virtually everywhere (incl. NVIDIA GPUs) OpenCL 2.x: some support from AMD and Intel OpenCL 3.0: recently specified, attempts to move forward by making requirements more modular CUDA Hardware: Only NVIDIA GPUs, no CPU / emulator available Software implementation only available from NVIDIA, includes libraries, debuggers, profilers, memory checkers, etc. Software and hardware tightly integrated (versioning, "compute capabilities") Version 10.x allowing both C99 and C++11/14 kernel language Primarily: Run-time compilation of the kernels Kernel code has to be available at run time, compile for selected device at run-time Single-file compilation similar to the default in CUDA available (SyCL) Primarily: Single-file compilation.
The system compiler is "replaced" by the CUDA-provided nvcc compiler This requires binary compatibility between system compiler, system libraries, graphic stacks and drivers Run-time compilation approach similar to the default with OpenCL is available (NVRTC) CUDA and OpenCL: Different But Similar - Grid Threads are organized in two layers resembling a 3D grid Simpler use-cases only requiring 1D or 2D topologies are mapped on the 3D structure CUDA and OpenCL: Different But Similar - Memory Model Hierarchical memory model with different "regions" Visibility allows data exchange across individual threads / work-items but may require synchronization On CUDA, closely linked to Hardware implementation On OpenCL, guarantees about different memory regions are more difficult to give Structure Of A Simple GPU Program Example: Vector Addition on the GPU using PyCuda Cf. https://github.com/inducer/pycuda and https://documen.tician.de/pycuda/ for reference
###Code
# Again, a NVIDIA GPU is required for this example to work!
import pycuda.autoinit
import pycuda.driver as cuda
import numpy as np
from pycuda.compiler import SourceModule
# First, we prepare the program that should be applied to our data, i.e. the "Kernel"
vec_add_program = SourceModule(
"""
__global__ void sum_kernel( double* __restrict__ z,
double const* __restrict__ x,
double const* __restrict__ y, int const N )
{
const int threads_per_block = blockDim.x; /* Assuming 1D grid */
const int ii = threadIdx.x + blockIdx.x * threads_per_block; /* Assuming 1D grid */
if( ii < N )
{
z[ ii ] = x[ ii ] + y[ ii ];
}
}
""")
# Note: even though we are using CUDA, we are able to specify the kernel at run-time ->
# PyCuda uses the NVRTC implementation!
vec_add_kernel = vec_add_program.get_function( "sum_kernel" )
# then, we prepare the structures on the host side, allocate the
# required resources on the device side and move things
N = 10000000 # Again, 10^7 elements to the vector
x_host = np.random.randn( N ) # prepare the elements on the host
y_host = np.random.randn( N )
# allocate the structures on the device
x = cuda.mem_alloc( x_host.nbytes )
y = cuda.mem_alloc( y_host.nbytes )
z = cuda.mem_alloc( max( x_host.nbytes, y_host.nbytes ) )
#transfer the memory from the host to the device; htod ... host to device
cuda.memcpy_htod( x, x_host )
cuda.memcpy_htod( y, y_host )
# find the grid dimensions:
threads_per_block = 128 # This is a pessimistic but educated guess :-)
num_blocks = N // threads_per_block
if N % threads_per_block != 0:
num_blocks += 1
# run the kernel
N_arg = np.int32(N) # pycuda enforces strict type checking on scalar kernel arguments
vec_add_kernel( z, x, y, N_arg, block=(threads_per_block, 1, 1), grid=(num_blocks,1))
# copy the result back to the host; dtoh ... device to host
z_host = np.empty_like( x_host )
cuda.memcpy_dtoh( z_host, z )
# Compare result to calculation on CPU
print( f"calculation on host and device yield same result: {np.allclose( z_host, x_host + y_host, rtol=0.0, atol=1e-16)}" )
del x, y, z, x_host, y_host, z_host
###Output
calculation on host and device yield same result: True
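As a side note, PyCuda also ships a higher-level `gpuarray` interface that hides the explicit `mem_alloc`/`memcpy` steps shown above, similar in spirit to CuPy. A short sketch (assuming the same NVIDIA setup as in the cell above):

```python
import numpy as np
import pycuda.autoinit              # initializes a CUDA context on import
import pycuda.gpuarray as gpuarray

x_host = np.random.randn(1_000_000)
y_host = np.random.randn(1_000_000)

x_gpu = gpuarray.to_gpu(x_host)     # host -> device copy
y_gpu = gpuarray.to_gpu(y_host)
z_gpu = x_gpu + y_gpu               # element-wise kernel generated by PyCuda
print(np.allclose(z_gpu.get(), x_host + y_host))   # .get() copies the result back
```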
###Markdown
Example: Vector Addition on the GPU using PyOpenCL Cf. https://github.com/inducer/pyopencl and https://documen.tician.de/pyopencl/ for reference
###Code
import pyopencl as cl
import numpy as np
# set up the OpenCL environment - it's a bit more elaborate than with PyCuda:
# Shortcut: ctx = cl.create_some_context() -> create a context with the first available device
platforms = cl.get_platforms()
for p in platforms:
print( p )
platform = platforms[ 0 ] # Let's select the first one
# get a list of all available devices for a platform
devices = platform.get_devices()
for d in devices:
print( d )
# Again, let's select the first device
device = devices[ 0 ]
# create a context and a queue
ctx = cl.Context( [device,] )
queue = cl.CommandQueue( ctx )
# prepare the program containing the kernel:
vec_add_program = cl.Program( ctx, """
__kernel void sum_kernel( __global double* restrict z,
__global double const* restrict x,
__global double const* restrict y, int const N )
{
int const ii = ( int )get_global_id( 0 ); /* Assume 1D Grid, 0 .. 0th element of ndrange */
if( ii < N )
{
z[ ii ] = x[ ii ] + y[ ii ];
}
}
""" )
# Compile the program containing the kernel
vec_add_program.build()
# get kernel function from vec_add_program
vec_add_kernel = vec_add_program.sum_kernel
# then, we prepare the structures on the host side, allocate the
# required resources on the device side and move things
N = 10000000 # Again, 10^7 elements to the vector
# create the structures on the host
x_host = np.random.rand( N )
y_host = np.random.rand( N )
# allocate the structures on the device
x = cl.Buffer( ctx, cl.mem_flags.READ_ONLY, x_host.nbytes )
y = cl.Buffer( ctx, cl.mem_flags.READ_ONLY, y_host.nbytes )
z = cl.Buffer( ctx, cl.mem_flags.WRITE_ONLY, max( x_host.nbytes, y_host.nbytes ) )
#transfer the memory from the host to the device
cl.enqueue_copy( queue, x, x_host )
cl.enqueue_copy( queue, y, y_host )
# find the grid dimensions:
num_work_items = x_host.shape
workgroup_size = None # let the OpenCL impl find the optimal workgroup size
# run the kernel
Narg = np.int32(N) # like with PyCuda, scalar kernel arguments need an explicit numpy dtype
vec_add_kernel( queue, num_work_items, workgroup_size, z, x, y, Narg )
# copy the result back to the host
z_host = np.empty_like( x_host )
cl.enqueue_copy( queue, z_host, z )
# Compare result to calculation on CPU
print( f"calculation on host and device yield same result: {np.allclose( z_host, x_host + y_host, rtol=0.0, atol=1e-16)}" )
del x, y, z, x_host, y_host, z_host
###Output
calculation on host and device yield same result: True
###Markdown
Performance Analysis: Scaling Program with a total run-time $T$ Assumption: problem size is constant (e.g. add vectors of size $N$, track $N$ particles, etc.) Question: how much can we speed up the execution of the problem if we parallelise it, i.e. the speed-up $\eta_s( n_p )$ $T = t_p + t_s$ $t_p$: fraction of run-time that can be run in "parallel" on $n_p$ "processors" $t_s$: fraction of run-time that is sequential Amdahl's law: $$\eta_s( n_p ) = \frac{T}{t_s + \frac{t_p}{n_p}}$$ Performance Analysis: Scaling
###Code
from helpers import plot_amdahl_scaling
plot_amdahl_scaling( 1.0, 128.0 )
###Output
_____no_output_____
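The `helpers` module used above is not included with this notebook. A minimal sketch of what an Amdahl-style scaling plot could look like (assuming the helper takes a serial fraction and a maximum processor count — an assumption, since its signature is not shown):

```python
import numpy as np
import matplotlib.pyplot as plt

def amdahl_speedup(serial_fraction, n_p):
    # eta_s(n_p) = T / (t_s + t_p / n_p), with T = 1 and t_s = serial_fraction
    t_s = serial_fraction
    t_p = 1.0 - serial_fraction
    return 1.0 / (t_s + t_p / n_p)

n_p = np.arange(1, 129)
for f_s in (0.01, 0.05, 0.10):
    plt.plot(n_p, amdahl_speedup(f_s, n_p), label=f"serial fraction {f_s:.0%}")
plt.xlabel("number of processors $n_p$")
plt.ylabel("speedup $\\eta_s(n_p)$")
plt.legend()
plt.show()
```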
###Markdown
Performance Analysis: Conclusions from Amdahl's Law For a given problem with finite $t_s$, increases in performance are diminishing with rising $n_p$ Controlling and limiting $t_s$ is crucial to achieve good parallel performance and speedup We assume that $t_s = const.$ for a given problem size. In practice, $t_s = f(n_p)$ Even with $t_s$ very small, scaling to $10^3$ or even $10^4$ parallel processes (as in GPUs) under the assumptions of Amdahl is hard Scaled Speedup, Gustafson-Barsis Law Amdahl's Law is a bit pessimistic $\longleftarrow$ fixed problem size regardless of $n_p$ What if we can grow the problem size together with a rising number $n_p$? Scaled Speedup: $$\eta(n_p) = f_s + n_p \cdot f_p$$ $f_s = t_s / T$, fraction of the runtime that is serial $f_p = t_p / T$, fraction of the runtime that can be run in parallel on $n_p$ processors. It holds that $f_s + f_p = 1$
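Again, since `helpers` is not available here, a rough stand-in for the scaled-speedup computation (the plotting helper presumably wraps something like this):

```python
def gustafson_speedup(f_s, n_p):
    # eta(n_p) = f_s + n_p * f_p, with f_p = 1 - f_s
    return f_s + n_p * (1.0 - f_s)

for n_p in (2, 16, 128):
    print(n_p, gustafson_speedup(0.05, n_p))
```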
###Code
from helpers import plot_gustafson_scaling
plot_gustafson_scaling( 1.0, 128.0 )
###Output
_____no_output_____ |
real_estate_data_transformation.ipynb | ###Markdown
Real Estate Data Transformation
###Code
import json
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Get ids properties
###Code
import aiohttp
import asyncio
import nest_asyncio
nest_asyncio.apply()
async def get(url):
async with aiohttp.ClientSession() as session:
async with session.get(url) as response:
data = json.loads(await response.text())
return data['data']
loop = asyncio.get_event_loop()
result = loop.run_until_complete(get('https://api.arrendamientosnutibara.com/promotion/all-promotions'))
properties = [x for x in result if x['status'] == 'PROMOCION' and len(x['property']['images']) >= 3]
df = pd.DataFrame(properties)
df.drop(columns = ['sellValue', 'rentValue', 'status', 'situation', 'keys', 'property'], inplace = True)
ids = df.to_dict('records')
ids_list = []
for data in ids:
ids_list.append(data['id'])
len(ids_list)
###Output
_____no_output_____
###Markdown
Get data per property
###Code
import asyncio
from aiohttp import ClientSession
async def fetch(url, session):
async with session.get(url) as response:
response = json.loads(await response.text())
return response['data']
properties_list = []
async def run(ids_list):
url = "https://api.arrendamientosnutibara.com/promotion/{}"
properties = []
async with ClientSession() as session:
for id in ids_list:
property = asyncio.ensure_future(fetch(url.format(id), session))
properties.append(property)
responses = await asyncio.gather(*properties)
for property in responses:
properties_list.append(property)
loop = asyncio.get_event_loop()
future = asyncio.ensure_future(run(ids_list))
loop.run_until_complete(future)
len(properties_list)
###Output
_____no_output_____
###Markdown
Data Cleaning
###Code
df_promotion = pd.DataFrame(properties_list)
df_promotion.drop(columns=['sellValue', 'status', 'keys'], inplace = True)
df_promotion.columns = ['promotion_id', 'promotion_rent', 'property']
promotion_detail = df_promotion['property']
df_promotion_detail = pd.DataFrame(list(promotion_detail))
df_promotion_detail = df_promotion_detail.rename(columns={
"id": "property_id",
})
df_promotion.drop(columns = ['property'], inplace = True)
###Output
_____no_output_____
###Markdown
Join DataFrames
###Code
df = df_promotion.join(df_promotion_detail)
###Output
_____no_output_____
###Markdown
Replace words in property type
###Code
df['type'] = df['type'].str.replace(' SIMP', '')
###Output
_____no_output_____
###Markdown
Capitalize
###Code
df['type'] = df['type'].str.capitalize()
df['neighborhood'] = df['neighborhood'].str.capitalize()
df['city'] = df['city'].str.capitalize()
###Output
_____no_output_____
###Markdown
Delete columns Appropiate API doesn't need
###Code
df = df.drop(columns = ['sector'])
###Output
_____no_output_____
###Markdown
Strip
###Code
df['type'] = df['type'].str.strip()
def lower_columns(df, cols):
df[cols] = df[cols].apply(lambda x: x.str.lower())
return df
lower_columns(df, ['type', 'neighborhood', 'city'])
def remove_accents(df, cols):
    # temporarily swap "Ñ" for a placeholder so it survives the ASCII round-trip below
    df[cols] = df[cols].apply(lambda x: x.str.replace("Ñ", "%"))
    # strip accents: decompose with NFKD and drop the non-ASCII combining marks
    df[cols] = df[cols].apply(lambda x: x.str.normalize('NFKD').str.encode('ascii', errors='ignore').str.decode('ascii'))
    # restore the "Ñ" characters
    df[cols] = df[cols].apply(lambda x: x.str.replace("%", "Ñ"))
    return df
remove_accents(df, ['city', 'type', 'neighborhood'])
def convert_to_int_columns(df, cols):
df[cols] = df[cols].apply(lambda x: x.astype(int))
return df
convert_to_int_columns(df, ['stratum'])
def convert_to_float_columns(df, cols):
df[cols] = df[cols].apply(lambda x: x.astype(float))
return df
convert_to_float_columns(df, ['stratum'])
def fillna_columns(df, cols, value):
df[cols] = df[cols].apply(lambda x: x.fillna(value))
return df
fillna_columns(df, ['latitude', 'longitude'], 0.0)
###Output
_____no_output_____
###Markdown
Extract images
###Code
data = df.to_dict('records')
from pandas.io.json import json_normalize
df_images = json_normalize(data, record_path='images', meta=['promotion_id', 'property_id'])
df_images.drop(columns = ['id'], inplace = True)
df_images.head()
df_images['url'] = 'https://assets.arrendamientosnutibara.com/spaces/images/' + df_images['property_id'].astype(str) + '/720' + df_images['resource']
df_images = df_images.drop(columns = ['resource'])
df_images
images = df_images.to_dict('records')
sorted_list = sorted(images, key=lambda x: x['main'] == True, reverse=True)
df_images = pd.DataFrame(sorted_list)
df_images
df_images = df_images.groupby('promotion_id')['url'].apply(list).reset_index(name='images')
df = df.drop(columns = ['images'])
df = pd.merge(df, df_images)
df
###Output
_____no_output_____
###Markdown
Extract facilities
###Code
df_facilities = json_normalize(data, record_path='facilities', meta=['promotion_id', 'property_id'])
df_facilities = df_facilities.drop(columns = ['id'])
df_facilities.columns
df_facilities = df_facilities.rename(columns = {
'facility.id': 'facility_id',
'value': 'facility_value',
'facility.name': 'facility_name'
})
df_facilities
###Output
_____no_output_____
###Markdown
Facilities: 112 - Área (area), 105 - Baños (bathrooms), 82 - Habitaciones (rooms) Extract rooms
###Code
df_rooms = df_facilities.loc[df_facilities['facility_id'] == 82]
df_rooms
df_rooms = df_rooms.drop(columns = ['property_id', 'facility_id', 'facility_name'])
df_rooms['facility_value'] = df_rooms['facility_value'].astype(int)
df_rooms.columns
df_rooms = df_rooms.rename(columns={
"facility_value": "rooms",
})
df_rooms
df['rooms'] = df.promotion_id.map(df_rooms.set_index('promotion_id')['rooms'])
df['rooms'] = df['rooms'].fillna(0)
df['rooms'] = df['rooms'].astype(int)
###Output
_____no_output_____
###Markdown
Extract area
###Code
is_area = df_facilities['facility_id'] == 112
df_area = df_facilities[is_area]
df_area = df_area.drop(columns = ['property_id', 'facility_id', 'facility_name'])
df_area['facility_value'] = df_area['facility_value'].str.replace(',', '.')
df_area['facility_value'] = df_area['facility_value'].astype(float)
df_area = df_area.rename(columns={
"facility_value": "area",
})
df_area
df['area'] = df.promotion_id.map(df_area.set_index('promotion_id')['area'])
df['area'] = df['area'].fillna('')
###Output
_____no_output_____
###Markdown
Extract bathrooms
###Code
is_bath = df_facilities['facility_id'] == 105
df_bath = df_facilities[is_bath]
df_bath = df_bath.drop(columns = ['property_id', 'facility_id', 'facility_name'])
df_bath['facility_value'] = df_bath['facility_value'].astype(int)
df_bath = df_bath.rename(columns={
"facility_value": "bathrooms",
})
df_bath
df['bathrooms'] = df.promotion_id.map(df_bath.set_index('promotion_id')['bathrooms'])
df['bathrooms'] = df['bathrooms'].fillna(0)
df['bathrooms'] = df['bathrooms'].astype(int)
df_facilities
df_facilities = df_facilities[df_facilities.facility_id != 82]
df_facilities = df_facilities[df_facilities.facility_id != 105]
df_facilities = df_facilities[df_facilities.facility_id != 112]
###Output
_____no_output_____
###Markdown
Extract other facilities
###Code
df_facilities = df_facilities.groupby('promotion_id')['facility_name'].apply(list).reset_index(name='facilities')
df = df.drop(columns = ['facilities'])
df = pd.merge(df, df_facilities)
###Output
_____no_output_____
###Markdown
Convert to lower
###Code
df['city'] = df['city'].str.lower()
df['type'] = df['type'].str.lower()
df['neighborhood'] = df['neighborhood'].str.lower()
###Output
_____no_output_____
###Markdown
Remove accents
###Code
cols = ['city', 'type', 'neighborhood']
df[cols] = df[cols].apply(lambda x: x.str.replace("Ñ", "%"))
df[cols] = df[cols].apply(lambda x: x.str.normalize('NFKD').str.encode('ascii', errors='ignore').str.decode('ascii'))
df[cols] = df[cols].apply(lambda x: x.str.replace("%", "Ñ"))
###Output
_____no_output_____
###Markdown
Get type properties
###Code
def get_type_property(case):
sw = {
"apartaestudio": 38,
"apartamento": 2,
"bodega": 27,
"casa": 33,
"casa local": 40,
"consultorio": 29,
"edificio": 30,
"finca": 39, # Finca en Parcelacion
"finca productiva": 31,
"finca recreativa": 35,
"hotel/apart hotel": 32,
"local": 26,
"lote comercial": 28,
"lote": 37, # Lote en Parcelacion
"lote independiente": 34,
"lote industrial": 36,
"oficina": 25
}
return sw.get(case, 'Invalid option')
df['type'] = df['type'].apply(lambda x: get_type_property(x))
df['type'] = df['type'].astype(int)
df
###Output
_____no_output_____
###Markdown
Get cities
###Code
# NOTE: `headers` is used by the ClientSession calls below but is never defined in the
# cells shown here; assuming an empty dict so the requests run (the original notebook
# presumably supplied real API headers, e.g. an auth token).
headers = {}

async def get_cities(url):
async with aiohttp.ClientSession(headers=headers) as session:
async with session.get(url) as response:
data = json.loads(await response.text())
cities = data['response']
return cities
loop = asyncio.get_event_loop()
cities = loop.run_until_complete(get_cities('http://appropiate.com/api/ciudad'))
df_cities = pd.DataFrame(list(cities))
cols = ['nombre']
df_cities[cols] = df_cities[cols].apply(lambda x: x.str.replace("Ñ", "%"))
df_cities[cols] = df_cities[cols].apply(lambda x: x.str.normalize('NFKD').str.encode('ascii', errors='ignore').str.decode('ascii'))
df_cities[cols] = df_cities[cols].apply(lambda x: x.str.replace("%", "Ñ"))
df_cities['nombre'] = df_cities['nombre'].str.lower()
list_cities = df_cities.to_dict('records')
def get_city(case, cities):
for city in cities:
if(case in city['nombre']):
return int(city['id'])
df['city'] = df['city'].apply(lambda x: get_city(x, list_cities))
###Output
_____no_output_____
###Markdown
Get neighborhoods
###Code
async def get_neighborhoods(url):
async with aiohttp.ClientSession(headers=headers) as session:
async with session.get(url) as response:
data = json.loads(await response.text())
neighborhoods = data['response']
return neighborhoods
loop = asyncio.get_event_loop()
neighborhoods = loop.run_until_complete(get_neighborhoods('http://appropiate.com/api/barrio'))
df_neighborhoods = pd.DataFrame(list(neighborhoods))
cols = ['nombre']
df_neighborhoods[cols] = df_neighborhoods[cols].apply(lambda x: x.str.replace("Ñ", "%"))
df_neighborhoods[cols] = df_neighborhoods[cols].apply(lambda x: x.str.normalize('NFKD').str.encode('ascii', errors='ignore').str.decode('ascii'))
df_neighborhoods[cols] = df_neighborhoods[cols].apply(lambda x: x.str.replace("%", "Ñ"))
df_neighborhoods['nombre'] = df_neighborhoods['nombre'].str.lower()
list_neighborhoods = df_neighborhoods.to_dict('records')
def get_neighborhood(case, neighborhoods):
for neighborhood in neighborhoods:
if(case in neighborhood['nombre']):
return int(neighborhood['id'])
df['neighborhood'] = df['neighborhood'].apply(lambda x: get_neighborhood(x, list_neighborhoods))
df['neighborhood'] = df['neighborhood'].fillna('')
###Output
_____no_output_____
###Markdown
Get other facilities
###Code
async def get_facilities(url):
async with aiohttp.ClientSession(headers=headers) as session:
async with session.get(url) as response:
data = json.loads(await response.text())
facilities = data['response']
return facilities
loop = asyncio.get_event_loop()
facilities = loop.run_until_complete(get_facilities('http://appropiate.com/api/caracteristica'))
df_facilities = pd.DataFrame(list(facilities))
cols = ['nombre']
df_facilities[cols] = df_facilities[cols].apply(lambda x: x.str.replace("Ñ", "%"))
df_facilities[cols] = df_facilities[cols].apply(lambda x: x.str.normalize('NFKD').str.encode('ascii', errors='ignore').str.decode('ascii'))
df_facilities[cols] = df_facilities[cols].apply(lambda x: x.str.replace("%", "Ñ"))
df_facilities['nombre'] = df_facilities['nombre'].str.lower()
list_facilities = df_facilities.to_dict('records')
def map_other_facilities():
other_facilities_dict = {
"Alcoba Servicio": 83,
"Biblioteca": 481,
"Mezanine": 238,
"Closet": 249,
"Comedor": 443,
"Salon Comedor": 477,
"Barra Americana": 157,
"CuartoUtil": 139,
"Hall": 516,
"Balcon": 133,
"Terraza": 270,
"Patios": 163,
"Zona Ropa": 169,
"Calentador": 200,
"Unidad Cerrada": 319,
"Juegos Infantiles": 390,
"Ascensor": 355,
"Zona Verde": 320,
"Porteria": 327,
"Piscina": 325,
"Salon Social": 318,
"Citofono": 201,
"Aire Acondicionado": 180,
"Shut Basura": 451,
"Red Gas": 205,
"Cocina Integral": 133,
"Parqueadero Cubierto": 343,
"Parqueadero Descubierto": 343,
"Adm. 18. Libre ADM Of. Paga": 456,
"Adm. 56. Libre ADM Pt. Paga": 456,
"Adm. 38. Adm en Of. Of. Paga": 456,
"Adm. 101 Adm en Of. Pt. Paga": 456,
"Persiana": 243,
"Cocineta": 258,
"Gimnasio": 321,
"Sauna": 323,
"Turco": 324,
"Cancha Sintetica": 273,
"Cancha Polideportiva": 209,
"Parqueadero Moto": 343,
"Sotano": 248,
"Circuito cerrado tv": 257,
"Campestre": 227,
"Con Local": 426,
"jacuzzi": 300
}
return other_facilities_dict
###Output
_____no_output_____
###Markdown
Map other facilities
###Code
data_dict = map_other_facilities()
df['facilities'] = df['facilities'].apply(lambda x: [data_dict.get(v) for v in x if v in data_dict])
df[df['promotion_rent'] == 0]
df[df['area'] == 0]
df[df['latitude'] == 0]
df[df['longitude'] == 0]
df[df['promotion_rent'] == 0]
###Output
_____no_output_____ |
CS109/hw/hw0/HW0.ipynb | ###Markdown
Homework 0 Due Tuesday, September 9, 2014 (but no submission is required)---Welcome to CS109 / STAT121 / AC209 / E-109 (http://cs109.github.io/2014/). In this class, we will be using a variety of tools that will require some initial configuration. To ensure everything goes smoothly moving forward, we will setup the majority of those tools in this homework. While some of this will likely be dull, doing it now will enable us to do more exciting work in the weeks that follow without getting bogged down in further software configuration. This homework will not be graded, however it is essential that you complete it timely since it will enable us to set up your accounts. You do not have to hand anything in, with the exception of filling out the online survey. Class Survey, Piazza, and Introduction**Class Survey**Please complete the mandatory course survey located [here](https://docs.google.com/forms/d/1uAxk4am1HZFh15Y8zdGpBm5hGTTmX3IGkBkD3foTbv0/viewform?usp=send_form). It should only take a few moments of your time. Once you fill in the survey we will sign you up to the course forum on Piazza and the dropbox system that you will use to hand in the homework. It is imperative that you fill out the survey on time as we use the provided information to sign you up for these services. **Piazza**Go to [Piazza](https://piazza.com/harvard/fall2014/cs109) and sign up for the class using your Harvard e-mail address. You will use Piazza as a forum for discussion, to find team members, to arrange appointments, and to ask questions. Piazza should be your primary form of communication with the staff. Use the staff e-mail ([email protected]) only for individual requests, e.g., to excuse yourself from a mandatory guest lecture. All homeworks, and project descriptions will be announced on Piazza first. **Introduction**Once you are signed up to the Piazza course forum, introduce yourself to your classmates and course staff with a follow-up post in the introduction thread. Include your name/nickname, your affiliation, why you are taking this course, and tell us something interesting about yourself (e.g., an industry job, an unusual hobby, past travels, or a cool project you did, etc.). Also tell us whether you have experience with data science. Programming expectationsAll the assignments and labs for this class will use Python and, for the most part, the browser-based IPython notebook format you are currently viewing. Knowledge of Python is not a prerequisite for this course, **provided you are comfortable learning on your own as needed**. While we have strived to make the programming component of this course straightforward, we will not devote much time to teaching prorgramming or Python syntax. Basically, you should feel comfortable with:* How to look up Python syntax on Google and StackOverflow.* Basic programming concepts like functions, loops, arrays, dictionaries, strings, and if statements.* How to learn new libraries by reading documentation.* Asking questions on StackOverflow or Piazza.There are many online tutorials to introduce you to scientific python programming. [Here is one](https://github.com/jrjohansson/scientific-python-lectures) that is very nice. Lectures 1-4 are most relevant to this class. Getting PythonYou will be using Python throughout the course, including many popular 3rd party Python libraries for scientific computing. [Anaconda](http://continuum.io/downloads) is an easy-to-install bundle of Python and most of these libraries. 
We recommend that you use Anaconda for this course.Please visit [this page](https://github.com/cs109/content/wiki/Installing-Python) and follow the instructions to set up Python. Hello, PythonThe IPython notebook is an application to build interactive computational notebooks. You'll be using them to complete labs and homework. Once you've set up Python, please download this HW0 ipython notebook and open it with IPython by typing```ipython notebook ```For the rest of the assignment, use your local copy of this page, running on IPython.Notebooks are composed of many "cells", which can contain text (like this one), or code (like the one below). Double click on the cell below, and evaluate it by clicking the "play" button above, or by hitting shift + enter
###Code
x = [10, 20, 30, 40, 50]
for item in x:
print "Item is ", item
###Output
Item is 10
Item is 20
Item is 30
Item is 40
Item is 50
###Markdown
Python LibrariesWe will be using a several different libraries throughout this course. If you've successfully completed the [installation instructions](https://github.com/cs109/content/wiki/Installing-Python), all of the following statements should run.
###Code
#IPython is what you are using now to run the notebook
import IPython
print "IPython version: %6.6s (need at least 1.0)" % IPython.__version__
# Numpy is a library for working with Arrays
import numpy as np
print "Numpy version: %6.6s (need at least 1.7.1)" % np.__version__
# SciPy implements many different numerical algorithms
import scipy as sp
print "SciPy version: %6.6s (need at least 0.12.0)" % sp.__version__
# Pandas makes working with data tables easier
import pandas as pd
print "Pandas version: %6.6s (need at least 0.11.0)" % pd.__version__
# Module for plotting
import matplotlib
print "Mapltolib version: %6.6s (need at least 1.2.1)" % matplotlib.__version__
# SciKit Learn implements several Machine Learning algorithms
import sklearn
print "Scikit-Learn version: %6.6s (need at least 0.13.1)" % sklearn.__version__
# Requests is a library for getting data from the Web
import requests
print "requests version: %6.6s (need at least 1.2.3)" % requests.__version__
# Networkx is a library for working with networks
import networkx as nx
print "NetworkX version: %6.6s (need at least 1.7)" % nx.__version__
#BeautifulSoup is a library to parse HTML and XML documents
import bs4
print "BeautifulSoup version:%6.6s (need at least 4.0)" % bs4.__version__
#MrJob is a library to run map reduce jobs on Amazon's computers
import mrjob
print "Mr Job version: %6.6s (need at least 0.4)" % mrjob.__version__
#Pattern has lots of tools for working with data from the internet
import pattern
print "Pattern version: %6.6s (need at least 2.6)" % pattern.__version__
#Seaborn is a nice library for visualizations
import seaborn
print "Seaborn version: %6.6s (need at least 0.3.1)" % seaborn.__version__
###Output
IPython version: 5.3.0 (need at least 1.0)
Numpy version: 1.11.3 (need at least 1.7.1)
SciPy version: 0.18.1 (need at least 0.12.0)
Pandas version: 0.19.2 (need at least 0.11.0)
Mapltolib version: 2.0.0 (need at least 1.2.1)
Scikit-Learn version: 0.18.1 (need at least 0.13.1)
requests version: 2.12.4 (need at least 1.2.3)
NetworkX version: 1.11 (need at least 1.7)
BeautifulSoup version: 4.5.3 (need at least 4.0)
Mr Job version: 0.5.8 (need at least 0.4)
Pattern version: 2.6 (need at least 2.6)
Seaborn version: 0.7.1 (need at least 0.3.1)
###Markdown
If any of these libraries are missing or out of date, you will need to [install them](https://github.com/cs109/content/wiki/Installing-Pythoninstalling-additional-libraries) and restart IPython Hello matplotlib The notebook integrates nicely with Matplotlib, the primary plotting package for python. This should embed a figure of a sine wave:
###Code
#this line prepares IPython for working with matplotlib
%matplotlib inline
# this actually imports matplotlib
import matplotlib.pyplot as plt
x = np.linspace(0, 10, 30) #array of 30 points from 0 to 10
y = np.sin(x)
z = y + np.random.normal(size=30) * .2
plt.plot(x, y, 'ro-', label='A sine wave')
plt.plot(x, z, 'b-', label='Noisy sine')
plt.legend(loc = 'lower right')
plt.xlabel("X axis")
plt.ylabel("Y axis")
###Output
_____no_output_____
###Markdown
If that last cell complained about the `%matplotlib` line, you need to update IPython to v1.0, and restart the notebook. See the [installation page](https://github.com/cs109/content/wiki/Installing-Python) Hello NumpyThe Numpy array processing library is the basis of nearly all numerical computing in Python. Here's a 30 second crash course. For more details, consult Chapter 4 of Python for Data Analysis, or the [Numpy User's Guide](http://docs.scipy.org/doc/numpy-dev/user/index.html)
###Code
print "Make a 3 row x 4 column array of random numbers"
x = np.random.random((3, 4))
print x
print
print "Add 1 to every element"
x = x + 1
print x
print
print "Get the element at row 1, column 2"
print x[1, 2]
print
# The colon syntax is called "slicing" the array.
print "Get the first row"
print x[0, :]
print
print "Get every 2nd column of the first row"
print x[0, ::2]
print
###Output
Make a 3 row x 4 column array of random numbers
[[ 0.68729655 0.12365899 0.09136566 0.70376189]
[ 0.48410683 0.55090825 0.13657166 0.563731 ]
[ 0.0929047 0.60827262 0.68790571 0.22153719]]
Add 1 to every element
[[ 1.68729655 1.12365899 1.09136566 1.70376189]
[ 1.48410683 1.55090825 1.13657166 1.563731 ]
[ 1.0929047 1.60827262 1.68790571 1.22153719]]
Get the element at row 1, column 2
1.13657166475
Get the first row
[ 1.68729655 1.12365899 1.09136566 1.70376189]
Get every 2nd column of the first row
[ 1.68729655 1.09136566]
###Markdown
Print the maximum, minimum, and mean of the array. This does **not** require writing a loop. In the code cell below, type `x.m`, to find built-in operations for common array statistics like this
###Code
#your code here
print 'Max value: ', x.max()
print 'Min value: ', x.min()
print 'Mean value: ', x.mean()
###Output
Max value: 1.70376189241
Min value: 1.09136565913
Mean value: 1.41266842165
###Markdown
Call the `x.max` function again, but use the `axis` keyword to print the maximum of each row in x.
###Code
#your code here
print x.max(axis = 1) # maximum of each row
print x.max(axis = 0) # maximum of each column
###Output
[ 1.70376189 1.563731 1.68790571]
[ 1.68729655 1.60827262 1.68790571 1.70376189]
###Markdown
Here's a way to quickly simulate 500 "fair" coin tosses (where the probability of getting Heads is 50%, or 0.5)
###Code
x = np.random.binomial(500, .5)
print "number of heads:", x
###Output
number of heads: 250
###Markdown
Repeat this simulation 500 times, and use the [plt.hist() function](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.hist) to plot a histogram of the number of Heads (1s) in each simulation
###Code
#your code here
#way1
heads = [np.random.binomial(500, .5) for i in range(500)]
#way2
heads = np.random.binomial(500, .5, size = 500)
plt.hist(heads, bins = 10) # plot a histogram of the head counts
###Output
_____no_output_____
###Markdown
The Monty Hall Problem Here's a fun and perhaps surprising statistical riddle, and a good way to get some practice writing python functions. In a gameshow, contestants try to guess which of 3 closed doors contains a cash prize (goats are behind the other two doors). Of course, the odds of choosing the correct door are 1 in 3. As a twist, the host of the show occasionally opens a door after a contestant makes his or her choice. This door is always one of the two the contestant did not pick, and is also always one of the goat doors (note that it is always possible to do this, since there are two goat doors). At this point, the contestant has the option of keeping his or her original choice, or switching to the other unopened door. The question is: is there any benefit to switching doors? The answer surprises many people who haven't heard the question before. We can answer the problem by running simulations in Python. We'll do it in several parts. First, write a function called `simulate_prizedoor`. This function will simulate the location of the prize in many games -- see the detailed specification below:
###Code
"""
Function
--------
simulate_prizedoor
Generate a random array of 0s, 1s, and 2s, representing
hiding a prize between door 0, door 1, and door 2
Parameters
----------
nsim : int
The number of simulations to run
Returns
-------
sims : array
Random array of 0s, 1s, and 2s
Example
-------
>>> print simulate_prizedoor(3)
array([0, 0, 2])
"""
def simulate_prizedoor(nsim):
#compute here
#return answer
return np.random.randint(0, 3, nsim)
print simulate_prizedoor(3)
#your code here
###Output
[1 0 0]
###Markdown
Next, write a function that simulates the contestant's guesses for `nsim` simulations. Call this function `simulate_guess`. The specs:
###Code
"""
Function
--------
simulate_guess
Return any strategy for guessing which door a prize is behind. This
could be a random strategy, one that always guesses 2, whatever.
Parameters
----------
nsim : int
The number of simulations to generate guesses for
Returns
-------
guesses : array
An array of guesses. Each guess is a 0, 1, or 2
Example
-------
>>> print simulate_guess(5)
array([0, 0, 0, 0, 0])
"""
#your code here
def simulate_guess(nsim):
return np.zeros(nsim, dtype = np.int)
print simulate_guess(3)
###Output
[0 0 0]
###Markdown
Next, write a function, `goat_door`, to simulate randomly revealing one of the goat doors that a contestant didn't pick.
###Code
"""
Function
--------
goat_door
Simulate the opening of a "goat door" that doesn't contain the prize,
and is different from the contestants guess
Parameters
----------
prizedoors : array
The door that the prize is behind in each simulation
guesses : array
THe door that the contestant guessed in each simulation
Returns
-------
goats : array
The goat door that is opened for each simulation. Each item is 0, 1, or 2, and is different
from both prizedoors and guesses
Examples
--------
>>> print goat_door(np.array([0, 1, 2]), np.array([1, 1, 1]))
>>> array([2, 2, 0])
"""
#your code here
def goat_door(prizedoors, guesses):
result = np.random.randint(0, 3, prizedoors.size)
while True:
bad = (result == prizedoors) | (result == guesses)
if not bad.any():
return result
result[bad] = np.random.randint(0, 3, bad.sum())
###Output
_____no_output_____
###Markdown
Write a function, `switch_guess`, that represents the strategy of always switching a guess after the goat door is opened.
###Code
"""
Function
--------
switch_guess
The strategy that always switches a guess after the goat door is opened
Parameters
----------
guesses : array
Array of original guesses, for each simulation
goatdoors : array
Array of revealed goat doors for each simulation
Returns
-------
The new door after switching. Should be different from both guesses and goatdoors
Examples
--------
>>> print switch_guess(np.array([0, 1, 2]), np.array([1, 2, 1]))
>>> array([2, 0, 0])
"""
#your code here
def switch_guess(guesses, goatdoors):
result = np.random.randint(0, 3, guesses.size)
while True:
bad = (result == guesses) | (result == goatdoors)
if not bad.any():
return result
result[bad] = np.random.randint(0, 3, bad.sum())
###Output
_____no_output_____
###Markdown
Last function: write a `win_percentage` function that takes an array of `guesses` and `prizedoors`, and returns the percent of correct guesses
###Code
"""
Function
--------
win_percentage
Calculate the percent of times that a simulation of guesses is correct
Parameters
-----------
guesses : array
Guesses for each simulation
prizedoors : array
Location of prize for each simulation
Returns
--------
percentage : number between 0 and 100
The win percentage
Examples
---------
>>> print win_percentage(np.array([0, 1, 2]), np.array([0, 0, 0]))
33.333
"""
#your code here
def win_percentage(prizedoors, guesses):
return 100 * (prizedoors == guesses).mean()
###Output
_____no_output_____
###Markdown
Now, put it together. Simulate 10000 games where contestant keeps his original guess, and 10000 games where the contestant switches his door after a goat door is revealed. Compute the percentage of time the contestant wins under either strategy. Is one strategy better than the other?
###Code
#your code here
nsim = 100000
print "Win percentage when keeping original door"
print win_percentage(simulate_prizedoor(nsim), simulate_guess(nsim))
pd = simulate_prizedoor(nsim)
guess = simulate_guess(nsim)
goats = goat_door(pd, guess)
guess = switch_guess(guess, goats)
print "Win percentage when switching doors"
print win_percentage(pd, guess).mean()
###Output
Win percentage when keeping original door
33.065
Win percentage when switching doors
66.796
|
samples/jupyter/Multiclass_Classification_Annotation.ipynb | ###Markdown
Use input The crudest solution – show user docs one by one and at each step ask for label for a doc
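Note that `docs` and `classes` are used throughout this notebook but are not defined in the cells shown here (and `pandas` is first used before it is imported further down). A minimal, purely hypothetical setup so the examples can run — the label names are taken from the ipyannotate buttons later in the notebook, while the sample sentences are invented placeholders:

```python
import pandas as pd   # used below before the later `import pandas as pd` cell

# hypothetical stand-ins for the missing `docs` / `classes` definitions
docs = [
    "Tigers are the largest living cat species.",
    "Wolves hunt in packs.",
    "Ducks spend most of their time on the water.",
]
classes = ["about_tiger", "about_wolf", "about_duck"]
```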
###Code
from IPython.display import clear_output
import re
def get_label(doc, labels, multi=False):
clear_output()
mapper = {str(i): label for i, label in enumerate(labels)}
legend = "\n".join([f'{index}\t{label}' for index, label in mapper.items()])
if multi:
legend += "\nInput comma-separated list for multiple labels"
user_input = input(f'Select class for "{doc}"\n{legend}\n({"/".join(mapper.keys())})?')
response = user_input.strip()
label = ''
if multi:
keys = re.split(r",\s*", response)
label = [mapper[key] for key in keys]
else:
key = response
label = mapper.get(key)
if label:
return label
# loop if got something wrong
return get_label(doc, labels, multi)
labels = [get_label(doc, classes) for doc in docs]
pd.DataFrame(list(zip(docs, labels)), columns=['docs', 'labels'])
labels = [get_label(doc, classes, multi=True) for doc in docs]
pd.DataFrame(list(zip(docs, labels)), columns=['docs', 'labels'])
###Output
_____no_output_____
###Markdown
Use ipython widgets Something similar to usual spreadsheet software, where we have each row represent the doc and control next to it to select one or multiple classes
###Code
import pandas as pd
import ipywidgets as widgets
import time
from IPython.display import display
from IPython.display import display_html, clear_output
class CheckBoxGroup:
def __init__(self, options):
self.value_mapper = {
label: widgets.Checkbox(
value=False,
description=label,
disabled=False) for label in options
}
self.elements = list(self.value_mapper.values())
@property
def value(self):
return [label for label, element in self.value_mapper.items() if element.value]
def render(self):
return self.elements
class RadioButtonsWrapper:
def __init__(self, options):
self.elements = widgets.RadioButtons(
options=options,
disabled=False
)
@property
def value(self):
return self.elements.value
def render(self):
return [self.elements]
def display_docs(docs, labels, multi=False):
rows = []
value_holders = []
for i, doc in enumerate(docs):
label = widgets.Label(doc)
element = CheckBoxGroup(labels) if multi else RadioButtonsWrapper(labels)
value_holders.append(element)
row = widgets.HBox([label, *element.render()])
row.layout.display = 'flex'
label.layout.flex = '1 0 100px'
for element in element.render():
element.layout.flex = '0 0 100px'
rows.append(row)
table = widgets.VBox(rows)
display(table)
def get_response():
return pd.DataFrame(list(zip(docs, [c.value for c in value_holders])), columns=['docs', 'labels'])
return get_response
get_response = display_docs(docs, classes)
get_response()
get_response = display_docs(docs, classes, multi=True)
get_response()
###Output
_____no_output_____
###Markdown
Using [ipyannotate](https://github.com/natasha/ipyannotate)
###Code
from ipyannotate import annotate
from ipyannotate.buttons import ValueButton, NextButton, BackButton
buttons = [
ValueButton(
icon="🐯",
value="about_tiger",
shortcut="s"
),
ValueButton(
icon="🐺",
value="about_wolf",
shortcut="w"
),
ValueButton(
icon="🦆",
value="about_duck",
shortcut="d"
),
BackButton(),
NextButton()
]
annotation = annotate(docs, buttons=buttons)
annotation
annotation.tasks
###Output
_____no_output_____
###Markdown
> if you need multiple labels per row, pass `multi=True` to `annotate()`. Note that you will need to navigate between samples manually, as previously it was done after label assignment
###Code
buttons = [
ValueButton(
icon="🐯",
value="about_tiger",
shortcut="s"
),
ValueButton(
icon="🐺",
value="about_wolf",
shortcut="w"
),
ValueButton(
icon="🦆",
value="about_duck",
shortcut="d"
),
BackButton(),
NextButton()
]
annotation = annotate(docs, buttons=buttons, multi=True)
annotation
annotation.tasks
###Output
_____no_output_____ |
fake_news/getting_real_about_fake_news_kaggle/getting_real_about_fake_news_kaggle.ipynb | ###Markdown
Getting real about Fake News | KaggleLink: [https://www.kaggle.com/mrisdal/fake-news](https://www.kaggle.com/mrisdal/fake-news)This jupyter notebook covers descriptive analysis of **Getting real about Fake News | Kaggle** dataset. Attributes* **uuid** - unique identifier* **ord_in_thread*** **author** - author of story* **published** - date published* **title** - title of the story* **text** - text of story* **language** - data from webhose.io* **crawled** - date the story was archived* **site_url** - site URL from [BS detector](https://github.com/bs-detector/bs-detector/blob/dev/ext/data/data.json)* **country** - data from webhose.io* **domain_rank** - data from webhose.io* **thread_title*** **spam_score** - data from webhose.io* **main_img_url** - image from story* **replies_count** - number of replies* **participants_count** - number of participants* **likes** - number of Facebook likes* **comments** - number of Facebook comments* **shares** - number of Facebook shares* **type** - type of website (label from [BS detector](https://github.com/bs-detector/bs-detector/blob/dev/ext/data/data.json)) Setup and import libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Read the data
###Code
df = pd.read_csv('data/data.csv')
###Output
_____no_output_____
###Markdown
Analysis Count of records
###Code
len(df)
###Output
_____no_output_____
###Markdown
Data examples
###Code
df.head()
###Output
_____no_output_____
###Markdown
More information about data
###Code
df.info()
df.describe(include='all')
###Output
_____no_output_____
###Markdown
NaN valuesAre there any NaN values in our data?
###Code
df.isnull().values.any()
###Output
_____no_output_____
###Markdown
Let's look at NaN values per each column:
###Code
df.isnull().sum().plot(kind='bar', ylim=(0, len(df)), title='NaN values per column')
###Output
_____no_output_____
###Markdown
Attributes analysis What is the distribution of fake news labels in our data?
###Code
df['type'].value_counts().plot(kind='bar', title='Distribution of labels')
###Output
_____no_output_____ |
5_classification_linear_knn/examples.ipynb | ###Markdown
Logistic regression classes {-1, 1} $$ \hat{y_{i}} = sign (\langle w, x_{i} \rangle) = sign(w^{(0)} + \sum_{j=1}^{n} w^{(j)}x^{(j)}_{i}) \in \{-1, 1\}$$$$ P(y_{i} = 1 | x_{i}) = 1 - P(y_{i} = -1 | x_{i}) $$$$P(y_{i} = 1 | x_{i}) = \frac{1}{1 + e^{-\langle w, x_{i} \rangle}} $$$$P(y_{i} = -1 | x_{i}) = 1 - \frac{1}{1 + e^{-\langle w, x_{i} \rangle}} = \frac{e^{-\langle w, x_{i} \rangle}}{1 + e^{-\langle w, x_{i} \rangle}}$$$$ \frac{P(y_{i} = 1 | x_{i})}{P(y_{i} = -1 | x_{i})} = \frac{P(y_{i} = 1 | x_{i})}{1 - P(y_{i} = 1 | x_{i})} = e^{\langle w, x_{i} \rangle}$$$$ M_{i} = y_{i} \langle w, x_{i} \rangle $$$$ Q(w, X, y) = \sum_{i=1}^{N} [M_{i} < 0] \leq \sum_{i=1}^{N} \log_2 (1 + e^{-M_{i}})$$$$ Loss(w, X, y) = \sum_{i=1}^{N} \ln (1 + e^{-M_{i}}) \to min $$ classes {0, 1} $$ p(x_{i}) = P(y_{i} = 1 | x_{i}) = 1 - P(y_{i} = 0 | x_{i}) $$$$ p(x_{i}) = \frac{1}{1 + e^{-\langle w, x_{i} \rangle}} $$$$ 1 - p(x_{i}) = \frac{e^{-\langle w, x_{i} \rangle}}{1 + e^{-\langle w, x_{i} \rangle}} = \frac{1}{1 + e^{\langle w, x_{i} \rangle}}$$$$ M_{i} = (2y_{i} - 1) \langle w, x_{i} \rangle $$$$ \ln (1 + e^{-M_{i}}) = [y_{i} == 1] = \ln (1 + e^{-\langle w, x_{i} \rangle}) = -\ln p(x_{i}) = -y_{i}\ln p(x_{i})$$ $$ \ln (1 + e^{-M_{i}}) = [y_{i} == 0] = \ln (1 + e^{\langle w, x_{i} \rangle}) = -\ln (1 - p(x_{i})) = -(1 - y_{i}) \ln (1 - p(x_{i}))$$
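To make the loss above concrete, here is a small illustrative NumPy sketch (not part of the original notebook) of the sigmoid and the logistic loss for labels in {-1, 1}; the bias term $w^{(0)}$ is omitted for brevity:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def logistic_loss(w, X, y):
    # y in {-1, +1}; margins M_i = y_i * <w, x_i>
    margins = y * (X @ w)
    return np.sum(np.log(1.0 + np.exp(-margins)))

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
y = np.array([1, -1, 1, 1, -1])
print(logistic_loss(np.zeros(3), X, y))   # equals 5 * ln(2) when w = 0
```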
###Code
lr_param_grid = {'C': [0.01, 0.1, 1.0, 10.0],
'penalty': ['l1', 'l2']}
lr_clf = GridSearchCV(LogisticRegression(random_state=42, max_iter=1000, solver='saga', n_jobs=-1), lr_param_grid)
lr_best_clf, lr_stats = fit_plot_confusion(lr_clf, X_train, y_train, X_test, y_test)
lr_stats
lr_best_clf.intercept_
coef = lr_best_clf.coef_[0]
plt.figure(figsize=(10, 5))
scale = np.abs(coef).max()
plt.imshow(coef.reshape(8, 8), interpolation='nearest',
cmap=plt.cm.RdYlGn, vmin=-scale, vmax=scale)
y_pred = lr_best_clf.predict(X_test)
metrics.accuracy_score(y_pred=y_pred, y_true=y_test % 2)
###Output
_____no_output_____
###Markdown
KNN
###Code
knn_param_grid = {'n_neighbors': [1, 2, 3, 5, 30, 100], 'weights': ['uniform', 'distance']}
knn_clf = GridSearchCV(KNeighborsClassifier(n_jobs=-1), knn_param_grid)
knn_best_clf, knn_stats = fit_plot_confusion(knn_clf, X_train, y_train, X_test, y_test)
knn_stats
y_pred = knn_best_clf.predict(X_test)
metrics.accuracy_score(y_pred=y_pred, y_true=y_test % 2)
###Output
_____no_output_____ |
matplotlib/gallery_jupyter/lines_bars_and_markers/gradient_bar.ipynb | ###Markdown
Bar chart with gradients Matplotlib does not natively support gradients. However, we can emulate a gradient-filled rectangle by an `.AxesImage` of the right size and coloring. In particular, we use a colormap to generate the actual colors. It is then sufficient to define the underlying values on the corners of the image and let bicubic interpolation fill out the area. We define the gradient direction by a unit vector *v*. The values at the corners are then obtained by the lengths of the projections of the corner vectors on *v*. A similar approach can be used to create a gradient background for an axes. In that case, it is helpful to use Axes coordinates (``extent=(0, 1, 0, 1), transform=ax.transAxes``) to be independent of the data coordinates.
###Code
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(19680801)
def gradient_image(ax, extent, direction=0.3, cmap_range=(0, 1), **kwargs):
"""
Draw a gradient image based on a colormap.
Parameters
----------
ax : Axes
The axes to draw on.
extent
The extent of the image as (xmin, xmax, ymin, ymax).
By default, this is in Axes coordinates but may be
changed using the *transform* kwarg.
direction : float
The direction of the gradient. This is a number in
range 0 (=vertical) to 1 (=horizontal).
cmap_range : float, float
The fraction (cmin, cmax) of the colormap that should be
used for the gradient, where the complete colormap is (0, 1).
**kwargs
Other parameters are passed on to `.Axes.imshow()`.
In particular useful is *cmap*.
"""
phi = direction * np.pi / 2
v = np.array([np.cos(phi), np.sin(phi)])
X = np.array([[v @ [1, 0], v @ [1, 1]],
[v @ [0, 0], v @ [0, 1]]])
a, b = cmap_range
X = a + (b - a) / X.max() * X
im = ax.imshow(X, extent=extent, interpolation='bicubic',
vmin=0, vmax=1, **kwargs)
return im
def gradient_bar(ax, x, y, width=0.5, bottom=0):
for left, top in zip(x, y):
right = left + width
gradient_image(ax, extent=(left, right, bottom, top),
cmap=plt.cm.Blues_r, cmap_range=(0, 0.8))
xmin, xmax = xlim = 0, 10
ymin, ymax = ylim = 0, 1
fig, ax = plt.subplots()
ax.set(xlim=xlim, ylim=ylim, autoscale_on=False)
# background image
gradient_image(ax, direction=0, extent=(0, 1, 0, 1), transform=ax.transAxes,
cmap=plt.cm.Oranges, cmap_range=(0.1, 0.6))
N = 10
x = np.arange(N) + 0.15
y = np.random.rand(N)
gradient_bar(ax, x, y, width=0.7)
ax.set_aspect('auto')
plt.show()
###Output
_____no_output_____ |
Projects/Projects/Exploring 67 years of LEGO/notebook.ipynb | ###Markdown
1. IntroductionEveryone loves Lego (unless you ever stepped on one). Did you know by the way that "Lego" was derived from the Danish phrase leg godt, which means "play well"? Unless you speak Danish, probably not. In this project, we will analyze a fascinating dataset on every single lego block that has ever been built!
###Code
# Nothing to do here
###Output
_____no_output_____
###Markdown
2. Reading DataA comprehensive database of lego blocks is provided by Rebrickable. The data is available as csv files and the schema is shown below.Let us start by reading in the colors data to get a sense of the diversity of lego sets!
###Code
# Import modules
import pandas as pd
# Read colors data
colors = pd.read_csv('datasets/colors.csv')
# Print the first few rows
colors.head()
###Output
_____no_output_____
###Markdown
3. Exploring ColorsNow that we have read the colors data, we can start exploring it! Let us start by understanding the number of colors available.
###Code
# How many distinct colors are available?
# -- YOUR CODE FOR TASK 3 --
num_colors = len(colors.name.unique())
num_colors
###Output
_____no_output_____
###Markdown
4. Transparent Colors in Lego SetsThe colors data has a column named is_trans that indicates whether a color is transparent or not. It would be interesting to explore the distribution of transparent vs. non-transparent colors.
###Code
# colors_summary: Distribution of colors based on transparency
# -- YOUR CODE FOR TASK 4 --
colors_summary = colors.groupby(colors['is_trans']).count()
colors_summary
###Output
_____no_output_____
###Markdown
5. Explore Lego SetsAnother interesting dataset available in this database is the sets data. It contains a comprehensive list of sets over the years and the number of parts that each of these sets contained. Let us use this data to explore how the average number of parts in Lego sets has varied over the years.
###Code
%matplotlib inline
# Read sets data as `sets`
sets = pd.read_csv('datasets/sets.csv')
# Create a summary of average number of parts by year: `parts_by_year`
parts_by_year = sets['num_parts'].groupby(sets['year']).mean()
# Plot trends in average number of parts by year
parts_by_year.plot()
###Output
_____no_output_____
###Markdown
6. Lego Themes Over YearsLego blocks ship under multiple themes. Let us try to get a sense of how the number of themes shipped has varied over the years.
###Code
# themes_by_year: Number of themes shipped by year
# -- YOUR CODE HERE --
themes_by_year = sets[['year', 'theme_id']].groupby('year')['theme_id'].nunique()  # count unique themes per year, not rows
themes_by_year
###Output
_____no_output_____
###Markdown
7. Wrapping It All Up!Lego blocks offer an unlimited amount of fun across ages. We explored some interesting trends around colors, parts, and themes.
###Code
# Nothing to do here
###Output
_____no_output_____ |
ML_part1_supervised_part2_unsupervised.ipynb | ###Markdown
Quick Machine Learning Part. 1 Supervised Learning Techniques- Decision Tree- Cross-Validation- Naive Bayes- K-Nearest Neighbours- Random Forest- Ensemble Methods
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set(style="ticks")
from sklearn.feature_extraction.text import CountVectorizer
from skimage.io import imread, imshow
###Output
_____no_output_____
###Markdown
Exploratory Data- Feature Aggregation: combining features to make new features.- Feature Selection: removing irrelevant features.- Feature Transformation: mathematical transformations, e.g. sqr, exp, log. - Discretization: numerical data -> categorical data e.g. age into age group.- Summary Statistics: - Categorical: frequency of classes, modes, quantiles. - Continuous: mean, median, quantiles. - Entropy: $-\sum_{c=1}^{k} p_{c}\log {p_{c}}$ measures randomness in range 0 to $\log k$. Lower means predictable, higher means random. The highest-entropy categorical and continuous distributions are the uniform and normal distributions, respectively. - x, y: Hamming distance, Euclidean distance, correlation, and rank correlation. - Jaccard Coefficient: distance between sets. Intersection over union. - Edit Distance: distance between strings.
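As a quick illustration of the entropy measure above (a hypothetical helper, not from the original notebook):

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                    # convention: 0 * log(0) = 0
    return -np.sum(p * np.log2(p))

print(entropy([1.0, 0.0, 0.0, 0.0]))       # 0.0 -> perfectly predictable
print(entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0 = log2(4) -> maximally random
```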
###Code
# Show different types of data
titanic = sns.load_dataset("titanic")
titanic.head()
# Show bag of words
text = "The University of British Columbia (UBC) is a public research university with campuses and facilities in British Columbia, Canada."
cv = CountVectorizer()
feat = cv.fit_transform([text])
for word, idx in cv.vocabulary_.items():
print("%-14s%d" % (word, feat[0,idx]))
# Discretization
ages = pd.cut(titanic['age'], bins=(0,20,30,100))
ages_cat = pd.get_dummies(ages)
pd.concat([titanic['age'], ages_cat],axis=1).head()
###Output
_____no_output_____
###Markdown
Decision Tree Learning- Decision trees are nested if-else splitting rules that return a class label at the end of each sequence.- Decision stumps have only 1 rule based on only 1 feature.- Decision trees allow sequences of splits based on multiple features. It's computationally infeasible to find the best decision tree.- Most commonly used: **Greedy Recursive Splitting** - With full dataset, split into two smaller datasets based on a stump - Fit a decision stump to each leaf's data, add stumps to the tree.- Score: **Information Gain** - entropy of labels before split - proportion of examples satisfying rule * entropy of labels for examples satisfying rule - proportion of examples NOT satisfying rule * entropy of labels for examples NOT satisfying rule - $I = entropy(y) - \frac {n_{yes}}{n} entropy(y_{yes}) - \frac {n_{no}}{n} entropy(y_{no})$ - information gain for baseline rule is 0 - classification accuracy should gradually increase with depth
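A short sketch of the information-gain score described above (hypothetical helper functions, for illustration only):

```python
import numpy as np

def label_entropy(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(y, rule_mask):
    # rule_mask is True where the stump's splitting rule is satisfied
    n = len(y)
    n_yes = rule_mask.sum()
    return (label_entropy(y)
            - (n_yes / n) * label_entropy(y[rule_mask])
            - ((n - n_yes) / n) * label_entropy(y[~rule_mask]))

y = np.array([0, 0, 0, 1, 1, 1])
x = np.array([1.0, 2.0, 2.5, 7.0, 8.0, 9.0])
print(information_gain(y, x > 5))   # perfect split of a 50/50 label set -> 1 bit
```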
###Code
from sklearn import datasets
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
digits = datasets.load_digits()
X, y = digits['data'], digits['target']
n, d = X.shape
idx = np.random.randint(0, n)
plt.imshow(digits['images'][idx], cmap='Greys_r')
plt.title('This is a %d' % digits['target'][idx]);
# Depth 1 Decision Tree
stump = DecisionTreeClassifier(max_depth=1)
stump.fit(X,y)
yhat = stump.predict(X)
print("Error rate:", np.sum(y!=yhat)/n) # or np.mean(y!=yhat)
# Plot classification error with increasing depth
errors = []
depths = range(1,20)
for max_depth in depths:
tree = DecisionTreeClassifier(max_depth=max_depth)
tree.fit(X,y)
yhat = tree.predict(X)
errors.append(np.mean(y!=yhat))
plt.plot(depths, errors)
plt.xlabel("Max depth")
plt.ylabel("Classification error");
###Output
_____no_output_____
###Markdown
 Fundamentals of Learning
- Overfitting: testing accuracy is lower than training accuracy.
- Supervised learning steps: training phase -> testing phase.
  - Test data cannot influence the training phase in any way.
  - Training and testing data are assumed to be IID across examples but not across features.
- **Learning Theory**: how does the training error $E_{train}$ relate to the test error $E_{test}$? Testing error is what we care about.
- **Fundamental Trade-Off**:
  - $E_{test}=(E_{test}-E_{train})+E_{train}$
  - test error = approximation error + training error
  - $E_{approx}$ is the amount of overfitting; it decreases when n increases and increases with model complexity.
  - Small $E_{approx}$ implies $E_{train}$ is a good approximation to $E_{test}$.
  - Trade-off of how small you can make $E_{train}$ vs. how well $E_{train}$ approximates $E_{test}$. Simple models like decision stumps have low $E_{approx}$ but high $E_{train}$. Complex models like deep decision trees have low $E_{train}$ but high $E_{approx}$.
- **Validation Error**: split the training examples into a training set and a validation set; the validation error is an unbiased approximation of the test error.
  - $E[E_{valid}] = E[E_{test}]$
- Parameters control how well we fit a dataset; they are found by training, e.g. decision tree rules.
- Hyper-parameters control how complex the model is; they cannot be trained, but can be chosen using a validation score, e.g. decision tree depth.
- **Optimization Bias**: aka overfitting. Grows with the complexity of the set of models we search over but shrinks with the number of examples.
  - Parameter learning: searching over decision trees can find a low training error by chance.
  - Hyper-parameter tuning: optimizing the validation error can find a low error by chance.
###Code
# Optimization bias
Xtrain, Xtest, ytrain, ytest = train_test_split(X,y,test_size=0.2)
tree = DecisionTreeClassifier(max_depth=20)
tree.fit(Xtrain,ytrain)
train_err = np.mean(ytrain!=tree.predict(Xtrain))
print("Training error,", train_err)
test_err = np.mean(ytest!=tree.predict(Xtest))
print("Test error,", test_err)
# Plot difference between training and testing error
train_errors = []
test_errors = []
for max_depth in depths:
tree = DecisionTreeClassifier(max_depth=max_depth)
tree.fit(Xtrain,ytrain)
train_errors.append(np.mean(ytrain!=tree.predict(Xtrain)))
test_errors.append(np.mean(ytest!=tree.predict(Xtest)))
plt.plot(depths, train_errors, label="train")
plt.plot(depths, test_errors, label="test")
plt.xlabel("Max depth")
plt.ylabel("Classification error")
plt.legend()
###Output
_____no_output_____
###Markdown
 Probabilistic Classification
- Validation error usually has lower optimization bias than training error. Overfitting of the validation error happens when the error is computed a huge number of times.
- Optimization bias is small when comparing a few models, large when comparing a lot of models. The bias also shrinks with increasing validation set size.
- **k-Fold Cross-Validation**: more accurate and more expensive with more folds (see the sketch below).
  - To choose depth: for depths 1 to 20, compute the cross-validation score and return the depth with the highest score.
  - To compute the score: for folds 1 to 5, train on the 80% that doesn't include the fold, and finally return the average score across folds.
- Spam Filtering with supervised learning: collect a spam-labeled dataset, extract features (e.g. bag of words, bi/trigrams, regex), classify by **naive Bayes**.
  - $p(y_{i}="spam"|x_{i})= \frac {p(x_{i}|y_{i}="spam") p(y_{i}="spam") }{p(x_{i})} $
  - $p(y_{i}="spam") = \frac {spam\ messages}{total\ messages} $
  - $p(x_{i}) = \frac {e-mails\ with\ features\ x_{i}}{total\ e-mails}$, hard to estimate, ignore for now.
  - $p(x_{i}|y_{i}="spam")= \frac {spam\ messages\ with\ features\ x_{i}}{spam\ messages}$
  - Naive Bayes assumes all features are conditionally independent given the label $y_{i}$.

Non-Parametric Models
- **Laplace Smoothing**: add 1 to the numerator, add 2 to the denominator. This is used when $p(x_{i}|y_{i}="spam")=0$ to avoid automatically getting through. This is done across all features to avoid overfitting, by biasing towards the uniform distribution. A common variation is to use $\beta\ , \beta k$ instead of 1, 2.
- **Decision Theory**: do false positives and false negatives carry the same weight? We give a cost to each scenario and minimize the expected cost.
  - $cost = E[cost(\hat{y_{i}} , \tilde{y_{i}} )]$
  - cost = expectation of the cost of predicting $\hat{y_{i}}$ if it's really $\tilde{y_{i}}$, with respect to $\tilde{y_{i}}$.
- **k-Nearest Neighbours**: find the k training examples $x_{i}$ nearest to $\tilde{x_{i}}$, classify using the most common label of the nearest training examples.
  - kNN assumes examples with similar features are likely to have similar labels.
  - Common distance function: Euclidean, O(d) to compute.
  - As k grows, training error increases and approximation error decreases.
  - No training phase in KNN. Predictions are expensive, O(nd). Storage is expensive, O(nd).
  - Has good consistency properties: the test error is less than twice the best possible error.
- Parametric Models: fixed number of parameters.
  - e.g. naive Bayes stores counts, a fixed-depth decision tree stores rules.
  - Estimation improves with more data unless the model is too simple.
  - Memory is bounded.
  - An accuracy limit exists: infinite n may not be able to achieve the optimal error.
- Non-parametric Models: number of parameters grows with n.
  - e.g. KNN stores all training data, a decision tree whose depth grows with the number of examples.
  - Complexity grows with more data.
  - Memory is unbounded.
  - Converges to the optimal error.
- Curse of Dimensionality: the volume of space grows exponentially with dimension, so we need exponentially more points to fill a high-dimensional volume.
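A minimal sketch of the k-fold procedure described above, using scikit-learn's cross_val_score to pick a tree depth (this assumes the digits X, y loaded earlier are still in scope):
###Code
from sklearn.model_selection import cross_val_score

# 5-fold cross-validation accuracy for each candidate depth; keep the best
cv_scores = []
for max_depth in range(1, 21):
    tree = DecisionTreeClassifier(max_depth=max_depth)
    cv_scores.append(cross_val_score(tree, X, y, cv=5).mean())
print("Best depth by 5-fold CV:", np.argmax(cv_scores) + 1)
###Output
_____no_output_____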
###Code
from sklearn.neighbors import KNeighborsClassifier
# code adapted from http://scikit-learn.org/stable/auto_examples/svm/plot_iris.html
def plotClassifier(model, X, y, transformation=None):
x1 = X[:, 0]
x2 = X[:, 1]
x1_min, x1_max = int(x1.min()) - 1, int(x1.max()) + 1
x2_min, x2_max = int(x2.min()) - 1, int(x2.max()) + 1
x1_line = np.linspace(x1_min, x1_max,200)
x2_line = np.linspace(x2_min, x2_max,200)
x1_mesh, x2_mesh = np.meshgrid(x1_line, x2_line)
mesh_data = np.c_[x1_mesh.ravel(), x2_mesh.ravel()]
if transformation is not None:
mesh_data = transformation(mesh_data)
y_pred = model.predict(mesh_data)
y_pred = np.reshape(y_pred, x1_mesh.shape)
plt.xlim([x1_mesh.min(), x1_mesh.max()])
plt.ylim([x2_mesh.min(), x2_mesh.max()])
plt.contourf(x1_mesh, x2_mesh, -y_pred, cmap=plt.cm.RdBu, alpha=0.6)
plt.scatter(x1[y<0], x2[y<0], color="b", marker="x", label="class $-1$")
plt.scatter(x1[y>0], x2[y>0], color="r", marker="o", label="class $+1$")
plt.legend(loc="best")
plt.tick_params(axis='both', which='both', bottom='off', left='off', labelbottom='off', labelleft='off')
# Make random dataset
N = 50
X = np.random.randn(N,2)
y = np.random.choice((-1,+1),size=N)
X[y>0,0] += 2
X[y>0,1] += 2
dt = DecisionTreeClassifier()
dt.fit(X,y)
plotClassifier(dt, X, y)
nn = KNeighborsClassifier(n_neighbors=10)
nn.fit(X,y)
plotClassifier(nn, X, y)
###Output
_____no_output_____
###Markdown
 Ensemble Methods
- A common way to define distance: take the "norm" of the difference between feature vectors. Norms are a way to measure the length of a vector. Different norms place different weights on differences. In L2, bigger differences are more important. In L1, differences are weighted equally. In Linf, only the biggest difference is important.
  - L2-Norm (Euclidean norm): $||x_{i}-\tilde{x_{\tilde{i}}}||_{2} = \sqrt{\sum_{j=1}^{d}(x_{i,j}-\tilde{x_{\tilde{i,j}}})^{2}} $ which is ||Train example - Test example||.
  - L1-Norm: $||r||_{1} = \sum_{j=1}^{d}|r_{j}|$
  - L$_{\infty}$-Norm: $max_{j}{|r_{j}|} $
- Optical Character Recognition:
  - KNN doesn't know labels should be translation invariant. Add transformed data during training to fix this.
- Ensemble Methods: a meta-classifier taking other classifiers as input.
  - Averaging: the input is the predictions of a set of models; take the mode of the predictions.
  - Stacking: fit another classifier that uses the predictions from the models.
  - Random Forests: average a set of deep decision trees using bootstrapping and random trees; predictions are usually fast. First, take a bootstrap sample of the list of n samples. Second, perform bagging by fitting a classifier to each bootstrap sample; at test time, average the predictions. Third, for each split in a random tree model, randomly sample a small number of features and only consider them when searching for the optimal rule. Splits will use different features in different trees; the trees will still overfit, but their errors will be more independent, so the average tends to have a much lower test error.
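A quick sketch of the three norms applied to the same (toy) difference vector:
###Code
# Comparing L2, L1 and L-infinity norms of a made-up difference vector
r = np.array([3.0, -4.0, 0.5])
l2 = np.sqrt(np.sum(r ** 2))   # same as np.linalg.norm(r, 2)
l1 = np.sum(np.abs(r))         # same as np.linalg.norm(r, 1)
linf = np.max(np.abs(r))       # same as np.linalg.norm(r, np.inf)
print("L2:", l2, " L1:", l1, " Linf:", linf)
###Output
_____no_output_____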
###Code
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=50)
rf.fit(X,y)
plotClassifier(rf, X, y)
import sklearn.datasets
# load the newsgroups data
train = sklearn.datasets.fetch_20newsgroups_vectorized(subset='train')
X_train = train.data
y_train = train.target
test = sklearn.datasets.fetch_20newsgroups_vectorized(subset='test')
X_test = test.data
y_test = test.target
def print_errs(model):
train_err = 1-model.score(X_train, y_train)
test_err = 1-model.score(X_test, y_test)
print("Train error:", train_err)
print("Test error:", test_err)
dt = DecisionTreeClassifier()
dt.fit(X_train, y_train)
print("Decision Tree")
print_errs(dt)
rf = RandomForestClassifier()
rf.fit(X_train, y_train)
print("Random Forest")
print_errs(rf)
knn = KNeighborsClassifier()
knn.fit(X_train, y_train)
print("K-Nearest Neighbours")
print_errs(knn)
from sklearn.ensemble import VotingClassifier
classifiers = {
"decision tree" : dt,
"random forest" : rf,
"KNN" : knn
}
ensemble = VotingClassifier(classifiers.items())
ensemble.fit(X_train, y_train)
print_errs(ensemble)
###Output
Train error: 0.00017677214071065706
Test error: 0.31346255974508763
###Markdown
 Part 2. Unsupervised Learning Techniques
- K-means
- DBscan
- Agglomerative Clustering
- Outlier Detection
- A Priori Algorithm

 Clustering
- Clustering: examples in the same group should be similar, and different groups should be different. There are no test errors.
- **k-Means**: input is the hyper-parameter $k$ (number of clusters) and an initial guess of the mean of each cluster.
  - Assign each $x_{i}$ to the closest mean, update the means, and repeat until convergence.
  - The objective is the total sum of squared distances from each example $x$ to its center $w$: $f(w_{1},w_{2}...w_{k},\hat{y_{1}},\hat{y_{2}}...\hat{y_{n}}) = \sum_{i=1}^{n}||w_{\hat{y_{i}}}-x_{i}||^{2}$
  - Minimize f in terms of $\hat{y_{i}}$ to update the cluster assignments, then minimize in terms of $w_{c}$ to update the means.
  - Total cost is $O(ndk)$.
  - If we use the L1-norm instead, we get k-medians. If we need actual data points as means, use k-medoids. A newer approach is k-means++.
- Random restarts deal with sensitivity to initialization.
- Vector quantization compresses examples by replacing them with the mean of their cluster.
- An issue with k-means is that it cannot separate non-convex shapes of clusters; solve this with density-based clustering.
- **DBscan**: input is 2 hyper-parameters, $\epsilon$ (distance to decide if points are neighbours) and $MinNeighbours$ (number of neighbours needed to define a dense / core point).
  - For each example $x_{i}$: if it is already assigned to a cluster, do nothing; otherwise test whether it is a core point; if false do nothing, if true make a new cluster and call the "expand cluster" function.
  - Expand cluster: assign all $x_{j}$ within $\epsilon$ of the core point $x_{i}$ to this cluster. For each new core point found, expand the cluster.
  - Choose hyper-parameters using the elbow method (see the sketch below).
- Ensemble clustering combines multiple clusterings, but take note of label switching.
- **Hierarchical Clustering**: produces a tree of clusterings. Each node in the tree splits the data into >=2 clusters. Individual data points are leaves.
  - **Agglomerative Clustering**: start with each point in its own cluster, merge the closest pair of clusters, stop at one big cluster.
    - Closest is defined as the distance between the means of the clusters.
    - Cost is $O(n^{3}d)$.
- Biclustering: cluster training examples and features. X is plotted as a heatmap, where rows/columns are arranged by clusters. Breast cancer visualization commonly uses hierarchical biclustering + heatmap + dendrograms.
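A small sketch of the elbow method mentioned above: plot the k-means objective (inertia) against k and look for the bend. The blob data here is made up purely for illustration:
###Code
from sklearn.cluster import KMeans

# Elbow-method sketch on toy data: inertia vs. number of clusters
X_demo = np.random.randn(300, 2)
X_demo[:150] += 5  # shift half the points to create two rough blobs
inertias = [KMeans(n_clusters=kk, n_init=10).fit(X_demo).inertia_ for kk in range(1, 9)]
plt.plot(range(1, 9), inertias, marker='o')
plt.xlabel('k')
plt.ylabel('Sum of squared distances (inertia)');
###Output
_____no_output_____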
###Code
from sklearn.cluster import KMeans, DBSCAN
from sklearn.metrics.pairwise import euclidean_distances
def plot_clust(X,W=None,z=None):
if z is not None:
if np.any(z<0):
plt.scatter(X[z<0,0], X[z<0,1], marker="o", facecolors='none', edgecolor='black', alpha=0.3);
if np.any(z>=0):
plt.scatter(X[z>=0,0], X[z>=0,1], marker="o", c=z[z>=0], alpha=0.3);
else:
plt.scatter(X[:,0], X[:,1], marker="o", c='black', alpha=0.3);
if W is not None:
plt.scatter(W[:,0], W[:,1], marker="^", s=200, c=np.arange(W.shape[0]));
else:
plt.title("number of clusters = %d" % len(set(np.unique(z))-set([-1])));
# Make random dataset
np.random.seed(2)
n = 100
d = 2
k_true = 4
W_true = np.random.randn(k_true,d)*10
z_true = np.random.randint(0,k_true,size=n)
X = np.zeros((n,2))
for i in range(n):
X[i] = W_true[z_true[i]] + np.random.randn(d)
plt.scatter(X[:,0], X[:,1], c=z_true, marker="o", alpha=0.5);
# assign each object to closest mean
def update_z(X,W):
dist2 = euclidean_distances(X, W)
return np.argmin(dist2, axis=1)
# recompute cluster centres
def update_W(X,z,W_old):
# just being a bit careful about the case of a cluster with no points in it
W = W_old.copy()
for kk in range(k):
W[kk] = np.mean(X[z==kk],axis=0)
return W
# Start k-means by randomly initialize means
k = 4
W = X[np.random.choice(n, k, replace=False)]
# can change to loop until np.all(z_new == z)
for itr in range(100):
z = update_z(X,W)
W = update_W(X,z,W)
plot_clust(X,W,z)
# make non-convex dataset
n1 = 100
x1 = np.linspace(-1,1,n1) + np.random.randn(n1)*.1
y1 = 1-x1**2 + np.random.randn(n1)*.1
n2 = 100
x2 = np.linspace(0,2,n2) + np.random.randn(n2)*.1
y2 = (x2-1)**2-1 + np.random.randn(n2)*.1
x = np.concatenate((x1,x2))
y = np.concatenate((y1,y2))
X = np.concatenate((x[:,None],y[:,None]),axis=1)
# run k-means first, to see problem
kmeans = KMeans(n_clusters=2)
kmeans.fit(X)
plot_clust(X,kmeans.cluster_centers_, kmeans.labels_)
dbscan = DBSCAN(eps=0.3)
dbscan.fit(X)
plot_clust(X,z=dbscan.labels_)
# hyper-parameter alteration
dbscan = DBSCAN(eps=0.3, min_samples=25)
dbscan.fit(X)
plot_clust(X,z=dbscan.labels_)
###Output
_____no_output_____ |
GNSSR_MERRByS.ipynb | ###Markdown
 GNSS-R MERRByS Python Example code
This Jupyter Notebook contains some examples for the processing of the GNSS-Reflectometry (GNSS-R) data from www.merrbys.co.uk. Surrey Satellite Technology Ltd provides these functions under a permissive MIT license to make it easier for people to get started with this data source.
Conventions
Data is segmented into 6 hour sections. When inputting start and stop times, these should use the boundary times of 03:00, 09:00, 15:00, 21:00.
Contents
The examples are split into the following sections:
1. **Download data from FTP server** - downloads L1b or L2 data from TechDemoSat-1 from the MERRByS server over a specified time range
2. **Example to read L1b Delay-Doppler maps level data** - reads Level1b metadata, filters it and generates a histogram and map
3. **Example to read Level 2 wind-speed** - displays the Level 2 wind-speed data
4. **Other functions**
5. **Data search script** - allows searching the Level 1b data for results by time and location
Import helper functions
A number of helper functions live in the notebook's `/GNSSR_Python` folder. These allow the notebook to focus on the high-level functions.
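As a small illustrative check of the segmentation convention above (this helper is not part of the MERRByS code, just a sketch):
###Code
import datetime

# Segment boundaries used when requesting data: 03:00, 09:00, 15:00 and 21:00
def is_segment_boundary(t):
    return t.hour in (3, 9, 15, 21) and t.minute == 0 and t.second == 0

print(is_segment_boundary(datetime.datetime(2017, 2, 1, 3, 0, 0)))   # True
print(is_segment_boundary(datetime.datetime(2017, 2, 1, 4, 30, 0)))  # False
###Output
_____no_output_____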
###Code
# Enable automatically reloading the local python files if they change
%load_ext autoreload
%autoreload 2
#Import local python files
import os
import sys
module_path = os.getcwd() + '\\GNSSR_Python'
if module_path not in sys.path:
sys.path.append(module_path)
#Import profiler
import cProfile
###Output
_____no_output_____
###Markdown
 1. Data download
**The following contains a Python script to download L1b or L2 data from the MERRByS server.**
All the data could be downloaded using any FTP client, but this script is provided to make it easier to select a subset of interest. The functions used for processing MERRByS data expect the files to be available on a local disk drive with the same directory structure as that used on the MERRByS FTP server. For example, the following will download 24 hours of data:
e.g. downloadData('20170217T21:00:00', '20170218T21:00:00', {'L1B': True, 'L2_FDI': True, 'L2_CBRE_v0_5': True}, 'c:\merrbysData\');
* This will download L1b and L2 data in the time range specified into the c:\merrbysData folder.
** You will need to change the following to use your MERRByS FTP credentials to log in **
 Configuration
###Code
import datetime
# Destination to write data to
dataFolder = os.path.join(os.getcwd() , 'Data\\')
#The FTP data access folder 'Data' for regular users or 'DataFast' for low latency access for approved users
ftpDataFolder = 'Data' # 'Data' or 'DataFast'
# Location and access to the FTP server
## ENTER CREDENTIALS HERE
userName = ''
passWord = ''
ftpServer = 'ftp.merrbys.co.uk'
if len(userName) == 0:
print('Enter FTP credentials!')
#Time range of interest
# Data is segmented every 6 hours so hours must be one of [3, 9, 15, 21]
startTime = datetime.datetime(2017, 2, 1, 3, 0, 0)
stopTime = datetime.datetime(2017, 2, 10, 21, 0, 0)
#Data levels to download
# L1b is Delay-Doppler maps
# L2 FDI Is the original FDI ocean windspeed algorithm
# L2 CBRE Is the improved ocean windspeed algorithm
dataLevels = {'L1B': True, 'L2_FDI': True, 'L2_CBRE_v0_5': True}
###Output
_____no_output_____
###Markdown
Run download
###Code
# Download data from MERRByS server
# Collect all available files within a given date and time range
#Import GNSSR
from GNSSR import DownloadData
DownloadData(startTime, stopTime, dataFolder, ftpServer, userName, passWord, dataLevels, ftpDataFolder)
###Output
Starting download
Complete. Got: 35 segments
###Markdown
 2. Generate histogram from Level1b metadata
** The following produces two plots based on the Level 1B metadata.**
These are:
* a 2D histogram of DDM Peak SNR vs. Antenna Gain
* an averaged map of Peak SNR
Data is filtered to specular points only over the ocean and when there is no direct signal interference from code-wrapping.
###Code
# Configuration of the routine for processing the Level1b histogram
import datetime
# Destination to read data from
dataFolder = os.path.join(os.getcwd() , 'Data\\')
#Time range of interest
# Data is segmented every 6 hours so hours must be one of [3, 9, 15, 21]
startTime = datetime.datetime(2017, 2, 1, 21, 0, 0)
stopTime = datetime.datetime(2017, 2, 10, 21, 0, 0)
import numpy as np
import scipy as sp
import sys
import os
import h5py
import matplotlib.pyplot as plt
import ipywidgets as widgets
from GNSSR import *
from CoastalDistanceMap import *
from MapPlotter import *
def RunMERRBySLevel1bHistogramAndMapExample():
#Ignore divide by NaN
np.seterr(divide='ignore', invalid='ignore')
# Select the data names to extract and plot
yName = 'DDMSNRAtPeakSingleDDM'
xName = 'AntennaGainTowardsSpecularPoint'
# Filter by the following data name
filterName = 'DirectSignalInDDM'
#filterOceanOrLand = 'Ocean'
landDistanceThreshold = 50 # km
# Filter by geographic area - if enabled
searchLimitsEnabled = False
searchLatLimit = [-10, 10]
searchLonLimit = [-10, 10]
coastalDistanceMap = CoastalDistanceMap()
coastalDistanceMap.loadMap(os.path.join(os.getcwd(), 'GNSSR_Python', 'landDistGrid_0.10LLRes_hGSHHSres.nc'))
#Generate a list of possible files in the range startTime to stopTime
dataList = FindFiles(startTime, stopTime)
#Initialising lists to ensure they are empty
x = np.array([])
y = np.array([])
mapPlotter = MapPlotter(100e3) #Map grid in km (at equator)
#Generate file input list for range
for entry in dataList:
entryFolder = FolderFromTimeStamp(entry)
#Create file path string
filePath = dataFolder + 'L1B\\' + entryFolder.replace('/','\\') + '\\metadata.nc'
try:
f = h5py.File(filePath, 'r')
except OSError as e:
#File does not exist
#As TDS-1 is run periodically, many time segment files are not populated
#print(e)
continue
print ('Reading file %s...' % entryFolder)
# Loop through all the tracks
trackNumber = 0
while True:
#Group name in NetCDF is 6 digit string
groupName = str(trackNumber).zfill(6)
try:
#Get data into numpy arrays
directSignalInDDM = f['/' + groupName + '/DirectSignalInDDM'][:]
x_vals = f['/' + groupName + '/' + xName][:]
y_vals = f['/' + groupName + '/' + yName][:]
specularPointLon = f['/' + groupName + '/SpecularPointLon'][:]
specularPointLat = f['/' + groupName + '/SpecularPointLat'][:]
except:
#End of data
break
# Filter the data
# Ocean - coastal distance
coastDistance = coastalDistanceMap.getDistanceToCoast(specularPointLon, specularPointLat)
#Initialise filter vector to all ones
acceptedData = np.ones(np.shape(directSignalInDDM))
# Filter out directSignalInDDM when the direct signal is in the
# delay-doppler space of the reflection DDM
acceptedData = np.logical_and(acceptedData, directSignalInDDM==0)
# Filter out land coastDistance=NaN
acceptedData = np.logical_and(acceptedData, np.isfinite(coastDistance))
# Filter out coastal data
acceptedData = np.logical_and(directSignalInDDM==0, coastDistance>landDistanceThreshold)
# Filter out where there could be sea-ice - currently disabled
#acceptedData = np.logical_and(acceptedData, np.abs(specularPointLat) < 55)
# Filter to geographic area
if searchLimitsEnabled:
acceptedData = np.logical_and(acceptedData, np.logical_and(specularPointLat > searchLatLimit[0], specularPointLat < searchLatLimit[1]))
acceptedData = np.logical_and(acceptedData, np.logical_and(specularPointLon > searchLonLimit[0], specularPointLon < searchLonLimit[1]))
#Apply the filter
filtered_x = x_vals[acceptedData]
filtered_y = y_vals[acceptedData]
filtered_lat = specularPointLat[acceptedData]
filtered_lon = specularPointLon[acceptedData]
#Concatenate filtered values to output in histogram
x = np.concatenate((x, filtered_x))
y = np.concatenate((y, filtered_y))
#Accumulate values into the map
mapPlotter.accumulateDataToMap(filtered_lon, filtered_lat, filtered_y)
# Go to next track
trackNumber = trackNumber + 1
f.close()
#Plot the data as a histogram
print('Plotting histogram')
plt.hist2d(x,y,bins=200, normed=True, cmap = 'jet')
#plt.figsize=(24, 40)
#Modify plot
plt.xlabel('Antenna Gain [dB]')
plt.ylabel('DDM Peak SNR [dB]')
plt.title('Histogram of ' + yName + ' and ' + xName)
plt.xlim([-12,max(x)])
plt.ylim([-12,20])
plt.colorbar()
#plt.tight_layout()
plt.show()
#Plot the data on a map
print('Plotting map')
mapPlotter.plotMap()
RunMERRBySLevel1bHistogramAndMapExample()
###Output
Reading file 2017-02/02/H00...
Reading file 2017-02/02/H06...
Reading file 2017-02/02/H12...
Reading file 2017-02/08/H18...
Reading file 2017-02/09/H00...
Reading file 2017-02/09/H06...
Reading file 2017-02/09/H12...
Reading file 2017-02/09/H18...
Reading file 2017-02/10/H00...
Reading file 2017-02/10/H06...
Reading file 2017-02/10/H12...
Plotting histogram
###Markdown
 3. Plot wind speed from Level 2 data set
** The following produces a plot of the Level 2 wind speed output of the FDI algorithm on a map**
###Code
import numpy as np
import scipy as sp
import sys
import os
import h5py
import matplotlib.pyplot as plt
import ipywidgets as widgets
from GNSSR import *
from CoastalDistanceMap import *
from MapPlotter import *
def RunMERRBySLevel2MapExample():
#Configuration of L2 Data type
l2DataName = 'L2_FDI' # 'L2_FDI' or 'L2_CBRE_v0_5'
#Ignore divide by NaN
np.seterr(divide='ignore', invalid='ignore')
# Set default figure size
plt.rcParams['figure.figsize'] = (12,8)
#Generate a list of possible files in the range startTime to stopTime
dataList = FindFiles(startTime, stopTime)
#Initialising lists to ensure they are empty
x = np.array([])
y = np.array([])
mapPlotter = MapPlotter(100e3) #Map grid in km (at equator)
#Generate file input list for range
for entry in dataList:
entryFolder = FolderFromTimeStamp(entry)
#Create file path string
filePath = dataFolder + l2DataName + '\\' + entryFolder.replace('/','\\') + '\\' + l2DataName + '.nc'
try:
f = h5py.File(filePath, 'r')
except OSError as e:
#File does not exist
#As TDS-1 is run periodically, many time segment files are not populated
#print(e)
continue
print ('Reading file %s...' % entryFolder)
#Get data into numpy arrays
windSpeed = f['/WindSpeed'][:]
specularPointLon = f['/SpecularPointLon'][:]
specularPointLat = f['/SpecularPointLat'][:]
mapPlotter.accumulateDataToMap(specularPointLon, specularPointLat, windSpeed)
f.close()
#Plot the data on a map
mapPlotter.plotMap()
RunMERRBySLevel2MapExample()
###Output
Reading file 2017-02/02/H00...
Reading file 2017-02/02/H06...
Reading file 2017-02/02/H12...
Reading file 2017-02/08/H18...
Reading file 2017-02/09/H00...
Reading file 2017-02/09/H06...
Reading file 2017-02/09/H12...
Reading file 2017-02/09/H18...
Reading file 2017-02/10/H00...
Reading file 2017-02/10/H06...
Reading file 2017-02/10/H12...
###Markdown
 4. Other functions
Test the loading in of the coastal distance map
The following will load the coastal distance map (resolution 0.1 degree) and look up the distance over a grid of latitudes and longitudes (testing with a lower resolution grid for speed).
###Code
from CoastalDistanceMap import *
coastalDistanceMap = CoastalDistanceMap()
coastalDistanceMap.loadMap(os.path.join(os.getcwd(), 'GNSSR_Python', 'landDistGrid_0.10LLRes_hGSHHSres.nc'))
coastalDistanceMap.displayMapTest()
###Output
_____no_output_____
###Markdown
 5. Data search
Search for data within the MERRByS site
The following code can be used to search for data on the MERRByS service. The search can be given latitude / longitude limits as well as date/time limits.
- The data for the time-range is downloaded to the data folder (if not already there)
- Each file is loaded and filtered
- The resulting matches are output
###Code
# Configuration of the routine for searching for Level 1b data
import datetime
from GNSSR import *
# Destination to read data from
dataFolder = os.path.join(os.getcwd() , 'Data\\')
#The FTP data access folder 'Data' for regular users or 'DataFast' for low latency access for approved users
ftpDataFolder = 'Data' # 'Data' or 'DataFast'
#Time range of interest
# Data is segmented every 6 hours so hours must be one of [3, 9, 15, 21]
startTime = datetime.datetime(2017, 2, 1, 21, 0, 0)
stopTime = datetime.datetime(2017, 2, 10, 21, 0, 0)
#Geographical limits (in degrees as [min, max])
searchLatLimit = [-20, -10]
searchLonLimit = [-10, 10]
# Check that the required data is downloaded
DownloadData(startTime, stopTime, dataFolder, ftpServer, userName, passWord, dataLevels, ftpDataFolder)
def SearchLevel1bData(startTime, stopTime, searchLatLimit, searchLonLimit):
'''Function for searching through the MERRByS Level 1b data by time and location'''
#Generate a list of possible files in the range startTime to stopTime
dataList = FindFiles(startTime, stopTime)
#Initialising lists to ensure they are empty
dataNameList = []
trackNumberList = []
startTimeList = []
endTimeList = []
#Generate file input list for range
for entry in dataList:
entryFolder = FolderFromTimeStamp(entry)
#Create file path string
filePath = dataFolder + 'L1B\\' + entryFolder.replace('/','\\') + '\\metadata.nc'
try:
f = h5py.File(filePath, 'r')
except OSError as e:
#File does not exist
#As TDS-1 is run periodically, many time segment files are not populated
#print(e)
continue
print ('Reading file %s...' % entryFolder)
# Loop through all the tracks
trackNumber = 0
while True:
#Group name in NetCDF is 6 digit string
groupName = str(trackNumber).zfill(6)
try:
#Get data into numpy arrays
integrationMidPointTime = f['/' + groupName + '/IntegrationMidPointTime'][:]
specularPointLon = f['/' + groupName + '/SpecularPointLon'][:]
specularPointLat = f['/' + groupName + '/SpecularPointLat'][:]
except:
#End of data
break
#Initialise filter vector to all ones
acceptedData = np.ones(np.shape(integrationMidPointTime))
# Filter to geographic area
acceptedData = np.logical_and(acceptedData, np.logical_and(specularPointLat > searchLatLimit[0], specularPointLat < searchLatLimit[1]))
acceptedData = np.logical_and(acceptedData, np.logical_and(specularPointLon > searchLonLimit[0], specularPointLon < searchLonLimit[1]))
#Apply the filter
filtered_time = integrationMidPointTime[acceptedData]
# Add the data to list of matches
if filtered_time.size > 0:
dataNameList.append(entryFolder)
trackNumberList.append(trackNumber)
startOfIntersect = MatlabToPythonDateNum(np.min(filtered_time))
endOfIntersect = MatlabToPythonDateNum(np.max(filtered_time))
startTimeList.append(startOfIntersect)
endTimeList.append(endOfIntersect)
# Go to next track
trackNumber = trackNumber + 1
f.close()
return dataNameList, trackNumberList, startTimeList, endTimeList
# Run the search
dataNameList, trackNumberList, startTimeList, endTimeList = SearchLevel1bData(startTime, stopTime, searchLatLimit, searchLonLimit)
# Print out the search results
print("## Search results ##")
print("DataID,\t\t TrackNumber,\t Start Time,\t\t\t End Time")
for i in range(len(dataNameList)):
print(str(dataNameList[i]) + ",\t" + str(trackNumberList[i]) + ",\t\t" + str(startTimeList[i]) + ",\t" + str(endTimeList[i]))
###Output
Starting download
Complete. Got: 30 segments
Reading file 2017-02/02/H00...
Reading file 2017-02/02/H06...
Reading file 2017-02/02/H12...
Reading file 2017-02/08/H18...
Reading file 2017-02/09/H00...
Reading file 2017-02/09/H06...
Reading file 2017-02/09/H12...
Reading file 2017-02/09/H18...
Reading file 2017-02/10/H00...
Reading file 2017-02/10/H06...
Reading file 2017-02/10/H12...
## Search results ##
DataID, TrackNumber, Start Time, End Time
2017-02/02/H00, 76, 2017-02-02 00:12:01.999001, 2017-02-02 00:15:19.999003
2017-02/02/H00, 89, 2017-02-02 00:10:29.998997, 2017-02-02 00:13:29.998996
2017-02/02/H00, 90, 2017-02-02 00:10:48.998994, 2017-02-02 00:13:52.999002
2017-02/02/H00, 91, 2017-02-02 00:09:52.998996, 2017-02-02 00:11:41.998999
2017-02/02/H00, 146, 2017-02-02 01:48:41.999004, 2017-02-02 01:51:28.999003
2017-02/02/H12, 123, 2017-02-02 12:27:34.998999, 2017-02-02 12:30:35.999003
2017-02/02/H12, 126, 2017-02-02 12:28:27.999004, 2017-02-02 12:28:40.999004
2017-02/02/H12, 127, 2017-02-02 12:28:47.998996, 2017-02-02 12:31:56.998997
2017-02/09/H00, 117, 2017-02-09 00:47:44.998994, 2017-02-09 00:49:09.998996
2017-02/09/H00, 121, 2017-02-09 00:47:47.998998, 2017-02-09 00:49:09.998996
2017-02/09/H00, 123, 2017-02-09 00:45:56.998997, 2017-02-09 00:49:05.998998
2017-02/09/H00, 128, 2017-02-09 00:49:11.998995, 2017-02-09 00:50:52.999001
2017-02/09/H00, 129, 2017-02-09 00:50:24.999002, 2017-02-09 00:50:59.999003
2017-02/09/H12, 92, 2017-02-09 11:28:21.998999, 2017-02-09 11:29:50.998999
2017-02/09/H12, 136, 2017-02-09 13:06:38.998999, 2017-02-09 13:07:29.998994
2017-02/09/H12, 139, 2017-02-09 13:05:21.998994, 2017-02-09 13:05:55.999001
2017-02/10/H00, 161, 2017-02-10 01:06:34.999001, 2017-02-10 01:09:38.998999
2017-02/10/H00, 162, 2017-02-10 01:05:23.999004, 2017-02-10 01:08:29.999001
2017-02/10/H00, 163, 2017-02-10 01:06:11.998995, 2017-02-10 01:07:37.999001
2017-02/10/H00, 165, 2017-02-10 01:07:09.999002, 2017-02-10 01:10:29.999004
|
2021/ComputationalThinking/Part1_PythonForPreProcessing/.ipynb_checkpoints/Formatting-checkpoint.ipynb | ###Markdown
 Course: Computational Thinking for Governance Analytics
Prof. José Manuel Magallanes, PhD
* Visiting Professor of Computational Policy at Evans School of Public Policy and Governance, and eScience Institute Senior Data Science Fellow, University of Washington.
* Professor of Government and Political Methodology, Pontificia Universidad Católica del Perú.
_____
Data Preprocessing in Python: Data formatting
Case 1: CIA World Factbook
The CIA has an interesting website: https://www.cia.gov/library/publications/resources/the-world-factbook/
In this case, I will use the report of millions of megatons of carbon dioxide emitted per country available here: https://www.cia.gov/library/publications/resources/the-world-factbook/fields/274.html
Let me first **collect** the data:
###Code
import pandas as pd
link1="https://www.cia.gov/library/publications/resources/the-world-factbook/fields/274.html"
dataco2=pd.read_html(link1,header=0,attrs={'id': 'fieldListing'})
###Output
_____no_output_____
###Markdown
Verify how many elements came with the list:
###Code
type(dataco2), len(dataco2)
###Output
_____no_output_____
###Markdown
Recover data frame from the list:
###Code
cia=dataco2[0]
###Output
_____no_output_____
###Markdown
 The cleaning process
* __Checking first rows__ to see if header is in place:
###Code
cia.head()
###Output
_____no_output_____
###Markdown
* __Simplifying column names__ to facilitate further work:
###Code
# current columns:
cia.columns
# creating dictionary of changes:
OldToNew={cia.columns[0]:'countries',
cia.columns[1]:'co2'}
# making change happen:
cia.rename(columns=OldToNew,inplace=True)
# current situation
cia.head()
###Output
_____no_output_____
###Markdown
* __Checking last rows__:
###Code
cia.tail()
###Output
_____no_output_____
###Markdown
 * __Checking cell values__ to see if each cell has the right value:
**a**. Checking "countries": making sure there are no blanks in country names; this is a _preventive_ measure:
###Code
cia.countries=cia.countries.str.strip()
###Output
_____no_output_____
###Markdown
 **b**. Checking the second column:
**b.1**. Splitting each cell using a particular string of characters, so that the number and the unit remain.
###Code
# first look (notice the blank space before Mt)
cia.co2.str.split(pat=' Mt')
# improving first look: "expand" separates into columns
cia.co2.str.split(' Mt',expand=True)
# keeping the first element of the last result:
cia.co2.str.split(' Mt',expand=True)[0]
# Notice that the previous steps **HAVE NOT** done any changes. I have only displayed the results.
# Now I will replace the column:
result1=cia.co2.str.split(' Mt',expand=True)[0]
# assign can create or overwrite a column. Then, I use 'result1' here
cia=cia.assign(co2=result1)
# Current situation:
cia
###Output
_____no_output_____
###Markdown
**b.2.** Keep numeric value:
###Code
# \d+ one or more digits
# \.? with or without a dot
# \,? with or without a comma
# \d* with zero or more digits
cia.co2.str.extract('(\d+\,*\.*\d*)') #Notice the use of parentheses, they signal a *group* for Pandas:
###Output
_____no_output_____
###Markdown
 **b.3.** Keep the string representing the unit:
###Code
# a sequence of non digits after a space
# \s before \D+
cia.co2.str.extract('\s(\D+)')
## NOTE: Steps **2** and **3** can be done at once:
# simultaneously
cia.co2.str.extract('(\d+\,*\.*\d*)\s(\D+)') # Notice rows indexes **3** and **211**
# Solving previous issue by making the second group conditional (using *s).
cia.co2.str.extract('(\d+\,?\.?\d*)\s*(\D+)*')
# Pandas can give a **name** to the result with **?P < name >**:
cia.co2.str.extract('(?P<number>\d+\,*\.*\d*)\s*(?P<text>\D+)*')
# Notice you have a data frame, let's save it:
result2=cia.co2.str.extract('(?P<number>\d+\,*\.*\d*)\s*(?P<text>\D+)*')
# And let's use the columns of these new data frame:
cia=cia.assign(value=result2.number,
unit=result2.text)
# Current situation:
cia.head()
###Output
_____no_output_____
###Markdown
**b.4.** Delete symbols in numeric data that could be troublesome in future operations:
###Code
# the number have commas, let's get rid of those:
cia.value=cia.value.str.replace(",","")
###Output
_____no_output_____
###Markdown
 **b.5.** Replace the text in the **units** column with numbers:
###Code
# Check what you have:
cia.unit.value_counts(dropna=False)
# create dictionary for replacements:
replacements={'million': 10**6, "billion": 10**9,None:10**0}
# take a loook at the result:
cia.unit.replace(replacements)
# make it happen
cia.unit.replace(replacements,inplace=True)
#Current situation:
cia.head()
###Output
_____no_output_____
###Markdown
**b.6**. Some housekeeping: We do not need the old CO2 column anymore:
###Code
# when using 'columns=' or 'index=', axis not needed
# when using 'labels' axis is needed
cia.drop(columns='co2',inplace=True)
###Output
_____no_output_____
###Markdown
 FORMATTING
Formatting numeric columns
Formatting makes sure the data can go into statistical work. So the first step is to detect the data types:
###Code
cia.dtypes
###Output
_____no_output_____
###Markdown
If you request statistics, you only get:
###Code
cia.describe()
###Output
_____no_output_____
###Markdown
The column unit is already a number, because during the cleaning process we created it like that. However, the column _value_ is still text. We can turn it into a numeric one like this:
###Code
pd.to_numeric(cia.value)
###Output
_____no_output_____
###Markdown
Then, let's turn the previous result into a real change:
###Code
cia=cia.assign(value=pd.to_numeric(cia.value))
###Output
_____no_output_____
###Markdown
This should look the same as before:
###Code
cia.head()
###Output
_____no_output_____
###Markdown
But, it is different:
###Code
cia.dtypes
###Output
_____no_output_____
###Markdown
Then, you may get more statistics:
###Code
cia.describe()
###Output
_____no_output_____
###Markdown
We can now multiply both columns, as each has numbers:
###Code
#previous result:
cia.value*cia.unit
###Output
_____no_output_____
###Markdown
That result should be our new CO2:
###Code
cia=cia.assign(co2_in_MT=cia.value*cia.unit)
# current situation:
cia.head()
###Output
_____no_output_____
###Markdown
Let's get rid of the second and third column:
###Code
# you want this:
cia.drop(columns=['value','unit'])
###Output
_____no_output_____
###Markdown
Let's make the changes:
###Code
cia.drop(columns=['value','unit'],inplace=True)
###Output
_____no_output_____
###Markdown
 The **cia** data frame is clean and formatted.
_______
Case 2: Democracy Index from Wikipedia
###Code
demoLink = "https://en.wikipedia.org/wiki/Democracy_Index"
# getting the data frame in one step:
demodex=pd.read_html(demoLink,header=0,flavor='bs4',attrs={'class': 'wikitable sortable'})[0]
###Output
_____no_output_____
###Markdown
The cleaning process * __Checking first rows__ to see if header is in place:
###Code
demodex.head(10)
###Output
_____no_output_____
###Markdown
* __Checking last rows__:
###Code
demodex.tail(10)
###Output
_____no_output_____
###Markdown
 The last row must go; let me erase the **Rank** and **Score** columns at the same time:
###Code
#bye row 167, and Rank
demodex=demodex.drop(index=167,columns=['Rank','Score'])
###Output
_____no_output_____
###Markdown
* __Simplifying column names__ to facilitate further work:
###Code
demodex.columns
pattern='\s+'
replacement=""
demodex.columns=demodex.columns.str.replace(pattern,replacement)
# current situation:
demodex
###Output
_____no_output_____
###Markdown
 * __Checking cell values__ to see if each cell has the right value:
**a.** Let me see if we have some strange value in the numeric columns:
###Code
# this is a preventive step!!
badSymbols=[]
NumericColNames=demodex.iloc[:,1:6].columns
for columnName in NumericColNames:
for cell in demodex[columnName]:
try:
float(cell)
except:
if cell not in badSymbols:
badSymbols.append(cell)
###Output
_____no_output_____
###Markdown
This is a preventive cleaning:
###Code
import numpy as np
# notice use of loc
demodex.loc[:,NumericColNames].replace(to_replace=badSymbols,value=np.nan,inplace=True)
###Output
_____no_output_____
###Markdown
 Since the list is empty, the cell values of the numerical columns are clean.
**b.** Let me see if we have some strange value in the categorical columns:
###Code
demodex.iloc[:,-2::].apply(set).to_list()
###Output
_____no_output_____
###Markdown
 No problem there either. It looks good so far. Let's go to formatting.
FORMATTING
###Code
# checking data types:
demodex.dtypes
###Output
_____no_output_____
###Markdown
 Formatting numeric columns
Above, we realized the need to make some indices numeric. Let's follow these steps:
###Code
# save column names of the columns to change:
colsToChange=demodex.iloc[:,1:6].columns
# make changes NOT using iloc:
demodex[colsToChange]=demodex[colsToChange].apply(pd.to_numeric)
###Output
_____no_output_____
###Markdown
 Formatting categorical columns
The *Continent* is a **NOMINAL** column:
###Code
demodex.Continent=pd.Categorical(demodex.Continent)
###Output
_____no_output_____
###Markdown
The *Regimetype* is an **ORDINAL** column:
###Code
# check the levels:
pd.unique(demodex.Regimetype).tolist()
#rewrite the levels in order:
correctLevels=['Authoritarian', 'Hybrid regime', 'Flawed democracy','Full democracy']
#format as ordinal:
demodex.Regimetype=pd.Categorical(demodex.Regimetype,categories=correctLevels,ordered=True)
###Output
_____no_output_____
###Markdown
The data types have changed:
###Code
#then
demodex.dtypes
###Output
_____no_output_____
###Markdown
Regime type is a category, but ordinal:
###Code
demodex.Regimetype
###Output
_____no_output_____ |
_notebooks/2019-09-17-folium-India.ipynb | ###Markdown
 Exploring India using Folium
> Folium is a powerful Python library that helps you create several types of Leaflet maps.
- toc: true
- branch: master
- badges: true
- comments: true
- categories: [folium, visualization]
###Code
import folium
print('Folium imported.')
###Output
Folium imported.
###Markdown
--- World MapGenerate a World Map.
###Code
world_map = folium.Map()
world_map
###Output
_____no_output_____
###Markdown
--- Map of IndiaAll locations on a map are defined by their respective latitude and longitude values. So you can create a map and pass in a center of Latitude and Longitude values of [0,0]. For a defined center, you can also define the initial zoom level into that location when the map is rendered. The higher the zoom level, the more the map is zoomed into the center.Let's create a map centered around India.
###Code
india_map = folium.Map(location=[21.7679, 78.8718], zoom_start=5)
india_map
###Output
_____no_output_____
###Markdown
Let's play with the zoom level and focus on the city of **Mumbai**.
###Code
mumbai_latitude = 19.0760
mumbai_longitude = 72.8777
mumbai_map = folium.Map(location=[mumbai_latitude, mumbai_longitude], zoom_start=11)
mumbai_map
###Output
_____no_output_____
###Markdown
Let's zoom in further to the **Gateway of India**.
###Code
mumbai_map = folium.Map(location=[18.9220, 72.8347], zoom_start=17)
mumbai_map
###Output
_____no_output_____
###Markdown
 ---
Stamen Toner Map
These are high contrast, black and white maps that can be used for exploring coastal zones and river meanders.
Every 12 years, millions of people are drawn to a spiritual festival in India, the Maha Kumbh Mela, at a place called the Triveni Sangam. It’s the largest gathering of humanity on Earth: over a three-month period, an estimated 100 million people attended the most recent festival, in 2013. The Triveni Sangam, where the festival takes place, is the meeting place of three rivers: Ganga, Yamuna, and Saraswati. However, the third river, the Saraswati, is a mythical river, which supposedly dried up many millennia ago. Nevertheless, the Triveni Sangam is to this day referred to as the meeting place of the three rivers. There is a belief that the Saraswati river flows underneath the surface.
Let's create a Stamen Toner map of the area.
###Code
sangam_map = folium.Map(location=[25.4224, 81.8866], zoom_start=9, tiles='Stamen Toner')
sangam_map
###Output
_____no_output_____
###Markdown
 ---
Stamen Terrain Maps
These are maps that feature hill shading and natural vegetation colors. They showcase advanced labeling and linework generalization of dual-carriageway roads.
Jammu and Kashmir, a union territory of India, is one of the most beautiful places in the world, consisting of mountain ranges, valleys and rivers. Let's explore the terrain in the area.
###Code
kashmir_map = folium.Map(location=[33.7782, 76.5762], zoom_start=8, tiles='Stamen Terrain')
kashmir_map
###Output
_____no_output_____ |
example_to_sql/to_sql_demo_taxi.ipynb | ###Markdown
 Performance showcase of the added "to_sql" functionality in mlinspect
Here the performance of the proposed inspection using SQL will be compared to the original one in pandas. Part of the "healthcare" and "compas" pipeline will be used.
Required packages:
See: requirements/requirements.txt and requirements/requirements.dev.txt
Some parameters you might want to set:
###Code
import os
import sys
import time
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
from inspect import cleandoc
from mlinspect.utils import get_project_root
from mlinspect import PipelineInspector, OperatorType
from mlinspect.inspections import HistogramForColumns, RowLineage, MaterializeFirstOutputRows
from mlinspect.checks import NoBiasIntroducedFor, NoIllegalFeatures
from demo.feature_overview.no_missing_embeddings import NoMissingEmbeddings
from example_pipelines.healthcare import custom_monkeypatching
from mlinspect.to_sql.dbms_connectors.postgresql_connector import PostgresqlConnector
from mlinspect.to_sql.dbms_connectors.umbra_connector import UmbraConnector
# DBMS related:
UMBRA_USER = "postgres"
UMBRA_PW = ""
UMBRA_DB = ""
UMBRA_PORT = 5433
UMBRA_HOST = "/tmp/"
POSTGRES_USER = "luca"
POSTGRES_PW = "password"
POSTGRES_DB = "healthcare_benchmark"
POSTGRES_PORT = 5432
POSTGRES_HOST = "localhost"
pipe = cleandoc("""
import warnings
import os
import pandas as pd
from sklearn.pipeline import Pipeline
from mlinspect.utils import get_project_root
taxi = pd.read_csv(
os.path.join( str(get_project_root()), "example_pipelines", "taxi", "yellow_tripdata_202101_head.csv"),
na_values='?')
taxi = taxi[(taxi['passenger_count']>=1)]
""")
###Output
_____no_output_____
###Markdown
Benchmark setup:
###Code
def run_inspection(code, bias, to_sql, dbms_connector=None, mode=None, materialize=None):
from PIL import Image
import matplotlib.pyplot as plt
from mlinspect.visualisation import save_fig_to_path
inspector_result = PipelineInspector \
.on_pipeline_from_string(code) \
.add_custom_monkey_patching_module(custom_monkeypatching) \
.add_check(NoBiasIntroducedFor(bias))
if to_sql:
inspector_result = inspector_result.execute_in_sql(dbms_connector=dbms_connector, mode=mode,
materialize=materialize)
else:
inspector_result = inspector_result.execute()
check_results = inspector_result.check_to_check_results
no_bias_check_result = check_results[NoBiasIntroducedFor(bias)]
distribution_changes_overview_df = NoBiasIntroducedFor.get_distribution_changes_overview_as_df(
no_bias_check_result)
result = ""
result += distribution_changes_overview_df.to_markdown()
for i in list(no_bias_check_result.bias_distribution_change.items()):
_, join_distribution_changes = i
for column, distribution_change in join_distribution_changes.items():
result += "\n"
result += f"\033[1m Column '{column}'\033[0m"
result += distribution_change.before_and_after_df.to_markdown()
return result
###Output
_____no_output_____
###Markdown
Benchmark of default inspection using CTEs:
###Code
dbms_connector_u = UmbraConnector(dbname=UMBRA_DB, user=UMBRA_USER, password=UMBRA_PW,
port=UMBRA_PORT, host=UMBRA_HOST, add_mlinspect_serial=False)
dbms_connector_p = PostgresqlConnector(dbname=POSTGRES_DB, user=POSTGRES_USER, password=POSTGRES_PW,
port=POSTGRES_PORT, host=POSTGRES_HOST)
def run_for_all(code, bias):
t0 = time.time()
#run_inspection(code=code, bias=bias, to_sql=False)
t1 = time.time()
print("\nOriginal: " + str(t1 - t0))
t0 = time.time()
run_inspection(code=code, bias=bias, to_sql=True, dbms_connector=dbms_connector_p, mode="VIEW",
materialize=None)
t1 = time.time()
print("\nPostgreSQL View: " + str(t1 - t0))
t0 = time.time()
run_inspection(code=code, bias=bias, to_sql=True, dbms_connector=dbms_connector_p, mode="VIEW",
materialize=True)
t1 = time.time()
print("\nPostgreSQL Materialized View: " + str(t1 - t0))
t0 = time.time()
run_inspection(code=code, bias=bias, to_sql=True, dbms_connector=dbms_connector_u, mode="VIEW",
materialize=None)
t1 = time.time()
print("\nUmbra View: " + str(t1 - t0))
t0 = time.time()
run_inspection(code=code, bias=bias, to_sql=True, dbms_connector=dbms_connector_p, mode="CTE",
materialize=None)
t1 = time.time()
print("\nPostgreSQL CTE: " + str(t1 - t0))
t0 = time.time()
run_inspection(code=code, bias=bias, to_sql=True, dbms_connector=dbms_connector_u, mode="CTE",
materialize=None)
t1 = time.time()
print("\nUmbra CTE: " + str(t1 - t0))
###Output
_____no_output_____
###Markdown
 End-to-end example of the preprocessing-pipeline inspection + model training:
Slightly different inspection results are expected because of the random split. Still, the resulting model accuracy should be similar.
###Code
run_for_all(pipe, ['passenger_count'])
run_for_all(pipe, ['passenger_count','trip_distance'])
run_for_all(pipe, ['passenger_count','trip_distance','PULocationID'])
run_for_all(pipe, ['passenger_count','trip_distance','PULocationID','DOLocationID'])
run_for_all(pipe, ['passenger_count','trip_distance','PULocationID','DOLocationID','payment_type'])
###Output
Original: 2.384185791015625e-07
PostgreSQL View: 8.537209272384644
PostgreSQL Materialized View: 10.996732950210571
Umbra View: 2.854299306869507
PostgreSQL CTE: 18.561489820480347
Umbra CTE: 3.0941214561462402
|
notebooks/examples/binned_scatterplot.ipynb | ###Markdown
 Binned Scatterplot
------------------
This example shows how to make a binned scatterplot.
###Code
import altair as alt
alt.data_transformers.enable('json')
from vega_datasets import data
source = data.movies.url
alt.Chart(source).mark_circle().encode(
alt.X('IMDB_Rating:Q', bin=True),
alt.Y('Rotten_Tomatoes_Rating:Q', bin=True),
size='count()'
)
###Output
_____no_output_____ |
Project 2/.ipynb_checkpoints/cs188_project2-Blank-checkpoint.ipynb | ###Markdown
 CS188 Project 2 - Binary Classification Comparative Methods
For this project we're going to attempt a binary classification of a dataset using multiple methods and compare results. Our goals for this project will be to introduce you to several of the most common classification techniques, how to perform them and tweak parameters to optimize outcomes, how to produce and interpret results, and how to compare performance. You will be asked to analyze your findings and provide explanations for observed performance. Specifically you will be asked to classify whether a patient is suffering from heart disease based on a host of potential medical factors.
DEFINITIONS
Binary Classification: In this case a complex dataset has an added 'target' label with one of two options. Your learning algorithm will try to assign one of these labels to the data.
Supervised Learning: This data is fully supervised, which means it's been fully labeled and we can trust the veracity of the labeling.
Background: The Dataset
For this exercise we will be using a subset of the UCI Heart Disease dataset, leveraging the fourteen most commonly used attributes. All identifying information about the patient has been scrubbed. The dataset includes 14 columns. The information provided by each column is as follows:
 age: Age in years
 sex: (1 = male; 0 = female)
 cp: Chest pain type (0 = asymptomatic; 1 = atypical angina; 2 = non-anginal pain; 3 = typical angina)
 trestbps: Resting blood pressure (in mm Hg on admission to the hospital)
 chol: Serum cholesterol in mg/dl
 fbs: Fasting blood sugar > 120 mg/dl (1 = true; 0 = false)
 restecg: Resting electrocardiographic results (0 = showing probable or definite left ventricular hypertrophy by Estes' criteria; 1 = normal; 2 = having ST-T wave abnormality (T wave inversions and/or ST elevation or depression of > 0.05 mV))
 thalach: Maximum heart rate achieved
 exang: Exercise induced angina (1 = yes; 0 = no)
 oldpeak: ST depression induced by exercise relative to rest
 slope: The slope of the peak exercise ST segment (0 = downsloping; 1 = flat; 2 = upsloping)
 ca: Number of major vessels (0-3) colored by fluoroscopy
 thal: 1 = normal; 2 = fixed defect; 7 = reversible defect
 Sick: Indicates the presence of heart disease (True = Disease; False = No disease)
Loading Essentials and Helper Functions
###Code
#Here are a set of libraries we imported to complete this assignment.
#Feel free to use these or equivalent libraries for your implementation
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt # this is used for the plot the graph
import os
import seaborn as sns # used for plot interactive graph.
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV
from sklearn import metrics
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans
from sklearn.metrics import confusion_matrix
import sklearn.metrics.cluster as smc
from sklearn.model_selection import KFold
from matplotlib import pyplot
import itertools
%matplotlib inline
import random
random.seed(42)
# Helper function allowing you to export a graph
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
# Helper function that allows you to draw nicely formatted confusion matrices
def draw_confusion_matrix(y, yhat, classes):
'''
Draws a confusion matrix for the given target and predictions
Adapted from scikit-learn and discussion example.
'''
plt.cla()
plt.clf()
matrix = confusion_matrix(y, yhat)
plt.imshow(matrix, interpolation='nearest', cmap=plt.cm.Blues)
plt.title("Confusion Matrix")
plt.colorbar()
num_classes = len(classes)
plt.xticks(np.arange(num_classes), classes, rotation=90)
plt.yticks(np.arange(num_classes), classes)
fmt = 'd'
thresh = matrix.max() / 2.
for i, j in itertools.product(range(matrix.shape[0]), range(matrix.shape[1])):
plt.text(j, i, format(matrix[i, j], fmt),
horizontalalignment="center",
color="white" if matrix[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
 [20 Points] Part 1. Load the Data and Analyze
Let's first load our dataset so we'll be able to work with it. (Correct the relative path if your notebook is in a different directory than the csv file.)
Question 1.1 Now that our data is loaded, let's take a closer look at the dataset we're working with. Use the head method to display some of the rows so we can visualize the types of data fields we'll be working with, then use the describe method, along with any additional methods you'd like to call to better help you understand what you're working with and what issues you might face.
Question 1.2 Discuss your data preprocessing strategy. Are there any datafield types that are problematic and why? Will there be any null values you will have to impute and how do you intend to do so? Finally, for your numeric and categorical features, what, if any, additional preprocessing steps will you take on those data elements? [Use this area to discuss your data processing strategy]
Question 1.3 Before we begin our analysis we need to fix the field(s) that will be problematic. Specifically, convert our boolean sick variable into a binary numeric target variable (values of either '0' or '1'), and then drop the original sick datafield from the dataframe.
Question 1.4 Now that we have a feel for the data-types for each of the variables, plot histograms of each field and attempt to ascertain how each variable performs (is it binary, a limited selection, or does it follow a gradient?). (Note: No need to describe each variable, but pick out a few you wish to highlight.)
Question 1.5 We also want to make sure we are dealing with a balanced dataset. In this case, we want to confirm whether or not we have an equitable number of sick and healthy individuals to ensure that our classifier will have a sufficiently balanced dataset to adequately classify the two. Plot a histogram specifically of the sick target, and conduct a count of the number of sick and healthy individuals and report on the results: [Include description of findings here]
Question 1.6 Balanced datasets are important to ensure that classifiers train adequately and don't overfit; however, arbitrary balancing of a dataset might introduce its own issues. Discuss some of the problems that might arise by artificially balancing a dataset. [Discuss problem here]
Question 1.9 Now that we have our dataframe prepared let's start analyzing our data. For this next question let's look at the correlations of our variables to our target value. First, map out the correlations between the values, and then discuss the relationships you observe. Do some research on the variables to understand why they may relate to the observed correlations. Intuitively, why do you think some variables correlate more highly than others? (Hint: one possible approach is to use the sns heatmap function on the output of the corr() method.) [Discuss correlations here]
[30 Points] Part 2. Prepare the Data
Before running our various learning methods, we need to do some additional prep to finalize our data. Specifically you'll have to cut the classification target from the data that will be used to classify, and then you'll have to divide the dataset into training and testing cohorts. Specifically, we're going to ask you to prepare 2 batches of data: one will simply be the raw numeric data that hasn't gone through any additional pre-processing; the other will be data that you pipeline using your own selected methods.
We will then feed both of these datasets into a classifier to showcase just how important this step can be! Question 2.1 Save the target column as a separate array and then drop it from the dataframe. Question 2.2 First Create your 'Raw' unprocessed training data by dividing your dataframe into training and testing cohorts, with your training cohort consisting of 70% of your total dataframe (hint: use the train_test_split method) Output the resulting shapes of your training and testing samples to confirm that your split was successful. Question 2.3 Now create a pipeline to conduct any additional preparation of the data you would like. Output the resulting array to ensure it was processed correctly. Question 2.4 Now create a separate, processed training data set by dividing your processed dataframe into training and testing cohorts, using the same settings as Q2.2 (REMEMBER TO USE DIFFERENT TRAINING AND TESTING VARIABLES SO AS NOT TO OVERWRITE YOUR PREVIOUS DATA). Output the resulting shapes of your training and testing samples to confirm that your split was successful, and describe what differences there are between your two training datasets. [What differences are there between these two datasets?] [50 Points] Part 3. Learning Methods We're finally ready to actually begin classifying our data. To do so we'll employ multiple learning methods and compare result. Linear Decision Boundary Methods SVM (Support Vector Machine) A Support Vector Machine (SVM) is a discriminative classifier formally defined by a separating hyperplane. In other words, given labeled training data (supervised learning), the algorithm outputs an optimal hyperplane which categorizes new examples. In two dimentional space this hyperplane is a line dividing a plane in two parts where in each class lay in either side. Question 3.1.1 Implement a Support Vector Machine classifier on your RAW dataset. Review the [SVM Documentation](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) for how to implement a model. For this implementation you can simply use the default settings, but set probability = True.
###Code
# SVM
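# --- A minimal illustrative sketch (not the official solution) for Q3.1.1. ---
# It assumes the raw 70/30 split from Q2.2 was stored in the hypothetical names
# X_train_raw, X_test_raw, y_train, y_test; rename to match your own variables.
from sklearn.svm import SVC

svm_raw = SVC(probability=True)            # default settings, probability=True as requested
svm_raw.fit(X_train_raw, y_train)          # train on the unprocessed features
y_pred_raw = svm_raw.predict(X_test_raw)   # predictions reused for the metrics in Q3.1.2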
###Output
_____no_output_____
###Markdown
Question 3.1.2 Report the accuracy, precision, recall, F1 score, and confusion matrix of the resulting model.

Question 3.1.3 Discuss what each measure is reporting, why they are different, and why each of these measures is significant. Explore why we might choose to evaluate the performance of differing models differently based on these factors. Try to give some specific examples of scenarios in which you might value one of these measures over the others. [Provide explanation for each measure here]

Question 3.1.4 Plot a Receiver Operating Characteristic curve, or ROC curve, and describe what it is and what the results indicate. [Describe what an ROC Curve is and what the results mean here]

Question 3.1.5 Rerun, using the exact same settings, only this time use your processed data as inputs.

Question 3.1.6 Report the accuracy, precision, recall, F1 score, confusion matrix, and plot the ROC curve of the resulting model.

Question 3.1.7 Hopefully you've noticed a dramatic change in performance. Discuss why you think your new data has had such a dramatic impact. [Provide explanation here]

Question 3.1.8 Rerun your SVM, but now modify your model parameter kernel to equal 'linear'. Again report your accuracy, precision, recall, F1 scores, and confusion matrix, and plot the new ROC curve.
###Code
# SVM
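# --- Hedged sketch for Q3.1.8 (linear kernel) plus the requested metrics/ROC. ---
# The names X_train_raw, X_test_raw, y_train, y_test are assumptions carried over
# from the sketch above; substitute whatever variables you actually used.
import matplotlib.pyplot as plt
from sklearn.svm import SVC
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix, roc_curve, auc)

svm_linear = SVC(kernel='linear', probability=True)
svm_linear.fit(X_train_raw, y_train)
y_pred_lin = svm_linear.predict(X_test_raw)

print("Accuracy :", accuracy_score(y_test, y_pred_lin))
print("Precision:", precision_score(y_test, y_pred_lin))
print("Recall   :", recall_score(y_test, y_pred_lin))
print("F1 score :", f1_score(y_test, y_pred_lin))
print(confusion_matrix(y_test, y_pred_lin))

# ROC curve from the positive-class probabilities
probs = svm_linear.predict_proba(X_test_raw)[:, 1]
fpr, tpr, _ = roc_curve(y_test, probs)
plt.plot(fpr, tpr, label="AUC = %.3f" % auc(fpr, tpr))
plt.plot([0, 1], [0, 1], '--')
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.legend()
plt.show()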
###Output
_____no_output_____
###Markdown
Question 3.1.9 Explain what the new results you've achieved mean. Read the documentation to understand what you've changed about your model and explain why changing that input parameter might impact the results in the manner you've observed. [Provide explanation here]

Logistic Regression

Knowing that we're dealing with a linearly configured dataset, let's now try another classifier that's well known for handling linear models: Logistic Regression. Logistic regression is a statistical model that in its basic form uses a logistic function to model a binary dependent variable.

Question 3.2.1 Implement a Logistic Regression classifier. Review the [Logistic Regression Documentation](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) for how to implement the model. For this initial model set solver = 'sag' and max_iter = 10. Report on the same four metrics as the SVM and graph the resulting ROC curve.
###Code
# Logistic Regression
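# --- Minimal sketch for Q3.2.1 (again assuming X_train_raw/X_test_raw/y_train/y_test). ---
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

logreg = LogisticRegression(solver='sag', max_iter=10)   # settings requested by the question
logreg.fit(X_train_raw, y_train)                         # expect a ConvergenceWarning here
y_pred_lr = logreg.predict(X_test_raw)
print(accuracy_score(y_test, y_pred_lr), precision_score(y_test, y_pred_lr),
      recall_score(y_test, y_pred_lr), f1_score(y_test, y_pred_lr))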
###Output
_____no_output_____
###Markdown
Question 3.2.2 Did you notice that when you ran the previous model you got the following warning: "ConvergenceWarning: The max_iter was reached which means the coef_ did not converge". Check the documentation and see if you can implement a fix for this problem, and again report your results.
###Code
# Logistic Regression
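# --- One possible fix for the ConvergenceWarning (a sketch, not the only answer): ---
# give the 'sag' solver far more iterations; standardizing the features also helps
# gradient-based solvers converge. The variable names here are still assumptions.
from sklearn.linear_model import LogisticRegression

logreg_fixed = LogisticRegression(solver='sag', max_iter=10000)
logreg_fixed.fit(X_train_raw, y_train)
y_pred_fixed = logreg_fixed.predict(X_test_raw)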
###Output
_____no_output_____
###Markdown
Question 3.2.3 Explain what you changed, and why that produced an improved outcome. [Provide explanation here]

Question 3.2.4 Rerun your logistic classifier, but this time set penalty = 'none' and solver = 'sag', and again report the results.
###Code
# Logistic Regression
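# --- Sketch for Q3.2.4: unpenalized logistic regression with the 'sag' solver. ---
# penalty='none' is the spelling the question uses (older scikit-learn releases);
# newer releases spell it penalty=None. Variable names remain assumptions.
from sklearn.linear_model import LogisticRegression

logreg_nopen = LogisticRegression(penalty='none', solver='sag', max_iter=10000)
logreg_nopen.fit(X_train_raw, y_train)
y_pred_nopen = logreg_nopen.predict(X_test_raw)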
###Output
_____no_output_____
###Markdown
Question 3.2.5 Explain what the penalty parameter is doing in this function, what the solver method is, and why this combination likely produced a more optimal outcome. [Provide explanation here]

Question 3.2.6 Both logistic regression and linear SVM try to classify data points using a linear decision boundary, so what's the difference between the ways they find this boundary? [Provide Answer here:]

Clustering Approaches

Let us now try a different approach to classification using a clustering algorithm. Specifically, we're going to be using K-Nearest Neighbor, one of the most popular clustering approaches.

K-Nearest Neighbor

Question 3.3.1 Implement a K-Nearest Neighbor algorithm on our data and report the results. For this initial implementation simply use the default settings. Refer to the [KNN Documentation](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html) for details on implementation. Report on the accuracy of the resulting model.
###Code
# k-Nearest Neighbors algorithm
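# --- Default KNeighborsClassifier for Q3.3.1 (same assumed variable names as above). ---
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

knn = KNeighborsClassifier()                 # defaults, i.e. n_neighbors=5
knn.fit(X_train_raw, y_train)
print("KNN accuracy:", accuracy_score(y_test, knn.predict(X_test_raw)))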
###Output
_____no_output_____ |
Python basics practice/Python 3 (19)/Help Yourself with Methods - Exercise_Py3.ipynb | ###Markdown
Help Yourself with Methods Append the number 100 to the Numbers list.
###Code
Numbers = [15, 40, 50]
Numbers.append(100)
Numbers
###Output
_____no_output_____
###Markdown
With the help of the "extend method", add the numbers 115 and 140 to the list.
###Code
Numbers.extend([115, 140])
Numbers
###Output
_____no_output_____
###Markdown
Print a statement, saying "The fourth element of the Numbers list is:" and then designate the value of the fourth element. Use a trailing comma.
###Code
print("The fourth element of the Numbers list is:" , Numbers[3])
###Output
The fourth element of the Numbers list is: [100]
###Markdown
How many elements are there in the Numbers list?
###Code
len(Numbers)
###Output
_____no_output_____ |
notebooks/debug/Jamshidian.ipynb | ###Markdown
Read IRSM FORM
###Code
main_curve, sprds = xml_parser.get_rate_curves(INPUT_5SWO)
dsc_curve = main_curve
try:
estim_curve = sprds[0]
except TypeError:
estim_curve = main_curve
cal_basket = list(xml_parser.get_calib_basket(INPUT_5SWO))
###Output
_____no_output_____
###Markdown
READ IRSM OUT
###Code
_, irsmout = xml_parser.get_xml(OUTPUT_5SWO)
ref_swos = list(xml_parser.get_calib_basket(irsmout))
ref_mr, (hw_buckets, hw_sigma) = xml_parser.get_hw_params(irsmout)
ref_sigmas = rates.Curve(hw_buckets, hw_sigma, 'PieceWise')
###Output
_____no_output_____
###Markdown
Jamshidian pricer with ref sigma (Hernard)
###Code
calib_premiumsJ = []
debug_df = pd.DataFrame()
swo = cal_basket[2]
jamsh_price, debug = Jamshidian.hw_swo(swo, ref_mr, ref_sigmas, dsc_curve, estim_curve)
debug_df = pd.concat([debug_df, pd.DataFrame(data=debug)], sort=False)
calib_premiumsJ.append(jamsh_price)
a = ref_mr
sigma = ref_sigmas
IsCall = False if swo.pay_rec == 'Receiver' else True
coef = Jamshidian.get_coef(swo, a, sigma, dsc_curve, estim_curve)
b_i = Jamshidian.get_b_i(swo, a)
varx = Jamshidian.get_var_x(swo.expiry, a, sigma)
sgn_changes = hw_helper.sign_changes(coef)
x_star = Jamshidian.get_x_star(coef, b_i, varx)
calib_premiumsJ[0], Jamshidian.hw_swo_analytic(coef, b_i, varx, x_star, IsCall)
###Output
(0.015277326631131571, 0.015277326631131571)
rawdata/SN2014JLightCurve.ipynb | ###Markdown
Create a data file for the SN2014J light curve, using the [AAVSO light curve](https://www.aavso.org/lcg/plot?auid=000-BLG-310&starname=SN+2014J&lastdays=30&start=01/21/2014&stop=02/21/2014&obscode=&obscode_symbol=2&obstotals=yes&calendar=calendar&forcetics=&grid=on&visual=on&uband=on&v=on&pointsize=1&width=1200&height=450&mag1=&mag2=&mean=&vmean=) and [WebPlotDigitizer](https://automeris.io/WebPlotDigitizer/)
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv('SN2014JAAVSO.csv')
df
f,ax = plt.subplots()
ax.plot(df['days'],df['mag'],'.')
ax.set_ylim(12.2, 10.4)
###Output
_____no_output_____
###Markdown
Try equation 7 from [this paper](https://arxiv.org/abs/1612.02097)
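For reference, the functional form coded in the next cell (a restatement of that code, using the same symbols as the `SNIaLC` arguments; the paper's own notation may differ cosmetically) is
$$L(t) = A\left(\frac{t-t_0}{t_b}\right)^{2(a_1+1)}\left[1 + \left(\frac{t-t_0}{t_b}\right)^{s(a_1-a_2)}\right]^{-2/s}.$$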
###Code
def SNIaLC(t, A, t0, tb, a1, a2, s):
ar = 2.*(a1 + 1.)
ad = a1 - a2
tfac = (t - t0)/tb
return A * tfac**ar * (1. + tfac**(s*ad))**(-2./s)
tval = np.linspace(0, 30, 10000)
#fit by eye
A = 1
t0 = -2
tb = 13.
a1 = 0.1
a2 = -2.2
s = 0.6
mag0 = 8
lum = SNIaLC(tval, A, t0, tb, a1, a2, s)
mag = -2.5*np.log10(lum) + mag0
f, (ax1, ax2) = plt.subplots(2,1)
ax1.plot(df['days'],df['mag'],'.')
ax1.plot(tval, mag)
#plt.gca().invert_yaxis()
ax1.set_ylim(12.2, 10.4)
ax2.plot(tval, lum)
print(np.max(lum))
print(tval[np.argmax(lum)])
#dlum/dt initially
x1 = np.argmin(np.abs(tval - 4))
x2 = np.argmin(np.abs(tval - 6))
print(tval[x1], tval[x2], lum[x1], lum[x2])
print((lum[x2] - lum[x1])/(tval[x2] - tval[x1]))
#try to shift the light curve so the peak is at a specified time (rather than specifying t0)
tval = np.linspace(1885, 2020, 10000)
tb = 20
t0 = 1940
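# Peak offset: setting dL/dt = 0 for the SNIaLC form above gives
# (t_peak - t0)/tb = [-(a1 + 1)/(a2 + 1)]**(1/(s*(a1 - a2))),
# so tp below is that offset in time units, and passing (t0 - tp) as the model's
# t0 parameter puts the maximum of the curve at the chosen t0 (here 1940).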
tp = tb*(-1.*(a1 + 1.)/(a2 + 1.))**(1./(s*(a1 - a2)))
print(tp)
lum2 = SNIaLC(tval, 1., t0 - tp, tb, a1, a2, s)
f, ax = plt.subplots()
ax.plot(tval, lum2)
ax.plot([t0, t0], [0, np.nanmax(lum2)])
ax.set_xlim(1885, 2020)
###Output
18.777898209646118
|
Dataset Imbalance/Crossentropy Loss.ipynb | ###Markdown
Mount my google drive, where I stored the dataset.
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly
Enter your authorization code:
··········
Mounted at /content/drive
###Markdown
**Download dependencies**
###Code
!pip3 install sklearn matplotlib GPUtil
!pip3 install "pillow<7"
!pip3 install torch==1.3.1+cu92 torchvision==0.4.2+cu92 -f https://download.pytorch.org/whl/torch_stable.html
###Output
Looking in links: https://download.pytorch.org/whl/torch_stable.html
Collecting torch==1.3.1+cu92
  Downloading https://download.pytorch.org/whl/cu92/torch-1.3.1%2Bcu92-cp36-cp36m-linux_x86_64.whl (621.4MB)
     |████████████████████████████████| 621.4MB 35kB/s
Collecting torchvision==0.4.2+cu92
  Downloading https://download.pytorch.org/whl/cu92/torchvision-0.4.2%2Bcu92-cp36-cp36m-linux_x86_64.whl (10.1MB)
     |████████████████████████████████| 10.1MB 497kB/s
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torch==1.3.1+cu92) (1.18.1)
Requirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.6/dist-packages (from torchvision==0.4.2+cu92) (6.2.2)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from torchvision==0.4.2+cu92) (1.13.0)
Installing collected packages: torch, torchvision
Successfully installed torch-1.3.1+cu92 torchvision-0.4.2+cu92
###Markdown
**Download Data** In order to acquire the dataset please navigate to: https://ieee-dataport.org/documents/cervigram-image-dataset
Unzip the dataset into the folder "dataset". For your environment, please adjust the paths accordingly.
###Code
!rm -vrf "dataset"
!mkdir "dataset"
!cp -r "/content/drive/My Drive/Studiu doctorat leziuni cervicale/cervigram-image-dataset-v2.zip" "dataset/cervigram-image-dataset-v2.zip"
# !cp -r "cervigram-image-dataset-v2.zip" "dataset/cervigram-image-dataset-v2.zip"
!unzip "dataset/cervigram-image-dataset-v2.zip" -d "dataset"
###Output
Archive: dataset/cervigram-image-dataset-v2.zip
creating: dataset/data/
creating: dataset/data/test/
creating: dataset/data/test/0/
creating: dataset/data/test/0/20151103002/
inflating: dataset/data/test/0/20151103002/20151103113458.jpg
inflating: dataset/data/test/0/20151103002/20151103113637.jpg
inflating: dataset/data/test/0/20151103002/20151103113659.jpg
inflating: dataset/data/test/0/20151103002/20151103113722.jpg
inflating: dataset/data/test/0/20151103002/20151103113752.jpg
inflating: dataset/data/test/0/20151103002/20151103113755.jpg
inflating: dataset/data/test/0/20151103002/20151103113833.jpg
creating: dataset/data/test/0/20151103005/
inflating: dataset/data/test/0/20151103005/20151103161719.jpg
inflating: dataset/data/test/0/20151103005/20151103161836.jpg
inflating: dataset/data/test/0/20151103005/20151103161908.jpg
inflating: dataset/data/test/0/20151103005/20151103161938.jpg
inflating: dataset/data/test/0/20151103005/20151103162027.jpg
inflating: dataset/data/test/0/20151103005/20151103162122.jpg
inflating: dataset/data/test/0/20151103005/20151103162254.jpg
creating: dataset/data/test/0/20151106002/
inflating: dataset/data/test/0/20151106002/20151106101644.jpg
inflating: dataset/data/test/0/20151106002/20151106101845.jpg
inflating: dataset/data/test/0/20151106002/20151106101905.jpg
inflating: dataset/data/test/0/20151106002/20151106101935.jpg
inflating: dataset/data/test/0/20151106002/20151106101957.jpg
inflating: dataset/data/test/0/20151106002/20151106102000.jpg
inflating: dataset/data/test/0/20151106002/20151106102110.jpg
creating: dataset/data/test/0/20151111002/
inflating: dataset/data/test/0/20151111002/20151111144157.jpg
inflating: dataset/data/test/0/20151111002/20151111144348.jpg
inflating: dataset/data/test/0/20151111002/20151111144420.jpg
inflating: dataset/data/test/0/20151111002/20151111144506.jpg
inflating: dataset/data/test/0/20151111002/20151111144511.jpg
inflating: dataset/data/test/0/20151111002/20151111144515.jpg
inflating: dataset/data/test/0/20151111002/20151111144654.jpg
creating: dataset/data/test/0/20151111004/
inflating: dataset/data/test/0/20151111004/20151111150820.jpg
inflating: dataset/data/test/0/20151111004/20151111151033.jpg
inflating: dataset/data/test/0/20151111004/20151111151104.jpg
inflating: dataset/data/test/0/20151111004/20151111151127.jpg
inflating: dataset/data/test/0/20151111004/20151111151152.jpg
inflating: dataset/data/test/0/20151111004/20151111151160.jpg
inflating: dataset/data/test/0/20151111004/20151111151245.jpg
creating: dataset/data/test/0/20151111007/
inflating: dataset/data/test/0/20151111007/20151111154450.jpg
inflating: dataset/data/test/0/20151111007/20151111154626.jpg
inflating: dataset/data/test/0/20151111007/20151111154652.jpg
inflating: dataset/data/test/0/20151111007/20151111154725.jpg
inflating: dataset/data/test/0/20151111007/20151111154740.jpg
inflating: dataset/data/test/0/20151111007/20151111154800.jpg
inflating: dataset/data/test/0/20151111007/20151111154900.jpg
creating: dataset/data/test/0/20151111010/
inflating: dataset/data/test/0/20151111010/20151111162830.jpg
inflating: dataset/data/test/0/20151111010/20151111162959.jpg
inflating: dataset/data/test/0/20151111010/20151111163012.jpg
inflating: dataset/data/test/0/20151111010/20151111163041.jpg
inflating: dataset/data/test/0/20151111010/20151111163105.jpg
inflating: dataset/data/test/0/20151111010/20151111163115.jpg
inflating: dataset/data/test/0/20151111010/20151111163257.jpg
creating: dataset/data/test/0/20151113012/
inflating: dataset/data/test/0/20151113012/20151113163733.jpg
inflating: dataset/data/test/0/20151113012/20151113163859.jpg
inflating: dataset/data/test/0/20151113012/20151113163928.jpg
inflating: dataset/data/test/0/20151113012/20151113164000.jpg
inflating: dataset/data/test/0/20151113012/20151113164028.jpg
inflating: dataset/data/test/0/20151113012/20151113164100.jpg
inflating: dataset/data/test/0/20151113012/20151113164201.jpg
creating: dataset/data/test/0/20151117005/
inflating: dataset/data/test/0/20151117005/20151117111950.jpg
inflating: dataset/data/test/0/20151117005/20151117112126.jpg
inflating: dataset/data/test/0/20151117005/20151117112146.jpg
inflating: dataset/data/test/0/20151117005/20151117112246.jpg
inflating: dataset/data/test/0/20151117005/20151117112314.jpg
inflating: dataset/data/test/0/20151117005/20151117112400.jpg
inflating: dataset/data/test/0/20151117005/20151117112508.jpg
creating: dataset/data/test/0/20151118006/
inflating: dataset/data/test/0/20151118006/20151118144223.jpg
inflating: dataset/data/test/0/20151118006/20151118144344.jpg
inflating: dataset/data/test/0/20151118006/20151118144414.jpg
inflating: dataset/data/test/0/20151118006/20151118144448.jpg
inflating: dataset/data/test/0/20151118006/20151118144519.jpg
inflating: dataset/data/test/0/20151118006/20151118144600.jpg
inflating: dataset/data/test/0/20151118006/20151118144610.jpg
creating: dataset/data/test/0/20151118009/
inflating: dataset/data/test/0/20151118009/20151118160649.jpg
inflating: dataset/data/test/0/20151118009/20151118160839.jpg
inflating: dataset/data/test/0/20151118009/20151118160853.jpg
inflating: dataset/data/test/0/20151118009/20151118160924.jpg
inflating: dataset/data/test/0/20151118009/20151118160952.jpg
inflating: dataset/data/test/0/20151118009/20151118160970.jpg
inflating: dataset/data/test/0/20151118009/20151118161030.jpg
creating: dataset/data/test/0/20151118011/
inflating: dataset/data/test/0/20151118011/20151118162920.jpg
inflating: dataset/data/test/0/20151118011/20151118163100.jpg
inflating: dataset/data/test/0/20151118011/20151118163137.jpg
inflating: dataset/data/test/0/20151118011/20151118163150.jpg
inflating: dataset/data/test/0/20151118011/20151118163215.jpg
inflating: dataset/data/test/0/20151118011/20151118163218.jpg
inflating: dataset/data/test/0/20151118011/20151118163318.jpg
creating: dataset/data/test/1/
creating: dataset/data/test/1/162231763/
inflating: dataset/data/test/1/162231763/162231763Image0.jpg
inflating: dataset/data/test/1/162231763/162231763Image2.jpg
inflating: dataset/data/test/1/162231763/162231763Image3.jpg
inflating: dataset/data/test/1/162231763/162231763Image5.jpg
inflating: dataset/data/test/1/162231763/162231763Image6.jpg
inflating: dataset/data/test/1/162231763/162231763Image7.jpg
inflating: dataset/data/test/1/162231763/162231763Image8.jpg
creating: dataset/data/test/1/163856190/
inflating: dataset/data/test/1/163856190/163856190Image0.jpg
inflating: dataset/data/test/1/163856190/163856190Image2.jpg
inflating: dataset/data/test/1/163856190/163856190Image3.jpg
inflating: dataset/data/test/1/163856190/163856190Image5.jpg
inflating: dataset/data/test/1/163856190/163856190Image6.jpg
inflating: dataset/data/test/1/163856190/163856190Image7.jpg
inflating: dataset/data/test/1/163856190/163856190Image9.jpg
creating: dataset/data/test/1/164145173/
inflating: dataset/data/test/1/164145173/164145173Image0.jpg
inflating: dataset/data/test/1/164145173/164145173Image2.jpg
inflating: dataset/data/test/1/164145173/164145173Image3.jpg
inflating: dataset/data/test/1/164145173/164145173Image4.jpg
inflating: dataset/data/test/1/164145173/164145173Image6.jpg
inflating: dataset/data/test/1/164145173/164145173Image7.jpg
inflating: dataset/data/test/1/164145173/164145173Image8.jpg
creating: dataset/data/test/1/165554510/
inflating: dataset/data/test/1/165554510/165554510Image0.jpg
inflating: dataset/data/test/1/165554510/165554510Image3.jpg
inflating: dataset/data/test/1/165554510/165554510Image4.jpg
inflating: dataset/data/test/1/165554510/165554510Image5.jpg
inflating: dataset/data/test/1/165554510/165554510Image6.jpg
inflating: dataset/data/test/1/165554510/165554510Image7.jpg
inflating: dataset/data/test/1/165554510/165554510Image8.jpg
creating: dataset/data/test/1/171212253/
inflating: dataset/data/test/1/171212253/171212253Image0.jpg
inflating: dataset/data/test/1/171212253/171212253Image3.jpg
inflating: dataset/data/test/1/171212253/171212253Image4.jpg
inflating: dataset/data/test/1/171212253/171212253Image5.jpg
inflating: dataset/data/test/1/171212253/171212253Image6.jpg
inflating: dataset/data/test/1/171212253/171212253Image8.jpg
inflating: dataset/data/test/1/171212253/171212253Image9.jpg
creating: dataset/data/test/1/20150729004/
inflating: dataset/data/test/1/20150729004/20150729142418.jpg
inflating: dataset/data/test/1/20150729004/20150729142536.jpg
inflating: dataset/data/test/1/20150729004/20150729142617.jpg
inflating: dataset/data/test/1/20150729004/20150729142638.jpg
inflating: dataset/data/test/1/20150729004/20150729142712.jpg
inflating: dataset/data/test/1/20150729004/20150729142728.jpg
inflating: dataset/data/test/1/20150729004/20150729143009.jpg
creating: dataset/data/test/1/20150731002/
inflating: dataset/data/test/1/20150731002/20150731164116.jpg
inflating: dataset/data/test/1/20150731002/20150731164301.jpg
inflating: dataset/data/test/1/20150731002/20150731164316.jpg
inflating: dataset/data/test/1/20150731002/20150731164344.jpg
inflating: dataset/data/test/1/20150731002/20150731164411.jpg
inflating: dataset/data/test/1/20150731002/20150731164418.jpg
inflating: dataset/data/test/1/20150731002/20150731164556.jpg
creating: dataset/data/test/1/20150812006/
inflating: dataset/data/test/1/20150812006/20150812143825.jpg
inflating: dataset/data/test/1/20150812006/20150812143943.jpg
inflating: dataset/data/test/1/20150812006/20150812144013.jpg
inflating: dataset/data/test/1/20150812006/20150812144047.jpg
inflating: dataset/data/test/1/20150812006/20150812144114.jpg
inflating: dataset/data/test/1/20150812006/20150812144206.jpg
inflating: dataset/data/test/1/20150812006/20150812144318.jpg
creating: dataset/data/test/1/20150818001/
inflating: dataset/data/test/1/20150818001/20150818113423.jpg
inflating: dataset/data/test/1/20150818001/20150818113549.jpg
inflating: dataset/data/test/1/20150818001/20150818113620.jpg
inflating: dataset/data/test/1/20150818001/20150818113649.jpg
inflating: dataset/data/test/1/20150818001/20150818113719.jpg
inflating: dataset/data/test/1/20150818001/20150818113721.jpg
inflating: dataset/data/test/1/20150818001/20150818113827.jpg
creating: dataset/data/test/1/20150819006/
inflating: dataset/data/test/1/20150819006/20150819152524.jpg
inflating: dataset/data/test/1/20150819006/20150819152652.jpg
inflating: dataset/data/test/1/20150819006/20150819152726.jpg
inflating: dataset/data/test/1/20150819006/20150819152755.jpg
inflating: dataset/data/test/1/20150819006/20150819152825.jpg
inflating: dataset/data/test/1/20150819006/20150819152827.jpg
inflating: dataset/data/test/1/20150819006/20150819152905.jpg
creating: dataset/data/test/1/20150819011/
inflating: dataset/data/test/1/20150819011/20150819163415.jpg
inflating: dataset/data/test/1/20150819011/20150819163538.jpg
inflating: dataset/data/test/1/20150819011/20150819163608.jpg
inflating: dataset/data/test/1/20150819011/20150819163638.jpg
inflating: dataset/data/test/1/20150819011/20150819163708.jpg
inflating: dataset/data/test/1/20150819011/20150819163778.jpg
inflating: dataset/data/test/1/20150819011/20150819163805.jpg
creating: dataset/data/test/1/20150826008/
inflating: dataset/data/test/1/20150826008/20150826153113.jpg
inflating: dataset/data/test/1/20150826008/20150826153241.jpg
inflating: dataset/data/test/1/20150826008/20150826153310.jpg
inflating: dataset/data/test/1/20150826008/20150826153340.jpg
inflating: dataset/data/test/1/20150826008/20150826153410.jpg
inflating: dataset/data/test/1/20150826008/20150826153425.jpg
inflating: dataset/data/test/1/20150826008/20150826153539.jpg
creating: dataset/data/test/2/
creating: dataset/data/test/2/162021000/
inflating: dataset/data/test/2/162021000/162021000Image0.jpg
inflating: dataset/data/test/2/162021000/162021000Image100.jpg
inflating: dataset/data/test/2/162021000/162021000Image3.jpg
inflating: dataset/data/test/2/162021000/162021000Image6.jpg
inflating: dataset/data/test/2/162021000/162021000Image70.jpg
inflating: dataset/data/test/2/162021000/162021000Image8.jpg
inflating: dataset/data/test/2/162021000/162021000Image9.jpg
creating: dataset/data/test/2/162334723/
inflating: dataset/data/test/2/162334723/162334723Image0.jpg
inflating: dataset/data/test/2/162334723/162334723Image10.jpg
inflating: dataset/data/test/2/162334723/162334723Image12.jpg
inflating: dataset/data/test/2/162334723/162334723Image2.jpg
inflating: dataset/data/test/2/162334723/162334723Image4.jpg
inflating: dataset/data/test/2/162334723/162334723Image8.jpg
inflating: dataset/data/test/2/162334723/162334723Image9.jpg
creating: dataset/data/test/2/162403397/
inflating: dataset/data/test/2/162403397/162403397Image0.jpg
inflating: dataset/data/test/2/162403397/162403397Image1.jpg
inflating: dataset/data/test/2/162403397/162403397Image100.jpg
inflating: dataset/data/test/2/162403397/162403397Image3.jpg
inflating: dataset/data/test/2/162403397/162403397Image4.jpg
inflating: dataset/data/test/2/162403397/162403397Image80.jpg
inflating: dataset/data/test/2/162403397/162403397Image9.jpg
creating: dataset/data/test/2/163138313/
inflating: dataset/data/test/2/163138313/163138313Image0.jpg
inflating: dataset/data/test/2/163138313/163138313Image2.jpg
inflating: dataset/data/test/2/163138313/163138313Image3.jpg
inflating: dataset/data/test/2/163138313/163138313Image4.jpg
inflating: dataset/data/test/2/163138313/163138313Image5.jpg
inflating: dataset/data/test/2/163138313/163138313Image7.jpg
inflating: dataset/data/test/2/163138313/163138313Image8.jpg
creating: dataset/data/test/2/163747350/
inflating: dataset/data/test/2/163747350/163747350Image0.jpg
inflating: dataset/data/test/2/163747350/163747350Image2.jpg
inflating: dataset/data/test/2/163747350/163747350Image3.jpg
inflating: dataset/data/test/2/163747350/163747350Image4.jpg
inflating: dataset/data/test/2/163747350/163747350Image6.jpg
inflating: dataset/data/test/2/163747350/163747350Image7.jpg
inflating: dataset/data/test/2/163747350/163747350Image8.jpg
creating: dataset/data/test/2/165313413/
inflating: dataset/data/test/2/165313413/165313413Image0.jpg
inflating: dataset/data/test/2/165313413/165313413Image10.jpg
inflating: dataset/data/test/2/165313413/165313413Image2.jpg
inflating: dataset/data/test/2/165313413/165313413Image4.jpg
inflating: dataset/data/test/2/165313413/165313413Image5.jpg
inflating: dataset/data/test/2/165313413/165313413Image7.jpg
inflating: dataset/data/test/2/165313413/165313413Image8.jpg
creating: dataset/data/test/2/20150722013/
inflating: dataset/data/test/2/20150722013/20150722161717.jpg
inflating: dataset/data/test/2/20150722013/20150722161844.jpg
inflating: dataset/data/test/2/20150722013/20150722161913.jpg
inflating: dataset/data/test/2/20150722013/20150722161943.jpg
inflating: dataset/data/test/2/20150722013/20150722162013.jpg
inflating: dataset/data/test/2/20150722013/20150722162015.jpg
inflating: dataset/data/test/2/20150722013/20150722162101.jpg
creating: dataset/data/test/2/20150729007/
inflating: dataset/data/test/2/20150729007/20150729165730.jpg
inflating: dataset/data/test/2/20150729007/20150729165856.jpg
inflating: dataset/data/test/2/20150729007/20150729165922.jpg
inflating: dataset/data/test/2/20150729007/20150729165954.jpg
inflating: dataset/data/test/2/20150729007/20150729170022.jpg
inflating: dataset/data/test/2/20150729007/20150729170025.jpg
inflating: dataset/data/test/2/20150729007/20150729170203.jpg
creating: dataset/data/test/2/20150731003/
inflating: dataset/data/test/2/20150731003/20150731170522.jpg
inflating: dataset/data/test/2/20150731003/20150731170730.jpg
inflating: dataset/data/test/2/20150731003/20150731170755.jpg
inflating: dataset/data/test/2/20150731003/20150731170825.jpg
inflating: dataset/data/test/2/20150731003/20150731170906.jpg
inflating: dataset/data/test/2/20150731003/20150731170908.jpg
inflating: dataset/data/test/2/20150731003/20150731171123.jpg
creating: dataset/data/test/2/20150805009/
inflating: dataset/data/test/2/20150805009/20150805154502.jpg
inflating: dataset/data/test/2/20150805009/20150805154629.jpg
inflating: dataset/data/test/2/20150805009/20150805154701.jpg
inflating: dataset/data/test/2/20150805009/20150805154729.jpg
inflating: dataset/data/test/2/20150805009/20150805154759.jpg
inflating: dataset/data/test/2/20150805009/20150805154760.jpg
inflating: dataset/data/test/2/20150805009/20150805154923.jpg
creating: dataset/data/test/2/20150812011/
inflating: dataset/data/test/2/20150812011/20150812164905.jpg
inflating: dataset/data/test/2/20150812011/20150812165027.jpg
inflating: dataset/data/test/2/20150812011/20150812165058.jpg
inflating: dataset/data/test/2/20150812011/20150812165126.jpg
inflating: dataset/data/test/2/20150812011/20150812165156.jpg
inflating: dataset/data/test/2/20150812011/20150812165159.jpg
inflating: dataset/data/test/2/20150812011/20150812165247.jpg
creating: dataset/data/test/2/20150818003/
inflating: dataset/data/test/2/20150818003/20150818165454.jpg
inflating: dataset/data/test/2/20150818003/20150818165643.jpg
inflating: dataset/data/test/2/20150818003/20150818165713.jpg
inflating: dataset/data/test/2/20150818003/20150818165742.jpg
inflating: dataset/data/test/2/20150818003/20150818165812.jpg
inflating: dataset/data/test/2/20150818003/20150818165815.jpg
inflating: dataset/data/test/2/20150818003/20150818165916.jpg
creating: dataset/data/test/3/
creating: dataset/data/test/3/151705083/
inflating: dataset/data/test/3/151705083/151705083Image0.jpg
inflating: dataset/data/test/3/151705083/151705083Image2.jpg
inflating: dataset/data/test/3/151705083/151705083Image3.jpg
inflating: dataset/data/test/3/151705083/151705083Image5.jpg
inflating: dataset/data/test/3/151705083/151705083Image6.jpg
inflating: dataset/data/test/3/151705083/151705083Image7.jpg
inflating: dataset/data/test/3/151705083/151705083Image8.jpg
creating: dataset/data/test/3/153226430/
inflating: dataset/data/test/3/153226430/153226430Image0.jpg
inflating: dataset/data/test/3/153226430/153226430Image2.jpg
inflating: dataset/data/test/3/153226430/153226430Image3.jpg
inflating: dataset/data/test/3/153226430/153226430Image4.jpg
inflating: dataset/data/test/3/153226430/153226430Image6.jpg
inflating: dataset/data/test/3/153226430/153226430Image7.jpg
inflating: dataset/data/test/3/153226430/153226430Image8.jpg
creating: dataset/data/test/3/154649120/
inflating: dataset/data/test/3/154649120/154649120Image0.jpg
inflating: dataset/data/test/3/154649120/154649120Image10.jpg
inflating: dataset/data/test/3/154649120/154649120Image11.jpg
inflating: dataset/data/test/3/154649120/154649120Image2.jpg
inflating: dataset/data/test/3/154649120/154649120Image3.jpg
inflating: dataset/data/test/3/154649120/154649120Image4.jpg
inflating: dataset/data/test/3/154649120/154649120Image9.jpg
creating: dataset/data/test/3/163546870/
inflating: dataset/data/test/3/163546870/163546870Image0.jpg
inflating: dataset/data/test/3/163546870/163546870Image2.jpg
inflating: dataset/data/test/3/163546870/163546870Image4.jpg
inflating: dataset/data/test/3/163546870/163546870Image5.jpg
inflating: dataset/data/test/3/163546870/163546870Image6.jpg
inflating: dataset/data/test/3/163546870/163546870Image7.jpg
inflating: dataset/data/test/3/163546870/163546870Image8.jpg
creating: dataset/data/test/3/165048077/
inflating: dataset/data/test/3/165048077/165048077Image0.jpg
inflating: dataset/data/test/3/165048077/165048077Image3.jpg
inflating: dataset/data/test/3/165048077/165048077Image4.jpg
inflating: dataset/data/test/3/165048077/165048077Image5.jpg
inflating: dataset/data/test/3/165048077/165048077Image6.jpg
inflating: dataset/data/test/3/165048077/165048077Image7.jpg
inflating: dataset/data/test/3/165048077/165048077Image8.jpg
creating: dataset/data/test/3/174946503/
inflating: dataset/data/test/3/174946503/174946503Image0.jpg
inflating: dataset/data/test/3/174946503/174946503Image10.jpg
inflating: dataset/data/test/3/174946503/174946503Image3.jpg
inflating: dataset/data/test/3/174946503/174946503Image5.jpg
inflating: dataset/data/test/3/174946503/174946503Image6.jpg
inflating: dataset/data/test/3/174946503/174946503Image8.jpg
inflating: dataset/data/test/3/174946503/174946503Image9.jpg
creating: dataset/data/test/3/20150717003/
inflating: dataset/data/test/3/20150717003/20150717152616.jpg
inflating: dataset/data/test/3/20150717003/20150717152829.jpg
inflating: dataset/data/test/3/20150717003/20150717152852.jpg
inflating: dataset/data/test/3/20150717003/20150717152921.jpg
inflating: dataset/data/test/3/20150717003/20150717152951.jpg
inflating: dataset/data/test/3/20150717003/20150717152961.jpg
inflating: dataset/data/test/3/20150717003/20150717153107.jpg
creating: dataset/data/test/3/20150805013/
inflating: dataset/data/test/3/20150805013/20150805164531.jpg
inflating: dataset/data/test/3/20150805013/20150805164727.jpg
inflating: dataset/data/test/3/20150805013/20150805164752.jpg
inflating: dataset/data/test/3/20150805013/20150805164822.jpg
inflating: dataset/data/test/3/20150805013/20150805164853.jpg
inflating: dataset/data/test/3/20150805013/20150805164854.jpg
inflating: dataset/data/test/3/20150805013/20150805165042.jpg
creating: dataset/data/test/3/20150821002/
inflating: dataset/data/test/3/20150821002/20150821160244.jpg
inflating: dataset/data/test/3/20150821002/20150821160510.jpg
inflating: dataset/data/test/3/20150821002/20150821160551.jpg
inflating: dataset/data/test/3/20150821002/20150821160621.jpg
inflating: dataset/data/test/3/20150821002/20150821160648.jpg
inflating: dataset/data/test/3/20150821002/20150821160650.jpg
inflating: dataset/data/test/3/20150821002/20150821160755.jpg
creating: dataset/data/test/3/20150826002/
inflating: dataset/data/test/3/20150826002/20150826103859.jpg
inflating: dataset/data/test/3/20150826002/20150826104106.jpg
inflating: dataset/data/test/3/20150826002/20150826104107.jpg
inflating: dataset/data/test/3/20150826002/20150826104127.jpg
inflating: dataset/data/test/3/20150826002/20150826104147.jpg
inflating: dataset/data/test/3/20150826002/20150826104148.jpg
inflating: dataset/data/test/3/20150826002/20150826104150.jpg
creating: dataset/data/test/3/20151118018/
inflating: dataset/data/test/3/20151118018/20151118175237.jpg
inflating: dataset/data/test/3/20151118018/20151118175359.jpg
inflating: dataset/data/test/3/20151118018/20151118175430.jpg
inflating: dataset/data/test/3/20151118018/20151118175455.jpg
inflating: dataset/data/test/3/20151118018/20151118175530.jpg
inflating: dataset/data/test/3/20151118018/20151118175540.jpg
inflating: dataset/data/test/3/20151118018/20151118175619.jpg
creating: dataset/data/test/3/20151119003/
inflating: dataset/data/test/3/20151119003/20151119152531.jpg
inflating: dataset/data/test/3/20151119003/20151119152710.jpg
inflating: dataset/data/test/3/20151119003/20151119152744.jpg
inflating: dataset/data/test/3/20151119003/20151119152815.jpg
inflating: dataset/data/test/3/20151119003/20151119152837.jpg
inflating: dataset/data/test/3/20151119003/20151119152839.jpg
inflating: dataset/data/test/3/20151119003/20151119152921.jpg
creating: dataset/data/train/
creating: dataset/data/train/0/
creating: dataset/data/train/0/20150722014/
inflating: dataset/data/train/0/20150722014/20150722163342.jpg
inflating: dataset/data/train/0/20150722014/20150722163508.jpg
inflating: dataset/data/train/0/20150722014/20150722163537.jpg
inflating: dataset/data/train/0/20150722014/20150722163608.jpg
inflating: dataset/data/train/0/20150722014/20150722163637.jpg
inflating: dataset/data/train/0/20150722014/20150722163638.jpg
inflating: dataset/data/train/0/20150722014/20150722163857.jpg
creating: dataset/data/train/0/20150722015/
inflating: dataset/data/train/0/20150722015/20150722173601.jpg
inflating: dataset/data/train/0/20150722015/20150722173721.jpg
inflating: dataset/data/train/0/20150722015/20150722173753.jpg
inflating: dataset/data/train/0/20150722015/20150722173822.jpg
inflating: dataset/data/train/0/20150722015/20150722173858.jpg
inflating: dataset/data/train/0/20150722015/20150722173900.jpg
inflating: dataset/data/train/0/20150722015/20150722174021.jpg
creating: dataset/data/train/0/20150803002/
inflating: dataset/data/train/0/20150803002/20150803095706.jpg
inflating: dataset/data/train/0/20150803002/20150803095830.jpg
inflating: dataset/data/train/0/20150803002/20150803095856.jpg
inflating: dataset/data/train/0/20150803002/20150803095926.jpg
inflating: dataset/data/train/0/20150803002/20150803095957.jpg
inflating: dataset/data/train/0/20150803002/20150803100006.jpg
inflating: dataset/data/train/0/20150803002/20150803100053.jpg
creating: dataset/data/train/0/20150805007/
inflating: dataset/data/train/0/20150805007/20150805151805.jpg
inflating: dataset/data/train/0/20150805007/20150805151947.jpg
inflating: dataset/data/train/0/20150805007/20150805151951.jpg
inflating: dataset/data/train/0/20150805007/20150805152026.jpg
inflating: dataset/data/train/0/20150805007/20150805152034.jpg
inflating: dataset/data/train/0/20150805007/20150805152115.jpg
inflating: dataset/data/train/0/20150805007/20150805152120.jpg
creating: dataset/data/train/0/20150808001/
inflating: dataset/data/train/0/20150808001/20150808111901.jpg
inflating: dataset/data/train/0/20150808001/20150808112045.jpg
inflating: dataset/data/train/0/20150808001/20150808112104.jpg
inflating: dataset/data/train/0/20150808001/20150808112139.jpg
inflating: dataset/data/train/0/20150808001/20150808112209.jpg
inflating: dataset/data/train/0/20150808001/20150808112229.jpg
inflating: dataset/data/train/0/20150808001/20150808112503.jpg
creating: dataset/data/train/0/20150814006/
inflating: dataset/data/train/0/20150814006/20150814162732.jpg
inflating: dataset/data/train/0/20150814006/20150814162931.jpg
inflating: dataset/data/train/0/20150814006/20150814162945.jpg
inflating: dataset/data/train/0/20150814006/20150814163030.jpg
inflating: dataset/data/train/0/20150814006/20150814163055.jpg
inflating: dataset/data/train/0/20150814006/20150814163077.jpg
inflating: dataset/data/train/0/20150814006/20150814163137.jpg
creating: dataset/data/train/0/20150819010/
inflating: dataset/data/train/0/20150819010/20150819162241.jpg
inflating: dataset/data/train/0/20150819010/20150819162530.jpg
inflating: dataset/data/train/0/20150819010/20150819162541.jpg
inflating: dataset/data/train/0/20150819010/20150819162620.jpg
inflating: dataset/data/train/0/20150819010/20150819162647.jpg
inflating: dataset/data/train/0/20150819010/20150819162784.jpg
inflating: dataset/data/train/0/20150819010/20150819162804.jpg
creating: dataset/data/train/0/20150826005/
inflating: dataset/data/train/0/20150826005/20150826144025.jpg
inflating: dataset/data/train/0/20150826005/20150826144141.jpg
inflating: dataset/data/train/0/20150826005/20150826144218.jpg
inflating: dataset/data/train/0/20150826005/20150826144250.jpg
inflating: dataset/data/train/0/20150826005/20150826144312.jpg
inflating: dataset/data/train/0/20150826005/20150826144315.jpg
inflating: dataset/data/train/0/20150826005/20150826144356.jpg
creating: dataset/data/train/0/20150826007/
inflating: dataset/data/train/0/20150826007/20150826150220.jpg
inflating: dataset/data/train/0/20150826007/20150826150339.jpg
inflating: dataset/data/train/0/20150826007/20150826150413.jpg
inflating: dataset/data/train/0/20150826007/20150826150437.jpg
inflating: dataset/data/train/0/20150826007/20150826150508.jpg
inflating: dataset/data/train/0/20150826007/20150826150600.jpg
inflating: dataset/data/train/0/20150826007/20150826150730.jpg
creating: dataset/data/train/0/20150831002/
inflating: dataset/data/train/0/20150831002/20150831152645.jpg
inflating: dataset/data/train/0/20150831002/20150831152814.jpg
inflating: dataset/data/train/0/20150831002/20150831152842.jpg
inflating: dataset/data/train/0/20150831002/20150831152914.jpg
inflating: dataset/data/train/0/20150831002/20150831152945.jpg
inflating: dataset/data/train/0/20150831002/20150831153000.jpg
inflating: dataset/data/train/0/20150831002/20150831153104.jpg
creating: dataset/data/train/0/20150901002/
inflating: dataset/data/train/0/20150901002/20150901110219.jpg
inflating: dataset/data/train/0/20150901002/20150901110343.jpg
inflating: dataset/data/train/0/20150901002/20150901110417.jpg
inflating: dataset/data/train/0/20150901002/20150901110439.jpg
inflating: dataset/data/train/0/20150901002/20150901110510.jpg
inflating: dataset/data/train/0/20150901002/20150901110518.jpg
inflating: dataset/data/train/0/20150901002/20150901110616.jpg
creating: dataset/data/train/0/20150902006/
inflating: dataset/data/train/0/20150902006/20150902152203.jpg
inflating: dataset/data/train/0/20150902006/20150902152331.jpg
inflating: dataset/data/train/0/20150902006/20150902152359.jpg
inflating: dataset/data/train/0/20150902006/20150902152429.jpg
inflating: dataset/data/train/0/20150902006/20150902152459.jpg
inflating: dataset/data/train/0/20150902006/20150902152460.jpg
inflating: dataset/data/train/0/20150902006/20150902152536.jpg
creating: dataset/data/train/0/20150902009/
inflating: dataset/data/train/0/20150902009/20150902160019.jpg
inflating: dataset/data/train/0/20150902009/20150902160146.jpg
inflating: dataset/data/train/0/20150902009/20150902160220.jpg
inflating: dataset/data/train/0/20150902009/20150902160246.jpg
inflating: dataset/data/train/0/20150902009/20150902160317.jpg
inflating: dataset/data/train/0/20150902009/20150902160349.jpg
inflating: dataset/data/train/0/20150902009/20150902160351.jpg
creating: dataset/data/train/0/20150902011/
inflating: dataset/data/train/0/20150902011/20150902162952.jpg
inflating: dataset/data/train/0/20150902011/20150902163134.jpg
inflating: dataset/data/train/0/20150902011/20150902163201.jpg
inflating: dataset/data/train/0/20150902011/20150902163230.jpg
inflating: dataset/data/train/0/20150902011/20150902163305.jpg
inflating: dataset/data/train/0/20150902011/20150902163388.jpg
inflating: dataset/data/train/0/20150902011/20150902163431.jpg
creating: dataset/data/train/0/20150911004/
inflating: dataset/data/train/0/20150911004/20150911150019.jpg
inflating: dataset/data/train/0/20150911004/20150911150153.jpg
inflating: dataset/data/train/0/20150911004/20150911150241.jpg
inflating: dataset/data/train/0/20150911004/20150911150322.jpg
inflating: dataset/data/train/0/20150911004/20150911150334.jpg
inflating: dataset/data/train/0/20150911004/20150911150500.jpg
inflating: dataset/data/train/0/20150911004/20150911150618.jpg
creating: dataset/data/train/0/20150911006/
inflating: dataset/data/train/0/20150911006/20150911160646.jpg
inflating: dataset/data/train/0/20150911006/20150911160825.jpg
inflating: dataset/data/train/0/20150911006/20150911160836.jpg
inflating: dataset/data/train/0/20150911006/20150911160905.jpg
inflating: dataset/data/train/0/20150911006/20150911160934.jpg
inflating: dataset/data/train/0/20150911006/20150911160936.jpg
inflating: dataset/data/train/0/20150911006/20150911161031.jpg
creating: dataset/data/train/0/20150916005/
inflating: dataset/data/train/0/20150916005/20150916145030.jpg
inflating: dataset/data/train/0/20150916005/20150916145155.jpg
inflating: dataset/data/train/0/20150916005/20150916145222.jpg
inflating: dataset/data/train/0/20150916005/20150916145252.jpg
inflating: dataset/data/train/0/20150916005/20150916145328.jpg
inflating: dataset/data/train/0/20150916005/20150916145332.jpg
inflating: dataset/data/train/0/20150916005/20150916145450.jpg
creating: dataset/data/train/0/20150916011/
inflating: dataset/data/train/0/20150916011/20150916160128.jpg
inflating: dataset/data/train/0/20150916011/20150916160249.jpg
inflating: dataset/data/train/0/20150916011/20150916160316.jpg
inflating: dataset/data/train/0/20150916011/20150916160346.jpg
inflating: dataset/data/train/0/20150916011/20150916160415.jpg
inflating: dataset/data/train/0/20150916011/20150916160455.jpg
inflating: dataset/data/train/0/20150916011/20150916160518.jpg
creating: dataset/data/train/0/20150916012/
inflating: dataset/data/train/0/20150916012/20150916163607.jpg
inflating: dataset/data/train/0/20150916012/20150916163727.jpg
inflating: dataset/data/train/0/20150916012/20150916163752.jpg
inflating: dataset/data/train/0/20150916012/20150916163822.jpg
inflating: dataset/data/train/0/20150916012/20150916163852.jpg
inflating: dataset/data/train/0/20150916012/20150916163869.jpg
inflating: dataset/data/train/0/20150916012/20150916163945.jpg
creating: dataset/data/train/0/20150917002/
inflating: dataset/data/train/0/20150917002/20150917101429.jpg
inflating: dataset/data/train/0/20150917002/20150917101453.jpg
inflating: dataset/data/train/0/20150917002/20150917101615.jpg
inflating: dataset/data/train/0/20150917002/20150917101657.jpg
inflating: dataset/data/train/0/20150917002/20150917101724.jpg
inflating: dataset/data/train/0/20150917002/20150917101729.jpg
inflating: dataset/data/train/0/20150917002/20150917101829.jpg
creating: dataset/data/train/0/20150918003/
inflating: dataset/data/train/0/20150918003/20150918151028.jpg
inflating: dataset/data/train/0/20150918003/20150918151201.jpg
inflating: dataset/data/train/0/20150918003/20150918151223.jpg
inflating: dataset/data/train/0/20150918003/20150918151248.jpg
inflating: dataset/data/train/0/20150918003/20150918151318.jpg
inflating: dataset/data/train/0/20150918003/20150918151351.jpg
inflating: dataset/data/train/0/20150918003/20150918151519.jpg
creating: dataset/data/train/0/20150923007/
inflating: dataset/data/train/0/20150923007/20150923155803.jpg
inflating: dataset/data/train/0/20150923007/20150923155939.jpg
inflating: dataset/data/train/0/20150923007/20150923160007.jpg
inflating: dataset/data/train/0/20150923007/20150923160036.jpg
inflating: dataset/data/train/0/20150923007/20150923160107.jpg
inflating: dataset/data/train/0/20150923007/20150923160122.jpg
inflating: dataset/data/train/0/20150923007/20150923160143.jpg
creating: dataset/data/train/0/20150923011/
inflating: dataset/data/train/0/20150923011/20150923164045.jpg
inflating: dataset/data/train/0/20150923011/20150923164224.jpg
inflating: dataset/data/train/0/20150923011/20150923164233.jpg
inflating: dataset/data/train/0/20150923011/20150923164303.jpg
inflating: dataset/data/train/0/20150923011/20150923164333.jpg
inflating: dataset/data/train/0/20150923011/20150923164380.jpg
inflating: dataset/data/train/0/20150923011/20150923164411.jpg
creating: dataset/data/train/0/20150930008/
inflating: dataset/data/train/0/20150930008/20150930154113.jpg
inflating: dataset/data/train/0/20150930008/20150930154301.jpg
inflating: dataset/data/train/0/20150930008/20150930154321.jpg
inflating: dataset/data/train/0/20150930008/20150930154342.jpg
inflating: dataset/data/train/0/20150930008/20150930154414.jpg
inflating: dataset/data/train/0/20150930008/20150930154450.jpg
inflating: dataset/data/train/0/20150930008/20150930154509.jpg
creating: dataset/data/train/0/20151012006/
inflating: dataset/data/train/0/20151012006/20151012155835.jpg
inflating: dataset/data/train/0/20151012006/20151012155951.jpg
inflating: dataset/data/train/0/20151012006/20151012160021.jpg
inflating: dataset/data/train/0/20151012006/20151012160051.jpg
inflating: dataset/data/train/0/20151012006/20151012160121.jpg
inflating: dataset/data/train/0/20151012006/20151012160200.jpg
inflating: dataset/data/train/0/20151012006/20151012160216.jpg
creating: dataset/data/train/0/20151014011/
inflating: dataset/data/train/0/20151014011/20151014161255.jpg
inflating: dataset/data/train/0/20151014011/20151014161421.jpg
inflating: dataset/data/train/0/20151014011/20151014161450.jpg
inflating: dataset/data/train/0/20151014011/20151014161517.jpg
inflating: dataset/data/train/0/20151014011/20151014161547.jpg
inflating: dataset/data/train/0/20151014011/20151014161551.jpg
inflating: dataset/data/train/0/20151014011/20151014161629.jpg
creating: dataset/data/train/0/20151016002/
inflating: dataset/data/train/0/20151016002/20151016145537.jpg
inflating: dataset/data/train/0/20151016002/20151016145703.jpg
inflating: dataset/data/train/0/20151016002/20151016145726.jpg
inflating: dataset/data/train/0/20151016002/20151016145803.jpg
inflating: dataset/data/train/0/20151016002/20151016145826.jpg
inflating: dataset/data/train/0/20151016002/20151016145828.jpg
inflating: dataset/data/train/0/20151016002/20151016145932.jpg
creating: dataset/data/train/0/20151021002/
inflating: dataset/data/train/0/20151021002/20151021101700.jpg
inflating: dataset/data/train/0/20151021002/20151021101831.jpg
inflating: dataset/data/train/0/20151021002/20151021101853.jpg
inflating: dataset/data/train/0/20151021002/20151021101927.jpg
inflating: dataset/data/train/0/20151021002/20151021101953.jpg
inflating: dataset/data/train/0/20151021002/20151021102000.jpg
inflating: dataset/data/train/0/20151021002/20151021102104.jpg
creating: dataset/data/train/0/20151023005/
inflating: dataset/data/train/0/20151023005/20151023152438.jpg
inflating: dataset/data/train/0/20151023005/20151023152554.jpg
inflating: dataset/data/train/0/20151023005/20151023152624.jpg
inflating: dataset/data/train/0/20151023005/20151023152653.jpg
inflating: dataset/data/train/0/20151023005/20151023152723.jpg
inflating: dataset/data/train/0/20151023005/20151023152740.jpg
inflating: dataset/data/train/0/20151023005/20151023152818.jpg
creating: dataset/data/train/0/20151026002/
inflating: dataset/data/train/0/20151026002/20151026092135.jpg
inflating: dataset/data/train/0/20151026002/20151026092258.jpg
inflating: dataset/data/train/0/20151026002/20151026092324.jpg
inflating: dataset/data/train/0/20151026002/20151026092354.jpg
inflating: dataset/data/train/0/20151026002/20151026092425.jpg
inflating: dataset/data/train/0/20151026002/20151026092448.jpg
inflating: dataset/data/train/0/20151026002/20151026092602.jpg
creating: dataset/data/train/0/20151027002/
inflating: dataset/data/train/0/20151027002/20151027153635.jpg
inflating: dataset/data/train/0/20151027002/20151027153807.jpg
inflating: dataset/data/train/0/20151027002/20151027153836.jpg
inflating: dataset/data/train/0/20151027002/20151027153858.jpg
inflating: dataset/data/train/0/20151027002/20151027153921.jpg
inflating: dataset/data/train/0/20151027002/20151027154016.jpg
inflating: dataset/data/train/0/20151027002/20151027154252.jpg
creating: dataset/data/train/0/20151028001/
inflating: dataset/data/train/0/20151028001/20151028100852.jpg
inflating: dataset/data/train/0/20151028001/20151028100907.jpg
inflating: dataset/data/train/0/20151028001/20151028101037.jpg
inflating: dataset/data/train/0/20151028001/20151028101051.jpg
inflating: dataset/data/train/0/20151028001/20151028101139.jpg
inflating: dataset/data/train/0/20151028001/20151028101200.jpg
inflating: dataset/data/train/0/20151028001/20151028101322.jpg
creating: dataset/data/train/0/20151028009/
inflating: dataset/data/train/0/20151028009/20151028152020.jpg
inflating: dataset/data/train/0/20151028009/20151028152157.jpg
inflating: dataset/data/train/0/20151028009/20151028152211.jpg
inflating: dataset/data/train/0/20151028009/20151028152241.jpg
inflating: dataset/data/train/0/20151028009/20151028152311.jpg
inflating: dataset/data/train/0/20151028009/20151028152340.jpg
inflating: dataset/data/train/0/20151028009/20151028152429.jpg
creating: dataset/data/train/0/20151028014/
inflating: dataset/data/train/0/20151028014/20151028162216.jpg
inflating: dataset/data/train/0/20151028014/20151028162334.jpg
inflating: dataset/data/train/0/20151028014/20151028162404.jpg
inflating: dataset/data/train/0/20151028014/20151028162407.jpg
inflating: dataset/data/train/0/20151028014/20151028162434.jpg
inflating: dataset/data/train/0/20151028014/20151028162700.jpg
inflating: dataset/data/train/0/20151028014/20151028162836.jpg
  [unzip output truncated: the archive creates dataset/data/train/0/<id>/ directories and inflates seven <timestamp>.jpg images into each]
inflating: dataset/data/train/0/20160608016/20160608163623.jpg
inflating: dataset/data/train/0/20160608016/20160608163753.jpg
inflating: dataset/data/train/0/20160608016/20160608163814.jpg
inflating: dataset/data/train/0/20160608016/20160608163851.jpg
inflating: dataset/data/train/0/20160608016/20160608163924.jpg
inflating: dataset/data/train/0/20160608016/20160608163929.jpg
inflating: dataset/data/train/0/20160608016/20160608164158.jpg
creating: dataset/data/train/0/20160612001/
inflating: dataset/data/train/0/20160612001/20160612154158.jpg
inflating: dataset/data/train/0/20160612001/20160612154341.jpg
inflating: dataset/data/train/0/20160612001/20160612154445.jpg
inflating: dataset/data/train/0/20160612001/20160612154514.jpg
inflating: dataset/data/train/0/20160612001/20160612154521.jpg
inflating: dataset/data/train/0/20160612001/20160612154535.jpg
inflating: dataset/data/train/0/20160612001/20160612154614.jpg
creating: dataset/data/train/0/20160612003/
inflating: dataset/data/train/0/20160612003/20160612163211.jpg
inflating: dataset/data/train/0/20160612003/20160612163335.jpg
inflating: dataset/data/train/0/20160612003/20160612163400.jpg
inflating: dataset/data/train/0/20160612003/20160612163416.jpg
inflating: dataset/data/train/0/20160612003/20160612163456.jpg
inflating: dataset/data/train/0/20160612003/20160612163462.jpg
inflating: dataset/data/train/0/20160612003/20160612163518.jpg
creating: dataset/data/train/0/20160614001/
inflating: dataset/data/train/0/20160614001/20160614102011.jpg
inflating: dataset/data/train/0/20160614001/20160614102155.jpg
inflating: dataset/data/train/0/20160614001/20160614102235.jpg
inflating: dataset/data/train/0/20160614001/20160614102317.jpg
inflating: dataset/data/train/0/20160614001/20160614102322.jpg
inflating: dataset/data/train/0/20160614001/20160614102334.jpg
inflating: dataset/data/train/0/20160614001/20160614102434.jpg
creating: dataset/data/train/0/20160614003/
inflating: dataset/data/train/0/20160614003/20160614112748.jpg
inflating: dataset/data/train/0/20160614003/20160614112751.jpg
inflating: dataset/data/train/0/20160614003/20160614112917.jpg
inflating: dataset/data/train/0/20160614003/20160614112933.jpg
inflating: dataset/data/train/0/20160614003/20160614113004.jpg
inflating: dataset/data/train/0/20160614003/20160614113019.jpg
inflating: dataset/data/train/0/20160614003/20160614113035.jpg
creating: dataset/data/train/0/20160621001/
inflating: dataset/data/train/0/20160621001/20160621115222.jpg
inflating: dataset/data/train/0/20160621001/20160621115403.jpg
inflating: dataset/data/train/0/20160621001/20160621115420.jpg
inflating: dataset/data/train/0/20160621001/20160621115458.jpg
inflating: dataset/data/train/0/20160621001/20160621115511.jpg
inflating: dataset/data/train/0/20160621001/20160621115525.jpg
inflating: dataset/data/train/0/20160621001/20160621115546.jpg
creating: dataset/data/train/0/20160622006/
inflating: dataset/data/train/0/20160622006/20160622145637.jpg
inflating: dataset/data/train/0/20160622006/20160622145758.jpg
inflating: dataset/data/train/0/20160622006/20160622145817.jpg
inflating: dataset/data/train/0/20160622006/20160622145832.jpg
inflating: dataset/data/train/0/20160622006/20160622145906.jpg
inflating: dataset/data/train/0/20160622006/20160622145909.jpg
inflating: dataset/data/train/0/20160622006/20160622150000.jpg
creating: dataset/data/train/0/20160622008/
inflating: dataset/data/train/0/20160622008/20160622152421.jpg
inflating: dataset/data/train/0/20160622008/20160622152615.jpg
inflating: dataset/data/train/0/20160622008/20160622152633.jpg
inflating: dataset/data/train/0/20160622008/20160622152703.jpg
inflating: dataset/data/train/0/20160622008/20160622152724.jpg
inflating: dataset/data/train/0/20160622008/20160622152735.jpg
inflating: dataset/data/train/0/20160622008/20160622152910.jpg
creating: dataset/data/train/0/20160622012/
inflating: dataset/data/train/0/20160622012/20160622160856.jpg
inflating: dataset/data/train/0/20160622012/20160622161021.jpg
inflating: dataset/data/train/0/20160622012/20160622161042.jpg
inflating: dataset/data/train/0/20160622012/20160622161112.jpg
inflating: dataset/data/train/0/20160622012/20160622161133.jpg
inflating: dataset/data/train/0/20160622012/20160622161144.jpg
inflating: dataset/data/train/0/20160622012/20160622161244.jpg
creating: dataset/data/train/0/20160622013/
inflating: dataset/data/train/0/20160622013/20160622165526.jpg
inflating: dataset/data/train/0/20160622013/20160622165646.jpg
inflating: dataset/data/train/0/20160622013/20160622165730.jpg
inflating: dataset/data/train/0/20160622013/20160622165802.jpg
inflating: dataset/data/train/0/20160622013/20160622165820.jpg
inflating: dataset/data/train/0/20160622013/20160622165829.jpg
inflating: dataset/data/train/0/20160622013/20160622170014.jpg
creating: dataset/data/train/0/20160627007/
inflating: dataset/data/train/0/20160627007/20160627154911.jpg
inflating: dataset/data/train/0/20160627007/20160627155059.jpg
inflating: dataset/data/train/0/20160627007/20160627155109.jpg
inflating: dataset/data/train/0/20160627007/20160627155117.jpg
inflating: dataset/data/train/0/20160627007/20160627155217.jpg
inflating: dataset/data/train/0/20160627007/20160627155223.jpg
inflating: dataset/data/train/0/20160627007/20160627155255.jpg
creating: dataset/data/train/0/20160629008/
inflating: dataset/data/train/0/20160629008/20160629152350.jpg
inflating: dataset/data/train/0/20160629008/20160629152523.jpg
inflating: dataset/data/train/0/20160629008/20160629152555.jpg
inflating: dataset/data/train/0/20160629008/20160629152619.jpg
inflating: dataset/data/train/0/20160629008/20160629152629.jpg
inflating: dataset/data/train/0/20160629008/20160629152640.jpg
inflating: dataset/data/train/0/20160629008/20160629152737.jpg
creating: dataset/data/train/0/20160705003/
inflating: dataset/data/train/0/20160705003/20160705115519.jpg
inflating: dataset/data/train/0/20160705003/20160705115644.jpg
inflating: dataset/data/train/0/20160705003/20160705115717.jpg
inflating: dataset/data/train/0/20160705003/20160705115745.jpg
inflating: dataset/data/train/0/20160705003/20160705115815.jpg
inflating: dataset/data/train/0/20160705003/20160705115825.jpg
inflating: dataset/data/train/0/20160705003/20160705115910.jpg
creating: dataset/data/train/0/20160706005/
inflating: dataset/data/train/0/20160706005/20160706144908.jpg
inflating: dataset/data/train/0/20160706005/20160706145048.jpg
inflating: dataset/data/train/0/20160706005/20160706145117.jpg
inflating: dataset/data/train/0/20160706005/20160706145144.jpg
inflating: dataset/data/train/0/20160706005/20160706145214.jpg
inflating: dataset/data/train/0/20160706005/20160706145220.jpg
inflating: dataset/data/train/0/20160706005/20160706145249.jpg
creating: dataset/data/train/0/20160706014/
inflating: dataset/data/train/0/20160706014/20160706171647.jpg
inflating: dataset/data/train/0/20160706014/20160706171828.jpg
inflating: dataset/data/train/0/20160706014/20160706171835.jpg
inflating: dataset/data/train/0/20160706014/20160706171852.jpg
inflating: dataset/data/train/0/20160706014/20160706171920.jpg
inflating: dataset/data/train/0/20160706014/20160706171930.jpg
inflating: dataset/data/train/0/20160706014/20160706171957.jpg
creating: dataset/data/train/0/20160718003/
inflating: dataset/data/train/0/20160718003/20160718143931.jpg
inflating: dataset/data/train/0/20160718003/20160718144049.jpg
inflating: dataset/data/train/0/20160718003/20160718144117.jpg
inflating: dataset/data/train/0/20160718003/20160718144147.jpg
inflating: dataset/data/train/0/20160718003/20160718144218.jpg
inflating: dataset/data/train/0/20160718003/20160718144221.jpg
inflating: dataset/data/train/0/20160718003/20160718144315.jpg
creating: dataset/data/train/0/20160720001/
inflating: dataset/data/train/0/20160720001/20160720101409.jpg
inflating: dataset/data/train/0/20160720001/20160720101543.jpg
inflating: dataset/data/train/0/20160720001/20160720101558.jpg
inflating: dataset/data/train/0/20160720001/20160720101635.jpg
inflating: dataset/data/train/0/20160720001/20160720101651.jpg
inflating: dataset/data/train/0/20160720001/20160720101662.jpg
inflating: dataset/data/train/0/20160720001/20160720101731.jpg
creating: dataset/data/train/0/20160720013/
inflating: dataset/data/train/0/20160720013/20160720160925.jpg
inflating: dataset/data/train/0/20160720013/20160720161047.jpg
inflating: dataset/data/train/0/20160720013/20160720161129.jpg
inflating: dataset/data/train/0/20160720013/20160720161153.jpg
inflating: dataset/data/train/0/20160720013/20160720161216.jpg
inflating: dataset/data/train/0/20160720013/20160720161220.jpg
inflating: dataset/data/train/0/20160720013/20160720161328.jpg
creating: dataset/data/train/0/20160720015/
inflating: dataset/data/train/0/20160720015/20160720163359.jpg
inflating: dataset/data/train/0/20160720015/20160720163533.jpg
inflating: dataset/data/train/0/20160720015/20160720163557.jpg
inflating: dataset/data/train/0/20160720015/20160720163613.jpg
inflating: dataset/data/train/0/20160720015/20160720163626.jpg
inflating: dataset/data/train/0/20160720015/20160720163631.jpg
inflating: dataset/data/train/0/20160720015/20160720163814.jpg
creating: dataset/data/train/0/20160725006/
inflating: dataset/data/train/0/20160725006/20160725153839.jpg
inflating: dataset/data/train/0/20160725006/20160725154007.jpg
inflating: dataset/data/train/0/20160725006/20160725154041.jpg
inflating: dataset/data/train/0/20160725006/20160725154102.jpg
inflating: dataset/data/train/0/20160725006/20160725154131.jpg
inflating: dataset/data/train/0/20160725006/20160725154148.jpg
inflating: dataset/data/train/0/20160725006/20160725154221.jpg
creating: dataset/data/train/0/20160726002/
inflating: dataset/data/train/0/20160726002/20160726114726.jpg
inflating: dataset/data/train/0/20160726002/20160726114849.jpg
inflating: dataset/data/train/0/20160726002/20160726114916.jpg
inflating: dataset/data/train/0/20160726002/20160726114948.jpg
inflating: dataset/data/train/0/20160726002/20160726115026.jpg
inflating: dataset/data/train/0/20160726002/20160726115035.jpg
inflating: dataset/data/train/0/20160726002/20160726115213.jpg
creating: dataset/data/train/0/20160803003/
inflating: dataset/data/train/0/20160803003/20160803144241.jpg
inflating: dataset/data/train/0/20160803003/20160803144402.jpg
inflating: dataset/data/train/0/20160803003/20160803144432.jpg
inflating: dataset/data/train/0/20160803003/20160803144502.jpg
inflating: dataset/data/train/0/20160803003/20160803144536.jpg
inflating: dataset/data/train/0/20160803003/20160803144540.jpg
inflating: dataset/data/train/0/20160803003/20160803144734.jpg
creating: dataset/data/train/0/20160803008/
inflating: dataset/data/train/0/20160803008/20160803155916.jpg
inflating: dataset/data/train/0/20160803008/20160803160052.jpg
inflating: dataset/data/train/0/20160803008/20160803160124.jpg
inflating: dataset/data/train/0/20160803008/20160803160154.jpg
inflating: dataset/data/train/0/20160803008/20160803160205.jpg
inflating: dataset/data/train/0/20160803008/20160803160211.jpg
inflating: dataset/data/train/0/20160803008/20160803160247.jpg
creating: dataset/data/train/0/20160810007/
inflating: dataset/data/train/0/20160810007/20160810151102.jpg
inflating: dataset/data/train/0/20160810007/20160810151225.jpg
inflating: dataset/data/train/0/20160810007/20160810151317.jpg
inflating: dataset/data/train/0/20160810007/20160810151336.jpg
inflating: dataset/data/train/0/20160810007/20160810151348.jpg
inflating: dataset/data/train/0/20160810007/20160810151354.jpg
inflating: dataset/data/train/0/20160810007/20160810151432.jpg
creating: dataset/data/train/0/20160815006/
inflating: dataset/data/train/0/20160815006/20160815162119.jpg
inflating: dataset/data/train/0/20160815006/20160815162234.jpg
inflating: dataset/data/train/0/20160815006/20160815162305.jpg
inflating: dataset/data/train/0/20160815006/20160815162310.jpg
inflating: dataset/data/train/0/20160815006/20160815162342.jpg
inflating: dataset/data/train/0/20160815006/20160815162347.jpg
inflating: dataset/data/train/0/20160815006/20160815162439.jpg
creating: dataset/data/train/0/20160817008/
inflating: dataset/data/train/0/20160817008/20160817152258.jpg
inflating: dataset/data/train/0/20160817008/20160817152413.jpg
inflating: dataset/data/train/0/20160817008/20160817152439.jpg
inflating: dataset/data/train/0/20160817008/20160817152510.jpg
inflating: dataset/data/train/0/20160817008/20160817152539.jpg
inflating: dataset/data/train/0/20160817008/20160817152547.jpg
inflating: dataset/data/train/0/20160817008/20160817152616.jpg
creating: dataset/data/train/0/20160817009/
inflating: dataset/data/train/0/20160817009/20160817160113.jpg
inflating: dataset/data/train/0/20160817009/20160817160244.jpg
inflating: dataset/data/train/0/20160817009/20160817160301.jpg
inflating: dataset/data/train/0/20160817009/20160817160331.jpg
inflating: dataset/data/train/0/20160817009/20160817160404.jpg
inflating: dataset/data/train/0/20160817009/20160817160417.jpg
inflating: dataset/data/train/0/20160817009/20160817160542.jpg
creating: dataset/data/train/0/20160823001/
inflating: dataset/data/train/0/20160823001/20160823151431.jpg
inflating: dataset/data/train/0/20160823001/20160823151552.jpg
inflating: dataset/data/train/0/20160823001/20160823151618.jpg
inflating: dataset/data/train/0/20160823001/20160823151649.jpg
inflating: dataset/data/train/0/20160823001/20160823151720.jpg
inflating: dataset/data/train/0/20160823001/20160823151725.jpg
inflating: dataset/data/train/0/20160823001/20160823151910.jpg
creating: dataset/data/train/0/20160824006/
inflating: dataset/data/train/0/20160824006/20160824145118.jpg
inflating: dataset/data/train/0/20160824006/20160824145243.jpg
inflating: dataset/data/train/0/20160824006/20160824145317.jpg
inflating: dataset/data/train/0/20160824006/20160824145340.jpg
inflating: dataset/data/train/0/20160824006/20160824145411.jpg
inflating: dataset/data/train/0/20160824006/20160824145423.jpg
inflating: dataset/data/train/0/20160824006/20160824145506.jpg
creating: dataset/data/train/0/20160830003/
inflating: dataset/data/train/0/20160830003/20160830144457.jpg
inflating: dataset/data/train/0/20160830003/20160830144620.jpg
inflating: dataset/data/train/0/20160830003/20160830144648.jpg
inflating: dataset/data/train/0/20160830003/20160830144719.jpg
inflating: dataset/data/train/0/20160830003/20160830144748.jpg
inflating: dataset/data/train/0/20160830003/20160830144758.jpg
inflating: dataset/data/train/0/20160830003/20160830144807.jpg
creating: dataset/data/train/0/20160831002/
inflating: dataset/data/train/0/20160831002/20160831102118.jpg
inflating: dataset/data/train/0/20160831002/20160831102252.jpg
inflating: dataset/data/train/0/20160831002/20160831102332.jpg
inflating: dataset/data/train/0/20160831002/20160831102348.jpg
inflating: dataset/data/train/0/20160831002/20160831102421.jpg
inflating: dataset/data/train/0/20160831002/20160831102438.jpg
inflating: dataset/data/train/0/20160831002/20160831102526.jpg
creating: dataset/data/train/0/20160901007/
inflating: dataset/data/train/0/20160901007/20160901183820.jpg
inflating: dataset/data/train/0/20160901007/20160901183944.jpg
inflating: dataset/data/train/0/20160901007/20160901184008.jpg
inflating: dataset/data/train/0/20160901007/20160901184038.jpg
inflating: dataset/data/train/0/20160901007/20160901184108.jpg
inflating: dataset/data/train/0/20160901007/20160901184114.jpg
inflating: dataset/data/train/0/20160901007/20160901184210.jpg
creating: dataset/data/train/0/20160902006/
inflating: dataset/data/train/0/20160902006/20160902170316.jpg
inflating: dataset/data/train/0/20160902006/20160902170432.jpg
inflating: dataset/data/train/0/20160902006/20160902170506.jpg
inflating: dataset/data/train/0/20160902006/20160902170532.jpg
inflating: dataset/data/train/0/20160902006/20160902170602.jpg
inflating: dataset/data/train/0/20160902006/20160902170610.jpg
inflating: dataset/data/train/0/20160902006/20160902170641.jpg
creating: dataset/data/train/0/20160909001/
inflating: dataset/data/train/0/20160909001/20160909144157.jpg
inflating: dataset/data/train/0/20160909001/20160909144326.jpg
inflating: dataset/data/train/0/20160909001/20160909144350.jpg
inflating: dataset/data/train/0/20160909001/20160909144421.jpg
inflating: dataset/data/train/0/20160909001/20160909144450.jpg
inflating: dataset/data/train/0/20160909001/20160909144456.jpg
inflating: dataset/data/train/0/20160909001/20160909144540.jpg
creating: dataset/data/train/0/20160912008/
inflating: dataset/data/train/0/20160912008/20160912155809.jpg
inflating: dataset/data/train/0/20160912008/20160912155933.jpg
inflating: dataset/data/train/0/20160912008/20160912155958.jpg
inflating: dataset/data/train/0/20160912008/20160912160040.jpg
inflating: dataset/data/train/0/20160912008/20160912160058.jpg
inflating: dataset/data/train/0/20160912008/20160912160067.jpg
inflating: dataset/data/train/0/20160912008/20160912160155.jpg
creating: dataset/data/train/0/20160918002/
inflating: dataset/data/train/0/20160918002/20160918145112.jpg
inflating: dataset/data/train/0/20160918002/20160918145228.jpg
inflating: dataset/data/train/0/20160918002/20160918145300.jpg
inflating: dataset/data/train/0/20160918002/20160918145332.jpg
inflating: dataset/data/train/0/20160918002/20160918145402.jpg
inflating: dataset/data/train/0/20160918002/20160918145417.jpg
inflating: dataset/data/train/0/20160918002/20160918145551.jpg
creating: dataset/data/train/0/20160921009/
inflating: dataset/data/train/0/20160921009/20160921111633.jpg
inflating: dataset/data/train/0/20160921009/20160921111745.jpg
inflating: dataset/data/train/0/20160921009/20160921111815.jpg
inflating: dataset/data/train/0/20160921009/20160921111846.jpg
inflating: dataset/data/train/0/20160921009/20160921111916.jpg
inflating: dataset/data/train/0/20160921009/20160921111925.jpg
inflating: dataset/data/train/0/20160921009/20160921111936.jpg
creating: dataset/data/train/0/20160921013/
inflating: dataset/data/train/0/20160921013/20160921152604.jpg
inflating: dataset/data/train/0/20160921013/20160921152745.jpg
inflating: dataset/data/train/0/20160921013/20160921152813.jpg
inflating: dataset/data/train/0/20160921013/20160921152837.jpg
inflating: dataset/data/train/0/20160921013/20160921152919.jpg
inflating: dataset/data/train/0/20160921013/20160921152924.jpg
inflating: dataset/data/train/0/20160921013/20160921153034.jpg
creating: dataset/data/train/0/20160926004/
inflating: dataset/data/train/0/20160926004/20160926151230.jpg
inflating: dataset/data/train/0/20160926004/20160926151259.jpg
inflating: dataset/data/train/0/20160926004/20160926151429.jpg
inflating: dataset/data/train/0/20160926004/20160926151454.jpg
inflating: dataset/data/train/0/20160926004/20160926151531.jpg
inflating: dataset/data/train/0/20160926004/20160926151538.jpg
inflating: dataset/data/train/0/20160926004/20160926151654.jpg
creating: dataset/data/train/0/20160927006/
inflating: dataset/data/train/0/20160927006/20160927163959.jpg
inflating: dataset/data/train/0/20160927006/20160927164124.jpg
inflating: dataset/data/train/0/20160927006/20160927164208.jpg
inflating: dataset/data/train/0/20160927006/20160927164230.jpg
inflating: dataset/data/train/0/20160927006/20160927164251.jpg
inflating: dataset/data/train/0/20160927006/20160927164302.jpg
inflating: dataset/data/train/0/20160927006/20160927164440.jpg
creating: dataset/data/train/0/20161010005/
inflating: dataset/data/train/0/20161010005/20161010145611.jpg
inflating: dataset/data/train/0/20161010005/20161010145738.jpg
inflating: dataset/data/train/0/20161010005/20161010145802.jpg
inflating: dataset/data/train/0/20161010005/20161010145830.jpg
inflating: dataset/data/train/0/20161010005/20161010145900.jpg
inflating: dataset/data/train/0/20161010005/20161010145917.jpg
inflating: dataset/data/train/0/20161010005/20161010145957.jpg
creating: dataset/data/train/0/20161013002/
inflating: dataset/data/train/0/20161013002/20161013115909.jpg
inflating: dataset/data/train/0/20161013002/20161013120040.jpg
inflating: dataset/data/train/0/20161013002/20161013120111.jpg
inflating: dataset/data/train/0/20161013002/20161013120142.jpg
inflating: dataset/data/train/0/20161013002/20161013120209.jpg
inflating: dataset/data/train/0/20161013002/20161013120217.jpg
inflating: dataset/data/train/0/20161013002/20161013120250.jpg
creating: dataset/data/train/0/20161014001/
inflating: dataset/data/train/0/20161014001/20161014143932.jpg
inflating: dataset/data/train/0/20161014001/20161014144103.jpg
inflating: dataset/data/train/0/20161014001/20161014144126.jpg
inflating: dataset/data/train/0/20161014001/20161014144154.jpg
inflating: dataset/data/train/0/20161014001/20161014144227.jpg
inflating: dataset/data/train/0/20161014001/20161014144230.jpg
inflating: dataset/data/train/0/20161014001/20161014144403.jpg
creating: dataset/data/train/0/20161017001/
inflating: dataset/data/train/0/20161017001/20161017112015.jpg
inflating: dataset/data/train/0/20161017001/20161017112139.jpg
inflating: dataset/data/train/0/20161017001/20161017112206.jpg
inflating: dataset/data/train/0/20161017001/20161017112239.jpg
inflating: dataset/data/train/0/20161017001/20161017112306.jpg
inflating: dataset/data/train/0/20161017001/20161017112311.jpg
inflating: dataset/data/train/0/20161017001/20161017112341.jpg
creating: dataset/data/train/0/20161019006/
inflating: dataset/data/train/0/20161019006/20161019145239.jpg
inflating: dataset/data/train/0/20161019006/20161019145411.jpg
inflating: dataset/data/train/0/20161019006/20161019145425.jpg
inflating: dataset/data/train/0/20161019006/20161019145501.jpg
inflating: dataset/data/train/0/20161019006/20161019145511.jpg
inflating: dataset/data/train/0/20161019006/20161019145525.jpg
inflating: dataset/data/train/0/20161019006/20161019145728.jpg
creating: dataset/data/train/0/20161019017/
inflating: dataset/data/train/0/20161019017/20161019171334.jpg
inflating: dataset/data/train/0/20161019017/20161019171502.jpg
inflating: dataset/data/train/0/20161019017/20161019171532.jpg
inflating: dataset/data/train/0/20161019017/20161019171556.jpg
inflating: dataset/data/train/0/20161019017/20161019171601.jpg
inflating: dataset/data/train/0/20161019017/20161019171610.jpg
inflating: dataset/data/train/0/20161019017/20161019171716.jpg
creating: dataset/data/train/0/20161021004/
inflating: dataset/data/train/0/20161021004/20161021163240.jpg
inflating: dataset/data/train/0/20161021004/20161021163429.jpg
inflating: dataset/data/train/0/20161021004/20161021163500.jpg
inflating: dataset/data/train/0/20161021004/20161021163529.jpg
inflating: dataset/data/train/0/20161021004/20161021163558.jpg
inflating: dataset/data/train/0/20161021004/20161021163604.jpg
inflating: dataset/data/train/0/20161021004/20161021163703.jpg
creating: dataset/data/train/0/20161024007/
inflating: dataset/data/train/0/20161024007/20161024155924.jpg
inflating: dataset/data/train/0/20161024007/20161024160049.jpg
inflating: dataset/data/train/0/20161024007/20161024160117.jpg
inflating: dataset/data/train/0/20161024007/20161024160146.jpg
inflating: dataset/data/train/0/20161024007/20161024160215.jpg
inflating: dataset/data/train/0/20161024007/20161024160229.jpg
inflating: dataset/data/train/0/20161024007/20161024160302.jpg
creating: dataset/data/train/0/20161025001/
inflating: dataset/data/train/0/20161025001/20161025152903.jpg
inflating: dataset/data/train/0/20161025001/20161025153020.jpg
inflating: dataset/data/train/0/20161025001/20161025153111.jpg
inflating: dataset/data/train/0/20161025001/20161025153122.jpg
inflating: dataset/data/train/0/20161025001/20161025153150.jpg
inflating: dataset/data/train/0/20161025001/20161025153168.jpg
inflating: dataset/data/train/0/20161025001/20161025153223.jpg
creating: dataset/data/train/0/20161025002/
inflating: dataset/data/train/0/20161025002/20161025160558.jpg
inflating: dataset/data/train/0/20161025002/20161025160724.jpg
inflating: dataset/data/train/0/20161025002/20161025160751.jpg
inflating: dataset/data/train/0/20161025002/20161025160814.jpg
inflating: dataset/data/train/0/20161025002/20161025160854.jpg
inflating: dataset/data/train/0/20161025002/20161025160909.jpg
inflating: dataset/data/train/0/20161025002/20161025161020.jpg
creating: dataset/data/train/0/20161025003/
inflating: dataset/data/train/0/20161025003/20161025162601.jpg
inflating: dataset/data/train/0/20161025003/20161025162716.jpg
inflating: dataset/data/train/0/20161025003/20161025162747.jpg
inflating: dataset/data/train/0/20161025003/20161025162816.jpg
inflating: dataset/data/train/0/20161025003/20161025162846.jpg
inflating: dataset/data/train/0/20161025003/20161025162858.jpg
inflating: dataset/data/train/0/20161025003/20161025163027.jpg
creating: dataset/data/train/0/20161026006/
inflating: dataset/data/train/0/20161026006/20161026114609.jpg
inflating: dataset/data/train/0/20161026006/20161026114729.jpg
inflating: dataset/data/train/0/20161026006/20161026114810.jpg
inflating: dataset/data/train/0/20161026006/20161026114828.jpg
inflating: dataset/data/train/0/20161026006/20161026114903.jpg
inflating: dataset/data/train/0/20161026006/20161026114911.jpg
inflating: dataset/data/train/0/20161026006/20161026114940.jpg
creating: dataset/data/train/0/20161028003/
inflating: dataset/data/train/0/20161028003/20161028094333.jpg
inflating: dataset/data/train/0/20161028003/20161028094457.jpg
inflating: dataset/data/train/0/20161028003/20161028094522.jpg
inflating: dataset/data/train/0/20161028003/20161028094551.jpg
inflating: dataset/data/train/0/20161028003/20161028094621.jpg
inflating: dataset/data/train/0/20161028003/20161028094634.jpg
inflating: dataset/data/train/0/20161028003/20161028094733.jpg
creating: dataset/data/train/0/20161028008/
inflating: dataset/data/train/0/20161028008/20161028154718.jpg
inflating: dataset/data/train/0/20161028008/20161028154907.jpg
inflating: dataset/data/train/0/20161028008/20161028154942.jpg
inflating: dataset/data/train/0/20161028008/20161028155004.jpg
inflating: dataset/data/train/0/20161028008/20161028155032.jpg
inflating: dataset/data/train/0/20161028008/20161028155033.jpg
inflating: dataset/data/train/0/20161028008/20161028155214.jpg
creating: dataset/data/train/1/
creating: dataset/data/train/1/090450340/
inflating: dataset/data/train/1/090450340/090450340Image0.jpg
inflating: dataset/data/train/1/090450340/090450340Image1.jpg
inflating: dataset/data/train/1/090450340/090450340Image2.jpg
inflating: dataset/data/train/1/090450340/090450340Image4.jpg
inflating: dataset/data/train/1/090450340/090450340Image5.jpg
inflating: dataset/data/train/1/090450340/090450340Image6.jpg
inflating: dataset/data/train/1/090450340/090450340Image7.jpg
creating: dataset/data/train/1/090614767/
inflating: dataset/data/train/1/090614767/090614767Image0.jpg
inflating: dataset/data/train/1/090614767/090614767Image10.jpg
inflating: dataset/data/train/1/090614767/090614767Image121.jpg
inflating: dataset/data/train/1/090614767/090614767Image4.jpg
inflating: dataset/data/train/1/090614767/090614767Image5.jpg
inflating: dataset/data/train/1/090614767/090614767Image75.jpg
inflating: dataset/data/train/1/090614767/090614767Image8.jpg
creating: dataset/data/train/1/091028237/
inflating: dataset/data/train/1/091028237/091028237Image0.jpg
inflating: dataset/data/train/1/091028237/091028237Image1.jpg
inflating: dataset/data/train/1/091028237/091028237Image3.jpg
inflating: dataset/data/train/1/091028237/091028237Image4.jpg
inflating: dataset/data/train/1/091028237/091028237Image6.jpg
inflating: dataset/data/train/1/091028237/091028237Image8.jpg
inflating: dataset/data/train/1/091028237/091028237Image9.jpg
creating: dataset/data/train/1/092355743/
inflating: dataset/data/train/1/092355743/092355743Image0.jpg
inflating: dataset/data/train/1/092355743/092355743Image2.jpg
inflating: dataset/data/train/1/092355743/092355743Image3.jpg
inflating: dataset/data/train/1/092355743/092355743Image4.jpg
inflating: dataset/data/train/1/092355743/092355743Image54.jpg
inflating: dataset/data/train/1/092355743/092355743Image6.jpg
inflating: dataset/data/train/1/092355743/092355743Image74.jpg
creating: dataset/data/train/1/094549143/
inflating: dataset/data/train/1/094549143/094549143Image1.jpg
inflating: dataset/data/train/1/094549143/094549143Image10.jpg
inflating: dataset/data/train/1/094549143/094549143Image12.jpg
inflating: dataset/data/train/1/094549143/094549143Image2.jpg
inflating: dataset/data/train/1/094549143/094549143Image4.jpg
inflating: dataset/data/train/1/094549143/094549143Image5.jpg
inflating: dataset/data/train/1/094549143/094549143Image6.jpg
creating: dataset/data/train/1/100004000/
inflating: dataset/data/train/1/100004000/100004000Image0.jpg
inflating: dataset/data/train/1/100004000/100004000Image10.jpg
inflating: dataset/data/train/1/100004000/100004000Image3.jpg
inflating: dataset/data/train/1/100004000/100004000Image4.jpg
inflating: dataset/data/train/1/100004000/100004000Image5.jpg
inflating: dataset/data/train/1/100004000/100004000Image7.jpg
inflating: dataset/data/train/1/100004000/100004000Image8.jpg
creating: dataset/data/train/1/100435333/
inflating: dataset/data/train/1/100435333/100435333Image0.jpg
inflating: dataset/data/train/1/100435333/100435333Image2.jpg
inflating: dataset/data/train/1/100435333/100435333Image4.jpg
inflating: dataset/data/train/1/100435333/100435333Image5.jpg
inflating: dataset/data/train/1/100435333/100435333Image6.jpg
inflating: dataset/data/train/1/100435333/100435333Image8.jpg
inflating: dataset/data/train/1/100435333/100435333Image9.jpg
creating: dataset/data/train/1/102219213/
inflating: dataset/data/train/1/102219213/102219213Image0.jpg
inflating: dataset/data/train/1/102219213/102219213Image2.jpg
inflating: dataset/data/train/1/102219213/102219213Image3.jpg
inflating: dataset/data/train/1/102219213/102219213Image5.jpg
inflating: dataset/data/train/1/102219213/102219213Image6.jpg
inflating: dataset/data/train/1/102219213/102219213Image7.jpg
inflating: dataset/data/train/1/102219213/102219213Image9.jpg
creating: dataset/data/train/1/103022110/
inflating: dataset/data/train/1/103022110/103022110Image0.jpg
inflating: dataset/data/train/1/103022110/103022110Image1.jpg
inflating: dataset/data/train/1/103022110/103022110Image3.jpg
inflating: dataset/data/train/1/103022110/103022110Image4.jpg
inflating: dataset/data/train/1/103022110/103022110Image6.jpg
inflating: dataset/data/train/1/103022110/103022110Image7.jpg
inflating: dataset/data/train/1/103022110/103022110Image9.jpg
creating: dataset/data/train/1/104638520/
inflating: dataset/data/train/1/104638520/104638520Image0.jpg
inflating: dataset/data/train/1/104638520/104638520Image10.jpg
inflating: dataset/data/train/1/104638520/104638520Image2.jpg
inflating: dataset/data/train/1/104638520/104638520Image5.jpg
inflating: dataset/data/train/1/104638520/104638520Image6.jpg
inflating: dataset/data/train/1/104638520/104638520Image7.jpg
inflating: dataset/data/train/1/104638520/104638520Image8.jpg
creating: dataset/data/train/1/105806403/
inflating: dataset/data/train/1/105806403/105806403Image0.jpg
inflating: dataset/data/train/1/105806403/105806403Image3.jpg
inflating: dataset/data/train/1/105806403/105806403Image4.jpg
inflating: dataset/data/train/1/105806403/105806403Image5.jpg
inflating: dataset/data/train/1/105806403/105806403Image6.jpg
inflating: dataset/data/train/1/105806403/105806403Image7.jpg
inflating: dataset/data/train/1/105806403/105806403Image8.jpg
creating: dataset/data/train/1/115033773/
inflating: dataset/data/train/1/115033773/115033773Image0.jpg
inflating: dataset/data/train/1/115033773/115033773Image2.jpg
inflating: dataset/data/train/1/115033773/115033773Image4.jpg
inflating: dataset/data/train/1/115033773/115033773Image5.jpg
inflating: dataset/data/train/1/115033773/115033773Image6.jpg
inflating: dataset/data/train/1/115033773/115033773Image7.jpg
inflating: dataset/data/train/1/115033773/115033773Image8.jpg
creating: dataset/data/train/1/143607127/
inflating: dataset/data/train/1/143607127/143607127Image0.jpg
inflating: dataset/data/train/1/143607127/143607127Image2.jpg
inflating: dataset/data/train/1/143607127/143607127Image3.jpg
inflating: dataset/data/train/1/143607127/143607127Image4.jpg
inflating: dataset/data/train/1/143607127/143607127Image5.jpg
inflating: dataset/data/train/1/143607127/143607127Image6.jpg
inflating: dataset/data/train/1/143607127/143607127Image7.jpg
creating: dataset/data/train/1/143855533/
inflating: dataset/data/train/1/143855533/143855533Image0.jpg
inflating: dataset/data/train/1/143855533/143855533Image2.jpg
inflating: dataset/data/train/1/143855533/143855533Image3.jpg
inflating: dataset/data/train/1/143855533/143855533Image4.jpg
inflating: dataset/data/train/1/143855533/143855533Image5.jpg
inflating: dataset/data/train/1/143855533/143855533Image6.jpg
inflating: dataset/data/train/1/143855533/143855533Image7.jpg
creating: dataset/data/train/1/144111257/
inflating: dataset/data/train/1/144111257/144111257Image0.jpg
inflating: dataset/data/train/1/144111257/144111257Image2.jpg
inflating: dataset/data/train/1/144111257/144111257Image3.jpg
inflating: dataset/data/train/1/144111257/144111257Image4.jpg
inflating: dataset/data/train/1/144111257/144111257Image46.jpg
inflating: dataset/data/train/1/144111257/144111257Image54.jpg
inflating: dataset/data/train/1/144111257/144111257Image7.jpg
creating: dataset/data/train/1/145030457/
inflating: dataset/data/train/1/145030457/145030457Image0.jpg
inflating: dataset/data/train/1/145030457/145030457Image2.jpg
inflating: dataset/data/train/1/145030457/145030457Image3.jpg
inflating: dataset/data/train/1/145030457/145030457Image4.jpg
inflating: dataset/data/train/1/145030457/145030457Image5.jpg
inflating: dataset/data/train/1/145030457/145030457Image6.jpg
inflating: dataset/data/train/1/145030457/145030457Image7.jpg
creating: dataset/data/train/1/145618193/
inflating: dataset/data/train/1/145618193/145618193Image0.jpg
inflating: dataset/data/train/1/145618193/145618193Image2.jpg
inflating: dataset/data/train/1/145618193/145618193Image3.jpg
inflating: dataset/data/train/1/145618193/145618193Image4.jpg
inflating: dataset/data/train/1/145618193/145618193Image5.jpg
inflating: dataset/data/train/1/145618193/145618193Image6.jpg
inflating: dataset/data/train/1/145618193/145618193Image7.jpg
creating: dataset/data/train/1/145846147/
inflating: dataset/data/train/1/145846147/145846147Image0.jpg
inflating: dataset/data/train/1/145846147/145846147Image34.jpg
inflating: dataset/data/train/1/145846147/145846147Image4.jpg
inflating: dataset/data/train/1/145846147/145846147Image5.jpg
inflating: dataset/data/train/1/145846147/145846147Image6.jpg
inflating: dataset/data/train/1/145846147/145846147Image8.jpg
inflating: dataset/data/train/1/145846147/145846147Image94.jpg
creating: dataset/data/train/1/150222377/
inflating: dataset/data/train/1/150222377/150222377Image0.jpg
inflating: dataset/data/train/1/150222377/150222377Image104.jpg
inflating: dataset/data/train/1/150222377/150222377Image2.jpg
inflating: dataset/data/train/1/150222377/150222377Image3.jpg
inflating: dataset/data/train/1/150222377/150222377Image5.jpg
inflating: dataset/data/train/1/150222377/150222377Image64.jpg
inflating: dataset/data/train/1/150222377/150222377Image9.jpg
creating: dataset/data/train/1/150501227/
inflating: dataset/data/train/1/150501227/150501227Image0.jpg
inflating: dataset/data/train/1/150501227/150501227Image1.jpg
inflating: dataset/data/train/1/150501227/150501227Image3.jpg
inflating: dataset/data/train/1/150501227/150501227Image4.jpg
inflating: dataset/data/train/1/150501227/150501227Image5.jpg
inflating: dataset/data/train/1/150501227/150501227Image666.jpg
inflating: dataset/data/train/1/150501227/150501227Image74.jpg
creating: dataset/data/train/1/150521730/
inflating: dataset/data/train/1/150521730/150521730Image0.jpg
inflating: dataset/data/train/1/150521730/150521730Image2.jpg
inflating: dataset/data/train/1/150521730/150521730Image4.jpg
inflating: dataset/data/train/1/150521730/150521730Image5.jpg
inflating: dataset/data/train/1/150521730/150521730Image6.jpg
inflating: dataset/data/train/1/150521730/150521730Image7.jpg
inflating: dataset/data/train/1/150521730/150521730Image9.jpg
creating: dataset/data/train/1/151423737/
inflating: dataset/data/train/1/151423737/151423737Image0.jpg
inflating: dataset/data/train/1/151423737/151423737Image2.jpg
inflating: dataset/data/train/1/151423737/151423737Image3.jpg
inflating: dataset/data/train/1/151423737/151423737Image4.jpg
inflating: dataset/data/train/1/151423737/151423737Image5.jpg
inflating: dataset/data/train/1/151423737/151423737Image6.jpg
inflating: dataset/data/train/1/151423737/151423737Image9.jpg
creating: dataset/data/train/1/151849783/
inflating: dataset/data/train/1/151849783/151849783Image0.jpg
inflating: dataset/data/train/1/151849783/151849783Image2.jpg
inflating: dataset/data/train/1/151849783/151849783Image3.jpg
inflating: dataset/data/train/1/151849783/151849783Image4.jpg
inflating: dataset/data/train/1/151849783/151849783Image54.jpg
inflating: dataset/data/train/1/151849783/151849783Image6.jpg
inflating: dataset/data/train/1/151849783/151849783Image78.jpg
creating: dataset/data/train/1/152356617/
inflating: dataset/data/train/1/152356617/152356617Image0.jpg
inflating: dataset/data/train/1/152356617/152356617Image3.jpg
inflating: dataset/data/train/1/152356617/152356617Image4.jpg
inflating: dataset/data/train/1/152356617/152356617Image54.jpg
inflating: dataset/data/train/1/152356617/152356617Image6.jpg
inflating: dataset/data/train/1/152356617/152356617Image7.jpg
inflating: dataset/data/train/1/152356617/152356617Image88.jpg
creating: dataset/data/train/1/152815657/
inflating: dataset/data/train/1/152815657/152815657Image0.jpg
inflating: dataset/data/train/1/152815657/152815657Image2.jpg
inflating: dataset/data/train/1/152815657/152815657Image3.jpg
inflating: dataset/data/train/1/152815657/152815657Image5.jpg
inflating: dataset/data/train/1/152815657/152815657Image6.jpg
inflating: dataset/data/train/1/152815657/152815657Image7.jpg
inflating: dataset/data/train/1/152815657/152815657Image8.jpg
creating: dataset/data/train/1/153524690/
inflating: dataset/data/train/1/153524690/153524690Image0.jpg
inflating: dataset/data/train/1/153524690/153524690Image2.jpg
inflating: dataset/data/train/1/153524690/153524690Image3.jpg
inflating: dataset/data/train/1/153524690/153524690Image44.jpg
inflating: dataset/data/train/1/153524690/153524690Image6.jpg
inflating: dataset/data/train/1/153524690/153524690Image7.jpg
inflating: dataset/data/train/1/153524690/153524690Image84.jpg
creating: dataset/data/train/1/153732347/
inflating: dataset/data/train/1/153732347/153732347Image0.jpg
inflating: dataset/data/train/1/153732347/153732347Image1.jpg
inflating: dataset/data/train/1/153732347/153732347Image2.jpg
inflating: dataset/data/train/1/153732347/153732347Image4.jpg
inflating: dataset/data/train/1/153732347/153732347Image5.jpg
inflating: dataset/data/train/1/153732347/153732347Image6.jpg
inflating: dataset/data/train/1/153732347/153732347Image7.jpg
creating: dataset/data/train/1/155511193/
inflating: dataset/data/train/1/155511193/155511193Image0.jpg
inflating: dataset/data/train/1/155511193/155511193Image2.jpg
inflating: dataset/data/train/1/155511193/155511193Image3.jpg
inflating: dataset/data/train/1/155511193/155511193Image4.jpg
inflating: dataset/data/train/1/155511193/155511193Image65.jpg
inflating: dataset/data/train/1/155511193/155511193Image7.jpg
inflating: dataset/data/train/1/155511193/155511193Image84.jpg
creating: dataset/data/train/1/155933557/
inflating: dataset/data/train/1/155933557/155933557Image0.jpg
inflating: dataset/data/train/1/155933557/155933557Image2.jpg
inflating: dataset/data/train/1/155933557/155933557Image3.jpg
inflating: dataset/data/train/1/155933557/155933557Image4.jpg
inflating: dataset/data/train/1/155933557/155933557Image5.jpg
inflating: dataset/data/train/1/155933557/155933557Image6.jpg
inflating: dataset/data/train/1/155933557/155933557Image7.jpg
creating: dataset/data/train/1/161441187/
inflating: dataset/data/train/1/161441187/161441187Image0.jpg
inflating: dataset/data/train/1/161441187/161441187Image10.jpg
inflating: dataset/data/train/1/161441187/161441187Image2.jpg
inflating: dataset/data/train/1/161441187/161441187Image5.jpg
inflating: dataset/data/train/1/161441187/161441187Image6.jpg
inflating: dataset/data/train/1/161441187/161441187Image8.jpg
inflating: dataset/data/train/1/161441187/161441187Image9.jpg
creating: dataset/data/train/1/161811413/
inflating: dataset/data/train/1/161811413/161811413Image0.jpg
inflating: dataset/data/train/1/161811413/161811413Image2.jpg
inflating: dataset/data/train/1/161811413/161811413Image3.jpg
inflating: dataset/data/train/1/161811413/161811413Image4.jpg
inflating: dataset/data/train/1/161811413/161811413Image5.jpg
inflating: dataset/data/train/1/161811413/161811413Image6.jpg
inflating: dataset/data/train/1/161811413/161811413Image7.jpg
creating: dataset/data/train/1/20150909004/
inflating: dataset/data/train/1/20150909004/20150909145523.jpg
inflating: dataset/data/train/1/20150909004/20150909145642.jpg
inflating: dataset/data/train/1/20150909004/20150909145712.jpg
inflating: dataset/data/train/1/20150909004/20150909145742.jpg
inflating: dataset/data/train/1/20150909004/20150909145813.jpg
inflating: dataset/data/train/1/20150909004/20150909145844.jpg
inflating: dataset/data/train/1/20150909004/20150909150017.jpg
creating: dataset/data/train/1/20150914002/
inflating: dataset/data/train/1/20150914002/20150914151336.jpg
inflating: dataset/data/train/1/20150914002/20150914151518.jpg
inflating: dataset/data/train/1/20150914002/20150914151540.jpg
inflating: dataset/data/train/1/20150914002/20150914151610.jpg
inflating: dataset/data/train/1/20150914002/20150914151643.jpg
inflating: dataset/data/train/1/20150914002/20150914151645.jpg
inflating: dataset/data/train/1/20150914002/20150914151807.jpg
creating: dataset/data/train/1/20150923006/
inflating: dataset/data/train/1/20150923006/20150923154621.jpg
inflating: dataset/data/train/1/20150923006/20150923154753.jpg
inflating: dataset/data/train/1/20150923006/20150923154828.jpg
inflating: dataset/data/train/1/20150923006/20150923154846.jpg
inflating: dataset/data/train/1/20150923006/20150923154936.jpg
inflating: dataset/data/train/1/20150923006/20150923154943.jpg
inflating: dataset/data/train/1/20150923006/20150923155146.jpg
creating: dataset/data/train/1/20150930005/
inflating: dataset/data/train/1/20150930005/20150930144528.jpg
inflating: dataset/data/train/1/20150930005/20150930144648.jpg
inflating: dataset/data/train/1/20150930005/20150930144716.jpg
inflating: dataset/data/train/1/20150930005/20150930144759.jpg
inflating: dataset/data/train/1/20150930005/20150930144825.jpg
inflating: dataset/data/train/1/20150930005/20150930144830.jpg
inflating: dataset/data/train/1/20150930005/20150930144952.jpg
creating: dataset/data/train/1/20151020001/
inflating: dataset/data/train/1/20151020001/20151020110941.jpg
inflating: dataset/data/train/1/20151020001/20151020111129.jpg
inflating: dataset/data/train/1/20151020001/20151020111156.jpg
inflating: dataset/data/train/1/20151020001/20151020111224.jpg
inflating: dataset/data/train/1/20151020001/20151020111245.jpg
inflating: dataset/data/train/1/20151020001/20151020111248.jpg
inflating: dataset/data/train/1/20151020001/20151020111438.jpg
creating: dataset/data/train/1/20151028006/
inflating: dataset/data/train/1/20151028006/20151028150313.jpg
inflating: dataset/data/train/1/20151028006/20151028150438.jpg
inflating: dataset/data/train/1/20151028006/20151028150503.jpg
inflating: dataset/data/train/1/20151028006/20151028150533.jpg
inflating: dataset/data/train/1/20151028006/20151028150603.jpg
inflating: dataset/data/train/1/20151028006/20151028150604.jpg
inflating: dataset/data/train/1/20151028006/20151028150705.jpg
creating: dataset/data/train/1/20151102003/
inflating: dataset/data/train/1/20151102003/20151102152252.jpg
inflating: dataset/data/train/1/20151102003/20151102152415.jpg
inflating: dataset/data/train/1/20151102003/20151102152442.jpg
inflating: dataset/data/train/1/20151102003/20151102152513.jpg
inflating: dataset/data/train/1/20151102003/20151102152543.jpg
inflating: dataset/data/train/1/20151102003/20151102152549.jpg
inflating: dataset/data/train/1/20151102003/20151102152644.jpg
creating: dataset/data/train/1/20151106008/
inflating: dataset/data/train/1/20151106008/20151106163323.jpg
inflating: dataset/data/train/1/20151106008/20151106163538.jpg
inflating: dataset/data/train/1/20151106008/20151106163602.jpg
inflating: dataset/data/train/1/20151106008/20151106163635.jpg
inflating: dataset/data/train/1/20151106008/20151106163704.jpg
inflating: dataset/data/train/1/20151106008/20151106163758.jpg
inflating: dataset/data/train/1/20151106008/20151106163801.jpg
creating: dataset/data/train/1/20151112001/
inflating: dataset/data/train/1/20151112001/20151112091716.jpg
inflating: dataset/data/train/1/20151112001/20151112091846.jpg
inflating: dataset/data/train/1/20151112001/20151112091907.jpg
inflating: dataset/data/train/1/20151112001/20151112091946.jpg
inflating: dataset/data/train/1/20151112001/20151112092016.jpg
inflating: dataset/data/train/1/20151112001/20151112092018.jpg
inflating: dataset/data/train/1/20151112001/20151112092139.jpg
creating: dataset/data/train/1/20151116002/
inflating: dataset/data/train/1/20151116002/20151116145120.jpg
inflating: dataset/data/train/1/20151116002/20151116145238.jpg
inflating: dataset/data/train/1/20151116002/20151116145307.jpg
inflating: dataset/data/train/1/20151116002/20151116145338.jpg
inflating: dataset/data/train/1/20151116002/20151116145407.jpg
inflating: dataset/data/train/1/20151116002/20151116145408.jpg
inflating: dataset/data/train/1/20151116002/20151116145508.jpg
creating: dataset/data/train/1/20151116003/
inflating: dataset/data/train/1/20151116003/20151116150341.jpg
inflating: dataset/data/train/1/20151116003/20151116150527.jpg
inflating: dataset/data/train/1/20151116003/20151116150600.jpg
inflating: dataset/data/train/1/20151116003/20151116150620.jpg
inflating: dataset/data/train/1/20151116003/20151116150655.jpg
inflating: dataset/data/train/1/20151116003/20151116150659.jpg
inflating: dataset/data/train/1/20151116003/20151116150800.jpg
creating: dataset/data/train/1/20151116004/
inflating: dataset/data/train/1/20151116004/20151116152130.jpg
inflating: dataset/data/train/1/20151116004/20151116152325.jpg
inflating: dataset/data/train/1/20151116004/20151116152401.jpg
inflating: dataset/data/train/1/20151116004/20151116152424.jpg
inflating: dataset/data/train/1/20151116004/20151116152500.jpg
inflating: dataset/data/train/1/20151116004/20151116152505.jpg
inflating: dataset/data/train/1/20151116004/20151116152601.jpg
creating: dataset/data/train/1/20151116006/
inflating: dataset/data/train/1/20151116006/20151116154505.jpg
inflating: dataset/data/train/1/20151116006/20151116154627.jpg
inflating: dataset/data/train/1/20151116006/20151116154659.jpg
inflating: dataset/data/train/1/20151116006/20151116154728.jpg
inflating: dataset/data/train/1/20151116006/20151116154757.jpg
inflating: dataset/data/train/1/20151116006/20151116154759.jpg
inflating: dataset/data/train/1/20151116006/20151116154837.jpg
creating: dataset/data/train/1/20151118004/
inflating: dataset/data/train/1/20151118004/20151118154201.jpg
inflating: dataset/data/train/1/20151118004/20151118154344.jpg
inflating: dataset/data/train/1/20151118004/20151118154416.jpg
inflating: dataset/data/train/1/20151118004/20151118154437.jpg
inflating: dataset/data/train/1/20151118004/20151118154458.jpg
inflating: dataset/data/train/1/20151118004/20151118154459.jpg
inflating: dataset/data/train/1/20151118004/20151118154557.jpg
creating: dataset/data/train/1/20151201001/
inflating: dataset/data/train/1/20151201001/20151201111528.jpg
inflating: dataset/data/train/1/20151201001/20151201111648.jpg
inflating: dataset/data/train/1/20151201001/20151201111717.jpg
inflating: dataset/data/train/1/20151201001/20151201111746.jpg
inflating: dataset/data/train/1/20151201001/20151201111817.jpg
inflating: dataset/data/train/1/20151201001/20151201111819.jpg
inflating: dataset/data/train/1/20151201001/20151201112027.jpg
creating: dataset/data/train/1/20151207010/
inflating: dataset/data/train/1/20151207010/20151207155920.jpg
inflating: dataset/data/train/1/20151207010/20151207160110.jpg
inflating: dataset/data/train/1/20151207010/20151207160135.jpg
inflating: dataset/data/train/1/20151207010/20151207160151.jpg
inflating: dataset/data/train/1/20151207010/20151207160238.jpg
inflating: dataset/data/train/1/20151207010/20151207160239.jpg
inflating: dataset/data/train/1/20151207010/20151207160402.jpg
creating: dataset/data/train/1/20151209008/
inflating: dataset/data/train/1/20151209008/20151209155256.jpg
inflating: dataset/data/train/1/20151209008/20151209155447.jpg
inflating: dataset/data/train/1/20151209008/20151209155516.jpg
inflating: dataset/data/train/1/20151209008/20151209155530.jpg
inflating: dataset/data/train/1/20151209008/20151209155546.jpg
inflating: dataset/data/train/1/20151209008/20151209155549.jpg
inflating: dataset/data/train/1/20151209008/20151209155644.jpg
creating: dataset/data/train/1/20151210003/
inflating: dataset/data/train/1/20151210003/20151210151935.jpg
inflating: dataset/data/train/1/20151210003/20151210152114.jpg
inflating: dataset/data/train/1/20151210003/20151210152139.jpg
inflating: dataset/data/train/1/20151210003/20151210152215.jpg
inflating: dataset/data/train/1/20151210003/20151210152250.jpg
inflating: dataset/data/train/1/20151210003/20151210152305.jpg
inflating: dataset/data/train/1/20151210003/20151210152629.jpg
creating: dataset/data/train/1/20151223005/
inflating: dataset/data/train/1/20151223005/20151223150535.jpg
  ... [unzip output truncated: extraction continues through the remaining dataset/data/train/1/<case-id>/ folders and the dataset/data/train/2/<case-id>/ folders, each case folder containing roughly seven .jpg images]
inflating: dataset/data/train/2/20160912007/20160912152249.jpg
creating: dataset/data/train/2/20160914012/
inflating: dataset/data/train/2/20160914012/20160914163210.jpg
inflating: dataset/data/train/2/20160914012/20160914163346.jpg
inflating: dataset/data/train/2/20160914012/20160914163401.jpg
inflating: dataset/data/train/2/20160914012/20160914163421.jpg
inflating: dataset/data/train/2/20160914012/20160914163434.jpg
inflating: dataset/data/train/2/20160914012/20160914163435.jpg
inflating: dataset/data/train/2/20160914012/20160914163522.jpg
creating: dataset/data/train/2/20160914015/
inflating: dataset/data/train/2/20160914015/20160914170331.jpg
inflating: dataset/data/train/2/20160914015/20160914170505.jpg
inflating: dataset/data/train/2/20160914015/20160914170528.jpg
inflating: dataset/data/train/2/20160914015/20160914170558.jpg
inflating: dataset/data/train/2/20160914015/20160914170628.jpg
inflating: dataset/data/train/2/20160914015/20160914170629.jpg
inflating: dataset/data/train/2/20160914015/20160914170719.jpg
creating: dataset/data/train/2/20160920006/
inflating: dataset/data/train/2/20160920006/20160920170435.jpg
inflating: dataset/data/train/2/20160920006/20160920170553.jpg
inflating: dataset/data/train/2/20160920006/20160920170608.jpg
inflating: dataset/data/train/2/20160920006/20160920170625.jpg
inflating: dataset/data/train/2/20160920006/20160920170652.jpg
inflating: dataset/data/train/2/20160920006/20160920170653.jpg
inflating: dataset/data/train/2/20160920006/20160920170737.jpg
creating: dataset/data/train/2/20160927005/
inflating: dataset/data/train/2/20160927005/20160927161957.jpg
inflating: dataset/data/train/2/20160927005/20160927162205.jpg
inflating: dataset/data/train/2/20160927005/20160927162212.jpg
inflating: dataset/data/train/2/20160927005/20160927162237.jpg
inflating: dataset/data/train/2/20160927005/20160927162301.jpg
inflating: dataset/data/train/2/20160927005/20160927162302.jpg
inflating: dataset/data/train/2/20160927005/20160927162403.jpg
creating: dataset/data/train/2/20160928004/
inflating: dataset/data/train/2/20160928004/20160928102428.jpg
inflating: dataset/data/train/2/20160928004/20160928102546.jpg
inflating: dataset/data/train/2/20160928004/20160928102610.jpg
inflating: dataset/data/train/2/20160928004/20160928102640.jpg
inflating: dataset/data/train/2/20160928004/20160928102713.jpg
inflating: dataset/data/train/2/20160928004/20160928102715.jpg
inflating: dataset/data/train/2/20160928004/20160928102749.jpg
creating: dataset/data/train/2/20161008001/
inflating: dataset/data/train/2/20161008001/20161008095156.jpg
inflating: dataset/data/train/2/20161008001/20161008095338.jpg
inflating: dataset/data/train/2/20161008001/20161008095354.jpg
inflating: dataset/data/train/2/20161008001/20161008095426.jpg
inflating: dataset/data/train/2/20161008001/20161008095454.jpg
inflating: dataset/data/train/2/20161008001/20161008095455.jpg
inflating: dataset/data/train/2/20161008001/20161008095533.jpg
creating: dataset/data/train/2/20161010006/
inflating: dataset/data/train/2/20161010006/20161010150522.jpg
inflating: dataset/data/train/2/20161010006/20161010150658.jpg
inflating: dataset/data/train/2/20161010006/20161010150713.jpg
inflating: dataset/data/train/2/20161010006/20161010150753.jpg
inflating: dataset/data/train/2/20161010006/20161010150813.jpg
inflating: dataset/data/train/2/20161010006/20161010150814.jpg
inflating: dataset/data/train/2/20161010006/20161010150914.jpg
creating: dataset/data/train/3/
creating: dataset/data/train/3/090200510/
inflating: dataset/data/train/3/090200510/090200510Image0.jpg
inflating: dataset/data/train/3/090200510/090200510Image10.jpg
inflating: dataset/data/train/3/090200510/090200510Image11.jpg
inflating: dataset/data/train/3/090200510/090200510Image2.jpg
inflating: dataset/data/train/3/090200510/090200510Image3.jpg
inflating: dataset/data/train/3/090200510/090200510Image8.jpg
inflating: dataset/data/train/3/090200510/090200510Image9.jpg
creating: dataset/data/train/3/093518297/
inflating: dataset/data/train/3/093518297/093518297Image0.jpg
inflating: dataset/data/train/3/093518297/093518297Image2.jpg
inflating: dataset/data/train/3/093518297/093518297Image4.jpg
inflating: dataset/data/train/3/093518297/093518297Image5.jpg
inflating: dataset/data/train/3/093518297/093518297Image6.jpg
inflating: dataset/data/train/3/093518297/093518297Image7.jpg
inflating: dataset/data/train/3/093518297/093518297Image8.jpg
creating: dataset/data/train/3/101450780/
inflating: dataset/data/train/3/101450780/101450780Image0.jpg
inflating: dataset/data/train/3/101450780/101450780Image1.jpg
inflating: dataset/data/train/3/101450780/101450780Image2.jpg
inflating: dataset/data/train/3/101450780/101450780Image3.jpg
inflating: dataset/data/train/3/101450780/101450780Image4.jpg
inflating: dataset/data/train/3/101450780/101450780Image6.jpg
inflating: dataset/data/train/3/101450780/101450780Image9.jpg
creating: dataset/data/train/3/103336120/
inflating: dataset/data/train/3/103336120/103336120Image1.jpg
inflating: dataset/data/train/3/103336120/103336120Image10.jpg
inflating: dataset/data/train/3/103336120/103336120Image11.jpg
inflating: dataset/data/train/3/103336120/103336120Image12.jpg
inflating: dataset/data/train/3/103336120/103336120Image5.jpg
inflating: dataset/data/train/3/103336120/103336120Image6.jpg
inflating: dataset/data/train/3/103336120/103336120Image7.jpg
creating: dataset/data/train/3/114204650/
inflating: dataset/data/train/3/114204650/114204650Image10.jpg
inflating: dataset/data/train/3/114204650/114204650Image13.jpg
inflating: dataset/data/train/3/114204650/114204650Image14.jpg
inflating: dataset/data/train/3/114204650/114204650Image15.jpg
inflating: dataset/data/train/3/114204650/114204650Image4.jpg
inflating: dataset/data/train/3/114204650/114204650Image8.jpg
inflating: dataset/data/train/3/114204650/114204650Image9.jpg
creating: dataset/data/train/3/115924160/
inflating: dataset/data/train/3/115924160/115924160Image0.jpg
inflating: dataset/data/train/3/115924160/115924160Image1.jpg
inflating: dataset/data/train/3/115924160/115924160Image2.jpg
inflating: dataset/data/train/3/115924160/115924160Image5.jpg
inflating: dataset/data/train/3/115924160/115924160Image7.jpg
inflating: dataset/data/train/3/115924160/115924160Image8.jpg
inflating: dataset/data/train/3/115924160/115924160Image9.jpg
creating: dataset/data/train/3/145141110/
inflating: dataset/data/train/3/145141110/145141110Image0.jpg
inflating: dataset/data/train/3/145141110/145141110Image10.jpg
inflating: dataset/data/train/3/145141110/145141110Image2.jpg
inflating: dataset/data/train/3/145141110/145141110Image3.jpg
inflating: dataset/data/train/3/145141110/145141110Image5.jpg
inflating: dataset/data/train/3/145141110/145141110Image6.jpg
inflating: dataset/data/train/3/145141110/145141110Image9.jpg
creating: dataset/data/train/3/150023453/
inflating: dataset/data/train/3/150023453/150023453Image0.jpg
inflating: dataset/data/train/3/150023453/150023453Image10.jpg
inflating: dataset/data/train/3/150023453/150023453Image2.jpg
inflating: dataset/data/train/3/150023453/150023453Image4.jpg
inflating: dataset/data/train/3/150023453/150023453Image5.jpg
inflating: dataset/data/train/3/150023453/150023453Image6.jpg
inflating: dataset/data/train/3/150023453/150023453Image7.jpg
creating: dataset/data/train/3/20150914004/
inflating: dataset/data/train/3/20150914004/20150914155806.jpg
inflating: dataset/data/train/3/20150914004/20150914155948.jpg
inflating: dataset/data/train/3/20150914004/20150914160017.jpg
inflating: dataset/data/train/3/20150914004/20150914160049.jpg
inflating: dataset/data/train/3/20150914004/20150914160121.jpg
inflating: dataset/data/train/3/20150914004/20150914160123.jpg
inflating: dataset/data/train/3/20150914004/20150914160239.jpg
creating: dataset/data/train/3/20150930010/
inflating: dataset/data/train/3/20150930010/20150930160649.jpg
inflating: dataset/data/train/3/20150930010/20150930160831.jpg
inflating: dataset/data/train/3/20150930010/20150930160900.jpg
inflating: dataset/data/train/3/20150930010/20150930160926.jpg
inflating: dataset/data/train/3/20150930010/20150930161002.jpg
inflating: dataset/data/train/3/20150930010/20150930161006.jpg
inflating: dataset/data/train/3/20150930010/20150930161104.jpg
creating: dataset/data/train/3/20151020004/
inflating: dataset/data/train/3/20151020004/20151020160653.jpg
inflating: dataset/data/train/3/20151020004/20151020160843.jpg
inflating: dataset/data/train/3/20151020004/20151020160903.jpg
inflating: dataset/data/train/3/20151020004/20151020160928.jpg
inflating: dataset/data/train/3/20151020004/20151020161109.jpg
inflating: dataset/data/train/3/20151020004/20151020161110.jpg
inflating: dataset/data/train/3/20151020004/20151020161326.jpg
creating: dataset/data/train/3/20160303007/
inflating: dataset/data/train/3/20160303007/20160303173514.jpg
inflating: dataset/data/train/3/20160303007/20160303173705.jpg
inflating: dataset/data/train/3/20160303007/20160303173742.jpg
inflating: dataset/data/train/3/20160303007/20160303173758.jpg
inflating: dataset/data/train/3/20160303007/20160303173826.jpg
inflating: dataset/data/train/3/20160303007/20160303173829.jpg
inflating: dataset/data/train/3/20160303007/20160303173853.jpg
creating: dataset/data/train/3/20160323017/
inflating: dataset/data/train/3/20160323017/20160323151927.jpg
inflating: dataset/data/train/3/20160323017/20160323152105.jpg
inflating: dataset/data/train/3/20160323017/20160323152131.jpg
inflating: dataset/data/train/3/20160323017/20160323152201.jpg
inflating: dataset/data/train/3/20160323017/20160323152231.jpg
inflating: dataset/data/train/3/20160323017/20160323152240.jpg
inflating: dataset/data/train/3/20160323017/20160323152323.jpg
creating: dataset/data/train/3/20160406014/
inflating: dataset/data/train/3/20160406014/20160406155835.jpg
inflating: dataset/data/train/3/20160406014/20160406160044.jpg
inflating: dataset/data/train/3/20160406014/20160406160059.jpg
inflating: dataset/data/train/3/20160406014/20160406160125.jpg
inflating: dataset/data/train/3/20160406014/20160406160152.jpg
inflating: dataset/data/train/3/20160406014/20160406160153.jpg
inflating: dataset/data/train/3/20160406014/20160406160345.jpg
creating: dataset/data/train/3/20160418009/
inflating: dataset/data/train/3/20160418009/20160418154437.jpg
inflating: dataset/data/train/3/20160418009/20160418154633.jpg
inflating: dataset/data/train/3/20160418009/20160418154659.jpg
inflating: dataset/data/train/3/20160418009/20160418154732.jpg
inflating: dataset/data/train/3/20160418009/20160418154803.jpg
inflating: dataset/data/train/3/20160418009/20160418154810.jpg
inflating: dataset/data/train/3/20160418009/20160418154833.jpg
creating: dataset/data/train/3/20160427004/
inflating: dataset/data/train/3/20160427004/20160427143000.jpg
inflating: dataset/data/train/3/20160427004/20160427143137.jpg
inflating: dataset/data/train/3/20160427004/20160427143202.jpg
inflating: dataset/data/train/3/20160427004/20160427143231.jpg
inflating: dataset/data/train/3/20160427004/20160427143301.jpg
inflating: dataset/data/train/3/20160427004/20160427143305.jpg
inflating: dataset/data/train/3/20160427004/20160427143358.jpg
creating: dataset/data/train/3/20160427007/
inflating: dataset/data/train/3/20160427007/20160427151800.jpg
inflating: dataset/data/train/3/20160427007/20160427151938.jpg
inflating: dataset/data/train/3/20160427007/20160427152003.jpg
inflating: dataset/data/train/3/20160427007/20160427152036.jpg
inflating: dataset/data/train/3/20160427007/20160427152112.jpg
inflating: dataset/data/train/3/20160427007/20160427152113.jpg
inflating: dataset/data/train/3/20160427007/20160427152214.jpg
creating: dataset/data/train/3/20160606004/
inflating: dataset/data/train/3/20160606004/20160606160324.jpg
inflating: dataset/data/train/3/20160606004/20160606160448.jpg
inflating: dataset/data/train/3/20160606004/20160606160514.jpg
inflating: dataset/data/train/3/20160606004/20160606160545.jpg
inflating: dataset/data/train/3/20160606004/20160606160614.jpg
inflating: dataset/data/train/3/20160606004/20160606160616.jpg
inflating: dataset/data/train/3/20160606004/20160606160700.jpg
creating: dataset/data/train/3/20160612004/
inflating: dataset/data/train/3/20160612004/20160612164314.jpg
inflating: dataset/data/train/3/20160612004/20160612164454.jpg
inflating: dataset/data/train/3/20160612004/20160612164541.jpg
inflating: dataset/data/train/3/20160612004/20160612164608.jpg
inflating: dataset/data/train/3/20160612004/20160612164615.jpg
inflating: dataset/data/train/3/20160612004/20160612164618.jpg
inflating: dataset/data/train/3/20160612004/20160612164707.jpg
creating: dataset/data/train/3/20160617002/
inflating: dataset/data/train/3/20160617002/20160617152115.jpg
inflating: dataset/data/train/3/20160617002/20160617152452.jpg
inflating: dataset/data/train/3/20160617002/20160617152502.jpg
inflating: dataset/data/train/3/20160617002/20160617152536.jpg
inflating: dataset/data/train/3/20160617002/20160617152625.jpg
inflating: dataset/data/train/3/20160617002/20160617152628.jpg
inflating: dataset/data/train/3/20160617002/20160617152854.jpg
###Markdown
**Constants** For your environment, please modify the paths accordingly.
###Code
TRAIN_PATH = '/content/dataset/data/train/'
TEST_PATH = '/content/dataset/data/test/'
# TRAIN_PATH = 'dataset/data/train/'
# TEST_PATH = 'dataset/data/test/'
CROP_SIZE = 260
IMAGE_SIZE = 224
BATCH_SIZE = 150
# prefix = ''
prefix = '/content/drive/My Drive/Studiu doctorat leziuni cervicale/V2/Chekpoints & Notebooks/'
CHACKPOINT_SIMPLE_MODEL = prefix + 'Cancer Detection MobileNetV2 All Natural Images Full Conv32-0.7 6 Dec.tar'
###Output
_____no_output_____
###Markdown
**Imports**
###Code
import torch as t
import torchvision as tv
import numpy as np
import PIL as pil
import matplotlib.pyplot as plt
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader
from torch.nn import Linear, BCEWithLogitsLoss
import sklearn as sk
import sklearn.metrics
from os import listdir
import time
import random
import GPUtil
import math
###Output
_____no_output_____
###Markdown
**Memory Stats**
###Code
import GPUtil
def memory_stats():
for gpu in GPUtil.getGPUs():
print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal))
memory_stats()
###Output
GPU RAM Free: 16280MB | Used: 0MB | Util 0% | Total 16280MB
###Markdown
**Deterministic Measurements** These statements help make the experiments reproducible by fixing the random seeds. Even with fixed seeds, experiments are usually not reproducible across different PyTorch releases, commits, platforms, or between CPU and GPU executions. Please find more details in the PyTorch documentation: https://pytorch.org/docs/stable/notes/randomness.html
###Code
SEED = 0
t.manual_seed(SEED)
t.cuda.manual_seed(SEED)
t.backends.cudnn.deterministic = True
t.backends.cudnn.benchmark = False
np.random.seed(SEED)
random.seed(SEED)
###Output
_____no_output_____
###Markdown
**Loading Data** The dataset is structured in multiple small folders of 7 images each. The generator below iterates through the folders and yields the category together with the 7 image paths of that folder. The paths are sorted because the order matters: each folder contains 3 types of images, the first 5 taken with acetic acid solution, the second-to-last through a green lens, and the last one with iodine solution (a solution of a dark red color).
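For illustration only, here is a quick sketch of what the generator defined in the next cell yields once it has been run; the unpacked names `category` and `img_paths` are just illustrative.

```python
# Illustrative sketch: inspect the first folder yielded by the generator below.
# Assumes the getImagesPaths cell has already been executed.
category, img_paths = next(getImagesPaths(TRAIN_PATH))
print(category)        # class label parsed from the folder name
print(len(img_paths))  # 7 ordered image paths
```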
###Code
def sortByLastDigits(elem):
chars = [c for c in elem if c.isdigit()]
return 0 if len(chars) == 0 else int(''.join(chars))
def getImagesPaths(root_path):
for class_folder in [root_path + f for f in listdir(root_path)]:
category = int(class_folder[-1])
for case_folder in listdir(class_folder):
case_folder_path = class_folder + '/' + case_folder + '/'
img_files = [case_folder_path + file_name for file_name in listdir(case_folder_path)]
yield category, sorted(img_files, key = sortByLastDigits)
###Output
_____no_output_____
###Markdown
We define 3 datasets, which load 3 kinds of images: natural images, images taken through a green lens and images where the doctor applied iodine solution (which gives a dark red color). Each dataset has dynamic and static transformations which could be applied to the data. The static transformations are applied on the initialization of the dataset, while the dynamic ones are applied when loading each batch of data.
###Code
class SimpleImagesDataset(t.utils.data.Dataset):
def __init__(self, root_path, transforms_x_static = None, transforms_x_dynamic = None, transforms_y_static = None, transforms_y_dynamic = None):
self.dataset = []
self.transforms_x = transforms_x_dynamic
self.transforms_y = transforms_y_dynamic
for category, img_files in getImagesPaths(root_path):
for i in range(5):
img = pil.Image.open(img_files[i])
if transforms_x_static != None:
img = transforms_x_static(img)
if transforms_y_static != None:
category = transforms_y_static(category)
self.dataset.append((img, category))
def __getitem__(self, i):
x, y = self.dataset[i]
if self.transforms_x != None:
x = self.transforms_x(x)
if self.transforms_y != None:
y = self.transforms_y(y)
return x, y
def __len__(self):
return len(self.dataset)
class GreenLensImagesDataset(SimpleImagesDataset):
def __init__(self, root_path, transforms_x_static = None, transforms_x_dynamic = None, transforms_y_static = None, transforms_y_dynamic = None):
self.dataset = []
self.transforms_x = transforms_x_dynamic
self.transforms_y = transforms_y_dynamic
for category, img_files in getImagesPaths(root_path):
# Only the green lens image
img = pil.Image.open(img_files[-2])
if transforms_x_static != None:
img = transforms_x_static(img)
if transforms_y_static != None:
category = transforms_y_static(category)
self.dataset.append((img, category))
class RedImagesDataset(SimpleImagesDataset):
def __init__(self, root_path, transforms_x_static = None, transforms_x_dynamic = None, transforms_y_static = None, transforms_y_dynamic = None):
self.dataset = []
self.transforms_x = transforms_x_dynamic
self.transforms_y = transforms_y_dynamic
for category, img_files in getImagesPaths(root_path):
# Only the iodine solution (red) image
img = pil.Image.open(img_files[-1])
if transforms_x_static != None:
img = transforms_x_static(img)
if transforms_y_static != None:
category = transforms_y_static(category)
self.dataset.append((img, category))
###Output
_____no_output_____
###Markdown
**Preprocess Data** Convert pytorch tensor to numpy array.
###Code
def to_numpy(x):
return x.cpu().detach().numpy()
###Output
_____no_output_____
###Markdown
Data transformations for the test and training sets.
###Code
norm_mean = [0.485, 0.456, 0.406]
norm_std = [0.229, 0.224, 0.225]
transforms_train = tv.transforms.Compose([
tv.transforms.RandomAffine(degrees = 45, translate = None, scale = (1., 2.), shear = 30),
# tv.transforms.CenterCrop(CROP_SIZE),
tv.transforms.Resize(IMAGE_SIZE),
tv.transforms.RandomHorizontalFlip(),
tv.transforms.ToTensor(),
tv.transforms.Lambda(lambda t: t.cuda()),
tv.transforms.Normalize(mean=norm_mean, std=norm_std)
])
transforms_test = tv.transforms.Compose([
# tv.transforms.CenterCrop(CROP_SIZE),
tv.transforms.Resize(IMAGE_SIZE),
tv.transforms.ToTensor(),
tv.transforms.Normalize(mean=norm_mean, std=norm_std)
])
y_transform = tv.transforms.Lambda(lambda y: t.tensor(y, dtype=t.long, device = 'cuda:0'))
###Output
_____no_output_____
###Markdown
Initialize pytorch datasets and loaders for training and test.
###Code
def create_loaders(dataset_class):
dataset_train = dataset_class(TRAIN_PATH, transforms_x_dynamic = transforms_train, transforms_y_dynamic = y_transform)
dataset_test = dataset_class(TEST_PATH, transforms_x_static = transforms_test,
transforms_x_dynamic = tv.transforms.Lambda(lambda t: t.cuda()), transforms_y_dynamic = y_transform)
loader_train = DataLoader(dataset_train, BATCH_SIZE, shuffle = True, num_workers = 0)
loader_test = DataLoader(dataset_test, BATCH_SIZE, shuffle = False, num_workers = 0)
return loader_train, loader_test, len(dataset_train), len(dataset_test)
loader_train_simple_img, loader_test_simple_img, len_train, len_test = create_loaders(SimpleImagesDataset)
###Output
_____no_output_____
###Markdown
**Visualize Data** Load a few images so that we can see the effects of the data augmentation on the training set.
###Code
def plot_one_prediction(x, label, pred):
x, label, pred = to_numpy(x), to_numpy(label), to_numpy(pred)
x = np.transpose(x, [1, 2, 0])
if x.shape[-1] == 1:
x = x.squeeze()
x = x * np.array(norm_std) + np.array(norm_mean)
plt.title(label, color = 'green' if label == pred else 'red')
plt.imshow(x)
def plot_predictions(imgs, labels, preds):
fig = plt.figure(figsize = (20, 5))
for i in range(20):
fig.add_subplot(2, 10, i + 1, xticks = [], yticks = [])
plot_one_prediction(imgs[i], labels[i], preds[i])
# x, y = next(iter(loader_train_simple_img))
# plot_predictions(x, y, y)
###Output
_____no_output_____
###Markdown
**Model** Define a few models to experiment with.
###Code
def get_mobilenet_v2():
model = t.hub.load('pytorch/vision', 'mobilenet_v2', pretrained=True)
# model.classifier[0] = t.nn.Dropout(p=0.9, inplace=False)
# model.classifier[1] = Linear(in_features=1280, out_features=4, bias=True)
# model.features[18].add_module('cnn_drop_18', t.nn.Dropout2d(p = .3))
# model.features[17]._modules['conv'][1].add_module('cnn_drop_17', t.nn.Dropout2d(p = .2))
# model.features[16]._modules['conv'][1].add_module('cnn_drop_16', t.nn.Dropout2d(p = .1))
model = model.cuda()
return model
def get_vgg_19():
model = tv.models.vgg19(pretrained = True)
model = model.cuda()
model.classifier[2].p = .2
model.classifier[6].out_features = 4
return model
def get_res_next_101():
model = t.hub.load('facebookresearch/WSL-Images', 'resnext101_32x8d_wsl')
model.fc = t.nn.Sequential(
t.nn.Dropout(p = .9),
t.nn.Linear(in_features=2048, out_features=4)
)
model = model.cuda()
return model
def get_resnet_18():
model = tv.models.resnet18(pretrained = True)
model.fc = t.nn.Sequential(
t.nn.Dropout(p = .9),
t.nn.Linear(in_features=512, out_features=4)
)
model = model.cuda()
return model
def get_dense_net():
model = tv.models.densenet121(pretrained = True)
model.classifier = t.nn.Sequential(
t.nn.Dropout(p = .9),
t.nn.Linear(in_features = 1024, out_features = 4)
)
model = model.cuda()
return model
class MobileNetV2_FullConv(t.nn.Module):
def __init__(self):
super().__init__()
self.cnn = get_mobilenet_v2().features
self.cnn[18] = t.nn.Sequential(
tv.models.mobilenet.ConvBNReLU(320, 32, kernel_size=1),
t.nn.Dropout2d(p = .7)
)
# self.fc = t.nn.Sequential(
# t.nn.Flatten(),
# t.nn.Dropout(0.4),
# t.nn.Linear(8 * 7 * 10, 4),
# )
self.fc = t.nn.Linear(32, 4)
def forward(self, x):
x = self.cnn(x)
x = x.mean([2, 3])
x = self.fc(x)
return x
model_simple = t.nn.DataParallel(MobileNetV2_FullConv().cuda())
###Output
Downloading: "https://github.com/pytorch/vision/archive/master.zip" to /root/.cache/torch/hub/master.zip
Downloading: "https://download.pytorch.org/models/mobilenet_v2-b0353104.pth" to /root/.cache/torch/checkpoints/mobilenet_v2-b0353104.pth
100%|██████████| 13.6M/13.6M [00:00<00:00, 36.2MB/s]
###Markdown
**Train & Evaluate** Timer utility function. This is used to measure the execution speed.
###Code
time_start = 0
def timer_start():
global time_start
time_start = time.time()
def timer_end():
return time.time() - time_start
###Output
_____no_output_____
###Markdown
This function trains the network and evaluates it after every epoch. It returns the metrics recorded during training for both the train and the test set: loss, accuracy, precision, recall, F1 score and mean (harmonic) class accuracy. The function also saves a checkpoint of the model every time the test mean class accuracy improves, so in the end we have a checkpoint of the model which gave the best score.
###Code
import statistics
def train_eval(optimizer, model, loader_train, loader_test, chekpoint_name, epochs):
metrics = {
'losses_train': [],
'losses_test': [],
'acc_train': [],
'acc_test': [],
'prec_train': [],
'prec_test': [],
'rec_train': [],
'rec_test': [],
'f_score_train': [],
'f_score_test': [],
'mean_class_acc_train': [],
'mean_class_acc_test': []
}
best_mean_acc = 0
loss_weights = t.tensor([1/4] * 4, device='cuda:0')
try:
for epoch in range(epochs):
timer_start()
loss_fn = t.nn.CrossEntropyLoss(weight = loss_weights)
# loss_fn = t.nn.CrossEntropyLoss()
# loss_fn = FocalLoss(gamma = 2)
train_epoch_loss, train_epoch_acc, train_epoch_precision, train_epoch_recall, train_epoch_f_score = 0, 0, 0, 0, 0
test_epoch_loss, test_epoch_acc, test_epoch_precision, test_epoch_recall, test_epoch_f_score = 0, 0, 0, 0, 0
# Train
model.train()
conf_matrix = np.zeros((4, 4))
for x, y in loader_train:
y_pred = model.forward(x)
loss = loss_fn(y_pred, y)
loss.backward()
optimizer.step()
# memory_stats()
optimizer.zero_grad()
y_pred, y = to_numpy(y_pred), to_numpy(y)
pred = y_pred.argmax(axis = 1)
ratio = len(y) / len_train
train_epoch_loss += (loss.item() * ratio)
train_epoch_acc += (sk.metrics.accuracy_score(y, pred) * ratio)
precision, recall, f_score, _ = sk.metrics.precision_recall_fscore_support(y, pred, average = 'macro')
train_epoch_precision += (precision * ratio)
train_epoch_recall += (recall * ratio)
train_epoch_f_score += (f_score * ratio)
conf_matrix += sk.metrics.confusion_matrix(y, pred, labels = list(range(4)))
class_acc = [conf_matrix[i][i] / sum(conf_matrix[i]) for i in range(len(conf_matrix))]
mean_class_acc = statistics.harmonic_mean(class_acc)
errors = [1 - conf_matrix[i][i] / sum(conf_matrix[i]) for i in range(len(conf_matrix))]
errors_strong = [math.exp(100 * e) for e in errors]
loss_weights = t.tensor([e / sum(errors_strong) for e in errors_strong], device = 'cuda:0')
metrics['losses_train'].append(train_epoch_loss)
metrics['acc_train'].append(train_epoch_acc)
metrics['prec_train'].append(train_epoch_precision)
metrics['rec_train'].append(train_epoch_recall)
metrics['f_score_train'].append(train_epoch_f_score)
metrics['mean_class_acc_train'].append(mean_class_acc)
# Evaluate
model.eval()
with t.no_grad():
conf_matrix_test = np.zeros((4, 4))
for x, y in loader_test:
y_pred = model.forward(x)
loss = loss_fn(y_pred, y)
y_pred, y = to_numpy(y_pred), to_numpy(y)
pred = y_pred.argmax(axis = 1)
ratio = len(y) / len_test
test_epoch_loss += (loss.item() * ratio)
test_epoch_acc += (sk.metrics.accuracy_score(y, pred) * ratio )
precision, recall, f_score, _ = sk.metrics.precision_recall_fscore_support(y, pred, average = 'macro')
test_epoch_precision += (precision * ratio)
test_epoch_recall += (recall * ratio)
test_epoch_f_score += (f_score * ratio)
conf_matrix_test += sk.metrics.confusion_matrix(y, pred, labels = list(range(4)))
class_acc_test = [conf_matrix_test[i][i] / sum(conf_matrix_test[i]) for i in range(len(conf_matrix_test))]
mean_class_acc_test = statistics.harmonic_mean(class_acc_test)
metrics['losses_test'].append(test_epoch_loss)
metrics['acc_test'].append(test_epoch_acc)
metrics['prec_test'].append(test_epoch_precision)
metrics['rec_test'].append(test_epoch_recall)
metrics['f_score_test'].append(test_epoch_f_score)
metrics['mean_class_acc_test'].append(mean_class_acc_test)
if metrics['mean_class_acc_test'][-1] > best_mean_acc:
best_mean_acc = metrics['mean_class_acc_test'][-1]
t.save({'model': model.state_dict()}, 'checkpint {}.tar'.format(chekpoint_name))
print('Epoch {} mean class acc {} acc {} prec {} rec {} f {} minutes {}'.format(
epoch + 1, metrics['mean_class_acc_test'][-1], metrics['acc_test'][-1], metrics['prec_test'][-1], metrics['rec_test'][-1], metrics['f_score_test'][-1], timer_end() / 60))
except KeyboardInterrupt as e:
print(e)
print('Ended training')
return metrics
###Output
_____no_output_____
###Markdown
Plot a metric for both train and test.
###Code
def plot_train_test(train, test, title, y_title):
plt.plot(range(len(train)), train, label = 'train')
plt.plot(range(len(test)), test, label = 'test')
plt.xlabel('Epochs')
plt.ylabel(y_title)
plt.title(title)
plt.legend()
plt.show()
def plot_precision_recall(metrics):
plt.scatter(metrics['prec_train'], metrics['rec_train'], label = 'train')
plt.scatter(metrics['prec_test'], metrics['rec_test'], label = 'test')
plt.legend()
plt.title('Precision-Recall')
plt.xlabel('Precision')
plt.ylabel('Recall')
###Output
_____no_output_____
###Markdown
Train a model in several stages. The `steps_learning` parameter is a list of tuples; each tuple specifies the number of epochs and the learning rate for that stage, as shown in the sketch below.
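A hypothetical call could look like the sketch below; the checkpoint name and the epoch/learning-rate pairs are illustrative values, not the settings used for the saved checkpoint.

```python
# Illustrative only: two stages, 30 epochs at lr 1e-4 followed by 10 epochs at lr 1e-5.
do_train(model_simple, loader_train_simple_img, loader_test_simple_img,
         'MobileNetV2 FullConv example', [(30, 1e-4), (10, 1e-5)])
```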
###Code
def do_train(model, loader_train, loader_test, checkpoint_name, steps_learning):
t.cuda.empty_cache()
for steps, learn_rate in steps_learning:
metrics = train_eval(t.optim.Adam(model.parameters(), lr = learn_rate, weight_decay = 0), model, loader_train, loader_test, checkpoint_name, steps)
index_max = np.array(metrics['mean_class_acc_test']).argmax()
print('Best mean class accuracy :', metrics['mean_class_acc_test'][index_max])
print('Best test accuracy :', metrics['acc_test'][index_max])
print('Corresponding precision :', metrics['prec_test'][index_max])
print('Corresponding recall :', metrics['rec_test'][index_max])
print('Corresponding f1 score :', metrics['f_score_test'][index_max])
plot_train_test(metrics['mean_class_acc_train'], metrics['mean_class_acc_test'], 'Mean Class Accuracy (lr = {})'.format(learn_rate), 'Mean Class Accuracy')
plot_train_test(metrics['losses_train'], metrics['losses_test'], 'Loss (lr = {})'.format(learn_rate), 'Loss')
plot_train_test(metrics['acc_train'], metrics['acc_test'], 'Accuracy (lr = {})'.format(learn_rate), 'Accuracy')
plot_train_test(metrics['prec_train'], metrics['prec_test'], 'Precision (lr = {})'.format(learn_rate), 'Precision')
plot_train_test(metrics['rec_train'], metrics['rec_test'], 'Recall (lr = {})'.format(learn_rate), 'Recall')
plot_train_test(metrics['f_score_train'], metrics['f_score_test'], 'F1 Score (lr = {})'.format(learn_rate), 'F1 Score')
plot_precision_recall(metrics)
###Output
_____no_output_____
###Markdown
Evaluate the trained model: the cell below defines helpers for per-class accuracy, loads the saved checkpoint and plots the class accuracies (the `do_train` call itself is not shown in this cell).
###Code
def calculate_class_acc_for_test_set(model):
model.eval()
with t.no_grad():
conf_matrix = np.zeros((4, 4))
for x, y in loader_test_simple_img:
y_pred = model.forward(x)
y_pred, y = to_numpy(y_pred), to_numpy(y)
pred = y_pred.argmax(axis = 1)
cm = sk.metrics.confusion_matrix(y, pred, labels = list(range(4)))
conf_matrix += cm
print('Confusion matrix:\n', conf_matrix)
class_acc = [conf_matrix[i][i] / sum(conf_matrix[i]) for i in range(len(conf_matrix))]
print('Class acc:\n', class_acc)
mean_class_acc = statistics.harmonic_mean(class_acc)
print('Mean class accuracy:\n', mean_class_acc)
return class_acc
def plot_class_acc(class_acc):
plt.bar(list(range(4)), class_acc, align='center', alpha=0.5)
plt.xticks(list(range(4)), list(range(4)))
plt.xlabel('Classes')
plt.ylabel('Accuracy')
plt.savefig('AccPerClass.pdf', dpi = 300, format = 'pdf')
plt.show()
def plot_class_acc_comparison(class_acc_1, class_acc_2, title_1, title_2):
width = .3
plt.bar(list(range(4)), class_acc_1, width, alpha=0.5, color = 'green', label = title_1)
plt.bar(np.array(list(range(4))) + width, class_acc_2, width, alpha=0.5, color = 'blue', label = title_2)
plt.xticks(np.array(list(range(4))) + width/2, list(range(4)))
plt.xlabel('Classes')
plt.ylabel('Accuracy')
plt.legend()
plt.savefig('ClassAccCompare.pdf', dpi = 300, format = 'pdf')
plt.show()
model = MobileNetV2_FullConv().cuda()
checkpoint = t.load(CHACKPOINT_SIMPLE_MODEL)
model.load_state_dict(checkpoint['model'])
class_acc = calculate_class_acc_for_test_set(model)
plot_class_acc(class_acc)
trainig_data_count = [1540 , 469, 854 , 140 ]
trainig_data_distirbution = [t / sum(trainig_data_count) for t in trainig_data_count]
plt.bar(list(range(4)), trainig_data_count, align='center', alpha=0.5)
plt.xticks(list(range(4)), list(range(4)))
plt.xlabel('Classes')
plt.ylabel('Image count')
# plt.savefig('TrainingDataDistribution.pdf', dpi = 300, format = 'pdf')
# plt.show()
###Output
_____no_output_____ |
machine_learning_lecture2_2020.ipynb | ###Markdown
Lecture 2: Introduction to Machine Learning in Python, May 2020. By Dr. Anders Christensen `anders.christensen @ unibas.ch` --- Part 1: Non-linear regression? Last time we saw *linear least squares regression*:
###Code
import matplotlib.pyplot as plt
# plt.rcParams['figure.figsize'] = [8, 6]
import numpy as np
# X-values
x = np.arange(0,20.0, 0.2)
# Y-values: Y = 1.2*X + random noise
y = 1.2 * x + np.random.normal(scale=2.0, size=len(x))
plt.scatter(x,y)
plt.plot(x, x*1.2, color="g")
plt.show()
###Output
_____no_output_____
###Markdown
Approximate a target function as a weighted sum of the features:\begin{equation}y(\mathbf{x}) = x_1 \alpha_1 + x_2 \alpha_2 + \dots + x_n \alpha_n\end{equation}In matrix notation:\begin{equation}\mathbf{y} = \mathbf{X} \mathbf{\alpha}\end{equation}Minimze the error:\begin{equation}\mathbf{\hat{\alpha}} = \text{arg min} || \mathbf{y}^\text{ref} - \mathbf{X}\mathbf{\alpha}||^2\end{equation} What a about linear regression for $\sin \left(x\right)$?Same example as above, just with $y(x) = \sin \left(x\right)$:
###Code
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(0,6.6, 0.6)
y = np.sin(x) #+ (np.random.random(size=len(x)) - 0.5) * 0.5
print(x.shape)
print(y.shape)
xplot = np.arange(0,6.6, 0.01)
plt.scatter(x,y, label="Training")
plt.plot(xplot, np.sin(xplot), color="g", label="sin(x)")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Just like yesterday, we can use `numpy.linalg.lstsq()` to solve for the optimal coefficients by minimizing the error:\begin{equation}\mathbf{\hat{\alpha}} = \text{arg min} || \mathbf{y}^\text{ref} - \mathbf{X}\mathbf{\alpha}||^2\end{equation}Closed-form solution:\begin{equation}\mathbf{\hat{\alpha}} = \left(\mathbf{X}^\top\mathbf{X} \right)^{-1}\mathbf{X}^\top\mathbf{y}^\text{ref}\end{equation}Note, however, that NumPy solves the problem with a singular-value decomposition rather than this closed form.
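As a small sketch (not part of the original lecture code), the closed-form normal-equation solution can be written out directly for this single-feature problem. It assumes the `x` and `y` arrays defined above and should agree with the `lstsq` coefficient computed in the next cell up to numerical precision.

```python
# Closed-form solution alpha = (X^T X)^-1 X^T y for the one-feature model y = alpha * x
X = x.reshape(-1, 1)                       # same reshape as in the next cell
alpha_closed = np.linalg.inv(X.T @ X) @ (X.T @ y)
print(alpha_closed)
```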
###Code
x_reshape = x.reshape(len(x),1)
alpha, residuals, rank, sing_vals = np.linalg.lstsq(x_reshape, y, rcond=None)
print(alpha)
###Output
_____no_output_____
###Markdown
Making new predictions:* Test set of new $x$-values
###Code
x_test = np.arange(0.0, 6.6, 0.6)
print(x_test)
print(x_test.shape)
###Output
_____no_output_____
###Markdown
Predictions are made with the same equation:\begin{equation}\mathbf{y} = \mathbf{X} \mathbf{\alpha}\end{equation}
###Code
x_test = x_test.reshape(len(x_test),1)
y_test = np.dot(x_test, alpha)
print(y_test)
###Output
_____no_output_____
###Markdown
Making a plot of the predictions:
###Code
plt.plot(xplot, np.sin(xplot), color="g", label="sin(x)")
plt.scatter(x, y, label="Training")
plt.scatter(x_test, y_test, color="r", label="Test")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Part 1½: Kernel ridge regression. The exercise today is about coding kernel ridge regression! The idea is to describe the target function in terms of basis functions ("kernel functions") placed on the training points. Plot of basis functions for the $\sin\left(x\right)$ curve:
###Code
import qml.kernels
import qml.math
sigma = 0.3
K = qml.kernels.gaussian_kernel(x_reshape, x_reshape, sigma)
K[np.diag_indices_from(K)] += 1e-10
alphas = qml.math.cho_solve(K, y)
xgauss = np.arange(0.0,6.0, 0.01)
fit_curve = np.zeros(len(xgauss))
for alp, xval in zip(alphas, x):
ygauss = np.exp(-(xgauss-xval)**2/(2*sigma**2)) * alp
fit_curve += ygauss
# plt.plot(xgauss, ygauss, color="C0")
plt.scatter(x,y, label="Training")
plt.plot(xgauss, np.sin(xgauss), color="g", label="sin(x)")
plt.plot(xgauss, fit_curve, color="C3", label="KRR")
# plt.ylim([-1.5,1.5])
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
$y\left(x\right)$ is now a sum of weighted basis functions:\begin{equation}\hat{y}\left(\tilde{\mathbf{x}}\right) = \sum_i \kappa\left(\mathbf{x}_i, \tilde{\mathbf{x}} \right) \alpha_i\end{equation}As in linear regression, $\mathbf{\alpha}_i$ are our regression weights.In matrix form:\begin{equation}\mathbf{y} = \mathbf{K}\mathbf{\alpha}\end{equation}---For example, Gaussian kernel:\begin{equation}\kappa\left(\mathbf{x}, \tilde{\mathbf{x}} \right) = \exp \left(-\frac{\|\mathbf{x} - \tilde{\mathbf{x}} \|^2}{2\sigma^2} \right)\end{equation}Alternative definition: (used in the exercises today)\begin{equation}\kappa\left(\mathbf{x}, \tilde{\mathbf{x}} \right) = \exp \left(-\gamma \|\mathbf{x} - \tilde{\mathbf{x}} \|^2 \right)\end{equation}The Gaussian kernel is always a number between 0 and 1:* 0 if $x$ and $\tilde{x}$ are infinitely apart* 1 if $x$ and $\tilde{x}$ are the same--- Fit the best $\alpha$ coefficients:\begin{equation}\mathbf{\hat{\alpha}} = \text{arg min} || \mathbf{y}^\text{ref} - \mathbf{K}^\mathrm{train}\mathbf{\alpha}||^2\end{equation}Where $\mathbf{y}^\mathrm{ref}$ are the training labels and $\mathbf{K}^\mathrm{train}$ is the pairwise kernel matrix for all the points in the training set, defined as:\begin{equation}\mathbf{K}^\mathrm{train}_{ij} = \kappa\left(\mathbf{x}_i, \mathbf{x}_j \right)\end{equation}Closed-form solution for regression coefficients:\begin{equation}\alpha = \left(\mathbf{K}^\mathrm{train} + \mathbf{I}\lambda\right)^{-1}\mathbf{y}^\text{ref}\end{equation}$\lambda$ is a small number to be added to the diagonal for numerical reasons (regularization and numerical stability). First, let's define the kernel function
###Code
def kernel(xi, xj):
sigma = 0.3
k = np.exp(-np.linalg.norm(xi - xj)**2 / (2 * sigma**2))
return k
###Output
_____no_output_____
###Markdown
Next, let's calculate the kernel matrix:
###Code
K = np.zeros((len(x),len(x)))
for i in range(len(x)):
for j in range(len(x)):
K[i,j] = kernel(x[i], x[j])
np.set_printoptions(linewidth=666)
print(K)
###Output
_____no_output_____
###Markdown
With the Kernel matrix above, we can now find the regression coefficients:\begin{equation}\alpha = \left(\mathbf{K}^\mathrm{train} + \mathbf{I}\lambda\right)^{-1}\mathbf{y}^\text{ref}\end{equation}
###Code
alphas = np.matmul(np.linalg.inv(K + np.eye(len(x))*1e-10), y)
print(alphas)
print(len(alphas))
###Output
_____no_output_____
###Markdown
Another way to solve the equation is via a so-called Cholesky decomposition which is more numerically stable:
###Code
import qml.math
# Add to diagonal
K[np.diag_indices_from(K)] += 1e-10
# Solve
alphas = qml.math.cho_solve(K, y)
print(alphas)
###Output
_____no_output_____
###Markdown
Predictions:Remembering again:\begin{equation}\hat{y}\left(\tilde{\mathbf{x}}\right) = \sum_i \kappa\left(\mathbf{x}_i, \tilde{\mathbf{x}} \right) \alpha_i\end{equation}In matrix form:\begin{equation}\mathbf{y} = \mathbf{K}\mathbf{\alpha}\end{equation}
###Code
# Test points
x_test = np.random.random(size=(20,1)) * 6
# x_test = np.arange(-5, 11.5, 0.5)
# Zero kernel
K_test = np.zeros((len(x_test),len(x)))
# Calculate pair-wise Gaussian kernel function
for i in range(len(x_test)):
for j in range(len(x)):
K_test[i,j] = kernel(x_test[i], x[j])
# Make prediction
y_test = np.dot(K_test, alphas)
# Plot everything
plt.scatter(x,y, label="Training")
plt.plot(xplot, np.sin(xplot), color="g", label="True curve")
plt.scatter(x_test, y_test, color="r", label="Test")
plt.grid(True)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Note on hyperparameters: the kernel width $\sigma$ and the regularization $\lambda$ were simply fixed by hand above; in practice they should be tuned, for example against a validation set, as in the sketch below. Everything repeated, but with Scikit-learn: everything we've just coded is of course also available in Scikit-learn: https://scikit-learn.org/stable/modules/generated/sklearn.kernel_ridge.KernelRidge.html
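The following is an illustrative sketch (an assumption on my part, not part of the lecture code) of tuning the kernel width against held-out points; the `x_val`/`y_val` arrays and the list of widths are hypothetical.

```python
from sklearn.kernel_ridge import KernelRidge

# Hypothetical validation points drawn from the same interval as the training data
x_val = np.random.random(size=(20, 1)) * 6
y_val = np.sin(x_val).ravel()

for sigma_try in [0.03, 0.1, 0.3, 1.0, 3.0]:
    gam_try = 1.0 / (2.0 * sigma_try**2)
    krr_try = KernelRidge(alpha=1e-10, kernel="rbf", gamma=gam_try)
    krr_try.fit(x_reshape, y)
    mae = np.mean(np.abs(krr_try.predict(x_val) - y_val))
    print("sigma = {:5.2f}  validation MAE = {:.4f}".format(sigma_try, mae))
```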
###Code
from sklearn.kernel_ridge import KernelRidge
# Gamma instead of sigma!
gam = 1 / (2.0 * sigma**2)
# Make the KRR object
krr = KernelRidge(alpha=1e-10, kernel="rbf", gamma=gam)
# krr = KernelRidge(alpha=1e-10, kernel="linear")
# Fit the machine
krr.fit(x_reshape, y)
# Prediction
y_scikit = krr.predict(x_test)
# Plot everything
plt.scatter(x,y, label="Training")
plt.plot(xgauss, np.sin(xgauss), color="g", label="True curve")
plt.scatter(x_test, y_scikit, color="r", label="Scikit-learn")
plt.legend()
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
What about hyperparameters? (See the note and the tuning sketch above.) Part 2: Molecules in Machine Learning. Molecule data files
###Code
# Download Coordinate file for ethanol
!wget -O ethanol.xyz https://raw.githubusercontent.com/andersx/ml-intro/master/ethanol.xyz
###Output
_____no_output_____
###Markdown
Molecules are often stored in the ".xyz" format. For example, a conformation for ethanol is stored in the above file:
```
9

C -0.69924 -0.01497 0.32426
C -1.90198 -0.93192 0.56363
O 0.50350 -0.72657 0.14397
H 0.30059 -1.64320 -0.17552
H -2.08482 -1.56542 -0.33061
H -2.80809 -0.31867 0.75417
H -1.71524 -1.58372 1.44281
H -0.89079 0.62463 -0.56368
H -0.58049 0.63584 1.21559
```
Such files can for example be visualized with ASE (Atomic Simulation Environment): https://wiki.fysik.dtu.dk/ase/
###Code
!pip install ase
import ase.io
import ase.visualize
ethanol = ase.io.read("ethanol.xyz")
ase.visualize.view(ethanol, viewer="x3d")
###Output
_____no_output_____
###Markdown
Making representations for molecules? We need to calculate the pairwise kernels between molecules:\begin{equation}\kappa\left(\mathbf{x}, \tilde{\mathbf{x}} \right) = \exp \left(-\frac{\|\mathbf{x} - \tilde{\mathbf{x}} \|^2}{2\sigma^2} \right)\end{equation}Also sometimes called the *similarity kernel*: 1 if the two molecules are identical, 0 if they are completely different. Problem: how do we get an input representation? Which features are good? Anything could in principle be used:* Number of atoms* Physical observables* Name string (e.g. SMILES)* Coordinates* Nuclear charges* Bonding information* Etc ...Some desirable properties:* Rotational and translational invariance* Permutational invariance* Uniqueness (injectivity)This kind of problem in machine learning is called "*feature engineering*". There are *many* representations available for molecules. Two examples follow, which will also be used in the exercises for today.
###Code
!pip install qml
###Output
_____no_output_____
###Markdown
The Coulomb Matrix: Proposed by Rupp et al. (2012) Phys Rev Lett https://doi.org/10.1103/PhysRevLett.108.058301\begin{equation}x_{ij}^\text{CM} = \begin{cases} 0.5Z_i^{2.4} & \text{for } i=j \\ \frac{Z_i Z_j}{R_{ij}} & \text{for } i \neq j \end{cases}\end{equation}The above takes care of translational and rotational invariance. Furthermore, the rows and columns are sorted to ensure permutational invariance. Example for ethanol (lower triangle):
```
73.51669472
34.06515641 36.8581052
19.58838249 23.5104435  36.8581052
 8.06700497  3.03799493  2.46941255  0.5
 3.91313164  5.40535191  2.78865961  0.35565997  0.5
 3.87121997  5.40076757  2.76284181  0.38595636  0.55366084  0.5
 2.9519494   2.75461693  5.40414993  0.38673507  0.39949749  0.32304269  0.5
 2.89651793  2.75224092  5.4003281   0.41811064  0.32445499  0.39915951  0.55199418  0.5
 2.35852226  2.76046537  5.40252013  0.28533493  0.40534695  0.39832789  0.5530945   0.55433763  0.5
```
A minimal hand-rolled sketch of this formula follows; after that, an example of how to generate it with QML:
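The sketch below is a minimal hand-rolled version of the (unsorted) formula above; the `charges` and `coords` arguments are placeholders for the nuclear charges and Cartesian coordinates of a molecule. QML's generator in the next cell additionally sorts the rows/columns and zero-pads the representation to a fixed size.

```python
# Sketch only: unsorted Coulomb matrix from nuclear charges and coordinates
def coulomb_matrix(charges, coords):
    n = len(charges)
    cm = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                cm[i, j] = 0.5 * charges[i]**2.4
            else:
                cm[i, j] = charges[i] * charges[j] / np.linalg.norm(coords[i] - coords[j])
    return cm
```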
###Code
import qml
mol = qml.Compound(xyz="ethanol.xyz")
mol.generate_coulomb_matrix(size=12)
np.set_printoptions(linewidth=100)
print(mol.representation)
###Output
_____no_output_____
###Markdown
Bag-of-Bonds:Proposed by Hansen et al. (2015) J Phys Chem Lett https://doi.org/10.1021/acs.jpclett.5b00831Same content as the Coulomb Matrix, but items are grouped differently, so only identical terms will be compared.More accurate for molecules? (Exercise of today)
###Code
mol.generate_bob(asize={"O":2, "C": 3, "H": 8})
np.set_printoptions(linewidth=100)
print(mol.representation)
###Output
_____no_output_____
###Markdown
Exercises set 2: The QM7 dataset.Rupp et al. (2012) Phys Rev Lett https://doi.org/10.1103/PhysRevLett.108.058301The QM7 dataset contains XYZ structures for 7101 small molecules, with up to 7 atoms of type CNO, saturated with H.Some of them look like this:Attempt to map all organic molecules with up to 7 CNO-atoms.For each molecule you will be given the atomization energy calculated using a QM method (PBE0/def2-TZVP).* Instead of the raw 7101 XYZ files, you will be given the Coulomb Matrix features and Bag-of-Bonds features for each molecule, along with the atomization energies. How to determine which machine learning method is better than the other? For a given training/test split, one could calculate, for example, the mean-absolute-error (MAE) or root-mean-squared-error (RMSE) of the predictions, compared to the true values.\begin{equation}\text{MAE} =\frac{1}{N} \sum_{i=1}^{N}|y_i - \tilde{y}_i|\end{equation}\begin{equation}\text{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_i - \tilde{y}_i\right)^2}\end{equation}
###Code
y_true = np.array([0.00, 1.32, 2.64, 3.96])
y_pred = np.array([0.30, 1.14, 3.18, 2.70])
diff = y_true-y_pred
print(diff)
# Mean-absolute-error
mae = np.mean(np.abs(diff))
print(mae)
# Root-mean-squared-error
rmse = np.sqrt(np.mean(diff**2))
print(rmse)
###Output
_____no_output_____
###Markdown
Learning Curves: What is learning? Being able to make better predictions with more training data. The error decays according to a power law with the training set size:\begin{equation}\mathrm{Error} \propto \frac{a}{N^b}\end{equation}Which is the same as:\begin{equation}\log\left(\mathrm{Error}\right) \propto \log\left(a\right) - b \log\left(N\right)\end{equation}On a log-log scale the error is linear in the training set size! It turns out that, for the same dataset, the learning curves of different models are almost always parallel (same $b$-value). We can therefore compare machine learning models (for example based on different representations) by looking at their learning curves; a small synthetic sketch of such a plot is shown below, before the splitting example. Training, Test, and Validation Splits: One common way to split data is as follows. This is a method to avoid overfitting (i.e. fitting parameters to your "test" data). Procedure for calculating the error for a model: 1. Train model on **Training Set** 2. Minimize error for **Validation Set**, i.e. optimize hyperparameters + goto 1. 3. Calculate error for **Test Set** The last error on the Test Set (from 3.) is your error. Example how to split 100 values into three: * 50 Training Set * 10 Validation Set * 40 Test Set
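Purely as an illustration (the numbers below are generated from an assumed power law with $a=100$ and $b=0.5$, not from real results), a learning curve plotted on log-log axes looks like a straight line:

```python
# Synthetic learning curve: Error = a / N**b plotted on log-log axes
N = np.array([100, 200, 400, 800, 1600, 3200])
a, b = 100.0, 0.5                 # assumed constants, for illustration only
error = a / N**b
plt.loglog(N, error, "o-")
plt.xlabel("Training set size N")
plt.ylabel("Error (e.g. MAE)")
plt.title("Synthetic learning curve")
plt.grid(True)
plt.show()
```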
###Code
data_set = np.arange(0,100)
print(data_set)
###Output
_____no_output_____
###Markdown
Remembering the Numpy *slice-notation*: the general syntax is `array[row]` or `array[row, column]`. Special notation:
* `n` = the element at index n
* `:n` = up to n
* `n:` = from n onwards
* `n:m` = from n to m
* `:` = everything
###Code
# First 50
training_set = data_set[:50]
print(training_set)
# Next 10
validation_set = data_set[50:60]
print(validation_set)
# Next 40
test_set = data_set[60:100]
print(test_set)
###Output
_____no_output_____ |
Part3 Industrial application/Global AI Challenge for Building E&M Facilities/FE&lbgm.ipynb | ###Markdown
Global AI Challenge for Building E&M Facilities > AI competition for building electrical and mechanical (E&M) facilities. Competition introduction: the "Global AI Challenge for Building E&M Facilities – AI Competition" is a worldwide competition open to participants of any nationality. Accurate **cooling load demand forecasting** is a key element in improving building energy efficiency. In this competition, participants must develop an AI-based **cooling load forecasting model** for a commercial building designated by the organizers. Buildings account for 40% of the world's total energy consumption; in a high-density city such as Hong Kong, buildings consume up to 80% of the city's total energy, and Hong Kong spends as much as HK$12.3 billion on electricity for heating, ventilation and air conditioning every year. Saving energy in buildings is therefore essential for achieving carbon neutrality. The competition provides 18 consecutive months of weather-feature data for training a model, which is then used to predict the cooling load for a specified seven-day period. > Tip: the Code and Markdown cells of this notebook can be run with Shift + Enter, and a Markdown cell can be edited by double-clicking it. **Note!!!: due to the competition rules, I cannot present my complete code to readers; this notebook mainly demonstrates my basic workflow.**
###Code
from datetime import date, timedelta, datetime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from IPython import display
from sklearn import preprocessing
from sklearn.model_selection import KFold, StratifiedKFold
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import MinMaxScaler
from sklearn.cluster import KMeans
import lightgbm as lgb
import gc
import os
display.set_matplotlib_formats('svg')
debug = False
###Output
D:\tensorflow1\lib\site-packages\dask\dataframe\utils.py:13: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
###Markdown
Utility function: reduce memory usage. An illustrative usage sketch follows, then the helper itself.
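An illustrative way to use the helper defined in the next cell, once it has been run, is to pass each freshly loaded dataframe through it; the toy frame below is only for demonstration.

```python
# Toy demonstration (assumes reduce_mem_usage from the next cell has been defined)
toy = pd.DataFrame({"a": np.arange(1000, dtype=np.int64),
                    "b": np.random.rand(1000).astype(np.float64)})
toy = reduce_mem_usage(toy)   # downcasts: int64 -> int16, float64 -> float32 here
```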
###Code
from pandas.api.types import is_datetime64_any_dtype as is_datetime
from pandas.api.types import is_categorical_dtype
def reduce_mem_usage(df, use_float16=False):
"""
Iterate over all columns of the dataframe and modify the data types to reduce memory usage.
"""
start_mem = df.memory_usage().sum() / 1024**2
print("Memory usage of dataframe is {:.2f} MB".format(start_mem))
for col in df.columns:
if is_datetime(df[col]) or is_categorical_dtype(df[col]):
continue
col_type = df[col].dtype
if col_type != object:
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == "int":
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
if use_float16 and c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
else:
df[col] = df[col].astype("category")
end_mem = df.memory_usage().sum() / 1024**2
print("Memory usage after optimization is: {:.2f} MB".format(end_mem))
print("Decreased by {:.1f}%".format(100 * (start_mem - end_mem) / start_mem))
return df
###Output
_____no_output_____
###Markdown
Template functions > These include group-by statistical features (count, nunique, mean, median, max, min, sum, std, var, quantile, kernel-weighted median), one-hot encoding and label encoding. A toy usage sketch of the group-by helpers follows, then the definitions.
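A toy sketch of how the group-by helpers defined below are typically used; the column names here are made up for illustration.

```python
# Toy example (assumes the helper functions in the next cell have been defined)
toy = pd.DataFrame({"building": ["A", "A", "B", "B", "B"],
                    "load": [10.0, 12.0, 20.0, 22.0, 24.0]})
toy = merge_mean(toy, ["building"], "load", "load_building_mean")
toy = merge_max(toy, ["building"], "load", "load_building_max")
print(toy)
```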
###Code
def encode_onehot(df,column_name):
feature_df=pd.get_dummies(df[column_name], prefix=column_name)
all = pd.concat([df.drop([column_name], axis=1),feature_df], axis=1)
return all
def encode_count(df,column_name):
lbl = preprocessing.LabelEncoder()
lbl.fit(list(df[column_name].values))
df[column_name] = lbl.transform(list(df[column_name].values))
return df
def merge_count(df,columns,value,cname):
add = pd.DataFrame(df.groupby(columns)[value].count()).reset_index()
add.columns=columns+[cname]
df=df.merge(add,on=columns,how="left")
return df
def merge_nunique(df,columns,value,cname):
add = pd.DataFrame(df.groupby(columns)[value].nunique()).reset_index()
add.columns=columns+[cname]
df=df.merge(add,on=columns,how="left")
return df
def merge_median(df,columns,value,cname):
add = pd.DataFrame(df.groupby(columns)[value].median()).reset_index()
add.columns=columns+[cname]
df=df.merge(add,on=columns,how="left")
return df
def merge_mean(df,columns,value,cname):
add = pd.DataFrame(df.groupby(columns)[value].mean()).reset_index()
add.columns=columns+[cname]
df=df.merge(add,on=columns,how="left")
return df
def merge_sum(df,columns,value,cname):
add = pd.DataFrame(df.groupby(columns)[value].sum()).reset_index()
add.columns=columns+[cname]
df=df.merge(add,on=columns,how="left")
return df
def merge_max(df,columns,value,cname):
add = pd.DataFrame(df.groupby(columns)[value].max()).reset_index()
add.columns=columns+[cname]
df=df.merge(add,on=columns,how="left")
return df
def merge_min(df,columns,value,cname):
add = pd.DataFrame(df.groupby(columns)[value].min()).reset_index()
add.columns=columns+[cname]
df=df.merge(add,on=columns,how="left")
return df
def merge_std(df,columns,value,cname):
add = pd.DataFrame(df.groupby(columns)[value].std()).reset_index()
add.columns=columns+[cname]
df=df.merge(add,on=columns,how="left")
return df
def merge_var(df, columns,value,cname):
add = pd.DataFrame(df.groupby(columns)[value].var()).reset_index()
add.columns=columns+[cname]
df=df.merge(add,on=columns,how="left")
return df
def feat_count(df, df_feature, fe,value,name=""):
df_count = pd.DataFrame(df_feature.groupby(fe)[value].count()).reset_index()
if not name:
df_count.columns = fe + [value+"_%s_count" % ("_".join(fe))]
else:
df_count.columns = fe + [name]
df = df.merge(df_count, on=fe, how="left").fillna(0)
return df
def feat_nunique(df, df_feature, fe,value,name=""):
df_count = pd.DataFrame(df_feature.groupby(fe)[value].nunique()).reset_index()
if not name:
df_count.columns = fe + [value+"_%s_nunique" % ("_".join(fe))]
else:
df_count.columns = fe + [name]
df = df.merge(df_count, on=fe, how="left").fillna(0)
return df
def feat_mean(df, df_feature, fe,value,name=""):
df_count = pd.DataFrame(df_feature.groupby(fe)[value].mean()).reset_index()
if not name:
df_count.columns = fe + [value+"_%s_mean" % ("_".join(fe))]
else:
df_count.columns = fe + [name]
df = df.merge(df_count, on=fe, how="left").fillna(0)
return df
def feat_kernelMedian(df, df_feature, fe, value, pr, name=""):
def get_median(a, pr=pr):
a = np.array(a)
x = a[~np.isnan(a)]
n = len(x)
weight = np.repeat(1.0, n)
idx = np.argsort(x)
x = x[idx]
if n<pr.shape[0]:
pr = pr[n,:n]
else:
scale = (n-1)/2.
xxx = np.arange(-(n+1)/2.+1, (n+1)/2., step=1)/scale
yyy = 3./4.*(1-xxx**2)
yyy = yyy/np.sum(yyy)
pr = (yyy*n+1)/(n+1)
ans = np.sum(pr*x*weight) / float(np.sum(pr * weight))
return ans
df_count = pd.DataFrame(df_feature.groupby(fe)[value].apply(get_median)).reset_index()
if not name:
df_count.columns = fe + [value+"_%s_mean" % ("_".join(fe))]
else:
df_count.columns = fe + [name]
df = df.merge(df_count, on=fe, how="left").fillna(0)
return df
def feat_std(df, df_feature, fe,value,name=""):
df_count = pd.DataFrame(df_feature.groupby(fe)[value].std()).reset_index()
if not name:
df_count.columns = fe + [value+"_%s_std" % ("_".join(fe))]
else:
df_count.columns = fe + [name]
df = df.merge(df_count, on=fe, how="left").fillna(0)
return df
def feat_median(df, df_feature, fe,value,name=""):
df_count = pd.DataFrame(df_feature.groupby(fe)[value].median()).reset_index()
if not name:
df_count.columns = fe + [value+"_%s_median" % ("_".join(fe))]
else:
df_count.columns = fe + [name]
df = df.merge(df_count, on=fe, how="left").fillna(0)
return df
def feat_max(df, df_feature, fe,value,name=""):
df_count = pd.DataFrame(df_feature.groupby(fe)[value].max()).reset_index()
if not name:
df_count.columns = fe + [value+"_%s_max" % ("_".join(fe))]
else:
df_count.columns = fe + [name]
df = df.merge(df_count, on=fe, how="left").fillna(0)
return df
def feat_min(df, df_feature, fe,value,name=""):
df_count = pd.DataFrame(df_feature.groupby(fe)[value].min()).reset_index()
if not name:
df_count.columns = fe + [value+"_%s_min" % ("_".join(fe))]
else:
df_count.columns = fe + [name]
df = df.merge(df_count, on=fe, how="left").fillna(0)
return df
def feat_sum(df, df_feature, fe,value,name=""):
df_count = pd.DataFrame(df_feature.groupby(fe)[value].sum()).reset_index()
if not name:
df_count.columns = fe + [value+"_%s_sum" % ("_".join(fe))]
else:
df_count.columns = fe + [name]
df = df.merge(df_count, on=fe, how="left").fillna(0)
return df
def feat_var(df, df_feature, fe,value,name=""):
df_count = pd.DataFrame(df_feature.groupby(fe)[value].var()).reset_index()
if not name:
df_count.columns = fe + [value+"_%s_var" % ("_".join(fe))]
else:
df_count.columns = fe + [name]
df = df.merge(df_count, on=fe, how="left").fillna(0)
return df
def feat_quantile(df, df_feature, fe,value,n,name=""):
df_count = pd.DataFrame(df_feature.groupby(fe)[value].quantile(n)).reset_index()
if not name:
df_count.columns = fe + [value+"_%s_quantile" % ("_".join(fe))]
else:
df_count.columns = fe + [name]
df = df.merge(df_count, on=fe, how="left").fillna(0)
return df
###Output
_____no_output_____
###Markdown
Time-series plotting helper function
###Code
# Define a function to plot different types of time-series
def plot_series(df=None, column=None, series=pd.Series([]),
label=None, ylabel=None, title=None, start=0, end=None):
"""
Plots a certain time-series which has either been loaded in a dataframe
and which constitutes one of its columns or it a custom pandas series
created by the user. The user can define either the 'df' and the 'column'
or the 'series' and additionally, can also define the 'label', the
'ylabel', the 'title', the 'start' and the 'end' of the plot.
"""
sns.set()
fig, ax = plt.subplots(figsize=(30, 12))
ax.set_xlabel('Time', fontsize=16)
if column:
ax.plot(df[column][start:end], label=label)
ax.set_ylabel(ylabel, fontsize=16)
if series.any():
ax.plot(series, label=label)
ax.set_ylabel(ylabel, fontsize=16)
if label:
ax.legend(fontsize=16)
if title:
ax.set_title(title, fontsize=24)
ax.grid(True)
return ax
###Output
D:\tensorflow1\lib\site-packages\ipykernel_launcher.py:3: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
This is separate from the ipykernel package so we can avoid doing imports until
###Markdown
1. Data loading
df_origin holds the 18 months of raw data. According to the description provided by the organizers, the columns are:
* Timestamp: observation timestamp (yyyy-mm-dd hh:mm:ss)
* Average_OAT: average outdoor air temperature (°C)
* Humidity: relative humidity of the outdoor air (%)
* UV_Index: UV index
* Average_Rainfall: average rainfall (mm)
* ST_CoolingLoad: cooling load of the South Tower
* NT_CoolingLoad: cooling load of the North Tower
* CoolingLoad: sum of the South and North Tower cooling loads, NT_CoolingLoad + ST_CoolingLoad
$Cooling Load = mc ΔT$
* m is the flow rate (L/s) x water density (1 kg/L)
* c is the heat capacity of water (4.19 kJ/kg°C)
* ΔT is the temperature difference (°C) between the inlet (return) and outlet (supply) chilled water
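As a quick illustration of this formula, here is a hedged sketch with made-up flow and temperature values (not part of the original data pipeline):
###Code
# Sketch: worked example of CoolingLoad = m * c * dT (all values below are assumed, for illustration only)
flow_rate_kg_s = 20.0   # m: chilled-water flow, 20 L/s * 1 kg/L water density (assumed value)
c_water = 4.19          # c: heat capacity of water in kJ/(kg*°C)
delta_t = 5.0           # dT: return-minus-supply water temperature difference in °C (assumed value)
cooling_load_kw = flow_rate_kg_s * c_water * delta_t  # kJ/s, i.e. kW
print(f"Cooling load is roughly {cooling_load_kw:.1f} kW")
###Output
_____no_output_____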
###Code
df_origin = pd.read_csv('./dataset/CoolingLoad15months.csv') # 前面15个月
df_three_months = pd.read_csv('./dataset/last3months.csv') # 最后三个月
time_format = '%Y/%m/%d %H:%M'
df_origin["Timestamp"] = pd.to_datetime(df_origin["Timestamp"], format=time_format)
df_three_months["Timestamp"] = pd.to_datetime(df_three_months["Timestamp"], format=time_format)
df_origin = df_origin.append(df_three_months, ignore_index=True)
df_origin.head() # 训练集所有原始数据
###Output
_____no_output_____
###Markdown
Data description
###Code
df_origin.describe()
df_origin.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 52608 entries, 0 to 52607
Data columns (total 8 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Timestamp 52608 non-null datetime64[ns]
1 Average_OAT 48212 non-null float64
2 Humidity 44134 non-null float64
3 UV_Index 52608 non-null float64
4 Average_Rainfall 52608 non-null float64
5 NT_CoolingLoad 47507 non-null float64
6 ST_CoolingLoad 47226 non-null float64
7 CoolingLoad 46341 non-null float64
dtypes: datetime64[ns](1), float64(7)
memory usage: 3.2 MB
###Markdown
Convert directly to df_clean
###Code
df_clean = df_origin
df_clean.describe()
df_clean.head()
df_clean.info()
plt.figure(figsize=(22,8))
sns.lineplot(data=df_clean)
###Output
_____no_output_____
###Markdown
Check the number of null values
###Code
df_clean.isnull().sum(axis=0)
###Output
_____no_output_____
###Markdown
2. Exploratory data analysis
Examine the distribution of the cooling loads
###Code
ax = sns.histplot(np.log10(df_clean['NT_CoolingLoad']))
bx = sns.histplot(np.log10(df_clean['ST_CoolingLoad']))
cx = sns.histplot(np.log10(df_clean['CoolingLoad']))
###Output
D:\tensorflow1\lib\site-packages\pandas\core\series.py:726: RuntimeWarning: invalid value encountered in log10
result = getattr(ufunc, method)(*inputs, **kwargs)
###Markdown
From the plots above we can see that the South Tower's cooling load is concentrated at higher values. Let's use time-series plots to look at the South and North Towers:
###Code
# 时间序列
df_clean.pivot_table(index="Timestamp", values="NT_CoolingLoad").plot(figsize=(20, 4))
plt.show()
df_clean.pivot_table(index="Timestamp", values="ST_CoolingLoad").plot(figsize=(20, 4))
plt.show()
df_clean.pivot_table(index="Timestamp", values="CoolingLoad").plot(figsize=(20, 4))
plt.show()
###Output
_____no_output_____
###Markdown
Heatmap
###Code
# 协方差 热力图
_, dx = plt.subplots(figsize=(6,6))
columns = df_clean.columns[0: 4]
sns.heatmap(df_clean[columns].corr(), annot=True, cmap='RdYlGn', ax=dx)
###Output
_____no_output_____
###Markdown
3. Data cleaning
###Code
df_clean.describe()
###Output
_____no_output_____
###Markdown
Outlier inspection
> In practice the results are better without outlier handling, so we skip it for now
###Code
def plot_outlier():
plt.figure(figsize=(20,5))
plt.subplot(1, 2, 1)
sns.boxplot(y='NT_CoolingLoad',data=df_clean)
plt.subplot(1, 2, 2)
sns.boxplot(y='ST_CoolingLoad',data=df_clean)
plt.show()
plot_outlier()
# 异常值处理
def handle_outlier(df, name):
print(df[name])
q1 = df[name].quantile(0.25)
q3 = df[name].quantile(0.75)
iqr = q3-q1
Lower_tail = q1 - 1.5 * iqr
Upper_tail = q3 + 1.5 * iqr
med = np.median(df[name])
for i in df[name]:
if i > Upper_tail or i < Lower_tail:
df[name] = df[name].replace(i, med)
return df
# df_clean = handle_outlier(df_clean, 'NT_CoolingLoad')
# df_clean = handle_outlier(df_clean, 'ST_CoolingLoad')
# plot_outlier()
df_clean.describe()
len(df_clean), df_clean.isnull().any(axis=1).sum() # 一共有548天的数据
###Output
_____no_output_____
###Markdown
4. Time-series feature processing
Time features:
* year
* month
* day
* time
* weekday
* date
* hour
###Code
def date_handle(df):
"""拆分时间日期信息"""
df.sort_values("Timestamp") # 按照时间戳来排序
df.reset_index(drop=True)
time_format = '%Y/%m/%d %H:%M'
# 加入其他时间信息
df["Timestamp"] = pd.to_datetime(df["Timestamp"], format=time_format)
df["year"] = df["Timestamp"].dt.year
df["month"] = df["Timestamp"].dt.month
df["day"] = df["Timestamp"].dt.day
df["time"] = df["Timestamp"].dt.time
df["hour"] = df["Timestamp"].dt.hour
df["weekday"] = df["Timestamp"].dt.weekday
df["date"] = df["Timestamp"].dt.date
return df
df_clean = date_handle(df_clean)
df_clean.head()
###Output
_____no_output_____
###Markdown
5. Filling null feature values
###Code
def fillna_data(df):
"""特征空值的填充"""
# 1. 补全部分符合条件的缺失标签
index1 = ~df["CoolingLoad"].isnull() & ~df["NT_CoolingLoad"].isnull() & df["ST_CoolingLoad"].isnull()
index2 = ~df["CoolingLoad"].isnull() & ~df["ST_CoolingLoad"].isnull() & df["NT_CoolingLoad"].isnull()
df.loc[index1, ["ST_CoolingLoad"]] = df[index1]["CoolingLoad"] - df[index1]["NT_CoolingLoad"]
df.loc[index2, ["NT_CoolingLoad"]] = df[index2]["CoolingLoad"] - df[index2]["ST_CoolingLoad"]
# 2. 特征插值填充,使用前10天同一时间段的平均值
    df.iloc[:,1:5] = df.groupby("time")[["Average_OAT", "Humidity", "UV_Index", "Average_Rainfall"]].apply(lambda group: group.interpolate(limit_direction='both')) # interpolate the four basic features
# 变回一小时的数据
# df = df.resample('1H', on="Timestamp").mean()
df.reset_index(drop=True, inplace=True)
return df
df_clean = fillna_data(df_clean)
print(df_clean.isnull().sum()) # 观察空值数量
df_clean.head()
###Output
_____no_output_____
###Markdown
Data transformation: a function to standardize features
###Code
def scale_feature(X, column_names):
"""标准化特征"""
    print(column_names)
pre = preprocessing.scale(X[column_names])
X[column_names] = pre
return X
###Output
_____no_output_____
###Markdown
6. Feature engineering
6.1 Time features
* whether the day is a holiday
###Code
import holidays
def is_holiday(df):
"""
判断是不是工作日,0为不是,1为是
:param df:
:return:
"""
# 先通过简单的规则
df['IsHoliday'] = 0
# 根据日历制定规则
holiday = ["2020/4/30", "2020/5/1", "2020/6/25", "2020/7/1", "2020/10/1", "2020/10/2", "2020/10/26", "2020/12/25", "2020/12/26", "2021/1/1", "2021/2/12", "2021/2/13", "2021/2/15", "2021/4/4", "2021/5/1", "2021/6/14", "2021/7/1", "2021/9/22", "2021/10/1", "2021/10/14", "2021/12/21", "2020/12/25"]
df["IsHoliday"] = df["Timestamp"].apply(lambda x: holidays.HongKong().get(x, default=0))
holiday_idx = df['IsHoliday'] != 0
df.loc[holiday_idx, 'IsHoliday'] = 1
df['IsHoliday'] = df['IsHoliday'].astype(np.uint8)
return df
###Output
_____no_output_____
###Markdown
6.2 Weather features
* differences over a specific number of minutes (delta)
* lag information (lag)
  * max, mean, min, std of the specific building historically
* Savitzky-Golay (SG) filtering of the average temperature
* weather clustering
Delta computation
###Code
def delta_feature(df, minute=30):
"""加入温度差异,在规定的minute中的差异"""
time_format = "%Y-%m-%d %H:%M:%S"
delta_time = df["Timestamp"].diff().astype('timedelta64[m]')
for name in df.iloc[:,1:5]:
delta = df[name].diff()
fe_name = "delta_%s_%s_minute" %(name,(str(minute)))
df[fe_name] = (delta / delta_time) * minute
return df
###Output
_____no_output_____
###Markdown
Add lag features
> To improve the accuracy of the data, a rolling window is used
###Code
def add_lag_feature(weather_df, window=3):
group_df = weather_df
cols = ['Average_OAT', 'Humidity', 'UV_Index', 'Average_Rainfall']
rolled = group_df[cols].rolling(window=window, min_periods=0)
lag_mean = rolled.mean().reset_index().astype(np.float16)
lag_max = rolled.max().reset_index().astype(np.float16)
lag_min = rolled.min().reset_index().astype(np.float16)
lag_std = rolled.std().reset_index().astype(np.float16)
for col in cols:
weather_df[f'{col}_mean_lag{window}'] = lag_mean[col]
weather_df[f'{col}_max_lag{window}'] = lag_max[col]
weather_df[f'{col}_min_lag{window}'] = lag_min[col]
weather_df[f'{col}_std_lag{window}'] = lag_std[col]
return weather_df
###Output
_____no_output_____
###Markdown
Add statistical features over the same time of day
###Code
def add_same_period_feature(weather_df, window=3):
group_df = weather_df.groupby("time") # 同一时间段
cols = ['Average_OAT', 'Humidity', 'UV_Index', 'Average_Rainfall']
rolled = group_df[cols].rolling(window=window, min_periods=0)
lag_mean = rolled.mean().reset_index()
lag_max = rolled.max().reset_index()
lag_min = rolled.min().reset_index()
lag_std = rolled.std().reset_index()
for col in cols:
weather_df[f'sametime_{col}_mean_lag{window}'] = lag_mean[col]
weather_df[f'sametime_{col}_max_lag{window}'] = lag_max[col]
weather_df[f'sametime_{col}_min_lag{window}'] = lag_min[col]
weather_df[f'sametime_{col}_std_lag{window}'] = lag_std[col]
return weather_df
###Output
_____no_output_____
###Markdown
Lag over the same day of each month
###Code
def add_same_day_feature(weather_df, window=3):
group_df = weather_df.groupby(["day", "time"]) # 同一时间段
cols = ['Average_OAT', 'Humidity', 'UV_Index', 'Average_Rainfall']
rolled = group_df[cols].rolling(window=window, min_periods=0)
lag_mean = rolled.mean().reset_index()
lag_max = rolled.max().reset_index()
lag_min = rolled.min().reset_index()
lag_std = rolled.std().reset_index()
for col in cols:
weather_df[f'sameday_{col}_mean_lag{window}'] = lag_mean[col]
weather_df[f'sameday_{col}_max_lag{window}'] = lag_max[col]
weather_df[f'sameday_{col}_min_lag{window}'] = lag_min[col]
weather_df[f'sameday_{col}_std_lag{window}'] = lag_std[col]
return weather_df
###Output
_____no_output_____
###Markdown
Filtering: smoothing the time series
###Code
from scipy.signal import savgol_filter as sg
def add_sg(df):
w = 11
p = 2
for name in df.loc[:,["Average_OAT"]]:
df.loc[:, f'{name}_smooth'] = sg(df[name], w, p)
df.loc[:, f'{name}_diff'] = sg(df[name], w, p, 1)
df.loc[:, f'{name}_diff2'] = sg(df[name], w, p, 2)
return df
def avg_rainfall(df):
"""同一日期下(忽略年份)的降水量"""
df = feat_mean(df, df, ["day", "month"], "Average_Rainfall", name="avg_rainfall")
return df
def avg_temperature(df):
"""同一日期下(忽略年份)的平均温度"""
df = feat_mean(df, df, ["day", "month"], "Average_OAT", "avg_temperature")
return df
def avg_uv(df):
"""同一日期下(忽略年份)的紫外线强度"""
df = feat_mean(df, df, ["day", "month"], "UV_Index", "avg_UV")
return df
def avg_humidity(df):
"""同一日期下(忽略年份)的湿度"""
df = feat_mean(df, df, ["day", "month"], "Humidity", "avg_humidity")
return df
###Output
_____no_output_____
###Markdown
Weather clustering
###Code
scaler = MinMaxScaler()
weather_scaled = scaler.fit_transform(df_clean[['Average_OAT','Humidity','Average_Rainfall']])
###Output
_____no_output_____
###Markdown
Find the optimal $K$ value
###Code
# optimum K
Nc = range(1, 20)
kmeans = [KMeans(n_clusters=i) for i in Nc]
kmeans
score = [kmeans[i].fit(weather_scaled).score(weather_scaled) for i in range(len(kmeans))]
score
plt.plot(Nc,score)
plt.xlabel('Number of Clusters')
plt.ylabel('Score')
plt.title('Elbow Curve')
plt.show()
kmeans = KMeans(n_clusters=5, max_iter=600, algorithm = 'auto')
kmeans.fit(weather_scaled)
df_clean['weather_cluster'] = kmeans.labels_
# Cluster Relationships with weather variables
plt.figure(figsize=(20,5))
plt.subplot(1, 3, 1)
plt.scatter(df_clean.weather_cluster,df_clean.Average_OAT)
plt.title('Weather Cluster vs. Temperature')
plt.subplot(1, 3, 2)
plt.scatter(df_clean.weather_cluster,df_clean.Humidity)
plt.title('Weather Cluster vs. Humidity')
plt.subplot(1, 3, 3)
plt.scatter(df_clean.weather_cluster,df_clean.Average_Rainfall)
plt.title('Weather Cluster vs. Rainfall')
plt.show()
fig, ax1 = plt.subplots(figsize = (10,7))
ax1.scatter(df_clean.Average_OAT,
df_clean.Humidity,
s = df_clean.Average_Rainfall*10,
c = df_clean.weather_cluster)
ax1.set_xlabel('Temperature')
ax1.set_ylabel('Humidity')
plt.show()
###Output
_____no_output_____
###Markdown
Feature engineering function
###Code
def features_engineering(df):
"""特征工程"""
period_hour = 4
# 差值计算
df = delta_feature(df, 15)
df = delta_feature(df, 30)
# 同一时间的滞后
df = add_same_period_feature(df, window=1)
df = add_same_period_feature(df, window=2)
df = add_same_period_feature(df, window=3)
df = add_same_period_feature(df, window=4)
df = add_same_period_feature(df, window=5)
# 窗口滞后
df = add_lag_feature(df, window=3 * period_hour)
df = add_lag_feature(df, window=5 * period_hour)
df = add_lag_feature(df, window=12 * period_hour)
df = add_lag_feature(df, window=24 * period_hour)
df = add_lag_feature(df, window=48 * period_hour)
# 加入节假日信息
df = is_holiday(df)
# 序列平滑化处理
df = add_sg(df)
return df
df_train = features_engineering(df_clean)
df_train = reduce_mem_usage(df_train, use_float16=True)
df_train.head()
###Output
D:\tensorflow1\lib\site-packages\scipy\signal\_arraytools.py:45: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
b = a[a_slice]
###Markdown
Inspect the effect of the filtering
###Code
df_train.Average_OAT[:500].plot()
df_train.Average_OAT_smooth[:500].plot()
df_train.Average_OAT_diff[:100].plot()
df_train.Average_OAT_diff2[:100].plot()
###Output
_____no_output_____
###Markdown
Drop the rows without labels
###Code
index3 = df_train["CoolingLoad"].isnull()
df_train = df_train[~index3]
print(df_train.isnull().sum())
###Output
Timestamp 0
Average_OAT 0
Humidity 0
UV_Index 0
Average_Rainfall 0
..
Average_Rainfall_std_lag192 0
IsHoliday 0
Average_OAT_smooth 0
Average_OAT_diff 0
Average_OAT_diff2 0
Length: 188, dtype: int64
###Markdown
6.3 Split into training and test sets
###Code
split_valid = df_train["Timestamp"] > '2021-09-24'
df_valid = df_train[split_valid]
df_train = df_train[~split_valid]
drops = ["Timestamp", "year", "day", "time", "date", "month"]
df_train = df_train.drop(drops, axis=1)
df_valid = df_valid.drop(drops, axis=1)
###Output
_____no_output_____
###Markdown
7. Model training
* Evaluation metric: $rmse = \sqrt{\frac{1}{n}\sum(truth - predict)^2}$ (a minimal sketch of this metric is shown a couple of cells below)
* LightGBM model parameters:
  * learning_rate: step size of each boosting iteration, i.e. the learning rate;
  * num_leaves: LightGBM grows trees leaf-wise, so tree complexity is tuned through num_leaves; values that are too small underfit, values that are too large overfit;
  * subsample: between 0 and 1, the fraction of rows randomly sampled for each tree; lowering it makes the algorithm more conservative and helps avoid overfitting, but a value that is too small may underfit;
  * lambda_l2: L2 regularization coefficient, used to control overfitting;
  * num_trees: number of boosting iterations.
  * ......
Split the features and the corresponding labels
###Code
y_train_ncl = df_train["NT_CoolingLoad"].reset_index(drop=True)
y_train_scl = df_train["ST_CoolingLoad"].reset_index(drop=True)
y_train_total = df_train["CoolingLoad"].reset_index(drop=True)
X_train = df_train.drop(["NT_CoolingLoad", "ST_CoolingLoad", "CoolingLoad"], axis=1)
y_valid_ncl = df_valid["NT_CoolingLoad"].reset_index(drop=True)
y_valid_scl = df_valid["ST_CoolingLoad"].reset_index(drop=True)
y_valid_total = df_valid["CoolingLoad"].reset_index(drop=True)
X_valid = df_valid.drop(["NT_CoolingLoad", "ST_CoolingLoad", "CoolingLoad"], axis=1)
gc.collect()
X_train = X_train.reset_index(drop=True)
print(X_train.isnull().sum())
###Output
Average_OAT 0
Humidity 0
UV_Index 0
Average_Rainfall 0
hour 0
..
Average_Rainfall_std_lag192 0
IsHoliday 0
Average_OAT_smooth 0
Average_OAT_diff 0
Average_OAT_diff2 0
Length: 179, dtype: int64
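###Markdown
For reference, a minimal NumPy sketch of the RMSE metric defined above (illustrative only; later cells compute the same quantity with sklearn's mean_squared_error):
###Code
# Minimal RMSE sketch, equivalent to np.sqrt(mean_squared_error(truth, pred))
import numpy as np
def rmse(truth, pred):
    truth = np.asarray(truth, dtype=float)
    pred = np.asarray(pred, dtype=float)
    return np.sqrt(np.mean((truth - pred) ** 2))
print(rmse([1.0, 2.0, 3.0], [1.5, 2.0, 2.0]))  # example values only
###Output
_____no_output_____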
###Markdown
Training function
###Code
def fit_lgbm(train, val, devices=(-1,), seed=None, cat_features=None, num_rounds=1500, lr=0.1, bf=0.7):
"""
lgbm模型训练
:param train:
:param val:
:param devices:
:param seed:
:param cat_features:
:param num_rounds:
:param lr:
:param bf:
:return:
"""
X_train, y_train = train
X_valid, y_valid = val
metric = 'rmse'
params = {'num_leaves': 31,
'objective': 'regression',
# 'max_depth': -1,
'learning_rate': lr,
"boosting": "gbdt",
"bagging_freq": 5,
"bagging_fraction": bf,
"feature_fraction": 0.9,
"metric": metric,
}
# 配置gpu
device = devices[0]
if device == -1:
# use cpu
pass
else:
# use gpu
print(f'using gpu device_id {device}...')
params.update({'device': 'gpu',
'gpu_device_id': device,
'gpu_platform_id': device})
params['seed'] = seed
early_stop = 100
verbose_eval = 25
    # start training
d_train = lgb.Dataset(X_train, label=y_train, categorical_feature=cat_features)
d_valid = lgb.Dataset(X_valid, label=y_valid, categorical_feature=cat_features)
watchlist = [d_train, d_valid]
print('training LGB:')
model = lgb.train(params,
train_set=d_train,
num_boost_round=num_rounds,
valid_sets=watchlist,
verbose_eval=verbose_eval,
early_stopping_rounds=early_stop)
# predictions
y_pred_valid = model.predict(X_valid, num_iteration=model.best_iteration)
print('best_score', model.best_score)
log = {'train/rmse': model.best_score['training']['rmse'],
'valid/rmse': model.best_score['valid_1']['rmse']}
return model, y_pred_valid, log
###Output
_____no_output_____
###Markdown
Start training
###Code
def plot_feature_important(model):
"""绘制特征重要性图"""
importance_df = pd.DataFrame(model[1].feature_importance(),
index=X_train.columns,
columns=['importance']).sort_values('importance')
fig, ax = plt.subplots(figsize=(8, 8))
    importance_df.plot.barh(ax=ax)
fig.show()
###Output
_____no_output_____
###Markdown
The KFold validation approach below is something I have seen used a lot on Kaggle, but I do not think plain KFold is the right way to validate a model on a time series. It ends up using a lot of future data to predict past data, which effectively gives the model the ability to "see the future", and it also easily leads to overfitting in competitions. If you want to learn more about cross-validation for time series, see [https://zhuanlan.zhihu.com/p/99674163](https://zhuanlan.zhihu.com/p/99674163)
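One possible alternative is an expanding-window scheme such as scikit-learn's TimeSeriesSplit, sketched below (a sketch only; it is not used by the rest of this notebook):
###Code
# Sketch: expanding-window time-series cross-validation (not used elsewhere in this notebook)
from sklearn.model_selection import TimeSeriesSplit
import numpy as np
X_demo = np.arange(20).reshape(-1, 1)  # stand-in for time-ordered features (assumed demo data)
tscv = TimeSeriesSplit(n_splits=4)
for fold, (train_idx, valid_idx) in enumerate(tscv.split(X_demo)):
    # every validation index comes strictly after the training indices, so there is no future leakage
    print(fold, "train up to", train_idx.max(), "-> validate", valid_idx.min(), "to", valid_idx.max())
###Output
_____no_output_____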
###Code
# def train_epochs(X_train, y_train, category_cols, num_rounds=1000, lr=0.1, bf=0.7):
# """
# 开始训练
# :param X_train:
# :param y_train:
# :param category_cols:
# :param num_rounds:
# :param lr:
# :param bf:
# :return:
# """
# models = [] # 保存每次的模型
# y_valid_pred_total = np.zeros(X_train.shape[0])
# for train_idx, valid_idx in kf.split(X_train, y_train):
# train_data = X_train.iloc[train_idx,:], y_train[train_idx]
# valid_data = X_train.iloc[valid_idx,:], y_train[valid_idx]
#
# print('train', len(train_idx), 'valid', len(valid_idx))
#
# model, y_pred_valid, log = fit_lgbm(train_data,valid_data, cat_features=category_cols,
# num_rounds=num_rounds, lr=lr, bf=bf)
# y_valid_pred_total[valid_idx] = y_pred_valid
# models.append(model)
# gc.collect()
# if debug:
# break
#
# try:
# sns.distplot(y_train)
# sns.distplot(y_pred_valid)
# plt.show()
#
# except:
# pass
#
# del X_train, y_train
# gc.collect()
#
# print('-------------------------------------------------------------')
#
# return models
# seed = 666 # 随机种子
# shuffle = True # 是否打乱
# folds = 78
# kf = KFold(n_splits=folds, shuffle=shuffle, random_state=seed)
categorical_features = ["IsHoliday", "weekday", "weather_cluster"]
def train_model(y_train, y_valid, X_train=X_train, X_valid=X_valid):
"""训练函数"""
train_data = X_train, y_train
valid_data = X_valid, y_valid
print('train', len(X_train), 'valid', len(X_valid))
model, y_pred_valid, log = fit_lgbm(train_data, valid_data, cat_features=categorical_features)
print(log)
return model, y_pred_valid, log
###Output
_____no_output_____
###Markdown
North Tower
###Code
north_model, north_pred, _ = train_model(y_train=y_train_ncl, y_valid=y_valid_ncl)
###Output
train 45670 valid 671
training LGB:
###Markdown
South Tower
###Code
south_model, south_pred, _ = train_model(y_train=y_train_scl, y_valid=y_valid_scl)
###Output
train 45670 valid 671
training LGB:
###Markdown
8. Result prediction
###Code
total_pred = south_pred + north_pred
np.sqrt(mean_squared_error(total_pred, y_valid_total)) # 总的rmse
###Output
D:\tensorflow1\lib\site-packages\numpy\core\_methods.py:47: RuntimeWarning: overflow encountered in reduce
return umr_sum(a, axis, dtype, out, keepdims, initial, where)
###Markdown
Plot the predicted values against the ground truth
###Code
plt.figure(figsize=(20,5))
plt.plot(total_pred, label="predict")
plt.plot(y_valid_total, label="truth")
plt.legend()
plt.show()
###Output
_____no_output_____ |
DataSetReader.ipynb | ###Markdown
Data Management
Data files have to be structured as follows:
    .
    ├── ...
    ├── root
    │   ├── class1
    │   │   └── Data to be read
    │   ├── class2
    │   │   └── Data to be read
    │   ├── ...
    │   └── class3
    │       └── Data to be read
    └── ...
(An optional sanity-check sketch of this layout follows right after the imports.)
Imports
###Code
# Access files and directories
import os
from glob import glob
# To read and preprocess images and show
import cv2
import matplotlib.pyplot as plt
# Progress bar `Not important`
from tqdm import tqdm
from IPython.display import clear_output
# To save data as pickle
import joblib
# You know why
import numpy as np
###Output
_____no_output_____
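###Markdown
Optional sanity check of the folder layout described above (a sketch; `root_path` is an assumed placeholder and must point at your dataset root):
###Code
# Sketch: list each class folder under the root and count the files it contains
import os
from glob import glob
root_path = '/path/to/root'  # hypothetical path, adjust before running
for class_dir in sorted(glob(os.path.join(root_path, '*'))):
    if os.path.isdir(class_dir):
        n_files = sum(len(files) for _, _, files in os.walk(class_dir))
        print(os.path.basename(class_dir), n_files)
###Output
_____no_output_____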
###Markdown
Read Data Variables
###Code
# Data Set Path which is the root path
data_path = '/Users/malikziq/DataSets/APIT_Arabic_Words_Images/'
# Data file extention to be read
extention = '.png'
# Dictionary where each class is the key and the values are the images
data_dic = {}
# Total amount of images in the dataset
total_count = 0
# Classes in the dataset
data_classes = []
# Amount of data from each class
data_amount = 5000
# Images values
images_values = []
# Images labels
images_labels = []
print(glob(data_path + '*'))
# Read data from each class under the dirs name
# ** Note that folders, or any other files that do not end with the specified extension,
# Will not be read
for each in glob(data_path + '*'):
# Take the last file name (Class) from the path
word = each.split("/")[-1]
print('Reading data of class: ',word)
# Set new Class in the data_dic
data_dic[word] = []
data_classes.append(word)
# Get data for each class
for root, dirs, files in os.walk(each):
for file in files:
if file.lower().endswith(extention):
# join root path with image name
image_path = os.path.join(root, file)
# Read image in gray level
image_value = cv2.imread(image_path, 0)
data_dic[word].append(image_value)
total_count += 1
if total_count % data_amount == 0:
break
print(total_count)
###Output
25000
###Markdown
Convert data into two lists:* Images list : that has the images values * Classes list : Images labels
###Code
class_id = 0
for each_class in data_classes:
for each_image in data_dic[each_class]:
images_values.append(each_image)
images_labels.append(class_id)
class_id += 1
###Output
_____no_output_____
###Markdown
Dump data
###Code
# dump data dic(class , [images]) and data classes
joblib.dump((data_dic, data_classes), "dataDic_dataClasses.pkl", compress=3)
# Dump data as lists -> (images list, images labels, classes vlaues)
joblib.dump((images_values, images_labels, data_classes), "imges_labels_classes.pkl", compress=3)
###Output
_____no_output_____ |
tests/data/misc/backfill_user_data.ipynb | ###Markdown
Backfill Lear User Data
This notebook updates entries in the `users` table in the lear db where first/last name are missing with data from the auth api.
Setup environment
Before running the following environment setup snippets, ensure the environment variables found in the `default-bcr-business-setup-TEST` notebook contain the correct values.
###Code
%run /workspaces/lear/tests/data/default-bcr-business-setup-TEST.ipynb
%run /workspaces/lear/tests/data/common/legal_api_utils.ipynb
%run /workspaces/lear/tests/data/common/auth_api_utils.ipynb
import json
from sqlalchemy import and_, or_
from legal_api.models import User
###Output
_____no_output_____
###Markdown
Input Values
Please update the following values to appropriate values before running the subsequent code snippets!!
###Code
verify_ssl=True # set to False if using proxy to debug requests
###Output
_____no_output_____
###Markdown
Tokens
###Code
auth_token = get_auth_token(verify_ssl)
assert auth_token
# auth_token
###Output
_____no_output_____
###Markdown
Update Lear Users
###Code
# get lear users with no first/last names and has a username
query = db.session.query(User) \
.filter(
or_(
User.firstname == None,
User.firstname == ''
),
or_(
User.lastname == None,
User.lastname == ''
),
User.username != None,
User.username != ''
)
lear_users = query.all()
assert(lear_users)
lear_users_count = len(lear_users)
print(f'lear users to process: {lear_users_count}')
# loop through lear users with missing info and make call to auth users endpoint. if valid response comes back, populate lear user
# with missing info(first name, last name, email) where possible
for lear_user in lear_users:
try:
user_name = lear_user.username
r = get_user_by_username(auth_token, user_name, verify_ssl)
if r.status_code != 200:
raise Exception(f'error {r.status_code}, {r.text}')
auth_user_dict = json.loads(r.text)
first_name = auth_user_dict.get('firstname')
last_name = auth_user_dict.get('lastname')
email = auth_user_dict.get('email')
if first_name or last_name or email:
lear_user.firstname = first_name
lear_user.lastname = last_name
lear_user.email = email
lear_user.save()
else:
raise Exception('no first name or last name provided by auth response')
print(f'lear user user_name: {user_name} updated succeeded')
except Exception as err:
print(f'lear user user_name: {user_name} updated failed, {err}')
###Output
_____no_output_____ |
Practicas-IA/practica-1b4.ipynb | ###Markdown
Explanation of the solution
The state is the id of the actor we are currently at, and the actions are pairs whose first element is the movie that connects the two actors and whose second element is the next actor. The provided "grados" code takes care of transforming everything so it can be printed on screen in the required format.
To solve the problem we run a breadth-first search, which will find a solution whenever one exists and, moreover, will be the optimal search.
The maximum branching factor is the maximum number of co-star pairings that any single actor has.
###Code
from search import *
from search import breadth_first_tree_search, depth_first_tree_search, depth_first_graph_search, breadth_first_graph_search
class grados (Problem):
def __init__(self,source, target):
self.initial = source
self.goal = target
def actions(self, state):
lista = neighbors_for_person(state)
return lista
def result(self, state, action):
return action[1]
## grados.py
import csv
import sys
# diccionario de nombres de personas con ids
names = {}
# diccionario: name, birth, movies (conjunto de movie_ids)
people = {}
# movie_ids to a dictionary of: title, year, stars (a set of person_ids)
movies = {}
def load_data(directory):
"""
Load data from CSV files into memory.
"""
# Cargamos el archivo people
with open(f"{directory}/people.csv", encoding="utf-8") as f:
reader = csv.DictReader(f)
for row in reader:
people[row["id"]] = {
"name": row["name"],
"birth": row["birth"],
"movies": set()
}
if row["name"].lower() not in names:
names[row["name"].lower()] = {row["id"]}
else:
names[row["name"].lower()].add(row["id"])
# cargamos el archivo movies
with open(f"{directory}/movies.csv", encoding="utf-8") as f:
reader = csv.DictReader(f)
for row in reader:
movies[row["id"]] = {
"title": row["title"],
"year": row["year"],
"stars": set()
}
# cargamos el archivo stars
with open(f"{directory}/stars.csv", encoding="utf-8") as f:
reader = csv.DictReader(f)
for row in reader:
try:
people[row["person_id"]]["movies"].add(row["movie_id"])
movies[row["movie_id"]]["stars"].add(row["person_id"])
except KeyError:
pass
def main():
if len(sys.argv) > 2:
sys.exit("Para ejecutarlo en línea de comandos: python grados.py [directory]")
directory = sys.argv[1] if len(sys.argv) == 2 else "large"
# Load data from files into memory
print("Cargando los datos...")
load_data(directory)
print("Datos cargados.")
source = person_id_for_name(input("Nombre: "))
if source is None:
sys.exit("Esa persona no se encuentra.")
target = person_id_for_name(input("Nombre: "))
if target is None:
sys.exit("Esa persona no se encuentra.")
path = shortest_path(source, target)
if path is None:
print("No están conectados.")
else:
degrees = len(path)
print(f"{degrees} grados de separacion.")
path = [(None, source)] + path
for i in range(degrees):
person1 = people[path[i][1]]["name"]
person2 = people[path[i + 1][1]]["name"]
movie = movies[path[i + 1][0]]["title"]
print(f"{i + 1}: {person1} y {person2} participaron en {movie}")
def shortest_path(source, target):
p= grados(source, target)
    return breadth_first_graph_search(p).solution()
def person_id_for_name(name):
"""
Returns the IMDB id for a person's name,
resolving ambiguities as needed.
"""
person_ids = list(names.get(name.lower(), set()))
if len(person_ids) == 0:
return None
elif len(person_ids) > 1:
print(f"Which '{name}'?")
for person_id in person_ids:
person = people[person_id]
name = person["name"]
birth = person["birth"]
print(f"ID: {person_id}, Name: {name}, Birth: {birth}")
try:
person_id = input("Intended Person ID: ")
if person_id in person_ids:
return person_id
except ValueError:
pass
return None
else:
return person_ids[0]
def neighbors_for_person(person_id):
"""
Returns (movie_id, person_id) pairs for people who starred with a given person.
"""
movie_ids = people[person_id]["movies"]
neighbors = set()
for movie_id in movie_ids:
for person_id in movies[movie_id]["stars"]:
neighbors.add((movie_id, person_id))
return neighbors
#if __name__ == "__main__":
# main()
print(person_id_for_name("Emma Watson"))
#cargamos los datos
load_data("small")
load_data("large")
name="Emma Watson"
person_id=person_id_for_name(name)
print(person_id_for_name("Emma Watson"))
neighbors_for_person(person_id)
source = person_id_for_name(input("Nombre: "))
if source is None:
sys.exit("Esa persona no se encuentra.")
target = person_id_for_name(input("Nombre: "))
if target is None:
sys.exit("Esa persona no se encuentra.")
path = shortest_path(source, target)
if path is None:
print("No están conectados.")
else:
degrees = len(path)
print(f"{degrees} grados de separacion.")
path = [(None, source)] + path
for i in range(degrees):
person1 = people[path[i][1]]["name"]
person2 = people[path[i + 1][1]]["name"]
movie = movies[path[i + 1][0]]["title"]
print(f"{i + 1}: {person1} y {person2} participaron en {movie}")
###Output
_____no_output_____ |
XML_SEARCH/XML_RESULTS_082819/XML_FINAL_ANALYSIS_082819.ipynb | ###Markdown
XML Final Analysis
Daina Bouquin, Daniel Chivvis
Scripts below were used to generate all .csv files in XML_RESULTS
Full repo of files: https://github.com/dbouquin/cite_astro_software_2019
###Code
import pandas as pd
import numpy as np
import sys
import csv
XML_results = pd.read_csv("XML_CLEAN_INPUT_082019.csv")
list(XML_results.columns.values)
XML_results.head(5)
# Create column for bibliography section
# If the tag lable or tag content contain any of the following reference elements it will be marked "yes":
# bib
# bibr
# citation-alternatives
# collab
# contrib-group
# element-citation
# mixed-citation
# nlm-citation
# person-group
# pub-id
# ref
# ref-list
# source
# xref
bibliography = ['Parent1_Tag','Parent2_Tag','Parent3_Tag','Parent4_Tag','Parent1_Content','Parent2_Content']
XML_results["bib"] = np.where((XML_results[bibliography] == "bib").any(axis=1) | (XML_results[bibliography]== "bibr").any(axis=1) | (XML_results[bibliography]== "citation-alternatives").any(axis=1) | (XML_results[bibliography]== "collab").any(axis=1) | (XML_results[bibliography]== "contrib-group").any(axis=1) | (XML_results[bibliography]== "element-citation").any(axis=1) | (XML_results[bibliography]== "mixed-citation").any(axis=1) | (XML_results[bibliography]== "nlm-citation").any(axis=1) | (XML_results[bibliography]== "person-group").any(axis=1) | (XML_results[bibliography]== "pub-id").any(axis=1) | (XML_results[bibliography]== "ref").any(axis=1) | (XML_results[bibliography]== "ref-list").any(axis=1) | (XML_results[bibliography]== "source").any(axis=1) | (XML_results[bibliography]== "xref").any(axis=1), "yes", "no")
# Create column for acknowledgements
# If the tag lable or tag content contains "ack" it will be marked "yes"
acknowledgements = ['Parent1_Tag','Parent2_Tag','Parent3_Tag','Parent4_Tag','Parent1_Content','Parent2_Content']
XML_results["ack"] = np.where((XML_results[acknowledgements] == "ack").any(axis=1), "yes", "no")
# Create column for footnotes
# If the tag lable or tag content contains "fn" or "fn-group" it will be marked "yes"
footnotes = ['Parent1_Tag','Parent2_Tag','Parent3_Tag','Parent4_Tag','Parent1_Content','Parent2_Content']
XML_results["fn"] = np.where((XML_results[footnotes] == "fn").any(axis=1) | (XML_results[footnotes]== "fn-group").any(axis=1), "yes", "no")
# Create column for attempt at recognizable credit (bib + ack + fn + "ext-link" + "back")
# ack
# back
# bib
# bibr
# citation-alternatives
# collab
# contrib-group
# element-citation
# ex-link
# fn
# fn-group
# mixed-citation
# nlm-citation
# person-group
# pub-id
# ref
# ref-list
# source
# xref
recognizable = ['Parent1_Tag','Parent2_Tag','Parent3_Tag','Parent4_Tag','Parent1_Content','Parent2_Content']
XML_results["rec_credit"] = np.where((XML_results[recognizable] == "bib").any(axis=1) | (XML_results[recognizable]== "bibr").any(axis=1) | (XML_results[recognizable]== "citation-alternatives").any(axis=1) | (XML_results[recognizable]== "collab").any(axis=1) | (XML_results[recognizable]== "contrib-group").any(axis=1) | (XML_results[recognizable]== "element-citation").any(axis=1) | (XML_results[recognizable]== "mixed-citation").any(axis=1) | (XML_results[recognizable]== "nlm-citation").any(axis=1) | (XML_results[recognizable]== "person-group").any(axis=1) | (XML_results[recognizable]== "pub-id").any(axis=1) | (XML_results[recognizable]== "ref").any(axis=1) | (XML_results[recognizable]== "ref-list").any(axis=1) | (XML_results[recognizable]== "source").any(axis=1) | (XML_results[recognizable]== "xref").any(axis=1) | (XML_results[recognizable]== "fn").any(axis=1) | (XML_results[recognizable]== "fn-group").any(axis=1) | (XML_results[recognizable]== "ack").any(axis=1) | (XML_results[recognizable]== "back").any(axis=1) | (XML_results[recognizable]== "ex-link").any(axis=1), "yes", "no")
# Check new cols
list(XML_results.columns.values)
XML_results.to_csv("XML_FINAL_ANALYSIS_082819.csv")
###Output
_____no_output_____
###Markdown
Summary of Results
###Code
# number of unique aliases found in each paper
XML_alias_per_paper = pd.DataFrame({'count' : XML_results.groupby(["Software_Package","File_Name"])['Alias'].nunique()})
XML_alias_per_paper.to_csv("XML_alias_per_paper_082819.csv")
# XML_alias_per_paper
# How many total papers did we find for each software package?
total_papers = XML_results.groupby('Software_Package')['File_Name'].nunique()
total_papers.to_csv("total_papers_082819.csv")
total_papers
# Total number of unique XML files containing aliases
XML_results.File_Name.nunique()
# Unique aliases per package
alias_per_package = XML_results.groupby('Software_Package')['Alias'].nunique()
alias_per_package.to_csv("alias_per_package_082819.csv")
alias_per_package
# All software mentions per journal
mentions_per_journal = XML_results.groupby('Journal_Title')['File_Name'].nunique()
mentions_per_journal.to_csv("mentions_per_journal_082819.csv")
mentions_per_journal
# software mentions per journal by package
mentions_per_package_by_journal = pd.DataFrame({'count' : XML_results.groupby(["Journal_Title", "Software_Package"])['File_Name'].nunique()})
mentions_per_package_by_journal.to_csv("mentions_per_package_by_journal_082819.csv")
mentions_per_package_by_journal
# For each package count number of articles that mentioned their identifiers
ID_only = XML_results.loc[XML_results['Identifier'] == 1]
ID_only = pd.DataFrame({'count' : ID_only.groupby(["Software_Package", "Alias"])['File_Name'].nunique()})
ID_only.to_csv("ID_only_082819.csv")
ID_only
# For each package count number of articles that mentioned their aliases that aren't identifiers
non_ID_only = XML_results.loc[XML_results['Identifier'] == 0]
non_ID_only = pd.DataFrame({'count' : non_ID_only.groupby(["Software_Package", "Alias"])['File_Name'].nunique()})
non_ID_only.to_csv("non_ID_only_082819.csv")
non_ID_only
# For each package count total number of articles that mentioned each alias
XML_alias_paper = pd.DataFrame({'count' : XML_results.groupby(["Software_Package", "Alias"])['File_Name'].nunique()})
XML_alias_paper.to_csv("XML_alias_paper_082819.csv")
XML_alias_paper
#Tags per package
XML_tags = pd.DataFrame({'count' : XML_results.groupby(["Software_Package", "Parent1_Tag", "Parent2_Tag", "Parent3_Tag", "Parent4_Tag"])['File_Name'].nunique()})
XML_tags.to_csv("XML_tags_082819.csv")
XML_tags
# Total number of unique papers with software aliases by year
XML_unique_paper_per_year = pd.DataFrame(XML_results.groupby(['Software_Package', 'Pub_Year'])['File_Name'].nunique())
XML_unique_paper_per_year.to_csv("XML_unique_paper_per_year_082819.csv")
XML_unique_paper_per_year
# Total number of unique papers with software aliases in the bibliography section by package
bib_only = XML_results.loc[XML_results['bib'] == "yes"]
bib_count = pd.DataFrame({'count' : bib_only.groupby(["Software_Package"])['File_Name'].nunique()})
bib_count.to_csv("bib_count_082819.csv")
bib_count
# Total number of unique papers with software aliases in acknowledgements section by package
ack_only = XML_results.loc[XML_results['ack'] == "yes"]
ack_count = pd.DataFrame({'count' : ack_only.groupby(["Software_Package"])['File_Name'].nunique()})
ack_count.to_csv("ack_count_082819.csv")
ack_count
# Total number of unique papers with software aliases in footnotes section by package
fn_only = XML_results.loc[XML_results['fn'] == "yes"]
fn_count = pd.DataFrame({'count' : fn_only.groupby(["Software_Package"])['File_Name'].nunique()})
fn_count.to_csv("fn_count_082819.csv")
fn_count
# Total number of unique papers with software aliases that gave a recognizable form of credit by package
rec_credit_only = XML_results.loc[XML_results['rec_credit'] == "yes"]
rec_credit_count = pd.DataFrame({'count' : rec_credit_only.groupby(["Software_Package"])['File_Name'].nunique()})
rec_credit_count.to_csv("rec_credit_count_082819.csv")
rec_credit_count
# What papers mentioned software and gave a recognizable form of credit?
rec_credit_only_files = pd.DataFrame(rec_credit_only['File_Name'])
rec_credit_only_files.reset_index(drop=True, inplace=True)
rec_credit_only_files.to_csv('rec_credit_only_files_082819.csv', index=False)
# Total number of unique papers with software aliases that gave no recognizable form of credit by package
no_rec_credit_only = XML_results.loc[XML_results['rec_credit'] == "no"]
no_rec_credit_count = pd.DataFrame({'count' : no_rec_credit_only.groupby(["Software_Package"])['File_Name'].nunique()})
# count of files with alias mentions that aren't clearly recognizable credit
no_rec_credit_count.to_csv("rec_credit_count_082819.csv")
# subset of files with alias mentions that aren't clearly recognizable
no_rec_credit_only_files = pd.DataFrame(no_rec_credit_only['File_Name'])
no_rec_credit_only_files.reset_index(drop=True, inplace=True)
# of the "no-recognizable credit" files, which ones have no aliases whatsoever that point to recogniable credit?
no_rec_credit_only_files = no_rec_credit_only_files[~no_rec_credit_only_files['File_Name'].isin(rec_credit_only_files['File_Name'])].dropna()
no_rec_credit_only_files.to_csv('rec_credit_only_files_082819.csv', index=False)
no_rec_credit_only_files
###Output
_____no_output_____
###Markdown
Example article that mentions "AstroPy" without any attribution: https://iopscience.iop.org/article/10.3847/0004-637X/826/2/191/pdf
###Code
# Trends over time for each AAS Journal
XML_journal_year = pd.DataFrame(XML_results.groupby(['Journal_Title', 'Software_Package', 'Pub_Year'])['File_Name'].nunique())
XML_journal_year.to_csv("XML_journal_year_082819.csv")
XML_journal_year
###Output
_____no_output_____ |
Questions/SampleQuestion4-samc1000_112Question5/CodingApproach/coding_solution.ipynb | ###Markdown
Question 5$\\newcommand{\\ket}[1]{\\left\\lvert{#1}\\right\\rangle}\\newcommand{\\bra}[1]{\\left\\langle {#1}\\right|}\\newcommand{\\innerbraket}[2]{\\left\\langle{#1}\\lvert{#2}\\right\\rangle}\\newcommand{\\braket}[2]{\\left\\langle{#1},{#2}\\right\\rangle}$
###Code
%matplotlib inline
# white background for dark mode displaying graphs
%config InlineBackend.print_figure_kwargs={'facecolor' : "w"}
# imports
from qiskit import *
from qiskit.providers.ibmq import least_busy
from qiskit.visualization import *
from qiskit.quantum_info import Statevector
from qiskit.visualization import plot_bloch_multivector
# hide deprecation warnings produced from the qiskit library
import warnings
warnings.simplefilter('ignore')
###Output
_____no_output_____
###Markdown
Load IBMQ Account
###Code
from qiskit import IBMQ
# following documentation from https://pypi.org/project/python-dotenv/
from dotenv import load_dotenv
load_dotenv() # take environment variables from .env.
import os
IBMQ.save_account(os.getenv('IBMQ_FREE_ACCOUNT'), overwrite=True)
provider = IBMQ.load_account()
from IPython import display
display.Image("../Sample Question 5.png")
###Output
_____no_output_____
###Markdown
A
###Code
bell = QuantumCircuit(2, name="A")
bell.h(0)
bell.x(1)
bell.cx(0, 1)
# make variables for the four options: qc1 - qc4
qc1 = bell.copy() # make a copy of the object
qc1.measure_all()
bell.draw('mpl')
state_vec_A = Statevector(bell)
print("A's final result:\n", state_vec_A)
plot_bloch_multivector(state_vec_A)
###Output
A's final result:
Statevector([0. +0.j, 0.70710678+0.j, 0.70710678+0.j,
0. +0.j],
dims=(2, 2))
###Markdown
B
###Code
bell = QuantumCircuit(2, name="B")
bell.cx(0, 1)
bell.h(0)
bell.x(1)
# make variables for the four options: qc1 - qc4
qc2 = bell.copy() # make a copy of the object
qc2.measure_all()
bell.draw('mpl')
state_vec_B = Statevector(bell)
print("B's final result:\n", state_vec_B)
plot_bloch_multivector(state_vec_B)
###Output
B's final result:
Statevector([0. +0.j, 0. +0.j, 0.70710678+0.j,
0.70710678+0.j],
dims=(2, 2))
###Markdown
C
###Code
bell = QuantumCircuit(2, name="C")
bell.h(0)
bell.x(1)
bell.cz(0, 1)
# make variables for the four options: qc1 - qc4
qc3 = bell.copy() # make a copy of the object
qc3.measure_all()
bell.draw('mpl')
state_vec_C = Statevector(bell)
print("C's final result:\n", state_vec_C)
plot_bloch_multivector(state_vec_C)
###Output
C's final result:
Statevector([ 0. +0.j, 0. +0.j, 0.70710678+0.j,
-0.70710678+0.j],
dims=(2, 2))
###Markdown
D
###Code
bell = QuantumCircuit(2, name="D")
bell.h(0)
bell.h(0)
# make variables for the four options: qc1 - qc4
qc4 = bell.copy() # make a copy of the object
qc4.measure_all()
bell.draw('mpl')
state_vec_D = Statevector(bell)
print("D's final result:\n", state_vec_D)
plot_bloch_multivector(state_vec_D)
###Output
D's final result:
Statevector([ 1.00000000e+00+0.j, -2.23711432e-17+0.j, 0.00000000e+00+0.j,
0.00000000e+00+0.j],
dims=(2, 2))
###Markdown
Get results from real hardware
###Code
# run it on real hardware to see if any physical effects are detected
real_hardware = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 2
and not x.configuration().simulator
and x.status().operational==True))
# execute all four circuits as one job
job = execute([qc1, qc2, qc3, qc4], backend=real_hardware, shots=1024)
result = job.result()
counts_qc1 = result.get_counts(qc1)
counts_qc2 = result.get_counts(qc2)
counts_qc3 = result.get_counts(qc3)
counts_qc4 = result.get_counts(qc4)
print("qc1 counts:",counts_qc1)
print("qc2 counts:",counts_qc2)
print("qc3 counts:",counts_qc3)
print("qc4 counts:",counts_qc4)
###Output
qc1 counts: {'00': 88, '01': 463, '10': 444, '11': 29}
qc2 counts: {'00': 45, '01': 45, '10': 519, '11': 415}
qc3 counts: {'00': 72, '01': 37, '10': 490, '11': 425}
qc4 counts: {'00': 999, '01': 8, '10': 16, '11': 1}
###Markdown
Compare measured results
###Code
plot_histogram([counts_qc1, counts_qc2, counts_qc3, counts_qc4],
legend=["A", "B", "C", "D"], figsize=(15,6))
###Output
_____no_output_____
###Markdown
Circuits for all four Bell states Bell 1$$|\Phi^+\rangle = \frac{1}{\sqrt{2}} (|00\rangle + |11\rangle) = \begin{pmatrix} \frac{1}{\sqrt{2}} \\ 0 \\ 0 \\ \frac{1}{\sqrt{2}} \\ \end{pmatrix}$$
###Code
bell1 = QuantumCircuit(2, name="Bell state 1")
bell1.h(0)
bell1.cx(0, 1)
bell1.draw('mpl')
###Output
_____no_output_____
###Markdown
Bell 2$$|\Phi^-\rangle = \frac{1}{\sqrt{2}} (|00\rangle - |11\rangle) = \begin{pmatrix} \frac{1}{\sqrt{2}} \\ 0 \\ 0 \\ -\frac{1}{\sqrt{2}} \\ \end{pmatrix}\\$$
###Code
bell2 = QuantumCircuit(2, name="Bell state 2")
bell2.x(0)
bell2.h(0)
bell2.cx(0, 1)
bell2.draw('mpl')
###Output
_____no_output_____
###Markdown
Bell 3$$|\Psi^+\rangle = \frac{1}{\sqrt{2}} (|01\rangle + |10\rangle) = \begin{pmatrix} 0 \\ \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \\ 0 \\ \end{pmatrix}\\$$
###Code
bell3 = QuantumCircuit(2, name="Bell state 3")
bell3.h(0)
bell3.x(1)
bell3.cx(0, 1)
bell3.draw('mpl')
###Output
_____no_output_____
###Markdown
Bell 4$$|\Psi^-\rangle = \frac{1}{\sqrt{2}} (|01\rangle - |10\rangle) = \begin{pmatrix} 0 \\ \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \\ 0 \\ \end{pmatrix}$$
###Code
bell4 = QuantumCircuit(2, name="Bell state 4")
bell4.x([0,1])
bell4.h(0)
bell4.cx(0, 1)
bell4.draw('mpl')
state_vec4 = Statevector(bell4)
print("Bell 4 final result:\n", state_vec4)
plot_bloch_multivector(state_vec4)
###Output
Bell 4 final result:
Statevector([ 0. +0.j, -0.70710678+0.j, 0.70710678+0.j,
0. +0.j],
dims=(2, 2))
|
22.SQL_imdb/.ipynb_checkpoints/22_sql_imdb-checkpoint.ipynb | ###Markdown
(Q1) List all the directors who directed a 'Comedy' movie in a leap year. (You need to check that the genre is 'Comedy’ and year is a leap year) Your query should return director name, the movie name, and the year.
###Code
query = """
select distinct p.name, m.title, m.year from person p, movie m join
(select md.pid as pid, mg.mid as mid from m_director md join m_genre mg on mg.mid=md.mid
where mg.gid IN
(select g.gid from genre g where g.name like "%Comedy%")) as temp on temp.pid=p.pid and temp.mid=m.mid group by m.title
having m.year%4==0 and m.year%100!=0 or m.year%400==0
"""
result = pd.read_sql_query(query, conn)
result
###Output
_____no_output_____
###Markdown
(Q2) List the names of all the actors who played in the movie 'Anand' (1971)
###Code
result = pd.read_sql_query("SELECT MID ,year FROM Movie WHERE year LIKE '%I%' LIMIT 0,10", conn)
result
sql = """ UPDATE Movie SET year=SUBSTR(year,-4) WHERE year LIKE '% %' """
cur = conn.cursor()
cur.execute(sql)
conn.commit()
result = pd.read_sql_query("SELECT MID ,year FROM Movie WHERE year LIKE '%I%' LIMIT 0,10", conn)
result
query = """
select trim(pid) from m_cast where pid like ' %'
"""
result = pd.read_sql_query(query, conn)
result
sql = """ UPDATE m_cast SET pid=trim(pid) """
cur = conn.cursor()
cur.execute(sql)
conn.commit()
query = """
select trim(pid) from m_cast where pid like ' %'
"""
result = pd.read_sql_query(query, conn)
result
################# Answer ############
query = """
SELECT * FROM person p where p.pid in
(SELECT pid FROM m_cast WHERE mid IN
(SELECT mid FROM movie where title="Anand" AND year like '1971'))
"""
result = pd.read_sql_query(query, conn)
result
query = """
select * from movie where title="Anand"
"""
result = pd.read_sql_query(query, conn)
result
query = """
SELECT name, pid from person where pid in ("nm0004435", "nm0764407")
"""
result = pd.read_sql_query(query, conn)
result
query = """
SELECT pid, mid FROM m_cast WHERE mid IN (
SELECT mid FROM movie where title="Anand" AND year=1971
)
"""
result = pd.read_sql_query(query, conn)
result
sql = '''UPDATE Movie SET year=SUBSTR(year,-4) WHERE year LIKE '% %' '''
cur = conn.cursor()
cur.execute(sql)
conn.commit()
###Output
_____no_output_____
###Markdown
(Q3) List all the actors who acted in a film before 1970 and in a film after 1990. (That is: acted in at least one film released before 1970 AND in at least one film released after 1990.)
###Code
query = '''
select name from person where pid in
(select pid from m_cast where mid in
(select mid from movie where year < 1970)
) and pid in
(select pid from m_cast where mid in
(select mid from movie where year > 1990)
)
'''
result = pd.read_sql_query(query, conn)
result
###Output
_____no_output_____
###Markdown
(Q4) List all directors who directed 10 movies or more, in descending order of the number of movies they directed. Return the directors' names and the number of movies each of them directed.
###Code
query = '''
select a.pid, a.name, count(b.mid) num_movies from person a left join m_director b
on a.pid=b.pid group by a.pid having num_movies>=10 order by num_movies desc
'''
result = pd.read_sql_query(query, conn)
result
###Output
_____no_output_____
###Markdown
(Q5-a) For each year, count the number of movies in that year that had only female actors.
###Code
query = '''
select mc.mid, count(mc.mid) mid_cnt from m_cast mc, (select *, count(*) from person where gender=='Female') a having
mid_cnt
'''
result = pd.read_sql_query(query, conn)
result
query = '''
select *, count(*) from person group by pid having gender=='Female'
'''
result = pd.read_sql_query(query, conn)
result
query = '''
select title, year, count(*) count_ from movie where mid in
(select z.mid from Person x, M_Cast xy, Movie z
where x.PID = xy.PID and xy.MID = z.MID and x.Gender=='Female') group by year order by year
'''
result = pd.read_sql_query(query, conn)
result
###Output
_____no_output_____
###Markdown
(Q6) Find the film(s) with the largest cast. Return the movie title and the size of the cast. By "cast size" we mean the number of distinct actors that played in that movie: if an actor played multiple roles, or if it simply occurs multiple times in casts, we still count her/him only once.
###Code
result = pd.read_sql_query("select mid, count(pid) cnt from m_cast group by mid order by cnt desc limit 1", conn)
result.head()
query = '''
select m.title, mc.mid, count(mc.pid) cnt from m_cast mc left join movie m on m.mid=mc.mid
group by mc.mid order by cnt desc limit 1
'''
result = pd.read_sql_query(query, conn)
result
###Output
_____no_output_____
###Markdown
(Q7) A decade is a sequence of 10 consecutive years. For example, say in your database you have movie information starting from 1965. Then the first decade is 1965, 1966, ..., 1974; the second one is 1967, 1968, ..., 1976 and so on. Find the decade D with the largest number of films and the total number of films in D.
###Code
query = '''
select y.year, count(*) total_films from (select distinct year from movie) y, movie z
where y.year <= z.year and z.year < y.year+10
group by y.year order by total_films desc limit 1
'''
result = pd.read_sql_query(query, conn)
result
###Output
_____no_output_____
###Markdown
(Q8) Find the actors that were never unemployed for more than 3 years at a stretch. (Assume that the actors remain unemployed between two consecutive movies).
###Code
query = '''
select name from person where pid not in
(select distinct pid from m_cast as mc natural join movie as m where exists
(select mid from m_cast as mc2 natural join movie as m2 where
mc.pid=mc2.pid and (m2.year-3)>m.year and not exists
(select mid from m_cast as mc3 natural join
movie as m3 where mc.pid=mc3.pid and m.year<m3.year and m3.year<m2.year
)
)
)
'''
result = pd.read_sql_query(query, conn)
result
###Output
_____no_output_____
###Markdown
(Q9) Find all the actors that made more movies with Yash Chopra than any other director.
###Code
query = '''
select p1.pid, p1.name, count(movie.mid) as movies_yc from person as p1
natural join m_cast natural join movie join m_director on (movie.mid = m_director.mid)
join person as p2 on (m_director.pid = p2.pid) where p2.name = 'Yash Chopra'
group by p1.pid having count(movie.mid)> (select count(movie.mid) from person as p3 natural
join m_cast natural join movie join m_director on (movie.mid = m_director.mid)
join person as p4 on (m_director.pid = p4.pid)
where p1.pid = p3.pid and p4.name != 'Yash Chopra' group by p4.pid) order by movies_yc desc
'''
result = pd.read_sql_query(query, conn)
result
###Output
_____no_output_____
###Markdown
(Q10) The Shahrukh number of an actor is the length of the shortest path between the actor and Shahrukh Khan in the "co-acting" graph. That is, Shahrukh Khan has Shahrukh number 0; all actors who acted in the same film as Shahrukh have Shahrukh number 1; all actors who acted in the same film as some actor with Shahrukh number 1 have Shahrukh number 2, etc. Return all actors whose Shahrukh number is 2.
###Code
sql = """ UPDATE Person SET Name=trim(Name) """
cur = conn.cursor()
cur.execute(sql)
conn.commit()
query = '''
select distinct PID, name from Person natural join M_Cast
where name <> 'Shahrukh Khan' and MID in
(select MID from M_Cast where PID in
(select PID from Person natural join M_Cast
where Name <> 'Shahrukh Khan' and MID in
(select MID from Person natural join M_Cast where Name = 'Shahrukh Khan')))
and PID not in (select PID from Person natural join M_Cast where Name <> 'Shahrukh Khan'
and MID in (select MID from Person natural join M_Cast where Name = 'Shahrukh Khan'))
'''
result = pd.read_sql_query(query, conn)
result
###Output
_____no_output_____ |
bioinformatics/complexity-measurement/sting/benchmark-sting.ipynb | ###Markdown
```bash
git clone https://github.com/jordanlab/STing
cd STing
./autogen.sh
./configure
make
sudo make install
cp bin/* scripts/
cd ..
python STing/scripts/db_util.py fetch --query "Campylobacter jejuni" --out_dir my_dbs --build_index
# Downgrade gcc to 7.5.0
sudo apt-get remove gcc g++
sudo apt-get install gcc-7 g++-7
ln -s /usr/bin/g++-7 /usr/bin/g++
ln -s /usr/bin/gcc-7 /usr/bin/gcc
```
###Code
%run ../../multibench.py
from inspect import isfunction
import os, sys
import matplotlib.pyplot as plt
import asciitable
import sys
import os
import shutil
import numpy as np
import glob
from shutil import copyfile
import pathlib
# Summarize numpy array if it has more than 10 elements
np.set_printoptions(threshold=10)
def clean_if_exists(path):
if os.path.exists(path):
if(os.path.isfile(path)):
os.remove(path)
else:
shutil.rmtree(path)
os.mkdir(path)
def get_last_n_lines(string, n):
return "\n".join(string.split("\n")[-n:])
def create_folder_if_doesnt_exist(path):
if not os.path.exists(path):
os.makedirs(path)
# Move two upper directories, import benchmark, revert cwd
sys.path.insert(0, os.path.dirname(os.path.abspath(pathlib.Path().absolute())) + "/..")
input_samples = [os.path.basename(f) for f in glob.glob('input/*_1.fastq.gz')]
input_samples = [f.replace('_1.fastq.gz','') for f in input_samples]
print(input_samples)
sample_sizes = list(range(1, 20, 3))
sample_sizes
def reset_func():
for file in glob.glob("outputs/*"):
clean_if_exists(file)
def benchmark_list_to_results(benchmark_firsts_list):
return {
"memory": max(list(map(lambda result: result.memory.max, benchmark_firsts_list))),
"disk_read": max(list(map(lambda result: result.disk.read_chars, benchmark_firsts_list))),
"disk_write": max(list(map(lambda result: result.disk.write_chars, benchmark_firsts_list))),
"runtime": sum(list(map(lambda result: result.process.execution_time, benchmark_firsts_list)))
}
def sampling_func(sample_size):
samples = input_samples[:sample_size]
return samples
mlst_data_strain_name = "campylobacter_jejuni"
typer_command = {
"command": "typer -x my_dbs/" + mlst_data_strain_name + "/db/index -1 input/%_1.fastq.gz -2 input/%_2.fastq.gz",
"parallel_args": "-j 1 -I%"
}
# active_output_print: prints stdout and stderr on every iteration
multibench_results, debug_str = multi_cmdbench({
"type": [typer_command]
}, reset_func = reset_func, iterations = 10, sampling_func = sampling_func, sample_sizes = sample_sizes,
benchmark_list_to_results = benchmark_list_to_results, active_output_print = False, progress_bar = True)
save_path = "multibench_results.txt"
samples_per_sample_size = []
for sample_size in sample_sizes:
samples_per_sample_size.append(input_samples[:sample_size])
save_multibench_results(multibench_results, samples_per_sample_size, save_path)
read_path = "multibench_results.txt"
multibench_results, samples_per_sample_size = read_multibench_results(read_path)
print(samples_per_sample_size)
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import numpy as np
from pylab import rcParams
rcParams['figure.figsize'] = 15, 3
# Typer command Plots
plot_resources(multibench_results, sample_sizes, "type")
###Output
[{'runtime': 0.291, 'memory': 95862784.0, 'disk_read': 47637264.2, 'disk_write': 463329.7}, {'runtime': 0.926, 'memory': 101044633.6, 'disk_read': 186185234.8, 'disk_write': 734114.4}, {'runtime': 1.587, 'memory': 101502566.4, 'disk_read': 324493163.8, 'disk_write': 947632.5}, {'runtime': 2.261, 'memory': 102059622.4, 'disk_read': 462974605.1, 'disk_write': 1153706.7}, {'runtime': 2.925, 'memory': 102463078.4, 'disk_read': 601247569.4, 'disk_write': 1362293.2}, {'runtime': 3.539, 'memory': 102522880.0, 'disk_read': 739579641.5, 'disk_write': 1561794.2}, {'runtime': 4.216, 'memory': 102526976.0, 'disk_read': 877847475.8, 'disk_write': 1766254.0}]
|
pxl_scripts/Pixie_101_Write_a_basic_script.ipynb | ###Markdown
Make sure you are connected to your account

---
###Code
# Install the Pixie "px" CLI in this colab environment
!bash -c "$(curl -fsSL https://withpixie.ai/install.sh)"
# You should be logged in but let's double check
!px get pem
# If you are not logged in run:
!px auth login
###Output
[90m[0000][0m [32m INFO[0m Pixie CLI
Opening authentication URL: https://work.withpixie.ai:443/login?local_mode=true&redirect_uri=http%3A%2F%2Flocalhost%3A8085%2Fauth_complete
[90m[0000][0m [32m INFO[0m Starting browser
[90m[0000][0m [32m INFO[0m Fetching refresh token ...
[90m[0000][0m [32m INFO[0m err: browser failed to open
[90m[0000][0m [32m INFO[0m Failed to perform browser based auth. Will try manual auth
Please Visit:
https://work.withpixie.ai:443/login?local_mode=true
Copy and paste token here: qaRYLqVvIM1Q7sjQHlL9jkk5kkNuPWBt
[90m[0016][0m [32m INFO[0m Fetching refresh token
[90m[0016][0m [32m INFO[0m Authentication Successful
###Markdown
You're all set! 🚀

---

Run pre-built community scripts
###Code
# View the available scripts
!px run -l

# Let's run px/service_stats to get a feel for the data
!px run px/service_stats
###Output
[90m[0000][0m [32m INFO[0m Pixie CLI
Connection Mode: passthrough
Table ID: test
TIME K8S LATENCY P50 LATENCY P90 LATENCY P99 ERROR RATE RPS BYTES PER S
2020-04-12T01:56:28Z px-sock-shop/front-end [31m451.73[0m [31m1382.92[0m [31m1543.04[0m 0.50 106 3710
2020-04-12T01:57:12Z px-sock-shop/carts 9.21 55.53 140.29 1.00 39 5631
2020-04-12T01:56:18Z px-sock-shop/catalogue 7.18 101.04 102.91 0.00 30 43977
2020-04-12T01:56:44Z px-sock-shop/front-end [31m407.18[0m [31m1314.01[0m [31m1522.95[0m 2.50 100 3532
2020-04-12T01:56:40Z px-sock-shop/shipping 0.97 1.38 1.49 0.00 8 671
2020-04-12T01:57:00Z px-sock-shop/shipping 0.96 1.23 1.25 0.00 6 513
2020-04-12T01:57:14Z px-sock-shop/catalogue 8.61 193.58 195.20 0.00 28 43474
2020-04-12T01:56:34Z px-sock-shop/orders 62.76 66.01 67.18 0.50 12 9300
2020-04-12T01:56:44Z px-sock-shop/shipping 0.86 1.10 1.88 0.00 7 553
2020-04-12T01:55:36Z px-sock-shop/front-end [31m587.20[0m [31m1388.70[0m [31m1697.09[0m 6.50 102 3587
2020-04-12T01:56:38Z px-sock-shop/user 2.96 5.20 81.79 0.00 75 24413
2020-04-12T01:56:14Z px-sock-shop/front-end [31m490.87[0m [31m1410.22[0m [31m1897.19[0m 5.50 101 3552
2020-04-12T01:56:56Z px-sock-shop/shipping 0.94 1.29 1.56 0.00 18 1422
2020-04-12T01:57:00Z px-sock-shop/front-end [31m502.16[0m [31m1394.80[0m [31m1898.38[0m 3.00 92 3278
2020-04-12T01:56:38Z px-sock-shop/orders 47.89 102.42 152.97 0.50 12 10171
2020-04-12T01:56:00Z px-sock-shop/carts 6.44 13.46 19.93 0.00 39 3054
2020-04-12T01:57:04Z px-sock-shop/front-end [31m507.46[0m [31m1405.31[0m [31m1690.38[0m 3.00 111 3975
2020-04-12T01:57:20Z px-sock-shop/front-end [33m395.65[0m [31m1284.37[0m [31m1505.17[0m 1.50 107 3774
2020-04-12T01:56:34Z px-sock-shop/front-end [31m520.76[0m [31m1309.72[0m [31m1457.90[0m 3.50 99 3552
2020-04-12T01:57:18Z px-sock-shop/carts 6.10 45.26 142.45 0.00 30 2032
2020-04-12T01:56:40Z px-sock-shop/carts 7.83 43.32 54.80 1.50 42 3908
2020-04-12T01:56:32Z px-sock-shop/orders 31.31 100.18 101.95 0.00 13 13673
2020-04-12T01:56:26Z px-sock-shop/carts 9.46 103.52 [33m240.02[0m 1.00 42 4605
2020-04-12T01:57:12Z px-sock-shop/catalogue 7.00 51.73 97.52 0.00 25 42556
2020-04-12T01:56:44Z px-sock-shop/orders 22.96 31.81 88.66 2.00 9 5803
2020-04-12T01:57:06Z px-sock-shop/payment 0.44 0.82 1.33 0.00 17 867
2020-04-12T01:57:06Z px-sock-shop/orders 89.77 164.25 166.22 0.00 17 13515
2020-04-12T01:57:22Z px-sock-shop/payment 0.36 0.85 0.95 0.00 4 229
2020-04-12T01:56:16Z px-sock-shop/payment 0.33 0.44 0.64 0.00 7 393
2020-04-12T01:56:32Z px-sock-shop/payment 0.34 0.47 0.68 0.00 13 688
2020-04-12T01:55:32Z px-sock-shop/payment 0.39 0.56 0.91 0.00 11 561
2020-04-12T01:56:34Z px-sock-shop/catalogue 6.71 94.75 103.68 0.00 29 46423
2020-04-12T01:56:24Z px-sock-shop/user 3.65 6.53 12.08 0.00 98 31688
2020-04-12T01:56:00Z px-sock-shop/payment 0.35 0.50 0.60 0.00 14 714
2020-04-12T01:56:50Z px-sock-shop/user 3.25 5.26 10.10 0.00 88 27978
2020-04-12T01:56:58Z px-sock-shop/catalogue 7.64 97.61 103.48 0.00 31 49183
2020-04-12T01:55:52Z px-sock-shop/carts 7.71 37.81 51.54 0.50 38 3568
2020-04-12T01:55:22Z px-sock-shop/carts 8.21 13.53 31.09 0.50 14 1420
2020-04-12T01:57:04Z px-sock-shop/payment 0.36 0.45 0.52 0.00 14 725
2020-04-12T01:56:32Z px-sock-shop/catalogue 8.70 95.13 111.40 0.00 19 29602
2020-04-12T01:57:12Z px-sock-shop/front-end [31m508.77[0m [31m1302.46[0m [31m1508.22[0m 5.50 107 3791
2020-04-12T01:56:02Z px-sock-shop/user 3.46 6.62 14.98 0.00 92 29357
2020-04-12T01:56:44Z px-sock-shop/user 3.43 6.18 14.93 0.00 80 26215
2020-04-12T01:56:12Z px-sock-shop/catalogue 11.99 100.36 111.82 0.00 18 31624
2020-04-12T01:56:10Z px-sock-shop/payment 0.35 0.63 0.67 0.00 12 612
2020-04-12T01:56:36Z px-sock-shop/carts 6.07 27.60 54.14 0.00 30 2038
2020-04-12T01:56:14Z px-sock-shop/catalogue 6.43 97.79 99.73 0.00 32 52361
2020-04-12T01:57:18Z px-sock-shop/catalogue 6.80 182.26 184.83 0.00 30 44291
2020-04-12T01:56:36Z px-sock-shop/catalogue 5.24 42.17 91.08 0.00 15 24806
2020-04-12T01:55:54Z px-sock-shop/payment 0.38 0.74 0.90 0.00 13 699
2020-04-12T01:56:26Z px-sock-shop/orders 21.29 111.77 [33m236.71[0m 2.50 15 11777
2020-04-12T01:56:22Z px-sock-shop/catalogue 7.91 62.10 190.63 0.00 29 50447
2020-04-12T01:55:40Z px-sock-shop/orders 29.63 109.19 110.15 4.00 12 7566
2020-04-12T01:56:36Z px-sock-shop/orders 43.78 117.84 165.54 0.00 11 9113
2020-04-12T01:55:46Z px-sock-shop/orders 23.52 56.23 92.22 2.00 12 8479
2020-04-12T01:57:16Z px-sock-shop/orders 19.34 116.90 117.94 1.50 12 9526
2020-04-12T01:56:04Z px-sock-shop/shipping 0.90 1.51 4.35 0.00 13 1066
2020-04-12T01:57:14Z px-sock-shop/shipping 0.82 1.10 1.15 0.00 14 1106
2020-04-12T01:55:40Z px-sock-shop/carts 5.86 45.77 52.96 0.00 35 3121
2020-04-12T01:56:16Z px-sock-shop/catalogue 5.76 95.23 102.48 0.00 20 26292
2020-04-12T01:56:42Z px-sock-shop/payment 0.33 0.43 0.48 0.00 17 889
2020-04-12T01:56:12Z px-sock-shop/front-end [31m589.12[0m [31m1413.10[0m [31m1891.17[0m 0.50 93 3287
2020-04-12T01:56:10Z px-sock-shop/catalogue 50.40 193.08 199.40 0.00 28 42148
2020-04-12T01:56:30Z px-sock-shop/front-end [31m403.10[0m [31m1015.44[0m [31m1526.43[0m 2.00 111 3928
2020-04-12T01:56:24Z px-sock-shop/orders 22.76 61.65 143.94 1.00 13 10130
2020-04-12T01:55:22Z px-sock-shop/payment 0.35 0.76 0.82 0.00 4 204
2020-04-12T01:56:22Z px-sock-shop/carts 7.68 34.84 47.33 0.50 32 2350
2020-04-12T01:55:44Z px-sock-shop/front-end [31m495.64[0m [31m1210.80[0m [31m1495.79[0m 4.00 103 3666
2020-04-12T01:55:26Z px-sock-shop/payment 0.37 0.69 0.77 0.00 11 594
2020-04-12T01:56:40Z px-sock-shop/user 3.51 6.92 84.60 0.00 98 31555
2020-04-12T01:56:56Z px-sock-shop/payment 0.35 0.54 1.09 0.00 18 918
2020-04-12T01:56:04Z px-sock-shop/orders 26.51 140.82 165.91 1.00 14 11427
2020-04-12T01:56:26Z px-sock-shop/shipping 0.93 1.34 1.37 0.00 12 987
2020-04-12T01:57:08Z px-sock-shop/shipping 1.12 1.69 2.09 0.00 8 671
2020-04-12T01:55:34Z px-sock-shop/carts 12.46 145.87 [33m246.57[0m 0.50 41 4333
2020-04-12T01:56:14Z px-sock-shop/shipping 1.01 1.50 1.84 0.00 6 474
2020-04-12T01:56:16Z px-sock-shop/shipping 0.92 1.01 1.02 0.00 7 553
2020-04-12T01:55:30Z px-sock-shop/front-end [31m591.65[0m [31m1493.26[0m [31m1792.98[0m 0.00 94 3307
2020-04-12T01:55:44Z px-sock-shop/payment 0.34 0.47 0.95 0.00 13 729
2020-04-12T01:56:16Z px-sock-shop/front-end [31m406.37[0m [31m1307.50[0m [31m1543.48[0m 2.00 107 3806
2020-04-12T01:57:06Z px-sock-shop/front-end [31m510.24[0m [31m1488.52[0m [31m1784.63[0m 0.00 89 3132
2020-04-12T01:56:18Z px-sock-shop/carts 7.44 143.40 154.05 2.00 40 2689
2020-04-12T01:56:18Z px-sock-shop/orders 24.66 151.78 154.88 0.00 17 13083
2020-04-12T01:57:14Z px-sock-shop/orders 22.95 99.39 101.75 2.50 16 11922
2020-04-12T01:56:32Z px-sock-shop/shipping 0.95 1.30 1.53 0.00 13 1066
2020-04-12T01:56:02Z px-sock-shop/shipping 0.89 1.08 1.19 0.00 12 948
2020-04-12T01:56:06Z px-sock-shop/orders 17.04 24.19 156.38 0.00 9 7597
2020-04-12T01:56:02Z px-sock-shop/orders 29.15 59.05 59.72 2.00 14 10366
2020-04-12T01:56:20Z px-sock-shop/catalogue 5.04 12.80 90.43 0.00 20 34765
2020-04-12T01:56:28Z px-sock-shop/payment 0.39 0.84 1.12 0.00 15 776
2020-04-12T01:56:08Z px-sock-shop/catalogue 5.99 9.43 14.26 0.00 20 28832
2020-04-12T01:57:14Z px-sock-shop/user 3.61 7.31 66.94 0.00 104 32933
2020-04-12T01:56:46Z px-sock-shop/shipping 0.82 1.27 6.15 0.00 12 987
2020-04-12T01:56:28Z px-sock-shop/carts 5.57 10.48 40.44 0.00 37 3135
2020-04-12T01:57:04Z px-sock-shop/shipping 0.90 1.16 1.33 0.00 13 1066
2020-04-12T01:57:10Z px-sock-shop/catalogue 7.03 92.37 98.41 0.00 28 38851
2020-04-12T01:56:54Z px-sock-shop/front-end [31m497.67[0m [31m1209.78[0m [31m1522.53[0m 1.50 115 4071
2020-04-12T01:57:00Z px-sock-shop/catalogue 3.72 8.86 9.83 0.00 14 24641
2020-04-12T01:55:38Z px-sock-shop/carts 8.79 47.54 52.94 0.50 38 3548
2020-04-12T01:56:20Z px-sock-shop/shipping 0.75 1.16 1.72 0.00 16 1264
2020-04-12T01:56:08Z px-sock-shop/shipping 1.01 1.37 1.80 0.00 10 829
2020-04-12T01:56:10Z px-sock-shop/user 4.12 87.06 94.34 0.00 80 25327
2020-04-12T01:56:12Z px-sock-shop/carts 7.32 40.79 52.82 0.50 38 2733
2020-04-12T01:56:52Z px-sock-shop/shipping 0.90 1.10 1.12 0.00 12 948
2020-04-12T01:56:30Z px-sock-shop/carts 10.53 48.03 58.27 1.50 40 3265
2020-04-12T01:55:42Z px-sock-shop/orders 17.87 27.69 97.84 1.50 12 8508
2020-04-12T01:57:12Z px-sock-shop/shipping 0.98 1.20 1.29 0.00 7 592
2020-04-12T01:55:22Z px-sock-shop/orders 15.45 83.82 112.65 0.00 4 3546
2020-04-12T01:57:18Z px-sock-shop/front-end [31m500.11[0m [31m1202.89[0m [31m1492.15[0m 2.50 108 3780
2020-04-12T01:55:30Z px-sock-shop/payment 0.38 0.63 1.04 0.00 14 714
2020-04-12T01:57:16Z px-sock-shop/shipping 0.91 1.12 1.16 0.00 10 829
2020-04-12T01:56:10Z px-sock-shop/shipping 0.94 1.54 1.85 0.00 12 948
2020-04-12T01:55:56Z px-sock-shop/payment 0.34 0.58 0.66 0.00 14 769
2020-04-12T01:55:52Z px-sock-shop/payment 0.35 0.70 0.99 0.00 14 813
2020-04-12T01:55:26Z px-sock-shop/orders 18.72 74.59 116.28 1.50 11 7353
2020-04-12T01:56:52Z px-sock-shop/orders 25.76 107.64 151.44 1.00 12 9337
2020-04-12T01:55:48Z px-sock-shop/orders 43.99 112.27 164.40 3.00 12 7767
2020-04-12T01:55:24Z px-sock-shop/front-end [31m480.50[0m [31m1395.43[0m [31m2032.60[0m 0.50 88 3112
2020-04-12T01:56:12Z px-sock-shop/shipping 0.68 0.98 1.22 0.00 12 987
2020-04-12T01:55:58Z px-sock-shop/orders 22.75 62.61 73.13 0.50 10 9315
2020-04-12T01:56:06Z px-sock-shop/catalogue 6.24 15.88 100.17 0.00 23 28349
2020-04-12T01:56:28Z px-sock-shop/shipping 0.93 1.91 2.22 0.00 14 1145
2020-04-12T01:56:36Z px-sock-shop/front-end [31m488.18[0m [31m1456.09[0m [31m1978.69[0m 0.00 85 2975
2020-04-12T01:56:54Z px-sock-shop/payment 0.39 0.66 1.44 0.00 9 470
2020-04-12T01:56:36Z px-sock-shop/user 3.44 8.04 89.43 0.00 86 27571
2020-04-12T01:56:22Z px-sock-shop/orders 60.00 79.61 86.26 1.50 10 7063
2020-04-12T01:57:16Z px-sock-shop/payment 0.45 1.12 1.70 0.00 12 645
2020-04-12T01:56:40Z px-sock-shop/front-end [31m401.12[0m [31m1195.20[0m [31m1454.70[0m 5.00 101 3581
2020-04-12T01:56:56Z px-sock-shop/front-end [31m491.30[0m [31m1400.79[0m [31m1701.33[0m 1.50 108 3841
2020-04-12T01:56:34Z px-sock-shop/payment 0.35 0.46 0.50 0.00 12 623
2020-04-12T01:57:22Z px-sock-shop/front-end [31m698.68[0m [31m1592.99[0m [31m1711.04[0m 0.00 18 647
2020-04-12T01:55:30Z px-sock-shop/carts 4.98 15.09 42.95 0.00 34 2752
2020-04-12T01:55:28Z px-sock-shop/carts 8.82 88.52 159.76 2.50 40 5341
2020-04-12T01:56:34Z px-sock-shop/user 3.14 5.91 7.88 0.00 78 25100
2020-04-12T01:56:26Z px-sock-shop/payment 0.37 0.61 8.55 0.00 15 820
2020-04-12T01:57:22Z px-sock-shop/catalogue 7.52 93.77 94.08 0.00 10 11783
2020-04-12T01:57:02Z px-sock-shop/front-end [31m506.05[0m [31m1311.39[0m [31m1711.17[0m 0.50 110 3864
2020-04-12T01:55:26Z px-sock-shop/carts 8.36 48.88 56.13 1.00 39 2968
2020-04-12T01:55:56Z px-sock-shop/orders 20.42 89.40 97.36 2.50 13 9026
2020-04-12T01:55:38Z px-sock-shop/payment 0.36 0.65 0.86 0.00 15 823
2020-04-12T01:56:32Z px-sock-shop/front-end [31m508.60[0m [31m1291.63[0m [31m1507.48[0m 1.00 107 3791
2020-04-12T01:57:00Z px-sock-shop/payment 0.31 0.41 0.46 0.00 6 331
2020-04-12T01:56:44Z px-sock-shop/payment 0.32 0.39 0.64 0.00 9 503
2020-04-12T01:56:14Z px-sock-shop/orders 72.87 151.83 162.95 5.50 11 5658
2020-04-12T01:55:24Z px-sock-shop/orders 110.49 [33m208.92[0m [33m209.84[0m 0.00 9 6799
2020-04-12T01:55:34Z px-sock-shop/front-end [31m511.25[0m [31m1387.42[0m [31m1779.40[0m 0.50 104 3672
2020-04-12T01:56:50Z px-sock-shop/catalogue 6.45 94.34 96.43 0.00 30 45492
2020-04-12T01:55:50Z px-sock-shop/front-end [31m404.78[0m [31m1293.56[0m [31m1805.10[0m 1.00 120 4246
2020-04-12T01:55:36Z px-sock-shop/orders 22.29 103.02 108.26 8.00 11 4491
2020-04-12T01:56:52Z px-sock-shop/front-end [33m308.19[0m [31m1193.83[0m [31m1589.28[0m 1.00 104 3657
2020-04-12T01:56:24Z px-sock-shop/shipping 0.88 1.11 1.14 0.00 12 987
2020-04-12T01:56:52Z px-sock-shop/user 4.39 9.66 85.25 0.00 101 32719
2020-04-12T01:56:30Z px-sock-shop/payment 0.37 1.00 3.78 0.00 9 470
2020-04-12T01:56:38Z px-sock-shop/carts 6.60 41.62 152.80 0.50 38 3432
2020-04-12T01:56:26Z px-sock-shop/front-end [31m509.25[0m [31m1404.90[0m [31m1607.32[0m 2.50 107 3759
2020-04-12T01:56:08Z px-sock-shop/front-end [31m505.84[0m [31m1317.80[0m [31m1692.19[0m 2.50 99 3497
2020-04-12T01:55:36Z px-sock-shop/carts 12.03 54.74 149.75 0.00 38 5628
2020-04-12T01:56:54Z px-sock-shop/user 3.02 4.83 9.22 0.00 63 20719
2020-04-12T01:56:08Z px-sock-shop/payment 0.41 0.57 0.64 0.00 12 681
2020-04-12T01:56:44Z px-sock-shop/catalogue 4.34 7.59 98.75 0.00 15 28197
2020-04-12T01:56:06Z px-sock-shop/front-end [31m502.37[0m [31m1285.69[0m [31m1613.05[0m 1.50 104 3683
2020-04-12T01:57:22Z px-sock-shop/carts 2.72 8.07 9.59 0.00 8 225
2020-04-12T01:56:16Z px-sock-shop/orders 17.24 59.83 60.28 0.50 7 7357
2020-04-12T01:56:02Z px-sock-shop/catalogue 8.36 82.70 93.85 0.00 30 45509
2020-04-12T01:56:02Z px-sock-shop/payment 0.34 0.46 0.51 0.00 14 758
2020-04-12T01:56:26Z px-sock-shop/user 3.39 7.72 62.45 0.00 91 28993
2020-04-12T01:57:20Z px-sock-shop/catalogue 6.91 15.73 92.22 0.00 25 43713
2020-04-12T01:56:56Z px-sock-shop/orders 23.79 67.72 108.20 0.00 18 13810
2020-04-12T01:56:04Z px-sock-shop/carts 4.53 12.48 15.69 1.00 36 3063
2020-04-12T01:57:10Z px-sock-shop/orders 61.54 69.94 71.70 0.00 13 10781
2020-04-12T01:57:08Z px-sock-shop/orders 25.81 76.75 106.24 0.00 8 7804
2020-04-12T01:55:58Z px-sock-shop/front-end [31m497.99[0m [31m1190.12[0m [31m1422.33[0m 2.50 114 4065
2020-04-12T01:57:10Z px-sock-shop/user 2.87 5.51 8.61 0.00 76 24214
2020-04-12T01:57:12Z px-sock-shop/user 3.39 7.18 79.27 0.00 90 29028
2020-04-12T01:57:02Z px-sock-shop/shipping 0.92 1.40 1.76 0.00 12 987
2020-04-12T01:56:42Z px-sock-shop/user 3.29 9.78 15.67 0.00 102 32146
2020-04-12T01:56:34Z px-sock-shop/carts 9.05 107.20 [33m256.03[0m 3.50 38 3340
2020-04-12T01:55:36Z px-sock-shop/payment 0.38 0.97 1.27 0.00 11 762
2020-04-12T01:56:12Z px-sock-shop/orders 32.06 168.06 170.20 0.00 12 10336
2020-04-12T01:55:42Z px-sock-shop/payment 0.34 0.43 0.80 0.00 12 645
2020-04-12T01:57:08Z px-sock-shop/carts 8.50 39.82 57.35 1.50 35 3376
2020-04-12T01:56:28Z px-sock-shop/user 4.34 9.14 72.83 0.00 106 33729
2020-04-12T01:55:52Z px-sock-shop/front-end [31m495.81[0m [31m1295.25[0m [31m1609.44[0m 3.50 101 3552
2020-04-12T01:56:42Z px-sock-shop/front-end [31m503.24[0m [31m1303.20[0m [31m1616.23[0m 1.50 99 3479
2020-04-12T01:56:30Z px-sock-shop/orders 19.20 42.78 53.04 0.50 9 6980
2020-04-12T01:55:48Z px-sock-shop/carts 5.64 12.97 22.98 0.50 29 3408
2020-04-12T01:56:30Z px-sock-shop/shipping 1.00 1.19 1.25 0.00 8 671
2020-04-12T01:56:22Z px-sock-shop/front-end [31m420.34[0m [31m1101.99[0m [31m1503.28[0m 1.00 112 3937
2020-04-12T01:56:58Z px-sock-shop/user 3.16 5.82 9.63 0.00 83 26643
2020-04-12T01:57:10Z px-sock-shop/carts 8.33 48.08 62.36 0.50 39 3034
2020-04-12T01:56:48Z px-sock-shop/payment 0.36 0.59 1.42 0.00 14 769
2020-04-12T01:56:40Z px-sock-shop/payment 0.32 0.42 0.54 0.00 13 762
2020-04-12T01:56:24Z px-sock-shop/catalogue 6.85 109.23 [33m200.65[0m 0.00 23 33128
2020-04-12T01:56:54Z px-sock-shop/carts 10.13 103.66 [33m239.27[0m 2.00 40 3751
2020-04-12T01:56:46Z px-sock-shop/orders 73.46 109.62 117.55 0.00 12 10631
2020-04-12T01:57:06Z px-sock-shop/shipping 0.88 1.33 2.17 0.00 17 1343
2020-04-12T01:56:26Z px-sock-shop/catalogue 7.21 100.55 105.64 0.00 33 49986
2020-04-12T01:56:54Z px-sock-shop/orders 48.62 59.63 62.94 0.50 9 7419
2020-04-12T01:56:56Z px-sock-shop/carts 6.54 22.94 61.75 0.50 39 2273
2020-04-12T01:57:04Z px-sock-shop/user 4.08 78.62 88.22 0.00 102 32510
2020-04-12T01:55:54Z px-sock-shop/carts 5.55 17.03 55.61 1.50 37 4332
2020-04-12T01:57:18Z px-sock-shop/user 3.48 6.25 12.78 0.00 73 23480
2020-04-12T01:56:44Z px-sock-shop/carts 4.55 12.72 53.17 0.50 31 2093
2020-04-12T01:56:46Z px-sock-shop/carts 9.27 103.48 [33m237.02[0m 2.00 42 3790
2020-04-12T01:57:06Z px-sock-shop/catalogue 13.46 98.54 100.93 0.00 26 39357
2020-04-12T01:56:06Z px-sock-shop/shipping 0.91 1.25 1.48 0.00 9 750
2020-04-12T01:56:14Z px-sock-shop/payment 0.33 0.50 0.58 0.00 11 707
2020-04-12T01:57:10Z px-sock-shop/shipping 0.79 1.05 1.13 0.00 13 1027
2020-04-12T01:56:08Z px-sock-shop/user 3.37 7.33 98.75 0.00 92 29589
2020-04-12T01:56:48Z px-sock-shop/carts 6.44 45.34 57.54 1.00 39 3590
2020-04-12T01:56:48Z px-sock-shop/shipping 0.87 1.01 1.30 0.00 11 908
2020-04-12T01:57:18Z px-sock-shop/orders 15.93 58.08 60.89 1.00 11 7457
2020-04-12T01:56:20Z px-sock-shop/user 3.63 8.47 90.05 0.00 110 35651
2020-04-12T01:57:02Z px-sock-shop/carts 10.26 106.19 [33m235.05[0m 1.50 43 4413
2020-04-12T01:55:32Z px-sock-shop/carts 6.86 46.93 52.68 1.00 33 1738
2020-04-12T01:55:28Z px-sock-shop/front-end [31m510.48[0m [31m1401.71[0m [31m1681.87[0m 2.50 96 3432
2020-04-12T01:56:24Z px-sock-shop/front-end [31m506.65[0m [31m1192.05[0m [31m1506.97[0m 1.50 111 3914
2020-04-12T01:56:04Z px-sock-shop/front-end [33m299.14[0m [31m1297.58[0m [31m1592.65[0m 1.50 107 3777
2020-04-12T01:56:38Z px-sock-shop/payment 0.35 0.51 0.55 0.00 12 648
2020-04-12T01:56:42Z px-sock-shop/orders 42.17 70.59 76.44 1.00 16 12441
2020-04-12T01:56:04Z px-sock-shop/payment 0.40 0.76 0.93 0.00 14 761
2020-04-12T01:57:10Z px-sock-shop/front-end [33m302.64[0m [31m1214.75[0m [31m1602.51[0m 0.50 120 4214
2020-04-12T01:56:58Z px-sock-shop/carts 8.67 44.08 48.38 2.00 40 5177
2020-04-12T01:56:38Z px-sock-shop/shipping 0.84 1.17 1.59 0.00 12 948
2020-04-12T01:56:46Z px-sock-shop/catalogue 7.92 92.26 95.96 0.00 35 50468
2020-04-12T01:57:20Z px-sock-shop/carts 8.87 42.91 48.15 1.00 40 3938
2020-04-12T01:56:38Z px-sock-shop/catalogue 80.87 [33m284.33[0m [33m384.55[0m 0.00 35 54630
2020-04-12T01:55:52Z px-sock-shop/orders 26.26 71.95 114.74 4.50 14 8616
2020-04-12T01:57:14Z px-sock-shop/carts 7.89 59.15 128.81 0.50 44 3524
2020-04-12T01:55:54Z px-sock-shop/front-end [31m493.00[0m [31m1205.77[0m [31m1407.57[0m 2.50 106 3785
2020-04-12T01:55:32Z px-sock-shop/orders 21.62 68.49 75.25 0.00 11 8097
2020-04-12T01:55:56Z px-sock-shop/front-end [31m413.44[0m [31m1288.78[0m [31m1510.03[0m 2.00 102 3584
2020-04-12T01:56:10Z px-sock-shop/front-end [31m513.64[0m [31m1482.72[0m [31m1686.70[0m 0.50 85 3007
2020-04-12T01:56:04Z px-sock-shop/catalogue 6.57 89.21 90.75 0.00 23 45528
2020-04-12T01:57:12Z px-sock-shop/orders 48.78 92.17 96.35 5.00 12 7930
2020-04-12T01:57:12Z px-sock-shop/payment 0.37 0.49 0.58 0.00 12 747
2020-04-12T01:56:14Z px-sock-shop/user 3.22 6.38 80.30 0.00 79 25417
2020-04-12T01:57:14Z px-sock-shop/front-end [31m518.68[0m [31m1304.21[0m [31m1586.56[0m 2.50 95 3339
2020-04-12T01:57:16Z px-sock-shop/front-end [33m397.75[0m [31m1187.47[0m [31m1620.47[0m 0.00 108 3797
2020-04-12T01:57:16Z px-sock-shop/catalogue 5.71 8.74 11.45 0.00 20 29770
2020-04-12T01:56:52Z px-sock-shop/payment 0.39 0.94 1.20 0.00 13 685
2020-04-12T01:55:32Z px-sock-shop/front-end [31m504.90[0m [31m1307.09[0m [31m1700.73[0m 1.00 99 3511
2020-04-12T01:56:58Z px-sock-shop/front-end [31m589.61[0m [31m1392.59[0m [31m1671.66[0m 1.00 88 3097
2020-04-12T01:56:30Z px-sock-shop/catalogue 8.59 104.45 111.03 0.00 35 53083
2020-04-12T01:57:16Z px-sock-shop/carts 7.34 23.55 63.46 0.50 40 3265
2020-04-12T01:57:20Z px-sock-shop/payment 0.38 0.64 0.70 0.00 12 645
2020-04-12T01:56:22Z px-sock-shop/user 3.31 7.01 72.01 0.00 73 23684
2020-04-12T01:56:20Z px-sock-shop/orders 26.26 152.66 153.93 0.00 16 13678
2020-04-12T01:55:58Z px-sock-shop/carts 7.98 150.56 [33m237.43[0m 2.00 39 4158
2020-04-12T01:56:46Z px-sock-shop/user 3.56 8.77 90.13 0.00 79 25318
2020-04-12T01:56:08Z px-sock-shop/orders 64.22 106.33 170.17 2.00 12 8985
2020-04-12T01:55:46Z px-sock-shop/front-end [31m509.60[0m [31m1290.69[0m [31m1607.25[0m 2.50 92 3266
2020-04-12T01:57:22Z px-sock-shop/orders 20.26 110.86 111.43 0.00 4 3253
2020-04-12T01:56:50Z px-sock-shop/carts 6.35 40.63 146.68 0.00 37 2945
2020-04-12T01:56:58Z px-sock-shop/payment 0.37 0.50 0.71 0.00 13 718
2020-04-12T01:56:50Z px-sock-shop/orders 25.85 75.80 78.93 1.00 14 11297
2020-04-12T01:55:44Z px-sock-shop/orders 19.60 48.66 62.16 3.00 13 10296
2020-04-12T01:57:18Z px-sock-shop/payment 0.34 0.50 0.60 0.00 11 583
2020-04-12T01:57:00Z px-sock-shop/carts 4.44 40.37 44.12 0.00 22 989
2020-04-12T01:56:32Z px-sock-shop/user 3.47 6.78 69.65 0.00 97 30310
2020-04-12T01:56:56Z px-sock-shop/user 3.98 12.54 85.69 0.00 115 36236
2020-04-12T01:56:48Z px-sock-shop/user 3.66 8.05 12.58 0.00 107 33917
2020-04-12T01:57:00Z px-sock-shop/orders 16.76 113.12 115.09 0.00 6 4844
2020-04-12T01:57:18Z px-sock-shop/shipping 0.95 1.13 1.22 0.00 10 790
2020-04-12T01:56:56Z px-sock-shop/catalogue 5.90 13.17 14.94 0.00 21 34029
2020-04-12T01:55:42Z px-sock-shop/front-end [33m309.27[0m [31m1105.26[0m [31m1593.32[0m 2.00 116 4089
2020-04-12T01:56:48Z px-sock-shop/front-end [31m487.44[0m [31m1196.86[0m [31m1501.89[0m 3.50 106 3756
2020-04-12T01:55:58Z px-sock-shop/payment 0.36 0.51 0.62 0.00 10 521
2020-04-12T01:56:42Z px-sock-shop/carts 6.50 47.00 55.22 0.50 42 3158
2020-04-12T01:55:54Z px-sock-shop/orders 21.13 50.29 56.24 0.50 13 12420
2020-04-12T01:56:12Z px-sock-shop/payment 0.33 0.54 0.95 0.00 12 637
2020-04-12T01:57:02Z px-sock-shop/orders 72.43 169.79 [33m262.53[0m 0.00 12 11492
2020-04-12T01:55:50Z px-sock-shop/carts 7.87 58.67 123.81 0.50 42 3041
2020-04-12T01:56:50Z px-sock-shop/shipping 0.88 1.06 1.65 0.00 13 1066
2020-04-12T01:56:20Z px-sock-shop/carts 4.91 14.95 41.92 0.50 42 3468
2020-04-12T01:56:46Z px-sock-shop/front-end [31m483.11[0m [31m1196.56[0m [31m1698.36[0m 1.50 105 3736
2020-04-12T01:56:36Z px-sock-shop/shipping 0.88 1.12 1.54 0.00 11 869
2020-04-12T01:56:22Z px-sock-shop/payment 0.38 0.62 0.96 0.00 10 568
2020-04-12T01:56:02Z px-sock-shop/front-end [31m509.24[0m [31m1397.52[0m [31m1534.08[0m 2.50 108 3812
2020-04-12T01:57:04Z px-sock-shop/carts 14.83 180.24 [33m360.44[0m 1.50 41 5479
2020-04-12T01:56:14Z px-sock-shop/carts 8.36 94.52 156.93 0.00 29 2633
2020-04-12T01:55:28Z px-sock-shop/payment 0.34 0.55 0.70 0.00 13 688
2020-04-12T01:56:04Z px-sock-shop/user 3.84 9.05 80.02 0.00 104 33131
2020-04-12T01:56:08Z px-sock-shop/carts 7.25 38.14 50.67 0.50 33 2901
2020-04-12T01:55:26Z px-sock-shop/front-end [31m404.11[0m [31m1188.01[0m [31m1592.89[0m 2.50 120 4229
2020-04-12T01:57:20Z px-sock-shop/shipping 0.92 1.12 1.25 0.00 10 829
2020-04-12T01:56:52Z px-sock-shop/carts 5.72 33.29 40.81 0.00 40 2666
2020-04-12T01:56:42Z px-sock-shop/shipping 0.78 1.02 1.33 0.00 16 1264
2020-04-12T01:56:40Z px-sock-shop/orders 19.44 73.92 112.51 4.50 13 7893
2020-04-12T01:56:16Z px-sock-shop/user 3.23 4.80 6.90 0.00 69 22351
2020-04-12T01:57:02Z px-sock-shop/user 3.37 75.65 84.58 0.00 84 26417
2020-04-12T01:56:40Z px-sock-shop/catalogue 3.91 9.16 97.24 0.00 18 25524
2020-04-12T01:57:04Z px-sock-shop/catalogue 5.60 86.80 87.99 0.00 23 36780
2020-04-12T01:56:18Z px-sock-shop/user 3.38 6.32 11.08 0.00 107 33380
2020-04-12T01:56:06Z px-sock-shop/carts 8.15 139.97 163.44 2.00 38 2923
2020-04-12T01:56:16Z px-sock-shop/carts 10.87 [33m253.12[0m [31m413.85[0m 2.00 42 4686
2020-04-12T01:55:50Z px-sock-shop/payment 0.38 0.58 0.79 0.00 11 586
2020-04-12T01:57:08Z px-sock-shop/user 3.81 8.07 18.00 0.00 85 27810
2020-04-12T01:57:14Z px-sock-shop/payment 0.37 0.58 0.65 0.00 16 896
2020-04-12T01:57:02Z px-sock-shop/catalogue 6.07 98.84 106.64 0.00 33 47503
2020-04-12T01:56:50Z px-sock-shop/payment 0.38 0.74 1.25 0.00 14 761
2020-04-12T01:56:54Z px-sock-shop/shipping 0.95 1.14 1.32 0.00 8 671
2020-04-12T01:57:22Z px-sock-shop/user 4.65 9.12 82.68 0.00 59 17933
2020-04-12T01:55:24Z px-sock-shop/payment 0.40 0.63 0.97 0.00 9 459
2020-04-12T01:56:46Z px-sock-shop/payment 0.38 0.51 0.78 0.00 12 637
2020-04-12T01:56:20Z px-sock-shop/payment 0.36 0.54 0.89 0.00 16 816
2020-04-12T01:55:46Z px-sock-shop/payment 0.35 0.69 1.39 0.00 12 681
2020-04-12T01:55:56Z px-sock-shop/carts 7.75 38.69 53.12 0.50 42 2842
2020-04-12T01:56:38Z px-sock-shop/front-end [31m504.62[0m [31m1297.95[0m [31m1808.74[0m 1.00 109 3829
2020-04-12T01:55:42Z px-sock-shop/carts 7.47 16.60 52.96 1.50 41 3007
2020-04-12T01:55:44Z px-sock-shop/carts 7.03 15.97 56.29 1.50 36 4600
2020-04-12T01:57:20Z px-sock-shop/orders 21.03 60.77 98.85 1.50 12 9382
2020-04-12T01:56:36Z px-sock-shop/payment 0.35 0.57 4.63 0.00 11 561
2020-04-12T01:56:28Z px-sock-shop/orders 50.90 90.84 163.84 0.50 15 12913
2020-04-12T01:55:48Z px-sock-shop/front-end [31m491.19[0m [31m1398.05[0m [31m1783.20[0m 2.50 83 2922
2020-04-12T01:57:06Z px-sock-shop/carts 5.41 50.92 90.58 0.00 37 2321
2020-04-12T01:57:20Z px-sock-shop/user 3.56 6.28 54.27 0.00 96 31196
2020-04-12T01:56:00Z px-sock-shop/orders 19.33 66.51 68.89 0.00 14 11425
2020-04-12T01:56:02Z px-sock-shop/carts 8.16 90.17 147.62 0.50 41 3304
2020-04-12T01:56:28Z px-sock-shop/catalogue 8.66 126.47 191.83 0.00 16 28895
2020-04-12T01:55:46Z px-sock-shop/carts 7.25 78.75 [33m242.81[0m 1.00 40 3458
2020-04-12T01:57:04Z px-sock-shop/orders 58.95 135.51 162.41 0.50 14 13627
2020-04-12T01:56:48Z px-sock-shop/catalogue 4.68 9.88 90.58 0.00 21 33942
2020-04-12T01:55:38Z px-sock-shop/orders 27.36 168.63 168.92 1.50 15 12275
2020-04-12T01:56:24Z px-sock-shop/payment 0.36 0.56 0.78 0.00 13 710
2020-04-12T01:56:54Z px-sock-shop/catalogue 7.20 104.09 199.87 0.00 33 49159
2020-04-12T01:56:18Z px-sock-shop/payment 0.34 0.73 1.45 0.00 17 867
2020-04-12T01:55:22Z px-sock-shop/front-end [31m601.72[0m [31m1562.77[0m [31m1690.26[0m 0.00 29 1015
2020-04-12T01:56:58Z px-sock-shop/orders 19.98 68.02 99.79 2.50 13 10692
2020-04-12T01:56:18Z px-sock-shop/front-end [31m596.16[0m [31m1399.96[0m [31m1645.59[0m 2.00 105 3733
2020-04-12T01:55:40Z px-sock-shop/front-end [31m505.01[0m [31m1301.22[0m [31m1685.23[0m 4.00 110 3850
2020-04-12T01:55:34Z px-sock-shop/payment 0.41 11.00 51.22 0.00 11 586
2020-04-12T01:57:02Z px-sock-shop/payment 0.38 0.61 0.84 0.00 12 637
2020-04-12T01:57:08Z px-sock-shop/front-end [31m504.25[0m [31m1083.10[0m [31m1595.72[0m 1.50 105 3718
2020-04-12T01:57:10Z px-sock-shop/payment 0.45 0.87 1.13 0.00 13 663
2020-04-12T01:56:22Z px-sock-shop/shipping 0.94 1.16 1.23 0.00 9 711
2020-04-12T01:56:32Z px-sock-shop/carts 7.78 50.42 66.35 1.00 36 4851
2020-04-12T01:57:06Z px-sock-shop/user 3.33 9.54 92.90 0.00 99 31528
2020-04-12T01:55:28Z px-sock-shop/orders 71.81 146.84 149.46 0.00 13 13515
2020-04-12T01:56:48Z px-sock-shop/orders 16.70 31.74 96.81 2.50 14 9820
2020-04-12T01:57:00Z px-sock-shop/user 3.60 79.11 93.22 0.00 66 22131
2020-04-12T01:57:08Z px-sock-shop/payment 0.33 0.44 0.53 0.00 8 433
2020-04-12T01:56:50Z px-sock-shop/front-end [31m509.23[0m [31m1308.46[0m [31m1599.61[0m 0.50 106 3710
2020-04-12T01:55:24Z px-sock-shop/carts 4.97 10.42 13.11 0.00 28 1419
2020-04-12T01:56:00Z px-sock-shop/front-end [31m454.24[0m [31m1291.02[0m [31m1600.15[0m 0.00 100 3500
2020-04-12T01:56:18Z px-sock-shop/shipping 0.92 1.08 1.51 0.00 17 1343
2020-04-12T01:56:20Z px-sock-shop/front-end [31m409.57[0m [31m1295.91[0m [31m1651.30[0m 0.50 102 3584
2020-04-12T01:55:40Z px-sock-shop/payment 0.34 0.54 0.67 0.00 12 700
2020-04-12T01:56:42Z px-sock-shop/catalogue 6.53 109.73 111.73 0.00 34 52877
2020-04-12T01:56:52Z px-sock-shop/catalogue 6.60 12.47 87.91 0.00 17 29100
2020-04-12T01:56:10Z px-sock-shop/carts 6.58 162.54 [33m260.17[0m 1.00 32 2979
2020-04-12T01:56:12Z px-sock-shop/user 3.69 9.54 98.11 0.00 87 28213
2020-04-12T01:56:30Z px-sock-shop/user 3.50 6.92 9.21 0.00 75 25193
2020-04-12T01:55:48Z px-sock-shop/payment 0.33 0.65 55.48 0.00 12 703
2020-04-12T01:56:06Z px-sock-shop/user 3.24 5.41 8.87 0.00 62 20285
2020-04-12T01:55:30Z px-sock-shop/orders 23.19 102.41 112.60 0.00 14 11860
2020-04-12T01:56:58Z px-sock-shop/shipping 0.90 1.21 1.22 0.00 10 829
2020-04-12T01:57:16Z px-sock-shop/user 3.44 7.26 94.97 0.00 95 30946
2020-04-12T01:56:24Z px-sock-shop/carts 6.91 55.40 135.40 1.00 46 3467
2020-04-12T01:56:34Z px-sock-shop/shipping 0.99 1.17 1.23 0.00 11 908
2020-04-12T01:56:10Z px-sock-shop/orders 29.03 105.21 107.83 0.00 12 10345
2020-04-12T01:55:50Z px-sock-shop/orders 19.56 98.18 98.65 0.00 11 9326
2020-04-12T01:57:08Z px-sock-shop/catalogue 5.37 84.70 89.91 0.00 23 40779
2020-04-12T01:56:06Z px-sock-shop/payment 0.37 0.51 0.58 0.00 9 484
2020-04-12T01:55:34Z px-sock-shop/orders 70.78 163.55 164.78 0.00 11 10999
2020-04-12T01:55:38Z px-sock-shop/front-end [31m499.69[0m [31m1400.19[0m [31m1735.18[0m 2.00 88 3112
2020-04-12T01:57:22Z px-sock-shop/shipping 0.96 1.78 2.12 0.00 11 869
Table ID: outgoing_traffic
REQUESTOR RESPONDER RPS BYTES PER S ERROR RATE
px-sock-shop/orders px-sock-shop/payment 12.11 656.46 0.00
px-sock-shop/front-end px-sock-shop/carts 24.49 1889.79 0.86
px-sock-shop/front-end px-sock-shop/orders 12.07 9546.02 1.27
px-sock-shop/load-test px-sock-shop/front-end 100.34 3543.85 1.87
px-sock-shop/front-end px-sock-shop/user 49.98 16887.27 0.00
px-sock-shop/front-end px-sock-shop/catalogue 24.83 38825.71 0.00
px-sock-shop/orders px-sock-shop/shipping 11.27 912.07 0.00
px-sock-shop/orders px-sock-shop/carts 12.11 1407.92 0.00
px-sock-shop/orders px-sock-shop/user 37.73 11331.12 0.00
Table ID: incoming_traffic
REQUESTOR RESPONDER RPS BYTES PER S ERROR RATE
px-sock-shop/orders px-sock-shop/payment 12.11 656.46 0.00
px-sock-shop/front-end px-sock-shop/carts 24.49 1889.79 0.86
px-sock-shop/front-end px-sock-shop/orders 12.07 9546.02 1.27
px-sock-shop/load-test px-sock-shop/front-end 100.34 3543.85 1.87
px-sock-shop/front-end px-sock-shop/user 49.98 16887.27 0.00
px-sock-shop/front-end px-sock-shop/catalogue 24.83 38825.71 0.00
px-sock-shop/orders px-sock-shop/shipping 11.27 912.07 0.00
px-sock-shop/front-end 0.00 17.00 0.00
px-sock-shop/orders px-sock-shop/carts 12.11 1407.92 0.00
px-sock-shop/orders px-sock-shop/user 37.73 11331.12 0.00
[36m
==> [0m [1mLive UI[0m: [4mhttps://withpixie.ai:443/live?script=px/service_stats[0m.
###Markdown
Make sure you click on the Live UI URL ✨ printed to see some magic 😊

------

Let's try writing a script of our own

Let's play with request traces in the http_events table
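The `.pxl` files run in the next cell are not included in this notebook, so as a rough illustration only, here is a minimal PxL-style sketch of the first step (selecting a few rows from `http_events`). It assumes the standard PxL `px` module (`px.DataFrame`, `head`, `px.display`); table and column details can differ between Pixie versions, so treat it as a sketch rather than the actual script contents.

```python
# Hypothetical sketch of a "select a few rows" PxL script; NOT the contents of
# the real pxl101_htttp_select.pxl file, which is not shown in this notebook.
import px

# Pull the last minute of HTTP request traces captured by Pixie.
df = px.DataFrame(table='http_events', start_time='-1m')

# Keep only a handful of rows so the output is easy to eyeball.
df = df.head(n=3)

# Emit the result as a table named 'output'.
px.display(df, 'output')
```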
###Code
# Check out what the data looks like by viewing three rows
!px run -f pxl101_htttp_select.pxl

# Calculate the number of requests per minute
!px run -f pxl101_http_rps.pxl

# Calculate the number of requests per minute, grouped by http_path
!px run -f px101_http_rps_group.pxl
###Output
[90m[0000][0m [32m INFO[0m Pixie CLI
Connection Mode: passthrough
Table ID: output
TIMEBIN MIN HTTP REQ PATH REQUEST COUNT
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=bB3HTIFHUKD_i7poT2_-s7WGF_uY2UKt 3
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=lfXpfBx6Al8xHixl-jw-P2zfSK8zbUQx 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=gHNQntBfCi-eRkgtabgvtOLn8Au32i9T 8
2020-04-12T02:47:00Z /computeMetadata/v1/instance/network-interfaces/?alt=json&last_etag=c19c98a3b010774d&recursive=True&timeout_sec=87&wait_for_change=True 1
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=gHNQntBfCi-eRkgtabgvtOLn8Au32i9T 8
2020-04-12T02:55:00Z /computeMetadata/v1/?alt=json&last_etag=22d79ffec8acd196&recursive=True&timeout_sec=84&wait_for_change=True 1
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=uB_8q0SzFzQpW_9MTpxtgmtzMe0dZ2y2 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=VJCVa8tEORtAp_Ia7HyKqprVdK2Kk4FO 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=iG0yPc3E2NZ-8Uo4WLWASLkGnl7ps8W_ 7
2020-04-12T02:55:00Z /addresses/57a98d98e4b00679b4a830b0 296
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=YkB6ELeogxrUOYPjdCgPKUkCZMYBnvwe 3
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=XwkMT6uLuCxUyeYGW6n5BhmM5AXko-Je 3
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=uB_8q0SzFzQpW_9MTpxtgmtzMe0dZ2y2 3
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=3lTR6LCAFweP9b9Q9wFHA50D8sm6tBBJ 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=yObJR8n_AR-IlPkBWPXI_pORB5n9-ih3 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=smm408A4Pm37cCvpxo4jjqgkLL9zFwDz 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=9hD3tUEhvs5tRtu18aN5OU7vj2gTLsRm 3
2020-04-12T02:54:00Z /catalogue/zzz4f044-b040-410d-8ead-4de0446aec7e 71
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=D2fIHWZ8mI-c_x7u0EpH4z5BIl9bOI8P 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=P_CJpRE4l8rI0RghYPUqzdlDqI_SAcGm 8
2020-04-12T02:46:00Z /healthz 67
2020-04-12T02:55:00Z /customers/57a98d98e4b00679b4a830b2/addresses 294
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=JfXftTRtQpGxULureG5i3V0hNCjpH6Mt 8
2020-04-12T02:50:00Z /orders 788
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=3lTR6LCAFweP9b9Q9wFHA50D8sm6tBBJ 3
2020-04-12T02:49:00Z /healthcheck/kubedns 6
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=TDkZhwyxUQtOeITxZIgGVLvSt_3-VnMA 3
2020-04-12T02:49:00Z /computeMetadata/v1/instance/virtual-clock/drift-token?alt=json&last_etag=c6a7bbfe995acc98&recursive=False&timeout_sec=60&wait_for_change=True 1
2020-04-12T02:54:00Z /v1.38/exec/4bfa9de1ffadaa10b3532b01a5d129aaad0df4ea031e6d9a946647f17da5040a/start 1
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=P_CJpRE4l8rI0RghYPUqzdlDqI_SAcGm 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=M8kJr_QOQ98bONDIBlQ730wJq5NU7QQH 3
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=eBFAKBqqxZ8LzukR5fm3VreqvWsnq291 3
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=v_NTj5KvxDlqQGVtjWktH6p61_tDaEW4 7
2020-04-12T02:54:00Z /computeMetadata/v1/instance/network-interfaces/?alt=json&last_etag=5a05bb3997c99b1e&recursive=True&timeout_sec=75&wait_for_change=True 1
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=N4ND-6Z-3FD46M5PUCoFFWsZL6L-hPIm 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=N4ND-6Z-3FD46M5PUCoFFWsZL6L-hPIm 3
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=3Elvq_ehubxKBowsrgCDL7I0Of9pXO7m 7
2020-04-12T02:47:00Z /_ping 6
2020-04-12T02:46:00Z /v1.39/containers/json 6
2020-04-12T02:51:00Z /computeMetadata/v1/?alt=json&last_etag=cbea8ff0a94a9376&recursive=True&timeout_sec=66&wait_for_change=True 1
2020-04-12T02:54:00Z /v1.38/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D&limit=0 1
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=o_XUZx7XCU5LZcbxMGDBlagacE7e5KmR 8
2020-04-12T02:54:00Z /catalogue/808a2de1-1aaa-4c25-a9b9-6612e8f29a38 90
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=boWMLlP8UCC7qBiCssQ9dW6pCs3VBSq2 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=7pUKaPS6opvW6Ff6juP8Wcjw8cXiDUxJ 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=bh1FcW_vaXECnZrcgBCKq4GTxmJgMcye 7
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=hhP1K5OKeSNJtIUpKljRYyGn7yIbbrMo 3
2020-04-12T02:54:00Z /catalogue/819e1fbf-8b7e-4f6d-811f-693534916a8b 63
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=wQ62WuHXGy7IMt6WndAQhjGOwoq5GUzi 3
2020-04-12T02:48:00Z /metrics/probes 4
2020-04-12T02:55:00Z /category.html 283
2020-04-12T02:54:00Z / 795
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=fyPwHubkmgtY5UNQ6UpbUZAmZKMnRzOm 3
2020-04-12T02:50:00Z /healthcheck/dnsmasq 6
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=9hD3tUEhvs5tRtu18aN5OU7vj2gTLsRm 8
2020-04-12T02:54:00Z /v1.38/exec/0a73806ab4d4f5ffdda8d09db63d91feded837d2cb95546712b6f10550650ce1/start 1
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=lfXpfBx6Al8xHixl-jw-P2zfSK8zbUQx 3
2020-04-12T02:55:00Z /v1.38/exec/699b028f2b58f62c06e900e9cbd06d8190a95b3e6b3939b8af0a573a26f166f7/start 1
2020-04-12T02:55:00Z /detail.html?id=03fef6ac-1896-4ce8-bd69-b798f85c6e0b 41
2020-04-12T02:53:00Z /stats/summary/ 26
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=gHNQntBfCi-eRkgtabgvtOLn8Au32i9T 7
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=q7wjAXEZHkF99dQh1eYDF0T1PlPNCX-k 3
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=fyPwHubkmgtY5UNQ6UpbUZAmZKMnRzOm 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=jxp-ontBnAVosO2uDy3AzrKjo0NW4GHH 8
2020-04-12T02:55:00Z /metrics/probes 16
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=_N0cdANb6O2YgSVWOV_-SA55t1-CKo9s 3
2020-04-12T02:46:00Z /_ping 6
2020-04-12T02:53:00Z /detail.html?id=837ab141-399e-4c1f-9abc-bace40296bac 53
2020-04-12T02:47:00Z /v1.38/exec/97ad14d92735c67a52159c3ab6a97df13d41de7e2999d685fc70b756fe33b941/start 1
2020-04-12T02:54:00Z /v1.38/exec/651fab7421938e47924fcd4552181813e166968dee0a4122cd85d19ab1a8b42c/start 1
2020-04-12T02:45:00Z /computeMetadata/v1/instance/machine-type 4
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=wQ62WuHXGy7IMt6WndAQhjGOwoq5GUzi 8
2020-04-12T02:52:00Z /healthcheck/dnsmasq 6
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=kVthD7n8GxOYHQaN9WJUPyORi_LCD2SY 3
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=XwkMT6uLuCxUyeYGW6n5BhmM5AXko-Je 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=6ZD970s2YlBWm3mXU3nCtdqdBX9SGVt4 3
2020-04-12T02:54:00Z /detail.html?id=837ab141-399e-4c1f-9abc-bace40296bac 84
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=iG0yPc3E2NZ-8Uo4WLWASLkGnl7ps8W_ 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=h5Kh4yCIhgtau_52Ckbotjk7uYekb1ZQ 3
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=ev5cEfhUlhTRiyTorsmczpaF1o1hQZwa 3
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=-U41wRXZY2v35Zohn11uP89WO-t70QUI 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=uB_8q0SzFzQpW_9MTpxtgmtzMe0dZ2y2 3
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=gHNQntBfCi-eRkgtabgvtOLn8Au32i9T 4
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=hkODPfcZzlqO2srXLhJNKFDqFmySYQue 3
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=6ru4LCc6m3ocFp3GTlTT9LeKimu63P79 8
2020-04-12T02:51:00Z /computeMetadata/v1/?alt=json&last_etag=b0c8f04cbf346a47&recursive=True&timeout_sec=67&wait_for_change=True 1
2020-04-12T02:54:00Z /addresses/57a98d98e4b00679b4a830b0 743
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=yObJR8n_AR-IlPkBWPXI_pORB5n9-ih3 3
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=o_XUZx7XCU5LZcbxMGDBlagacE7e5KmR 3
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=c07qzjiV3u1i3av9WiMBhEz2zn2wBM_k 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=d8t_D3_6dZxNsXfCm6WgaDQTR-v6AjbJ 3
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=wIoUWmu3VmJLuH6ba0N5aL1QGN4UOnF3 8
2020-04-12T02:50:00Z /computeMetadata/v1/instance/network-interfaces/0/ip 14
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=z0lR8W6cU4WD5SMxrMWhEjDMbrwN4m09 8
2020-04-12T02:50:00Z /computeMetadata/v1/instance/hostname 29
2020-04-12T02:55:00Z /v1.38/exec/6898f1d0dd783eb3f439fbd7212e723662223fa140e759f4f09f18a630919c6e/start 1
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=wIoUWmu3VmJLuH6ba0N5aL1QGN4UOnF3 8
2020-04-12T02:47:00Z /metrics 22
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=N4ND-6Z-3FD46M5PUCoFFWsZL6L-hPIm 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=EdXACDdjldytp7DMjA41KErjzYx0oIXF 9
2020-04-12T02:45:00Z /computeMetadata/v1/instance/hostname 7
2020-04-12T02:53:00Z /basket.html 516
2020-04-12T02:51:00Z /healthcheck/kubedns 6
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=2HQXyDVJ-pcuDSrbtbFx4iu6gPZT4A0y 7
2020-04-12T02:54:00Z /catalogue/837ab141-399e-4c1f-9abc-bace40296bac 85
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=Q4eR7Cp547FqEZnSgfvRFfjzhv0t7AAr 3
2020-04-12T02:54:00Z /detail.html?id=zzz4f044-b040-410d-8ead-4de0446aec7e 80
2020-04-12T02:50:00Z /readiness 6
2020-04-12T02:49:00Z /paymentAuth 66
2020-04-12T02:49:00Z /metrics 22
2020-04-12T02:53:00Z /computeMetadata/v1/instance/network-interfaces/?alt=json&last_etag=5a05bb3997c99b1e&recursive=True&timeout_sec=75&wait_for_change=True 1
2020-04-12T02:51:00Z /v1.38/containers/09f80db95d352aea8645ce45d5466fa464438a7a0a671e6061879eeb3863e358/exec 1
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=D9VHZ1GQcHpLYoPYPluH3qjq_SNxrFGK 8
2020-04-12T02:54:00Z /detail.html?id=819e1fbf-8b7e-4f6d-811f-693534916a8b 64
2020-04-12T02:45:00Z /readiness 3
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=boWMLlP8UCC7qBiCssQ9dW6pCs3VBSq2 8
2020-04-12T02:52:00Z /v1.38/exec/961f549fc1489078021e42e3f8f516066f4ec4a2da2c216eeb5182fbe2e3f51c/start 1
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=3lTR6LCAFweP9b9Q9wFHA50D8sm6tBBJ 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=h5Kh4yCIhgtau_52Ckbotjk7uYekb1ZQ 3
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=0qzsNc1Ts4atafclCEgT0zbm9D0DwDXD 3
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=kz7bCKsNRc3TFaoEY6xHGov5nQsTOR-i 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=bh1FcW_vaXECnZrcgBCKq4GTxmJgMcye 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=ev5cEfhUlhTRiyTorsmczpaF1o1hQZwa 8
2020-04-12T02:52:00Z /v1.38/exec/b16c353e33c67c8a7e2ef71ff9656ab44dab1f716266f8d5f474805783166cf2/start 1
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=wnsnkEIoMF01x_MRD_XudPWv1qugnfOX 7
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=aEcBzH8p2V21tLLeSMgkUA5eEz82zfjA 3
2020-04-12T02:54:00Z /healthz 306
2020-04-12T02:52:00Z /stats/summary/ 24
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=ev5cEfhUlhTRiyTorsmczpaF1o1hQZwa 3
2020-04-12T02:53:00Z /healthcheck/dnsmasq 6
2020-04-12T02:51:00Z /v1.38/containers/09f80db95d352aea8645ce45d5466fa464438a7a0a671e6061879eeb3863e358/json 1
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=TDkZhwyxUQtOeITxZIgGVLvSt_3-VnMA 3
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=VQPInhoKQSkfixYEU2Zz4ehPodjlJerT 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=KHF_u-awW-eJRpdC6foZcwX-5Vsd4jom 3
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=xPuMp0VXJaFVh6UNcH6fahiGAFb3TMC2 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=D2fIHWZ8mI-c_x7u0EpH4z5BIl9bOI8P 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=VFJgsDILy7gzG3MactdGgRZseLf-tEFe 7
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=EdXACDdjldytp7DMjA41KErjzYx0oIXF 3
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=3_WC_IejM-MiM6a6clv-peeLLDB3qjXl 3
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=cejNd5pxw0tUA4VbcgLmrQ3U0liqJpDB 8
2020-04-12T02:52:00Z /paymentAuth 784
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=ATF243SNwF0z715e_GUPh2nO7N6D5Owk 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=8tsFeCFJ7Jw0-r8rIN6InugDQUE-1nt1 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=CJAMglIfaMw3RLtPKJ-QJ8lswfLnAcPO 7
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=KYXRbX7TwFFFS7Ot_1B4q8MaWD-nQ2Sp 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=VQPInhoKQSkfixYEU2Zz4ehPodjlJerT 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=KYXRbX7TwFFFS7Ot_1B4q8MaWD-nQ2Sp 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=M83aIVaNfF8ZgGIB-O97LzsJzUqOnGO- 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=khCiXJoQcgscOyAyRSwoQeejm4VfwLXm 8
2020-04-12T02:53:00Z /v1.38/exec/03f42d01698f0ef242fcb89b65248d7a86e13d0ef24a2564256ab20ad10d267e/start 1
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=_kDGEIAmOyitXu12aB9vmzhGn8cwfADt 8
2020-04-12T02:49:00Z /computeMetadata/v1/instance/network-interfaces/0/ip 6
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=bB3HTIFHUKD_i7poT2_-s7WGF_uY2UKt 3
2020-04-12T02:54:00Z /computeMetadata/v1/instance/network-interfaces/?alt=json&last_etag=e0caef74ae1c706&recursive=True&timeout_sec=89&wait_for_change=True 1
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=9hD3tUEhvs5tRtu18aN5OU7vj2gTLsRm 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=BB8gDFPb33ZwD4fh8lP396R0QdjjXoCj 8
2020-04-12T02:50:00Z / 1
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=uB_8q0SzFzQpW_9MTpxtgmtzMe0dZ2y2 7
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=39V3Cu1JP-RaIZxMbSaN_GEgt7_370ko 3
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=zF_SZlG_Ergu-BDtWB2GFUz8ChVRgoao 4
2020-04-12T02:46:00Z /computeMetadata/v1/instance/virtual-clock/drift-token?alt=json&last_etag=c6a7bbfe995acc98&recursive=False&timeout_sec=60&wait_for_change=True 1
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=452LryYsKBED3xt90eVR_3QFavHIAfPE 3
2020-04-12T02:50:00Z /paymentAuth 788
2020-04-12T02:53:00Z /health 160
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=v_NTj5KvxDlqQGVtjWktH6p61_tDaEW4 9
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=YkB6ELeogxrUOYPjdCgPKUkCZMYBnvwe 8
2020-04-12T02:53:00Z /computeMetadata/v1/?alt=json&last_etag=22d79ffec8acd196&recursive=True&timeout_sec=84&wait_for_change=True 1
2020-04-12T02:55:00Z /readyz 3
2020-04-12T02:53:00Z /detail.html?id=819e1fbf-8b7e-4f6d-811f-693534916a8b 59
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=gDYBwUYYL5ghLdmtYZdixnvKfdRPHd5t 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=JBIRgAFTHIyTuo2fmW0MEw9C7emRwYwc 8
2020-04-12T02:55:00Z /orders 562
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=BYHbfSN3hQklMnCWFi5-SinWVo9mk3pP 4
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=9hD3tUEhvs5tRtu18aN5OU7vj2gTLsRm 8
2020-04-12T02:47:00Z /computeMetadata/v1/instance/network-interfaces/0/ip 6
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=r9GY3WExwkXvfK6tn3IHj-A6tzYSErxt 3
2020-04-12T02:55:00Z /computeMetadata/v1/instance/network-interfaces/0/ip 11
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=paPxNW3GYkHLgItnu-cegFrWRkUy81rK 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=eBFAKBqqxZ8LzukR5fm3VreqvWsnq291 7
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=zF_SZlG_Ergu-BDtWB2GFUz8ChVRgoao 7
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=gDYBwUYYL5ghLdmtYZdixnvKfdRPHd5t 7
2020-04-12T02:53:00Z /readyz 4
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=XFomT125sGTs4Cd7c2trsQzXyqFO9425 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=kVthD7n8GxOYHQaN9WJUPyORi_LCD2SY 3
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=Q4eR7Cp547FqEZnSgfvRFfjzhv0t7AAr 8
2020-04-12T02:54:00Z /basket.html 701
2020-04-12T02:46:00Z /v1.38/exec/f0cecfafa3a7278aa5b4362216ea76dad3b699c4f7cc41c63c5f73dacd9dc581/start 1
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=khCiXJoQcgscOyAyRSwoQeejm4VfwLXm 8
2020-04-12T02:54:00Z /login 1332
2020-04-12T02:49:00Z /health 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=wnsnkEIoMF01x_MRD_XudPWv1qugnfOX 3
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=3_WC_IejM-MiM6a6clv-peeLLDB3qjXl 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=2HQXyDVJ-pcuDSrbtbFx4iu6gPZT4A0y 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=AznAsOlKv87EtBeuOZWGlG8_ha579TQV 3
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=qWVNbJ4L66OyVHX0HVfrA-K_-W8ha44y 3
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=YkB6ELeogxrUOYPjdCgPKUkCZMYBnvwe 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=khCiXJoQcgscOyAyRSwoQeejm4VfwLXm 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=v_NTj5KvxDlqQGVtjWktH6p61_tDaEW4 3
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=_N0cdANb6O2YgSVWOV_-SA55t1-CKo9s 4
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=khCiXJoQcgscOyAyRSwoQeejm4VfwLXm 3
2020-04-12T02:55:00Z / 305
2020-04-12T02:55:00Z /computeMetadata/v1/instance/virtual-clock/drift-token?alt=json&last_etag=c6a7bbfe995acc98&recursive=False&timeout_sec=60&wait_for_change=True 3
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=XFomT125sGTs4Cd7c2trsQzXyqFO9425 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=3Elvq_ehubxKBowsrgCDL7I0Of9pXO7m 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=2j24EuQs5wiEqGUucBxdSs4YZ3KpS42y 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=Q7HkjK0ZsojyjwWmSQVHky8VE8ujE1Zp 8
2020-04-12T02:55:00Z /v1.38/exec/19e278267062b41ddb0a2c652142c411f99f55619e0a8652317e57b6fb40e2f7/start 1
2020-04-12T02:45:00Z /computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip 3
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=452LryYsKBED3xt90eVR_3QFavHIAfPE 7
2020-04-12T02:54:00Z /healthcheck/kubedns 11
2020-04-12T02:55:00Z /catalogue/819e1fbf-8b7e-4f6d-811f-693534916a8b 25
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=bB3HTIFHUKD_i7poT2_-s7WGF_uY2UKt 7
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=Q7HkjK0ZsojyjwWmSQVHky8VE8ujE1Zp 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=o_XUZx7XCU5LZcbxMGDBlagacE7e5KmR 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=XwkMT6uLuCxUyeYGW6n5BhmM5AXko-Je 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=KHF_u-awW-eJRpdC6foZcwX-5Vsd4jom 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=VQPInhoKQSkfixYEU2Zz4ehPodjlJerT 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=D2fIHWZ8mI-c_x7u0EpH4z5BIl9bOI8P 3
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=QplzvaYvX3HRh0X32FKe5jB79oSM7xLz 3
2020-04-12T02:47:00Z /computeMetadata/v1/instance/machine-type 6
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=-U41wRXZY2v35Zohn11uP89WO-t70QUI 3
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=0qzsNc1Ts4atafclCEgT0zbm9D0DwDXD 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=uB_8q0SzFzQpW_9MTpxtgmtzMe0dZ2y2 8
2020-04-12T02:53:00Z /computeMetadata/v1/?alt=json&last_etag=cbea8ff0a94a9376&recursive=True&timeout_sec=66&wait_for_change=True 1
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/items 585
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=VJCVa8tEORtAp_Ia7HyKqprVdK2Kk4FO 3
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=k80hdpAZZE4LqA2i-VSxM9u_vIN5CWQb 8
2020-04-12T02:52:00Z /health 160
2020-04-12T02:45:00Z /computeMetadata/v1/instance/network-interfaces/0/ip 3
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=KEkB2dAH3r9eumDWQQKZ9cmajDUeu3gH 3
2020-04-12T02:53:00Z /login 442
2020-04-12T02:52:00Z /computeMetadata/v1/?alt=json&last_etag=22d79ffec8acd196&recursive=True&timeout_sec=84&wait_for_change=True 1
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=P_CJpRE4l8rI0RghYPUqzdlDqI_SAcGm 7
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=zF_SZlG_Ergu-BDtWB2GFUz8ChVRgoao 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=4ClcOj84lcDsoDg2CWeF0OnaPcOAhOXC 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=dRiJl-7okNzSLdlEoU6kihXN0NMEX42A 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=q7wjAXEZHkF99dQh1eYDF0T1PlPNCX-k 8
2020-04-12T02:52:00Z /metrics/probes 12
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=2HQXyDVJ-pcuDSrbtbFx4iu6gPZT4A0y 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=2HQXyDVJ-pcuDSrbtbFx4iu6gPZT4A0y 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=ATF243SNwF0z715e_GUPh2nO7N6D5Owk 8
2020-04-12T02:53:00Z /computeMetadata/v1/instance/hostname 46
2020-04-12T02:45:00Z /v1.39/containers/json 4
2020-04-12T02:53:00Z / 577
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=boWMLlP8UCC7qBiCssQ9dW6pCs3VBSq2 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=JHCevEgD5UotU349IFiHd7Oz6M_1x7aj 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=rM4e26RpAtT-4qG093c6x-AHtyqYVYl_ 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=1g30WPrncNqSjS17vFAvH_Y3FgoSYyCO 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=b_NWLr5sqryqqoQXhb1NjtX8Cv0s133S 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=v_NTj5KvxDlqQGVtjWktH6p61_tDaEW4 8
2020-04-12T02:54:00Z /computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip 30
2020-04-12T02:55:00Z /healthcheck/kubedns 5
2020-04-12T02:50:00Z /computeMetadata/v1/instance/network-interfaces/?alt=json&last_etag=c19c98a3b010774d&recursive=True&timeout_sec=87&wait_for_change=True 1
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=VJCVa8tEORtAp_Ia7HyKqprVdK2Kk4FO 3
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=q7wjAXEZHkF99dQh1eYDF0T1PlPNCX-k 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=gHNQntBfCi-eRkgtabgvtOLn8Au32i9T 3
2020-04-12T02:52:00Z /computeMetadata/v1/instance/machine-type 18
2020-04-12T02:54:00Z /catalogue 1370
2020-04-12T02:54:00Z /detail.html?id=03fef6ac-1896-4ce8-bd69-b798f85c6e0b 92
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=d8t_D3_6dZxNsXfCm6WgaDQTR-v6AjbJ 7
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=wnsnkEIoMF01x_MRD_XudPWv1qugnfOX 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=JSHPz3_OLlFHrrEadONeVJ1h9jgwMOnJ 8
2020-04-12T02:51:00Z /metrics 41
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=VJCVa8tEORtAp_Ia7HyKqprVdK2Kk4FO 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=VFJgsDILy7gzG3MactdGgRZseLf-tEFe 3
2020-04-12T02:51:00Z /orders 776
2020-04-12T02:55:00Z /healthz 107
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=math8xgQA8Eau4XfTLcadAoctN66WueB 3
2020-04-12T02:47:00Z /computeMetadata/v1/instance/virtual-clock/drift-token?alt=json&last_etag=c6a7bbfe995acc98&recursive=False&timeout_sec=60&wait_for_change=True 1
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=fyPwHubkmgtY5UNQ6UpbUZAmZKMnRzOm 3
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=r9GY3WExwkXvfK6tn3IHj-A6tzYSErxt 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=BYHbfSN3hQklMnCWFi5-SinWVo9mk3pP 3
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=KEkB2dAH3r9eumDWQQKZ9cmajDUeu3gH 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=7pUKaPS6opvW6Ff6juP8Wcjw8cXiDUxJ 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=iGdp0Oj8r_VpkpyJRA9RC60vP5X3Qaw8 7
2020-04-12T02:55:00Z /v1.38/exec/f5f13c95738359452d46898522df1198ec12a17646d44aa796fe04922f2dd29b/start 1
2020-04-12T02:46:00Z /readiness 6
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=KYXRbX7TwFFFS7Ot_1B4q8MaWD-nQ2Sp 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=qWVNbJ4L66OyVHX0HVfrA-K_-W8ha44y 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=eBFAKBqqxZ8LzukR5fm3VreqvWsnq291 8
2020-04-12T02:51:00Z /computeMetadata/v1/instance/hostname 35
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=YkB6ELeogxrUOYPjdCgPKUkCZMYBnvwe 8
2020-04-12T02:54:00Z /computeMetadata/v1/?alt=json&last_etag=2b84fcb6e37c7462&recursive=True&timeout_sec=60&wait_for_change=True 1
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=v5QvdNFRqWc3PF2lB6xNnJShidperBf8 3
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=qWVNbJ4L66OyVHX0HVfrA-K_-W8ha44y 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=3Elvq_ehubxKBowsrgCDL7I0Of9pXO7m 3
2020-04-12T02:50:00Z /v1.38/exec/2bc2dca03fb29c4c71ded73ea62299b7d208d95b3422d33e1bfb88e105a1d04c/start 1
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=ZdLP0d9tOOyGqB7_n-R_dgDytjOkQlIy 8
2020-04-12T02:54:00Z /computeMetadata/v1/?alt=json&last_etag=b0c8f04cbf346a47&recursive=True&timeout_sec=67&wait_for_change=True 1
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=rM4e26RpAtT-4qG093c6x-AHtyqYVYl_ 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=39V3Cu1JP-RaIZxMbSaN_GEgt7_370ko 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=KHF_u-awW-eJRpdC6foZcwX-5Vsd4jom 3
2020-04-12T02:54:00Z /v1.38/exec/f5250c9a37a1c536b225252f601312b7c2528063098e4c342c2cbd5eac5f2d9e/start 1
2020-04-12T02:54:00Z /catalogue/03fef6ac-1896-4ce8-bd69-b798f85c6e0b 98
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=dRiJl-7okNzSLdlEoU6kihXN0NMEX42A 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=39V3Cu1JP-RaIZxMbSaN_GEgt7_370ko 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=zF_SZlG_Ergu-BDtWB2GFUz8ChVRgoao 8
2020-04-12T02:55:00Z /detail.html?id=510a0d7e-8e83-4193-b483-e27e09ddc34d 26
2020-04-12T02:53:00Z /readiness 6
2020-04-12T02:54:00Z /detail.html?id=a0a4f044-b040-410d-8ead-4de0446aec7e 93
2020-04-12T02:54:00Z /v1.38/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D&limit=0 1
2020-04-12T02:55:00Z /_ping 12
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=cejNd5pxw0tUA4VbcgLmrQ3U0liqJpDB 8
2020-04-12T02:54:00Z /v1.39/containers/json 29
2020-04-12T02:55:00Z /cart 577
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=CJAMglIfaMw3RLtPKJ-QJ8lswfLnAcPO 4
2020-04-12T02:52:00Z /v1.39/containers/json 18
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=1g30WPrncNqSjS17vFAvH_Y3FgoSYyCO 3
2020-04-12T02:54:00Z /detail.html?id=510a0d7e-8e83-4193-b483-e27e09ddc34d 86
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=bh1FcW_vaXECnZrcgBCKq4GTxmJgMcye 9
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=rhmHXZwP_4m7353y8-3WVJwAvMMbiFaQ 3
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=k80hdpAZZE4LqA2i-VSxM9u_vIN5CWQb 7
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=ev5cEfhUlhTRiyTorsmczpaF1o1hQZwa 8
2020-04-12T02:49:00Z /stats/summary/ 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=jxp-ontBnAVosO2uDy3AzrKjo0NW4GHH 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=VFJgsDILy7gzG3MactdGgRZseLf-tEFe 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=0qzsNc1Ts4atafclCEgT0zbm9D0DwDXD 8
2020-04-12T02:54:00Z /v1.38/exec/f1dc78c80c5cf2093301e75ce42a8722a9637f679e893341ff0854d7170ec339/start 1
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=aEcBzH8p2V21tLLeSMgkUA5eEz82zfjA 7
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=1HCGPdbluem7ncnbhzkZN4psDbSkQptP 3
2020-04-12T02:54:00Z /computeMetadata/v1/instance/network-interfaces/0/ip 30
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=KEkB2dAH3r9eumDWQQKZ9cmajDUeu3gH 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=D9VHZ1GQcHpLYoPYPluH3qjq_SNxrFGK 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=v5QvdNFRqWc3PF2lB6xNnJShidperBf8 7
2020-04-12T02:55:00Z /shipping 236
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=JBIRgAFTHIyTuo2fmW0MEw9C7emRwYwc 3
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=khCiXJoQcgscOyAyRSwoQeejm4VfwLXm 2
2020-04-12T02:47:00Z /healthcheck/kubedns 6
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=2j24EuQs5wiEqGUucBxdSs4YZ3KpS42y 3
2020-04-12T02:53:00Z /_ping 21
2020-04-12T02:54:00Z /computeMetadata/v1/?alt=json&last_etag=cbea8ff0a94a9376&recursive=True&timeout_sec=66&wait_for_change=True 1
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=cejNd5pxw0tUA4VbcgLmrQ3U0liqJpDB 8
2020-04-12T02:54:00Z /readyz 6
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=ATF243SNwF0z715e_GUPh2nO7N6D5Owk 3
2020-04-12T02:50:00Z /computeMetadata/v1/instance/machine-type 15
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=JfXftTRtQpGxULureG5i3V0hNCjpH6Mt 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=rhmHXZwP_4m7353y8-3WVJwAvMMbiFaQ 8
2020-04-12T02:52:00Z /computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip 18
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=zF_SZlG_Ergu-BDtWB2GFUz8ChVRgoao 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=iGdp0Oj8r_VpkpyJRA9RC60vP5X3Qaw8 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=hkODPfcZzlqO2srXLhJNKFDqFmySYQue 8
2020-04-12T02:54:00Z /v1.38/exec/e946de845a9614b88bf88f2f4f1f394bd4abe0748d568d669f7af71a1f71dc7c/start 1
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=7pUKaPS6opvW6Ff6juP8Wcjw8cXiDUxJ 3
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=wQ62WuHXGy7IMt6WndAQhjGOwoq5GUzi 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=M8kJr_QOQ98bONDIBlQ730wJq5NU7QQH 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=6ru4LCc6m3ocFp3GTlTT9LeKimu63P79 3
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=AznAsOlKv87EtBeuOZWGlG8_ha579TQV 8
2020-04-12T02:48:00Z /readiness 6
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=TDkZhwyxUQtOeITxZIgGVLvSt_3-VnMA 7
2020-04-12T02:46:00Z /metrics 22
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=D2fIHWZ8mI-c_x7u0EpH4z5BIl9bOI8P 8
2020-04-12T02:53:00Z /cart 1122
2020-04-12T02:55:00Z /detail.html?id=a0a4f044-b040-410d-8ead-4de0446aec7e 24
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=hhP1K5OKeSNJtIUpKljRYyGn7yIbbrMo 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=b_NWLr5sqryqqoQXhb1NjtX8Cv0s133S 8
2020-04-12T02:55:00Z /v1.38/exec/1ec05cd0a85f5040034df490c9357a9bf3a788d6ad8cbebde537193cf33312f3/start 1
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=Q4eR7Cp547FqEZnSgfvRFfjzhv0t7AAr 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=b_NWLr5sqryqqoQXhb1NjtX8Cv0s133S 8
2020-04-12T02:54:00Z /cart 1585
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=fyPwHubkmgtY5UNQ6UpbUZAmZKMnRzOm 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=EdXACDdjldytp7DMjA41KErjzYx0oIXF 7
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=6ZD970s2YlBWm3mXU3nCtdqdBX9SGVt4 3
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=BYHbfSN3hQklMnCWFi5-SinWVo9mk3pP 7
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=q7wjAXEZHkF99dQh1eYDF0T1PlPNCX-k 8
2020-04-12T02:54:00Z /orders 1415
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=BB8gDFPb33ZwD4fh8lP396R0QdjjXoCj 8
2020-04-12T02:51:00Z /paymentAuth 776
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=math8xgQA8Eau4XfTLcadAoctN66WueB 7
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=jxp-ontBnAVosO2uDy3AzrKjo0NW4GHH 3
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=2rE3baY9Isvd9BS6xtIeFuZj2EtgPWLk 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=XhHcfcDKKJUu20RPUXihuA99ym49Znm5 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=ssTjH1K080O8-gsT407-OqnjobYLg7sz 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=rM4e26RpAtT-4qG093c6x-AHtyqYVYl_ 3
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/items 629
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=_kDGEIAmOyitXu12aB9vmzhGn8cwfADt 3
2020-04-12T02:54:00Z /_ping 29
2020-04-12T02:51:00Z /v1.38/exec/6b230d55fc89b5dcc43da028796803c3585e9999a45c052e25c083941ff6ae92/start 1
2020-04-12T02:51:00Z /v1.39/containers/json 18
2020-04-12T02:55:00Z /catalogue/3395a43e-2d88-40de-b95f-e00e1502085b 36
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=-U41wRXZY2v35Zohn11uP89WO-t70QUI 8
2020-04-12T02:53:00Z /detail.html?id=808a2de1-1aaa-4c25-a9b9-6612e8f29a38 50
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=8tsFeCFJ7Jw0-r8rIN6InugDQUE-1nt1 7
2020-04-12T02:51:00Z /computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip 18
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=P_CJpRE4l8rI0RghYPUqzdlDqI_SAcGm 3
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=wQ62WuHXGy7IMt6WndAQhjGOwoq5GUzi 3
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=kVthD7n8GxOYHQaN9WJUPyORi_LCD2SY 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=aEcBzH8p2V21tLLeSMgkUA5eEz82zfjA 3
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=JSHPz3_OLlFHrrEadONeVJ1h9jgwMOnJ 3
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=rhmHXZwP_4m7353y8-3WVJwAvMMbiFaQ 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=wIoUWmu3VmJLuH6ba0N5aL1QGN4UOnF3 3
2020-04-12T02:47:00Z /healthcheck/dnsmasq 6
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=XFomT125sGTs4Cd7c2trsQzXyqFO9425 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=eBFAKBqqxZ8LzukR5fm3VreqvWsnq291 3
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=iG0yPc3E2NZ-8Uo4WLWASLkGnl7ps8W_ 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=q2okhOU-kcEekOhapJoIdWNPZW091tyL 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=math8xgQA8Eau4XfTLcadAoctN66WueB 8
2020-04-12T02:53:00Z /v1.38/exec/58634ddc5f7328ba8f28da10ab41f8d113e60951454fe8737fb69be4b11788aa/start 1
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=EZve75_zED_jSY5HzyHW5SOxNXEp6l9Y 8
2020-04-12T02:48:00Z /computeMetadata/v1/instance/machine-type 6
2020-04-12T02:52:00Z /computeMetadata/v1/?alt=json&last_etag=cbea8ff0a94a9376&recursive=True&timeout_sec=66&wait_for_change=True 1
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=39V3Cu1JP-RaIZxMbSaN_GEgt7_370ko 3
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=wnsnkEIoMF01x_MRD_XudPWv1qugnfOX 8
2020-04-12T02:55:00Z /detail.html?id=808a2de1-1aaa-4c25-a9b9-6612e8f29a38 35
2020-04-12T02:52:00Z /computeMetadata/v1/instance/hostname 36
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=zF_SZlG_Ergu-BDtWB2GFUz8ChVRgoao 3
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=boWMLlP8UCC7qBiCssQ9dW6pCs3VBSq2 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=boWMLlP8UCC7qBiCssQ9dW6pCs3VBSq2 3
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=rhmHXZwP_4m7353y8-3WVJwAvMMbiFaQ 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=4ClcOj84lcDsoDg2CWeF0OnaPcOAhOXC 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=KHF_u-awW-eJRpdC6foZcwX-5Vsd4jom 7
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=2j24EuQs5wiEqGUucBxdSs4YZ3KpS42y 3
2020-04-12T02:55:00Z /detail.html?id=3395a43e-2d88-40de-b95f-e00e1502085b 37
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=lfXpfBx6Al8xHixl-jw-P2zfSK8zbUQx 8
2020-04-12T02:52:00Z /readiness 6
2020-04-12T02:52:00Z /healthcheck/kubedns 6
2020-04-12T02:53:00Z /detail.html?id=510a0d7e-8e83-4193-b483-e27e09ddc34d 58
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=ssTjH1K080O8-gsT407-OqnjobYLg7sz 3
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=q2okhOU-kcEekOhapJoIdWNPZW091tyL 8
2020-04-12T02:54:00Z /customers/57a98d98e4b00679b4a830b2/addresses 740
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=eBFAKBqqxZ8LzukR5fm3VreqvWsnq291 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=7pUKaPS6opvW6Ff6juP8Wcjw8cXiDUxJ 7
2020-04-12T02:55:00Z /computeMetadata/v1/instance/network-interfaces/?alt=json&last_etag=c19c98a3b010774d&recursive=True&timeout_sec=87&wait_for_change=True 1
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=boWMLlP8UCC7qBiCssQ9dW6pCs3VBSq2 3
2020-04-12T02:55:00Z /detail.html?id=819e1fbf-8b7e-4f6d-811f-693534916a8b 23
2020-04-12T02:50:00Z /metrics 34
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=fyPwHubkmgtY5UNQ6UpbUZAmZKMnRzOm 8
2020-04-12T02:51:00Z /computeMetadata/v1/?alt=json&last_etag=22d79ffec8acd196&recursive=True&timeout_sec=84&wait_for_change=True 1
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=d8t_D3_6dZxNsXfCm6WgaDQTR-v6AjbJ 8
2020-04-12T02:53:00Z /healthcheck/kubedns 6
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=gDYBwUYYL5ghLdmtYZdixnvKfdRPHd5t 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=b_NWLr5sqryqqoQXhb1NjtX8Cv0s133S 7
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=9hD3tUEhvs5tRtu18aN5OU7vj2gTLsRm 7
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=r9GY3WExwkXvfK6tn3IHj-A6tzYSErxt 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=N4ND-6Z-3FD46M5PUCoFFWsZL6L-hPIm 8
2020-04-12T02:55:00Z /catalogue/808a2de1-1aaa-4c25-a9b9-6612e8f29a38 37
2020-04-12T02:51:00Z /health 160
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=d8t_D3_6dZxNsXfCm6WgaDQTR-v6AjbJ 8
2020-04-12T02:55:00Z /basket.html 258
2020-04-12T02:55:00Z /catalogue/a0a4f044-b040-410d-8ead-4de0446aec7e 28
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=N4ND-6Z-3FD46M5PUCoFFWsZL6L-hPIm 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=yObJR8n_AR-IlPkBWPXI_pORB5n9-ih3 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=TDkZhwyxUQtOeITxZIgGVLvSt_3-VnMA 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=c07qzjiV3u1i3av9WiMBhEz2zn2wBM_k 3
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=ssTjH1K080O8-gsT407-OqnjobYLg7sz 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=d8t_D3_6dZxNsXfCm6WgaDQTR-v6AjbJ 3
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=v5QvdNFRqWc3PF2lB6xNnJShidperBf8 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=paPxNW3GYkHLgItnu-cegFrWRkUy81rK 3
2020-04-12T02:50:00Z /computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip 14
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=c07qzjiV3u1i3av9WiMBhEz2zn2wBM_k 7
2020-04-12T02:54:00Z /customers/57a98d98e4b00679b4a830b2 1478
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=3Elvq_ehubxKBowsrgCDL7I0Of9pXO7m 3
2020-04-12T02:49:00Z /healthz 74
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=JSHPz3_OLlFHrrEadONeVJ1h9jgwMOnJ 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=KHF_u-awW-eJRpdC6foZcwX-5Vsd4jom 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=e-5mCWOy9VLMbvD0vYvlUeI0de_jBtxL 3
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=dRiJl-7okNzSLdlEoU6kihXN0NMEX42A 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=lfXpfBx6Al8xHixl-jw-P2zfSK8zbUQx 8
2020-04-12T02:55:00Z /readiness 6
2020-04-12T02:53:00Z /v1.38/exec/0ddedb67a630179ba7c5c0bb63dc5ea7e3f5d7c488503b575248425779f14702/start 1
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=Q7HkjK0ZsojyjwWmSQVHky8VE8ujE1Zp 3
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=QplzvaYvX3HRh0X32FKe5jB79oSM7xLz 8
2020-04-12T02:54:00Z /healthcheck/dnsmasq 12
2020-04-12T02:49:00Z /orders 67
2020-04-12T02:54:00Z /customers/57a98d98e4b00679b4a830b2/cards 740
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=z0lR8W6cU4WD5SMxrMWhEjDMbrwN4m09 3
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=D9VHZ1GQcHpLYoPYPluH3qjq_SNxrFGK 3
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=z0lR8W6cU4WD5SMxrMWhEjDMbrwN4m09 8
2020-04-12T02:53:00Z /paymentAuth 800
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=rhmHXZwP_4m7353y8-3WVJwAvMMbiFaQ 3
2020-04-12T02:46:00Z /computeMetadata/v1/instance/machine-type 6
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=bh1FcW_vaXECnZrcgBCKq4GTxmJgMcye 3
2020-04-12T02:51:00Z /v1.38/exec/9775ba466318baf2dcaf98ea1f2b48034b72e0bc50d7f4807da14367d379eb95/start 1
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=jxp-ontBnAVosO2uDy3AzrKjo0NW4GHH 4
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=VGdG2opOWS-N-KFhtrH6JY1jGpkAZwZe 8
2020-04-12T02:53:00Z /detail.html?id=03fef6ac-1896-4ce8-bd69-b798f85c6e0b 51
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=v_NTj5KvxDlqQGVtjWktH6p61_tDaEW4 7
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=smm408A4Pm37cCvpxo4jjqgkLL9zFwDz 3
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=4ClcOj84lcDsoDg2CWeF0OnaPcOAhOXC 8
2020-04-12T02:50:00Z /_ping 15
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=KHF_u-awW-eJRpdC6foZcwX-5Vsd4jom 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=aEcBzH8p2V21tLLeSMgkUA5eEz82zfjA 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=XFomT125sGTs4Cd7c2trsQzXyqFO9425 3
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=z0lR8W6cU4WD5SMxrMWhEjDMbrwN4m09 4
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=_N0cdANb6O2YgSVWOV_-SA55t1-CKo9s 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=e-5mCWOy9VLMbvD0vYvlUeI0de_jBtxL 7
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=bB3HTIFHUKD_i7poT2_-s7WGF_uY2UKt 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=Q7HkjK0ZsojyjwWmSQVHky8VE8ujE1Zp 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=k80hdpAZZE4LqA2i-VSxM9u_vIN5CWQb 3
2020-04-12T02:54:00Z /v1.38/exec/7f8b0e4bf15e8d5b51b79dab45ef912c496127c83d1c58b5455cd1a76ed01518/start 1
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=h5Kh4yCIhgtau_52Ckbotjk7uYekb1ZQ 7
2020-04-12T02:53:00Z /computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip 23
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=1HCGPdbluem7ncnbhzkZN4psDbSkQptP 8
2020-04-12T02:53:00Z /metrics 52
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=q2okhOU-kcEekOhapJoIdWNPZW091tyL 7
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=2j24EuQs5wiEqGUucBxdSs4YZ3KpS42y 8
2020-04-12T02:55:00Z /healthcheck/dnsmasq 4
2020-04-12T02:48:00Z /computeMetadata/v1/instance/hostname 12
2020-04-12T02:50:00Z /metrics/probes 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=KYXRbX7TwFFFS7Ot_1B4q8MaWD-nQ2Sp 3
2020-04-12T02:51:00Z /computeMetadata/v1/instance/network-interfaces/?alt=json&last_etag=4ac989275adc8819&recursive=True&timeout_sec=79&wait_for_change=True 1
2020-04-12T02:45:00Z /healthcheck/kubedns 4
2020-04-12T02:53:00Z /computeMetadata/v1/instance/network-interfaces/0/ip 23
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=aEcBzH8p2V21tLLeSMgkUA5eEz82zfjA 8
2020-04-12T02:54:00Z /catalogue/3395a43e-2d88-40de-b95f-e00e1502085b 86
2020-04-12T02:51:00Z /metrics/probes 12
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=Q4eR7Cp547FqEZnSgfvRFfjzhv0t7AAr 8
2020-04-12T02:46:00Z /healthcheck/kubedns 6
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=1HCGPdbluem7ncnbhzkZN4psDbSkQptP 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=VQPInhoKQSkfixYEU2Zz4ehPodjlJerT 3
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=bB3HTIFHUKD_i7poT2_-s7WGF_uY2UKt 8
2020-04-12T02:53:00Z /healthz 234
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=hkODPfcZzlqO2srXLhJNKFDqFmySYQue 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=xPuMp0VXJaFVh6UNcH6fahiGAFb3TMC2 3
2020-04-12T02:48:00Z /v1.39/containers/json 6
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=_N0cdANb6O2YgSVWOV_-SA55t1-CKo9s 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=smm408A4Pm37cCvpxo4jjqgkLL9zFwDz 8
2020-04-12T02:53:00Z /detail.html?id=a0a4f044-b040-410d-8ead-4de0446aec7e 73
2020-04-12T02:48:00Z /computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip 6
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=7pUKaPS6opvW6Ff6juP8Wcjw8cXiDUxJ 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=hkODPfcZzlqO2srXLhJNKFDqFmySYQue 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=cejNd5pxw0tUA4VbcgLmrQ3U0liqJpDB 3
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=k80hdpAZZE4LqA2i-VSxM9u_vIN5CWQb 3
2020-04-12T02:54:00Z /metrics/probes 18
2020-04-12T02:48:00Z /v1.38/exec/48ece189d95ee81d3185205c0da69e3e3d2f32750921ef9e0de8bc7174c720f1/start 1
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=kz7bCKsNRc3TFaoEY6xHGov5nQsTOR-i 8
2020-04-12T02:54:00Z /stats/summary/ 30
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=k80hdpAZZE4LqA2i-VSxM9u_vIN5CWQb 8
2020-04-12T02:50:00Z /computeMetadata/v1/instance/network-interfaces/?alt=json&last_etag=4ac989275adc8819&recursive=True&timeout_sec=79&wait_for_change=True 1
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=aEcBzH8p2V21tLLeSMgkUA5eEz82zfjA 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=e-5mCWOy9VLMbvD0vYvlUeI0de_jBtxL 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=e-5mCWOy9VLMbvD0vYvlUeI0de_jBtxL 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=6ru4LCc6m3ocFp3GTlTT9LeKimu63P79 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=JBIRgAFTHIyTuo2fmW0MEw9C7emRwYwc 7
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=hhP1K5OKeSNJtIUpKljRYyGn7yIbbrMo 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=XhHcfcDKKJUu20RPUXihuA99ym49Znm5 3
2020-04-12T02:46:00Z /computeMetadata/v1/instance/network-interfaces/?alt=json&last_etag=c19c98a3b010774d&recursive=True&timeout_sec=87&wait_for_change=True 1
2020-04-12T02:55:00Z /paymentAuth 296
2020-04-12T02:54:00Z /detail.html?id=d3588630-ad8e-49df-bbd7-3167f7efb246 78
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=JfXftTRtQpGxULureG5i3V0hNCjpH6Mt 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=cejNd5pxw0tUA4VbcgLmrQ3U0liqJpDB 8
2020-04-12T02:48:00Z /healthcheck/kubedns 6
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=o_XUZx7XCU5LZcbxMGDBlagacE7e5KmR 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=VFJgsDILy7gzG3MactdGgRZseLf-tEFe 4
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=JSHPz3_OLlFHrrEadONeVJ1h9jgwMOnJ 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=iG0yPc3E2NZ-8Uo4WLWASLkGnl7ps8W_ 3
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=r9GY3WExwkXvfK6tn3IHj-A6tzYSErxt 3
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=_kDGEIAmOyitXu12aB9vmzhGn8cwfADt 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=o_XUZx7XCU5LZcbxMGDBlagacE7e5KmR 3
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=khCiXJoQcgscOyAyRSwoQeejm4VfwLXm 8
2020-04-12T02:53:00Z /category.html 537
2020-04-12T02:52:00Z /computeMetadata/v1/instance/network-interfaces/?alt=json&last_etag=c19c98a3b010774d&recursive=True&timeout_sec=87&wait_for_change=True 1
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=dRiJl-7okNzSLdlEoU6kihXN0NMEX42A 3
2020-04-12T02:51:00Z /readiness 6
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=ATF243SNwF0z715e_GUPh2nO7N6D5Owk 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=-U41wRXZY2v35Zohn11uP89WO-t70QUI 7
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=XFomT125sGTs4Cd7c2trsQzXyqFO9425 3
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=gDYBwUYYL5ghLdmtYZdixnvKfdRPHd5t 3
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=QplzvaYvX3HRh0X32FKe5jB79oSM7xLz 8
2020-04-12T02:48:00Z /metrics 22
2020-04-12T02:46:00Z /metrics/probes 4
2020-04-12T02:55:00Z /customers/57a98d98e4b00679b4a830b2 599
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=XwkMT6uLuCxUyeYGW6n5BhmM5AXko-Je 3
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=6ZD970s2YlBWm3mXU3nCtdqdBX9SGVt4 8
2020-04-12T02:49:00Z /computeMetadata/v1/?alt=json&last_etag=22d79ffec8acd196&recursive=True&timeout_sec=84&wait_for_change=True 1
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=hkODPfcZzlqO2srXLhJNKFDqFmySYQue 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=smm408A4Pm37cCvpxo4jjqgkLL9zFwDz 7
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=XwkMT6uLuCxUyeYGW6n5BhmM5AXko-Je 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=M83aIVaNfF8ZgGIB-O97LzsJzUqOnGO- 7
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=TDkZhwyxUQtOeITxZIgGVLvSt_3-VnMA 8
2020-04-12T02:47:00Z /readiness 6
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=KYXRbX7TwFFFS7Ot_1B4q8MaWD-nQ2Sp 3
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=xPuMp0VXJaFVh6UNcH6fahiGAFb3TMC2 8
2020-04-12T02:49:00Z /readiness 6
2020-04-12T02:46:00Z /computeMetadata/v1/instance/hostname 12
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=QplzvaYvX3HRh0X32FKe5jB79oSM7xLz 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=hkODPfcZzlqO2srXLhJNKFDqFmySYQue 3
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=wIoUWmu3VmJLuH6ba0N5aL1QGN4UOnF3 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=_kDGEIAmOyitXu12aB9vmzhGn8cwfADt 3
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=M83aIVaNfF8ZgGIB-O97LzsJzUqOnGO- 3
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=_N0cdANb6O2YgSVWOV_-SA55t1-CKo9s 7
2020-04-12T02:55:00Z /detail.html?id=zzz4f044-b040-410d-8ead-4de0446aec7e 33
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=2rE3baY9Isvd9BS6xtIeFuZj2EtgPWLk 8
2020-04-12T02:49:00Z /computeMetadata/v1/instance/hostname 12
2020-04-12T02:53:00Z /catalogue 468
2020-04-12T02:55:00Z /metrics 46
2020-04-12T02:51:00Z /_ping 18
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=7pUKaPS6opvW6Ff6juP8Wcjw8cXiDUxJ 4
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=JfXftTRtQpGxULureG5i3V0hNCjpH6Mt 3
2020-04-12T02:54:00Z /cards/57a98d98e4b00679b4a830b1 743
2020-04-12T02:46:00Z /healthcheck/dnsmasq 6
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=3_WC_IejM-MiM6a6clv-peeLLDB3qjXl 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=KEkB2dAH3r9eumDWQQKZ9cmajDUeu3gH 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=BB8gDFPb33ZwD4fh8lP396R0QdjjXoCj 4
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=BB8gDFPb33ZwD4fh8lP396R0QdjjXoCj 7
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=CJAMglIfaMw3RLtPKJ-QJ8lswfLnAcPO 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=kVthD7n8GxOYHQaN9WJUPyORi_LCD2SY 7
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=r9GY3WExwkXvfK6tn3IHj-A6tzYSErxt 8
2020-04-12T02:54:00Z /detail.html?id=808a2de1-1aaa-4c25-a9b9-6612e8f29a38 91
2020-04-12T02:49:00Z /metrics/probes 4
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=paPxNW3GYkHLgItnu-cegFrWRkUy81rK 8
2020-04-12T02:46:00Z /computeMetadata/v1/instance/network-interfaces/0/ip 6
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=2j24EuQs5wiEqGUucBxdSs4YZ3KpS42y 8
2020-04-12T02:54:00Z /metrics 65
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=q7wjAXEZHkF99dQh1eYDF0T1PlPNCX-k 3
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=jxp-ontBnAVosO2uDy3AzrKjo0NW4GHH 7
2020-04-12T02:48:00Z /healthz 67
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=rM4e26RpAtT-4qG093c6x-AHtyqYVYl_ 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=BB8gDFPb33ZwD4fh8lP396R0QdjjXoCj 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=ZdLP0d9tOOyGqB7_n-R_dgDytjOkQlIy 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=xPuMp0VXJaFVh6UNcH6fahiGAFb3TMC2 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=o_XUZx7XCU5LZcbxMGDBlagacE7e5KmR 7
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=EZve75_zED_jSY5HzyHW5SOxNXEp6l9Y 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=EdXACDdjldytp7DMjA41KErjzYx0oIXF 8
2020-04-12T02:45:00Z /metrics 6
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=math8xgQA8Eau4XfTLcadAoctN66WueB 3
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=M83aIVaNfF8ZgGIB-O97LzsJzUqOnGO- 3
2020-04-12T02:53:00Z /computeMetadata/v1/instance/network-interfaces/?alt=json&last_etag=e0caef74ae1c706&recursive=True&timeout_sec=89&wait_for_change=True 1
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=8tsFeCFJ7Jw0-r8rIN6InugDQUE-1nt1 3
2020-04-12T02:46:00Z /computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip 6
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=M8kJr_QOQ98bONDIBlQ730wJq5NU7QQH 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=VFJgsDILy7gzG3MactdGgRZseLf-tEFe 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=-U41wRXZY2v35Zohn11uP89WO-t70QUI 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=q2okhOU-kcEekOhapJoIdWNPZW091tyL 8
2020-04-12T02:51:00Z /v1.38/exec/5830c967019be4f031644e5f015142503cd61ca30f6526916b37bde706005f00/start 1
2020-04-12T02:55:00Z /catalogue/510a0d7e-8e83-4193-b483-e27e09ddc34d 27
2020-04-12T02:45:00Z /_ping 4
2020-04-12T02:53:00Z /computeMetadata/v1/instance/network-interfaces/?alt=json&last_etag=4ac989275adc8819&recursive=True&timeout_sec=79&wait_for_change=True 1
2020-04-12T02:54:00Z /shipping 651
2020-04-12T02:54:00Z /health 271
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=fyPwHubkmgtY5UNQ6UpbUZAmZKMnRzOm 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=2rE3baY9Isvd9BS6xtIeFuZj2EtgPWLk 3
2020-04-12T02:50:00Z /health 114
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=8tsFeCFJ7Jw0-r8rIN6InugDQUE-1nt1 3
2020-04-12T02:53:00Z /computeMetadata/v1/instance/network-interfaces/?alt=json&last_etag=c19c98a3b010774d&recursive=True&timeout_sec=87&wait_for_change=True 1
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=3_WC_IejM-MiM6a6clv-peeLLDB3qjXl 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=P_CJpRE4l8rI0RghYPUqzdlDqI_SAcGm 3
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=paPxNW3GYkHLgItnu-cegFrWRkUy81rK 3
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=6ru4LCc6m3ocFp3GTlTT9LeKimu63P79 3
2020-04-12T02:47:00Z /computeMetadata/v1/instance/hostname 12
2020-04-12T02:50:00Z /v1.39/containers/json 15
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=yObJR8n_AR-IlPkBWPXI_pORB5n9-ih3 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=TDkZhwyxUQtOeITxZIgGVLvSt_3-VnMA 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=_kDGEIAmOyitXu12aB9vmzhGn8cwfADt 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=Q4eR7Cp547FqEZnSgfvRFfjzhv0t7AAr 3
2020-04-12T02:50:00Z /stats/summary/ 16
2020-04-12T02:54:00Z /v1.38/exec/8278a61331ad4e2bfe64eb40635a976ccb844e9d0b730cc516c68c550dfa07e0/start 1
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=452LryYsKBED3xt90eVR_3QFavHIAfPE 8
2020-04-12T02:54:00Z /v1.38/exec/ed55aae162c49c2191135719583caeda17d608eaaaa398df637419c6b04ed235/start 1
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=EZve75_zED_jSY5HzyHW5SOxNXEp6l9Y 3
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=CJAMglIfaMw3RLtPKJ-QJ8lswfLnAcPO 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=z0lR8W6cU4WD5SMxrMWhEjDMbrwN4m09 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=h5Kh4yCIhgtau_52Ckbotjk7uYekb1ZQ 8
2020-04-12T02:54:00Z /catalogue/510a0d7e-8e83-4193-b483-e27e09ddc34d 88
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=qWVNbJ4L66OyVHX0HVfrA-K_-W8ha44y 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=2HQXyDVJ-pcuDSrbtbFx4iu6gPZT4A0y 4
2020-04-12T02:51:00Z /computeMetadata/v1/instance/virtual-clock/drift-token?alt=json&last_etag=c6a7bbfe995acc98&recursive=False&timeout_sec=60&wait_for_change=True 3
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=ZdLP0d9tOOyGqB7_n-R_dgDytjOkQlIy 3
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=3Elvq_ehubxKBowsrgCDL7I0Of9pXO7m 7
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=math8xgQA8Eau4XfTLcadAoctN66WueB 8
2020-04-12T02:55:00Z /catalogue/d3588630-ad8e-49df-bbd7-3167f7efb246 27
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=1HCGPdbluem7ncnbhzkZN4psDbSkQptP 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=D2fIHWZ8mI-c_x7u0EpH4z5BIl9bOI8P 7
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=2rE3baY9Isvd9BS6xtIeFuZj2EtgPWLk 3
2020-04-12T02:52:00Z / 1
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=d8t_D3_6dZxNsXfCm6WgaDQTR-v6AjbJ 8
2020-04-12T02:55:00Z /cards/57a98d98e4b00679b4a830b1 296
2020-04-12T02:55:00Z /catalogue 557
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=EZve75_zED_jSY5HzyHW5SOxNXEp6l9Y 3
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=iGdp0Oj8r_VpkpyJRA9RC60vP5X3Qaw8 3
2020-04-12T02:55:00Z /computeMetadata/v1/instance/hostname 22
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=2HQXyDVJ-pcuDSrbtbFx4iu6gPZT4A0y 3
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=q7wjAXEZHkF99dQh1eYDF0T1PlPNCX-k 8
2020-04-12T02:51:00Z / 1
2020-04-12T02:51:00Z /healthz 192
2020-04-12T02:49:00Z /healthcheck/dnsmasq 6
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=rhmHXZwP_4m7353y8-3WVJwAvMMbiFaQ 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=XhHcfcDKKJUu20RPUXihuA99ym49Znm5 8
2020-04-12T02:55:00Z /customers/57a98d98e4b00679b4a830b2/cards 294
2020-04-12T02:48:00Z /stats/summary/ 8
2020-04-12T02:53:00Z /computeMetadata/v1/instance/virtual-clock/drift-token?alt=json&last_etag=c6a7bbfe995acc98&recursive=False&timeout_sec=60&wait_for_change=True 3
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=XFomT125sGTs4Cd7c2trsQzXyqFO9425 8
2020-04-12T02:55:00Z /detail.html?id=d3588630-ad8e-49df-bbd7-3167f7efb246 26
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=39V3Cu1JP-RaIZxMbSaN_GEgt7_370ko 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=VFJgsDILy7gzG3MactdGgRZseLf-tEFe 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=qWVNbJ4L66OyVHX0HVfrA-K_-W8ha44y 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=EdXACDdjldytp7DMjA41KErjzYx0oIXF 3
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=ZdLP0d9tOOyGqB7_n-R_dgDytjOkQlIy 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=b_NWLr5sqryqqoQXhb1NjtX8Cv0s133S 3
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=BYHbfSN3hQklMnCWFi5-SinWVo9mk3pP 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=1g30WPrncNqSjS17vFAvH_Y3FgoSYyCO 3
2020-04-12T02:55:00Z /v1.38/exec/c349f216659fa934013cc8bc5840fa52681cebd1b18ae927e43dd5529bd16b0f/start 1
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=YkB6ELeogxrUOYPjdCgPKUkCZMYBnvwe 4
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=M8kJr_QOQ98bONDIBlQ730wJq5NU7QQH 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=AznAsOlKv87EtBeuOZWGlG8_ha579TQV 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=hhP1K5OKeSNJtIUpKljRYyGn7yIbbrMo 7
2020-04-12T02:53:00Z /computeMetadata/v1/?alt=json&last_etag=b0c8f04cbf346a47&recursive=True&timeout_sec=67&wait_for_change=True 1
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=2j24EuQs5wiEqGUucBxdSs4YZ3KpS42y 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=ssTjH1K080O8-gsT407-OqnjobYLg7sz 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=v5QvdNFRqWc3PF2lB6xNnJShidperBf8 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=wQ62WuHXGy7IMt6WndAQhjGOwoq5GUzi 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=KYXRbX7TwFFFS7Ot_1B4q8MaWD-nQ2Sp 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=VGdG2opOWS-N-KFhtrH6JY1jGpkAZwZe 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=JBIRgAFTHIyTuo2fmW0MEw9C7emRwYwc 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=YkB6ELeogxrUOYPjdCgPKUkCZMYBnvwe 7
2020-04-12T02:54:00Z /computeMetadata/v1/instance/machine-type 30
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=M8kJr_QOQ98bONDIBlQ730wJq5NU7QQH 8
2020-04-12T02:52:00Z /_ping 18
2020-04-12T02:54:00Z /catalogue/a0a4f044-b040-410d-8ead-4de0446aec7e 93
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=math8xgQA8Eau4XfTLcadAoctN66WueB 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=JSHPz3_OLlFHrrEadONeVJ1h9jgwMOnJ 3
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=yObJR8n_AR-IlPkBWPXI_pORB5n9-ih3 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=h5Kh4yCIhgtau_52Ckbotjk7uYekb1ZQ 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=JSHPz3_OLlFHrrEadONeVJ1h9jgwMOnJ 7
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=ev5cEfhUlhTRiyTorsmczpaF1o1hQZwa 8
2020-04-12T02:54:00Z /readiness 11
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=3_WC_IejM-MiM6a6clv-peeLLDB3qjXl 8
2020-04-12T02:47:00Z /healthz 67
2020-04-12T02:54:00Z /v1.38/exec/fc841efd4e7db4756f369d2d2e1d99e821ade9bc7f3d33bf2501c9152f27e294/start 1
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=b_NWLr5sqryqqoQXhb1NjtX8Cv0s133S 3
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=3lTR6LCAFweP9b9Q9wFHA50D8sm6tBBJ 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=VGdG2opOWS-N-KFhtrH6JY1jGpkAZwZe 7
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=z0lR8W6cU4WD5SMxrMWhEjDMbrwN4m09 7
2020-04-12T02:55:00Z /v1.38/exec/12b86165092f9058b0695d476009199d7ed2a3cd1a090765ff57882df1c27583/start 1
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=-U41wRXZY2v35Zohn11uP89WO-t70QUI 3
2020-04-12T02:55:00Z /v1.39/containers/json 12
2020-04-12T02:53:00Z /orders 1265
2020-04-12T02:54:00Z /v1.38/exec/0eba73ced8a87e4ca3d79faa99297578fa372eda1a6dfb332b13e6980601ce26/start 1
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=JfXftTRtQpGxULureG5i3V0hNCjpH6Mt 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=AznAsOlKv87EtBeuOZWGlG8_ha579TQV 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=EdXACDdjldytp7DMjA41KErjzYx0oIXF 7
2020-04-12T02:50:00Z /healthcheck/kubedns 6
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=0qzsNc1Ts4atafclCEgT0zbm9D0DwDXD 3
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=ZdLP0d9tOOyGqB7_n-R_dgDytjOkQlIy 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=rM4e26RpAtT-4qG093c6x-AHtyqYVYl_ 8
2020-04-12T02:54:00Z /computeMetadata/v1/?alt=json&last_etag=250b2307abbbe496&recursive=True&timeout_sec=61&wait_for_change=True 1
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=JBIRgAFTHIyTuo2fmW0MEw9C7emRwYwc 3
2020-04-12T02:51:00Z /stats/summary/ 22
2020-04-12T02:54:00Z /computeMetadata/v1/instance/network-interfaces/?alt=json&last_etag=4ac989275adc8819&recursive=True&timeout_sec=79&wait_for_change=True 1
2020-04-12T02:54:00Z /v1.38/exec/4a60f5bd06f501eb2dfc52423d9f00d24c45359a755695a33ef35bb78daa5a96/start 1
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=4ClcOj84lcDsoDg2CWeF0OnaPcOAhOXC 7
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=c07qzjiV3u1i3av9WiMBhEz2zn2wBM_k 3
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=VQPInhoKQSkfixYEU2Zz4ehPodjlJerT 7
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=M83aIVaNfF8ZgGIB-O97LzsJzUqOnGO- 8
2020-04-12T02:52:00Z /computeMetadata/v1/instance/virtual-clock/drift-token?alt=json&last_etag=c6a7bbfe995acc98&recursive=False&timeout_sec=60&wait_for_change=True 3
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=hhP1K5OKeSNJtIUpKljRYyGn7yIbbrMo 3
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=452LryYsKBED3xt90eVR_3QFavHIAfPE 8
2020-04-12T02:53:00Z /detail.html?id=zzz4f044-b040-410d-8ead-4de0446aec7e 62
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=yObJR8n_AR-IlPkBWPXI_pORB5n9-ih3 3
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=VGdG2opOWS-N-KFhtrH6JY1jGpkAZwZe 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=kVthD7n8GxOYHQaN9WJUPyORi_LCD2SY 8
2020-04-12T02:50:00Z /computeMetadata/v1/instance/virtual-clock/drift-token?alt=json&last_etag=c6a7bbfe995acc98&recursive=False&timeout_sec=60&wait_for_change=True 3
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=gDYBwUYYL5ghLdmtYZdixnvKfdRPHd5t 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=q2okhOU-kcEekOhapJoIdWNPZW091tyL 3
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=dRiJl-7okNzSLdlEoU6kihXN0NMEX42A 3
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=kz7bCKsNRc3TFaoEY6xHGov5nQsTOR-i 4
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=BB8gDFPb33ZwD4fh8lP396R0QdjjXoCj 3
2020-04-12T02:54:00Z /category.html 765
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=ssTjH1K080O8-gsT407-OqnjobYLg7sz 8
2020-04-12T02:54:00Z /v1.38/exec/a7addb6b3dd6de9850bfa3aa20ee68d15c7147d98e9995fb930ea8f7dd211f43/start 1
2020-04-12T02:51:00Z /healthcheck/dnsmasq 6
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=k80hdpAZZE4LqA2i-VSxM9u_vIN5CWQb 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=6ZD970s2YlBWm3mXU3nCtdqdBX9SGVt4 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=VGdG2opOWS-N-KFhtrH6JY1jGpkAZwZe 3
2020-04-12T02:48:00Z /computeMetadata/v1/instance/network-interfaces/0/ip 6
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=2rE3baY9Isvd9BS6xtIeFuZj2EtgPWLk 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=Q7HkjK0ZsojyjwWmSQVHky8VE8ujE1Zp 7
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=paPxNW3GYkHLgItnu-cegFrWRkUy81rK 8
2020-04-12T02:51:00Z /computeMetadata/v1/instance/machine-type 17
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=6ru4LCc6m3ocFp3GTlTT9LeKimu63P79 8
2020-04-12T02:47:00Z /stats/summary/ 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=c07qzjiV3u1i3av9WiMBhEz2zn2wBM_k 8
2020-04-12T02:45:00Z /computeMetadata/v1/?alt=json&last_etag=22d79ffec8acd196&recursive=True&timeout_sec=84&wait_for_change=True 1
2020-04-12T02:46:00Z /computeMetadata/v1/?alt=json&last_etag=22d79ffec8acd196&recursive=True&timeout_sec=84&wait_for_change=True 1
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=6ZD970s2YlBWm3mXU3nCtdqdBX9SGVt4 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/items 1544
2020-04-12T02:45:00Z /healthz 43
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=XwkMT6uLuCxUyeYGW6n5BhmM5AXko-Je 7
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=1HCGPdbluem7ncnbhzkZN4psDbSkQptP 3
2020-04-12T02:53:00Z /detail.html?id=3395a43e-2d88-40de-b95f-e00e1502085b 55
2020-04-12T02:54:00Z /v1.38/exec/cc0cbd70b3101060ee8061a1882f4068482da1915eafa637b55822f712533cb7/start 1
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=ev5cEfhUlhTRiyTorsmczpaF1o1hQZwa 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=wIoUWmu3VmJLuH6ba0N5aL1QGN4UOnF3 3
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=ATF243SNwF0z715e_GUPh2nO7N6D5Owk 3
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=kz7bCKsNRc3TFaoEY6xHGov5nQsTOR-i 7
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=D9VHZ1GQcHpLYoPYPluH3qjq_SNxrFGK 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=XhHcfcDKKJUu20RPUXihuA99ym49Znm5 3
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=e-5mCWOy9VLMbvD0vYvlUeI0de_jBtxL 3
2020-04-12T02:55:00Z /v1.38/exec/fe3de91a1775513819df55e32239fdd76f911dee9c5970b5a30ba0dd05ae2968/start 1
2020-04-12T02:47:00Z /metrics/probes 4
2020-04-12T02:50:00Z /v1.38/exec/4234fd89c889640ad2ffca53e933174faf7bbb125461be982b2e8486a493501d/start 1
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=3_WC_IejM-MiM6a6clv-peeLLDB3qjXl 3
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=3lTR6LCAFweP9b9Q9wFHA50D8sm6tBBJ 7
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=_N0cdANb6O2YgSVWOV_-SA55t1-CKo9s 8
2020-04-12T02:55:00Z /computeMetadata/v1/instance/machine-type 11
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=xPuMp0VXJaFVh6UNcH6fahiGAFb3TMC2 3
2020-04-12T02:47:00Z /computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip 6
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=0qzsNc1Ts4atafclCEgT0zbm9D0DwDXD 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=6ru4LCc6m3ocFp3GTlTT9LeKimu63P79 8
2020-04-12T02:55:00Z /computeMetadata/v1/?alt=json&last_etag=250b2307abbbe496&recursive=True&timeout_sec=61&wait_for_change=True 1
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=wnsnkEIoMF01x_MRD_XudPWv1qugnfOX 3
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=iGdp0Oj8r_VpkpyJRA9RC60vP5X3Qaw8 3
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=h5Kh4yCIhgtau_52Ckbotjk7uYekb1ZQ 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=JHCevEgD5UotU349IFiHd7Oz6M_1x7aj 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/items 1563
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=v5QvdNFRqWc3PF2lB6xNnJShidperBf8 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=JfXftTRtQpGxULureG5i3V0hNCjpH6Mt 3
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=lfXpfBx6Al8xHixl-jw-P2zfSK8zbUQx 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=paPxNW3GYkHLgItnu-cegFrWRkUy81rK 7
2020-04-12T02:55:00Z /computeMetadata/v1/instance/network-interfaces/?alt=json&last_etag=5d64eb8e89ccf4b7&recursive=True&timeout_sec=85&wait_for_change=True 1
2020-04-12T02:52:00Z /computeMetadata/v1/?alt=json&last_etag=b0c8f04cbf346a47&recursive=True&timeout_sec=67&wait_for_change=True 1
2020-04-12T02:52:00Z /computeMetadata/v1/instance/network-interfaces/0/ip 18
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=smm408A4Pm37cCvpxo4jjqgkLL9zFwDz 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=JHCevEgD5UotU349IFiHd7Oz6M_1x7aj 8
2020-04-12T02:51:00Z /computeMetadata/v1/instance/network-interfaces/0/ip 18
2020-04-12T02:45:00Z /healthcheck/dnsmasq 4
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=iGdp0Oj8r_VpkpyJRA9RC60vP5X3Qaw8 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=q2okhOU-kcEekOhapJoIdWNPZW091tyL 3
2020-04-12T02:45:00Z /v1.38/exec/9fa89467c1563ee6231717848e46964413cb08001c80be67665c83b025565dd1/start 1
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=rM4e26RpAtT-4qG093c6x-AHtyqYVYl_ 3
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=9hD3tUEhvs5tRtu18aN5OU7vj2gTLsRm 4
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=ATF243SNwF0z715e_GUPh2nO7N6D5Owk 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=CJAMglIfaMw3RLtPKJ-QJ8lswfLnAcPO 3
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=XhHcfcDKKJUu20RPUXihuA99ym49Znm5 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=r9GY3WExwkXvfK6tn3IHj-A6tzYSErxt 7
2020-04-12T02:54:00Z /paymentAuth 778
2020-04-12T02:52:00Z /orders 784
2020-04-12T02:49:00Z /computeMetadata/v1/instance/network-interfaces/?alt=json&last_etag=c19c98a3b010774d&recursive=True&timeout_sec=87&wait_for_change=True 1
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=8tsFeCFJ7Jw0-r8rIN6InugDQUE-1nt1 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=jxp-ontBnAVosO2uDy3AzrKjo0NW4GHH 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=CJAMglIfaMw3RLtPKJ-QJ8lswfLnAcPO 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=VJCVa8tEORtAp_Ia7HyKqprVdK2Kk4FO 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=3lTR6LCAFweP9b9Q9wFHA50D8sm6tBBJ 3
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=bh1FcW_vaXECnZrcgBCKq4GTxmJgMcye 3
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=4ClcOj84lcDsoDg2CWeF0OnaPcOAhOXC 3
2020-04-12T02:55:00Z /detail.html?id=837ab141-399e-4c1f-9abc-bace40296bac 35
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=1g30WPrncNqSjS17vFAvH_Y3FgoSYyCO 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=gDYBwUYYL5ghLdmtYZdixnvKfdRPHd5t 3
2020-04-12T02:48:00Z /computeMetadata/v1/instance/virtual-clock/drift-token?alt=json&last_etag=c6a7bbfe995acc98&recursive=False&timeout_sec=60&wait_for_change=True 1
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=3Elvq_ehubxKBowsrgCDL7I0Of9pXO7m 9
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/items 1575
2020-04-12T02:52:00Z /v1.38/exec/627cb2baf3ef6a4e125c155a876a90664b51345d8b4512ac9d55388cafe7665f/start 1
2020-04-12T02:50:00Z /healthz 154
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=ZdLP0d9tOOyGqB7_n-R_dgDytjOkQlIy 3
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=kz7bCKsNRc3TFaoEY6xHGov5nQsTOR-i 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=hhP1K5OKeSNJtIUpKljRYyGn7yIbbrMo 8
2020-04-12T02:48:00Z /computeMetadata/v1/?alt=json&last_etag=22d79ffec8acd196&recursive=True&timeout_sec=84&wait_for_change=True 1
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=452LryYsKBED3xt90eVR_3QFavHIAfPE 4
2020-04-12T02:48:00Z /_ping 6
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=uB_8q0SzFzQpW_9MTpxtgmtzMe0dZ2y2 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=8tsFeCFJ7Jw0-r8rIN6InugDQUE-1nt1 8
2020-04-12T02:47:00Z /v1.39/containers/json 6
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=KEkB2dAH3r9eumDWQQKZ9cmajDUeu3gH 8
2020-04-12T02:48:00Z /healthcheck/dnsmasq 6
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=v_NTj5KvxDlqQGVtjWktH6p61_tDaEW4 3
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=EZve75_zED_jSY5HzyHW5SOxNXEp6l9Y 8
2020-04-12T02:55:00Z /computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip 11
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=AznAsOlKv87EtBeuOZWGlG8_ha579TQV 7
2020-04-12T02:55:00Z /health 105
2020-04-12T02:52:00Z /metrics 42
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=6ZD970s2YlBWm3mXU3nCtdqdBX9SGVt4 7
2020-04-12T02:54:00Z /detail.html?id=3395a43e-2d88-40de-b95f-e00e1502085b 88
2020-04-12T02:55:00Z /login 540
2020-04-12T02:54:00Z /catalogue/d3588630-ad8e-49df-bbd7-3167f7efb246 76
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=xPuMp0VXJaFVh6UNcH6fahiGAFb3TMC2 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=D9VHZ1GQcHpLYoPYPluH3qjq_SNxrFGK 8
2020-04-12T02:55:00Z /catalogue/03fef6ac-1896-4ce8-bd69-b798f85c6e0b 40
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=P_CJpRE4l8rI0RghYPUqzdlDqI_SAcGm 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=e-5mCWOy9VLMbvD0vYvlUeI0de_jBtxL 8
2020-04-12T02:45:00Z /stats/summary/ 1
2020-04-12T02:49:00Z /computeMetadata/v1/instance/machine-type 6
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=N4ND-6Z-3FD46M5PUCoFFWsZL6L-hPIm 3
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=JHCevEgD5UotU349IFiHd7Oz6M_1x7aj 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=dRiJl-7okNzSLdlEoU6kihXN0NMEX42A 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=gHNQntBfCi-eRkgtabgvtOLn8Au32i9T 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=bh1FcW_vaXECnZrcgBCKq4GTxmJgMcye 7
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=c07qzjiV3u1i3av9WiMBhEz2zn2wBM_k 8
2020-04-12T02:51:00Z /computeMetadata/v1/instance/network-interfaces/?alt=json&last_etag=e0caef74ae1c706&recursive=True&timeout_sec=89&wait_for_change=True 1
2020-04-12T02:55:00Z /catalogue/837ab141-399e-4c1f-9abc-bace40296bac 36
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=QplzvaYvX3HRh0X32FKe5jB79oSM7xLz 3
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=_kDGEIAmOyitXu12aB9vmzhGn8cwfADt 8
2020-04-12T02:55:00Z /catalogue/zzz4f044-b040-410d-8ead-4de0446aec7e 37
2020-04-12T02:49:00Z /v1.38/exec/97918de6ab28da3771f85167ece67a513f6e7f4d22d5758befda247f147e8bf4/start 1
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=D9VHZ1GQcHpLYoPYPluH3qjq_SNxrFGK 3
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=qWVNbJ4L66OyVHX0HVfrA-K_-W8ha44y 3
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=XhHcfcDKKJUu20RPUXihuA99ym49Znm5 8
2020-04-12T02:55:00Z /stats/summary/ 20
2020-04-12T02:54:00Z /computeMetadata/v1/instance/hostname 60
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=bB3HTIFHUKD_i7poT2_-s7WGF_uY2UKt 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=JHCevEgD5UotU349IFiHd7Oz6M_1x7aj 3
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=lfXpfBx6Al8xHixl-jw-P2zfSK8zbUQx 3
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=2rE3baY9Isvd9BS6xtIeFuZj2EtgPWLk 7
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=kVthD7n8GxOYHQaN9WJUPyORi_LCD2SY 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=cejNd5pxw0tUA4VbcgLmrQ3U0liqJpDB 3
2020-04-12T02:49:00Z /_ping 6
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=JBIRgAFTHIyTuo2fmW0MEw9C7emRwYwc 8
2020-04-12T02:49:00Z /computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip 6
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=1g30WPrncNqSjS17vFAvH_Y3FgoSYyCO 8
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=VQPInhoKQSkfixYEU2Zz4ehPodjlJerT 3
2020-04-12T02:54:00Z /v1.38/exec/cc1bf102772ae10bbb1f2223cb2d1bd6e5558e636b0fa7bfbed6c5550197937a/start 1
2020-04-12T02:53:00Z /detail.html?id=d3588630-ad8e-49df-bbd7-3167f7efb246 71
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=Q4eR7Cp547FqEZnSgfvRFfjzhv0t7AAr 8
2020-04-12T02:53:00Z /v1.39/containers/json 21
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=kz7bCKsNRc3TFaoEY6xHGov5nQsTOR-i 3
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=ssTjH1K080O8-gsT407-OqnjobYLg7sz 3
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=4ClcOj84lcDsoDg2CWeF0OnaPcOAhOXC 3
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=M83aIVaNfF8ZgGIB-O97LzsJzUqOnGO- 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=wnsnkEIoMF01x_MRD_XudPWv1qugnfOX 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=JHCevEgD5UotU349IFiHd7Oz6M_1x7aj 3
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=iG0yPc3E2NZ-8Uo4WLWASLkGnl7ps8W_ 3
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=VJCVa8tEORtAp_Ia7HyKqprVdK2Kk4FO 8
2020-04-12T02:55:00Z /computeMetadata/v1/?alt=json&last_etag=2b84fcb6e37c7462&recursive=True&timeout_sec=60&wait_for_change=True 1
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=AznAsOlKv87EtBeuOZWGlG8_ha579TQV 3
2020-04-12T02:53:00Z /computeMetadata/v1/instance/machine-type 23
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=smm408A4Pm37cCvpxo4jjqgkLL9zFwDz 3
2020-04-12T02:53:00Z /metrics/probes 16
2020-04-12T02:52:00Z /healthz 190
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/items 1598
2020-04-12T02:49:00Z /v1.39/containers/json 6
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=v5QvdNFRqWc3PF2lB6xNnJShidperBf8 3
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=1g30WPrncNqSjS17vFAvH_Y3FgoSYyCO 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=BYHbfSN3hQklMnCWFi5-SinWVo9mk3pP 8
2020-04-12T02:53:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=EZve75_zED_jSY5HzyHW5SOxNXEp6l9Y 8
2020-04-12T02:54:00Z /v1.38/exec/834b566c3a1c73b51a9f27ecda057dcaae73c6bfda76d4892c450c117ce17858/start 1
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=BYHbfSN3hQklMnCWFi5-SinWVo9mk3pP 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=QplzvaYvX3HRh0X32FKe5jB79oSM7xLz 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=39V3Cu1JP-RaIZxMbSaN_GEgt7_370ko 8
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=iG0yPc3E2NZ-8Uo4WLWASLkGnl7ps8W_ 8
2020-04-12T02:54:00Z /computeMetadata/v1/instance/virtual-clock/drift-token?alt=json&last_etag=c6a7bbfe995acc98&recursive=False&timeout_sec=60&wait_for_change=True 5
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=iGdp0Oj8r_VpkpyJRA9RC60vP5X3Qaw8 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=Q7HkjK0ZsojyjwWmSQVHky8VE8ujE1Zp 3
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=wIoUWmu3VmJLuH6ba0N5aL1QGN4UOnF3 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=KEkB2dAH3r9eumDWQQKZ9cmajDUeu3gH 3
2020-04-12T02:46:00Z /stats/summary/ 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=eBFAKBqqxZ8LzukR5fm3VreqvWsnq291 8
2020-04-12T02:54:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=1HCGPdbluem7ncnbhzkZN4psDbSkQptP 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=0qzsNc1Ts4atafclCEgT0zbm9D0DwDXD 8
2020-04-12T02:51:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=wQ62WuHXGy7IMt6WndAQhjGOwoq5GUzi 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=M8kJr_QOQ98bONDIBlQ730wJq5NU7QQH 3
2020-04-12T02:52:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=452LryYsKBED3xt90eVR_3QFavHIAfPE 8
2020-04-12T02:50:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=D2fIHWZ8mI-c_x7u0EpH4z5BIl9bOI8P 4
2020-04-12T02:55:00Z /carts/57a98d98e4b00679b4a830b2/merge?sessionId=VGdG2opOWS-N-KFhtrH6JY1jGpkAZwZe 3
|
architectures/Python-Keras-Training/HorovodTF/00_CreateImageAndTest.ipynb | ###Markdown
Create Docker Image for TensorFlow In this notebook we will create the Docker image that our TensorFlow script runs in. We will go through the process of building the image and testing it locally to make sure it runs before submitting it to the cluster. Debugging locally first is recommended, since debugging on a cluster is much more difficult and time consuming. **You will need to be running everything on a GPU-enabled VM to run this notebook.**
###Code
import sys
sys.path.append("../common")
from dotenv import get_key
import os
from utils import dotenv_for
import docker
###Output
_____no_output_____
###Markdown
We will use fake data here since we don't want to have to download the dataset. Using fake data is often a good way to debug your model as well as to check what the I/O overhead is. Here we set the number of processes (NUM_PROCESSES) to 2 since the VM we are testing on has 2 GPUs; if you are running on a machine with 1 GPU, set NUM_PROCESSES to 1.
###Code
dotenv_path = dotenv_for()
USE_FAKE = True
DOCKERHUB = os.getenv('DOCKER_REPOSITORY', "masalvar")
NUM_PROCESSES = 2
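# Optional sketch (not part of the original notebook): derive NUM_PROCESSES from the number of
# visible GPUs instead of hard-coding it. Assumes nvidia-smi is available on the VM.
# import subprocess
# NUM_PROCESSES = len(subprocess.check_output(['nvidia-smi', '-L']).decode().strip().splitlines()) or 1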
DOCKER_PWD = get_key(dotenv_path, 'DOCKER_PWD')
dc = docker.from_env()
image, log_iter = dc.images.build(path='Docker',
tag='{}/caia-horovod-tensorflow'.format(DOCKERHUB))
container_labels = {'containerName': 'tensorflowgpu'}
environment ={
"DISTRIBUTED":True,
"PYTHONPATH":'/workspace/common/',
}
volumes = {
os.getenv('EXT_PWD'): {
'bind': '/workspace',
'mode': 'rw'
}
}
if USE_FAKE:
environment['FAKE'] = True
else:
environment['FAKE'] = False
volumes[os.getenv('EXT_DATA')]={'bind': '/mnt/input', 'mode': 'rw'}
environment['AZ_BATCHAI_INPUT_TRAIN'] = '/mnt/input/train'
environment['AZ_BATCHAI_INPUT_TEST'] = '/mnt/input/validation'
cmd=f'mpirun -np {NUM_PROCESSES} -H localhost:{NUM_PROCESSES} '\
'python -u /workspace/HorovodTF/src/imagenet_estimator_tf_horovod.py'
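# mpirun flags: -np sets the total number of MPI processes, -H lists the host slots (all on localhost here)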
container = dc.containers.run(image.tags[0],
command=cmd,
detach=True,
labels=container_labels,
runtime='nvidia',
volumes=volumes,
environment=environment,
shm_size='8G',
privileged=True)
###Output
_____no_output_____
###Markdown
With the code below we are simply monitoring what is happening in the container. Feel free to stop the notebook when you are happy that everything is working.
###Code
for line in container.logs(stderr=True, stream=True):
print(line.decode("utf-8"),end ="")
container.reload() # Refresh state
if container.status == 'running':  # use ==, not 'is', for string comparison
container.kill()
for line in dc.images.push(image.tags[0],
stream=True,
auth_config={"username": DOCKERHUB,
"password": DOCKER_PWD}):
print(line)
###Output
_____no_output_____ |
12_gradient_boosting_machines/11_intraday_model.ipynb | ###Markdown
Generating intra-day trading signals Imports & Settings
###Code
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
import sys, os
from time import time
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
import lightgbm as lgb
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
import seaborn as sns
sys.path.insert(1, os.path.join(sys.path[0], '..'))
from utils import format_time
sns.set_style('whitegrid')
idx = pd.IndexSlice
###Output
_____no_output_____
###Markdown
Load Model Data
###Code
data = pd.read_hdf('hf_data.h5', 'model_data')
dates = data.index.get_level_values('date_time').date
for i in range(1, 11):
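    # lag each return feature by one minute within each (ticker, day) group so that only
    # information available at prediction time enters the feature set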
data[f'ret{i}min'] = data.groupby(['ticker', dates])[f'ret{i}min'].shift()
data = data.dropna(subset=['fwd1min'])
data.info(null_counts=True)
data.to_hdf('hf_data.h5', 'model_data')
###Output
_____no_output_____
###Markdown
Model Training
###Code
class MultipleTimeSeriesCV:
"""Generates tuples of train_idx, test_idx pairs
Assumes the MultiIndex contains levels 'symbol' and 'date'
purges overlapping outcomes"""
def __init__(self,
n_splits=3,
train_period_length=126,
test_period_length=21,
lookahead=None,
date_idx='date',
shuffle=False):
self.n_splits = n_splits
self.lookahead = lookahead
self.test_length = test_period_length
self.train_length = train_period_length
self.shuffle = shuffle
self.date_idx = date_idx
def split(self, X, y=None, groups=None):
unique_dates = X.index.get_level_values(self.date_idx).unique()
days = sorted(unique_dates, reverse=True)
split_idx = []
for i in range(self.n_splits):
test_end_idx = i * self.test_length
test_start_idx = test_end_idx + self.test_length
train_end_idx = test_start_idx + self.lookahead - 1
train_start_idx = train_end_idx + self.train_length + self.lookahead - 1
split_idx.append([train_start_idx, train_end_idx,
test_start_idx, test_end_idx])
dates = X.reset_index()[[self.date_idx]]
for train_start, train_end, test_start, test_end in split_idx:
train_idx = dates[(dates[self.date_idx] > days[train_start])
& (dates[self.date_idx] <= days[train_end])].index
test_idx = dates[(dates[self.date_idx] > days[test_start])
& (dates[self.date_idx] <= days[test_end])].index
if self.shuffle:
np.random.shuffle(list(train_idx))
yield train_idx.to_numpy(), test_idx.to_numpy()
def get_n_splits(self, X, y, groups=None):
return self.n_splits
def get_fi(model):
fi = model.feature_importance(importance_type='gain')
return (pd.Series(fi / fi.sum(),
index=model.feature_name()))
###Output
_____no_output_____
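###Markdown
To sanity-check the splitter, the cell below runs it on a tiny synthetic panel. This is an illustrative sketch only: the tickers, the daily (rather than minute) timestamps, and the window lengths are made up and are not part of the original pipeline.
###Code
sample_dates = pd.date_range('2021-01-01', periods=8, freq='D')
sample = pd.DataFrame({'x': range(16)},
                      index=pd.MultiIndex.from_product([['AAA', 'BBB'], sample_dates],
                                                       names=['ticker', 'date_time']))
demo_cv = MultipleTimeSeriesCV(n_splits=1,
                               train_period_length=3,
                               test_period_length=2,
                               lookahead=1,
                               date_idx='date_time')
for train_idx, test_idx in demo_cv.split(sample):
    print('train dates:', sample.iloc[train_idx].index.unique('date_time').date)
    print('test dates: ', sample.iloc[test_idx].index.unique('date_time').date)
###Output
_____no_output_____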
###Markdown
Categorical Variables
###Code
data['stock_id'] = pd.factorize(data.index.get_level_values('ticker'), sort=True)[0]
categoricals = ['minute', 'stock_id']
###Output
_____no_output_____
###Markdown
Custom Metric
###Code
def ic_lgbm(preds, train_data):
"""Custom IC eval metric for lightgbm"""
is_higher_better = True
return 'ic', spearmanr(preds, train_data.get_label())[0], is_higher_better
label = sorted(data.filter(like='fwd').columns)
features = data.columns.difference(label).tolist()
label = label[0]
params = dict(boosting='gbdt',
objective='regression',
metric='None',
verbose=-1)
num_boost_round = 250
DAY = 390 # minutes; 6.5 hrs (9:30 - 15:59)
MONTH = 21 # trading days
n_splits = 24
cv = MultipleTimeSeriesCV(n_splits=n_splits,
lookahead=1,
test_period_length=MONTH * DAY,
train_period_length=12 * MONTH * DAY,
date_idx='date_time')
store='hf_model.h5'
for i, (train_idx, test_idx) in enumerate(cv.split(X=data)):
train_dates = data.iloc[train_idx].index.unique('date_time')
test_dates = data.iloc[test_idx].index.unique('date_time')
print(train_dates.min(), train_dates.max(), test_dates.min(), test_dates.max())
lgb_data = lgb.Dataset(data=data.drop(label, axis=1),
label=data[label],
categorical_feature=categoricals,
free_raw_data=False)
cv_preds = []  # collect out-of-sample predictions across CV folds
for i, (train_idx, test_idx) in enumerate(cv.split(X=data)):
start = time()
if i < 5: continue
t = time() - start
lgb_train = lgb_data.subset(train_idx.tolist()).construct()
lgb_test = lgb_data.subset(test_idx.tolist()).construct()
model = lgb.train(params=params,
train_set=lgb_train,
valid_sets=[lgb_train, lgb_test],
feval=ic_lgbm,
num_boost_round=num_boost_round,
early_stopping_rounds=50,
verbose_eval=50)
if i == 0:
fi = get_fi(model).to_frame()
else:
fi[i] = get_fi(model)
print(fi[i].nlargest(5))
test_set = data.iloc[test_idx, :]
X_test = test_set.loc[:, model.feature_name()]
y_test = test_set.loc[:, label]
y_pred = model.predict(X_test)
cv_preds.append(y_test
.to_frame('y_test')
.assign(y_pred=y_pred).assign(i=i))
cv_preds = pd.concat(cv_preds)
# note: the daily IC is computed (and saved) in the Signal Evaluation section below
fi.to_hdf(store, 'fi')
cv_preds.to_hdf(store, 'predictions')
###Output
_____no_output_____
###Markdown
Signal Evaluation
###Code
cv_preds.info(null_counts=True)
###Output
<class 'pandas.core.frame.DataFrame'>
MultiIndex: 3227082 entries, ('AAL', Timestamp('2017-11-30 09:30:00')) to ('XRAY', Timestamp('2017-09-29 15:19:00'))
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 y_test 3227082 non-null float64
1 y_pred 3227082 non-null float64
2 i 3227082 non-null int64
dtypes: float64(2), int64(1)
memory usage: 83.3+ MB
###Markdown
Information Coefficient
###Code
by_day = cv_preds.groupby(cv_preds.index.get_level_values('date_time').date)
ic_by_day = by_day.apply(lambda x: spearmanr(x.y_test, x.y_pred)[0])
daily_ic_mean = ic_by_day.mean()
daily_ic_median = ic_by_day.median()
ic = spearmanr(cv_preds.y_test, cv_preds.y_pred)[0]
print(f'\n{ic:6.2%} | {daily_ic_mean: 6.2%} | {daily_ic_median: 6.2%}')
ic_by_day.to_hdf(store, 'daily_ic')
###Output
1.90% | 1.98% | 1.91%
###Markdown
Compute signal quantiles
###Code
dates = cv_preds.index.get_level_values('date_time').date
cv_preds['decile'] = (cv_preds.groupby(dates, group_keys=False)
.apply(lambda x: pd.qcut(x.y_pred, q=10, labels=list(range(1, 11)))))
cv_preds['quintile'] = (cv_preds.groupby(dates, group_keys=False)
.apply(lambda x: pd.qcut(x.y_pred, q=5, labels=list(range(1, 6)))))
###Output
_____no_output_____
###Markdown
Return Statistics by Quantile
###Code
ret_stats_5 = cv_preds.groupby('quintile').y_test.describe()
ret_stats_5
ret_stats_10 = cv_preds.groupby('decile').y_test.describe()
ret_stats_10
###Output
_____no_output_____
###Markdown
Plot Performance by Decile
###Code
cv_preds= pd.read_hdf('hf_model.h5', 'predictions')
dates = cv_preds.index.get_level_values('date_time')
min_ret_by_decile = cv_preds.groupby(['date_time', 'decile']).y_test.mean()
cumulative_ret_by_decile = (min_ret_by_decile
.unstack('decile')
.add(1)
.cumprod()
.sub(1))
fig, axes = plt.subplots(figsize=(14, 4), ncols=2)
sns.barplot(y='y_test',
x='decile',
data=cv_preds.assign(y_test=cv_preds.y_test.mul(10000)),
ax=axes[0])
axes[0].set_title('Avg. 1-min Return by Signal Decile')
axes[0].set_ylabel('Return (bps)')
axes[0].set_xlabel('Decile')
(min_ret_by_decile
.unstack('decile')
.add(1)
.cumprod()
.sub(1)
.resample('D')
.last()
.dropna()
.sort_index()
.plot(ax=axes[1], title='Cumulative Return by Decile'))
axes[1].yaxis.set_major_formatter(
FuncFormatter(lambda y, _: '{:.0%}'.format(y)))
axes[1].set_xlabel('')
axes[1].set_ylabel('Return')
fig.tight_layout()
fig.savefig('figures/hft_deciles', dpi=300)
###Output
_____no_output_____ |
01_recentFeed.ipynb | ###Markdown
recentFeed> Parse the SEC's recent filings feed.
###Code
#hide
%load_ext autoreload
%autoreload 2
from nbdev import show_doc
#export
import re
import xml.etree.cElementTree as cElTree
from secscan import utils
###Output
_____no_output_____
###Markdown
Download and parse the SEC's recent filings feed:
###Code
#export
def secMostRecentListUrl(count=100) :
"Returns the URL for the SEC's atom-format feed of most recent filings."
return ('/cgi-bin/browse-edgar?'
+('' if count is None else f'count={count}&')
+'action=getcurrent&output=atom')
def printXmlParseWarning(msg,el) :
print('***',msg,'***')
print(cElTree.tostring(el))
print('************************')
titlePat = re.compile(
r"\s*(.+?)\s+-" # formType, ignoring surrounding whitespace
+ r"\s+(.+?)\s*" # cikName, ignoring surrounding whitespace
+ r"\((\d{10})\)") # cik
filedPat = re.compile(
r"filed\D+?\s(\d\d\d\d[-/]?\d\d[-/]?\d\d)\s.*"
+ r"accno\D+?\s("+utils.accessNoPatStr+r")\s",
re.IGNORECASE)
def getRecentChunk(count=100) :
"""
Parses the SEC's atom-format feed of most recent filings and returns a list of tuples:
[(fileDate, cikName, accNo, formType, cik),
... ]
with the most recent filings first
"""
mrListXml = utils.downloadSecUrl(secMostRecentListUrl(count=count), toFormat='xml')
res = []
for listEntry in mrListXml :
if not listEntry.tag.lower().endswith("entry") :
continue
cik = formType = accNo = fDate = cikName = None
for entryItem in listEntry :
itemTag = entryItem.tag.lower()
if itemTag.endswith('title') :
# print('"'+entryItem.text.strip()+'"')
m = titlePat.match(entryItem.text)
if m is None :
printXmlParseWarning('unable to parse title element',listEntry)
continue
formType,cikName,cik = m.groups()
cik = cik.lstrip('0')
# print(repr(formType),repr(cikName),repr(cik))
elif itemTag.endswith('summary') :
# print('"'+entryItem.text.strip()+'"')
m = filedPat.search(entryItem.text)
if m is None :
printXmlParseWarning('unable to parse summary element',listEntry)
continue
fDate,accNo = m.groups()
# print(repr(fDate),repr(accNo))
fTup = (fDate, cikName, accNo, formType, cik)
if all(fTup) :
res.append(fTup)
return res
###Output
_____no_output_____
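###Markdown
The `title` element of each feed entry has the form `FORM-TYPE - COMPANY NAME (0001234567)`. The cell below checks `titlePat` against a made-up title string (illustrative only, not real SEC data):
###Code
m = titlePat.match('10-K - EXAMPLE CORP (0001234567)')
print(m.groups() if m else 'no match')
###Output
_____no_output_____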
###Markdown
Test downloading and parsing the SEC's recent filings feed:
###Code
l = getRecentChunk()
utils.printSamp(l,5)
assert len(l)==100, 'parsing recent SEC filings feed'
###Output
0 ('2021-06-17', 'Rafael Holdings, Inc.', '9999999995-21-002404', 'EFFECT', '1713863')
1 ('2021-06-17', 'INPIXON', '9999999995-21-002403', 'EFFECT', '1529113')
2 ('2021-06-17', 'SUPERCONDUCTOR TECHNOLOGIES INC', '9999999995-21-002389', 'EFFECT', '895665')
3 ('2021-06-17', 'FT 9403', '9999999995-21-002388', 'EFFECT', '1851134')
4 ('2021-06-17', 'UNITED INSURANCE HOLDINGS CORP.', '9999999995-21-002402', 'EFFECT', '1401521')
###Markdown
Accumulating the recent filings feed to an S3 bucket:
###Code
#export
def curEasternTimeStampAndDate() :
nowET = utils.curEasternUSTime()
ts = nowET.isoformat().replace('T',' ')
return nowET, ts[:19], ts[:10]
def initRecentFeedS3(bucket, prevDay=None) :
_, curTS, today = curEasternTimeStampAndDate()
utils.pickSaveToS3(bucket, 'today-feed.pkl',
{'updated':curTS, 'filings':set(), 'curDay':today, 'prevDay':None},
use_gzip=True, make_public=True, protocol=2)
def updateRecentFeedS3(bucket, skipOffHours=True) :
nowET, curTS, today = curEasternTimeStampAndDate()
print('updating at', curTS, end='; ')
if skipOffHours and (utils.isWeekend(nowET)
#or nowET.hour<6 or nowET.hour>22
#or (nowET.hour==22 and nowET.minute>10)
) :
print('SEC off hours, skipping update')
return
l = getRecentChunk()
curFeed = utils.pickLoadFromS3(bucket, 'today-feed.pkl', use_gzip=True)
print('last update', curFeed['updated'])
if today != curFeed['curDay'] :
print('starting new day; last day found was',curFeed['curDay'])
utils.pickSaveToS3(bucket, curFeed['curDay']+'-feed.pkl', curFeed,
use_gzip=True, make_public=True, protocol=2)
prevFilings, prevDay = curFeed['filings'], curFeed['curDay']
curFeed = {'filings':set(), 'curDay':today, 'prevDay':prevDay}
elif curFeed['prevDay'] is not None :
print('continuing current day; most recent previous day was',curFeed['prevDay'])
prevFeed = utils.pickLoadFromS3(bucket, curFeed['prevDay']+'-feed.pkl', use_gzip=True)
prevFilings, prevDay = prevFeed['filings'], prevFeed['curDay']
else :
print('continuing current day; no previous day found')
prevFilings, prevDay = set(), None
prevDayCount = newFTodayCount = newFOtherDayCount = 0
for tup in l :
if tup in curFeed['filings'] :
continue
if tup in prevFilings :
prevDayCount += 1
continue
curFeed['filings'].add(tup)
fDate = tup[0]
if fDate == today :
newFTodayCount += 1
else :
newFOtherDayCount += 1
if fDate < today :
print('*** old filing date',tup)
else :
print('*** unexpected future filing date',tup)
print(len(l),'filings,',
prevDayCount,'from prev day,',newFTodayCount,'new fToday,',newFOtherDayCount,'new fOther,',
'total now',len(curFeed['filings']))
curFeed['updated'] = curTS
utils.pickSaveToS3(bucket, 'today-feed.pkl', curFeed,
use_gzip=True, make_public=True, protocol=2)
print('--- update complete at',curEasternTimeStampAndDate()[1])
def getRecentFromS3(bucket, key='today') :
return utils.pickLoadFromS3(bucket, key+'-feed.pkl', use_gzip=True)
def getRecentFromS3Public(bucket, key='today') :
return utils.pickLoadFromS3Public(bucket, key+'-feed.pkl', use_gzip=True)
#hide
# initRecentFeedS3('bucket_name')
# updateRecentFeedS3('bucket_name')
# r = getRecentFromS3Public('bucket_name')
# print(len(r['filings']))
# utils.printSamp(sorted(r['filings']))
#hide
# uncomment and run to regenerate all library Python files
# from nbdev.export import notebook2script; notebook2script()
###Output
_____no_output_____ |
notebooks/data_quality/dq1_cmte_ref_data.ipynb | ###Markdown
Committee Reference – Data Quality Overview Committees in the FEC data set have a unique ID assigned to them. However, since we are combining Committee records from multiple election cycle source files, we really should join to `cmte` using both `cmte_id` and `elect_cycle`. The purpose of this notebook is to explore the "quality" of the Committee data, both within and across election cycles, to see how consistent it is, and how we can create a unified Committee Master entity to improve referential integrity for the larger, complete data set.Here is a list of the examinations in this notebook:* Quality of Committee Names (`cmte_nm`)* Integrity of Committee ID (`cmte_id`)* Consistency of names for `cmte_id`'s across election cycles* Multiple `cmte_id`'s for identical names – within election cycles* Multiple `cmte_id`'s for identical names – across election cyclesHere are additional examinations to do:* Multiple `cmte_id`'s for *similar* names – within election cycles* Multiple `cmte_id`'s for *similar* names – across election cycles Notebook Setup Configure database connect info/options Note: database connect string can be specified on the initial `%sql` command:```pythondatabase_url = "postgresql+psycopg2://user@localhost/fecdb"%sql $database_url```Or, connect string is taken from DATABASE_URL environment variable (if not specified for `%sql`):```python%sql```
###Code
%load_ext sql
%config SqlMagic.autopandas=True
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# connect string taken from DATABASE_URL environment variable
%sql
###Output
_____no_output_____
###Markdown
Configure Python modules
###Code
import pandas as pd
pd.set_option("display.max_rows", 200)
###Output
_____no_output_____
###Markdown
Set styling
###Code
%%html
<style>
tr, th, td {
text-align: left !important;
}
</style>
###Output
_____no_output_____
###Markdown
Examination of Committee Data Set High-level summary First count total records and distinct `cmte_id`'s (and save out results for reference)
###Code
%%sql result <<
select count(*) as count_total,
count(distinct cmte_id) as count_distinct_ids,
count(distinct cmte_nm) as count_distinct_names
from cmte
cmte_count_total = int(result.loc[0][0])
cmte_distinct_ids = int(result.loc[0][1])
cmte_distinct_names = int(result.loc[0][2])
"cmte_count_total = %d, cmte_distinct_ids = %d, cmte_distinct_names = %d" % \
(cmte_count_total, cmte_distinct_ids, cmte_distinct_names)
###Output
_____no_output_____
###Markdown
Quality of Committee Names (`cmte_nm`) Let's try and get a sense of the extent of formatting problems (inconsistencies or flaws). First look for names that have lowercase letters (uppercase is now the standard)...
###Code
%%sql
select elect_cycle,
count(*)
from cmte
where cmte_nm ~ '[a-z]'
group by 1
order by 1
%%sql
select cmte_nm,
array_agg(distinct elect_cycle)
from cmte
where cmte_nm ~ '[a-z]'
group by 1
order by 1
###Output
* postgresql+psycopg2://crash@caladan/fecdb
15 rows affected.
###Markdown
Next, look for names with consecutive whitespace...
###Code
%%sql
select elect_cycle,
count(*)
from cmte
where cmte_nm ~ '\s{2,}'
group by 1
order by 1
%%sql
select cmte_nm,
array_agg(distinct elect_cycle)
from cmte
where cmte_nm ~ '\s{2,}'
group by 1
order by 1
limit 50
###Output
* postgresql+psycopg2://crash@caladan/fecdb
50 rows affected.
###Markdown
And now, names with errant spaces within parentheses...
###Code
%%sql
select elect_cycle,
count(*)
from cmte
where cmte_nm ~ '\( | \)'
group by 1
order by 1
%%sql
select cmte_nm,
array_agg(distinct elect_cycle)
from cmte
where cmte_nm ~ '\( | \)'
group by 1
order by 1
limit 50
###Output
* postgresql+psycopg2://crash@caladan/fecdb
11 rows affected.
###Markdown
And inconsistent spacing around commas (either space before, or no space after)
###Code
%%sql
select elect_cycle,
count(*)
from cmte
where cmte_nm ~ ' ,|,[^ ]'
group by 1
order by 1
%%sql
select cmte_nm,
array_agg(distinct elect_cycle)
from cmte
where cmte_nm ~ ' ,|,[^ ]'
group by 1
order by 1
limit 50
###Output
* postgresql+psycopg2://crash@caladan/fecdb
22 rows affected.
###Markdown
Integrity of Committee ID (`cmte_id`) Count records across election cycles and see if we have any null `cmte_id`'s (all zeros would be good)
###Code
%%sql
select elect_cycle,
count(*) as records,
count(*) - count(cmte_id) as null_cmte_ids
from cmte
group by 1
order by 1
###Output
* postgresql+psycopg2://crash@caladan/fecdb
11 rows affected.
###Markdown
Let's see if there are any duplicate `cmte_id`'s in any election cycles (should also be zero)
###Code
%%sql
with dup_cmte_id as (
select elect_cycle,
cmte_id,
count(*) as id_count
from cmte
group by 1, 2
having count(*) > 1
)
select elect_cycle,
count(*) as dupes,
sum(id_count) as total_dupe_ids,
max(id_count) as max_dupe_ids
from dup_cmte_id
group by 1
###Output
* postgresql+psycopg2://crash@caladan/fecdb
0 rows affected.
###Markdown
Now let's look at repeated `cmte_id`'s across election cycles (note that specifying `distinct` within `array_agg` is a tricky way of sorting the values, for consistency, if we care to group by that field)
###Code
%%sql
with cmte_id_sum as (
select cmte_id,
count(*) as ec_count,
array_agg(distinct elect_cycle) as elect_cycles
from cmte
group by 1
)
select ec_count,
count(*) as cmte_ids,
round(count(*)::numeric / :cmte_distinct_ids * 100.0, 2) as pct_cmte_ids
from cmte_id_sum
group by 1
order by 1 desc
###Output
* postgresql+psycopg2://crash@caladan/fecdb
11 rows affected.
###Markdown
Consistency of names for `cmte_id`'s across election cycles Count the number of different names for Committee records (identified by `cmte_id`) appearing in multiple election cycles
###Code
%%sql
with cmte_diff_names as (
select cmte_id,
count(distinct cmte_nm) as num_diff_names,
array_agg(distinct cmte_nm) as diff_names
from cmte
group by 1
having count(distinct cmte_nm) > 1
)
select count(*) as cmte_ids,
round(count(*)::numeric / :cmte_distinct_ids * 100.0, 2) as pct_cmte_ids
from cmte_diff_names
###Output
* postgresql+psycopg2://crash@caladan/fecdb
1 rows affected.
###Markdown
Let's report by the different levels of variation on name (i.e. number of different representations)
###Code
%%sql
with cmte_diff_names as (
select cmte_id,
count(distinct cmte_nm) as num_diff_names,
array_agg(distinct cmte_nm) as diff_names
from cmte
group by 1
having count(distinct cmte_nm) > 1
)
select num_diff_names,
count(*) as cmte_ids,
round(count(*)::numeric / :cmte_distinct_ids * 100.0, 2) as pct_cmte_ids
from cmte_diff_names
group by 1
order by 1 desc
###Output
* postgresql+psycopg2://crash@caladan/fecdb
8 rows affected.
###Markdown
Get an idea of what the different names associated with the same `cmte_id` look like—let's start with a sampling of Committee IDs with `num_diff_names` = 2 (compare adjacent `cmte_nm`'s)...
###Code
%%sql
with cmte_diff_names as (
select cmte_id,
count(distinct cmte_nm) as num_diff_names,
array_agg(distinct cmte_nm) as diff_names
from cmte
group by 1
having count(distinct cmte_nm) = 2
)
select cmte_id,
unnest(diff_names) as cmte_nm
from cmte_diff_names
order by cmte_id
limit 50
###Output
* postgresql+psycopg2://crash@caladan/fecdb
50 rows affected.
###Markdown
Now, let's look at `num_diff_names` = 3...
###Code
%%sql
with cmte_diff_names as (
select cmte_id,
count(distinct cmte_nm) as num_diff_names,
array_agg(distinct cmte_nm) as diff_names
from cmte
group by 1
having count(distinct cmte_nm) = 3
)
select cmte_id,
unnest(diff_names) as cmte_nm
from cmte_diff_names
order by cmte_id
limit 51
###Output
* postgresql+psycopg2://crash@caladan/fecdb
51 rows affected.
###Markdown
And `num_diff_names` = 4...
###Code
%%sql
with cmte_diff_names as (
select cmte_id,
count(distinct cmte_nm) as num_diff_names,
array_agg(distinct cmte_nm) as diff_names
from cmte
group by 1
having count(distinct cmte_nm) = 4
)
select cmte_id,
unnest(diff_names) as cmte_nm
from cmte_diff_names
order by cmte_id
limit 52
###Output
* postgresql+psycopg2://crash@caladan/fecdb
52 rows affected.
###Markdown
And for the fun of it, let's look at the most extreme examples (`num_diff_names` > 7)
###Code
%%sql
with cmte_diff_names as (
select cmte_id,
count(distinct cmte_nm) as num_diff_names,
array_agg(distinct cmte_nm) as diff_names
from cmte
group by 1
having count(distinct cmte_nm) > 7
)
select cmte_id,
unnest(diff_names) as cmte_nm
from cmte_diff_names
order by cmte_id
limit 100
###Output
* postgresql+psycopg2://crash@caladan/fecdb
26 rows affected.
###Markdown
Multiple `cmte_id`'s for identical names – within election cycles Let's first see how many names are involved, and what percentage of total distinct names that represents
###Code
%%sql
with shared_cmte_name as (
select elect_cycle,
cmte_nm,
count(*) as num_shares
from cmte
where cmte_nm is not null
group by 1, 2
having count(*) > 1
)
select count(*) as shared_names,
round(count(*)::numeric / :cmte_distinct_names * 100.0, 2) as pct_distinct_names
from shared_cmte_name
###Output
* postgresql+psycopg2://crash@caladan/fecdb
1 rows affected.
###Markdown
Now we'll report by the level of replication (name sharing by different Committees, as identified by `cmte_id`) we have in various cycles; note that the same name may "offend" (map to multiple `cmte_id`'s) within different election cycles (either in the same, or differing, "num_shares")
###Code
%%sql
with shared_cmte_name as (
select elect_cycle,
cmte_nm,
count(*) as num_shares
from cmte
where cmte_nm is not null
group by 1, 2
having count(*) > 1
)
select num_shares,
count(*) as shared_names,
array_agg(distinct elect_cycle) as elect_cycles
from shared_cmte_name
group by 1
order by 1 desc
###Output
* postgresql+psycopg2://crash@caladan/fecdb
4 rows affected.
###Markdown
Let's take a look at some of the top offenders (see "elect_cycles" for multiple offenses of the same "num_shares" by a name)
###Code
%%sql
with shared_cmte_name as (
select elect_cycle,
cmte_nm,
count(*) as num_shares,
array_agg(distinct cmte_id) as cmte_ids
from cmte
where cmte_nm is not null
group by 1, 2
having count(*) > 2
)
select cmte_nm,
array_length(cmte_ids, 1) num_cmte_ids,
cmte_ids,
array_agg(elect_cycle) as elect_cycles
from shared_cmte_name
group by 1, 3
order by array_length(cmte_ids, 1) desc, count(*) desc, cmte_nm
###Output
* postgresql+psycopg2://crash@caladan/fecdb
61 rows affected.
###Markdown
And we'll look at the top recurrences for cases where the number of shares is exactly two in an election cycle
###Code
%%sql
with shared_cmte_name as (
select elect_cycle,
cmte_nm,
count(*) as num_shares,
array_agg(distinct cmte_id) as cmte_ids
from cmte
where cmte_nm is not null
group by 1, 2
having count(*) = 2
)
select cmte_nm,
cmte_ids,
count(*) as num_elect_cycles,
array_agg(elect_cycle) as elect_cycles
from shared_cmte_name
group by 1, 2
order by 3 desc, 1, 2
limit 50
###Output
* postgresql+psycopg2://crash@caladan/fecdb
50 rows affected.
###Markdown
Multiple `cmte_id`'s for identical names – across election cycles As above, we'll see how many names are mapped to different `cmte_id`'s, except now *across* election cycles
###Code
%%sql
with shared_cmte_name as (
select cm.cmte_nm,
count(distinct cm2.cmte_id) as num_cmte_ids,
array_agg(distinct cm2.elect_cycle) as elect_cycles
from cmte cm
join cmte cm2 on cm2.cmte_nm = cm.cmte_nm
and cm2.cmte_id != cm.cmte_id
and cm2.elect_cycle != cm.elect_cycle
group by 1
)
select count(*) as shared_names,
round(count(*)::numeric / :cmte_distinct_names * 100.0, 2) as pct_distinct_names
from shared_cmte_name
###Output
* postgresql+psycopg2://crash@caladan/fecdb
1 rows affected.
###Markdown
And now report by the level of replication (name sharing by different Committees) we have in across cycles
###Code
%%sql
with shared_cmte_name as (
select cm.cmte_nm,
count(distinct cm2.cmte_id) as num_cmte_ids,
array_agg(distinct cm2.elect_cycle) as elect_cycles
from cmte cm
join cmte cm2 on cm2.cmte_nm = cm.cmte_nm
and cm2.cmte_id != cm.cmte_id
and cm2.elect_cycle != cm.elect_cycle
group by 1
)
select num_cmte_ids,
count(*) as num_shared_names
from shared_cmte_name
group by 1
order by 1 desc
###Output
* postgresql+psycopg2://crash@caladan/fecdb
9 rows affected.
###Markdown
And we'll take a look at the top offenders for this replication
###Code
%%sql
with shared_cmte_name as (
select cm.cmte_nm,
count(distinct cm2.cmte_id) as num_cmte_ids,
array_agg(distinct cm2.cmte_id) as cmte_ids,
count(*) as count_records
from cmte cm
join cmte cm2 on cm2.cmte_nm = cm.cmte_nm
and cm2.cmte_id != cm.cmte_id
and cm2.elect_cycle != cm.elect_cycle
group by 1
)
select cmte_nm,
num_cmte_ids,
cmte_ids
from shared_cmte_name
order by 2 desc, 1
limit 50
###Output
* postgresql+psycopg2://crash@caladan/fecdb
50 rows affected.
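###Markdown
Multiple `cmte_id`'s for *similar* names (listed as a follow-up examination in the overview) could be explored with a trigram self-join. The sketch below assumes the `pg_trgm` extension is installed (it is not used elsewhere in this notebook); the similarity threshold and row limit are illustrative.
###Code
%%sql
select cm.cmte_nm,
       cm2.cmte_nm as similar_nm,
       round(similarity(cm.cmte_nm, cm2.cmte_nm)::numeric, 3) as sim
  from (select distinct cmte_nm from cmte) cm
  join (select distinct cmte_nm from cmte) cm2
    on cm.cmte_nm < cm2.cmte_nm
   and similarity(cm.cmte_nm, cm2.cmte_nm) > 0.8
 order by sim desc
 limit 50
###Output
_____no_output_____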
|
examples/jupyter_notebooks/evm-console-log.ipynb | ###Markdown
DEBUG: Tips and techniques EVM's console logs The cell below copies the console logs from the EVM to the user's workspace.
###Code
import TIevm
#Path to a folder in user's workspace
log_dir='/home/root/evm_logs'
#Run twice to get additional info from buffers
for i in range(2):
TIevm.copy_TI_log(log_dir)
#log name is hard coded to evm_console.log
file = open('/home/root/evm_logs/evm_console.log')
log_content = file.read()
print(log_content)
###Output
_____no_output_____ |
blog/code/2019-04-09_heatmaps/heatmap_example.ipynb | ###Markdown
Read in subset of mock dataset (data was RPKM normalized before subsetting)
###Code
path = './'
data = pd.read_csv(
path + 'heatmap_example.tsv',
sep = '\t',
index_col = 0)
info = pd.read_csv(
path + 'heatmap_example_metadata.txt',
sep = '\t',
header = None)
#Create a samples color dictionary for plots
colors = {'WT':'yellow',
'EXP':'purple'}
#Specify column order for dataframe
col_order = ['WT1',
'WT2',
'WT3',
'WT4',
'WT5',
'WT6',
'EXP1',
'EXP2',
'EXP3',
'EXP4',
'EXP5',
'EXP6']
data = data.reindex(col_order, axis=1)
#Scale gene rows
data_scaled = data.copy()
data_scaled = data_scaled.dropna()
data_scaled[data_scaled.columns] = preprocessing.scale(data_scaled[data_scaled.columns], axis=1)
#Print some info
print('Dataframe size before scaling: ' + str(data.shape))
print('Dataframe size after scaling: ' + str(data_scaled.shape))
###Output
Dataframe size before scaling: (10, 12)
Dataframe size after scaling: (10, 12)
###Markdown
Without scaling
###Code
#Plot heatmap
xp.heatmap(
data,
info,
sample_palette = colors,
figsize = (2, 5),
xticklabels = True,
yticklabels = True,
row_cluster = False,
col_cluster = False,
font_scale = .7,
cbar_kws = {'label': 'no scale'})
#Save and show figure
plt.savefig(
path + 'heatmap_unscaled_example.png',
dpi = 300,
bbox_inches = 'tight')
plt.show()
###Output
_____no_output_____
###Markdown
With scaling
###Code
#Plot heatmap
xp.heatmap(
data_scaled,
info,
sample_palette = colors,
figsize = (2, 5),
xticklabels = True,
yticklabels = True,
row_cluster = False,
col_cluster = False,
font_scale = .7,
cbar_kws = {'label': 'Z-score'})
#Save and show figure
plt.savefig(
path + 'heatmap_scaled_example.png',
dpi = 300,
bbox_inches = 'tight')
plt.show()
###Output
_____no_output_____ |
Production/Prepare-Stage2.ipynb | ###Markdown
PID = np.zeros(train_df.shape[0],dtype=object)
StudyI = np.zeros(train_df.shape[0],dtype=object)
SeriesI = np.zeros(train_df.shape[0],dtype=object)
WindowCenter = np.zeros(train_df.shape[0],dtype=object)
WindowWidth = np.zeros(train_df.shape[0],dtype=object)
ImagePositionX = np.zeros(train_df.shape[0],dtype=np.float)
ImagePositionY = np.zeros(train_df.shape[0],dtype=np.float)
ImagePositionZ = np.zeros(train_df.shape[0],dtype=np.float)
for i,row in tqdm_notebook(train_df.iterrows(),total=train_df.shape[0]):
    ds = pydicom.dcmread(train_images_dir + 'ID_{}.dcm'.format(row['PatientID']))
    SeriesI[i]=ds.SeriesInstanceUID
    PID[i]=ds.PatientID
    StudyI[i]=ds.StudyInstanceUID
    WindowCenter[i]=ds.WindowCenter
    WindowWidth[i]=ds.WindowWidth
    ImagePositionX[i]=float(ds.ImagePositionPatient[0])
    ImagePositionY[i]=float(ds.ImagePositionPatient[1])
    ImagePositionZ[i]=float(ds.ImagePositionPatient[2])
train_df['SeriesI']=SeriesI
train_df['PID']=PID
train_df['StudyI']=StudyI
train_df['WindowCenter']=WindowCenter
train_df['WindowWidth']=WindowWidth
train_df['ImagePositionZ']=ImagePositionZ
train_df['ImagePositionX']=ImagePositionX
train_df['ImagePositionY']=ImagePositionY
###Code
train_df.to_csv(data_dir+'train_stage2.csv',index=False)
old_base_test=pd.read_csv(data_dir+'stage_1_sample_submission.csv')
old_test_ids=set(old_base_test.ID.values)
old_test_res=train_base_df[train_base_df.ID.isin(old_test_ids)]
old_test_res.shape
old_test_res.to_csv(data_dir+'true_submission_stage1.csv',index=False)
sample_submission=pd.read_csv(data_dir+'stage_2_sample_submission.csv')
sample_submission.head()
sample_submission.shape
test_base_df=sample_submission.copy()
test_base_df['Sub_type'] = test_base_df['ID'].str.split("_", n = 3, expand = True)[2]
test_base_df['PatientID'] = test_base_df['ID'].str.split("_", n = 3, expand = True)[1]
test_base_df.head()
assert test_base_df.shape[0] == test_base_df.PatientID.unique().shape[0]*6
sub_types=test_base_df.Sub_type.unique()
sub_types
dfs =[]
for sub_type in tqdm_notebook(sub_types):
df = test_base_df[test_base_df.Sub_type==sub_type][['PatientID','Label']].copy()
df=df.rename(columns={"Label": sub_type}).reset_index(drop=True)
dfs.append(df)
test_df=dfs[0]
for df in tqdm_notebook(dfs[1:]):
test_df=test_df.merge(df,on='PatientID')
test_ids=test_df.PatientID.unique()
test_df.head()
PID = np.zeros(test_df.shape[0],dtype=object)
StudyI = np.zeros(test_df.shape[0],dtype=object)
SeriesI = np.zeros(test_df.shape[0],dtype=object)
WindowCenter = np.zeros(test_df.shape[0],dtype=object)
WindowWidth = np.zeros(test_df.shape[0],dtype=object)
ImagePositionX = np.zeros(test_df.shape[0],dtype=np.float)
ImagePositionY = np.zeros(test_df.shape[0],dtype=np.float)
ImagePositionZ = np.zeros(test_df.shape[0],dtype=np.float)
for i,row in tqdm_notebook(test_df.iterrows(),total=test_df.shape[0]):
ds = pydicom.dcmread(test_images_dir + 'ID_{}.dcm'.format(row['PatientID']))
SeriesI[i]=ds.SeriesInstanceUID
PID[i]=ds.PatientID
StudyI[i]=ds.StudyInstanceUID
WindowCenter[i]=ds.WindowCenter
WindowWidth[i]=ds.WindowWidth
ImagePositionX[i]=float(ds.ImagePositionPatient[0])
ImagePositionY[i]=float(ds.ImagePositionPatient[1])
ImagePositionZ[i]=float(ds.ImagePositionPatient[2])
test_df['SeriesI']=SeriesI
test_df['PID']=PID
test_df['StudyI']=StudyI
test_df['WindowCenter']=WindowCenter
test_df['WindowWidth']=WindowWidth
test_df['ImagePositionZ']=ImagePositionZ
test_df['ImagePositionX']=ImagePositionX
test_df['ImagePositionY']=ImagePositionY
test_df['SeriesI'] = test_df['SeriesI'].str.split("_", n = 3, expand = True)[1]
test_df['PID'] = test_df['PID'].str.split("_", n = 3, expand = True)[1]
test_df['StudyI'] = test_df['StudyI'].str.split("_", n = 3, expand = True)[1]
test_df.head()
test_df.to_csv(data_dir+'test_stage2.csv',index=False)
test_df=pd.read_csv(data_dir+'test_stage2.csv')
pickle_file=open(outputs_dir+'test_indexes_stage2.pkl','wb')
pickle.dump(test_df.PatientID.values,pickle_file,protocol=4)
pickle_file.close()
###Output
_____no_output_____ |
torch/5. ANN - NyTaxiFare - Regression.ipynb | ###Markdown
Data Preprocessing and Feature extraction
###Code
### We can use the latitude and longitude to get the distance travelled.
### We will use haversine formula to calculate the distance.
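### Haversine formula implemented below (phi = latitude, lambda = longitude, R = Earth radius):
###   a = sin^2(dphi/2) + cos(phi1)*cos(phi2)*sin^2(dlambda/2);  c = 2*atan2(sqrt(a), sqrt(1-a));  d = R*c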
def haversine_distance(df, lat1, long1, lat2, long2):
'''
    Calculate the haversine distance (in km) between two latitude/longitude points.
    '''
    r = 6371 # mean radius of the Earth in km
phi1 = np.radians(df[lat1])
phi2 = np.radians(df[lat2])
delta_phi = np.radians(df[lat2]-df[lat1])
delta_lambda = np.radians(df[long2]-df[long1])
a = np.sin(delta_phi/2)**2 + np.cos(phi1) * np.cos(phi2) * np.sin(delta_lambda/2)**2
c = 2 * np.arctan2(np.sqrt(a), np.sqrt(1-a))
d = (r * c) # in kilometers
return d
df['dist_km'] = haversine_distance(df,'pickup_latitude', 'pickup_longitude', 'dropoff_latitude', 'dropoff_longitude')
df.head()
df['EDTdate'] = pd.to_datetime(df['pickup_datetime'].str[:19]) - pd.Timedelta(hours=4)
df['Hour'] = df['EDTdate'].dt.hour
df['AMorPM'] = np.where(df['Hour']<12,'am','pm')
df['Weekday'] = df['EDTdate'].dt.strftime("%a")
df.head()
### separate categorical column from continous columns
df.columns
cats_col = ['Hour', 'AMorPM', 'Weekday']
conts_col = ['pickup_longitude','pickup_latitude', 'dropoff_longitude', 'dropoff_latitude',
'passenger_count', 'dist_km']
y_col = ['fare_amount']
for c in cats_col:
df[c] = df[c].astype('category')
df.dtypes
hr = df['Hour'].cat.codes.values
ampm = df['AMorPM'].cat.codes.values
weekday = df['Weekday'].cat.codes.values
cats = np.stack([hr,ampm,weekday],1) ## stack or combine all the categorical columns into a single array
cats
### stack up continous values
conts = np.stack([ df[c].values for c in conts_col ],1)
conts
### convert the np arrays to tensors
cats = torch.tensor(cats, dtype=torch.int64)
conts = torch.tensor(conts, dtype=torch.float)
y = torch.FloatTensor(df[y_col].values).reshape(-1,1)
cats
y
### final list of attributes
cats.shape
conts.shape
y.shape
###Output
_____no_output_____
###Markdown
Embedding
###Code
### understanding sample example of word embeddings
embeddings = nn.Embedding(num_embeddings=10, embedding_dim=3)
input = torch.LongTensor([[1,2,4,5],[4,3,2,9]]) ## the numbers need to be in sync.
embeddings
input
embedded_input = embeddings(input)
embedded_input
embedded_input.shape
## define the embedding size for categorical columns
cat_size = [ len(df[c].cat.categories) for c in cats_col]
embs_size = [ (c, min(50, (c+1)//2)) for c in cat_size ] ### rule of thumb: embedding size (c+1)//2, not exceeding 50
embs_size
###Output
_____no_output_____
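###Markdown
A quick illustration of the rule of thumb above for a few hypothetical cardinalities, showing how the embedding dimension grows and then caps at 50:
###Code
for c in (2, 7, 24, 500):
    print(c, '->', min(50, (c + 1) // 2))
###Output
_____no_output_____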
###Markdown
Building the Model
###Code
class TabularModel(nn.Module):
def __init__(self,embs_size, n_cont, out_size, layers, p=0.5):
super().__init__()
self.embeds = nn.ModuleList([nn.Embedding(ne,esize) for ne, esize in embs_size])
self.dropout = nn.Dropout(p)
self.batchnorm1d = nn.BatchNorm1d(n_cont) ## batch normalization for continous values
layerlist = []
n_embs = sum((nf for ni, nf in embs_size))
n_input = n_embs + n_cont
for i in layers:
layerlist.append(nn.Linear(n_input,i))
layerlist.append(nn.ReLU(inplace=True))
layerlist.append(nn.BatchNorm1d(i))
layerlist.append(nn.Dropout(p))
            n_input = i ## the output size of this layer becomes the input size of the next layer
layerlist.append(nn.Linear(layers[-1],out_size))
self.layers = nn.Sequential(*layerlist) ## combine as a sequential layer
def forward(self, x_cat, x_cont):
embeddings = []
for i,e in enumerate(self.embeds):
embeddings.append(e(x_cat[:,i]))
x = torch.cat(embeddings, 1)
x = self.dropout(x)
x_cont = self.batchnorm1d(x_cont)
x = torch.cat([x, x_cont], 1)
x = self.layers(x)
return x
torch.manual_seed(43)
model = TabularModel(embs_size, conts.shape[1], 1, [200,100], 0.4)
model
## testing some snips in the model
selfembds = nn.ModuleList([ nn.Embedding(num_embeddings=cat_size, embedding_dim=emb_size) for cat_size, emb_size in embs_size])
selfembds
## iterate over the module list
list(enumerate(selfembds))
## generate the embeddings based on the categorical inputs
catz = cats[:3]
print(catz)
embeddingz = []
for i,e in enumerate(selfembds):
print(i,"\t",e,"\t",catz[:,i])
embeddingz.append(e(catz[:,i]))
print(embeddingz)
###Output
tensor([[ 4, 0, 1],
[11, 0, 2],
[ 7, 0, 2]])
0 Embedding(24, 12) tensor([ 4, 11, 7])
1 Embedding(2, 1) tensor([0, 0, 0])
2 Embedding(7, 4) tensor([1, 2, 2])
[tensor([[-0.2444, 1.1423, -0.9040, -1.2943, -0.4704, -1.0650, -0.3825, 0.8106,
1.4967, -0.2787, 0.5608, 0.2403],
[-2.4204, -0.5451, 1.2503, -0.3485, -1.4470, 0.7133, -0.9494, -0.4867,
1.0317, -0.5628, -1.7098, -2.5643],
[-0.9331, -0.2268, -0.0646, -0.1164, -1.9337, 0.6303, -1.0596, -1.4841,
-0.9316, 0.2636, -1.7160, 2.0503]], grad_fn=<EmbeddingBackward>), tensor([[-1.0060],
[-1.0060],
[-1.0060]], grad_fn=<EmbeddingBackward>), tensor([[-0.1661, 0.5776, -1.5871, 0.6386],
[ 0.9419, -0.3500, 0.7339, 0.5983],
[ 0.9419, -0.3500, 0.7339, 0.5983]], grad_fn=<EmbeddingBackward>)]
###Markdown
Loss and Optimizer
###Code
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
Train/Test split
###Code
batch_size = 60000
test_size = int(batch_size * .2)
cat_train = cats[:batch_size-test_size]
cat_test = cats[batch_size-test_size:batch_size]
con_train = conts[:batch_size-test_size]
con_test = conts[batch_size-test_size:batch_size]
y_train = y[:batch_size-test_size]
y_test = y[batch_size-test_size:batch_size]
print(cat_train.shape, cat_test.shape)
###Output
torch.Size([48000, 3]) torch.Size([12000, 3])
###Markdown
Train the model
###Code
import time
start_time = time.time()
epochs = 300
losses = []
for i in range(epochs):
i+=1
y_pred = model(cat_train, con_train)
loss = torch.sqrt(criterion(y_pred, y_train)) # RMSE
losses.append(loss)
# a neat trick to save screen space:
if i%25 == 1:
print(f'epoch: {i:3} loss: {loss.item():10.8f}')
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f'epoch: {i:3} loss: {loss.item():10.8f}') # print the last line
print(f'\nDuration: {time.time() - start_time:.0f} seconds') # print the time elapsed
plt.plot(range(epochs), losses)
plt.ylabel('RMSE Loss')
plt.xlabel('epoch')
# TO EVALUATE THE ENTIRE TEST SET
with torch.no_grad():
y_val = model(cat_test, con_test)
loss = torch.sqrt(criterion(y_val, y_test))
print(f'RMSE: {loss:.8f}')
if len(losses) == epochs:
torch.save(model.state_dict(), 'TaxiFareRegrModel.pt')
else:
print('Model has not been trained. Consider loading a trained model instead.')
###Output
_____no_output_____ |
docs/case_studies/test15.ipynb | ###Markdown
AutoML Image Classification: With Rotation (Fashion MNIST)
###Code
import warnings
warnings.simplefilter(action="ignore", category=FutureWarning)
###Output
_____no_output_____
###Markdown

###Code
import random as rn
from abc import ABC, abstractmethod
import autokeras as ak
import h2o
import matplotlib.pyplot as plt
import numpy as np
from h2o.automl import H2OAutoML
from keras.datasets import fashion_mnist
from numpy.random import RandomState
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier
from dpemu import runner
from dpemu.filters.common import GaussianNoise, Clip
from dpemu.filters.image import RotationPIL
from dpemu.nodes import Array
from dpemu.nodes.series import Series
from dpemu.plotting_utils import visualize_scores, print_results_by_model
from dpemu.utils import generate_tmpdir
def get_data():
random_state = RandomState(42)
x, y = load_digits(return_X_y=True)
y = y.astype(np.uint8)
return train_test_split(x, y, test_size=1/7, random_state=random_state)
# (x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
# s = x_train.shape[1]
# x_train = x_train.reshape((len(x_train), s**2)).astype(np.float64)
# x_test = x_test.reshape((len(x_test), s**2)).astype(np.float64)
# return x_train, x_test, y_train, y_test
def get_err_root_node():
err_img_node = Array(reshape=(8, 8))
# err_img_node = Array(reshape=(28, 28))
err_root_node = Series(err_img_node)
err_img_node.addfilter(RotationPIL("max_angle"))
return err_root_node
# err_root_node = Series(err_img_node)
# err_img_node.addfilter(GaussianNoise("mean", "std"))
# err_img_node.addfilter(Clip("min_val", "max_val"))
# return err_root_node
def get_err_params_list(data):
angle_steps = np.linspace(0, 180, num=6)
err_params_list = [{"max_angle": a} for a in angle_steps]
return err_params_list
# min_val = np.amin(data)
# max_val = np.amax(data)
# std_steps = np.round(np.linspace(0, max_val, num=6), 3)
# err_params_list = [{"mean": 0, "std": std, "min_val": min_val, "max_val": max_val} for std in std_steps]
# return err_params_list
class Preprocessor:
def run(self, train_data, test_data, params):
return np.round(train_data).astype(np.uint8), np.round(test_data).astype(np.uint8), {}
class AbstractModel(ABC):
def __init__(self):
self.time_limit_mins = 30
self.seed = 42
self.random_state = RandomState(self.seed)
np.random.seed(self.seed)
@abstractmethod
def get_fitted_model(self, train_data, train_labels, params):
pass
@abstractmethod
def get_accuracy(self, data, labels, fitted_model, params):
pass
@abstractmethod
def get_best_pipeline(self, fitted_model):
pass
def run(self, train_data, test_data, params):
train_labels = params["train_labels"]
test_labels = params["test_labels"]
fitted_model = self.get_fitted_model(train_data, train_labels, params)
results = {
"test_acc": self.get_accuracy(test_data, test_labels, fitted_model, params),
"train_acc": self.get_accuracy(train_data, train_labels, fitted_model, params),
"best_pipeline": self.get_best_pipeline(fitted_model),
}
print(type(fitted_model))
print(results["test_acc"])
return results
class TPOTClassifierModel(AbstractModel):
def __init__(self):
super().__init__()
def get_fitted_model(self, train_data, train_labels, params):
return TPOTClassifier(
max_time_mins=self.time_limit_mins,
max_eval_time_mins=self.time_limit_mins,
n_jobs=-1,
random_state=self.seed,
verbosity=1,
).fit(train_data, train_labels)
def get_accuracy(self, data, labels, fitted_model, params):
return round(fitted_model.score(data, labels), 3)
def get_best_pipeline(self, fitted_model):
return [step[1] for step in fitted_model.fitted_pipeline_.steps]
class H2OAutoMLModel(AbstractModel):
def __init__(self):
super().__init__()
h2o.init(name=f"#{rn.SystemRandom().randint(1, 2**30)}", nthreads=20)
h2o.no_progress()
def get_fitted_model(self, train_data, train_labels, params):
train_data = h2o.H2OFrame(np.concatenate((train_data, train_labels.reshape(-1, 1)), axis=1))
x = np.array(train_data.columns)[:-1].tolist()
y = np.array(train_data.columns)[-1].tolist()
train_data[y] = train_data[y].asfactor()
aml = H2OAutoML(max_runtime_secs=60*self.time_limit_mins, seed=self.seed)
aml.train(x=x, y=y, training_frame=train_data)
return aml
def get_accuracy(self, data, labels, fitted_model, params):
data = h2o.H2OFrame(np.concatenate((data, labels.reshape(-1, 1)), axis=1))
y = np.array(data.columns)[-1].tolist()
data[y] = data[y].asfactor()
pred = fitted_model.predict(data).as_data_frame(header=False)["predict"].values.astype(int)
return np.round(np.mean(pred == labels), 3)
def get_best_pipeline(self, fitted_model):
leader_params = fitted_model.leader.get_params()
best_pipeline = [leader_params["model_id"]["actual_value"]["name"]]
if "base_models" in leader_params:
for base_model in leader_params["base_models"]["actual_value"]:
best_pipeline.append(base_model["name"])
h2o.cluster().shutdown()
return best_pipeline
class AutoKerasModel(AbstractModel):
def __init__(self):
super().__init__()
import tensorflow as tf
tf.set_random_seed(self.seed)
import torch
torch.multiprocessing.set_sharing_strategy("file_system")
torch.manual_seed(self.seed)
def get_fitted_model(self, x_train, y_train, params):
s = np.sqrt(x_train.shape[1]).astype(int)
x_train = x_train.reshape((len(x_train), s, s, 1))
clf = ak.ImageClassifier(augment=False, path=generate_tmpdir(), verbose=False)
clf.fit(x_train, y_train, time_limit=60*self.time_limit_mins)
return clf
def get_accuracy(self, x, y, clf, params):
s = np.sqrt(x.shape[1]).astype(int)
x = x.reshape((len(x), s, s, 1))
y_pred = clf.predict(x)
return np.round(accuracy_score(y_true=y, y_pred=y_pred), 3)
def get_best_pipeline(self, clf):
return [m for i, m in enumerate(clf.cnn.best_model.produce_model().modules()) if i > 0]
def get_model_params_dict_list(train_labels, test_labels):
model_params_base = {"train_labels": train_labels, "test_labels": test_labels}
return [
{
"model": AutoKerasModel,
"params_list": [{**model_params_base}],
"use_clean_train_data": False
},
{
"model": AutoKerasModel,
"params_list": [{**model_params_base}],
"use_clean_train_data": True
},
{
"model": TPOTClassifierModel,
"params_list": [{**model_params_base}],
"use_clean_train_data": False
},
{
"model": TPOTClassifierModel,
"params_list": [{**model_params_base}],
"use_clean_train_data": True
},
{
"model": H2OAutoMLModel,
"params_list": [{**model_params_base}],
"use_clean_train_data": False
},
{
"model": H2OAutoMLModel,
"params_list": [{**model_params_base}],
"use_clean_train_data": True
},
]
def visualize(df):
visualize_scores(
df,
score_names=["test_acc", "train_acc"],
is_higher_score_better=[True, True],
err_param_name="max_angle",
# err_param_name="std",
title="Classification scores with added error"
)
plt.show()
train_data, test_data, train_labels, test_labels = get_data()
df = runner.run(
train_data=train_data,
test_data=test_data,
preproc=Preprocessor,
preproc_params=None,
err_root_node=get_err_root_node(),
err_params_list=get_err_params_list(train_data),
model_params_dict_list=get_model_params_dict_list(train_labels, test_labels),
n_processes=1
)
print_results_by_model(df,
["train_labels", "test_labels"],
# ["mean", "min_val", "max_val", "train_labels", "test_labels"],
err_param_name="max_angle",
# err_param_name="std",
pipeline_name="best_pipeline"
)
visualize(df)
###Output
_____no_output_____ |
train/preprocess.ipynb | ###Markdown
All files
###Code
files = !ls ../records/game_reg/
datasets = []
for file in files:
datasets.append(pd.read_csv('../records/game_reg/{}'.format(file), index_col=0))
all_data = pd.concat(datasets, ignore_index=True)
all_data.shape
all_data.head(5)
#all_data['output'] = 2*all_data['isJumping'] + all_data['isDucking']
#all_data[all_data['output'] > 2] = 2
#train_cols = ['cactus1', 'cactus2', 'cactus3', 'ptera1x', 'ptera1y', 'ptera2x', 'ptera2y', 'ptera3x', 'ptera3y']
train_cols = ['cactus1', 'ptera1x', 'ptera1y']
data = all_data[train_cols]
data.describe()
#data.values[data.values < 0] = 0
data.min()
data.max()
data.max() - data.min()
normalized=(data-data.min())/(data.max()-data.min())
normalized.describe()
normalized.head(5)
normalized['ptera1y'] = 1 - normalized['ptera1y']
normalized.head(10)
X_train = 1 - np.array(normalized.values, dtype='float32')
Y_train = np.array(all_data['isJumping'].values, dtype='int32')
Z_train = np.array(all_data['isDucking'].values, dtype='int32')
#Y_train = np.array(data['isJumping'].values, dtype='int32')
X_train[:10]
list(set(X_train[:,2]))
def sigmoid(x):
    # sharp logistic squash centred at 0.7: inputs above ~0.7 (obstacles that are close) saturate towards 1
    return 1/(1+np.exp(-16*(x-0.7)))
X_train[:,0:2] = sigmoid(X_train[:,0:2])
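# Illustrative sketch (not part of the original pipeline): visualise the squash applied above.
# Assumes matplotlib is available in this environment.
import matplotlib.pyplot as plt
_xs = np.linspace(0, 1, 100)
plt.plot(_xs, sigmoid(_xs))
plt.xlabel('1 - normalised distance')
plt.ylabel('squashed value')
plt.show()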
#X_train = sigmoid(X_train)
X_train[:10]
list(set(X_train[:,2]))
norm_data = pd.DataFrame(X_train, columns=['cactus', 'pterax', 'pteray'])
norm_data.head()
norm_data['isJumping'] = Y_train
norm_data['isDucking'] = Z_train
norm_data.head()
norm_data.to_csv('../records/datasets/clean_batch_large.csv', index=False)
pd.read_csv('../records/datasets/clean_batch_large.csv').head()
###Output
_____no_output_____ |
4 Final Project/Metrics.ipynb | ###Markdown
**Wyscout Event / Tags Discovery Function**
###Code
def show_event_breakdown(df_events, dic_tags):
"""
Produces a full breakdown of the events, subevents, and the tags for the Wyscout dataset
Use this to look at the various tags attributed to the event taxonomy
"""
df_event_breakdown = df_events.groupby(['event_name','sub_event_name'])\
.agg({'id':'nunique','tags':lambda x: list(x)})\
.reset_index()\
.rename(columns={'id':'numSubEvents','tags':'tagList'})
# creating a histogram of the tags per sub event
df_event_breakdown['tagHist'] = df_event_breakdown.tagList.apply(lambda x: Counter([dic_tags[j] for i in x for j in i]))
dic = {}
for i, cols in df_event_breakdown.iterrows():
eventName, subEventName, numEvents, tagList, tagHist = cols
for key in tagHist:
dic[f'{i}-{key}'] = [eventName, subEventName, numEvents, key, tagHist[key]]
df_event_breakdown = pd.DataFrame.from_dict(dic, orient='index', columns=['event_name','sub_event_name','numSubEvents','tagKey','tagFrequency'])\
.sort_values(['event_name','numSubEvents','tagFrequency'], ascending=[True, False, False])\
.reset_index(drop=True)
return df_event_breakdown
###Output
_____no_output_____
###Markdown
**ELO Functions**
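For reference, the helpers below implement the standard Elo formulas: the expected score of player A against player B, and the post-match rating update with K-factor $k$, where $S_A$ is 1 for a win and 0 for a loss.

$$E_A = \frac{1}{1 + 10^{(R_B - R_A)/400}}, \qquad R_A' = R_A + k\,(S_A - E_A)$$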
###Code
def expected_win(r1, r2):
"""
Expected probability of player 1 beating player 2
if player 1 has rating 1 (r1) and player 2 has rating 2 (r2)
"""
return 1.0 / (1 + math.pow(10, (r2-r1)/400))
def update_rating(R, k, P, d):
"""
d = 1 = WIN
d = 0 = LOSS
"""
return R + k*(d-P)
def elo(Ra, Rb, k, d):
"""
d = 1 when player A wins
d = 0 when player B wins
"""
Pa = expected_win(Ra, Rb)
Pb = expected_win(Rb, Ra)
# update if A wins
if d == 1:
Ra = update_rating(Ra, k, Pa, d)
Rb = update_rating(Rb, k, Pb, d-1)
# update if B wins
elif d == 0:
Ra = update_rating(Ra, k, Pa, d)
Rb = update_rating(Rb, k, Pb, d+1)
return Pa, Pb, Ra, Rb
def elo_attack_defence_sequence(things, initial_rating, k, results):
"""
Initialises score dictionaries for attack and defence, and runs through sequence of pairwise results, returning final dictionaries with Elo rankings for both attack (dribblers) and defence (of dribblers)
"""
dic_scores_attack = {i:initial_rating for i in things}
dic_scores_defence = {i:initial_rating for i in things}
for result in results:
winner, loser, dribble_outcome = result
# winner = attacker, loser = defender
if dribble_outcome == 1:
Ra, Rb = dic_scores_attack[winner], dic_scores_defence[loser]
_, _, newRa, newRb = elo(Ra, Rb, k, 1)
dic_scores_attack[winner], dic_scores_defence[loser] = newRa, newRb
# winner = defender, loser = attacker
elif dribble_outcome == 0:
Ra, Rb = dic_scores_defence[winner], dic_scores_attack[loser]
_, _, newRa, newRb = elo(Ra, Rb, k, 1)
dic_scores_defence[winner], dic_scores_attack[loser] = newRa, newRb
return dic_scores_attack, dic_scores_defence
###Output
_____no_output_____
###Markdown
Mean Elo
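A minimal usage sketch (made-up player ids and outcomes) showing the input format the `mElo` function defined in the next cell expects: each result is a `(winner, loser, dribble_outcome)` tuple, where `dribble_outcome` is 1 if the attacker won the duel and 0 if the defender did.

```python
# hypothetical example: three players, four duels
players = [101, 102, 103]
results = [(101, 102, 1),   # attacker 101 beat defender 102
           (103, 101, 0),   # defender 103 stopped attacker 101
           (101, 103, 1),
           (102, 103, 1)]
df_example_elo = mElo(players, initial_rating=100, k=20, results=results, numEpochs=10)
```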
###Code
def mElo(things, initial_rating, k, results, numEpochs):
"""
Randomises the sequence of the pairwise comparisons, running the Elo sequence in a random
sequence for a number of epochs
Returns the mean Elo ratings over the randomised epoch sequences
"""
lst_outcomes_attack = []
lst_outcomes_defence = []
for i in np.arange(numEpochs):
np.random.shuffle(results)
dic_scores_attack, dic_scores_defence = elo_attack_defence_sequence(things, initial_rating, k, results)
lst_outcomes_attack.append(dic_scores_attack)
lst_outcomes_defence.append(dic_scores_defence)
df_attack = pd.DataFrame(lst_outcomes_attack).mean().sort_values(ascending=False).to_frame(name='eloAttack')
df_attack['player'] = df_attack.index
df_attack = df_attack.reset_index(drop=True)[['player','eloAttack']]
df_defence = pd.DataFrame(lst_outcomes_defence).mean().sort_values(ascending=False).to_frame(name='eloDefence')
df_defence['player'] = df_defence.index
df_defence = df_defence.reset_index(drop=True)[['player','eloDefence']]
df_elo = df_attack.merge(df_defence).sort_values('eloAttack', ascending=False)
df_elo['eloDribbleRank'] = df_elo.index+1
df_elo = df_elo.sort_values('eloDefence', ascending=False).reset_index(drop=True)
df_elo['eloDribbleDefenceRank'] = df_elo.index+1
return df_elo.sort_values('eloDribbleRank').reset_index(drop=True)
###Output
_____no_output_____
###Markdown
**Wyscout Tags**
###Code
dic_tags = {
101: 'Goal',
102: 'Own goal',
301: 'Assist',
302: 'Key pass',
1901: 'Counter attack',
401: 'Left foot',
402: 'Right foot',
403: 'Head/body',
1101: 'Direct',
1102: 'Indirect',
2001: 'Dangerous ball lost',
2101: 'Blocked',
801: 'High',
802: 'Low',
1401: 'Interception',
1501: 'Clearance',
201: 'Opportunity',
1301: 'Feint',
1302: 'Missed ball',
501: 'Free space right',
502: 'Free space left',
503: 'Take on left',
504: 'Take on right',
1601: 'Sliding tackle',
601: 'Anticipated',
602: 'Anticipation',
1701: 'Red card',
1702: 'Yellow card',
1703: 'Second yellow card',
1201: 'Position: Goal low center',
1202: 'Position: Goal low right',
1203: 'Position: Goal center',
1204: 'Position: Goal center left',
1205: 'Position: Goal low left',
1206: 'Position: Goal center right',
1207: 'Position: Goal high center',
1208: 'Position: Goal high left',
1209: 'Position: Goal high right',
1210: 'Position: Out low right',
1211: 'Position: Out center left',
1212: 'Position: Out low left',
1213: 'Position: Out center right',
1214: 'Position: Out high center',
1215: 'Position: Out high left',
1216: 'Position: Out high right',
1217: 'Position: Post low right',
1218: 'Position: Post center left',
1219: 'Position: Post low left',
1220: 'Position: Post center right',
1221: 'Position: Post high center',
1222: 'Position: Post high left',
1223: 'Position: Post high right',
901: 'Through',
1001: 'Fairplay',
701: 'Lost',
702: 'Neutral',
703: 'Won',
1801: 'Accurate',
1802: 'Not accurate'
}
###Output
_____no_output_____
###Markdown
**Repos**
###Code
repo_processed_wyscout = 'Processed Wyscout'
repo_spadl = 'SPADL'
###Output
_____no_output_____
###Markdown
**Loading CSVs**

Formations, matches, players
###Code
df_formations = pd.read_csv('df_wyscout_formations.csv').rename(columns={'playerId':'player_id','matchId':'match_id','teamId':'team_id'})
df_matches = pd.read_csv('df_wyscout_matches.csv')
df_players = pd.read_csv('player_positions.csv').drop(columns=['Unnamed: 0'])
df_teams = pd.read_csv('df_teams.csv').rename(columns={'teamId':'team_id','teamName':'team_name','officialTeamName':'official_team_name','teamArea':'team_area'})
###Output
_____no_output_____
###Markdown
Events
###Code
%%time
lst_leagues = os.listdir(repo_processed_wyscout)
lst_df = []
for league in lst_leagues:
df_league = pd.read_csv(os.path.join(repo_processed_wyscout, league), converters={'tags': eval})
df_league['source'] = league.replace('processed_wyscout_','').replace('_v1.csv','').title()
lst_df.append(df_league)
df = pd.concat(lst_df, ignore_index=True)
# converting tags to a list of integers
df['tags'] = df.tags.apply(lambda x: list(x))
# adding interception flag
df['interception_flag'] = df.tags.apply(lambda x: 1 if 1401 in x else 0)
# removing the redundant index column Unnamed: 0
df = df.drop(columns=['Unnamed: 0'])
# getting rid of 709 events that don't have an end position
df = df.loc[pd.isna(df['end_x']) == False].copy()
df['event_sec'] = np.round(df.event_sec, 6)
df['start_x'] = np.round(df.start_x, 2)
df['start_y'] = np.round(df.start_y, 2)
df['end_x'] = np.round(df.end_x, 2)
df['end_y'] = np.round(df.end_y, 2)
print (f'Loaded {len(df)} rows from {len(lst_leagues)} leagues.')
%%time
df_spadl_actions = pd.read_csv('SPADL_Actions.csv').drop_duplicates()
df_spadl_xT = pd.read_csv('SPADL_xT.csv').drop_duplicates()
# urgh doing some more spadl transformations to help the join
df_spadl_actions['match_period'] = df_spadl_actions.period_id.apply(lambda x: '1H' if x == 1.0 else '2H')
df_spadl_xT['match_period'] = df_spadl_xT.period_id.apply(lambda x: '1H' if x == 1.0 else '2H')
df_spadl_actions = df_spadl_actions.rename(columns = {'game_id':'match_id', 'time_seconds':'event_sec'})
df_spadl_xT = df_spadl_xT.rename(columns = {'game_id':'match_id', 'time_seconds':'event_sec'})
# need this hacky interception flag to preserve uniqueness through the join from SPADL to Wyscout
df_spadl_actions['interception_flag'] = df_spadl_actions.type_name.apply(lambda x: 1 if x == 'interception' else 0)
df_spadl_xT['interception_flag'] = df_spadl_xT.type_name.apply(lambda x: 1 if x == 'interception' else 0)
# converting the event secs to 6dp and positions to 2dp
df_spadl_actions['event_sec'] = np.round(df_spadl_actions.event_sec, 6)
df_spadl_xT['event_sec'] = np.round(df_spadl_xT.event_sec, 6)
df_spadl_actions['start_x'] = np.round(df_spadl_actions.start_x, 2)
df_spadl_actions['start_y'] = np.round(df_spadl_actions.start_y, 2)
df_spadl_actions['end_x'] = np.round(df_spadl_actions.end_x, 2)
df_spadl_actions['end_y'] = np.round(df_spadl_actions.end_y, 2)
df_spadl_xT['start_x'] = np.round(df_spadl_xT.start_x, 2)
df_spadl_xT['start_y'] = np.round(df_spadl_xT.start_y, 2)
df_spadl_xT['end_x'] = np.round(df_spadl_xT.end_x, 2)
df_spadl_xT['end_y'] = np.round(df_spadl_xT.end_y, 2)
print (f'Loaded SPADL data.')
###Output
Loaded SPADL data.
CPU times: user 8.62 s, sys: 1.78 s, total: 10.4 s
Wall time: 10.6 s
###Markdown
**Merging into a single dataframe**
###Code
%%time
# merging xT with spadl
df_spadl_actions_xT = df_spadl_actions.merge(df_spadl_xT, how='left', on=['match_id', 'period_id', 'event_sec', 'team_id', 'player_id', 'start_x',\
'start_y', 'end_x', 'end_y', 'bodypart_id', 'type_id', 'result_id',\
'type_name', 'result_name', 'bodypart_name', 'match_period',\
'interception_flag'])
# merging spadl with wyscout
df_combined = df.merge(df_spadl_actions_xT, how='left', on=['match_id','match_period','event_sec','team_id','player_id','start_x','start_y','interception_flag'], suffixes=('_wyscout','_SPADL'))
###Output
CPU times: user 46.5 s, sys: 6.52 s, total: 53 s
Wall time: 53.3 s
###Markdown
Investigating what's in Wyscout but not SPADL
###Code
df.sub_event_name.value_counts()
df_combined.loc[pd.isna(df_combined['type_name']) == True].sub_event_name.value_counts()
df_spadl_actions.type_name.value_counts()
###Output
_____no_output_____
###Markdown
Investigating shots
###Code
df_combined.loc[df_combined['sub_event_name'] == 'Shot', ['match_id','team_id','player_id','sub_event_name','start_x','start_y','end_x_wyscout','end_y_wyscout','end_x_SPADL','end_y_SPADL','type_name','result_name']]
###Output
_____no_output_____
###Markdown
--- **Analysis**
###Code
df_spadl_actions.type_name.value_counts()
df_breakdown = show_event_breakdown(df, dic_tags)
###Output
_____no_output_____
###Markdown
xT Defensive Analysis
###Code
df_spadl_xT.head()
###Output
_____no_output_____
###Markdown
* Need a nice way of gridding this
* Need a nice way of plotting this

Winger Analysis
###Code
def plot_xTMap(xT, theme, team_id, bins=(16,12), actions=['cross','pass','dribble'], players=None, vmax_override=None, saveFlag=0):
"""
player_id 14812 = Perisic
player_id 20556 = Candreva
"""
if players == None:
xT = xT.loc[(xT['team_id'] == team_id) & (xT['type_name'].isin(actions))]
else:
xT = xT.loc[(xT['team_id'] == team_id) & (xT['type_name'].isin(actions)) & (xT['player_id'].isin(players))]
# merging xT with teams dataframe
xT = xT.merge(df_teams)
team_xT = xT[xT.team_id == team_id]
oppo_xT = xT[xT.team_id != team_id]
team_name = team_xT.team_name.unique()[0]
team_pitch = mpl_pitch.Pitch(pitch_type='uefa', figsize=(16,9), pitch_color=theme.bgCol, line_zorder=2, line_color=theme.bgCol)
team_fig, team_ax = team_pitch.draw()
team_fig.patch.set_facecolor(theme.bgCol)
team_bin_statistic = team_pitch.bin_statistic(team_xT.start_x, team_xT.start_y, team_xT.xT_value, statistic='sum', bins=bins)
vmax = team_bin_statistic['statistic'].max()
vmin = 0
if vmax_override != None:
vmax = vmax_override
team_pcm = team_pitch.heatmap(team_bin_statistic, ax=team_ax, cmap='Reds', edgecolors='white', vmin=vmin, vmax=vmax)
team_scatter = team_pitch.scatter(xT.start_x, xT.start_y, c='white', s=2, ax=team_ax)
#team_pcm.axes.invert_yaxis()
#team_cbar = team_fig.colorbar(team_pcm, ax=team_ax)
#title = team_fig.suptitle(f'Origions of threat by location for {team_name}, Serie A 2017/18', x=0.4, y=0.98, fontsize=23, color='black')
if saveFlag == 1:
team_fig.savefig(f'{team_name}.png', dpi=300, transparent=True)
# perisic
plot_xTMap(df_spadl_xT, theme, team_id=3161, bins=(16,12), actions=['cross','pass','dribble'], players=[14812], vmax_override=1.5, saveFlag=1)
# candreva
plot_xTMap(df_spadl_xT, theme, team_id=3161, bins=(16,12), actions=['cross','pass','dribble'], players=[20556], vmax_override=1.5, saveFlag=1)
###Output
_____no_output_____
###Markdown
**Moses & Sanchez Scouting**
###Code
player_map.loc[player_map['player_name'].str.contains('Moses')]
# Moses
plot_xTMap(df_spadl_xT, theme, team_id=1610, bins=(16,12), actions=['cross','pass','dribble'], players=[8625], vmax_override=1.5, saveFlag=1)
# Sanchez
#plot_xTMap(df_spadl_xT, theme, team_id=1609, bins=(16,12), actions=['cross','pass','dribble'], players=[3361], vmax_override=1.5)
actions=['cross','pass','dribble']
bins=(16,12)
vmax_override=1.5
player_id = 3361
xT = df_spadl_xT.loc[(df_spadl_xT['player_id'] == player_id) & (df_spadl_xT['type_name'].isin(actions))]
# merging xT with teams dataframe
xT = xT.merge(df_teams)
team_pitch = mpl_pitch.Pitch(pitch_type='uefa', figsize=(16,9), pitch_color=theme.bgCol, line_zorder=2, line_color=theme.bgCol)
team_fig, team_ax = team_pitch.draw()
team_fig.patch.set_facecolor(theme.bgCol)
team_bin_statistic = team_pitch.bin_statistic(xT.start_x, xT.start_y, xT.xT_value, statistic='sum', bins=bins)
vmax = team_bin_statistic['statistic'].max()
vmin = 0
if vmax_override != None:
vmax = vmax_override
team_pcm = team_pitch.heatmap(team_bin_statistic, ax=team_ax, cmap='Reds', edgecolors='white', vmin=vmin, vmax=vmax)
team_scatter = team_pitch.scatter(xT.start_x, xT.start_y, c='white', s=2, ax=team_ax)
#team_pcm.axes.invert_yaxis()
#team_cbar = team_fig.colorbar(team_pcm, ax=team_ax)
#title = team_fig.suptitle(f'Origions of threat by location for {team_name}, Serie A 2017/18', x=0.4, y=0.98, fontsize=23, color='black')
team_fig.savefig(f'Alexis Sanchez.png', dpi=300, transparent=True)
###Output
_____no_output_____
###Markdown
--- **Delta Strategy Analysis**
###Code
df_teams.loc[df_teams['team_area'] == 'Germany']
%%time
bins = (12,8)
theme = LightTheme()
# identifying teams
## inter = 3161
## juve = 3159
## ac = 3157
## man u = 1611
## city = 1609
## real = 675
## barca = 676
## bayern = 2444
## bvb = 2447
# specify league here
league = 'Spain'
# specify team of interest here (the `inter_` variable names below are left over from the original Inter analysis)
inter_team_id = 675
team_name = 'Real Madrid'
other_team_ids = list(set(df_teams.loc[(df_teams['team_area'] == league) & (df_teams['teamType'] != 'national')].team_id.values) - set([inter_team_id]))
# dataframe: matchId | homeTeamId | awayTeamId
df_italy_matches = df_matches.loc[df_matches['source'] == league, ['matchId','homeTeamId','awayTeamId']].reset_index(drop=True)
# dict: {matchId: [home, away]}
dic_matches = {i:[j,k] for i,j,k in zip(df_italy_matches.matchId,df_italy_matches.homeTeamId,df_italy_matches.awayTeamId)}
# looping through the matches
vs_inter_delta = np.zeros((bins[1], bins[0]))
# creating a team_pitch object as we'll be using a method buried within to capture the counts in a 2D numpy array
team_pitch = mpl_pitch.Pitch(pitch_type='uefa', figsize=(16,9), pitch_color=theme.bgCol, line_zorder=20, line_color='#D3D3D3')#line_color=theme.bgCol)
team_fig, team_ax = team_pitch.draw()
team_fig.patch.set_facecolor(theme.bgCol)
for opp_team_id in other_team_ids:
# each opposition team will have two grids to store the xT counts
vs_inter = np.zeros((bins[1], bins[0]))
vs_other = np.zeros((bins[1], bins[0]))
# getting lists of match ids for the other teams, separating out the matches vs inter and the matches vs other teams
## getting the frequencies, too, as we'll use those to average things out later
opp_team_matches_vs_inter = [i for i in dic_matches.keys() if opp_team_id in dic_matches[i] and inter_team_id in dic_matches[i]]
freq_vs_inter = len(opp_team_matches_vs_inter)
opp_team_matches_vs_other = [i for i in dic_matches.keys() if opp_team_id in dic_matches[i] and inter_team_id not in dic_matches[i]]
freq_vs_other = len(opp_team_matches_vs_other)
## starting with the matches played against the team of interest
for match_id in opp_team_matches_vs_inter:
df_xT_match = df_spadl_xT.loc[(df_spadl_xT['match_id'] == match_id) & (df_spadl_xT['team_id'] == opp_team_id)].copy()
vs_inter += team_pitch.bin_statistic(df_xT_match.start_x, df_xT_match.start_y, df_xT_match.xT_value, statistic='sum', bins=bins)['statistic']
for match_id in opp_team_matches_vs_other:
df_xT_match = df_spadl_xT.loc[(df_spadl_xT['match_id'] == match_id) & (df_spadl_xT['team_id'] == opp_team_id)].copy()
vs_other += team_pitch.bin_statistic(df_xT_match.start_x, df_xT_match.start_y, df_xT_match.xT_value, statistic='sum', bins=bins)['statistic']
# calculating averages
mean_vs_inter = vs_inter / freq_vs_inter
mean_vs_other = vs_other / freq_vs_other
# adding to delta
vs_inter_delta += (mean_vs_inter - mean_vs_other)
# providing overlay - the statistics here don't matter - we'll override these soon
team_bin_statistic = team_pitch.bin_statistic(df_xT_match.end_x, df_xT_match.end_y, df_xT_match.xT_value, statistic='sum', bins=bins)
# overriding single match with the aggregated delta
team_bin_statistic['statistic'] = vs_inter_delta
# setting the colour scale
vmax = team_bin_statistic['statistic'].max()
vmin = 0
# plotting the heatmap
team_pcm = team_pitch.heatmap(team_bin_statistic, ax=team_ax, cmap='Reds', edgecolors='white', vmin=vmin, vmax=0.35)
team_cbar = team_fig.colorbar(team_pcm, ax=team_ax)
team_cbar.set_label('xT', rotation=270, fontsize=20)
title = team_fig.suptitle(f'Difference in Threat Strategy Vs {team_name}, 2017/18', x=0.4, y=0.98, fontsize=23, color='black')
team_fig.savefig('Difference_In_Threat_Vs_Real.png', transparent=True, dpi=300)
###Output
CPU times: user 8.8 s, sys: 445 ms, total: 9.24 s
Wall time: 3.79 s
###Markdown
--- **Kicking off the xT metrics**
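All of the metric cells below use the same per-90 normalisation: total xT scaled by minutes played. A quick illustration with made-up numbers:

```python
# illustrative only: 3.0 xT accumulated over 1800 minutes works out to 0.15 xT per 90
xT_value, minutes_played = 3.0, 1800
xT_per_90 = 90 * xT_value / minutes_played
```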
###Code
min_minutes = 0
df_spadl_xT_players = df_spadl_actions_xT.merge(df_players).merge(df_teams, on=['team_id'], how='inner')\
[['match_id', 'period_id', 'event_sec', 'team_id','official_team_name','team_area', 'player_id','player_name', 'player_role', 'player_position',
'start_x','start_y', 'end_x', 'end_y', 'bodypart_id', 'type_id', 'result_id','type_name', 'result_name', 'bodypart_name', 'xT_value', 'match_period']]
df_spadl_xT_players.head()
###Output
_____no_output_____
###Markdown
Crossing
###Code
#& (df_spadl_xT_players['team_area'] == 'England')\
#& (df_spadl_xT_players['player_position'].isin(['Right MF','Left MF','Left FW','Right FW']))]\
df_spadl_xT_winger_crossing_threat = df_spadl_xT_players.loc[(df_spadl_xT_players['type_name'] == 'cross')]\
.groupby(['team_id','official_team_name','player_id','player_name','player_position','match_id'])\
.agg({'xT_value':np.sum,'result_id':np.sum,'result_name':'count'})\
.reset_index()\
.rename(columns={'result_id':'numSuccessful','result_name':'numAttempts'})\
.merge(df_formations, on=['team_id','player_id','match_id'], how='inner')\
.groupby(['team_id','official_team_name','player_id','player_name','player_position'])\
.agg({'minutesPlayed':np.sum,'xT_value':np.sum,'numSuccessful':np.sum,'numAttempts':np.sum})\
.sort_values('xT_value', ascending=False)\
.reset_index()
df_spadl_xT_winger_crossing_threat['xT_crossing_per_90'] = 90*(df_spadl_xT_winger_crossing_threat['xT_value']) / df_spadl_xT_winger_crossing_threat['minutesPlayed']
df_spadl_xT_winger_crossing_threat['xT_crossing_per_cross'] = df_spadl_xT_winger_crossing_threat['xT_value'] / df_spadl_xT_winger_crossing_threat['numSuccessful']
df_spadl_xT_winger_crossing_threat['xT_crossing_per_attempt'] = df_spadl_xT_winger_crossing_threat['xT_value'] / df_spadl_xT_winger_crossing_threat['numAttempts']
df_spadl_xT_winger_crossing_threat['cross_success_rate'] = df_spadl_xT_winger_crossing_threat['numSuccessful'] / df_spadl_xT_winger_crossing_threat['numAttempts']
# filtering to players with at least `min_minutes` minutes played (set above)
df_spadl_xT_winger_crossing_threat = df_spadl_xT_winger_crossing_threat.loc[df_spadl_xT_winger_crossing_threat['minutesPlayed'] >= min_minutes].copy()
df_spadl_xT_winger_crossing_threat = df_spadl_xT_winger_crossing_threat.sort_values('xT_crossing_per_90', ascending=False).reset_index(drop=True)
df_spadl_xT_winger_crossing_threat['xT_crossing_per_90_rank'] = df_spadl_xT_winger_crossing_threat.index + 1
df_spadl_xT_winger_crossing_threat = df_spadl_xT_winger_crossing_threat.sort_values('xT_crossing_per_attempt', ascending=False).reset_index(drop=True)
df_spadl_xT_winger_crossing_threat['xT_crossing_per_attempt_rank'] = df_spadl_xT_winger_crossing_threat.index + 1
df_spadl_xT_winger_crossing_threat = df_spadl_xT_winger_crossing_threat.sort_values('xT_crossing_per_cross', ascending=False).reset_index(drop=True)
df_spadl_xT_winger_crossing_threat['xT_crossing_per_cross_rank'] = df_spadl_xT_winger_crossing_threat.index + 1
df_spadl_xT_winger_crossing_threat = df_spadl_xT_winger_crossing_threat.sort_values('cross_success_rate', ascending=False).reset_index(drop=True)
df_spadl_xT_winger_crossing_threat['cross_success_rate_rank'] = df_spadl_xT_winger_crossing_threat.index + 1
df_spadl_xT_winger_crossing_threat = df_spadl_xT_winger_crossing_threat.sort_values('xT_value', ascending=False).reset_index(drop=True)
df_spadl_xT_winger_crossing_threat['xT_cross_rank_overall'] = df_spadl_xT_winger_crossing_threat.index + 1
df_spadl_xT_winger_crossing_threat.sort_values('xT_cross_rank_overall')
###Output
_____no_output_____
###Markdown
Dribbling
###Code
#& (df_spadl_xT_players['team_area'] == 'Italy')\
#& (df_spadl_xT_players['player_position'].isin(['Right MF','Left MF','Left FW','Right FW']))]\
df_spadl_xT_winger_dribbling_threat = df_spadl_xT_players.loc[(df_spadl_xT_players['type_name'] == 'dribble')]\
.groupby(['team_id','official_team_name','player_id','player_name','player_position','match_id'])\
.agg({'xT_value':np.sum,'result_id':np.sum,'result_name':'count'})\
.reset_index()\
.rename(columns={'result_id':'numSuccessful','result_name':'numAttempts'})\
.merge(df_formations, on=['team_id','player_id','match_id'], how='inner')\
.groupby(['team_id','official_team_name','player_id','player_name','player_position'])\
.agg({'minutesPlayed':np.sum,'xT_value':np.sum,'numSuccessful':np.sum,'numAttempts':np.sum})\
.sort_values('xT_value', ascending=False)\
.reset_index()
df_spadl_xT_winger_dribbling_threat['xT_dribbling_per_90'] = 90*(df_spadl_xT_winger_dribbling_threat['xT_value']) / df_spadl_xT_winger_dribbling_threat['minutesPlayed']
df_spadl_xT_winger_dribbling_threat['xT_dribbling_per_dribble'] = df_spadl_xT_winger_dribbling_threat['xT_value'] / df_spadl_xT_winger_dribbling_threat['numSuccessful']
df_spadl_xT_winger_dribbling_threat['xT_dribbling_per_attempt'] = df_spadl_xT_winger_dribbling_threat['xT_value'] / df_spadl_xT_winger_dribbling_threat['numAttempts']
df_spadl_xT_winger_dribbling_threat['dribble_success_rate'] = df_spadl_xT_winger_dribbling_threat['numSuccessful'] / df_spadl_xT_winger_dribbling_threat['numAttempts']
# filtering to players with at least `min_minutes` minutes played (set above)
df_spadl_xT_winger_dribbling_threat = df_spadl_xT_winger_dribbling_threat.loc[df_spadl_xT_winger_dribbling_threat['minutesPlayed'] >= min_minutes].copy()
df_spadl_xT_winger_dribbling_threat = df_spadl_xT_winger_dribbling_threat.sort_values('xT_dribbling_per_90', ascending=False).reset_index(drop=True)
df_spadl_xT_winger_dribbling_threat['xT_dribbling_per_90_rank'] = df_spadl_xT_winger_dribbling_threat.index + 1
df_spadl_xT_winger_dribbling_threat = df_spadl_xT_winger_dribbling_threat.sort_values('xT_dribbling_per_attempt', ascending=False).reset_index(drop=True)
df_spadl_xT_winger_dribbling_threat['xT_dribbling_per_attempt_rank'] = df_spadl_xT_winger_dribbling_threat.index + 1
df_spadl_xT_winger_dribbling_threat = df_spadl_xT_winger_dribbling_threat.sort_values('xT_dribbling_per_dribble', ascending=False).reset_index(drop=True)
df_spadl_xT_winger_dribbling_threat['xT_dribbling_per_dribble_rank'] = df_spadl_xT_winger_dribbling_threat.index + 1
df_spadl_xT_winger_dribbling_threat = df_spadl_xT_winger_dribbling_threat.sort_values('xT_value', ascending=False).reset_index(drop=True)
df_spadl_xT_winger_dribbling_threat['xT_dribbling_rank_overall'] = df_spadl_xT_winger_dribbling_threat.index + 1
df_spadl_xT_winger_dribbling_threat
###Output
_____no_output_____
###Markdown
Passing
###Code
#& (df_spadl_xT_players['team_area'] == 'Italy')\
#& (df_spadl_xT_players['player_position'].isin(['Right MF','Left MF','Left FW','Right FW']))]\
df_spadl_xT_winger_passing_threat = df_spadl_xT_players.loc[(df_spadl_xT_players['type_name'] == 'pass')]\
.groupby(['team_id','official_team_name','player_id','player_name','player_position','match_id'])\
.agg({'xT_value':np.sum,'result_id':np.sum,'result_name':'count'})\
.reset_index()\
.rename(columns={'result_id':'numSuccessful','result_name':'numAttempts'})\
.merge(df_formations, on=['team_id','player_id','match_id'], how='inner')\
.groupby(['team_id','official_team_name','player_id','player_name','player_position'])\
.agg({'minutesPlayed':np.sum,'xT_value':np.sum,'numSuccessful':np.sum,'numAttempts':np.sum})\
.sort_values('xT_value', ascending=False)\
.reset_index()
df_spadl_xT_winger_passing_threat['xT_passing_per_90'] = 90*(df_spadl_xT_winger_passing_threat['xT_value']) / df_spadl_xT_winger_passing_threat['minutesPlayed']
df_spadl_xT_winger_passing_threat['xT_passing_per_pass'] = df_spadl_xT_winger_passing_threat['xT_value'] / df_spadl_xT_winger_passing_threat['numSuccessful']
df_spadl_xT_winger_passing_threat['xT_passing_per_attempt'] = df_spadl_xT_winger_passing_threat['xT_value'] / df_spadl_xT_winger_passing_threat['numAttempts']
df_spadl_xT_winger_passing_threat['pass_success_rate'] = df_spadl_xT_winger_passing_threat['numSuccessful'] / df_spadl_xT_winger_passing_threat['numAttempts']
# filtering to players with at least `min_minutes` minutes played (set above)
df_spadl_xT_winger_passing_threat = df_spadl_xT_winger_passing_threat.loc[df_spadl_xT_winger_passing_threat['minutesPlayed'] >= min_minutes].copy()
df_spadl_xT_winger_passing_threat = df_spadl_xT_winger_passing_threat.sort_values('xT_passing_per_90', ascending=False).reset_index(drop=True)
df_spadl_xT_winger_passing_threat['xT_passing_per_90_rank'] = df_spadl_xT_winger_passing_threat.index + 1
df_spadl_xT_winger_passing_threat = df_spadl_xT_winger_passing_threat.sort_values('xT_passing_per_attempt', ascending=False).reset_index(drop=True)
df_spadl_xT_winger_passing_threat['xT_passing_per_attempt_rank'] = df_spadl_xT_winger_passing_threat.index + 1
df_spadl_xT_winger_passing_threat = df_spadl_xT_winger_passing_threat.sort_values('xT_passing_per_pass', ascending=False).reset_index(drop=True)
df_spadl_xT_winger_passing_threat['xT_passing_per_pass_rank'] = df_spadl_xT_winger_passing_threat.index + 1
df_spadl_xT_winger_passing_threat = df_spadl_xT_winger_passing_threat.sort_values('pass_success_rate', ascending=False).reset_index(drop=True)
df_spadl_xT_winger_passing_threat['pass_success_rate_rank'] = df_spadl_xT_winger_passing_threat.index + 1
df_spadl_xT_winger_passing_threat = df_spadl_xT_winger_passing_threat.sort_values('xT_value', ascending=False).reset_index(drop=True)
df_spadl_xT_winger_passing_threat['xT_pass_rank_overall'] = df_spadl_xT_winger_passing_threat.index + 1
df_spadl_xT_winger_passing_threat.sort_values('xT_pass_rank_overall')
###Output
_____no_output_____
###Markdown
Overall xT/90
###Code
#& (df_spadl_xT_players['team_area'] == 'England')\
#& (df_spadl_xT_players['player_position'].isin(['Right MF','Left MF','Left FW','Right FW']))]\
df_spadl_xT_winger_threat = df_spadl_xT_players.loc[(df_spadl_xT_players['type_name'].isin(['pass','cross','dribble']))]\
.groupby(['team_id','official_team_name','player_id','player_name','player_position','match_id'])\
.agg({'xT_value':np.sum,'result_id':np.sum,'result_name':'count'})\
.reset_index()\
.rename(columns={'result_id':'numSuccessful','result_name':'numAttempts'})\
.merge(df_formations, on=['team_id','player_id','match_id'], how='inner')\
.groupby(['team_id','official_team_name','player_id','player_name','player_position'])\
.agg({'minutesPlayed':np.sum,'xT_value':np.sum,'numSuccessful':np.sum,'numAttempts':np.sum})\
.sort_values('xT_value', ascending=False)\
.reset_index()
df_spadl_xT_winger_threat['xT_per_90'] = 90*(df_spadl_xT_winger_threat['xT_value']) / df_spadl_xT_winger_threat['minutesPlayed']
df_spadl_xT_winger_threat['xT_per_success'] = df_spadl_xT_winger_threat['xT_value'] / df_spadl_xT_winger_threat['numSuccessful']
df_spadl_xT_winger_threat['xT_per_attempt'] = df_spadl_xT_winger_threat['xT_value'] / df_spadl_xT_winger_threat['numAttempts']
# filtering to players with at least `min_minutes` minutes played (set above)
df_spadl_xT_winger_threat = df_spadl_xT_winger_threat.loc[df_spadl_xT_winger_threat['minutesPlayed'] >= min_minutes].copy()
df_spadl_xT_winger_threat = df_spadl_xT_winger_threat.sort_values('xT_per_90', ascending=False).reset_index(drop=True)
df_spadl_xT_winger_threat['xT_per_90_rank'] = df_spadl_xT_winger_threat.index + 1
df_spadl_xT_winger_threat = df_spadl_xT_winger_threat.sort_values('xT_per_attempt', ascending=False).reset_index(drop=True)
df_spadl_xT_winger_threat['xT_per_attempt_rank'] = df_spadl_xT_winger_threat.index + 1
df_spadl_xT_winger_threat = df_spadl_xT_winger_threat.sort_values('xT_per_success', ascending=False).reset_index(drop=True)
df_spadl_xT_winger_threat['xT_per_success_rank'] = df_spadl_xT_winger_threat.index + 1
df_spadl_xT_winger_threat = df_spadl_xT_winger_threat.sort_values('xT_value', ascending=False).reset_index(drop=True)
df_spadl_xT_winger_threat['xT_rank_overall'] = df_spadl_xT_winger_threat.index + 1
df_spadl_xT_winger_threat.sort_values('xT_rank_overall')
df_spadl_xT_winger_dribbling_threat.loc[df_spadl_xT_winger_dribbling_threat['player_id'] == 14812]
###Output
_____no_output_____
###Markdown
--- **ELO Winger Analysis**
###Code
# okay so we can look at attacking and defending duels later and pick out the winners and losers and push that through elo
df_duels = df.loc[df['event_name'] == 'Duel']
df_duels.sub_event_name.value_counts()
###Output
_____no_output_____
###Markdown
Looking at dribble sequences

* Is the SPADL dribble thing even usable?
* Want to look at duels and pair them up
* Then want to get subset of duels that are happening within a dribble
###Code
player_map = df_formations[['player_id','team_id']].drop_duplicates().merge(df_players).merge(df_teams)
inter_player_map = player_map.loc[player_map['team_id'] == 3161, ['player_id','player_name','player_position']]
inter_player_map.head()
df_ground_duels = df_duels.loc[df_duels['sub_event_name'].isin(['Ground defending duel','Ground attacking duel']), ['id','match_id','team_id','match_period','event_sec','sub_event_name','tags','player_id','player_name','start_x_perc','start_y_perc']]
df_ground_duels['success_flag'] = df_ground_duels.tags.apply(lambda x: 1 if 703 in x else (2 if 702 in x else 0))
df_ground_duels['take_on_flag'] = df_ground_duels.tags.apply(lambda x: 1 if 503 in x else (1 if 504 in x else 0))
df_ground_duels['dribble_flag'] = df_ground_duels.tags.apply(lambda x: 1 if 503 in x else (1 if 504 in x else (1 if 502 in x else (1 if 501 in x else 0))))
df_ground_duels['counter_attack_flag'] = df_ground_duels.tags.apply(lambda x: 1 if 1901 in x else 0)
# getting rid of neutral encounters: only interested in wins and losses
df_ground_duels = df_ground_duels.loc[df_ground_duels['success_flag'] != 2].copy()
#italy_matches = df_matches.loc[df_matches['source'] == 'Italy'].matchId.values
#df_ground_duels = df_ground_duels.loc[df_ground_duels['match_id'].isin(italy_matches)].copy()
###Output
_____no_output_____
###Markdown
Method:

* Start with the attacking side
* Loop through each attacking duel and look for the closest defending duel whose x & y coordinates mirror it (they sum to 100) and which occurs within 5 seconds of it (see the sketch below)
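Wyscout records each team's coordinates from its own attacking direction, so a defending duel at the same spot as an attacking duel has mirrored coordinates. A tiny illustration of the pairing condition used in the cell below (the values are made up):

```python
# made-up coordinates on Wyscout's 0-100 percentage scale
attack_x, attack_y = 70, 35   # attacking duel, from the attacker's perspective
defend_x, defend_y = 30, 65   # matching defending duel, from the defender's perspective
same_spot = (attack_x + defend_x == 100) and (attack_y + defend_y == 100)   # True
```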
###Code
df_attack = df_ground_duels.loc[(df_ground_duels['sub_event_name'] == 'Ground attacking duel') & (pd.isna(df_ground_duels['player_name']) == False)]
df_defend = df_ground_duels.loc[(df_ground_duels['sub_event_name'] == 'Ground defending duel') & (pd.isna(df_ground_duels['player_name']) == False)]
len(df_attack), len(df_defend)
%%time
dic = {}
# iterating through each attacking duel
for idx, cols in df_attack.iterrows():
# field names for the attacking duel
Id,match_id,team_id,match_period,event_sec,sub_event_name,tags,player_id,player_name,start_x_perc,start_y_perc,success_flag,take_on_flag,dribble_flag,counter_attack_flag = cols
# finding matching defensive duel candidates
## same match, different team, same half, within 5 seconds of each other, and must happen in the same (mirrored) place
df_candidates = df_defend.loc[(df_defend['match_id'] == match_id)\
& (df_defend['team_id'] != team_id)\
& (df_defend['match_period'] == match_period)\
& (df_defend['event_sec'] >= event_sec-5)\
& (df_defend['event_sec'] <= event_sec+5)\
& (df_defend['start_x_perc'] + start_x_perc == 100)\
& (df_defend['start_y_perc'] + start_y_perc == 100)].copy()
# pick best candidate
df_candidates['time_delta_event_sec'] = abs(df_candidates['event_sec'] - event_sec)
# if there is a mapping
if len(df_candidates) > 0:
opp_id, opp_team_id, opp_player_name = df_candidates.sort_values('time_delta_event_sec').head(1)[['id','team_id','player_name']].values[0]
dic[Id] = [Id, opp_id]
# update df_defend to get rid of things that have been selected
# this will have a massive impact when you get into the latter stages of the algo
df_defend = df_defend.loc[df_defend['id'] != opp_id]
# need to reboot df_defend since we were eliminating rows in the match algo
df_defend = df_ground_duels.loc[df_ground_duels['sub_event_name'] == 'Ground defending duel']
# getting an 86% match rate in England and an 87% match rate in Italy
print (len(dic) / min([len(df_attack), len(df_defend)]))
# transforming dic into a dataframe that we can start joining on
df_dribbles = pd.DataFrame.from_dict(dic, orient='index', columns=['attack_id','defend_id'])
# merging attack with defence
df_dribbles = df_dribbles.merge(df_attack, left_on='attack_id', right_on='id', suffixes=('','_attack')).merge(df_defend, left_on='defend_id', right_on='id', suffixes=('','_defend'))
df_dribbles = df_dribbles[['attack_id','defend_id','match_id','match_period','event_sec','event_sec_defend',\
'team_id','player_id','player_name','team_id_defend','player_id_defend','player_name_defend',\
'success_flag','success_flag_defend','dribble_flag','take_on_flag','counter_attack_flag',\
'start_x_perc','start_x_perc_defend','start_y_perc','start_y_perc_defend']]
len(df_dribbles)
#df_dribbles.to_csv('df_dribbles.csv', index=None)
###Output
_____no_output_____
###Markdown
Selecting duel type
###Code
duel_type = 'Dribble Opp Half'
if duel_type == 'Take On All':
df_dribbles = df_dribbles.loc[(df_dribbles['take_on_flag'] == 1)].copy()
elif duel_type == 'Take On Opp Half':
df_dribbles = df_dribbles.loc[(df_dribbles['take_on_flag'] == 1) & (df_dribbles['start_x_perc'] > 50)].copy()
elif duel_type == 'Dribble All':
df_dribbles = df_dribbles.loc[(df_dribbles['dribble_flag'] == 1)].copy()
elif duel_type == 'Dribble Opp Half':
df_dribbles = df_dribbles.loc[(df_dribbles['dribble_flag'] == 1) & (df_dribbles['start_x_perc'] > 50)].copy()
elif duel_type == 'Pressure All':
df_dribbles = df_dribbles.loc[(df_dribbles['dribble_flag'] == 0)].copy()
elif duel_type == 'Pressure Opp Half':
df_dribbles = df_dribbles.loc[(df_dribbles['dribble_flag'] == 0) & (df_dribbles['start_x_perc'] > 50)].copy()
elif duel_type == 'All':
pass
len(df_dribbles)
###Output
_____no_output_____
###Markdown
Formatting the winners and losers of the duels
###Code
df_dribbles['winner'] = df_dribbles.apply(lambda x: x.player_id if x.success_flag == 1 else x.player_id_defend, axis=1)
df_dribbles['loser'] = df_dribbles.apply(lambda x: x.player_id if x.success_flag == 0 else x.player_id_defend, axis=1)
winner_loser_outcome = [(i[0], i[1], i[2]) for i in df_dribbles.loc[:, ['winner','loser','success_flag']].values]
###Output
_____no_output_____
###Markdown
Initial Elo conditions & Running meanElo
###Code
%%time
initial_rating = 100
k = 20
players = list(set(df_dribbles['winner'].values).union(set(df_dribbles['loser'].values)))
df_elo = mElo(players, initial_rating, k, winner_loser_outcome, 10).merge(df_players, left_on='player', right_on='player_id', how='inner')
###Output
CPU times: user 1.81 s, sys: 24.6 ms, total: 1.84 s
Wall time: 1.9 s
###Markdown
Attackers
###Code
elo_winger = df_elo.copy()
#elo_winger = df_elo.loc[df_elo['player_position'].isin(['Right MF','Left MF','Right FW','Left FW'])].copy()
#elo_winger = df_elo.loc[df_elo['player_position'].isin(['Right MF','Left MF','Right FW','Left FW','Left DF','Right DF'])].copy()
elo_winger['eloDribblePercentileRank'] = 100 - elo_winger.eloDribbleRank.apply(lambda x: percentileofscore(elo_winger.eloDribbleRank.values, x))
elo_winger.head()
###Output
_____no_output_____
###Markdown
Defenders

Dribble Defence
###Code
elo_defender_dribble = df_elo.sort_values('eloDribbleDefenceRank') #df_elo.loc[df_elo['player_role'].isin(['Defender'])].sort_values('eloDribbleDefenceRank')
elo_defender_dribble['eloDribbleDefencePercentileRank'] = 100 - elo_defender_dribble.eloDribbleDefenceRank.apply(lambda x: percentileofscore(elo_defender_dribble.eloDribbleDefenceRank.values, x))
elo_defender_dribble.head()
###Output
_____no_output_____
###Markdown
Pressure Defence
###Code
elo_defender_pressure = df_elo.loc[df_elo['player_role'].isin(['Defender'])].sort_values('eloDribbleDefenceRank')
elo_defender_pressure['eloPressureDefencePercentileRank'] = 100 - elo_defender_pressure.eloDribbleDefenceRank.apply(lambda x: percentileofscore(elo_defender_pressure.eloDribbleDefenceRank.values, x))
elo_defender_pressure.head()
###Output
_____no_output_____
###Markdown
Looking @ Inter Players
###Code
elo_winger.loc[elo_winger['player_id'].isin(inter_player_map.player_id)]
elo_defender_dribble.loc[elo_defender_dribble['player_id'].isin(inter_player_map.player_id)]
elo_defender_pressure.loc[elo_defender_pressure['player_id'].isin(inter_player_map.player_id)]
###Output
_____no_output_____
###Markdown
--- Combining the defensive ELO statistics
###Code
elo_defender_pressure.merge(elo_defender_dribble, on=['player','player_id','player_name','player_role','player_position'], suffixes=('_pressure','_dribble'))\
[['player_id','player_name','player_position','eloDefence_dribble','eloDribbleDefencePercentileRank','eloDefence_pressure','eloPressureDefencePercentileRank']]\
.to_csv('ELO_DEFENCE.csv', index=None)
###Output
_____no_output_____
###Markdown
Outputting the attacking ELO statistics
###Code
elo_winger[['player_id','player_name','player_position','eloAttack','eloDribblePercentileRank']].to_csv('ELO_ATTACK.csv', index=None)
elo_winger.head(20)
###Output
_____no_output_____
###Markdown
Rongier
###Code
elo_winger = elo_winger.drop(columns=['player'])
elo_defender_dribble = elo_defender_dribble.drop(columns=['player'])
df_everything = elo_winger.merge(elo_defender_dribble)\
.merge(df_spadl_xT_winger_crossing_threat, on=['player_id','player_name','player_position'], how='left', suffixes=('','_crossing'))\
.merge(df_spadl_xT_winger_passing_threat, on=['team_id','official_team_name','player_id','player_name','player_position'], how='left', suffixes=('','_passing'))\
.merge(df_spadl_xT_winger_dribbling_threat, on=['team_id','official_team_name','player_id','player_name','player_position'], how='left', suffixes=('','_dribbling'))\
.merge(df_spadl_xT_winger_threat, on=['team_id','official_team_name','player_id','player_name','player_position'], how='left', suffixes=('','_overall'))\
.sort_values('eloDribbleDefenceRank')
df_everything.to_csv('player_stats.csv', index=None)
len(df_everything)
df_everything
df_everything.loc[df_everything['player_name'].str.contains('Leiva')]
df_everything.loc[df_everything['player_position'] == 'Left DF'].sort_values('eloDribbleDefenceRank').head(20)
df_event_breakdown.groupby('event_name').agg({'numSubEvents':np.sum}).reset_index().sort_values('numSubEvents', ascending=False)
df_event_breakdown
###Output
_____no_output_____ |
model_comparisons/noingX_compared.ipynb | ###Markdown
Comparing Encoder-Decoders

Analysis

Model Architecture
###Code
report_files = ["/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing6_200_512_04drb/encdec_noing6_200_512_04drb.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drb/encdec_noing10_200_512_04drb.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing15_200_512_04drb/encdec_noing15_200_512_04drb.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing23_200_512_04drb/encdec_noing23_200_512_04drb.json"]
log_files = ["/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing6_200_512_04drb/encdec_noing6_200_512_04drb_logs.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drb/encdec_noing10_200_512_04drb_logs.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing15_200_512_04drb/encdec_noing15_200_512_04drb_logs.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing23_200_512_04drb/encdec_noing23_200_512_04drb_logs.json"]
reports = []
logs = []
import json
import matplotlib.pyplot as plt
import numpy as np
for report_file in report_files:
with open(report_file) as f:
reports.append((report_file.split('/')[-1].split('.json')[0], json.loads(f.read())))
for log_file in log_files:
with open(log_file) as f:
logs.append((log_file.split('/')[-1].split('.json')[0], json.loads(f.read())))
for report_name, report in reports:
print '\n', report_name, '\n'
print 'Encoder: \n', report['architecture']['encoder']
print 'Decoder: \n', report['architecture']['decoder']
###Output
encdec_noing6_200_512_04drb
Encoder:
nn.Sequential {
[input -> (1) -> (2) -> (3) -> (4) -> output]
(1): nn.LookupTable
(2): nn.Dropout(0.400000)
(3): nn.LSTM(200 -> 512)
(4): nn.Dropout(0.400000)
}
Decoder:
nn.gModule
encdec_noing10_200_512_04drb
Encoder:
nn.Sequential {
[input -> (1) -> (2) -> (3) -> (4) -> output]
(1): nn.LookupTable
(2): nn.Dropout(0.400000)
(3): nn.LSTM(200 -> 512)
(4): nn.Dropout(0.400000)
}
Decoder:
nn.gModule
encdec_noing15_200_512_04drb
Encoder:
nn.Sequential {
[input -> (1) -> (2) -> (3) -> (4) -> output]
(1): nn.LookupTable
(2): nn.Dropout(0.400000)
(3): nn.LSTM(200 -> 512)
(4): nn.Dropout(0.400000)
}
Decoder:
nn.gModule
encdec_noing23_200_512_04drb
Encoder:
nn.Sequential {
[input -> (1) -> (2) -> (3) -> (4) -> output]
(1): nn.LookupTable
(2): nn.Dropout(0.400000)
(3): nn.LSTM(200 -> 512)
(4): nn.Dropout(0.400000)
}
Decoder:
nn.gModule
###Markdown
Perplexity on Each Dataset
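As a reminder (standard definition, not taken from these reports), perplexity is the exponentiated average per-token negative log-likelihood, so lower is better:

$$\mathrm{PPL} = \exp\!\left(-\frac{1}{N}\sum_{i=1}^{N}\log p(w_i \mid w_{<i})\right)$$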
###Code
%matplotlib inline
from IPython.display import HTML, display
def display_table(data):
display(HTML(
u'<table><tr>{}</tr></table>'.format(
u'</tr><tr>'.join(
u'<td>{}</td>'.format('</td><td>'.join(unicode(_) for _ in row)) for row in data)
)
))
def bar_chart(data):
n_groups = len(data)
train_perps = [d[1] for d in data]
valid_perps = [d[2] for d in data]
test_perps = [d[3] for d in data]
fig, ax = plt.subplots(figsize=(10,8))
index = np.arange(n_groups)
bar_width = 0.3
opacity = 0.4
error_config = {'ecolor': '0.3'}
train_bars = plt.bar(index, train_perps, bar_width,
alpha=opacity,
color='b',
error_kw=error_config,
label='Training Perplexity')
valid_bars = plt.bar(index + bar_width, valid_perps, bar_width,
alpha=opacity,
color='r',
error_kw=error_config,
label='Valid Perplexity')
test_bars = plt.bar(index + 2*bar_width, test_perps, bar_width,
alpha=opacity,
color='g',
error_kw=error_config,
label='Test Perplexity')
plt.xlabel('Model')
plt.ylabel('Scores')
plt.title('Perplexity by Model and Dataset')
plt.xticks(index + bar_width / 3, [d[0] for d in data])
plt.legend()
plt.tight_layout()
plt.show()
data = [['<b>Model</b>', '<b>Train Perplexity</b>', '<b>Valid Perplexity</b>', '<b>Test Perplexity</b>']]
for rname, report in reports:
data.append([rname, report['train_perplexity'], report['valid_perplexity'], report['test_perplexity']])
display_table(data)
bar_chart(data[1:])
###Output
_____no_output_____
###Markdown
Loss vs. Epoch
###Code
%matplotlib inline
plt.figure(figsize=(10, 8))
for rname, l in logs:
for k in l.keys():
plt.plot(l[k][0], l[k][1], label=str(k) + ' ' + rname + ' (train)')
plt.plot(l[k][0], l[k][2], label=str(k) + ' ' + rname + ' (valid)')
plt.title('Loss v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Perplexity vs. Epoch
###Code
%matplotlib inline
plt.figure(figsize=(10, 8))
for rname, l in logs:
for k in l.keys():
plt.plot(l[k][0], l[k][3], label=str(k) + ' ' + rname + ' (train)')
plt.plot(l[k][0], l[k][4], label=str(k) + ' ' + rname + ' (valid)')
plt.title('Perplexity v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Perplexity')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Generations
###Code
def print_sample(sample, best_bleu=None):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
print('Input: '+ enc_input + '\n')
print('Gend: ' + sample['generated'] + '\n')
print('True: ' + gold + '\n')
if best_bleu is not None:
cbm = ' '.join([w for w in best_bleu['best_match'].split(' ') if w != '<mask>'])
print('Closest BLEU Match: ' + cbm + '\n')
print('Closest BLEU Score: ' + str(best_bleu['best_score']) + '\n')
print('\n')
def display_sample(samples, best_bleu=False):
for enc_input in samples:
data = []
for rname, sample in samples[enc_input]:
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
data.append([rname, '<b>Generated: </b>' + sample['generated']])
if best_bleu:
cbm = ' '.join([w for w in sample['best_match'].split(' ') if w != '<mask>'])
data.append([rname, '<b>Closest BLEU Match: </b>' + cbm + ' (Score: ' + str(sample['best_score']) + ')'])
data.insert(0, ['<u><b>' + enc_input + '</b></u>', '<b>True: ' + gold+ '</b>'])
display_table(data)
def process_samples(samples):
# consolidate samples with identical inputs
result = {}
for rname, t_samples, t_cbms in samples:
for i, sample in enumerate(t_samples):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
if t_cbms is not None:
sample.update(t_cbms[i])
if enc_input in result:
result[enc_input].append((rname, sample))
else:
result[enc_input] = [(rname, sample)]
return result
samples = process_samples([(rname, r['train_samples'], r['best_bleu_matches_train'] if 'best_bleu_matches_train' in r else None) for (rname, r) in reports])
display_sample(samples, best_bleu='best_bleu_matches_train' in reports[1][1])
samples = process_samples([(rname, r['valid_samples'], r['best_bleu_matches_valid'] if 'best_bleu_matches_valid' in r else None) for (rname, r) in reports])
display_sample(samples, best_bleu='best_bleu_matches_valid' in reports[1][1])
samples = process_samples([(rname, r['test_samples'], r['best_bleu_matches_test'] if 'best_bleu_matches_test' in r else None) for (rname, r) in reports])
display_sample(samples, best_bleu='best_bleu_matches_test' in reports[1][1])
###Output
_____no_output_____
###Markdown
BLEU Analysis
###Code
def print_bleu(blue_structs):
data= [['<b>Model</b>', '<b>Overall Score</b>','<b>1-gram Score</b>','<b>2-gram Score</b>','<b>3-gram Score</b>','<b>4-gram Score</b>']]
for rname, blue_struct in blue_structs:
data.append([rname, blue_struct['score'], blue_struct['components']['1'], blue_struct['components']['2'], blue_struct['components']['3'], blue_struct['components']['4']])
display_table(data)
# Training Set BLEU Scores
print_bleu([(rname, report['train_bleu']) for (rname, report) in reports])
# Validation Set BLEU Scores
print_bleu([(rname, report['valid_bleu']) for (rname, report) in reports])
# Test Set BLEU Scores
print_bleu([(rname, report['test_bleu']) for (rname, report) in reports])
# All Data BLEU Scores
print_bleu([(rname, report['combined_bleu']) for (rname, report) in reports])
###Output
_____no_output_____
###Markdown
N-pairs BLEU Analysis

This analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. We can expect very low scores on the ground truth, while high scores can expose hyper-common generations. A minimal sketch of the idea is shown below.
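This is not the code that produced the reports; it is a sketch of the idea, assuming `generations` and `references` are lists of token lists and using NLTK purely for convenience:

```python
import random
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def n_pairs_bleu(generations, references, n=1000, seed=0):
    # score n randomly-drawn (generation, ground-truth) pairs as if they were translations
    random.seed(seed)
    smooth = SmoothingFunction().method1
    scores = [sentence_bleu([random.choice(references)], random.choice(generations),
                            smoothing_function=smooth)
              for _ in range(n)]
    return sum(scores) / len(scores)
```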
###Code
# Training Set BLEU n-pairs Scores
print_bleu([(rname, report['n_pairs_bleu_train']) for (rname, report) in reports])
# Validation Set n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_valid']) for (rname, report) in reports])
# Test Set n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_test']) for (rname, report) in reports])
# Combined n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_all']) for (rname, report) in reports])
# Ground Truth n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_gold']) for (rname, report) in reports])
###Output
_____no_output_____
###Markdown
Alignment Analysis

This analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as N-pairs BLEU: we expect low scores on the ground truth, and hyper-common generations to raise the scores. A generic scoring sketch is shown below.
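The exact scoring scheme behind these numbers isn't shown in the reports; a generic Smith-Waterman local-alignment score over token sequences, with illustrative match/mismatch/gap penalties, looks like this:

```python
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-1):
    # dynamic-programming local alignment: H[i][j] is the best alignment score ending at
    # a[i-1], b[j-1]; scores are floored at zero so alignments can restart anywhere
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# e.g. smith_waterman_score("the cat sat".split(), "a cat sat down".split()) returns 4
```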
###Code
def print_align(reports):
data= [['<b>Model</b>', '<b>Average (Train) Generated Score</b>','<b>Average (Valid) Generated Score</b>','<b>Average (Test) Generated Score</b>','<b>Average (All) Generated Score</b>', '<b>Average (Gold) Score</b>']]
for rname, report in reports:
data.append([rname, report['average_alignment_train'], report['average_alignment_valid'], report['average_alignment_test'], report['average_alignment_all'], report['average_alignment_gold']])
display_table(data)
print_align(reports)
###Output
_____no_output_____ |
Strategies (Visualization)/Iron Condor.ipynb | ###Markdown
Call Payoff
###Code
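# NOTE (added sketch): the cell that defined these inputs isn't shown above, so here is a
# minimal assumed setup so this cell runs. The strikes match the plot labels below, but the
# premiums and the price grid are illustrative assumptions only.
import numpy as np
import matplotlib.pyplot as plt

def call_payoff(sT, strike_price, premium):
    # long call payoff at expiration: intrinsic value above the strike, minus the premium paid
    return np.where(sT > strike_price, sT - strike_price, 0) - premium

# Long call leg
strike_price_long_call = 140
premium_long_call = 2.0   # assumed
# Short call leg
strike_price_short_call = 130
premium_short_call = 3.5  # assumed
# Stock price range at expiration of the call
sT = np.arange(50, 150, 1)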
payoff_long_call = call_payoff(sT, strike_price_long_call, premium_long_call)
fig, ax = plt.subplots()
ax.spines['bottom'].set_position('zero')
ax.plot(sT,payoff_long_call,label='Long 140 Strike Call',color='g')
plt.xlabel('Stock Price')
plt.ylabel('Profit and loss')
plt.legend()
#ax.spines['top'].set_visible(False) # Top border removed
#ax.spines['right'].set_visible(False) # Right border removed
#ax.tick_params(top=False, right=False) # Removes the tick-marks on the RHS
plt.grid()
plt.show()
payoff_short_call = call_payoff(sT, strike_price_short_call, premium_short_call) * -1.0
fig, ax = plt.subplots()
ax.spines['bottom'].set_position('zero')
ax.plot(sT,payoff_short_call,label='Short 130 Strike Call',color='r')
plt.xlabel('Stock Price')
plt.ylabel('Profit and loss')
plt.legend()
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Put Payoff
###Code
def put_payoff(sT, strike_price, premium):
return np.where(sT < strike_price, strike_price - sT, 0) - premium
# stock price
spot_price = 100
# Long put
strike_price_long_put = 60
premium_long_put = 1.0
# Short put
strike_price_short_put = 70
premium_short_put = 1.7
# Stock price range at expiration of the put
sT = np.arange(50,150,1)
payoff_long_put = put_payoff(sT, strike_price_long_put, premium_long_put)
fig, ax = plt.subplots()
ax.spines['bottom'].set_position('zero')
ax.plot(sT,payoff_long_put,label='Long 60 Strike Put',color='y')
plt.xlabel('Stock Price')
plt.ylabel('Profit and loss')
plt.legend()
plt.grid()
plt.show()
payoff_short_put = put_payoff(sT, strike_price_short_put, premium_short_put) * -1.0
fig, ax = plt.subplots()
ax.spines['bottom'].set_position('zero')
ax.plot(sT,payoff_short_put,label='Short 70 Strike Put',color='m')
plt.xlabel('Stock Price')
plt.ylabel('Profit and loss')
plt.legend()
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Iron Condor Payoff
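Summing the four legs gives the full position. As a quick sanity check on the combined payoff (using the payoff arrays computed above): the maximum profit is the net premium received, realised between the two short strikes, and the maximum loss occurs beyond either long strike.

```python
# assumes the four payoff arrays from the cells above
payoff = payoff_long_call + payoff_short_call + payoff_long_put + payoff_short_put
print('max profit:', payoff.max())
print('max loss  :', payoff.min())
```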
###Code
payoff = payoff_long_call + payoff_short_call + payoff_long_put + payoff_short_put
fig, ax = plt.subplots()
ax.spines['bottom'].set_position('zero')
ax.plot(sT,payoff_long_call,'--',label='Long 140 Strike Call',color='g')
ax.plot(sT,payoff_short_call,'--',label='Short 130 Strike Call',color='r')
ax.plot(sT,payoff_long_put,'--',label='Long 60 Strike Put',color='y')
ax.plot(sT,payoff_short_put,'--',label='Short 70 Strike Put',color='m')
ax.plot(sT,payoff,label='Iron Condor')
plt.xlabel('Stock Price')
plt.ylabel('Profit and loss')
plt.legend()
plt.grid()
plt.show()
###Output
_____no_output_____ |
notebooks/basics/Lab-Vectors_Factors.ipynb | ###Markdown
VECTORS and FACTORS in R

Welcome! By the end of this notebook, you will have learned about **vectors and factors**, two very important data types in R.

Table of Contents

* About the Dataset
* Vectors
* Vector Operations
* Subsetting Vectors
* Factors

Estimated Time Needed: 25 min

About the Dataset

You have received many movie recommendations from your friends and compiled all of the recommendations into a table, with information about each movie. This table has one row for each movie and several columns.

- **name** - The name of the movie
- **year** - The year the movie was released
- **length_min** - The length of the movie in minutes
- **genre** - The genre of the movie
- **average_rating** - Average rating on IMDb
- **cost_millions** - The movie's production cost in millions
- **sequences** - The number of sequences
- **foreign** - Indicative of whether the movie is foreign (1) or domestic (0)
- **age_restriction** - The age restriction for the movie

Here's what the data looks like:

**Remember**: To run the grey code cells in this exercise, click on the code cell, and then press Shift + Enter.

Vectors

**Vectors** are strings of numbers, characters or logical data (one-dimensional arrays). In other words, a vector is a simple tool to store your grouped data. In R, you create a vector with the combine function **c()**. You place the vector elements separated by a comma between the brackets. Vectors will be very useful in the future as they allow you to apply operations on a series of data easily. Note that the items in a vector must all be of the same class: for example, all numeric, all character, or all logical.

Numeric, Character and Logical Vectors

Let's say we have four movie release dates (1985, 1999, 2015, 1964) and we want to assign them to a single variable, `release_year`. This means we'll need to create a vector using **`c()`**. Using numbers, this becomes a **numeric vector**.
###Code
release_year <- c(1985, 1999, 2015, 1964)
release_year
###Output
_____no_output_____
###Markdown
What if we use quotation marks? Then this becomes a **character vector**.
###Code
# Create genre vector and assign values to it
titles <- c("Toy Story", "Akira", "The Breakfast Club")
titles
###Output
_____no_output_____
###Markdown
There are also **logical vectors**, which consist of TRUE's and FALSE's. They're particularly important when you want to check a vector's contents:
###Code
titles == "Akira" # which item in `titles` is equal to "Akira"?
###Output
_____no_output_____
###Markdown
[Tip] TRUE and FALSE in R

Did you know? R only recognizes `TRUE`, `FALSE`, `T` and `F` as special values for true and false. That means all other spellings, including *True* and *true*, are not interpreted by R as logical values.

Vector Operations

Adding more elements to a vector

You can add more elements to a vector with the same **`c()`** function you use to create vectors:
###Code
release_year <- c(1985, 1999, 2015, 1964)
release_year
release_year <- c(release_year, 2016:2018)
release_year
###Output
_____no_output_____
###Markdown
Length of a vector

How do we check how many items there are in a vector? We can use the **length()** function:
###Code
release_year
length(release_year)
###Output
_____no_output_____
###Markdown
Head and Tail of a vector

We can also retrieve just the **first few items** using the **head()** function:
###Code
head(release_year) #first six items
head(release_year, n = 2) #first n items
head(release_year, 2)
###Output
_____no_output_____
###Markdown
We can also retrieve just the **last few items** using the **tail()** function:
###Code
tail(release_year) #last six items
tail(release_year, 2) #last two items
###Output
_____no_output_____
###Markdown
Sorting a vector

We can also sort a vector:
###Code
sort(release_year)
###Output
_____no_output_____
###Markdown
We can also **sort in decreasing order**:
###Code
sort(release_year, decreasing = TRUE)
###Output
_____no_output_____
###Markdown
But if you just want the minimum and maximum values of a vector, you can use the **`min()`** and **`max()`** functions
###Code
min(release_year)
max(release_year)
###Output
_____no_output_____
###Markdown
Average of Numbers

If you want to check the average cost of movies produced in 2014, what would you do? Of course, one way is to add all the numbers together, then divide by the number of movies:
###Code
cost_2014 <- c(8.6, 8.5, 8.1)
# sum results in the sum of all elements in the vector
avg_cost_2014 <- sum(cost_2014)/3
avg_cost_2014
###Output
_____no_output_____
###Markdown
You can also use the **mean()** function to find the average of the numeric values in a vector:
###Code
mean_cost_2014 <- mean(cost_2014)
mean_cost_2014
###Output
_____no_output_____
###Markdown
Giving Names to Values in a Vector

Suppose you want to remember which year corresponds to which movie. With vectors, you can give names to the elements of a vector using the **names()** function:
###Code
#Creating a year vector
release_year <- c(1985, 1999, 2010, 2002)
#Assigning names
names(release_year) <- c("The Breakfast Club", "American Beauty", "Black Swan", "Chicago")
release_year
###Output
_____no_output_____
###Markdown
Now, you can retrieve the values based on the names:
###Code
release_year[c("American Beauty", "Chicago")]
###Output
_____no_output_____
###Markdown
Note that the values of the vector are still the years. We can see this in action by adding a number to the first item:
###Code
release_year[1] + 100 #adding 100 to the first item changes the year
###Output
_____no_output_____
###Markdown
And you can retrieve the names of the vector using **`names()`**
###Code
names(release_year)[1:3]
###Output
_____no_output_____
###Markdown
 Summarizing Vectors You can also use the **summary()** function for simple descriptive statistics: minimum, first quartile, median, mean, third quartile, and maximum:
###Code
summary(cost_2014)
###Output
_____no_output_____
###Markdown
 Using Logical Operations on Vectors A vector can also be comprised of **`TRUE`** and **`FALSE`**, which are special **logical values** in R. These boolean values are used to indicate whether a condition is true or false. Let's check whether a movie year of 1997 is newer than (**greater in value than**) 2000.
###Code
movie_year <- 1997
movie_year > 2000
###Output
_____no_output_____
###Markdown
You can also make a logical comparison across multiple items in a vector. Which movie release years here are "greater" than 2014?
###Code
movies_years <- c(1998, 2010, 2016)
movies_years > 2014
###Output
_____no_output_____
###Markdown
We can also check for **equivalence**, using **`==`**. Let's check which movie year is equal to 2015.
###Code
movies_years == 2015 # is equal to 2015?
###Output
_____no_output_____
###Markdown
If you want to check which ones are **not equal** to 2015, you can use **`!=`**
###Code
movies_years != 2015
###Output
_____no_output_____
###Markdown
 [Tip] Logical Operators in R You can do a variety of logical operations in R including: Checking equivalence: **1 == 2** Checking non-equivalence: **TRUE != FALSE** Greater than: **100 > 1** Greater than or equal to: **100 >= 1** Less than: **1 < 2** Less than or equal to: **1 <= 2** Subsetting Vectors What if you wanted to retrieve the second year from the following **vector of movie years**?
###Code
movie_years <- c(1985, 1999, 2002, 2010, 2012)
movie_years
###Output
_____no_output_____
###Markdown
To retrieve the **second year**, you can use square brackets **`[]`**:
###Code
movie_years[2] #second item
###Output
_____no_output_____
###Markdown
To retrieve the **third year**, you can use:
###Code
movie_years[3]
###Output
_____no_output_____
###Markdown
And if you want to retrieve **multiple items**, you can pass in a vector:
###Code
movie_years[c(1,3)] #first and third items
###Output
_____no_output_____
###Markdown
 **Retrieving a vector without some of its items** To retrieve a vector without an item, you can use negative indexing. For example, the following returns a vector slice **without the first item**.
###Code
titles <- c("Black Swan", "Jumanji", "City of God", "Toy Story", "Casino")
titles[-1]
###Output
_____no_output_____
###Markdown
You can save the new vector using a variable:
###Code
new_titles <- titles[-1] #removes "Black Swan", the first item
new_titles
###Output
_____no_output_____
###Markdown
 **Missing Values (NA)** Sometimes values in a vector are missing, and you can mark them with NA, which is a special value in R for "Not Available". For example, if you don't know the age restriction for some movies, you can use NA.
###Code
age_restric <- c(14, 12, 10, NA, 18, NA)
age_restric
###Output
_____no_output_____
###Markdown
[Tip] Checking NA in R You can check if a value is NA by using the **is.na()** function, which returns TRUE or FALSE. Check if NA: **is.na(NA)** Check if not NA: **!is.na(2)** Subsetting vectors based on a logical condition What if we want to know which movies were created after year 2000? We can simply apply a logical comparison across all the items in a vector:
###Code
release_year > 2000
###Output
_____no_output_____
###Markdown
To retrieve the actual movie years after year 2000, you can simply subset the vector using the logical vector within **square brackets "[]"**:
###Code
release_year[release_year > 2000] #returns the elements for which the condition is TRUE
###Output
_____no_output_____
###Markdown
As you may notice, subsetting vectors in R works by retrieving items that were TRUE for the provided condition. For example, `year[year > 2000]` can be verbally explained as: _"From the vector `year`, return only values where the values are TRUE for `year > 2000`"_.You can even manually write out TRUE or T for the values you want to subset:
###Code
release_year
release_year[c(T, F, F, F)] #returns the values that are TRUE
###Output
_____no_output_____
###Markdown
 Factors Factors are variables in R which take on a limited number of different values; such variables are often referred to as **categorical variables**. The difference between a categorical variable and a continuous variable is that a categorical variable can belong to a limited number of categories. A continuous variable, on the other hand, can correspond to an infinite number of values. For example, the height of a tree is a continuous variable, but the titles of books would be a categorical variable. One of the most important uses of factors is in statistical modeling; since categorical variables enter into statistical models differently than continuous variables, storing data as factors ensures that the modeling functions will treat such data correctly. Let's start with a _**vector**_ of genres:
###Code
genre_vector <- c("Comedy", "Animation", "Crime", "Comedy", "Animation")
genre_vector
###Output
_____no_output_____
###Markdown
 As you may have noticed, you can theoretically group the items above into three categories of genres: _Animation_, _Comedy_ and _Crime_. In R terms, we call these categories **"factor levels"**. The function **factor()** (or **as.factor()**, as used below) converts a vector into a factor, and creates a factor level for each unique element.
###Code
genre_factor <- as.factor(genre_vector)
levels(genre_factor)
###Output
_____no_output_____
###Markdown
 Summarizing Factors When you have a large vector, it becomes difficult to identify which levels are most common (e.g., "How many 'Comedy' movies are there?"). To answer this, we can use **summary()**, which produces a **frequency table** as a named vector.
###Code
summary(genre_factor)
###Output
_____no_output_____
###Markdown
And recall that you can sort the values of the table using **sort()**.
###Code
sort(summary(genre_factor)) #sorts values by ascending order
###Output
_____no_output_____
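###Markdown
 Since the summary is a named vector, you can also pull out a single count by name, answering the earlier question ("How many 'Comedy' movies are there?") directly:
###Code
summary(genre_factor)["Comedy"]
###Output
_____no_output_____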
###Markdown
Ordered factors There are two types of categorical variables: a **nominal categorical variable** and an **ordinal categorical variable**.A **nominal variable** is a categorical variable for names, without an implied order. This means that it is impossible to say that 'one is better or larger than the other'. For example, consider **movie genre** with the categories _Comedy_, _Animation_, _Crime_, _Comedy_, _Animation_. Here, there is no implicit order of low-to-high or high-to-low between the categories. In contrast, **ordinal variables** do have a natural ordering. Consider for example, **movie length** with the categories: _Very short_, _Short_ , _Medium_, _Long_, _Very long_. Here it is obvious that _Medium_ stands above _Short_, and _Long_ stands above _Medium_.
###Code
movie_length <- c("Very Short", "Short", "Medium","Short", "Long",
"Very Short", "Very Long")
movie_length
###Output
_____no_output_____
###Markdown
__`movie_length`__ should be converted to an ordinal factor since its categories have a natural ordering. By default, the function factor() transforms `movie_length` into an unordered factor. To create an **ordered factor**, you have to add two additional arguments: `ordered` and `levels`. - `ordered`: When set to `TRUE` in `factor()`, you indicate that the factor is ordered. - `levels`: In this argument in `factor()`, you give the values of the factor in the correct order.
###Code
movie_length_ordered <- factor(movie_length, ordered = TRUE ,
levels = c("Very Short" , "Short" , "Medium",
"Long","Very Long"))
movie_length_ordered
###Output
_____no_output_____
###Markdown
 Now, let's look at the summary of the ordered factor, `movie_length_ordered`:
###Code
summary(movie_length_ordered)
###Output
_____no_output_____ |
codes/eurocodes/ec2/raw_ch4_concrete_cover.ipynb | ###Markdown
 Eurocode 2 - Chapter 4 - Concrete cover (raw functions)
###Code
from streng.codes.eurocodes.ec2.raw.ch4 import concrete_cover
###Output
_____no_output_____
###Markdown
cmindur
###Code
print(concrete_cover.cmindur.__doc__)
cmindur = concrete_cover.cmindur(cat = 'S3', env = 'XS1')
print(f'cmindur = {cmindur}mm')
###Output
cmindur = 30mm
|
4. Sequences, Time Series and Prediction/S+P_Week_1_Lesson_2.ipynb | ###Markdown
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
 Lesson 2 In the screencast for this lesson I go through a few scenarios for time series. This notebook contains the code for that with a few little extras! :) Setup
###Code
!pip install -U tf-nightly-2.0-preview
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
def plot_series(time, series, format="-", start=0, end=None, label=None):
plt.plot(time[start:end], series[start:end], format, label=label)
plt.xlabel("Time")
plt.ylabel("Value")
if label:
plt.legend(fontsize=14)
plt.grid(True)
###Output
_____no_output_____
###Markdown
Trend and Seasonality
###Code
def trend(time, slope=0):
return slope * time
###Output
_____no_output_____
###Markdown
Let's create a time series that just trends upward:
###Code
time = np.arange(4 * 365 + 1)
baseline = 10
series = trend(time, 0.1)
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
###Output
_____no_output_____
###Markdown
Now let's generate a time series with a seasonal pattern:
###Code
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.4,
np.cos(season_time * 2 * np.pi),
1 / np.exp(3 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
baseline = 10
amplitude = 40
series = seasonality(time, period=365, amplitude=amplitude)
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
###Output
_____no_output_____
###Markdown
Now let's create a time series with both trend and seasonality:
###Code
slope = 0.05
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
###Output
_____no_output_____
###Markdown
Noise In practice few real-life time series have such a smooth signal. They usually have some noise, and the signal-to-noise ratio can sometimes be very low. Let's generate some white noise:
###Code
def white_noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
noise_level = 5
noise = white_noise(time, noise_level, seed=42)
plt.figure(figsize=(10, 6))
plot_series(time, noise)
plt.show()
###Output
_____no_output_____
###Markdown
Now let's add this white noise to the time series:
###Code
series += noise
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
###Output
_____no_output_____
###Markdown
All right, this looks realistic enough for now. Let's try to forecast it. We will split it into two periods: the training period and the validation period (in many cases, you would also want to have a test period). The split will be at time step 1000.
###Code
split_time = 1000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
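# A minimal sketch of a baseline forecast for this series: predict the previous
# time step (a "naive" forecast) and measure the mean absolute error on the
# validation period defined above.
naive_forecast = series[split_time - 1:-1]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, label="Series")
plot_series(time_valid, naive_forecast, label="Naive forecast")
plt.show()
print("Naive forecast MAE:", keras.metrics.mean_absolute_error(x_valid, naive_forecast).numpy())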
def autocorrelation(time, amplitude, seed=None):
rnd = np.random.RandomState(seed)
φ1 = 0.5
φ2 = -0.1
ar = rnd.randn(len(time) + 50)
ar[:50] = 100
for step in range(50, len(time) + 50):
ar[step] += φ1 * ar[step - 50]
ar[step] += φ2 * ar[step - 33]
return ar[50:] * amplitude
def autocorrelation(time, amplitude, seed=None):
rnd = np.random.RandomState(seed)
φ = 0.8
ar = rnd.randn(len(time) + 1)
for step in range(1, len(time) + 1):
ar[step] += φ * ar[step - 1]
return ar[1:] * amplitude
series = autocorrelation(time, 10, seed=42)
plot_series(time[:200], series[:200])
plt.show()
series = autocorrelation(time, 10, seed=42) + trend(time, 2)
plot_series(time[:200], series[:200])
plt.show()
series = autocorrelation(time, 10, seed=42) + seasonality(time, period=50, amplitude=150) + trend(time, 2)
plot_series(time[:200], series[:200])
plt.show()
series = autocorrelation(time, 10, seed=42) + seasonality(time, period=50, amplitude=150) + trend(time, 2)
series2 = autocorrelation(time, 5, seed=42) + seasonality(time, period=50, amplitude=2) + trend(time, -1) + 550
series[200:] = series2[200:]
#series += noise(time, 30)
plot_series(time[:300], series[:300])
plt.show()
def impulses(time, num_impulses, amplitude=1, seed=None):
rnd = np.random.RandomState(seed)
impulse_indices = rnd.randint(len(time), size=10)
series = np.zeros(len(time))
for index in impulse_indices:
series[index] += rnd.rand() * amplitude
return series
series = impulses(time, 10, seed=42)
plot_series(time, series)
plt.show()
def autocorrelation(source, φs):
ar = source.copy()
max_lag = len(φs)
for step, value in enumerate(source):
for lag, φ in φs.items():
if step - lag > 0:
ar[step] += φ * ar[step - lag]
return ar
signal = impulses(time, 10, seed=42)
series = autocorrelation(signal, {1: 0.99})
plot_series(time, series)
plt.plot(time, signal, "k-")
plt.show()
signal = impulses(time, 10, seed=42)
series = autocorrelation(signal, {1: 0.70, 50: 0.2})
plot_series(time, series)
plt.plot(time, signal, "k-")
plt.show()
series_diff1 = series[1:] - series[:-1]
plot_series(time[1:], series_diff1)
from pandas.plotting import autocorrelation_plot
autocorrelation_plot(series)
from statsmodels.tsa.arima_model import ARIMA
model = ARIMA(series, order=(5, 1, 0))
model_fit = model.fit(disp=0)
print(model_fit.summary())
import pandas as pd
df = pd.read_csv("sunspots.csv", parse_dates=["Date"], index_col="Date")
series = df["Monthly Mean Total Sunspot Number"].asfreq("1M")
series.head()
series.plot(figsize=(12, 5))
series["1995-01-01":].plot()
series.diff(1).plot()
plt.axis([0, 100, -50, 50])
from pandas.plotting import autocorrelation_plot
autocorrelation_plot(series)
autocorrelation_plot(series.diff(1)[1:])
autocorrelation_plot(series.diff(1)[1:].diff(11 * 12)[11*12+1:])
plt.axis([0, 500, -0.1, 0.1])
autocorrelation_plot(series.diff(1)[1:])
plt.axis([0, 50, -0.1, 0.1])
116.7 - 104.3
[series.autocorr(lag) for lag in range(1, 50)]
# Signature of pd.read_csv, kept here for reference (pasted from the pandas docs):
# pd.read_csv(filepath_or_buffer, sep=',', delimiter=None, header='infer', names=None, index_col=None, usecols=None, squeeze=False, prefix=None, mangle_dupe_cols=True, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skipinitialspace=False, skiprows=None, skipfooter=0, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, skip_blank_lines=True, parse_dates=False, infer_datetime_format=False, keep_date_col=False, date_parser=None, dayfirst=False, iterator=False, chunksize=None, compression='infer', thousands=None, decimal=b'.', lineterminator=None, quotechar='"', quoting=0, doublequote=True, escapechar=None, comment=None, encoding=None, dialect=None, tupleize_cols=None, error_bad_lines=True, warn_bad_lines=True, delim_whitespace=False, low_memory=True, memory_map=False, float_precision=None)
# Read a comma-separated values (csv) file into DataFrame.
from pandas.plotting import autocorrelation_plot
series_diff = series
for lag in range(50):
series_diff = series_diff[1:] - series_diff[:-1]
autocorrelation_plot(series_diff)
import pandas as pd
series_diff1 = pd.Series(series[1:] - series[:-1])
autocorrs = [series_diff1.autocorr(lag) for lag in range(1, 60)]
plt.plot(autocorrs)
plt.show()
###Output
_____no_output_____ |
status_random_forest_mba_placement.ipynb | ###Markdown
Dataset link: https://www.kaggle.com/benroshan/factors-affecting-campus-placement Uploading dataset
###Code
from google.colab import files
uploaded = files.upload()
###Output
_____no_output_____
###Markdown
Initialization
###Code
import pandas as pd
import numpy as np
from itertools import product
from sklearn.model_selection import train_test_split
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
import warnings
warnings.filterwarnings("ignore")
df = pd.read_csv('Placement_Data_Full_Class.csv', index_col='sl_no').reset_index(drop=True)
df.head()
###Output
_____no_output_____
###Markdown
Preparing data
###Code
df_full_train, df_test = train_test_split(df, test_size=0.2, random_state=42)
df_train, df_val = train_test_split(df_full_train, test_size=0.25, random_state=42)
df_train = df_train.reset_index(drop=True)
df_train.head()
numerical = ['hsc_p', 'degree_p', 'ssc_p']
categorical = ['gender', 'ssc_b', 'hsc_b', 'hsc_s', 'degree_t', 'workex', 'specialisation']
classification_target = ['status']
regression_target = ['salary']
X_train = df_train[numerical+categorical]
y_train = pd.get_dummies(df_train[classification_target])['status_Placed']
X_val = df_val[numerical+categorical]
y_val = pd.get_dummies(df_val[classification_target])['status_Placed']
X_train.head()
y_train.head()
###Output
_____no_output_____
###Markdown
Creating a Pipeline
###Code
def create_new_pipeline(params):
numerical_transformer = SimpleImputer(strategy='mean')
categorical_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='most_frequent')),
('encoding', OneHotEncoder(drop='first'))
])
preprocessor = ColumnTransformer(
transformers=[
('numerical', numerical_transformer, numerical),
('categorical', categorical_transformer, categorical)
])
scaler = StandardScaler()
logreg = RandomForestClassifier(
n_jobs=-1,
random_state=42,
**params
)
pipeline = Pipeline(
steps=[
('preprocessing', preprocessor),
('scaling', scaler),
('model', logreg)
]
)
return pipeline
###Output
_____no_output_____
###Markdown
Hyperparameter Tuning
###Code
search_space = {
'n_estimators': np.linspace(10, 1000, num=20),
'min_samples_split': np.linspace(2, 11, num=10),
'max_features': ['sqrt', 'log2', None]
}
max_score = 0
best_params = {}
for n_estimators, min_samples_split, max_features in product(*search_space.values()):
params = {
'n_estimators': int(n_estimators),
'min_samples_split': int(min_samples_split),
'max_features': max_features
}
pipeline = create_new_pipeline(params)
pipeline.fit(X_train, y_train)
score = pipeline.score(X_val, y_val)
if score > max_score:
max_score = score
best_params = params
best_params
max_score
###Output
_____no_output_____
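###Markdown
 The loop above is a manual exhaustive grid search scored on a single validation split. As an alternative sketch (using scikit-learn's GridSearchCV with cross-validation instead of the fixed split; the parameter names refer to the 'model' step of the pipeline):
###Code
from sklearn.model_selection import GridSearchCV

param_grid = {
    'model__n_estimators': [10, 100, 500, 1000],
    'model__min_samples_split': [2, 5, 10],
    'model__max_features': ['sqrt', 'log2', None],
}
grid_search = GridSearchCV(create_new_pipeline({}), param_grid, cv=5, n_jobs=-1)
# grid_search.fit(X_train, y_train)
# grid_search.best_params_, grid_search.best_score_
###Output
_____no_output_____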
###Markdown
Training
###Code
X = df_full_train[numerical+categorical]
y = pd.get_dummies(df_full_train[classification_target])['status_Placed']
pipeline = create_new_pipeline(best_params)
pipeline.fit(X, y)
###Output
_____no_output_____
###Markdown
Validation
###Code
pipeline.score(X, y)
###Output
_____no_output_____ |
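###Markdown
 Note that the score above is computed on the same data the final pipeline was fit on. A quick additional check on the held-out test split, as a sketch that reuses the column lists defined earlier:
###Code
X_test = df_test[numerical+categorical]
y_test = pd.get_dummies(df_test[classification_target])['status_Placed']
pipeline.score(X_test, y_test)
###Output
_____no_output_____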
chebyshev_eig_estimate.ipynb | ###Markdown
 Let's assume that $\lambda_k$ monotonically increase and obey the following law: $$\lambda_{max} - \lambda_k = bc^k$$ or $$ \lambda_k = \lambda_{max} - bc^k. $$ This could be seen as a function (for discrete values) $$ y = a - bc^x. $$ We would like to run a nonlinear regression on it, but it's hard in this form. Let's do some transformations. First, let's get rid of $a$ by taking differences: $$ y(x+1) - y(x) = bc^{x}(1-c) $$ and if we define $Y(x) = y(x+1) - y(x)$, we can write $$ Y(x) = b(1-c)c^x. $$ Let's now say $B = b(1-c)$; then $$ Y = Bc^x, $$ which is amenable to a logarithmic transformation, $$ \ln(Y) = \ln(B) + x\ln(c), $$ which we can run a linear regression on. Then $\lambda_{max} \approx \lambda_k + bc^k$, right? :)
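In the code below, `polyfit` returns the slope and intercept of this line; from them we recover $c$ as the exponential of the slope, $B$ as the exponential of the intercept, $b = B/(1-c)$, and finally the estimate $\lambda_{max} \approx \lambda_n + b\,c^{n}$ after $n$ iterations.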
###Code
first = 2
from scipy import polyfit
import math
print('lambda_max estimates')
lambda_max_full = 1.0
lambda_max_last = 1.0
for n in range(first,N):
# Do LS using *all* values
Y = [lambdas[k+1] - lambdas[k] for k in range(0, n)]
X = [k for k in range(0, n)]
lnY = [math.log(y) for y in Y]
(ar, br) = polyfit(X, lnY, 1)
# Validation check
# for i in range(len(X)):
# print(lnY[i] - (ar*X[i] + br))
B = math.exp(br)
c_all = math.exp(ar)
b_all = B/(1-c_all)
new_lambda_max_full = lambdas[n] + b_all*math.pow(c_all,n)
# Do LS using last few values
last = 2
Y = [lambdas[k+1] - lambdas[k] for k in range(n-last, n)]
X = [k for k in range(n-last, n)]
lnY = [math.log(y) for y in Y]
(ar, br) = polyfit(X, lnY, 1)
B = math.exp(br)
c_last = math.exp(ar)
b_last = B/(1-c_last)
new_lambda_max_last = lambdas[n] + b_last*math.pow(c_last,n)
if n == first:
print('%2d its: %.3f | %.3f' %
(n+1, new_lambda_max_full, new_lambda_max_last))
else:
print('%2d its: %.3f [%4.2f%%] | %.3f [%4.2f%%]' %
(n+1, new_lambda_max_full, 100*(new_lambda_max_full/lambda_max_full - 1.0),
new_lambda_max_last, 100*(new_lambda_max_last/lambda_max_last - 1.0)))
lambda_max_full = new_lambda_max_full
lambda_max_last = new_lambda_max_last
###Output
lambda_max estimates
3 its: 1.708 | 1.708
4 its: 1.776 [3.99%] | 1.829 [7.11%]
5 its: 1.815 [2.18%] | 1.860 [1.67%]
6 its: 1.838 [1.29%] | 1.872 [0.64%]
7 its: 1.854 [0.85%] | 1.881 [0.49%]
8 its: 1.865 [0.63%] | 1.890 [0.50%]
9 its: 1.875 [0.51%] | 1.900 [0.51%]
10 its: 1.883 [0.44%] | 1.910 [0.51%]
11 its: 1.891 [0.40%] | 1.918 [0.45%]
12 its: 1.897 [0.36%] | 1.926 [0.40%]
13 its: 1.904 [0.32%] | 1.933 [0.34%]
14 its: 1.909 [0.29%] | 1.939 [0.32%]
15 its: 1.914 [0.26%] | 1.943 [0.22%]
16 its: 1.919 [0.24%] | 1.947 [0.21%]
17 its: 1.923 [0.22%] | 1.952 [0.23%]
18 its: 1.927 [0.20%] | 1.954 [0.14%]
19 its: 1.930 [0.18%] | 1.957 [0.12%]
20 its: 1.933 [0.17%] | 1.959 [0.11%]
|
notebooks/Step3 - Prediction.ipynb | ###Markdown
Image PredictionWe want to be able to perform predictions on arbitrary image sizes, but the network has specifically been trained to process 512x512 images. Passing larger images takes up a lot of memory, so I thought that one way to get around that would be to chop up the image into 512x512 pieces, after which each piece is passed through the network and merged together afterwards. The end result is a method that works on arbitrary image sizes. This notebook is primarily for testing that this functionality works as intended.
###Code
import os
from copy import deepcopy
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
import cv2
os.chdir('/home/idl/Documents/InsightProject/genImgInpainting/')
import ImageChunker
# SETTINGS
SAMPLE_IMAGE = 'data/sample_imgs/2371732961.jpg'
MASK_IMAGE = 'examples/center_mask_256.png'
BATCH_SIZE = 4
###Output
_____no_output_____
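###Markdown
 The `ImageChunker` helper imported above is not shown in this notebook. As a rough illustration only (a hypothetical helper, assuming simple fixed-size tiling with overlap), the start offsets of the chunks could be computed like this:
###Code
def chunk_starts(length, chunk_size, overlap):
    """Start offsets of overlapping, fixed-size chunks covering `length` pixels."""
    if length <= chunk_size:
        return [0]
    step = chunk_size - overlap
    starts = list(range(0, length - chunk_size, step))
    starts.append(length - chunk_size)  # align the final chunk with the image edge
    return starts

chunk_starts(704, 256, 20)  # e.g. column offsets for a 704-pixel-wide image
###Output
_____no_output_____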
###Markdown
 Sample Images These are the images and masks that we will attempt to pass through the network; they are all either too small or too large in one dimension.
###Code
SAMPLE_IMAGE.split('/')[-1]
im = np.array(Image.open(SAMPLE_IMAGE))  # load the sample image before displaying it
plt.imshow(im)
# Image samplings
crops = [
[300, 200], [512, 200], [800, 200],
[300, 512], [512, 512], [800, 512],
[300, 1200], [512, 1200], [800, 1500],
]
# Setup the figure
_, axes = plt.subplots(3, 3, figsize=(15, 15))
# Set random seed
np.random.seed(7)
# Lists for saving images and masks
imgs, masks = [], []
# Plot images
for crop, ax in zip(crops, axes.flatten()):
# Load image
im = Image.open(SAMPLE_IMAGE).resize((2048, 2048))
msk_ori = Image.open(MASK_IMAGE).resize((2048, 2048))
# Crop image
h, w = im.height, im.width
left = np.random.randint(0, w - crop[1])
right = left + crop[1]
upper = np.random.randint(0, h - crop[0])
lower = upper + crop[0]
im = im.crop((left, upper, right, lower))
# Create masked array
im = np.array(im) / 255
mask = np.array(msk_ori.crop((left, upper, right, lower))) / 255
im[mask==1] = 0
# Store for prediction
imgs.append(im)
masks.append(mask)
# Show image
ax.imshow(im)
ax.set_title("{}x{}".format(crop[0], crop[1]))
###Output
_____no_output_____
###Markdown
 Model Loading We'll load the model trained on ImageNet
###Code
# Load image
im = cv2.imread(SAMPLE_IMAGE)
w, h, chs = im.shape
print(im.shape)
msk_ori = cv2.imread(MASK_IMAGE)
msk_ori = cv2.resize(msk_ori, dsize=(h, w), interpolation=cv2.INTER_AREA)
print(msk_ori.shape)
plt.imshow(msk_ori)
img_ori = cv2.imread(SAMPLE_IMAGE)
img_h, img_w, _ = img_ori.shape
print(img_w, img_h)
# Load mask txt file
bboxes = []
with open('/home/idl/Downloads/generative-inpainting-pytorch/data/sample_imgs/2371732961.txt', 'r') as fd :
while True:
line = fd.readline()
if not line:
break
lst = line.rstrip().split(' ')[1:]
w = int(float(lst[2])*img_w)
h = int(float(lst[3])*img_h)
x = int(float(lst[0])*img_w) - w//2
y = int(float(lst[1])*img_h) - h//2
bboxes.append([x, y, w, h])
print(bboxes)
# Used for chunking up images & stitching them back together
chunker = ImageChunker(256, 256, 20)
chunked_images = chunker.dimension_preprocess(deepcopy(im))
chunked_masks = chunker.dimension_preprocess(deepcopy(msk_ori))
# pred_imgs = model.predict([chunked_images, chunked_masks])
# reconstructed_image = chunker.dimension_postprocess(pred_imgs, img)
for im in chunked_images:
print(im.shape)
for ii in range(len(chunked_images)) :
cv2.imwrite(f"/home/idl/Downloads/generative-inpainting-pytorch/data/sample_imgs2/img{ii}.jpg", chunked_images[ii])
cv2.imwrite(f"/home/idl/Downloads/generative-inpainting-pytorch/data/sample_imgs2/msk{ii}.jpg", chunked_masks[ii])
import glob
fnames = glob.glob("/home/idl/Downloads/generative-inpainting-pytorch/data/sample_imgs2/*.jpg")
for fname in fnames :
msk = fname.replace('imgs2/img', 'imgs2/msk')
dest = fname.replace('sample_imgs2/', 'outputs/')
print(fname, msk, dest)
msk_img = cv2.imread(msk)
if np.max(msk_img) == 0 :
!scp {fname} {dest}
else :
!python test_single.py --image {fname} --mask {msk} --output {dest}
###Output
Traceback (most recent call last):
File "test_single.py", line 5, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
Traceback (most recent call last):
File "test_single.py", line 5, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
Traceback (most recent call last):
File "test_single.py", line 5, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
Traceback (most recent call last):
File "test_single.py", line 5, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
Traceback (most recent call last):
File "test_single.py", line 5, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
Traceback (most recent call last):
File "test_single.py", line 5, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
Traceback (most recent call last):
File "test_single.py", line 5, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
Traceback (most recent call last):
File "test_single.py", line 5, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
Traceback (most recent call last):
File "test_single.py", line 5, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
Traceback (most recent call last):
File "test_single.py", line 5, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
Traceback (most recent call last):
File "test_single.py", line 5, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
Traceback (most recent call last):
File "test_single.py", line 5, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
Traceback (most recent call last):
File "test_single.py", line 5, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
Traceback (most recent call last):
File "test_single.py", line 5, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
Traceback (most recent call last):
File "test_single.py", line 5, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
Traceback (most recent call last):
File "test_single.py", line 5, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
Traceback (most recent call last):
File "test_single.py", line 5, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
Traceback (most recent call last):
File "test_single.py", line 5, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
Traceback (most recent call last):
File "test_single.py", line 5, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
Traceback (most recent call last):
File "test_single.py", line 5, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
Traceback (most recent call last):
File "test_single.py", line 5, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
Traceback (most recent call last):
File "test_single.py", line 5, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
Traceback (most recent call last):
File "test_single.py", line 5, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
Traceback (most recent call last):
File "test_single.py", line 5, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
|
Python Tutorials/Python Dictionary Tutorial.ipynb | ###Markdown
 PYTHON DICTIONARIES Create, Access, Modify, Delete A dictionary is an unordered, changeable, and indexed Python collection. This gives it a lot of functionality, which we will explore today.
###Code
my_programming_class = {
"Susan" : "A",
"Mike" : "C",
"Tom" :"B",
"Carl" : "B+",
"Riley" : "C"
}
###Output
_____no_output_____
###Markdown
 Above we have a dictionary containing the grades of my programming class. *Dictionaries are created using curly braces {} and listing key and value pairs.* The keys in this dictionary are the students' names, while the values are their current letter grades. Using a dictionary to manage these grades is rather useful, for I can update someone's grade by their key (name) rather than an index.
###Code
my_programming_class["Riley"] = 'D'
print(my_programming_class)
###Output
{'Mike': 'C', 'Tom': 'B', 'Riley': 'D', 'Carl': 'B+', 'Susan': 'A'}
###Markdown
 I can access a specific student's grade by referencing their key.
###Code
print(my_programming_class["Tom"])
###Output
B
###Markdown
Or all of the grades of my class using the *values()* function
###Code
print(my_programming_class.values())
###Output
['C', 'B', 'D', 'B+', 'A']
###Markdown
Also if a student drops their is not issue, because i can just remove them from the dictionary all together using *del*.
###Code
del my_programming_class["Riley"]
print(my_programming_class)
###Output
{'Mike': 'C', 'Tom': 'B', 'Carl': 'B+', 'Susan': 'A'}
###Markdown
 Looping My programming class is no easy task. A B- minimum is required to pass. Riley already dropped after the last Python prompt because of his D in my class. As final grades are approaching, it would only make sense to write a script that automatically removes any students from my class that have not made the B- average.
###Code
passing_grades = ("A+", "A", "A-", "B+", "B", "B-")
print("Before cutoff")
print(my_programming_class)
for student,grade in my_programming_class.items():
if grade not in passing_grades:
del my_programming_class[student]
print("After cutoff")
print(my_programming_class)
###Output
Before cutoff
{'Mike': 'C', 'Tom': 'B', 'Carl': 'B+', 'Susan': 'A'}
After cutoff
{'Tom': 'B', 'Carl': 'B+', 'Susan': 'A'}
###Markdown
 There are a few things going on here, so let's break it down. `passing_grades = ("A+", "A", "A-", "B+", "B", "B-")` This line creates a tuple of all the acceptable passing grades. I am a rather stubborn grader and do not see myself changing the B- minimum pass requirement ever, hence why it is a tuple. `for student,grade in my_programming_class.items():` This is the for loop. As you can see, I am declaring two iterable members, `student` and `grade`. Having two iterable members in conjunction with `.items()` after my dictionary allows me to reference both the keys and the values stored within the dictionary. `if grade not in passing_grades:` Already I am taking advantage of having access to the key and value pair by comparing the grade (the value) to the tuple of acceptable grades. `del my_programming_class[student]` Here I am using the key to tell the dictionary which object I wish to delete from it. Because it is an iterable member, it holds the reference to the dictionary key and I do not have to pass the dictionary a string. There are other, simpler ways to loop through dictionaries that do not give you as much readability, but are just as useful. With only one iterable member, you iterate through the dictionary keys.
###Code
for students in my_programming_class:
print(students)
###Output
Tom
Carl
Susan
###Markdown
To access the values, you have to pass the key to the dictionary as an index.
###Code
for students in my_programming_class:
    print(my_programming_class[students])
###Output
B
B+
A
###Markdown
 While using one iterable member or two gives you the same functionality, using two can greatly increase readability by providing context for what both the keys and values of the dictionary represent. This makes it a far more Pythonic design choice. Merging
###Code
my_programming_class_2 = {
"Sally" : "B+",
"Mary" : "A-",
"Oscar" : "B"
}
###Output
_____no_output_____
###Markdown
 This semester I gained a second programming class that has performed rather well, but apparently my difficulty is too much for most students. Now I'm merging the classes into just one section and need to update my online gradebook. While Python is a great language, 2.7 does not offer any single built-in method to merge two dictionaries. This means we will have to get creative and combine methods.
###Code
def merge_class (x,y):
z = x.copy()
z.update(y)
return z
final_class = merge_class(my_programming_class,my_programming_class_2)
print(final_class)
###Output
{'Carl': 'B+', 'Susan': 'A', 'Oscar': 'B', 'Tom': 'B', 'Sally': 'B+', 'Mary': 'A-'}
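###Markdown
 As an aside, newer Python versions (not the 2.7 used in this notebook) do provide built-in ways to merge dictionaries; they are shown below only as comments since they are not valid 2.7 syntax.
###Code
# Python 3.5+: dict unpacking
# final_class = {**my_programming_class, **my_programming_class_2}
# Python 3.9+: the dictionary merge operator
# final_class = my_programming_class | my_programming_class_2
###Output
_____no_output_____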
|
loop_detection.ipynb | ###Markdown
Loop Detection using TensorFlow Object Detection API This notebook is adapted from the [Eager Few Shot Object Detection Colab](https://github.com/tensorflow/models/blob/master/research/object_detection/colab_tutorials/eager_few_shot_od_training_tf2_colab.ipynb), which demonstrate fine tuning of a (TF2 friendly) RetinaNet architecture on very few examples of a novel class after initializing from a pre-trained COCO checkpoint. I expanded it to a 3-class object detection proof of concept.
###Code
# Versions
import sys
import tensorflow as tf
print("Python version:", sys.version)
print("TensorFlow version: ", tf.__version__)
import matplotlib
import matplotlib.pyplot as plt
import os
import random
import io
import imageio
import glob
import numpy as np
from io import BytesIO
from PIL import Image, ImageDraw, ImageFont
from IPython.display import display, Javascript
from IPython.display import Image as IPyImage
from object_detection.utils import label_map_util
from object_detection.utils import config_util
from object_detection.utils import visualization_utils as viz_utils
from object_detection.builders import model_builder
%matplotlib inline
###Output
_____no_output_____
###Markdown
Utilities
###Code
def load_image_into_numpy_array(path):
"""Load an image from file into a numpy array.
Puts image into numpy array to feed into tensorflow graph.
Note that by convention we put it into a numpy array with shape
(height, width, channels), where channels=3 for RGB.
Args:
path: a file path.
Returns:
uint8 numpy array with shape (img_height, img_width, 3)
"""
img_data = tf.io.gfile.GFile(path, 'rb').read()
image = Image.open(BytesIO(img_data))
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(im_height, im_width, 3)).astype(np.uint8)
def plot_detections(image_np,
boxes,
classes,
scores,
category_index,
figsize=(12, 16),
image_name=None):
"""Wrapper function to visualize detections.
Args:
image_np: uint8 numpy array with shape (img_height, img_width, 3)
boxes: a numpy array of shape [N, 4]
classes: a numpy array of shape [N]. Note that class indices are 1-based,
and match the keys in the label map.
scores: a numpy array of shape [N] or None. If scores=None, then
this function assumes that the boxes to be plotted are groundtruth
boxes and plot all boxes as black with no classes or scores.
category_index: a dict containing category dictionaries (each holding
category index `id` and category name `name`) keyed by category indices.
figsize: size for the figure.
image_name: a name for the image file.
"""
image_np_with_annotations = image_np.copy()
viz_utils.visualize_boxes_and_labels_on_image_array(
image_np_with_annotations,
boxes,
classes,
scores,
category_index,
use_normalized_coordinates=True,
min_score_thresh=0.8)
if image_name:
plt.imsave(image_name, image_np_with_annotations)
else:
plt.imshow(image_np_with_annotations)
###Output
_____no_output_____
###Markdown
Loop data
###Code
# The loop images were downloaded from GCP (gs://bl831_loop_detection_data)
image_dir = 'bl831_loop_detection_data/gcp_training_images/images'
# Load the first 200 images as the training dataset
num_train = 200
train_images_np = []
for i in range(num_train):
image_path = os.path.join(image_dir, str(i).rjust(4, '0') + '.jpg')
# each element is a uint8 numpy array with shape (704, 480, 3)
train_images_np.append(load_image_into_numpy_array(image_path))
plt.rcParams['axes.grid'] = False
plt.rcParams['xtick.labelsize'] = False
plt.rcParams['ytick.labelsize'] = False
plt.rcParams['xtick.top'] = False
plt.rcParams['xtick.bottom'] = False
plt.rcParams['ytick.left'] = False
plt.rcParams['ytick.right'] = False
plt.rcParams['figure.figsize'] = [14, 7]
# Plot first 6 images
for idx, train_image_np in enumerate(train_images_np[:6]):
plt.subplot(2, 3, idx+1)
plt.imshow(train_image_np)
plt.show()
# By convention, our non-background classes start counting at 1. Given
# that we will be predicting just one class, we will therefore assign it a
# `class id` of 1.
num_classes = 3
nylon_class_id = 1
mitegen_class_id = 2
pin_class_id = 3
category_index = {nylon_class_id: {'id': nylon_class_id, 'name': 'nylon'},
mitegen_class_id: {'id': mitegen_class_id, 'name': 'mitegen'},
pin_class_id: {'id': pin_class_id, 'name': 'pin'}}
category_index
# Convert class labels to one-hot; convert everything to tensors.
# The `label_id_offset` here shifts all classes by a certain number of indices;
# we do this here so that the model receives one-hot labels where non-background
# classes start counting at the zeroth index. This is ordinarily just handled
# automatically in our training binaries, but we need to reproduce it here.
label_id_offset = 1
train_image_tensors = []
for train_image_np in train_images_np:
train_image_tensors.append(tf.expand_dims(tf.convert_to_tensor(
train_image_np, dtype=tf.float32), axis=0))
# label file
label_csv = 'bl831_loop_detection_data/train-11-19-2020.csv'
# Note label was missing in one line!
# UNASSIGNED,gs://bl831_loop_detection_data/gcp_training_images/images/0030.jpg, ,0.4446,0.5083,,,0.5043,0.575,,
# Correct version:
# UNASSIGNED,gs://bl831_loop_detection_data/gcp_training_images/images/0030.jpg,mitegen,0.4446,0.5083,,,0.5043,0.575,,
gt_boxes = []
groundtruth_classes = []
gt_classes_one_hot_tensors = []
gt_box_tensors = []
# very ugly code!
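# Each row of the CSV has the form:
#   SET,gs://.../images/NNNN.jpg,label,x1,y1,,,x2,y2,,
# with normalized top-left (x1, y1) and bottom-right (x2, y2) corners. Below we
# group the boxes by image index and reorder each box to [ymin, xmin, ymax, xmax],
# the convention expected by the TF Object Detection API utilities used here.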
with open(label_csv) as f:
for line in f:
label = line.split(',')
fname=label[1][-8:]
i = int(fname.split('.')[0])
if i >= num_train:
break
if len(gt_boxes) < i+1:
gt_boxes.append([])
groundtruth_classes.append([])
# Note the order!!!
gt_boxes[i].append([float(label[4]), float(label[3]), float(label[8]), float(label[7])])
if label[2] == 'nylon':
groundtruth_classes[i].append(nylon_class_id)
if label[2] == 'mitegen':
groundtruth_classes[i].append(mitegen_class_id)
if label[2] == 'pin':
groundtruth_classes[i].append(pin_class_id)
for gt_box in gt_boxes:
gt_box_tensors.append(tf.convert_to_tensor(gt_box, dtype=tf.float32))
for g in groundtruth_classes:
zero_indexed_groundtruth_classes = tf.convert_to_tensor(g, dtype=tf.int32) - label_id_offset
gt_classes_one_hot_tensors.append(tf.one_hot(
zero_indexed_groundtruth_classes, num_classes))
print('Done prepping data.')
print(gt_classes_one_hot_tensors[0])
print(gt_box_tensors[0])
###Output
Done prepping data.
tf.Tensor(
[[0. 0. 1.]
[0. 1. 0.]], shape=(2, 3), dtype=float32)
tf.Tensor(
[[0.3312 0. 0.5375 0.2685]
[0.5021 0.4517 0.525 0.5043]], shape=(2, 4), dtype=float32)
###Markdown
Sanity Check!
###Code
dummy_scores = np.array([1.0, 1.0], dtype=np.float32) # give boxes a score of 100%
plt.figure(figsize=(30, 15))
for idx in range(6):
plt.subplot(2, 3, idx+1)
plot_detections(
train_images_np[idx],
np.array(gt_boxes[idx], dtype=np.float32),
np.array(groundtruth_classes[idx], dtype=np.int32),
dummy_scores, category_index)
plt.show()
###Output
_____no_output_____
###Markdown
Create model and restore weights for all but last layer
###Code
tf.keras.backend.clear_session()
print('Building model and restoring weights for fine-tuning...', flush=True)
num_classes = 3
pipeline_config = 'models/research/object_detection/configs/tf2/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.config'
checkpoint_path = 'ssd_resnet50_v1_fpn_640x640_coco17_tpu-8/checkpoint/ckpt-0'
# Load pipeline config and build a detection model.
#
# Since we are working off of a COCO architecture which predicts 90
# class slots by default, we override the `num_classes` field here to be
# three (the nylon, mitegen, and pin classes defined above).
configs = config_util.get_configs_from_pipeline_file(pipeline_config)
model_config = configs['model']
model_config.ssd.num_classes = num_classes
model_config.ssd.freeze_batchnorm = True
detection_model = model_builder.build(
model_config=model_config, is_training=True)
# Set up object-based checkpoint restore --- RetinaNet has two prediction
# `heads` --- one for classification, the other for box regression. We will
# restore the box regression head but initialize the classification head
# from scratch (we show the omission below by commenting out the line that
# we would add if we wanted to restore both heads)
fake_box_predictor = tf.train.Checkpoint(
_base_tower_layers_for_heads=detection_model._box_predictor._base_tower_layers_for_heads,
# _prediction_heads=detection_model._box_predictor._prediction_heads,
# (i.e., the classification head that we *will not* restore)
_box_prediction_head=detection_model._box_predictor._box_prediction_head,
)
fake_model = tf.train.Checkpoint(
_feature_extractor=detection_model._feature_extractor,
_box_predictor=fake_box_predictor)
ckpt = tf.train.Checkpoint(model=fake_model)
ckpt.restore(checkpoint_path).expect_partial()
# Run model through a dummy image so that variables are created
image, shapes = detection_model.preprocess(tf.zeros([1, 640, 640, 3]))
prediction_dict = detection_model.predict(image, shapes)
_ = detection_model.postprocess(prediction_dict, shapes)
print('Weights restored!')
# Note: tf.keras.backend.set_learning_phase is deprecated in recent TF 2.x releases.
tf.keras.backend.set_learning_phase(True)
# These parameters can be tuned; our training set here has 200 images and we
# use a batch size of 25, though we could fit more examples in memory if we
# wanted to.
batch_size = 25
learning_rate = 0.01
num_batches = 500
# Select variables in top layers to fine-tune.
trainable_variables = detection_model.trainable_variables
to_fine_tune = []
prefixes_to_train = [
'WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead',
'WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead']
for var in trainable_variables:
if any([var.name.startswith(prefix) for prefix in prefixes_to_train]):
to_fine_tune.append(var)
# Set up forward + backward pass for a single train step.
def get_model_train_step_function(model, optimizer, vars_to_fine_tune):
"""Get a tf.function for training step."""
# Use tf.function for a bit of speed.
# Comment out the tf.function decorator if you want the inside of the
# function to run eagerly.
@tf.function
def train_step_fn(image_tensors,
groundtruth_boxes_list,
groundtruth_classes_list):
"""A single training iteration.
Args:
image_tensors: A list of [1, height, width, 3] Tensor of type tf.float32.
Note that the height and width can vary across images, as they are
reshaped within this function to be 640x640.
groundtruth_boxes_list: A list of Tensors of shape [N_i, 4] with type
tf.float32 representing groundtruth boxes for each image in the batch.
groundtruth_classes_list: A list of Tensors of shape [N_i, num_classes]
with type tf.float32 representing groundtruth boxes for each image in
the batch.
Returns:
A scalar tensor representing the total loss for the input batch.
"""
shapes = tf.constant(batch_size * [[640, 640, 3]], dtype=tf.int32)
model.provide_groundtruth(
groundtruth_boxes_list=groundtruth_boxes_list,
groundtruth_classes_list=groundtruth_classes_list)
with tf.GradientTape() as tape:
preprocessed_images = tf.concat(
[detection_model.preprocess(image_tensor)[0]
for image_tensor in image_tensors], axis=0)
prediction_dict = model.predict(preprocessed_images, shapes)
losses_dict = model.loss(prediction_dict, shapes)
total_loss = losses_dict['Loss/localization_loss'] + losses_dict['Loss/classification_loss']
gradients = tape.gradient(total_loss, vars_to_fine_tune)
optimizer.apply_gradients(zip(gradients, vars_to_fine_tune))
return total_loss
return train_step_fn
optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate, momentum=0.9)
train_step_fn = get_model_train_step_function(
detection_model, optimizer, to_fine_tune)
print('Start fine-tuning!', flush=True)
for idx in range(num_batches):
# Grab keys for a random subset of examples
all_keys = list(range(len(train_images_np)))
random.shuffle(all_keys)
example_keys = all_keys[:batch_size]
# Note that we do not do data augmentation in this demo. If you want
# a fun exercise, we recommend experimenting with random horizontal flipping
# and random cropping :)
gt_boxes_list = [gt_box_tensors[key] for key in example_keys]
gt_classes_list = [gt_classes_one_hot_tensors[key] for key in example_keys]
image_tensors = [train_image_tensors[key] for key in example_keys]
# Training step (forward pass + backwards pass)
total_loss = train_step_fn(image_tensors, gt_boxes_list, gt_classes_list)
if idx % 10 == 0:
print('batch ' + str(idx) + ' of ' + str(num_batches)
+ ', loss=' + str(total_loss.numpy()), flush=True)
print('Done fine-tuning!')
###Output
Start fine-tuning!
###Markdown
Load test images and run inference with new model!
###Code
test_image_dir = 'bl831_loop_detection_data/gcp_training_images/images'
test_images_np = []
for i in range(200, 250):
image_path = os.path.join(test_image_dir, str(i).rjust(4, '0') + '.jpg')
test_images_np.append(np.expand_dims(
load_image_into_numpy_array(image_path), axis=0))
# Again, uncomment this decorator if you want to run inference eagerly
@tf.function
def detect(input_tensor):
"""Run detection on an input image.
Args:
input_tensor: A [1, height, width, 3] Tensor of type tf.float32.
Note that height and width can be anything since the image will be
immediately resized according to the needs of the model within this
function.
Returns:
A dict containing 3 Tensors (`detection_boxes`, `detection_classes`,
and `detection_scores`).
"""
preprocessed_image, shapes = detection_model.preprocess(input_tensor)
prediction_dict = detection_model.predict(preprocessed_image, shapes)
return detection_model.postprocess(prediction_dict, shapes)
# Note that the first frame will trigger tracing of the tf.function, which will
# take some time, after which inference should be fast.
label_id_offset = 1
for i in range(len(test_images_np)):
input_tensor = tf.convert_to_tensor(test_images_np[i], dtype=tf.float32)
detections = detect(input_tensor)
plot_detections(
test_images_np[i][0],
detections['detection_boxes'][0].numpy(),
detections['detection_classes'][0].numpy().astype(np.uint32)
+ label_id_offset,
detections['detection_scores'][0].numpy(),
category_index, figsize=(15, 20), image_name=('%04d' % (i+200)) + "-bbox" + ".jpg" )
###Output
_____no_output_____ |
_notebooks/2020-09-11-neural_collaborative_filter.ipynb | ###Markdown
 Recommender systems - Neural Collaborative Filtering
> Demo
- toc: true
- badges: true
- comments: true
- hide: true
- categories: [demo, neural networks, deep learning, recommender systems, paper]
- image: https://raw.githubusercontent.com/murilo-cunha/inteligencia-superficial/master/images/2020-09-11-neural_collaborative_filter/cover.png
Download dependencies and run `tensorboard` in the background:
```python
!pip install tensorflow lightfm pandas
```
```python
%load_ext tensorboard
!tensorboard --logdir 2020-09-11-neural_collaborative_filter/logs &
```
###Code
# hide
import datetime
import os
import lightfm
import numpy as np
import pandas as pd
import tensorflow as tf
from lightfm import LightFM
from lightfm.datasets import fetch_movielens
from scipy import sparse
# hide
print(f"Tensorflow version: {tf.__version__}")
print(f"LightFM version: {lightfm.__version__}")
print(f"Pandas version: {pd.__version__}")
print(f"Numpy version: {np.__version__}")
# hide
TOP_K = 5
N_EPOCHS = 10
###Output
_____no_output_____
###Markdown
Data
###Code
# hide_input
data = fetch_movielens(min_rating=3.0)
print("Interaction matrix:")
print(data["train"].toarray()[:10, :10])
# collapse
for dataset in ["test", "train"]:
data[dataset] = (data[dataset].toarray() > 0).astype("int8")
# Make the ratings binary
print("Interaction matrix:")
print(data["train"][:10, :10])
print("\nRatings:")
unique_ratings = np.unique(data["train"])
print(unique_ratings)
from typing import List
def wide_to_long(wide: np.array, possible_ratings: List[int]) -> np.array:
"""Go from wide table to long.
:param wide: wide array with user-item interactions
:param possible_ratings: list of possible ratings that we may have."""
def _get_ratings(arr: np.array, rating: int) -> np.array:
"""Generate long array for the rating provided
:param arr: wide array with user-item interactions
:param rating: the rating that we are interested"""
idx = np.where(arr == rating)
return np.vstack(
(idx[0], idx[1], np.ones(idx[0].size, dtype="int8") * rating)
).T
long_arrays = []
for r in possible_ratings:
long_arrays.append(_get_ratings(wide, r))
return np.vstack(long_arrays)
long_train = wide_to_long(data["train"], unique_ratings)
df_train = pd.DataFrame(long_train, columns=["user_id", "item_id", "interaction"])
# hide_input
print("All interactions:")
df_train.head()
# hide_input
print("Only positive interactions:")
df_train[df_train["interaction"] > 0].head()
###Output
Only positive interactions:
###Markdown
The model (Neural Collaborative Filtering)
###Code
import tensorflow.keras as keras
from tensorflow.keras.layers import (
Concatenate,
Dense,
Embedding,
Flatten,
Input,
Multiply,
)
from tensorflow.keras.models import Model
from tensorflow.keras.regularizers import l2
def create_ncf(
number_of_users: int,
number_of_items: int,
latent_dim_mf: int = 4,
latent_dim_mlp: int = 32,
 reg_mf: float = 0,
 reg_mlp: float = 0.01,
 dense_layers: List[int] = [8, 4],
 reg_layers: List[float] = [0.01, 0.01],
activation_dense: str = "relu",
) -> keras.Model:
# input layer
user = Input(shape=(), dtype="int32", name="user_id")
item = Input(shape=(), dtype="int32", name="item_id")
# embedding layers
mf_user_embedding = Embedding(
input_dim=number_of_users,
output_dim=latent_dim_mf,
name="mf_user_embedding",
embeddings_initializer="RandomNormal",
embeddings_regularizer=l2(reg_mf),
input_length=1,
)
mf_item_embedding = Embedding(
input_dim=number_of_items,
output_dim=latent_dim_mf,
name="mf_item_embedding",
embeddings_initializer="RandomNormal",
embeddings_regularizer=l2(reg_mf),
input_length=1,
)
mlp_user_embedding = Embedding(
input_dim=number_of_users,
output_dim=latent_dim_mlp,
name="mlp_user_embedding",
embeddings_initializer="RandomNormal",
embeddings_regularizer=l2(reg_mlp),
input_length=1,
)
mlp_item_embedding = Embedding(
input_dim=number_of_items,
output_dim=latent_dim_mlp,
name="mlp_item_embedding",
embeddings_initializer="RandomNormal",
embeddings_regularizer=l2(reg_mlp),
input_length=1,
)
# MF vector
mf_user_latent = Flatten()(mf_user_embedding(user))
mf_item_latent = Flatten()(mf_item_embedding(item))
mf_cat_latent = Multiply()([mf_user_latent, mf_item_latent])
# MLP vector
mlp_user_latent = Flatten()(mlp_user_embedding(user))
mlp_item_latent = Flatten()(mlp_item_embedding(item))
mlp_cat_latent = Concatenate()([mlp_user_latent, mlp_item_latent])
mlp_vector = mlp_cat_latent
# build dense layers for model
for i in range(len(dense_layers)):
layer = Dense(
dense_layers[i],
activity_regularizer=l2(reg_layers[i]),
activation=activation_dense,
name="layer%d" % i,
)
mlp_vector = layer(mlp_vector)
predict_layer = Concatenate()([mf_cat_latent, mlp_vector])
result = Dense(
1, activation="sigmoid", kernel_initializer="lecun_uniform", name="interaction"
)
output = result(predict_layer)
model = Model(
inputs=[user, item],
outputs=[output],
)
return model
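# create_ncf builds two embedding branches: a matrix-factorization (MF) branch whose
# user and item embeddings are combined with an element-wise product, and an MLP branch
# whose embeddings are concatenated and passed through the dense layers; the two branches
# are concatenated and fed to a final sigmoid layer that predicts the interaction.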
# collapse
from tensorflow.keras.optimizers import Adam
n_users, n_items = data["train"].shape
ncf_model = create_ncf(n_users, n_items)
ncf_model.compile(
optimizer=Adam(),
loss="binary_crossentropy",
metrics=[
tf.keras.metrics.TruePositives(name="tp"),
tf.keras.metrics.FalsePositives(name="fp"),
tf.keras.metrics.TrueNegatives(name="tn"),
tf.keras.metrics.FalseNegatives(name="fn"),
tf.keras.metrics.BinaryAccuracy(name="accuracy"),
tf.keras.metrics.Precision(name="precision"),
tf.keras.metrics.Recall(name="recall"),
tf.keras.metrics.AUC(name="auc"),
],
)
ncf_model._name = "neural_collaborative_filtering"
ncf_model.summary()
def make_tf_dataset(
df: pd.DataFrame,
targets: List[str],
val_split: float = 0.1,
batch_size: int = 512,
seed=42,
):
"""Make TensorFlow dataset from Pandas DataFrame.
:param df: input DataFrame - only contains features and target(s)
:param targets: list of columns names corresponding to targets
:param val_split: fraction of the data that should be used for validation
:param batch_size: batch size for training
:param seed: random seed for shuffling data - `None` won't shuffle the data"""
n_val = round(df.shape[0] * val_split)
if seed:
# shuffle all the rows
x = df.sample(frac=1, random_state=seed).to_dict("series")
else:
x = df.to_dict("series")
y = dict()
for t in targets:
y[t] = x.pop(t)
ds = tf.data.Dataset.from_tensor_slices((x, y))
ds_val = ds.take(n_val).batch(batch_size)
ds_train = ds.skip(n_val).batch(batch_size)
return ds_train, ds_val
# create train and validation datasets
ds_train, ds_val = make_tf_dataset(df_train, ["interaction"])
%%time
# define logs and callbacks
logdir = os.path.join("logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
early_stopping_callback = tf.keras.callbacks.EarlyStopping(
monitor="val_loss", patience=0
)
train_hist = ncf_model.fit(
ds_train,
validation_data=ds_val,
epochs=N_EPOCHS,
callbacks=[tensorboard_callback, early_stopping_callback],
verbose=1,
)
long_test = wide_to_long(data["train"], unique_ratings)
df_test = pd.DataFrame(long_test, columns=["user_id", "item_id", "interaction"])
ds_test, _ = make_tf_dataset(df_test, ["interaction"], val_split=0, seed=None)
%%time
ncf_predictions = ncf_model.predict(ds_test)
df_test["ncf_predictions"] = ncf_predictions
# hide_input
df_test.head()
# hide
# sanity checks
# stop execution if low standard deviation (all recommendations are the same)
std = df_test.describe().loc["std", "ncf_predictions"]
if std < 0.01:
raise ValueError("Model predictions have standard deviation of less than 1e-2.")
# collapse
data["ncf_predictions"] = df_test.pivot(
index="user_id", columns="item_id", values="ncf_predictions"
).values
print("Neural collaborative filtering predictions")
print(data["ncf_predictions"][:10, :4])
precision_ncf = tf.keras.metrics.Precision(top_k=TOP_K)
recall_ncf = tf.keras.metrics.Recall(top_k=TOP_K)
precision_ncf.update_state(data["test"], data["ncf_predictions"])
recall_ncf.update_state(data["test"], data["ncf_predictions"])
print(
f"At K = {TOP_K}, we have a precision of {precision_ncf.result().numpy():.5f}",
"and a recall of {recall_ncf.result().numpy():.5f}",
)
%%time
# LightFM model
def norm(x: np.ndarray) -> np.ndarray:
    """Min-max normalize an array to the [0, 1] range"""
    return (x - np.min(x)) / np.ptp(x)
lightfm_model = LightFM(loss="warp")
lightfm_model.fit(sparse.coo_matrix(data["train"]), epochs=N_EPOCHS)
lightfm_predictions = lightfm_model.predict(
df_test["user_id"].values, df_test["item_id"].values
)
df_test["lightfm_predictions"] = lightfm_predictions
wide_predictions = df_test.pivot(
index="user_id", columns="item_id", values="lightfm_predictions"
).values
data["lightfm_predictions"] = norm(wide_predictions)
# compute the metrics
precision_lightfm = tf.keras.metrics.Precision(top_k=TOP_K)
recall_lightfm = tf.keras.metrics.Recall(top_k=TOP_K)
precision_lightfm.update_state(data["test"], data["lightfm_predictions"])
recall_lightfm.update_state(data["test"], data["lightfm_predictions"])
print(
f"At K = {TOP_K}, we have a precision of {precision_lightfm.result().numpy():.5f}",
"and a recall of {recall_lightfm.result().numpy():.5f}",
)
###Output
At K = 5, we have a precision of 0.10541 and a recall of 0.06297
CPU times: user 1.01 s, sys: 235 ms, total: 1.25 s
Wall time: 858 ms
|
week_02.ipynb | ###Markdown
Week 2 review
input(): reads input through the console and returns it as a str
###Code
name = input('what is your name? ')
age = int(input('how old are you? '))
year = 2022
print(f'{name} was born in {year - age + 1}.')
###Output
what is your name? jb
how old are you? 22
jb was born in 2001.
###Markdown
print(): has several options (for example, the sep and end keyword arguments shown below)
###Code
fruit_str_list = input('input your 3 favorite fruits: ')
fruit_list = fruit_str_list.split()
print(fruit_list[0], fruit_list[1], fruit_list[2], sep=' ', end='\nEOL')
###Output
input your 3 favorite fruits: apple banana berry
apple banana berry
EOL
###Markdown
Code can be commented out with # or by wrapping it in a triple-quoted string (""" """)
###Code
# print('this will not be printed')
"""
print('this will not be printed')
"""
print('this will be printed')
###Output
this will be printed
###Markdown
if: conditional expressions can be used to write various branches of code (if - elif - else)
###Code
question = int(input('If you like coca-cola, input 1 & if you like pepsi, input 2: '))
if question == 1:
print('you like coca-cola, not pepsi')
elif question == 2:
print('your taste is weird')
else:
print('you should\'ve answered 1 or 2')
###Output
If you like coca-cola, input 1 & if you like pepsi, input 2: 2
your taste is weird
###Markdown
integer: represents whole numbers; negative numbers take a leading '-', commas are not allowed between digits, and integers can be combined with arithmetic operators
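For example, `7 // 2` evaluates to `3`, `7 % 2` to `1`, `7 / 2` to `3.5`, and `2 ** 3` to `8`.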
###Code
def calculator():
num1 = int(input('input the number1: '))
num2 = int(input('input the number2: '))
    func = int(input('addition 1, subtraction 2, multiplication 3, floor division 4, remainder 5, division 6, exponentiation 7: '))
if func == 1:
print(f'{num1} + {num2} = {num1 + num2}')
elif func == 2:
print(f'{num1} - {num2} = {num1 - num2}')
elif func == 3:
print(f'{num1} * {num2} = {num1 * num2}')
elif func == 4:
print(f'{num1} // {num2} = {num1 // num2}')
elif func == 5:
print(f'{num1} % {num2} = {num1 % num2}')
elif func == 6:
print(f'{num1} / {num2} = {num1 / num2}')
elif func == 7:
print(f'{num1} ** {num2} = {num1 ** num2}')
else:
print('your input is weird')
calculator()
###Output
input the number1: 6
input the number2: 3
addition 1, subtraction 2, multiplication 3, floor division 4, remainder 5, division 6, exponentiation 7: 7
6 ** 3 = 216
|
notebooks/Player_Risk_Calculation.ipynb | ###Markdown
Model To Predict Injury Based on Momentum
###Code
import pandas as pd
import numpy as np
import matplotlib.pylab as plt
import seaborn as sns
import os
import glob
import sys
sys.path.insert(0, '../scripts/')
from football_field import create_football_field
from plots import plot_play
import math
%matplotlib inline
pd.options.display.max_columns = 100
%load_ext autoreload
%autoreload 2
momentum = pd.read_parquet('../working/momentum-allplays-1yarddistance.parquet')
vr = pd.read_csv('../working/video_review-detailed.csv')
vr.head()
momentum = momentum.rename(columns={'gsisid': 'GSISID', 'gsisid_partner': 'Primary_Partner_GSISID'})
injured_momentum = pd.merge(momentum, vr)
injured_momentum['opp_momentum'].plot(kind='hist', figsize=(15, 5))
max_opp_momentum_on_inj = injured_momentum.groupby(['Season_Year','GameKey','PlayID','GSISID','Primary_Partner_GSISID','role','role_partner']).max()['opp_momentum'].reset_index().sort_values('opp_momentum')
inj_at_max_opp_forces = pd.merge(max_opp_momentum_on_inj, momentum)
for i, d in inj_at_max_opp_forces.groupby(['Season_Year','GameKey','PlayID']):
print(i)
#fig, ax = create_football_field()
#fig, ax = plt.subplots(figsize=(10, 10))
plt.plot(d['x'].tolist(), d['y'].tolist(), marker=(3, 0, d['dir'].tolist()), markersize=7, color='red')
plt.plot(d['x_partner'].tolist(), d['y_partner'].tolist(), marker=(3, 0, d['dir_partner'].tolist()), markersize=7, color='blue')
# ax.set_xlim(30, 40)
# plt.ylim(0, 10)
#d.plot(x='x',y='y', kind='scatter', ax=ax)
#d.plot(x='x_partner',y='y_partner', kind='scatter', ax=ax, style='<')
plt.margins(3, 3)
plt.show()
break
play = pd.read_csv('../working/playlevel/during_play/2016-5-3129.csv')
def touch(fname, times=None):
with open(fname, 'a'):
os.utime(fname, times)
def calculateDistance(x1, y1, x2, y2):
dist = math.sqrt((x2 - x1)**2 + (y2 - y1)**2)
return dist
def cart2pol(x, y):
rho = np.sqrt(x**2 + y**2)
phi = np.arctan2(y, x)
return(rho, phi)
def pol2cart(rho, phi):
x = rho * np.cos(phi)
y = rho * np.sin(phi)
return(x, y)
def add_play_physics(play):
# Format columns
play['time'] = pd.to_datetime(play['time'])
# Distance
play['dis_meters'] = play['dis'] / 1.0936 # Add distance in meters
    # Speed: tracking data is sampled at 10 Hz, so speed in m/s is distance per frame divided by 0.1 s
    play['v_mps'] = play['dis_meters'] / 0.1
# Angles to radians
play['dir_radians'] = play['dir'].apply(math.radians)
play['o_radians'] = play['o'].apply(math.radians)
average_weight_nfl_pounds = 245.86
average_weight_nfl_kg = average_weight_nfl_pounds * 0.45359237
# http://webpages.uidaho.edu/~renaes/251/HON/Student%20PPTs/Avg%20NFL%20ht%20wt.pdf
play['momentum'] = play['v_mps'] * average_weight_nfl_kg
play['momentum_x'] = pol2cart(play['momentum'], play['dir_radians'])[0]
play['momentum_y'] = pol2cart(play['momentum'], play['dir_radians'])[1]
return play
play = add_play_physics(play)
playexpanded = pd.merge(play, play, on=['season_year','gamekey','playid','time'], suffixes=('','_partner'))
playexpanded['opp_momentum'] = np.sqrt(np.square(
playexpanded['momentum_x'] - playexpanded['momentum_x_partner']) +
np.square(playexpanded['momentum_y'] - playexpanded['momentum_y_partner']))
playexpanded['dist'] = np.sqrt((playexpanded['x'] - playexpanded['x_partner']).apply(np.square) + (playexpanded['y'] - playexpanded['y_partner']).apply(np.square))
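# Risk factor: magnitude of the pair's relative momentum divided by their separation distance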
playexpanded['risk_factor'] = playexpanded['opp_momentum'] / playexpanded['dist']
calculateDistance(playexpanded['x'], playexpanded['y'], playexpanded['x_partner'], playexpanded['y_partner'])
fig, ax = create_football_field()
df = playexpanded.loc[(playexpanded['role'] == 'PLW') & (playexpanded['role_partner'] == 'PR')]
df[['time','role','role_partner','v_mps','v_mps_partner','opp_momentum','x','y','x_partner','y_partner']] \
.plot(kind='scatter', x='x',y='y',
c=df['opp_momentum'].tolist(),
cmap='coolwarm',
ax=ax)
df[['time','role','role_partner','v_mps','v_mps_partner','opp_momentum','x','y','x_partner','y_partner']] \
.plot(kind='scatter', x='x_partner',y='y_partner',
c=df['opp_momentum'].tolist(),
cmap='coolwarm',
ax=ax)
fig, ax = create_football_field()
df = playexpanded.loc[(playexpanded['role'] == 'PLW') & (playexpanded['role_partner'] == 'PR')]
df[['time','role','role_partner','v_mps','v_mps_partner','opp_momentum','x','y','x_partner','y_partner']] \
.plot(kind='scatter', x='x',y='y',
c=df['risk_factor'].tolist(),
cmap='coolwarm',
ax=ax)
df[['time','role','role_partner','v_mps','v_mps_partner','opp_momentum','x','y','x_partner','y_partner']] \
.plot(kind='scatter', x='x_partner',y='y_partner',
c=df['risk_factor'].tolist(),
cmap='coolwarm',
ax=ax)
playexpanded.sort_values('risk_factor', ascending=False).head()
fig, ax = create_football_field()
df = playexpanded.loc[(playexpanded['role'] == 'GL') & (playexpanded['role_partner'] == 'PR')]
df[['time','role','role_partner','v_mps','v_mps_partner','opp_momentum','x','y','x_partner','y_partner']] \
.plot(kind='scatter', x='x',y='y',
c=df['risk_factor'].tolist(),
cmap='coolwarm',
ax=ax)
df[['time','role','role_partner','v_mps','v_mps_partner','opp_momentum','x','y','x_partner','y_partner']] \
.plot(kind='scatter', x='x_partner',y='y_partner',
c=df['risk_factor'].tolist(),
cmap='coolwarm',
ax=ax)
fig, ax = create_football_field()
df = playexpanded.loc[(playexpanded['role'] == 'PFB') & (playexpanded['role_partner'] == 'GR') |
(playexpanded['role'] == 'GR') & (playexpanded['role_partner'] == 'PFB')]
df = df.sort_values('risk_factor')
df[['time','role','role_partner','v_mps','v_mps_partner','opp_momentum','x','y','x_partner','y_partner']] \
.plot(kind='scatter', x='x',y='y',
c=df['risk_factor'].tolist(),
cmap='coolwarm',
ax=ax)
playexpanded[['time','role','role_partner','opp_momentum','risk_factor']].sort_values('risk_factor', ascending=False).head()
playexpanded[['role','role_partner','opp_momentum','risk_factor']].sort_values('risk_factor', ascending=False).head()
play2 = pd.read_parquet('../working/playlevel/momentum_risk/2016-149-3663-risk.parquet')
df = play2.loc[play2['injured_player'] == True]
fig, ax = create_football_field()
df = df.sort_values('risk_factor')
df[['time','role','role_partner','v_mps','v_mps_partner','opp_momentum','x','y','x_partner','y_partner']] \
.plot(kind='scatter', x='x',y='y',
c=df['risk_factor'].tolist(),
cmap='coolwarm',
ax=ax)
play2.head()
pd.merge(play2.groupby(['season_year', 'gamekey', 'playid', 'gsisid'])['risk_factor'].max().reset_index(),
play2, how='left')[['role','risk_factor']].sort_values('risk_factor').plot(x='role', kind='bar')
pd.merge(play2.loc[play2['dist'] < 1].groupby(['season_year', 'gamekey', 'playid', 'gsisid'])['risk_factor'].max().reset_index(),
play2.loc[play2['dist'] < 1], how='left')[['role','risk_factor']].sort_values('risk_factor').plot(x='role', kind='bar')
play2.columns
pd.merge(play2.groupby(['season_year', 'gamekey', 'playid', 'gsisid','role',
'role_partner','injured_player','primary_partner_player',
'injured_player_partner','primary_partner_player_partner'])['risk_factor'].max().reset_index(),
play2, how='left')[['role','role_partner','risk_factor','injured_player',
'primary_partner_player','injured_player_partner',
'primary_partner_player_partner']].sort_values('risk_factor')
play2.groupby(['season_year', 'gamekey', 'playid', 'gsisid','role'])['risk_factor'].mean()
###Output
_____no_output_____ |
Concepts/Python/Multiple Comparisons.ipynb | ###Markdown
[Open in Colab](https://colab.research.google.com/github/PennNGG/Quantitative-Neuroscience/blob/master/Concepts/Python/Multiple%20Comparisons.ipynb)
Definitions
The multiple comparisons problem in statistics occurs when multiple statistical inferences are done simultaneously, which greatly increases the probability that any one inference will yield an erroneous result by chance. A lot has been written about this problem, including:
- [Its prevalence in fMRI data analysis](https://www.sciencedirect.com/science/article/pii/S1053811912007057?via%3Dihub) (including a compelling illustration by this [prizewinning study](https://blogs.scientificamerican.com/scicurious-brain/ignobel-prize-in-neuroscience-the-dead-salmon-study/))
- [How Bayesian methods can avoid the problem](http://www.stat.columbia.edu/~gelman/research/published/multiple2f.pdf)
- [General approaches for correcting for multiple comparisons](http://www.biostathandbook.com/multiplecomparisons.html)

Here we will provide some intuition for the problem using a simple thought experiment, to sensitize you to how much of a problem it can be. Consider performing the same statistical test on *N* different samples corresponding to, say, different voxels in fMRI data, using a *p*-value of $\alpha$ (typically 0.05) for each test. Thus, for any one test, the probability of getting a Type I error (rejecting $H_0$ when $H_0$ is true) is $\alpha$:

$p_{error}=\alpha$

For two tests, the probability of getting a Type I error from either test is just one minus the combined probability of not getting a Type I error from either one:

$p_{error}=1-(1-\alpha)(1-\alpha)$

For *N* tests, the probability of getting a Type I error from any test is just one minus the combined probability of not getting a Type I error from any one:

$p_{error}=1-(1-\alpha)^N$

Run the cell below to see that the probability of getting a Type I error under these conditions grows rapidly with *N*, implying that it becomes very, very likely that you will get a "statistically significant result" just by chance if you do enough tests:
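For example, with $\alpha = 0.05$ and $N = 20$ independent tests, $p_{error} = 1 - (0.95)^{20} \approx 0.64$: roughly a two-in-three chance of at least one false positive, even though each individual test is run at the 5% level.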
###Code
import matplotlib.pyplot as plt
import numpy as np
alpha = 0.05
N = np.arange(0,100)
plt.plot(N, 1-(1-alpha)**N)
plt.xlabel('N')
plt.ylabel('$P_{error}$')
###Output
_____no_output_____ |
JupyterNotebooks/ChangePointDetection Demo.ipynb | ###Markdown
Demo: Change Point Detection
This demonstration uses a very simple computational model to show how change point detection works.
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# specify a plotting environment (i.e. a set of defaults)
sns.set_context('talk')
###Output
_____no_output_____
###Markdown
Here, we generate a random sequence of numbers to learn. The sequence is drawn from a normal distribution, with a discrete change point after which the mean changes.
###Code
mu1 = 15 # mean prior to change point
mu2 = 5 # mean following change point
stdev = 3
N = 50
X = np.random.normal(mu1, stdev, N)
Y = np.random.normal(mu2, stdev, N)
###Output
_____no_output_____
###Markdown
We train a simple delta-rule model to learn the values with fixed learning rates. The learner updates its estimate after each time point as follows:

$$ V_{t+1} = V_{t} + \eta \left (X_t - V_{t} \right ) $$

where $\eta$ is the learning rate. Note that for the special case $\eta =\frac{1}{t}$, the estimate $V$ is equal to the simple average, $V=\frac{1}{N}\sum_{t=1}^{N}X_t$.
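As a concrete single update: with a current estimate $V_t = 10$, a new observation $X_t = 14$, and $\eta = 0.25$, the updated estimate is $V_{t+1} = 10 + 0.25\,(14 - 10) = 11$, i.e. the estimate moves a quarter of the way toward the new observation.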
###Code
# set the learning rates of the models!
learning_rate0 = 0.3
learning_rate1 = 0.1
# create a simple RL Agent (average all trials, eta = 1/t)
XY = np.concatenate([X, Y])
mle = [np.mean(XY[0:ii]) for ii in range(1, len(XY) + 1)]
# RL agent
v = X[0]
q = []
for X0 in XY:
q.append(v)
v += learning_rate0 *(X0 - v)
# RL agent
v = X[0]
q2 = []
for X0 in XY:
q2.append(v)
v += learning_rate1 *(X0 - v)
# plot the results
with sns.axes_style('ticks'):
plt.figure(figsize=(5, 3))
plt.plot(XY, 'd')
plt.plot(mle, 'r', label='Simple Running Average')
plt.plot(q, 'k', label='Learning Rate = %.2f' % learning_rate0)
plt.plot(q2, 'k--', label='Learning Rate = %.2f' % learning_rate1)
ax = plt.gca()
lb, ub = ax.get_ylim()
ax.set_ylim([0, ub])
sns.despine(trim=True, offset=10)
ax.set_ylabel('Value')
ax.set_xlabel('Time')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
###Output
_____no_output_____
###Markdown
We can also look at how well the models do over all time by looking at their cumulative loss. The loss here is the squared error, which we accumulate over time:

$$ \sum_{i}\|\hat{x}_i - x_i\|^2 $$

It's helpful to break this down across time as well.
###Code
ml_error = []
q_error = []
q2_error = []
# calculate the error term
for ii, X0 in enumerate(XY):
ml_error.append((X0-mle[ii])**2)
q_error.append((X0-q[ii])**2)
q2_error.append((X0-q2[ii])**2)
# plot across time
with sns.axes_style('ticks'):
plt.figure(figsize=(5, 3))
plt.plot(np.cumsum(q_error), 'k', label='Learning Rate = %.2f' % learning_rate0)
plt.plot(np.cumsum(q2_error), 'k--', label='Learning Rate = %.2f' % learning_rate1)
ax = plt.gca()
    ax.set_ylabel('Cumulative\nSquared Error Loss')
ax.set_xlabel('Time')
sns.despine(trim=True, offset=10)
###Output
_____no_output_____
###Markdown
To see how the learning rate affects cumulative performance, we can look at a range of learning rates
###Code
etas = np.arange(0.05, 0.51, 0.01)
CC = sns.color_palette('RdBu', len(etas))
with sns.axes_style('ticks'):
plt.figure(figsize=(5, 3))
for ii, eta in enumerate(etas):
q_error = []
v = 15
for X0 in XY:
q_error.append((X0-v)**2)
v += eta *(X0 - v)
plt.plot(np.cumsum(q_error), color=CC[ii])
ax = plt.gca()
    ax.set_ylabel('Cumulative\nSquared Error Loss')
ax.set_xlabel('Time')
sns.despine(trim=True, offset=10)
###Output
_____no_output_____
###Markdown
Learning rate as forgetting
Although we often think of the "learning rate" as the rate at which we integrate new information, an equally valid view is as the rate at which we forget old information. A higher learning rate weights more recent experiences more strongly and decays experiences more distant in time more quickly. Specifically, the model's learned value is a weighted sum of all of the previous observations, such that

$$ V_t = \sum_{i=0}^{t}w_i x_i $$

where $w_i$ is the contribution (weight) of the observation at time $i$. Using a fixed learning rate, each individual observation's contribution to the estimate decays. This can be shown by unraveling the recursive sum:

$$ V_{t+1} = V_{t} + \eta \left (x_t - V_{t} \right ) $$

$$ V_{t+1} = \eta x_{t} + \eta\left (1 - \eta \right ) x_{t-1} + \eta\left (1 - \eta \right )^2 x_{t-2} + \eta\left (1 - \eta \right )^3 x_{t-3} + ... $$

$$ V_{t+1} = \sum_{k=0}^{t}\eta\left (1 - \eta \right )^k x_{t-k} $$

Hence, with a fixed learning rate, the weight for the observation at $i=t-k$ is equal to:

$$ w_i = \eta(1-\eta)^{t-i} $$

and the contribution of each observation decays with time.
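For example, with $\eta = 0.25$ the most recent observation carries a weight of $0.25$, while an observation made three steps earlier carries a weight of $0.25 \times 0.75^{3} \approx 0.105$.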
###Code
# calculate the weight
def weight(eta, k):
return eta * ((1-eta)**k)
# plot over time
with sns.axes_style('ticks'):
plt.figure(figsize=(5, 3))
x = range(36)
ax = plt.gca()
ax.plot(x, np.ones(len(x))/len(x), 'r', label='Simple Running Average')
ax.plot(x, [weight(0.25, k) for k in x],'k', label='Learning Rate = 0.25')
ax.plot(x, [weight(0.1, k) for k in x],'k--', label='Learning Rate = 0.1')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
ax.set_ylabel('Weight')
ax.set_xlabel('Time past since observation')
sns.despine(trim=False, offset=10)
###Output
_____no_output_____
###Markdown
One notable consequence of this is that lower learning rates result in an algorithm that integrates over a longer period of time. The optimal thing to do after a change point is to reset and start over.
###Code
# create a simple RL Agent
XY = np.concatenate([X, Y])
mle = [
np.mean(X[0:ii]) for ii in range(1, len(X) + 1)] + [np.mean(Y[0:ii]) for ii in range(1, len(Y) + 1)]
v = 10
eta = 0.25
q = []
for X0 in XY:
q.append(v)
v += eta *(X0 - v)
with sns.axes_style('ticks'):
plt.figure(figsize=(5, 3))
plt.plot(XY, 'd')
plt.plot(mle, 'm', label='"Optimal" Behavior')
ax = plt.gca()
ax.set_ylim([0, 25])
sns.despine(trim=True, offset=10)
ax.set_ylabel('Value')
ax.set_xlabel('Time')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
###Output
_____no_output_____
###Markdown
We can also show the optimal effective learning rate over time. It's worth noting that this is unaffected by the variance only because we know the location of the change point.
###Code
x = np.arange(0, 100)
y = np.concatenate([1 / np.arange(1, 51)] * 2)
with sns.axes_style('ticks'):
plt.figure(figsize=(5, 3))
plt.plot(x, y,)
ax = plt.gca()
sns.despine(trim=True, offset=10)
ax.set_ylabel('Optimal Learning Rate')
ax.set_xlabel('Time')
###Output
_____no_output_____
###Markdown
Here is a demonstration of a Bayesian learner learning the probability of a binary reward (a Bernoulli random variable). The belief can be parameterized with a Beta distribution with two parameters: "a", the number of successes, and "b", the number of failures.
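For reference, the mode of a $\mathrm{Beta}(a, b)$ distribution (for $a, b > 1$) is $(a-1)/(a+b-2)$, which is what the code below computes: with $a=16$ and $b=4$ the most likely reward probability is $15/18 \approx 0.83$, while with $a=b=5$ it is exactly $0.5$.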
###Code
from scipy.stats import beta
a = 16 # number of times option A was tried & rewarded
b = 4 # number of times option A was tried & not rewarded
mode_a = (a - 1.) / (a + b -2.)
rv = beta(a, b)
a = 5 # number of times option B was tried & rewarded
b = 5 # number of times option B was tried & not rewarded
mode_b = (a - 1.) / (a + b -2.)
rv2 = beta(a, b)
x = np.arange(0, 1.01, 0.01)
with sns.axes_style('ticks'):
plt.figure(figsize=(6, 4))
plt.plot(x, rv.pdf(x), label='Action A')
plt.plot(x, rv2.pdf(x), label='Action B')
ax = plt.gca()
lb, ub = ax.get_ylim()
plt.plot([mode_a, mode_a], [lb, rv.pdf(mode_a)], 'b:')
plt.plot([mode_b, mode_b], [lb, rv2.pdf(mode_b)], 'g:')
ax.set_yticks([])
ax.set_xlabel('Reward Probability')
sns.despine(offset=10, left=True)
plt.legend(loc='upper left')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
The probability that "A" or "B" can be dervied via integration or sampleing. Here, we use sampling
###Code
cmf = rv.cdf(x)
cmf2 = rv2.cdf(x)
y = [x[np.sum(cmf < np.random.rand())] > x[np.sum(cmf2 < np.random.rand())] for _ in range(1000)]
with sns.axes_style('ticks'):
plt.figure(figsize=(4,4))
sns.barplot([1, 2], [np.sum(y)*1./len(y), (len(y) - np.sum(y))*1./len(y)])
ax = plt.gca()
ax.set_xticklabels(['A', 'B'])
ax.set_ylabel('Probability Action is Best')
ax.set_ylim(0, 1)
sns.despine(offset=2)
plt.tight_layout()
###Output
_____no_output_____ |
01-jul-2020_ptru-kinetics-plots.ipynb | ###Markdown
01 Jul 2020 - Read results from MKMCXX microkinetics simulations
- This notebook produces DRC and TOF plots based on microkinetics data for the PtRu alloy project.
- Want to quickly read in the microkinetics simulation results from the MKMCXX code.
- Basically, I used my `mkmcxx` class to automate reading the results of each simulation and extract the parameters required to produce a TOF volcano plot and degree-of-rate-control plots.
###Code
import os
import glob
import pickle
import re
import copy
import datetime
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from samueldy_atomistic_utils.microkinetics import mkmcxx
import tqdm
import multiprocessing
# Other resources
%load_ext blackcellmagic
# Autoreload functionality for iterating on packages.
# Uncomment if running cells gets too slow.
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Other utility functions:
###Code
def load_from_file(fname, module=pickle):
"""
Quick function to load an object from a file.
By default, use the pickle module.
"""
with open(fname, "rb") as f:
obj = module.load(f)
return obj
def save_to_file(obj, fname, module=pickle):
"""
Quick function to save a single object to a file.
By default, use the pickle module.
"""
with open(fname, "wb") as f:
module.dump(obj, f)
return True
# try:
# except:
# print("Something went wrong with the pickling.")
# Thanks to https://scentellegher.github.io/visualization/2018/05/02/custom-fonts-matplotlib.html
# specify the custom font to use
plt.rcParams["font.family"] = "sans-serif"
plt.rcParams["font.sans-serif"] = "Nimbus Sans,Arial"
plt.rcParams["mathtext.fontset"] = "custom"
plt.rcParams["font.size"] = 22
plt.rcParams["axes.linewidth"] = 3
plt.rcParams["axes.titlepad"] = 15
plt.rcParams["text.usetex"] = True
# Customize ticks
plt.rcParams["xtick.bottom"] = True
plt.rcParams["xtick.top"] = True
plt.rcParams["xtick.direction"] = "in"
plt.rcParams["ytick.left"] = True
plt.rcParams["ytick.right"] = True
plt.rcParams["ytick.direction"] = "in"
plt.rcParams["xtick.major.size"] = 6
plt.rcParams["xtick.major.width"] = 3
plt.rcParams["ytick.major.size"] = 6
plt.rcParams["ytick.major.width"] = 3
# Custom LaTeX preamble to use when rendering
plt.rcParams["text.usetex"] = True
preamble=r"""
\usepackage{amsmath}
\usepackage{amssymb}
% Use Helvetica for text and math
% Thanks to https://gist.github.com/Pni0/2923266
\renewcommand{\familydefault}{\sfdefault}
\usepackage[scaled=1]{helvet}
\usepackage[helvet]{sfmath}
\everymath={\sf}
% Shorten reaction arrows
\usepackage[version=4, arrows=pgf]{mhchem}
% Make arrows in chemical formulae shorter
\ExplSyntaxOn{}
\keys_define:nn { mhchem }
{
arrow-min-length .code:n =
\cs_set:Npn \__mhchem_arrow_options_minLength:n { {#1} } % default is 2em
}
\ExplSyntaxOff{}
\mhchemoptions{arrow-min-length=1.2em}
"""
plt.rcParams["text.latex.preamble"] = preamble
# Enumerate all compounds and reaction strings used
compound_strings = [
"Haqu",
"NO3-",
"H2",
"N2",
"N2O",
"H2O",
"NO",
"NH3",
"NO3*",
"H*",
"NO2*",
"NO*",
"N*",
"NH*",
"NH2*",
"NH3*",
"O*",
"N2*",
"N2O*",
"OH*",
"H2O*",
"*",
]
rxn_strings = [
"NO + * <-> NO*",
"N2 + * <-> N2*",
"N2O + * <-> N2O*",
"H2O + * <-> H2O*",
"NH3 + * <-> NH3*",
"NO3- + * <-> NO3*",
"NO3* + * <-> NO2* + O*",
"NO2* + * <-> NO* + O*",
"NO* + * <-> N* + O*",
"2 N* <-> N2* + *",
"2 NO* <-> N2O* + O*",
"N2O* + * <-> N2* + O*",
"N* + Haqu <-> NH*",
"NH* + Haqu <-> NH2*",
"NH2* + Haqu <-> NH3*",
"O* + Haqu <-> OH*",
"OH* + Haqu <-> H2O*",
"Haqu + * <-> H*",
"H2 + 2 * <-> 2 H*",
]
rxn_labels_latex = {
"NO + * <-> NO*" : r"$\ce{NO + {}^* \rightleftharpoons NO^*}$",
"N2 + * <-> N2*" : r"$\ce{N2 + {}^* \rightleftharpoons N2^*}$",
"N2O + * <-> N2O*" : r"$\ce{N2O + {}^* \rightleftharpoons N2O^*}$",
"H2O + * <-> H2O*" : r"$\ce{H2O + {}^* \rightleftharpoons H2O^*}$",
"NH3 + * <-> NH3*" : r"$\ce{NH3 + {}^* \rightleftharpoons NH3^*}$",
"NO3- + * <-> NO3*" : r"$\ce{NO3- + {}^* \rightleftharpoons NO3^*}$",
"NO3* + * <-> NO2* + O*": r"$\ce{NO3^* + {}^* \rightleftharpoons NO2^* + O^*}$",
"NO2* + * <-> NO* + O*" : r"$\ce{NO2^* + {}^* \rightleftharpoons NO^* + O^*}$",
"NO* + * <-> N* + O*" : r"$\ce{NO^* + {}^* \rightleftharpoons N^* + O^*}$",
"2 N* <-> N2* + *" : r"$\ce{2N^* \rightleftharpoons N2^* + {}^* }$",
"2 NO* <-> N2O* + O*" : r"$\ce{2NO^* \rightleftharpoons N2O^* + O^*}$",
"N2O* + * <-> N2* + O*" : r"$\ce{N2O^* + {}^* \rightleftharpoons N2^* + O^*}$",
"N* + Haqu <-> NH*" : r"$\ce{N^* + H+ + e- \rightleftharpoons NH^*}$",
"NH* + Haqu <-> NH2*" : r"$\ce{NH^* + H+ + e- \rightleftharpoons NH2^*}$",
"NH2* + Haqu <-> NH3*" : r"$\ce{NH2^* + H+ + e- \rightleftharpoons NH3^*}$",
"O* + Haqu <-> OH*" : r"$\ce{O^* + H+ + e- \rightleftharpoons OH^*}$",
"OH* + Haqu <-> H2O*" : r"$\ce{OH^* + H+ + e- \rightleftharpoons H2O^*}$",
"Haqu + * <-> H*" : r"$\ce{H+ + e- + {}^* \rightleftharpoons H^*}$",
"H2 + 2 * <-> 2 H*" : r"$\ce{H2 + 2 {}^* \rightleftharpoons{} 2H^*}$",
}
###Output
_____no_output_____
###Markdown
20 Jul 2020: Results
Parse results
- Define some functions to facilitate parsing the data.
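As a quick illustration of the folder-name convention the parsers below rely on (the folder name here is hypothetical, and the regex and kJ/mol-to-eV conversion simply mirror the ones used in `make_mkmcxx_simulation`):
###Code
import re
# Hypothetical folder encoding O and N binding energies of -650 and -425 kJ/mol
binding_energy_regex = re.compile(r".*O_(?P<EO>\d+)__N_(?P<EN>\d+).*")
match = binding_energy_regex.match("0.1V-vs-RHE/O_650__N_425")
ev_to_kjmol = 96.485  # kJ/mol per eV
binding_energies_eV = {
    label: -float(energy) / ev_to_kjmol for label, energy in match.groupdict().items()
}
# binding_energies_eV is approximately {"EO": -6.74, "EN": -4.40}
###Output
_____no_output_____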
###Code
def parse_mkmcxx_results(
results_dir: str = ".", U: float = 0.2, n_jobs: int = 16, outfile_path: str = None
):
"""
Load the results from the specified MKMCXX results folder.
Read in the MKMCXX results from a results folder created by Jin-Xun Liu's
code. This will create a pickle file containing a list of objects, each
containing the `mkmcxx.MicrokineticsSimulation.results()` output for that
folder.
Parameters
----------
results_dir : str, optional
The folder where the results should be located, by default ".". This
folder should contain subfolders of the form "O_<EO>__N<EN>", where
-1*<EO> and -1*<EN> are the binding energies of O and N in kj/mol. Each
subfolder should contain an "output.log" file and a "run" folder
containing the MKCMXX simulation results.
U : float, optional
The applied potential in V vs. RHE, by default 0.2 V.
n_jobs : int, optional
The number of multiprocessing jobs to use when parsing the results, by
default 16.
outfile_path : str, optional
The path in which to save the output pickle file, by default
"jxl-<potential>V-results_<date>.pckl"
"""
if not outfile_path:
outfile_path = f"jxl-{U}V-results_{datetime.date.today().strftime('%d-%b-%Y').lower()}.pckl"
# Verify that results folder exists
if not os.path.exists(results_dir):
raise RuntimeError(f"Folder {results_dir} does not exist.")
# Get list of all directories where MKMCXX was run
print("Searching directories...")
data_dirs = glob.glob(os.path.join(results_dir, "*", "output.log"))
print(f"Found {len(data_dirs)} data directories.")
# Make simulation object for each folder
    completed_simulations = [
        make_mkmcxx_simulation(folder, U) for folder in tqdm.tqdm(data_dirs)
    ]
# Read the results. Parallelize over many operations
    with multiprocessing.Pool(processes=n_jobs) as p:
parsed_results = list(
tqdm.tqdm(
p.imap(read_results, completed_simulations),
total=len(completed_simulations),
)
)
# Pickle to a results file
print(f"Saving results to pickle file {outfile_path}...")
save_to_file(
parsed_results, outfile_path,
)
print("Saving done.")
# Find directories where there were no results available
dud_directories = [
sim["directory"]
for sim in tqdm.tqdm(
filter(lambda sim: not sim["results"]["range_results"], parsed_results)
)
]
print(f"{len(dud_directories)} directories unexpectedly had no simulation results.")
print(dud_directories)
return True
def make_mkmcxx_simulation(folder_path: str, U: float):
"""
Quick wrapper to make a `MicrokineticsSimulation` object
representing a completed simulation directory.
"""
# Define some constants
ev_to_kjmol = 96.485 # kJ/mol per eV
binding_energy_regex = re.compile(r".*O_(?P<EO>\d+)__N_(?P<EN>\d+).*")
# Load dummy data to enable instantiation of dummy MKMCXX runs
fake_reactions = load_from_file(os.path.expanduser("base-reaction-set.pckl"))
# Set up runs
fake_runs = [{"temp": 300, "time": 1e8, "abstol": 1e-8, "reltol": 1e-8}]
# Enter some settings from the input.mkm file
fake_settings = {
"type": "sequencerun",
"usetimestamp": False,
"drc": 0,
"reagents": ["NO3-", "Haqu"],
"keycomponents": ["N2", "NO3-", "N2O"],
"makeplots": 0,
}
# Get O and N binding energies
result = {"U": U}
try:
match = binding_energy_regex.match(folder_path)
result.update(
{
label: -float(energy) / ev_to_kjmol
for label, energy in match.groupdict().items()
}
)
except AttributeError:
print("Invalid folder name. Must be of format .*O_<EO>__N_<EN>.*")
raise RuntimeError
# Instantiate simulation object and add to dictionary
folder_name = os.path.dirname(folder_path)
result.update({"directory": folder_name})
sim = mkmcxx.MicrokineticSimulation(
reactions=list(fake_reactions.values()),
settings=fake_settings,
runs=fake_runs,
directory=folder_name,
# run_directory=".",
run_directory="run",
)
result.update({"sim": sim})
return result
def read_results(sim_object):
"""
Quick wrapper function to read simulation results. Want to package
into a format that can be read by my Jupyter notebook.
"""
results = {}
# Copy the binding energies and potential
results.update(
{label: sim_object[label] for label in ["EO", "EN", "U", "directory"]}
)
# Read the simulation results, and add it to the dictionary, then
# return the dictionary.
results.update({"results": sim_object["sim"].read_results()})
return results
###Output
_____no_output_____
###Markdown
- Now parse the data for 0.1 V and 0.2 V.
###Code
# Now for 0.1 V
results_dir = "0.1V-vs-RHE"
U = 0.1
outfile_path = (
f"jxl-{U}V-results-h2o-corrected_"
f"{datetime.date.today().strftime('%d-%b-%Y').lower()}.pckl"
)
result = parse_mkmcxx_results(results_dir=results_dir, U=U, outfile_path=outfile_path)
###Output
_____no_output_____
###Markdown
Assemble data for plotting
- Need to distill down the results to have just the most important things (DRC, TOF, selectivities).
###Code
def get_rate_points(results: list, compound: str = "NO3-"):
"""
Read rate information from the entire dataset, handling cases where the
data does not exist
Parameters
----------
results : list
The output of the `read_results` functions defined above.
compound : str
The key used to look up rate information in the
`mkmcxx.MicrokineticsSimulation.get_results()
["range_results"]["derivatives"]` object
Returns
-------
list
A list of the form [{"EO": <EO>, "EN": <EN>, "ratelog": <ratelog>}, ...],
where <EO> is the O binding energy in eV, <EN> is the N binding energy
in eV, and <ratelog> is the base-10 logarithm of the formation/consumption
rate of the compound specified in `compound_str`.
"""
extracted_rate_points = []
for result in tqdm.tqdm(results):
try:
# Try rounding rates to 6 sig figs.
raw_rate = result["results"]["range_results"]["derivatives"][compound][0]
rounded_rate = np.float(f"{raw_rate:.6e}")
projection = {
"EO": result["EO"],
"EN": result["EN"],
"ratelog": np.log10(-(rounded_rate)),
}
except (KeyError, IndexError):
projection = None
extracted_rate_points.append(projection)
# Filter out any None values
extracted_rate_points = list(filter(lambda x: x, extracted_rate_points))
return extracted_rate_points
def get_drc_points(results: list, rxn: str):
"""
Read rate information from the entire dataset, handling cases where the
data does not exist
Parameters
----------
results : list
The output of the `read_results` functions defined above.
rxn : str
The key used to look up rate information in the
`mkmcxx.MicrokineticsSimulation.get_results()["drc_results"]["drc"]`
object
Returns
-------
list
A list of the form [{"EO": <EO>, "EN": <EN>, "drc": <drc>}, ...], where
<EO> is the O binding energy in eV, <EN> is the N binding energy in eV,
and <drc> is the Campbell degree-of-rate-control coefficient for the
reaction specified in `rxn`.
"""
extracted_drc_points = []
for result in tqdm.tqdm(results):
try:
projection = {
"EO": result["EO"],
"EN": result["EN"],
"drc": np.float(result["results"]["drc_results"]["drc"].loc[rxn, 0]),
}
except (KeyError, IndexError):
projection = None
extracted_drc_points.append(projection)
# Filter out any None values
extracted_drc_points = list(filter(lambda x: x, extracted_drc_points))
return extracted_drc_points
# Going to store condensed plot data
condensed_plot_data = {}
# Get results for 0.1 V. Change potential/file names if you ran simulations
# at a different applied potential.
potentials = [0.1]
datestr = datetime.date.today().strftime('%d-%b-%Y').lower()
for potential in potentials:
results = load_from_file(f"jxl-{potential}V-results-h2o-corrected_{datestr}.pckl")
condensed_plot_data.update(
{
f"{potential}V": {
"ratelog": {
compound_str: get_rate_points(
results=results, compound=compound_str
)
for compound_str in compound_strings
},
"drc": {
rxn_string: get_drc_points(results=results, rxn=rxn_string)
for rxn_string in rxn_strings
},
}
}
)
print(f"Got all data for {potential}V vs. RHE")
# Save all results
datestr = datetime.date.today().strftime("%d %b %Y").lower().replace(" ", "-")
save_to_file(condensed_plot_data, f"all-condensed-plot-data-h2o-corrected_{datestr}.pckl")
###Output
_____no_output_____
###Markdown
Assemble a DRC contour plot
- This will be for nitrate-to-nitrite dissociation, the proposed rate-limiting step.
###Code
# Load condensed plot data
condensed_plot_data = load_from_file(
"all-condensed-plot-data-h2o-corrected_24-jul-2020.pckl"
)
# Also want to load the marker data set in order to plot the points on the volcano
marker_data_set = load_from_file("latest-volcano-marker-data.pckl")
# Reindex to plot Ru(211) before Rh(211)
new_idx = [
"comp-1",
"comp-1.5",
"comp-2",
"comp-3",
"comp-4",
"comp-5",
"comp-6.5",
"comp-7.5",
"comp-9",
"ru211",
"rh211",
"pt3ru211"
]
marker_data_set = marker_data_set.loc[new_idx, :]
def plot_marker_set(row, ax):
"""Apply function to plot data points on plot"""
ax.plot(
row["EO"],
row["EN"],
color=row["color"],
marker=row["marker"],
markersize=10,
markeredgewidth=row["markeredgewidth"] + 0.7,
markeredgecolor=row["markeredgecolor"],
label=row["label"],
linestyle="None"
)
def make_drc_plot(
data: list,
rxn_str: str,
ax: plt.Axes = None,
outfile_name: str = None,
clip_threshold: float = 2.0,
contour_levels: np.ndarray = np.r_[-2:2.1:0.1],
):
"""Make a degree of rate control plot for the specified reaction.
Parameters
----------
data :
List containing data of the form [{"EO": <EO>, "EN": <EN>, "drc":
<drc>}, ...], where <EO> is the O binding energy in eV, <EN> is the N
binding energy in eV, and <drc> is the Campbell degree of rate control
coefficient.
rxn_str : str
The reaction to be written above the plot, in LaTeX format.
ax : plt.Axes
The Axes object on which to place this plot. If none, Figure and Axes
objects will be created for you.
outfile_name : str, optional
Name of graphics file (including extension) to which to export the
plotted graph, by default None. If None, no plot will be written to
disk. Passed directly to `matplotlib.pyplot.Figure.savefig`. Has no
effect if `ax` is specified.
clip_threshold : float, optional
Absolute value of the symmetric threshold around 0.0 to which to clip
DRC data that exceeds that threshold, by default 2.0. For example, if
the DRC threshold is set to 2.0, all DRC values greater than 2 or less
than -2 will be set to +2 or -2, respectively.
contour_levels : ndarray, int
Array of level values to use when drawing the contours on the DRC
contour plot. Passed directly to `matplotlib.pyplot.tricontour` and
`matplotlib.pyplot.tricontourf`
"""
# Make data frame full of data
df = pd.DataFrame.from_dict(data=data)
# Filter out NaN/inf values
df = df[~df.isin([np.nan, np.inf, -np.inf]).any(axis=1)]
# Report number of points used for DRC
n_points = len(df)
print(f"Found {n_points} non-NaN/inf points to plot.")
# Clip values outside of DRC \in [-2,2]
mask = df[df["drc"] > clip_threshold].index
df.loc[mask, "drc"] = clip_threshold
mask = df[df["drc"] < -clip_threshold].index
df.loc[mask, "drc"] = -clip_threshold
# Reorder columns if necessary
df = df.loc[:, ["EO", "EN", "drc"]]
# Extract values
values = [np.ravel(k) for k in np.hsplit(df.values, 3)]
# Create DRC plot
fig = None
if not ax:
fig, ax = plt.subplots(1, 1, figsize=(10, 6.5))
print("Making my own plot")
contourset = ax.tricontourf(*values, levels=contour_levels, cmap="jet")
# Plot the marker data
marker_data_set.apply(func=lambda row: plot_marker_set(row, ax), axis=1)
# Set x and y ticks to be the same
ticks = np.r_[-6:-1:1]
ax.set_xticks(ticks)
ax.set_yticks(ticks)
ax.axis([-6.7,-2.5,-7,-2])
# Place reaction inside frame
ax.annotate(
xy=(0.5, 0.90),
s=rxn_str,
xycoords="axes fraction",
fontsize="medium",
ha="center",
va="top",
)
# ax.set_xlabel("O binding energy / eV")
# ax.set_ylabel("N binding energy / eV")
# ax.axis("square")
# If user didn't specify axis, no figure will have been created. In this
# case, add our own color bar and return the figure and axes objects.
if fig:
print("Adding my own colorbar")
fig.colorbar(
contourset,
label="Degree of rate control factor",
)
if outfile_name:
fig.savefig(outfile_name)
return fig, ax
else:
# Return the contour set object so it can be used for a figure bar in
# the larger figure
return contourset
# Draw all DRC plots
fig, ax = plt.subplots(
nrows=5,
ncols=4,
figsize=(25, 25),
# constrained_layout=True,
sharex=True,
sharey=True,
)
fig.subplots_adjust(wspace=0.0, hspace=0.0, left=0.135, bottom=0.135)
datestr = datetime.date.today().strftime("%d %b %Y").lower().replace(" ", "-")
plot_results = []
for ax_instance, rxn_string in zip(ax.flat, rxn_strings):
plot_results.append(
make_drc_plot(
data=condensed_plot_data["0.1V"]["drc"][rxn_string],
rxn_str=rxn_labels_latex[rxn_string],
ax=ax_instance,
)
)
print(f"Finished plot for {rxn_string}")
# Remove final blank subplot
ax_blank = list(ax.flat)[-1]
ax_blank.set_axis_off()
# Set up common x and y labels
# Thanks to https://stackoverflow.com/a/26892326
fig.text(x=0.45, y=0.1, s="O binding energy / eV", ha="center", fontsize="large")
fig.text(
x=0.1,
y=0.5,
s="N binding energy / eV",
ha="center",
rotation="vertical",
fontsize="large",
)
# Set up color bar
cbar = fig.colorbar(
mappable=plot_results[-1],
cmap="jet",
ax=ax,
location="right",
aspect=40,
shrink=0.70,
pad=0.02,
)
cbar.set_label(label="Degree of rate control", labelpad=15)
# Add legend in the place of the very last subplot
ax_lastplot = list(ax.flat)[-2]
lg = ax_lastplot.legend(
loc="right",
fontsize=18,
frameon=True,
handletextpad=-0.10,
borderpad=0.25,
labelspacing=0.55,
facecolor="k",
framealpha=0.15,
edgecolor="k",
bbox_to_anchor=(1.56, 0, 0.5, 1.0),
ncol=2,
)
# Export graphics
fig.savefig(
f"no3-her-drc-plots_latest.pdf", dpi=150, bbox_inches="tight", pad_inches=0.1
)
fig.savefig(
f"no3-her-drc-plots_latest.png", dpi=150, bbox_inches="tight", pad_inches=0.1
)
###Output
_____no_output_____
###Markdown
Assemble a TOF contour plot
###Code
def make_tof_plot(
data: list,
compound_str: str,
ax: plt.Axes = None,
outfile_name: str = None,
contour_levels: np.ndarray = np.r_[-36:3.1:3],
):
"""Make a degree of rate control plot for the specified reaction.
Parameters
----------
data :
List containing data of the form [{"EO": <EO>, "EN": <EN>, "rate":
<rate>}, ...], where <EO> is the O binding energy in eV, <EN> is the N
binding energy in eV, and <rate> is the Campbell degree of rate control
coefficient.
compound_str : str
The name of the compound (in LaTeX syntax) to show in the title of the
plot.
ax : plt.Axes
The Axes object on which to place this plot. If none, Figure and Axes
objects will be created for you.
outfile_name : str, optional
Name of graphics file (including extension) to which to export the
plotted graph, by default None. If None, no plot will be written to
disk. Passed directly to `matplotlib.pyplot.Figure.savefig`. Has no
effect if `ax` is specified.
contour_levels : ndarray, int
Array of level values to use when drawing the contours on the DRC
contour plot. Passed directly to `matplotlib.pyplot.tricontour` and
`matplotlib.pyplot.tricontourf`
"""
# Make data frame full of data
df = pd.DataFrame.from_dict(data=data)
# Filter out NaN/inf values
df = df[~df.isin([np.nan, np.inf, -np.inf]).any(axis=1)]
# Report number of points used for DRC
n_points = len(df)
print(f"Found {n_points} non-NaN/inf points to plot.")
# Reorder columns if necessary
df = df.loc[:, ["EO", "EN", "ratelog"]]
# Extract values
values = [np.ravel(k) for k in np.hsplit(df.values, 3)]
# Create TOF plot
fig = None
if not ax:
fig, ax = plt.subplots(1, 1, figsize=(10, 6.5))
print("Making my own plot")
contourset = ax.tricontourf(
*values, levels=contour_levels, cmap="Spectral_r")
ax.set_title(
fr"""Volcano plot: {compound_str} TOF""")
ax.set_xlabel("O binding energy / eV")
ax.set_ylabel("N binding energy / eV")
# If user didn't specify axis, no figure will have been created. In this
# case, add our own color bar and return the figure and axes objects.
if fig:
print("Adding my own colorbar")
fig.colorbar(
contourset,
label=r"$\log_{10}(\mathrm{TOF})$ / $\mathrm{s^{-1}}$",
)
if outfile_name:
fig.savefig(outfile_name)
return fig, ax
else:
# Return the contour set object so it can be used for a figure bar in
# the larger figure
return contourset
# Show TOF volcano plot for nitrate consumption
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
datestr = datetime.date.today().strftime("%d %b %Y").lower().replace(" ", "-")
# Plot DRC for NO3 dissociation and HER
result = make_tof_plot(
data=condensed_plot_data["0.1V"]["ratelog"]["NO3-"],
compound_str=r"$\mathrm{NO_3^-}$",
ax=ax,
)
ax.axis([-6.5,-3.5,-7,-3.5])
cbar = fig.colorbar(
mappable=result, cmap="Spectral_r", ax=ax, aspect=20
)
cbar.set_label(label=r"$\log_{10}(\mathrm{TOF})$ / $\mathrm{s^{-1}}$", labelpad=15)
fig.savefig(f"no3-her-ratelog-plots_{datestr}.pdf")
###Output
_____no_output_____ |
Image_combination.ipynb | ###Markdown
Image Combination
Importing the libraries
###Code
import torch
import cv2
import numpy as np
import torch.optim as optim
import torchvision.models as models
import torch.nn as nn
import scipy.misc
###Output
_____no_output_____
###Markdown
A utility function to read images and turn them into a PyTorch tensor
###Code
def loadim(path):
im = cv2.imread(path, cv2.IMREAD_COLOR)
im = np.array([im[:, :, 2], im[:, :, 1], im[:, :, 0]])
im = torch.from_numpy(im)
im = im.type('torch.FloatTensor')
im = im/128 - 2
return im
###Output
_____no_output_____
###Markdown
Defining the model we will use to extract features, in this case, VGG11
###Code
vgg11 = models.vgg11(pretrained=True)
###Output
_____no_output_____
###Markdown
In the following block, we define a series of architectures derived from VGG11 that we will use to extract features at different levels. It is known that higher layers extract higher-level features. We only used the convolutional layers of the VGG for this experiment; we didn't use the dense layers' output features.
###Code
class VGG16_conv7(nn.Module):
def __init__(self):
super(VGG16_conv7, self).__init__()
self.features = nn.Sequential(
# stop at conv7
*list(vgg11.features.children())[:-3]
)
def forward(self, x):
x = self.features(x)
return x
class VGG16_conv6(nn.Module):
def __init__(self):
super(VGG16_conv6, self).__init__()
self.features = nn.Sequential(
# stop at conv6
*list(vgg11.features.children())[:-5]
)
def forward(self, x):
x = self.features(x)
return x
class VGG16_conv5(nn.Module):
def __init__(self):
super(VGG16_conv5, self).__init__()
self.features = nn.Sequential(
# stop at conv5
*list(vgg11.features.children())[:-8]
)
def forward(self, x):
x = self.features(x)
return x
class VGG16_conv4(nn.Module):
def __init__(self):
super(VGG16_conv4, self).__init__()
self.features = nn.Sequential(
# stop at conv4
*list(vgg11.features.children())[:-10]
)
def forward(self, x):
x = self.features(x)
return x
class VGG16_conv3(nn.Module):
def __init__(self):
super(VGG16_conv3, self).__init__()
self.features = nn.Sequential(
# stop at conv3
*list(vgg11.features.children())[:-13]
)
def forward(self, x):
x = self.features(x)
return x
class VGG16_conv2(nn.Module):
def __init__(self):
super(VGG16_conv2, self).__init__()
self.features = nn.Sequential(
# stop at conv2
*list(vgg11.features.children())[:-15]
)
def forward(self, x):
x = self.features(x)
return x
class VGG16_conv1(nn.Module):
def __init__(self):
super(VGG16_conv1, self).__init__()
self.features = nn.Sequential(
# stop at conv1
*list(vgg11.features.children())[:-18]
)
def forward(self, x):
x = self.features(x)
return x
# Just realized I forgot the 8th layer
###Output
_____no_output_____
###Markdown
First of all, we compute the features we want our output image to have: for instance, low-level features similar to the style image and higher-level features similar to the content image. The features we're going to use are, respectively, the lower-layer activations and the higher-layer activations. After that, we will optimize the pixels of a new image so that they produce activations similar to the desired ones. To do that, the first thing we need to do is initialize the new image. Some options would be the style image, the content image, or even a random initialization of the pixels. For our first experiments, we chose the content image as the initialization. To find the optimal parameters, we define a loss function which is the MSE between the desired features and the ones we actually have, summed over all the layers. Then, we will use SGD to find the parameters (in this case, the input image pixel values) that minimize this function. To calculate the gradient of the loss with respect to the pixel values, we backpropagate the error from each layer down to the pixel values, then we update the pixel values. We will repeat this for a few iterations, saving the result at each step.
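As a design note, the per-layer squared differences accumulated one by one in `main()` below can be written as a loop over the feature extractors; this is only a sketch of a hypothetical `feature_loss` helper, assuming the `vgg1`-`vgg7` modules defined above and PyTorch tensors as inputs:
###Code
def feature_loss(feature_extractors, input_im, target_im):
    # Sum of squared differences between the two images' feature maps,
    # accumulated over every tapped VGG layer
    loss = 0.0
    for extractor in feature_extractors:
        diff = extractor(target_im) - extractor(input_im)
        loss = loss + (diff ** 2).sum()
    return loss
###Output
_____no_output_____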
###Code
def main():
vgg1 = VGG16_conv1()
vgg2 = VGG16_conv2()
vgg3 = VGG16_conv3()
vgg4 = VGG16_conv4()
vgg5 = VGG16_conv5()
vgg6 = VGG16_conv6()
vgg7 = VGG16_conv7()
cont_im = loadim('images/landscape-small.png')
cont_im = cont_im.unsqueeze(0)
cont_im.requires_grad = True
style_im = loadim('images/van-gogh-small.png')
style_im = style_im.unsqueeze(0)
opt = optim.SGD([cont_im], lr=0.0001)
y1_targ = vgg1(style_im)
y2_targ = vgg2(style_im)
y3_targ = vgg3(style_im)
y4_targ = vgg4(style_im)
y5_targ = vgg5(style_im)
y6_targ = vgg6(style_im)
y7_targ = vgg7(style_im)
input_im = cont_im
for i in range(20):
print('Iteration', i)
opt.zero_grad()
y1_ = vgg1(input_im)
y2_ = vgg2(input_im)
y3_ = vgg3(input_im)
y4_ = vgg4(input_im)
y5_ = vgg5(input_im)
y6_ = vgg6(input_im)
y7_ = vgg7(input_im)
y1_d = y1_targ - y1_
y2_d = y2_targ - y2_
y3_d = y3_targ - y3_
y4_d = y4_targ - y4_
y5_d = y5_targ - y5_
y6_d = y6_targ - y6_
y7_d = y7_targ - y7_
y1_d = y1_d * y1_d
y2_d = y2_d * y2_d
y3_d = y3_d * y3_d
y4_d = y4_d * y4_d
y5_d = y5_d * y5_d
y6_d = y6_d * y6_d
y7_d = y7_d * y7_d
loss = torch.tensor(0, dtype=torch.float)
loss.requires_grad = True
for dif in [y1_d, y2_d, y3_d, y4_d, y5_d, y6_d, y7_d]:
l = torch.sum(dif)
loss = loss + l
loss.backward(retain_graph=True)
opt.step()
b = cont_im[0].detach().numpy()
b = np.rollaxis(b, 0, 3)
scipy.misc.imsave('content' + str(i) + '.jpg', b)
if __name__ == '__main__':
main()
###Output
Iteration 0
|
EqualIrreversibleTransitions.ipynb | ###Markdown
Last updated by: Jonathan Liu, 10/15/2020

In this notebook we will investigate the behavior of onset time distributions for the simple case of a Markov chain with irreversible transitions and identical transition rates. We will discover that a transiently increasing rate results in lower noise in the ensuing onset time than a steady-state rate.

First, consider a Markov chain with $k+1$ states and $k$ irreversible transitions, each with rate $\beta$. Labeling the first state with index $0$, the next with $1$, and so on, we have the reaction network:

\begin{equation}
0 \xrightarrow{\beta} 1 \xrightarrow{\beta} ... \xrightarrow{\beta} k
\end{equation}

We will be interested in the mean and variance of the distribution of times $P_k(t)$ to start at state $0$ and reach the final state $k$.

We will first consider the simple case where the transition rate $\beta$ is constant in time. In this case, the distribution $P_k(t)$ is simply given by a Gamma distribution with shape parameter $k$ and rate parameter $\beta$. $P_k(t)$ then has the form

\begin{equation}
P_k(t) = \frac{\beta^k}{\Gamma(k)}t^{k-1}e^{-\beta t}
\end{equation}

where $\Gamma$ is the Gamma function.

The mean $\mu_k$ and variance $\sigma^2_k$ have simple analytical expressions and are given by

\begin{equation}
\mu_k = \frac{k}{\beta} \\
\sigma^2_k = \frac{k}{\beta^2}
\end{equation}

Next, we can examine this system from the perspective of developmental biology. For example, let's consider the system to model the transition of chromatin from an inaccessible, silent state to an accessible, transcriptionally competent state. This could correspond to the time immediately following a nuclear division in the early fruit fly embryo. From a developmental perspective, it's important that $\mu_k$ and $\sigma^2_k$ can be tuned for optimal performance. Namely, given a target mean onset time $\mu$, we would like to minimize the resulting noise $\sigma^2$ in the distribution of onset times around that mean.

For ease of visualization, we will work with the squared CV, $CV^2 = \frac{\sigma^2}{\mu^2}$, instead of the variance.
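Note that these two expressions imply $CV^2_k = \sigma^2_k/\mu_k^2 = (k/\beta^2)/(k/\beta)^2 = 1/k$, so for a constant rate the squared CV depends only on the number of steps $k$ and not on $\beta$; this is a useful check on the parameter scan below.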
###Code
#Import necessary packages
#matplotlib inline
import numpy as np
from scipy.spatial import ConvexHull
import matplotlib.pyplot as plt
import scipy.special as sps
#Simulation for calculating onset times
def CalculatetOn_NEqualTransitions(time,dt,w,N_trans,N_cells):
#Calculates the onset time for a model with N irreversible transitions of
#equal magnitude. The transition rate can be time-varying, but is the same
#global rate for each transition. The model assumes N+1 states, beginning
#in the 0th state. Using finite timesteps and a Markov chain formalism, it
#simulates N realizations of the overall time it takes to reach the
#(N+1)th state. This is vectorized so it calculates it for all AP
#positions.
#Last updated by Jonathan Liu, 10/15/2020
# Inputs:
# time: simulation time vector
# dt: simulation timestep
# w: transition probability vector at each timepoint
# N_trans: number of irreversible transitions
# N_cells: number of cells to simulate
# Outputs:
# t_on: time to reach the final state for each cell (length = N_cells)
## Setup variables
t_on = np.empty(N_cells) #Time to transition to final ON state for each cell
t_on[:] = np.nan
state = np.zeros(N_cells) #State vector describing current state of each cell
finished_states = np.zeros(N_cells) #Vector storing finished statuses of each cell
## Run simulation
#Loop over time
#q = waitbar(0,'Running simulation...')
for i in range(len(time)):
if np.sum(finished_states) == N_cells: #If all cells have turned on, stop the simulation
#print('Halting simulation since all cells have turned on.')
break
#Simulate binomial random variable to see if each cell has transitioned
#If the input transition rate is a nan, this will manifest as never
#transitioning.
p = w[i] * dt #Probability of transition at this timestep
transitioned = np.random.binomial(1,p,N_cells) #Binary transition decision for each cell
#Advance the cells that did transition to the next state
states_to_advance = transitioned == 1
state[transitioned == 1] = state[transitioned == 1] + 1
#See if any states have reached the ON state
t_on[state == N_trans] = time[i]
finished_states[state == N_trans] = 1
state[state == N_trans] = np.nan #Move finished states out of consideration
#waitbar(i/length(time),q,'Running simulation...')
return t_on
###Output
_____no_output_____
###Markdown
The ensuing code calculates the mean and squared CV for varying values of the step number $k$ and the transition rate $\beta$. The final results can be easily visualized in a 2D parameter space, with the mean $\mu_k$ on the x axis and the squared CV $CV^2_k$ on the y axis. We see that the squared CV is constant with respect to the mean. Of particular interest is the boundary at the bottom of the parameter space: for a given mean and an upper limit on $k$ and $\beta$, there is a minimum squared CV below which the system cannot go.
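As a quick numerical sanity check of these analytical expressions (a minimal sketch assuming only NumPy; note that `np.random.gamma` is parameterized by shape $k$ and scale $1/\beta$):
###Code
import numpy as np
k, beta = 3, 1.5
samples = np.random.gamma(shape=k, scale=1 / beta, size=100000)
mean_est = samples.mean()  # should be close to k/beta = 2.0
cv2_est = samples.var() / samples.mean()**2  # should be close to 1/k ~ 0.33
###Output
_____no_output_____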
###Code
#Plot the mean and variance of the Gamma distribution in 2D parameter space, for a given set of parameters
#Function returning the mean and variance of a Gamma distribution
def MeanVarGamDist(shape,rate):
return shape/rate, shape/rate**2
#Let's create a grid of shape and rate parameters
n_steps = np.arange(1,5)
rate = np.arange(0.5,5,0.1)
means_const = np.zeros((len(n_steps),len(rate)))
variances_const = np.zeros((len(n_steps),len(rate)))
for i in range(len(n_steps)):
for j in range(len(rate)):
means_const[i,j], variances_const[i,j] = MeanVarGamDist(n_steps[i],rate[j])
CV2_const = variances_const / means_const**2
plt.figure()
plt.plot(means_const,CV2_const, 'b.')
plt.xlim(0,4)
plt.xlabel('mean')
plt.ylabel('CV^2')
###Output
_____no_output_____
###Markdown
Next, we will investigate the changes to this parameter space by using a transient rate $\beta(t)$. This is of biological interest because many developmental processes occur out of steady state. For example, several models of chromatin accessibility hypothesize that the rate of chromatin state transitioning is coupled to the activity of pioneer factors like Zelda. During each rapid cell cycle division event in the early fly embryo, the nuclear membrane breaks down and reforms again, and transcription factors are expelled and re-introduced back into the nucleus. Thus, after each division event, there is a transient period during which the concentration of pioneer factors at a given gene locus is out of steady state.

For now, we will assume a reasonable form for the transition rate. Considering $\beta$ to be a proxy for Zelda concentration, for example, we will write down this transient $\beta(t)$ as the result of a simple diffusive process with form

\begin{equation}
\beta(t) = \beta (1 - e^{-t / \tau} )
\end{equation}

Here, $\beta$ is the asymptotic, saturating value of $\beta(t)$, and $\tau$ is the time constant governing the time-varying nature of the transition rate. For a diffusive process, $\tau$ would be highly dependent on the diffusion constant, for example.

For comparison, the time plots of the constant and transient input are shown below, for $\tau = 3$ and $\beta = 1$.
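As a point of reference, $\beta(t)$ reaches about $63\%$ of its saturating value at $t = \tau$ (since $1 - e^{-1} \approx 0.63$) and about $95\%$ at $t = 3\tau$ (since $1 - e^{-3} \approx 0.95$), so with $\tau = 3$ the rate only approaches its steady-state value toward the end of the 10-unit simulation window.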
###Code
time = np.arange(0,10,0.1)
dt = 0.1
w_base = 1
w_const = w_base * np.ones(time.shape)
N_trans = 2
N_cells = 1000
#Now with transient exponential rate
tau = 3
w_trans = w_base * (1 - np.exp(-time / tau))
#Plot the inputs
plt.figure()
plt.plot(time,w_const,label='constant')
plt.plot(time,w_trans,label='transient')
plt.xlabel('time')
plt.ylabel('rate')
plt.legend()
###Output
_____no_output_____
###Markdown
Because of the time-varying nature of $\beta(t)$, the resulting distribution $P_k(t)$ no longer obeys a simple Gamma distribution, and an analytical solution is difficult (or even impossible). Nevertheless, we can easily simulate the distribution and calculate $\mu_k$ and $\sigma^2_k$ numerically. The results are shown below in red, compared to the steady state case in blue.
###Code
#Now using the transient simulation
time = np.arange(0,10,0.1)
dt = 0.1
N_trans = 2
N_cells = 1000
tau = 3
means_trans = np.zeros((len(n_steps),len(rate)))
CV2_trans = np.zeros((len(n_steps),len(rate)))
for i in range(len(n_steps)):
for j in range(len(rate)):
w_trans = rate[j] * (1 - np.exp(-time / tau))
t_on_trans = CalculatetOn_NEqualTransitions(time,dt,w_trans,n_steps[i],N_cells)
means_trans[i,j] = np.mean(t_on_trans)
CV2_trans[i,j] = np.var(t_on_trans)/np.mean(t_on_trans)**2
plt.figure()
plt.plot(means_const,CV2_const, 'b.', label='steady state')
plt.plot(means_trans,CV2_trans, 'r.', label='transient')
plt.xlim(0,4)
plt.xlabel('mean')
plt.ylabel('CV^2')
handles, labels = plt.gca().get_legend_handles_labels()
by_label = dict(zip(labels, handles))
plt.legend(by_label.values(), by_label.keys())
###Output
_____no_output_____
###Markdown
We see that the consequence of a slowly increasing transient transition rate is to move the parameter space to the right and down. That is, the transient transition rate serves to increase the mean and decrease the squared CV. This is interesting if we consider a vertical slice of constant mean, for example the line $\mu = 2$. Here, for a given upper bound of $k=4$ steps, the steady-state model can only achieve a minimum squared CV of about $0.25$. In contrast, the transient model with a time constant of $\tau = 3$ can achieve a lower minimum squared CV of around $0.15$!

This implies that having a transient transition rate can actually lower the noise in onset times for a given mean onset time. While the transient nature of pioneer factor concentrations is an inevitable result of rapid cell divisions, this suggests that the transience can actually be harnessed to decrease overall noise in timing. To see this more explicitly, let's simulate results for a fixed number of steps $k = 3$ and varying values of the transition rate $\beta$. We're also going to change the time constant $\tau$ of the transient input.
###Code
#Model parameters
n_steps = 3
rate = np.arange(0.5,5,0.1)
tau = np.array([0.0001,0.5,1,2,3,4])
means = np.zeros((len(tau),len(rate)))
variances = np.zeros((len(tau),len(rate)))
#Simulation parameters
time = np.arange(0,10,0.1)
dt = 0.1
N_trans = 2
N_cells = 1000
for i in range(len(tau)):
for j in range(len(rate)):
w_trans = rate[j] * (1 - np.exp(-time / tau[i]))
t_on_trans = CalculatetOn_NEqualTransitions(time,dt,w_trans,n_steps,N_cells)
means[i,j] = np.mean(t_on_trans)
variances[i,j] = np.var(t_on_trans)
CV2 = variances / means**2
#Plot results
plt.figure()
for i in range(len(tau)):
plt.plot(means[i,:],CV2[i,:],label='tau = ' + str(tau[i]))
plt.legend()
plt.xlabel('mean')
plt.ylabel('CV^2')
plt.title('n = ' + str(n_steps) + ' steps, varying rate from 0.5 to 5')
###Output
_____no_output_____ |
Basic_AutoEncoder.ipynb | ###Markdown
The motivation behind this notebook comes from [Sebastian Raschka](https://sebastianraschka.com/), who constantly inspires me through his work in the field of deep learning. A few days ago, he open-sourced his repo [deeplearning-models](https://github.com/rasbt/deeplearning-models), which contains implementations of a wide variety of deep learning models. I started with [this notebook](https://github.com/rasbt/deeplearning-models/blob/master/pytorch_ipynb/autoencoder/ae-basic.ipynb), which shows a very simple and minimal implementation of a fully-connected _autoencoder_.
###Code
!pip install tensorflow-gpu==2.0.0-beta0
# Imports
import tensorflow as tf
from tensorflow.keras.datasets import mnist
import numpy as np
np.random.seed(7)
print(tf.__version__)
# Load the MNIST dataset
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print(X_train.shape, X_test.shape)
# Define the constants
NUM_FEATURES = 784
UNITS = 32
# Custom class for a simple FC autoencoder
class AutoEncoder(tf.keras.Model):
def __init__(self, num_features, units):
super(AutoEncoder, self).__init__()
self.encoder = tf.keras.layers.Dense(units, activation='linear',
input_shape=(num_features,),
kernel_initializer='glorot_normal',
bias_initializer='zeros')
self.decoder = tf.keras.layers.Dense(num_features, activation='linear', input_shape=(units,))
self.leaky_relu = tf.keras.layers.LeakyReLU(0.5)
def call(self, x):
encoded = self.encoder(x)
encoded = self.leaky_relu(encoded)
decoded = tf.sigmoid(self.decoder(encoded))
return decoded
# Instantiate the autoencoder
auto_encoder = AutoEncoder(NUM_FEATURES, UNITS)
# Define loss function and optimizer
loss_func = tf.keras.losses.BinaryCrossentropy()
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
# Flatten the training images
X_train_copy = X_train.copy()
X_train_copy = X_train_copy.reshape(60000, 28*28).astype(np.float32) / 255.  # scale pixels to [0, 1] so they are valid targets for binary cross-entropy
print(X_train_copy.shape)
# Batches of 64
train_ds = tf.data.Dataset.from_tensor_slices((X_train_copy, y_train)).batch(64)
# Average out the loss after each epoch
train_loss = tf.keras.metrics.Mean(name='train_loss')
# Train the model
@tf.function
def model_train(features):
# Define the GradientTape context
with tf.GradientTape() as tape:
# Get the probabilities
decoded = auto_encoder(features)
# Calculate the loss
        loss = loss_func(features, decoded)  # BinaryCrossentropy expects (y_true, y_pred); the input image is the target
# Get the gradients
gradients = tape.gradient(loss, auto_encoder.trainable_variables)
# Update the weights
optimizer.apply_gradients(zip(gradients, auto_encoder.trainable_variables))
train_loss(loss)
return decoded
# Begin training
decode_list = []
for epoch in range(20):
    for features, _ in train_ds:
        decoded = model_train(features)

    template = 'Epoch {}, loss: {}'
    print(template.format(epoch+1, train_loss.result()))
    train_loss.reset_states()  # reset so each epoch reports its own average loss
%matplotlib inline
import matplotlib.pyplot as plt
##########################
### VISUALIZATION
##########################
n_images = 15
image_width = 28
fig, axes = plt.subplots(nrows=2, ncols=n_images,
sharex=True, sharey=True, figsize=(20, 2.5))
orig_images = features[:n_images].numpy()
decoded_images = decoded[:n_images].numpy()
for i in range(n_images):
for ax, img in zip(axes, [orig_images, decoded_images]):
curr_img = img[i]
ax[i].imshow(curr_img.reshape((image_width, image_width)), cmap='binary')
###Output
_____no_output_____ |
Array/0924/905. Sort Array By Parity.ipynb | ###Markdown
Description: Given a non-negative integer array A, return an array containing all the even elements of A, followed by all the odd elements of A. You may return any answer array that satisfies this condition.

Example 1:
Input: [3,1,2,4]
Output: [2,4,3,1]
The outputs [4,2,3,1], [2,4,1,3], and [4,2,1,3] would also be accepted.

Note:
1. 1 <= A.length <= 5000
2. 0 <= A[i] <= 5000
###Code
from typing import List  # needed for the List type hint outside the LeetCode environment

class Solution:
def sortArrayByParity(self, A: List[int]) -> List[int]:
i, j = 0, len(A) - 1
while i < j:
            # If A[i] is odd and A[j] is even, swap them
if A[i] % 2 == 1 and A[j] % 2 == 0:
A[i], A[j] = A[j], A[i]
if A[i] % 2 == 0:
i += 1
if A[j] % 2 == 1:
j -= 1
return A
class Solution:
def sortArrayByParity(self, A: List[int]) -> List[int]:
even_num = []
odd_num = []
for a in A:
if a % 2 == 0:
even_num.append(a)
else:
odd_num.append(a)
even_num.extend(odd_num)
return even_num
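# Quick sanity check (added for illustration, not part of the original notebook):
# any ordering with all even elements before all odd elements is accepted.
sol = Solution()
print(sol.sortArrayByParity([3, 1, 2, 4]))  # e.g. [2, 4, 3, 1]
print(sol.sortArrayByParity([0]))           # [0]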
###Output
_____no_output_____ |
ja/5_realtime-streaming-execution-processing/5_realtime-streaming-execution-processing.ipynb | ###Markdown
5. Process smartphone data in real time and send it

In this tutorial, data acquired with **intdash Motion** is processed with a function using **intdash SDK for Python** (hereafter, intdash SDK) and uploaded to intdash. As a sample of this processing, a moving average is applied to the acceleration data. This case focuses mainly on the following:

- Retrieve data transmitted in real time from another edge
- Apply a processing function to the retrieved data
- Upload the processed data in real time

5.0 Preparation

Before running this scenario, you need to prepare the following:

- An edge for measurement
- The intdash Motion app
- Signal definitions associated with the general sensor data

Data used

In this scenario, the following data must be prepared on the server side in advance.

|Data item|Name used in this scenario|
|:---|:---|
|User account logged into Motion| user1 |
|Edge account for uploading data from the SDK|`sdk_edge`|
|Signal definitions (*)| `sp_ACCX`, `sp_ACCY`, `sp_ACCZ`|

(*) The same signal definitions as those used in SDK tutorial [3. Save smartphone data as CSV](../3_save-data-as-csv/3_save-data-as-csv.ipynb) are used.

Importing packages and creating the client

For the `url` given to `intdash.Client`, specify the value from your intdash server environment information; for `username` and `password`, specify the access information issued for the login user account.
###Code
import pandas as pd
import intdash
from intdash import timeutils
# Create client
client = intdash.Client(
url = "https://example.intdash.jp",
username = "user1",
password="password_here"
)
###Output
_____no_output_____
###Markdown
Confirm that the signal definitions are registered

Confirm that the signal definitions used in this scenario are registered. If they are not registered, see the next step, **"(Option) Register the signal definitions"**.
###Code
signals = client.signals.list(label='sp')
for s in signals:
print(s.label, end=', ')
###Output
sp_ACCX, sp_ACCY, sp_ACCZ,
###Markdown
(Option) Register the signal definitions

```
Warning: If the target signal definitions are already registered on the server side, skip this step.
```

The same signal definitions as those used in SDK tutorial [3. Save smartphone data as CSV](../3_save-data-as-csv/3_save-data-as-csv.ipynb) are used. To register the signal definitions, run the following file: [0_create-signal-general-sensor.ipynb](../0_create-signal-general-sensor/0_create-signal-general-sensor.ipynb). This time, conversion definitions are registered only for "acceleration" among the "general sensor" types.

5.1 Get the edges to use
###Code
edge1 = client.edges.me()
edge2 = client.edges.list(name='sdk_edge')[0]
edge1.name, edge2.name
###Output
_____no_output_____
###Markdown
5.2 Create the Queue

Everything from receiving data from the server to returning it to the server is handled through a Queue.
###Code
import queue
q = queue.Queue(maxsize=5)
###Output
_____no_output_____
###Markdown
5.3 Prepare to receive data (Downstream)

Define the processing that receives, via the server, the time-series data that another edge is sending.

5.3.1 Create the request

For `src_edge_uuid`, specify the edge that sends the data. In this example, it is `edge1`, the edge running **intdash Motion**. For `data_id` of `intdash.DataFilter`, specify the `label` name of the signal definition.
###Code
d_specs = [
intdash.DownstreamSpec(
src_edge_uuid = edge1.uuid,
filters = [
intdash.DataFilter(data_type=intdash.DataType.float.value, data_id='sp_ACCX',channel=1), # Acceleration
intdash.DataFilter(data_type=intdash.DataType.float.value, data_id='sp_ACCY',channel=1), # Acceleration
intdash.DataFilter(data_type=intdash.DataType.float.value, data_id='sp_ACCZ',channel=1), # Acceleration
],
),
]
###Output
_____no_output_____
###Markdown
5.3.2 Define the processing for received data

In this scenario, the receiving side takes the received time-series data and puts it into the Queue as is.
###Code
# Put the received time-series data to the Queue.
def callback(unit):
try:
q.put_nowait(unit)
except queue.Full:
pass
###Output
_____no_output_____
###Markdown
5.4 Prepare to upload data (Upstream)

Define the processing that transforms the received data and sends it to the server as new time-series data.

5.4.1 Create the request

Specify the UUID of the edge used for uploading.
###Code
u_specs = [
intdash.UpstreamSpec(
src_edge_uuid = edge2.uuid,
),
]
###Output
_____no_output_____
###Markdown
5.4.2 Define the transformation

In this scenario, define the processing that calculates a moving average and returns it to the server.
###Code
import numpy as np
# The function to calculate moving average.
def calc_ave(score, array, ave_num):
array.append(score)
if len(array) > ave_num:
array.popleft()
return np.sum(array)/ len(array)
###Output
_____no_output_____
###Markdown
5.4.3 Define the processing on the sending side

This time, the following processing is defined:

- Get data from the Queue
- Branch the processing of the retrieved time-series data by data type
- Feed the data into the transformation
- Send the newly created data (return an intdash.Unit with yield)
###Code
AVE_NUM = 5
import struct
from collections import deque
acc_x_dq = deque([])
acc_y_dq = deque([])
acc_z_dq = deque([])
# Calculate moving average of the received time-series data, convert it to Unit and upload.
def upload_func():
while True:
try:
unit = q.get_nowait()
# Skip basetime.
if unit.data.data_type.value == intdash.DataType.basetime.value:
yield unit
continue
if unit.data.data_type.value != intdash.DataType.float.value:
yield unit
continue
# Get intdash.intdash.data.GeneralSensor.
sensor_data = unit.data
if unit.data.data_id == 'sp_ACCX':
acc_x = unit.data.value
ave_acc_x = calc_ave(acc_x, acc_x_dq, AVE_NUM)
if ave_acc_x is None:
continue
yield intdash.Unit(
elapsed_time = unit.elapsed_time,
channel = 1,
data = intdash.data.Float(data_id='sp_ACCX', value=ave_acc_x ),
)
continue
if unit.data.data_id == 'sp_ACCY':
acc_y = unit.data.value
ave_acc_y = calc_ave(acc_y, acc_y_dq, AVE_NUM)
if ave_acc_y is None:
continue
yield intdash.Unit(
elapsed_time = unit.elapsed_time,
channel = 1,
data = intdash.data.Float(data_id='sp_ACCY', value=ave_acc_y ),
)
continue
if unit.data.data_id == 'sp_ACCZ':
acc_z = unit.data.value
ave_acc_z = calc_ave(acc_z, acc_z_dq, AVE_NUM)
if ave_acc_z is None:
continue
yield intdash.Unit(
elapsed_time = unit.elapsed_time,
channel = 1,
data = intdash.data.Float(data_id='sp_ACCZ', value=ave_acc_z ),
)
continue
except queue.Empty:
yield
###Output
_____no_output_____
###Markdown
5.5 Start the stream processing
###Code
wsconn = client.connect_websocket()
###Output
_____no_output_____
###Markdown
5.5.1 Start the Upstream
###Code
wsconn.open_upstreams(
specs = u_specs,
iterators = [upload_func()],
)
###Output
_____no_output_____
###Markdown
5.5.2 Start the Downstream
###Code
wsconn.open_downstreams(
specs = d_specs,
callbacks = [callback],
)
###Output
_____no_output_____
###Markdown
5.6 Check the data in Visual M2M Data Visualizer

With **Visual M2M Data Visualizer** (hereafter, Data Visualizer), you can confirm that data is being transferred in real time. If you import the "Screen file (.scrn)" and the "DAT file (.dat)" saved in the same directory as this notebook into Data Visualizer, you can view the data as shown below. See **"2.2 Configure Data Visualizer" in intdash tutorial 2**. In the screen below, the `Acceleration raw` panel shows the data before conversion (the data sent by the Motion app), and the `Acceleration Converted` panel shows the moving average computed with `intdash-py`.

5.7 Disconnect the real-time processing

When you want to finish, always run the following to disconnect.
###Code
wsconn.close()
###Output
_____no_output_____ |
4 jigsaw/cnn-in-keras-on-folds.ipynb | ###Markdown
General informationThis is a basic kernel with CNN. In this kernel I train a CNN model on folds and calculate the competition metric (not simple auc). Content* [1 Loading and processing data](load)* [2 Validation functions](valid)* [3 Model](model)* [4 Training function](train)* [5 Train and predict](run)
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from nltk.tokenize import TweetTokenizer
import datetime
import lightgbm as lgb
from scipy import stats
from scipy.sparse import hstack, csr_matrix
from sklearn.model_selection import train_test_split, cross_val_score, StratifiedKFold
from wordcloud import WordCloud
from collections import Counter
from nltk.corpus import stopwords
from nltk.util import ngrams
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn import metrics
pd.set_option('max_colwidth',400)
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Dense, Input, LSTM, Embedding, Dropout, Activation, Conv1D, GRU, CuDNNGRU, CuDNNLSTM, BatchNormalization
from keras.layers import Bidirectional, GlobalMaxPool1D, MaxPooling1D, Add, Flatten
from keras.layers import GlobalAveragePooling1D, GlobalMaxPooling1D, concatenate, SpatialDropout1D
from keras.models import Model, load_model
from keras import initializers, regularizers, constraints, optimizers, layers, callbacks
from keras import backend as K
from keras.engine import InputSpec, Layer
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint, TensorBoard, Callback, EarlyStopping
import time
import os
print(os.listdir("../input"))
###Output
Using TensorFlow backend.
###Markdown
Loading dataI'll load preprocessed data from my dataset
###Code
train = pd.read_csv('../input/jigsaw-public-files/train.csv')
test = pd.read_csv('../input/jigsaw-public-files/test.csv')
# after processing some of the texts are emply
train['comment_text'] = train['comment_text'].fillna('')
test['comment_text'] = test['comment_text'].fillna('')
sub = pd.read_csv('../input/jigsaw-unintended-bias-in-toxicity-classification/sample_submission.csv')
full_text = list(train['comment_text'].values) + list(test['comment_text'].values)
max_features = 300000
tk = Tokenizer(lower = True, filters='', num_words=max_features)
tk.fit_on_texts(full_text)
embedding_path1 = "../input/fasttext-crawl-300d-2m/crawl-300d-2M.vec"
embedding_path2 = "../input/glove840b300dtxt/glove.840B.300d.txt"
embed_size = 300
def get_coefs(word,*arr):
return word, np.asarray(arr, dtype='float32')
def build_matrix(embedding_path, tokenizer):
embedding_index = dict(get_coefs(*o.strip().split(" ")) for o in open(embedding_path))
word_index = tk.word_index
nb_words = min(max_features, len(word_index))
embedding_matrix = np.zeros((nb_words + 1, embed_size))
for word, i in word_index.items():
if i >= max_features:
continue
embedding_vector = embedding_index.get(word)
if embedding_vector is not None:
embedding_matrix[i] = embedding_vector
return embedding_matrix
# combining embeddings from this kernel: https://www.kaggle.com/tanreinama/simple-lstm-using-identity-parameters-solution
embedding_matrix = np.concatenate([build_matrix(embedding_path1, tk), build_matrix(embedding_path2, tk)], axis=-1)
y = np.where(train['target'] >= 0.5, True, False) * 1
identity_columns = ['male', 'female', 'homosexual_gay_or_lesbian', 'christian', 'jewish', 'muslim', 'black', 'white', 'psychiatric_or_mental_illness']
for col in identity_columns + ['target']:
train[col] = np.where(train[col] >= 0.5, True, False)
n_fold = 5
folds = StratifiedKFold(n_splits=n_fold, shuffle=True, random_state=11)
###Output
_____no_output_____
###Markdown
Validation functions
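Before the helpers, it is worth writing down the metric they build up to. As implemented in `get_final_metric` below (with $p=-5$ and $w_0=0.25$), the competition score combines the overall AUC with a generalized power mean $M_p$ of each of the three per-identity bias AUCs (subgroup AUC, BPSN AUC, BNSP AUC):

\begin{equation}
M_p(m_s) = \left(\frac{1}{N}\sum_{s=1}^{N} m_s^{p}\right)^{1/p}, \qquad
\text{score} = w_0 \, \mathrm{AUC}_{\text{overall}} + (1 - w_0)\,\frac{1}{3}\sum_{a=1}^{3} M_p(m_{s,a})
\end{equation}

where $m_{s,a}$ is bias metric $a$ evaluated on identity subgroup $s$ and $N$ is the number of identity subgroups.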
###Code
SUBGROUP_AUC = 'subgroup_auc'
BPSN_AUC = 'bpsn_auc' # stands for background positive, subgroup negative
BNSP_AUC = 'bnsp_auc' # stands for background negative, subgroup positive
def compute_auc(y_true, y_pred):
try:
return metrics.roc_auc_score(y_true, y_pred)
except ValueError:
return np.nan
def compute_subgroup_auc(df, subgroup, label, oof_name):
subgroup_examples = df[df[subgroup]]
return compute_auc(subgroup_examples[label], subgroup_examples[oof_name])
def compute_bpsn_auc(df, subgroup, label, oof_name):
"""Computes the AUC of the within-subgroup negative examples and the background positive examples."""
subgroup_negative_examples = df[df[subgroup] & ~df[label]]
non_subgroup_positive_examples = df[~df[subgroup] & df[label]]
examples = subgroup_negative_examples.append(non_subgroup_positive_examples)
return compute_auc(examples[label], examples[oof_name])
def compute_bnsp_auc(df, subgroup, label, oof_name):
"""Computes the AUC of the within-subgroup positive examples and the background negative examples."""
subgroup_positive_examples = df[df[subgroup] & df[label]]
non_subgroup_negative_examples = df[~df[subgroup] & ~df[label]]
examples = subgroup_positive_examples.append(non_subgroup_negative_examples)
return compute_auc(examples[label], examples[oof_name])
def compute_bias_metrics_for_model(dataset,
subgroups,
model,
label_col,
include_asegs=False):
"""Computes per-subgroup metrics for all subgroups and one model."""
records = []
for subgroup in subgroups:
record = {
'subgroup': subgroup,
'subgroup_size': len(dataset[dataset[subgroup]])
}
record[SUBGROUP_AUC] = compute_subgroup_auc(dataset, subgroup, label_col, model)
record[BPSN_AUC] = compute_bpsn_auc(dataset, subgroup, label_col, model)
record[BNSP_AUC] = compute_bnsp_auc(dataset, subgroup, label_col, model)
records.append(record)
return pd.DataFrame(records).sort_values('subgroup_auc', ascending=True)
def calculate_overall_auc(df, oof_name):
true_labels = df['target']
predicted_labels = df[oof_name]
return metrics.roc_auc_score(true_labels, predicted_labels)
def power_mean(series, p):
total = sum(np.power(series, p))
return np.power(total / len(series), 1 / p)
def get_final_metric(bias_df, overall_auc, POWER=-5, OVERALL_MODEL_WEIGHT=0.25):
bias_score = np.average([
power_mean(bias_df[SUBGROUP_AUC], POWER),
power_mean(bias_df[BPSN_AUC], POWER),
power_mean(bias_df[BNSP_AUC], POWER)
])
return (OVERALL_MODEL_WEIGHT * overall_auc) + ((1 - OVERALL_MODEL_WEIGHT) * bias_score)
###Output
_____no_output_____
###Markdown
Model
###Code
# adding attention from this kernel: https://www.kaggle.com/christofhenkel/keras-baseline-lstm-attention-5-fold
class Attention(Layer):
def __init__(self, step_dim,
W_regularizer=None, b_regularizer=None,
W_constraint=None, b_constraint=None,
bias=True, **kwargs):
self.supports_masking = True
self.init = initializers.get('glorot_uniform')
self.W_regularizer = regularizers.get(W_regularizer)
self.b_regularizer = regularizers.get(b_regularizer)
self.W_constraint = constraints.get(W_constraint)
self.b_constraint = constraints.get(b_constraint)
self.bias = bias
self.step_dim = step_dim
self.features_dim = 0
super(Attention, self).__init__(**kwargs)
def build(self, input_shape):
assert len(input_shape) == 3
self.W = self.add_weight((input_shape[-1],),
initializer=self.init,
name='{}_W'.format(self.name),
regularizer=self.W_regularizer,
constraint=self.W_constraint)
self.features_dim = input_shape[-1]
if self.bias:
self.b = self.add_weight((input_shape[1],),
initializer='zero',
name='{}_b'.format(self.name),
regularizer=self.b_regularizer,
constraint=self.b_constraint)
else:
self.b = None
self.built = True
def compute_mask(self, input, input_mask=None):
return None
def call(self, x, mask=None):
features_dim = self.features_dim
step_dim = self.step_dim
eij = K.reshape(K.dot(K.reshape(x, (-1, features_dim)),
K.reshape(self.W, (features_dim, 1))), (-1, step_dim))
if self.bias:
eij += self.b
eij = K.tanh(eij)
a = K.exp(eij)
if mask is not None:
a *= K.cast(mask, K.floatx())
a /= K.cast(K.sum(a, axis=1, keepdims=True) + K.epsilon(), K.floatx())
a = K.expand_dims(a)
weighted_input = x * a
return K.sum(weighted_input, axis=1)
def compute_output_shape(self, input_shape):
return input_shape[0], self.features_dim
def build_model(X_train, y_train, X_valid, y_valid, max_len, max_features, embed_size, embedding_matrix, lr=0.0, lr_d=0.0, spatial_dr=0.0,
dense_units=128, conv_size=128, dr=0.2, patience=3, fold_id=1):
file_path = f"best_model_fold_{fold_id}.hdf5"
check_point = ModelCheckpoint(file_path, monitor="val_loss", verbose=1,save_best_only=True, mode="min")
early_stop = EarlyStopping(monitor="val_loss", mode="min", patience=patience)
inp = Input(shape = (max_len,))
x = Embedding(max_features + 1, embed_size * 2, weights=[embedding_matrix], trainable=False)(inp)
x1 = SpatialDropout1D(spatial_dr)(x)
att = Attention(max_len)(x1)
# from benchmark kernel
x = Conv1D(conv_size, 2, activation='relu', padding='same')(x1)
x = MaxPooling1D(5, padding='same')(x)
x = Conv1D(conv_size, 3, activation='relu', padding='same')(x)
x = MaxPooling1D(5, padding='same')(x)
x = Flatten()(x)
x = concatenate([x, att])
x = Dropout(dr)(Dense(dense_units, activation='relu') (x))
x = Dense(1, activation="sigmoid")(x)
model = Model(inputs=inp, outputs=x)
model.compile(loss="binary_crossentropy", optimizer=Adam(lr=lr, decay=lr_d), metrics=["accuracy"])
model.fit(X_train, y_train, batch_size=128, epochs=3, validation_data=(X_valid, y_valid),
verbose=2, callbacks=[early_stop, check_point])
return model
###Output
_____no_output_____
###Markdown
Training function
###Code
def train_model(X, X_test, y, tokenizer, max_len):
oof = np.zeros((len(X), 1))
prediction = np.zeros((len(X_test), 1))
scores = []
test_tokenized = tokenizer.texts_to_sequences(test['comment_text'])
X_test = pad_sequences(test_tokenized, maxlen = max_len)
for fold_n, (train_index, valid_index) in enumerate(folds.split(X, y)):
print('Fold', fold_n, 'started at', time.ctime())
X_train, X_valid = X.iloc[train_index], X.iloc[valid_index]
y_train, y_valid = y.iloc[train_index], y.iloc[valid_index]
valid_df = X_valid.copy()
train_tokenized = tokenizer.texts_to_sequences(X_train['comment_text'])
valid_tokenized = tokenizer.texts_to_sequences(X_valid['comment_text'])
X_train = pad_sequences(train_tokenized, maxlen = max_len)
X_valid = pad_sequences(valid_tokenized, maxlen = max_len)
model = build_model(X_train, y_train, X_valid, y_valid, max_len, max_features, embed_size, embedding_matrix,
lr = 1e-3, lr_d = 0, spatial_dr = 0.1, dense_units=128, conv_size=128, dr=0.1, patience=3, fold_id=fold_n)
pred_valid = model.predict(X_valid)
oof[valid_index] = pred_valid
valid_df[oof_name] = pred_valid
bias_metrics_df = compute_bias_metrics_for_model(valid_df, identity_columns, oof_name, 'target')
scores.append(get_final_metric(bias_metrics_df, calculate_overall_auc(valid_df, oof_name)))
prediction += model.predict(X_test, batch_size = 1024, verbose = 1)
prediction /= n_fold
# print('CV mean score: {0:.4f}, std: {1:.4f}.'.format(np.mean(scores), np.std(scores)))
return oof, prediction, scores
###Output
_____no_output_____
###Markdown
Train and predict
###Code
oof_name = 'predicted_target'
max_len = 250
oof, prediction, scores = train_model(X=train, X_test=test, y=train['target'], tokenizer=tk, max_len=max_len)
print('CV mean score: {0:.4f}, std: {1:.4f}.'.format(np.mean(scores), np.std(scores)))
plt.hist(prediction);
plt.hist(oof);
plt.title('Distribution of predictions vs oof predictions');
sub['prediction'] = prediction
sub.to_csv('submission.csv', index=False)
###Output
_____no_output_____ |
Lab_5.ipynb | ###Markdown
Lab 5**Joseph Livesey**
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats, signal
import pickle
plt.rcParams['figure.figsize'] = (15, 10)
###Output
_____no_output_____
###Markdown
In this lab, we will be looking at a training set of data from the Higgs boson experiment at the Large Hadron Collider (LHC). First, we load in our data. The data in `higgs` represent a signal—a possible detection of a Higgs boson by the LHC calorimeter. The data in `qcd` are the background distribution due to quantum chromodynamical effects that are prominent at low energy scales.
###Code
dicts = []
for pkl in ['higgs_100000_pt_250_500.pkl', 'qcd_100000_pt_250_500.pkl']:
file = open(pkl, 'rb')
data = pickle.load(file)
dicts.append(data)
higgs = dicts[0]
qcd = dicts[1]
higgs.keys(), qcd.keys()
###Output
_____no_output_____
###Markdown
The data we're working with are derived from the ATLAS detector at the LHC. In this experiment, proton-proton collisions are facilitated at the center of the detector. This collision results in a long decay chain that may contain the decay $H \to b\bar{b}$ (Higgs boson becomes a bottom quark and an anti-bottom quark). Such a decay chain is deemed a **Higgs jet**, distinguishing from other jets that come about as a result of a collision. Each collision produces "jets" of particles that progress through a chain of decays until they strike the ATLAS detector. In order to identify Higgs jets, it is imperative to understand the internal structures of jets.In this experiment, proton-proton collisions were facilitated with a **center-of-mass energy** $\sqrt{s} = 13 \text{ TeV}$, far exceeding previous LHC Higgs experiments that used $\sqrt{s} \simeq 7 \text{ TeV}$. The center-of-mass energy is in essence just the total energy that goes toward making new particles in the laboratory reference frame. The higher value of $\sqrt{s}$ extends the sensitivity of the experiment relative to its predecessors. The detector itself encompasses a large volume with the collision point roughly at the center, and envelops almost the entire solid angle around the collision.There are two triggers that enable the identification of relevant signals. The first-level trigger is in the hardware, selecting a subset of detections to keep the rate of data collection at about 100,000 detections per second. The second-level trigger is in the software, which identifies detections of interest from these and further reduces the rate to about 1,000 detections per second.Non-Higgs signals appear in the data due to "parton scattering" in the particle showers that result from $pp$-collisions. The decay $g \to b\bar{b}$ (initial state is a gluon instead of a Higgs boson) results from this, and our primary objective is to discriminate between jets that come about as a result of this **QCD background** and real Higgs jets. The simulated QCD background we will concern ourselves with is contained within the `qcd` dataset. The keys here mostly refer to elements of what is known as the **jet substructure**. Jets are produced by particle interactions, and the energies of these jets are what is detected by ATLAS. Jets are not simply particles that result from some decay flying until they strike the detector. They are complicated objects with an internal structure that can be characterized through the data collected by ATLAS. Jet substructure is a collection of observable quantities that are dependent on the energy flow within an individual jet.The first order of business in this lab is to understand what each of the keys in these datasets represent. So, let's go through them one by one.* `pt` refers to $p_T$, the **transverse momentum** of the particle in question. The transverse momentum is the component of the particle's momentum in the lab frame perpendicular to the beamline. This is important because, on average, incident particles have zero transverse momentum. So, if an interaction occurs in the detector, nonzero $p_T$ is an indicator of this.* `eta` refers to the parameter $\eta$, which is the **pseudorapidity** of the jet. 
The pseudorapidity can be calculated from $\eta = -\ln \tan (\theta/2)$, where $\theta$ is the polar angle (the direction $\theta = 0$ is orthogonal to the beam pipe, while $\theta = \pi/2$ corresponds to the axis of the beam pipe).* `phi` refers to the parameter $\phi$, which is the azimuthal angle of the site at which a jet is detected on the detector.* `mass`, as one would expect, refers to the mass of the jet.* `d2` refers to $D_2^{\beta=1}$, which can be calculated from **energy correlation functions**. This is useful as a discriminating variable to tell "two-pronged" from "three-pronged" decays. $E_{\text{CF}i}$ represents the $i$-th energy correlation function. The relationship is$$ D_2^{\beta=1} = E_{\text{CF}3} \left ( \frac{E_{\text{CF}1}}{E_{\text{CF}2}} \right )^3. $$* `ee2` is the value of the two-point energy correlation function, $E_\text{CF2}$, which identifies "two-pronged" jets.* `ee3` is the value of the three-point energy correlation function, $E_\text{CF3}$, which identifies "three-pronged" jets.* `angularity` is angularity, as the name would imply. The **jet angularity** is a sum over the 4-momenta of particles in a jet, weighted by their respective $p_T$.* `t1`, `t2`, and `t3` refer to a parameter called $\tau_n$. This is the $n$**-subjettiness** of the jet substructure. The $n$-subjettiness is a parameter related to the "jet shape", and is used as a discriminating variable to identify jets resulting from different decays.* `t21` and `t32` refer to $\tau_{ij}$, where $\tau_{ij} \equiv \tau_i/\tau_j$. This ratio is effective in identifying different particles from their decays. In particular, $\tau_{21}$ is a useful discriminating variable to identify "two-pronged" decays like $H \to b\bar{b}$, while $\tau_{32}$ is useful for identifying "three-pronged" decays like those of boosted top quarks.* `KtDeltaR` refers to the quantity $k_t \Delta R$. $R = \sqrt{\Delta\eta^2 + \Delta\phi^2}$ is the **radius parameter** of a jet, related to its spatial size on the detector. $k_t \Delta R$ is a measure of the difference in $R$ between two subjets of the jet with the highest overall value of $R$, using an algorithm called $k_t$. Now, we will attempt to determine which of these features best discriminate between the Higgs signal and the QCD background. To do this, we will construct a histogram of the signal and the background for each feature, as well as a histogram of the difference between the two. The mean of each of the latter type of histogram should inform us as to how powerfully we can differentiate between the signal and the background using each feature.
###Code
fig, ax = plt.subplots(14, 2, figsize=(15, 100))
n = 0
for key in higgs.keys():
subtraction = higgs[key] - qcd[key]
discrimination = np.mean(subtraction)
ax[n, 0].hist(higgs[key], bins=100, density=True, label='Higgs')
ax[n, 0].hist(qcd[key], bins=100, density=True, label='QCD')
ax[n, 1].hist(subtraction, bins=100, density=True, color='g', label=r'Higgs $-$ QCD')
ax[n, 1].axvline(discrimination, c='k', ls='--', label='avg difference')
ax[n, 0].set_title('Higgs signal and QCD background')
ax[n, 1].set_title('Difference between distributions')
for m in range(2):
ax[n, m].set_xlabel(str(key))
ax[n, m].set_ylabel('Number')
n += 1
ax[0, 0].legend(loc=0)
ax[0, 1].legend(loc=0);
###Output
_____no_output_____ |
04_user_guide/32_Nullable boolean.ipynb | ###Markdown
Nullable Boolean data typeNew in version 1.0.0. Indexing with NA valuespandas allows indexing with `NA` values in a boolean array, which are treated as `False`.Changed in version 1.0.2.
###Code
s = pd.Series([1, 2, 3])
mask = pd.array([True, False, pd.NA], dtype="boolean")
s[mask]
###Output
_____no_output_____
###Markdown
If you would prefer to keep the `NA` values you can manually fill them with `fillna(True)`.
###Code
s[mask.fillna(True)]
###Output
_____no_output_____
###Markdown
Kleene logical operations`arrays.BooleanArray` implements [Kleene Logic](https://en.wikipedia.org/wiki/Three-valued_logicKleene_and_Priest_logics) (sometimes called three-value logic) forlogical operations like `&` (and), `|` (or) and `^` (exclusive-or).This table demonstrates the results for every combination. These operations are symmetrical,so flipping the left- and right-hand side makes no difference in the result.````````````````````````````````````````````````````````````````````````|Expression|Result||:---------------:|:-------:||True & True|True||True & False|False||True & NA|NA||False & False|False||False & NA|False||NA & NA|NA||True | True|True||True | False|True||True | NA|True||False | False|False||False | NA|NA||NA | NA|NA||True ^ True|False||True ^ False|True||True ^ NA|NA||False ^ False|False||False ^ NA|NA||NA ^ NA|NA|When an `NA` is present in an operation, the output value is `NA` only ifthe result cannot be determined solely based on the other input. For example,`True | NA` is `True`, because both `True | True` and `True | False`are `True`. In that case, we don’t actually need to consider the valueof the `NA`.On the other hand, `True & NA` is `NA`. The result depends on whetherthe `NA` really is `True` or `False`, since `True & True` is `True`,but `True & False` is `False`, so we can’t determine the output.This differs from how `np.nan` behaves in logical operations. pandas treated`np.nan` is *always false in the output*.In `or`
###Code
pd.Series([True, False, np.nan], dtype="object") | True
pd.Series([True, False, np.nan], dtype="boolean") | True
###Output
_____no_output_____
###Markdown
In `and`
###Code
pd.Series([True, False, np.nan], dtype="object") & True
pd.Series([True, False, np.nan], dtype="boolean") & True
###Output
_____no_output_____ |
lab-taxi/Taxi-v2.ipynb | ###Markdown
Results

Each run trained for 20000 episodes (exploration_rate annealed to 5e-05):

| alpha | gamma | best average reward |
|:-----:|:-----:|:-------------------:|
| 0.10 | 0.5 | 8.76 |
| 0.10 | 0.9 | 8.79 |
| 0.10 | 1.0 | 8.92 |
| 0.05 | 0.5 | 8.48 |
| 0.05 | 0.9 | 8.71 |
| 0.05 | 1.0 | 8.92 |
| 0.01 | 0.5 | 0.51 |
| 0.01 | 0.9 | 8.21 |
| 0.01 | 1.0 | 8.57 |
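The table above comes from a plain sweep over these values rather than from the Bayesian optimizer used below. A minimal sketch of such a sweep, reusing `Agent(alpha, gamma)` and `interact(env, agent, num_episodes)` exactly as they are used in the next cells (with the same 20000-episode budget), would look like this:

```python
from agent import Agent
from monitor import interact
import gym

env = gym.make('Taxi-v3')
num_episodes = 20000

for alpha in (0.1, 0.05, 0.01):
    for gamma in (0.5, 0.9, 1.0):
        agent = Agent(alpha=alpha, gamma=gamma)
        _, best_avg_reward = interact(env, agent, num_episodes)
        print(f"alpha ({alpha}) || gamma ({gamma}) || best_avg_reward ({best_avg_reward:.2f})")
```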
###Code
from agent import Agent
from monitor import interact
from bayes_opt import BayesianOptimization
import gym
import numpy as np
env = gym.make('Taxi-v3')
num_episodes = 20000  # same episode budget as the runs listed above
agent = Agent()
def taxi_best_params(alpha, gamma):
agent = Agent(alpha=alpha, gamma=gamma)
avg_rewards, best_avg_reward = interact(env, agent, num_episodes)
return best_avg_reward
optimizer = BayesianOptimization(
    taxi_best_params,
    {
        # pbounds must be (lower, upper) ranges for each parameter
        'alpha': (0.01, 0.1),
        'gamma': (0.5, 1.0)
    }
)
optimizer.maximize(3, 2)
print("The optimizer params : {0}".format(optimizer.max['params']))
print("The optimaizer result: {0}".format(optimizer.max['target']))
###Output
| iter | target | alpha | gamma |
-------------------------------------------------
|
stanford_open_policing_dataset_analysis_rhode_island.ipynb | ###Markdown
This dataset was processed as an exercise for the PyCon 2018 class taught by Kevin Markham:
- Youtube: https://www.youtube.com/watch?v=0hsKLYfyQZc&t=1175s
- Dataset: https://openpolicing.stanford.edu/data/
###Code
import pandas as pd
import matplotlib.pyplot as plt
ri = pd.read_csv('data/stanford_open_policing_dataset/ri_statewide_2020_04_01.csv')
ri.head()
###Output
/usr/local/lib/python3.7/site-packages/IPython/core/interactiveshell.py:3058: DtypeWarning: Columns (6,17,30) have mixed types.Specify dtype option on import or set low_memory=False.
interactivity=interactivity, compiler=compiler, result=result)
###Markdown
Let's check the mixed-types warning and make sure we're getting the data correctly. The three columns specified are:

- department_id
- frisk_performed
- raw_SearchResultThree

We're using none of these three columns, so we will ignore this warning for now. Always make sure you understand these warnings, though, as they could cause issues later if you ignore them completely.
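If we did want to silence it instead, the warning message itself points at the fix; a sketch (the explicit dtype mapping is illustrative, not taken from the dataset's documentation):

```python
# Reading the whole file in one pass lets pandas infer a single dtype per column,
# which removes the mixed-type warning.
ri = pd.read_csv('data/stanford_open_policing_dataset/ri_statewide_2020_04_01.csv',
                 low_memory=False)

# Alternatively, pin the flagged columns explicitly, e.g.
# pd.read_csv(path, dtype={'department_id': 'object',
#                          'frisk_performed': 'object',
#                          'raw_SearchResultThree': 'object'})
```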
###Code
ri.dtypes
ri.isnull().sum()
###Output
_____no_output_____
###Markdown
1. Dropping columns that have more than 20% null values

Let's see which columns will be dropped.
###Code
threshold = 0.8 * ri.shape[0]  # keep columns with at least 80% non-null values
num_nulls = ri.isnull().sum()
print(f'type(num_nulls): {type(num_nulls)}')
# Columns dropped by dropna(thresh=threshold) are those with more than 20% nulls
num_nulls[num_nulls.values > 0.2 * ri.shape[0]].index.values
def get_shorter_df(df):
return df.dropna(axis='columns', thresh=threshold).copy()
ri_short = get_shorter_df(ri)
ri_short
###Output
_____no_output_____
###Markdown
2. Do men or women speed more often?

To do this, we can:
- filter using reason_for_stop == Speeding and citation_issued == True
- do a groupby of the subject_sex column
- sum up the number of each type of driver (male/female)
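A sketch of that filter-then-group approach (this assumes speeding stops are labelled `'Speeding'` in `reason_for_stop` and that `citation_issued` survived the null-column drop; the exact labels may differ in this extract):

```python
# Keep only speeding stops that ended in a citation, then count drivers by gender.
speeding = ri_short[(ri_short.reason_for_stop == 'Speeding') &
                    (ri_short.citation_issued == True)]
speeding.groupby('subject_sex').size()

# Or as shares rather than raw counts:
speeding.subject_sex.value_counts(normalize=True)
```

The cell below takes a simpler route and just looks at the distribution of stop reasons within each gender.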
###Code
ri_short.groupby('subject_sex').reason_for_stop.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
The problem here is that we don't know how many miles women drove vs. men, so we cannot conclude from this data alone whether men or women are safer drivers.

3. Does gender affect who gets searched during a stop?
###Code
ri_short.groupby('subject_sex').search_conducted.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We can also do this using the following code. This works because the mean of a boolean column is just the average of True=1 and False=0, so it gives the same number as above. It is not explicit, though, so a reader of your code may not understand what is going on - avoid it.
###Code
ri_short.groupby('subject_sex').search_conducted.mean()
###Output
_____no_output_____
###Markdown
All we've looked at is gender. We don't know why someone was stopped. If you interview police officers, maybe they will tell you that registration violations require a search, and women may have fewer registration violations than men. So the type of violation is a confounding variable. Let's see if this is related to the type of violation.
###Code
ri_short.groupby(['reason_for_stop', 'subject_sex']) \
.search_conducted.mean() \
.unstack() # Unstack allows us to move the male/female index
# into columns. This makes it easier to read the table.
###Output
_____no_output_____
###Markdown
You can see that men are searched more often than women for every reason for a stop. This is just a data point; it does not say that they are searched because they are male. It is tough to say any relationship is causal, so we will not go there.

4. Why is search_type missing so often?

Why is reason_for_search missing so many times? Maybe because a search was not conducted that many times. Let's check.
###Code
ri_short.search_conducted.value_counts()
###Output
_____no_output_____
###Markdown
This is the same as the number of nulls in the reason_for_search column.

5. During a search, how often is the driver frisked?
###Code
ri_short[ri_short.search_conducted == True].frisk_performed.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
You can also do this using the mean() function
###Code
ri_short[ri_short.search_conducted == True].frisk_performed.mean()
'During a search, the driver is frisked {:4.2f}% of the time' \
.format(ri_short[ri_short.search_conducted == True].frisk_performed.mean() * 100.)
###Output
_____no_output_____
###Markdown
6. Which year had the least number of stops?
###Code
ri_non_null_dates = ri_short.dropna(axis=0, how='any', subset=['date']).copy()
years = ri_non_null_dates \
.date\
.str \
.extract(r'(\d*)\-\d*\-\d*') \
.astype('int')
ri_non_null_dates.insert(loc=1, column='year', value=years)
ri_non_null_dates.head()
ri_non_null_dates.groupby('year').year.value_counts()
###Output
_____no_output_____
###Markdown
The year 2005 had the least number of stops. An easier way of doing this:
###Code
ri_short.dropna(axis=0, how='any', subset=['date']) \
.date \
.str \
.slice(0, 4) \
.value_counts(ascending=True)
###Output
_____no_output_____
###Markdown
Another easy way
###Code
combined = ri_short.date.str.cat(ri_short.time, sep = ' ')
ri_short['stop_datetime'] = pd.to_datetime(combined) # You can just use pd.to_datetime(ri.date) here
ri_short.stop_datetime.dt.year.value_counts().sort_values().index[0]
pd.to_datetime(ri_short.date)
###Output
_____no_output_____
###Markdown
7. How does drug activity change by time of day?

We're missing the contraband_drugs column, since we dropped it for having too few non-null values. Let's add it back in.
###Code
ri_short['contraband_drugs'] = ri['contraband_drugs'].copy()
ri_short.contraband_drugs.isnull().sum()
ri_short.shape
ri_short = ri_short.dropna(axis=0, subset=['contraband_drugs'])
ri_short.shape
ri_short['contraband_drugs_int'] = ri_short['contraband_drugs'].astype(int)
###Output
/usr/local/lib/python3.7/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
"""Entry point for launching an IPython kernel.
###Markdown
Not sure why I'm getting this warning. I have already copied the contraband_drugs column from ri into ri_short, and I also called .copy() when creating ri_short from ri.
###Code
ri_short.loc[:, 'contraband_drugs_int'] = ri_short.loc[:, 'contraband_drugs'].astype(int)
###Output
/usr/local/lib/python3.7/site-packages/pandas/core/indexing.py:966: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self.obj[item] = s
###Markdown
From searching the web, it seems this warning sometimes fires even when there is no real problem. I'm not sure what to do here, so let's ignore it for now.
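One low-risk way to make it go away (a sketch of the usual pattern, not a change to the analysis) is to re-assign an explicit copy right after the row-dropping step, so that later column assignments are unambiguously made on a standalone frame:

```python
# Re-assigning an explicit copy after dropna() makes ri_short a standalone frame,
# so adding a column afterwards no longer looks like chained assignment to pandas.
ri_short = ri_short.dropna(axis=0, subset=['contraband_drugs']).copy()
ri_short['contraband_drugs_int'] = ri_short['contraband_drugs'].astype(int)
```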
###Code
ri_short.head()
ri_short.groupby(ri_short.stop_datetime.dt.hour) \
.contraband_drugs_int.count().plot();
###Output
_____no_output_____
###Markdown
8. Do most stops occur at night?
###Code
ri_short = get_shorter_df(ri) # Redo this since we removed lots of nulls in 7. above
ri_short['stop_time'] = pd.to_datetime(ri_short.time)
ri_short.groupby(ri_short.stop_time.dt.hour) \
.type.count().plot();
###Output
_____no_output_____
###Markdown
The largest number of stops occurs during the day (around 10am), although a sizeable number of stops still occur at night. Keep in mind that far more motorists travel during the day than at night, so the fraction of motorists who get stopped is probably higher at night than during the day. We cannot know this for sure without data on how many motorists are travelling during the day versus at night.
###Code
ri_short.groupby(ri_short.stop_time.dt.hour) \
.type.count().hist();
ri_short.groupby(ri_short.stop_time.dt.hour) \
.type.count()
###Output
_____no_output_____ |
notebooks/classifiers_playground/piecewise_linear_classifier/classifier_1D__piecewise_linear_unconverging.ipynb | ###Markdown
TODOs (from 09.06.2020)

1. Strip away the non-useful functions
2. Document the remaining functions
3. Move the remaining functions to modules
4. Test the modules
5. Clean up this NB

Introduction: movement analysis

From a sequence of signaling events, _e.g._ GPS measurements, determine locations where the user remains for a significant duration of time, called "stays". For each of these, there should be a beginning and an end, as well as a location. Generally, this is meant for movement on the surface of the earth, but for present purposes it is easiest to illustrate in one spatial dimension ("1D"); all of the problems and strategies can be generalized to 2D as needed.

**Note** the signaling events for a given user form a set $\mathcal{E} := \{e_i = (\mathbf{x}_i, t_i), i=[0,N-1] \; | \; t_{i+1}>t_i\}$
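To make the target of the analysis concrete, a stay can be carried around as a small record with a beginning, an end, and a location; this is only a sketch of the bookkeeping (the names are illustrative, not the ones used by the `synthetic_data` helpers imported below):

```python
from dataclasses import dataclass

@dataclass
class Stay:
    t_start: float  # beginning of the stay
    t_end: float    # end of the stay
    x: float        # 1D location of the stay

    @property
    def duration(self) -> float:
        return self.t_end - self.t_start

# e.g. a user parked at x = 2.5 between t = 3 and t = 7
example = Stay(t_start=3.0, t_end=7.0, x=2.5)
print(example.duration)  # 4.0
```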
###Code
import random
colors = [plt.cm.Spectral(each)
for each in np.linspace(0, 1, 20)]
random.shuffle(colors)
from matplotlib.collections import PatchCollection
from matplotlib.patches import Rectangle
segs_plot_kwargs = {'linestyle':'--', 'marker':'o', 'color':'k', 'linewidth':4.0, 'markerfacecolor':'w', 'markersize':6.0, 'markeredgewidth':2.0}
from matplotlib.ticker import MultipleLocator, FormatStrFormatter, AutoMinorLocator
eps = 0.25
###Output
_____no_output_____
###Markdown
Testing the box method

Once the box (whose width is given by the spatial tolerance) is positioned in a good way (_i.e._ at the centroid), extending the box forwards or backwards in time makes no change to the _score_ of the box. Here, the score could be something like the number of points or the std/MSE; whatever it is, it should saturate at some point so that extending the box makes no difference, meaning that something converges, which provides a stopping criterion.
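A minimal sketch of that idea (illustrative only: the point count used as the score here is just one of the options mentioned above, and the commented call simply reuses the notebook's `time_arr`, `noise_arr`, and `eps` once they are defined below):

```python
import numpy as np

def box_score(time_arr, x_arr, t_lo, t_hi, x_center, eps):
    """Count events inside a time window that lie within eps of a candidate location."""
    in_time = (time_arr >= t_lo) & (time_arr <= t_hi)
    in_space = np.abs(x_arr - x_center) <= eps
    return int(np.sum(in_time & in_space))

# Extending the window past the true stay should stop changing the score:
# scores = [box_score(time_arr, noise_arr, 3.0, 3.0 + w, x_center=noise_arr[0], eps=eps)
#           for w in np.arange(0.5, 6.0, 0.5)]
# A plateau in `scores` is the saturation / stopping criterion described above.
```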
###Code
from synthetic_data.trajectory import get_stay
from synthetic_data.trajectory import get_journey_path, get_segments
from synthetic_data.masking import get_mask_with_duplicates
from synthetic_data.trajectory import get_stay_segs, get_adjusted_stays, get_stay_indices
from synthetic_data.noise import get_noisy_segs, get_noisy_path, get_noise_arr
from synthetic_data.noise import get_noisy_segs, get_noisy_path, get_noise_arr
from synthetic_data.trajectory_class import get_rand_traj, get_rand_stays
configs = {
'threshold':0.5,
'noise_min':0.02,
'noise_max':0.15
}
time_arr, raw_arr, noise_arr, segments = get_rand_traj(configs)
new_stays = get_adjusted_stays(segments, time_arr)
new_t_segs, new_x_segs = get_stay_segs(new_stays)
from numpy.linalg import lstsq
import pickle
from datetime import datetime
ramp = lambda u: np.maximum( u, 0 )
step = lambda u: ( u > 0 ).astype(float)
stays_tag = int((new_x_segs.size)/3)
date_tag = datetime.today().strftime('%Y%m%d')
notes = 'unconverging_example'
trajectory_tag = f"{date_tag}_trajectory_{stays_tag}stays__{notes}"
trajectory = {}
trajectory['segments'] = segments
trajectory['time_arr'] = time_arr
trajectory['raw_locs_arr'] = raw_arr
trajectory['nse_locs_arr'] = noise_arr
#pickle.dump( trajectory, open( trajectory_tag, "wb" ) )
#trajectory = pickle.load( open( "20200625_trajectory_8stays__adjusted.pkl", "rb" ) )
plt.figure(figsize=(20,10))
#plt.plot(t_segs, x_segs, ':', marker='|', color='grey', linewidth=2.0, markerfacecolor='w', markersize=30.0, markeredgewidth=1.0, dashes=[0.5,0.5], label='raw stays')
plt.plot(new_t_segs, new_x_segs, **segs_plot_kwargs, label='adjusted raw stays')
plt.plot(time_arr, raw_arr, ':', label='raw journey')
plt.plot(time_arr, noise_arr, '.-', label='noisy journey', alpha=0.25)
plt.legend();
plt.xlabel(r'time, $t$ [arb.]')
plt.ylabel(r'position, $x$ [arb.]')
ymin = noise_arr.min()-1*eps
ymax = noise_arr.max()+1*eps
#plt.ylim(ymin, ymax)
ax = plt.gca()
ax.xaxis.set_major_locator(MultipleLocator(1))
ax.xaxis.set_minor_locator(MultipleLocator(0.5))
plt.xlim(-0.05, 24.05)
#plt.xlim(-0.1, 19.1
#plt.xlim(15.1, 19.1)
plt.title('Trajectory', fontsize=36)
plt.grid(visible=True);
from numpy.linalg import lstsq
ramp = lambda u: np.maximum( u, 0 )
step = lambda u: ( u > 0 ).astype(float)
def SegmentedLinearReg( X, Y, breakpoints ):
# hyperparam
nIterationMax = 20
# Sorting the breaks
breakpoints = np.sort( np.array(breakpoints) )
# XDiffs
dt = np.min( np.diff(X) )
ones = np.ones_like(X)
# loop through the whole data set
for i in range( nIterationMax ):
# Linear regression: solve A*p = Y
Rk = [ramp( X - xk ) for xk in breakpoints ]
Sk = [step( X - xk ) for xk in breakpoints ]
A = np.array([ ones, X ] + Rk + Sk )
p = lstsq(A.transpose(), Y, rcond=None)[0]
# Parameters identification:
a, b = p[0:2]
ck = p[ 2:2+len(breakpoints) ]
dk = p[ 2+len(breakpoints): ]
# Estimation of the next break-points:
newBreakpoints = breakpoints - dk/ck
# Stop condition
print(newBreakpoints.size,breakpoints.size)
if np.max(np.abs(newBreakpoints - breakpoints)) < dt/5:
break
breakpoints = newBreakpoints
# MJS: included
breakpoints = breakpoints[(breakpoints > X.min()) & (breakpoints < X.max())]
breakpoints = np.sort( np.array(breakpoints) )
else:
print( 'maximum iteration reached' )
# Compute the final segmented fit:
Xsolution = np.insert( np.append( breakpoints, max(X) ), 0, min(X) )
ones = np.ones_like(Xsolution)
Rk = [ c*ramp( Xsolution - x0 ) for x0, c in zip(breakpoints, ck) ]
Ysolution = a*ones + b*Xsolution + np.sum( Rk, axis=0 )
return Xsolution, Ysolution
X = np.linspace( 0, 10, 27 )
Y = 0.2*X - 0.3* ramp(X-2) + 0.3*ramp(X-6) + 0.05*np.random.randn(len(X))
plt.plot( X, Y, 'ok' );
initialBreakpoints = [1,3,5,7,9]
plt.plot( *SegmentedLinearReg( X, Y, initialBreakpoints ), '-r' );
plt.xlabel('X'); plt.ylabel('Y');
X = np.linspace( 0, 10, 47 )
Y = 0.2*X - 0.3* ramp(X-2) + 0.3*ramp(X-6) + 0.05*np.random.randn(len(X))
plt.plot( X, Y, 'ok' );
initialBreakpoints = [1,3,5,7,9]
# hyperparam
nIterationMax = 20
# Sorting the breaks
breakpoints = np.sort( np.array(initialBreakpoints) )
# XDiffs
dt = np.min( np.diff(X) )
ones = np.ones_like(X)
# loop through the whole data set
for i in range( nIterationMax ):
# Linear regression: solve A*p = Y
Rk = [ramp( X - xk ) for xk in breakpoints ]
Sk = [step( X - xk ) for xk in breakpoints ]
A = np.array([ ones, X ] + Rk + Sk )
p = lstsq(A.transpose(), Y, rcond=None)[0]
# Parameters identification:
a, b = p[0:2]
ck = p[ 2:2+len(breakpoints) ]
dk = p[ 2+len(breakpoints): ]
# Estimation of the next break-points:
newBreakpoints = breakpoints - dk/ck
# Stop condition
print(newBreakpoints.size,breakpoints.size)
    print('largest proposed breakpoint shift:', np.max(np.abs(dk/ck)))  # debug: how far the breakpoints want to move this iteration
if np.max(np.abs(newBreakpoints - breakpoints)) < dt/5:
print( 'stop criterion reached' )
break
breakpoints = newBreakpoints
# MJS: included
breakpoints = breakpoints[(breakpoints > X.min()) & (breakpoints < X.max())]
breakpoints = np.sort( np.array(breakpoints) )
# Compute the final segmented fit:
Xsolution = np.insert( np.append( breakpoints, max(X) ), 0, min(X) )
ones_ = np.ones_like(Xsolution)
Rk_ = [ c*ramp( Xsolution - x0 ) for x0, c in zip(breakpoints, ck) ]
Ysolution = a*ones_ + b*Xsolution + np.sum( Rk_, axis=0 )
plt.plot(Xsolution, Ysolution, 'r:', lw=1, alpha=0.9 );
else:
print( 'maximum iteration reached' )
# Compute the final segmented fit:
Xsolution = np.insert( np.append( breakpoints, max(X) ), 0, min(X) )
ones_ = np.ones_like(Xsolution)
Rk_ = [ c*ramp( Xsolution - x0 ) for x0, c in zip(breakpoints, ck) ]
Ysolution = a*ones_ + b*Xsolution + np.sum( Rk_, axis=0 )
plt.plot(Xsolution, Ysolution, 'm-', lw=10, alpha=0.2 );
plt.xlabel('X'); plt.ylabel('Y');
plt.figure(figsize=(20,10))
#plt.plot(t_segs, x_segs, ':', marker='|', color='grey', linewidth=2.0, markerfacecolor='w', markersize=30.0, markeredgewidth=1.0, dashes=[0.5,0.5], label='raw stays')
plt.plot(new_t_segs, new_x_segs, **segs_plot_kwargs, label='adjusted raw stays')
plt.plot(time_arr, raw_arr, ':', label='raw journey')
plt.plot(time_arr, noise_arr, '.-', label='noisy journey', alpha=0.25)
# hyperparam
nIterationMax = 500
# Sorting the breaks
#breakpoints = np.sort( np.array(breakpoints0) )
breakpoints = np.arange(0,24,1)
# time_arrDiffs
dt = np.min( np.diff(np.unique( time_arr)) )
ones = np.ones_like(time_arr)
yyysolution_last = noise_arr
all_breakpoints = []
loops1 = []
cycle = []
last_len = 0
set_len = 0
# loop through the whole data set
for i in range( nIterationMax ):
#print(yyysolution_last.shape)
ones = np.ones_like(time_arr)
# Linear regression: solve A*p = Y
Rk = [ramp( time_arr - xk ) for xk in breakpoints ]
Sk = [step( time_arr - xk ) for xk in breakpoints ]
A = np.array([ ones, time_arr ] + Rk + Sk )
p = lstsq(A.transpose(), noise_arr, rcond=None)[0]
# Parameters identification:
a, b = p[0:2]
ck = p[ 2:2+len(breakpoints) ]
dk = p[ 2+len(breakpoints): ]
# Estimation of the next break-points:
newBreakpoints = breakpoints - dk/ck
#print(np.max(np.abs(newBreakpoints - breakpoints)),dt/5, dt)
# Stop condition
if np.max(np.abs(newBreakpoints - breakpoints)) < dt/5:
print('Stopping criterion')
#break
# Compute the final segmented fit:
xxxsolution = np.insert( np.append( breakpoints, max(time_arr) ), 0, min(time_arr) )
ones = np.ones_like(xxxsolution)
Rk = [ c*ramp( xxxsolution - x0 ) for x0, c in zip(breakpoints, ck) ]
yyysolution = a*ones + b*xxxsolution + np.sum( Rk, axis=0 )
# Compute the final segmented fit:
ones_model = np.ones_like(time_arr)
Rk_model = [ c*ramp( time_arr - x0 ) for x0, c in zip(breakpoints, ck) ]
yyysolution_model = a*ones_model + b*time_arr + np.sum( Rk_model, axis=0 )
norm_err = np.linalg.norm(yyysolution_model-yyysolution_last)
round_norm_err = round(norm_err,5)
if round_norm_err in loops1:
cycle.append(round_norm_err)
set_len = len(list(set(cycle)))
consec = True
print("in loop", set_len, f"{np.linalg.norm(yyysolution_model-yyysolution_last):.5f}")
else:
loops1.append(round_norm_err)
consec = False
#print(norm_err, yyysolution.size, breakpoints.size )
if np.linalg.norm(yyysolution_model-yyysolution_last) < 0.01:
print(f'{i}: Stopping criterion #2')
#print(yyysolution.shape)
if i%20==0:
plt.plot(xxxsolution, yyysolution, color=colors[i%len(colors)-1], label=f'Iteration {i}')
breakpoints = newBreakpoints
breakpoints = breakpoints[(breakpoints > time_arr.min()) & (breakpoints < time_arr.max())]
breakpoints = np.sort( breakpoints )
all_breakpoints.append(breakpoints)
yyysolution_last = yyysolution_model
#if i%50:
# print(["{0:0.2f}".format(i) for i in breakpoints],len(breakpoints),len(xxxsolution))
if len(cycle) > 0:
if (last_len == set_len) & consec & (round_norm_err == min(cycle)):
break
else:
last_len = set_len
plt.legend();
plt.xlabel(r'time, $t$ [arb.]')
plt.ylabel(r'position, $x$ [arb.]')
ymin = noise_arr.min()-1*eps
ymax = noise_arr.max()+1*eps
#plt.ylim(ymin, ymax)
ax = plt.gca()
ax.xaxis.set_major_locator(MultipleLocator(1))
ax.xaxis.set_minor_locator(MultipleLocator(0.5))
plt.xlim(-0.05, 24.05)
#plt.xlim(-0.1, 19.1
#plt.xlim(15.1, 19.1)
plt.title('Trajectory', fontsize=36)
plt.grid(visible=True);
(len(all_breakpoints),all_breakpoints[-1].size)
all_breakpoints[0]
plot_all_breakpoints[0,:]
plot_all_breakpoints = np.empty(shape=(len(all_breakpoints),all_breakpoints[-1].size))
plot_all_breakpoints[:,:] = np.NaN
for n,row in enumerate(all_breakpoints):
#print(row)
if row.size == all_breakpoints[-1].size:
plot_all_breakpoints[n,:] = row
plt.figure(figsize=[20,8])
plt.plot(plot_all_breakpoints);
plt.figure(figsize=[20,8])
plt.plot(np.sum(plot_all_breakpoints, axis=1));
###Output
_____no_output_____
###Markdown
**Note** sometimes the errors don't converge; should we require the breakpoints themselves to converge instead?

Eval
###Code
calc_slope = lambda x1,y1,x2,y2: (y2-y1)/(x2-x1)
final_pairs = []
for n in range(0,len(yyysolution)-1,1):
slope = calc_slope(xxxsolution[n],yyysolution[n],xxxsolution[n+1],yyysolution[n+1])
print(slope)
mask = np.where((time_arr >= xxxsolution[n]) & (time_arr < xxxsolution[n+1]))
if abs(slope) < 0.1:
final_pairs.append((mask[0][0],mask[0][-1]))
from synthetic_data.trajectory import get_stay_indices
true_indices = get_stay_indices(new_stays, time_arr)
true_labels = np.zeros(time_arr.shape)
for pair in true_indices:
true_labels[pair[0]:pair[1]+1] = 1
###Output
_____no_output_____
###Markdown
np.sum(true_labels), true_labels.size-np.sum(true_labels), true_labels.size, np.sum(true_labels)/true_labels.size
###Code
pred_labels = np.zeros(time_arr.shape)
for pair in final_pairs:
pred_labels[pair[0]:pair[1]+1] = 1
np.sum(pred_labels), pred_labels.size-np.sum(pred_labels), pred_labels.size, np.sum(pred_labels)/pred_labels.size
from sklearn.metrics import confusion_matrix, precision_score, recall_score
confusion_matrix(true_labels, pred_labels)
precision_score(true_labels, pred_labels), recall_score(true_labels, pred_labels),
fig = plt.figure(figsize=(22,8))
ax1 = fig.add_subplot(2,1,1)
ax1.plot(new_t_segs, new_x_segs, **segs_plot_kwargs, label='adjusted raw stays')
ax1.plot(time_arr, raw_arr, ':', label='raw journey')
ax1.plot(time_arr, noise_arr, '.-', label='noisy journey', alpha=0.5)
ax1.plot(xxxsolution, yyysolution, '--', label=f'Iteration {i}')
ax1.legend();
ax1.set_xlabel(r'time, $t$ [arb.]')
ax1.set_ylabel(r'position, $x$ [arb.]')
ymin = noise_arr.min()-1*eps
ymax = noise_arr.max()+1*eps
#plt.ylim(ymin, ymax)
ax1.xaxis.set_major_locator(MultipleLocator(1))
ax1.xaxis.set_minor_locator(MultipleLocator(0.5))
ax1.set_xlim(-0.05, 24.05)
#plt.xlim(-0.1, 19.1
#plt.xlim(15.1, 19.1)
ax1.set_title('Trajectory', fontsize=36)
ax1.grid(visible=True);
ax2 = fig.add_subplot(2,1,2, adjustable='box', aspect=1.5, sharex=ax1)
ax2.plot(time_arr, true_labels, 'X:', markersize=8, label='True')
ax2.plot(time_arr, pred_labels, '.', label='Pred.')
ax2.set_ylim(-0.2,1.2)
ax2.set_xlim(-0.5,24.5)
'''ax = plt.gca()
ax2.xaxis.set_major_locator(MultipleLocator(1))
#ax2.xaxis.set_major_formatter(FormatStrFormatter('%d'))
# For the minor ticks, use no labels; default NullFormatter.
ax2.xaxis.set_minor_locator(MultipleLocator(0.5))
'''
plt.xlabel(r'time, $t$ [arb.]')
ax2.set_yticks([0,1])
ax2.set_yticklabels(['travel', 'stay'])
ax2.legend()
#ax2.set_title('Trajectory', fontsize=36)
ax2.grid(visible=True);
from sklearn.metrics import confusion_matrix, precision_score, recall_score
datetag = datetime.today().strftime('%Y%m%d')
### pickle.dump( trajectory, open( trajectory_tag, "wb" ) )
#trajectory = pickle.load( open( "20200625_trajectory_8stays__adjusted.pkl", "rb" ) )
prec_scores, reca_scores = [], []
nnnn = 0
while nnnn < 1:
if nnnn%10 == 0: print(nnnn)
try:
# Create data
time_arr, raw_arr, noise_arr, segments = get_rand_traj()
new_stays = get_adjusted_stays(segments, time_arr)
new_t_segs, new_x_segs = get_stay_segs(new_stays)
min_t, max_t = time_sub.min(), time_sub.max()
'''fig, ax = plt.subplots(1,1,figsize=(22,10))
# The adjusted raw-stays
plt.plot(new_t_segs, new_x_segs, **segs_plot_kwargs, label='adjusted raw stays')
plt.plot(time_sub, noise_journey_sub, '.-', color='gray', label='noisy journey', alpha=0.25)
plt.plot(time_sub, raw_journey_sub, ':', color='C0', label='raw journey')
'''
# hyperparam
nIterationMax = 100
# Sorting the breaks
#breakpoints = np.sort( np.array(breakpoints0) )
breakpoints = np.arange(0,24,1)
# time_arrDiffs
dt = np.min( np.diff(np.unique( time_arr)) )
ones = np.ones_like(time_arr)
yyysolution_last = noise_arr
loops1 = []
cycle = []
last_len = 0
set_len = 0
# loop through the whole data set
for i in range( nIterationMax ):
#print(yyysolution_last.shape)
ones = np.ones_like(time_arr)
# Linear regression: solve A*p = Y
Rk = [ramp( time_arr - xk ) for xk in breakpoints ]
Sk = [step( time_arr - xk ) for xk in breakpoints ]
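            # Design matrix: intercept, linear trend, plus one ramp and one step column per current breakpoint.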
A = np.array([ ones, time_arr ] + Rk + Sk )
p = lstsq(A.transpose(), noise_arr, rcond=None)[0]
# Parameters identification:
a, b = p[0:2]
ck = p[ 2:2+len(breakpoints) ]
dk = p[ 2+len(breakpoints): ]
# Estimation of the next break-points:
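            # dk measures the discontinuity at each breakpoint; shifting by dk/ck (the local slope change) drives it toward zero.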
newBreakpoints = breakpoints - dk/ck
#print(np.max(np.abs(newBreakpoints - breakpoints)),dt/5, dt)
# Stop condition
#if np.max(np.abs(newBreakpoints - breakpoints)) < dt/5:
# print('Stopping criterion')
# #break
# Compute the final segmented fit:
xxxsolution = np.insert( np.append( breakpoints, max(time_arr) ), 0, min(time_arr) )
ones = np.ones_like(xxxsolution)
Rk = [ c*ramp( xxxsolution - x0 ) for x0, c in zip(breakpoints, ck) ]
yyysolution = a*ones + b*xxxsolution + np.sum( Rk, axis=0 )
# Compute the final segmented fit:
ones_model = np.ones_like(time_arr)
Rk_model = [ c*ramp( time_arr - x0 ) for x0, c in zip(breakpoints, ck) ]
yyysolution_model = a*ones_model + b*time_arr + np.sum( Rk_model, axis=0 )
norm_err = np.linalg.norm(yyysolution_model-yyysolution_last)
round_norm_err = round(norm_err,5)
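            # Cycle detection: a previously seen (rounded) error norm means the iteration has entered a limit cycle;
            # the loop stops once the cycle revisits its minimum error (stopping criterion #3 below).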
if round_norm_err in loops1:
cycle.append(round_norm_err)
set_len = len(list(set(cycle)))
consec = True
#print("in loop", set_len)
else:
loops1.append(round_norm_err)
consec = False
#print(norm_err, yyysolution.size, breakpoints.size )
#if np.linalg.norm(yyysolution_model-yyysolution_last) < 0.01:
# print(f'{i}: Stopping criterion #2')
breakpoints = newBreakpoints
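            # Keep only breakpoints that remain inside the observed time window, then re-sort them.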
breakpoints = breakpoints[(breakpoints > time_arr.min()) & (breakpoints < time_arr.max())]
breakpoints = np.sort( breakpoints )
yyysolution_last = yyysolution_model
if len(cycle) > 0:
if (last_len == set_len) & consec & (round_norm_err == min(cycle)):
print(f'{i}: Stopping criterion #3')
break
else:
last_len = set_len
#=======================================================
'''plt.plot(xxxsolution, yyysolution, color="C5" , label='Piece. lin.')
plt.legend();
plt.xlabel(r'time, $t$ [arb.]')
plt.ylabel(r'position, $x$ [arb.]')
ymin = noise_journey_sub.min()-1*eps
ymax = noise_journey_sub.max()+1*eps
#plt.ylim(ymin, ymax)
ax = plt.gca()
ax.xaxis.set_major_locator(MultipleLocator(1))
ax.xaxis.set_minor_locator(MultipleLocator(0.5))
plt.xlim(-0.05, 24.05)
#plt.xlim(-0.1, 19.1
#plt.xlim(15.1, 19.1)
plt.grid(visible=True)'''
#=======================================================
#print(len(yyysolution))
# Calc slope to identify
calc_slope = lambda x1,y1,x2,y2: (y2-y1)/(x2-x1)
final_pairs = []
for n in range(0,len(yyysolution)-1,1):
slope = calc_slope(xxxsolution[n],yyysolution[n],xxxsolution[n+1],yyysolution[n+1])
#print(slope)
mask = np.where((time_arr >= xxxsolution[n]) & (time_arr < xxxsolution[n+1]))
if abs(slope) < 0.1:
final_pairs.append((mask[0][0],mask[0][-1]))
true_indices = get_stay_indices(new_stays, time_arr)
true_labels = np.zeros(time_arr.shape)
for pair in true_indices:
true_labels[pair[0]:pair[1]+1] = 1
pred_labels = np.zeros(time_arr.shape)
for pair in final_pairs:
pred_labels[pair[0]:pair[1]+1] = 1
prec = precision_score(true_labels, pred_labels)
rec = recall_score(true_labels, pred_labels)
'''plt.title(f'Trajectory, prec.: {prec}, rec. {rec}', fontsize=36)
plt.savefig(trajectory_tag+'.png')'''
#=======================================================
#print(precision_score(true_labels, pred_labels))
#confusion_matrix(true_labels, pred_labels)
prec_scores.append(prec)
reca_scores.append(rec)
nnnn+=1
except:
pass
preciss = np.array(prec_scores)
recalls = np.array(reca_scores)
#print(preciss.mean(), recalls.mean())
recaprec = np.hstack([recalls.reshape(-1,1),preciss.reshape(-1,1)])
recaprec = recaprec[recaprec[:,0].argsort()]
preciss.mean(), (1-recalls).mean()
binw = 0.02
bins=np.arange(0,1.0+binw,binw)
_ = plt.hist(preciss, bins=bins, alpha=0.5, label='prec.')
_ = plt.hist(1-recalls, bins=bins, alpha=0.5, label='1-recall')
plt.legend()
plt.grid();
plt.scatter(1.-recaprec[:,0], recaprec[:,1], alpha=0.2)
###Output
_____no_output_____