path | concatenated_notebook
---|---
examples/inference/ex01a_inference_SIR.ipynb | ###Markdown
Inference of parameters (SIR model)

In this notebook, we consider the SIR model with symptomatic and asymptomatic infectives. We try to infer the model parameters

* $\alpha$ (fraction of asymptomatic infectives),
* $\beta$ (probability of infection on contact),
* $\gamma_{I_a}$ (rate of recovery for asymptomatic infected individuals), and
* $\gamma_{I_s}$ (rate of recovery for symptomatic infected individuals)

when given the full data (of classes S, Ia, Is) from a generated trajectory.
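Schematically, the estimator used below returns a maximum a posteriori (MAP) estimate: for parameters $\theta = (\alpha, \beta, \gamma_{I_a}, \gamma_{I_s})$ and an observed trajectory $x$, it minimises the negative log-posterior

$$\theta_{\mathrm{MAP}} = \underset{\theta}{\mathrm{arg\,min}}\; \big[-\log p(x \mid \theta) - \log p(\theta)\big],$$

which is why the code below works with `obtain_minus_log_p` and explicit priors rather than with the likelihood directly.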
###Code
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
import matplotlib.image as mpimg
import pyross
import time
from IPython.display import Image
Image('https://raw.githubusercontent.com/rajeshrinet/pyross/d2b38651d4d1569f0b10d1780aa01a55c5c89170/examples/inference/SIIR.jpg')
###Output
_____no_output_____
###Markdown
1) Generate a trajectoryWe generate a test trajectory on a population with two ages groups.
###Code
M = 2 # the population has two age groups
N = 1e6 # and this is the total population
# parameters for generating synthetic trajectory
beta = 0.02 # infection rate
gIa = 1./7 # recovery rate of asymptomatic infectives
gIs = 1./7 # recovery rate of symptomatic infectives
alpha = 0.2 # fraction of asymptomatic infectives
fsa = 1 # the self-isolation parameter
# set the age structure
fi = np.array([0.25, 0.75]) # fraction of population in each age group
Ni = N*fi
# set the contact structure
C = np.array([[18., 9.],
[3., 12.]])
# C_ij = number of people from group i that an individual from group j meets per day
# set up initial condition
Ia0 = np.array([10, 10]) # each age group has asymptomatic infectives
Is0 = np.array([2, 2]) # and also symptomatic infectives
R0 = np.array([0, 0]) # there are no recovered individuals initially
S0 = Ni - (Ia0 + Is0 + R0)
Tf = 100
Nf = Tf+1
def contactMatrix(t):
return C
parameters = {'alpha':alpha, 'beta':beta, 'gIa':gIa, 'gIs':gIs,'fsa':fsa}
true_parameters = {'alpha':alpha, 'beta':beta, 'gIa':gIa, 'gIs':gIs,'fsa':fsa}
# use pyross stochastic to generate traj and save
sto_model = pyross.stochastic.SIR(parameters, M, Ni)
data = sto_model.simulate(S0, Ia0, Is0, contactMatrix, Tf, Nf, method='tau-leaping')
data_array = data['X']
np.save('SIR_sto_traj.npy', data_array)
fig = plt.figure(num=None, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k')
plt.rcParams.update({'font.size': 22})
t = data['t']
plt.fill_between(t, 0, np.sum(data_array[:, :M], axis=1), alpha=0.3)
plt.plot(t, np.sum(data_array[:, :M], axis=1), '-', label='S', lw=2)
plt.fill_between(t, 0, np.sum(data_array[:, M:2*M], axis=1), alpha=0.3)
plt.plot(t, np.sum(data_array[:, M:2*M], axis=1), '-', label='Ia', lw=2)
plt.fill_between(t, 0, np.sum(data_array[:, 2*M:3*M], axis=1), alpha=0.3)
plt.plot(t, np.sum(data_array[:, 2*M:3*M], axis=1), '-', label='Is', lw=2)
plt.legend(fontsize=26)
plt.grid()
plt.xlabel(r'time')
plt.autoscale(enable=True, axis='x', tight=True)
###Output
_____no_output_____
###Markdown
2) Inference

We take the first $20$ data points of the trajectory and use them to infer the parameters of the model.
###Code
# load the data and rescale to intensive variables
Tf_inference = 20 # truncate to only the first few data points
Nf_inference = Tf_inference+1
x = np.load('SIR_sto_traj.npy').astype('float')
x = (x)[:Nf_inference]
estimator = pyross.inference.SIR(parameters, M, Ni)
# Compare the deterministic trajectory and the stochastic trajectory with the same
# initial conditions and parameters
x0=x[0]
estimator.set_det_model(parameters)
estimator.set_contact_matrix(contactMatrix)
xm = estimator.integrate(x[0], 0, Tf_inference, Nf_inference)
fig = plt.figure(num=None, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k')
plt.rcParams.update({'font.size': 22})
plt.plot(np.sum(xm[:, M:], axis=1), label='deterministic I')
plt.plot(np.sum(x[:Nf_inference, M:], axis=1), label='stochastic I')
plt.legend()
plt.show()
# compute -log_p for the original (correct) parameters
start_time = time.time()
parameters = {'alpha':alpha, 'beta':beta, 'gIa':gIa, 'gIs':gIs,'fsa':fsa}
# use faster ODE methods to speed up inference
estimator.set_lyapunov_method('euler')
logp = estimator.obtain_minus_log_p(parameters, x, Tf_inference, contactMatrix, tangent=False)
end_time = time.time()
print(logp)
print(end_time - start_time)
# compare to tangent space
start_time = time.time()
parameters = {'alpha':alpha, 'beta':beta, 'gIa':gIa, 'gIs':gIs,'fsa':fsa}
logp = estimator.obtain_minus_log_p(parameters, x, Tf_inference, contactMatrix, tangent=True)
end_time = time.time()
print(logp)
print(end_time - start_time)
# Define the prior (log normal prior around guess of parameter with defined std. deviation)
alpha_g = 0.25
beta_g = 0.04
gIa_g = 0.1
gIs_g = 0.1
# compute -log_p for the initial guess
parameters = {'alpha':alpha_g, 'beta':beta_g, 'gIa':gIa_g, 'gIs':gIs_g, 'fsa':fsa}
logp = estimator.obtain_minus_log_p(parameters, x, Tf_inference, contactMatrix)
print(logp)
# Set up priors
eps = 1e-4
priors = {
'alpha':{
'mean': alpha_g,
'std': 0.2,
'bounds': [eps, 0.8],
'prior_fun': 'truncnorm'
},
'beta':{
'mean': beta_g,
'std': 0.1,
'bounds': [eps, 0.2],
'prior_fun': 'lognorm'
},
'gIa':{
'mean': gIa_g,
'std': 0.2,
'bounds': [eps, 0.6]
},
'gIs':{
'mean': gIs_g,
'std': 0.2,
'bounds': [eps, 0.6]
}
}
# Stopping criterion for minimisation (relative change in function value)
ftol = 1e-6
start_time = time.time()
res = estimator.infer_parameters(x, Tf_inference, contactMatrix, priors, tangent=False,
global_max_iter=20, local_max_iter=400,
cma_population=32, global_atol=10,
ftol=ftol, verbose=True)
end_time = time.time()
print(res['map_dict']) # best guess
print(end_time - start_time)
# compute log_p for best estimate
start_time = time.time()
logp = estimator.obtain_minus_log_p(res['map_dict'], x, Tf_inference, contactMatrix)
end_time = time.time()
print(logp)
print(end_time - start_time)
print("True parameters:")
print(true_parameters)
print("\nInferred parameters:")
print(res['map_dict'])
print(res['flat_map'])
x = np.load('SIR_sto_traj.npy').astype('float')
Nf = x.shape[0]
Tf = Nf-1
# set the deterministic method to be solve_ivp for accurate integration over long time scale
estimator.set_det_model(res['map_dict'])
estimator.set_params(res['map_dict'])
x_det = estimator.integrate(x[Nf_inference], Nf_inference, Tf, Nf-Nf_inference)
t_inf = np.linspace(Nf_inference, Tf, Nf-Nf_inference)
fig = plt.figure(num=None, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k')
plt.rcParams.update({'font.size': 22})
# plt.plot(np.sum(x_det[:, :M], axis=1), label='Inferred S')
# plt.plot(np.sum(x[:, :M], axis=1), label='True S')
plt.plot(t_inf, np.sum(x_det[:, M:2*M], axis=1), label='Inferred Ia')
plt.plot(np.sum(x[:, M:2*M], axis=1), label='True Ia')
plt.plot(t_inf, np.sum(x_det[:, 2*M:3*M], axis=1), label='Inferred Is')
plt.plot(np.sum(x[:, 2*M:3*M], axis=1), label='True Is')
plt.xlim([0, Tf])
plt.axvspan(0, Tf_inference,
label='Used for inference',
alpha=0.3, color='dodgerblue')
plt.legend()
plt.show()
eps = 1e-3
x = np.load('SIR_sto_traj.npy').astype('float')[:Nf_inference]
hess = estimator.hessian(x, Tf_inference, res, contactMatrix=contactMatrix, eps=eps, tangent=False,
fd_method="central")
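# The inverse Hessian of -log(posterior), evaluated at the MAP estimate, approximates
# the posterior covariance of the inferred parameters (Laplace approximation); the
# eigenvalues computed below indicate how tightly each parameter combination is constrained.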
cov = np.linalg.inv(hess)
print(cov)
v, w = np.linalg.eig(cov)
print(v)
###Output
[[ 2.38152583e-04 -7.75990575e-08 4.31932018e-04 -1.14960050e-04]
[-7.75990575e-08 5.31514895e-08 -1.50420118e-07 3.86970234e-08]
[ 4.31932018e-04 -1.50420118e-07 8.97543196e-04 -2.31197807e-04]
[-1.14960050e-04 3.86970234e-08 -2.31197807e-04 7.44430959e-05]]
[1.17197378e-03 2.44894533e-05 1.36756717e-05 5.31253490e-08]
###Markdown
From here onwards, still work in progress (need to update the Forecast module)
###Code
parameters = res['map_dict'].copy()
parameters['fsa'] = fsa
parameters['cov'] = cov
# Initialise pyross forecast module
model_forecast = pyross.forecast.SIR(parameters, M, Ni)
# Initial condition for forecast is last configuration from inference-trajectory
S0_forecast = x[Tf_inference,:M]
Ia0_forecast = x[Tf_inference,M:2*M]
Is0_forecast = x[Tf_inference,2*M:]
print(Ia0_forecast, Is0_forecast)
# Number of simulations over which we average, use 500
Ns = 500
Tf_forecast = Tf - Tf_inference
Nf_forecast = Tf_forecast+1
result_forecast = model_forecast.simulate(S0_forecast, Ia0_forecast, Is0_forecast,
contactMatrix, Tf_forecast, Nf_forecast,
verbose=True, method='deterministic',
Ns=Ns)
trajectories_forecast = result_forecast['X']
t_forecast = result_forecast['t'] + Tf_inference
fontsize=25
#
ylabel=r'Fraction of infectives'
#
# Plot total number of symptomatic infectives
cur_trajectories_forecast = trajectories_forecast[:,4] + trajectories_forecast[:,5]
cur_mean_forecast = np.mean( cur_trajectories_forecast, axis=0)
percentile = 10
percentiles_lower = np.percentile(cur_trajectories_forecast,percentile,axis=0)
percentiles_upper = np.percentile(cur_trajectories_forecast,100-percentile,axis=0)
percentiles_median = np.percentile(cur_trajectories_forecast,50,axis=0)
cur_trajectory_underlying = data_array[:,4] + data_array[:,5]
#
# Plot trajectories
#
fig, ax = plt.subplots(1,1,figsize=(10,8))
ax.axvspan(0, Tf_inference,
label='Range used for inference',
alpha=0.3, color='dodgerblue')
ax.set_title(r'Forecast with inferred parameters',
y=1.05,
fontsize=fontsize)
# for i,e in enumerate(cur_trajectories_forecast):
# ax.plot(t_forecast,e,
# alpha=0.15,
# )
ax.fill_between(t_forecast, percentiles_lower, percentiles_upper, color='darkorange', alpha=0.2)
ax.plot(cur_trajectory_underlying,
lw=3,
color='limegreen',
label='Trajectory used for inference')
ax.plot(t_forecast,percentiles_median,
alpha=1,ls='--',
color='orange',label='Median',
lw=3)
plt.legend()
plt.xlim([0, Tf])
plt.show()
###Output
_____no_output_____ |
chap03/textbook-chap-3-4.ipynb | ###Markdown
3. Getting Started with Neural Networks

Predicting House Prices: A Regression Example

In the **housing prices** problem, the goal is to predict a continuous value instead of a discrete label. This is an example of a regression problem, in contrast to the classification problems in the previous examples.
###Code
from keras.datasets import boston_housing
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import KFold, train_test_split
from keras.utils.np_utils import to_categorical
from keras import models
from keras import layers
from tensorflow import keras as tf_keras
##########
# Ingestion
##########
(train_data, train_labels), (test_data, test_labels) = boston_housing.load_data()
###Output
_____no_output_____
###Markdown
The data consists of 404 training examples and 102 test samples, each with 13 numerical features. The targets are the median prices of homes in a Boston suburb in the mid-1970s, in thousands of dollars.
###Code
# For testing
# print([td[:] for td in train_data[0:3]])
# print(train_labels[0:3])
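# Shape sanity checks (expected values taken from the description above):
# print(train_data.shape, train_labels.shape)   # -> (404, 13) (404,)
# print(test_data.shape, test_labels.shape)     # -> (102, 13) (102,)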
###Output
_____no_output_____
###Markdown
Before we feed the data to the network, we perform feature scaling.
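`StandardScaler` standardises each feature using the statistics of the training set only, $x' = (x - \mu_{\text{train}})/\sigma_{\text{train}}$; the same training-set mean and standard deviation are then reused to transform the test data, which is why `fit_transform` is applied to the training set and only `transform` to the test set.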
###Code
##########
# Preprocessing
##########
sc = StandardScaler()
x_train = sc.fit_transform(train_data)
x_test = sc.transform(test_data)
num_features = x_train.shape[1]
###Output
_____no_output_____
###Markdown
As the number of samples is small, we use a small network with two hidden layers, each with 64 units. This will help to mitigate overfitting. The model ends with a single unit and no activation, which is typical for a scalar regression problem (predicting a single continuous value).
###Code
def build_model(num_features=13):
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(num_features,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(1))
model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
return model
###Output
_____no_output_____
###Markdown
Now, instead of a train-test split, we will use k-fold cross-validation with k = 4. Note that we are effectively grid searching over the number of epochs to find the best value.
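As a minimal illustration (independent of the housing data), note that `KFold` only produces index arrays, which are then used to slice the training set:

```python
from sklearn.model_selection import KFold
import numpy as np

toy = np.arange(8)  # stand-in for 8 training samples
for train_idx, val_idx in KFold(n_splits=4, shuffle=True, random_state=0).split(toy):
    print(train_idx, val_idx)  # 6 training indices and 2 validation indices per fold
```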
###Code
kf = KFold(n_splits=4, random_state=0, shuffle=True)
f = 1
mae_histories = []
num_epochs = 100
##########
# Train & Model Tuning
##########
for ftrain_index, ftest_index in kf.split(x_train):
print("Processing fold " + str(f))
f_xtrain = x_train[ftrain_index]
f_xtest = x_train[ftest_index]
f_ytrain = train_labels[ftrain_index]
f_ytest = train_labels[ftest_index]
model = build_model()
history = model.fit(f_xtrain, f_ytrain,
validation_data=(f_xtest, f_ytest),
epochs = num_epochs, batch_size=1,
verbose=0)
mae_histories.append(history)
f +=1
###Output
Processing fold 1
Processing fold 2
Processing fold 3
Processing fold 4
###Markdown
Let's check the MAE scores for each of the folds, followed by the mean of the MAE scores.
###Code
# For testing
# for h in mae_histories:
# print(h.history['val_mae'][-1])
# Performance evaluation using MAE
val_maes = [h.history['val_mae'][-1] for h in mae_histories]
print(val_maes)
print(np.mean(np.array(val_maes)))
###Output
[2.525017261505127, 2.4476616382598877, 2.1648025512695312, 2.6725945472717285]
2.4525189995765686
###Markdown
The different runs show different MAE scores. The average is a much more reliable metric. Let's now try with 500 epochs.
###Code
##########
# Train / Model Tuning
##########
f = 1
mae_histories = []
num_epochs = 500
for ftrain_index, ftest_index in kf.split(x_train):
print("Processing fold " + str(f))
f_xtrain = x_train[ftrain_index]
f_xtest = x_train[ftest_index]
f_ytrain = train_labels[ftrain_index]
f_ytest = train_labels[ftest_index]
model = build_model()
history = model.fit(f_xtrain, f_ytrain,
validation_data=(f_xtest, f_ytest),
epochs = num_epochs, batch_size=1,
verbose=0)
mae_histories.append(history)
f +=1
# MAE across different folds
val_maes2 = [h.history['val_mae'][-1] for h in mae_histories]
print(val_maes2)
print(np.mean(np.array(val_maes2)))
###Output
[2.6322991847991943, 2.8974709510803223, 2.3006911277770996, 3.0305492877960205]
2.715252637863159
###Markdown
Let's now compute the MAE across the different epochs.
###Code
val_maes_by_fold = [h.history['val_mae'] for h in mae_histories]
val_maes_by_epoch = np.array(val_maes_by_fold).transpose()
mean_val_maes_by_epoch = np.mean(val_maes_by_epoch, axis=1)
metrics_df = pd.DataFrame({'val_mae' : mean_val_maes_by_epoch})
metrics_df['epoch'] = metrics_df.index+1
fig = plt.figure(figsize=(18,5))
ax = fig.add_subplot(211)
metrics_df.plot(kind='line', x='epoch', y='val_mae', ax=ax, label='validation', color='red',)
ax.set_ylabel("MAE")
plt.show()
###Output
_____no_output_____
###Markdown
The raw curve is hard to read. Let's omit the first 10 data points, and replace each point with an exponential moving average of the previous points.
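`pd.Series.ewm(..., span=5).mean()` computes an exponentially weighted moving average: with span $s$ the smoothing factor is $\alpha = 2/(s+1)$, so each smoothed point is roughly $\tilde{x}_t = \alpha x_t + (1-\alpha)\,\tilde{x}_{t-1}$, giving recent epochs more weight while damping fold-to-fold noise.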
###Code
metrics_df2 = metrics_df.iloc[10:].copy()
metrics_df2['val_mae_ewm'] = pd.Series.ewm(metrics_df2['val_mae'], span=5).mean()
fig = plt.figure(figsize=(18,5))
ax = fig.add_subplot(211)
metrics_df2.plot(kind='line', x='epoch', y='val_mae_ewm', ax=ax, label='validation', color='red',)
ax.set_ylabel("Smoothed MAE")
ax.set_xticks(range(10,500,10))
ax.grid()
plt.show()
###Output
_____no_output_____
###Markdown
From here we can see that the validation MAE stops improving after about 60 epochs. So we use this as the tuned parameter.
###Code
##########
# Tuned Model
##########
final_model = build_model()
final_model.fit(f_xtrain, f_ytrain,
validation_data=(f_xtest, f_ytest),
epochs = 60, batch_size=1,
verbose=0)
##########
# Evaluate on Test Set
##########
test_mse_score, test_mae_score = final_model.evaluate(x_test, test_labels)
print(test_mae_score)
###Output
102/102 [==============================] - 0s 53us/step
2.7454662322998047
###Markdown
The test MAE is about $2,750 (prices are in thousands of dollars, and the test MAE above is roughly 2.75)
###Code
##########
# Predict
##########
final_model.predict(test_data[:5])
###Output
_____no_output_____
###Markdown
Tensorflow Implementation
###Code
# Train-test split
x_train__train, x_train__val, y_train__train, y_train__val = train_test_split(x_train, train_labels)
# Train
modeli = tf_keras.models.Sequential([
tf_keras.layers.Dense(64, activation='relu', input_shape=(num_features,)),
tf_keras.layers.Dense(1),
])
modeli.compile(optimizer='sgd', loss='mean_squared_error', metrics=['mae'])
modeli.fit(x_train, train_labels,
validation_data=(x_train__val, y_train__val),
epochs = 20, verbose=0)
# Evaluate
modeli.evaluate(x_test, test_labels)
# Predict
modeli.predict(x_test[:3])
###Output
4/4 [==============================] - 0s 1ms/step - loss: 18.3456 - mae: 2.8819
|
chapter8_PyTorch-Advances/tensorboard.ipynb | ###Markdown
TensorBoard Visualization [github](https://github.com/lanpa/tensorboard-pytorch)
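The pattern used below boils down to creating one `SummaryWriter` and repeatedly calling `add_scalars` with an epoch index. A minimal, self-contained sketch (the tag names and values here are arbitrary placeholders):

```python
from tensorboardX import SummaryWriter

writer = SummaryWriter()  # event files go to ./runs/<timestamp> by default
for epoch in range(3):
    writer.add_scalars('Loss', {'train': 1.0 / (epoch + 1),
                                'valid': 1.2 / (epoch + 1)}, epoch)
writer.close()
# Inspect the curves with:  tensorboard --logdir runs
```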
###Code
import sys
sys.path.append('..')
import numpy as np
import torch
from torch import nn
import torch.nn.functional as F
from torch.autograd import Variable
from torchvision.datasets import CIFAR10
from utils import resnet
from torchvision import transforms as tfs
from datetime import datetime
from tensorboardX import SummaryWriter
# use data augmentation
def train_tf(x):
im_aug = tfs.Compose([
tfs.Resize(120),
tfs.RandomHorizontalFlip(),
tfs.RandomCrop(96),
tfs.ColorJitter(brightness=0.5, contrast=0.5, hue=0.5),
tfs.ToTensor(),
tfs.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])
x = im_aug(x)
return x
def test_tf(x):
im_aug = tfs.Compose([
tfs.Resize(96),
tfs.ToTensor(),
tfs.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])
x = im_aug(x)
return x
train_set = CIFAR10('./data', train=True, transform=train_tf)
train_data = torch.utils.data.DataLoader(train_set, batch_size=256, shuffle=True, num_workers=4)
valid_set = CIFAR10('./data', train=False, transform=test_tf)
valid_data = torch.utils.data.DataLoader(valid_set, batch_size=256, shuffle=False, num_workers=4)
net = resnet(3, 10)
optimizer = torch.optim.SGD(net.parameters(), lr=0.1, weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()
writer = SummaryWriter()
def get_acc(output, label):
total = output.shape[0]
_, pred_label = output.max(1)
num_correct = (pred_label == label).sum().data[0]
return num_correct / total
if torch.cuda.is_available():
net = net.cuda()
prev_time = datetime.now()
for epoch in range(30):
train_loss = 0
train_acc = 0
net = net.train()
for im, label in train_data:
if torch.cuda.is_available():
im = Variable(im.cuda()) # (bs, 3, h, w)
label = Variable(label.cuda()) # (bs, h, w)
else:
im = Variable(im)
label = Variable(label)
# forward
output = net(im)
loss = criterion(output, label)
# backward
optimizer.zero_grad()
loss.backward()
optimizer.step()
train_loss += loss.data[0]
train_acc += get_acc(output, label)
cur_time = datetime.now()
h, remainder = divmod((cur_time - prev_time).seconds, 3600)
m, s = divmod(remainder, 60)
time_str = "Time %02d:%02d:%02d" % (h, m, s)
valid_loss = 0
valid_acc = 0
net = net.eval()
for im, label in valid_data:
if torch.cuda.is_available():
im = Variable(im.cuda(), volatile=True)
label = Variable(label.cuda(), volatile=True)
else:
im = Variable(im, volatile=True)
label = Variable(label, volatile=True)
output = net(im)
loss = criterion(output, label)
valid_loss += loss.data[0]
valid_acc += get_acc(output, label)
epoch_str = (
"Epoch %d. Train Loss: %f, Train Acc: %f, Valid Loss: %f, Valid Acc: %f, "
% (epoch, train_loss / len(train_data),
train_acc / len(train_data), valid_loss / len(valid_data),
valid_acc / len(valid_data)))
prev_time = cur_time
# ====================== use tensorboard ==================
writer.add_scalars('Loss', {'train': train_loss / len(train_data),
'valid': valid_loss / len(valid_data)}, epoch)
writer.add_scalars('Acc', {'train': train_acc / len(train_data),
'valid': valid_acc / len(valid_data)}, epoch)
# =========================================================
print(epoch_str + time_str)
###Output
Epoch 0. Train Loss: 1.877906, Train Acc: 0.315410, Valid Loss: 2.198587, Valid Acc: 0.293164, Time 00:00:26
Epoch 1. Train Loss: 1.398501, Train Acc: 0.498657, Valid Loss: 1.877540, Valid Acc: 0.400098, Time 00:00:27
Epoch 2. Train Loss: 1.141419, Train Acc: 0.597628, Valid Loss: 1.872355, Valid Acc: 0.446777, Time 00:00:27
Epoch 3. Train Loss: 0.980048, Train Acc: 0.658367, Valid Loss: 1.672951, Valid Acc: 0.475391, Time 00:00:27
Epoch 4. Train Loss: 0.871448, Train Acc: 0.695073, Valid Loss: 1.263234, Valid Acc: 0.578613, Time 00:00:28
Epoch 5. Train Loss: 0.794649, Train Acc: 0.723992, Valid Loss: 2.142715, Valid Acc: 0.466699, Time 00:00:27
Epoch 6. Train Loss: 0.736611, Train Acc: 0.741554, Valid Loss: 1.701331, Valid Acc: 0.500391, Time 00:00:27
Epoch 7. Train Loss: 0.695095, Train Acc: 0.756816, Valid Loss: 1.385478, Valid Acc: 0.597656, Time 00:00:28
Epoch 8. Train Loss: 0.652659, Train Acc: 0.773796, Valid Loss: 1.029726, Valid Acc: 0.676465, Time 00:00:27
Epoch 9. Train Loss: 0.623829, Train Acc: 0.784144, Valid Loss: 0.933388, Valid Acc: 0.682520, Time 00:00:27
Epoch 10. Train Loss: 0.581615, Train Acc: 0.798792, Valid Loss: 1.291557, Valid Acc: 0.635938, Time 00:00:27
Epoch 11. Train Loss: 0.559358, Train Acc: 0.805708, Valid Loss: 1.430408, Valid Acc: 0.586426, Time 00:00:28
Epoch 12. Train Loss: 0.534197, Train Acc: 0.816853, Valid Loss: 0.960802, Valid Acc: 0.704785, Time 00:00:27
Epoch 13. Train Loss: 0.512111, Train Acc: 0.822389, Valid Loss: 0.923353, Valid Acc: 0.716602, Time 00:00:27
Epoch 14. Train Loss: 0.494577, Train Acc: 0.828225, Valid Loss: 1.023517, Valid Acc: 0.687207, Time 00:00:27
Epoch 15. Train Loss: 0.473396, Train Acc: 0.835212, Valid Loss: 0.842679, Valid Acc: 0.727930, Time 00:00:27
Epoch 16. Train Loss: 0.459708, Train Acc: 0.840290, Valid Loss: 0.826854, Valid Acc: 0.726953, Time 00:00:28
Epoch 17. Train Loss: 0.433836, Train Acc: 0.847931, Valid Loss: 0.730658, Valid Acc: 0.764258, Time 00:00:27
Epoch 18. Train Loss: 0.422375, Train Acc: 0.854401, Valid Loss: 0.677953, Valid Acc: 0.778125, Time 00:00:27
Epoch 19. Train Loss: 0.410208, Train Acc: 0.857370, Valid Loss: 0.787286, Valid Acc: 0.754102, Time 00:00:27
Epoch 20. Train Loss: 0.395556, Train Acc: 0.862923, Valid Loss: 0.859754, Valid Acc: 0.738965, Time 00:00:27
Epoch 21. Train Loss: 0.382050, Train Acc: 0.866554, Valid Loss: 1.266704, Valid Acc: 0.651660, Time 00:00:27
Epoch 22. Train Loss: 0.368614, Train Acc: 0.871213, Valid Loss: 0.912465, Valid Acc: 0.738672, Time 00:00:27
Epoch 23. Train Loss: 0.358302, Train Acc: 0.873964, Valid Loss: 0.963238, Valid Acc: 0.706055, Time 00:00:27
Epoch 24. Train Loss: 0.347568, Train Acc: 0.879620, Valid Loss: 0.777171, Valid Acc: 0.751855, Time 00:00:27
Epoch 25. Train Loss: 0.339247, Train Acc: 0.882215, Valid Loss: 0.707863, Valid Acc: 0.777734, Time 00:00:27
Epoch 26. Train Loss: 0.329292, Train Acc: 0.885830, Valid Loss: 0.682976, Valid Acc: 0.790527, Time 00:00:27
Epoch 27. Train Loss: 0.313049, Train Acc: 0.890761, Valid Loss: 0.665912, Valid Acc: 0.795410, Time 00:00:27
Epoch 28. Train Loss: 0.305482, Train Acc: 0.891944, Valid Loss: 0.880263, Valid Acc: 0.743848, Time 00:00:27
Epoch 29. Train Loss: 0.301507, Train Acc: 0.895289, Valid Loss: 1.062325, Valid Acc: 0.708398, Time 00:00:27
|
notebooks/python_sdk/experiments/deep_learning/Use Keras and HPO to recognize hand-written digits.ipynb | ###Markdown
Use Keras and hyperparameter optimization (HPO) to recognize hand-written digits with `ibm-watson-machine-learning`

This notebook contains steps and code to demonstrate support of Deep Learning experiments in the Watson Machine Learning service. It introduces commands for data retrieval, training definition persistence, experiment training, model persistence, model deployment and scoring. Some familiarity with Python is helpful. This notebook uses Python 3.6.

Learning goals

The learning goals of this notebook are:
- Working with the Watson Machine Learning service.
- Training Deep Learning models (TensorFlow).
- Saving trained models in the Watson Machine Learning repository.
- Online deployment and scoring of the trained model.

Contents

This notebook contains the following parts:
1. [Setup](setup)
2. [Create model definition](model_df)
3. [Train model](training)
4. [Persist trained model](persist)
5. [Deploy and Score](deploy)
6. [Clean up](clean)
7. [Summary and next steps](summary)

1. Setup

Before you use the sample code in this notebook, you must perform the following set up tasks:
- Create a [Watson Machine Learning Service](https://console.ng.bluemix.net/catalog/services/ibm-watson-machine-learning/) instance (a lite plan is offered).
- Create a [Cloud Object Storage (COS)](https://console.bluemix.net/catalog/infrastructure/cloud-object-storage) instance (a lite plan is offered).
  - After you create a COS instance, go to your COS dashboard.
  - In the "Service credentials" tab, click on "New Credential".
  - Add the inline configuration parameter by enabling the "HMAC" checkbox. This configuration parameter adds the following section to the instance credentials, which will be used later on:
```
"cos_hmac_keys": {
    "access_key_id": "***",
    "secret_access_key": "***"
}
```

1.1 Working with Cloud Object Storage

The `ibm_boto3` library allows Python developers to manage Cloud Object Storage.

**Note:** If `ibm_boto3` is not preinstalled in your environment, please install it by running the following command: `!pip install ibm-cos-sdk`
###Code
import ibm_boto3
from ibm_botocore.client import Config
import os
import json
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Define the endpoint to be used. You can find this information in "Endpoint" section of your Cloud Object Storage instance's dashboard.
###Code
cos_credentials = {
"apikey": "***",
"cos_hmac_keys": {
"access_key_id": "***",
"secret_access_key": "***"
},
"endpoints": "***",
"iam_apikey_description": "***",
"iam_apikey_name": "***",
"iam_role_crn": "***",
"iam_serviceid_crn": "***",
"resource_instance_id": "***"
}
api_key = cos_credentials['apikey']
service_instance_id = cos_credentials['resource_instance_id']
auth_endpoint = 'https://iam.stage1.ng.bluemix.net/oidc/token'
service_endpoint = 'https://s3.us-west.cloud-object-storage.test.appdomain.cloud'
###Output
_____no_output_____
###Markdown
Create the Boto resource by providing type, endpoint_url and credentials.
###Code
cos = ibm_boto3.resource('s3',
ibm_api_key_id=api_key,
ibm_service_instance_id=service_instance_id,
ibm_auth_endpoint=auth_endpoint,
config=Config(signature_version='oauth'),
endpoint_url=service_endpoint)
###Output
_____no_output_____
###Markdown
Create the buckets that you will use to store training data and training results. **Note:** Bucket names have to be unique - please update the following ones to unique names of your own.
###Code
buckets = ['tf-keras-data-example', 'tf-keras-results-example']
for bucket in buckets:
if not cos.Bucket(bucket) in cos.buckets.all():
print('Creating bucket "{}"...'.format(bucket))
try:
cos.create_bucket(Bucket=bucket)
except ibm_boto3.exceptions.ibm_botocore.client.ClientError as e:
print('Error: {}.'.format(e.response['Error']['Message']))
###Output
_____no_output_____
###Markdown
The buckets are created.
###Code
print(list(cos.buckets.limit(50)))
###Output
_____no_output_____
###Markdown
1.2 Download the MNIST data and upload it to the COS bucket

In this notebook we work with the Keras **MNIST** sample dataset. We download the training data and upload it to the data bucket created above. The following cell creates the 'MNIST_KERAS_DATA' folder and downloads the file from the link.

**Note:** First install the `wget` library with the following command: `!pip install wget`
###Code
link = 'https://s3.amazonaws.com/img-datasets/mnist.npz'
import wget
data_dir = 'MNIST_KERAS_DATA'
if not os.path.isdir(data_dir):
os.mkdir(data_dir)
if not os.path.isfile(os.path.join(data_dir, os.path.join(link.split('/')[-1]))):
wget.download(link, out=data_dir)
!ls MNIST_KERAS_DATA
###Output
mnist.npz
###Markdown
Upload the data files to the created bucket.
###Code
bucket_name = buckets[0]
bucket_obj = cos.Bucket(bucket_name)
for filename in os.listdir(data_dir):
with open(os.path.join(data_dir, filename), 'rb') as data:
bucket_obj.upload_file(os.path.join(data_dir, filename), filename)
print('{} is uploaded.'.format(filename))
###Output
mnist.npz is uploaded.
###Markdown
You can see the list of all buckets and their contents.
###Code
for obj in bucket_obj.objects.all():
print('Object key: {}'.format(obj.key))
print('Object size (kb): {}'.format(obj.size/1024))
###Output
_____no_output_____
###Markdown
1.3 Connection to WML

Authenticate the Watson Machine Learning service on IBM Cloud. You need to provide the platform `api_key` and instance `location`.

You can use [IBM Cloud CLI](https://cloud.ibm.com/docs/cli/index.html) to retrieve the platform API Key and instance location.

The API Key can be generated in the following way:
```
ibmcloud login
ibmcloud iam api-key-create API_KEY_NAME
```
Get the value of `api_key` from the output.

The location of your WML instance can be retrieved in the following way:
```
ibmcloud login --apikey API_KEY -a https://cloud.ibm.com
ibmcloud resource service-instance WML_INSTANCE_NAME
```
Get the value of `location` from the output.

**Tip**: Your `Cloud API key` can be generated by going to the [**Users** section of the Cloud console](https://cloud.ibm.com/iam/users). From that page, click your name, scroll down to the **API Keys** section, and click **Create an IBM Cloud API key**. Give your key a name and click **Create**, then copy the created key and paste it below.

You can also get a service specific apikey by going to the [**Service IDs** section of the Cloud Console](https://cloud.ibm.com/iam/serviceids). From that page, click **Create**, then copy the created key and paste it below.

**Action**: Enter your `api_key` and `location` in the following cell.
###Code
api_key = 'PASTE YOUR PLATFORM API KEY HERE'
location = 'PASTE YOUR INSTANCE LOCATION HERE'
wml_credentials = {
"apikey": api_key,
"url": 'https://' + location + '.ml.cloud.ibm.com'
}
###Output
_____no_output_____
###Markdown
Install and import the `ibm-watson-machine-learning` package

**Note:** `ibm-watson-machine-learning` documentation can be found here.
###Code
!pip install -U ibm-watson-machine-learning
from ibm_watson_machine_learning import APIClient
client = APIClient(wml_credentials)
###Output
_____no_output_____
###Markdown
Working with spaces

First of all, you need to create a space that will be used for your work. If you do not have a space, you can use the [Deployment Spaces Dashboard](https://dataplatform.cloud.ibm.com/ml-runtime/spaces?context=cpdaas) to create one.
- Click **New Deployment Space**
- Create an empty space
- Select Cloud Object Storage
- Select the Watson Machine Learning instance and press **Create**
- Copy `space_id` and paste it below

**Tip**: You can also use the SDK to prepare the space for your work. More information can be found [here](https://github.com/IBM/watson-machine-learning-samples/blob/master/notebooks/python_sdk/instance-management/Space%20management.ipynb).

**Action**: Assign space ID below
###Code
space_id = 'PASTE YOUR SPACE ID HERE'
###Output
_____no_output_____
###Markdown
You can use the `list` method to print all existing spaces.
###Code
client.spaces.list(limit=10)
###Output
_____no_output_____
###Markdown
To be able to interact with all resources available in Watson Machine Learning, you need to set the **space** which you will be using.
###Code
client.set.default_space(space_id)
###Output
_____no_output_____
###Markdown
The model is ready to be trained.

2. Create model definition

For the purpose of this example, two Keras model definitions have been prepared:
- Multilayer Perceptron (MLP)
- Convolutional Neural Network (CNN)

2.1 Prepare model definition metadata
###Code
metaprops = {
client.model_definitions.ConfigurationMetaNames.NAME: "MNIST mlp model definition",
client.model_definitions.ConfigurationMetaNames.DESCRIPTION: "MNIST mlp model definition",
client.model_definitions.ConfigurationMetaNames.COMMAND: "python3 mnist_mlp.py --trainImagesFile ${DATA_DIR}/train-images-idx3-ubyte.gz --trainLabelsFile ${DATA_DIR}/train-labels-idx1-ubyte.gz --testImagesFile ${DATA_DIR}/t10k-images-idx3-ubyte.gz --testLabelsFile ${DATA_DIR}/t10k-labels-idx1-ubyte.gz --learningRate 0.001 --trainingIters 6000",
client.model_definitions.ConfigurationMetaNames.PLATFORM: {"name": "python", "versions": ["3.6"]},
client.model_definitions.ConfigurationMetaNames.VERSION: "2.0",
client.model_definitions.ConfigurationMetaNames.SPACE_UID: space_id
}
###Output
_____no_output_____
###Markdown
2.2 Get sample model definition content files from git (Python scripts with CNN and MLP)
###Code
filename_mnist = 'MNIST.zip'
if not os.path.isfile(filename_mnist):
filename_mnist = wget.download('https://github.com/IBM/watson-machine-learning-samples/raw/master/definitions/keras/mnist/MNIST.zip')
###Output
_____no_output_____
###Markdown
2.3 Publish model definition
###Code
model_definition_details = client.model_definitions.store(filename_mnist, meta_props=metaprops)
model_definition_id = client.model_definitions.get_id(model_definition_details)
print(model_definition_id)
###Output
d5fb0e96-1506-4af4-a21a-a07847c63a0d
###Markdown
List model definitions
###Code
client.model_definitions.list(limit=5)
###Output
_____no_output_____
###Markdown
3. Train model 3.1 Prepare training metadata
###Code
training_metadata = {
client.training.ConfigurationMetaNames.NAME: "Keras-MNIST",
client.training.ConfigurationMetaNames.SPACE_UID: space_id,
client.training.ConfigurationMetaNames.DESCRIPTION: "Keras-MNIST predict written digits",
client.training.ConfigurationMetaNames.TAGS :[{
"value": "MNIST",
"description": "predict written difits"
}],
client.training.ConfigurationMetaNames.TRAINING_RESULTS_REFERENCE: {
"name": "MNIST results",
"connection": {
"endpoint_url": service_endpoint,
"access_key_id": cos_credentials['cos_hmac_keys']['access_key_id'],
"secret_access_key": cos_credentials['cos_hmac_keys']['secret_access_key']
},
"location": {
"bucket": buckets[0]
},
"type": "s3"
},
client.training.ConfigurationMetaNames.MODEL_DEFINITION:{
"id": model_definition_id,
"command": "python3 mnist_mlp.py --trainImagesFile ${DATA_DIR}/train-images-idx3-ubyte.gz --trainLabelsFile ${DATA_DIR}/train-labels-idx1-ubyte.gz --testImagesFile ${DATA_DIR}/t10k-images-idx3-ubyte.gz --testLabelsFile ${DATA_DIR}/t10k-labels-idx1-ubyte.gz --learningRate 0.001 --trainingIters 6000",
"hardware_spec": {
"name": "K80",
"nodes": 1
},
"software_spec": {
"name": "tensorflow_1.15-py3.6"
},
"parameters": {
"name": "MNIST mlp",
"description": "Simple MNIST mlp model"
}
},
client.training.ConfigurationMetaNames.TRAINING_DATA_REFERENCES: [
{
"name": "training_input_data",
"type": "s3",
"connection": {
"endpoint_url": service_endpoint,
"access_key_id": cos_credentials['cos_hmac_keys']['access_key_id'],
"secret_access_key": cos_credentials['cos_hmac_keys']['secret_access_key']
},
"location": {
"bucket": buckets[1]
},
"schema": {
"id":"idmlp_schema",
"fields": [
{
"name": "text",
"type": "string"
}
]
}
}
]
}
###Output
_____no_output_____
###Markdown
3.2 Train model in background
###Code
training = client.training.run(training_metadata)
###Output
_____no_output_____
###Markdown
3.3 Get training id and status
###Code
training_id = client.training.get_uid(training)
client.training.get_status(training_id)["state"]
###Output
_____no_output_____
###Markdown
3.4 Get training details
###Code
training_details = client.training.get_details(training_id)
print(json.dumps(training_details, indent=2))
###Output
_____no_output_____
###Markdown
List trainings
###Code
client.training.list(limit=5)
###Output
_____no_output_____
###Markdown
Cancel training You can cancel the training run by calling the method below. **Tip**: If you want to delete train runs and results add `hard_delete=True` as a parameter.
###Code
client.training.cancel(training_id)
###Output
_____no_output_____
###Markdown
4. Persist trained model 4.1 Download trained model from COS
###Code
uid = client.training.get_details(training_id)['entity']['results_reference']['location']['logs']
###Output
_____no_output_____
###Markdown
Download model from COS
###Code
bucket_name = buckets[1]  # results bucket, where the training run stored the model
bucket_obj = cos.Bucket(bucket_name)
model_path = ""
for obj in bucket_obj.objects.iterator():
if uid in obj.key and obj.key.endswith(".h5"):
model_path = obj.key
break
model_name = model_path.split("/")[-1]
bucket_obj.download_file(model_path, model_name)
###Output
_____no_output_____
###Markdown
Unpack model and compress it to tar.gz format
###Code
import tarfile
model_name = "mnist_cnn.h5"
with tarfile.open(model_name + ".tar.gz", "w:gz") as tar:
tar.add("mnist_cnn.h5")
###Output
_____no_output_____
###Markdown
4.2 Publish model
###Code
software_spec_uid = client.software_specifications.get_uid_by_name('tensorflow_1.15-py3.6')
model_meta_props = {
client.repository.ModelMetaNames.NAME: "Keras MNIST",
client.repository.ModelMetaNames.TYPE: "keras_2.2.5",
client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: software_spec_uid
}
published_model = client.repository.store_model(model='mnist_cnn.h5.tar.gz', meta_props=model_meta_props)
model_uid = client.repository.get_model_uid(published_model)
###Output
_____no_output_____
###Markdown
4.3 Get model details
###Code
model_details = client.repository.get_details(model_uid)
print(json.dumps(model_details, indent=2))
###Output
{
"entity": {
"software_spec": {
"id": "2b73a275-7cbf-420b-a912-eae7f436e0bc",
"name": "tensorflow_1.15-py3.6"
},
"type": "keras_2.2.5"
},
"metadata": {
"created_at": "2020-08-13T07:57:10.103Z",
"id": "8096e0bb-fe4f-47ab-88ff-7c076481f9b4",
"modified_at": "2020-08-13T07:57:17.335Z",
"name": "Keras MNIST",
"owner": "IBMid-5500067NJD",
"space_id": "74133c06-dce2-4dfc-b913-2e0dc8efc750"
}
}
###Markdown
List stored models
###Code
client.repository.list_models(limit=5)
###Output
_____no_output_____
###Markdown
5. Deploy and score 5.1 Create online deployment for published model
###Code
deployment = client.deployments.create(model_uid, meta_props={
client.deployments.ConfigurationMetaNames.NAME: "Keras MNIST",
client.deployments.ConfigurationMetaNames.ONLINE: {}})
deployment_uid = client.deployments.get_uid(deployment)
###Output
#######################################################################################
Synchronous deployment creation for uid: '8096e0bb-fe4f-47ab-88ff-7c076481f9b4' started
#######################################################################################
initializing....
ready
------------------------------------------------------------------------------------------------
Successfully finished deployment creation, deployment_uid='b78f9981-1b37-47e4-898d-c7b370fdff7b'
------------------------------------------------------------------------------------------------
###Markdown
5.2 Get deployments details
###Code
deployments_details = client.deployments.get_details(deployment_uid)
print(json.dumps(deployments_details, indent=2))
###Output
{
"entity": {
"asset": {
"id": "8096e0bb-fe4f-47ab-88ff-7c076481f9b4"
},
"custom": {},
"hardware_spec": {
"id": "Not_Applicable",
"name": "S",
"num_nodes": 1
},
"name": "Keras MNIST",
"online": {},
"space_id": "74133c06-dce2-4dfc-b913-2e0dc8efc750",
"status": {
"online_url": {
"url": "https://wml-fvt.ml.test.cloud.ibm.com/ml/v4/deployments/b78f9981-1b37-47e4-898d-c7b370fdff7b/predictions"
},
"state": "ready"
}
},
"metadata": {
"created_at": "2020-08-13T07:57:23.623Z",
"id": "b78f9981-1b37-47e4-898d-c7b370fdff7b",
"modified_at": "2020-08-13T07:57:23.623Z",
"name": "Keras MNIST",
"owner": "IBMid-5500067NJD",
"space_id": "74133c06-dce2-4dfc-b913-2e0dc8efc750"
}
}
###Markdown
List deployments
###Code
client.deployments.list(limit=5)
###Output
_____no_output_____
###Markdown
5.3 Score deployed model Let's plot two digits. **Action:** Please install `matplotlib`, `numpy`
###Code
import wget
dataset_filename='mnist.npz'
if not os.path.isfile(dataset_filename):
dataset_filename = wget.download('https://github.com/IBM/watson-machine-learning-samples/raw/master/data/mnist/mnist.npz')
import numpy as np
mnist_dataset = np.load(dataset_filename)
x_test = mnist_dataset['x_test']
%matplotlib inline
import matplotlib.pyplot as plt
for i, image in enumerate([x_test[0], x_test[1]]):
plt.subplot(2, 2, i + 1)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
###Output
_____no_output_____
###Markdown
Our input node expects to get data with shape (784,) so we need to reshape our two digits.
###Code
image_1 = x_test[0].ravel() / 255
image_2 = x_test[1].ravel() / 255
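# Each 28 x 28 image is flattened to a vector of length 784 (= 28 * 28),
# and dividing by 255 rescales the 8-bit pixel values into the range [0, 1].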
###Output
_____no_output_____
###Markdown
Prepare scoring payload and score.
###Code
scoring_payload = {
client.deployments.ScoringMetaNames.INPUT_DATA : [
{'values': [image_1.tolist(), image_2.tolist()]}
]
}
scores = client.deployments.score(deployment_uid, meta_props=scoring_payload)
print("Scoring result:\n" + json.dumps(scores, indent=2))
###Output
Scoring result:
{
"predictions": [
{
"fields": [
"prediction",
"prediction_classes",
"probability"
],
"values": [
[
[
1.4112320687043045e-11,
6.71987257505613e-11,
1.7774758021005255e-07,
2.486277423940919e-07,
4.02627052961083e-16,
2.431678990111319e-11,
4.179393737831339e-18,
0.9999996423721313,
4.879279202896214e-10,
1.1160609325600035e-08
],
7,
[
1.4112320687043045e-11,
6.71987257505613e-11,
1.7774758021005255e-07,
2.486277423940919e-07,
4.02627052961083e-16,
2.431678990111319e-11,
4.179393737831339e-18,
0.9999996423721313,
4.879279202896214e-10,
1.1160609325600035e-08
]
],
[
[
1.6697327548387264e-11,
3.123093165413593e-06,
0.9999967813491821,
9.436178061150713e-08,
1.4241461072845846e-17,
3.3048402903190777e-10,
4.1099714777337315e-12,
3.5958473611902297e-12,
2.457088310592326e-09,
2.2838061320256294e-17
],
2,
[
1.6697327548387264e-11,
3.123093165413593e-06,
0.9999967813491821,
9.436178061150713e-08,
1.4241461072845846e-17,
3.3048402903190777e-10,
4.1099714777337315e-12,
3.5958473611902297e-12,
2.457088310592326e-09,
2.2838061320256294e-17
]
]
]
}
]
}
|
apriori/Association.ipynb | ###Markdown
Association mining using Apriori
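The two quantities used throughout are support and confidence: for itemsets $X$ and $Y$ over $N$ transactions,

$$\mathrm{supp}(X) = \frac{|\{t : X \subseteq t\}|}{N}, \qquad \mathrm{conf}(X \Rightarrow Y) = \frac{\mathrm{supp}(X \cup Y)}{\mathrm{supp}(X)}.$$

The hand-written `apriori` below works with absolute counts (the `s_count` threshold) rather than relative support, but the idea is the same.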
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from itertools import combinations
from mlxtend import frequent_patterns
%matplotlib inline
df = pd.read_csv("./dataset.csv")
df.head()
def convertToOnehot(df, col):
return df.join(df[col].str.get_dummies(", ")).drop(["List"], axis=1)
def getCount(df, col):
return len(df.loc[df[col] == 1])
def combineCols(df, col1, col2):
return (df[col1] & df[col2])
def checkForCombinations(combList, combTuple, r):
isPresent = True
for comb in combinations(combTuple, r):
if comb not in combList:
isPresent = False
break
return isPresent
def apriori(df, col, s_count):
df = pd.DataFrame(df[col])
df = convertToOnehot(df, col)
items = df.columns
countDf = df.sum()
countDict = countDf.loc[countDf >= s_count].to_dict()
print(countDict)
items = list(countDict.keys())
combHist = []
combPrev = list(combinations(items, 1))
combNext = []
for i in range(2, len(items) - 1):
for comb in combinations(items, i):
if checkForCombinations(combPrev, comb, (i-1)):
combProp = "_".join(str(c) for c in comb)
col1 = "_".join(str(c) for c in comb[:-1])
col2 = str(comb[-1])
df[combProp] = combineCols(df, col1, col2)
countTemp = getCount(df, combProp)
if countTemp >= s_count:
combNext.append(comb)
countDict[combProp] = getCount(df, combProp)
combHist.append(combPrev)
combPrev = combNext
return countDict
def getConfidence(items_given, support_items, countDict):
items_given.sort()
support_items = support_items + items_given
support_items.sort()
items_given_str = "_".join(items_given)
item_support_str = "_".join(support_items)
item_support = item_support_str
items = list(countDict.keys())
if (items_given_str not in items) or (item_support not in items):
return 0
else:
return (countDict[item_support]/ countDict[items_given_str])
countDict = apriori(df, 'List', 2)
getConfidence(["I2", "I1"], ["I5"], countDict)
elements = [s.split("_") for s in countDict.keys() if len(s) > 2]
minConfidence = 0.7
def getAllConfidence(elements, countDict, minConfidence):
for element in elements:
for comb in combinations(element, len(element) - 1):
comb = list(comb)
remaining = list(set(element) - set(comb))
if getConfidence(comb, remaining, countDict) > minConfidence:
print(comb,"->",remaining, getConfidence(comb, remaining, countDict))
if len(element) > 2 and (getConfidence(remaining, comb, countDict) > minConfidence):
print(remaining,"->",comb, getConfidence(remaining, comb, countDict))
getAllConfidence(elements,countDict, 0.7)
oneHotDf = convertToOnehot(df, 'List')
cols = ['I1', 'I2', 'I3', 'I4', 'I5']
oneHotDf[cols] = oneHotDf[cols].astype('bool')
oneHotDf = oneHotDf.drop(['TID'], axis=1)
freq_itemsets = frequent_patterns.apriori(oneHotDf, min_support=0.20, use_colnames=True)
rules = frequent_patterns.association_rules(freq_itemsets, metric="confidence", min_threshold=1)
rules[rules.confidence > minConfidence]
oneHotDf = oneHotDf.replace([False], '?')
oneHotDf.to_csv("./dataset_onehot.csv")
###Output
_____no_output_____
###Markdown
Using Monkey dataset
###Code
df2 = pd.read_csv('./dataset2.csv')
df2 = df2.drop(['TID'], axis=1)
countDict2 = apriori(df2, 'List', 3)
elements2 = [s.split("_") for s in countDict2.keys() if len(s) > 2]
getAllConfidence(elements2, countDict2, 0.7)
countDict2
###Output
_____no_output_____ |
examples/tutorials/websocket/deploy_workers/deploy-and-connect.ipynb | ###Markdown
Deploy PySyft Workers using Docker

We will begin by starting our three workers using the docker-compose file.
###Code
!docker-compose up -d
###Output
Creating network "deploy_workers_default" with the default driver
Creating deploy_workers_alice_1 ...
Creating deploy_workers_charlie_1 ...
Creating deploy_workers_bob_1 ...
[2Bting deploy_workers_charlie_1 ... [32mdone[0m[3A[2K[2A[2K
###Markdown
Then import syft and hook pytorch
###Code
import torch
import syft
from syft import WebsocketClientWorker
hook = syft.TorchHook(torch)
###Output
WARNING:tf_encrypted:Falling back to insecure randomness since the required custom op could not be found for the installed version of TensorFlow. Fix this by compiling custom ops. Missing file was '/usr/lib/python3.7/site-packages/tf_encrypted/operations/secure_random/secure_random_module_tf_1.14.0.so'
###Markdown
Here we connect to our three workers using their ids and ports. **Note:** Wait a few seconds before running this cell, as the deployment may take some time.
###Code
alice = WebsocketClientWorker(hook=hook, id="alice", host='127.0.0.1', port=8777)
bob = WebsocketClientWorker(hook=hook, id="bob", host='127.0.0.1', port=8778)
charlie = WebsocketClientWorker(hook=hook, id="charlie", host='127.0.0.1', port=8779)
###Output
_____no_output_____
###Markdown
Now we can interact with those workers. Here we send and get some tensors to make sure everything is working.
###Code
t = torch.tensor([73, 74, 75])
ta = t.send(alice)
tb = t.send(bob)
tc = t.send(charlie)
print(ta.get())
print(tb.get())
print(tc.get())
###Output
tensor([73, 74, 75])
tensor([73, 74, 75])
tensor([73, 74, 75])
###Markdown
Here we deployed and interacted with 3 different workers in just a few lines of code.

Stopping the workers

You wouldn't want to leave 3 workers running in the background unnoticed, so don't forget to run this cell to stop all three workers.
###Code
!docker-compose down
###Output
Stopping deploy_workers_charlie_1 ...
Stopping deploy_workers_alice_1 ...
Stopping deploy_workers_bob_1 ...
[2Bping deploy_workers_alice_1 ... [32mdone[0m[3A[2KRemoving deploy_workers_charlie_1 ...
Removing deploy_workers_alice_1 ...
Removing deploy_workers_bob_1 ...
[2BRemoving network deploy_workers_defaultdone[0m
|
Chapter04/Chapter4_BostonLinReg_TF2_alpha.ipynb | ###Markdown
example house
###Code
example_house = 69
y = test_prices[example_house]
y_pred = prediction(test_features,W.numpy(),B.numpy())[example_house]
print("Actual median house value",y," in $10K")
print("Predicted median house value ",y_pred.numpy()," in $10K")
###Output
_____no_output_____ |
kapp_firestore.ipynb | ###Markdown
Database using Firestore

Since we only use the read operation, it was easier to generate a database file to deploy. This code is not used anymore and may not work.
###Code
import kapp_firestore_api as f
import pandas as pd
csv_files = {
'babymat': 'babymat.csv',
'bakeartikler': 'bakeartikler.csv',
'bakevarer': 'bakevarer.csv',
'div_matprodukter': 'div_matprodukter.csv',
'drikke': 'drikke.csv',
'fisk': 'fisk.csv',
'frukt_gront': 'frukt_gront.csv',
'helsekost': 'helsekost.csv',
'is_dessert': 'is_dessert.csv',
'meieri_ost': 'meieri_ost.csv',
'palegg_frokost': 'palegg_frokost.csv',
'snacks_godteri': 'snacks_godteri.csv',
}
find_duplicates_all(csv_files)
count_products()
def read_data(csv_file):
"""
Local
Also presents some information about the data.
Returns: A Panda DataFrame.
"""
print("------ Reading", csv_file)
df = pd.read_csv(csv_file)
return df
# %%time
def get_docs():
"""
Firestore
"""
docs = f.products_ref().get()
products = []
for doc in docs:
products.append(doc.to_dict())
return products
def count_products(localOnly = True):
"""
Firestore & Local
Count products in CSV files and in the database.
"""
print("------ Count Products")
csv_len_sum = 0
for k,v in csv_files.items():
df_len = len(pd.read_csv('data/' + v))
csv_len_sum = csv_len_sum + df_len
print("{}: {}".format(v, df_len))
print("---")
print("All files: {}".format(csv_len_sum))
if not localOnly:
print("Database: {}".format(f.nofProducts()))
def find_duplicates_df(df):
"""
Local
Find duplicates in a dataframe (local data from CSV files).
Duplicates is checked on producer+product.
"""
print('------ Finding Duplicates')
dups = df.duplicated(subset=['producer','product'], keep=False)
df_dups = df[dups].sort_values(by=['producer','product'])
display(df_dups)
def find_duplicates_all(csv_files):
"""
Local
See find_duplicates()
"""
df = pd.DataFrame()
for k,v in csv_files.items():
df = pd.concat([df, read_data('data/' + v)], ignore_index=True)
find_duplicates_df(df)
def insert_data(df):
"""
Firestore
Creates products in the Firebase database.
Input: A DataFrame
"""
print("------ Batch Insert Data")
f.batch_create(df)
def insert_from_single_file(csv_file):
"""
Firestore & local
Insert to Firestore from a single CSV file.
"""
df = read_data('data/' + csv_file)
find_duplicates(df)
# insert_data(df)
count_products()
# insert_from_single_file(csv_files['frukt_gront'])
def insert_from_files(csv_files):
"""
Firestore & local
Insert to Firestore from a set of CSV files.
"""
for k,v in csv_files.items():
df = read_data('data/' + v)
# insert_data(df)
# insert_from_files(csv_files)
# count_products()
# f.list_products(False)
###Output
_____no_output_____
###Markdown
Sandbox
###Code
# f.create({
# u'category': u'REMOVE',
# u'comment': u'',
# u'kosher_stamp': u'',
# u'kosher_type': u'p',
# u'producer': u'Toro',
# u'product': u'suppe',
# u'sub_category': u'',
# })
# doc_ref1 = f.get_doc_ref(u'-_\n')
# f.pp(doc_ref1)
# doc_ref1 = f.get_doc_ref(u'J7Rh7IcdEZ3Sxnb0N1yO')
# doc_ref2 = f.delete(f.get_doc_ref(u'J7Rh7IcdEZ3Sxnb0N1yO'))
# doc_ref3 = f.get_doc_ref(u'213132')
# f.update(doc_ref3, {u'comment':'Bye'})
# doc_ref1 = f.get_doc_ref(u'J7Rh7IcdEZ3Sxnb0N1yO')
# doc_ref2 = f.delete(f.get_doc_ref(u'nTk2VlJoA9MOXxZuMrBs'))
# f.pp(doc_ref1)
# display(f.isDeleted(doc_ref1))
# display(f.exists(doc_ref1))
###Output
_____no_output_____ |
15_Generators.ipynb | ###Markdown
Generators
###Code
# Figure 1: Creating our own range() function
def my_range(nn):
counter = 0
while counter < nn:
yield counter
counter += 1
for ii in my_range(3):
print(ii, end=" ")
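# A generator can also be advanced manually with next() (no prints here, so the
# recorded output of this cell stays unchanged):
gen = my_range(3)
first = next(gen)   # 0; further next(gen) calls yield 1, then 2, then raise StopIteration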
###Output
0 1 2
###Markdown
Fibonacci!
###Code
# Figure 2: A Fibonacci generator that returns Fibonacci numbers
def Fibonacci(nn):
yield 0
counter = 1
lastnumb = 0
numb = 1
while counter < nn:
yield numb
newnumb = numb + lastnumb
lastnumb = numb
numb = newnumb
counter +=1
for ii in Fibonacci(8):
print(ii, end=" ")
###Output
0 1 1 2 3 5 8 13 |
notebooks/DataModeling.ipynb | ###Markdown
São Paulo 18/08/2019
###Code
import pandas as pd
melbourne_file_path = 'data/melb_data.csv'
melbourne_data = pd.read_csv(melbourne_file_path)
melbourne_data.columns
melbourne_data.describe()
melbourne_data.head()
# The Melbourne data has some missing values (some houses for which some variables weren't recorded.)
# We'll learn to handle missing values in a later tutorial.
# Your Iowa data doesn't have missing values in the columns you use.
# So we will take the simplest option for now, and drop houses from our data.
# Don't worry about this much for now, though the code is:
# dropna drops missing values (think of na as "not available")
melbourne_data = melbourne_data.dropna(axis=0)
# Selecting The Prediction Target
# We'll use the dot notation to select the column we want to predict, which is called the prediction target.
# By convention, the prediction target is called y
y = melbourne_data.Price
# Choosing "Features"
# The columns that are inputted into our model (and later used to make predictions) are called "features."
# In our case, those would be the columns used to determine the home price.
# We select multiple features by providing a list of column names inside brackets.
# Each item in that list should be a string (with quotes).
melbourne_features = ['Rooms', 'Bathroom', 'Landsize', 'Lattitude', 'Longtitude']
# By convention, this data is called X.
X = melbourne_data[melbourne_features]
# Let's quickly review the data we'll be using to predict house prices using the describe method and the head method,
# which shows the top few rows.
X.describe()
X.head()
###Output
_____no_output_____
###Markdown
Building Your Model

You will use the scikit-learn library to create your models. When coding, this library is written as sklearn, as you will see in the sample code. Scikit-learn is easily the most popular library for modeling the types of data typically stored in DataFrames. The steps to building and using a model are:
- Define: What type of model will it be? A decision tree? Some other type of model? Some other parameters of the model type are specified too.
- Fit: Capture patterns from provided data. This is the heart of modeling.
- Predict: Just what it sounds like.
- Evaluate: Determine how accurate the model's predictions are.

Here is an example of defining a decision tree model with scikit-learn and fitting it with the features and target variable.
###Code
from sklearn.tree import DecisionTreeRegressor
# Define model. Specify a number for random_state to ensure same results each run
melbourne_model = DecisionTreeRegressor(random_state=1)
# Fit model
melbourne_model.fit(X, y)
print("Making predictions for the following 5 houses:")
print(X.head())
print("The predictions are")
print(melbourne_model.predict(X.head()))
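# The "Evaluate" step mentioned above could be sketched as follows (kept as comments;
# note this is an in-sample error, so it is overly optimistic for a decision tree
# that has effectively memorised its training data):
# from sklearn.metrics import mean_absolute_error
# mean_absolute_error(y, melbourne_model.predict(X))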
###Output
Making predictions for the following 5 houses:
Rooms Bathroom Landsize Lattitude Longtitude
1 2 1.0 156.0 -37.8079 144.9934
2 3 2.0 134.0 -37.8093 144.9944
4 4 1.0 120.0 -37.8072 144.9941
6 3 2.0 245.0 -37.8024 144.9993
7 2 1.0 256.0 -37.8060 144.9954
The predictions are
[1035000. 1465000. 1600000. 1876000. 1636000.]
|
appengine/monorail/tools/datalab/ratelimiting.ipynb | ###Markdown
Analyzing Rate Limit Exceeded events

Use this notebook to dig into Rate Limit Exceeded events on Monorail.
###Code
import gcp
import gcp.bigquery as bq
context = gcp.Context.default()
print 'The current project is %s' % context.project_id
# Set the date to analyze here:
date = 20160514
%%sql --module by_ip
SELECT
protoPayload.ip as ip,
COUNT(protoPayload.requestId) AS num
FROM
[logs.appengine_googleapis_com_request_log_$date]
WHERE
protoPayload.moduleId is null # == "default", otherwise you get backend queries too.
AND
protoPayload.line.logMessage LIKE "Rate Limit Exceeded%"
GROUP BY
ip
ORDER BY
num DESC
LIMIT
100;
%%sql --module by_ip_class
SELECT
REGEXP_EXTRACT(protoPayload.ip,r'^(?:[^\.]*\.){0}([^\.]*)\.?') AS a,
REGEXP_EXTRACT(protoPayload.ip,r'^(?:[^\.]*\.){1}([^\.]*)\.?') AS b,
REGEXP_EXTRACT(protoPayload.ip,r'^(?:[^\.]*\.){2}([^\.]*)\.?') AS c,
REGEXP_EXTRACT(protoPayload.ip,r'^(?:[^\.]*\.){3}([^\.]*)\.?') AS d,
COUNT(protoPayload.requestId) AS num
FROM
[logs.appengine_googleapis_com_request_log_$date]
WHERE
protoPayload.moduleId is null # == "default", otherwise you get backend queries too.
AND
protoPayload.line.logMessage LIKE "Rate Limit Exceeded%"
GROUP BY
a,
b,
c,
d
ORDER BY
num DESC
LIMIT
100;
%%sql --module by_country
SELECT
protoPayload.line.logMessage as line,
COUNT(DISTINCT protoPayload.ip) as ip_count,
COUNT(protoPayload.requestId) AS req_count
FROM
FLATTEN ([logs.appengine_googleapis_com_request_log_$date], protoPayload.line)
WHERE
protoPayload.moduleId is null # == "default", otherwise you get backend queries too.
AND
protoPayload.line.logMessage LIKE "Rate Limit Exceeded%"
AND
REGEXP_MATCH(protoPayload.line.logMessage, 'X-AppEngine-Country')
GROUP BY
line
ORDER BY
req_count DESC
LIMIT
100;
%%sql --module by_resource
SELECT
protoPayload.resource as resource,
COUNT(protoPayload.requestId) AS req_count
FROM
[logs.appengine_googleapis_com_request_log_$date]
WHERE
protoPayload.moduleId is null # == "default", otherwise you get backend queries too.
AND
protoPayload.line.logMessage LIKE "Rate Limit Exceeded%"
GROUP BY
resource
ORDER BY
req_count DESC
LIMIT
100;
###Output
_____no_output_____
###Markdown
Requests by IP
###Code
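# Run the parameterized SQL module defined above (binding the $date placeholder to the
# 'date' variable) and load the result into a pandas DataFrame.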
df = bq.Query(by_ip, date=date).to_dataframe()
df.head(20)
if len(df):
df.plot()
###Output
_____no_output_____
###Markdown
Requests by IP Class
###Code
df = bq.Query(by_ip_class, date=date).to_dataframe()
df.head(20)
if len(df):
df.plot()
###Output
_____no_output_____
###Markdown
Requests by Country Code
###Code
df = bq.Query(by_country, date=date).to_dataframe()
df.head(20)
if len(df):
df.plot()
###Output
_____no_output_____
###Markdown
Requests by Requested Resource
###Code
df = bq.Query(by_resource, date=date).to_dataframe()
df.head(20)
if len(df):
df.plot()
###Output
_____no_output_____ |
PCP_01_getstarted.ipynb | ###Markdown
Unit 1: Get Started Overview and Learning Objectives Downloading PCP Notebooks (Option 1): GitHub Downloading PCP Notebooks (Option 2): AudioLabs Website Package Management System Python Environment Files Creating Conda Environment Starting Jupyter Server Jupyter Notebook Keyboard Shortcuts HTML Export Further Notes Overview and Learning Objectives In this first unit, we briefly introduce how to use the PCP notebooks and start interacting with them using the Jupyter notebook framework. If a static view of the PCP notebooks is enough for you, the exported HTML versions can be used right away without any installation. Suppose you want to execute, modify, and experiment with the Python code contained in the notebooks' code cells. In that case, you need to install Python, some additional Python packages, and the Jupyter software underlying the web-based interactive computational environment for creating notebook documents. In the following, we discuss the required software components and explain the steps necessary to install them. <!--In particular, you need to complete the following four steps: Downloading the PCP notebooks either from GitHub or the AudioLabs website Installing Miniconda https://docs.conda.io/en/latest/miniconda.html Creating a conda environment conda env create -f environment.yml Starting the Jupyter server conda activate PCP jupyter notebook--> At the end of this unit, you should be able to run the PCP notebooks locally on your computer. Since the steps for installing PCP notebooks are typical for many software projects, we strongly recommend that students carry out these steps themselves while getting familiar with the software concepts involved. As an alternative for locally executing the notebooks, you may also use web-based services such as Google colab or binder. We refer to the PCP notebooks' GitHub repository for further details. Downloading PCP Notebooks (Option 1): GitHubThe latest version of the PCP notebooks is hosted on [GitHub](https://github.com/meinardmueller/PCP). GitHub is a platform for software development and version control using [Git](https://git-scm.com). To use Git, please download and install the [latest version](https://git-scm.com/downloads) for your operating system. For more information on how to setup and use Git, we refer to sources such as the [GitHub Quickstart Guide](https://docs.github.com/en/get-started/quickstart) and the [Git Tutorial](https://git-scm.com/docs/gittutorial). You can download ("clone") the PCP GitHub repository by callinggit clone https://github.com/meinardmueller/PCP.git.Note that the repository can also be run in interactive online Jupyter environments such as Binder without further installation steps. See the Readme.md of the repository for further information. Downloading PCP Notebooks (Option 2): AudioLabs WebsiteAlternatively, you can download a zip-compressed archive containing the PCP notebooks and all data. You can find this archive at https://www.audiolabs-erlangen.de/resources/MIR/PCP/PCP_1.1.2.zip Decompress the archive and store it on your local computer. Package Management System[Conda](https://conda.io/docs/) is an open source package manager that helps finding and installing packages. It runs on Windows, macOS, and Linux. The Conda package and environment manager is included in **Anaconda** and its slim version **Miniconda**, which are free and open-source distributions of Python. Miniconda can make installing Python quick and easy even for new users. 
Download and install Miniconda for your platform from the following website: https://docs.conda.io/en/latest/miniconda.html The following steps and commands may be useful to get started:* Install Anaconda or [Miniconda](https://conda.io/miniconda.html) (slim version of Anaconda). * On Windows: Start with opening the terminal `Anaconda Prompt`. On Linux/MacOS: You can use a usual shell.* Verify that conda is installed: `conda --version`* Update conda (it is recommended to always keep conda updated to the latest version): `conda update conda` * Default environment is named `base`* Create a new environment and install a package in it. For example: `conda create --name PCP python=3 numpy scipy matplotlib jupyter` creates a Python environment including the packages `numpy`, `scipy`, `matplotlib`, and `jupyter`* List of all environments: `conda info --envs`* Activate environment: `conda activate PCP`* Python version in current environment: `python --version`* List packages contained in environment: `conda list`* Remove environment: `conda env remove --name PCP` Python Environment FilesTo simplify the installation of Python and Jupyter, we recommend to create an environment from an `environment.yml` file, which exactly specifies the packages (along with specific versions). * We already provide such a file for the PCP notebooks. To create the environment named `PCP`, you need to call `conda env create -f environment.yml` * To update the environment, you can call `conda env update -f environment.yml`* Sometimes it may be easier to first remove the environment and than install it again: `conda env remove -n PCP` `conda env create -f environment.yml` * Once the environment has been installed, you need to activate it using: `conda activate PCP` The current `PCP` environment can be listed as follows:
###Code
import os
fn = os.path.join('environment.yml')
with open(fn, 'r', encoding='utf-8') as stream:
env = stream.read()
print(env)
###Output
name: PCP # the name of the environment
channels: # conda channels from which packages are downloaded
- defaults
- conda-forge
dependencies:
- python=3.7.* # Plain Python
- pip=19.2.* # Package installer for Python
- numpy=1.17.* # NumPy
- matplotlib=3.1.* # Matplotlib
# Jupyter Notebook dependencies:
- ipython=7.8.*
- jupyter=1.0.*
- jupyter_client=6.1.* # prevents server error
- jupyter_contrib_nbextensions=0.5.* # spell checker
- nbconvert=5.6.* # HTML export
- pip: # Packages that are installed via pip
- nbstripout==0.3.* # strip notebook output
- notebook>=6.4.4 # fixes Markdown table rendering
|
notebooks/03_out_of_distribution.ipynb | ###Markdown
Model misspecification detection
###Code
import hypothesis
# hypothesis.disable_gpu()
import torch
import plotting
import numpy as np
import matplotlib
import glob
import matplotlib.pyplot as plt
import warnings
import os
from matplotlib import rc
from util import load_ratio_estimator
from util import download
from util import load
from hypothesis.diagnostic import DensityDiagnostic
from util import MarginalizedAgePrior as Prior
from plotting import compute_1d_pdf
from plotting import compute_2d_pdf
from plotting import compute_1d_lr
from plotting import compute_2d_lr
from plotting import plot_1d_confidence_levels
from plotting import plot_1d_contours
from hypothesis.visualization.util import make_square
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
**Note**: This notebook will autodetect the presence of a GPU. Disabling the usage of the GPU can be done by uncommenting `hypothesis.disable_gpu()` and restarting the notebook. Download the required data dependencies
###Code
# Download (part-of) the presimulated test data (about 47 MB)
# https://drive.google.com/file/d/1Z3d2pZXzcyR9nAj3kZBTTevlKugiHCnO/view?usp=sharing
if not os.path.exists("data.tar.gz"):
download("1Z3d2pZXzcyR9nAj3kZBTTevlKugiHCnO", destination="data.tar.gz")
!tar -zxf data.tar.gz # Unpack
ages = np.load("ages.npy")
masses = np.load("masses.npy")
densities = np.load("density-contrasts-cut-noised.npy")
phi = np.load("phi-cut.npy")
print("Completed!")
# Download all pre-trained models (about 3.1 GB)
# https://drive.google.com/file/d/1W0WvrdtVvyTu24FBtvtvQz1pKJxMBO_w/view?usp=sharing
if not os.path.exists("models.tar.gz"):
download("1W0WvrdtVvyTu24FBtvtvQz1pKJxMBO_w", destination="models.tar.gz")
!tar -zxf models.tar.gz # Unpack
print("Completed!")
###Output
Completed!
###Markdown
Utilities
###Code
@torch.no_grad()
def integrate(ratio_estimator, observable):
prior = Prior()
space = [[prior.low.item(), prior.high.item()]]
diagnostic = DensityDiagnostic(space)
observable = observable.view(1, -1)
density = observable.to(hypothesis.accelerator)
# Define the pdf function for integration
def pdf(mass):
mass = torch.tensor(mass).view(1, 1).float()
mass = mass.to(hypothesis.accelerator)
log_posterior = prior.log_prob(mass).item() + ratio_estimator.log_ratio(inputs=mass, outputs=density)
return log_posterior.exp().item()
diagnostic.test(pdf) # Execute the integration
return diagnostic.areas[0]
###Output
_____no_output_____
###Markdown
Demonstration The proper probability distribution diagnostic checks whether the ratio estimator $r(x\vert\vartheta)$ models a proper probability distribution in conjunction with the prior. That is, the diagnostic verifies$$\int_\vartheta p(\vartheta)r(x\vert\vartheta)~d\vartheta = 1~\forall x,$$where $x$ are synthethic observables produced by the simulation model.To demonstrate that this is effectively the case, let's load a pretrained ratio estimator based on the ResNet-50 architecture
###Code
# Load the pretrained ratio estimator
ratio_estimator = load("resnet-50")
###Output
_____no_output_____
###Markdown
and a random observable from a small test dataset
###Code
# Pick a random synthetic observable
index = np.random.randint(0, len(ages))
groundtruth_age = torch.from_numpy(ages[index])
groundtruth_mass = torch.from_numpy(masses[index])
stellar_density = torch.from_numpy(densities[index]).float().view(1, -1)
# Show how the observable looks like
figure = plt.figure(figsize=(6, 6))
plt.title("Synthetic observable with noise")
plt.plot(phi, stellar_density.view(-1).numpy(), lw=2, color="black")
plt.xlabel(r"$\phi$")
plt.minorticks_on()
plt.ylabel("Relative stellar density")
plt.ylim([-1, 3])
make_square(plt.gca())
plt.show()
###Output
_____no_output_____
###Markdown
Let's now integrate the area under the modelled posterior density function.
###Code
integrated_area = integrate(ratio_estimator, stellar_density) # This might take some time
integrated_area
###Output
_____no_output_____
###Markdown
The integrated area approximates 1 and therefore suggests that the probability density implicitly modelled through the ratio estimator is valid (although this check should be repeated for all observables $x$). By valid we mean that the ratio estimator models a proper probability density. The result does not necessarily say anything about the correctness of the approximation.Let's see what the result is when we apply the diagnostic to an out-of-distribution sample. Since stellar densities cannot possibly take the shape of square waves, we'll evaluate the diagnostic on such an observable. The generated square wave is thus effectively an out-of-distribution sample.
###Code
import scipy.signal
# You could play with this however you like, see if the result changes!
stellar_density = scipy.signal.square(phi * 0.5, duty=0.5) * 0.75 + 1 + np.random.randn(len(phi)) * 0.1
# stellar_density = np.sin(phi * 0.2 + 15) * 0.1 + 1
# The diagnostic expects PyTorch tensors as a single-precision float.
stellar_density = torch.from_numpy(stellar_density).float()
# Show how the observable looks like
figure = plt.figure(figsize=(6, 6))
plt.title("Out of distribution observable")
plt.plot(phi, stellar_density.view(-1).numpy(), lw=2, color="black")
plt.xlabel(r"$\phi$")
plt.minorticks_on()
plt.ylabel("Relative stellar density")
plt.ylim([-1, 3])
make_square(plt.gca())
plt.show()
integrated_area = integrate(ratio_estimator, stellar_density) # This might take some time
integrated_area
###Output
_____no_output_____
###Markdown
Clearly, the integrated area does not (even subjectively) approximate 1, which supports our hypothesis that the diagnostic can be used to detect model misspecification, provided the ratio estimator correctly models probability densities for all synthetic observables produced by the assumed simulation model.However, how should we determine the cut-off point? That is, how do we decide whether an observable is an out-of-distribution sample? Let us consider the figure below for a moment. If we take the whiskers as the thresholds for out-of-distribution detection on *observed* data (not simulated), we would have 0.945 and 1.05 as thresholds.
###Code
def is_out_of_distribution(observable):
integrated_area = integrate(ratio_estimator, observable)
not_passed = integrated_area < 0.945 or integrated_area > 1.05
if not_passed:
print("Probably out-of-distribution")
else:
print("Can't be excluded")
###Output
_____no_output_____
###Markdown
If we apply this to the observable from above, we get the following:
###Code
is_out_of_distribution(stellar_density)
###Output
Probably out-of-distribution
|
Python 300 for Beginner/exercise_Python300forBeginner 03_2021.04.16.fri.ipynb | ###Markdown
031 ~ 040 031 String concatenation: predict the result of running the code below.>> a = "3"\>> b = "4"\>> print(a + b)
###Code
# ์์: 34 (type:str)
a="3"
b="4"
print(a+b)
print(type(a+b))
###Output
34
<class 'str'>
###Markdown
032 ๋ฌธ์์ด ๊ณฑํ๊ธฐ์๋ ์ฝ๋์ ์คํ ๊ฒฐ๊ณผ๋ฅผ ์์ํด๋ณด์ธ์.>> print("Hi" * 3)
###Code
# ์์: HiHiHi
print("Hi"*3)
###Output
HiHiHi
###Markdown
033 ๋ฌธ์์ด ๊ณฑํ๊ธฐํ๋ฉด์ '-'๋ฅผ 80๊ฐ ์ถ๋ ฅํ์ธ์. ์คํ ์:--------------------------------------------------------------------------------
###Code
print("-"*80)
###Output
--------------------------------------------------------------------------------
###Markdown
034 ๋ฌธ์์ด ๊ณฑํ๊ธฐ๋ณ์์ ๋ค์๊ณผ ๊ฐ์ ๋ฌธ์์ด์ด ๋ฐ์ธ๋ฉ๋์ด ์์ต๋๋ค.>>> t1 = 'python'\>>> t2 = 'java'๋ณ์์ ๋ฌธ์์ด ๋ํ๊ธฐ์ ๋ฌธ์์ด ๊ณฑํ๊ธฐ๋ฅผ ์ฌ์ฉํด์ ์๋์ ๊ฐ์ด ์ถ๋ ฅํด๋ณด์ธ์.์คํ ์:python java python java python java python java
###Code
t1 = 'python'
t2 = 'java'
print((t1+" "+t2)*4, sep=" ")
# ๋ต
t1 = "python"
t2 = "java"
t3 = t1 + ' ' + t2 + ' '
print(t3 * 4)
###Output
python java python java python java python java
###Markdown
35 ๋ฌธ์์ด ์ถ๋ ฅ๋ณ์์ ๋ค์๊ณผ ๊ฐ์ด ๋ฌธ์์ด๊ณผ ์ ์๊ฐ ๋ฐ์ธ๋ฉ๋์ด ์์ ๋ % formatting์ ์ฌ์ฉํด์ ๋ค์๊ณผ ๊ฐ์ด ์ถ๋ ฅํด๋ณด์ธ์.name1 = "๊น๋ฏผ์" \age1 = 10\name2 = "์ด์ฒ ํฌ"\age2 = 13\\์ด๋ฆ: ๊น๋ฏผ์ ๋์ด: 10\์ด๋ฆ: ์ด์ฒ ํฌ ๋์ด: 13
###Code
name1 = "๊น๋ฏผ์"
age1 = 10
name2 = "์ด์ฒ ํฌ"
age2 = 13
print("์ด๋ฆ: ", "{}", "๋์ด: ", "{}".format(name1, age1))
print("์ด๋ฆ: ", "{}", "๋์ด: ", "{}".format(name2, age2))
###Output
์ด๋ฆ: {} ๋์ด: ๊น๋ฏผ์
์ด๋ฆ: {} ๋์ด: ์ด์ฒ ํฌ
###Markdown
%์ ํจ๊ป ์ฌ์ฉํ๋ ํ์ ๊ธฐํธ- %s : str() ๋ฉ์๋๋ฅผ ์ฌ์ฉํด์ ๋ฌธ์์ด๋ก ๋ณํํ ํ formatting- %d : ๋ถํธ๊ฐ ์๋ ์ญ์ง๋ฒ ์ ์ (signed decimal integer)์ถ์ฒ: https://rfriend.tistory.com/328
###Code
name1 = "๊น๋ฏผ์"
age1 = 10
name2 = "์ด์ฒ ํฌ"
age2 = 13
print("์ด๋ฆ: %s ๋์ด: %d" % (name1, age1))
print("์ด๋ฆ: %s ๋์ด: %d" % (name2, age2))
###Output
์ด๋ฆ: ๊น๋ฏผ์ ๋์ด: 10
์ด๋ฆ: ์ด์ฒ ํฌ ๋์ด: 13
###Markdown
036 ๋ฌธ์์ด ์ถ๋ ฅ๋ฌธ์์ด์ format( ) ๋ฉ์๋๋ฅผ ์ฌ์ฉํด์ 035๋ฒ ๋ฌธ์ ๋ฅผ ๋ค์ ํ์ด๋ณด์ธ์.
###Code
name1 = "๊น๋ฏผ์"
age1 = 10
name2 = "์ด์ฒ ํฌ"
age2 = 13
print("์ด๋ฆ: , {}, ๋์ด: , {}".format(name1, age1))
print("์ด๋ฆ: , {}, ๋์ด: , {}".format(name2, age2))
###Output
์ด๋ฆ: , ๊น๋ฏผ์, ๋์ด: , 10
์ด๋ฆ: , ์ด์ฒ ํฌ, ๋์ด: , 13
###Markdown
037 ๋ฌธ์์ด ์ถ๋ ฅํ์ด์ฌ 3.6๋ถํฐ ์ง์ํ๋ f-string์ ์ฌ์ฉํด์ 035๋ฒ ๋ฌธ์ ๋ฅผ ๋ค์ ํ์ด๋ณด์ธ์.\์ฐธ๊ณ : https://blockdmask.tistory.com/429
###Code
name1 = "๊น๋ฏผ์"
age1 = 10
name2 = "์ด์ฒ ํฌ"
age2 = 13
result1 = f"์ด๋ฆ:{name1}, ๋์ด:{age1}"
print(result1)
result2 = f"์ด๋ฆ:{name2}, ๋์ด:{age2}"
print(result2)
name1 = "๊น๋ฏผ์"
age1 = 10
name2 = "์ด์ฒ ํฌ"
age2 = 13
print(f"์ด๋ฆ: {name1} ๋์ด: {age1}")
print(f"์ด๋ฆ: {name2} ๋์ด: {age2}")
###Output
์ด๋ฆ: ๊น๋ฏผ์ ๋์ด: 10
์ด๋ฆ: ์ด์ฒ ํฌ ๋์ด: 13
###Markdown
038 ์ปด๋ง ์ ๊ฑฐํ๊ธฐ์ผ์ฑ์ ์์ ์์ฅ์ฃผ์์๊ฐ ๋ค์๊ณผ ๊ฐ์ต๋๋ค. ์ปด๋ง๋ฅผ ์ ๊ฑฐํ ํ ์ด๋ฅผ ์ ์ ํ์
์ผ๋ก ๋ณํํด๋ณด์ธ์.์์ฅ์ฃผ์์ = "5,969,782,550"
###Code
์์ฅ์ฃผ์์ = "5,969,782,550"
์ปด๋ง์ ๊ฑฐ = ์์ฅ์ฃผ์์.replace(",", "")
ํ์
๋ณํ = int(์ปด๋ง์ ๊ฑฐ)
print(ํ์
๋ณํ, type(ํ์
๋ณํ))
###Output
5969782550 <class 'int'>
###Markdown
039 ๋ฌธ์์ด ์ฌ๋ผ์ด์ฑ๋ค์๊ณผ ๊ฐ์ ๋ฌธ์์ด์์ '2020/03'๋ง ์ถ๋ ฅํ์ธ์.๋ถ๊ธฐ = "2020/03(E) (IFRS์ฐ๊ฒฐ)"
###Code
๋ถ๊ธฐ = "2020/03(E) (IFRS์ฐ๊ฒฐ)"
print(๋ถ๊ธฐ[:7])
# ๋ฌธ์์ด์์ ์ฌ๋ผ์ด์ฑ์ ์ฌ์ฉํ๋ฉด ์ฌ๋ฌ ๊ธ์๋ฅผ ์ ๊ทผํ ์ ์์ต๋๋ค.
๋ถ๊ธฐ = "2020/03(E) (IFRS์ฐ๊ฒฐ)"
print(๋ถ๊ธฐ[:7])
###Output
2020/03
###Markdown
040 The strip method: remove the leading and trailing whitespace from the string.data = " 삼성전자 "
###Code
data = " ์ผ์ฑ์ ์ "
data = data.strip()
print(data)
###Output
_____no_output_____ |
notebooks/aspatial_examples.ipynb | ###Markdown
Table of Contents* [PySAL *segregation* module for aspatial indexes](PySAL-*segregation*-module-for-aspatial-indexes) * [Notation](Notation) * [Dissimilarity](Dissimilarity) * [Gini](Gini) * [Entropy](Entropy) * [Atkinson](Atkinson) * [Concentration Profile](Concentration-Profile) * [Isolation](Isolation) * [Exposure](Exposure) * [Correlation Ratio](Correlation-Ratio) * [Modified Dissimilarity](Modified-Dissimilarity) * [Modified Gini](Modified-Gini) * [Bias-Corrected Dissimilarity](Bias-Corrected-Dissimilarity) * [Density-Corrected Dissimilarity](Density-Corrected-Dissimilarity) * [Minimum-Maximum Index (MM)](Minimum-Maximum-Index-%28MM%29) PySAL *segregation* module for aspatial indexes This is an example notebook of functionalities for aspatial indexes of the *segregation* module. Firstly, we need to import the packages we need.
###Code
%matplotlib inline
import geopandas as gpd
import segregation
import libpysal
###Output
_____no_output_____
###Markdown
Then it's time to load some data to estimate segregation. We use the data of 2000 Census Tract Data for the metropolitan area of Sacramento, CA, USA. We use a geopandas dataframe available in PySAL examples repository. We highlight that for nonspatial segregation measures only a pandas dataframe would also work to estimate.For more information about the data: https://github.com/pysal/libpysal/tree/master/libpysal/examples/sacramento2
###Code
s_map = gpd.read_file(libpysal.examples.get_path("sacramentot2.shp"))
s_map.columns
###Output
_____no_output_____
###Markdown
The data have several demographic variables. We are going to assess the segregation of the Hispanic Population (variable 'HISP_'). For this, we only extract some columns of the geopandas dataframe.
###Code
gdf = s_map[['geometry', 'HISP_', 'TOT_POP']]
###Output
_____no_output_____
###Markdown
We also can plot the spatial distribution of the composition of the Hispanic population over the tracts of Sacramento:
###Code
gdf['composition'] = gdf['HISP_'] / gdf['TOT_POP']
gdf.plot(column = 'composition',
cmap = 'OrRd',
figsize=(20,10),
legend = True)
###Output
_____no_output_____
###Markdown
Notation For consistency of notation, we assume that $n_{ij}$ is the population of unit $i \in \{1, ..., I\}$ of group $j \in \{x, y\}$, also $\sum_{j}n_{ij} = n_{i.}$, $\sum_{i}n_{ij} = n_{.j}$, $\sum_{i}\sum_{j}n_{ij} = n_{..}$, $\tilde{s}_{ij} = \frac{n_{ij}}{n_{i.}}$, $\hat{s}_{ij} = \frac{n_{ij}}{n_{.j}}$. The segregation indexes can be build for any group $j$ of the data. Dissimilarity Introduced by *Duncan, O. and B. Duncan (1955). A methodological analysis of segregation indexes. American Sociological Review 20, 210โ17.*, the Dissimilarity Index (D) is given by:$$D = \sum_{i=1}^{I}\frac{n_{i.}\mid \tilde{s}_{ij}-\frac{n_{.j}}{n_{..}}\mid}{2n_{..}\frac{n_{.j}}{n_{..}}\left ( 1-\frac{n_{.j}}{n_{..}} \right )}$$and$$0 \leqslant D \leqslant 1$$ The index is fitted below:
###Code
from segregation.aspatial import Dissim
index = Dissim(gdf, 'HISP_', 'TOT_POP')
type(index)
###Output
_____no_output_____
###Markdown
All the **segregation** classes have the *statistic* and the *core_data* attributes. We can access the point estimation of D for the data set with the **statistic** attribute:
###Code
index.statistic
###Output
_____no_output_____
###Markdown
The interpretation of this value is that 32.18% of the hispanic population would have to move to reach eveness in Sacramento. Gini The Gini coefficient is given by:$$G=\sum_{i_1=1}^{I}\sum_{i_2=1}^{I}\frac{n_{i_1.}n_{i_2.}\mid \tilde{s}_{ij}^{i_1}-\tilde{s}_{ij}^{i_2}\mid}{2n_{..}^2\frac{n_{.j}}{n_{..}}\left ( 1-\frac{n_{.j}}{n_{..}} \right )}$$ The index is fitted below:
###Code
from segregation.aspatial import GiniSeg
index = GiniSeg(gdf, 'HISP_', 'TOT_POP')
type(index)
index.statistic
###Output
_____no_output_____
###Markdown
Entropy The global entropy (E) is given by:$$E = \frac{n_{.j}}{n_{..}} \ log\left ( \frac{1}{\frac{n_{.j}}{n_{..}}} \right )+\left ( 1-\frac{n_{.j}}{n_{..}} \right )log\left ( \frac{1}{1-\frac{n_{.j}}{n_{..}}} \right )$$while the unit's entropy is analogously:$$E_i = \tilde{s}_ {ij} \ log\left ( \frac{1}{\tilde{s}_ {ij}} \right )+\left ( 1-\tilde{s}_ {ij} \right )log\left ( \frac{1}{1-\tilde{s}_ {ij}} \right ).$$Therefore, the entropy index (H) is given by:$$H = \sum_{i=1}^{I}\frac{n_{i.}\left ( E-E_i \right )}{En_{..}}$$ The index is fitted below:
###Code
from segregation.aspatial import Entropy
index = Entropy(gdf, 'HISP_', 'TOT_POP')
type(index)
index.statistic
###Output
_____no_output_____
###Markdown
Atkinson The Atkinson index (A) is given by:$$A = 1 - \frac{\frac{n_{.j}}{n_{..}}}{1-\frac{n_{.j}}{n_{..}}}\left | \sum_{i=1}^{I}\left [ \frac{\left ( 1-\tilde{s}_{ij} \right )^{1-b}\tilde{s}_{ij}^bt_i}{\frac{n_{.j}}{n_{..}}n_{..}} \right ] \right |^{\frac{1}{1-b}}$$where $b$ is a shape parameter that determines how to weight the increments to segregation contributed by different portions of the Lorenz curve. The index is fitted below (note you can modify the parameter *b*):
###Code
from segregation.aspatial import Atkinson
index = Atkinson(gdf, 'HISP_', 'TOT_POP', b = 0.5)
type(index)
index.statistic
###Output
_____no_output_____
###Markdown
Concentration Profile The Concentration Profile (R) measure is discussed in Hong, Seong-Yun, and Yukio Sadahiro. "Measuring geographic segregation: a graph-based approach." *Journal of Geographical Systems* 16.2 (2014): 211-231. and tries to inspect the evenness aspect of segregation. The threshold proportion $t$ is given by:$$\upsilon_t = \frac{\sum_{i=1}^{I}n_{ij}g(t,i)}{\sum_{i=1}^{I}n_{ij}}.$$In the equation, $g(t, i)$ is a logical function that is defined as:$$ g(t,i) = \begin{cases} 1 & if \ \frac{n_{ij}}{n_{i.}} \geqslant t \\ 0 & \ otherwise. \end{cases}$$The Concentration Profile (R) is given by:$$R=\frac{\frac{n_{.j}}{n_{..}}-\left ( \int_{t=0}^{\frac{n_{.j}}{n_{..}}}\upsilon_tdt - \int_{t=\frac{n_{.j}}{n_{..}}}^{1}\upsilon_tdt \right )}{1-\frac{n_{.j}}{n_{..}}}.$$ The index is fitted below:
###Code
from segregation.aspatial import ConProf
index = ConProf(gdf, 'HISP_', 'TOT_POP')
type(index)
index.statistic
###Output
_____no_output_____
###Markdown
In addition, this index has a plotting method to see the profile estimated.
###Code
index.plot()
###Output
_____no_output_____
###Markdown
Isolation Isolation (xPx) assesses how much a minority group is exposed only to the same group. In other words, how much its members interact only with members of the group they belong to. Assuming $j = x$ as the minority group, the isolation of $x$ is given by:$$xPx=\sum_{i=1}^{I}\left ( \hat{s}_{ix} \right )\left ( \tilde{s}_{ix} \right ).$$ The index is fitted below:
###Code
from segregation.aspatial import Isolation
index = Isolation(gdf, 'HISP_', 'TOT_POP')
type(index)
index.statistic
###Output
_____no_output_____
###Markdown
The interpretation of this number is that if you randomly pick a Hispanic person from a specific tract of Sacramento, there is a 23.19% probability that this person shares a unit with another Hispanic person. Exposure The Exposure (xPy) of $x$ is given by$$xPy=\sum_{i=1}^{I}\left ( \hat{s}_{iy} \right )\left ( \tilde{s}_{iy} \right ).$$ The index is fitted below:
###Code
from segregation.aspatial import Exposure
index = Exposure(gdf, 'HISP_', 'TOT_POP')
type(index)
index.statistic
###Output
_____no_output_____
###Markdown
The interpretation of this number is that if you randomly pick a Hispanic person from a specific tract of Sacramento, there is a 76.8% probability that this person shares a unit with a non-Hispanic person. Correlation Ratio The correlation ratio (V or $Eta^2$) is given by$$V = Eta^2 = \frac{xPx - \frac{n_{.x}}{n_{..}}}{1 - \frac{n_{.x}}{n_{..}}}.$$ The index is fitted below:
###Code
from segregation.aspatial import CorrelationR
index = CorrelationR(gdf, 'HISP_', 'TOT_POP')
type(index)
index.statistic
###Output
_____no_output_____
###Markdown
Modified Dissimilarity The Modified Dissimilarity Index (Dct) based on Carrington, W. J., Troske, K. R., 1997. On measuring segregation in samples with small units. *Journal of Business & Economic Statistics* 15 (4), 402โ409, evaluates the deviation from simulated evenness. This measure is estimated by taking the mean of the classical $D$ under several simulations under evenness from the global minority proportion.Let $D^*$ be the average of the classical D under simulations draw assuming evenness from the global minority proportion. The value of Dct can be evaluated with the following equation: $$ Dct = \begin{cases} \frac{D-D^*}{1-D^*} & if \ D \geqslant D^* \\ \frac{D-D^*}{D^*} & if \ D < D^* \end{cases}$$ The index is fitted below (note you can change the number of simulations):
###Code
from segregation.aspatial import ModifiedDissim
index = ModifiedDissim(gdf, 'HISP_', 'TOT_POP', iterations = 500)
type(index)
index.statistic
###Output
_____no_output_____
###Markdown
Modified Gini The Modified Gini (Gct) based also on Carrington, W. J., Troske, K. R., 1997. On measuring segregation in samples with small units. *Journal of Business & Economic Statistics* 15 (4), 402โ409, evaluates the deviation from simulated evenness. This measure is estimated by taking the mean of the classical G under several simulations under evenness from the global minority proportion.Let $G^*$ be the average of G under simulations draw assuming evenness from the global minority proportion. The value of Gct can be evaluated with the following equation: $$ Gct = \begin{cases} \frac{G-G^*}{1-G^*} & if \ G \geqslant G^* \\ \frac{G-G^*}{G^*} & if \ G < G^* \end{cases}$$ The index is fitted below (note you can change the number of simulations):
###Code
from segregation.aspatial import ModifiedGiniSeg
index = ModifiedGiniSeg(gdf, 'HISP_', 'TOT_POP', iterations = 500)
type(index)
index.statistic
###Output
_____no_output_____
###Markdown
Bias-Corrected Dissimilarity The Bias-Corrected Dissimilarity (Dbc) index is presented in Allen, R., Burgess, S., Davidson, R., Windmeijer, F., 2015. More reliable inference for the dissimilarity index of segregation. *The econometrics journal* 18 (1), 40–66. The Dbc is given by:$$ D_{bc} = 2D - \bar{D}_{b}$$where $\bar{D}_b$ is the average of $B$ resamples drawn using the observed conditional probabilities for a multinomial distribution for each group independently. The index is fitted below (note you can change the value of B):
###Code
from segregation.aspatial import BiasCorrectedDissim
index = BiasCorrectedDissim(gdf, 'HISP_', 'TOT_POP', B = 500)
type(index)
index.statistic
###Output
_____no_output_____
###Markdown
Density-Corrected Dissimilarity The Density-Corrected Dissimilarity (Ddc) index is presented in Allen, R., Burgess, S., Davidson, R., Windmeijer, F., 2015. More reliable inference for the dissimilarity index of segregation. *The econometrics journal* 18 (1), 40โ66. The Ddc measure is given by:$$D_{dc} = \frac{1}{2}\sum_{i=1}^{I} \hat{\sigma}_{i} n\left ( \hat{\theta}_i \right )$$where$$\hat{\sigma}^2_i = \frac{\hat{s}_{ix} (1-\hat{s}_{ix})}{n_{.x}} + \frac{\hat{s}_{iy} (1-\hat{s}_{iy})}{n_{.y}} $$and $n\left ( \hat{\theta}_i \right )$ is the $\theta_i$ that maximizes the folded normal distribution $\phi(\hat{\theta}_i-\theta_i) + \phi(\hat{\theta}_i+\theta_i)$ where$$\hat{\theta_i} = \frac{\left | \hat{s}_{ix}-\hat{s}_{iy} \right |}{\hat{\sigma_i}}.$$and $\phi$ is the standard normal density. The index is fitted below (note you can change the tolerance of the optimization step):
###Code
from segregation.aspatial import DensityCorrectedDissim
index = DensityCorrectedDissim(gdf, 'HISP_', 'TOT_POP', xtol = 1e-5)
type(index)
index.statistic
###Output
_____no_output_____
###Markdown
Minimum-Maximum Index (MM) The Minimum-Maximum Index (MM) is an abstraction of the aspatial version of O'Sullivan, David, and David WS Wong. "A surfaceโbased approach to measuring spatial segregation." Geographical Analysis 39.2 (2007): 147-168. Its formula is given by:$$MM = \frac{\sum_{i=1}^{n} max \left ( \hat{s}_{i1}, \hat{s}_{i2} \right ) - \sum_{i=1}^{n} min \left ( \hat{s}_{i1}, \hat{s}_{i2} \right )}{\sum_{i=1}^{n} max \left ( \hat{s}_{i1}, \hat{s}_{i2} \right )}$$where the sub-indexes $1$ and $2$ are two mutually exclusive groups (for example, Hispanic and non-Hispanic population). Internally, `segregation` creates the complementary group of `HISP_`.
###Code
from segregation.aspatial import MinMax
index = MinMax(gdf, 'HISP_', 'TOT_POP')
type(index)
index.statistic
###Output
_____no_output_____
###Markdown
Table of Contents* [PySAL *segregation* module for aspatial indexes](PySAL-*segregation*-module-for-aspatial-indexes) * [Notation](Notation) * [Dissimilarity](Dissimilarity) * [Gini](Gini) * [Entropy](Entropy) * [Atkinson](Atkinson) * [Concentration Profile](Concentration-Profile) * [Isolation](Isolation) * [Exposure](Exposure) * [Correlation Ratio](Correlation-Ratio) * [Modified Dissimilarity](Modified-Dissimilarity) * [Modified Gini](Modified-Gini) * [Bias-Corrected Dissimilarity](Bias-Corrected-Dissimilarity) * [Density-Corrected Dissimilarity](Density-Corrected-Dissimilarity) * [Minimum-Maximum Index (MM)](Minimum-Maximum-Index-%28MM%29) PySAL *segregation* module for aspatial indexes This is an example notebook of functionalities for aspatial indexes of the *segregation* module. Firstly, we need to import the packages we need.
###Code
%matplotlib inline
import geopandas as gpd
import segregation
import libpysal
###Output
_____no_output_____
###Markdown
Then it's time to load some data to estimate segregation. We use the data of 2000 Census Tract Data for the metropolitan area of Sacramento, CA, USA. We use a geopandas dataframe available in PySAL examples repository. We highlight that for nonspatial segregation measures only a pandas dataframe would also work to estimate.For more information about the data: https://github.com/pysal/libpysal/tree/master/libpysal/examples/sacramento2
###Code
s_map = libpysal.examples.load_example("Sacramento1")
s_map = gpd.read_file(s_map.get_path("sacramentot2.shp"))
s_map.columns
###Output
_____no_output_____
###Markdown
The data have several demographic variables. We are going to assess the segregation of the Hispanic Population (variable 'HISP'). For this, we only extract some columns of the geopandas dataframe.
###Code
gdf = s_map[['geometry', 'HISP', 'TOT_POP']]
###Output
_____no_output_____
###Markdown
We also can plot the spatial distribution of the composition of the Hispanic population over the tracts of Sacramento:
###Code
gdf['composition'] = gdf['HISP'] / gdf['TOT_POP']
gdf.plot(column = 'composition',
cmap = 'OrRd',
figsize=(20,10),
legend = True)
###Output
/home/serge/anaconda3/envs/segregation/lib/python3.9/site-packages/geopandas/geodataframe.py:853: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
super(GeoDataFrame, self).__setitem__(key, value)
###Markdown
Notation For consistency of notation, we assume that $n_{ij}$ is the population of unit $i \in \{1, ..., I\}$ of group $j \in \{x, y\}$, also $\sum_{j}n_{ij} = n_{i.}$, $\sum_{i}n_{ij} = n_{.j}$, $\sum_{i}\sum_{j}n_{ij} = n_{..}$, $\tilde{s}_{ij} = \frac{n_{ij}}{n_{i.}}$, $\hat{s}_{ij} = \frac{n_{ij}}{n_{.j}}$. The segregation indexes can be build for any group $j$ of the data. Dissimilarity Introduced by *Duncan, O. and B. Duncan (1955). A methodological analysis of segregation indexes. American Sociological Review 20, 210โ17.*, the Dissimilarity Index (D) is given by:$$D = \sum_{i=1}^{I}\frac{n_{i.}\mid \tilde{s}_{ij}-\frac{n_{.j}}{n_{..}}\mid}{2n_{..}\frac{n_{.j}}{n_{..}}\left ( 1-\frac{n_{.j}}{n_{..}} \right )}$$and$$0 \leqslant D \leqslant 1$$ The index is fitted below:
###Code
from segregation.aspatial import Dissim
index = Dissim(gdf, 'HISP', 'TOT_POP')
type(index)
###Output
_____no_output_____
###Markdown
All the **segregation** classes have the *statistic* and the *core_data* attributes. We can access the point estimation of D for the data set with the **statistic** attribute:
###Code
index.statistic
###Output
_____no_output_____
###Markdown
The interpretation of this value is that 32.18% of the hispanic population would have to move to reach eveness in Sacramento. Gini The Gini coefficient is given by:$$G=\sum_{i_1=1}^{I}\sum_{i_2=1}^{I}\frac{n_{i_1.}n_{i_2.}\mid \tilde{s}_{ij}^{i_1}-\tilde{s}_{ij}^{i_2}\mid}{2n_{..}^2\frac{n_{.j}}{n_{..}}\left ( 1-\frac{n_{.j}}{n_{..}} \right )}$$ The index is fitted below:
###Code
from segregation.aspatial import GiniSeg
index = GiniSeg(gdf, 'HISP', 'TOT_POP')
type(index)
index.statistic
###Output
_____no_output_____
###Markdown
Entropy The global entropy (E) is given by:$$E = \frac{n_{.j}}{n_{..}} \ log\left ( \frac{1}{\frac{n_{.j}}{n_{..}}} \right )+\left ( 1-\frac{n_{.j}}{n_{..}} \right )log\left ( \frac{1}{1-\frac{n_{.j}}{n_{..}}} \right )$$while the unit's entropy is analogously:$$E_i = \tilde{s}_ {ij} \ log\left ( \frac{1}{\tilde{s}_ {ij}} \right )+\left ( 1-\tilde{s}_ {ij} \right )log\left ( \frac{1}{1-\tilde{s}_ {ij}} \right ).$$Therefore, the entropy index (H) is given by:$$H = \sum_{i=1}^{I}\frac{n_{i.}\left ( E-E_i \right )}{En_{..}}$$ The index is fitted below:
###Code
from segregation.aspatial import Entropy
index = Entropy(gdf, 'HISP', 'TOT_POP')
type(index)
index.statistic
###Output
_____no_output_____
###Markdown
Atkinson The Atkinson index (A) is given by:$$A = 1 - \frac{\frac{n_{.j}}{n_{..}}}{1-\frac{n_{.j}}{n_{..}}}\left | \sum_{i=1}^{I}\left [ \frac{\left ( 1-\tilde{s}_{ij} \right )^{1-b}\tilde{s}_{ij}^bt_i}{\frac{n_{.j}}{n_{..}}n_{..}} \right ] \right |^{\frac{1}{1-b}}$$where $b$ is a shape parameter that determines how to weight the increments to segregation contributed by different portions of the Lorenz curve. The index is fitted below (note you can modify the parameter *b*):
###Code
from segregation.aspatial import Atkinson
index = Atkinson(gdf, 'HISP', 'TOT_POP', b = 0.5)
type(index)
index.statistic
###Output
_____no_output_____
###Markdown
Concentration Profile The Concentration Profile (R) measure is discussed in Hong, Seong-Yun, and Yukio Sadahiro. "Measuring geographic segregation: a graph-based approach." *Journal of Geographical Systems* 16.2 (2014): 211-231. and tries to inspect the evenness aspect of segregation. The threshold proportion $t$ is given by:$$\upsilon_t = \frac{\sum_{i=1}^{I}n_{ij}g(t,i)}{\sum_{i=1}^{I}n_{ij}}.$$In the equation, $g(t, i)$ is a logical function that is defined as:$$ g(t,i) = \begin{cases} 1 & if \ \frac{n_{ij}}{n_{i.}} \geqslant t \\ 0 & \ otherwise. \end{cases}$$The Concentration Profile (R) is given by:$$R=\frac{\frac{n_{.j}}{n_{..}}-\left ( \int_{t=0}^{\frac{n_{.j}}{n_{..}}}\upsilon_tdt - \int_{t=\frac{n_{.j}}{n_{..}}}^{1}\upsilon_tdt \right )}{1-\frac{n_{.j}}{n_{..}}}.$$ The index is fitted below:
###Code
from segregation.aspatial import ConProf
index = ConProf(gdf, 'HISP', 'TOT_POP')
type(index)
index.statistic
###Output
_____no_output_____
###Markdown
In addition, this index has a plotting method to see the profile estimated.
###Code
index.plot()
###Output
_____no_output_____
###Markdown
Isolation Isolation (xPx) assesses how much a minority group is exposed only to the same group. In other words, how much its members interact only with members of the group they belong to. Assuming $j = x$ as the minority group, the isolation of $x$ is given by:$$xPx=\sum_{i=1}^{I}\left ( \hat{s}_{ix} \right )\left ( \tilde{s}_{ix} \right ).$$ The index is fitted below:
###Code
from segregation.aspatial import Isolation
index = Isolation(gdf, 'HISP', 'TOT_POP')
type(index)
index.statistic
###Output
_____no_output_____
###Markdown
The interpretation of this number is that if you randomly pick a Hispanic person from a specific tract of Sacramento, there is a 23.19% probability that this person shares a unit with another Hispanic person. Exposure The Exposure (xPy) of $x$ is given by$$xPy=\sum_{i=1}^{I}\left ( \hat{s}_{iy} \right )\left ( \tilde{s}_{iy} \right ).$$ The index is fitted below:
###Code
from segregation.aspatial import Exposure
index = Exposure(gdf, 'HISP', 'TOT_POP')
type(index)
index.statistic
###Output
_____no_output_____
###Markdown
The interpretation of this number is that if you randomly pick a Hispanic person from a specific tract of Sacramento, there is a 76.8% probability that this person shares a unit with a non-Hispanic person. Correlation Ratio The correlation ratio (V or $Eta^2$) is given by$$V = Eta^2 = \frac{xPx - \frac{n_{.x}}{n_{..}}}{1 - \frac{n_{.x}}{n_{..}}}.$$ The index is fitted below:
###Code
from segregation.aspatial import CorrelationR
index = CorrelationR(gdf, 'HISP', 'TOT_POP')
type(index)
index.statistic
###Output
_____no_output_____
###Markdown
Modified Dissimilarity The Modified Dissimilarity Index (Dct) based on Carrington, W. J., Troske, K. R., 1997. On measuring segregation in samples with small units. *Journal of Business & Economic Statistics* 15 (4), 402โ409, evaluates the deviation from simulated evenness. This measure is estimated by taking the mean of the classical $D$ under several simulations under evenness from the global minority proportion.Let $D^*$ be the average of the classical D under simulations draw assuming evenness from the global minority proportion. The value of Dct can be evaluated with the following equation: $$ Dct = \begin{cases} \frac{D-D^*}{1-D^*} & if \ D \geqslant D^* \\ \frac{D-D^*}{D^*} & if \ D < D^* \end{cases}$$ The index is fitted below (note you can change the number of simulations):
###Code
from segregation.aspatial import ModifiedDissim
index = ModifiedDissim(gdf, 'HISP', 'TOT_POP', iterations = 500)
type(index)
index.statistic
###Output
_____no_output_____
###Markdown
Modified Gini The Modified Gini (Gct) based also on Carrington, W. J., Troske, K. R., 1997. On measuring segregation in samples with small units. *Journal of Business & Economic Statistics* 15 (4), 402โ409, evaluates the deviation from simulated evenness. This measure is estimated by taking the mean of the classical G under several simulations under evenness from the global minority proportion.Let $G^*$ be the average of G under simulations draw assuming evenness from the global minority proportion. The value of Gct can be evaluated with the following equation: $$ Gct = \begin{cases} \frac{G-G^*}{1-G^*} & if \ G \geqslant G^* \\ \frac{G-G^*}{G^*} & if \ G < G^* \end{cases}$$ The index is fitted below (note you can change the number of simulations):
###Code
from segregation.aspatial import ModifiedGiniSeg
index = ModifiedGiniSeg(gdf, 'HISP', 'TOT_POP', iterations = 500)
type(index)
index.statistic
###Output
_____no_output_____
###Markdown
Bias-Corrected Dissimilarity The Bias-Corrected Dissimilarity (Dbc) index is presented in Allen, R., Burgess, S., Davidson, R., Windmeijer, F., 2015. More reliable inference for the dissimilarity index of segregation. *The econometrics journal* 18 (1), 40–66. The Dbc is given by:$$ D_{bc} = 2D - \bar{D}_{b}$$where $\bar{D}_b$ is the average of $B$ resamples drawn using the observed conditional probabilities for a multinomial distribution for each group independently. The index is fitted below (note you can change the value of B):
###Code
from segregation.aspatial import BiasCorrectedDissim
index = BiasCorrectedDissim(gdf, 'HISP', 'TOT_POP', B = 500)
type(index)
index.statistic
###Output
_____no_output_____
###Markdown
Density-Corrected Dissimilarity The Density-Corrected Dissimilarity (Ddc) index is presented in Allen, R., Burgess, S., Davidson, R., Windmeijer, F., 2015. More reliable inference for the dissimilarity index of segregation. *The econometrics journal* 18 (1), 40โ66. The Ddc measure is given by:$$D_{dc} = \frac{1}{2}\sum_{i=1}^{I} \hat{\sigma}_{i} n\left ( \hat{\theta}_i \right )$$where$$\hat{\sigma}^2_i = \frac{\hat{s}_{ix} (1-\hat{s}_{ix})}{n_{.x}} + \frac{\hat{s}_{iy} (1-\hat{s}_{iy})}{n_{.y}} $$and $n\left ( \hat{\theta}_i \right )$ is the $\theta_i$ that maximizes the folded normal distribution $\phi(\hat{\theta}_i-\theta_i) + \phi(\hat{\theta}_i+\theta_i)$ where$$\hat{\theta_i} = \frac{\left | \hat{s}_{ix}-\hat{s}_{iy} \right |}{\hat{\sigma_i}}.$$and $\phi$ is the standard normal density. The index is fitted below (note you can change the tolerance of the optimization step):
###Code
from segregation.aspatial import DensityCorrectedDissim
index = DensityCorrectedDissim(gdf, 'HISP', 'TOT_POP', xtol = 1e-5)
type(index)
index.statistic
###Output
_____no_output_____
###Markdown
Minimum-Maximum Index (MM) The Minimum-Maximum Index (MM) is an abstraction of the aspatial version of O'Sullivan, David, and David WS Wong. "A surfaceโbased approach to measuring spatial segregation." Geographical Analysis 39.2 (2007): 147-168. Its formula is given by:$$MM = \frac{\sum_{i=1}^{n} max \left ( \hat{s}_{i1}, \hat{s}_{i2} \right ) - \sum_{i=1}^{n} min \left ( \hat{s}_{i1}, \hat{s}_{i2} \right )}{\sum_{i=1}^{n} max \left ( \hat{s}_{i1}, \hat{s}_{i2} \right )}$$where the sub-indexes $1$ and $2$ are two mutually exclusive groups (for example, Hispanic and non-Hispanic population). Internally, `segregation` creates the complementary group of `HISP`.
###Code
from segregation.aspatial import MinMax
index = MinMax(gdf, 'HISP', 'TOT_POP')
type(index)
index.statistic
###Output
_____no_output_____ |
code/multi_variable_linear_regression_02_matrix.ipynb | ###Markdown
multi variable์ ๋ํ linear regression์ ์ฝ๋๋ฅผ ๋ฆฌ๋ทฐํด๋ณด์ 1. ๋ณ์์ ๊ฐฏ์์ ๋ฐ๋ผ ์ผ์ผ์ด ์ฝ๋ฉํ์ง ์๋๋ก ๋งคํธ๋ฆญ์ค ์ฐ์ฐ์ผ๋ก ์ฒ๋ฆฌ
###Code
import tensorflow as tf
import numpy as np
tf.set_random_seed(777) # for reproducibility
x_data = [[73., 80., 75.],
[93., 88., 93.],
[89., 91., 90.],
[96., 98., 100.],
[73., 66., 70.]]
y_data = [[152.],
[185.],
[180.],
[196.],
[142.]]
X = tf.placeholder(tf.float32, shape=[None, 3])
Y = tf.placeholder(tf.float32, shape=[None, 1])
W = tf.Variable(tf.random_normal([3, 1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')
# Hypothesis / y hat
y_hat = tf.matmul(X, W) + b
# cost/loss function
loss = tf.reduce_mean(tf.square(y_hat - Y)) # sum of the squares
# optimizer
optimizer = tf.train.GradientDescentOptimizer(learning_rate=1e-5)
train = optimizer.minimize(loss)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
W_hist = tf.summary.histogram("Weight", W)
b_hist = tf.summary.histogram("bias", b)
y_hat_hist = tf.summary.histogram("hypothesis", y_hat)
loss_scal = tf.summary.scalar("loss", loss)
# tensorboard --logdir=./logs/linear_regression_logs
merged_summary = tf.summary.merge_all()
writer = tf.summary.FileWriter("./logs/multi_variable_linear_regression_r0_02")
writer.add_graph(sess.graph) # Show the graph
for step in range(2001):
summary, loss_value, y_hat_value, _ = sess.run([merged_summary, loss, y_hat, train], {X: x_data, Y: y_data})
if step % 100 == 0:
print(step, "loss: ", loss_value, "\nPrediction: ", y_hat_value)
writer.add_summary(summary, global_step=step)
# ๋์ถฉ x1, x2, x3 ๊ฐ์ ๋ฃ๊ณ y ๊ฐ์ prediction ํด๋ณด์
x_test = [[87., 82., 91.]]
y_hat_value = sess.run([y_hat], {X: x_test})
print("X : ", x_test, "\nPrediction: ", y_hat_value[0])
###Output
X : [[87.0, 82.0, 91.0]]
Prediction: [[ 178.82533264]]
|
Aula01_estudos.ipynb | ###Markdown
Olรก! Seja bem-vinda ou bem-vindo ao notebook da aula 01, nesta aula vamos realizar nossa primeira anรกlise de dados e no final jรก seremos capazes de tirar algumas conclusรตes.Nรณs desenvolveremos nosso projeto aqui no google colaboratory, assim podemos mesclar cรฉlulas contendo textos em formato markdown e cรฉlulas de cรณdigo, alรฉm disso vocรช nรฃo precisa instalar nada na sua mรกquina. Entรฃo, que tal comeรงar testando algumas linhas de cรณdigo?No campo abaixo temos o que chamamos de cรฉlula. Na cรฉlula nรณs podemos digitar textos, cรกlculos e cรณdigos. Apรณs eu introduzir o que eu quero na cรฉlula eu vou rodar esse conteรบdo. Para isso podemos usar o atalho SHIFT + ENTER, ou atรฉ mesmo clicar no sรญmbolo de play no canto esquerdo da cรฉlula. Na primeira vez que rodamos algo no nosso notebook ele vai demorar um pouco mais de tempo, e isso ocorre porque o google estรก alocando uma mรกquina virtual para nรณs utilizarmos.Vamos entรฃo fazer alguns testes, como os exemplos abaixo:
###Code
# Notebooks work in such a way that putting the name of a variable or an operation
# directly on the last line of a cell displays its value, with no need for 'print';
# for some variables, however, calling print(var) gives other information about the
# variable's object
# Applying just the operation directly on the command line
10 + 10
# Using the 'print()' function we get the same thing, but with print we can print more than one
# thing on the screen, whereas placing only the operations would have printed just the last one;
# remember that the 'type(var)' function returns the variable's type.
print(10+10)
print(type(10+10))
10+10
type(10+10)
'Caio'
###Output
_____no_output_____
###Markdown
Importing the data In this immersion we will dive into the universe of biology and biotechnology and explore a database from that field. For our analysis, we are still missing the data. To get it we will go to GitHub, at this link:https://github.com/alura-cursos/imersaodados3/tree/main/dados So now let's import this database into our notebook. To bring this information together we will use our famous "Pandas" library. Let's import this library with the following code:
###Code
import pandas as pd
url_dados = 'https://github.com/alura-cursos/imersaodados3/blob/main/dados/dados_experimentos.zip?raw=true'
dados = pd.read_csv(url_dados, compression = 'zip')
dados
# To load data into the notebook you could also use a path to a file saved in Drive,
# by clicking the 4th icon in the left sidebar of Google Colab (folder icon)
# and connecting Google Colab to your Drive.
###Output
_____no_output_____
###Markdown
Notice that we have now printed all the rows and columns of our table. Here we see data for ID, treatment, time, dose, among others. To help us visualize and identify the columns of the data we will work with, we will take a slice, a kind of header, with 5 rows, using the following command:
###Code
# Calling just 'dados.head()' already gives us the first 5 rows, but
# adding a number as the function's argument, i.e. inside the parentheses, lets us print
# any number of rows present in the DataFrame
dados.head()
###Output
_____no_output_____
###Markdown
Quando estรกvamos trabalhando com o conjunto total de dados nรณs tรญnhamos a informaรงรฃo do nรบmero total de linhas e colunas. Porรฉm, agora com o head, nรณs perdemos essa informaรงรฃo. Entรฃo, caso vocรช queira recuperar a informaรงรฃo do total de linhas e colunas basta utilizar:
###Code
# Remember that Python starts indexing at 0, so the last row in this case
# has index 23813 even though there are 23814 rows, as shown by '.shape',
# which has no parentheses and takes no additional arguments.
dados.shape
###Output
_____no_output_____
###Markdown
Entรฃo, agora vamos voltar para o nosso problema. Nรณs queremos iniciar fazendo uma anรกlise dos dados que estamos tratando. Vamos comeรงar selecionando sรณ uma coluna, para entendermos do que ela estรก tratando. Vamos iniciar com a coluna tratamento. Para isso vamos digitar:
###Code
# Here the column is referenced as a 'string', i.e. text with the column name
# exactly as it appears in the DataFrame; adding extra spaces or capital letters would result
# in an error, because the name must match the one shown in the DataFrame.
dados['tratamento']
###Output
_____no_output_____
###Markdown
This kind of column object is what we call a 'series', while the whole table is called a 'dataframe'. Now, to see the specific kinds of values we have inside this column, we type:
###Code
# We see here that the data are properly organized, since the two values
# given by the 'tratamento' column are distinct and unique; it could happen, for example, that
# a category called 'com droga' or 'c/ droga' existed with the same meaning as 'com_droga',
# which would require cleaning the data so that there is no ambiguity.
dados['tratamento'].unique()
###Output
_____no_output_____
###Markdown
The answer came in array format, which is nothing more than a vector, i.e. a kind of data structure. Note that we found two types: 'com_droga' (with drug) and 'com_controle' (with control). With drug, as the name says, is when some kind of drug is applied to the sample. The control, on the other hand, is a statistical technique in which we isolate the other variables and observe only the variable of interest. Now let's analyze the time column:
###Code
# We can see here that the result can also be an integer value
dados['tempo'].unique()
###Output
_____no_output_____
###Markdown
We found 3 kinds of values in this column: 24, 72 and 48, which may indicate the time, probably in hours, of the interval after which the dose of medication or drug was administered. It is interesting to observe the behavior of the cells at these different times, since if we analyzed a different period there might not be enough time for the cell to show a given behavior. Besides time, we can also analyze the dose column:
###Code
dados['dose'].unique()
###Output
_____no_output_____
###Markdown
Here we get two different dose types, d1 and d2, but we cannot state anything categorically beforehand. So let's analyze the drug category:
###Code
dados['droga'].unique()
###Output
_____no_output_____
###Markdown
Now, with the answer from ```dados['droga'].unique()```, we got this series of codes as a response. Perhaps these identifiers were encoded in an attempt to anonymize the types of drugs used, to try to avoid any kind of bias in the analysis of the results. There is a series of rules when certain experiments are performed, avoiding the display of data such as name, sex and other factors, depending on the analysis to be done, so that biases are avoided. Let's analyze the column named g-0 now:
###Code
dados['g-0'].unique()
###Output
_____no_output_____
###Markdown
Sรณ olhando fica difรญcil tentar deduzir o que esses nรบmeros representam. Entรฃo, nesse ponto, com o auxรญlio da Vanessa, especialista, conseguimos a informaรงรฃo que essa letra 'g' da coluna remete ร palavra gene. Ou seja, esses nรบmeros nos dizem a expressรฃo de cada gene frente as drogas ou a exposiรงรฃo.Quando subimos de volta na tabela, percebemos que hรก diversos valores, inclusive com vรกrias casas decimais. Aparentemente esses nรบmeros foram "arredondados", normalizados, para podermos comparรก-los de alguma forma.Entรฃo, atรฉ agora na nossa anรกlise, jรก conseguimos identificar e entender as informaรงรตes de diferentes colunas; as colunas que nos indicam os tratamentos, as drogas, o tempo, e depois as respostas genรฉticas, que tem a letra g no รญnicio do seu nome.Outro tipo de anรกlise que podemos fazer agora รฉ entender a distribuiรงรฃo dessas informaรงรตes. Por exemplo, podemos saber quantos experimentos temos utilizando droga e quantos nรฃo utilizam; quantos usam o d1, e quantos utilizam o d2, e assim por diante.Como podemos fazer isso? Vamos utilizar esse cรณdigo:
###Code
dados['tratamento'].value_counts()
###Output
_____no_output_____
###Markdown
Now, instead of using ```.unique``` in the code, we use ```.value_counts```, since our goal was to count how many times each value appears in the columns, in this case in the ```tratamento``` column. The result returned 2 rows: one with the number of elements, i.e. the frequency, in the ```com_droga``` category and the other with the count in the ```com_controle``` category. There was a big difference between these categories, wasn't there? That is, at the very least, curious. So Guilherme suggests a challenge: investigate why this difference is so large. At this point Thiago writes two more challenges, challenge 2 and challenge 3. And Vanessa also leaves hers, challenge 4. All the challenges from this lesson are written at the end of this notebook. Does this difference between the values within the categories remain so disproportionate? Let's investigate the ```dose``` category:
###Code
dados['dose'].value_counts()
###Output
_____no_output_____
###Markdown
At this point things already look more balanced. But with the raw counts alone it is hard to make a deeper reading. So, at this moment, we are going to solve one of the challenges. If you have not yet solved the challenges from the video and don't want a spoiler, stop here and try to solve them first. To understand better and make a comparison, it is useful to look at the proportions of the data. Let's write the code now, setting ```normalize = True``` as a parameter inside the parentheses:
###Code
dados['tratamento'].value_counts(normalize = True)
###Output
_____no_output_____
###Markdown
We now have the "normalized" data. We can interpret them as percentages (multiplying by 100), which gives us roughly 92% versus 8% (approximately), showing the size of the imbalance. Let's do the same now with the ```dose``` column:
###Code
dados['dose'].value_counts(normalize = True)
###Output
_____no_output_____
###Markdown
Here we have the result. As a mini-challenge, you can repeat the same process with the other columns. Right, but usually, when this kind of information is communicated, we see a series of charts. And usually it's that chart that looks like a tart, or a pizza. So let's plot a chart of that type, with the following code:
###Code
dados['tratamento'].value_counts().plot.pie()
###Output
_____no_output_____
###Markdown
Here, in blue, we have the number of treatments with drugs (which is much larger, proportionally), and in orange we have the "with control" treatment. It even ended up looking like a Pac-Man, hehe. Let's take the opportunity to analyze the other information with charts too, to see what happens. Let's look at the time column:
###Code
dados['tempo'].value_counts().plot.pie()
###Output
_____no_output_____
###Markdown
Notice how hard it became to tell with the naked eye which observation is larger from the chart. This type of chart can make the analysis a bit harder, especially when the values are more balanced. Besides identifying the numbers of hours observed, we can't extract any other information from this chart. We even have a kind of rule of thumb that says that when doing any kind of analysis we should avoid charts that remind us of food, for example: donut, pizza, pie, and so on. So what kind of chart could we use to improve our visualization? Let's use a bar chart, with this code:
###Code
dados['tempo'].value_counts().plot.bar()
###Output
_____no_output_____
###Markdown
Now we can much more easily identify which observation has the highest frequency. In fact the number of hours with the most observations was 48. On the y axis we have the number of observations. So we can see that we had a little over 8000 observations for 48 hours. So the bar chart ended up being much more useful, in this case, than the pie chart. Throughout the lesson we talked about gene expression. If we go back to the table, in the g-0 column we have values within a defined interval. So that the values are not too far apart, it is quite common in the scientific community to normalize the results, creating an interval that is not so wide, in which the middle of the distribution is 0. How can we know in which rows of that column the value is above 0? Let's run a query on our data, like this:
###Code
dados_filtrados = dados[dados['g-0'] > 0]
dados_filtrados.head()
###Output
_____no_output_____
###Markdown
This way, we get only the first 5 rows whose value in the g-0 column is greater than 0, with the help of the boolean mask ```[dados['g-0']>0]```. In the same way that we applied this mask, we can follow the same path, or use other queries, to answer several other questions. This leads to another challenge, number 5: look up the query method in the pandas documentation. Besides that one, there are also challenges 6, 7 and 8, all of them listed right below.
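As a hint for challenge 5, below is a minimal sketch of how the same filter could be expressed with the `query` method (this snippet is an added illustration, not part of the original notebook; note that column names containing a hyphen, such as `g-0`, have to be wrapped in backticks inside the query string):
###Code
# added sketch for challenge 5: the same filter as the boolean mask above,
# expressed with the pandas DataFrame.query method
dados_filtrados_query = dados.query('`g-0` > 0')
dados_filtrados_query.head()
###Output
_____no_output_____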
###Code
dados
###Output
_____no_output_____ |
02-python-201/practices/01-numpy/your-code/01-pra-numpy.ipynb | ###Markdown
Programming 201 Scientific Libraries in Python: `NumPy`------------------------------------------------------ Exercise 1: Calculate the norm and the determinant of the following matrix: ```[[1, 0], [2, -1]]```.
###Code
# Import the numpy library.
import numpy as np
# Create the matrix.
matriz_one = np.array([[1, 0], [2, -1]])
# Compute the norm with the `linalg.norm` function and print it.
norm = np.linalg.norm(matriz_one)
print("The norm of the matrix is: " + str(norm))
# Compute the determinant with the `linalg.det` function and print it.
det = np.linalg.det(matriz_one)
print("The determinant of the matrix is: " + str(det))
###Output
The norm of the matrix is: 2.449489742783178
The determinant of the matrix is: -1.0
###Markdown
Exercise 2: Evaluate the arcsine and arccosine functions on the interval [0,1] with a step (resolution) of 0.1 and store them in two _arrays_.
###Code
# Evaluate the arcsine function on the interval [0,1] with a step of 0.1.
array_one = np.array([[]])
for i in np.arange(0, 1, 0.1):
    # We use numpy's `arange` function, which supports floating point steps.
    array_one = np.append(array_one, np.arcsin(i))
# Print the result.
print("The array resulting from applying the arcsine function on the interval [0,1] with step 0.1 is: \n" + str(array_one))
# Evaluate the arccosine function on the interval [0,1] with a step of 0.1.
array_two = np.array([[]])
for i in np.arange(0, 1, 0.1):
    # We use numpy's `arange` function, which supports floating point steps.
    array_two = np.append(array_two, np.arccos(i))
# Print the result.
print("The array resulting from applying the arccosine function on the interval [0,1] with step 0.1 is: \n" + str(array_two))
###Output
The array resulting from applying the arccosine function on the interval [0,1] with step 0.1 is: 
[1.57079633 1.47062891 1.36943841 1.26610367 1.15927948 1.04719755
0.92729522 0.79539883 0.64350111 0.45102681]
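###Markdown
Since `np.arcsin` and `np.arccos` are vectorized ufuncs, the same result can also be obtained without an explicit loop. The sketch below is an added alternative for illustration only (the variable names are not part of the original solution):
###Code
# added sketch: apply the vectorized ufuncs directly to the whole interval
interval = np.arange(0, 1, 0.1)
array_one_vectorized = np.arcsin(interval)
array_two_vectorized = np.arccos(interval)
print(array_one_vectorized)
print(array_two_vectorized)
###Output
_____no_output_____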
###Markdown
Exercise 3: Generate a list of 100 random integer values from 0-9. Perform the following calculations using _numpy_ methods:- Mean and standard deviation of the values in the list- Maximum and minimum value- Sum of all the values in the list- Get a list of the unique values
###Code
# Using the `random.randint` function we generate a list of 100 random integer values from 0 to 9; the `size` parameter
# determines the length of the list, and we have to keep in mind that the upper bound of the chosen range is excluded,
# which is why we pass 10 as the upper bound so that 9 can also be drawn.
list_random_values_0_to_9 = np.random.randint(0, 10, size = 100)
list_random_values_0_to_9
# Compute the different statistics over the list.
print("The mean of the values in the list is: " + str(np.mean(list_random_values_0_to_9)))
print("The standard deviation of the values in the list is: " + str(np.std(list_random_values_0_to_9)))
print("The maximum of the values in the list is: " + str(np.amax(list_random_values_0_to_9)))
print("The minimum of the values in the list is: " + str(np.amin(list_random_values_0_to_9)))
print("The sum of the values in the list is: " + str(np.sum(list_random_values_0_to_9)))
print("A list of the unique values of this list would be: " + str(np.unique(list_random_values_0_to_9)))
###Output
The mean of the values in the list is: 4.33
The standard deviation of the values in the list is: 2.98011744735002
The maximum of the values in the list is: 9
The minimum of the values in the list is: 0
The sum of the values in the list is: 433
A list of the unique values of this list would be: [0 1 2 3 4 5 6 7 8 9]
###Markdown
Exercise 4: Sort the two-dimensional matrix ```[[5,1,7], [0,7,4], [7,23,1]]``` by rows using [Merge sort](https://en.wikipedia.org/wiki/Merge_sort) as the sorting algorithm.**Hint:** You do not need to implement the sorting algorithm by hand; NumPy contains methods to perform different kinds of sorting on different data structures.
###Code
# Create the two-dimensional matrix and print it.
matriz_bidimensional = np.array([[5,1,7], [0,7,4], [7,23,1]])
print(matriz_bidimensional)
# Sort the matrix above using the `sort` function, passing the "Merge sort" algorithm as a parameter and
# setting the sort direction with the "axis" parameter (axis=0 sorts the values along each column).
sorted_matrix = np.sort(matriz_bidimensional, axis = 0, kind = 'mergesort')
# Print the result.
print(sorted_matrix)
###Output
[[ 0 1 1]
[ 5 7 4]
[ 7 23 7]]
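###Markdown
Note that `axis = 0` sorts the values within each column, which is what the recorded output above shows. If "sort by rows" is instead read as sorting the values within each row, the call would use `axis = 1`; the sketch below is an added illustration of that alternative reading, not part of the original solution:
###Code
# added sketch: mergesort applied within each row (axis = 1)
sorted_by_rows = np.sort(matriz_bidimensional, axis = 1, kind = 'mergesort')
print(sorted_by_rows)
###Output
_____no_output_____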
###Markdown
Exercise 5: Define a function that, given two matrices, returns the absolute value of the product of their determinants, that is, given A and B, our function will return `|det(A) * det(B)|`.
###Code
# Create a function that, given two matrices, returns the absolute value of the product of their determinants.
def abs_value_product_det(matrix_a, matrix_b):
    product_matrix_det = np.linalg.det(matrix_a) * np.linalg.det(matrix_b)
    return np.absolute(product_matrix_det)
# Create two matrices A and B.
a = np.array([[1, 4], [7, 4]])
b = np.array([[3, 5], [2, 3]])
# Call the function to check that it works correctly.
abs_value_product_det(a, b)
###Output
_____no_output_____
###Markdown
Exercise 6: Create a 10x10 matrix corresponding to the [identity matrix](https://es.wikipedia.org/wiki/Matriz_identidad) using basic array constructors. Create a 10x10 identity matrix (this time using the specific identity-matrix generators) and check that both matrices are equal.**Considerations**:- The first matrix must be created using basic array constructors, like the ones presented in the theory notebooks.* The second matrix must be generated using numpy's identity-matrix generator.* The comparison must return True if the matrices are equal (a single True), and False otherwise.
###Code
# Create a 10x10 matrix using basic array constructors and display it.
basic_matrix = np.zeros(shape = [10, 10])
for i in range(10):
    basic_matrix[i][i] = 1
basic_matrix
# Create a 10x10 matrix using the specific identity-matrix generator and display it.
identity_matrix = np.identity(10)
identity_matrix
# Create a function that checks whether the two matrices are equal:
def compare_function():
    compare = False
    # Check whether the dimensions of the matrices and the elements they contain are equal; for this we use
    # `shape` and `all`, respectively.
    if basic_matrix.shape == identity_matrix.shape and (basic_matrix == identity_matrix).all():
        compare = True
    return compare
# Call the function.
compare_function()
###Output
_____no_output_____
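###Markdown
As a side note, NumPy also provides a helper that performs this shape-and-element comparison in a single call; the sketch below is an added illustration and is not required by the exercise:
###Code
# added sketch: shape and element-wise comparison in a single call
np.array_equal(basic_matrix, identity_matrix)
###Output
_____no_output_____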
###Markdown
Exercise 7: Create a 2x6 matrix where the value at each position `(i, j)` corresponds to `i^2+j` for every even `i`, and to `i/j^2` for every odd `i`.
###Code
# Create a matrix of size 2x6 with the `zeros` function.
matrix_2_x_6 = np.zeros(shape = (2, 6))
# Use two for loops to traverse the matrix.
for i in range(2):
    for j in range(6):
        # Check whether "i" is even and, in that case, apply the corresponding transformation to the matrix.
        # Since in mathematics indices start at 1, we have to add 1 to i and j when operating with them so that
        # the formulas work correctly and the results are right.
        if (i + 1) % 2 == 0:
            matrix_2_x_6[i][j] = (i + 1)**2 + (j + 1)
        # Otherwise (when "i" is odd), apply the other transformation.
        else:
            matrix_2_x_6[i][j] = (i + 1) / (j + 1)**2
# Display the resulting matrix.
matrix_2_x_6
###Output
_____no_output_____
###Markdown
Exercise 8: Create two 5x5 matrices with random real numbers. Obtain the result of multiplying both matrices using the two matrix multiplication methods seen in the theory notebook. What is the difference between the two results? Now create two matrices of size 4x5 and 5x5 respectively and repeat the operation. Describe which of the multiplication methods you can apply and why.
###Code
# Create two 5x5 matrices with random real numbers using the `random.rand` function.
matrix_5x5_one = np.random.rand(5, 5)
matrix_5x5_two = np.random.rand(5, 5)
# Print both matrices.
print(matrix_5x5_one)
print(matrix_5x5_two)
# Multiply both matrices with method one, using the "*" operator.
matrix_product_one = matrix_5x5_one * matrix_5x5_two
# Multiply both matrices with method two, using the `dot` function.
matrix_product_two = matrix_5x5_one.dot(matrix_5x5_two)
# Print the result of both multiplications to see the difference, and indeed there is one. On the one hand, the first method
# returns an array with the element-wise multiplication of both matrices; on the other hand, the second method applies the
# mathematical procedure of the matrix product.
print("The result of the first method for multiplying matrices is: \n" + str(matrix_product_one))
print("The result of the second method for multiplying matrices is: \n" + str(matrix_product_two))
# Create two matrices, one 4x5 and one 5x5, with random real numbers in the same way as before.
matrix_4x5_one = np.random.rand(4, 5)
matrix_5x5_three = np.random.rand(5, 5)
# Print both matrices.
print(matrix_4x5_one)
print(matrix_5x5_three)
# Multiplying these two matrices is possible with method two, using the `dot` function, since it can be applied
# to matrices of different sizes; the only requirement is that the number of columns of the first matrix equals the number
# of rows of the second matrix. We perform the multiplication and print the result to verify it.
matrix_product_three_valid = matrix_4x5_one.dot(matrix_5x5_three)
print("The result of multiplying these two matrices using the 'dot' function is: \n" + str(matrix_product_three_valid))
# However, if we try to multiply both matrices with method one, i.e., using the "*" operator, we get an error,
# since with two matrices of different sizes the element-wise multiplication cannot be carried out because some elements
# have no counterpart. We can check this by running the following code:
matrix_product_four_no_valid = matrix_4x5_one * matrix_5x5_three
###Output
_____no_output_____ |
lab_12/cfds_lab_12.ipynb | ###Markdown
Lab 12 - "Deep Learning - Convolutional Neural Networks"Chartered Financial Data Scientist (CFDS), Autumn Term 2020 In the last lab you learned about how to utilize a **supervised** (deep) machine learning technique namely **Artificial Neural Networks (ANNs)** to classify tiny images of handwritten digits contained in the MNIST dataset. In this lab, we will learn how to enhance ANNs using PyTorch to classify even more complex images. Therefore, we use a special type of deep neural network referred to **Convolutional Neural Networks (CNNs)**. CNNs encompass the ability to take advantage of the hierarchical pattern in data and assemble more complex patterns using smaller and simpler patterns. Therefore, CNNs are capable to learn a set of discriminative features 'pattern' and subsequently utilize the learned pattern to classify the content of an image.We will again use the functionality of the `PyTorch` library to implement and train an CNN based neural network. The network will be trained on a set of **tiny images of objects** to learn a model of the image content. Upon successful training, we will utilize the learned CNN model to classify so far unseen tiny images into distinct categories such as aeroplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks. The figure below illustrates a high-level view on the machine learning process we aim to establish in this lab. (Image of the CNN architecture created via http://alexlenail.me/)As always, pls. don't hesitate to ask all your questions either during the lab, post them in our NextThought lab discussion forum (https://financial-data-science.nextthought.io), or send us an email (using our fds.ai email addresses). Lab Objectives: After today's lab, you should be able to:> 1. Understand the basic concepts, intuitions and major building blocks of **Convolutional Neural Networks (CNNs)**.> 2. Know how to **implement and to train a CNN** to learn a model of tiny image data.> 3. Understand how to apply such a learned model to **classify images** images based on their content into distinct categories.> 4. Know how to **interpret and visualize** the model's classification results. Setup of the Jupyter Notebook Environment Similar to the previous labs, we need to import a couple of Python libraries that allow for data analysis and data visualization. We will mostly use the `PyTorch`, `Numpy`, `Sklearn`, `Matplotlib`, `Seaborn` and a few utility libraries throughout this lab:
###Code
# import standard python libraries
import os, urllib, io
from datetime import datetime
import numpy as np
###Output
_____no_output_____
###Markdown
Import Python machine / deep learning libraries:
###Code
# import the PyTorch deep learning library
import torch, torchvision
import torch.nn.functional as F
from torch import nn, optim
from torch.autograd import Variable
###Output
_____no_output_____
###Markdown
Import the sklearn classification metrics:
###Code
# import sklearn classification evaluation library
from sklearn import metrics
from sklearn.metrics import classification_report, confusion_matrix
###Output
_____no_output_____
###Markdown
Import Python plotting libraries:
###Code
# import matplotlib, seaborn, and PIL data visualization libary
import matplotlib.pyplot as plt
import seaborn as sns
from PIL import Image
###Output
_____no_output_____
###Markdown
Enable notebook matplotlib inline plotting:
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
Create notebook folder structure to store the data as well as the trained neural network models:
###Code
if not os.path.exists('./data'): os.makedirs('./data') # create data directory
if not os.path.exists('./models'): os.makedirs('./models') # create trained models directory
###Output
_____no_output_____
###Markdown
Set a random `seed` value to obtain reproducible results:
###Code
# init deterministic seed
seed_value = 1234
np.random.seed(seed_value) # set numpy seed
torch.manual_seed(seed_value) # set pytorch seed CPU
###Output
_____no_output_____
###Markdown
Enable GPU computing by setting the `device` flag and init a `CUDA` seed:
###Code
# set cpu or gpu enabled device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu').type
# init deterministic GPU seed
torch.cuda.manual_seed(seed_value)
# log type of device enabled
print('[LOG] notebook with {} computation enabled'.format(str(device)))
###Output
_____no_output_____
###Markdown
Let's determine if we have access to a GPU provided by e.g. Google's COLab environment:
###Code
!nvidia-smi
###Output
_____no_output_____
###Markdown
1. Dataset Download and Data Assessment The **CIFAR-10 database** (**C**anadian **I**nstitute **F**or **A**dvanced **R**esearch) is a collection of images that are commonly used to train machine learning and computer vision algorithms. The database is widely used to conduct computer vision research using machine learning and deep learning methods: (Source: https://www.kaggle.com/c/cifar-10) Further details on the dataset can be obtained via: *Krizhevsky, A., 2009. "Learning Multiple Layers of Features from Tiny Images", ( https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf )."* The CIFAR-10 database contains **60,000 color images** (50,000 training images and 10,000 validation images). The size of each image is 32 by 32 pixels. The collection of images encompasses 10 different classes that represent airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks. Let's define the distinct classes for further analytics:
###Code
cifar10_classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
###Output
_____no_output_____
###Markdown
Thereby the dataset contains 6,000 images for each of the ten classes. CIFAR-10 is a straightforward dataset that can be used to teach a computer how to recognize objects in images. Let's download, transform and inspect the training images of the dataset. Therefore, we will first define the directory in which we aim to store the training data:
###Code
train_path = './data/train_cifar10'
###Output
_____no_output_____
###Markdown
Now, let's download the training data accordingly:
###Code
# define pytorch transformation into tensor format
transf = torchvision.transforms.Compose([torchvision.transforms.ToTensor(), torchvision.transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
# download and transform training images
cifar10_train_data = torchvision.datasets.CIFAR10(root=train_path, train=True, transform=transf, download=True)
###Output
_____no_output_____
###Markdown
Verify the volume of training images downloaded:
###Code
# get the length of the training data
len(cifar10_train_data)
###Output
_____no_output_____
###Markdown
Furthermore, let's investigate a couple of the training images:
###Code
# set (random) image id
image_id = 1800
# retrieve image exhibiting the image id
cifar10_train_data[image_id]
###Output
_____no_output_____
###Markdown
Ok, that doesn't seem easily interpretable ;) Let's first separate the image from its label information:
###Code
cifar10_train_image, cifar10_train_label = cifar10_train_data[image_id]
###Output
_____no_output_____
###Markdown
Let's also briefly have a look at the shape of the training image:
###Code
cifar10_train_image.shape
###Output
_____no_output_____
###Markdown
Great, now we are able to visually inspect our sample image:
###Code
# define tensor to image transformation
trans = torchvision.transforms.ToPILImage()
# set image plot title
plt.title('Example: {}, Label: "{}"'.format(str(image_id), str(cifar10_classes[cifar10_train_label])))
# un-normalize cifar 10 image sample
cifar10_train_image_plot = cifar10_train_image / 2.0 + 0.5
# plot 10 image sample
plt.imshow(trans(cifar10_train_image_plot))
###Output
_____no_output_____
###Markdown
Fantastic, right? Let's now decide on where we want to store the evaluation data:
###Code
eval_path = './data/eval_cifar10'
###Output
_____no_output_____
###Markdown
And download the evaluation data accordingly:
###Code
# define pytorch transformation into tensor format
transf = torchvision.transforms.Compose([torchvision.transforms.ToTensor(), torchvision.transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
# download and transform validation images
cifar10_eval_data = torchvision.datasets.CIFAR10(root=eval_path, train=False, transform=transf, download=True)
###Output
_____no_output_____
###Markdown
Verify the volume of validation images downloaded:
###Code
# get the length of the evaluation data
len(cifar10_eval_data)
###Output
_____no_output_____
###Markdown
2. Neural Network Implementation In this section, we will implement the architecture of the **neural network** we aim to utilize to learn a model that is capable of classifying the 32x32 pixel CIFAR-10 images according to the objects contained in each image. However, before we start the implementation, let's briefly revisit the process to be established. The following cartoon provides a birds-eye view: Our CNN, which we name 'CIFAR10Net' and aim to implement, consists of two **convolutional layers** and three **fully-connected layers**. In general, convolutional layers are specifically designed to learn a set of **high-level features** ("patterns") in the processed images, e.g., tiny edges and shapes. The fully-connected layers utilize the learned features to learn **non-linear feature combinations** that allow for highly accurate classification of the image content into the different image classes of the CIFAR-10 dataset, such as birds, aeroplanes, and horses. Let's implement the network architecture and subsequently have a more in-depth look into its architectural details:
###Code
# implement the CIFAR10Net network architecture
class CIFAR10Net(nn.Module):
# define the class constructor
def __init__(self):
# call super class constructor
super(CIFAR10Net, self).__init__()
# specify convolution layer 1
self.conv1 = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5, stride=1, padding=0)
# define max-pooling layer 1
self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2)
# specify convolution layer 2
self.conv2 = nn.Conv2d(in_channels=6, out_channels=16, kernel_size=5, stride=1, padding=0)
# define max-pooling layer 2
self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2)
# specify fc layer 1 - in 16 * 5 * 5, out 120
self.linear1 = nn.Linear(16 * 5 * 5, 120, bias=True) # the linearity W*x+b
self.relu1 = nn.ReLU(inplace=True) # the non-linearity
# specify fc layer 2 - in 120, out 84
self.linear2 = nn.Linear(120, 84, bias=True) # the linearity W*x+b
        self.relu2 = nn.ReLU(inplace=True) # the non-linearity
# specify fc layer 3 - in 84, out 10
self.linear3 = nn.Linear(84, 10) # the linearity W*x+b
# add a softmax to the last layer
self.logsoftmax = nn.LogSoftmax(dim=1) # the softmax
# define network forward pass
def forward(self, images):
# high-level feature learning via convolutional layers
# define conv layer 1 forward pass
x = self.pool1(self.relu1(self.conv1(images)))
# define conv layer 2 forward pass
x = self.pool2(self.relu2(self.conv2(x)))
# feature flattening
# reshape image pixels
x = x.view(-1, 16 * 5 * 5)
# combination of feature learning via non-linear layers
# define fc layer 1 forward pass
x = self.relu1(self.linear1(x))
# define fc layer 2 forward pass
x = self.relu2(self.linear2(x))
# define layer 3 forward pass
x = self.logsoftmax(self.linear3(x))
# return forward pass result
return x
###Output
_____no_output_____
###Markdown
You may have noticed that we applied two more layers (compared to the MNIST example described in the last lab) before the fully-connected layers. These layers are referred to as **convolutional** layers and are usually comprised of three operations, (1) **convolution**, (2) **non-linearity**, and (3) **max-pooling**. Those operations are usually executed in sequential order during the forward pass through a convolutional layer. In the following, we will have a detailed look into the functionality and number of parameters in each layer. We will start with providing images of 3x32x32 dimensions to the network, i.e., the three channels (red, green, blue) of an image each of size 32x32 pixels. 2.1. High-Level Feature Learning by Convolutional Layers Let's first have a look into the convolutional layers of the network as illustrated in the following: **First Convolutional Layer**: The first convolutional layer expects three input channels and will convolve six filters each of size 3x5x5. Let's briefly revisit how we can perform a convolutional operation on a given image. For that, we need to define a kernel which is a matrix of size 5x5, for example. To perform the convolution operation, we slide the kernel along the image horizontally and vertically and obtain the dot product of the kernel and the pixel values of the image inside the kernel (the 'receptive field' of the kernel). The following illustration shows an example of a discrete convolution: The left grid is called the input (an image or feature map). The middle grid, referred to as kernel, slides across the input feature map (or image). At each location, the product between each element of the kernel and the input element it overlaps is computed, and the results are summed up to obtain the output in the current location. In general, a discrete convolution is mathematically expressed by: $y(m, n) = x(m, n) * h(m, n) = \sum^{m}_{j=0} \sum^{n}_{i=0} x(i, j) * h(m-i, n-j)$, where $x$ denotes the input image or feature map, $h$ the applied kernel, and $y$ the output. When performing the convolution operation, the 'stride' defines the number of pixels to pass at a time when sliding the kernel over the input, while the 'padding' adds a number of pixels to the input image (or feature map) to ensure that the output has the same shape as the input. Let's have a look at another animated example: (Source: https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53) In our implementation padding is set to 0 and stride is set to 1. As a result, the output size of the convolutional layer becomes 6x28x28, because (32 - 5) + 1 = 28. This layer exhibits ((5 x 5 x 3) + 1) x 6 = 456 parameters. **First Max-Pooling Layer:** The max-pooling process is a sample-based discretization operation. The objective is to down-sample an input representation (image, hidden-layer output matrix, etc.), reducing its dimensionality and allowing for assumptions to be made about features contained in the sub-regions binned. To conduct such an operation, we again need to define a kernel. Max-pooling kernels are usually a tiny matrix, e.g., of size 2x2. To perform the max-pooling operation, we slide the kernel along the image horizontally and vertically (similarly to a convolution) and compute the maximum pixel value of the image (or feature map) inside the kernel (the receptive field of the kernel).
The following illustration shows an example of a max-pooling operation: The left grid is called the input (an image or feature map). The middle grid, referred to as kernel, slides across the input feature map (or image). We use a stride of 2, meaning the step distance for stepping over our input will be 2 pixels, and the regions won't overlap. At each location, the maximum value of the input region that the kernel overlaps is computed and written to the output at the current location. In our implementation, we do max-pooling with a 2x2 kernel and stride 2; this effectively drops the original image size from 6x28x28 to 6x14x14. Let's have a look at an exemplary visualization of 64 features learnt in the first convolutional layer on the CIFAR-10 dataset. (Source: Yu, Dingjun, Hanli Wang, Peiqiu Chen, and Zhihua Wei. **"Mixed pooling for convolutional neural networks."** In International conference on rough sets and knowledge technology, pp. 364-375. Springer, Cham, 2014) **Second Convolutional Layer:** The second convolutional layer expects 6 input channels and will convolve 16 filters each of size 6x5x5. Since padding is set to 0 and stride is set to 1, the output size is 16x10x10, because (14 - 5) + 1 = 10. This layer therefore has ((5 x 5 x 6) + 1) x 16 = 2,416 parameters. **Second Max-Pooling Layer:** The second down-sampling layer uses max-pooling with a 2x2 kernel and stride set to 2. This effectively drops the size from 16x10x10 to 16x5x5. 2.2. Feature Flattening The output of the final max-pooling layer needs to be flattened so that we can connect it to a fully connected layer. This is achieved using the `torch.Tensor.view` method. Setting the parameter of the method to `-1` will automatically infer the number of rows required to handle the mini-batch size of the data. 2.3. Learning of Feature Combinations Let's now have a look into the non-linear layers of the network illustrated in the following: The first fully connected layer uses 'Rectified Linear Units' (ReLU) activation functions to learn potential nonlinear combinations of features. The layers are implemented similarly to the fifth lab. Therefore, we will only focus on the number of parameters of each fully-connected layer: **First Fully-Connected Layer:** The first fully-connected layer consists of 120 neurons and thus in total exhibits ((16 x 5 x 5) + 1) x 120 = 48,120 parameters. **Second Fully-Connected Layer:** The output of the first fully-connected layer is then transferred to the second fully-connected layer. The layer consists of 84 neurons equipped with ReLU activation functions and in total exhibits (120 + 1) x 84 = 10,164 parameters. The output of the second fully-connected layer is then transferred to the output layer (third fully-connected layer). The output layer is equipped with a softmax (that you learned about in the previous lab 05) and is made up of ten neurons, one for each object class contained in the CIFAR-10 dataset. This layer exhibits (84 + 1) x 10 = 850 parameters. As a result, our CIFAR-10 convolutional neural network exhibits a total of 456 + 2,416 + 48,120 + 10,164 + 850 = 62,006 parameters. (Source: https://www.stefanfiott.com/machine-learning/cifar-10-classifier-using-cnn-in-pytorch/) Now that we have implemented our first Convolutional Neural Network, we are ready to instantiate a network model to be trained:
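To make these shape and parameter calculations tangible, the short sketch below (added for illustration only; it is not part of the original lab code, and the variable names are arbitrary) pushes a single dummy CIFAR-10 sized input through freshly initialized convolution and max-pooling layers and prints the resulting feature-map shapes as well as the per-layer parameter counts:
###Code
# added sketch: verify the feature-map shapes and parameter counts derived above
import torch
from torch import nn
dummy = torch.zeros(1, 3, 32, 32)             # a single dummy CIFAR-10 sized input
conv1 = nn.Conv2d(3, 6, kernel_size=5)        # first convolutional layer
pool = nn.MaxPool2d(kernel_size=2, stride=2)  # 2x2 max-pooling with stride 2
conv2 = nn.Conv2d(6, 16, kernel_size=5)       # second convolutional layer
out1 = pool(conv1(dummy))                     # expected shape: [1, 6, 14, 14]
out2 = pool(conv2(out1))                      # expected shape: [1, 16, 5, 5]
print(out1.shape, out2.shape)
# parameter counts: (5*5*3 + 1)*6 = 456 and (5*5*6 + 1)*16 = 2,416
print(sum(p.numel() for p in conv1.parameters()), sum(p.numel() for p in conv2.parameters()))
###Output
_____no_output_____
###Markdown
The printed shapes and counts should match the values derived above.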
###Code
model = CIFAR10Net()
###Output
_____no_output_____
###Markdown
Let's push the initialized `CIFAR10Net` model to the computing `device` that is enabled:
###Code
model = model.to(device)
###Output
_____no_output_____
###Markdown
Let's double check if our model was deployed to the GPU if available:
###Code
!nvidia-smi
###Output
_____no_output_____
###Markdown
Once the model is initialized we can visualize the model structure and review the implemented network architecture by execution of the following cell:
###Code
# print the initialized architectures
print('[LOG] CIFAR10Net architecture:\n\n{}\n'.format(model))
###Output
_____no_output_____
###Markdown
Looks like intended? Brilliant! Finally, let's have a look into the number of model parameters that we aim to train in the next steps of the notebook:
###Code
# init the number of model parameters
num_params = 0
# iterate over the distinct parameters
for param in model.parameters():
# collect number of parameters
num_params += param.numel()
# print the number of model paramters
print('[LOG] Number of to be trained CIFAR10Net model parameters: {}.'.format(num_params))
###Output
_____no_output_____
###Markdown
Ok, our "simple" CIFAR10Net model already encompasses an impressive number 62'006 model parameters to be trained. Now that we have implemented the CIFAR10Net, we are ready to train the network. However, before starting the training, we need to define an appropriate loss function. Remember, we aim to train our model to learn a set of model parameters $\theta$ that minimize the classification error of the true class $c^{i}$ of a given CIFAR-10 image $x^{i}$ and its predicted class $\hat{c}^{i} = f_\theta(x^{i})$ as faithfully as possible. In this lab we use (similarly to lab 05) the **'Negative Log-Likelihood (NLL)'** loss. During training the NLL loss will penalize models that result in a high classification error between the predicted class labels $\hat{c}^{i}$ and their respective true class label $c^{i}$. Now that we have implemented the CIFAR10Net, we are ready to train the network. Before starting the training, we need to define an appropriate loss function. Remember, we aim to train our model to learn a set of model parameters $\theta$ that minimize the classification error of the true class $c^{i}$ of a given CIFAR-10 image $x^{i}$ and its predicted class $\hat{c}^{i} = f_\theta(x^{i})$ as faithfully as possible. Let's instantiate the NLL via the execution of the following PyTorch command:
###Code
# define the optimization criterion / loss function
nll_loss = nn.NLLLoss()
###Output
_____no_output_____
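###Markdown
As a brief illustration of how this criterion interacts with the `LogSoftmax` output of our network (the following cell is an added sketch with arbitrary dummy values, not part of the original lab code): `nn.NLLLoss` expects log-probabilities together with the index of the true class, and the combination of `LogSoftmax` and `NLLLoss` is equivalent to applying PyTorch's `nn.CrossEntropyLoss` directly to the raw scores.
###Code
# added sketch: NLLLoss on log-probabilities vs. CrossEntropyLoss on raw scores
scores = torch.randn(2, 10)                    # raw scores for 2 samples and 10 classes
targets = torch.tensor([3, 7])                 # true class indices
log_probs = nn.LogSoftmax(dim=1)(scores)       # what our network's last layer produces
print(nn.NLLLoss()(log_probs, targets))
print(nn.CrossEntropyLoss()(scores, targets))  # yields the same value
###Output
_____no_output_____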
###Markdown
Let's also push the initialized `nll_loss` computation to the computing `device` that is enabled:
###Code
nll_loss = nll_loss.to(device)
###Output
_____no_output_____
###Markdown
Based on the loss magnitude of a certain mini-batch, PyTorch automatically computes the gradients. But even better, based on the gradient, the library also helps us in the optimization and update of the network parameters $\theta$. We will use the **Stochastic Gradient Descent (SGD) optimization** and set the `learning rate` to 0.001. In each mini-batch step, the optimizer will update the model parameter values $\theta$ according to the degree of classification error (the NLL loss).
###Code
# define learning rate and optimization strategy
learning_rate = 0.001
optimizer = optim.SGD(params=model.parameters(), lr=learning_rate)
###Output
_____no_output_____
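###Markdown
Conceptually, each call to `optimizer.step()` applies the plain SGD update rule $\theta \leftarrow \theta - \eta \cdot \nabla_{\theta} \mathcal{L}$ to every model parameter. The cell below is an added sketch that spells this update out manually; in the actual training loop the update is performed by the optimizer, not by this code:
###Code
# added sketch: what a single plain SGD step does to each model parameter
with torch.no_grad():
    for param in model.parameters():
        if param.grad is not None:
            param -= learning_rate * param.grad  # theta <- theta - lr * grad
###Output
_____no_output_____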
###Markdown
Now that we have successfully implemented and defined the three CNN building blocks, let's take some time to review the `CIFAR10Net` model definition as well as the `loss`. Please, read the above code and comments carefully and don't hesitate to let us know any questions you might have. 3. Training the Neural Network Model In this section, we will train our neural network model (as implemented in the section above) using the transformed images. More specifically, we will have a detailed look into the distinct training steps as well as how to monitor the training progress. 3.1. Preparing the Network Training So far, we have pre-processed the dataset, implemented the CNN and defined the classification error. Let's now start to train a corresponding model for **20 epochs** and a **mini-batch size of 4** CIFAR-10 images per batch. This implies that the whole dataset will be fed to the CNN 20 times in chunks of 4 images yielding **12,500 mini-batches** (50,000 training images / 4 images per mini-batch) per epoch. After the processing of each mini-batch, the parameters of the network will be updated.
###Code
# specify the training parameters
num_epochs = 20 # number of training epochs
mini_batch_size = 4 # size of the mini-batches
###Output
_____no_output_____
###Markdown
Furthermore, let's specify and instantiate a corresponding PyTorch data loader that feeds the image tensors to our neural network:
###Code
cifar10_train_dataloader = torch.utils.data.DataLoader(cifar10_train_data, batch_size=mini_batch_size, shuffle=True)
###Output
_____no_output_____
###Markdown
3.2. Running the Network Training Finally, we start training the model. The training procedure for each mini-batch is performed as follows: >1. do a forward pass through the CIFAR10Net network, >2. compute the negative log-likelihood classification error $\mathcal{L}^{NLL}_{\theta}(c^{i};\hat{c}^{i})$, >3. do a backward pass through the CIFAR10Net network, and >4. update the parameters of the network $f_\theta(\cdot)$.To ensure learning while training our CNN model, we will monitor whether the loss decreases with progressing training. Therefore, we obtain and evaluate the classification performance of the entire training dataset after each training epoch. Based on this evaluation, we can conclude on the training progress and whether the loss is converging (indicating that the model might not improve any further).The following elements of the network training code below should be given particular attention: >- `loss.backward()` computes the gradients based on the magnitude of the reconstruction loss,>- `optimizer.step()` updates the network parameters based on the gradient.
###Code
# init collection of training epoch losses
train_epoch_losses = []
# set the model in training mode
model.train()
# train the CIFAR10 model
for epoch in range(num_epochs):
# init collection of mini-batch losses
train_mini_batch_losses = []
    # iterate over all mini-batches
for i, (images, labels) in enumerate(cifar10_train_dataloader):
# push mini-batch data to computation device
images = images.to(device)
labels = labels.to(device)
# run forward pass through the network
output = model(images)
# reset graph gradients
model.zero_grad()
# determine classification loss
loss = nll_loss(output, labels)
# run backward pass
loss.backward()
        # update network parameters
optimizer.step()
        # collect mini-batch classification loss
        train_mini_batch_losses.append(loss.data.item())
    # determine mean mini-batch loss of epoch
train_epoch_loss = np.mean(train_mini_batch_losses)
# print epoch loss
now = datetime.utcnow().strftime("%Y%m%d-%H:%M:%S")
print('[LOG {}] epoch: {} train-loss: {}'.format(str(now), str(epoch), str(train_epoch_loss)))
# save model to local directory
model_name = 'cifar10_model_epoch_{}.pth'.format(str(epoch))
torch.save(model.state_dict(), os.path.join("./models", model_name))
    # collect mean training loss of epoch
train_epoch_losses.append(train_epoch_loss)
###Output
_____no_output_____
###Markdown
Upon successful training, let's visualize and inspect the training loss per epoch:
###Code
# prepare plot
fig = plt.figure()
ax = fig.add_subplot(111)
# add grid
ax.grid(linestyle='dotted')
# plot the training epochs vs. the epochs' classification error
ax.plot(np.array(range(1, len(train_epoch_losses)+1)), train_epoch_losses, label='epoch loss (blue)')
# add axis legends
ax.set_xlabel("[training epoch $e_i$]", fontsize=10)
ax.set_ylabel("[Classification Error $\mathcal{L}^{NLL}$]", fontsize=10)
# set plot legend
plt.legend(loc="upper right", numpoints=1, fancybox=True)
# add plot title
plt.title('Training Epochs $e_i$ vs. Classification Error $L^{NLL}$', fontsize=10);
###Output
_____no_output_____
###Markdown
Ok, fantastic. The training error converges nicely. We could definitely train the network a couple more epochs until the error converges. But let's stay with the 20 training epochs for now and continue with evaluating our trained model. 4. Evaluation of the Trained Neural Network Model Prior to evaluating our model, let's load the best performing model. Remember, that we stored a snapshot of the model after each training epoch to our local model directory. We will now load the last snapshot saved.
###Code
# restore pre-trained model snapshot
best_model_name = "cifar10_model_epoch_19.pth"
# init pre-trained model class
best_model = CIFAR10Net()
# load pre-trained models
best_model.load_state_dict(torch.load(os.path.join("models", best_model_name), map_location=torch.device('cpu')))
###Output
_____no_output_____
###Markdown
Let's inspect if the model was loaded successfully:
###Code
# set model in evaluation mode
best_model.eval()
###Output
_____no_output_____
###Markdown
In order to evaluate our trained model, we need to feed the CIFAR10 images reserved for evaluation (the images that we didn't use as part of the training process) through the model. Therefore, let's again define a corresponding PyTorch data loader that feeds the image tensors to our neural network:
###Code
cifar10_eval_dataloader = torch.utils.data.DataLoader(cifar10_eval_data, batch_size=10000, shuffle=False)
###Output
_____no_output_____
###Markdown
We will now evaluate the trained model using the same mini-batch approach as we did when training the network and derive the mean negative log-likelihood loss of all mini-batches processed in an epoch:
###Code
# init collection of mini-batch losses
eval_mini_batch_losses = []
# iterate over all mini-batches
for i, (images, labels) in enumerate(cifar10_eval_dataloader):
# run forward pass through the network
output = best_model(images)
# determine classification loss
loss = nll_loss(output, labels)
    # collect mini-batch classification loss
    eval_mini_batch_losses.append(loss.data.item())
# determine mean mini-batch loss over the evaluation data
eval_loss = np.mean(eval_mini_batch_losses)
# print epoch loss
now = datetime.utcnow().strftime("%Y%m%d-%H:%M:%S")
print('[LOG {}] eval-loss: {}'.format(str(now), str(eval_loss)))
###Output
_____no_output_____
###Markdown
Ok, great. The evaluation loss looks in-line with our training loss. Let's now inspect a few sample predictions to get an impression of the model quality. Therefore, we will again pick a random image of our evaluation dataset and retrieve its PyTorch tensor as well as the corresponding label:
###Code
# set (random) image id
image_id = 777
# retrieve image exhibiting the image id
cifar10_eval_image, cifar10_eval_label = cifar10_eval_data[image_id]
###Output
_____no_output_____
###Markdown
Let's now inspect the true class of the image we selected:
###Code
cifar10_classes[cifar10_eval_label]
###Output
_____no_output_____
###Markdown
Ok, the label tells us which of the ten object classes the randomly selected image should contain. Let's inspect the image accordingly:
###Code
# define tensor to image transformation
trans = torchvision.transforms.ToPILImage()
# set image plot title
plt.title('Example: {}, Label: {}'.format(str(image_id), str(cifar10_classes[cifar10_eval_label])))
# un-normalize cifar 10 image sample
cifar10_eval_image_plot = cifar10_eval_image / 2.0 + 0.5
# plot cifar 10 image sample
plt.imshow(trans(cifar10_eval_image_plot))
###Output
_____no_output_____
###Markdown
Ok, let's compare the true label with the prediction of our model:
###Code
best_model(cifar10_eval_image.unsqueeze(0))
###Output
_____no_output_____
###Markdown
We can also determine the most probable class predicted by the model:
###Code
cifar10_classes[torch.argmax(best_model(cifar10_eval_image.unsqueeze(0)), dim=1).item()]
###Output
_____no_output_____
###Markdown
Let's now obtain the predictions for all the CIFAR-10 images of the evaluation data:
###Code
predictions = torch.argmax(best_model(next(iter(cifar10_eval_dataloader))[0]), dim=1)
###Output
_____no_output_____
###Markdown
Furthermore, let's obtain the overall classification accuracy:
###Code
metrics.accuracy_score(cifar10_eval_data.targets, predictions.detach())
###Output
_____no_output_____
###Markdown
Let's also inspect the confusion matrix of the model predictions to determine major sources of misclassification:
###Code
# determine classification matrix of the predicted and target classes
mat = confusion_matrix(cifar10_eval_data.targets, predictions.detach())
# plot corresponding confusion matrix
sns.heatmap(mat.T, square=True, annot=True, fmt='d', cbar=False, cmap='YlOrRd_r', xticklabels=cifar10_classes, yticklabels=cifar10_classes)
plt.title('CIFAR-10 classification matrix')
plt.xlabel('[true label]')
plt.ylabel('[predicted label]');
###Output
_____no_output_____
###Markdown
Ok, we can easily see that our current model confuses images of cats and dogs as well as images of trucks and cars quite often. This is again not surprising since those image categories exhibit a high semantic and therefore visual similarity. Exercises: We recommend you try the following exercises as part of the lab:**1. Train the network a couple more epochs and evaluate its prediction accuracy.**> Increase the number of training epochs up to 50 epochs and re-run the network training. Load and evaluate the model exhibiting the lowest training loss. What kind of behavior in terms of prediction accuracy can be observed with increasing the training epochs?
###Code
# ***************************************************
# INSERT YOUR CODE HERE
# ***************************************************
###Output
_____no_output_____
###Markdown
**2. Evaluation of "shallow" vs. "deep" neural network architectures.**> In addition to the architecture of the lab notebook, evaluate further (more shallow as well as more deep) neural network architectures by (1) either removing or adding layers to the network and/or (2) increasing/decreasing the number of neurons per layer. Train a model (using the architectures you selected) for at least 50 training epochs. Analyze the prediction performance of the trained models in terms of training time and prediction accuracy.
###Code
# ***************************************************
# INSERT YOUR CODE HERE
# ***************************************************
###Output
_____no_output_____
###Markdown
Lab Summary: In this lab, a step-by-step introduction into the **design, implementation, training and evaluation** of convolutional neural networks (CNNs) to classify tiny images of objects is presented. The code and exercises presented in this lab may serve as a starting point for developing more complex, deeper and more tailored CNNs. You may want to execute the content of your lab outside of the Jupyter notebook environment, e.g. on a compute node or a server. The cell below converts the lab notebook into a standalone and executable Python script. Pls. note that to convert the notebook, you need to install Python's **nbconvert** library and its extensions:
###Code
# installing the nbconvert library
!pip install nbconvert
!pip install jupyter_contrib_nbextensions
###Output
_____no_output_____
###Markdown
Let's now convert the Jupyter notebook into a plain Python script:
###Code
!jupyter nbconvert --to script cfds_lab_12.ipynb
###Output
_____no_output_____ |
assignment2/TensorFlow_2018.ipynb | ###Markdown
What's this TensorFlow business?You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, TensorFlow (or PyTorch, if you switch over to that notebook) What is it?TensorFlow is a system for executing computational graphs over Tensor objects, with native support for performing backpropogation for its Variables. In it, we work with Tensors which are n-dimensional arrays analogous to the numpy ndarray. Why?* Our code will now run on GPUs! Much faster training. Writing your own modules to run on GPUs is beyond the scope of this class, unfortunately.* We want you to be ready to use one of these frameworks for your project so you can experiment more efficiently than if you were writing every feature you want to use by hand. * We want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your lives a lot easier, and now that you understand their guts, you are free to use them :) * We want you to be exposed to the sort of deep learning code you might run into in academia or industry. How will I learn TensorFlow?TensorFlow has many excellent tutorials available, including those from [Google themselves](https://www.tensorflow.org/get_started/get_started).Otherwise, this notebook will walk you through much of what you need to do to train models in TensorFlow. See the end of the notebook for some links to helpful tutorials if you want to learn more or need further clarification on topics that aren't fully explained here. Table of ContentsThis notebook has 5 parts. We will walk through TensorFlow at three different levels of abstraction, which should help you better understand it and prepare you for working on your project.1. Preparation: load the CIFAR-10 dataset.2. Barebone TensorFlow: we will work directly with low-level TensorFlow graphs. 3. Keras Model API: we will use `tf.keras.Model` to define arbitrary neural network architecture. 4. Keras Sequential API: we will use `tf.keras.Sequential` to define a linear feed-forward network very conveniently. 5. CIFAR-10 open-ended challenge: please implement your own network to get as high accuracy as possible on CIFAR-10. You can experiment with any layer, optimizer, hyperparameters or other advanced features. Here is a table of comparison:| API | Flexibility | Convenience ||---------------|-------------|-------------|| Barebone | High | Low || `tf.keras.Model` | High | Medium || `tf.keras.Sequential` | Low | High | Part I: PreparationFirst, we load the CIFAR-10 dataset. This might take a few minutes to download the first time you run it, but after that the files should be cached on disk and loading should be faster.In previous parts of the assignment we used CS231N-specific code to download and read the CIFAR-10 dataset; however the `tf.keras.datasets` package in TensorFlow provides prebuilt utility functions for loading many common datasets.For the purposes of this assignment we will still write our own code to preprocess the data and iterate through it in minibatches. 
The `tf.data` package in TensorFlow provides tools for automating this process, but working with this package adds extra complication and is beyond the scope of this notebook. However, using `tf.data` can be much more efficient than the simple approach used in this notebook, so you should consider using it for your project.
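For reference, a minimal sketch of what such a `tf.data` input pipeline could look like is shown below (an added illustration only, using small dummy arrays and TF 1.x style iterators; the rest of this notebook keeps the simple `Dataset` helper class defined later):
###Code
# added sketch: a minimal tf.data input pipeline (not used in the rest of this notebook)
import numpy as np
import tensorflow as tf
dummy_images = np.random.randn(256, 32, 32, 3).astype(np.float32)   # dummy image data
dummy_labels = np.random.randint(0, 10, size=256).astype(np.int32)  # dummy labels
dataset = tf.data.Dataset.from_tensor_slices((dummy_images, dummy_labels))
dataset = dataset.shuffle(buffer_size=256).batch(64)
iterator = dataset.make_one_shot_iterator()  # TF 1.x style iterator
next_batch = iterator.get_next()
###Output
_____no_output_____
###Markdown
With that noted, the rest of the preparation proceeds with the simple approach described above.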
###Code
import os
import tensorflow as tf
import numpy as np
import math
import timeit
import matplotlib.pyplot as plt
%matplotlib inline
def load_cifar10(num_training=49000, num_validation=1000, num_test=10000):
"""
Fetch the CIFAR-10 dataset from the web and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 dataset and use appropriate data types and shapes
cifar10 = tf.keras.datasets.cifar10.load_data()
(X_train, y_train), (X_test, y_test) = cifar10
X_train = np.asarray(X_train, dtype=np.float32)
y_train = np.asarray(y_train, dtype=np.int32).flatten()
X_test = np.asarray(X_test, dtype=np.float32)
y_test = np.asarray(y_test, dtype=np.int32).flatten()
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean pixel and divide by std
mean_pixel = X_train.mean(axis=(0, 1, 2), keepdims=True)
std_pixel = X_train.std(axis=(0, 1, 2), keepdims=True)
X_train = (X_train - mean_pixel) / std_pixel
X_val = (X_val - mean_pixel) / std_pixel
X_test = (X_test - mean_pixel) / std_pixel
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
NHW = (0, 1, 2)
X_train, y_train, X_val, y_val, X_test, y_test = load_cifar10()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape, y_train.dtype)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
###Output
Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
170500096/170498071 [==============================] - 50s 0us/step
Train data shape: (49000, 32, 32, 3)
Train labels shape: (49000,) int32
Validation data shape: (1000, 32, 32, 3)
Validation labels shape: (1000,)
Test data shape: (10000, 32, 32, 3)
Test labels shape: (10000,)
###Markdown
Preparation: Dataset objectFor our own convenience we'll define a lightweight `Dataset` class which lets us iterate over data and labels. This is not the most flexible or most efficient way to iterate through data, but it will serve our purposes.
###Code
class Dataset(object):
def __init__(self, X, y, batch_size, shuffle=False):
"""
Construct a Dataset object to iterate over data X and labels y
Inputs:
- X: Numpy array of data, of any shape
- y: Numpy array of labels, of any shape but with y.shape[0] == X.shape[0]
- batch_size: Integer giving number of elements per minibatch
- shuffle: (optional) Boolean, whether to shuffle the data on each epoch
"""
assert X.shape[0] == y.shape[0], 'Got different numbers of data and labels'
self.X, self.y = X, y
self.batch_size, self.shuffle = batch_size, shuffle
def __iter__(self):
N, B = self.X.shape[0], self.batch_size
idxs = np.arange(N)
if self.shuffle:
np.random.shuffle(idxs)
return iter( (self.X[i:i+B], self.y[i:i+B]) for i in range(0, N, B) )
train_dset = Dataset(X_train, y_train, batch_size=64, shuffle=True)
val_dset = Dataset(X_val, y_val, batch_size=64, shuffle=False)
test_dset = Dataset(X_test, y_test, batch_size=64)
# We can iterate through a dataset like this:
for t, (x, y) in enumerate(train_dset):
print(t, x.shape, y.shape)
if t > 5: break
###Output
0 (64, 32, 32, 3) (64,)
1 (64, 32, 32, 3) (64,)
2 (64, 32, 32, 3) (64,)
3 (64, 32, 32, 3) (64,)
4 (64, 32, 32, 3) (64,)
5 (64, 32, 32, 3) (64,)
6 (64, 32, 32, 3) (64,)
###Markdown
You can optionally **use a GPU by setting the flag to True below**. It's not necessary to use a GPU for this assignment; if you are working on Google Cloud then we recommend that you do not use a GPU, as it will be significantly more expensive.
###Code
# Set up some global variables
USE_GPU = False
if USE_GPU:
device = '/device:GPU:0'
else:
device = '/cpu:0'
# Constant to control how often we print when training models
print_every = 100
print('Using device: ', device)
###Output
Using device: /cpu:0
###Markdown
Part II: Barebone TensorFlowTensorFlow ships with various high-level APIs which make it very convenient to define and train neural networks; we will cover some of these constructs in Part III and Part IV of this notebook. In this section we will start by building a model with basic TensorFlow constructs to help you better understand what's going on under the hood of the higher-level APIs.TensorFlow is primarily a framework for working with **static computational graphs**. Nodes in the computational graph are Tensors which will hold n-dimensional arrays when the graph is run; edges in the graph represent functions that will operate on Tensors when the graph is run to actually perform useful computation.This means that a typical TensorFlow program is written in two distinct phases:1. Build a computational graph that describes the computation that you want to perform. This stage doesn't actually perform any computation; it just builds up a symbolic representation of your computation. This stage will typically define one or more `placeholder` objects that represent inputs to the computational graph.2. Run the computational graph many times. Each time the graph is run you will specify which parts of the graph you want to compute, and pass a `feed_dict` dictionary that will give concrete values to any `placeholder`s in the graph. TensorFlow warmup: Flatten FunctionWe can see this in action by defining a simple `flatten` function that will reshape image data for use in a fully-connected network.In TensorFlow, data for convolutional feature maps is typically stored in a Tensor of shape N x H x W x C where:- N is the number of datapoints (minibatch size)- H is the height of the feature map- W is the width of the feature map- C is the number of channels in the feature mapThis is the right way to represent the data when we are doing something like a 2D convolution, that needs spatial understanding of where the intermediate features are relative to each other. When we use fully connected affine layers to process the image, however, we want each datapoint to be represented by a single vector -- it's no longer useful to segregate the different channels, rows, and columns of the data. So, we use a "flatten" operation to collapse the `H x W x C` values per representation into a single long vector. The flatten function below first reads in the value of N from a given batch of data, and then returns a "view" of that data. "View" is analogous to numpy's "reshape" method: it reshapes x's dimensions to be N x ??, where ?? is allowed to be anything (in this case, it will be H x W x C, but we don't need to specify that explicitly). **NOTE**: TensorFlow and PyTorch differ on the default Tensor layout; TensorFlow uses N x H x W x C but PyTorch uses N x C x H x W.
###Code
def flatten(x):
"""
Input:
- TensorFlow Tensor of shape (N, D1, ..., DM)
Output:
- TensorFlow Tensor of shape (N, D1 * ... * DM)
"""
N = tf.shape(x)[0]
return tf.reshape(x, (N, -1))
def test_flatten():
# Clear the current TensorFlow graph.
tf.reset_default_graph()
# Stage I: Define the TensorFlow graph describing our computation.
# In this case the computation is trivial: we just want to flatten
# a Tensor using the flatten function defined above.
# Our computation will have a single input, x. We don't know its
# value yet, so we define a placeholder which will hold the value
# when the graph is run. We then pass this placeholder Tensor to
# the flatten function; this gives us a new Tensor which will hold
# a flattened view of x when the graph is run. The tf.device
# context manager tells TensorFlow whether to place these Tensors
# on CPU or GPU.
with tf.device(device):
x = tf.placeholder(tf.float32)
x_flat = flatten(x)
# At this point we have just built the graph describing our computation,
# but we haven't actually computed anything yet. If we print x and x_flat
# we see that they don't hold any data; they are just TensorFlow Tensors
# representing values that will be computed when the graph is run.
print('x: ', type(x), x)
print('x_flat: ', type(x_flat), x_flat)
print()
# We need to use a TensorFlow Session object to actually run the graph.
with tf.Session() as sess:
# Construct concrete values of the input data x using numpy
x_np = np.arange(24).reshape((2, 3, 4))
print('x_np:\n', x_np, '\n')
# Run our computational graph to compute a concrete output value.
# The first argument to sess.run tells TensorFlow which Tensor
# we want it to compute the value of; the feed_dict specifies
# values to plug into all placeholder nodes in the graph. The
# resulting value of x_flat is returned from sess.run as a
# numpy array.
x_flat_np = sess.run(x_flat, feed_dict={x: x_np})
print('x_flat_np:\n', x_flat_np, '\n')
# We can reuse the same graph to perform the same computation
# with different input data
x_np = np.arange(12).reshape((2, 3, 2))
print('x_np:\n', x_np, '\n')
x_flat_np = sess.run(x_flat, feed_dict={x: x_np})
print('x_flat_np:\n', x_flat_np)
test_flatten()
###Output
x: <class 'tensorflow.python.framework.ops.Tensor'> Tensor("Placeholder:0", dtype=float32, device=/device:CPU:0)
x_flat: <class 'tensorflow.python.framework.ops.Tensor'> Tensor("Reshape:0", shape=(?, ?), dtype=float32, device=/device:CPU:0)
x_np:
[[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]]
[[12 13 14 15]
[16 17 18 19]
[20 21 22 23]]]
x_flat_np:
[[ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11.]
[12. 13. 14. 15. 16. 17. 18. 19. 20. 21. 22. 23.]]
x_np:
[[[ 0 1]
[ 2 3]
[ 4 5]]
[[ 6 7]
[ 8 9]
[10 11]]]
x_flat_np:
[[ 0. 1. 2. 3. 4. 5.]
[ 6. 7. 8. 9. 10. 11.]]
###Markdown
Barebones TensorFlow: Two-Layer NetworkWe will now implement our first neural network with TensorFlow: a fully-connected ReLU network with one hidden layer and no biases on the CIFAR10 dataset. For now we will use only low-level TensorFlow operators to define the network; later we will see how to use the higher-level abstractions provided by `tf.keras` to simplify the process.We will define the forward pass of the network in the function `two_layer_fc`; this will accept TensorFlow Tensors for the inputs and weights of the network, and return a TensorFlow Tensor for the scores. It's important to keep in mind that calling the `two_layer_fc` function **does not** perform any computation; instead it just sets up the computational graph for the forward computation. To actually run the network we need to enter a TensorFlow Session and feed data to the computational graph.After defining the network architecture in the `two_layer_fc` function, we will test the implementation by setting up and running a computational graph, feeding zeros to the network and checking the shape of the output.It's important that you read and understand this implementation.
###Code
def two_layer_fc(x, params):
"""
A fully-connected neural network; the architecture is:
fully-connected layer -> ReLU -> fully connected layer.
Note that we only need to define the forward pass here; TensorFlow will take
care of computing the gradients for us.
The input to the network will be a minibatch of data, of shape
(N, d1, ..., dM) where d1 * ... * dM = D. The hidden layer will have H units,
and the output layer will produce scores for C classes.
Inputs:
- x: A TensorFlow Tensor of shape (N, d1, ..., dM) giving a minibatch of
input data.
- params: A list [w1, w2] of TensorFlow Tensors giving weights for the
network, where w1 has shape (D, H) and w2 has shape (H, C).
Returns:
- scores: A TensorFlow Tensor of shape (N, C) giving classification scores
for the input data x.
"""
w1, w2 = params # Unpack the parameters
x = flatten(x) # Flatten the input; now x has shape (N, D)
h = tf.nn.relu(tf.matmul(x, w1)) # Hidden layer: h has shape (N, H)
scores = tf.matmul(h, w2) # Compute scores of shape (N, C)
return scores
def two_layer_fc_test():
# TensorFlow's default computational graph is essentially a hidden global
# variable. To avoid adding to this default graph when you rerun this cell,
# we clear the default graph before constructing the graph we care about.
tf.reset_default_graph()
hidden_layer_size = 42
# Scoping our computational graph setup code under a tf.device context
# manager lets us tell TensorFlow where we want these Tensors to be
# placed.
with tf.device(device):
# Set up a placeholder for the input of the network, and constant
# zero Tensors for the network weights. Here we declare w1 and w2
# using tf.zeros instead of tf.placeholder as we've seen before - this
# means that the values of w1 and w2 will be stored in the computational
# graph itself and will persist across multiple runs of the graph; in
# particular this means that we don't have to pass values for w1 and w2
# using a feed_dict when we eventually run the graph.
x = tf.placeholder(tf.float32)
w1 = tf.zeros((32 * 32 * 3, hidden_layer_size))
w2 = tf.zeros((hidden_layer_size, 10))
# Call our two_layer_fc function to set up the computational
# graph for the forward pass of the network.
scores = two_layer_fc(x, [w1, w2])
# Use numpy to create some concrete data that we will pass to the
# computational graph for the x placeholder.
x_np = np.zeros((64, 32, 32, 3))
with tf.Session() as sess:
# The calls to tf.zeros above do not actually instantiate the values
# for w1 and w2; the following line tells TensorFlow to instantiate
# the values of all Tensors (like w1 and w2) that live in the graph.
sess.run(tf.global_variables_initializer())
# Here we actually run the graph, using the feed_dict to pass the
# value to bind to the placeholder for x; we ask TensorFlow to compute
# the value of the scores Tensor, which it returns as a numpy array.
scores_np = sess.run(scores, feed_dict={x: x_np})
print(scores_np.shape)
two_layer_fc_test()
###Output
(64, 10)
###Markdown
Barebones TensorFlow: Three-Layer ConvNetHere you will complete the implementation of the function `three_layer_convnet` which will perform the forward pass of a three-layer convolutional network. The network should have the following architecture:1. A convolutional layer (with bias) with `channel_1` filters, each with shape `KW1 x KH1`, and zero-padding of two2. ReLU nonlinearity3. A convolutional layer (with bias) with `channel_2` filters, each with shape `KW2 x KH2`, and zero-padding of one4. ReLU nonlinearity5. Fully-connected layer with bias, producing scores for `C` classes.**HINT**: For convolutions: https://www.tensorflow.org/api_docs/python/tf/nn/conv2d; be careful with padding!**HINT**: For biases: https://www.tensorflow.org/performance/xla/broadcasting
###Code
def three_layer_convnet(x, params):
"""
A three-layer convolutional network with the architecture described below:
- A convolutional layer (with bias) with channel_1 filters,
each with shape KW1 x KH1, and zero-padding of two
- ReLU nonlinearity
- A convolutional layer (with bias) with channel_2 filters,
each with shape KW2 x KH2, and zero-padding of one
- ReLU nonlinearity
- Fully-connected layer with bias, producing scores for C classes.
Inputs:
- x: A TensorFlow Tensor of shape (N, H, W, 3) giving a minibatch of images
- params: A list of TensorFlow Tensors giving the weights and biases for the
network; should contain the following:
- conv_w1: TensorFlow Tensor of shape (KH1, KW1, 3, channel_1) giving
weights for the first convolutional layer.
- conv_b1: TensorFlow Tensor of shape (channel_1,) giving biases for the
first convolutional layer.
- conv_w2: TensorFlow Tensor of shape (KH2, KW2, channel_1, channel_2)
giving weights for the second convolutional layer
- conv_b2: TensorFlow Tensor of shape (channel_2,) giving biases for the
second convolutional layer.
- fc_w: TensorFlow Tensor giving weights for the fully-connected layer.
Can you figure out what the shape should be? (32 * 32 * 9, C)
- fc_b: TensorFlow Tensor giving biases for the fully-connected layer.
Can you figure out what the shape should be? (C,)
"""
conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b = params
scores = None
############################################################################
# TODO: Implement the forward pass for the three-layer ConvNet. #
############################################################################
x_pad = tf.pad(x, [[0,0], [2,2], [2,2], [0,0]]) # (N,H+4,W+4,3)
out_l1 = tf.nn.conv2d(x_pad, conv_w1, strides=[1,1,1,1], padding='VALID') + conv_b1
# H_l1 = (H - KH1 + 2*P)//S1 + 1
# W_l1 = (W - KW1 + 2*P)//S1 + 1
out_l1 = tf.nn.relu(out_l1) # (N,H_l1,W_l1,channel_1)
out_l1_pad = tf.pad(out_l1, [[0,0], [1,1], [1,1], [0,0]]) # (N,H_l1+2,W_l1+2,channel_1)
out_l2 = tf.nn.conv2d(out_l1_pad, conv_w2, strides=[1,1,1,1], padding='VALID') + conv_b2
# H_l2 = (H_l1 - KH2 + 2*P)//S2 + 1
# W_l2 = (W_l1 - KW2 + 2*P)//S2 + 1
out_l2 = tf.nn.relu(out_l2) # (N,H_l2,W_l2,channel_2)
out_l3 = flatten(out_l2) # (N,H_l2*W_l2*channel_2)
scores = tf.matmul(out_l3, fc_w) + fc_b # (N,C)
############################################################################
# END OF YOUR CODE #
############################################################################
return scores
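# Note (alternative sketch, not the required solution): with stride 1, letting
# tf.nn.conv2d pad for us via padding='SAME' is equivalent to the explicit tf.pad
# calls above, since 'SAME' pads a 5x5 kernel by 2 and a 3x3 kernel by 1, e.g.:
#   out_l1 = tf.nn.relu(tf.nn.conv2d(x, conv_w1, strides=[1, 1, 1, 1], padding='SAME') + conv_b1)
#   out_l2 = tf.nn.relu(tf.nn.conv2d(out_l1, conv_w2, strides=[1, 1, 1, 1], padding='SAME') + conv_b2)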
###Output
_____no_output_____
###Markdown
After defining the forward pass of the three-layer ConvNet above, run the following cell to test your implementation. Like the two-layer network, we use the `three_layer_convnet` function to set up the computational graph, then run the graph on a batch of zeros just to make sure the function doesn't crash and produces outputs of the correct shape.When you run this function, `scores_np` should have shape `(64, 10)`.
###Code
def three_layer_convnet_test():
tf.reset_default_graph()
with tf.device(device):
x = tf.placeholder(tf.float32)
conv_w1 = tf.zeros((5, 5, 3, 6))
conv_b1 = tf.zeros((6,))
conv_w2 = tf.zeros((3, 3, 6, 9))
conv_b2 = tf.zeros((9,))
fc_w = tf.zeros((32 * 32 * 9, 10))
fc_b = tf.zeros((10,))
params = [conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b]
scores = three_layer_convnet(x, params)
# Inputs to convolutional layers are 4-dimensional arrays with shape
# [batch_size, height, width, channels]
x_np = np.zeros((64, 32, 32, 3))
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
scores_np = sess.run(scores, feed_dict={x: x_np})
print('scores_np has shape: ', scores_np.shape)
with tf.device('/cpu:0'):
three_layer_convnet_test()
###Output
scores_np has shape: (64, 10)
###Markdown
Barebones TensorFlow: Training StepWe now define the `training_step` function which sets up the part of the computational graph that performs a single training step. This will take three basic steps:1. Compute the loss2. Compute the gradient of the loss with respect to all network weights3. Make a weight update step using (stochastic) gradient descent.Note that the step of updating the weights is itself an operation in the computational graph - the calls to `tf.assign_sub` in `training_step` return TensorFlow operations that mutate the weights when they are executed. There is an important bit of subtlety here - when we call `sess.run`, TensorFlow does not execute all operations in the computational graph; it only executes the minimal subset of the graph necessary to compute the outputs that we ask TensorFlow to produce. As a result, naively computing the loss would not cause the weight update operations to execute, since the operations needed to compute the loss do not depend on the output of the weight update. To fix this problem, we insert a **control dependency** into the graph, adding a duplicate `loss` node to the graph that does depend on the outputs of the weight update operations; this is the object that we actually return from the `training_step` function. As a result, asking TensorFlow to evaluate the value of the `loss` returned from `training_step` will also implicitly update the weights of the network using that minibatch of data.We need to use a few new TensorFlow functions to do all of this:- For computing the cross-entropy loss we'll use `tf.nn.sparse_softmax_cross_entropy_with_logits`: https://www.tensorflow.org/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits- For averaging the loss across a minibatch of data we'll use `tf.reduce_mean`:https://www.tensorflow.org/api_docs/python/tf/reduce_mean- For computing gradients of the loss with respect to the weights we'll use `tf.gradients`: https://www.tensorflow.org/api_docs/python/tf/gradients- We'll mutate the weight values stored in a TensorFlow Tensor using `tf.assign_sub`: https://www.tensorflow.org/api_docs/python/tf/assign_sub- We'll add a control dependency to the graph using `tf.control_dependencies`: https://www.tensorflow.org/api_docs/python/tf/control_dependencies
###Code
def training_step(scores, y, params, learning_rate):
"""
Set up the part of the computational graph which makes a training step.
Inputs:
- scores: TensorFlow Tensor of shape (N, C) giving classification scores for
the model.
- y: TensorFlow Tensor of shape (N,) giving ground-truth labels for scores;
y[i] == c means that c is the correct class for scores[i].
- params: List of TensorFlow Tensors giving the weights of the model
- learning_rate: Python scalar giving the learning rate to use for gradient
descent step.
Returns:
- loss: A TensorFlow Tensor of shape () (scalar) giving the loss for this
batch of data; evaluating the loss also performs a gradient descent step
on params (see above).
"""
# First compute the loss; the first line gives losses for each example in
# the minibatch, and the second averages the losses across the batch
losses = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=scores)
loss = tf.reduce_mean(losses)
# Compute the gradient of the loss with respect to each parameter of the
# network. This is a very magical function call: TensorFlow internally
# traverses the computational graph starting at loss backward to each element
# of params, and uses backpropagation to figure out how to compute gradients;
# it then adds new operations to the computational graph which compute the
# requested gradients, and returns a list of TensorFlow Tensors that will
# contain the requested gradients when evaluated.
grad_params = tf.gradients(loss, params)
# Make a gradient descent step on all of the model parameters.
new_weights = []
for w, grad_w in zip(params, grad_params):
new_w = tf.assign_sub(w, learning_rate * grad_w)
new_weights.append(new_w)
# Insert a control dependency so that evaluating the loss causes a weight
# update to happen; see the discussion above.
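# tf.identity below creates a duplicate of the loss node; because it is created
# inside the control_dependencies block, evaluating it forces TensorFlow to run
# the tf.assign_sub update ops first.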
with tf.control_dependencies(new_weights):
return tf.identity(loss)
###Output
_____no_output_____
###Markdown
Barebones TensorFlow: Training LoopNow we set up a basic training loop using low-level TensorFlow operations. We will train the model using stochastic gradient descent without momentum. The `training_step` function sets up the part of the computational graph that performs the training step, and the function `train_part2` iterates through the training data, making training steps on each minibatch, and periodically evaluates accuracy on the validation set.
###Code
def train_part2(model_fn, init_fn, learning_rate):
"""
Train a model on CIFAR-10.
Inputs:
- model_fn: A Python function that performs the forward pass of the model
using TensorFlow; it should have the following signature:
scores = model_fn(x, params) where x is a TensorFlow Tensor giving a
minibatch of image data, params is a list of TensorFlow Tensors holding
the model weights, and scores is a TensorFlow Tensor of shape (N, C)
giving scores for all elements of x.
- init_fn: A Python function that initializes the parameters of the model.
It should have the signature params = init_fn() where params is a list
of TensorFlow Tensors holding the (randomly initialized) weights of the
model.
- learning_rate: Python float giving the learning rate to use for SGD.
"""
# First clear the default graph
tf.reset_default_graph()
is_training = tf.placeholder(tf.bool, name='is_training')
# Set up the computational graph for performing forward and backward passes,
# and weight updates.
with tf.device(device):
# Set up placeholders for the data and labels
x = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int32, [None])
params = init_fn() # Initialize the model parameters
scores = model_fn(x, params) # Forward pass of the model
loss = training_step(scores, y, params, learning_rate)
# Now we actually run the graph many times using the training data
with tf.Session() as sess:
# Initialize variables that will live in the graph
sess.run(tf.global_variables_initializer())
for t, (x_np, y_np) in enumerate(train_dset):
# Run the graph on a batch of training data; recall that asking
# TensorFlow to evaluate loss will cause an SGD step to happen.
feed_dict = {x: x_np, y: y_np}
loss_np = sess.run(loss, feed_dict=feed_dict)
# Periodically print the loss and check accuracy on the val set
if t % print_every == 0:
print('Iteration %d, loss = %.4f' % (t, loss_np))
check_accuracy(sess, val_dset, x, scores, is_training)
###Output
_____no_output_____
###Markdown
Barebones TensorFlow: Check AccuracyWhen training the model we will use the following function to check the accuracy of our model on the training or validation sets. Note that this function accepts a TensorFlow Session object as one of its arguments; this is needed since the function must actually run the computational graph many times on the data that it loads from the dataset `dset`.Also note that we reuse the same computational graph both for taking training steps and for evaluating the model; however since the `check_accuracy` function never evaluates the `loss` value in the computational graph, the parts of the graph that update the weights do not execute on the validation data.
###Code
def check_accuracy(sess, dset, x, scores, is_training=None):
"""
Check accuracy on a classification model.
Inputs:
- sess: A TensorFlow Session that will be used to run the graph
- dset: A Dataset object on which to check accuracy
- x: A TensorFlow placeholder Tensor where input images should be fed
- scores: A TensorFlow Tensor representing the scores output from the
model; this is the Tensor we will ask TensorFlow to evaluate.
- is_training: Optional TensorFlow placeholder flagging training mode; it is fed
the value 0 here since accuracy is checked in evaluation mode.
Returns: Nothing, but prints the accuracy of the model
"""
num_correct, num_samples = 0, 0
for x_batch, y_batch in dset:
feed_dict = {x: x_batch, is_training: 0}
scores_np = sess.run(scores, feed_dict=feed_dict)
y_pred = scores_np.argmax(axis=1)
num_samples += x_batch.shape[0]
num_correct += (y_pred == y_batch).sum()
acc = float(num_correct) / num_samples
print('Got %d / %d correct (%.2f%%)' % (num_correct, num_samples, 100 * acc))
###Output
_____no_output_____
###Markdown
Barebones TensorFlow: InitializationWe'll use the following utility method to initialize the weight matrices for our models using Kaiming's normalization method.[1] He et al, *Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification*, ICCV 2015, https://arxiv.org/abs/1502.01852
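Concretely, for a weight Tensor with fan-in $n_{\text{in}}$ the entries are drawn as $W \sim \mathcal{N}(0,\, 2/n_{\text{in}})$; the helper below implements this by scaling a standard normal sample by $\sqrt{2/n_{\text{in}}}$ (for conv weights of shape `(KH, KW, C_in, C_out)` the fan-in is `KH * KW * C_in`).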
###Code
def kaiming_normal(shape):
if len(shape) == 2:
fan_in, fan_out = shape[0], shape[1]
elif len(shape) == 4:
fan_in, fan_out = np.prod(shape[:3]), shape[3]
return tf.random_normal(shape) * np.sqrt(2.0 / fan_in)
###Output
_____no_output_____
###Markdown
Barebones TensorFlow: Train a Two-Layer NetworkWe are finally ready to use all of the pieces defined above to train a two-layer fully-connected network on CIFAR-10.We just need to define a function to initialize the weights of the model, and call `train_part2`.Defining the weights of the network introduces another important piece of TensorFlow API: `tf.Variable`. A TensorFlow Variable is a Tensor whose value is stored in the graph and persists across runs of the computational graph; however unlike constants defined with `tf.zeros` or `tf.random_normal`, the values of a Variable can be mutated as the graph runs; these mutations will persist across graph runs. Learnable parameters of the network are usually stored in Variables.You don't need to tune any hyperparameters, but you should achieve accuracies above 40% after one epoch of training.
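For instance, here is a tiny sketch (separate from the assignment code) of how a `Variable` keeps state across graph runs, which a constant built with `tf.zeros` cannot do:

```python
v = tf.Variable(1.0)
inc = tf.assign_add(v, 1.0)   # an op that mutates v when executed
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(inc))  # 2.0
    print(sess.run(inc))  # 3.0 -- the mutation persisted across runs
```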
###Code
def two_layer_fc_init():
"""
Initialize the weights of a two-layer network, for use with the
two_layer_network function defined above.
Inputs: None
Returns: A list of:
- w1: TensorFlow Variable giving the weights for the first layer
- w2: TensorFlow Variable giving the weights for the second layer
"""
hidden_layer_size = 4000
w1 = tf.Variable(kaiming_normal((3 * 32 * 32, hidden_layer_size)))
w2 = tf.Variable(kaiming_normal((hidden_layer_size, 10)))
return [w1, w2]
learning_rate = 1e-2
train_part2(two_layer_fc, two_layer_fc_init, learning_rate)
###Output
Iteration 0, loss = 3.1701
Got 140 / 1000 correct (14.00%)
Iteration 100, loss = 1.8752
Got 357 / 1000 correct (35.70%)
Iteration 200, loss = 1.5527
Got 388 / 1000 correct (38.80%)
Iteration 300, loss = 1.8458
Got 375 / 1000 correct (37.50%)
Iteration 400, loss = 1.8256
Got 417 / 1000 correct (41.70%)
Iteration 500, loss = 1.7863
Got 437 / 1000 correct (43.70%)
Iteration 600, loss = 1.8450
Got 421 / 1000 correct (42.10%)
Iteration 700, loss = 2.0013
Got 448 / 1000 correct (44.80%)
###Markdown
Barebones TensorFlow: Train a three-layer ConvNetWe will now use TensorFlow to train a three-layer ConvNet on CIFAR-10.You need to implement the `three_layer_convnet_init` function. Recall that the architecture of the network is:1. Convolutional layer (with bias) with 32 5x5 filters, with zero-padding 22. ReLU3. Convolutional layer (with bias) with 16 3x3 filters, with zero-padding 14. ReLU5. Fully-connected layer (with bias) to compute scores for 10 classesYou don't need to do any hyperparameter tuning, but you should see accuracies above 43% after one epoch of training.
###Code
def three_layer_convnet_init():
"""
Initialize the weights of a Three-Layer ConvNet, for use with the
three_layer_convnet function defined above.
Inputs: None
Returns a list containing:
- conv_w1: TensorFlow Variable giving weights for the first conv layer
- conv_b1: TensorFlow Variable giving biases for the first conv layer
- conv_w2: TensorFlow Variable giving weights for the second conv layer
- conv_b2: TensorFlow Variable giving biases for the second conv layer
- fc_w: TensorFlow Variable giving weights for the fully-connected layer
- fc_b: TensorFlow Variable giving biases for the fully-connected layer
"""
params = None
############################################################################
# TODO: Initialize the parameters of the three-layer network. #
############################################################################
conv_w1 = tf.Variable(kaiming_normal((5, 5, 3, 32)))
conv_b1 = tf.Variable(tf.zeros((32,)))
conv_w2 = tf.Variable(kaiming_normal((3, 3, 32, 16)))
conv_b2 = tf.Variable(tf.zeros((16,)))
fc_w = tf.Variable(kaiming_normal((32*32*16, 10)))
fc_b = tf.Variable(tf.zeros((10,)))
params = [conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b]
############################################################################
# END OF YOUR CODE #
############################################################################
return params
learning_rate = 3e-3
train_part2(three_layer_convnet, three_layer_convnet_init, learning_rate)
###Output
Iteration 0, loss = 2.9449
Got 93 / 1000 correct (9.30%)
Iteration 100, loss = 1.8470
Got 367 / 1000 correct (36.70%)
Iteration 200, loss = 1.5830
Got 394 / 1000 correct (39.40%)
Iteration 300, loss = 1.6223
Got 415 / 1000 correct (41.50%)
Iteration 400, loss = 1.6220
Got 438 / 1000 correct (43.80%)
Iteration 500, loss = 1.6924
Got 459 / 1000 correct (45.90%)
Iteration 600, loss = 1.7157
Got 470 / 1000 correct (47.00%)
Iteration 700, loss = 1.5240
Got 475 / 1000 correct (47.50%)
###Markdown
Part III: Keras Model APIImplementing a neural network using the low-level TensorFlow API is a good way to understand how TensorFlow works, but it's a little inconvenient - we had to manually keep track of all Tensors holding learnable parameters, and we had to use a control dependency to implement the gradient descent update step. This was fine for a small network, but could quickly become unwieldy for a large complex model.Fortunately TensorFlow provides higher-level packages such as `tf.keras` and `tf.layers` which make it easy to build models out of modular, object-oriented layers; `tf.train` allows you to easily train these models using a variety of different optimization algorithms.In this part of the notebook we will define neural network models using the `tf.keras.Model` API. To implement your own model, you need to do the following:1. Define a new class which subclasses `tf.keras.Model`. Give your class an intuitive name that describes it, like `TwoLayerFC` or `ThreeLayerConvNet`.2. In the initializer `__init__()` for your new class, define all the layers you need as class attributes. The `tf.layers` package provides many common neural-network layers, like `tf.layers.Dense` for fully-connected layers and `tf.layers.Conv2D` for convolutional layers. Under the hood, these layers will construct `Variable` Tensors for any learnable parameters. **Warning**: Don't forget to call `super().__init__()` as the first line in your initializer!3. Implement the `call()` method for your class; this implements the forward pass of your model, and defines the *connectivity* of your network. Layers defined in `__init__()` implement `__call__()` so they can be used as function objects that transform input Tensors into output Tensors. Don't define any new layers in `call()`; any layers you want to use in the forward pass should be defined in `__init__()`.After you define your `tf.keras.Model` subclass, you can instantiate it and use it like the model functions from Part II. Module API: Two-Layer NetworkHere is a concrete example of using the `tf.keras.Model` API to define a two-layer network. There are a few new bits of API to be aware of here:We use an `Initializer` object to set up the initial values of the learnable parameters of the layers; in particular `tf.variance_scaling_initializer` gives behavior similar to the Kaiming initialization method we used in Part II. You can read more about it here: https://www.tensorflow.org/api_docs/python/tf/variance_scaling_initializerWe construct `tf.layers.Dense` objects to represent the two fully-connected layers of the model. In addition to multiplying their input by a weight matrix and adding a bias vector, these layers can also apply a nonlinearity for you. For the first layer we specify a ReLU activation function by passing `activation=tf.nn.relu` to the constructor; the second layer does not apply any activation function.Unfortunately the `flatten` function we defined in Part II is not compatible with the `tf.keras.Model` API; fortunately we can use `tf.layers.flatten` to perform the same operation. The issue with our `flatten` function from Part II has to do with static vs dynamic shapes for Tensors, which is beyond the scope of this notebook; you can read more about the distinction [in the documentation](https://www.tensorflow.org/programmers_guide/faq#tensor_shapes).
###Code
class TwoLayerFC(tf.keras.Model):
def __init__(self, hidden_size, num_classes):
super().__init__()
initializer = tf.variance_scaling_initializer(scale=2.0)
self.fc1 = tf.layers.Dense(hidden_size, activation=tf.nn.relu,
kernel_initializer=initializer)
self.fc2 = tf.layers.Dense(num_classes,
kernel_initializer=initializer)
def call(self, x, training=None):
x = tf.layers.flatten(x)
x = self.fc1(x)
x = self.fc2(x)
return x
def test_TwoLayerFC():
""" A small unit test to exercise the TwoLayerFC model above. """
tf.reset_default_graph()
input_size, hidden_size, num_classes = 50, 42, 10
# As usual in TensorFlow, we first need to define our computational graph.
# To this end we first construct a TwoLayerFC object, then use it to construct
# the scores Tensor.
model = TwoLayerFC(hidden_size, num_classes)
with tf.device(device):
x = tf.zeros((64, input_size))
scores = model(x)
# Now that our computational graph has been defined we can run the graph
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
scores_np = sess.run(scores)
print(scores_np.shape)
test_TwoLayerFC()
###Output
(64, 10)
###Markdown
Functional API: Two-Layer NetworkThe `tf.layers` package provides two different higher-level APIs for defining neural network models. In the example above we used the **object-oriented API**, where each layer of the neural network is represented as a Python object (like `tf.layers.Dense`). Here we showcase the **functional API**, where each layer is a Python function (like `tf.layers.dense`) which inputs and outputs TensorFlow Tensors, and which internally sets up Tensors in the computational graph to hold any learnable weights.To construct a network, one needs to pass the input tensor to the first layer, and construct the subsequent layers sequentially. Here's an example of how to construct the same two-layer network with the functional API.
###Code
def two_layer_fc_functional(inputs, hidden_size, num_classes):
initializer = tf.variance_scaling_initializer(scale=2.0)
flattened_inputs = tf.layers.flatten(inputs)
fc1_output = tf.layers.dense(flattened_inputs, hidden_size, activation=tf.nn.relu,
kernel_initializer=initializer)
scores = tf.layers.dense(fc1_output, num_classes,
kernel_initializer=initializer)
return scores
def test_two_layer_fc_functional():
""" A small unit test to exercise the TwoLayerFC model above. """
tf.reset_default_graph()
input_size, hidden_size, num_classes = 50, 42, 10
# As usual in TensorFlow, we first need to define our computational graph.
# To this end we first construct a two layer network graph by calling the
# two_layer_network() function. This function constructs the computation
# graph and outputs the score tensor.
with tf.device(device):
x = tf.zeros((64, input_size))
scores = two_layer_fc_functional(x, hidden_size, num_classes)
# Now that our computational graph has been defined we can run the graph
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
scores_np = sess.run(scores)
print(scores_np.shape)
test_two_layer_fc_functional()
###Output
(64, 10)
###Markdown
Keras Model API: Three-Layer ConvNetNow it's your turn to implement a three-layer ConvNet using the `tf.keras.Model` API. Your model should have the same architecture used in Part II:1. Convolutional layer with 5 x 5 kernels, with zero-padding of 22. ReLU nonlinearity3. Convolutional layer with 3 x 3 kernels, with zero-padding of 14. ReLU nonlinearity5. Fully-connected layer to give class scoresYou should initialize the weights of your network using the same initialization method as was used in the two-layer network above.**Hint**: Refer to the documentation for `tf.layers.Conv2D` and `tf.layers.Dense`:https://www.tensorflow.org/api_docs/python/tf/layers/Conv2Dhttps://www.tensorflow.org/api_docs/python/tf/layers/Dense
###Code
class ThreeLayerConvNet(tf.keras.Model):
def __init__(self, channel_1, channel_2, num_classes):
super().__init__()
########################################################################
# TODO: Implement the __init__ method for a three-layer ConvNet. You #
# should instantiate layer objects to be used in the forward pass. #
########################################################################
initializer = tf.variance_scaling_initializer(scale=2.0)
self.conv1 = tf.layers.Conv2D(channel_1, 5, 1, 'same', activation=tf.nn.relu,
kernel_initializer=initializer)
self.conv2 = tf.layers.Conv2D(channel_2, 3, 1, 'same', activation=tf.nn.relu,
kernel_initializer=initializer)
self.fc1 = tf.layers.Dense(num_classes, kernel_initializer=initializer)
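# Note: with stride 1, padding='same' pads the 5x5 kernel by 2 and the 3x3
# kernel by 1, matching the zero-padding required by the architecture above.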
########################################################################
# END OF YOUR CODE #
########################################################################
def call(self, x, training=None):
scores = None
########################################################################
# TODO: Implement the forward pass for a three-layer ConvNet. You #
# should use the layer objects defined in the __init__ method. #
########################################################################
x = self.conv1(x)
x = self.conv2(x)
x = tf.layers.flatten(x)
x = self.fc1(x)
scores = x
########################################################################
# END OF YOUR CODE #
########################################################################
return scores
###Output
_____no_output_____
###Markdown
Once you complete the implementation of the `ThreeLayerConvNet` above you can run the following to ensure that your implementation does not crash and produces outputs of the expected shape.
###Code
def test_ThreeLayerConvNet():
tf.reset_default_graph()
channel_1, channel_2, num_classes = 12, 8, 10
model = ThreeLayerConvNet(channel_1, channel_2, num_classes)
with tf.device(device):
x = tf.zeros((64, 32, 32, 3))  # NHWC: a batch of 64 RGB images of size 32x32
scores = model(x)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
scores_np = sess.run(scores)
print(scores_np.shape)
test_ThreeLayerConvNet()
###Output
(64, 10)
###Markdown
Keras Model API: Training LoopWe need to implement a slightly different training loop when using the `tf.keras.Model` API. Instead of computing gradients and updating the weights of the model manually, we use an `Optimizer` object from the `tf.train` package which takes care of these details for us. You can read more about `Optimizer`s here: https://www.tensorflow.org/api_docs/python/tf/train/Optimizer
###Code
def train_part34(model_init_fn, optimizer_init_fn, num_epochs=1):
"""
Simple training loop for use with models defined using tf.keras. It trains
a model for one epoch on the CIFAR-10 training set and periodically checks
accuracy on the CIFAR-10 validation set.
Inputs:
- model_init_fn: A function that takes no parameters; when called it
constructs the model we want to train: model = model_init_fn()
- optimizer_init_fn: A function which takes no parameters; when called it
constructs the Optimizer object we will use to optimize the model:
optimizer = optimizer_init_fn()
- num_epochs: The number of epochs to train for
Returns: Nothing, but prints progress during training
"""
tf.reset_default_graph()
with tf.device(device):
# Construct the computational graph we will use to train the model. We
# use the model_init_fn to construct the model, declare placeholders for
# the data and labels
x = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int32, [None])
# We need a placeholder to explicitly specify if the model is in the training
# phase or not. This is because a number of layers behave differently in
# training and in testing, e.g., dropout and batch normalization.
# We pass this variable to the computation graph through feed_dict as shown below.
is_training = tf.placeholder(tf.bool, name='is_training')
# Use the model function to build the forward pass.
scores = model_init_fn(x, is_training)
# Compute the loss like we did in Part II
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=scores)
loss = tf.reduce_mean(loss)
# Use the optimizer_fn to construct an Optimizer, then use the optimizer
# to set up the training step. Asking TensorFlow to evaluate the
# train_op returned by optimizer.minimize(loss) will cause us to make a
# single update step using the current minibatch of data.
# Note that we use tf.control_dependencies to force the model to run
# the tf.GraphKeys.UPDATE_OPS at each training step. tf.GraphKeys.UPDATE_OPS
# holds the operators that update the states of the network.
# For example, the tf.layers.batch_normalization function adds the running mean
# and variance update operators to tf.GraphKeys.UPDATE_OPS.
optimizer = optimizer_init_fn()
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
train_op = optimizer.minimize(loss)
# Now we can run the computational graph many times to train the model.
# When we call sess.run we ask it to evaluate train_op, which causes the
# model to update.
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
t = 0
for epoch in range(num_epochs):
print('Starting epoch %d' % epoch)
for x_np, y_np in train_dset:
feed_dict = {x: x_np, y: y_np, is_training:1}
loss_np, _ = sess.run([loss, train_op], feed_dict=feed_dict)
if t % print_every == 0:
print('Iteration %d, loss = %.4f' % (t, loss_np))
check_accuracy(sess, val_dset, x, scores, is_training=is_training)
print()
t += 1
###Output
_____no_output_____
###Markdown
Keras Model API: Train a Two-Layer NetworkWe can now use the tools defined above to train a two-layer network on CIFAR-10. We define the `model_init_fn` and `optimizer_init_fn` that construct the model and optimizer respectively when called. Here we want to train the model using stochastic gradient descent with no momentum, so we construct a `tf.train.GradientDescentOptimizer` function; you can [read about it here](https://www.tensorflow.org/api_docs/python/tf/train/GradientDescentOptimizer).You don't need to tune any hyperparameters here, but you should achieve accuracies above 40% after one epoch of training.
###Code
hidden_size, num_classes = 4000, 10
learning_rate = 1e-2
def model_init_fn(inputs, is_training):
return TwoLayerFC(hidden_size, num_classes)(inputs)
def optimizer_init_fn():
return tf.train.GradientDescentOptimizer(learning_rate)
train_part34(model_init_fn, optimizer_init_fn)
###Output
Starting epoch 0
Iteration 0, loss = 2.7429
Got 120 / 1000 correct (12.00%)
Iteration 100, loss = 1.9007
Got 381 / 1000 correct (38.10%)
Iteration 200, loss = 1.4384
Got 401 / 1000 correct (40.10%)
Iteration 300, loss = 1.7820
Got 378 / 1000 correct (37.80%)
Iteration 400, loss = 1.7431
Got 444 / 1000 correct (44.40%)
Iteration 500, loss = 1.7640
Got 429 / 1000 correct (42.90%)
Iteration 600, loss = 1.8075
Got 434 / 1000 correct (43.40%)
Iteration 700, loss = 1.8534
Got 450 / 1000 correct (45.00%)
###Markdown
Keras Model API: Train a Two-Layer Network (functional API)Similarly, we train the two-layer network constructed using the functional API.
###Code
hidden_size, num_classes = 4000, 10
learning_rate = 1e-2
def model_init_fn(inputs, is_training):
return two_layer_fc_functional(inputs, hidden_size, num_classes)
def optimizer_init_fn():
return tf.train.GradientDescentOptimizer(learning_rate)
train_part34(model_init_fn, optimizer_init_fn)
###Output
Starting epoch 0
Iteration 0, loss = 2.7217
Got 110 / 1000 correct (11.00%)
Iteration 100, loss = 1.7502
Got 385 / 1000 correct (38.50%)
Iteration 200, loss = 1.3642
Got 403 / 1000 correct (40.30%)
Iteration 300, loss = 1.7298
Got 387 / 1000 correct (38.70%)
Iteration 400, loss = 1.8000
Got 441 / 1000 correct (44.10%)
Iteration 500, loss = 1.7446
Got 451 / 1000 correct (45.10%)
Iteration 600, loss = 1.8246
Got 445 / 1000 correct (44.50%)
Iteration 700, loss = 1.8604
Got 461 / 1000 correct (46.10%)
###Markdown
Keras Model API: Train a Three-Layer ConvNetHere you should use the tools we've defined above to train a three-layer ConvNet on CIFAR-10. Your ConvNet should use 32 filters in the first convolutional layer and 16 filters in the second layer.To train the model you should use gradient descent with Nesterov momentum 0.9. **HINT**: https://www.tensorflow.org/api_docs/python/tf/train/MomentumOptimizerYou don't need to perform any hyperparameter tuning, but you should achieve accuracies above 45% after training for one epoch.
###Code
learning_rate = 3e-3
channel_1, channel_2, num_classes = 32, 16, 10
def model_init_fn(inputs, is_training):
model = ThreeLayerConvNet(channel_1, channel_2, num_classes)(inputs)
############################################################################
# TODO: Complete the implementation of model_fn. #
############################################################################
############################################################################
# END OF YOUR CODE #
############################################################################
return model
def optimizer_init_fn():
optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.9, use_nesterov=True)
############################################################################
# TODO: Complete the implementation of model_fn. #
############################################################################
############################################################################
# END OF YOUR CODE #
############################################################################
return optimizer
train_part34(model_init_fn, optimizer_init_fn)
###Output
Starting epoch 0
Iteration 0, loss = 2.6928
Got 97 / 1000 correct (9.70%)
Iteration 100, loss = 1.6665
Got 441 / 1000 correct (44.10%)
Iteration 200, loss = 1.1980
Got 508 / 1000 correct (50.80%)
Iteration 300, loss = 1.3852
Got 513 / 1000 correct (51.30%)
Iteration 400, loss = 1.1385
Got 541 / 1000 correct (54.10%)
Iteration 500, loss = 1.3660
Got 544 / 1000 correct (54.40%)
Iteration 600, loss = 1.4720
Got 557 / 1000 correct (55.70%)
Iteration 700, loss = 1.2319
Got 573 / 1000 correct (57.30%)
###Markdown
Part IV: Keras Sequential APIIn Part III we introduced the `tf.keras.Model` API, which allows you to define models with any number of learnable layers and with arbitrary connectivity between layers.However for many models you don't need such flexibility - a lot of models can be expressed as a sequential stack of layers, with the output of each layer fed to the next layer as input. If your model fits this pattern, then there is an even easier way to define your model: using `tf.keras.Sequential`. You don't need to write any custom classes; you simply call the `tf.keras.Sequential` constructor with a list containing a sequence of layer objects.One complication with `tf.keras.Sequential` is that you must define the shape of the input to the model by passing a value to the `input_shape` of the first layer in your model. Keras Sequential API: Two-Layer NetworkHere we rewrite the two-layer fully-connected network using `tf.keras.Sequential`, and train it using the training loop defined above.You don't need to perform any hyperparameter tuning here, but you should see accuracies above 40% after training for one epoch.
###Code
learning_rate = 1e-2
def model_init_fn(inputs, is_training):
input_shape = (32, 32, 3)
hidden_layer_size, num_classes = 4000, 10
initializer = tf.variance_scaling_initializer(scale=2.0)
layers = [
tf.layers.Flatten(input_shape=input_shape),
tf.layers.Dense(hidden_layer_size, activation=tf.nn.relu,
kernel_initializer=initializer),
tf.layers.Dense(num_classes, kernel_initializer=initializer),
]
model = tf.keras.Sequential(layers)
return model(inputs)
def optimizer_init_fn():
return tf.train.GradientDescentOptimizer(learning_rate)
train_part34(model_init_fn, optimizer_init_fn)
###Output
Starting epoch 0
Iteration 0, loss = 2.4820
Got 121 / 1000 correct (12.10%)
Iteration 100, loss = 1.7927
Got 391 / 1000 correct (39.10%)
Iteration 200, loss = 1.3628
Got 400 / 1000 correct (40.00%)
Iteration 300, loss = 1.7886
Got 393 / 1000 correct (39.30%)
Iteration 400, loss = 1.7109
Got 415 / 1000 correct (41.50%)
Iteration 500, loss = 1.7148
Got 449 / 1000 correct (44.90%)
Iteration 600, loss = 1.7814
Got 433 / 1000 correct (43.30%)
Iteration 700, loss = 1.7682
Got 452 / 1000 correct (45.20%)
###Markdown
Keras Sequential API: Three-Layer ConvNetHere you should use `tf.keras.Sequential` to reimplement the same three-layer ConvNet architecture used in Part II and Part III. As a reminder, your model should have the following architecture:1. Convolutional layer with 16 5x5 kernels, using zero padding of 22. ReLU nonlinearity3. Convolutional layer with 32 3x3 kernels, using zero padding of 14. ReLU nonlinearity5. Fully-connected layer giving class scoresYou should initialize the weights of the model using a `tf.variance_scaling_initializer` as above.You should train the model using Nesterov momentum 0.9.You don't need to perform any hyperparameter search, but you should achieve accuracy above 45% after training for one epoch.
###Code
def model_init_fn(inputs, is_training):
model = None
############################################################################
# TODO: Construct a three-layer ConvNet using tf.keras.Sequential. #
############################################################################
input_shape = (32, 32, 3)
initializer = tf.variance_scaling_initializer(scale=2.0)
layers = [
tf.layers.Conv2D(16, 5, padding='same', input_shape=input_shape,
activation=tf.nn.relu,
kernel_initializer=initializer),
tf.layers.Conv2D(32, 3, padding='same',
activation=tf.nn.relu,
kernel_initializer=initializer),
tf.layers.Flatten(),
tf.layers.Dense(10, kernel_initializer=initializer),
]
model = tf.keras.Sequential(layers)
############################################################################
# END OF YOUR CODE #
############################################################################
return model(inputs)
learning_rate = 5e-4
def optimizer_init_fn():
optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.9, use_nesterov=True)
############################################################################
# TODO: Complete the implementation of model_fn. #
############################################################################
############################################################################
# END OF YOUR CODE #
############################################################################
return optimizer
train_part34(model_init_fn, optimizer_init_fn)
###Output
Starting epoch 0
Iteration 0, loss = 2.8954
Got 107 / 1000 correct (10.70%)
Iteration 100, loss = 1.8197
Got 405 / 1000 correct (40.50%)
Iteration 200, loss = 1.3621
Got 444 / 1000 correct (44.40%)
Iteration 300, loss = 1.4381
Got 485 / 1000 correct (48.50%)
Iteration 400, loss = 1.4618
Got 498 / 1000 correct (49.80%)
Iteration 500, loss = 1.6703
Got 487 / 1000 correct (48.70%)
Iteration 600, loss = 1.5412
Got 498 / 1000 correct (49.80%)
Iteration 700, loss = 1.4826
Got 520 / 1000 correct (52.00%)
###Markdown
Part V: CIFAR-10 open-ended challengeIn this section you can experiment with whatever ConvNet architecture you'd like on CIFAR-10.You should experiment with architectures, hyperparameters, loss functions, regularization, or anything else you can think of to train a model that achieves **at least 70%** accuracy on the **validation** set within 10 epochs. You can use the `check_accuracy` and `train` functions from above, or you can implement your own training loop.Describe what you did at the end of the notebook. Some things you can try:- **Filter size**: Above we used 5x5 and 3x3; is this optimal?- **Number of filters**: Above we used 16 and 32 filters. Would more or fewer do better?- **Pooling**: We didn't use any pooling above. Would this improve the model?- **Normalization**: Would your model be improved with batch normalization, layer normalization, group normalization, or some other normalization strategy?- **Network architecture**: The ConvNet above has only three layers of trainable parameters. Would a deeper model do better?- **Global average pooling**: Instead of flattening after the final convolutional layer, would global average pooling do better? This strategy is used for example in Google's Inception network and in Residual Networks.- **Regularization**: Would some kind of regularization improve performance? Maybe weight decay or dropout? WARNING: Batch Normalization / DropoutBatch Normalization and Dropout **WILL NOT WORK CORRECTLY** if you use the `train_part34()` function with the object-oriented `tf.keras.Model` or `tf.keras.Sequential` APIs; if you want to use these layers with this training loop then you **must use the tf.layers functional API**.We wrote `train_part34()` to explicitly demonstrate how TensorFlow works; however there are some subtleties that make it tough to handle the object-oriented batch normalization layer in a simple training loop. In practice both `tf.keras` and `tf` provide higher-level APIs which handle the training loop for you, such as [keras.fit](https://keras.io/models/sequential/) and [tf.Estimator](https://www.tensorflow.org/programmers_guide/estimators), both of which will properly handle batch normalization when using the object-oriented API. Tips for trainingFor each network architecture that you try, you should tune the learning rate and other hyperparameters. When doing this there are a couple important things to keep in mind:- If the parameters are working well, you should see improvement within a few hundred iterations- Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.- Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.- You should use the validation set for hyperparameter search, and save your test set for evaluating your architecture on the best parameters as selected by the validation set. Going above and beyondIf you are feeling adventurous there are many other features you can implement to try and improve your performance. 
You are **not required** to implement any of these, but don't miss the fun if you have time!- Alternative optimizers: you can try Adam, Adagrad, RMSprop, etc.- Alternative activation functions such as leaky ReLU, parametric ReLU, ELU, or MaxOut.- Model ensembles- Data augmentation- New Architectures - [ResNets](https://arxiv.org/abs/1512.03385) where the input from the previous layer is added to the output. - [DenseNets](https://arxiv.org/abs/1608.06993) where inputs into previous layers are concatenated together. - [This blog has an in-depth overview](https://chatbotslife.com/resnets-highwaynets-and-densenets-oh-my-9bb15918ee32) Have fun and happy training!
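If you do want batch normalization inside the `train_part34()` loop, a minimal sketch using the `tf.layers` functional API (assuming the `is_training` placeholder from that loop, and a hypothetical `bn_model_init_fn` name) could look like this; note that the loop already wraps `optimizer.minimize` in the `tf.GraphKeys.UPDATE_OPS` control dependency that batch normalization needs:

```python
def bn_model_init_fn(inputs, is_training):
    # Hypothetical example model, not the solution used below.
    initializer = tf.variance_scaling_initializer(scale=2.0)
    x = tf.layers.conv2d(inputs, 32, 5, padding='same', kernel_initializer=initializer)
    x = tf.layers.batch_normalization(x, training=is_training)
    x = tf.nn.relu(x)
    x = tf.layers.flatten(x)
    return tf.layers.dense(x, 10, kernel_initializer=initializer)
```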
###Code
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Dense, Conv2D, BatchNormalization, MaxPool2D, Flatten, Dropout
from tensorflow.python.keras import optimizers
from tensorflow.python.keras import utils
model = Sequential()
model.add( Conv2D(filters=16, kernel_size=5, strides=1, padding='same',
activation='elu', input_shape=(32, 32, 3)) )
model.add( BatchNormalization() )
model.add( Conv2D(filters=16, kernel_size=5, strides=1, padding='same',
activation='elu') )
model.add( MaxPool2D(pool_size=2, padding='same') )
model.add( BatchNormalization() )
model.add( Conv2D(filters=16, kernel_size=5, strides=1, padding='same',
activation='elu') )
model.add( BatchNormalization() )
model.add( Conv2D(filters=16, kernel_size=5, strides=1, padding='same',
activation='elu') )
model.add( MaxPool2D(pool_size=2, padding='same') )
model.add( BatchNormalization() )
model.add( Flatten() )
model.add( BatchNormalization() )
model.add( Dropout(0.25) )
model.add( Dense(units=1024, activation='elu') )
model.add( BatchNormalization() )
model.add( Dropout(0.25) )
model.add( Dense(units=10, activation='softmax') )
opt = optimizers.Adam()
model.compile(loss='categorical_crossentropy',
optimizer=opt,
metrics=['accuracy'])
num_classes = 10
y_train_k = utils.to_categorical(y_train, num_classes)
y_val_k = utils.to_categorical(y_val, num_classes)
model.fit(X_train, y_train_k, epochs=10, batch_size=256, validation_data=(X_val, y_val_k))
###Output
Train on 49000 samples, validate on 1000 samples
Epoch 1/10
49000/49000 [==============================] - 121s 2ms/step - loss: 1.6446 - acc: 0.4542 - val_loss: 1.4401 - val_acc: 0.4880
Epoch 2/10
49000/49000 [==============================] - 119s 2ms/step - loss: 1.1446 - acc: 0.6009 - val_loss: 1.0239 - val_acc: 0.6300
Epoch 3/10
49000/49000 [==============================] - 120s 2ms/step - loss: 0.9572 - acc: 0.6646 - val_loss: 0.9058 - val_acc: 0.6890
Epoch 4/10
49000/49000 [==============================] - 125s 3ms/step - loss: 0.8547 - acc: 0.7009 - val_loss: 0.8485 - val_acc: 0.7130
Epoch 5/10
49000/49000 [==============================] - 126s 3ms/step - loss: 0.7852 - acc: 0.7231 - val_loss: 0.8011 - val_acc: 0.7300
Epoch 6/10
49000/49000 [==============================] - 122s 2ms/step - loss: 0.7296 - acc: 0.7424 - val_loss: 0.7549 - val_acc: 0.7350
Epoch 7/10
49000/49000 [==============================] - 122s 2ms/step - loss: 0.6770 - acc: 0.7622 - val_loss: 0.7329 - val_acc: 0.7540
Epoch 8/10
49000/49000 [==============================] - 121s 2ms/step - loss: 0.6267 - acc: 0.7806 - val_loss: 0.7005 - val_acc: 0.7680
Epoch 9/10
49000/49000 [==============================] - 121s 2ms/step - loss: 0.5866 - acc: 0.7931 - val_loss: 0.6755 - val_acc: 0.7780
Epoch 10/10
49000/49000 [==============================] - 123s 3ms/step - loss: 0.5361 - acc: 0.8112 - val_loss: 0.6867 - val_acc: 0.7730
|
notebooks/olympics.ipynb | ###Markdown
Olympics 120 years analysis. The dataset has a pretty interesting and comprehensive analysis at [this link](https://www.kaggle.com/heesoo37/olympic-history-data-a-thorough-analysis/notebook) created by the aggregator of the dataset. It covers most of what you would expect from a dataset link. Below is a list of the things covered in that kernel for quick reference. ```3 More athletes, nations, and events3.1 Has the number of athletes, nations, and events changed over time?4 The Art Competitions4.1 Numer of events, nations, and artists over time4.2 Which countries won the most art medals?4.3 Nazis crush the 1936 Art Competitions5 Women in the Olympics5.1 Number of men and women over time5.2 Number of women relative to men across countries5.3 Proportion of women on Olympic teams: 19365.4 Medal counts for women of different nations: 19365.5 Proportion of women on Olympic teams: 19765.6 Medal counts for women of different nations: 19765.7 Proportion of women on Olympic teams: 20165.8 Medal counts for women of different nations: 20166 Geographic representation6.1 Amsterdam 19286.2 Munich 19726.3 Rio 20167 Height and weight of athletes7.1 Data completeness7.2 Athlete height over time7.3 Athlete weight over time7.4 Change in height vs change in weight over time across menโs sports7.5 Change in height vs change in weight over time across womenโs sports8 Summary of key findings``` Lets try to find answers to some of the more out of place questions. **1. What was going on during the World Wars?** *Were there countries participating in Olympics despite being a part of the war? If so, how? Did the participation change as the wars progressed?***2. How did the Great Depression affect the World games?** *When there is no money to burn, do you still come out and play?***3. Does the host country's climate have something to do with the medal tally?** *I can't ski in this climate. It is just not cold enough.* **4. Was hosting the olympics a good investment for the host nations?** *Olympics will bring visitors who bring money, right? Right?***5. Why are Summer Olympics more popular that the Winter ones? (Or are they really?)** *All olympics were made equal in the eyes of **The Association**. Sure?***6. Do the same athletes compete in both Summer and Winter versions?** *But coach, I just came back from the other one.***7. Is the country wise medal tally different for similar sports among the two versions?** *You know m'te, we don' have such sports in the grin'.***8. Do olympics athletes have something in common?** *Athleticism? Duh!***9. Is right to vote a pre cursor to olympics participation? ** *You can go out and play but voting is a no no.*
###Code
import pandas as pd
ATHELETES_DATA_FILE = '..\\data\\athlete_events.csv'
COUNTRY_DATA_FILE = '..\\data\\noc_regions.csv'
athletes = pd.read_csv(ATHELETES_DATA_FILE)
countries = pd.read_csv(COUNTRY_DATA_FILE)
athletes.count()
print(athletes.isnull().sum())
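# Fill missing Medal entries with the placeholder 'DNW' (read here as "did not win"),
# so athletes without a medal are kept as an explicit category rather than NaN.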
athletes['Medal'].fillna('DNW', inplace=True)
###Output
_____no_output_____ |
IPL MATCH PREDICTOR.ipynb | ###Markdown
New Section
###Code
# df_segmentation is assumed to have been created in an earlier cell (not shown in this excerpt).
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import StandardScaler
df_segmentation.describe();
df_segmentation.corr();
plt.figure(figsize = (12,9))
s = sns.heatmap(df_segmentation.corr(),
annot = True,
cmap = 'RdBu',
vmin = -1,
vmax = 1)
s.set_yticklabels(s.get_yticklabels(),rotation = 0,fontsize = 12)
s.set_xticklabels(s.get_xticklabels(),rotation =90,fontsize =12)
plt.title('Correlation Heatmap')
plt.show()
plt.figure(figsize = (12,9))
plt.scatter(df_segmentation.iloc[:,2],df_segmentation.iloc[:,1])
plt.xlabel('Balls_Bowled')
plt.ylabel('Runs_Given')
plt.title('Vizualization of raw data')
scaler = StandardScaler()
segmentation_std = scaler.fit_transform(df_segmentation)
from sklearn.decomposition import PCA
pca = PCA()
pca.fit(segmentation_std)
pca.explained_variance_ratio_
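# Sketch: the cumulative sum of explained_variance_ratio_ (e.g. via numpy.cumsum)
# shows how many principal components are needed to retain a chosen share
# (say 80%) of the total variance.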
import pandas as pd
import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
matches = pd.read_csv("IPL Matches 2008-2020.csv")
matches=matches[matches["winner"].notna()]
matches["team2"]=matches["team2"].replace("Rising Pune Supergiant","Rising Pune Supergiants")
matches["team1"]=matches["team1"].replace("Rising Pune Supergiant","Rising Pune Supergiants")
matches["winner"]=matches["winner"].replace("Rising Pune Supergiant","Rising Pune Supergiants")
matches["toss_winner"]=matches["toss_winner"].replace("Rising Pune Supergiant","Rising Pune Supergiants")
cols = ["team1","team2","toss_winner","venue","toss_decision","winner"]
encoder= LabelEncoder()
matches["team1"]=encoder.fit_transform(matches["team1"])
matches["result"]=encoder.fit_transform(matches["result"])
matches["team2"]=encoder.fit_transform(matches["team2"])
matches["winner"]=encoder.fit_transform(matches["winner"].astype(str))
matches["toss_winner"]=encoder.fit_transform(matches["toss_winner"])
matches["venue"]=encoder.fit_transform(matches["venue"])
matches["toss_decision"]=encoder.fit_transform(matches["toss_decision"])
matches=matches[cols]
X_train, X_test, y_train, y_test = train_test_split(matches.iloc[:,:-1],matches['winner'], test_size=0.2, random_state=0,shuffle=True)
logreg = LogisticRegression(max_iter=12000)
logreg.fit(X_train, y_train)
y_pred = logreg.predict(X_test)
#print('Accuracy of Logistic Regression Classifier on test set: {:.4f}'.format(logreg.score(X_test, y_test)))
#Decision Tree Classifier
dtree=DecisionTreeClassifier()
dtree.fit(X_train,y_train)
y_pred = dtree.predict(X_test)
#print('Accuracy of Decision Tree Classifier on test set: {:.4f}'.format(dtree.score(X_test, y_test)))
#SVM
svm=SVC()
svm.fit(X_train,y_train)
y_pred = svm.predict(X_test)
#print('Accuracy of SVM Classifier on test set: {:.4f}'.format(svm.score(X_test, y_test)))
#Random Forest Classifier
randomForest= RandomForestClassifier(n_estimators=100)
randomForest.fit(X_train,y_train)
y_pred = randomForest.predict(X_test)
ls=["Chennai Super Kings","Mumbai Indians","Mumbai Indians","MA Chidambaram Stadium, Chepauk","bat"]
ls=encoder.fit_transform(ls)
a = randomForest.predict([ls])
print(encoder.inverse_transform(a))
#print('Accuracy of Random Forest Classifier on test set: {:.4f}'.format(randomForest.score(X_test, y_test)))
import pandas as pd
import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
matches = pd.read_csv("IPL Matches 2008-2020.csv")
'''conditions = [matches["venue"] == "Rajiv Gandhi International Stadium, Uppal",matches["venue"] == "Maharashtra Cricket Association Stadium",
matches["venue"] == "Saurashtra Cricket Association Stadium", matches["venue"] == "Holkar Cricket Stadium",
matches["venue"] == "M Chinnaswamy Stadium",matches["venue"] == "Wankhede Stadium",
matches["venue"] == "Eden Gardens",matches["venue"] == "Feroz Shah Kotla",
matches["venue"] == "Punjab Cricket Association IS Bindra Stadium, Mohali",matches["venue"] == "Green Park",
matches["venue"] == "Punjab Cricket Association Stadium, Mohali",matches["venue"] == "Dr DY Patil Sports Academy",
matches["venue"] == "Sawai Mansingh Stadium", matches["venue"] == "MA Chidambaram Stadium, Chepauk",
matches["venue"] == "Newlands", matches["venue"] == "St George's Park" ,
matches["venue"] == "Kingsmead", matches["venue"] == "SuperSport Park",
matches["venue"] == "Buffalo Park", matches["venue"] == "New Wanderers Stadium",
matches["venue"] == "De Beers Diamond Oval", matches["venue"] == "OUTsurance Oval",
matches["venue"] == "Brabourne Stadium",matches["venue"] == "Sardar Patel Stadium",
matches["venue"] == "Barabati Stadium", matches["venue"] == "Vidarbha Cricket Association Stadium, Jamtha",
matches["venue"] == "Himachal Pradesh Cricket Association Stadium",matches["venue"] == "Nehru Stadium",
matches["venue"] == "Dr. Y.S. Rajasekhara Reddy ACA-VDCA Cricket Stadium",matches["venue"] == "Subrata Roy Sahara Stadium",
matches["venue"] == "Shaheed Veer Narayan Singh International Stadium",matches["venue"] == "JSCA International Stadium Complex",
matches["venue"] == "Sheikh Zayed Stadium",matches["venue"] == "Sharjah Cricket Stadium",
matches["venue"] == "Dubai International Cricket Stadium",matches["venue"] == "M. A. Chidambaram Stadium",
matches["venue"] == "Feroz Shah Kotla Ground",matches["venue"] == "M. Chinnaswamy Stadium",
matches["venue"] == "Rajiv Gandhi Intl. Cricket Stadium" ,matches["venue"] == "IS Bindra Stadium",matches["venue"] == "ACA-VDCA Stadium"]
values = ['Hyderabad', 'Mumbai', 'Rajkot',"Indore","Bengaluru","Mumbai","Kolkata","Delhi","Mohali","Kanpur","Mohali","Pune","Jaipur","Chennai","Cape Town","Port Elizabeth","Durban",
"Centurion",'Eastern Cape','Johannesburg','Northern Cape','Bloemfontein','Mumbai','Ahmedabad','Cuttack','Jamtha','Dharamshala','Chennai','Visakhapatnam','Pune','Raipur','Ranchi',
'Abu Dhabi','Sharjah','Dubai','Chennai','Delhi','Bengaluru','Hyderabad','Mohali','Visakhapatnam']
matches['city'] = np.where(matches['city'].isnull(),
np.select(conditions, values),
matches['city'])'''
matches=matches[matches["winner"].notna()]
matches["team2"]=matches["team2"].replace("Rising Pune Supergiant","Rising Pune Supergiants")
matches["team1"]=matches["team1"].replace("Rising Pune Supergiant","Rising Pune Supergiants")
matches["winner"]=matches["winner"].replace("Rising Pune Supergiant","Rising Pune Supergiants")
matches["toss_winner"]=matches["toss_winner"].replace("Rising Pune Supergiant","Rising Pune Supergiants")
cols = ["team1","team2","toss_winner","venue","toss_decision","result","winner"]
encoder= LabelEncoder()
matches["team1"]=encoder.fit_transform(matches["team1"])
matches["result"]=encoder.fit_transform(matches["result"])
matches["team2"]=encoder.fit_transform(matches["team2"])
matches["winner"]=encoder.fit_transform(matches["winner"].astype(str))
matches["toss_winner"]=encoder.fit_transform(matches["toss_winner"])
matches["venue"]=encoder.fit_transform(matches["venue"])
matches["toss_decision"]=encoder.fit_transform(matches["toss_decision"])
matches=matches[cols]
X_train, X_test, y_train, y_test = train_test_split(matches.iloc[:,:-1],matches['winner'], test_size=0.2, random_state=0,shuffle=True)
logreg = LogisticRegression(max_iter=12000)
logreg.fit(X_train, y_train)
y_pred = logreg.predict(X_test)
print('Accuracy of Logistic Regression Classifier on test set: {:.4f}'.format(logreg.score(X_test, y_test)))
#Decision Tree Classifier
dtree=DecisionTreeClassifier()
dtree.fit(X_train,y_train)
y_pred = dtree.predict(X_test)
print('Accuracy of Decision Tree Classifier on test set: {:.4f}'.format(dtree.score(X_test, y_test)))
#SVM
svm=SVC()
svm.fit(X_train,y_train)
y_pred = svm.predict(X_test)
print('Accuracy of SVM Classifier on test set: {:.4f}'.format(svm.score(X_test, y_test)))
#Random Forest Classifier
randomForest= RandomForestClassifier(n_estimators=100)
randomForest.fit(X_train,y_train)
y_pred = randomForest.predict(X_test)
print('Accuracy of Random Forest Classifier on test set: {:.4f}'.format(randomForest.score(X_test, y_test)))
###Output
Accuracy of Logistic Regression Classifier on test set: 0.2945
Accuracy of Decision Tree Classifier on test set: 0.6074
Accuracy of SVM Classifier on test set: 0.4601
Accuracy of Random Forest Classifier on test set: 0.7362
|
examples/SocketWorker Server Alice.ipynb | ###Markdown
Training on the Boston Housing Dataset using PySyft and SocketWorker

This is part of a three-notebook tutorial. The partner notebooks, entitled `SocketWorker Boston Housing Client.ipynb` and `SocketWorker Server Bob.ipynb`, are in the same folder as this notebook. You should execute this notebook **BEFORE** `SocketWorker Boston Housing Client.ipynb`.

Step 1: Hook PyTorch

Just like previous tutorials, the first step is to override PyTorch commands using the TorchHook object.
###Code
import syft as sy
hook = sy.TorchHook(verbose=False)
me = hook.local_worker
me.is_client_worker = False
###Output
_____no_output_____
###Markdown
Step 2: Launch Server

The next step is to launch the server. We set is_pointer=False to tell the worker that this worker object is not merely a connection to a foreign worker but is in fact responsible for computation itself. We set is_client_worker=False to tell the worker to store tensors locally (as opposed to letting a client manage tensor lifecycles).
###Code
local_worker = sy.SocketWorker(hook=hook,
id='alice',
port=2007,
is_client_worker=False)
local_worker.listen()
###Output
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
...
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
Received Command From: ('127.0.0.1', 40840)
|
EDA/Encoding for binary features.ipynb | ###Markdown
Microsoft Malware Prediction: Malware detection - encoding evaluation
###Code
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import gc
import category_encoders as ce
%matplotlib inline
sns.set(style="whitegrid")
warnings.filterwarnings("ignore")
dtypes = {
'ProductName': 'category',
'EngineVersion': 'category',
'AvSigVersion': 'category',
'IsSxsPassiveMode': 'int8',
'CountryIdentifier': 'int16',
'OrganizationIdentifier': 'float16',
'GeoNameIdentifier': 'float16',
'LocaleEnglishNameIdentifier': 'int8',
'Platform': 'category',
'Processor': 'category',
'OsVer': 'category',
'OsSuite': 'int16',
'OsPlatformSubRelease': 'category',
'SkuEdition': 'category',
'IsProtected': 'float16',
'IeVerIdentifier': 'float16',
'SmartScreen': 'category',
'UacLuaenable': 'float32',
'Census_MDC2FormFactor': 'category',
'Census_DeviceFamily': 'category',
'Census_ProcessorCoreCount': 'float16',
'Census_ProcessorManufacturerIdentifier': 'float16',
'Census_PrimaryDiskTypeName': 'category',
'Census_SystemVolumeTotalCapacity': 'float32',
'Census_TotalPhysicalRAM': 'float32',
'Census_ChassisTypeName': 'category',
'Census_PowerPlatformRoleName': 'category',
'Census_OSVersion': 'category',
'Census_OSArchitecture': 'category',
'Census_OSBranch': 'category',
'Census_OSBuildRevision': 'int32',
'Census_OSEdition': 'category',
'Census_OSSkuName': 'category',
'Census_OSInstallTypeName': 'category',
'Census_OSInstallLanguageIdentifier': 'float16',
'Census_OSUILocaleIdentifier': 'int16',
'Census_OSWUAutoUpdateOptionsName': 'category',
'Census_GenuineStateName': 'category',
'Census_ActivationChannel': 'category',
'Census_FlightRing': 'category',
'Census_IsVirtualDevice': 'float16',
'Census_IsTouchEnabled': 'int8',
'Census_IsAlwaysOnAlwaysConnectedCapable': 'float16',
'Wdft_IsGamer': 'float16',
'Wdft_RegionIdentifier': 'float16',
'HasDetections': 'int8'
}
cols = ['OsBuild', 'Platform', 'EngineVersion', 'OsVer', 'Census_ProcessorClass', 'Processor']
#### Encoding ###
def mean_encoding(dataset, columns, target):
for feature in columns:
dataset_target_mean = dataset.groupby(feature)[target].mean() # calculate mean
enc_name = feature #('%s_target_enc' % feature) # new variable name
dataset[enc_name] = dataset[feature].map(dataset_target_mean) #assign new values
return(dataset)
def frequency_encoding(dataset, columns):
for feature in columns:
dataset_target_mean = dataset[feature].value_counts() # calculate count
enc_name = feature # ('%s_target_enc' % feature) # new variable name
dataset[enc_name] = dataset[feature].map(dataset_target_mean) #assign new values
return(dataset)
def binary_encoding(dataset, columns):
encoder = ce.BinaryEncoder(cols=columns)
dataset = encoder.fit_transform(dataset)
return(dataset)
def onehot_encoding(dataset, columns):
encoder = ce.OneHotEncoder(cols=columns)
dataset = encoder.fit_transform(dataset)
return(dataset)
def ordinal_encoding(dataset, columns):
encoder = ce.OrdinalEncoder(cols=columns)
dataset = encoder.fit_transform(dataset)
return(dataset)
def loo_encoding(dataset, columns, target):
    # separate the features and the target
y = dataset[target]
X = dataset.drop(target, axis = 1)
encoder = ce.LeaveOneOutEncoder(cols=columns)
encoder.fit(X, y)
dataset = encoder.transform(X, y)
dataset[target] = y
return(dataset)
### Correlation ###
def get_correlation(dataframe, col, target):
encoded_feature = dataframe[col].values
cor = abs(np.corrcoef(dataframe[target].values, encoded_feature)[0][1]) # getting abs value here so it will be easier to compare
return(cor)
def generate_corr_list(dataframe, columns, method, df_name, tag, target):
for feature in columns:
try:
enc_name = feature #'%s%s' % (feature, tag) # new variable name
# saving correlation
# corr_features.append([df_name, method, ('%s_target_enc' % feature), get_correlation(dataframe, enc_name)])
corr_features.append([df_name, method, enc_name, get_correlation(dataframe, enc_name, target)])
enc_columns.append(enc_name)
except:
print('ERROR')
pass
def generate_corr_df():
# creating dataframe
return pd.DataFrame(corr_features, columns=['Dataset', 'Method', 'Feature', 'Correlation'])
### Utils ###
def has_high_cardinality(dataset, columns):
CARDINALITY_THRESHOLD = 100
high_cardinality_columns = [c for c in columns if dataset[c].nunique() >= CARDINALITY_THRESHOLD]
return high_cardinality_columns
def only_numbers(dataset, columns):
for feature in columns:
enc_name = feature # ('%s_target_enc' % feature) # new variable name
        dataset[enc_name] = dataset[feature].str.extractall(r'(\d+)').unstack().apply(''.join, axis=1)
return dataset
###Output
_____no_output_____
###Markdown
**Data**
###Code
target = 'HasDetections'
df = pd.read_csv('../input/train.csv', dtype=dtypes, usecols=(cols + [target]), nrows=3000000)
# We would treat high-cardinality columns differently if we had any
has_high_cardinality(df, cols)
# creating correlation list
corr_features = []
enc_columns = []
tag = '_target_enc'
###Output
_____no_output_____
###Markdown
**Correlation without encoding**
###Code
generate_corr_list(df, cols, 'without_encode', 'ordinal', '', target)
###Output
ERROR
ERROR
ERROR
ERROR
ERROR
###Markdown
**Mean**
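Mean (target) encoding replaces each category $c$ of a feature with the mean of the target over the rows in that category, i.e. $\tilde{x}_c = \mathbb{E}[y \mid x = c]$; this is what the `mean_encoding` helper defined above computes with a groupby and map.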
###Code
mean_df = mean_encoding(df, cols, target)
gc.collect()
generate_corr_list(mean_df, cols, 'mean', 'ordinal', tag, target)
###Output
_____no_output_____
###Markdown
**Frequency**
###Code
frequency_df = frequency_encoding(df, cols)
gc.collect()
generate_corr_list(frequency_df, cols, 'frequency', 'ordinal', tag, target)
###Output
_____no_output_____
###Markdown
**Binary**
###Code
binary_df = binary_encoding(df.astype(str), cols)
binary_df.head()
###Output
_____no_output_____
###Markdown
**One hot**
###Code
#onehot_df = onehot_encoding(df.astype(str), cols)
#onehot_df.head()
###Output
_____no_output_____
###Markdown
**Ordinal**
###Code
ordinal_df = ordinal_encoding(df.astype(str), cols)
gc.collect()
generate_corr_list(ordinal_df.astype(int), cols, 'ordinal', 'ordinal', tag, target)
###Output
_____no_output_____
###Markdown
**Leave one out**
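Leave-one-out encoding is mean encoding with the current row excluded: row $i$ in category $c$ gets $\tilde{x}_i = \frac{1}{n_c - 1}\sum_{j \neq i,\; x_j = c} y_j$, which reduces the target leakage that plain mean encoding suffers from.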
###Code
lov_df = loo_encoding(df, cols, target)
gc.collect()
generate_corr_list(lov_df, cols, 'leave_one_out', 'ordinal', tag, target)
###Output
_____no_output_____
###Markdown
**Result**
###Code
corr_df = generate_corr_df()
sns.set(rc={'figure.figsize':(8.7,10.27)})
ax = sns.barplot(x="Correlation", y="Feature", hue="Method", data=corr_df)
###Output
_____no_output_____ |
MODULE-6 Training ML Model.ipynb | ###Markdown
Training a Machine Learning Model
###Code
from sklearn.svm import SVC
model=SVC(C=1.0,kernel='rbf',gamma=0.01,probability=True)
model.fit(x_train,y_train)
print("model trained successfully")
# accuracy on the training set
model.score(x_train,y_train)
# accuracy on the test set
model.score(x_test,y_test)
###Output
_____no_output_____
###Markdown
Model Evaluation- Confusion Matrix- Classification Report- Kappa Score- AUC and ROC (probability)
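For reference, Cohen's kappa compares the observed agreement $p_o$ with the agreement expected by chance $p_e$: $\kappa = \dfrac{p_o - p_e}{1 - p_e}$, so $\kappa = 1$ means perfect agreement and $\kappa \approx 0$ means no better than chance.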
###Code
from sklearn import metrics
y_pred=model.predict(x_test)
y_prob=model.predict_proba(x_test) # probablity
cm=metrics.confusion_matrix(y_test,y_pred)
cm=np.concatenate((cm,cm.sum(axis=1).reshape(-1,1)),axis=1)
cm
cr=metrics.classification_report(y_test,y_pred,target_names=['male','female'],output_dict=True )
pd.DataFrame(cr).T
# kappa
metrics.cohen_kappa_score(y_test,y_pred)
###Output
_____no_output_____
###Markdown
ROC AND AUC
###Code
# roc for female
fpr,tpr,thresh=metrics.roc_curve(y_test,y_prob[:,1])
auc_s=metrics.auc(fpr,tpr)
plt.figure(figsize=(20,10))
plt.plot(fpr,tpr,'-.')
plt.plot([0,1],[0,1],'b--')
for i in range(len(thresh)):
plt.plot(fpr[i],tpr[i],'^')
plt.text(fpr[i],tpr[i],"%0.2f"%thresh[i])
plt.legend(['AUC Score =%0.2f'%auc_s])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver Operating Characteristic')
plt.show()
###Output
_____no_output_____
###Markdown
HYPERPARAMETER TUNING
###Code
model_tune=SVC()
from sklearn.model_selection import GridSearchCV
param_grid={
'C':[1,10,20,30,50,100],
'kernel':['rbf','poly'],
'gamma':[0.1,0.05,0.01,0.001,0.002,0.005],
'coef0':[0,1],
}
model_grid=GridSearchCV(model_tune,param_grid=param_grid,scoring='accuracy',cv=5,verbose=1)
model_grid.fit(X,Y)
model_grid.best_index_
model_grid.best_params_
model_grid.best_score_
# Build the ML model with the best parameters
model_best=SVC(C=10,kernel='rbf',gamma=0.002,coef0=0,probability=True)
model_best.fit(x_train,y_train)
model_best.score(x_test,y_test)
###Output
_____no_output_____
###Markdown
Model Evaluation- Confusion Matrix- Classification Report- Kappa Score- AUC and ROC (probability)
###Code
from sklearn import metrics
y_pred=model_best.predict(x_test)
y_prob=model_best.predict_proba(x_test) # probablity
cm=metrics.confusion_matrix(y_test,y_pred)
cm=np.concatenate((cm,cm.sum(axis=1).reshape(-1,1)),axis=1)
cm
cr=metrics.classification_report(y_test,y_pred,target_names=['male','female'],output_dict=True )
pd.DataFrame(cr).T
# kappa
metrics.cohen_kappa_score(y_test,y_pred)
## ROC AND AUC
# roc for female
fpr,tpr,thresh=metrics.roc_curve(y_test,y_prob[:,1])
auc_s=metrics.auc(fpr,tpr)
plt.figure(figsize=(20,10))
plt.plot(fpr,tpr,'-.')
plt.plot([0,1],[0,1],'b--')
for i in range(len(thresh)):
plt.plot(fpr[i],tpr[i],'^')
plt.text(fpr[i],tpr[i],"%0.2f"%thresh[i])
plt.legend(['AUC Score =%0.2f'%auc_s])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver Operating Characteristic')
plt.show()
#saving our machine learning model
import pickle
pickle.dump(model_best,open('model_svm.pickle','wb'))
pickle.dump(mean,open('./model/mean_preprocess.pickle','wb'))
###Output
_____no_output_____ |
sklearn/sklearn learning/demonstration/auto_examples_jupyter/model_selection/plot_train_error_vs_test_error.ipynb | ###Markdown
Train error vs Test error. Illustration of how the performance of an estimator on unseen data (test data) is not the same as the performance on training data. As the regularization increases, the performance on the training set decreases, while the performance on the test set is optimal within a range of values of the regularization parameter. The example uses an Elastic-Net regression model, and performance is measured using the explained variance, a.k.a. R^2.
###Code
print(__doc__)
# Author: Alexandre Gramfort <[email protected]>
# License: BSD 3 clause
import numpy as np
from sklearn import linear_model
# #############################################################################
# Generate sample data
n_samples_train, n_samples_test, n_features = 75, 150, 500
np.random.seed(0)
coef = np.random.randn(n_features)
coef[50:] = 0.0  # only the first 50 features impact the model
X = np.random.randn(n_samples_train + n_samples_test, n_features)
y = np.dot(X, coef)
# Split train and test data
X_train, X_test = X[:n_samples_train], X[n_samples_train:]
y_train, y_test = y[:n_samples_train], y[n_samples_train:]
# #############################################################################
# Compute train and test errors
alphas = np.logspace(-5, 1, 60)
enet = linear_model.ElasticNet(l1_ratio=0.7, max_iter=10000)
train_errors = list()
test_errors = list()
for alpha in alphas:
enet.set_params(alpha=alpha)
enet.fit(X_train, y_train)
train_errors.append(enet.score(X_train, y_train))
test_errors.append(enet.score(X_test, y_test))
i_alpha_optim = np.argmax(test_errors)
alpha_optim = alphas[i_alpha_optim]
print("Optimal regularization parameter : %s" % alpha_optim)
# Estimate the coef_ on full data with optimal regularization parameter
enet.set_params(alpha=alpha_optim)
coef_ = enet.fit(X, y).coef_
# #############################################################################
# Plot results functions
import matplotlib.pyplot as plt
plt.subplot(2, 1, 1)
plt.semilogx(alphas, train_errors, label='Train')
plt.semilogx(alphas, test_errors, label='Test')
plt.vlines(alpha_optim, plt.ylim()[0], np.max(test_errors), color='k',
linewidth=3, label='Optimum on test')
plt.legend(loc='lower left')
plt.ylim([0, 1.2])
plt.xlabel('Regularization parameter')
plt.ylabel('Performance')
# Show estimated coef_ vs true coef
plt.subplot(2, 1, 2)
plt.plot(coef, label='True coef')
plt.plot(coef_, label='Estimated coef')
plt.legend()
plt.subplots_adjust(0.09, 0.04, 0.94, 0.94, 0.26, 0.26)
plt.show()
###Output
_____no_output_____ |
GAN/GAN.ipynb | ###Markdown
Import Libraries and Data
###Code
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("./mnist/data/",one_hot=True)
###Output
_____no_output_____
###Markdown
Set Hyperparameters
###Code
total_epoch = 200
batch_size = 100
learning_rate = 0.0001
n_hidden = 256
n_input = 28*28
n_noise = 128
###Output
_____no_output_____
###Markdown
Setting Input Tensors
###Code
#Real data
X = tf.placeholder(tf.float32,[None,n_input])
#Noise data
Z = tf.placeholder(tf.float32,[None,n_noise])
#Not used Y
###Output
_____no_output_____
###Markdown
Setting Tensors for the Generator Layers
###Code
G_W1 = tf.Variable(tf.random_normal([n_noise, n_hidden],stddev=0.1))
G_B1 = tf.Variable(tf.zeros([n_hidden]))
G_W2 = tf.Variable(tf.random_normal([n_hidden, n_input],stddev=0.1))
G_B2 = tf.Variable(tf.zeros([n_input]))
###Output
_____no_output_____
###Markdown
Setting Tensors for the Discriminator Layers
###Code
D_W1 = tf.Variable(tf.random_normal([n_input, n_hidden],stddev=0.1))
D_B1 = tf.Variable(tf.zeros([n_hidden]))
D_W2 = tf.Variable(tf.random_normal([n_hidden, 1],stddev=0.1))
D_B2 = tf.Variable(tf.zeros([1]))
###Output
_____no_output_____
###Markdown
Create Generator
###Code
def generator(noise_z):
hidden = tf.nn.relu( tf.matmul( noise_z, G_W1 ) + G_B1 )
output = tf.nn.sigmoid( tf.matmul( hidden, G_W2 ) + G_B2 )
return output
###Output
_____no_output_____
###Markdown
Create Discriminator
###Code
def discriminator(inputs):
hidden = tf.nn.relu( tf.matmul( inputs, D_W1 ) + D_B1 )
output = tf.nn.sigmoid( tf.matmul( hidden, D_W2) + D_B2 )
return output
###Output
_____no_output_____
###Markdown
Function to Generate Noise Data
###Code
def get_noise(batch_size, n_noise):
return np.random.normal( size = ( batch_size, n_noise ) )
###Output
_____no_output_____
###Markdown
Building the Generator and Discriminator Graphs from the Noise Input
###Code
G = generator(Z)
D_gene = discriminator(G)
D_real = discriminator(X)
###Output
_____no_output_____
###Markdown
Defining the Loss Functions
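The discriminator is trained to maximize $\log D(x) + \log(1 - D(G(z)))$, and the generator to maximize $\log D(G(z))$ (the non-saturating form of the GAN objective). Since the optimizers below minimize, the code minimizes the negatives of these two quantities.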
###Code
loss_D = tf.reduce_mean( tf.log( D_real ) + tf.log( 1 - D_gene ) )
loss_G = tf.reduce_mean( tf.log( D_gene ) )
###Output
_____no_output_____
###Markdown
Split G / D Variables
###Code
D_variable_list = [D_W1, D_B1, D_W2, D_B2]
G_variable_list = [G_W1, G_B1, G_W2, G_B2]
###Output
_____no_output_____
###Markdown
Optimizer Setting
###Code
train_D = tf.train.AdamOptimizer(learning_rate).minimize(-loss_D,var_list = D_variable_list)
train_G = tf.train.AdamOptimizer(learning_rate).minimize(-loss_G,var_list = G_variable_list)
###Output
_____no_output_____
###Markdown
Other Settings
###Code
total_batch = int(mnist.train.num_examples / batch_size)
loss_val_D = 0
loss_val_G = 0
###Output
_____no_output_____
###Markdown
Create a TensorFlow Session and Train
###Code
sess = tf.Session()
sess.run(tf.global_variables_initializer())
for epoch in range(3):
for i in range(total_batch):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
noise = get_noise(batch_size, n_noise)
_, loss_val_D = sess.run( [ train_D, loss_D ], feed_dict = { X: batch_xs, Z: noise } )
_, loss_val_G = sess.run( [ train_G, loss_G ], feed_dict = { Z: noise } )
print('Epoch : ', '%04d' % epoch, 'D loss : {',loss_val_D,'}', 'G loss : {',loss_val_G,'}')
'''
if( epoch == 0 or ( epoch + 1 ) % 10 == 0 ):
sample_size=10
noise = get_noise(sample_size , n_noise)
samples = sess.run(G,feed_dict={Z:noise})
fig, ax = plt.subplots(1,sample_size,figsize=(sample_size,1))
for i in range(sample_size):
ax[i].set_axis_off()
ax[i].imshow(np.reshape(samples[i],(28,28)))
plt.savefig('sample{}.png'.format(str(epoch).zfill(3)),bbox_inches='tight')
plt.close(fig)
'''
sample_size=10
noise = get_noise(sample_size , n_noise)
samples = sess.run(G,feed_dict={Z:noise})
fig, ax = plt.subplots(1,sample_size,figsize=(sample_size,1))
for i in range(sample_size):
ax[i].set_axis_off()
ax[i].imshow(np.reshape(samples[i],(28,28)))
plt.savefig('./GAN_Picture/sample{}.png'.format(str(143).zfill(3)),bbox_inches='tight')
plt.close(fig)
###Output
_____no_output_____ |
notebooks/rice_seed_classification.ipynb | ###Markdown
Rice Seed Classification Preface How will you classify the below two images? Proper Shaped Rice Seed / Broken Rice Seed. In this tutorial, we will construct a classifier in 3 ways, giving you a rough sense of how deep learning (step 3) can skip the manual feature extraction we had to do in steps 1 and 2. Although you may notice that the classification task is so easy that step 1 or step 2 alone is sufficient, we will compare all steps for understanding. Step 1. Manual Classification by Conventional Approach. Step 2. Support Vector Machine based Classification. Step 3. Convolutional Neural Network based Classification. Click the run button in each cell from top to bottom and everything should run fine. If you don't want to see the results beforehand, click "edit" and select "delete all the output of the cells". Library and Custom Function Import
###Code
import numpy as np
import math, os, sys
import itertools
import matplotlib.pyplot as plt
plt.style.use('default')
from scipy import ndimage
from skimage import measure, morphology
from skimage.io import imsave, imread
from skimage.color import rgb2gray
from skimage.filters import threshold_otsu
from skimage.transform import resize
from sklearn import svm, datasets
from sklearn.metrics import confusion_matrix
import pandas as pd
#functions to assist visualizations
#confusion matrix drawing function provided by sklearn
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
#code from https://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html#sphx-glr-auto-examples-model-selection-plot-confusion-matrix-py
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
#print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
#plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=90)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
def make_meshgrid(x, y, h=.02):
"""Create a mesh of points to plot in
Parameters
----------
x: data to base x-axis meshgrid on
y: data to base y-axis meshgrid on
h: stepsize for meshgrid, optional
Returns
-------
xx, yy : ndarray
#code from https://scikit-learn.org/stable/auto_examples/svm/plot_iris.html#sphx-glr-auto-examples-svm-plot-iris-py
"""
x_min, x_max = x.min() - 1, x.max() + 1
y_min, y_max = y.min() - 1, y.max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
return xx, yy
def plot_contours(clf, xx, yy, **params):
"""Plot the decision boundaries for a classifier.
Parameters
----------
ax: matplotlib axes object
clf: a classifier
xx: meshgrid ndarray
yy: meshgrid ndarray
params: dictionary of params to pass to contourf, optional
"""
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
out = plt.contourf(xx, yy, Z, **params)
return out
###Output
_____no_output_____
###Markdown
Dataset Import
###Code
# we will use the rice seed image dataset
!apt-get install subversion > /dev/null
!svn export https://github.com/totti0223/deep_learning_for_biologists_with_keras/trunk/notebooks/data/image image > /dev/null
#this command below see inspects part of the files which have been downloaded from the cell above
!ls image
!ls image/train
!ls image/train/proper
#let's visualize a single file
image = imread("image/train/proper/100.jpg")
plt.figure(figsize=(3,3))
plt.imshow(image)
#lets load everything into memory first
#load training dataset
X_train = []
y_train = []
for root, dirs, files in os.walk("image/train"):
files = [x for x in files if x.endswith(".jpg")]
for file in files:
image_path = os.path.join(root, file)
image = imread(image_path)/255.
image = resize(image,(32,32))
X_train.append(image)
category = os.path.split(root)[-1]
if category == "proper":
y_train.append(0)
else:
y_train.append(1)
X_train = np.array(X_train)
y_train = np.array(y_train)
#load test dataset
X_test = []
y_test = []
for root, dirs, files in os.walk("image/test"):
files = [x for x in files if x.endswith(".jpg")]
for file in files:
image_path = os.path.join(root, file)
image = imread(image_path)/255.
image = resize(image,(32,32))
X_test.append(image)
category = os.path.split(root)[-1]
if category == "proper":
y_test.append(0)
else:
y_test.append(1)
X_test = np.array(X_test)
y_test = np.array(y_test)
print("train dataset shape is:", X_train.shape,y_train.shape)
print("test dataset shape is:", X_test.shape,y_test.shape)
plt.subplots_adjust(wspace=0.4, hspace=0.6)
#randomly show several images from the training dataset
index = np.random.randint(0,X_train.shape[0],size=9)
for i, idx in enumerate(index):
plt.subplot(3,3,i+1)
if y_train[idx] == 0:
label = "proper"
else:
label = "broken"
plt.title(label)
plt.imshow(X_train[idx])
###Output
_____no_output_____
###Markdown
Step 1 Manual Classification Let's manually define the feature for classification. In this case, simply binarizing the image, extracting the area of each seed, and then defining a threshold will do well. All of the image transformations and area measurements can be done with functions from skimage and scipy.
###Code
#Let's try it for one image
image = X_train[0]
#the original image
print(image.shape)
plt.figure(figsize=(3,3))
plt.imshow(image)
plt.title("original")
plt.show()
#gray conversion
gray = rgb2gray(image)
print(gray.shape)
plt.figure(figsize=(3,3))
plt.imshow(gray, cmap=plt.cm.gray)
plt.title("gray converted")
plt.show()
#binary conversion
threshold = threshold_otsu(gray)
binary = gray > threshold
plt.figure(figsize=(3,3))
plt.imshow(binary, cmap=plt.cm.gray)
plt.show()
###Output
_____no_output_____
###Markdown
Now that we have a nice binary image, we can isolate the region of the rice seed, which corresponds to the white region of the image above.
###Code
label_im, nb_labels = ndimage.label(binary)
regionprops = measure.regionprops(label_im, intensity_image=gray)
regionprop = regionprops[0]
print("area is",regionprop.area)
print("major axis length is", regionprop.major_axis_length)
print("minor axis length is", regionprop.minor_axis_length)
#bundling the above into a function
def quantify_area(image):
gray = rgb2gray(image)
threshold = threshold_otsu(gray)
binary = gray > threshold
label_im, nb_labels = ndimage.label(binary)
regionprops = measure.regionprops(label_im, intensity_image=gray)
regionprop = regionprops[0]
area = regionprop.area
return area
#test
area = quantify_area(image)
print(area)
X_train_area = []
for image in X_train:
area = quantify_area(image)
X_train_area.append(area)
X_test_area = []
for image in X_test:
area = quantify_area(image)
X_test_area.append(area)
#check the calculated data area value of training dataset
plt.scatter(range(len(X_train_area)),X_train_area,c=y_train,cmap="jet")
plt.xlabel("rice seed images")
plt.ylabel("area (px)")
plt.show()
#define an area threshold that can separate the two classes
#must change from the default value or it won't separate nicely.
#run this code once and try a suitable value that separates blue and red with the horizontal line
area_threshold = 200
#classify whether the image is a proper seed or a broken seed according to the area_threshold value
train_y_pred = []
for area in X_train_area:
if area > area_threshold:
train_y_pred.append(0)
else:
train_y_pred.append(1)
#plot scatter with threshold line
plt.figure(figsize=(5,3))
plt.scatter(range(len(X_train_area)),X_train_area,c=y_train,cmap=plt.cm.coolwarm)
plt.axhline(y=area_threshold)
plt.title("blue:proper seed, red: broken seed")
plt.show()
#calculate confusion matrix
cnf = confusion_matrix(y_train, train_y_pred)
#confusion matrix in figure
plt.figure(figsize=(3,3))
plot_confusion_matrix(cnf, classes=["proper","broken"])
plt.show()
#evaluate it with the test dataset
test_y_pred = []
for area in X_test_area:
if area > area_threshold:
test_y_pred.append(0)
else:
test_y_pred.append(1)
#plot scatter with threshold line
plt.figure(figsize=(5,3))
plt.scatter(range(len(X_test_area)),X_test_area,c=y_test,cmap=plt.cm.coolwarm)
plt.axhline(y=area_threshold)
#plt.plot([100,0],[100,350],'k-',lw=2)
plt.show()
#calculate confusion matrix
cnf = confusion_matrix(y_test, test_y_pred)
#confusion matrix in figure
plt.figure(figsize=(3,3))
plot_confusion_matrix(cnf, classes=["proper","broken"])
plt.show()
#build a classifier
def manual_classifier(image,area_threshold):
gray = rgb2gray(image)
threshold = threshold_otsu(gray)
binary = gray > threshold
label_im, nb_labels = ndimage.label(binary)
regionprops = measure.regionprops(label_im, intensity_image=gray)
regionprop = regionprops[0]
area = regionprop.area
if area > area_threshold:
return 0
else:
return 1
# get an image from the test dataset; the value must be lower than the size of the test dataset (20-1)
n = 5
image = X_test[n]
label = y_test[n]
area_threshold = 350
prediction = manual_classifier(image,area_threshold)
plt.imshow(image)
print("correct label is: ",label)
print("predicted label is: ",prediction)
###Output
correct label is: 0
predicted label is: 0
###Markdown
Step 2 Support Vector Machine Classification We will next use a Support Vector Machine (SVM) for classification. Since an SVM can work with two or more features, we will add another metric, the major_axis_length of the region of interest.
###Code
def quantify_mal(image):
gray = rgb2gray(image)
threshold = threshold_otsu(gray)
binary = gray > threshold
label_im, nb_labels = ndimage.label(binary)
regionprops = measure.regionprops(label_im, intensity_image=gray)
regionprop = regionprops[0]
mal = regionprop.major_axis_length
return mal
X_train_mal = []
for image in X_train:
mal = quantify_mal(image)
X_train_mal.append(mal)
X_test_mal = []
for image in X_test:
mal = quantify_mal(image)
X_test_mal.append(mal)
#reshape data for svm input
X_train2 = np.array([[x,y] for x,y in zip(X_train_mal,X_train_area)])
X_test2 = np.array([[x,y] for x,y in zip(X_test_mal,X_test_area)])
#just concatenating the two data
print(X_train2[:5,:])
#define linear support vector machine for defining the threshold
clf=svm.SVC(kernel="linear")
#train the classifier
clf.fit(X_train2,y_train)
xx, yy = make_meshgrid(X_train2[:,0],X_train2[:,1],h=0.08)
plot_contours(clf, xx, yy,
cmap=plt.cm.coolwarm, alpha=0.8)
plt.scatter(X_train2[:,0],X_train2[:,1],c=y_train,cmap=plt.cm.coolwarm)
plt.title("svm classifier for train data")
plt.xlabel("major_axis_length (px)")
plt.ylabel("area (px)")
plt.show()
#Coordinates in the red region are classified as broken seeds, and those in the blue region as proper shaped seeds
xx, yy = make_meshgrid(X_test2[:,0],X_test2[:,1],h=0.08)
plot_contours(clf, xx, yy,
cmap=plt.cm.coolwarm, alpha=0.8)
plt.scatter(X_test2[:,0],X_test2[:,1],c=y_test,cmap=plt.cm.coolwarm)
plt.title("svm classifier for test data")
plt.xlabel("major_axis_length (px)")
plt.ylabel("area (px)")
plt.show()
#construct a classifier function
def svm_classifier(clf,image):
gray = rgb2gray(image)
threshold = threshold_otsu(gray)
binary = gray > threshold
label_im, nb_labels = ndimage.label(binary)
regionprops = measure.regionprops(label_im, intensity_image=gray)
regionprop = regionprops[0]
    area = regionprop.area
    mal = regionprop.major_axis_length
    # the classifier was trained on [major_axis_length, area], so predict with both features
    result = clf.predict(np.array([mal, area]).reshape(1, -1))
return result
# get an image from the test dataset; the value must be lower than the size of the test dataset (20-1)
n = 5
image = X_test[n]
label = y_test[n]
prediction = svm_classifier(clf,image)
plt.imshow(image)
print("correct label is: ",label)
print("predicted label is: ",prediction)
###Output
correct label is: 1
predicted label is: [1]
###Markdown
Step 3 Separating the two classes with Deep Learning (Convolutional Neural Network) Finally, we will use a Convolutional Neural Network (CNN), a type of deep learning architecture that can handle images. By using a CNN, we are freed from hand-crafting a suitable feature and only have to feed the images to the network; the CNN will find the most suitable features for classification.
###Code
import keras
from keras import backend as K
from keras.models import Sequential
from keras import layers
from keras.utils.np_utils import to_categorical
from sklearn.model_selection import train_test_split
###Output
Using TensorFlow backend.
###Markdown
Prepare Input Data
###Code
y_train2 = to_categorical(y_train)
X_train3, X_valid3, y_train3, y_valid3 = train_test_split(X_train, y_train2, test_size=0.2, random_state=42)
###Output
_____no_output_____
###Markdown
Model Construction
###Code
model = Sequential([
layers.Conv2D(16, (3,3), input_shape=(32,32,3),name="conv1"),
layers.Activation("relu"),
layers.MaxPool2D((2,2),name="pool1"),
layers.Dropout(0.25),
layers.Conv2D(32, (3,3),name="conv2"),
layers.Activation("relu"),
layers.MaxPool2D((2,2),name="pool2"),
layers.Dropout(0.25),
layers.Flatten(),
layers.Dense(64,name="fc1"),
layers.Activation("relu"),
layers.Dense(2,name="fc2"),
layers.Activation("softmax")
])
model.compile("adam",loss="categorical_crossentropy",metrics=["acc"])
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1 (Conv2D) (None, 30, 30, 16) 448
_________________________________________________________________
activation_1 (Activation) (None, 30, 30, 16) 0
_________________________________________________________________
pool1 (MaxPooling2D) (None, 15, 15, 16) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 15, 15, 16) 0
_________________________________________________________________
conv2 (Conv2D) (None, 13, 13, 32) 4640
_________________________________________________________________
activation_2 (Activation) (None, 13, 13, 32) 0
_________________________________________________________________
pool2 (MaxPooling2D) (None, 6, 6, 32) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 6, 6, 32) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 1152) 0
_________________________________________________________________
fc1 (Dense) (None, 64) 73792
_________________________________________________________________
activation_3 (Activation) (None, 64) 0
_________________________________________________________________
fc2 (Dense) (None, 2) 130
_________________________________________________________________
activation_4 (Activation) (None, 2) 0
=================================================================
Total params: 79,010
Trainable params: 79,010
Non-trainable params: 0
_________________________________________________________________
###Markdown
Training
###Code
history = model.fit(x = X_train3, y = y_train3, batch_size=32, epochs= 10,validation_data=(X_valid3,y_valid3))
###Output
Train on 301 samples, validate on 76 samples
Epoch 1/10
301/301 [==============================] - 4s 14ms/step - loss: 0.5548 - acc: 0.6910 - val_loss: 0.4298 - val_acc: 0.9605
Epoch 2/10
301/301 [==============================] - 0s 322us/step - loss: 0.1893 - acc: 0.9834 - val_loss: 0.1902 - val_acc: 0.9474
Epoch 3/10
301/301 [==============================] - 0s 324us/step - loss: 0.0317 - acc: 0.9900 - val_loss: 0.0921 - val_acc: 0.9737
Epoch 4/10
301/301 [==============================] - 0s 326us/step - loss: 0.0230 - acc: 0.9900 - val_loss: 0.0970 - val_acc: 0.9868
Epoch 5/10
301/301 [==============================] - 0s 330us/step - loss: 0.0528 - acc: 0.9834 - val_loss: 0.1183 - val_acc: 0.9737
Epoch 6/10
301/301 [==============================] - 0s 332us/step - loss: 0.0198 - acc: 0.9934 - val_loss: 0.0587 - val_acc: 0.9868
Epoch 7/10
301/301 [==============================] - 0s 335us/step - loss: 0.0222 - acc: 0.9900 - val_loss: 0.1918 - val_acc: 0.9605
Epoch 8/10
301/301 [==============================] - 0s 374us/step - loss: 0.0062 - acc: 1.0000 - val_loss: 0.0746 - val_acc: 0.9868
Epoch 9/10
301/301 [==============================] - 0s 319us/step - loss: 0.0069 - acc: 1.0000 - val_loss: 0.1629 - val_acc: 0.9605
Epoch 10/10
301/301 [==============================] - 0s 335us/step - loss: 0.0161 - acc: 0.9900 - val_loss: 0.1076 - val_acc: 0.9868
###Markdown
Visualizing Training Result
###Code
plt.plot(history.history["acc"],label="train_accuracy")
plt.plot(history.history["val_acc"],label="validation_accuracy")
plt.legend()
plt.show()
plt.plot(history.history["loss"],label="train_loss")
plt.plot(history.history["val_loss"],label="validation_loss")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Use the model
###Code
print(X_train3.shape)
n = 102
input_image = X_train3[n][np.newaxis,...]
print("label is: ", y_train3[n])
predictions = model.predict(input_image)
print("prediction is",predictions[0])
###Output
label is: [1. 0.]
prediction is [9.9999750e-01 2.5576944e-06]
|
experiments/tuned_1/oracle.run2/trials/1/trial.ipynb | ###Markdown
PTN Template. This notebook serves as a template for single-dataset PTN experiments. It can be run on its own by setting STANDALONE to True (do a find for "STANDALONE" to see where), but it is intended to be executed as part of a *papermill.py script. See any of the experiments with a papermill script to get started with that workflow.
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform
###Output
_____no_output_____
###Markdown
Required Parameters. These are allowed parameters, not defaults. Each of these values needs to be present in the injected parameters (the notebook will raise an exception if they are not present). Papermill uses the cell tag "parameters" to inject the real parameters below this cell. Enable tags to see what I mean.
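For orientation, here is a minimal sketch of how a template like this is typically driven with papermill (the notebook paths and parameter values below are illustrative assumptions, not taken from the actual *papermill.py script):
```python
import papermill as pm

# Hypothetical paths and values; papermill writes this dict into the cell tagged "parameters".
pm.execute_notebook(
    "trial.ipynb",         # this template
    "trial_output.ipynb",  # executed copy containing the injected parameters
    parameters={"experiment_name": "example", "lr": 0.001, "seed": 1337},
)
```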
###Code
required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"labels_source",
"labels_target",
"domains_source",
"domains_target",
"num_examples_per_domain_per_label_source",
"num_examples_per_domain_per_label_target",
"n_shot",
"n_way",
"n_query",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_transforms_source",
"x_transforms_target",
"episode_transforms_source",
"episode_transforms_target",
"pickle_name",
"x_net",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"torch_default_dtype"
}
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.0001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["num_examples_per_domain_per_label_source"]=100
standalone_parameters["num_examples_per_domain_per_label_target"]=100
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 100
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "target_accuracy"
standalone_parameters["x_transforms_source"] = ["unit_power"]
standalone_parameters["x_transforms_target"] = ["unit_power"]
standalone_parameters["episode_transforms_source"] = []
standalone_parameters["episode_transforms_target"] = []
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# uncomment for CORES dataset
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
standalone_parameters["labels_source"] = ALL_NODES
standalone_parameters["labels_target"] = ALL_NODES
standalone_parameters["domains_source"] = [1]
standalone_parameters["domains_target"] = [2,3,4,5]
standalone_parameters["pickle_name"] = "cores.stratified_ds.2022A.pkl"
# Uncomment these for ORACLE dataset
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# standalone_parameters["labels_source"] = ALL_SERIAL_NUMBERS
# standalone_parameters["labels_target"] = ALL_SERIAL_NUMBERS
# standalone_parameters["domains_source"] = [8,20, 38,50]
# standalone_parameters["domains_target"] = [14, 26, 32, 44, 56]
# standalone_parameters["pickle_name"] = "oracle.frame_indexed.stratified_ds.2022A.pkl"
# standalone_parameters["num_examples_per_domain_per_label_source"]=1000
# standalone_parameters["num_examples_per_domain_per_label_target"]=1000
# Uncomment these for Metahan dataset
# standalone_parameters["labels_source"] = list(range(19))
# standalone_parameters["labels_target"] = list(range(19))
# standalone_parameters["domains_source"] = [0]
# standalone_parameters["domains_target"] = [1]
# standalone_parameters["pickle_name"] = "metehan.stratified_ds.2022A.pkl"
# standalone_parameters["n_way"] = len(standalone_parameters["labels_source"])
# standalone_parameters["num_examples_per_domain_per_label_source"]=200
# standalone_parameters["num_examples_per_domain_per_label_target"]=100
standalone_parameters["n_way"] = len(standalone_parameters["labels_source"])
# Parameters
parameters = {
"experiment_name": "tuned_1_oracle.run2",
"device": "cuda",
"lr": 0.001,
"seed": 1337,
"dataset_seed": 1337,
"labels_source": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"labels_target": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"x_transforms_source": [],
"x_transforms_target": [],
"episode_transforms_source": [],
"episode_transforms_target": [],
"domains_source": [8, 32, 50],
"domains_target": [14, 20, 26, 38, 44],
"num_examples_per_domain_per_label_source": 10000,
"num_examples_per_domain_per_label_target": 1000,
"n_shot": 3,
"n_way": 16,
"n_query": 2,
"train_k_factor": 3,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_loss",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"pickle_name": "oracle.Run2_10kExamples_stratified_ds.2022A.pkl",
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if not 'parameters' in locals() and not 'parameters' in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
# (This is due to the randomized initial weights)
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
###################################
# Build the dataset
###################################
if p.x_transforms_source == []: x_transform_source = None
else: x_transform_source = get_chained_transform(p.x_transforms_source)
if p.x_transforms_target == []: x_transform_target = None
else: x_transform_target = get_chained_transform(p.x_transforms_target)
if p.episode_transforms_source == []: episode_transform_source = None
else: raise Exception("episode_transform_source not implemented")
if p.episode_transforms_target == []: episode_transform_target = None
else: raise Exception("episode_transform_target not implemented")
eaf_source = Episodic_Accessor_Factory(
labels=p.labels_source,
domains=p.domains_source,
num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_source,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name),
x_transform_func=x_transform_source,
example_transform_func=episode_transform_source,
)
train_original_source, val_original_source, test_original_source = eaf_source.get_train(), eaf_source.get_val(), eaf_source.get_test()
eaf_target = Episodic_Accessor_Factory(
labels=p.labels_target,
domains=p.domains_target,
num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_target,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name),
x_transform_func=x_transform_target,
example_transform_func=episode_transform_target,
)
train_original_target, val_original_target, test_original_target = eaf_target.get_train(), eaf_target.get_val(), eaf_target.get_test()
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
# Some quick unit tests on the data
from steves_utils.transforms import get_average_power, get_average_magnitude
q_x, q_y, s_x, s_y, truth = next(iter(train_processed_source))
assert q_x.dtype == eval(p.torch_default_dtype)
assert s_x.dtype == eval(p.torch_default_dtype)
print("Visually inspect these to see if they line up with expected values given the transforms")
print('x_transforms_source', p.x_transforms_source)
print('x_transforms_target', p.x_transforms_target)
print("Average magnitude, source:", get_average_magnitude(q_x[0].numpy()))
print("Average power, source:", get_average_power(q_x[0].numpy()))
q_x, q_y, s_x, s_y, truth = next(iter(train_processed_target))
print("Average magnitude, target:", get_average_magnitude(q_x[0].numpy()))
print("Average power, target:", get_average_power(q_x[0].numpy()))
###################################
# Build the model
###################################
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=(2,256))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy for if it was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assessment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
###Output
_____no_output_____ |
CNN_Horse_Human_Classification.ipynb | ###Markdown
Author: Kumar R.**Problem Statement: Classification of Horses or Humans using a CNN Algorithm.** Train the model using training images and test the performance of the model using validation images, then check whether the model is working properly by making predictions.
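For context, flow_from_directory (used below) infers one class per subfolder, so the Train and Validation folders are assumed to be laid out roughly as follows (the folder names here are illustrative assumptions, not taken from the drive):
```
Train/
    horses/      <- images for one class
    humans/      <- images for the other class
Validation/
    horses/
    humans/
```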
###Code
#Import the required libraries
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
#Importing the optimizer
from tensorflow.keras.optimizers import RMSprop
#creating a drive path to load the dataset from the drive account. This is used only if we are training the model using google colab.
from google.colab import drive
drive.mount('/content/drive/')
#Installing keras on server
!pip install -q keras
import keras
#Load the training and testing images from the drive
train = '/content/drive/MyDrive/Colab Notebooks/Assignment1`/Train'
test = '/content/drive/MyDrive/Colab Notebooks/Assignment1`/Validation'
#Rescale all the images to 1.0/255. The intensity of all the images will be 0-255.
train_data = ImageDataGenerator(rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
test_data = ImageDataGenerator(rescale=1./255)
#Flow training images in a batch of 20 using train_data
train_generator = train_data.flow_from_directory(train,
batch_size=20,
class_mode = 'binary',
target_size=(150,150))
#Flow testing images in a batch of 20 using test_data
test_generator = test_data.flow_from_directory(test,
batch_size=20,
class_mode='binary',
target_size=(150,150))
###Output
Found 1027 images belonging to 2 classes.
Found 256 images belonging to 2 classes.
###Markdown
There are 1027 images across 2 classes in the training set and 256 images across 2 classes in the validation set.
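As a quick sanity check, the class-to-index mapping that flow_from_directory inferred from the subfolder names can be printed (a small sketch; the exact class names shown in the comment are assumptions about this dataset's folder layout):
```python
# DirectoryIterator exposes the inferred mapping from class name to label index
print(train_generator.class_indices)   # e.g. {'horses': 0, 'humans': 1}
print(test_generator.class_indices)
```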
###Code
#Initialize the CNN_model
model = tf.keras.models.Sequential()
#Add convolution layers
model.add(tf.keras.layers.Conv2D(32,(3,3), activation='relu', input_shape=(150,150,3)))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2,2)))
model.add(tf.keras.layers.Conv2D(64,(3,3), activation='relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2,2)))
model.add(tf.keras.layers.Conv2D(128,(3,3), activation='relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2,2)))
#model.add(tf.keras.layers.Conv2D(128,(3,3), activation='relu'))
#model.add(tf.keras.layers.MaxPooling2D(pool_size=(2,2)))
#Flattening
model.add(tf.keras.layers.Flatten())
#Fully connected ANN
#model.add(tf.keras.layers.Dense(units=300, activation='relu'))
model.add(tf.keras.layers.Dense(units=512, activation='relu'))
#model.add(tf.keras.layers.Dense(units=256, activation='relu'))
#model.add(tf.keras.layers.Dense(units=128, activation='relu'))
#Output layers
model.add(tf.keras.layers.Dense(units=1, activation='sigmoid'))
#Model compilation
model.compile(optimizer=RMSprop(learning_rate=0.001), loss='binary_crossentropy', metrics=['accuracy'])
#Using Earlystopping method to stop the model if the validation score is not improving after certain epochs.
from keras.callbacks import EarlyStopping
early_stop = EarlyStopping(monitor='val_accuracy', patience=10,verbose=1)
#Training the model
result = model.fit_generator(train_generator,
steps_per_epoch =51,
epochs=30,
validation_data= test_generator,
validation_steps = 13,
callbacks = [early_stop],
verbose=2)
#Visualization of the model's performance
import matplotlib.pyplot as plt
%matplotlib inline
train_score = result.history['accuracy']
test_score = result.history['val_accuracy']
plt.plot(train_score, 'bo', label='Train_Score')
plt.plot(test_score, 'g', label='Test_score')
plt.legend()
#prediction - 1
image_loc = '/content/drive/MyDrive/Colab Notebooks/Assignment1`/Validation/horses/horse1-122.png'
#Image Loading
load_image = tf.keras.preprocessing.image.load_img(image_loc, target_size=(150,150))
#Converting image to array
image_array = tf.keras.preprocessing.image.img_to_array(load_image)
#Model expects image input in the form of batch
batch_image = np.array([image_array])
#the image I'm supplying as an input
load_image
#predict the classification of the image
pred = model.predict(batch_image)
predict = (model.predict(batch_image)>0.5).astype('int32')
print('predicted values: ',pred)
print('Predicted class: ',predict)
print("")
if predict ==0:
print("Its a horse")
else:
print("Its a human")
#prediction - 2
image_loc = '/content/drive/MyDrive/Colab Notebooks/Assignment1`/Anil.jpg'
#load the image
image_load = tf.keras.preprocessing.image.load_img(image_loc, target_size=(150,150))
#Convert image to array
image_aray = tf.keras.preprocessing.image.img_to_array(image_load)
#Batch array
image_batch = np.array([image_aray])
image_load
#Predicting the classification of image.
predi = model.predict(image_batch)
prediction = (model.predict(image_batch)>0.5).astype('int32')
print('predicted value: ',predi)
print('predicted class: ',prediction)
print("")
if prediction==0:
print("It's a Horse")
else:
print('Its a Human')
###Output
predicted value: [[1.]]
predicted class: [[1]]
Its a Human
|
deep_learning_from_scratch_2_ch02.ipynb | ###Markdown
###Code
!git clone https://github.com/kgeneral/deep-learning-from-scratch-2.git
#!cd deep-learning-from-scratch-2 && git pull
!cp ./deep-learning-from-scratch-2/ch02/* .
!ls -al .
!ln -s ./deep-learning-from-scratch-2/common common
!ln -s ./deep-learning-from-scratch-2/dataset dataset
!ls -al common/
cat count_method_big.py
# coding: utf-8
import sys
sys.path.append('..')
import numpy as np
np.set_printoptions(threshold=sys.maxsize) # for debug
#from common import config
#config.GPU = True
def preprocess(text):
text = text.lower()
text = text.replace('.', ' .')
words = text.split(' ')
word_to_id = {}
id_to_word = {}
for word in words:
if word not in word_to_id:
new_id = len(word_to_id)
word_to_id[word] = new_id
id_to_word[new_id] = word
corpus = np.array([word_to_id[w] for w in words])
return corpus, word_to_id, id_to_word
text = 'You say goodbye and I say hello.'
corpus, word_to_id, id_to_word = preprocess(text)
corpus
word_to_id
id_to_word
def create_co_matrix(corpus, vocab_size, window_size=1):
'''Create a co-occurrence matrix.
:param corpus: corpus (list of word IDs)
:param vocab_size: vocabulary size
:param window_size: window size (a window size of 1 means one word on each side of the target word counts as context)
:return: co-occurrence matrix
'''
corpus_size = len(corpus)
co_matrix = np.zeros((vocab_size, vocab_size), dtype=np.int32)
for idx, word_id in enumerate(corpus):
for i in range(1, window_size + 1):
left_idx = idx - i
right_idx = idx + i
if left_idx >= 0:
left_word_id = corpus[left_idx]
co_matrix[word_id, left_word_id] += 1
if right_idx < corpus_size:
right_word_id = corpus[right_idx]
co_matrix[word_id, right_word_id] += 1
return co_matrix
vocab_size = len(word_to_id)
C = create_co_matrix(corpus, vocab_size)
print(C)
def cos_similarity(x, y, eps=1e-8):
'''Compute the cosine similarity.
:param x: vector
:param y: vector
:param eps: small value to avoid division by zero
:return:
'''
nx = x / (np.sqrt(np.sum(x ** 2)) + eps)
ny = y / (np.sqrt(np.sum(y ** 2)) + eps)
return np.dot(nx, ny)
c0 = C[word_to_id['you']] # "you"์ ๋จ์ด ๋ฒกํฐ
c1 = C[word_to_id['i']] # "i"์ ๋จ์ด ๋ฒกํฐ
print(cos_similarity(c0, c1))
def most_similar(query, word_to_id, id_to_word, word_matrix, top=5):
'''Search for words similar to the query.
:param query: query (text)
:param word_to_id: dictionary mapping words to word IDs
:param id_to_word: dictionary mapping word IDs to words
:param word_matrix: matrix of word vectors; each row is assumed to hold the vector of the corresponding word
:param top: how many of the top results to display
'''
if query not in word_to_id:
print('%s was not found.' % query)
return
print('\n[query] ' + query)
query_id = word_to_id[query]
query_vec = word_matrix[query_id]
# compute the cosine similarities
vocab_size = len(id_to_word)
similarity = np.zeros(vocab_size)
for i in range(vocab_size):
similarity[i] = cos_similarity(word_matrix[i], query_vec)
# print the results in descending order of cosine similarity
count = 0
for i in (-1 * similarity).argsort():
if id_to_word[i] == query:
continue
print(' %s: %s' % (id_to_word[i], similarity[i]))
count += 1
if count >= top:
return
most_similar('you', word_to_id, id_to_word, C, top=5)
most_similar('say', word_to_id, id_to_word, C, top=5)
most_similar('and', word_to_id, id_to_word, C, top=5)
def ppmi(C, verbose=False, eps = 1e-8):
'''Create a PPMI (positive pointwise mutual information) matrix.
:param C: co-occurrence matrix
:param verbose: whether to print progress
:return:
'''
M = np.zeros_like(C, dtype=np.float32)
N = np.sum(C)
S = np.sum(C, axis=0)
total = C.shape[0] * C.shape[1]
cnt = 0
for i in range(C.shape[0]):
for j in range(C.shape[1]):
pmi = np.log2(C[i, j] * N / (S[j]*S[i]) + eps)
M[i, j] = max(0, pmi)
if verbose:
cnt += 1
if cnt % (total//100) == 0:
print('%.1f%% done' % (100*cnt/total))
return M
W = ppmi(C)
np.set_printoptions(precision=3) # display to three significant digits
print('Co-occurrence matrix')
print(C)
print('-'*50)
print('PPMI')
print(W)
# SVD
U, S, V = np.linalg.svd(W)
np.set_printoptions(precision=3) # display to three significant digits
print(C[0])
print(W[0])
print(U[0])
import matplotlib.pyplot as plt
# plot
for word, word_id in word_to_id.items():
plt.annotate(word, (U[word_id, 0], U[word_id, 1]))
plt.scatter(U[:,0], U[:,1], alpha=0.5)
plt.show()
from dataset import ptb
corpus, word_to_id, id_to_word = ptb.load_data('train')
print('Corpus size:', len(corpus))
print('corpus[:30]:', corpus[:30])
print()
print('id_to_word[0]:', id_to_word[0])
print('id_to_word[1]:', id_to_word[1])
print('id_to_word[2]:', id_to_word[2])
print()
print("word_to_id['car']:", word_to_id['car'])
print("word_to_id['happy']:", word_to_id['happy'])
print("word_to_id['lexus']:", word_to_id['lexus'])
!cat count_method_big_gpu.py
!python count_method_big_gpu.py
#!python count_method_big.py
###Output
_____no_output_____ |
Code/XGBoost/Part_01_EDA_Pyspark.ipynb | ###Markdown
Part 01 - EDA with Pyspark. Gradient Boosted Trees applied to fraud detection. Pyspark libraries
###Code
from pyspark.ml import Pipeline
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.sql.functions import col, countDistinct
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, explode, array, lit
# Import VectorAssembler and Vectors
from pyspark.ml.linalg import Vectors
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import GBTClassifier
from pyspark.sql.functions import pow, col
import datetime
from pyspark.sql.functions import year, month, dayofmonth
from pyspark.sql.functions import isnan, when, count, col
from pyspark.sql.functions import col, countDistinct
###Output
_____no_output_____
###Markdown
Python libraries
###Code
import pandas as pd
import matplotlib.pyplot as plt
%config InlineBackend.figure_format = 'retina'
import warnings
warnings.filterwarnings('ignore')
warnings.simplefilter(action='ignore', category=FutureWarning)
spark = SparkSession.builder.appName('FraudTreeMethods').getOrCreate()
###Output
_____no_output_____
###Markdown
Read Data
###Code
# inserting the parent directory into current path
import sys  # sys is not imported elsewhere in this notebook, so import it here
sys.path.insert(1, '../work/data_set')
data_name = 'train_sample.csv'
dataset_address = '../work/data_set/'
path = dataset_address + data_name
RDD = spark.read.csv(path, inferSchema=True, header=True)
RDD.show(5)
print('RDD.printSchema is \n')
RDD.printSchema()
###Output
_____no_output_____
###Markdown
Convert the click time to day and hour and add it to data.
###Code
from pyspark.sql.functions import hour, minute, dayofmonth
RDD = RDD.withColumn('hour',hour(RDD.click_time)).\
withColumn('day',dayofmonth(RDD.click_time))
RDD.show(5)
###Output
_____no_output_____
###Markdown
Feature engineering (grouping and merging), as follows. In the Python EDA we did the following:```pythongp = df[['ip','day','hour','channel']]\ .groupby(by=['ip','day','hour'])[['channel']]\ .count().reset_index()\ .rename(index=str, columns={'channel': '*ip_day_hour_count_channel'})df = df.merge(gp, on=['ip','day','hour'], how='left')```We translate it to Pyspark as follows.
###Code
gp = RDD.select("ip","day","hour", "channel")\
.groupBy("ip","day","hour")\
.agg({"channel":"count"})\
.withColumnRenamed("count(channel)", "*ip_day_hour_count_channel")\
.sort(col("ip"))
RDD = RDD.join(gp, ["ip","day","hour"])\
.sort(col("ip"))
print("RDD Columns name = \n", RDD.columns)
###Output
_____no_output_____
###Markdown
In python EDA we did following:```pythongp = df[['ip', 'app', 'channel']].groupby(by=['ip', 'app'])[['channel']].\ count().reset_index().\ rename(index=str, columns={'channel': '*ip_app_count_channel'})df = df.merge(gp, on=['ip','app'], how='left')```We translate it to Pyspark as follow.
###Code
gp = RDD.select("ip","app", "channel")\
.groupBy("ip","app")\
.agg({"channel":"count"})\
.withColumnRenamed("count(channel)", "*ip_app_count_channel")\
.sort(col("ip"))
RDD = RDD.join(gp, ["ip","app"])\
.sort(col("ip"))
print("RDD Columns name = \n", RDD.columns)
###Output
_____no_output_____
###Markdown
In python EDA we did following:```pythongp = df[['ip','app', 'os', 'channel']].\ groupby(by=['ip', 'app', 'os'])[['channel']].\ count().reset_index().\ rename(index=str, columns={'channel': '*ip_app_os_count_channel'})df = df.merge(gp, on=['ip','app', 'os'], how='left')```We translate it to Pyspark as follow.
###Code
gp = RDD.select('ip','app', 'os', 'channel')\
.groupBy('ip', 'app', 'os')\
.agg({"channel":"count"})\
.withColumnRenamed("count(channel)", "*ip_app_os_count_channel")\
.sort(col("ip"))
RDD = RDD.join(gp, ['ip','app', 'os'])\
.sort(col("ip"))
print("RDD Columns name = \n", RDD.columns)
###Output
_____no_output_____
###Markdown
In python EDA we did following:```pythongp = df[['ip','day','hour','channel']].\ groupby(by=['ip','day','channel'])[['hour']].\ var().reset_index().\ rename(index=str, columns={'hour': '*ip_day_chan_var_hour'})df = df.merge(gp, on=['ip','day','channel'], how='left')```We translate it to Pyspark as follow.
###Code
gp = RDD.select('ip','day','hour','channel')\
.groupBy('ip','day','channel')\
.agg({"hour":"variance"})\
.withColumnRenamed("variance(hour)", "*ip_day_chan_var_hour")\
.sort(col("ip"))
###Output
_____no_output_____
###Markdown
Check out the number of nan and null in the gp.
###Code
gp.select([count(when(isnan(c) | col(c).isNull(), c)).alias(c) for c in gp.columns]).show()
###Output
_____no_output_____
###Markdown
We remeber from python EDA the following ```pythonip 0app 0device 0os 0channel 0click_time 0is_attributed 0hour 0day 0*ip_day_hour_count_channel 0*ip_app_count_channel 0*ip_app_os_count_channel 0*ip_tchan_count 89123*ip_app_os_var 89715*ip_app_channel_var_day 84834*ip_app_channel_mean_hour 0dtype: int64```Therefore we skip the following grouping (columns)as follow.```python*ip_tchan_count 10877 non-null float64*ip_app_os_var 10285 non-null float64*ip_app_channel_var_day 15166 non-null float64```Note that the last gp was not joined into the data. **Let's Keep going:**In python EDA we did following:```pythongp = df[['ip','app', 'channel','hour']].\ groupby(by=['ip', 'app', 'channel'])[['hour']].\ mean().reset_index().\ rename(index=str, columns={'hour': '*ip_app_channel_mean_hour'})df = df.merge(gp, on=['ip','app', 'channel'], how='left')```We translate it to Pyspark as follow.
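If one did want to keep a variance-based feature despite those nulls, a common Pyspark workaround (not done in this notebook; shown only as a sketch) is to fill the nulls before joining:
```python
# Hypothetical: replace nulls in the variance column with 0 before any join
gp = gp.na.fill({"*ip_day_chan_var_hour": 0})
```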
###Code
gp = RDD.select('ip','app', 'channel','hour')\
.groupBy('ip', 'app', 'channel')\
.agg({"hour":"mean"})\
.withColumnRenamed("avg(hour)", "*ip_app_channel_mean_hour")\
.sort(col("ip"))
RDD = RDD.join(gp, ['ip', 'app', 'channel'])\
.sort(col("ip"))
print("RDD Columns name = \n", RDD.columns)
RDD.show(5)
###Output
_____no_output_____
###Markdown
Get summary
###Code
# data.summary().show()
cols1 = ['ip', 'app', 'channel',
'os', 'day', 'hour']
RDD.describe(cols1).show()
cols2 = ['device', 'click_time',
'attributed_time','is_attributed']
RDD.describe(cols2).show()
cols3 = ['*ip_day_hour_count_channel',
'*ip_app_count_channel',
'*ip_app_os_count_channel']
RDD.describe(cols3).show()
###Output
_____no_output_____
###Markdown
Check out the number of unique values for each column in the data.
###Code
cols4 = cols1 + cols2
RDD.agg(*(countDistinct(col(c)).alias(c) for c in cols4)).show()
RDD.agg(*(countDistinct(col(c)).alias(c) for c in cols3)).show()
###Output
_____no_output_____
###Markdown
Over sampling the data
* Over sampling
* Duplicate the minority rows
* Combine both oversampled minority rows and previous majority rows
###Code
# over sampling
major_df = RDD.filter(col("is_attributed") == 0)
minor_df = RDD.filter(col("is_attributed") == 1)
ratio = int(major_df.count()/minor_df.count())
print("ratio: {}".format(ratio))
a = range(ratio)
# duplicate the minority rows
oversampled_df = minor_df.withColumn("dummy", explode(array([lit(x) for x in a]))).drop('dummy')
# combine both oversampled minority rows and previous majority rows
RDD = major_df.unionAll(oversampled_df)
print("RDD Columns name = \n", RDD.columns)
###Output
_____no_output_____
###Markdown
Turn the RDD into pandas and use pandas' plotting abilities for visualization
* First take a sample from the big RDD
* Pass the sample into a pandas data frame
###Code
sub_RDD = RDD.sample(False, 0.01, 42)
data_pd = sub_RDD.toPandas()
data_pd.hist(bins=50,
figsize=(20,15),
facecolor='green')
plt.show()
data_pd.plot(kind="scatter",
x="app",
y="channel",
alpha=0.1,
figsize=(8,5))
plt.figure(figsize=(20,24))
cols = ['app','device','os',
'channel', 'hour', 'day',
'*ip_day_hour_count_channel', '*ip_app_count_channel',
'*ip_app_os_count_channel', '*ip_app_channel_mean_hour']
sub_attributed_mask = data_pd["is_attributed"] == 1
sub_Not_attributed_mask = data_pd["is_attributed"] == 0
for count, col in enumerate(cols, 1):
plt.subplot(4, 3, count)
plt.hist([data_pd[sub_attributed_mask][col],
data_pd[sub_Not_attributed_mask][col]],
color=['goldenrod', 'grey'],
bins=20, ec='k', density=True)
plt.title('Count distribution by {}'.format(col), fontsize=12)
plt.legend(['attributed', 'Not_attributed'])
plt.xlabel(col); plt.ylabel('density')
# path = '../Figures/'
# file_name = 'hist_dens_by_par.png'
# plt.savefig(path+file_name)
###Output
_____no_output_____
###Markdown
Transforming. Applying the transformations derived from the previous EDA.
###Code
trans_colmns = ['app','device','os', 'day',
'*ip_day_hour_count_channel',
'*ip_app_count_channel',
'*ip_app_os_count_channel']
def transformer(x):
x = pow(x, (0.05))
return x
###Output
_____no_output_____
###Markdown
Apply the defined function to each column as follows
###Code
RDD = RDD.withColumn("app", transformer('app'))
RDD = RDD.withColumn("device", transformer('device'))
RDD = RDD.withColumn("os", transformer('os'))
RDD = RDD.withColumn("day", transformer('day'))
RDD = RDD.withColumn("*ip_day_hour_count_channel", transformer('*ip_day_hour_count_channel'))
RDD = RDD.withColumn("*ip_app_count_channel", transformer('*ip_app_count_channel'))
RDD = RDD.withColumn("*ip_app_os_count_channel", transformer('*ip_app_os_count_channel'))
RDD.show()
RDD.columns
###Output
_____no_output_____
###Markdown
Drop the click time and attributed time
###Code
RDD = RDD.drop('click_time','attributed_time')
# Split the data into training and test sets (30% held out for testing)
(trainingData, testData) = RDD.randomSplit([0.7, 0.3])
cols = ['ip',
'app',
'channel',
'os',
'day',
'hour',
'device',
'is_attributed',
'*ip_day_hour_count_channel',
'*ip_app_count_channel',
'*ip_app_os_count_channel',
'*ip_app_channel_mean_hour']
assembler = VectorAssembler(inputCols = cols,outputCol="features")
trainingData = assembler.transform(trainingData)
testData = assembler.transform(testData)
###Output
_____no_output_____
###Markdown
Train the model
###Code
# Train a GBT model.
gbt = GBTClassifier(labelCol="is_attributed", featuresCol="features", maxIter=20, maxDepth=4)
# Train model. This also runs the indexers.
model = gbt.fit(trainingData)
# Make predictions.
predictions = model.transform(testData)
# Select example rows to display.
predictions.select("prediction", "is_attributed", "features").show(5)
# Select (prediction, true label) and compute test error
evaluator = MulticlassClassificationEvaluator(labelCol="is_attributed", predictionCol="prediction", metricName="accuracy")
accuracy = evaluator.evaluate(predictions)
print("Test Error = %g" % (1.0 - accuracy))
print("Test accuracy = %g" % (accuracy))
predictions.groupBy('prediction').count().show()
###Output
_____no_output_____
###Markdown
Apply to test, predict
###Code
data_name = 'test.csv'
dataset_address = '../work/data_set/'
path = dataset_address + data_name
test = spark.read.csv(path, inferSchema=True, header=True)
test.show(5)
###Output
_____no_output_____
###Markdown
Compare the train data schema with the test make sure about dimensions.```pythonRDD.printSchema is root |-- ip: integer (nullable = true) |-- app: integer (nullable = true) |-- device: integer (nullable = true) |-- os: integer (nullable = true) |-- channel: integer (nullable = true) |-- click_time: string (nullable = true) |-- attributed_time: string (nullable = true) |-- is_attributed: integer (nullable = true)```
###Code
print('test.printSchema is \n')
test.printSchema()
from pyspark.sql.functions import hour, minute, dayofmonth
test = test.withColumn('hour',hour(test.click_time)).\
withColumn('day',dayofmonth(test.click_time))
test.show(5)
###Output
_____no_output_____
###Markdown
Apply the feature engineering to test
###Code
gp = test.select("ip","day","hour", "channel")\
.groupBy("ip","day","hour")\
.agg({"channel":"count"})\
.withColumnRenamed("count(channel)", "*ip_day_hour_count_channel")
test = test.join(gp, ["ip","day","hour"])
gp = test.select("ip","app", "channel")\
.groupBy("ip","app")\
.agg({"channel":"count"})\
.withColumnRenamed("count(channel)", "*ip_app_count_channel")
test = test.join(gp, ["ip","app"])
gp = test.select('ip','app', 'os', 'channel')\
.groupBy('ip', 'app', 'os')\
.agg({"channel":"count"})\
.withColumnRenamed("count(channel)", "*ip_app_os_count_channel")
test = test.join(gp, ['ip','app', 'os'])
gp = test.select('ip','app', 'channel','hour')\
.groupBy('ip', 'app', 'channel')\
.agg({"hour":"mean"})\
.withColumnRenamed("avg(hour)", "*ip_app_channel_mean_hour")
test = test.join(gp, ['ip', 'app', 'channel'])
test.show(5)
test = test.withColumn("app", transformer('app'))
test = test.withColumn("device", transformer('device'))
test = test.withColumn("os", transformer('os'))
test = test.withColumn("day", transformer('day'))
test = test.withColumn("*ip_day_hour_count_channel", transformer('*ip_day_hour_count_channel'))
test = test.withColumn("*ip_app_count_channel", transformer('*ip_app_count_channel'))
test = test.withColumn("*ip_app_os_count_channel", transformer('*ip_app_os_count_channel'))
test.show(5)
assembler = VectorAssembler(inputCols = cols,outputCol="features")
test = assembler.transform(test)
#test.show(3)
predictions = model.transform(test)
#predictions.show(2)
data_to_submit = predictions.select(['click_id','prediction'])
data_to_submit.show(3)
data_to_submit = data_to_submit.withColumnRenamed('prediction','is_attributed')
data_to_submit.show(3)
data_to_submit.groupBy('is_attributed').count().show()
###Output
_____no_output_____ |
GUI Demo/GUI_Demo.ipynb | ###Markdown
Button
###Code
import tkinter
from tkinter import *           # later cells use bare names such as Tk, IntVar, StringVar, Frame, Entry, Button
import tkinter.messagebox       # used further below for showinfo/askquestion
window = tkinter.Tk()
# to rename the title of the window
window.title("GUI")
# pack is used to show the object in the window
label = tkinter.Label(window, text = "Welcome to DataCamp's Tutorial on Tkinter!").pack()
top_frame = tkinter.Frame(window).pack()
bottom_frame = tkinter.Frame(window).pack(side = "bottom")
# Once the frames are created then you are all set to add widgets in both the frames.
btn1 = tkinter.Button(top_frame, text = "Button1", fg = "red").pack() #'fg or foreground' is for coloring the contents (buttons)
btn2 = tkinter.Button(top_frame, text = "Button2", fg = "green").pack()
btn3 = tkinter.Button(bottom_frame, text = "Button3", fg = "purple").pack(side = "left") #'side' is used to left or right align the widgets
btn4 = tkinter.Button(bottom_frame, text = "Button4", fg = "orange").pack(side = "left")
window.mainloop()
###Output
_____no_output_____
###Markdown
Checkbutton
###Code
top = tkinter.Tk()
CheckVar1 = IntVar()
CheckVar2 = IntVar()
tkinter.Checkbutton(top, text = "Machine Learning",variable = CheckVar1,onvalue = 1, offvalue=0).grid(row=0,sticky=W)
tkinter.Checkbutton(top, text = "Deep Learning", variable = CheckVar2, onvalue = 0, offvalue =1).grid(row=1,sticky=W)
top.mainloop()
###Output
_____no_output_____
###Markdown
User Login Interface
###Code
# Let's create the Tkinter window
window = tkinter.Tk()
window.title("GUI")
# You will create two text labels namely 'username' and 'password' and and two input labels for them
tkinter.Label(window, text = "Username").grid(row = 0) #'username' is placed on position 00 (row - 0 and column - 0)
# 'Entry' class is used to display the input-field for 'username' text label
tkinter.Entry(window).grid(row = 0, column = 1) # first input-field is placed on position 01 (row - 0 and column - 1)
tkinter.Label(window, text = "Password").grid(row = 1) #'password' is placed on position 10 (row - 1 and column - 0)
tkinter.Entry(window).grid(row = 1, column = 1) #second input-field is placed on position 11 (row - 1 and column - 1)
# 'Checkbutton' class is for creating a checkbutton which will take a 'columnspan' of width two (covers two columns)
tkinter.Checkbutton(window, text = "Keep Me Logged In").grid(columnspan = 2)
window.mainloop()
###Output
_____no_output_____
###Markdown
Button with response
###Code
# Let's create the Tkinter window
window = tkinter.Tk()
window.title("GUI")
# creating a function called DataCamp_Tutorial()
def DataCamp_Tutorial():
tkinter.Label(window, text = "GUI with Tkinter!").pack()
tkinter.Button(window, text = "Click Me!", command = DataCamp_Tutorial).pack()
window.mainloop()
###Output
_____no_output_____
###Markdown
left, middle and right click
###Code
window = tkinter.Tk()
window.title("GUI")
#You will create three different functions for three different events
def left_click(event):
tkinter.Label(window, text = "Left Click!").pack()
def middle_click(event):
tkinter.Label(window, text = "Middle Click!").pack()
def right_click(event):
tkinter.Label(window, text = "Right Click!").pack()
window.bind("<Button-1>", left_click)
window.bind("<Button-2>", middle_click)
window.bind("<Button-3>", right_click)
window.mainloop()
###Output
_____no_output_____
###Markdown
Response according to user choice
###Code
window = tkinter.Tk()
window.title("GUI")
# Let's create an alert box with the 'messagebox' function
tkinter.messagebox.showinfo("Alert Message", "This is just a alert message!")
# Let's also create a question for the user and based upon the response [Yes or No Question] display a message.
response = tkinter.messagebox.askquestion("Tricky Question", "Do you love Deep Learning?")
# A basic 'if/else' block: askquestion returns the string 'yes' if the user clicks 'Yes' and 'no' otherwise. For each response you will display a message with the help of the 'Label' method.
if response == 'yes':
tkinter.Label(window, text = "Yes, of course I love Deep Learning!").pack()
else:
tkinter.Label(window, text = "No, I don't love Deep Learning!").pack()
window.mainloop()
###Output
_____no_output_____
###Markdown
Show photoimage
###Code
window = tkinter.Toplevel()
window.title("GUI")
# In order to display the image in a GUI, you will use the 'PhotoImage' method of Tkinter. It will an image from the directory (specified path) and store the image in a variable.
icon = tkinter.PhotoImage(file = "cluster.png")
# Finally, to display the image you will make use of the 'Label' method and pass the 'image' variriable as a parameter and use the pack() method to display inside the GUI.
label = tkinter.Label(window, image = icon)
label.pack()
window.mainloop()
###Output
_____no_output_____
###Markdown
Calculator!
###Code
# Let's create the Tkinter window
window = Tk()
# Then, you will define the size of the window in width (400) and height (324) using the 'geometry' method
window.geometry("400x324")
# In order to prevent the window from getting resized you will call 'resizable' method on the window
window.resizable(0, 0)
#Finally, define the title of the window
window.title("Calcualtor")
# Let's now define the required functions for the Calculator to function properly.
# 1. First is the button click 'btn_click' function, which continuously updates the input field whenever a number or any other button is pressed.
def btn_click(item):
global expression
expression = expression + str(item)
input_text.set(expression)
# 2. Second is the button clear 'btn_clear' function, which clears the input field or previous calculations using the button "C"
def btn_clear():
global expression
expression = ""
input_text.set("")
# 3. Third and final is the button equal ("=") 'btn_equal' function, which evaluates the expression in the input field. For example: if the user clicks 2, + and 3 and then "=", the result is 5.
def btn_equal():
global expression
result = str(eval(expression)) # 'eval' function is used for evaluating the string expressions directly
# you can also implement your own function to evaluate the expression instead of the 'eval' function
input_text.set(result)
expression = ""
expression = ""
# In order to get the instance of the input field 'StringVar()' is used
input_text = StringVar()
# Once all the functions are defined then comes the main section where you will start defining the structure of the calculator inside the GUI.
# The first thing is to create a frame for the input field
input_frame = Frame(window, width = 350, height = 50, bd = 0, highlightbackground = "black", highlightcolor = "black", highlightthickness = 1)
input_frame.pack(side = TOP)
# Then you will create an input field inside the 'Frame' that was created in the previous step. Here the digits or the output will be displayed as 'right' aligned
input_field = Entry(input_frame, font = ('arial', 18, 'bold'), textvariable = input_text, width = 50, bg = "#eee", bd = 0, justify = RIGHT)
input_field.grid(row = 0, column = 0)
input_field.pack(ipady = 10) # 'ipady' is an internal padding to increase the height of input field
# Once you have the input field defined then you need a separate frame which will incorporate all the buttons inside it below the 'input field'
btns_frame = Frame(window, width = 350, height = 272.5, bg = "grey")
btns_frame.pack()
# The first row will comprise of the buttons 'Clear (C)' and 'Divide (/)'
clear = Button(btns_frame, text = "C", fg = "black", width = 32, height = 3, bd = 0, bg = "#eee", cursor = "hand2", command = lambda: btn_clear()).grid(row = 0, column = 0, columnspan = 3, padx = 1, pady = 1)
divide = Button(btns_frame, text = "/", fg = "black", width = 10, height = 3, bd = 0, bg = "#eee", cursor = "hand2", command = lambda: btn_click("/")).grid(row = 0, column = 3, padx = 1, pady = 1)
# The second row will comprise of the buttons '7', '8', '9' and 'Multiply (*)'
seven = Button(btns_frame, text = "7", fg = "black", width = 10, height = 3, bd = 0, bg = "#fff", cursor = "hand2", command = lambda: btn_click(7)).grid(row = 1, column = 0, padx = 1, pady = 1)
eight = Button(btns_frame, text = "8", fg = "black", width = 10, height = 3, bd = 0, bg = "#fff", cursor = "hand2", command = lambda: btn_click(8)).grid(row = 1, column = 1, padx = 1, pady = 1)
nine = Button(btns_frame, text = "9", fg = "black", width = 10, height = 3, bd = 0, bg = "#fff", cursor = "hand2", command = lambda: btn_click(9)).grid(row = 1, column = 2, padx = 1, pady = 1)
multiply = Button(btns_frame, text = "*", fg = "black", width = 10, height = 3, bd = 0, bg = "#eee", cursor = "hand2", command = lambda: btn_click("*")).grid(row = 1, column = 3, padx = 1, pady = 1)
# The third row will comprise of the buttons '4', '5', '6' and 'Subtract (-)'
four = Button(btns_frame, text = "4", fg = "black", width = 10, height = 3, bd = 0, bg = "#fff", cursor = "hand2", command = lambda: btn_click(4)).grid(row = 2, column = 0, padx = 1, pady = 1)
five = Button(btns_frame, text = "5", fg = "black", width = 10, height = 3, bd = 0, bg = "#fff", cursor = "hand2", command = lambda: btn_click(5)).grid(row = 2, column = 1, padx = 1, pady = 1)
six = Button(btns_frame, text = "6", fg = "black", width = 10, height = 3, bd = 0, bg = "#fff", cursor = "hand2", command = lambda: btn_click(6)).grid(row = 2, column = 2, padx = 1, pady = 1)
minus = Button(btns_frame, text = "-", fg = "black", width = 10, height = 3, bd = 0, bg = "#eee", cursor = "hand2", command = lambda: btn_click("-")).grid(row = 2, column = 3, padx = 1, pady = 1)
# The fourth row will comprise of the buttons '1', '2', '3' and 'Addition (+)'
one = Button(btns_frame, text = "1", fg = "black", width = 10, height = 3, bd = 0, bg = "#fff", cursor = "hand2", command = lambda: btn_click(1)).grid(row = 3, column = 0, padx = 1, pady = 1)
two = Button(btns_frame, text = "2", fg = "black", width = 10, height = 3, bd = 0, bg = "#fff", cursor = "hand2", command = lambda: btn_click(2)).grid(row = 3, column = 1, padx = 1, pady = 1)
three = Button(btns_frame, text = "3", fg = "black", width = 10, height = 3, bd = 0, bg = "#fff", cursor = "hand2", command = lambda: btn_click(3)).grid(row = 3, column = 2, padx = 1, pady = 1)
plus = Button(btns_frame, text = "+", fg = "black", width = 10, height = 3, bd = 0, bg = "#eee", cursor = "hand2", command = lambda: btn_click("+")).grid(row = 3, column = 3, padx = 1, pady = 1)
# Finally, the fifth row will comprise of the buttons '0', 'Decimal (.)', and 'Equal To (=)'
zero = Button(btns_frame, text = "0", fg = "black", width = 21, height = 3, bd = 0, bg = "#fff", cursor = "hand2", command = lambda: btn_click(0)).grid(row = 4, column = 0, columnspan = 2, padx = 1, pady = 1)
point = Button(btns_frame, text = ".", fg = "black", width = 10, height = 3, bd = 0, bg = "#eee", cursor = "hand2", command = lambda: btn_click(".")).grid(row = 4, column = 2, padx = 1, pady = 1)
equals = Button(btns_frame, text = "=", fg = "black", width = 10, height = 3, bd = 0, bg = "#eee", cursor = "hand2", command = lambda: btn_equal()).grid(row = 4, column = 3, padx = 1, pady = 1)
window.mainloop()
###Output
_____no_output_____ |
S5/EVA6_session5_step3.ipynb | ###Markdown
Target
1. Apply batch normalization
2. Apply regularization (dropout)
3. Reduce the number of parameters

Result
1. Highest train accuracy - 99.33
2. Highest test accuracy - 99.39
3. Number of parameters - 16k

Analysis
1. The model works well enough
2. The model is not overfitting now. We can still improve the model performance to aim for higher accuracy
3. Further training gives 99.40+ accuracy but we still need to modify our model

Import Libraries
###Code
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
###Output
_____no_output_____
###Markdown
###Code
# Train Phase transformations
train_transforms = transforms.Compose([
# transforms.Resize((28, 28)),
# transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,)) # The mean and std have to be sequences (e.g., tuples), therefore you should add a comma after the values.
# Note the difference between (0.1307) and (0.1307,)
])
# Test Phase transformations
test_transforms = transforms.Compose([
# transforms.Resize((28, 28)),
# transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
###Output
_____no_output_____
###Markdown
Dataset and Creating Train/Test Split
###Code
train = datasets.MNIST('./data', train=True, download=True, transform=train_transforms)
test = datasets.MNIST('./data', train=False, download=True, transform=test_transforms)
###Output
_____no_output_____
###Markdown
Dataloader Arguments & Test/Train Dataloaders
###Code
SEED = 1
# CUDA?
cuda = torch.cuda.is_available()
print("CUDA Available?", cuda)
# For reproducibility
torch.manual_seed(SEED)
if cuda:
torch.cuda.manual_seed(SEED)
# dataloader arguments - something you'll fetch these from cmdprmt
dataloader_args = dict(shuffle=True, batch_size=128, num_workers=4, pin_memory=True) if cuda else dict(shuffle=True, batch_size=64)
# train dataloader
train_loader = torch.utils.data.DataLoader(train, **dataloader_args)
# test dataloader
test_loader = torch.utils.data.DataLoader(test, **dataloader_args)
###Output
CUDA Available? True
###Markdown
Data StatisticsIt is important to know your data very well. Let's check some of the statistics around our data and how it actually looks like
###Code
# We'd need to convert it into Numpy! Remember above we have converted it into tensors already
train_data = train.train_data
train_data = train.transform(train_data.numpy())
print('[Train]')
print(' - Numpy Shape:', train.train_data.cpu().numpy().shape)
print(' - Tensor Shape:', train.train_data.size())
print(' - min:', torch.min(train_data))
print(' - max:', torch.max(train_data))
print(' - mean:', torch.mean(train_data))
print(' - std:', torch.std(train_data))
print(' - var:', torch.var(train_data))
dataiter = iter(train_loader)
images, labels = dataiter.next()
print(images.shape)
print(labels.shape)
# Let's visualize some of the images
%matplotlib inline
import matplotlib.pyplot as plt
plt.imshow(images[0].numpy().squeeze(), cmap='gray_r')
###Output
/usr/local/lib/python3.6/dist-packages/torchvision/datasets/mnist.py:55: UserWarning: train_data has been renamed data
warnings.warn("train_data has been renamed data")
###Markdown
MORE. It is important that we view as many images as possible. This is required to get some idea of image augmentation later on.
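As a preview of the kind of augmentation hinted at here, the train transform could include a small random rotation (a sketch only; the rotation angle is an illustrative choice, not something used later in this notebook):
```python
# Hypothetical augmented train transform
augmented_transforms = transforms.Compose([
    transforms.RandomRotation(7.0),   # small random rotation in degrees
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,)),
])
```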
###Code
figure = plt.figure()
num_of_images = 60
for index in range(1, num_of_images + 1):
plt.subplot(6, 10, index)
plt.axis('off')
plt.imshow(images[index].numpy().squeeze(), cmap='gray_r')
###Output
_____no_output_____
###Markdown
How did we get those mean and std values which we used above? Let's run a small experiment.
###Code
# simple transform
simple_transforms = transforms.Compose([
# transforms.Resize((28, 28)),
# transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),
transforms.ToTensor(),
# transforms.Normalize((0.1307,), (0.3081,)) # The mean and std have to be sequences (e.g., tuples), therefore you should add a comma after the values.
# Note the difference between (0.1307) and (0.1307,)
])
exp = datasets.MNIST('./data', train=True, download=True, transform=simple_transforms)
exp_data = exp.train_data
exp_data = exp.transform(exp_data.numpy())
print('[Train]')
print(' - Numpy Shape:', exp.train_data.cpu().numpy().shape)
print(' - Tensor Shape:', exp.train_data.size())
print(' - min:', torch.min(exp_data))
print(' - max:', torch.max(exp_data))
print(' - mean:', torch.mean(exp_data))
print(' - std:', torch.std(exp_data))
print(' - var:', torch.var(exp_data))
###Output
/usr/local/lib/python3.6/dist-packages/torchvision/datasets/mnist.py:55: UserWarning: train_data has been renamed data
warnings.warn("train_data has been renamed data")
###Markdown
The model. Let's start with the model we first saw.
###Code
import torch.nn.functional as F
dropout_value = 0.1
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# Input Block
self.convblock1 = nn.Sequential(
nn.Conv2d(in_channels=1, out_channels=10, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(10),
nn.Dropout(dropout_value)
) # output_size = 26
# CONVOLUTION BLOCK 1
self.convblock2 = nn.Sequential(
nn.Conv2d(in_channels=10, out_channels=20, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(20),
nn.Dropout(dropout_value)
) # output_size = 24
self.convblock3 = nn.Sequential(
nn.Conv2d(in_channels=20, out_channels=20, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(20),
nn.Dropout(dropout_value)
) # output_size = 22
# TRANSITION BLOCK 1
self.pool1 = nn.MaxPool2d(2, 2) # output_size = 11
self.convblock4 = nn.Sequential(
nn.Conv2d(in_channels=20, out_channels=10, kernel_size=(1, 1), padding=0, bias=False),
nn.ReLU()
) # output_size = 11
# CONVOLUTION BLOCK 2
self.convblock5 = nn.Sequential(
nn.Conv2d(in_channels=10, out_channels=20, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(20),
nn.Dropout(dropout_value)
) # output_size = 9
self.convblock6 = nn.Sequential(
nn.Conv2d(in_channels=20, out_channels=20, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(20),
nn.Dropout(dropout_value)
) # output_size = 7
# OUTPUT BLOCK
self.convblock7 = nn.Sequential(
nn.Conv2d(in_channels=20, out_channels=10, kernel_size=(1, 1), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(10),
nn.Dropout(dropout_value)
) # output_size = 7
self.convblock8 = nn.Sequential(
nn.Conv2d(in_channels=10, out_channels=10, kernel_size=(7, 7), padding=0, bias=False),
) # output_size = 1
def forward(self, x):
x = self.convblock1(x)
x = self.convblock2(x)
x = self.convblock3(x)
x = self.pool1(x)
x = self.convblock4(x)
x = self.convblock5(x)
x = self.convblock6(x)
x = self.convblock7(x)
x = self.convblock8(x)
x = x.view(-1, 10)
return F.log_softmax(x, dim=-1)
###Output
_____no_output_____
###Markdown
Model ParamsWe can't emphasize enough how important viewing the model summary is. Unfortunately, there is no built-in model visualizer, so we have to take external help
###Code
!pip install torchsummary
from torchsummary import summary
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
print(device)
model = Net().to(device)
summary(model, input_size=(1, 28, 28))
###Output
Requirement already satisfied: torchsummary in /usr/local/lib/python3.6/dist-packages (1.5.1)
cuda
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 10, 26, 26] 90
ReLU-2 [-1, 10, 26, 26] 0
BatchNorm2d-3 [-1, 10, 26, 26] 20
Dropout-4 [-1, 10, 26, 26] 0
Conv2d-5 [-1, 20, 24, 24] 1,800
ReLU-6 [-1, 20, 24, 24] 0
BatchNorm2d-7 [-1, 20, 24, 24] 40
Dropout-8 [-1, 20, 24, 24] 0
Conv2d-9 [-1, 20, 22, 22] 3,600
ReLU-10 [-1, 20, 22, 22] 0
BatchNorm2d-11 [-1, 20, 22, 22] 40
Dropout-12 [-1, 20, 22, 22] 0
MaxPool2d-13 [-1, 20, 11, 11] 0
Conv2d-14 [-1, 10, 11, 11] 200
ReLU-15 [-1, 10, 11, 11] 0
Conv2d-16 [-1, 20, 9, 9] 1,800
ReLU-17 [-1, 20, 9, 9] 0
BatchNorm2d-18 [-1, 20, 9, 9] 40
Dropout-19 [-1, 20, 9, 9] 0
Conv2d-20 [-1, 20, 7, 7] 3,600
ReLU-21 [-1, 20, 7, 7] 0
BatchNorm2d-22 [-1, 20, 7, 7] 40
Dropout-23 [-1, 20, 7, 7] 0
Conv2d-24 [-1, 10, 7, 7] 200
ReLU-25 [-1, 10, 7, 7] 0
BatchNorm2d-26 [-1, 10, 7, 7] 20
Dropout-27 [-1, 10, 7, 7] 0
Conv2d-28 [-1, 10, 1, 1] 4,900
================================================================
Total params: 16,390
Trainable params: 16,390
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.98
Params size (MB): 0.06
Estimated Total Size (MB): 1.05
----------------------------------------------------------------
###Markdown
Training and TestingLooking at logs can be boring, so we'll introduce a **tqdm** progress bar to get cooler logs. Let's write the train and test functions
###Code
from tqdm import tqdm
train_losses = []
test_losses = []
train_acc = []
test_acc = []
def train(model, device, train_loader, optimizer, epoch):
model.train()
pbar = tqdm(train_loader)
correct = 0
processed = 0
for batch_idx, (data, target) in enumerate(pbar):
# get samples
data, target = data.to(device), target.to(device)
# Init
optimizer.zero_grad()
    # In PyTorch, we need to set the gradients to zero before starting backpropagation because PyTorch accumulates the gradients on subsequent backward passes.
# Because of this, when you start your training loop, ideally you should zero out the gradients so that you do the parameter update correctly.
# Predict
y_pred = model(data)
# Calculate loss
loss = F.nll_loss(y_pred, target)
        train_losses.append(loss.item())  # store the scalar loss value rather than the graph-attached tensor
# Backpropagation
loss.backward()
optimizer.step()
# Update pbar-tqdm
pred = y_pred.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
processed += len(data)
pbar.set_description(desc= f'Loss={loss.item()} Batch_id={batch_idx} Accuracy={100*correct/processed:0.2f}')
train_acc.append(100*correct/processed)
def test(model, device, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
test_losses.append(test_loss)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
test_acc.append(100. * correct / len(test_loader.dataset))
###Output
_____no_output_____
###Markdown
Let's Train and test our model
###Code
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
EPOCHS = 15
for epoch in range(EPOCHS):
print("EPOCH:", epoch)
train(model, device, train_loader, optimizer, epoch)
test(model, device, test_loader)
fig, axs = plt.subplots(2,2,figsize=(15,10))
axs[0, 0].plot(train_losses)
axs[0, 0].set_title("Training Loss")
axs[1, 0].plot(train_acc)
axs[1, 0].set_title("Training Accuracy")
axs[0, 1].plot(test_losses)
axs[0, 1].set_title("Test Loss")
axs[1, 1].plot(test_acc)
axs[1, 1].set_title("Test Accuracy")
###Output
_____no_output_____ |
_notebooks/2020-12-10-dog-classifier.ipynb | ###Markdown
Classifying Labradors with fastai> "Using just a few lines of code, we can build and train a neural net to tell the difference between English and American Labrador retrievers."- toc: false- branch: master- badges: false- comments: false- categories: [neural-nets]- image: images/dogs/english.jpg- hide: false- search_exclude: true
###Code
#hide
!pip install -Uqq fastbook
import fastbook
fastbook.setup_book()
#hide
from fastbook import *
from fastai.vision.widgets import *
###Output
_____no_output_____
###Markdown
Although I've been a practicing data scientist for more than three years, deep learning remained an enigma to me. I completed [Andrew Ng's deep learning specialization](https://www.coursera.org/specializations/deep-learning) on Coursera last year but while I came away with a deeper understanding of the mathematical underpinnings of neural networks, I could not for the life of me build one myself. Enter [fastai](fast.ai). With a mission to "make neural nets uncool again", fastai hands you all the tools to build a deep learning model _today_. I've been working my way through their MOOC, [Practical Deep Learning for Coders](https://course.fast.ai/), one week at a time and reading the corresponding chapters in the [book](https://www.oreilly.com/library/view/deep-learning-for/9781492045519/). I really appreciate how the authors jump right in and ask you to get your hands dirty by building a model using their highly abstracted library (also called [fastai](https://github.com/fastai)). An education in the technical sciences too often starts with the nitty gritty fundamentals and abstruse theories and then works its way up to real-life applications. By this time, of course, most of the students have thrown their hands up in despair and dropped out of the program. The way fastai approaches teaching deep learning is to empower its students right off the bat with the ability to create a working model and then to ask you to look under the hood to understand how it operates and how we might troubleshoot or fine-tune our performance. A brief introduction to LabradorsTo follow along with the course, I decided to create a labrador retriever classifier. I have an American lab named Sydney, and I thought the differences between English and American labs might pose a bit of a challenge to a convolutional neural net since the physical variation between the two types of dog can often be subtle. Some historyAt the beginning of the 20th century, all labs looked similar to American labs. They were working dogs and needed to be agile and athletic. Around the 1940's, dog shows became popular, and breeders began selecting labrador retrievers based on appearance, eventually resulting in what we call the English lab. English labs in England are actually called "show" or "bench" labs, while American labs over the pond are referred to as working Labradors.Nowadays, English labs are more commonly kept as pets while American labs are still popular with hunters and outdoorsmen. Physical differencesEnglish labs tend to be shorter in height and wider in girth. They have shorter snouts and thicker coats. American labs by contrast are taller and thinner with a longer snout. English American These differences may not be stark as both are still Labrador Retrievers and are not bred to a standard. Gathering dataFirst we need images of both American and English labs on which to train our model. The fastai course leverages the Bing Image Search API through MS Azure. The code below shows how I downloaded 150 images each of English and American labrador retrievers and stored them in respective directories.
###Code
#hide
subscription_key = "54ae23fb26514eaf8236f5bd89a96f07"
#hide
# !rm -r dogs/
path = Path('/storage/dogs')
subscription_key = "" # key obtained through MS Azure
search_url = "https://api.bing.microsoft.com/v7.0/images/search"
headers = {"Ocp-Apim-Subscription-Key" : subscription_key}
names = ['english', 'american']
if not path.exists():
path.mkdir()
for o in names:
dest = (path/o)
dest.mkdir(exist_ok=True)
params = {
"q": '{} labrador retriever'.format(o),
"license": "public",
"imageType": "photo",
"count":"150"
}
response = requests.get(search_url, headers=headers, params=params)
response.raise_for_status()
search_results = response.json()
img_urls = [img['contentUrl'] for img in search_results["value"]]
download_images(dest, urls=img_urls)
#hide
path = Path('../../storage/dogs')
fns = get_image_files(path)
fns
#hide
from collections import Counter
fns = get_image_files(path)
fns_subdirs = [p.parts[1] for p in fns]
counts = Counter(fns_subdirs)
counts
#hide
# remove all paths with fewer than 50 instances
insufficient_data_paths = [k for k, v in counts.items() if v < 150]
insufficient_data_paths
#hide
# how many left?
len([x for x in names if x not in insufficient_data_paths])
#hide
for name in insufficient_data_paths:
shutil.rmtree(path/name)
###Output
_____no_output_____
###Markdown
Let's check if any of these files are corrupt.
###Code
fns_updated = get_image_files(path)
failed = verify_images(fns_updated)
failed
#hide
from collections import Counter
failed_subdirs = [p.parts[1] for p in failed]
Counter(failed_subdirs)
###Output
_____no_output_____
###Markdown
We'll remove that corrupt file from our images.
###Code
failed.map(Path.unlink);
###Output
_____no_output_____
###Markdown
First model attemptI create a function to process the data using a fastai class called `DataBlock`, which does the following:* Defines the independent data as an ImageBlock and the dependent data as a CategoryBlock* Retrieves the data using a fastai function `get_image_files` from a given path* Splits the data randomly into a 20% validation set and 80% training set* Attaches the directory name ("english", "american") as the image labels* Crops the images to a uniform 224 pixels by randomly selecting certain 224 pixel areas of each image, ensuring a minimum of 50% of the image is included in the crop. This random cropping repeats for each epoch to capture different pieces of the image.
###Code
def process_dog_data(path):
dogs = DataBlock(
blocks=(ImageBlock, CategoryBlock),
get_items=get_image_files,
splitter=RandomSplitter(valid_pct=0.2, seed=44),
get_y=parent_label,
item_tfms=RandomResizedCrop(224, min_scale=0.5)
)
    return dogs.dataloaders(path)
###Output
_____no_output_____
###Markdown
The item transformation (`RandomResizedCrop`) is an important design consideration. We want to use as much of the image as possible while ensuring a uniform size for processing. But in the process of naive cropping, we may be omitting pieces of the image that are important for classification (ex. the dog's snout). Padding the image may help but wastes computation for the model and decreases resolution on the useful parts of the image.Another approach of resizing the image (instead of cropping) results in distortions, which is especially problematic for our use case as the main differences between English and American labs is in their proportions. Therefore, we settle on the random cropping approach as a compromise. This strategy also acts as a data augmentation technique by providing different "views" of the same dog to the model.Now we "fine-tune" ResNet-18, which replaces the last layer of the original ResNet-18 with a new random head and uses one epoch to fit this new model on our data. Then we _fit_ this new model for the number of epochs requested (in our case, 4), updating the weights of the later layers faster than the earlier ones.
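As a rough sketch (assuming the standard fastai v2 API), `learn.fine_tune(4)` expands to something like the snippet below; the real implementation picks learning rates and schedules more carefully, so treat this as illustration only.

```python
# Illustrative only: approximate manual equivalent of learn.fine_tune(4)
learn.freeze()                                     # train only the new random head
learn.fit_one_cycle(1)                             # one epoch to fit the head on our data
learn.unfreeze()                                   # then make all layers trainable
learn.fit_one_cycle(4, lr_max=slice(1e-5, 1e-3))   # earlier layers get smaller learning rates
```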
###Code
dls = process_dog_data(path)
learn = cnn_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(4)
###Output
_____no_output_____
###Markdown
These numbers are not exactly ideal. While training and validation loss mainly decrease, our error rate is actually _increasing_.
###Code
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix(figsize=(5,5))
###Output
_____no_output_____
###Markdown
The confusion matrix shows poor performance, especially on American labs. We can take a closer look at our data using fastai's `ImageClassifierCleaner` tool, which displays the images with the highest loss for both training and validation sets. We can then decide whether to delete these images or move them between classes.
###Code
#hide
interp.plot_top_losses(5, nrows=1)
cleaner = ImageClassifierCleaner(learn)
cleaner
###Output
_____no_output_____
###Markdown
We definitely have a data quality problem here as we can see that the fifth photo from the left is a German shepherd and the fourth photo (and possibly the second) is a golden retriever. We can tag these kinds of images for removal and retrain our model.
###Code
#hide
for idx in cleaner.delete(): cleaner.fns[idx].unlink()
for idx,cat in cleaner.change(): shutil.move(str(cleaner.fns[idx]), path/cat)
###Output
_____no_output_____
###Markdown
After data cleaningNow I've gone through and removed 49 images from the original 300 that were not the correct classifications of American or English labs. Let's see how this culling has affected performance.
###Code
dls = process_dog_data(path)
learn = cnn_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(4)
###Output
_____no_output_____
###Markdown
Already we see improvement in that our error rate is finally _decreasing_ for each epoch, although our validation loss increases.
###Code
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix(figsize=(5,5))
###Output
_____no_output_____
###Markdown
This confusion matrix shows much better classification for both American and English labs.Now let's see how this model performs on a photo of my own dog. Using the Model for Inference I'll upload a photo of my dog Sydney.
###Code
btn_upload = widgets.FileUpload()
btn_upload
###Output
_____no_output_____
###Markdown
###Code
img = PILImage.create(btn_upload.data[-1])
out_pl = widgets.Output()
out_pl.clear_output()
with out_pl: display(img.rotate(270).to_thumb(128,128))
out_pl
###Output
_____no_output_____
###Markdown
This picture shows her elongated snout and sleeker body, trademarks of an American lab.
###Code
pred,pred_idx,probs = learn.predict(img)
lbl_pred = widgets.Label()
lbl_pred.value = f'Prediction: {pred}; Probability: {probs[pred_idx]:.04f}'
lbl_pred
###Output
_____no_output_____ |
notebooks/demo_Glide.ipynb | ###Markdown
How to run this notebook? Install the DockStream environment: conda env create -f environment.yml in the DockStream directory Activate the environment: conda activate DockStreamCommunity Execute jupyter: jupyter notebook Copy the link to a browser Update variables dockstream_path and dockstream_env (the path to the environment DockStream) in the first code block below `Glide` backend demoThis notebook will demonstrate how to **(a)** set up a `Glide` backend run with `DockStream`, including the most important settings and **(b)** how to set up a `REINVENT` run with `Glide` docking enabled as one of the scoring function components.**Steps:*** a: Set up `DockStream` run 1. Prepare the receptor / grid with `Maestro` 2. Prepare the input: SMILES and configuration file (JSON format) 3. Execute the docking and parse the results* b: Set up `REINVENT` run with a `DockStream` component 1. Prepare the receptor (see *a.1*) 2. Prepare the input (see *a.2*) 3. Prepare the `REINVENT` configuration (JSON format) 4. Execute `REINVENT`If a `Schrodinger` license is available, `DockStream` can make use of `Glide`'s docking capabilities. However, you need to be able to source the _SCHRODINGER_ environment variable in your console. __Note:__ Make sure, you have activated the `DockStream` environment before launching this notebook. By default, this notebook will deposit all files created into `~/Desktop/Glide_demo`.The following imports / loadings are only necessary when executing this notebook. If you want to use `DockStream` directly from the command-line, it is enough to execute the following with the appropriate configurations:```conda activate DockStreampython /path/to/DockStream/docker.py -conf docking.json```
###Code
import os
import json
import tempfile
import seaborn as sns
# update these paths to reflect your system's configuration
dockstream_path = os.path.expanduser("~/Desktop/ProjectData/DockStream")
dockstream_env = os.path.expanduser("~/miniconda3/envs/DockStream")
# no changes are necessary beyond this point
# ---------
# get the notebook's root path
try: ipynb_path
except NameError: ipynb_path = os.getcwd()
# generate the paths to the entry points
docker = dockstream_path + "/docker.py"
# generate a folder to store the results
output_dir = os.path.expanduser("~/Desktop/Glide_demo")
try:
os.mkdir(output_dir)
except FileExistsError:
pass
# generate the paths to the files shipped with this implementation
grid_file_path = ipynb_path + "/../data/Glide/1UYD_grid.zip"
smiles_path = ipynb_path + "/../data/1UYD/ligands_smiles.txt"
# generate output paths for the configuration file, embedded ligands, the docked ligands and the scores
docking_path = output_dir + "/Glide_docking.json"
ligands_conformers_path = output_dir + "/ligprep_embedded_ligands.sdf"
ligands_docked_path = output_dir + "/Glide_docked_ligands.sdf"
ligands_scores_path = output_dir + "/Glide_scores.csv"
###Output
_____no_output_____
###Markdown
Target preparationAs of yet, `DockStream` does not have a specific target preparator for `Glide`. While this would be possible, feedback from users indicate that they are preferring to set the receptor up with `Maestro` (`Schrodinger`'s GUI), which results in a `zip` file containing all structural and force-field information. `Maestro` has extensive fixing capabilities and a wide variety of additional options. Below are a few screenshots from the "2019-4" release, showing how a target grid could be prepared. To reproduce them, download `1UYD` and load it into `Maestro`:Next, load the protein preparation wizard and fix the most severe issues with your input structure.Once the protein structure is fixed, we need to generate the grid for docking. Start the grid generation assistant and select the reference ligand.You might want to change some settings, e.g. increasing the space around the reference ligand to be considered. Note, that you should set the write-out folder by clicking on "Job settings".Finally, start the run (and be patient, this will take some time). You should see a "glide-grid_1.zip" in the output folder specified. One example archive is also shipped in this `DockStream` installation and will be used below. Warning: If you plan to use your grid file with DockStream, do not specify any rotatable bonds in the receptor (e.g. -OH groups). These are incompatible with Glide's output option "ligandlib_sd", as they (slightly) change the receptor configuration. DockingIn this section we consider a case where we have just prepared the receptor and want to dock a bunch of ligands (molecules, compounds) into the binding cleft. Often, we only have the structure of the molecules in the form of `SMILES`, rather than a 3D structure so the first step will be to generate these conformers before proceeding ("embedding"). In `DockStream`, you can embed your ligands with a variety of programs including `Corina`, `RDKit`, `OMEGA`, and `LigPrep` and use them freely with any backend. Here, we will use `LigPrep` for the conformer embedding.
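For illustration only, the idea behind "embedding" (turning a SMILES string into a 3D conformer) can be sketched with `RDKit`; this is not what `LigPrep` does internally, and the notebook itself relies on the `DockStream` configuration further below.

```python
# Illustrative sketch of conformer embedding with RDKit (not used by this notebook)
from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.MolFromSmiles('CCCCn1c(Cc2cccc(OC)c2)nc2c(N)ncnc21')  # one of the ligands used below
mol = Chem.AddHs(mol)                         # add explicit hydrogens before 3D embedding
AllChem.EmbedMolecule(mol, randomSeed=42)     # generate a 3D conformer
AllChem.MMFFOptimizeMolecule(mol)             # quick force-field relaxation
print(Chem.MolToMolBlock(mol)[:300])          # the molecule now carries 3D coordinates
```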
###Code
# load the smiles (just for illustrative purposes)
# here, 15 moleucles will be used
with open(smiles_path, 'r') as f:
smiles = [smile.strip() for smile in f.readlines()]
print(smiles)
###Output
['C#CCCCn1c(Cc2cc(OC)c(OC)c(OC)c2Cl)nc2c(N)ncnc21', 'CCCCn1c(Cc2cc(OC)c(OC)c(OC)c2)nc2c(N)ncnc21', 'CCCCn1c(Cc2cc(OC)ccc2OC)nc2c(N)ncnc21', 'CCCCn1c(Cc2cccc(OC)c2)nc2c(N)ncnc21', 'C#CCCCn1c(Cc2cc(OC)c(OC)c(OC)c2Cl)nc2c(N)nc(F)nc21', 'CCCCn1c(Cc2ccc(OC)cc2)nc2c(N)ncnc21', 'CCCCn1c(Cc2ccc3c(c2)OCO3)nc2c(N)ncnc21', 'CCCCn1c(Cc2cc(OC)ccc2OC)nc2c(N)nc(F)nc21', 'CCCCn1c(Cc2ccc3c(c2)OCO3)nc2c(N)nc(F)nc21', 'C#CCCCn1c(Cc2cc(OC)ccc2OC)nc2c(N)nc(F)nc21', 'CC(C)NCCCn1c(Cc2cc3c(cc2I)OCO3)nc2c(N)nc(F)nc21', 'CC(C)NCCCn1c(Sc2cc3c(cc2Br)OCO3)nc2c(N)ncnc21', 'CC(C)NCCCn1c(Sc2cc3c(cc2I)OCO3)nc2c(N)ncnc21', 'COc1ccc(OC)c(Cc2nc3nc(F)nc(N)c3[nH]2)c1', 'Nc1nccn2c(NCc3ccccc3)c(Cc3cc4c(cc3Br)OCO4)nc12']
###Markdown
While the embedding and docking tasks in `DockStream` are both specified in the same configuration file, they are handled independently. This means it is perfectly fine to either load conformers (from an `SDF` file) directly or to use a call of `docker.py` merely to generate conformers without doing the docking afterwards.`DockStream` uses the notion of (embedding) "pool"s, of which multiple can be specified and accessed via identifiers. Note, that while the way conformers are generated is highly backend specific, `DockStream` allows you to use the results interchangeably. This allows you to (a) re-use embedded molecules for multiple docking runs (e.g. different scoring functions), without the necessity to embed them more than once, and (b) to combine embeddings and docking backends freely. Below is also a simple definition of a docking run, note that for `Glide` you might want to do a run from within `Maestro` first and then use the generated `.in` file to set the `glide_keywords` (minimally, you need to specify `PRECISION` and the path to the receptor grid file).
###Code
# specify the embedding and docking JSON file as a dictionary and write it out
ed_dict = {
"docking": {
"header": { # general settings
"environment": {
}
},
"ligand_preparation": { # the ligand preparation part, defines how to build the pool
"embedding_pools": [
{
"pool_id": "Ligprep_pool",
"type": "Ligprep",
"parameters": {
"prefix_execution": "module load schrodinger/2019-4",
"parallelization": {
"number_cores": 2
},
"use_epik": {
"target_pH": 7.0,
"pH_tolerance": 2.0
},
"force_field": "OPLS3e"
},
"input": {
"standardize_smiles": False,
"input_path": smiles_path,
"type": "smi" # expected input is a text file with smiles
},
"output": { # the conformers can be written to a file, but "output" is
# not required as the ligands are forwarded internally
"conformer_path": ligands_conformers_path,
"format": "sdf"
}
}
]
},
"docking_runs": [
{
"backend": "Glide",
"run_id": "Glide_run",
"input_pools": ["Ligprep_pool"],
"parameters": {
"prefix_execution": "module load schrodinger/2019-4", # will be executed before a program call
"parallelization": { # if present, the number of cores to be used
# can be specified
"number_cores": 2
},
"glide_flags": { # all all command-line flags for Glide here
"-HOST": "localhost"
},
"glide_keywords": { # add all keywords for the "input.in" file here
"AMIDE_MODE": "trans",
"EXPANDED_SAMPLING": "True",
"GRIDFILE": grid_file_path,
"NENHANCED_SAMPLING": "2",
"POSE_OUTTYPE": "ligandlib_sd",
"POSES_PER_LIG": "3",
"POSTDOCK_NPOSE": "15",
"POSTDOCKSTRAIN": "True",
"PRECISION": "HTVS",
"REWARD_INTRA_HBONDS": "True"
}
},
"output": {
"poses": { "poses_path": ligands_docked_path },
"scores": { "scores_path": ligands_scores_path }
}
}]}}
with open(docking_path, 'w') as f:
json.dump(ed_dict, f, indent=2)
###Output
_____no_output_____
###Markdown
Tautomer enumeration / Protonation statesAnother option available is to use different tautomers / protonation states for your ligands. Depending on the backend used, but especially for `Glide`, the exact protonation of the ligands matters a lot. `LigPrep`'s `Epik` can achieve an enumeration, producing all states reasonable at the specified pH. `DockStream`'s internal ligand numbering scheme works as follows: any ligand gets an increasing number (starting with 0), the `ligand number`. Separated by a ':', every enumeration gets another number, e.g. "1:2" would be the second ligand's third enumeration. After docking, a third number is added for the pose, e.g. "2:2:1" in the final SDF output file would indicate the third ligand's third enumeration, docking pose two. Adding constraints and features to the `Glide` run`Glide` allows you to set a large number of settings (see the list at the very end of the notebook). Apart from settings on `EXAMPLE_SAMPLING` and `NENHANCED_SAMPLING` (value of 1 to 4), which have proven to be useful, you may also want to set constraints and features. Note, that all parameters have to be given as strings. Here is an example:```... "glide_keywords": { "AMIDE_MODE": "trans", "EXPANDED_SAMPLING": "True", "GRIDFILE": "", "NENHANCED_SAMPLING": "2", "POSE_OUTTYPE": "ligandlib_sd", "POSES_PER_LIG": "3", "POSTDOCK_NPOSE": "15", "POSTDOCKSTRAIN": "True", "PRECISION": "HTVS", "REWARD_INTRA_HBONDS": "True" }, "[CONSTRAINT_GROUP:1]": { "USE_CONS": "A:ALA:72:H(hbond):1,", "NREQUIRED_CONS": "ALL" }, "[FEATURE:1]": { "PATTERN1": "[N]C 1 include", "PATTERN2": "[n] 1 include", "PATTERN3": "N(=N=N) 1 include", "PATTERN4": "N(=N)=N 1 include" },...``` Token guardFor the usage of `Glide`, tokens managed by a central licensing server will be consumed for the duration of the run. It might happen that you run out of tokens and the docking will fail for that reason. To ensure protection against that scenario, you can specify a "token guard", which will halt the submission of the actual docking subjobs until enough tokens are available. For this to take effect the specification below has to be added to the `parameters` block for the docking run. Note, that you have to specify the token pool you want to protect against (e.g. "GLIDE_HTVS_DOCKING" for HTVS runs and "GLIDE_SP_DOCKING" for SP runs). You may have any number of token protections activated at a given time. The number next to it specifies the minimum number of tokens for a given pool which need to be available to proceed. Note, that any process will consume 4 tokens, so if you e.g. specify to use parallelization with 8 cores, you will need to request access to at least 32 tokens. The availability will be checked every `wait_interval_seconds` for a maximum duration of `wait_limit_seconds` (note that a value of 0 for the latter means "unlimited").```... "token_guard": { "prefix_execution": "module load schrodinger/2019-4", "token_pools": { "GLIDE_HTVS_DOCKING": 32 }, "wait_interval_seconds": 30, "wait_limit_seconds": 1200 },...``` Executing `DockStream`The following command executes `DockStream` using the the configuration file we generated earlier. Note, that you might want to use the `-debug` flag when calling `DockStream` to get more comprehesive logging output. Logging and error messages will be saved in a file called `dockstream_run.log` (you will have to delete it manually if you restart a job).
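As a small illustration (not part of `DockStream` itself), a pose name from the output can be decomposed back into these three indices:

```python
# Illustrative helper: split a DockStream pose name into its indices
name = "2:2:1"  # third ligand, third enumeration, second pose (all indices start at 0)
ligand, enumeration, pose = (int(i) for i in name.split(":"))
print(ligand, enumeration, pose)  # 2 2 1
```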
###Code
# execute this in a command-line environment after replacing the parameters
!{dockstream_env}/bin/python {docker} -conf {docking_path} -print_scores
###Output
-6.12214
-4.83051
-5.01639
-6.1192
-6.20097
-6.8325
-7.59449
-4.96581
-7.68445
NA
-8.47572
-8.99103
-6.66036
-6.42559
-8.39076
###Markdown
Note, that the scores are usually outputted to a `CSV` file specified by the `scores` block, but that since we have used parameter `-print_scores` they will also be printed to `stdout` (line-by-line). Also note, that ligands that could not be docked successfully will result in `NA` values (for all backends, but `Glide` is particularly prone to that).These scores are associated with docking poses (see picture below for a couple of ligands overlaid in the binding pocket).
###Code
# show glimpse into contents of output CSV
!head -n 10 {ligands_scores_path}
###Output
ligand_number,enumeration,conformer_number,name,score,smiles,lowest_conformer
0,0,0,0:0:0,-6.12214,[H]C#CC([H])([H])C([H])([H])C([H])([H])n1c(C([H])([H])c2c([H])c(OC([H])([H])[H])c(OC([H])([H])[H])c(OC([H])([H])[H])c2Cl)nc2c(N([H])[H])nc([H])nc21,True
1,0,0,1:0:0,-4.83051,[H]c1nc(N([H])[H])c2nc(C([H])([H])c3c([H])c(OC([H])([H])[H])c(OC([H])([H])[H])c(OC([H])([H])[H])c3[H])n(C([H])([H])C([H])([H])C([H])([H])C([H])([H])[H])c2n1,True
1,0,1,1:0:1,-3.9485,[H]c1nc(N([H])[H])c2nc(C([H])([H])c3c([H])c(OC([H])([H])[H])c(OC([H])([H])[H])c(OC([H])([H])[H])c3[H])n(C([H])([H])C([H])([H])C([H])([H])C([H])([H])[H])c2n1,False
1,0,2,1:0:2,-3.81299,[H]c1nc(N([H])[H])c2nc(C([H])([H])c3c([H])c(OC([H])([H])[H])c(OC([H])([H])[H])c(OC([H])([H])[H])c3[H])n(C([H])([H])C([H])([H])C([H])([H])C([H])([H])[H])c2n1,False
2,0,0,2:0:0,-5.01639,[H]c1nc(N([H])[H])c2nc(C([H])([H])c3c([H])c(OC([H])([H])[H])c([H])c([H])c3OC([H])([H])[H])n(C([H])([H])C([H])([H])C([H])([H])C([H])([H])[H])c2n1,True
3,0,0,3:0:0,-6.1192,[H]c1nc(N([H])[H])c2nc(C([H])([H])c3c([H])c([H])c([H])c(OC([H])([H])[H])c3[H])n(C([H])([H])C([H])([H])C([H])([H])C([H])([H])[H])c2n1,True
4,0,0,4:0:0,-6.20097,[H]C#CC([H])([H])C([H])([H])C([H])([H])n1c(C([H])([H])c2c([H])c(OC([H])([H])[H])c(OC([H])([H])[H])c(OC([H])([H])[H])c2Cl)nc2c(N([H])[H])nc(F)nc21,True
4,0,1,4:0:1,-3.02771,[H]C#CC([H])([H])C([H])([H])C([H])([H])n1c(C([H])([H])c2c([H])c(OC([H])([H])[H])c(OC([H])([H])[H])c(OC([H])([H])[H])c2Cl)nc2c(N([H])[H])nc(F)nc21,False
5,0,0,5:0:0,-6.8325,[H]c1nc(N([H])[H])c2nc(C([H])([H])c3c([H])c([H])c(OC([H])([H])[H])c([H])c3[H])n(C([H])([H])C([H])([H])C([H])([H])C([H])([H])[H])c2n1,True
###Markdown
Using `DockStream` as a scoring component in `REINVENT`The *de novo* design platform `REINVENT` holds a recently added `DockStream` scoring function component (also check out our collection of notebooks in the [ReinventCommunity](https://github.com/MolecularAI/ReinventCommunity) repository). This means, provided that all necessary input files and configurations are available, you may run `REINVENT` and incorporate docking scores into the score of the compounds generated. Together with `FastROCS`, this represents the first step to integrate physico-chemical 3D information.While the docking scores are a very crude proxy for the actual binding affinity (at best), it does prove useful as a *geometric filter* (removing ligands that obviously do not fit the binding cavity). Furthermore, a severe limitation of knowledge-based predictions e.g. in activity models is the domain applicability. Docking, as a chemical space agnostic component, can enhance the ability of the agent for scaffold-hopping, i.e. to explore novel sub-areas in the chemical space. The `REINVENT` configuration JSONWhile every docking backend has its own configuration (see section above), calling `DockStream`'s `docker.py` entry point ensures, that they all follow the same external API. Thus the component that needs to be added to `REINVENT`'s JSON configuration (to the `scoring_function`->`parameters` list) looks as follows (irrespective of the backend):```{ "component_type": "dockstream", "name": "dockstream", "weight": 1, "specific_parameters": { "transformation": { "transformation_type": "reverse_sigmoid", "low": -20, "high": -5, "k": 0.2 }, "configuration_path": "/docking.json", "docker_script_path": "/docker.py", "environment_path": "/envs/DockStream/bin/python" }}```You will need to update `configuration_path`, `docker_script_path` and the link to the environment, `environment_path` to match your system's configuration. It might be, that the latter two are already set to meaningful defaults, but your `DockStream` configuration JSON file will be specific for each run. In the example above, we have set `debug` to `false`. Setting that flag to `true`, which will cause `DockStream` to write out a much more comprehensive logging output which is recommended for testing a setup initially. How to find an appropriate transformation?We use a *reverse sigmoid* score transformation to bring the numeric, continuous value that was outputted by `DockStream` and fed back to `REINVENT` into a 0 to 1 regime. The parameters `low`, `high` and `k` are critical: their exact value naturally depends on the backend used, but also on the scoring function (make sure, "more negative is better" - otherwise you are looking for a *sigmoid* transformation) and potentially also the project used. The values reported here can be used as rule-of-thumb for a `Glide` run. Below is a code snippet, that helps to find the appropriate parameters (excerpt of the `ReinventCommunity` notebook `Score_Transformations`).
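For intuition, a stand-alone reverse sigmoid with the `low`, `high` and `k` parameters could look like the snippet below; this is an illustrative parameterisation only, the authoritative transformation is the one created by the factory in the code that follows.

```python
import numpy as np

def reverse_sigmoid(x, low=-20.0, high=-5.0, k=0.2):
    # illustrative: more negative docking scores map to rewards closer to 1
    x = np.asarray(x, dtype=float)
    return 1.0 / (1.0 + 10.0 ** (k * 10.0 / (high - low) * (x - (high + low) / 2.0)))

print(reverse_sigmoid([-25.0, -12.5, -4.0]))  # approximately [0.98, 0.5, 0.07]
```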
###Code
# load the dependencies and classes used
%run code/score_transformation.py
# set plotting parameters
small = 12
med = 16
large = 22
params = {"axes.titlesize": large,
"legend.fontsize": med,
"figure.figsize": (16, 10),
"axes.labelsize": med,
"axes.titlesize": med,
"xtick.labelsize": med,
"ytick.labelsize": med,
"figure.titlesize": large}
plt.rcParams.update(params)
plt.style.use("seaborn-whitegrid")
sns.set_style("white")
%matplotlib inline
# set up Enums and factory
tt_enum = TransformationTypeEnum()
csp_enum = ComponentSpecificParametersEnum()
factory = TransformationFactory()
# reverse sigmoid transformation
# ---------
values_list = np.arange(-30, 20, 0.25).tolist()
specific_parameters = {csp_enum.TRANSFORMATION: True,
csp_enum.LOW: -20,
csp_enum.HIGH: -5,
csp_enum.K: 0.2,
csp_enum.TRANSFORMATION_TYPE: tt_enum.REVERSE_SIGMOID}
transform_function = factory.get_transformation_function(specific_parameters)
transformed_scores = transform_function(predictions=values_list,
parameters=specific_parameters)
# render the curve
render_curve(title="Reverse Sigmoid Transformation", x=values_list, y=transformed_scores)
###Output
_____no_output_____ |
Lending_club.ipynb | ###Markdown
Data anlysis on predicting loan defaulters
###Code
#importing necessary libraries
import numpy as np
import pandas as pd
from datetime import datetime
import matplotlib.pyplot as plt
import matplotlib.ticker as mtick
import seaborn as sns
###Output
_____no_output_____
###Markdown
Data Sourcing
###Code
#reading the available dataframe for analysis
df = pd.read_csv('loan.csv',encoding = "palmos", sep=",", dtype='unicode')
#printing first 5 rows of the dataframe
df.head()
###Output
_____no_output_____
###Markdown
Data Cleansing
###Code
#replacing all the 0 with nan
df = df.replace(0, np.nan)
#finding % missing values in the data
round(100*(df.isnull().sum()/len(df.index)), 2)
# creating a function to drop all columns whose fraction of missing values is at or above a given threshold
def deletenan(dataframe, axis =1, percent=0.3):
df = dataframe.copy()
ishape = df.shape
if axis == 1:
column = (df.isnull().sum()/len(df))
column = list(column[column.values>=percent].index)
df.drop(labels = column,axis =1,inplace=True)
return df
# dropping columns with 30% or more missing values
df = deletenan(df, axis =1,percent = 0.3)
df.shape
# finding the total number of null values in each column
df.isnull().sum()
# finding and removing columns with constant values as they won't be required for the analysis
unique = df.nunique()
unique = unique[unique.values == 1]
df.drop(labels = list(unique.index), axis =1, inplace=True)
df.shape
# extracting the numeric part from the emp_length and term columns
df['emp_length']=df.emp_length.str.extract('(\d+)')
df['term']=df.term.str.extract('(\d+)')
# removing the % sign from int_rate and extracting the numeric part
df['int_rate'] =df.int_rate.apply(lambda x:x.replace('%',''))
df['int_rate']=df.int_rate.str.extract('(\d+)')
# removing rows with loan status 'Current' as they won't be useful in the analysis
df = df[df.loan_status != 'Current']
#printing first five rows after cleaning
df.head()
# removing columns not required for the analysis
df = df.drop(['funded_amnt','total_pymnt','total_rec_prncp','issue_d','total_pymnt_inv','total_rec_int','last_pymnt_d','last_credit_pull_d','total_acc','earliest_cr_line','emp_title','verification_status','url','title','zip_code','delinq_2yrs','inq_last_6mths','revol_bal','recoveries','collection_recovery_fee','collection_recovery_fee','last_pymnt_amnt','pub_rec','out_prncp','out_prncp_inv','total_rec_late_fee','pub_rec_bankruptcies','revol_util'],axis = 1)
df.head()
# finding important summary statistics of the columns for the analysis
df.describe()
df['loan_status'].value_counts()
###Output
_____no_output_____
###Markdown
Standardizing Values
###Code
# encoding loan_status numerically
df['loan_status'] = df.loan_status.apply(lambda x: 0 if x=='Fully Paid' else 1)
#here 0 represents fully paid
# 1 represents charged off
#converting necessary columns to numeric
numeric_data = ['loan_amnt','funded_amnt_inv','int_rate','annual_inc','dti']
df[numeric_data] = df[numeric_data].apply(pd.to_numeric)
#creating bins
#loan_amnt
loan_amntbinR=[0,5000,10000,15000,20000,25000,30000,35000]
loan_amntbinL=['5','10','15','20','25','30','35']
df['loan_amntbin'] = pd.cut(df['loan_amnt'], bins=loan_amntbinR, labels=loan_amntbinL)
#int_rate
int_ratebinR=[0,5,10,15,20,25]
int_ratebinL=['very Low Rate','low rate','Medium Rate','High Rate','Very High Rate']
df['int_ratebin'] = pd.cut(df['int_rate'], bins=int_ratebinR, labels=int_ratebinL)
#annual_inc
annual_incbinR=[0,20000,40000,60000,80000,100000]
annual_incbinL=['poor','lower middle','upper middle','Rich','very rich']
df['annual_incbin'] = pd.cut(df['annual_inc'], bins=annual_incbinR, labels=annual_incbinL)
#dti
dtibinR=[0,5,10,15,20,25,30]
dtibinL=['very good','good','high average','low average','poor','very poor']
df['dtibin'] = pd.cut(df['dti'], bins=dtibinR, labels=dtibinL)
df.head()
###Output
_____no_output_____
###Markdown
Analysis
###Code
#finding the total default rate i.e chargedoff/total
print("The total defult rate is ",round(df.loan_status.mean()*100,2),'%' )
# dataframe having data only for chargedoff rows
chdf=df[df['loan_status']==1]
#pivot table to find mean of loan_amount for each term
df.pivot_table(values = 'loan_amnt', index = 'term', aggfunc = 'mean')
#pivot table to find sum of loan_amnt for each grade type
df.pivot_table(values = 'loan_amnt', index = 'grade', aggfunc = 'sum')
#pivot table to find sum of loan amount for each Purpose type
df.pivot_table(values='loan_amnt',index='purpose', aggfunc='sum')
# creating a dataframe for the correlation matrix
relation = df[['loan_amnt','funded_amnt_inv','int_rate','annual_inc','dti']]
cor = relation.corr()
cor
###Output
_____no_output_____
###Markdown
Analysis Plots Count plots represent univariate analysis Bar plots, heatmap and clustermap represent bivariate analysis Plot for loan status
###Code
#plot representing value count for loan status
sns.countplot(df.loan_status)
plt.title('Counts of Loan-Status')
plt.show()
###Output
_____no_output_____
###Markdown
Plots for Term and Annual Income
###Code
df1 = df.groupby(['term', 'annual_incbin']).size().groupby(level =0).apply(lambda y: 100*y / y.sum()).unstack().plot(kind='bar', stacked=True, figsize=(10, 5))
plt.title('Relation Between Term and Annual_income')
plt.gca().yaxis.set_major_formatter(mtick.PercentFormatter())
###Output
_____no_output_____
###Markdown
Most low-income borrowers have a loan term of 36 months Plots for Annual Income
###Code
#plot showing value count for annual income
plt.figure(figsize=(10, 5))
plt.title('Total Loan Counts based on Annual Income')
sns.countplot(df.annual_incbin)
plt.show()
# plot for annual_incbin vs loan status
df2 = df.groupby(['annual_incbin', 'loan_status']).size().groupby(level =0).apply(lambda y: 100*y / y.sum()).unstack().plot(kind='bar', stacked=True, figsize=(12, 6))
plt.title('Relation Between Annual Income and Loan_status')
plt.gca().yaxis.set_major_formatter(mtick.PercentFormatter())
###Output
_____no_output_____
###Markdown
Low-income borrowers have a higher chance of defaulting on their loans Plots for Term
###Code
#plot representing values counts for each term
sns.countplot(chdf.term)
plt.title('Counts of Term')
plt.show()
# plot for term vs loan status
df3 = df.groupby(['term', 'loan_status']).size().groupby(level =0).apply(lambda y: 100*y / y.sum()).unstack().plot(kind='bar', stacked=True, figsize=(10, 5))
plt.title('Relation Between Term and Loan_status')
plt.gca().yaxis.set_major_formatter(mtick.PercentFormatter())
###Output
_____no_output_____
###Markdown
The 36-month term has the highest number of defaulters, since most low-income borrowers have that term Plots for Grade
###Code
#plot representing value count for each grade type
sns.countplot(df.grade)
plt.title('Grade Type')
plt.show()
# plot for grade vs loan status
df4 = df.groupby(['grade', 'loan_status']).size().groupby(level =0).apply(lambda y: 100*y / y.sum()).unstack().plot(kind='bar', stacked=True, figsize=(15, 8))
plt.title('Relation Between Grades and Loan_status')
plt.gca().yaxis.set_major_formatter(mtick.PercentFormatter())
###Output
_____no_output_____
###Markdown
Grade G has the highest percentage of loan defaulters Plots for purpose
###Code
#plot for value count of each purpose
plt.figure(figsize=(25, 10))
plt.title('Count for each Purpose')
sns.countplot(df.purpose)
plt.show()
# plot for purpose vs loan status
df5 = df.groupby(['purpose', 'loan_status']).size().groupby(level =0).apply(lambda y: 100*y / y.sum()).unstack().plot(kind='bar', stacked=True, figsize=(15, 10))
plt.title('Relation Between Purposes and Loan_status')
plt.gca().yaxis.set_major_formatter(mtick.PercentFormatter())
###Output
_____no_output_____
###Markdown
The small business purpose has the highest percentage of loan defaulters Plots for Sub_grade
###Code
#plot showing value count for each sub_grade
plt.figure(figsize=(15, 5))
plt.title('Total amount of values for each Sub-Grade')
sns.countplot(df.sub_grade)
plt.show()
# plot for subgrade vs loan status
df6 = df.groupby(['sub_grade', 'loan_status']).size().groupby(level =0).apply(lambda y: 100*y / y.sum()).unstack().plot(kind='bar', stacked=True, figsize=(15, 10))
plt.title('Relation Between Sub-Grades and Loan_status')
plt.gca().yaxis.set_major_formatter(mtick.PercentFormatter())
###Output
_____no_output_____
###Markdown
Sub-grade F5 has the highest number of loan defaulters Plots for open accounts
###Code
#plot showing value count for each open_acc for charged off
plt.figure(figsize=(15, 5))
plt.title('Total Count of Open Accounts for charged off')
sns.countplot(chdf.open_acc)
plt.show()
# plot for open accounts vs loan status
df7 = df.groupby(['open_acc', 'loan_status']).size().groupby(level =0).apply(lambda y: 100*y / y.sum()).unstack().plot(kind='bar', stacked=True, figsize=(15, 10))
plt.title('Relation Between open accounts and Loan_status')
plt.gca().yaxis.set_major_formatter(mtick.PercentFormatter())
###Output
_____no_output_____
###Markdown
Borrowers with 7 open accounts have the most loan defaults Plots for Address States
###Code
#plot showing value count for each state
plt.figure(figsize=(15, 5))
plt.title('Counts of Loans in Each State')
sns.countplot(chdf.addr_state)
plt.show()
# plot for addr_state vs loan status
df8 = df.groupby(['addr_state', 'loan_status']).size().groupby(level =0).apply(lambda y: 100*y / y.sum()).unstack().plot(kind='bar', stacked=True, figsize=(20, 10))
plt.title('Relation Between Address State and Loan_status')
plt.gca().yaxis.set_major_formatter(mtick.PercentFormatter())
###Output
_____no_output_____
###Markdown
California (CA) has the highest number of loan defaulters Plots for Interest rate
###Code
#plot showing value count for int_rate
plt.figure(figsize=(8, 5))
plt.title('Interest Rate on loans')
sns.countplot(df.int_ratebin)
plt.show()
# plot for int_ratebin vs loan status
df9 = df.groupby(['int_ratebin', 'loan_status']).size().groupby(level =0).apply(lambda y: 100*y / y.sum()).unstack().plot(kind='bar', stacked=True, figsize=(12, 6))
plt.title('Relation Between interest Rate and Loan_status')
plt.gca().yaxis.set_major_formatter(mtick.PercentFormatter())
###Output
_____no_output_____
###Markdown
Borrowers with very high interest rates are the most likely to default Plots for Loan Amount
###Code
#plot showing value count for loanamount
plt.figure(figsize=(15, 5))
plt.title('Total Loan Counts based on loan amount')
sns.countplot(df.loan_amntbin)
plt.show()
# plot for loan_amntbin vs loan status
df10 = df.groupby(['loan_amntbin', 'loan_status']).size().groupby(level =0).apply(lambda y: 100*y / y.sum()).unstack().plot(kind='bar', stacked=True, figsize=(15, 10))
plt.title('Relation Between Loan_amount and Loan_status')
plt.gca().yaxis.set_major_formatter(mtick.PercentFormatter())
###Output
_____no_output_____
###Markdown
Borrowers with high loan amounts (between 30k and 35k) default the most, as there is a high probability that large loans get charged off Plots for Employment Length
###Code
#plot showing value count for each emplength
plt.figure(figsize=(15, 5))
plt.title('Count of Employement year Length')
sns.countplot(chdf.emp_length)
plt.show()
# plot for emp_length vs loan status
df11 = df.groupby(['emp_length', 'loan_status']).size().groupby(level =0).apply(lambda y: 100*y / y.sum()).unstack().plot(kind='bar', stacked=True, figsize=(15, 10))
plt.title('Relation Between Employement Length years and Loan_status')
plt.gca().yaxis.set_major_formatter(mtick.PercentFormatter())
###Output
_____no_output_____
###Markdown
Borrowers with an employment length of 10 years or more have the most loan defaults Plots for Home Ownership
###Code
#plot showing value count for homeownership for charged off
plt.figure(figsize=(15, 5))
plt.title('Total Count of Home Ownership for charged off')
sns.countplot(chdf.home_ownership)
plt.show()
# plot for home_ownership vs loan status
df12 = df.groupby(['home_ownership', 'loan_status']).size().groupby(level =0).apply(lambda y: 100*y / y.sum()).unstack().plot(kind='bar', stacked=True, figsize=(15, 10))
plt.title('Relation Between Home Ownership and Loan_status')
plt.gca().yaxis.set_major_formatter(mtick.PercentFormatter())
###Output
_____no_output_____
###Markdown
Borrowers living on rent have the highest chance of loan default, because the largest number of defaulters are renters Plots for DTI
###Code
#plot showing value count for dti
plt.figure(figsize=(15, 5))
plt.title('Total Loan Counts based on DTI')
sns.countplot(df.dtibin)
plt.show()
# plot for dtibin vs loan status
df13 = df.groupby(['dtibin', 'loan_status']).size().groupby(level =0).apply(lambda y: 100*y / y.sum()).unstack().plot(kind='bar', stacked=True, figsize=(15, 10))
plt.title('Relation Between DTI and Loan_status')
plt.gca().yaxis.set_major_formatter(mtick.PercentFormatter())
###Output
_____no_output_____
###Markdown
Borrowers with a poor DTI have the highest percentage of loan defaults Heatmap
###Code
#plotting heatmap for corelation matrix
plt.subplots(figsize=(15, 10))
plt.title('Correlation between each data variable')
sns.heatmap(cor, xticklabels=cor.columns.values,
yticklabels=cor.columns.values,annot= True,linecolor="black",linewidths=2, cmap="viridis")
plt.show()
###Output
_____no_output_____
###Markdown
Loan amount and funded amount by investor have the highest correlation Clustermap
###Code
#plotting clustermap for corelation matrix
plt.figure(figsize=(12, 8))
sns.clustermap(cor,annot= True, cmap="RdYlGn",
linecolor="black",linewidths=2)
plt.show()
###Output
_____no_output_____ |
src/03-Plots.ipynb | ###Markdown
Data Preprocessing
###Code
import os
from glob import glob
import numpy as np
import pandas as pd
from PIL import Image
from tqdm.notebook import tqdm
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset
import torch.optim as optim
from torchvision import transforms
from torchvision.datasets import ImageFolder
from torchvision.utils import make_grid
# data path where the training set is located
path = "../data/fruits/fruits-360/"
# this joins the path + folder and each file, e.g. '../data/fruits/fruits-360/Training/Apple Braeburn/115_100.jpg'
files_training = glob(os.path.join(path,'Training', '*/*.jpg'))
num_images = len(files_training)
print('Number of images in Training file:', num_images)
# see how many images we have for each label, plus the minimum and the average count, printed nicely
min_images = 1000
im_cnt = []
class_names = []
print('{:18s}'.format('class'), end='')
print('Count:')
print('-' * 24)
for folder in os.listdir(os.path.join(path, 'Training')):
folder_num = len(os.listdir(os.path.join(path,'Training',folder)))
im_cnt.append(folder_num)
class_names.append(folder)
print('{:20s}'.format(folder), end=' ')
print(folder_num)
num_classes = len(class_names)
print("\nMinumum images per category:", np.min(im_cnt), 'Category:', class_names[im_cnt.index(np.min(im_cnt))])
print('Average number of Images per Category: {:.0f}'.format(np.array(im_cnt).mean()))
print('Total number of classes: {}'.format(num_classes))
fruit_data = pd.DataFrame(data = im_cnt,index = class_names,columns=["image_number"])
fruit_data.head()
top_ten = fruit_data.sort_values(by="image_number",ascending=False)[:10]
bottom_ten = fruit_data.sort_values(by="image_number",ascending=True)[:10]
frames = [top_ten, bottom_ten]
merged_tens = pd.concat(frames)
from sklearn.utils import shuffle
merged_tens = shuffle(merged_tens)
import seaborn as sns
plt.figure(figsize = (12,8))
chart = sns.barplot(x=merged_tens.index, y = merged_tens["image_number"],data=merged_tens, palette="Accent")
chart.set_xticklabels(chart.get_xticklabels(), rotation=45)
chart.set_ylabel("Number of Images")
plt.axhline(y=np.mean(im_cnt), color='r', linestyle='--',label = "Average Number of Images")
plt.legend()
plt.title("Number of Images for Top and Least 10 Fruits")
plt.savefig("../plots/number_of_images.png")
plt.show()
# transform used only to estimate pop_mean and pop_std
tensor_transform = transforms.Compose([transforms.ToTensor()])
training_data = ImageFolder(os.path.join(path, 'Training'), tensor_transform)
data_loader = torch.utils.data.DataLoader(training_data, batch_size=512, shuffle=True)
%time
# this part takes a bit long
pop_mean = [0.6840367,0.5786325,0.5037564] # precomputed values; originally an empty list filled by the loop below
pop_std = [0.30334985,0.3599262,0.3913685]
# for i, data in tqdm(enumerate(data_loader, 0)):
# numpy_image = data[0].numpy()
# batch_mean = np.mean(numpy_image, axis=(0,2,3))
# batch_std = np.std(numpy_image, axis=(0,2,3))
# pop_mean.append(batch_mean)
# pop_std.append(batch_std)
# pop_mean = np.array(pop_mean).mean(axis=0)
# pop_std = np.array(pop_std).mean(axis=0)
# the computation above takes a while, so the precomputed values are used instead
print(pop_mean)
print(pop_std)
np.random.seed(123)
shuffle = np.random.permutation(num_images)
# split validation images
split_val = int(num_images * 0.2)
print('Total number of images:', num_images)
print('Number images in validation set:',len(shuffle[:split_val]))
print('Number images in train set:',len(shuffle[split_val:]))
class FruitTrainDataset(Dataset):
def __init__(self, files, shuffle, split_val, class_names, transform=transforms.ToTensor()):
self.shuffle = shuffle
self.class_names = class_names
self.split_val = split_val
self.data = np.array([files[i] for i in shuffle[split_val:]])
self.transform=transform
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
img = Image.open(self.data[idx])
name = self.data[idx].split('/')[-2]
y = self.class_names.index(name)
img = self.transform(img)
return img, y
class FruitValidDataset(Dataset):
def __init__(self, files, shuffle, split_val, class_names, transform=transforms.ToTensor()):
self.shuffle = shuffle
self.class_names = class_names
self.split_val = split_val
self.data = np.array([files[i] for i in shuffle[:split_val]])
self.transform=transform
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
img = Image.open(self.data[idx])
name = self.data[idx].split('/')[-2]
y = self.class_names.index(name)
img = self.transform(img)
return img, y
class FruitTestDataset(Dataset):
def __init__(self, path, class_names, transform=transforms.ToTensor()):
self.class_names = class_names
self.data = np.array(glob(os.path.join(path, '*/*.jpg')))
self.transform=transform
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
img = Image.open(self.data[idx])
name = self.data[idx].split('/')[-2]
y = self.class_names.index(name)
img = self.transform(img)
return img, y
data_transforms = {
'train': transforms.Compose([
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.ToTensor(),
transforms.Normalize(pop_mean, pop_std) # These were the mean and standard deviations that we calculated earlier.
]),
'Test': transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(pop_mean, pop_std) # These were the mean and standard deviations that we calculated earlier.
]),
'valid': transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(pop_mean, pop_std) # These were the mean and standard deviations that we calculated earlier.
])
}
train_dataset = FruitTrainDataset(files_training, shuffle, split_val, class_names, data_transforms['train'])
valid_dataset = FruitValidDataset(files_training, shuffle, split_val, class_names, data_transforms['valid'])
test_dataset = FruitTestDataset("../data/fruits/fruits-360/Test", class_names, transform=data_transforms['Test'])
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=32, shuffle=True)
valid_loader = torch.utils.data.DataLoader(valid_dataset, batch_size=32, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=32, shuffle=True)
dataloaders = {'train': train_loader,
'valid': valid_loader,
'Test': test_loader}
dataset_sizes = {
'train': len(train_dataset),
'valid': len(valid_dataset),
'Test': len(test_dataset)
}
def imshow(inp, title=None):
"""Imshow for Tensor."""
inp = inp.numpy().transpose((1, 2, 0))
inp = pop_std * inp + pop_mean
inp = np.clip(inp, 0, 1)
plt.figure(figsize = (12,8))
plt.axis('off')
if title is not None:
plt.title(title)
plt.imshow(inp)
plt.savefig("../plots/random_fruits2.png")
inputs, classes = next(iter(train_loader))
out = make_grid(inputs)
cats = ['' for x in range(len(classes))]
for i in range(len(classes)):
cats[i] = class_names[classes[i].item()]
imshow(out,title="Random Fruit Images from Dataset")
print(cats)
import random
x = random.randint(0,32)
fig, (ax1, ax2, ax3) = plt.subplots(1,3,figsize = (12,8))
plt.axis('off')
ax1.imshow(inputs[x][0])
ax1.set_title("Original Image")
ax1.axis('off')
ax2.imshow(inputs[x][1])
ax2.set_title("Horizontal Flipped Image")
ax2.axis('off')
ax3.imshow(inputs[x][2])
ax3.set_title("Vertical Flipped Image")
ax2.axis('off')
###Output
_____no_output_____ |
notebooks/Untitled-checkpoint.ipynb | ###Markdown
Mushroom Classification Importing the libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
###Output
_____no_output_____
###Markdown
Importing the dataset
###Code
df=pd.read_csv("mushrooms.csv")
df
df.info()
df.describe()
df.isnull().sum()
###Output
_____no_output_____
###Markdown
Preprocessing
###Code
df.columns
def Preprocessing(df):
    df = df.copy()
    # encoding: label-encode every column and keep an index -> label mapping per column
    mapping = list()
    encoder = LabelEncoder()
    for column in df.columns:
        df[column] = encoder.fit_transform(df[column])
        mapping_dict = {index: label for index, label in enumerate(encoder.classes_)}
        mapping.append(mapping_dict)
    # splitting into target and features
    y = df['class'].copy()
    X = df.drop('class', axis=1).copy()
    # scaling
    scaler = StandardScaler()
    X = pd.DataFrame(scaler.fit_transform(X), columns=X.columns)
    # train/test split
    X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7, shuffle=True, random_state=123)
    return X_train, X_test, y_train, y_test
X_train,X_test,y_train,y_test= Preprocessing (df)
X_train
y_train
###Output
_____no_output_____
###Markdown
Logistic Regression model
###Code
model_1=LogisticRegression()
model_1.fit(X_train,y_train)
model_1.score(X_test,y_test)*100
###Output
_____no_output_____
###Markdown
Support vector classification model
###Code
model_2=SVC()
model_2.fit(X_train,y_train)
model_2.score(X_test,y_test)*100
###Output
_____no_output_____ |
notebooks/rapids-notmnist-viz.ipynb | ###Markdown
notMNIST letters visualization In this notebook, we'll apply some popular visualization techniques to visualize notMNIST letters using a GPU and the [RAPIDS](https://rapids.ai/) libraries (cudf, cuml). This notebook is based on the scikit-learn embedding examples found [here](http://scikit-learn.org/stable/auto_examples/manifold/plot_lle_digits.html).**Note that a GPU is required with this notebook.**This version of the notebook has been tested with RAPIDS version 0.15.First, the needed imports.
###Code
%matplotlib inline
from time import time
import cudf
import numpy as np
import pandas as pd
import os
import urllib.request
from cuml import PCA, TSNE, UMAP
from cuml.random_projection import SparseRandomProjection
from cuml import __version__ as cuml_version
import sklearn
from sklearn import __version__ as sklearn_version
import matplotlib.pyplot as plt
print('Using cudf version:', cudf.__version__)
print('Using cuml version:', cuml_version)
print('Using sklearn version:', sklearn_version)
###Output
_____no_output_____
###Markdown
 Then we load the notMNIST data. The first time, we need to download the data, which can take a while. The data is stored as Numpy arrays in host (CPU) memory.
###Code
def load_not_mnist(directory, filename):
filepath = os.path.join(directory, filename)
if os.path.isfile(filepath):
print('Not downloading, file already exists:', filepath)
else:
if not os.path.isdir(directory):
os.mkdir(directory)
url_base = 'https://a3s.fi/mldata/'
url = url_base + filename
print('Downloading {} to {}'.format(url, filepath))
urllib.request.urlretrieve(url, filepath)
return np.load(filepath)
DATA_DIR = os.path.expanduser('~/data/notMNIST/')
if not os.path.exists(DATA_DIR):
os.makedirs(DATA_DIR)
X = load_not_mnist(DATA_DIR, 'notMNIST_large_images.npy').reshape(-1, 28*28)
X = X.astype(np.float32)
y = load_not_mnist(DATA_DIR, 'notMNIST_large_labels.npy')
print()
print('notMNIST data loaded:',len(X))
print('X:', type(X), 'shape:', X.shape, X.dtype)
print('y:', type(y), 'shape:', y.shape, y.dtype)
###Output
_____no_output_____
###Markdown
Let's convert our data to a cuDF DataFrame in device (GPU) memory.
###Code
%%time
cu_X = cudf.DataFrame.from_pandas(pd.DataFrame(X))
print('cu_X:', type(cu_X), 'shape:', cu_X.shape)
###Output
_____no_output_____
###Markdown
Let's start by inspecting our data by drawing some samples:
###Code
n_img_per_row = 32 # 32*32=1024
img = np.zeros((28 * n_img_per_row, 28 * n_img_per_row))
for i in range(n_img_per_row):
ix = 28 * i
for j in range(n_img_per_row):
iy = 28 * j
img[ix:ix + 28, iy:iy + 28] = X[i * n_img_per_row + j,:].reshape(28,28)
img = np.max(img)-img
plt.figure(figsize=(9, 9))
plt.imshow(img, cmap='Greys')
plt.title('1024 first notMNIST letters')
ax=plt.axis('off')
###Output
_____no_output_____
###Markdown
Let's define a helper function to plot the different visualizations:
###Code
def plot_embedding(X, y, title=None, time=None, color=True, save_as=None):
x_min, x_max = np.min(X, 0), np.max(X, 0)
X = (X - x_min) / (x_max - x_min)
y_int = [ord(yi)-ord('A') for yi in y]
fig = plt.figure(figsize=(16,9))
ax = fig.add_subplot(111)
plt.axis('off')
if color:
colors = plt.cm.tab10(y_int)
#colors = plt.cm.Set1([yi / 10. for yi in y_int])
alpha = 0.2
else:
colors = 'k'
alpha = 0.2
s = plt.scatter(X[:, 0], X[:, 1], 0.5, color=colors, alpha=alpha)
if color:
x_lim, y_lim = ax.get_xlim(), ax.get_ylim()
x_labs = x_lim[0] + (x_lim[1]-x_lim[0])*np.arange(10)/30
y_labs = y_lim[0] + (y_lim[1]-y_lim[0])*0.1*np.zeros(10)
labels=['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J']
for i in range(10):
plt.text(x_labs[i], y_labs[i], labels[i], color=plt.cm.tab10(i),
fontdict={'weight': 'bold', 'size': 32})
if title is not None:
        if time is not None:
plt.title("%s (%.2fs)" % (title, time))
else:
plt.title(title)
if save_as is not None:
plt.savefig(save_as, bbox_inches='tight', dpi=150)
###Output
_____no_output_____
###Markdown
1. Random projectionA simple first visualization is a [random projection](http://scikit-learn.org/stable/modules/random_projection.htmlrandom-projection) of the data into two dimensions.
###Code
t0 = time()
rp = SparseRandomProjection(n_components=2, random_state=42)
X_projected = rp.fit_transform(cu_X).as_matrix()
t = time() - t0
plot_embedding(X_projected, y, "Random projection", t, color=False)
###Output
_____no_output_____
###Markdown
 Since we know the labels for our data, we can color the visualization accordingly.
###Code
plot_embedding(X_projected, y, "Random projection", t)
###Output
_____no_output_____
###Markdown
2. PCA[Principal component analysis](http://scikit-learn.org/stable/modules/decomposition.htmlpca) (PCA) is a standard method to decompose a high-dimensional dataset in a set of successive orthogonal components that explain a maximum amount of the variance. Here we project the data into two first principal components. The components have the maximal possible variance under the orthogonality constraint.
###Code
t0 = time()
pca = PCA(n_components=2)
X_pca = pca.fit_transform(cu_X).as_matrix()
t = time() - t0
plot_embedding(X_pca, y, "PCA projection", t, color=False)
plot_embedding(X_pca, y, "PCA projection", t)
###Output
_____no_output_____
###Markdown
4. t-SNE[t-distributed Stochastic Neighbor Embedding](http://scikit-learn.org/stable/modules/manifold.htmlt-sne) (t-SNE) is a relatively new and popular tool to visualize high-dimensional data. t-SNE is particularly sensitive to local structure and can often reveal clusters in the data.t-SNE has an important tuneable parameter called `perplexity`, that can have a large effect on the resulting visualization, depending on the data. Typical values for perplexity are between 5 and 50.
###Code
t0 = time()
perplexity=30
tsne = TSNE(n_components=2, perplexity=perplexity)
X_tsne = tsne.fit_transform(cu_X).as_matrix()
t = time() - t0
###Output
_____no_output_____
###Markdown
In t-SNE visualizations, there are sometimes outliers that distract the visualization. We therefore remove `prc` % of smallest and largest data points on both dimensions.
###Code
prc = 0.01
min_tsne = np.percentile(X_tsne, prc, axis=0)
max_tsne = np.percentile(X_tsne, 100-prc, axis=0)
range_ok = ((X_tsne[:,0]>min_tsne[0]) &
(X_tsne[:,0]<max_tsne[0]) &
(X_tsne[:,1]>min_tsne[1]) &
(X_tsne[:,1]<max_tsne[1]))
X_tsne_filt = X_tsne[range_ok]
y_tsne_filt = y[range_ok]
plt.figure(figsize=(5,2))
plt.hist(X_tsne_filt[:, 0], 100,
range=(min_tsne[0], max_tsne[0]), alpha=0.7);
plt.hist(X_tsne_filt[:, 1], 100,
range=(min_tsne[1], max_tsne[1]), alpha=0.7);
plot_embedding(X_tsne_filt, y_tsne_filt,
"t-SNE embedding with perplexity=%d" % perplexity,
t)
###Output
_____no_output_____
###Markdown
 5. UMAP[Uniform Manifold Approximation and Projection](https://umap-learn.readthedocs.io/en/latest/index.html) (UMAP) is another recently published technique for data visualization and dimensionality reduction based on manifold learning and topological data analysis. The main hyperparameters of UMAP include `n_neighbors` and `min_dist`, which control the size of the local neighborhood considered and how tightly the algorithm packs neighboring points together, respectively. The values of both hyperparameters have a significant impact on the resulting visualization.
###Code
t0 = time()
n_neighbors = 15
min_dist = 0.1
umapmodel = UMAP(n_neighbors=n_neighbors, min_dist=min_dist)
X_umap = umapmodel.fit_transform(cu_X).as_matrix()
t = time() - t0
###Output
_____no_output_____
###Markdown
In UMAP visualizations, there are sometimes outliers that distract the visualization. We therefore remove `prc` % smallest and largest data points on both dimensions.
###Code
prc = 1.0
min_umap = np.percentile(X_umap, prc, axis=0)
max_umap = np.percentile(X_umap, 100-prc, axis=0)
range_ok = ((X_umap[:,0]>min_umap[0]) &
(X_umap[:,0]<max_umap[0]) &
(X_umap[:,1]>min_umap[1]) &
(X_umap[:,1]<max_umap[1]))
X_umap_filt = X_umap[range_ok]
y_umap_filt = y[range_ok]
plt.figure(figsize=(5,2))
plt.hist(X_umap_filt[:, 0], 100,
range=(min_umap[0], max_umap[0]), alpha=0.7);
plt.hist(X_umap_filt[:, 1], 100,
range=(min_umap[1], max_umap[1]), alpha=0.7);
plot_embedding(X_umap_filt, y_umap_filt,
"UMAP projection with n_neighbors=%d, min_dist=%.2f" % (n_neighbors,
min_dist),
t)
###Output
_____no_output_____
###Markdown
 3D visualizations In this section, we produce some 3D visualizations using [Plotly](https://plotly.com/python/).
###Code
import plotly.express as px
from plotly.offline import plot
###Output
_____no_output_____
###Markdown
PCA
###Code
t0 = time()
pca3 = PCA(n_components=3)
X_pca3 = pca3.fit_transform(cu_X).as_matrix()
print('{:.2f} seconds elapsed'.format(time() - t0))
fig = px.scatter_3d(
X_pca3, x=0, y=1, z=2, color=y,
title='PCA projection',
labels={'0': 'PC 1', '1': 'PC 2', '2': 'PC 3'}
)
fig.update_traces(marker=dict(size=1))
plot(fig, filename = 'pca3-plot.html');
###Output
_____no_output_____
###Markdown
UMAP
###Code
t0 = time()
n_neighbors = 15
min_dist = 0.1
umapmodel3 = UMAP(n_components=3,
n_neighbors=n_neighbors, min_dist=min_dist)
X_umap3 = umapmodel3.fit_transform(cu_X).as_matrix()
print('{:.2f} seconds elapsed'.format(time() - t0))
prc = 1.0
min_umap3 = np.percentile(X_umap3, prc, axis=0)
max_umap3 = np.percentile(X_umap3, 100-prc, axis=0)
range_ok = ((X_umap3[:,0]>min_umap3[0]) &
(X_umap3[:,0]<max_umap3[0]) &
(X_umap3[:,1]>min_umap3[1]) &
(X_umap3[:,1]<max_umap3[1]) &
(X_umap3[:,2]>min_umap3[2]) &
(X_umap3[:,2]<max_umap3[2]))
X_umap3_filt = X_umap3[range_ok]
y_umap3_filt = y[range_ok]
fig = px.scatter_3d(
X_umap3_filt, x=0, y=1, z=2, color=y_umap3_filt,
title="UMAP projection with n_neighbors=%d, min_dist=%.2f" % (n_neighbors,
min_dist),
labels={'0': 'UMAP 1', '1': 'UMAP 2', '2': 'UMAP 3'}
)
fig.update_traces(marker=dict(size=1))
plot(fig, filename = 'umap3-plot.html');
###Output
_____no_output_____ |
Baseline_Model_Logistic_Regression.ipynb | ###Markdown
**Loading the Data** The data set is a compressed (gzip) NDJSON file.
###Code
import gzip
import json
from pathlib import Path
###Output
_____no_output_____
###Markdown
 The datasets are imported from Google Drive. The task is to classify companies' landing pages by industry sector.
###Code
# Mount Google Drive
from google.colab import drive
drive.mount('/gdrive')
data_path = Path('/gdrive/MyDrive/industry_data/')
file_name1 = 'train_small.ndjson.gz'
file_name2 = 'test_small.ndjson.gz'
###Output
Mounted at /gdrive
###Markdown
Importing the whole train file
###Code
# open train file
with gzip.open(data_path/file_name1, "rt", encoding='UTF-8') as file:
data1 = [json.loads(line) for line in file]
###Output
_____no_output_____
###Markdown
Importing the whole test file
###Code
# open test file
with gzip.open(data_path/file_name2, "rt", encoding='UTF-8') as file:
data2 = [json.loads(line) for line in file]
###Output
_____no_output_____
###Markdown
 **Data Visualization** *Training Dataset* The train file is a list of dictionaries, each dictionary comprising four keys: html, industry, industry_label, and url.
###Code
example1 = data1[0]
example1.keys()
# Part of the html content of the first observation in the train file
example1['html'][:512]
###Output
_____no_output_____
###Markdown
Transformation of the train file constructed before as a list of dictionaries to a dataframe.
###Code
import pandas as pd
data_train = pd.DataFrame(data1)
data_train.shape
data_train.head()
###Output
_____no_output_____
###Markdown
Exploration of the distribution of the different classes in the training dataset. There are 19 industry classes.
###Code
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(8,6))
data_train.groupby('industry_label').html.count().sort_values(ascending=False).plot.bar(ylim=0)
plt.show()
data_train['industry_label'].value_counts()
###Output
_____no_output_____
###Markdown
 The training dataset is imbalanced. It comprises 25185 observations, of which the "Information Technology and Services" class is the most frequent with 5191 observations, while the "Leisure, Travel & Tourism" class occurs least often, with only 399 observations.
###Code
# Number of words in the training dataset
print(data_train['html'].apply(lambda x: len(x.split(' '))).sum())
###Output
395211490
###Markdown
 There are nearly 395 million words in the 'html' column of the training set. *Test Dataset* The test file is also a list of dictionaries, each dictionary comprising four keys: html, industry, industry_label, and url.
###Code
example2 = data2[0]
example2.keys()
# Part of the html content of the first observation in the test file
example2['html'][:512]
###Output
_____no_output_____
###Markdown
Transformation of the test file constructed before as a list of dictionaries to a dataframe.
###Code
import pandas as pd
data_test = pd.DataFrame(data2)
data_test.shape
data_test.head()
###Output
_____no_output_____
###Markdown
Exploration of the distribution of the different classes in the test dataset. There are also 19 industry classes.
###Code
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(8,6))
data_test.groupby('industry_label').html.count().sort_values(ascending=False).plot.bar(ylim=0)
plt.show()
data_test['industry_label'].value_counts()
###Output
_____no_output_____
###Markdown
 The test dataset is also imbalanced. It comprises 8396 observations, of which the "Information Technology and Services" class is the most frequent with 1671 observations, while the "Legal Services" class occurs least often, with only 135 observations.
###Code
# Number of words in the test dataset
print(data_test['html'].apply(lambda x: len(x.split(' '))).sum())
###Output
135251555
###Markdown
 There are nearly 135 million words in the 'html' column of the test set. **Data Cleaning** The landing pages of companies are classified based on their HTML files. In order to get better accuracy, one needs to clean these HTML files to obtain proper texts that are ready for training and testing. So, the HTML is decoded using "BeautifulSoup", and the punctuation that is not relevant for the training process is removed.
###Code
import string
from bs4 import BeautifulSoup
PUNCT_TO_REMOVE = string.punctuation
def cleaning_text(text):
"""custom function to remove the punctuation"""
text = text.replace('\n', ' ')
text = BeautifulSoup(text, 'html.parser').text # HTML decoding
text = text.translate(str.maketrans('', '', PUNCT_TO_REMOVE))
return text
###Output
_____no_output_____
###Markdown
*Training Dataset after Cleaning*
###Code
data_train['html'] = data_train['html'].apply(cleaning_text)
data_train.tail()
# Number of words in the training dataset
print(data_train['html'].apply(lambda x: len(x.split(' '))).sum())
###Output
170234367
###Markdown
 More than half of the HTML content was removed by the cleaning step. *Test Dataset after Cleaning*
###Code
data_test['html'] = data_test['html'].apply(cleaning_text)
data_test.tail()
# Number of words in the test dataset
print(data_test['html'].apply(lambda x: len(x.split(' '))).sum())
###Output
58947648
###Markdown
 More than half of the HTML content was removed by the cleaning step. **Train the Model through a Pipeline** sklearn.feature_extraction.text.TfidfVectorizer is used to calculate a tf-idf vector for each observation in the column 'html', in which:* sublinear_tf is set to True to use a logarithmic form for frequency.* min_df is the minimum number of documents a word must be present in to be kept.* norm is set to l2, to ensure all our feature vectors have a Euclidean norm of 1.* ngram_range is set to (1, 2) to indicate that we want to consider both unigrams and bigrams.* stop_words is set to "english" to remove common stop words ("a", "the", ...) and reduce the number of noisy features. Then, the Logistic Regression model is used to fit the training dataset.
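 As a point of reference (standard scikit-learn behaviour, stated here as a reminder rather than quoted from the original notebook): with sublinear_tf=True the raw term frequency tf is replaced by 1 + log(tf) before the idf weighting is applied, which dampens the influence of words repeated many times within a single page.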
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
log_reg = Pipeline([
('tfidf', TfidfVectorizer(sublinear_tf=True, min_df=50, norm='l2', encoding='latin-1', ngram_range=(1, 2), stop_words='english')),
('log_reg', LogisticRegression(multi_class="ovr",n_jobs=1, C=80))
])
# Setting multi_class="ovr",n_jobs=1, and C=80 has given the best f1-scores
log_reg.fit(data_train.html,data_train.industry_label)
###Output
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:818: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG,
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:818: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG,
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:818: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG,
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:818: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG,
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:818: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG,
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:818: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG,
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:818: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG,
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:818: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG,
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:818: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG,
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:818: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG,
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:818: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG,
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:818: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG,
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:818: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG,
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:818: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG,
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:818: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG,
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:818: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG,
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:818: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG,
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:818: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG,
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:818: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG,
###Markdown
**Evaluate the Model**
###Code
predictions = log_reg.predict(data_test.html)
###Output
_____no_output_____
###Markdown
The model has been evaluated using the classification report.
###Code
from sklearn import metrics
import pandas as pd
lr_report = metrics.classification_report(data_test.industry_label,
predictions,
output_dict=True)
pd.DataFrame(lr_report).T
###Output
_____no_output_____
###Markdown
 * f1-score is the harmonic mean between precision and recall.* Recall is the number of correctly predicted elements of a class out of the number of actual/true elements of that class.* Precision is the number of correctly predicted elements of a class out of all predicted elements of that class.* Support is the number of occurrences of the given class in the test dataset. The "Telecommunications" class has the lowest recall, approximately 0.34. This means that the percentage of correctly predicted elements of the "Telecommunications" class out of its true elements is nearly 34%. The "Leisure, Travel & Tourism" class has the highest precision, approximately 0.93. This means that the percentage of correctly predicted elements of the "Leisure, Travel & Tourism" class out of all predicted elements of this class is nearly 93%. The Confusion Matrix is displayed below to get a deeper look at the correctly predicted elements and the actual/true ones of each class
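 For reference, the standard definition (stated here for clarity, not quoted from the report above) is $F_1 = 2\cdot\frac{precision \cdot recall}{precision + recall}$, so a class only receives a high f1-score when both its precision and its recall are high.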
###Code
from sklearn.metrics import confusion_matrix
import seaborn as sns
import numpy as np
unique_label = np.unique(data_test.industry_label)
conf_mat = confusion_matrix(data_test.industry_label, predictions)
fig, ax = plt.subplots(figsize=(12,12))
sns.heatmap(conf_mat, annot=True, fmt='d',
xticklabels=unique_label, yticklabels=unique_label)
plt.ylabel('Actual')
plt.xlabel('Predicted')
plt.show()
###Output
_____no_output_____
###Markdown
 * The lowest recall, for the "Telecommunications" class, is confirmed by the confusion matrix, since only 52 elements out of 153 are correctly predicted in this class.* The "Leisure, Travel & Tourism" class has the highest precision, nearly 0.93: there are 120 correctly predicted elements in this class out of 129 predicted elements. * 172 elements that actually belong to the "Management Consulting" class are incorrectly predicted as belonging to the "Information Technology and Services" class.* 124 elements that actually belong to the "Marketing and Advertising" class are incorrectly predicted as belonging to the "Information Technology and Services" class. **Turning the model into an online application** Gradio is used to turn the model into a web interface
###Code
# install gradio
!pip install gradio
#import gradio
import gradio as gr
###Output
_____no_output_____
###Markdown
Define classifier function
###Code
labels = np.unique(data_train['industry_label'])
def predict(text):
    # clean the raw text first, then pass it to the fitted pipeline as a one-element list
    text = cleaning_text(text)
    pred = log_reg.predict([text])
return pred[0]
###Output
_____no_output_____
###Markdown
Launch gradio interface
###Code
title = "Industry Sector Classification"
example = [['Corporate Site of ING, a global financial institution of Dutch origin, providing news, investor relations and general information'],
['Elektromarkt in Ihrer Nähe mit OnlineShop – expert']]
gr_interface= gr.Interface(fn = predict,
inputs = gr.inputs.Textbox(lines=5,placeholder="Paste some Text here",label="Input Text"),
outputs = gr.outputs.Textbox(),interpretation="default",
description = "Enter some text from a company's landing page and see whether the model infers correctly its industry sector",
title=title,examples=example)
gr_interface.launch(share=True)
###Output
Colab notebook detected. To show errors in colab notebook, set `debug=True` in `launch()`
Running on public URL: https://14289.gradio.app
This share link expires in 72 hours. For free permanent hosting, check out Spaces (https://huggingface.co/spaces)
|
notebooks/simplified_api_example.ipynb | ###Markdown
 Using the Revised API Imports Before starting up this notebook, go to the main directory (one up from this one) and use `pip install -e .`, which installs the `solarforcing` package into your environment.
###Code
from solarforcing import FluxCalculation
from datetime import datetime
###Output
_____no_output_____
###Markdown
 Accessing the Flux Calculations (ipr, iprm) Now, the functions found in the various notebooks have been built into a few main modules. They include:* calc.py - this contains the main calculations (ex. `vdk2016`, `lshell_to_glat`)* data_access.py - this includes a helper function `grab_potsdam_file` which grabs the ap data file* main.py - this contains the `FluxCalculation` class which brings this all together. Let's take a look at the "high level" API. Investigate `FluxCalculation` We see that this is full of defaults - feel free to change any of these by feeding in your desired value
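 If you would rather use the lower-level pieces directly, the module list above suggests imports along the lines of `from solarforcing.calc import vdk2016, lshell_to_glat` - note that this import path is an assumption based on the module names, so check the package for the exact locations and signatures.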
###Code
FluxCalculation?
flux = FluxCalculation(start_date=datetime(2000, 1, 1), end_date=datetime(2014, 12, 31))
###Output
_____no_output_____
###Markdown
1) Bringing in the data
###Code
flux.grab_data()
###Output
_____no_output_____
###Markdown
2) Calculating the `vdk2016` flux
###Code
flux.calculate_flux()
###Output
_____no_output_____
###Markdown
3) Generate an `xarray.Dataset`
###Code
flux.generate_dataset()
###Output
<xarray.Dataset>
Dimensions: (time: 5479, glat: 17, pressure: 101)
Coordinates:
* glat (glat) float64 44.71 50.53 54.53 57.51 ... 69.84 70.43 70.97 71.47
* pressure (pressure) float64 0.649 0.5603 0.4837 ... 3.086e-07 2.664e-07
* time (time) datetime64[ns] 2000-01-01 2000-01-02 ... 2014-12-31
Data variables:
iprm (time, glat, pressure) float64 0.001594 0.04147 ... 7.311e+10
ap (time) float64 30.12 16.5 12.0 12.88 ... 8.5 20.88 14.88 7.625
###Markdown
 Skipping to the last step You do not necessarily need to call all of this in order - for example, if you want to generate the dataset right away, you can use the following block of code
###Code
flux = FluxCalculation()
flux.generate_dataset()
###Output
_____no_output_____
###Markdown
 --- Accessing Solar Irradiance Data Another important dataset is solar irradiance (ssi) data from LASP - the data can be accessed on this site https://lasp.colorado.edu/lisird/data/nrl2_files. The default dataset in this case is the daily output - this workflow has been migrated from its original state based on [Mike Mill's script](https://svn.code.sf.net/p/codescripts/code/trunk/ncl/solar/createSolarFileNRLSSI2.ncl) Additional Imports We will need to access the `SolarIrradiance` object from `solarforcing`
###Code
from solarforcing import SolarIrradiance
###Output
_____no_output_____
###Markdown
 Investigate the `SolarIrradiance` Object With this object, we need to specify:- the data directory (`data_dir`)- the data URL (`data_url`), which is set to the most recent file from LASP by default- start/end dates (`start_date`/`end_date`) SolarIrradiance? Grab the data by instantiating the `SolarIrradiance` object
###Code
irradiance = SolarIrradiance()
###Output
_____no_output_____
###Markdown
We now have a new variable attached to this object - `raw_ds` which is the original dataset
###Code
irradiance.raw_ds
###Output
_____no_output_____
###Markdown
 Apply the Data Cleaning and Calculations Now that we have our dataset, we can apply the cleaning and calculations by calling `.generate_dataset()`, similar to the previous section
###Code
irradiance.generate_dataset()
###Output
_____no_output_____
###Markdown
 Investigate the Resulting Dataset We now have our processed dataset, `ds`
###Code
irradiance.ds
###Output
_____no_output_____ |
PP_temp_analysis_bonus_2_starter.ipynb | ###Markdown
 Reflect Tables into SQLAlchemy ORM
###Code
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
# General-purpose libraries used later in this notebook (dates, dataframes, plotting)
import datetime as dt
import pandas as pd
from matplotlib import pyplot as plt
%matplotlib inline
# create engine to hawaii.sqlite
engine = create_engine("sqlite:///hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# View all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measure = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
###Output
_____no_output_____
###Markdown
Bonus Challenge Assignment: Temperature Analysis II
###Code
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, maximum, and average temperatures for that range of dates
def calc_temps(start_date, end_date):
"""TMIN, TAVG, and TMAX for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
Returns:
TMIN, TAVE, and TMAX
"""
return session.query(func.min(Measure.tobs), func.avg(Measure.tobs), func.max(Measure.tobs)).\
filter(Measure.date >= start_date).filter(Measure.date <= end_date).all()
# For example
print(calc_temps('2012-02-28', '2012-03-05'))
# Use the function `calc_temps` to calculate the tmin, tavg, and tmax
# for a year in the data set
#Important note: Using 08/01/2017 as reference date
# Converts reference date to datetime format
recent_date = '2017-08-01'
recent_date_2 = dt.datetime.strptime(recent_date,'%Y-%m-%d')
recent_date
# Uses the most recent date in the query to find the date
# a year prior to that date
date_year_ago = recent_date_2 - dt.timedelta(days=365)
date_year_ago_2 = date_year_ago.strftime('%Y-%m-%d')
#Runs calc_temps function to get the min,avg,and max temp
#
recent_temp_data = calc_temps(date_year_ago_2,recent_date)
recent_temp_data
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for bar height (y value)
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
# Establishes size of figure
plt.figure(figsize=(3,10))
#Creates bar graph with Avg Temp as the data
plt.bar([''],recent_temp_data[0][1],alpha=0.7,color='#FF9966')
#Creates error bar with min and max temp as the data
plt.errorbar(['',''],[recent_temp_data[0][0],recent_temp_data[0][2]],ecolor='black')
#Sets y limit for graphs
plt.ylim(top=100)
#Title
plt.title('Trip Avg Temp')
#Y-label
plt.ylabel('Temp (F)')
#Change figure color
plt.figure(facecolor='#E6EAE7')
plt.show()
###Output
_____no_output_____
###Markdown
Daily Rainfall Average
###Code
# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's
# matching dates
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
rain_data = session.query(func.sum(Measure.prcp),Measure.station,Station.name,Station.latitude,Station.longitude,Station.elevation).\
    filter(Measure.station==Station.station).filter(Measure.date >= date_year_ago_2).\
    filter(Measure.date <= recent_date).group_by(Measure.station).order_by(func.sum(Measure.prcp).desc()).all()
rain_data
# Use this function to calculate the daily normals
# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)
def daily_normals(date):
"""Daily Normals.
Args:
date (str): A date string in the format '%m-%d'
Returns:
A list of tuples containing the daily normals, tmin, tavg, and tmax
"""
sel = [func.min(Measure.tobs), func.avg(Measure.tobs), func.max(Measure.tobs)]
return session.query(*sel).filter(func.strftime("%m-%d", Measure.date) == date).all()
# For example
daily_normals("01-01")
# Calculate the daily normals for your trip
# Push each tuple of calculations into a list called `normals`
# Set the start and end date of the trip
start_date = '2017-08-01'
end_date = '2017-08-07'
# Use the start and end date to create a range of dates
start_date_dt = dt.datetime.strptime(start_date,'%Y-%m-%d')
end_date_dt = dt.datetime.strptime(end_date,'%Y-%m-%d')
delta = end_date_dt - start_date_dt
# For loop used to populate list for range of dates
# Loops based on number of days between start and end date
date_range = [start_date_dt + dt.timedelta(days=x) for x in range(delta.days)]
# Appends end date to list
date_range.append(end_date_dt)
# Create new list for dates in string format
date_range_2 = []
# Loop through previous date range and convert datetime to string format
for row in date_range:
date_range_2.append(row.strftime('%Y-%m-%d'))
# Strip off the year and save a list of strings in the format %m-%d
# Creates new list for dates in %m-%d format
month_day = []
# Loops through date range of strings and extracts just the month and day
for row in date_range_2:
month_day.append(row.split('-',1)[1])
# Use the `daily_normals` function to calculate the normals for each date string
# and append the results to a list called `normals`.
# Creates new list of normals for each date in date range
md_normals = []
# Loops through month-day range and inserts the daily normals
# in the md_normals list
for row in month_day:
md_normals.append(daily_normals(row))
# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index
#Note: each tuple is ordered tmin,tmax,tavg
# Creates tmin, tavg, and tmax lists
tmin = []
tavg = []
tmax = []
# Loops through daily normals list and appends tmin, tavg, and tmax
# to respective lists
for row in md_normals:
tmin.append(row[0][0])
tavg.append(row[0][1])
tmax.append(row[0][2])
#Uses tmin, tavg, and tmax lists to create dictionary
t_dict = {'tmin':tmin,'tavg':tavg,'tmax':tmax}
#Forms new dataframe using t_dict
month_day_df = pd.DataFrame(t_dict)
month_day_df['trip_dates'] = month_day
month_day_df_2 = month_day_df.set_index('trip_dates')
month_day_df_2
# Plot the daily normals as an area plot with `stacked=False`
#Creates area plot and line plot for tmin
plt.fill_between(month_day_df['trip_dates'],month_day_df['tmin'],color='#CCEEFF',alpha=0.8)
plt.plot(month_day_df['trip_dates'],month_day_df['tmin'],color='#80D4FF')
#Creates area plot and line plot for tavg
plt.fill_between(month_day_df['trip_dates'],month_day_df['tavg'],color='#FFDDCC',alpha=0.4)
plt.plot(month_day_df['trip_dates'],month_day_df['tavg'],color='#FFAA80')
#Creates area plot and line plot for tmax
plt.fill_between(month_day_df['trip_dates'],month_day_df['tmax'],color='#FFEE99',alpha=0.3)
plt.plot(month_day_df['trip_dates'],month_day_df['tmax'],color='#FFE14D')
#Y-label and X-label
plt.ylabel('Temperature (F)')
plt.xlabel('Date')
#Change figure color
plt.figure(facecolor='#E6EAE7')
plt.show()
###Output
_____no_output_____
###Markdown
Close Session
###Code
engine.dispose()
session.close()
###Output
_____no_output_____ |
ML_AI/PyTorch/Spiral_classification.ipynb | ###Markdown
Create the data
###Code
#importing the necessary packages
import random
import torch
from torch import nn, optim
import math
from IPython import display
from res.plot_lib import plot_data, plot_model, set_default
set_default()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
seed = 12345
random.seed(seed)
torch.manual_seed(seed)
N = 1000 # num_samples_per_class
D = 2 # dimensions
C = 3 # num_classes
H = 100 # num_hidden_units
X = torch.zeros(N * C, D).to(device)
y = torch.zeros(N * C, dtype=torch.long).to(device)
for c in range(C):
index = 0
t = torch.linspace(0, 1, N)
# When c = 0 and t = 0: start of linspace
# When c = 0 and t = 1: end of linpace
# This inner_var is for the formula inside sin() and cos() like sin(inner_var) and cos(inner_Var)
inner_var = torch.linspace(
# When t = 0
(2 * math.pi / C) * (c),
# When t = 1
(2 * math.pi / C) * (2 + c),
N
) + torch.randn(N) * 0.2
for ix in range(N * c, N * (c + 1)):
X[ix] = t[index] * torch.FloatTensor((
math.sin(inner_var[index]), math.cos(inner_var[index])
))
y[ix] = c
index += 1
print("Shapes:")
print("X:", tuple(X.size()))
print("y:", tuple(y.size()))
# visualise the data
plot_data(X, y)
###Output
_____no_output_____
###Markdown
Linear model
###Code
learning_rate = 1e-3
lambda_l2 = 1e-5
# nn package to create our linear model
# each Linear module has a weight and bias
model = nn.Sequential(
nn.Linear(D, H),
nn.Linear(H, C)
)
model.to(device) #Convert to CUDA
# nn package also has different loss functions.
# we use cross entropy loss for our classification task
criterion = torch.nn.CrossEntropyLoss()
# we use the optim package to apply
# stochastic gradient descent for our parameter updates
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=lambda_l2) # built-in L2
# Training
for t in range(1000):
# Feed forward to get the logits
y_pred = model(X)
# Compute the loss and accuracy
loss = criterion(y_pred, y)
score, predicted = torch.max(y_pred, 1)
acc = (y == predicted).sum().float() / len(y)
print("[EPOCH]: %i, [LOSS]: %.6f, [ACCURACY]: %.3f" % (t, loss.item(), acc))
display.clear_output(wait=True)
# zero the gradients before running
# the backward pass.
optimizer.zero_grad()
# Backward pass to compute the gradient
# of loss w.r.t our learnable params.
loss.backward()
# Update params
optimizer.step()
# Plot trained model
print(model)
plot_model(X, y, model)
###Output
_____no_output_____
###Markdown
Two-layered network
###Code
learning_rate = 1e-3
lambda_l2 = 1e-5
# nn package to create our linear model
# each Linear module has a weight and bias
#defining the training loop
model = nn.Sequential(
nn.Linear(D, H),
nn.ReLU(),
nn.Linear(H, C)
)
model.to(device)
# nn package also has different loss functions.
# we use cross entropy loss for our classification task
criterion = torch.nn.CrossEntropyLoss()
# we use the optim package to apply
# ADAM for our parameter updates
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, weight_decay=lambda_l2) # built-in L2
# e = 1. # plotting purpose
# Training
for t in range(1000):
# Feed forward to get the logits
y_pred = model(X)
# Compute the loss and accuracy
loss = criterion(y_pred, y)
score, predicted = torch.max(y_pred, 1)
acc = (y == predicted).sum().float() / len(y)
print("[EPOCH]: %i, [LOSS]: %.6f, [ACCURACY]: %.3f" % (t, loss.item(), acc))
display.clear_output(wait=True)
# zero the gradients before running
# the backward pass.
optimizer.zero_grad()
# Backward pass to compute the gradient
# of loss w.r.t our learnable params.
loss.backward()
# Update params
optimizer.step()
# Plot trained model
print(model)
plot_model(X, y, model)
###Output
_____no_output_____ |
SVM/SVR/SVR_1d/SVR(1d).ipynb | ###Markdown
 SVR Concept Support Vector Regression (SVR) is a version of the Support Vector Machine used for regression: it lets us model continuous values, whereas SVM itself is used for classification. [More about SVR](https://medium.com/coinmonks/support-vector-regression-or-svr-8eb3acf6d0ff) [SVR explained](http://www.saedsayad.com/support_vector_machine_reg.htm) TERMS The two dashed lines are called the boundary lines, the line in between is called the hyperplane, the data points closest to the boundary lines are called support vectors, and the distances between the boundary lines and the hyperplane are "+e" and "-e" (positive and negative epsilon). Concept SVR is about choosing a boundary at distance "e" from the original hyperplane such that the data points closest to the hyperplane - the support vectors - lie within that boundary. The decision boundary is the margin of tolerance, and only data points within the boundary are taken into consideration. To keep the explanation simple, we take only those data points with the lowest error, which gives us a better-fitting model. SVR in python Let's import all the libraries first!
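 To make the epsilon-tube idea concrete, here is the standard formulation of linear SVR (a reference sketch of the textbook objective, not taken from the linked articles): the model $f(x) = w^{T}x + b$ is obtained by minimizing $\frac{1}{2}\lVert w \rVert^{2} + C\sum_{i}(\xi_i + \xi_i^{*})$ subject to $|y_i - f(x_i)| \le \epsilon + \xi_i^{(*)}$, so points that fall inside the epsilon-tube contribute no loss and only points on or outside the tube influence the fit.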
###Code
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.svm import SVR #Our today's topic
###Output
_____no_output_____
###Markdown
Let's get the Data!
###Code
dataset = pd.read_csv('50_Startups.csv') #Loading the data into DataFrame
dataset.head() #Displaying the data to make sure DataFrame is alright
### dataframe.iloc requires 1)ROW(s) and 2)Column(s)
x = dataset.iloc[:, 2:3].values #Put a colon to choose all rows.
y = dataset.iloc[:, 4:].values #Profit column is at 4 index(start counting from 0 like a programmer!)
###Output
_____no_output_____
###Markdown
Let's create the model and Fit the data
###Code
svr_regressor_model = SVR().fit(x, y)
###Output
C:\Users\PC\AppData\Local\Enthought\Canopy\edm\envs\User\lib\site-packages\sklearn\utils\validation.py:578: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
y = column_or_1d(y, warn=True)
###Markdown
Let's visualize the values and model
###Code
plt.scatter(x, y, c='r', label="Values")
plt.plot(x, svr_regressor_model.predict(x), c='b', label="Support Vector Regression")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
 The model does not look as effective as we expected. WHY? Maybe because we forgot to do feature scaling. Let's try Feature Scaling is a method used to standardize data, and it is also referred to as "Normalization"
###Code
from sklearn.preprocessing import StandardScaler
scalerx = StandardScaler().fit(x)
scalery = StandardScaler().fit(y)
X = scalerx.transform(x)
Y = scalery.transform(y)
###Output
_____no_output_____
###Markdown
Fitting data to the new model
###Code
#Let's name this model a bit differently to differenciate with previous model
svr_regressor_rbf = SVR().fit(X, Y)
###Output
C:\Users\PC\AppData\Local\Enthought\Canopy\edm\envs\User\lib\site-packages\sklearn\utils\validation.py:578: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
y = column_or_1d(y, warn=True)
###Markdown
Visualizing
###Code
X_grid = np.arange(min(X), max(X), 0.1)
X_grid = X_grid.reshape((len(X_grid), 1))
plt.scatter(X,Y, c='b', label="Values")
plt.plot(X_grid, svr_regressor_rbf.predict(X_grid), c='r', label="SVR",linewidth=3)
plt.title("SVR with RBF(default) kernel")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
 This model looks much better than the previous one, which means we need to scale the data before fitting it into the SVR model. The SVR model has many optional parameters, but the most important one is the kernel. The three most commonly used kernels for SVR are Linear, Polynomial and RBF (the default), which we used above. SVR model with linear kernel Fitting data to the SVR model with Linear kernel
###Code
#Let's name this model a bit differently to differenciate with previous model
svr_regressor_linear = SVR(kernel='linear').fit(X, Y)
###Output
C:\Users\PC\AppData\Local\Enthought\Canopy\edm\envs\User\lib\site-packages\sklearn\utils\validation.py:578: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
y = column_or_1d(y, warn=True)
###Markdown
Visualizing
###Code
plt.scatter(X, Y, c='r', label="Values")
plt.plot(X, svr_regressor_linear.predict(X), c='b', label="SVR")
plt.title("SVR with linear kernel")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
 This model with the Linear kernel is worse than the previous model with the RBF kernel. To prove it, I will use the score function of the SVR model (the coefficient of determination). The formula for computing the score is the following: $$ Score= 1- \frac { \sum\limits_{k = 1}^n {(y_{obs_k}-y_{pred_k})^2} } {\sum\limits_{k = 1}^n {(y_{obs_k}-y_{obs_{ave}})^2 }}$$ Let's compute the scores of each model
###Code
svr_regressor_rbf.score(X,Y) #rbf kernel with scaling(rbf)
svr_regressor_linear.score(X,Y) #linear kernel with scaling
###Output
_____no_output_____
###Markdown
 As we can see, the model with the RBF kernel fits better than the one with the Linear kernel. Let's try to create a model with the Polynomial kernel. SVR model with Polynomial kernel Fitting data to the SVR model with Polynomial kernel
###Code
#Let's name this model a bit differently to differenciate with previous model
svr_regressor_model_poly = SVR(kernel='poly').fit(X,Y)
###Output
C:\Users\PC\AppData\Local\Enthought\Canopy\edm\envs\User\lib\site-packages\sklearn\utils\validation.py:578: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
y = column_or_1d(y, warn=True)
###Markdown
Visualizing
###Code
X_grid = np.arange(min(X), max(X), 0.1)
X_grid = X_grid.reshape((len(X_grid), 1))
plt.scatter(X, Y, c='r', label="Values")
plt.plot(X_grid, svr_regressor_model_poly.predict(X_grid), c='b', label="SVR")
plt.title("SVR with Polynomial kernel")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Let's get the score and compare
###Code
svr_regressor_model_poly.score(X,Y) #Polynomial kernel with scaling
###Output
_____no_output_____
###Markdown
 We have created four models and computed scores for the three scaled ones. Let's summarize: 1. With the default (RBF) kernel: score = 0.60. 2. With the Linear kernel: score = 0.53. 3. With the Polynomial kernel: score = 0.51. Conclusion: the best model for our data was the model with the RBF (default) kernel and, obviously, for SVR models we need to scale (normalize, standardize) the data. Prediction with the best model Let's predict a value with the best model (RBF SVR). We selected the first Marketing Spend as x and want to predict the profit y, to check the prediction of the RBF SVR: x = 471784 (marketing spend) and y = 192261.83 (profit). After prediction, the value of y_new given by the RBF SVR is 0.2385; that is because of the scaling of the data, so we should apply the inverse of the scaling (inverse_transform). The predicted value y_new_inverse is 121531.592, while the real value for x = 471784.1 is y = 192261.83; but since we calculated the RBF score = 0.6, this prediction accuracy is acceptable. Now you can find the profit of any marketing spend with this accuracy! You can try other x values and find y_new_inverse
###Code
y_new=svr_regressor_rbf.predict(471784.10)
print(y_new)
y_new_inverse = scalery.inverse_transform(y_new)
y_new_inverse
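# Note (added sketch, not part of the original run): recent scikit-learn versions expect
# a 2-D array here, and since the model was trained on the scaled X, the input should
# arguably be scaled with scalerx first, for example:
# y_new = svr_regressor_rbf.predict(scalerx.transform([[471784.10]]))
# y_new_inverse = scalery.inverse_transform(y_new.reshape(-1, 1))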
###Output
[ 0.23856378]
###Markdown
Questions 1. Take R&D Spend as Independent variable(X)
###Code
# 2. Create a model with Linear kernel and fit the data (X, Y)
###Output
_____no_output_____ |
Runge-Kutta-Multiple_Coupled Variables.ipynb | ###Markdown
Define our coupled derivatives to integrate
###Code
import numpy as np
def dydx(x,y):
#Set derivatives
#our equation is d^2y/dx^2=-y
    #so dydx=z
#dzdx=-y
#set y=y[0]
    #set z=y[1]
#declare an array
y_derivs=np.zeros(2)
#set dydx=z
y_derivs[0]=y[1]
#set dzdx=-y
y_derivs[1]=-1*y[0]
#here we have to return an array
return y_derivs
###Output
_____no_output_____
###Markdown
Define the 4th order RK method
###Code
def rk4_mv_core(dydx,xi,yi,nv,h):
#declare k? arrays
k1=np.zeros(nv)
k2=np.zeros(nv)
k3=np.zeros(nv)
k4=np.zeros(nv)
#define x at 1/2 step
x_ipoh=xi+.5*h
#define x at 1 step
x_ipo=xi+h
#declare a temp y array
y_temp=np.zeros(nv)
#get k1 values
y_derivs=dydx(xi,yi)
k1[:]=h*y_derivs[:]
#get k2 values
    y_temp[:]=yi[:]+.5*k1[:]
y_derivs=dydx(x_ipoh,y_temp)
k2[:]=h*y_derivs[:]
#get k3 values
    y_temp[:]=yi[:]+.5*k2[:]
    y_derivs=dydx(x_ipoh,y_temp)
    k3[:]=h*y_derivs[:]
#get k4 values
y_temp[:]=yi[:]+k3[:]
y_derivs=dydx(x_ipo,y_temp)
k4[:]=h*y_derivs[:]
#advance y by a step h
yipo=yi+(k1+2*k2+2*k3+k4)/6
return yipo
def rk4_mv_ad(dydx,x_i,y_i,nv,h,tol):
#define safety scale
SAFETY=.9
H_NEW_FAC=2.0
#set a maximum number of iterations
imax=10000
i=0
#create an error
Delta=np.full(nv,2*tol)
#remember the step
h_step=h
#adjust step
while(Delta.max()/tol>1.0):
#estimate our error by taking one step of size h v. two steps of size h/2
y_2=rk4_mv_core(dydx,x_i,y_i,nv,h_step)
y_1=rk4_mv_core(dydx,x_i,y_i,nv,.5*h_step)
y_11=rk4_mv_core(dydx,x_i+.5*h_step,y_1,nv,.5*h_step)
#compute an error
        Delta=np.fabs(y_2-y_11)
#if the error is too large, take a smaller step
if (Delta.max()>tol):
#our error is too large, take a smaller step
            h_step*=SAFETY*(Delta.max()/tol)**(-0.25)
#check iteration
if(i>=imax):
raise StopIteration("Ending after i=",i)
i+=1
#next time, try to take a bigger step
    h_new=np.fmin(h_step*(Delta.max()/tol)**(-0.9),h_step*H_NEW_FAC)
return y_2,h_new,h_step
def rk4_mv(dfdx, a,b,y_a,tol):
#define starting step
xi=a
yi=y_a.copy()
#an initial step size==make very small
h=1.0e-4*(b-a)
imax=10000
i=0
nv=len(y_a)
#set initial conditions
x=np.full(1,a)
y=np.full((1,nv),y_a)
flag=1
    while(flag):
        #take an adaptive step
        yi_new,h_new,h_step=rk4_mv_ad(dfdx,xi,yi,nv,h,tol)
        h=h_new
        #prevent an overshoot past b
        if(xi+h_step>b):
            h=b-xi
            #recalculate y_i+1 with the clipped step
            yi_new,h_new,h_step=rk4_mv_ad(dfdx,xi,yi,nv,h,tol)
            flag=0
        #update values
        xi+=h_step
        yi[:]=yi_new[:]
        #add the step to the arrays
        x=np.append(x,xi)
        y_new=np.zeros((len(x),nv))
        y_new[0:len(x)-1,:]=y
        y_new[-1,:]=yi[:]
        del y
        y=y_new
        #guard against runaway iteration
        if(i>=imax):
            raise StopIteration("Ending after i=",i)
        i+=1
        #stop once we have reached b
        if(xi==b):
            flag=0
    #return the arrays of steps and solutions
    return x,y
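#----------------------------------------------------------------
# Minimal usage sketch (added illustration, not part of the original
# notebook): integrate y'' = -y from x = 0 to x = 2*pi with the
# assumed initial conditions y(0) = 0 and dy/dx(0) = 1.
#----------------------------------------------------------------
y_0 = np.zeros(2)
y_0[0] = 0.0    # y(0)
y_0[1] = 1.0    # dy/dx(0)
x_arr, y_arr = rk4_mv(dydx, 0.0, 2.0*np.pi, y_0, 1.0e-6)
# x_arr holds the adaptive steps; y_arr[:,0] approximates sin(x) and y_arr[:,1] approximates cos(x)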
###Output
_____no_output_____ |
Santander-Satisf_Clientes.ipynb | ###Markdown
 Predicting the Satisfaction Level of Santander Customers Éverton Bin July, 2020 Index Introduction Loading the Data Descriptive Analysis Data Pre-Processing Removing Neutral Variables Data Normalization Dimensionality Reduction - PCA Data Balancing - SMOTE Predictive Model Creating and Training the Predictive Model Evaluating the Predictive Model Conclusion 1- Introduction Identifying, from the start, whether or not a customer is satisfied with their relationship with a company allows that company to take proactive measures to improve customer satisfaction before the customer leaves. Santander, by promoting a Kaggle competition aimed at making this early identification possible, believes that unsatisfied customers tend to switch banks and rarely voice this dissatisfaction before making the switch. This project aims to build a predictive model capable of predicting the satisfaction of Banco Santander's customers, using data made available on the Kaggle platform, which can be accessed here. The reference file for this study has 370 predictor variables (features) and a target variable that indicates 0 for a satisfied customer and 1 for an unsatisfied customer. Unfortunately, the data dictionary was not provided, making the predictor variables anonymous and therefore preventing us from understanding exactly what information each variable represents. For this reason, a deeper data analysis, which would make it possible to extract information and insights from the data provided, will not be carried out. 2- Loading the Data
###Code
# Packages used:
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix, plot_confusion_matrix
from xgboost import XGBClassifier
# Reading the data file:
cust_sat = pd.read_csv('train.csv')
cust_sat.head()
###Output
_____no_output_____
###Markdown
3- Descriptive Analysis We will only carry out a descriptive analysis of the data, looking at minimum, maximum and mean values, possible outliers and their relationship with the target variable. As mentioned before, a deeper exploratory analysis would only be possible if we had more information about what the available numerical data represent. First, let us check whether the observations of the satisfied and unsatisfied customer classes (0 and 1) are balanced, because, if they are not, the classes need to be balanced to keep the model from learning a lot about one category and very little about the other.
###Code
# Counting the records in each class:
cust_sat[['ID', 'TARGET']].groupby(['TARGET']).count()
###Output
_____no_output_____
###Markdown
We can see that the classes are imbalanced, which makes it necessary to apply some statistical technique that balances the two classes and avoids problems in the model to be built. In this case, we will use the SMOTE technique, which will be applied just before training the predictive model. From now on, let us look at the statistics of each variable (number of observations, mean, standard deviation, minimum, maximum and quartiles). Since this problem involves a large number of variables, we will highlight a few subsets that can represent the variability observed in the data.
###Code
cust_sat.iloc[:, 0:7].describe()
cust_sat.iloc[:, 90:100].describe()
cust_sat.iloc[:, 171:181].describe()
cust_sat.iloc[:, 241:247].describe()
cust_sat.iloc[:, 330:336].describe()
cust_sat.iloc[:, 365:].describe()
###Output
_____no_output_____
###Markdown
Except for the "ID" and "TARGET" columns, whose numbers represent, respectively, the customer identification key and the satisfied/unsatisfied customer category, the remaining variables hold continuous numerical values whose magnitudes vary widely (tens, hundreds, thousands, etc.). Because of that, we will later normalize the data so that all variables are represented on the same scale. As a next step, we will use ranges similar to the subsets above to create a correlation matrix between the selected variables and also between them and the target variable. This way, we can observe how important each attribute is for classifying customer satisfaction and then decide how the most important variables will be selected. For this step, a function will be created that plots the correlation matrix for the chosen subset.
###Code
# Creating a function that builds the correlation matrix:
def subset_corr(start_ind, final_ind, df = cust_sat):
""" Cria um plot indicando a correlaรงรฃo entre as variรกveis do subset e a variรกvel target
Args:
start_ind: nรบmero inteiro, iniciando em 1 - zero รฉ a variรกvel 'ID';
final_ind: nรบmero inteiro, no mรกximo 370.
"""
targ = df['TARGET']
sub_df = df.iloc[:, start_ind:final_ind]
sub_df = pd.concat([sub_df, targ], axis = 1)
subset_corr = sub_df.corr()
fig, ax = plt.subplots(nrows = 1, ncols = 1, figsize = (15, 12))
corr_plot = sns.heatmap(subset_corr, annot = True, fmt = '.3g', vmin = -1, vmax = 1, center = 0,
cmap = 'RdYlBu', square = True)
ax.set_title('Matriz de Correlação entre Variáveis')
fig.show()
subset_corr(1, 11)
subset_corr(90, 100)
subset_corr(170, 180)
subset_corr(240, 250)
subset_corr(330, 340)
###Output
_____no_output_____
###Markdown
Overall, even though the matrices above show quite different correlation values with the target variable, we can say that the large majority fall within the same range, making it hard to single out variables that stand out with a higher correlation, whether positive or negative. It is also possible to find many variables that are strongly correlated with each other, which can be a problem for the model if they are not independent of one another. We can also notice variables, such as imp_trasp_var17_out_hace3, that hold the value zero in every record. Based on these observations, we will drop the variables whose minimum and maximum values are equal, and all remaining variables will be used to train the model, although, given the large number of attributes, we will apply dimensionality reduction with PCA. 4- Data Pre-Processing From this point on, we will prepare the data, addressing the issues observed earlier in the descriptive analysis of the dataset. We will remove the variables that have no influence on the results, put the data on a standard scale so that dimensionality reduction can later be applied more effectively, and then balance the classes for model training. 4.1- Removing Neutral Variables Through the descriptive analysis of the data, as well as the evaluation of the correlations, we identified that the dataset has variables with a single value across all recorded observations. We can therefore conclude that such a variable cannot help distinguish the classes we want to predict, so these variables will be removed from the dataset, giving us a first reduction in the number of variables.
###Code
# Loop that stores in a list the indices of the columns whose minimum and maximum values are equal:
ind_list = []
for i in range(len(cust_sat.columns)):
if cust_sat.iloc[:, i].min() == cust_sat.iloc[:, i].max():
ind_list.append(i)
# Dropping the columns that correspond to the indices saved in ind_list:
cust_sat = cust_sat.drop(cust_sat.iloc[:, ind_list], axis = 1)
cust_sat.head()
###Output
_____no_output_____
###Markdown
With this procedure, the original dataset, which initially had 371 columns, now has 337 columns, meaning 34 columns were removed in this step. 4.2- Data Normalization The large number of available variables also comes with a wide variety of numerical scales, which can be a problem for some learning algorithms. Even though for some algorithms the scale of the data does not affect the result, we want to apply dimensionality reduction afterwards (in this case with the PCA algorithm), that is, we want to hand the model a smaller number of variables. With the data standardized, the definition of the principal components by PCA can be more effective. Before normalizing the data, let us split it into 'X' and 'Y' - independent variables and dependent variable:
###Code
# Creating the 'X' and 'Y' variables from the current dataset:
X = cust_sat.iloc[:, 1:-1]
Y = cust_sat.iloc[:, -1]
# Scaling the data with MinMaxScaler():
scaler = MinMaxScaler(feature_range = (0,1))
X_norm = scaler.fit_transform(X)
###Output
_____no_output_____
###Markdown
4.3- Dimensionality Reduction - PCA With the data already standardized, we will reduce the remaining 335 predictor variables to 30 principal components by applying the Principal Component Analysis algorithm:
###Code
pca = PCA(n_components = 30)
X_comp = pca.fit_transform(X_norm)
###Output
_____no_output_____
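###Markdown
To sanity-check that choice, a quick look at how much of the total variance the 30 components retain (a minimal illustrative sketch; the 30-component choice itself is a judgement call, not derived from this check):
###Code
# Sketch: total share of variance kept by the 30 principal components
print(pca.explained_variance_ratio_.sum())
###Output
_____no_output_____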
###Markdown
4.4- Data Balancing - SMOTE As seen earlier, classes 0 and 1 are imbalanced in the provided dataset, meaning we have a lot of information about satisfied customers and relatively little about unsatisfied ones. To mitigate this problem and keep the model from learning much more about one of the classes, we will use the SMOTE function. We will only apply this technique to the data used to train the model, so before doing it we must split the data into training and test sets:
###Code
# Splitting the data into training and test sets:
X_treino, X_teste, Y_treino, Y_teste = train_test_split(X_comp, Y, test_size = 0.3, random_state = 101)
# Applying SMOTE to the training data:
sm = SMOTE(random_state = 102)
X_treino_bal, Y_treino_bal = sm.fit_sample(X_treino, Y_treino)
###Output
_____no_output_____
###Markdown
This completes the data pre-processing stage and we are ready to build the predictive model. 5- Predictive Model In this stage, we will use the learning algorithm called XGBoost. First the model will be created and then fed with the training data. With the model trained, it will be possible to make predictions on the test set so that, finally, we can evaluate the model's performance in predicting the desired classes. For this study, we want a minimum accuracy of 70%. 5.1- Building and Training the Predictive Model For the purposes of this study, we will not dive into the many XGBoost parameters that would allow fine-tuning to improve the model's performance. We will therefore apply it with the algorithm's own default parameters:
###Code
# Creating and training the model:
modelo = XGBClassifier()
modelo.fit(X_treino_bal, Y_treino_bal)
###Output
_____no_output_____
###Markdown
With the model trained, we can apply the test data so that the model makes its predictions:
###Code
# Making the predictions:
Y_pred = modelo.predict(X_teste)
###Output
_____no_output_____
###Markdown
5.2- Evaluating the Predictive Model To evaluate the trained model, we need to compare the predictions it makes on the test data with the actual values in the dataset. In this study, the evaluation metric used will be accuracy:
###Code
# Computing the model's accuracy:
accuracy = accuracy_score(Y_teste, Y_pred)
print('A acurácia do modelo de classificação XGBoost para os dados de teste é de', round(accuracy*100), '%.')
###Output
A acurácia do modelo de classificação XGBoost para os dados de teste é de 80.0 %.
###Markdown
An accuracy of 80% is reasonably good, above the minimum of 70% we set. However, to better understand the model's behaviour, let us create some confusion matrices that allow us to see how effective the model is for each of the two classes studied.
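As a complementary check (an illustrative sketch, not part of the original evaluation), per-class precision and recall make the effect of the class imbalance on this accuracy figure explicit:
###Code
# Sketch: per-class precision/recall/F1 alongside the overall accuracy
from sklearn.metrics import classification_report
print(classification_report(Y_teste, Y_pred))
###Output
_____no_output_____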
###Code
# Creating the first confusion matrix:
confusion_matrix(Y_teste, Y_pred)
###Output
_____no_output_____
###Markdown
To view the confusion matrix in a friendlier way, we will use Pandas' crosstab function and then plot it with seaborn, which gives an even clearer picture of the model's effectiveness:
###Code
# Creating the confusion matrix with Pandas:
pd.crosstab(Y_teste, Y_pred)
# Plotting the 'normalized' confusion matrix with Seaborn:
class_names = [0, 1]
disp = plot_confusion_matrix(modelo, X_teste, Y_teste,
display_labels = class_names,
cmap=plt.cm.PuBu,
normalize = 'true')
disp.ax_.set_title('Matriz de Confusão')
plt.show()
###Output
_____no_output_____ |
20200711/lifeCycleModel-housing.ipynb | ###Markdown
Calculate the policy of the agent. State variable: x = [w, n_lag, M, g, s, e]; action variable: a = [c, b, k, ih, q] (consumption, bond, stock, home improvement, and the share of the base house kept rather than rented out); both are numpy arrays.
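For reference, the value-function routines below implement a standard Bellman recursion (this summary restates the code rather than the original notes): $V_t(x) = \max_{a}\; u(c, h, q) + \beta\big[\, P_{t}\,\mathbb{E}\,V_{t+1}(x') + (1-P_{t})\,\mathbb{E}\,u_B(TB')\,\big]$, where $P_t$ is the survival probability `Pa[t]`, $u_B$ is the bequest utility, and $TB'$ is the bequest: housing value net of the mortgage plus liquid wealth and the 401k balance.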
###Code
%pylab inline
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
from scipy.interpolate import RegularGridInterpolator as RS
from multiprocessing import Pool
from functools import partial
import warnings
from scipy import optimize
warnings.filterwarnings("ignore")
np.printoptions(precision=2)
# time line
T_min = 0
T_max = 70
T_R = 45
beta = 1/(1+0.02)
# All the money amount are denoted in thousand dollars
earningShock = [0.8,1.2]
# Define transition matrix of economical states
# GOOD -> GOOD 0.8, BAD -> BAD 0.6
Ps = np.array([[0.6, 0.4],[0.2, 0.8]])
# current risk free interest rate
r_f = np.array([0.01 ,0.03])
# stock return depends on current and future econ states
r_m = np.array([[-0.2, 0.15],[-0.15, 0.2]])
# expected return on stock market
# r_bar = 0.0667
# probability of survival
Pa = np.load("prob.npy")
# probability of employment transition
Pe = np.array([[[[0.3, 0.7], [0.1, 0.9]], [[0.25, 0.75], [0.05, 0.95]]],
[[[0.25, 0.75], [0.05, 0.95]], [[0.2, 0.8], [0.01, 0.99]]]])
# deterministic income
detEarning = np.load("detEarning.npy")
# tax rate
tau_L = 0.2
tau_R = 0.1
# minimum consumption
r_bar = 0.02
c_bar = 2
# Define constant variable for housing part, h takes value larger than H
H = 20
rh = 0.02
m = 2
chi = 0.3
po = 10
# fixed cost of modifying the home
c_h = 2
delta = 0.05
# renting price
pr = 2
#Define the new utility function as a function of consumption good cost and renting
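# As implemented below: u(c,h,q) = (ch^(1-gamma) - 1)/(1-gamma), with
#   ch = max(c - c_bar, 0)^alpha * ((1+kappa)*h)^(1-alpha)            when q = 1 (no renting), and
#   ch = max(c - c_bar, 0)^alpha * ((1-kappa)*(h-(1-q)*H))^(1-alpha)  when a share (1-q) of the base house H is rented out.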
def u(c, h, q):
alpha = 0.88
kappa = 0.3
gamma = 2
if q == 1:
ch = np.float_power(max(c-c_bar,0),alpha) * np.float_power((1+kappa)*h,1-alpha)
else:
ch = np.float_power(max(c-c_bar,0),alpha) * np.float_power((1-kappa)*(h-(1-q)*H),1-alpha)
return (np.float_power(ch,1-gamma) - 1)/(1 - gamma)
#Define the bequeath function, which is a function of wealth
def ub(TB):
B = 2
gamma = 2
return B*(np.float_power(max(TB+1,0),1-gamma) - 1)/(1 - gamma)
uB = np.vectorize(ub)
#Define the earning function as state and
def y(t, x):
w, n, M, g, s, e = x
if t <= T_R:
# social welfare is 5k when people lost their jobs
return detEarning[t] * earningShock[int(s)] * e + (1-e)*5
else:
return detEarning[t]
# Define the transition of state (test)
def transition(x, a, t):
'''
Input: x current state: (w, n_lag, M, g, s, e)
a action taken: (c, b, k, ih, q)
Output: the next possible states with corresponding probabilities
'''
c, b, k, ih, q = a
w, n, M, g, s, e = x
x_next = []
prob_next = []
# calculate m_next, g_next
M_next = M*(1+rh) - m
g_next = (1-delta)*g + ih
# variables needed
N = np.sum(Pa[t:])
discounting = ((1+r_bar)**N - 1)/(r_bar*(1+r_bar)**N)
r_bond = r_f[int(s)]
# calcualte n_next
if t < T_R:
# before retirement agents put 5% of income to 401k
if e == 1:
n_next = (n+0.05*y(t,x))*(1+r_bar)
else:
n_next = n*(1+r_bar)
# for potential s_next, e_next and A_next
for s_next in [0, 1]:
r_stock = r_m[int(s), s_next]
w_next = b*(1+r_bond) + k*(1+r_stock)
for e_next in [0,1]:
x_next.append([w_next, n_next, M_next, g_next, s_next, e_next])
prob_next.append(Ps[int(s),s_next] * Pe[int(s),s_next,int(e),e_next])
else:
# after retirement agents withdraw cash from 401k
n_next = n*(1+r_bar)-n/discounting
e_next = 0
# for potential s_next and A_next
for s_next in [0, 1]:
r_stock = r_m[int(s), s_next]
w_next = b*(1+r_bond) + k*(1+r_stock)
x_next.append([w_next, n_next, M_next, g_next, s_next, e_next])
prob_next.append(Ps[int(s), s_next])
return np.array(x_next), np.array(prob_next)
# Value function is a function of state and time t, not renting out
def Vu(x, t, Vmodel):
'''
Input: x current state: (w, n_lag, M, g, s, e)
find the optimal action taken: (c, b, k, ih, q)
'''
# Define the objective function as a function of action
w, n, M, g, s, e = x
N = np.sum(Pa[t:])
discounting = ((1+0.02)**N - 1)/(r_bar*(1+0.02)**N)
n_discount = n/discounting
ytx = y(t, x)
def obj(thetas):
theta1, theta2, theta3 = thetas
if t < T_R:
if e == 1:
bk = ((1-tau_L)*(ytx * 0.95) + w - m) * theta1
ch = ((1-tau_L)*(ytx * 0.95) + w - m) *(1-theta1)
else:
bk = ((1-tau_L)*ytx + w - m) * theta1
ch = ((1-tau_L)*ytx + w - m) * (1-theta1)
else:
bk = ((1-tau_R)*ytx + w + n_discount - m) * theta1
ch = ((1-tau_R)*ytx + w + n_discount - m) * (1-theta1)
b = bk * theta2
k = bk * (1-theta2)
c = ch * theta3
ih = ch * (1-theta3)/((1+chi)*po)
# fixed cost c = ch - (1+chi)*po*ih - c_h*(ih>0)
h = (1-delta) * g + ih + H
a = (c,b,k,ih,1)
x_next, prob_next = transition(x, a, t)
ww = x_next[:,0]
nn = x_next[:,1]
MM = x_next[:,2]
gg = x_next[:,3]
TB = (H+(1-chi)*(1-delta)*gg)*po - MM + ww + nn
V_tilda = []
for xx in x_next:
V_tilda.append(double(Vmodel[int(xx[4])][int(xx[5])](xx[:4])))
return -(u(c, h, 1) + beta *(Pa[t] * np.dot(V_tilda, prob_next) + (1-Pa[t]) * np.dot(uB(TB),prob_next)))
res = optimize.minimize(obj, [0.5, 0.5, 0.5], method="SLSQP",bounds = ((0, 1),(0, 1),(0, 1)), tol = 1e-9)
max_val = (-res.fun)
theta1_m, theta2_m, theta3_m = res.x
if t < T_R:
if e == 1:
bk_m = ((1-tau_L)*(ytx * 0.95) + w - m) * theta1_m
ch_m = ((1-tau_L)*(ytx * 0.95) + w - m) * (1-theta1_m)
else:
bk_m = ((1-tau_L)*ytx + w - m) * theta1_m
ch_m = ((1-tau_L)*ytx + w - m) * (1-theta1_m)
else:
bk_m = ((1-tau_R)*ytx + w + n_discount - m) * theta1_m
ch_m = ((1-tau_R)*ytx + w + n_discount - m) * (1-theta1_m)
b_m = bk_m * theta2_m
k_m = bk_m * (1-theta2_m)
c_m = ch_m * theta3_m
ih_m = ch_m * (1-theta3_m)/((1+chi)*po)
h_m = (1-delta) * g + ih_m + H
return np.array([max_val, [c_m, b_m, k_m, h_m]])
# Value function is a function of state and time t, renting out
def Vd(x, t, Vmodel):
'''
Input: x current state: (w, n_lag, M, g, s, e)
find the optimal action taken: (c, b, k, ih, q)
'''
# Define the objective function as a function of action
w, n, M, g, s, e = x
N = np.sum(Pa[t:])
discounting = ((1+0.02)**N - 1)/(r_bar*(1+0.02)**N)
n_discount = n/discounting
ytx = y(t, x)
def obj(thetasq):
theta1, theta2, q = thetasq
if t < T_R:
if e == 1:
bk = ((1-tau_L)*(ytx * 0.95) + w - m + pr*(1-q)*H) * theta1
c = ((1-tau_L)*(ytx * 0.95) + w - m + pr*(1-q)*H) *(1-theta1)
else:
bk = ((1-tau_L)*ytx + w - m + pr*(1-q)*H) * theta1
c = ((1-tau_L)*ytx + w - m + pr*(1-q)*H) * (1-theta1)
else:
bk = ((1-tau_R)*ytx + w + n_discount - m + pr*(1-q)*H) * theta1
c = ((1-tau_R)*ytx + w + n_discount - m + pr*(1-q)*H) * (1-theta1)
b = bk * theta2
k = bk * (1-theta2)
h = (1-delta) * g + H
a = (c,b,k,0,q)
x_next, prob_next = transition(x, a, t)
ww = x_next[:,0]
nn = x_next[:,1]
MM = x_next[:,2]
gg = x_next[:,3]
TB = (H+(1-chi)*(1-delta)*gg)*po - MM + ww + nn
V_tilda = []
for xx in x_next:
V_tilda.append(double(Vmodel[int(xx[4])][int(xx[5])](xx[:4])))
return -(u(c, h, q) + beta *(Pa[t] * np.dot(V_tilda, prob_next) + (1-Pa[t]) * np.dot(uB(TB),prob_next)))
res = optimize.minimize(obj, [0.5, 0.5, 0.5], method="SLSQP",bounds = ((0, 1),(0, 1),(0, 1)), tol = 1e-9)
max_val = (-res.fun)
theta1_m, theta2_m, q_m = res.x
if t < T_R:
if e == 1:
bk_m = ((1-tau_L)*(ytx * 0.95) + w - m) * theta1_m
c_m = ((1-tau_L)*(ytx * 0.95) + w - m) * (1-theta1_m)
else:
bk_m = ((1-tau_L)*ytx + w - m) * theta1_m
c_m = ((1-tau_L)*ytx + w - m) * (1-theta1_m)
else:
bk_m = ((1-tau_R)*ytx + w + n_discount - m) * theta1_m
c_m = ((1-tau_R)*ytx + w + n_discount - m) * (1-theta1_m)
b_m = bk_m * theta2_m
k_m = bk_m * (1-theta2_m)
h_m = (1-delta) * g + H
return np.array([max_val, [c_m, b_m, k_m, h_m]])
def V(x, t, Vmodel):
V1 = Vu(x, t, Vmodel)
V2 = Vd(x, t, Vmodel)
if V1[0] >= V2[0]:
return V1
else:
return V2
# wealth discretization
w_grid_size = 10
w_lower = 5
w_upper = 1500
# 401k amount discretization
n_grid_size = 10
n_lower = 5
n_upper = 1500
# mortgage discretization
M_grid_size = 5
M_lower = 10
M_upper = 30
# improvement discretization
g_grid_size = 5
g_lower = 1
g_upper = 20
power = 2
def powspace(start, stop, power, num):
start = np.power(start, 1/float(power))
stop = np.power(stop, 1/float(power))
return np.power( np.linspace(start, stop, num=num), power)
# initialize the state discretization
xgrid = np.array([[w,n,M,g,s,e] for w in powspace(w_lower, w_upper, power, w_grid_size)
for n in powspace(n_lower, n_upper, power, n_grid_size)
for M in powspace(M_lower, M_upper, power, M_grid_size)
for g in powspace(g_lower, g_upper, power, g_grid_size)
for s in [0,1]
for e in [0,1]]).reshape((w_grid_size, n_grid_size, M_grid_size, g_grid_size,2,2,6))
Vgrid = np.zeros((w_grid_size, n_grid_size, M_grid_size, g_grid_size, 2, 2, T_max+1))
cgrid = np.zeros((w_grid_size, n_grid_size, M_grid_size, g_grid_size, 2, 2, T_max+1))
bgrid = np.zeros((w_grid_size, n_grid_size, M_grid_size, g_grid_size, 2, 2, T_max+1))
kgrid = np.zeros((w_grid_size, n_grid_size, M_grid_size, g_grid_size, 2, 2, T_max+1))
hgrid = np.zeros((w_grid_size, n_grid_size, M_grid_size, g_grid_size, 2, 2, T_max+1))
# w, n_lag, M, g, s, e
terminalV = np.array([x[0] + x[1] + (H+(1-chi)*(1-delta)*x[3])*po - x[2] for x in xgrid.reshape((w_grid_size * n_grid_size * M_grid_size * g_grid_size*2*2, 6))])
Vgrid[:,:,:,:,:,:, T_max] = uB(terminalV).reshape((w_grid_size, n_grid_size, M_grid_size, g_grid_size, 2, 2))
###Output
_____no_output_____
###Markdown
Backward Induction Part
###Code
%%time
ws = powspace(w_lower, w_upper, power, w_grid_size)
ns = powspace(n_lower, n_upper, power, n_grid_size)
Ms = powspace(M_lower, M_upper, power, M_grid_size)
gs = powspace(g_lower, g_upper, power, g_grid_size)
xs = xgrid.reshape((w_grid_size * n_grid_size * M_grid_size * g_grid_size*2*2, 6))
pool = Pool()
for t in range(T_max-1, 0, -1):
print(t)
cs = [[RS((ws, ns, Ms, gs), Vgrid[:,:,:,:,s,e,t+1], bounds_error=False, fill_value= None) for e in [0,1]] for s in [0,1]]
f = partial(V, t = t, Vmodel = cs)
results = np.array(pool.map(f, xs))
Vgrid[:,:,:,:,:,:,t] = results[:,0].reshape((w_grid_size, n_grid_size, M_grid_size, g_grid_size, 2, 2))
cgrid[:,:,:,:,:,:,t] = np.array([r[0] for r in results[:,1]]).reshape((w_grid_size, n_grid_size, M_grid_size, g_grid_size, 2, 2))
bgrid[:,:,:,:,:,:,t] = np.array([r[1] for r in results[:,1]]).reshape((w_grid_size, n_grid_size, M_grid_size, g_grid_size, 2, 2))
kgrid[:,:,:,:,:,:,t] = np.array([r[2] for r in results[:,1]]).reshape((w_grid_size, n_grid_size, M_grid_size, g_grid_size, 2, 2))
hgrid[:,:,:,:,:,:,t] = np.array([r[3] for r in results[:,1]]).reshape((w_grid_size, n_grid_size, M_grid_size, g_grid_size, 2, 2))
pool.close()
# np.save("Vgrid", Vgrid)
# np.save("cgrid", cgrid)
# np.save("bgrid", bgrid)
# np.save("kgrid", kgrid)
# np.save("hgrid", hgrid)
import numpy as np
# wealth discretization
w_grid_size = 10
w_lower = 5
w_upper = 1500
# 401k amount discretization
n_grid_size = 10
n_lower = 5
n_upper = 1500
# mortgage discretization
M_grid_size = 5
M_lower = 10
M_upper = 30
# improvement discretization
g_grid_size = 5
g_lower = 1
g_upper = 20
power = 2
def powspace(start, stop, power, num):
start = np.power(start, 1/float(power))
stop = np.power(stop, 1/float(power))
return np.power( np.linspace(start, stop, num=num), power)
ws = powspace(w_lower, w_upper, power, w_grid_size)
ns = powspace(n_lower, n_upper, power, n_grid_size)
Ms = powspace(M_lower, M_upper, power, M_grid_size)
gs = powspace(g_lower, g_upper, power, g_grid_size)
Vgrid = np.load("Vgrid.npy")
cgrid = np.load("cgrid.npy")
bgrid = np.load("bgrid.npy")
kgrid = np.load("kgrid.npy")
hgrid = np.load("hgrid.npy")
ws[5]
plt.figure(figsize = [12,8])
plt.plot(cgrid[5,5,3,3,1,1,:], label = "Consumption")
plt.plot(bgrid[5,5,3,3,1,1,:], label = "Bond")
plt.plot(kgrid[5,5,3,3,1,1,:], label = "Stock")
plt.plot(hgrid[5,5,3,3,1,1,:], label = "Housing")
plt.legend()
###Output
_____no_output_____
###Markdown
Simulation Part
###Code
import quantecon as qe
mc = qe.MarkovChain(Ps)
def action(t, x):
w,n,h_lag,s,e,A = x
if A == 1:
c = RS((ws, ns, hs), cgrid[:,:,:,s,e,A,t],bounds_error=False, fill_value= None)([w, n, h_lag])[0]
b = RS((ws, ns, hs), bgrid[:,:,:,s,e,A,t],bounds_error=False, fill_value= None)([w, n, h_lag])[0]
k = RS((ws, ns, hs), kgrid[:,:,:,s,e,A,t],bounds_error=False, fill_value= None)([w, n, h_lag])[0]
h = RS((ws, ns, hs), hgrid[:,:,:,s,e,A,t],bounds_error=False, fill_value= None)([w, n, h_lag])[0]
else:
c = 0
b = 0
k = 0
h = 0
return (c,b,k,h)
# Define the transition of state
def transition(x, a, t, s_next):
'''
Input: x current state: (w, n, s, e, A)
a action taken: (c, b, k)
Output: the next possible states with corresponding probabilities
'''
# unpack variable
c, b, k, h = a
w, n, h_lag, s, e, A = x
# variables used to collect possible states and probabilities
x_next = []
prob_next = []
# Agent is dead at the end of last period
if A == 0:
return np.array([0, 0, 0, s_next, 0, 0])
# Agent is alive
else:
# variables needed
N = np.sum(Pa[t:])
discounting = ((1+r_bar)**N - 1)/(r_bar*(1+r_bar)**N)
Pat = [1-Pa[t], Pa[t]]
r_bond = r_f[int(s)]
# calcualte n_next
if t < T_R:
# before retirement agents put 5% of income to 401k
if e == 1:
n_next = (n+0.05*y(t,x))*(1+r_bar)
else:
n_next = n*(1+r_bar)
# for potential s_next, e_next and A_next
r_stock = r_m[int(s), s_next]
w_next = b*(1+r_bond) + k*(1+r_stock)
for e_next in [0,1]:
for A_next in [0,1]:
x_next.append([w_next, n_next, h, s_next, e_next, A_next])
prob_next.append(Pat[A_next] * Pe[int(s),s_next,int(e),e_next])
else:
# after retirement agents withdraw cash from 401k
n_next = n*(1+r_bar)-n/discounting
e_next = 0
# for potential s_next and A_next
r_stock = r_m[int(s), s_next]
w_next = b*(1+r_bond) + k*(1+r_stock)
for A_next in [0,1]:
x_next.append([w_next, n_next, h, s_next, e_next, A_next])
prob_next.append(Pat[A_next])
return x_next[np.random.choice(len(prob_next), 1, p = prob_next)[0]]
numEcon = 500
sim = 1000
EconStates = [mc.simulate(ts_length=T_max - T_min, init=0) for _ in range(numEcon)]
# simulate an agent age 0 starting with wealth of 20 and 20 in rFund.
def simulateAgent(i):
# states
wealth = []
rFund = []
employ = []
live = []
Salary = []
# actions
Consumption = []
Bond = []
Stock = []
Housing = []
if np.random.rand() > 0.95:
x = [20, 0, 2, 0, 0, 1]
else:
x = [20, 0, 2, 0, 1, 1]
econState = EconStates[i//sim]
for t in range(len(econState)-1):
s = econState[t]
s_next = econState[t+1]
a = action(t, x)
c, b, k, h = a
w, n, h_lag, _, e, A = x
wealth.append(w)
rFund.append(n)
Consumption.append(c)
Bond.append(b)
Stock.append(k)
Housing.append(h)
Salary.append(y(t, x))
employ.append(e)
live.append(A)
x = transition(x, a, t, s_next)
# list of array
return np.array([wealth, rFund, Consumption, Bond, Stock, Housing, Salary, employ, live]).T
%%time
pool = Pool()
agents = pool.map(simulateAgent, list(range(sim*numEcon)))
pool.close()
np.save("agents", agents)
agents = np.load("agents.npy")
# wealth, rFund, Consumption, Bond, Stock, Housing, Salary, employ, live
def collect(attribute, agents):
names = ["wealth", "rFund", "Consumption", "Bond", "Stock", "Housing", "Salary", "employ", "live"]
index = names.index(attribute)
container = np.zeros((agents[0].shape[0],len(agents)))
for i in range(len(agents)):
container[:, i] = agents[i][:, index]
return container
wealth = collect("wealth",agents)
rFund = collect("rFund",agents)
Consumption = collect("Consumption",agents)
Bond = collect("Bond",agents)
Stock = collect("Stock",agents)
Housing = collect("Housing",agents)
Salary = collect("Salary",agents)
employ = collect("employ",agents)
live = collect("live",agents)
# Population during the entire simulation period
plt.plot(np.mean(live, axis = 1))
def quantileForPeopleWholive(attribute, quantiles = [0.25, 0.5, 0.75]):
qList = []
for i in range(69):
if len(np.where(live[i,:] == 1)[0]) == 0:
qList.append(np.array([0] * len(quantiles)))
else:
qList.append(np.quantile(attribute[i, np.where(live[i,:] == 1)], q = quantiles))
return np.array(qList)
def meanForPeopleWholive(attribute):
means = []
for i in range(69):
if len(np.where(live[i,:] == 1)[0]) == 0:
means.append(np.array([0]))
else:
means.append(np.mean(attribute[i, np.where(live[i,:] == 1)]))
return np.array(means)
plt.plot(meanForPeopleWholive(employ)[:45])
# plot the 0.25, 0.5, 0.75 quantiles of wealth
plt.plot(quantileForPeopleWholive(wealth))
# plot the 0.25, 0.5, 0.75 quantiles of wealth
plt.plot(quantileForPeopleWholive(rFund))
# plot the 0.25, 0.5, 0.75 quantiles of wealth
plt.plot(quantileForPeopleWholive(Consumption))
# plot the 0.25, 0.5, 0.75 quantiles of wealth
plt.plot(quantileForPeopleWholive(Bond))
# plot the 0.25, 0.5, 0.75 quantiles of wealth
plt.plot(quantileForPeopleWholive(Stock))
# plot the 0.25, 0.5, 0.75 quantiles of wealth
plt.plot(quantileForPeopleWholive(Salary))
# plot the 0.25, 0.5, 0.75 quantiles of wealth
plt.plot(quantileForPeopleWholive(Housing)[1:70])
# plot the 0.25, 0.5, 0.75 quantiles of wealth
plt.figure(figsize = [14,8])
# plt.plot(meanForPeopleWholive(wealth), label = "wealth")
# plt.plot(meanForPeopleWholive(rFund), label = "rFund")
# plt.plot(meanForPeopleWholive(Consumption), label = "Consumption")
# plt.plot(meanForPeopleWholive(Bond), label = "Bond")
# plt.plot(meanForPeopleWholive(Stock), label = "Stock")
# plt.plot(meanForPeopleWholive(Salary), label = "Salary")
plt.plot(meanForPeopleWholive(Housing), label = "Housing")
plt.legend()
# calculate fraction of consumption, stock investment, bond investment over (wealth + income)
plt.figure(figsize = [14,8])
plt.plot(meanForPeopleWholive(Consumption)[:65]/meanForPeopleWholive(wealth+Salary)[:65], label = "Consumption")
plt.plot(meanForPeopleWholive(Bond)[1:66]/meanForPeopleWholive(wealth+Salary)[1:66], label = "Bond")
plt.plot(meanForPeopleWholive(Stock)[1:66]/meanForPeopleWholive(wealth+Salary)[1:66], label = "Stock")
plt.plot(meanForPeopleWholive(Housing)[1:66]/meanForPeopleWholive(wealth+Salary)[1:66], label = "Housing")
plt.legend()
averageWealth = []
for i in range(5, 46, 5):
averageWealth.append(meanForPeopleWholive(wealth)[i])
medianWealth = []
for i in range(5, 46, 5):
medianWealth.append(quantileForPeopleWholive(wealth)[i][1])
median401k = []
for i in range(5, 46, 5):
median401k.append(quantileForPeopleWholive(rFund)[i][1])
meidianHousing = []
for i in range(5, 46, 5):
meidianHousing.append(quantileForPeopleWholive(Housing)[i][1])
for t in range(5, 46, 5):
# x = [w,n,s,e,A]
i = t//5-1
for e in [0,1]:
for s in [0,1]:
c,b,k,h = action(t, [medianWealth[i], median401k[i], meidianHousing[i],e,s,1])
print(k/(k+b), end = " ")
print()
###Output
0.5 0.5 0.4047964653557799 0.9999999999999998
0.5 0.5 0.5 1.0
0.4422946394913395 0.5 0.5 1.0
0.46796349499066014 0.5 0.5 1.0
0.5 0.500887563745675 0.499535305172671 1.0
0.48456331777762907 0.5 0.5 1.0
0.5 0.5233539585993341 0.5 1.0
0.5 0.5 0.5 1.0
7.112076983269119e-17 0.9999999999999999 1.773062365561816e-17 0.9999999999999996
|
content/homeworks/hw02/notebook/cs109b_hw2_submit.ipynb | ###Markdown
CS109B Data Science 2: Advanced Topics in Data Science Homework 2 - Clustering**Harvard University****Spring 2020****Instructors**: Mark Glickman, Pavlos Protopapas, & Chris Tanner Homework 2 is due February 20th
###Code
#PLEASE RUN THIS CELL
import requests
from IPython.core.display import HTML
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css").text
HTML(styles)
###Output
_____no_output_____
###Markdown
INSTRUCTIONS- This is individual homework - No collaboration/Groups- Problem 1 + Problem 2 = 75 points ; Problem 3 = 25 points- To submit your assignment, please follow the instructions on Canvas.- Please restart the kernel and run the entire notebook again before you submit. Please use the libraries below:
###Code
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import scipy.cluster.hierarchy as hac
from scipy.spatial.distance import pdist
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN
from sklearn.cluster import KMeans
from gap_statistic import OptimalK
#from sklearn.datasets.samples_generator import make_blobs
###Output
_____no_output_____
###Markdown
*Handy* Algorithms In this assignment, you will be working with data collected from a motion capture camera system. The system was used to record 14 different users performing 5 distinct hand postures with markers attached to a left-handed glove. A set of markers on the back of the glove was used to establish a local coordinate system for the hand, and 8 additional markers were attached to the thumb and fingers of the glove. A total of 24 features were collected based on observations from the camera system. Two other variables in the dataset were the ID of the user and the posture that the user made.These data have been preprocessed, including transformation to the local coordinate system of the record, removal of outliers, and removal of missing data.The dataset `postures_clean.csv` contains 38,943 rows and 26 columns. Each row corresponds to a single frame as captured by the camera system. The data are represented in the following manner:`Class (type: Integer). The hand posture of the given observation, with``1 = Fist (with thumb out)``2 = Stop (hand flat)``3 = Point1 (point with index finger)``4 = Point2 (point with index and middle fingers)``5 = Grab (fingers curled as if to grab)``User (type: Integer). The ID of the user that contributed the record.``X0, Y0, Z0, X1, Y1, Z1,..., X7, Y7, Z7 (type: Real). The x-coordinate, y-coordinate, and z-coordinate of the eight unlabeled marker positions.` Start by reading the dataset into a pandas data frame.
###Code
# your code here
%config IPCompleter.greedy=True
postures_clean_df = pd.read_csv('data/postures_clean.csv')
postures_clean_df.head()
###Output
_____no_output_____
###Markdown
Problem 1: Clustering with k-means (a) After appropriate pre-processing (but not scaling) run the k-means clustering algorithm, using the `KMeans` class from sklearn.cluster, with the number of clusters corresponding to the number of users, `n_init` of 46, and 109 as the random seed. Add the result as a new column called `Cluster14` to your data frame.
###Code
# your code here
clustering_df = postures_clean_df.drop(['Class', 'User'], axis=1)
num_users = postures_clean_df.User.nunique()
kmeans = KMeans(n_clusters=num_users, init='random',
n_init=46, random_state=109).fit(clustering_df)
postures_clean_df['Cluster14'] = clustering_df['Cluster14'] = kmeans.labels_
###Output
_____no_output_____
###Markdown
(b) Use the function below to visualize the results for k-means on a random sample of 2,000 observations (it will take the sample for you). Does 14 clusters seem to make sense?
###Code
from sklearn.decomposition import PCA
def plot_clusters(full_data, group_col, scaling_mode):
marker_types = [".", "v", "1", "^", "s", "p", "P", "3", "H", "<", "|", "_", "x", "*"]
marker_colors = np.concatenate([np.array(plt.cm.tab10.colors),np.array(plt.cm.Pastel1.colors)])
feature_columns = [colname for colname in list(full_data.columns) if colname not in {'Class','User','Cluster14','Cluster5'}]
features_only = full_data[feature_columns]
# make a scaled df if needed, (but don't scale cluster labels)
if scaling_mode == True:
scaler = StandardScaler()
scaled_features = pd.DataFrame(scaler.fit_transform(features_only), columns=feature_columns)
elif scaling_mode == False:
scaled_features = features_only
else:
raise ValueError("Unexpected value for scaling_mode")
# fit PCA to the whole scaled data
fitted_pca = PCA().fit(scaled_features)
# take a sample of the whole scaled data
scaled_sample = scaled_features.sample(2000, random_state=109)
# apply the PCA transform on the sample
pca_sample = pd.DataFrame(fitted_pca.transform(scaled_sample), columns = ["PCA{}".format(i) for i in range(len(scaled_sample.columns.values))])
pca_sample.index = scaled_sample.index ### New statement
# re-include a cluster label for the pca data
if 'Cluster14' in full_data.columns.values:
pca_sample['Cluster14'] = full_data.loc[pca_sample.index, "Cluster14"]
if 'Cluster5' in full_data.columns.values:
pca_sample['Cluster5'] = full_data.loc[pca_sample.index, "Cluster5"]
plt.figure(figsize=(11,8.5))
for i, (cluster_id, cur_df) in enumerate(pca_sample.groupby([group_col])):
pca1_scores = cur_df.iloc[:,0]
pca2_scores = cur_df.iloc[:,1]
plt.scatter(pca1_scores, pca2_scores, label=cluster_id, c=marker_colors[i].reshape(1,-1), marker=marker_types[i])
plt.xlabel("PC1 ({}%)".format(np.round(100*fitted_pca.explained_variance_ratio_[0],1)))
plt.ylabel("PC2 ({}%)".format(np.round(100*fitted_pca.explained_variance_ratio_[1],1)))
plt.legend()
plt.show()
plot_clusters(clustering_df,'Cluster14', True)
###Output
_____no_output_____
###Markdown
Given the lack of visual distinction betwee any of the cluster when visualizing the clusters and the top 2 primary components, 14 clusters does not seem to make sense. (c) Plot the silhouette scores using the function below, from lecture. Give it a 10% sample of the data to speed the visualization. How reasonable does the clustering seem based on this plot? How does it compare to the information in the plot above?
###Code
from sklearn.metrics import silhouette_samples, silhouette_score
import matplotlib.cm as cm
#modified code from http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html
def silplot(X, cluster_labels, clusterer, pointlabels=None):
n_clusters = clusterer.n_clusters
# Create a subplot with 1 row and 2 columns
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.set_size_inches(11,8.5)
# The 1st subplot is the silhouette plot
# The silhouette coefficient can range from -1, 1 but in this example all
# lie within [-0.1, 1]
ax1.set_xlim([-0.1, 1])
# The (n_clusters+1)*10 is for inserting blank space between silhouette
# plots of individual clusters, to demarcate them clearly.
ax1.set_ylim([0, len(X) + (n_clusters + 1) * 10])
# The silhouette_score gives the average value for all the samples.
# This gives a perspective into the density and separation of the formed
# clusters
silhouette_avg = silhouette_score(X, cluster_labels)
print("For n_clusters = ", n_clusters,
", the average silhouette_score is ", silhouette_avg,".",sep="")
# Compute the silhouette scores for each sample
sample_silhouette_values = silhouette_samples(X, cluster_labels)
y_lower = 10
for i in range(0,n_clusters+1):
# Aggregate the silhouette scores for samples belonging to
# cluster i, and sort them
ith_cluster_silhouette_values = \
sample_silhouette_values[cluster_labels == i]
ith_cluster_silhouette_values.sort()
size_cluster_i = ith_cluster_silhouette_values.shape[0]
y_upper = y_lower + size_cluster_i
color = cm.nipy_spectral(float(i) / n_clusters)
ax1.fill_betweenx(np.arange(y_lower, y_upper),
0, ith_cluster_silhouette_values,
facecolor=color, edgecolor=color, alpha=0.7)
# Label the silhouette plots with their cluster numbers at the middle
ax1.text(-0.05, y_lower + 0.5 * size_cluster_i, str(i))
# Compute the new y_lower for next plot
y_lower = y_upper + 10 # 10 for the 0 samples
ax1.set_title("The silhouette plot for the various clusters.")
ax1.set_xlabel("The silhouette coefficient values")
ax1.set_ylabel("Cluster label")
# The vertical line for average silhouette score of all the values
ax1.axvline(x=silhouette_avg, color="red", linestyle="--")
ax1.set_yticks([]) # Clear the yaxis labels / ticks
ax1.set_xticks([-0.1, 0, 0.2, 0.4, 0.6, 0.8, 1])
# 2nd Plot showing the actual clusters formed
colors = cm.nipy_spectral(cluster_labels.astype(float) / n_clusters)
ax2.scatter(X[:, 0], X[:, 1], marker='.', s=200, lw=0, alpha=0.7,
c=colors, edgecolor='k')
xs = X[:, 0]
ys = X[:, 1]
if pointlabels is not None:
for i in range(len(xs)):
plt.text(xs[i],ys[i],pointlabels[i])
# Labeling the clusters
centers = clusterer.cluster_centers_
# Draw white circles at cluster centers
ax2.scatter(centers[:, 0], centers[:, 1], marker='o',
c="white", alpha=1, s=200, edgecolor='k')
for i, c in enumerate(centers):
ax2.scatter(c[0], c[1], marker='$%d$' % int(i), alpha=1,
s=50, edgecolor='k')
ax2.set_title("The visualization of the clustered data.")
ax2.set_xlabel("Feature space for the 1st feature")
ax2.set_ylabel("Feature space for the 2nd feature")
plt.suptitle(("Silhouette analysis for KMeans clustering on sample data "
"with n_clusters = %d" % n_clusters),
fontsize=14, fontweight='bold')
# your code here
sil_df = clustering_df.sample(frac=0.1)
silplot(sil_df.drop('Cluster14', axis=1).values, sil_df.Cluster14, kmeans)
###Output
For n_clusters = 14, the average silhouette_score is 0.06743615310066292.
###Markdown
Again, the clustering is not great. In class we learned that clusters with a silhouette score around 1 are well clustered, and clusters with a score around 0 lie between two clusters. My average silhouette score of 0.067 is VERY close to 0, indicating that my clustering was not effective. This plot does show that there are a couple of members of cluster 13 that do seem to be well clustered, which was not easy to interpret from the prior clustering chart. (d) Repeat all of the above steps, but attempting to group by posture rather than by user. That is : (i) Run the k-means algorithm with 5 centroids instead of 14, creating a variable named `Cluster5` and adding it to the dataset. (ii) Visualize the results for k-means. Does 5 clusters seem to make sense from this plot?(iii) Plot the silhouette scores on a 10% sample of the data. How reasonable does the clustering seem based on this plot?
###Code
# your code here
clustering_df = postures_clean_df.drop(['Class', 'User'], axis=1)
num_classes = postures_clean_df.Class.nunique()
kmeans = KMeans(n_clusters=num_classes, init='random',
n_init=46, random_state=109).fit(clustering_df)
clustering_df['Cluster5'] = kmeans.labels_
plot_clusters(clustering_df,'Cluster5', True)
###Output
_____no_output_____
###Markdown
Even with 5 clusters, I am not seeing much visual distinction between any of the 5 classes. It probably should be noted that the first 2 principal components do not explain that much of the variance in my data, which may affect my ability to visualize cluster distinctions. However, this still seems pretty bad.
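A minimal sketch to quantify that claim (it re-fits PCA on a standardized copy of the same feature columns used above):
###Code
# Sketch: how much variance do the first two principal components actually capture?
scaled_check = StandardScaler().fit_transform(postures_clean_df.drop(['Class', 'User', 'Cluster14'], axis=1))
pca_check = PCA(n_components=2).fit(scaled_check)
print(pca_check.explained_variance_ratio_)
###Output
_____no_output_____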
###Code
sil_df = clustering_df.sample(frac=0.1)
silplot(sil_df.drop('Cluster5', axis=1).values, sil_df.Cluster5, kmeans)
###Output
For n_clusters = 5, the average silhouette_score is 0.07293198828521222.
###Markdown
Again, a average silhouette score of 0.072 is pretty bad and doesn't indicate strong distinctions between the clusters. I'd thus argue that 5 clusters still doesn't make a ton of sense. (e) What do the results suggest? Does this make sense in the context of what we know about the problem? The results suggest that there aren't a lot of similarities in how given users position their hands when making different hand postures, and moreover, that there aren't a lot of similarities in the ways that different users make the same posture. This surprises me, as I'd assume there are number of distinct ways people make the same posture. Perhaps by choosing the best k for k-means via cross-validation, we would find that, in fact, if we split each posture into a couple distinct ways of showing it, visible and distinct clusters will appear. For now, though, this data seems hard to cluster. Problem 2: Other Ks In the previous problem, we examined the results of running k-means with 5 and 14 centroids on the postures data. In this problem, we will investigate a broader range of possible cluster sizes, with a borader range of metrics. **For all of these questions, you should work with a sample of 2,000 data points drawn with `pd.sample` and a random seed of 109.** (a) Use the elbow method to evaluate the best choice of the number of clusters, plotting the total within-cluster variation against the number of clusters, for k-means clustering with $k \in \{1,2,...,15\}.$
###Code
clustering_df = postures_clean_df.drop(['Class', 'User', 'Cluster14'], axis=1).sample(n=2000,random_state=109)
wss = []
for i in range(1,15):
fitx = KMeans(n_clusters=i, init='random', n_init=5, random_state=109).fit(clustering_df)
wss.append(fitx.inertia_)
plt.figure(figsize=(11,8.5))
plt.plot(range(1,15), wss, 'bx-')
plt.xlabel('Number of clusters $k$')
plt.ylabel('Inertia')
plt.title('The Elbow Method showing the optimal $k$')
plt.show()
###Output
_____no_output_____
###Markdown
(b) Use the average silhouette to evaluate the choice of the number of clusters for k-means clustering with $k \in \{1,2,...,15\}$. Plot the results.
###Code
from sklearn.metrics import silhouette_score
scores = [0]
for i in range(2,15):
fitx = KMeans(n_clusters=i, init='random', n_init=5, random_state=109).fit(clustering_df)
score = silhouette_score(clustering_df, fitx.labels_)
scores.append(score)
plt.figure(figsize=(11,8.5))
plt.plot(range(1,15), np.array(scores), 'bx-')
plt.xlabel('Number of clusters $k$')
plt.ylabel('Average Silhouette')
plt.title('Silhouette Scores for varying $k$ clusters')
plt.show()
###Output
_____no_output_____
###Markdown
(c) Use the gap statistic to evaluate the choice of the number of clusters for k-means clustering with $k \in \{1,2,..,15\}$. Plot the results.
###Code
from gap_statistic import OptimalK
from sklearn.datasets.samples_generator import make_blobs
gs_obj = OptimalK(n_jobs=1)
n_clusters = gs_obj(clustering_df.values, n_refs=50,
cluster_array=np.arange(1, 15))
print('Optimal clusters: ', n_clusters)
gs_obj.plot_results()
###Output
/Users/Simon/.conda/envs/cs109b/lib/python3.7/site-packages/sklearn/utils/deprecation.py:144: FutureWarning: The sklearn.datasets.samples_generator module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.datasets. Anything that cannot be imported from sklearn.datasets is now part of the private API.
warnings.warn(message, FutureWarning)
###Markdown
(d) After analyzing the plots produced by all three of these measures, discuss the number of k-means clusters that you think is the best fit for this dataset. Defend your answer with evidence from the previous parts of this question, the three graphs produced here, and what you surmise about this dataset. While the gap statistic does not support this, I'd argue that the avg. sil score plot and elbow plot suggest that 2 clusters is the ideal number. At 2, the elbow plot has the closest thing to a "knee", though it is hardly as pronounced as the example in class. Similarly, the avg. sil score is the best at 2, followed by 4 and 12. Overall, it seems as though this data does not have clear clusters in it, and I am thus skeptical about all of my conclusions. Perhaps the data collection was flawed? I've visualized the result of 2 K-means clusters below and the distinction is pretty clear between the two groups, so I'll go with that.
###Code
plotting_df = postures_clean_df.drop(['Class', 'User'], axis=1)
kmeans = KMeans(n_clusters=2, init='random',
n_init=46, random_state=109).fit(plotting_df)
plotting_df['Cluster5'] = kmeans.labels_
plot_clusters(plotting_df,'Cluster5', True)
###Output
_____no_output_____
###Markdown
Problem 3: Alternative Algorithms (e) Run DBSCAN on the data. How many clusters are found, and how well does this clustering perform on e.g. silhouette score, excluding the points not assigned to any cluster? *Note*: Do not use a sample of the data. Running the algorithm may take up to 5-10 minutes.
###Code
#your code here
from sklearn.neighbors import NearestNeighbors
dbscan_df = postures_clean_df.drop(['Class', 'User', 'Cluster14'], axis=1)
# x-axis is each individual data point, numbered by an artificial index
# y-axis is the distance to its 2nd closest neighbor
def plot_epsilon(df, min_samples):
fitted_neigbors = NearestNeighbors(n_neighbors=min_samples).fit(df)
distances, indices = fitted_neigbors.kneighbors(df)
dist_to_nth_nearest_neighbor = distances[:,-1]
plt.figure(figsize=(12,6))
plt.plot(np.sort(dist_to_nth_nearest_neighbor))
plt.xlabel("Index\n(sorted by increasing distances)")
plt.ylabel("{}-NN Distance (epsilon)".format(min_samples-1))
plt.tick_params(right=True, labelright=True)
plt.grid()
min_samples = 2*len(dbscan_df.columns)
plot_epsilon(dbscan_df,min_samples)
###Output
_____no_output_____
###Markdown
Using min_samples = twice the number of predictors (so 48), I decided on an epsilon value of 190, as that is approximately where the distances in the plot above start increasing rapidly.
###Code
fitted_dbscan = DBSCAN(eps=190).fit(dbscan_df)
labels = fitted_dbscan.labels_
# Number of clusters in labels, ignoring noise if present.
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
print('Estimated number of clusters: %d' % n_clusters_)
###Output
Estimated number of clusters: 3
###Markdown
From DBSCAN, I get 3 clusters. The avg silhouette score of these 3 clusters, omitting every data point which was assigned the label of -1 (which means it was in no cluster) is calculated below.
###Code
clustered_indices = np.argwhere(fitted_dbscan.labels_!=-1)
clustered_indices = np.ndarray.flatten(clustered_indices)
dbscan_cluster_df = dbscan_df.loc[clustered_indices,:].reindex()
cluster_labels = np.array(fitted_dbscan.labels_)
filtered_labels = cluster_labels[cluster_labels!=-1]
ss = silhouette_score(dbscan_cluster_df, filtered_labels)
print("DBSCAN Silhouette Score: ", ss)
print("Label Count By Class [0,1,2]:",np.bincount(filtered_labels))
###Output
DBSCAN Silhouette Score: 0.4523547750698143
Label Count By Class [0,1,2]: [38857 15 4]
###Markdown
This silhouette score is significantly better than any of the others I got earlier, but it's also artificial in that I've omitted points which were not assigned a cluster, which was not an option earlier. While there are only 67 points I'm omitting, this is significant. Moreover, when I look at the counts of each class above, I have ~40k in one class, with 15 and 4 in the other 2. This hardly counts as 3 clusters.
###Code
###Output
_____no_output_____
###Markdown
(f) Hierarchical clustering. Run agglomerative clustering (using Ward's method), and plot the result using a dendrogram. Interpret the results, and describe the cluster size(s) the plot suggests. What level of aggregation is suggested by the silhouette score?
###Code
import scipy.cluster.hierarchy as hac
from scipy.spatial.distance import pdist
plt.figure(figsize=(11,8.5))
dist_mat = pdist(clustering_df, metric="euclidean")
ward_data = hac.ward(dist_mat)
hac.dendrogram(ward_data);
plt.show()
###Output
_____no_output_____
###Markdown
While there are no huge gaps shown on the dendrogram, the obvious clusters are the 4 depicted with different colors above. However, in order to dig into this better, I'll compare the silhouette score to the number of clusters formed by the dendrogram below.
###Code
#your code here
tol_list = np.linspace(500, 2400, 100)
num_clusters = []
hac_sil_scores = []
for tol in tol_list:
labellings = hac.fcluster(ward_data, t=tol, criterion='distance')
num = len(np.bincount(labellings))-1
num_clusters.append(num)
hac_sil = silhouette_score(clustering_df, labellings)
hac_sil_scores.append(hac_sil)
fig, ax = plt.subplots(figsize=(16,8))
ax.plot(num_clusters, hac_sil_scores)
ax.set(xlabel='Number of Clusters per Hierarchical Clustering', ylabel='Silhouette Score',title="Number of Clusters vs Sil. Score from Heirarchical Clustering")
###Output
_____no_output_____ |
3_Boston_marathon/.ipynb_checkpoints/CatTia-checkpoint.ipynb | ###Markdown
Add Columns New Feature -- AgeGroup
###Code
# Define age_division
def age_to_age_division(age):
if age<=34 : return 0
if (35<=age)&(age<=39) : return 1
if (40<=age)&(age<=44) : return 2
if (45<=age)&(age<=49) : return 3
if (50<=age)&(age<=54) : return 4
if (55<=age)&(age<=59) : return 5
if (60<=age)&(age<=64) : return 6
if (65<=age)&(age<=69) : return 7
if (70<=age)&(age<=74) : return 8
if (75<=age)&(age<=79) : return 9
if age>=80 : return 10
age_division = df['age'].apply(age_to_age_division)
df.insert(14, 'age_division', value=age_division)
df[['age', 'age_division']].head(8)
###Output
_____no_output_____
###Markdown
New Features -- avgSpeed*
###Code
milestones = [['5k', '10k', '20k', '25k', '30k', '35k', '40k'],
[ 5, 10, 20, 25, 30, 35, 40]]
loc = 10
for i, j in zip(*milestones):
values = df[i] / j
df.insert(loc, i+'_avgSpeed', values)
loc += 1
df.head(8)
profile = ProfileReport(df, title='Profiling Report -- Boston Marathon 2014', explorative=True)
profile.to_file("Boston_2014_Profiling.html")
list(df.select_dtypes(include=numerics).columns.values)
###Output
_____no_output_____ |
Strategy_Control_Matrix.ipynb | ###Markdown
###Code
import numpy as np
import pandas as pd
import re
import os
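# Scenario keys are 8-character binary strings (see generate_key_list(8) below); judging by the rule
# functions, the positions encode: 0=setpoints, 1=green roof, 2=deep retrofit, 3=passive house,
# 4=heat pumps, 5=mass timber, 6=seawater cooling, 7=rooftop PV.
# change_key() zeroes positions 0, 4, 5 and 6 -- the strategies that leave the building geometry
# unchanged -- so copy_key() can point a scenario at the solar-radiation results of its reduced base key.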
def change_key(key):
s = list(key)
s[0] = '0'
s[4] = '0'
s[5] = '0'
s[6] = '0'
return "".join(s)
def copy_key(key,run_list):
if key in run_list:
return 'run'
elif change_key(key) in run_list:
return os.path.join(change_key(key),'outputs','data','solar-radiation')
else:
return np.nan
def runrad_rule(key,run_list): # TRUE IF THIS KEY NEEDS ITS OWN SOLAR-RADIATION SIMULATION (I.E. IS IN run_rad)
if key in run_list:
return True
else:
return False
def SP_rule(key,SP_value): # SET INTEGER FOR HEATING AND COOLING SETPOINT
keys = [int(d) for d in key]
if keys[0]==1:
return SP_value
else:
return np.nan
def GR_rule(key,GR_value): # ALL BUILDINGS GET GREEN ROOFS
keys = [int(d) for d in key]
if keys[1]==1:
return GR_value
else:
return np.nan
def DR_rule(key,DR_name): # EXISTING BUILDINGS HAVE DEEP WALL AND WINDOW RETROFIT (WWR REDUCED)
keys = [int(d) for d in key]
if keys[2] == 1:
return DR_name
else:
return np.nan
def PH_rule(key,PH_value,PH_GR_value): # NEW BUILD REQ PASSIVE
keys = [int(d) for d in key]
if keys[3] == 1:
return PH_value
else:
return np.nan
def PH_rule(key,PH_value): # NEW BUILD REQ PASSIVE
keys = [int(d) for d in key]
if keys[3] == 1:
return PH_value
else:
return np.nan
def PH_GR_rule(key,PH_GR_value): # NEW BUILD REQ PASSIVE AND GREEN ROOF
keys = [int(d) for d in key]
if keys[3] == 1:
if keys[1] == 1:
return PH_GR_value
else:
return np.nan
else:
return np.nan
def HP_rule(key,HP_value): # NEW BUILD AND REFURBISHMENT/RETROFITS REQUIRE HEAT PUMP INSTALLATION (NO DISTRICT SERVICES)
keys = [int(d) for d in key]
if keys[4] == 1:
return HP_value
else:
return np.nan
def MT_rule(key,MT_value): # NEW BUILD REQUIRES MASS TIMBER STRUCTURAL SYSTEMS
keys = [int(d) for d in key]
if keys[5] == 1:
return MT_value
else:
return np.nan
def SW_rule(key,SW_value): # DISTRICT COOLING SYSTEM INSTALLED LINKED TO WRECK BEACH
keys = [int(d) for d in key]
if keys[6] == 1:
return SW_value
else:
return np.nan
def PV_rule(key,PV_value): # ALL BUILDINGS HAVE ROOFTOP PV INSTALLED
keys = [int(d) for d in key]
if keys[7] == 1:
return PV_value
else:
return np.nan
def PV_GR_rule(key,PV_GR_value): # ALL BUILDINGS HAVE ROOFTOP PV AND GREEN ROOF INSTALLED
keys = [int(d) for d in key]
if keys[7] == 1:
if keys[1] == 1:
return PV_GR_value
else:
return np.nan
else:
return np.nan
def PV_PH_rule(key,PV_PH_value): # ALL BUILDINGS HAVE ROOFTOP PV AND PASSIVE HOUSE INSTALLED
keys = [int(d) for d in key]
if keys[7] == 1:
if keys[3] == 1:
return PV_PH_value
else:
return np.nan
else:
return np.nan
def PV_GR_PH_rule(key,PV_GR_PH_value): # ALL BUILDINGS HAVE ROOFTOP PV AND GREEN ROOF AND PASSIVE HOUSE INSTALLED
keys = [int(d) for d in key]
if keys[7] == 1:
if keys[1] == 1:
if keys[3] == 1:
return PV_GR_PH_value
else:
return np.nan
else:
return np.nan
else:
return np.nan
import itertools
def calc_lst(n):
#https://stackoverflow.com/questions/14931769/how-to-get-all-combination-of-n-binary-value
return [list(i) for i in itertools.product([0, 1], repeat=n)]
str_list = []
for key in calc_lst(8):
result = ''.join(str(i) for i in key)
str_list.append(result)
def generate_key_list(n):
# https://stackoverflow.com/questions/14931769/how-to-get-all-combination-of-n-binary-value
key_list = []
elements = [list(i) for i in itertools.product([0, 1], repeat=n)]
for key in elements:
result = ''.join(str(i) for i in key)
key_list.append(result)
return key_list
def rule_dataframe(key_list, run_rad):
key_df = pd.DataFrame(data={'keys':key_list})
key_df['SP_heat'] = key_df.apply(lambda x: SP_rule(x['keys'],-1),axis=1)
key_df['SP_cool'] = key_df.apply(lambda x: SP_rule(x['keys'],1),axis=1)
key_df['GR_roof'] = key_df.apply(lambda x: GR_rule(x['keys'],'ROOF_AS16'),axis=1)
key_df['DR_win'] = key_df.apply(lambda x: DR_rule(x['keys'],'WINDOW_AS6'),axis=1)
key_df['DR_leak'] = key_df.apply(lambda x: DR_rule(x['keys'],'TIGHTNESS_AS2'),axis=1)
key_df['DR_wall'] = key_df.apply(lambda x: DR_rule(x['keys'],'WALL_AS17'),axis=1)
key_df['DR_wwr'] = key_df.apply(lambda x: DR_rule(x['keys'],0.20),axis=1)
key_df['PH_base'] = key_df.apply(lambda x: PH_rule(x['keys'],'FLOOR_AS11'),axis=1)
key_df['PH_leak'] = key_df.apply(lambda x: PH_rule(x['keys'],'TIGHTNESS_AS1'),axis=1)
key_df['PH_win'] = key_df.apply(lambda x: PH_rule(x['keys'],'WINDOW_AS6'),axis=1)
key_df['PH_roof'] = key_df.apply(lambda x: PH_rule(x['keys'],'ROOF_AS17'),axis=1)
key_df['PH_GR_roof'] = key_df.apply(lambda x: PH_GR_rule(x['keys'],'ROOF_AS18'),axis=1)
key_df['PH_wall'] = key_df.apply(lambda x: PH_rule(x['keys'],'WALL_AS18'),axis=1)
key_df['PH_floor'] = key_df.apply(lambda x: PH_rule(x['keys'],'FLOOR_AS12'),axis=1)
key_df['PH_shade'] = key_df.apply(lambda x: PH_rule(x['keys'],'SHADING_AS4'),axis=1)
key_df['PH_wwr'] = key_df.apply(lambda x: PH_rule(x['keys'],0.15),axis=1)
key_df['PH_part'] = key_df.apply(lambda x: PH_rule(x['keys'],'WALL_AS19'),axis=1)
key_df['HP_hvac_cs'] = key_df.apply(lambda x: HP_rule(x['keys'],'HVAC_COOLING_AS5'),axis=1)
key_df['HP_hvac_hs'] = key_df.apply(lambda x: HP_rule(x['keys'],'HVAC_HEATING_AS4'),axis=1)
key_df['HP_supply_cs'] = key_df.apply(lambda x: HP_rule(x['keys'],'SUPPLY_COOLING_AS1'),axis=1)
key_df['HP_supply_hs'] = key_df.apply(lambda x: HP_rule(x['keys'],'SUPPLY_HEATING_AS7'),axis=1)
key_df['HP_supply_dhw'] = key_df.apply(lambda x: HP_rule(x['keys'],'SUPPLY_HOTWATER_AS7'),axis=1)
key_df['MT_cons'] = key_df.apply(lambda x: MT_rule(x['keys'],'CONSTRUCTION_AS2'),axis=1)
key_df['SW_hvac_cs'] = key_df.apply(lambda x: SW_rule(x['keys'],'HVAC_COOLING_AS5'),axis=1)
key_df['SW_supply_cs'] = key_df.apply(lambda x: SW_rule(x['keys'],'SUPPLY_COOLING_AS3'),axis=1)
key_df['PV_GR_PH'] = key_df.apply(lambda x: PV_GR_PH_rule(x['keys'],'ROOF_AS19'),axis=1)
key_df['PV_PH'] = key_df.apply(lambda x: PV_PH_rule(x['keys'],'ROOF_AS21'),axis=1)
key_df['PV_GR'] = key_df.apply(lambda x: PV_GR_rule(x['keys'],'ROOF_AS20'),axis=1)
key_df['PV'] = key_df.apply(lambda x: PV_rule(x['keys'],'ROOF_AS22'),axis=1)
key_df['PV_GR_PH_hg'] = key_df.apply(lambda x: PV_GR_PH_rule(x['keys'], 0.6), axis=1)
key_df['PV_PH_hg'] = key_df.apply(lambda x: PV_PH_rule(x['keys'], 0.25), axis=1)
key_df['PV_GR_hg'] = key_df.apply(lambda x: PV_GR_rule(x['keys'], 0.4), axis=1)
key_df['PV_hg'] = key_df.apply(lambda x: PV_rule(x['keys'], 0.1), axis=1)
key_df['run_rad'] = key_df.apply(lambda x: runrad_rule(x['keys'],run_rad),axis=1)
key_df['copy_rad'] = key_df.apply(lambda x: copy_key(x['keys'],run_rad),axis=1)
return key_df
def order_key_list(run_list,key_list):
no_run_rad_list = list(set(key_list) - set(run_list))
final_list = list()
final_list.extend(run_list)
final_list.extend(no_run_rad_list)
return final_list
run_rad = ['00000000',
'01000000',
'01100000',
'01110000',
'01110001',
'00100000',
'00110000',
'00100001',
'00010000',
'01010000',
'00010001',
'00000001',
'01000001',
'01100001',
'01010001',
'00110001']
run_rad_ids = ['a',
'b',
'c',
'd',
'e',
'f',
'g',
'h',
'i',
'j',
'k',
'l',
'm',
'n',
'o',
'p']
unique_rads = dict(zip(run_rad,run_rad_ids))
order_key_list(run_rad,str_list)
key_df = rule_dataframe(generate_key_list(8), run_rad)
key_df.columns
###Output
_____no_output_____ |
aas_233_workshop/09-Photutils/image_psf_photometry_withNIRCam.ipynb | ###Markdown
Photutils
- Code: https://github.com/astropy/photutils
- Documentation: http://photutils.readthedocs.org/en/stable/
- Issue Tracker: https://github.com/astropy/photutils/issues

Photutils capabilities:
- Background and background noise estimation
- Source detection and extraction (DAOFIND and IRAF's starfind, image segmentation, local peak finder)
- Aperture photometry
- PSF photometry
- PSF matching
- Centroids
- Morphological properties
- Elliptical isophote analysis

In this additional notebook, we will review PSF photometry extracting PSFs from an input image, using (i) a simulated image of elliptical PSFs and (ii) a JWST NIRCam simulated image.

--- Demonstration of `photutils.psf` with an image-based PSF Model
###Code
import os
import sys
import numpy as np
from astropy import units as u
from astropy.table import Table
from astropy.io import fits
from astropy import wcs
%matplotlib inline
from matplotlib import style, pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
plt.rcParams['image.cmap'] = 'viridis'
plt.rcParams['image.origin'] = 'lower'
plt.rcParams['axes.prop_cycle'] = style.library['seaborn-deep']['axes.prop_cycle']
plt.rcParams['figure.figsize'] = (14, 8)
plt.rcParams['axes.titlesize'] = plt.rcParams['axes.labelsize'] = 16
plt.rcParams['xtick.labelsize'] = plt.rcParams['ytick.labelsize'] = 14
plt.rcParams['image.interpolation'] = 'nearest'
import photutils
from photutils import psf
from astropy.modeling import models
photutils.__version__
###Output
_____no_output_____
###Markdown
Build a simulated image with a funky elliptical Moffat-like PSF (but no noise)
###Code
psfmodel = ((models.Shift(-5) & models.Shift(2)) |
models.Rotation2D(-20) |
(models.Identity(1) & models.Scale(1.5)) |
models.Moffat2D(1, 0,0, 6, 4.76))
psfmodel.bounding_box = ((-10, 10), (-10, 10))
psfim = psfmodel.render().T
plt.imshow(psfim)
psfmodel.offset_0 = psfmodel.offset_1 = 0
psfimcen = psfmodel.render()
del psfmodel.bounding_box
psfmodel
im = np.zeros((100, 100))
amps = np.random.randn(100)**2
xs = im.shape[0] * np.random.rand(amps.size)
ys = im.shape[1] * np.random.rand(amps.size)
for x, y, amp in zip(xs, ys, amps):
psfmodel.amplitude_3 = amp
psfmodel.offset_1 = -x
psfmodel.offset_0 = -y
psfmodel.render(im)
plt.imshow(im)
###Output
_____no_output_____
###Markdown
Now we use `FittableImageModel` on a *rendered* version of the PSF model with no pixel subsampling
###Code
plt.imshow(psfimcen)
plt.colorbar()
psf_im_model = psf.FittableImageModel(psfimcen, normalize=1)
plt.figure()
psf_im_model.bounding_box = ((-10, 10), (-10, 10))
psfrendered = psf_im_model.render()
del psf_im_model.bounding_box
plt.imshow(psfrendered)
plt.colorbar()
###Output
_____no_output_____
###Markdown
Now let's try doing photometry. First we need to find stars. We'll use the DAOPhot algorithm (which at its core is the same as most other PSF photometry tools). To start, we estimate the variance in the image to give us a guess at a good threshold for star-finding.
###Code
from astropy.stats import SigmaClip
bkg_var = photutils.background.BiweightScaleBackgroundRMS(
sigma_clip=SigmaClip(3))(im)
bkg_var
###Output
_____no_output_____
###Markdown
Then we create a `DAOStarFinder` object and run that on the image
###Code
star_finder = photutils.findstars.DAOStarFinder(threshold=bkg_var/2,
fwhm=5)
found_stars = star_finder(im)
plt.imshow(im)
plt.scatter(found_stars['xcentroid'], found_stars['ycentroid'], color='k')
found_stars
###Output
_____no_output_____
###Markdown
And then we create the object to do the photometry, and run it on the table of stars we found.
###Code
ph = psf.BasicPSFPhotometry(psf.DAOGroup(10), None, psf_im_model,
(5, 5), aperture_radius=10)
if 'xcentroid' in found_stars.colnames:
# there's an if here simply to make sure you can run this cell
# multiple times without re-running the star finder
found_stars['xcentroid'].name = 'x_0'
found_stars['ycentroid'].name = 'y_0'
found_stars['flux'].name = 'flux_0'
res = ph.do_photometry(im, found_stars)
plt.imshow(im)
plt.colorbar()
plt.scatter(res['x_0'], res['y_0'], color='k')
plt.scatter(res['x_fit'], res['y_fit'], color='r', s=3, lw=0)
res
###Output
_____no_output_____
###Markdown
And now we try making a residual image to see how well it did
###Code
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12,6))
vmin, vmax = -.3, 2.
ax1.imshow(im, vmin=vmin, vmax=vmax)
ax2.imshow(ph.get_residual_image(), vmin=vmin, vmax=vmax)
ax2.scatter(res['x_fit'], res['y_fit'], color='r', s=3, lw=0)
###Output
_____no_output_____
###Markdown
Well, that looks OK, though it's ugly because the PSF model we fit was a bit small. So let's try subtracting the *actual* model
###Code
subtracted_image = psf.subtract_psf(im, psf_im_model, res)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12,6))
vmin, vmax = -.3, 2.
ax1.imshow(im, vmin=vmin, vmax=vmax)
ax2.imshow(subtracted_image, vmin=vmin, vmax=vmax)
ax2.scatter(res['x_fit'], res['y_fit'], color='r', s=3, lw=0)
###Output
_____no_output_____
###Markdown
--- Simulated NIRCam data

Now let's try something like the above, but with simulated NIRCam data, using an oversampled PSF. In principle this is one of two modes one might take with real JWST data. For many cases using a provided PSF (or one generated from `webbpsf`) will be sufficient. But `photutils` will also support high-level tools to build "empirical" PSFs, e.g. built directly from the image.

You will need to download the simulated NIRCam image: https://stsci.box.com/s/z2sbv2vuqbtsj75fnjdnalnsrvrdcvgt (the downloaded file should be called `simulated_nircam_1.fits`) and the PSF image: https://stsci.box.com/s/5kxh7vsvctc5u10ovvdeyv8n6w5tcds0 (the downloaded file should be called `simulated_nircam_psf_1.fits`). Place both of these files in the same directory from which you ran this notebook.
###Code
im1fn = 'simulated_nircam_1.fits'
psf1fn = im1fn.replace('_1.fits','_psf_1.fits')
im1f = fits.open(im1fn)
im1 = im1f[1].data
im1h = im1f[1].header
im1wcs = wcs.WCS(im1h)
psf1f = fits.open(psf1fn)
psf1 = psf1f[0].data
psf1h = psf1f[0].header
psf1wcs = wcs.WCS(psf1h)
# this is a quick-and-easy way to re-scale an image, using the
# astropy.visualization package
from astropy.visualization import LogStretch, PercentileInterval
viz = LogStretch() + PercentileInterval(99)
plt.imshow(viz(im1))
###Output
_____no_output_____
###Markdown
OK, let's histogram it so we can see roughly where the threshold should be
###Code
plt.hist(im1.ravel(), bins=100, histtype='step', range=(-10, 200), log=True)
None
dsf = photutils.DAOStarFinder(100, 5)
found_stars = dsf(im1)
found_stars['xcentroid'].name = 'x_0'
found_stars['ycentroid'].name = 'y_0'
found_stars['flux'].name = 'flux_0'
found_stars
plt.imshow(viz(im1))
plt.scatter(found_stars['x_0'], found_stars['y_0'], lw=0, s=3, c='k')
plt.xlim(500, 1500)
plt.ylim(500, 1500)
###Output
_____no_output_____
###Markdown
Now we build the actual PSF model from the file the PSF is provided in, using the external knowledge that the PSF is 5x oversampled. The simple oversampling below only works as-is because both arrays are square, but that's true here.
###Code
# note that this is *not* the same pixel scale as the image above
plt.imshow(viz(psf1))
psfmodel = psf.FittableImageModel(psf1, oversampling=5)
###Output
_____no_output_____
###Markdown
Let's now zoom in on the image somewhere and see how the model looks compared to the actual image scale
###Code
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(viz(im1))
ax1.set_xlim(1080, 1131)
ax1.set_ylim(1100, 1151)
ax1.set_title('Simulated image')
xg, yg = np.mgrid[-25:25, -25:25]
ax2.imshow(viz(psfmodel(xg, yg)))
ax2.set_title('PSF')
###Output
_____no_output_____
###Markdown
Now we build a PSF photometry runner that is auto-configured to work basically the same as DAOPHOT. All of the steps in photometry are customizable if you like, but for now we'll just use this because it's a code familiar to many people.
###Code
psfphot = psf.DAOPhotPSFPhotometry(crit_separation=5,
threshold=100, fwhm=5,
psf_model=psfmodel, fitshape=(9,9),
niters=1, aperture_radius=5)
results = psfphot(im1, found_stars[:100])
###Output
_____no_output_____
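###Markdown
As an aside, the runner above is just a convenience wrapper: the finder, grouper, background estimator and fitter can all be swapped out. A minimal sketch of wiring the pieces together explicitly (assuming the same photutils version as above; `MMMBackground` and `LevMarLSQFitter` are reasonable choices here, not requirements) might look like this.
###Code
# Hedged sketch: explicit components roughly equivalent to DAOPhotPSFPhotometry above.
from astropy.modeling.fitting import LevMarLSQFitter
from photutils.background import MMMBackground

custom_phot = psf.BasicPSFPhotometry(group_maker=psf.DAOGroup(crit_separation=5),
                                     bkg_estimator=MMMBackground(),
                                     psf_model=psfmodel,
                                     fitshape=(9, 9),
                                     finder=photutils.DAOStarFinder(threshold=100, fwhm=5),
                                     fitter=LevMarLSQFitter(),
                                     aperture_radius=5)
# custom_results = custom_phot(im1)   # uncomment to run; same output columns as `results`
###Output
_____no_output_____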
###Markdown
Now let's inspect the residual image as a whole, and zoom in on a few particular stars.
###Code
res_im1 = psfphot.get_residual_image()
plt.imshow(viz(res_im1))
results
# these *should* be one by itself and one near the core. But you might
# have to change the (10, 500) to something else depending on what you
# want to inspect
for i in (10, 50):
resulti = results[i]
window_rad = 20
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(viz(im1))
ax1.set_xlim(resulti['x_fit']-window_rad, resulti['x_fit']+window_rad)
ax1.set_ylim(resulti['y_fit']-window_rad, resulti['y_fit']+window_rad)
ax1.scatter([resulti['x_0']], [resulti['y_0']], color='r',s=7)
ax1.scatter([resulti['x_fit']], [resulti['y_fit']], color='w',s=7)
ax1.set_title('Original image (star #{})'.format(i))
ax2.imshow(viz(res_im1))
ax2.set_xlim(resulti['x_fit']-window_rad, resulti['x_fit']+window_rad)
ax2.set_ylim(resulti['y_fit']-window_rad, resulti['y_fit']+window_rad)
ax2.scatter([resulti['x_0']], [resulti['y_0']], color='r',s=7)
ax2.scatter([resulti['x_fit']], [resulti['y_fit']], color='w',s=7)
ax2.set_title('Subtracted image (star #{})'.format(i))
###Output
_____no_output_____
###Markdown
OK, looks like at least some of them worked great, but in the crowded areas more iterations/tweaks to the input parameters are needed. See if you can tweak the parameters to make it better! For this simulated data set we only have one band, so there's not much output "science" to show... But below you can see how to get out magnitudes.
###Code
# this does not yet exist... but the plan is that it will at launch!
#jwst_calibrated_mags(results['flux_fit'], im1h)
# it would do something like this:
# this zero-point is just a made-up number right now,
# but it's something the instrument team will provide
inst_mag = -2.5*np.log10(results['flux_fit'])
zero_point = 31.2
results['cal_mag'] = zero_point + inst_mag
# if you scroll to the right you'll see the new column
results
###Output
_____no_output_____ |
Lecture 2 Conditionals loops and Functions/Loops/Loops-checkpoint.ipynb | ###Markdown
while loop
###Code
n=int(input())
i=1
while i<=n:
print(i,end=" ")
i=i+1
print()
print("done")
###Output
3
1 2 3
done
###Markdown
for loop and range
###Code
n=int(input())
for i in range(n+1):
print(i)
for i in range(4,10):
print(i)
for i in range(4,10,2):
print(i)
###Output
5
0
1
2
3
4
5
4
5
6
7
8
9
4
6
8
###Markdown
prime number
###Code
n=int(input())
prime=True
for i in range(2,n):
if n%i==0:
prime=False
break
if prime:
print("prime")
else:
print("Not prime")
###Output
3
prime
|
jupyter/2018-02-28 (BCPNN looks like).ipynb | ###Markdown
What the BCPNN learning rule looks like for different probabilities
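For reference, the quantity computed below is the BCPNN weight $w_{ij} = \log\frac{P_{ij}}{P_i\,P_j}$ (clipped by `log_epsilon` to avoid $\log 0$), evaluated on a grid of $P_j$ and $P_{ij}$ values for a few fixed values of $P_i$.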
###Code
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from mpl_toolkits.axes_grid1 import make_axes_locatable
%matplotlib inline
plt.rcParams['figure.figsize'] = (16, 12)
np.set_printoptions(suppress=True, precision=2)
sns.set(font_scale=2.0)
def log_epsilon(x, epsilon=1e-10):
return np.log(np.maximum(x, epsilon))
num = 50
p_i = 1.0
p_j_vector = np.linspace(0.1, 1.0, num=num)
p_ij_vector = np.linspace(0.1, 1.0, num=num)
w1 = np.zeros((num, num))
for index_x, p_j in enumerate(p_j_vector):
for index_y, p_ij in enumerate(p_ij_vector):
w1[index_y, index_x] = log_epsilon(p_ij / (p_i * p_j))
p_i = 0.5
p_j_vector = np.linspace(0.1, 1.0, num=num)
p_ij_vector = np.linspace(0.1, 1.0, num=num)
w2 = np.zeros((num, num))
for index_x, p_j in enumerate(p_j_vector):
for index_y, p_ij in enumerate(p_ij_vector):
w2[index_y, index_x] = log_epsilon(p_ij / (p_i * p_j))
p_i = 0.1
p_j_vector = np.linspace(0.1, 1.0, num=num)
p_ij_vector = np.linspace(0.1, 1.0, num=num)
w3 = np.zeros((num, num))
for index_x, p_j in enumerate(p_j_vector):
for index_y, p_ij in enumerate(p_ij_vector):
w3[index_y, index_x] = log_epsilon(p_ij / (p_i * p_j))
fig = plt.figure(figsize=(16, 12))
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
vmax = np.max([w1, w2, w3])
vmin = np.min([w1, w2, w3])
extent = [p_j_vector[0], p_j_vector[-1], p_ij_vector[0], p_ij_vector[-1]]
cmap = 'coolwarm'
im1 = ax1.imshow(w1, origin='lower', cmap=cmap, extent=extent, vmin=vmin, vmax=vmax)
ax1.grid()
divider = make_axes_locatable(ax1)
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im1, cax=cax, orientation='vertical')
ax1.set_title(r'$p_i = 1.0$')
ax1.set_xlabel(r'$p_j$')
ax1.set_ylabel(r'$p_{ij}$')
im2 = ax2.imshow(w2, origin='lower', cmap=cmap, extent=extent, vmin=vmin, vmax=vmax)
ax2.grid()
divider = make_axes_locatable(ax2)
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im2, cax=cax, orientation='vertical')
ax2.set_title(r'$p_i = 0.5$')
ax2.set_xlabel(r'$p_j$')
im3 = ax3.imshow(w3, origin='lower', cmap=cmap, extent=extent, vmin=vmin, vmax=vmax)
ax3.grid()
divider = make_axes_locatable(ax3)
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im3, cax=cax, orientation='vertical');
ax3.set_title(r'$p_i = 0.1$')
ax3.set_xlabel(r'$p_j$')
plt.setp( ax2.get_yticklabels(), visible=False)
plt.setp( ax3.get_yticklabels(), visible=False);
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import numpy as np
m1 = np.random.rand(3, 3)
m2 = np.arange(0, 3*3, 1).reshape((3, 3))
fig = plt.figure(figsize=(16, 12))
ax1 = fig.add_subplot(121)
im1 = ax1.imshow(m1, interpolation='None')
divider = make_axes_locatable(ax1)
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im1, cax=cax, orientation='vertical')
ax2 = fig.add_subplot(122)
im2 = ax2.imshow(m2, interpolation='None')
divider = make_axes_locatable(ax2)
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im2, cax=cax, orientation='vertical');
###Output
_____no_output_____ |
KA3/TagCloud2/140201115_KA3_1.ipynb | ###Markdown
[TagCloud2: Build a tag cloud of a 2012 presidential debate. (Python3)](http://www.cse.msu.edu/~cse231/PracticeOfComputingUsingPython/06_Dictionaries/TagCloud2/)
###Code
# Functions adapted from ProgrammingHistorian (updated to Python3)
# http://niche.uwo.ca/programming-historian/index.php/Tag_clouds
# Take one long string of words and put them in an HTML box.
# If desired, width, background color & border can be changed in the function
# This function stuffs the "body" string into the the HTML formatting string.
def make_HTML_box(body):
'''Required -- body (string), a string of words
Return -- a string that specifies an HTML box containing the body
'''
box_str = """<div style=\"
width: 640px;
background-color: rgb(250,250,250);
border: 1px grey solid;
text-align: center\" >{:s}</div>
"""
return box_str.format(body)
def make_HTML_word(word,cnt,high,low):
''' make a word with a font size to be placed in the box. Font size is scaled
between high and low (to be user set). high and low represent the high
and low counts in the document. cnt is the count of the word
Required -- word (string) to be formatted
-- cnt (int) count of occurances of word
-- high (int) highest word count in the document
-- low (int) lowest word count in the document
Return -- a string formatted for HTML that is scaled with respect to cnt'''
ratio = (cnt-low)/float(high-low)
font_size = high*ratio + (1-ratio)*low
font_size = int(font_size)
word_str = '<span style=\"font-size:{:s}px;\">{:s}</span>'
return word_str.format(str(font_size), word)
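# Worked example of the scaling above: with low=10, high=30 and cnt=20, ratio = 0.5 and
# font_size = 30*0.5 + 10*0.5 = 20 px, i.e. a linear interpolation between the two counts.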
def print_HTML_file(body,title):
''' create a standard html page (file) with titles, header etc.
and add the body (an html box) to that page. File created is title+'.html'
Required -- body (string), a string that specifies an HTML box
Return -- nothing'''
fd = open(title+'.html','w')
the_str="""
<html> <head>
<title>"""+title+"""</title>
</head>
<body>
<h1>"""+title+'</h1>'+'\n'+body+'\n'+"""<hr>
</body> </html>
"""
fd.write(the_str)
fd.close()
import re
import nltk
from IPython.core.display import display, HTML
#from htmlFunctions import *
# Load files
debates_f = ["debate.txt","debateTWO.txt"]
stop_words_f = "stopWords.txt"
debates = []
stop_words = []
for file in debates_f:
with open(file) as f:
data = f.read().split("\n")
debates.append(data)
with open(stop_words_f) as f:
stop_words = f.read().split("\n")
# Check correct loading
for d in debates:
print(len(d))
print(len(stop_words))
speaker1 = ["PRESIDENT BARACK OBAMA", "PRESIDENT OBAMA"]
speaker2 = ["MITT ROMNEY", "MR. ROMNEY"]
#script1 = []
#script2 = []
change = False
speaker = 0
scripts = [[],[]]
for d in debates:
for l in d:
if l.startswith(speaker1[0] + ":") or l.startswith(speaker1[1]):
speaker = 0
change = True
elif l.startswith(speaker2[0] + ":") or l.startswith(speaker2[1]):
speaker = 1
change = True
else:
change = False
if change:
splitpoint = l.find(":")
word_str = l[splitpoint + 1:].lower()
else:
word_str = l.lower()
word_tokens = re.findall(r"\w\w\w+", word_str) # at least 3 characters long
filtered = [w for w in word_tokens if not w in stop_words]
scripts[speaker] = scripts[speaker] + filtered
print(len(scripts[0]),len(scripts[1]))
# Get frequencies
# Calculate frequency distribution
fdist1 = nltk.FreqDist(scripts[0])
fdist2 = nltk.FreqDist(scripts[1])
word_freq = [[],[]]
freq = []
# Output top 20 words for each speaker
for word, frequency in fdist1.most_common(20):
word_freq[0].append((word, frequency))
freq.append(frequency)
print(u'{}: {}'.format(word, frequency))
print("###")
for word, frequency in fdist2.most_common(20):
word_freq[1].append((word, frequency))
freq.append(frequency)
print(u'{}: {}'.format(word, frequency))
high_count=max(freq)
low_count=min(freq)
print(high_count, low_count)
for i, pairs in enumerate(word_freq):
body=''
for word,cnt in pairs:
body = body + " " + make_HTML_word(word,cnt,high_count,low_count)
box = make_HTML_box(body) # creates HTML in a box
print_HTML_file(box,'TagCloud' + str(i)) # writes HTML to file name 'testFile.html'
#display(HTML(box)) # Display HTML
display(HTML(filename = 'TagCloud' + str(i) + ".html")) # Display HTML
###Output
155 27
|
BDA_LSTM.ipynb | ###Markdown
1. BDA Part 1.a. Define BDA methodology
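As a brief orientation (roughly following Wang et al.'s Balanced Distribution Adaptation formulation, which the class below appears to implement): BDA looks for a projection $A$ that minimizes a weighted combination of the marginal and class-conditional MMD between source and target, approximately $\min_A \ \mathrm{tr}\big(A^\top K[(1-\mu)M_0 + \mu \textstyle\sum_c M_c]K^\top A\big) + \lambda\lVert A\rVert^2$ subject to $A^\top K H K^\top A = I$, where $\mu$ balances the two terms. In the code, $\mu$ can be fixed or estimated from the A-distance, and the pseudo target labels used for the conditional term are refreshed each iteration with a linear SVM.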
###Code
def kernel(ker, X1, X2, gamma):
K = None
if not ker or ker == 'primal':
K = X1
elif ker == 'linear':
if X2 is not None:
K = sklearn.metrics.pairwise.linear_kernel(
np.asarray(X1).T, np.asarray(X2).T)
else:
K = sklearn.metrics.pairwise.linear_kernel(np.asarray(X1).T)
elif ker == 'rbf':
if X2 is not None:
K = sklearn.metrics.pairwise.rbf_kernel(
np.asarray(X1).T, np.asarray(X2).T, gamma)
else:
K = sklearn.metrics.pairwise.rbf_kernel(
np.asarray(X1).T, None, gamma)
return K
def proxy_a_distance(source_X, target_X):
"""
Compute the Proxy-A-Distance of a source/target representation
"""
nb_source = np.shape(source_X)[0]
nb_target = np.shape(target_X)[0]
train_X = np.vstack((source_X, target_X))
train_Y = np.hstack((np.zeros(nb_source, dtype=int),
np.ones(nb_target, dtype=int)))
clf = svm.LinearSVC(random_state=0)
clf.fit(train_X, train_Y)
y_pred = clf.predict(train_X)
error = metrics.mean_absolute_error(train_Y, y_pred)
dist = 2 * (1 - 2 * error)
return dist
def estimate_mu(_X1, _Y1, _X2, _Y2):
adist_m = proxy_a_distance(_X1, _X2)
C = len(np.unique(_Y1))
epsilon = 1e-3
list_adist_c = []
for i in range(1, C + 1):
ind_i, ind_j = np.where(_Y1 == i), np.where(_Y2 == i)
Xsi = _X1[ind_i[0], :]
Xtj = _X2[ind_j[0], :]
adist_i = proxy_a_distance(Xsi, Xtj)
list_adist_c.append(adist_i)
adist_c = sum(list_adist_c) / C
mu = adist_c / (adist_c + adist_m)
if mu > 1:
mu = 1
if mu < epsilon:
mu = 0
return mu
class BDA:
def __init__(self, kernel_type='primal', dim=30, lamb=1, mu=0.5, gamma=1, T=10, mode='BDA', estimate_mu=False):
'''
Init func
:param kernel_type: kernel, values: 'primal' | 'linear' | 'rbf'
:param dim: dimension after transfer
:param lamb: lambda value in equation
:param mu: balance factor. Default is 0.5; if estimate_mu is True, it is estimated using the A-distance
:param gamma: kernel bandwidth for rbf kernel
:param T: iteration number
:param mode: 'BDA' | 'WBDA'
:param estimate_mu: True | False, if you want to automatically estimate mu instead of manually setting it
'''
self.kernel_type = kernel_type
self.dim = dim
self.lamb = lamb
self.mu = mu
self.gamma = gamma
self.T = T
self.mode = mode
self.estimate_mu = estimate_mu
def fit(self, Xs, Ys, Xt, Yt):
'''
Transform Xs and Xt into the adapted subspace; pseudo target labels are refined each iteration with a linear SVM
:param Xs: ns * n_feature, source feature
:param Ys: ns * 1, source label
:param Xt: nt * n_feature, target feature
:param Yt: nt * 1, target label
:return: Xs_new, Xt_new, A
'''
# ipdb.set_trace()
list_acc = []
X = np.hstack((Xs.T, Xt.T)) # X.shape: [n_feature, ns+nt]
X_mean = np.linalg.norm(X, axis=0) # why it's axis=0? the average of features
X_mean[X_mean==0] = 1
X /= X_mean
m, n = X.shape
ns, nt = len(Xs), len(Xt)
e = np.vstack((1 / ns * np.ones((ns, 1)), -1 / nt * np.ones((nt, 1))))
C = np.unique(Ys)
H = np.eye(n) - 1 / n * np.ones((n, n))
mu = self.mu
M = 0
Y_tar_pseudo = None
Xs_new = None
for t in range(self.T):
print('\tStarting iter %i'%t)
N = 0
M0 = e * e.T * len(C)
# ipdb.set_trace()
if Y_tar_pseudo is not None:
for i in range(len(C)):
e = np.zeros((n, 1))
Ns = len(Ys[np.where(Ys == C[i])])
Nt = len(Y_tar_pseudo[np.where(Y_tar_pseudo == C[i])])
if self.mode == 'WBDA':
Ps = Ns / len(Ys)
Pt = Nt / len(Y_tar_pseudo)
alpha = Pt / Ps
# mu = 1
else:
alpha = 1
tt = Ys == C[i]
e[np.where(tt == True)] = 1 / Ns
# ipdb.set_trace()
yy = Y_tar_pseudo == C[i]
ind = np.where(yy == True)
inds = [item + ns for item in ind]
try:
e[tuple(inds)] = -alpha / Nt
e[np.isinf(e)] = 0
except:
e[tuple(inds)] = 0 # fall back to 0 when the class is absent from the pseudo labels (e.g. Nt == 0)
N = N + np.dot(e, e.T)
# ipdb.set_trace()
# In BDA, mu can be set or automatically estimated using A-distance
# In WBDA, we find that setting mu=1 is enough
if self.estimate_mu and self.mode == 'BDA':
if Xs_new is not None:
mu = estimate_mu(Xs_new, Ys, Xt_new, Y_tar_pseudo)
else:
mu = 0
# ipdb.set_trace()
M = (1 - mu) * M0 + mu * N
M /= np.linalg.norm(M, 'fro')
# ipdb.set_trace()
K = kernel(self.kernel_type, X, None, gamma=self.gamma)
n_eye = m if self.kernel_type == 'primal' else n
a, b = np.linalg.multi_dot([K, M, K.T]) + self.lamb * np.eye(n_eye), np.linalg.multi_dot([K, H, K.T])
w, V = scipy.linalg.eig(a, b)
ind = np.argsort(w)
A = V[:, ind[:self.dim]]
Z = np.dot(A.T, K)
Z_mean = np.linalg.norm(Z, axis=0) # why it's axis=0?
Z_mean[Z_mean==0] = 1
Z /= Z_mean
Xs_new, Xt_new = Z[:, :ns].T, Z[:, ns:].T
global device
model = sklearn.svm.SVC(kernel='linear').fit(Xs_new, Ys.ravel())
Y_tar_pseudo = model.predict(Xt_new)
# ipdb.set_trace()
acc = sklearn.metrics.mean_squared_error(Y_tar_pseudo, Yt) # Yt is already in classes
print(acc)
return Xs_new, Xt_new, A #, acc, Y_tar_pseudo, list_acc
###Output
_____no_output_____
###Markdown
Load data
###Code
Xs, Xt = bda_utils.load_data(if_weekday=1, if_interdet=1)
Xs = Xs[:,8:9]
Xt = Xt[:,8:9]
Xs, Xs_min, Xs_max = bda_utils.normalize2D(Xs)
Xt, Xt_min, Xt_max = bda_utils.normalize2D(Xt)
label_seq_len = 7
# batch_size = full batch
seq_len = 12
reduced_dim = 4
inp_dim = min(Xs.shape[1], Xt.shape[1])
label_dim = min(Xs.shape[1], Xt.shape[1])
hid_dim = 3
layers = 1
lamb = 2
MU = 0.7
bda_dim = label_seq_len-4
kernel_type = 'linear'
hyper = {
'inp_dim':inp_dim,
'label_dim':label_dim,
'label_seq_len':label_seq_len,
'seq_len':seq_len,
'reduced_dim':reduced_dim,
'hid_dim':hid_dim,
'layers':layers,
'lamb':lamb,
'MU': MU,
'bda_dim':bda_dim,
'kernel_type':kernel_type}
hyper = pd.DataFrame(hyper, index=['Values'])
hyper
Xs = Xs[:96, :]
# [sample size, seq_len, inp_dim (dets)], [sample size, label_seq_len, inp_dim (dets)]
Xs_3d, Ys_3d = bda_utils.sliding_window(Xs, Xs, seq_len, label_seq_len)
Xt_3d, Yt_3d = bda_utils.sliding_window(Xt, Xt, seq_len, label_seq_len)
Ys_3d = Ys_3d[:, label_seq_len-1:, :]
Yt_3d = Yt_3d[:, label_seq_len-1:, :]
print(Xs_3d.shape)
print(Ys_3d.shape)
print(Xt_3d.shape)
print(Yt_3d.shape)
plt.plot(Xs_3d[:,10,0])
t_s = time.time()
Xs_train_3d = []
Ys_train_3d = []
Xt_valid_3d = []
Xt_train_3d = []
Yt_valid_3d = []
Yt_train_3d = []
for i in range(Xs_3d.shape[2]):
print('Starting det %i'%i)
bda = BDA(kernel_type='linear', dim=seq_len-reduced_dim, lamb=lamb, mu=MU, gamma=1, T=2) # T is iteration time
Xs_new, Xt_new, A = bda.fit(
Xs_3d[:, :, i], bda_utils.get_class(Ys_3d[:, :, i]), Xt_3d[:, :, i], bda_utils.get_class(Yt_3d[:, :, i])
) # input shape: ns, n_feature | ns, n_label_feature
# normalize
Xs_new, Xs_new_min, Xs_new_max = bda_utils.normalize2D(Xs_new)
Xt_new, Xt_new_min, Xt_new_max = bda_utils.normalize2D(Xt_new)
print(Xs_new.shape)
print(Xt_new.shape)
day_train_t = 1
Xs_train = Xs_new.copy()
Ys_train = Ys_3d[:, :, i]
Xt_valid = Xt_new.copy()[int(96*day_train_t):, :]
Xt_train = Xt_new.copy()[:int(96*day_train_t), :]
Yt_valid = Yt_3d[:, :, i].copy()[int(96*day_train_t):, :]
Yt_train = Yt_3d[:, :, i].copy()[:int(96*day_train_t), :]
print('Time spent:%.5f'%(time.time()-t_s))
print(Xs_train.shape)
print(Ys_train.shape)
print(Xt_valid.shape)
print(Xt_train.shape)
print(Yt_valid.shape)
print(Yt_train.shape)
train_x = np.vstack([Xs_train, Xt_train])
train_y = np.vstack([Ys_train, Yt_train])
###Output
_____no_output_____
###Markdown
LSTM
###Code
model = keras.models.Sequential()
# out shape: [window_size, hid_dim]
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.LSTM(units=hid_dim, return_sequences=True))
model.add(keras.layers.LSTM(units=hid_dim, return_sequences=True))
model.add(keras.layers.LSTM(units=hid_dim, return_sequences=False))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Dense(1, activation='relu'))
bda_utils.setup_seed(1)
###Output
_____no_output_____
###Markdown
Training on target data
###Code
model.compile(loss='mse', optimizer='adam')
history = model.fit(
train_x[:, :, np.newaxis], train_y[:, :, np.newaxis],
epochs=350, batch_size=64, validation_data=(Xt_valid[:, :, np.newaxis], Yt_valid[:, :, np.newaxis]),
verbose=2, shuffle=True
)
###Output
Epoch 1/350
3/3 - 4s - loss: 0.2044 - val_loss: 0.2789
Epoch 2/350
3/3 - 0s - loss: 0.1950 - val_loss: 0.2659
Epoch 3/350
3/3 - 0s - loss: 0.1859 - val_loss: 0.2536
Epoch 4/350
3/3 - 0s - loss: 0.1792 - val_loss: 0.2419
Epoch 5/350
3/3 - 0s - loss: 0.1745 - val_loss: 0.2308
Epoch 6/350
3/3 - 0s - loss: 0.1706 - val_loss: 0.2209
Epoch 7/350
3/3 - 0s - loss: 0.1674 - val_loss: 0.2111
Epoch 8/350
3/3 - 0s - loss: 0.1643 - val_loss: 0.2023
Epoch 9/350
3/3 - 0s - loss: 0.1635 - val_loss: 0.1939
Epoch 10/350
3/3 - 0s - loss: 0.1587 - val_loss: 0.1863
Epoch 11/350
3/3 - 0s - loss: 0.1580 - val_loss: 0.1790
Epoch 12/350
3/3 - 0s - loss: 0.1547 - val_loss: 0.1715
Epoch 13/350
3/3 - 0s - loss: 0.1520 - val_loss: 0.1641
Epoch 14/350
3/3 - 0s - loss: 0.1502 - val_loss: 0.1562
Epoch 15/350
3/3 - 0s - loss: 0.1490 - val_loss: 0.1483
Epoch 16/350
3/3 - 0s - loss: 0.1474 - val_loss: 0.1407
Epoch 17/350
3/3 - 0s - loss: 0.1467 - val_loss: 0.1334
Epoch 18/350
3/3 - 0s - loss: 0.1438 - val_loss: 0.1258
Epoch 19/350
3/3 - 0s - loss: 0.1405 - val_loss: 0.1183
Epoch 20/350
3/3 - 0s - loss: 0.1390 - val_loss: 0.1110
Epoch 21/350
3/3 - 0s - loss: 0.1364 - val_loss: 0.1039
Epoch 22/350
3/3 - 0s - loss: 0.1325 - val_loss: 0.0973
Epoch 23/350
3/3 - 0s - loss: 0.1290 - val_loss: 0.0911
Epoch 24/350
3/3 - 0s - loss: 0.1245 - val_loss: 0.0858
Epoch 25/350
3/3 - 0s - loss: 0.1193 - val_loss: 0.0811
Epoch 26/350
3/3 - 0s - loss: 0.1121 - val_loss: 0.0774
Epoch 27/350
3/3 - 0s - loss: 0.1052 - val_loss: 0.0747
Epoch 28/350
3/3 - 0s - loss: 0.0966 - val_loss: 0.0731
Epoch 29/350
3/3 - 0s - loss: 0.0880 - val_loss: 0.0726
Epoch 30/350
3/3 - 0s - loss: 0.0771 - val_loss: 0.0732
Epoch 31/350
3/3 - 0s - loss: 0.0712 - val_loss: 0.0749
Epoch 32/350
3/3 - 0s - loss: 0.0655 - val_loss: 0.0776
Epoch 33/350
3/3 - 0s - loss: 0.0607 - val_loss: 0.0811
Epoch 34/350
3/3 - 0s - loss: 0.0575 - val_loss: 0.0852
Epoch 35/350
3/3 - 0s - loss: 0.0548 - val_loss: 0.0899
Epoch 36/350
3/3 - 0s - loss: 0.0526 - val_loss: 0.0950
Epoch 37/350
3/3 - 0s - loss: 0.0517 - val_loss: 0.1001
Epoch 38/350
3/3 - 0s - loss: 0.0499 - val_loss: 0.1051
Epoch 39/350
3/3 - 0s - loss: 0.0486 - val_loss: 0.1098
Epoch 40/350
3/3 - 0s - loss: 0.0492 - val_loss: 0.1138
Epoch 41/350
3/3 - 0s - loss: 0.0470 - val_loss: 0.1174
Epoch 42/350
3/3 - 0s - loss: 0.0466 - val_loss: 0.1203
Epoch 43/350
3/3 - 0s - loss: 0.0458 - val_loss: 0.1223
Epoch 44/350
3/3 - 0s - loss: 0.0450 - val_loss: 0.1240
Epoch 45/350
3/3 - 0s - loss: 0.0455 - val_loss: 0.1246
Epoch 46/350
3/3 - 0s - loss: 0.0449 - val_loss: 0.1250
Epoch 47/350
3/3 - 0s - loss: 0.0448 - val_loss: 0.1243
Epoch 48/350
3/3 - 0s - loss: 0.0453 - val_loss: 0.1230
Epoch 49/350
3/3 - 0s - loss: 0.0448 - val_loss: 0.1215
Epoch 50/350
3/3 - 0s - loss: 0.0445 - val_loss: 0.1196
Epoch 51/350
3/3 - 0s - loss: 0.0421 - val_loss: 0.1178
Epoch 52/350
3/3 - 0s - loss: 0.0442 - val_loss: 0.1160
Epoch 53/350
3/3 - 0s - loss: 0.0418 - val_loss: 0.1143
Epoch 54/350
3/3 - 0s - loss: 0.0410 - val_loss: 0.1125
Epoch 55/350
3/3 - 0s - loss: 0.0414 - val_loss: 0.1107
Epoch 56/350
3/3 - 0s - loss: 0.0404 - val_loss: 0.1087
Epoch 57/350
3/3 - 0s - loss: 0.0404 - val_loss: 0.1071
Epoch 58/350
3/3 - 0s - loss: 0.0392 - val_loss: 0.1056
Epoch 59/350
3/3 - 0s - loss: 0.0424 - val_loss: 0.1037
Epoch 60/350
3/3 - 0s - loss: 0.0376 - val_loss: 0.1027
Epoch 61/350
3/3 - 0s - loss: 0.0376 - val_loss: 0.1015
Epoch 62/350
3/3 - 0s - loss: 0.0371 - val_loss: 0.1003
Epoch 63/350
3/3 - 0s - loss: 0.0364 - val_loss: 0.0995
Epoch 64/350
3/3 - 0s - loss: 0.0359 - val_loss: 0.0989
Epoch 65/350
3/3 - 0s - loss: 0.0355 - val_loss: 0.0983
Epoch 66/350
3/3 - 0s - loss: 0.0349 - val_loss: 0.0973
Epoch 67/350
3/3 - 0s - loss: 0.0342 - val_loss: 0.0967
Epoch 68/350
3/3 - 0s - loss: 0.0346 - val_loss: 0.0961
Epoch 69/350
3/3 - 0s - loss: 0.0347 - val_loss: 0.0954
Epoch 70/350
3/3 - 0s - loss: 0.0328 - val_loss: 0.0943
Epoch 71/350
3/3 - 0s - loss: 0.0318 - val_loss: 0.0931
Epoch 72/350
3/3 - 0s - loss: 0.0305 - val_loss: 0.0920
Epoch 73/350
3/3 - 0s - loss: 0.0299 - val_loss: 0.0912
Epoch 74/350
3/3 - 0s - loss: 0.0321 - val_loss: 0.0905
Epoch 75/350
3/3 - 0s - loss: 0.0294 - val_loss: 0.0900
Epoch 76/350
3/3 - 0s - loss: 0.0300 - val_loss: 0.0893
Epoch 77/350
3/3 - 0s - loss: 0.0275 - val_loss: 0.0896
Epoch 78/350
3/3 - 0s - loss: 0.0286 - val_loss: 0.0895
Epoch 79/350
3/3 - 0s - loss: 0.0256 - val_loss: 0.0896
Epoch 80/350
3/3 - 0s - loss: 0.0300 - val_loss: 0.0899
Epoch 81/350
3/3 - 0s - loss: 0.0257 - val_loss: 0.0897
Epoch 82/350
3/3 - 0s - loss: 0.0238 - val_loss: 0.0892
Epoch 83/350
3/3 - 0s - loss: 0.0235 - val_loss: 0.0888
Epoch 84/350
3/3 - 0s - loss: 0.0243 - val_loss: 0.0889
Epoch 85/350
3/3 - 0s - loss: 0.0229 - val_loss: 0.0890
Epoch 86/350
3/3 - 0s - loss: 0.0228 - val_loss: 0.0885
Epoch 87/350
3/3 - 0s - loss: 0.0228 - val_loss: 0.0884
Epoch 88/350
3/3 - 0s - loss: 0.0219 - val_loss: 0.0881
Epoch 89/350
3/3 - 0s - loss: 0.0218 - val_loss: 0.0880
Epoch 90/350
3/3 - 0s - loss: 0.0250 - val_loss: 0.0886
Epoch 91/350
3/3 - 0s - loss: 0.0218 - val_loss: 0.0883
Epoch 92/350
3/3 - 0s - loss: 0.0207 - val_loss: 0.0881
Epoch 93/350
3/3 - 0s - loss: 0.0237 - val_loss: 0.0885
Epoch 94/350
3/3 - 0s - loss: 0.0212 - val_loss: 0.0897
Epoch 95/350
3/3 - 0s - loss: 0.0199 - val_loss: 0.0903
Epoch 96/350
3/3 - 0s - loss: 0.0199 - val_loss: 0.0897
Epoch 97/350
3/3 - 0s - loss: 0.0222 - val_loss: 0.0883
Epoch 98/350
3/3 - 0s - loss: 0.0200 - val_loss: 0.0881
Epoch 99/350
3/3 - 0s - loss: 0.0229 - val_loss: 0.0893
Epoch 100/350
3/3 - 0s - loss: 0.0221 - val_loss: 0.0910
Epoch 101/350
3/3 - 0s - loss: 0.0224 - val_loss: 0.0899
Epoch 102/350
3/3 - 0s - loss: 0.0200 - val_loss: 0.0901
Epoch 103/350
3/3 - 0s - loss: 0.0198 - val_loss: 0.0913
Epoch 104/350
3/3 - 0s - loss: 0.0196 - val_loss: 0.0927
Epoch 105/350
3/3 - 0s - loss: 0.0196 - val_loss: 0.0930
Epoch 106/350
3/3 - 0s - loss: 0.0208 - val_loss: 0.0922
Epoch 107/350
3/3 - 0s - loss: 0.0205 - val_loss: 0.0929
Epoch 108/350
3/3 - 0s - loss: 0.0189 - val_loss: 0.0938
Epoch 109/350
3/3 - 0s - loss: 0.0208 - val_loss: 0.0944
Epoch 110/350
3/3 - 0s - loss: 0.0196 - val_loss: 0.0947
Epoch 111/350
3/3 - 0s - loss: 0.0200 - val_loss: 0.0960
Epoch 112/350
3/3 - 0s - loss: 0.0197 - val_loss: 0.0951
Epoch 113/350
3/3 - 0s - loss: 0.0181 - val_loss: 0.0952
Epoch 114/350
3/3 - 0s - loss: 0.0240 - val_loss: 0.0956
Epoch 115/350
3/3 - 0s - loss: 0.0181 - val_loss: 0.0954
Epoch 116/350
3/3 - 0s - loss: 0.0196 - val_loss: 0.0940
Epoch 117/350
3/3 - 0s - loss: 0.0184 - val_loss: 0.0933
Epoch 118/350
3/3 - 0s - loss: 0.0185 - val_loss: 0.0916
Epoch 119/350
3/3 - 0s - loss: 0.0189 - val_loss: 0.0939
Epoch 120/350
3/3 - 0s - loss: 0.0192 - val_loss: 0.0951
Epoch 121/350
3/3 - 0s - loss: 0.0177 - val_loss: 0.0935
Epoch 122/350
3/3 - 0s - loss: 0.0178 - val_loss: 0.0923
Epoch 123/350
3/3 - 0s - loss: 0.0185 - val_loss: 0.0934
Epoch 124/350
3/3 - 0s - loss: 0.0186 - val_loss: 0.0959
Epoch 125/350
3/3 - 0s - loss: 0.0201 - val_loss: 0.0948
Epoch 126/350
3/3 - 0s - loss: 0.0213 - val_loss: 0.0936
Epoch 127/350
3/3 - 0s - loss: 0.0212 - val_loss: 0.0946
Epoch 128/350
3/3 - 0s - loss: 0.0187 - val_loss: 0.0965
Epoch 129/350
3/3 - 0s - loss: 0.0189 - val_loss: 0.1008
Epoch 130/350
3/3 - 0s - loss: 0.0197 - val_loss: 0.1000
Epoch 131/350
3/3 - 0s - loss: 0.0187 - val_loss: 0.0962
Epoch 132/350
3/3 - 0s - loss: 0.0179 - val_loss: 0.0914
Epoch 133/350
3/3 - 0s - loss: 0.0174 - val_loss: 0.0930
Epoch 134/350
3/3 - 0s - loss: 0.0192 - val_loss: 0.0934
Epoch 135/350
3/3 - 0s - loss: 0.0173 - val_loss: 0.0941
Epoch 136/350
3/3 - 0s - loss: 0.0199 - val_loss: 0.0971
Epoch 137/350
3/3 - 0s - loss: 0.0202 - val_loss: 0.1040
Epoch 138/350
3/3 - 0s - loss: 0.0178 - val_loss: 0.1029
Epoch 139/350
3/3 - 0s - loss: 0.0179 - val_loss: 0.1025
Epoch 140/350
3/3 - 0s - loss: 0.0176 - val_loss: 0.1118
Epoch 141/350
3/3 - 0s - loss: 0.0171 - val_loss: 0.1197
Epoch 142/350
3/3 - 0s - loss: 0.0171 - val_loss: 0.1228
Epoch 143/350
3/3 - 0s - loss: 0.0176 - val_loss: 0.1332
Epoch 144/350
3/3 - 0s - loss: 0.0196 - val_loss: 0.1438
Epoch 145/350
3/3 - 0s - loss: 0.0174 - val_loss: 0.1417
Epoch 146/350
3/3 - 0s - loss: 0.0180 - val_loss: 0.1549
###Markdown
Visualization
###Code
p1 = plt.plot(history.history['loss'], color='blue', label='train')
p2 = plt.plot(history.history['val_loss'], color='red',label='test')
plt.legend()
###Output
_____no_output_____
###Markdown
Evaluation
###Code
g_t = Yt_valid.flatten()
pred = model.predict(Xt_valid[:, :, np.newaxis]).flatten()
pred_base = pd.read_csv('./runs_base/base_data_plot/pred_base_LSTM.csv', header=None)
g_t_base = pd.read_csv('./runs_base/base_data_plot/g_t_base_LSTM.csv', header=None)
plt.rc('text', usetex=False)
plt.rcParams["font.family"] = "Times New Roman"
plt.figure(figsize=[20, 6], dpi=300)
diff = g_t_base.shape[0]-g_t.shape[0]
plt.plot(range(g_t.shape[0]), g_t_base[diff:]*(903-15)+15, 'b', label='Ground Truth')
plt.plot(range(g_t.shape[0]), pred_base[diff:]*(903-15)+15, 'g', label='Base Model (LSTM)')
# plt.figure()
# plt.plot(range(371), g_t_bda)
plt.plot(range(g_t.shape[0]), pred*(903-15)+15, 'r', label='BDA (LSTM)')
plt.legend(loc=1, fontsize=18)
plt.xlabel('Time [15 min]', fontsize=18)
plt.ylabel('Flow [veh/hr]', fontsize=18)
print(bda_utils.nrmse_loss_func(pred, g_t, 0))
print(bda_utils.smape_loss_func(pred, g_t, 0))
print(bda_utils.mae_loss_func(pred, g_t, 0))
print(bda_utils.mape_loss_func(pred, g_t, 0))
###Output
0.1681626189033309
0.43906857635307583
0.13474173405573825
1.206917218793393
|
examples/expert_section/notebooks/create_sparse_netflow_data.ipynb | ###Markdown
Create a more interesting network flow model that exercises slicing and sparsity.
###Code
from netflow import input_schema
inf = float("inf")
dat = input_schema.TicDat(**{'arcs': [[u'warehouse_0', u'customer_4', inf],
[u'plant_12', u'warehouse_1', inf],
[u'warehouse_2', u'customer_8', inf],
[u'warehouse_1', u'customer_3', inf],
[u'warehouse_2', u'customer_6', inf],
[u'plant_6', u'warehouse_1', inf],
[u'warehouse_0', u'customer_9', inf],
[u'warehouse_1', u'customer_7', inf],
[u'warehouse_1', u'customer_5', inf],
[u'plant_4', u'warehouse_2', inf],
[u'plant_9', u'warehouse_0', inf],
[u'warehouse_0', u'customer_7', inf],
[u'warehouse_1', u'customer_2', inf],
[u'warehouse_1', u'customer_0', inf],
[u'warehouse_0', u'customer_1', inf],
[u'warehouse_0', u'customer_6', inf],
[u'warehouse_2', u'customer_3', inf],
[u'plant_14', u'warehouse_0', inf],
[u'plant_1', u'warehouse_1', inf],
[u'plant_5', u'warehouse_0', inf],
[u'warehouse_0', u'customer_2', inf],
[u'plant_13', u'warehouse_2', inf],
[u'plant_0', u'warehouse_0', inf],
[u'warehouse_2', u'customer_1', inf],
[u'warehouse_1', u'customer_8', inf]],
'commodities': [[u'P2'], [u'P3'], [u'P0'], [u'P1'], [u'P4']],
'cost': [[u'P2', u'plant_13', u'warehouse_2', 1.0],
[u'P2', u'warehouse_2', u'customer_8', 1.0],
[u'P1', u'warehouse_1', u'customer_0', 1.0],
[u'P3', u'warehouse_0', u'customer_4', 1.0],
[u'P4', u'warehouse_1', u'customer_3', 1.0],
[u'P1', u'plant_1', u'warehouse_1', 1.0],
[u'P2', u'warehouse_2', u'customer_6', 1.0],
[u'P2', u'plant_4', u'warehouse_2', 1.0],
[u'P0', u'warehouse_0', u'customer_1', 1.0],
[u'P1', u'warehouse_1', u'customer_2', 1.0],
[u'P1', u'plant_6', u'warehouse_1', 1.0],
[u'P0', u'warehouse_0', u'customer_9', 1.0],
[u'P1', u'warehouse_1', u'customer_5', 1.0],
[u'P3', u'warehouse_0', u'customer_7', 1.0],
[u'P4', u'plant_12', u'warehouse_1', 1.0],
[u'P4', u'warehouse_1', u'customer_5', 1.0],
[u'P1', u'warehouse_1', u'customer_7', 1.0],
[u'P0', u'plant_0', u'warehouse_0', 1.0],
[u'P0', u'plant_14', u'warehouse_0', 1.0],
[u'P2', u'warehouse_2', u'customer_3', 1.0],
[u'P4', u'plant_1', u'warehouse_1', 1.0],
[u'P3', u'plant_5', u'warehouse_0', 1.0],
[u'P0', u'warehouse_0', u'customer_6', 1.0],
[u'P3', u'warehouse_0', u'customer_2', 1.0],
[u'P3', u'plant_9', u'warehouse_0', 1.0],
[u'P4', u'warehouse_1', u'customer_0', 1.0],
[u'P2', u'warehouse_2', u'customer_1', 1.0],
[u'P0', u'warehouse_0', u'customer_4', 1.0],
[u'P4', u'warehouse_1', u'customer_8', 1.0],
[u'P3', u'warehouse_0', u'customer_9', 1.0]],
'inflow': [[u'P1', u'customer_0', -10.0],
[u'P0', u'customer_9', -10.0],
[u'P4', u'plant_1', 20.0],
[u'P0', u'plant_0', 20.0],
[u'P1', u'customer_2', -10.0],
[u'P2', u'customer_1', -10.0],
[u'P2', u'plant_4', 20.0],
[u'P2', u'customer_8', -10.0],
[u'P2', u'customer_3', -10.0],
[u'P1', u'plant_1', 20.0],
[u'P1', u'plant_6', 20.0],
[u'P3', u'customer_9', -10.0],
[u'P4', u'customer_3', -10.0],
[u'P3', u'customer_4', -10.0],
[u'P2', u'plant_13', 20.0],
[u'P0', u'customer_1', -10.0],
[u'P4', u'customer_8', -10.0],
[u'P3', u'plant_5', 20.0],
[u'P0', u'customer_6', -10.0],
[u'P1', u'customer_5', -10.0],
[u'P4', u'customer_5', -10.0],
[u'P0', u'customer_4', -10.0],
[u'P1', u'customer_7', -10.0],
[u'P3', u'customer_7', -10.0],
[u'P4', u'plant_12', 20.0],
[u'P3', u'customer_2', -10.0],
[u'P2', u'customer_6', -10.0],
[u'P3', u'plant_9', 20.0],
[u'P4', u'customer_0', -10.0],
[u'P0', u'plant_14', 20.0]],
'nodes': [[u'customer_9'],
[u'customer_8'],
[u'customer_7'],
[u'customer_6'],
[u'customer_5'],
[u'customer_4'],
[u'customer_3'],
[u'customer_2'],
[u'customer_1'],
[u'customer_0'],
[u'warehouse_2'],
[u'warehouse_1'],
[u'warehouse_0'],
[u'plant_9'],
[u'plant_8'],
[u'plant_1'],
[u'plant_0'],
[u'plant_3'],
[u'plant_2'],
[u'plant_5'],
[u'plant_4'],
[u'plant_7'],
[u'plant_6'],
[u'plant_14'],
[u'plant_11'],
[u'plant_10'],
[u'plant_13'],
[u'plant_12']]})
###Output
_____no_output_____
###Markdown
Bear in mind the Inflow table isn't fully populated.
###Code
len(dat.inflow), len(dat.commodities) * len(dat.nodes)
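# sanity check on sparsity: only 30 of the 5 * 28 = 140 possible (commodity, node) pairs
# carry an explicit inflow entry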
# using the locally installed engine on my computer, comment this out for reproducing
%env PATH = PATH:/Users/XXXXXX/ampl/ampl
from netflow import solve
sln = solve(dat)
sln.parameters
###Output
_____no_output_____ |
examples/notebooks/Converting Long-Format to Wide-Format.ipynb | ###Markdown
Converting long-format dataframes to wide-format

The purpose of this notebook is to demonstrate the conversion of long-format data into wide-format. Long-format data contains one row per available alternative per choice situation. In contrast, wide-format data contains one row per choice situation. PyLogit and other software packages (e.g. mlogit in R) use data that is in long-format. However, other software packages, such as Statsmodels in Python or Python BIOGEME, use data that is in wide-format.

Because different software packages have different data format requirements, it is useful to be able to convert one's data from one format to another. Other PyLogit example notebooks (such as the "Main PyLogit Example") demonstrate how to take data from wide-format and convert it into long-format. This notebook will demonstrate the reverse process: taking data from long-format and converting it into wide-format.

The dataset being used in this example is the "Travel Mode Choice" dataset from Greene and Hensher. It is described on the statsmodels website, and their description is reproduced below in full.

    The data, collected as part of a 1987 intercity mode choice study, are a sub-sample of 210 non-business trips between Sydney, Canberra and Melbourne in which the traveler chooses a mode from four alternatives (plane, car, bus and train). The sample, 840 observations, is choice based with over-sampling of the less popular modes (plane, train and bus) and under-sampling of the more popular mode, car. The level of service data was derived from highway and transport networks in Sydney, Melbourne, non-metropolitan N.S.W. and Victoria, including the Australian Capital Territory.

    Number of observations: 840 Observations On 4 Modes for 210 Individuals.
    Number of variables: 8
    Variable name definitions:
        individual = 1 to 210
        mode = 1 - air; 2 - train; 3 - bus; 4 - car
        choice = 0 - no; 1 - yes
        ttme = terminal waiting time for plane, train and bus (minutes); 0 for car.
        invc = in vehicle cost for all stages (dollars).
        invt = travel time (in-vehicle time) for all stages (minutes).
        gc = generalized cost measure: invc + (invt * value of travel time savings) (dollars).
        hinc = household income ($1000s).
        psize = traveling group size in mode chosen (number).

    Source: Greene, W.H. and D. Hensher (1997) Multinomial logit and discrete choice models in Greene, W. H. (1997) LIMDEP version 7.0 user's manual revised, Plainview, New York econometric software, Inc. Download from on-line complements to Greene, W.H. (2011) Econometric Analysis, Prentice Hall, 7th Edition (data table F18-2) http://people.stern.nyu.edu/wgreene/Text/Edition7/TableF18-2.csv
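To make the two layouts concrete, here is a tiny made-up illustration (the values below are invented, not part of the Travel Mode Choice data): one choice situation with two alternatives occupies two rows in long format but a single row in wide format.
###Code
# Toy illustration only -- invented values, just to show the two layouts side by side
import pandas as pd
toy_long = pd.DataFrame({"obs_id": [1, 1],
                         "alt_id": ["air", "car"],
                         "choice": [1, 0],
                         "cost": [100, 40]})
toy_wide = pd.DataFrame({"obs_id": [1],
                         "choice": ["air"],
                         "availability_air": [1],
                         "availability_car": [1],
                         "cost_air": [100],
                         "cost_car": [40]})
###Output
_____no_output_____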
###Code
# To access the Travel Mode Choice data
import statsmodels.datasets
# To perform the dataset conversion
import pylogit as pl
###Output
_____no_output_____
###Markdown
Load the needed dataset
###Code
# Access the dataset
mode_data = statsmodels.datasets.modechoice.load_pandas()
# Get a pandas dataframe of the mode choice data
long_df = mode_data["data"]
# Look at the dataframe to ensure that it loaded correctly
long_df.head()
###Output
_____no_output_____
###Markdown
Create the needed variables for the conversion function.The function in PyLogit that is used to convert long-format data to wide-format data is "convert_long_to_wide," and it can be accessed through "pl.convert_long_to_wide". The docstring for the function contains all of the information necessary to perform the conversion, but we will leave it to readers to view the docstring at their own leisure. For now, we will simply create the needed objects/arguments for the function.In particular, we will need the following 7 objects:1. ind_vars2. alt_specific_vars3. subset_specific_vars4. obs_id_col5. alt_id_col6. choice_col7. alt_name_dictThe cells below will show exactly what these objects are.
###Code
# ind_vars is a list of strings denoting the column
# headings of data that varies across choice situations,
# but not across alternatives. In our data, this is
# the household income and party size.
individual_specific_variables = ["hinc", "psize"]
# alt_specific_vaars is a list of strings denoting the
# column headings of data that vary not only across
# choice situations but also across all alternatives.
# These are columns such as the "level of service"
# variables.
alternative_specific_variables = ["invc", "invt", "gc"]
# subset_specific_vars is a dictionary. Each key is a
# string that denotes a variable that is subset specific.
# Each value is a list of alternative ids, over which the
# variable actually varies. Note that subset specific
# variables vary across choice situations and across some
# (but not all) alternatives. This is most common when
# using variables that are not meaningfully defined for
# all alternatives. An example of this in our dataset is
# terminal time ("ttme"). This variable is not meaningfully
# defined for the "car" alternative. Therefore, it is always
# zero. Note "4" is the id for the "car" alternative
subset_specific_variables = {"ttme": [1, 2, 3]}
# obs_id_col is the column denoting the id of the choice
# situation. If one was using a panel dataset, with multiple
# choice situations per unit of observation, the column
# denoting the unit of observation would be listed in
# ind_vars (i.e. with the individual specific variables)
observation_id_column = "individual"
# alt_id_col is the column denoting the id of the alternative
# corresponding to a given row.
alternative_id_column = "mode"
# choice_col is the column denoting whether the alternative
# on a given row was chosen in the corresponding choice situation
choice_column = "choice"
# Lastly, alt_name_dict is not necessary. However, it is useful.
# It records the names corresponding to each alternative, if there
# are any, and allows for the creation of meaningful column names
# in the wide-format data (such as when creating the columns
# denoting the available alternatives in each choice situation).
# The keys of alt_name_dict are the unique alternative ids, and
# the values are the names of each alternative.
alternative_name_dict = {1: "air",
2: "train",
3: "bus",
4: "car"}
###Output
_____no_output_____
###Markdown
Create the wide-format dataframe
###Code
# Finally, we can create the wide format dataframe
wide_df = pl.convert_long_to_wide(long_df,
individual_specific_variables,
alternative_specific_variables,
subset_specific_variables,
observation_id_column,
alternative_id_column,
choice_column,
alternative_name_dict)
# Let's look at the created dataframe, transposed for easy viewing
wide_df.head().T
###Output
_____no_output_____ |
.ipynb_checkpoints/word2vec_skills-checkpoint.ipynb | ###Markdown
Testing
###Code
model.similar_by_word('machine_learning')
model.similar_by_word('python')
model.similar_by_word('css')
model.similar_by_word('html')
model.similar_by_word('html5')
model.similar_by_word('bootstrap')
model.similar_by_word('javascript')
model.similar_by_word('nodejs')
model.similar_by_word('node.js')
model.similar_by_word('php')
model.similar_by_word('c++')
model.similar_by_word('web')
model.similar_by_word('rails')
model.similar_by_word('ruby')
model.similar_by_word('mysql')
model.similar_by_word('db2')
model.similar_by_word('sql')
model.similar_by_word('mssql')
model.similar_by_word('db2')
model.similar_by_word('html5')
model.similar_by_word('oracle')
model.similar_by_word('php5')
model.similar_by_word('asp')
model.similar_by_word('svm')
model.similar_by_word('django')
model.similar_by_word('mongodb')
model.similar_by_word('mongo')
model.similar_by_word('falcon')
model.similar_by_word('express')
model.similar_by_word('spark')
model.similar_by_word('hadoop')
model.similar_by_word('hive')
model.similar_by_word('impala')
model.similar_by_word('oozie')
model.similar_by_word('nginx')
model.similar_by_word('rest')
model.similar_by_word('.net')
model.similar_by_word('perl')
model.similar_by_word('unity')
model.similar_by_word('3d')
model.similar_by_word('wordpress')
model.similar_by_word('jquery')
model.similar_by_word('ajax')
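# A couple of further queries one could try on the same model (left commented out here;
# they assume the standard gensim API that `similar_by_word` above also comes from):
# model.similarity('python', 'django')  # cosine similarity between two skills
# model.most_similar(positive=['javascript', 'python'], negative=['html'])  # analogy-style query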
###Output
_____no_output_____ |
inference/Worksheet2.ipynb | ###Markdown
Worksheet 2: Inference with two parameters

This time, let's assume we don't know the *phase* of the sine curve, either. Our model is now $m(P) = 1 + 0.1 \sin\left(\frac{2\pi}{P} t + \phi\right)$ where (as before) $m$ is the model, $P$ is the period, $t$ is time, and now $\phi$ is a phase offset in radians.

1. Import the modules we'll need
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
###Output
_____no_output_____
###Markdown
2. Load the datasetIt's stored in the text file ``data/worksheet2.txt``. We'll load the time array ``time``, the flux array ``flux``, and the array of uncertainties ``err``:
###Code
time, flux, err = np.loadtxt("data/worksheet2.txt").T
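# Sketch of the two-parameter model written out above (period P and phase phi are the
# free parameters); the worksheet may define its own version of this further on.
def sine_model(t, P, phi):
    return 1 + 0.1 * np.sin(2 * np.pi / P * t + phi)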
###Output
_____no_output_____ |
examples/mirs-cnn-deepshap.ipynb | ###Markdown
INTERPRETING INDIVIDUAL CNN PREDICTION USING SHAP VALUES 1. Google Colab runtime setup [Optional]
###Code
from google.colab import drive
drive.mount('/content/drive')
# Clone and install spectrai package
!git clone https://github.com/franckalbinet/spectrai.git
!pip install /content/spectrai
# Prepare /root folder content
!cp -r /content/drive/My\ Drive/Colab\ Notebooks/data/data_spectrai /root
# Create configuration file
!mkdir /root/.spectrai_config & cp /content/spectrai/config.toml /root/.spectrai_config
###Output
_____no_output_____
###Markdown
2. Import packages
###Code
# To train on a GPU
!pip install tensorflow-gpu
!pip install shap
from spectrai.datasets.kssl import (get_tax_orders_lookup_tbl, load_data)
from spectrai.vis.spectra import (plot_spectra)
from spectrai.metrics.keras import rpd
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.optimizers import RMSprop, Adam
from tensorflow.keras.optimizers.schedules import ExponentialDecay
from tensorflow.keras.callbacks import ReduceLROnPlateau
from tensorflow.keras import layers, models, Model
import tensorflow.keras.backend as K
import tensorflow.keras.utils as utils
import shap
import matplotlib.pyplot as plt
from matplotlib import cm
###Output
_____no_output_____
###Markdown
3. Load KSSL dataset
###Code
# Loading data ("Potassium, NH4OAc: 725" or "CEC": 723 for instance)
X, X_names, y, y_names, instances_id = load_data(analytes=[725])
print('X shape: ', X.shape)
print('X approx. memory size: {} MB'.format(X.nbytes // 10**6))
print('y approx. memory size: {} MB'.format(y.nbytes // 10**6))
print('Wavenumbers: ', X_names)
print('Target variable: ', y_names)
plt.hist(y[:,-1], cumulative=True, log=True)
###Output
_____no_output_____
###Markdown
4. Data preparation
###Code
# Keeping data with analyte concentration > 0 only (the 'alfisols' taxonomic-order filter is left commented out below).
#TAX_ORDER_ID = 0
idx_y_valid = y[:, -1] > 0
#idx_y_valid = (y[:, -1] > 0) & (y[:, -1] <= np.quantile(y[:, -1], 0.5))
#idx_order = y[:,1] == TAX_ORDER_ID
#idx = idx_y_valid & idx_order
X = X[idx_y_valid,:]
y = y[idx_y_valid,:]
# Scale data
#scaler = MinMaxScaler()
#X = scaler.fit_transform(X)
X = (X - np.min(X))/(np.max(X) - np.min(X))
# Discretize output to convert to a classification problem
NB_BINS = 3
bins = np.linspace(0, 100, NB_BINS+1)[1:-1]
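# -> [33.33, 66.67]: percentile cut points giving three equal-frequency bins of the target
# (the middle bin is dropped by the mask further below, keeping only low vs high samples)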
print(bins)
percentiles = np.percentile(y[:, -1], bins)
print(percentiles)
y_discretized = np.digitize(y[:,-1], percentiles)
print(y_discretized)
mask = np.isin(y_discretized, [0, 2])
print(mask)
X = X[mask,:]
print(X.shape)
y = y_discretized[mask]
print(y.shape)
# rename classes from 0, 1, ...
for i, cat in enumerate(list(np.unique(y))):
y[y == cat] = i
# Creating train, valid, test sets
#X, X_test, y, y_test = train_test_split(X, y[:, -1], test_size=0.20, random_state=42)
X, X_test, y, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2, random_state=42)
print('X test shape: ', X_test.shape)
print('y test shape: ', y_test.shape)
print('X valid shape: ', X_valid.shape)
print('X train shape: ', X_train.shape)
print('y train shape: ', y_train.shape)
plt.imshow(np.expand_dims(X_train[0,:], axis=1).T, cmap='magma', aspect=200);
plt.axis('off');
plot_spectra(X_train[10:15,:], X_names)
###Output
_____no_output_____
###Markdown
5. Defining CNN model
###Code
# Model "optimized" for K
activation = 'relu'
input_dim = X_train.shape[1]
num_classes = len(np.unique(y_train))
model = keras.models.Sequential()
model.add(layers.Reshape((input_dim, 1), input_shape=(input_dim,)))
#model.add(layers.BatchNormalization())
model.add(layers.Conv1D(32, 30, activation=activation))
model.add(layers.MaxPool1D(4))
#model.add(layers.BatchNormalization())
model.add(layers.Conv1D(64, 30, activation=activation))
model.add(layers.MaxPool1D(4))
#model.add(layers.BatchNormalization())
model.add(layers.Conv1D(128, 30, activation=activation))
model.add(layers.MaxPool1D(4))
#model.add(layers.Dropout(rate=0.4))
model.add(layers.Flatten())
#model.add(layers.BatchNormalization())
model.add(layers.Dense(100, activation=activation))
#model.add(layers.BatchNormalization())
model.add(layers.Dense(50, activation=activation))
#model.add(layers.BatchNormalization())
model.add(layers.Dense(10, activation=activation))
#model.add(layers.Dropout(rate=0.2))
model.add(layers.Dense(num_classes, activation='softmax'))
model.compile(optimizer=Adam(learning_rate=1e-3), loss=keras.losses.categorical_crossentropy, metrics=['accuracy', tf.keras.metrics.TruePositives(), tf.keras.metrics.TrueNegatives()])
model.summary()
# Wartini's architecture
activation = 'relu'
input_dim = X_train.shape[1]
num_classes = len(np.unique(y_train))
model = keras.models.Sequential()
model.add(layers.Reshape((input_dim, 1), input_shape=(input_dim,)))
model.add(layers.Conv1D(32, 20, activation=activation))
model.add(layers.MaxPool1D(2))
model.add(layers.Conv1D(64, 20, activation=activation))
model.add(layers.MaxPool1D(5))
model.add(layers.Conv1D(128, 20, activation=activation))
model.add(layers.MaxPool1D(5))
model.add(layers.Conv1D(256, 20, activation=activation))
model.add(layers.MaxPool1D(5))
model.add(layers.Dropout(rate=0.4))
model.add(layers.Flatten())
model.add(layers.Dense(100, activation=activation))
model.add(layers.Dropout(rate=0.2))
model.add(layers.Dense(num_classes, activation='softmax'))
model.compile(optimizer=Adam(learning_rate=1e-3), loss=keras.losses.categorical_crossentropy, metrics=['accuracy', tf.keras.metrics.TruePositives(), tf.keras.metrics.TrueNegatives()])
model.summary()
y_train = keras.utils.to_categorical(y_train)
y_valid = keras.utils.to_categorical(y_valid)
print('# of low values: ', np.sum(y_valid == 0))
print('# of high values: ', np.sum(y_valid == 1))
###Output
# of low values: 5188
# of high values: 5188
###Markdown
6. Training the model
###Code
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=3, min_lr=1e-3)
history = model.fit(X_train, y_train, epochs=20, validation_data=(X_valid, y_valid), callbacks=[reduce_lr])
###Output
Epoch 1/20
649/649 [==============================] - 10s 15ms/step - loss: 0.5805 - accuracy: 0.6637 - true_positives: 13772.0000 - true_negatives: 13772.0000 - val_loss: 0.4061 - val_accuracy: 0.8277 - val_true_positives: 4294.0000 - val_true_negatives: 4294.0000
Epoch 2/20
649/649 [==============================] - 9s 14ms/step - loss: 0.3765 - accuracy: 0.8408 - true_positives: 17448.0000 - true_negatives: 17448.0000 - val_loss: 0.3492 - val_accuracy: 0.8500 - val_true_positives: 4410.0000 - val_true_negatives: 4410.0000
Epoch 3/20
649/649 [==============================] - 9s 14ms/step - loss: 0.3481 - accuracy: 0.8548 - true_positives: 17737.0000 - true_negatives: 17737.0000 - val_loss: 0.3343 - val_accuracy: 0.8589 - val_true_positives: 4456.0000 - val_true_negatives: 4456.0000
Epoch 4/20
649/649 [==============================] - 9s 14ms/step - loss: 0.3294 - accuracy: 0.8634 - true_positives: 17916.0000 - true_negatives: 17916.0000 - val_loss: 0.3149 - val_accuracy: 0.8664 - val_true_positives: 4495.0000 - val_true_negatives: 4495.0000
Epoch 5/20
649/649 [==============================] - 9s 14ms/step - loss: 0.3246 - accuracy: 0.8652 - true_positives: 17953.0000 - true_negatives: 17953.0000 - val_loss: 0.3411 - val_accuracy: 0.8583 - val_true_positives: 4453.0000 - val_true_negatives: 4453.0000
Epoch 6/20
649/649 [==============================] - 9s 14ms/step - loss: 0.3148 - accuracy: 0.8681 - true_positives: 18014.0000 - true_negatives: 18014.0000 - val_loss: 0.3227 - val_accuracy: 0.8691 - val_true_positives: 4509.0000 - val_true_negatives: 4509.0000
Epoch 7/20
649/649 [==============================] - 9s 14ms/step - loss: 0.3072 - accuracy: 0.8715 - true_positives: 18084.0000 - true_negatives: 18084.0000 - val_loss: 0.3090 - val_accuracy: 0.8710 - val_true_positives: 4519.0000 - val_true_negatives: 4519.0000
Epoch 8/20
649/649 [==============================] - 9s 14ms/step - loss: 0.3025 - accuracy: 0.8737 - true_positives: 18130.0000 - true_negatives: 18130.0000 - val_loss: 0.2895 - val_accuracy: 0.8811 - val_true_positives: 4571.0000 - val_true_negatives: 4571.0000
Epoch 9/20
649/649 [==============================] - 10s 15ms/step - loss: 0.2955 - accuracy: 0.8776 - true_positives: 18212.0000 - true_negatives: 18212.0000 - val_loss: 0.2953 - val_accuracy: 0.8745 - val_true_positives: 4537.0000 - val_true_negatives: 4537.0000
Epoch 10/20
649/649 [==============================] - 9s 14ms/step - loss: 0.2913 - accuracy: 0.8802 - true_positives: 18266.0000 - true_negatives: 18266.0000 - val_loss: 0.2999 - val_accuracy: 0.8737 - val_true_positives: 4533.0000 - val_true_negatives: 4533.0000
Epoch 11/20
649/649 [==============================] - 9s 14ms/step - loss: 0.2861 - accuracy: 0.8802 - true_positives: 18265.0000 - true_negatives: 18265.0000 - val_loss: 0.2964 - val_accuracy: 0.8778 - val_true_positives: 4554.0000 - val_true_negatives: 4554.0000
Epoch 12/20
649/649 [==============================] - 9s 14ms/step - loss: 0.2846 - accuracy: 0.8832 - true_positives: 18328.0000 - true_negatives: 18328.0000 - val_loss: 0.2729 - val_accuracy: 0.8869 - val_true_positives: 4601.0000 - val_true_negatives: 4601.0000
Epoch 13/20
649/649 [==============================] - 9s 14ms/step - loss: 0.2759 - accuracy: 0.8872 - true_positives: 18410.0000 - true_negatives: 18410.0000 - val_loss: 0.2750 - val_accuracy: 0.8876 - val_true_positives: 4605.0000 - val_true_negatives: 4605.0000
Epoch 14/20
649/649 [==============================] - 9s 14ms/step - loss: 0.2784 - accuracy: 0.8864 - true_positives: 18394.0000 - true_negatives: 18394.0000 - val_loss: 0.2873 - val_accuracy: 0.8832 - val_true_positives: 4582.0000 - val_true_negatives: 4582.0000
Epoch 15/20
649/649 [==============================] - 9s 14ms/step - loss: 0.2714 - accuracy: 0.8895 - true_positives: 18459.0000 - true_negatives: 18459.0000 - val_loss: 0.2730 - val_accuracy: 0.8917 - val_true_positives: 4626.0000 - val_true_negatives: 4626.0000
Epoch 16/20
649/649 [==============================] - 9s 14ms/step - loss: 0.2663 - accuracy: 0.8917 - true_positives: 18503.0000 - true_negatives: 18503.0000 - val_loss: 0.2605 - val_accuracy: 0.8975 - val_true_positives: 4656.0000 - val_true_negatives: 4656.0000
Epoch 17/20
649/649 [==============================] - 9s 14ms/step - loss: 0.2635 - accuracy: 0.8920 - true_positives: 18510.0000 - true_negatives: 18510.0000 - val_loss: 0.2766 - val_accuracy: 0.8909 - val_true_positives: 4622.0000 - val_true_negatives: 4622.0000
Epoch 18/20
649/649 [==============================] - 9s 14ms/step - loss: 0.2598 - accuracy: 0.8935 - true_positives: 18542.0000 - true_negatives: 18542.0000 - val_loss: 0.2723 - val_accuracy: 0.8907 - val_true_positives: 4621.0000 - val_true_negatives: 4621.0000
Epoch 19/20
649/649 [==============================] - 9s 14ms/step - loss: 0.2589 - accuracy: 0.8948 - true_positives: 18569.0000 - true_negatives: 18569.0000 - val_loss: 0.2645 - val_accuracy: 0.9009 - val_true_positives: 4674.0000 - val_true_negatives: 4674.0000
Epoch 20/20
649/649 [==============================] - 9s 14ms/step - loss: 0.2515 - accuracy: 0.9000 - true_positives: 18675.0000 - true_negatives: 18675.0000 - val_loss: 0.2686 - val_accuracy: 0.8973 - val_true_positives: 4655.0000 - val_true_negatives: 4655.0000
###Markdown
7. Saving model
###Code
MODEL_PATH = '/content/drive/My Drive/Colab Notebooks/models/cnn-wartini-10-epochs-CEC-classif-2-classes.h5'
model.save(MODEL_PATH)
MODEL_PATH = '/content/drive/My Drive/Colab Notebooks/models/cnn-basic-50-epochs-cec.h5'
model = models.load_model(MODEL_PATH, custom_objects={'rpd': rpd})
###Output
_____no_output_____
###Markdown
9. Wavenumbers influence using SHAP values
###Code
# select a set of background examples to take an expectation over
background = X_train[np.random.choice(X_train.shape[0], 100, replace=False)]
# explain the model's predictions on a few test spectra
e = shap.DeepExplainer(model, background)
# ...or pass tensors directly
shap_values = e.shap_values(X_test[1:10])
#'LOW (CEC <= 10 cmol(+)/kg)', 'HIGH (CEC >= 21 cmol(+)/kg)'
def plot_shap(shap_values, X, X_names,
y_hat, y_true,
class_label=['LOW', 'HIGH'],
figsize=(25, 16), accent_color='firebrick'):
nb_spectra = X.shape[0]
nb_classes = len(shap_values)
min_shap, max_shap = np.min(shap_values), np.max(shap_values)
fig, ax = plt.subplots(nb_spectra, nb_classes + 1, sharey='col', figsize=figsize)
for row in range(nb_spectra):
for col in range(nb_classes +1):
# Plot spectra
ax[row, col].set_xlim(np.max(X_names), np.min(X_names))
alpha = 1 if col == 0 else 0.2
ax[row, col].plot(X_names, X[row], color='black', alpha=alpha)
ax[row, col].spines['top'].set_visible(False)
ax[row, col].spines['right'].set_visible(False)
ax[row, col].spines['left'].set_visible(False)
ax[row, col].spines['bottom'].set_visible(False)
ax[row, col].get_yaxis().set_visible(False)
if row == 0:
label = 'Spectra' if col == 0 else class_label[col-1]
ax[row, col].set_title(label)
if row == nb_spectra - 1:
ax[row, col].set_xlabel('Wavenumber')
if col == 0:
ax[row, col].annotate('Ground truth: {}'.format(class_label[y_true[row]]), xy=[0, 0.8], xycoords='axes fraction', color='steelblue')
# Plot SHAP values
if col >= 1:
ax_twin = ax[row, col].twinx()
ax_twin.set_ylim(min_shap, max_shap)
ax_twin.spines['top'].set_visible(False)
ax_twin.spines['right'].set_visible(False)
ax_twin.spines['left'].set_visible(False)
ax_twin.spines['bottom'].set_visible(False)
ax_twin.tick_params('y', colors=accent_color)
ax_twin.plot(X_names, shap_values[col - 1][row], color=accent_color, alpha=0.6, linewidth=1)
if col == nb_classes:
ax_twin.set_ylabel('SHAP value', color=accent_color)
ax[row, col].annotate('Probability : {:.2f}'.format(y_hat[row, col - 1]), xy=[0, 0.8], xycoords='axes fraction', color='steelblue')
plt.show()
plot_shap(shap_values, X_test[1:10], X_names, model.predict(X_test[1:10]), y_test[1:10])
###Output
_____no_output_____ |
module2/assignment_regression_classification_Original.ipynb | ###Markdown
Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 2 AssignmentYou'll continue to **predict how much it costs to rent an apartment in NYC,** using the dataset from renthop.com.- [ ] Do train/test split. Use data from April & May 2016 to train. Use data from June 2016 to test.- [ ] Engineer at least two new features. (See below for explanation & ideas.)- [ ] Fit a linear regression model with at least two features.- [ ] Get the model's coefficients and intercept.- [ ] Get regression metrics RMSE, MAE, and $R^2$, for both the train and test data.- [ ] What's the best test MAE you can get? Share your score and features used with your cohort on Slack!- [ ] As always, commit your notebook to your fork of the GitHub repo. [Feature Engineering](https://en.wikipedia.org/wiki/Feature_engineering)> "Some machine learning projects succeed and some fail. What makes the difference? Easily the most important factor is the features used." โโPedro Domingos, ["A Few Useful Things to Know about Machine Learning"](https://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf)> "Coming up with features is difficult, time-consuming, requires expert knowledge. 'Applied machine learning' is basically feature engineering." โโAndrew Ng, [Machine Learning and AI via Brain simulations](https://forum.stanford.edu/events/2011/2011slides/plenary/2011plenaryNg.pdf) > Feature engineering is the process of using domain knowledge of the data to create features that make machine learning algorithms work. Feature Ideas- Does the apartment have a description?- How long is the description?- How many total perks does each apartment have?- Are cats _or_ dogs allowed?- Are cats _and_ dogs allowed?- Total number of rooms (beds + baths)- Ratio of beds to baths- What's the neighborhood, based on address or latitude & longitude? Stretch Goals- [ ] If you want more math, skim [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 3.1, Simple Linear Regression, & Chapter 3.2, Multiple Linear Regression- [ ] If you want more introduction, watch [Brandon Foltz, Statistics 101: Simple Linear Regression](https://www.youtube.com/watch?v=ZkjP5RJLQF4)(20 minutes, over 1 million views)- [ ] Do the [Plotly Dash](https://dash.plot.ly/) Tutorial, Parts 1 & 2.- [ ] Add your own stretch goal(s) !
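As a quick illustration of a couple of the feature ideas above, here is a minimal sketch (assuming the renthop `df` loaded in the cells below; the column names exist in the dataset, but the new feature names are purely illustrative -- the fuller feature engineering comes later in the notebook):

```python
# Hypothetical sketch of a few feature ideas (not the assignment's final code)
df['description_length'] = df['description'].fillna('').str.len()   # how long is the description?
df['total_rooms'] = df['bedrooms'] + df['bathrooms']                 # beds + baths
df['pets_allowed'] = ((df['cats_allowed'] == 1) | (df['dogs_allowed'] == 1)).astype(int)  # cats or dogs
```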
###Code
# If you're in Colab...
import os, sys
in_colab = 'google.colab' in sys.modules
if in_colab:
# Install required python packages:
# pandas-profiling, version >= 2.0
# plotly, version >= 4.0
!pip install --upgrade pandas-profiling plotly
# Pull files from Github repo
os.chdir('/content')
!git init .
!git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git
!git pull origin master
# Change into directory for module
os.chdir('module1')
# Installs needed for neighborhood lookup
!pip install fiona
!pip install geopandas
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Load the data
import numpy as np
import pandas as pd
# Read New York City apartment rental listing data
df = pd.read_csv('../data/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= np.percentile(df['price'], 0.5)) &
(df['price'] <= np.percentile(df['price'], 99.5)) &
(df['latitude'] >= np.percentile(df['latitude'], 0.05)) &
(df['latitude'] < np.percentile(df['latitude'], 99.95)) &
(df['longitude'] >= np.percentile(df['longitude'], 0.05)) &
(df['longitude'] <= np.percentile(df['longitude'], 99.95))]
df.head()
###Output
_____no_output_____
###Markdown
Example of Neighborhood Lookup
###Code
# Use NYC geojson map to assign neighborhoods based on long/lat coordinates
import geopandas
import json
from shapely.geometry import shape, Point
nyc = geopandas.read_file('https://raw.githubusercontent.com/JimKing100/DS-Unit-2-Regression-Classification/master/module2/NYC.geojson')
# The neighborhood lookup function
def neighborhood(long, lat):
# construct point based on long/lat returned by geocoder
point = Point(long, lat)
n_name = 'Unknown'
# check each polygon to see if it contains the point
for i in range(0, len(nyc.index)):
polygon = shape(nyc.loc[i]['geometry'])
if polygon.contains(point):
n_name = nyc.loc[i]['ntaname']
return n_name
return n_name
longitude = -73.9667
latitude = 40.7947
n_name = neighborhood(longitude, latitude)
print(n_name)
###Output
Upper West Side
###Markdown
Create Train and Test
###Code
# Convert created to datetime
df['created'] = pd.to_datetime(df['created'], infer_datetime_format=True)
df['created'].describe()
# Split off the train data by April and May
mask = (df['created'] > '2016-03-31') & (df['created'] < '2016-06-01')
train = df.loc[mask]
train['created'].describe()
# Split off the test data by June
mask = (df['created'] >= '2016-06-01')
test = df.loc[mask]
test['created'].describe()
###Output
_____no_output_____
###Markdown
Train - Build Features
###Code
train.shape
# Create an amenities list and sum the values into a new feature, no_amenities
amenities = ['elevator', 'cats_allowed', 'hardwood_floors', 'dogs_allowed',
'doorman', 'dishwasher', 'no_fee', 'laundry_in_building', 'fitness_center',
'pre-war', 'laundry_in_unit', 'roof_deck', 'outdoor_space', 'dining_room',
'high_speed_internet', 'balcony', 'swimming_pool', 'new_construction',
'terrace', 'exclusive', 'loft', 'garden_patio', 'wheelchair_access', 'common_outdoor_space']
train['no_amenities'] = train[amenities].sum(axis=1)
train.head()
# Create a pets list and sum the values into a new feature, pets
pets = ['cats_allowed', 'dogs_allowed']
train['pets'] = train[pets].sum(axis=1)
train.head()
# Create a rooms list and sum the values into a new feature, rooms
rooms = ['bedrooms', 'bathrooms']
train['rooms'] = train[rooms].sum(axis=1)
#train.head()
# Create a long/lat dataframe called hoods
hoods = train[['longitude', 'latitude']]
print(hoods.shape)
hoods.head()
# This code was run, but takes about 1/2 hour, so results were saved to a file
# The code adds the neighborhood name to the hoods dataframe
#hoods['neighborhood'] = hoods.apply(lambda x: neighborhood(x['longitude'], x['latitude']), axis=1)
#hoods.head()
hoods.shape
# This reads in the file output from the previous line of code
n_hoods = pd.read_csv('https://raw.githubusercontent.com/JimKing100/DS-Unit-2-Regression-Classification/master/module2/hoods.csv')
n_hoods = n_hoods.rename(columns={'Unnamed: 0': 'old_index'})
n_hoods.head()
# The train data is merged with the hoods data, providing the neighborhood name as a new feature
new_train = pd.merge(train, n_hoods, how='left', left_index=True, right_on=['old_index'])
print(new_train.shape)
new_train.head()
# A neighborhood dictionary is created to convert names to numbers
neighborhood_dict = {
'Midwood': 1,
    'Bedford': 2,
'Fordham South': 3,
'Borough Park': 4,
'Rugby-Remsen Village': 5,
'East Flushing': 6,
'Woodhaven': 7,
'Madison': 8,
'Auburndale': 9,
'Williamsbridge-Olinville': 10,
'Murray Hill': 11,
'East Elmhurst': 12,
'Brownsville': 13,
'East New York (Pennsylvania Ave)':14,
'Kensington-Ocean Parkway': 15,
'Parkchester': 16,
'Erasmus': 17,
'Cambria Heights': 18,
'East Flatbush-Farragut': 19,
'Ocean Parkway South': 20,
'Starrett City': 21,
'Morrisania-Melrose': 22,
'Elmhurst': 23,
'East Village': 24,
'Glen Oaks-Floral Park-New Hyde Park': 25,
'Longwood': 26,
'Yorkville': 27,
'Upper East Side-Carnegie Hill': 28,
'Windsor Terrace': 29,
'Hammels-Arverne-Edgemere': 30,
'Rikers Island': 31,
'Hunts Point': 31,
'Jackson Heights': 32,
'Bath Beach': 33,
'Old Town-Dongan Hills-South Beach': 34,
'Flatbush': 35,
'Melrose South-Mott Haven North': 36,
'Ocean Hill': 37,
'Morningside Heights': 38,
'Soundview-Bruckner': 39,
'Allerton-Pelham Gardens': 40,
'Jamaica Estates-Holliswood': 41,
'Hollis': 42,
'Flatlands': 43,
'East New York': 44,
'Kingsbridge Heights': 45,
'Springfield Gardens North': 46,
'Canarsie': 47,
'Norwood': 48,
'Manhattanville': 49,
'West Village': 50,
'Grasmere-Arrochar-Ft. Wadsworth': 51,
'Queens Village': 52,
'Chinatown': 53,
'Pelham Bay-Country Club-City Island': 54,
'Woodlawn-Wakefield': 55,
'Old Astoria': 56,
'Astoria': 57,
'Stuyvesant Town-Cooper Village': 58,
'Dyker Heights': 59,
'Bensonhurst West': 60,
'West New Brighton-New Brighton-St. George': 61,
'New Brighton-Silver Lake': 62,
'Westerleigh': 63,
'University Heights-Morris Heights': 64,
'Bayside-Bayside Hills': 65,
'Crown Heights North': 66,
'East Concourse-Concourse Village': 67,
'North Corona': 68,
'Cypress Hills-City Line': 69,
'Kew Gardens Hills': 70,
'Pomonok-Flushing Heights-Hillcrest': 71,
'North Side-South Side': 72,
'Lower East Side': 73,
'Greenpoint': 74,
'Spuyten Duyvil-Kingsbridge': 75,
'Sunset Park East': 76,
'Marble Hill-Inwood': 77,
'Homecrest': 78,
'Washington Heights North': 79,
'Steinway': 80,
'Mott Haven-Port Morris': 81,
'West Brighton': 82,
'Central Harlem North-Polo Grounds': 83,
'Queensbridge-Ravenswood-Long Island City': 84,
'New Dorp-Midland Beach': 85,
'Van Cortlandt Village': 86,
'Co-op City': 87,
'Bay Ridge': 88,
'Sunset Park West': 89,
'Fort Greene': 90,
'SoHo-TriBeCa-Civic Center-Little Italy': 91,
'Battery Park City-Lower Manhattan': 92,
'Clinton': 93,
'Prospect Heights': 94,
'Baisley Park': 95,
'South Jamaica': 96,
'Ozone Park': 97,
'Georgetown-Marine Park-Bergen Beach-Mill Basin': 98,
'Brighton Beach': 99,
'Bensonhurst East': 100,
'West Farms-Bronx River': 101,
'Sheepshead Bay-Gerritsen Beach-Manhattan Beach': 102,
'Westchester-Unionport': 103,
'Oakwood-Oakwood Beach': 104,
'Grymes Hill-Clifton-Fox Hills': 105,
'Park Slope-Gowanus': 106,
'Stapleton-Rosebank': 107,
'Fresh Meadows-Utopia': 108,
'Bellerose': 109,
'Stuyvesant Heights': 110,
'East Williamsburg': 111,
'Ridgewood': 112,
'East Harlem South': 113,
'Rego Park': 114,
'East Harlem North': 115,
'Bushwick North': 116,
'Bushwick South': 117,
'Central Harlem South': 118,
'College Point': 119,
'Midtown-Midtown South': 120,
'Glendale': 121,
'Charleston-Richmond Valley-Tottenville': 122,
'park-cemetery-etc-Staten Island': 123,
'New Springville-Bloomfield-Travis': 124,
'Todt Hill-Emerson Hill-Heartland Village-Lighthouse Hill': 125,
'South Ozone Park': 126,
'Lindenwood-Howard Beach': 127,
'Prospect Lefferts Gardens-Wingate': 128,
'Murray Hill-Kips Bay': 129,
"Mariner's Harbor-Arlington-Port Ivory-Graniteville": 130,
'Crown Heights South': 131,
'Brooklyn Heights-Cobble Hill': 132,
'Port Richmond': 133,
'Hunters Point-Sunnyside-West Maspeth': 134,
'Claremont-Bathgate': 135,
'Van Nest-Morris Park-Westchester Square': 136,
'Pelham Parkway': 137,
'Mount Hope': 138,
'Ft. Totten-Bay Terrace-Clearview': 139,
'Whitestone': 140,
'Turtle Bay-East Midtown': 141,
'Lenox Hill-Roosevelt Island': 142,
'Elmhurst-Maspeth': 143,
'Woodside': 144,
'St. Albans': 145,
'Laurelton': 146,
'Hamilton Heights': 147,
'Jamaica': 148,
'Richmond Hill': 149,
'Briarwood-Jamaica Hills': 150,
'Kew Gardens': 151,
'Middle Village': 152,
'Maspeth': 153,
'Upper West Side': 154,
'Lincoln Square': 155,
'Bronxdale': 156,
'Oakland Gardens': 157,
'Douglas Manor-Douglaston-Little Neck': 158,
"Annadale-Huguenot-Prince's Bay-Eltingville": 159,
'Great Kills': 160,
'Seagate-Coney Island': 161,
'Corona': 162,
'Schuylerville-Throgs Neck-Edgewater Park': 163,
'Gravesend': 164,
'East Tremont': 165,
'North Riverdale-Fieldston-Riverdale': 166,
'Bedford Park-Fordham North': 167,
'Belmont': 168,
'Eastchester-Edenwald-Baychester': 169,
'Breezy Point-Belle Harbor-Rockaway Park-Broad Channel': 170,
'Crotona Park East': 171,
'Rossville-Woodrow':172,
'Arden Heights': 173,
'Far Rockaway-Bayswater': 174,
'Soundview-Castle Hill-Clason Point-Harding Park': 175,
'park-cemetery-etc-Bronx': 176,
'Carroll Gardens-Columbia Street-Red Hook': 177,
'park-cemetery-etc-Manhattan': 178,
'Rosedale': 179,
'Flushing': 180,
'Queensboro Hill': 181,
'Hudson Yards-Chelsea-Flatiron-Union Square': 182,
'Gramercy': 183,
'DUMBO-Vinegar Hill-Downtown Brooklyn-Boerum Hill': 184,
'Clinton Hill': 185,
'Williamsburg': 186,
'park-cemetery-etc-Brooklyn': 187,
'Washington Heights South': 188,
'Highbridge': 189,
'West Concourse': 190,
'Forest Hills': 191,
'park-cemetery-etc-Queens': 192,
'Springfield Gardens South-Brookville': 193,
'Airport': 194
}
# The neighborhood names are converted to numbers in feature nid
new_train['nid'] = new_train['neighborhood'].map(neighborhood_dict)
new_train.head()
# Null values are filled
values = {'nid': 0}
new_train = new_train.fillna(value=values)
train = new_train
train.shape
# Check interest level values
train['interest_level'].unique()
# Create an interest level dictionary
interest = {
'low': 1,
    'medium': 2,
'high': 3
}
# An interest level feature is added
train['interest'] = train['interest_level'].map(interest)
train.head()
# Null values are filled
values = {'interest': 0}
train = train.fillna(value=values)
train.shape
# Create a description length feature
train['desc_len'] = train['description'].str.len()
train.head()
# Null values are filled
values = {'desc_len': 0}
train = train.fillna(value=values)
train.shape
from math import cos, asin, sqrt
def distance(lat1, lon1, lat2, lon2):
p = 0.017453292519943295
a = 0.5 - cos((lat2-lat1)*p)/2 + cos(lat1*p)*cos(lat2*p) * (1-cos((lon2-lon1)*p)) / 2
return 12742 * asin(sqrt(a))
def closest(data, v):
return min(data, key=lambda p: distance(v['lat'],v['lon'],p['lat'],p['lon']))
tempDataList = [{'lat': 39.7612992, 'lon': -86.1519681},
{'lat': 39.762241, 'lon': -86.158436 },
{'lat': 39.7622292, 'lon': -86.1578917}]
v = {'lat': 39.7622290, 'lon': -86.1519750}
print(closest(tempDataList, v))
# Null values are filled
values = {'longitude': 50, 'latitude': -70}
train = train.fillna(value=values)
# Create a long/lat dataframe called hoods
nhoods = train[['old_index', 'bedrooms', 'longitude_x', 'latitude_x']]
print(nhoods.shape)
nhoods.head()
from geopy.distance import vincenty as get_geodesic_distance
# from scipy.spatial.distance import euclidean as get_euclidean_distance
def neighbor_mean(beds, source_longitude, source_latitude):
source_lonlat = source_longitude, source_latitude
source_table = train[train['bedrooms'] == beds]
target_table = pd.DataFrame(source_table, columns = ['longitude_x', 'latitude_x', 'price'])
def get_distance(row):
target_lonlat = row['longitude_x'], row['latitude_x']
return get_geodesic_distance(target_lonlat, source_lonlat).meters
target_table['distance'] = target_table.apply(get_distance, axis=1)
    # Get the nearest 5 locations
nearest_target_table = target_table.sort_values(['distance'])[:5]
# Get locations within 1000 meters
#filtered_target_table = target_table[target_table['Distance'] < 1000]
nearest_target_table
return nearest_target_table['price'].mean()
#nhoods['mean_neighbor_price'] = nhoods.apply(lambda x: neighbor_mean(x['bedrooms'], x['longitude_x'], x['latitude_x']), axis=1)
#nhoods.head()
# The train data is merged with the nhoods data, providing the neighborhood name as a new feature
nhoods = pd.read_csv('https://raw.githubusercontent.com/JimKing100/DS-Unit-2-Regression-Classification/master/module2/nhoods.csv')
new_train = pd.merge(train, nhoods, on=['old_index'])
new_train = new_train.rename(columns={'bedrooms_x': 'bedrooms'})
print(new_train.shape)
new_train.head()
train = new_train
#nhoods.to_csv('/content/nhoods.csv')
# This reads in the file output from the previous line of code
#train = pd.read_csv('/content/train.csv')
###Output
_____no_output_____
###Markdown
Test - Build Features (Uses Same Process as Train)
###Code
test.shape
amenities = ['elevator', 'cats_allowed', 'hardwood_floors', 'dogs_allowed',
'doorman', 'dishwasher', 'no_fee', 'laundry_in_building', 'fitness_center',
'pre-war', 'laundry_in_unit', 'roof_deck', 'outdoor_space', 'dining_room',
'high_speed_internet', 'balcony', 'swimming_pool', 'new_construction',
'terrace', 'exclusive', 'loft', 'garden_patio', 'wheelchair_access', 'common_outdoor_space']
test['no_amenities'] = test[amenities].sum(axis=1)
test.head()
pets = ['cats_allowed', 'dogs_allowed']
test['pets'] = test[pets].sum(axis=1)
test.head()
rooms = ['bedrooms', 'bathrooms']
test['rooms'] = test[rooms].sum(axis=1)
test.head()
thoods = test[['longitude', 'latitude']]
print(thoods.shape)
thoods.head()
# This code was run, but takes about 15 minutes, so results were saved to file
#thoods['neighborhood'] = thoods.apply(lambda x: neighborhood(x['longitude'], x['latitude']), axis=1)
#thoods.head()
tn_hoods = pd.read_csv('https://raw.githubusercontent.com/JimKing100/DS-Unit-2-Regression-Classification/master/module2/thoods.csv')
tn_hoods = tn_hoods.rename(columns={'Unnamed: 0': 'old_index'})
tn_hoods.head()
new_test = pd.merge(test, tn_hoods, how='left', left_index=True, right_on=['old_index'])
print(new_test.shape)
new_test.head()
new_test['nid'] = new_test['neighborhood'].map(neighborhood_dict)
new_test.head()
values = {'nid': 0}
new_test = new_test.fillna(value=values)
test = new_test
test.shape
test['interest'] = test['interest_level'].map(interest)
test.head()
test.shape
values = {'interest': 0}
test = test.fillna(value=values)
test.shape
test['desc_len'] = test['description'].str.len()
test.head()
values = {'desc_len': 0}
test = test.fillna(value=values)
test.shape
values = {'longitude': 50, 'latitude': -70}
test = test.fillna(value=values)
# Create a long/lat dataframe called hoods
tnhoods = test[['old_index', 'bedrooms', 'longitude_x', 'latitude_x']]
print(tnhoods.shape)
tnhoods.head()
from geopy.distance import vincenty as get_geodesic_distance
# from scipy.spatial.distance import euclidean as get_euclidean_distance
def neighbor_mean(beds, source_longitude, source_latitude):
source_lonlat = source_longitude, source_latitude
source_table = test[test['bedrooms'] == beds]
target_table = pd.DataFrame(source_table, columns = ['longitude_x', 'latitude_x', 'price'])
def get_distance(row):
target_lonlat = row['longitude_x'], row['latitude_x']
return get_geodesic_distance(target_lonlat, source_lonlat).meters
target_table['distance'] = target_table.apply(get_distance, axis=1)
    # Get the nearest 5 locations
nearest_target_table = target_table.sort_values(['distance'])[:5]
# Get locations within 1000 meters
#filtered_target_table = target_table[target_table['Distance'] < 1000]
nearest_target_table
return nearest_target_table['price'].mean()
#tnhoods['mean_neighbor_price'] = tnhoods.apply(lambda x: neighbor_mean(x['bedrooms'], x['longitude_x'], x['latitude_x']), axis=1)
#tnhoods.head()
#tnhoods.to_csv('/content/tnhoods.csv')
#tnhoods.head()
# The test data is merged with the tnhoods data, providing the neighborhood name as a new feature
tnhoods = pd.read_csv('https://raw.githubusercontent.com/JimKing100/DS-Unit-2-Regression-Classification/master/module2/tnhoods.csv')
new_test = pd.merge(test, tnhoods, on=['old_index'])
new_test = new_test.rename(columns={'bedrooms_x': 'bedrooms'})
print(new_test.shape)
new_test.head()
test = new_test
test.head()
#test['mean_neighbor_price'] = test.apply(lambda x: neighbor_mean(x['bedrooms'], x['longitude_x'], x['latitude_x']), axis=1)
#test.head()
# This reads in the file output from the previous line of code
#test = pd.read_csv('/content/test.csv')
values = {'mean_neighbor_price': 0}
train = train.fillna(value=values)
values = {'mean_neighbor_price': 0}
test = test.fillna(value=values)
###Output
_____no_output_____
###Markdown
Run Model
###Code
# 1. Import the appropriate estimator class from Scikit-Learn
from sklearn.linear_model import LinearRegression
import numpy as np
# 2. Instantiate this class
model = LinearRegression()
# 3. Arrange X features matrix & y target vector
features = ['bedrooms', 'bathrooms', 'elevator', 'cats_allowed', 'hardwood_floors', 'dogs_allowed',
'doorman', 'dishwasher', 'no_fee', 'laundry_in_building', 'fitness_center',
'pre-war', 'laundry_in_unit', 'roof_deck', 'outdoor_space', 'dining_room',
'high_speed_internet', 'balcony', 'swimming_pool', 'new_construction',
'terrace', 'exclusive', 'loft', 'garden_patio', 'wheelchair_access', 'common_outdoor_space',
'no_amenities', 'nid', 'pets', 'rooms', 'mean_neighbor_price']
target = 'price'
X_train = train[features]
y_train = train[target]
X_test = test[features]
y_test = test[target]
# 4. Fit the model
model.fit(X_train, y_train)
# 5. Apply the model
y_pred = model.predict(X_train)
# Show the coefficient
print('Coefficients', model.coef_, '\n')
# Show the intercept
print('Intercept', model.intercept_, '\n')
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
# Print regression metrics
train_mse = mean_squared_error(y_train, y_pred)
train_rmse = np.sqrt(train_mse)
train_mae = mean_absolute_error(y_train, y_pred)
train_r2 = r2_score(y_train, y_pred)
print('Train Mean Squared Error:', train_mse)
print('Train Root Mean Squared Error:', train_rmse)
print('Train Mean Absolute Error:', train_mae)
print('Train R^2:', train_r2)
print('\n')
ty_pred = model.predict(X_test)
# Print regression metrics
test_mse = mean_squared_error(y_test, ty_pred)
test_rmse = np.sqrt(test_mse)
test_mae = mean_absolute_error(y_test, ty_pred)
test_r2 = r2_score(y_test, ty_pred)
print('Test Mean Squared Error:', test_mse)
print('Test Root Mean Squared Error:', test_rmse)
print('Test Mean Absolute Error:', test_mae)
print('Test R^2:', test_r2)
###Output
Coefficients [-2.06533556e+02 3.87815329e+02 1.95521346e+01 -2.19610051e+01
-9.35648419e+01 3.78037264e+01 1.28395860e+02 7.19618839e+00
-7.19607722e+01 -1.01600172e+02 2.26271454e+01 -2.67055793e+01
1.16597466e+02 -4.36765531e+01 -4.56703160e+01 -4.14698072e+01
-1.16102802e+02 -8.10257234e+00 -4.84966208e+01 -5.05427611e+01
1.07756340e+02 1.15784770e+02 1.26837450e+01 -6.71077188e-01
1.01358308e+02 3.61309855e+00 2.84390130e+00 -1.16592902e-01
1.58427212e+01 1.81281773e+02 8.96101445e-01]
Intercept -288.2980488614835
Train Mean Squared Error: 420162.2788861607
Train Root Mean Squared Error: 648.1992586282097
Train Mean Absolute Error: 364.0117255116181
Train R^2: 0.8646832655287394
Test Mean Squared Error: 521724.19941232615
Test Root Mean Squared Error: 722.3047829083829
Test Mean Absolute Error: 406.9150214847682
Test R^2: 0.8321362196556643
|
Chapter 5 - Pre-trained Models/Using HTTP to Run a Python Model.ipynb | ###Markdown
Invoke the model using JSON-RPC
###Code
import (
    "fmt"
    "log"
    "math/rand"
    "net/http"
    "net/rpc/jsonrpc" // needed for jsonrpc.Dial below
    "os"
    "os/exec"
    "strconv"
    "time"
    "io/ioutil"
    "encoding/json"
    "bytes"
)
c, err := jsonrpc.Dial("tcp", "localhost:8001")
p := model{Client: client}
var req PredictRequest = PredictRequest{
Image: testImages[16],
}
var reply interface{}
err := c.Call("predict", req, &reply)
// Predict returns whether the ith image represents trousers or not based on the logistic regression model
func Predict(i int) (bool, error){
b, err := json.Marshal(testImages[i])
if err != nil {
return false, err
}
r := bytes.NewReader(b)
resp, err := http.Post("http://127.0.0.1:8001", "application/json", r)
if err != nil {
return false, err
}
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return false, err
}
resp.Body.Close()
    // Decode into a differently named struct so we don't shadow the *http.Response above
    var result struct {
        IsTrousers bool `json:"is_trousers"`
    }
    err = json.Unmarshal(body, &result)
    return result.IsTrousers, err
}
// Expected: true <nil>
Predict(16)
// Expected false <nil>
Predict(0)
###Output
_____no_output_____ |
source/2022/NotOnline/002_make_name_for_data.ipynb | ###Markdown
Lecture 02: Naming (let's pick some names) Objective - Understand what a variable is - The key to learning a programming language is learning the syntax/format/style of that language - Python is a case-sensitive programming language: names that differ only in capitalization are different names - The special meaning of "=" in programming: assignment - Use a variable to refer to a string or a number - Write beautiful Python code - put a space on each side of an operator - assign multiple variables in one line - Use `print` to print the value of a variable (a small example of these ideas opens the next code cell) Math- $+$, $-$, $\times$, $\div$ Python```pythonJason = "Jason"```
###Code
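# A tiny illustration of the objectives above (added example): assign multiple
# variables in one line, keep a space on each side of operators, then print.
name, age = "Jason", 10
next_year = age + 1
print(name, "will be", next_year, "years old next year.")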
my_sentence = "I am learning programming in Python."
# a string
print(my_sentence)
my_sentence = "I spoke another sentence."
# expression ่กจ่พพๅผ
# assignment expression ่ตๅผ่กจ่พพๅผ
print(my_sentence)
my_sentence = 2021
print(my_sentence)
my_sentence = "2021"
print(my_sentence)
cm = "cm"
length, width, unit = 18, 13, cm
perimeter = (length + width) * 2
area = length * width
# Jason: 62 234
# Sophine: No output
# Tony: No output
# Yunzi: 62 234
print(perimeter, area, cm)
# 62 234 cm
print("James")
print("I am learning programming in Python.")
print("I am learning programming in Python.")
Jason_1 = "Jason: I am learning programming."
print(Jason_1)
Tony_1 = "Tony: I am learning programming too."
print(Tony_1)
Sophie_1 = "Sophie: I am learning programming too."
print(Sophie_1)
Yunzi_1 = "Yunzi: ๆไนๅจๅญฆไน ็ผ็จใ"
print(Yunzi_1)
Jason_2 = "Jason: I like it. How about you?"
print(Jason_2)
Tony_2 = "Tony: It's interesting, I think."
print(Tony_2)
Sophie_2 = "Sophie: The teacher is my Daddy."
print(Sophie_2)
Yunzi_2 = "Yunzi: I know, but the teacher is my uncle."
Jason_3 = "Jason: Really?"
print(Jason_3)
Tony_3 = "Tony: It's funny, isn't it?"
print(Tony_3)
Sophie_3 = "Sophie: It's time to start the class, let's Zoom"
print(Sophie_3)
Yunzi_3 = "Yunzi: ้ฉฌไธๅฐฑๆฅใ"
print(Yunzi_3)
age_daddy = 41
age_mummy = 39
age_me = 11
age_sister = 5
age_total = age_daddy + age_mummy + age_me + age_sister
print("total age in my family is:", age_total)
age_daddy, age_mummy, age_me, age_sister = 41, 39, 11, 5
age_total = age_daddy + age_mummy + age_me + age_sister
print("total age in my family is:", age_total)
###Output
total age in my family is: 96
###Markdown
Did you know?- There are many programming languages, and Python is just one of them. Python can do almost any programming work you want it to. Exercise1. Design a dialog between you and one of your parents. Each person needs to say at least three sentences to the other, taking turns. Print the dialog out in one cell, like the teacher did in class.
2. The following code has some errors, so it can't run successfully. Try to find and fix the errors so that the code prints the total age of the students in the class.```pythonage_Jason, age_Tony, age_Sophie, age_Yunzi = 10, 10, 9, 9age_total = age_jason + ageTony + agesophie + age_yunziprint("total age of these four students is", age_Total) ```Follow the steps listed below: - Copy the code to a new cell - Click the "Run" button and look at the errors - Copy the code to the next new cell - Fix the errors and run the code again until you see the correct result. Note: Don't change the ages given for each student, even if your own age is different.
3. The following code, written by Celine, calculates the age difference between Celine and Yunzi. The code is not well written, but it gives the correct result. Please help Celine beautify the code while keeping the result unchanged.```pythonCeline_age = 4age_yunzi=9cha =age_yunzi-Celine_ageprint("Yunzi bi Celine da", cha, "sui.")```The code is provided in the cell below; beautify it directly in that cell and run it.
###Code
Celine_age = 4
age_yunzi=9
cha =age_yunzi-Celine_age
print("Yunzi bi Celine da", cha, "sui.")
###Output
Yunzi bi Celine da 5 sui.
|
experiments/basic.ipynb | ###Markdown
Estimator using a low-rank approximation model
###Code
lra_model = BilinearNet(data.n_users, data.n_items, embedding_dim=16, sparse=False)
lra_est = ImplicitEst(model=lra_model,
n_iter=20,
use_cuda=is_cuda_available())
lra_est.fit(train, verbose=True)
prec, recall = precision_recall_score(lra_est, test)
prec.mean(), recall.mean()
mrr_score(lra_est, train).mean(), mrr_score(lra_est, test).mean()
###Output
_____no_output_____
###Markdown
Estimator using a deep neural model
###Code
nn_model = DeepNet(data.n_users, data.n_items, embedding_dim=8, sparse=False, activation=torch.tanh)
nn_est = ImplicitEst(model=nn_model,
n_iter=20,
use_cuda=is_cuda_available())
nn_est.fit(train, verbose=True)
prec, recall = precision_recall_score(nn_est, test)
prec.mean(), recall.mean()
mrr_score(nn_est, train).mean(), mrr_score(nn_est, test).mean()
###Output
_____no_output_____
###Markdown
Estimator using a deep residual model
###Code
res_model.h1_shift
res_model = ResNetPlus(data.n_users, data.n_items, embedding_dim=16, sparse=False)
res_est = ImplicitEst(model=res_model,
n_iter=20,
use_cuda=is_cuda_available())
res_est.fit(train, verbose=True)
res_model.w
prec, recall = precision_recall_score(res_est, test)
prec.mean(), recall.mean()
mrr_score(res_est, train).mean(), mrr_score(res_est, test).mean()
nalu_model = MoTBilinearNet(data.n_users, data.n_items, embedding_dim=10, sparse=False)
nalu_est = ImplicitEst(model=nalu_model,
n_iter=20,
use_cuda=is_cuda_available())
nalu_est.fit(train, verbose=True)
prec, recall = precision_recall_score(nalu_est, test)
prec.mean(), recall.mean()
torch.sigmoid(nalu_model.scaler_w)
mrr_score(nalu_est, train).mean(), mrr_score(nalu_est, test).mean()
###Output
_____no_output_____ |
Benjamin Disraeli Neural Net.ipynb | ###Markdown
Initializing Code Each of the boxes of code in this section should be run once (in the order they are here). To run a box, click on it (so you can see your cursor blinking inside the box) then press "Ctrl+Enter" (or click the "run" button at the top of this page).
###Code
from pickle import load
from keras.models import load_model
from keras.preprocessing.sequence import pad_sequences
import keras
from keras.preprocessing.text import Tokenizer
import numpy as np
import spacy
import requests
import tqdm
# define functions
def read_file(filepath):
with open(filepath) as f:
str_text = f.read()
return str_text
def separate_punc(doc_text):
return [token.text.lower() for token in nlp(doc_text) if token.text not in '\n\n \n\n\n!"-#$%&()--.*+,-/:;<=>?@[\\]^_`{|}~\t\n ']
def generate_text(model, tokenizer, seq_len, seed_text, num_gen_words, threshold):
'''
INPUTS:
model : model that was trained on text data
tokenizer : tokenizer that was fit on text data
seq_len : length of training sequence
seed_text : raw string text to serve as the seed
num_gen_words : number of words to be generated by model
'''
# Final Output
output_text = []
    # Initial Seed Sequence
input_text = seed_text
    # Confidence (note this is generated by the model and may be over-confident; in the future may calibrate with a Bayesian NN)
confidence = 1
# Create num_gen_words
for i in range(num_gen_words):
# Take the input text string and encode it to a sequence
encoded_text = tokenizer.texts_to_sequences([input_text])[0]
# Pad sequences to our trained rate (50 words in the video)
pad_encoded = pad_sequences([encoded_text], maxlen=seq_len, truncating='pre')
# Predict Class Probabilities for each word
pred_word_ind = model.predict_classes(pad_encoded, verbose=0)[0]
wd_confi = model.predict(pad_encoded, verbose=0)[0].max()
confidence = confidence*wd_confi
# Make sure addition of this word won't push us out of our confidence level
if confidence < threshold:
confidence = confidence/wd_confi # adjust confidence for reporting
break
else:
# Grab word
pred_word = tokenizer.index_word[pred_word_ind]
# Update the sequence of input text (shifting one over with the new word)
input_text += ' ' + pred_word
output_text.append(pred_word)
# Make it look like a sentence.
confidence = round(confidence*100,2)
print(f"Confidence is: {confidence}%")
return ' '.join(output_text)
# Load spaCy NLP
nlp = spacy.load('en_core_web_sm',disable=['parser', 'tagger','ner'])
nlp.max_length = 1198623
# Load the concatenated letters for processing
d = read_file('concatenated_letters.txt')
tokens = separate_punc(d)
# organize into sequences of tokens
train_len = 50+1 # 50 training words , then one target word
# Empty list of sequences
text_sequences = []
for i in range(train_len, len(tokens)):
# Grab train_len# amount of characters
seq = tokens[i-train_len:i]
# Add to list of sequences
text_sequences.append(seq)
# integer encode sequences of words
tokenizer = Tokenizer()
tokenizer.fit_on_texts(text_sequences)
sequences = tokenizer.texts_to_sequences(text_sequences)
vocabulary_size = len(tokenizer.word_counts)
# Create Numpy Matrix
sequences = np.array(sequences)
# Initialize Matrices
X = sequences[:,:-1] # first 49 words in the sequence
y = sequences[:,-1] # last word in the sequence
seq_len = X.shape[1] # set sequence length for "generate_text" tool
# load trained model
model = load_model('Disraeli_bot1.h5')
tokenizer = load(open('Disraeli_bot1','rb'))
###Output
C:\Users\Paul\Anaconda3\lib\site-packages\spacy\util.py:275: UserWarning: [W031] Model 'en_core_web_sm' (2.2.0) requires spaCy v2.2 and is incompatible with the current spaCy version (2.3.5). This may lead to unexpected results or runtime errors. To resolve this, download a newer compatible model or retrain your custom model with the current spaCy version. For more details and available updates, run: python -m spacy validate
warnings.warn(warn_msg)
###Markdown
This section lets you interact with the Disraeli neural net (a.k.a. "Disraeli-bot")You can run the box below as many times as you want. When you run it, it will ask for a "prompt" - this is analogous to the first sentence (or two - you can type as much as you want, 25-50 words is ideal) you type in a Gmail email. Then, the Disraeli-bot will spit out up to the next 25 words that Gmail would suggest continuing with (the suggestion powered by the neural net trained on Disraeli's correspondence) provided the confidence level in the prediction does not fall below a specified level (I put the default at 50%, but you can tweak that by changing the "threshold=0.X" argument).**A few notes for Nick** - this performs better on topics covered in the letters than out (I had some from 1868 and 1857 from https://www.jstor.org/stable/10.3138/j.ctt9qh93p). You probably understand much better what the content is, but it looks to me like this is a lot of personal correspondence, so perhaps not super topical for our purposes? Also of note - I didn't get a chance to clean the letters yet, so the footnote annotations and "EBSCO Host checked out to [email protected]", etc. are, unfortunately, included in the nerual net. These are all things that we can fix easily, but take time. Also, the more letters we can feed it (I hit my max allowance of 100 pages/day) the better the performance will get.
###Code
seed_text= input("Give Disraeli-bot a prompt: ")
# seed_text = "Given the opportunity to purchase half of the outstanding equity in the Suez company, which the Pasha is eager for in order to retire debt, I think"
print("Disraeli-bot continues with:...")
generate_text(model,tokenizer,seq_len,seed_text=seed_text,num_gen_words=25, threshold=0.3
)
###Output
Give Disraeli-bot a prompt: Russia and Turkey are at war. Our means of supporting the Ottomans are
Disraeli-bot continues with:...
Confidence is: 32.45%
###Markdown
**Odds and Ends** Use the pip installs if you don't already have the relevant packages downloaded to your programming environment. The last box includes code to download the model from GitHub if it is not already saved locally.
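For reference, a rough sketch of that download-and-load step (a hypothetical sketch, assuming the raw GitHub URL used in the commented cell below is valid; it writes the binary `.h5` to disk first rather than passing the page text to `load_model`):

```python
# Hypothetical sketch: fetch the trained model file, write the raw bytes to disk, then load it.
import requests
from keras.models import load_model

url = "https://raw.githubusercontent.com/pjconnell/disraeli_neural_net/main/Disraeli_bot1.h5"
with open("Disraeli_bot1.h5", "wb") as f:
    f.write(requests.get(url).content)   # .content keeps the binary payload intact
model = load_model("Disraeli_bot1.h5")
```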
###Code
# %pip install pandas
# %pip install keras
# %pip install spacy
# %pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.0.0/en_core_web_sm-3.0.0-py3-none-any.whl
# %pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.0.0/en_core_web_sm-3.0.0.tar.gz
# %pip install tensorflow
# %run -m spacy download en_core_web_sm
# # load trained model and tokenizer from GitHub
# url = "https://raw.githubusercontent.com/pjconnell/disraeli_neural_net/main/Disraeli_bot1.h5"
# page = requests.get(url)
# m = page.text
# # # load model
# model = load_model(m)
# # load tokenizer
# url = "https://raw.githubusercontent.com/pjconnell/disraeli_neural_net/main/Disraeli_bot1"
# page = requests.get(url)
# t = page.text
# tokenizer = load(open(t,'rb'))
###Output
_____no_output_____ |
Kaggle_Challenge_X5.ipynb | ###Markdown
###Code
# Installs
%%capture
!pip install --upgrade category_encoders plotly
# Imports
import os, sys
os.chdir('/content')
!git init .
!git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge.git
!git pull origin master
!pip install -r requirements.txt
os.chdir('module1')
# Imports
import pandas as pd
import numpy as np
import math
import sklearn
sklearn.__version__
from sklearn.model_selection import train_test_split
# Import the models
from sklearn.linear_model import LogisticRegressionCV
from sklearn.pipeline import make_pipeline
# Import encoder and scaler and imputer
import category_encoders as ce
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
# Import random forest classifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
def main_program():
def wrangle(X):
# Wrangles train, validate, and test sets
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded and drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer new feature years - construction_year to date_recorded
X.loc[X['construction_year'] == 0, 'construction_year'] = np.nan
X['years'] = X['year_recorded'] - X['construction_year']
# Remove latitude outliers
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# Features with many zero's are likely nan's
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height',
'population', 'amount_tsh']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
# Impute mean for years
X.loc[X['years'].isna(), 'years'] = X['years'].mean()
#X.loc[X['pump_age'].isna(), 'pump_age'] = X['pump_age'].mean()
# Impute mean for longitude and latitude based on region
average_lat = X.groupby('region').latitude.mean().reset_index()
average_long = X.groupby('region').longitude.mean().reset_index()
shinyanga_lat = average_lat.loc[average_lat['region'] == 'Shinyanga', 'latitude']
shinyanga_long = average_long.loc[average_long['region'] == 'Shinyanga', 'longitude']
X.loc[(X['region'] == 'Shinyanga') & (X['latitude'] > -1), ['latitude']] = shinyanga_lat[17]
X.loc[(X['region'] == 'Shinyanga') & (X['longitude'].isna()), ['longitude']] = shinyanga_long[17]
mwanza_lat = average_lat.loc[average_lat['region'] == 'Mwanza', 'latitude']
mwanza_long = average_long.loc[average_long['region'] == 'Mwanza', 'longitude']
X.loc[(X['region'] == 'Mwanza') & (X['latitude'] > -1), ['latitude']] = mwanza_lat[13]
X.loc[(X['region'] == 'Mwanza') & (X['longitude'].isna()) , ['longitude']] = mwanza_long[13]
#X.loc[X['amount_tsh'].isna(), 'amount_tsh'] = 0
# Clean installer
X['installer'] = X['installer'].str.lower()
X['installer'] = X['installer'].str[:4]
X['installer'].value_counts(normalize=True)
tops = X['installer'].value_counts()[:15].index
X.loc[~X['installer'].isin(tops), 'installer'] = 'other'
# Bin lga
#tops = X['lga'].value_counts()[:10].index
#X.loc[~X['lga'].isin(tops), 'lga'] = 'Other'
# Bin subvillage
tops = X['subvillage'].value_counts()[:25].index
X.loc[~X['subvillage'].isin(tops), 'subvillage'] = 'Other'
# Impute mean for a feature based on latitude and longitude
def latlong_conversion(feature, pop, long, lat):
radius = 0.1
radius_increment = 0.3
if math.isnan(pop):
pop_temp = 0
while pop_temp <= 1 and radius <= 2:
lat_from = lat - radius
lat_to = lat + radius
long_from = long - radius
long_to = long + radius
df = X[(X['latitude'] >= lat_from) &
(X['latitude'] <= lat_to) &
(X['longitude'] >= long_from) &
(X['longitude'] <= long_to)]
pop_temp = df[feature].mean()
radius = radius + radius_increment
else:
pop_temp = pop
if np.isnan(pop_temp):
new_pop = X_train[feature].mean()
else:
new_pop = pop_temp
return new_pop
X.loc[X['population'].isna(), 'population'] = X['population'].mean()
#X['population'] = X.apply(lambda x: latlong_conversion('population', x['population'], x['longitude'], x['latitude']), axis=1)
# Impute mean for tsh based on mean of source_class/basin/waterpoint_type_group
#def tsh_calc(tsh, source, base, waterpoint):
# if math.isnan(tsh):
# if (source, base, waterpoint) in tsh_dict:
# new_tsh = tsh_dict[source, base, waterpoint]
# return new_tsh
# else:
# return tsh
# return tsh
#temp = X[~X['amount_tsh'].isna()].groupby(['source_class',
# 'basin',
# 'waterpoint_type_group'])['amount_tsh'].mean()
#tsh_dict = dict(temp)
#X['amount_tsh'] = X.apply(lambda x: tsh_calc(x['amount_tsh'], x['source_class'], x['basin'], x['waterpoint_type_group']), axis=1)
# Drop unneeded columns
unusable_variance = ['recorded_by', 'id', 'num_private', 'wpt_name', 'scheme_management']
X = X.drop(columns=unusable_variance)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv('../data/tanzania/train_features.csv'),
pd.read_csv('../data/tanzania/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv('../data/tanzania/test_features.csv')
sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Wrangle train, validate, and test sets in the same way
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=1400,
random_state=42,
min_samples_split=5,
min_samples_leaf=1,
max_features='auto',
max_depth=30,
bootstrap=True,
n_jobs=-1,
verbose = 1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
pd.set_option('display.max_rows', 200)
model = pipeline.named_steps['randomforestclassifier']
encoder = pipeline.named_steps['ordinalencoder']
encoded_columns = encoder.transform(X_train).columns
importances = pd.Series(model.feature_importances_, encoded_columns)
print(importances.sort_values(ascending=False))
assert all(X_test.columns == X_train.columns)
y_pred = pipeline.predict(X_test)
submission = sample_submission.copy()
submission['status_group'] = y_pred
submission.to_csv('/content/submission-f5.csv', index=False)
main_program()
#main_program('quantity_group')
# main_program('gps_height')
# main_program('day_recorded')
# main_program('ward')
# main_program('waterpoint_type')
# main_program('years')
# main_program('construction_year')
# main_program('population')
# main_program('funder')
# main_program('waterpoint_type_group')
# main_program('scheme_name')
# main_program('extraction_type_class')
# main_program('lga')
# main_program('extraction_type')
# main_program('extraction_type_group')
# main_program('district_code')
# main_program('amount_tsh')
# main_program('payment')
# main_program('payment_type')
# main_program('installer')
# main_program('month_recorded')
# main_program('region_code')
# main_program('region')
# main_program('basin')
# main_program('source')
# main_program('source_type')
# main_program('management')
# main_program('scheme_management')
# main_program('water_quality')
# main_program('subvillage')
# main_program('quality_group')
# main_program('permit')
# main_program('public_meeting')
# main_program('year_recorded')
#for i in range(3, 10):
# main_program(i)
#for i in range(1, 6):
# i = i * 5
# print('lga bins: ', i)
# main_program(i)
#i = 25
#for j in range(2, 6):
# j = j * 5
# for k in range(2, 6):
# k = k * 5
# print('installer bins: ', i, 'funder bins: ', j,'subvillage bins: ', k)
# main_program( i, j, k)
# pd.set_option('display.max_rows', 200)
# model = pipeline.named_steps['randomforestclassifier']
# encoder = pipeline.named_steps['ordinalencoder']
# encoded_columns = encoder.transform(X_train).columns
# importances = pd.Series(model.feature_importances_, encoded_columns)
# importances.sort_values(ascending=False)
#assert all(X_test.columns == X_train.columns)
#y_pred = pipeline.predict(X_test)
#submission = sample_submission.copy()
#submission['status_group'] = y_pred
#submission.to_csv('/content/submission-f2.csv', index=False)
###Output
_____no_output_____ |
disease_gene/disease_associates_gene/word_vector_experiment/embed_sentences.ipynb | ###Markdown
Generate Word Vectors For Disease Associates Gene Sentences This notebook is designed to embed disease associates gene (DaG) sentences. After word vectors have been trained, we embed sentences using the following steps:1. Load the total vocab generated from the trained word vectors.2. Cycle through each sentence.3. For each word in the sentence, determine whether the word is in the vocab.4. If yes, assign that word's index; if no, assign the index reserved for the unknown token. Set up the Environment
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
from collections import defaultdict
import os
import pickle
import sys
sys.path.append(os.path.abspath('../../../modules'))
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from tqdm import tqdm_notebook
from gensim.models import FastText
from gensim.models import KeyedVectors
from utils.notebook_utils.dataframe_helper import load_candidate_dataframes, generate_embedded_df
#Set up the environment
username = "danich1"
password = "snorkel"
dbname = "pubmeddb"
#Path subject to change for different os
database_str = "postgresql+psycopg2://{}:{}@/{}?host=/var/run/postgresql".format(username, password, dbname)
os.environ['SNORKELDB'] = database_str
from snorkel import SnorkelSession
session = SnorkelSession()
from snorkel.learning.pytorch.rnn.rnn_base import mark_sentence
from snorkel.learning.pytorch.rnn.utils import candidate_to_tokens
from snorkel.models import Candidate, candidate_subclass
DiseaseGene = candidate_subclass('DiseaseGene', ['Disease', 'Gene'])
###Output
_____no_output_____
###Markdown
Disease Associates Disease This section loads the dataframe that contains all disease associates gene candidate sentences and their respective dataset assignments.
###Code
cutoff = 300
total_candidates_df = (
pd.read_table("../dataset_statistics/output/all_dag_map.tsv.xz")
.query("sen_length < @cutoff")
)
total_candidates_df.head(2)
###Output
_____no_output_____
###Markdown
Embed All Disease Gene Sentences This section embeds all candidate sentences. For each sentence, we place tags around each mention, tokenize the sentence, and then match each token to its corresponding word index. Any word missing from our vocab receives an index of 1. Lastly, the embedded sentences are exported as a sparse dataframe.
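Conceptually, each tagged and tokenized sentence becomes one fixed-width row of word indices. The sketch below is illustrative only -- the real conversion happens inside the imported `generate_embedded_df`, and the function name, padding index, and truncation here are assumptions rather than project code:

```python
# Illustrative sketch only, not the actual generate_embedded_df implementation.
def sentence_to_indices(tokens, word_dict, max_length, unk_index=1, pad_index=0):
    # Known words get their index from the trained vocabulary;
    # anything unseen falls back to the unknown-token index (1 in this notebook).
    indices = [word_dict.get(tok, unk_index) for tok in tokens][:max_length]
    # Pad every sentence to the same width so the rows stack into one dataframe.
    return indices + [pad_index] * (max_length - len(indices))
```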
###Code
word_dict_df = pd.read_table("output/disease_associates_gene_word_dict.tsv")
word_dict = {word[0]:word[1] for word in word_dict_df.values.tolist()}
fixed_word_dict = {word:word_dict[word] + 2 for word in word_dict}
limit = 1000000
total_candidate_count = total_candidates_df.shape[0]
for offset in list(range(0, total_candidate_count, limit)):
candidates = (
session
.query(DiseaseGene)
.filter(
DiseaseGene.id.in_(
total_candidates_df
.candidate_id
.astype(int)
.tolist()
)
)
.offset(offset)
.limit(limit)
.all()
)
max_length = total_candidates_df.sen_length.max()
# if first iteration create the file
if offset == 0:
(
generate_embedded_df(candidates, fixed_word_dict, max_length=max_length)
.to_csv(
"output/all_embedded_dg_sentences.tsv",
index=False,
sep="\t",
mode="w"
)
)
# else append don't overwrite
else:
(
generate_embedded_df(candidates, fixed_word_dict, max_length=max_length)
.to_csv(
"output/all_embedded_dg_sentences.tsv",
index=False,
sep="\t",
mode="a",
header=False
)
)
os.system("cd output; xz all_embedded_dg_sentences.tsv")
###Output
_____no_output_____ |
PythonCodes/[SSC]Sigmoid/20210622-C0Stress.ipynb | ###Markdown
C0 stress field Approach
###Code
import sys
sys.path.insert(0,"/home/nico/Documents/TEAR/Codes_TEAR/PythonCodes/LibFolder")
sys.path.insert(0,"/home/nico/Documents/TEAR/Codes_TEAR/se2dr/se2wave/utils/python")
from Lib_GeneralFunctions import *
from Lib_GeneralSignalProcNAnalysis import *
from Lib_SigmoidProcessing import *
import pandas as pd
from matplotlib.gridspec import GridSpec
# Save the reference slip and slip-rate time series for one receiver into a class
class SSCreference:
def __init__(self, filename, coordinates, RefSource="SEM2DPACK"):
line = pd.read_csv(filename.format("slip"), header=None)
self.Time = line[0]
self.Slip = line[1]
line = pd.read_csv(filename.format("sr"), header=None)
self.SlipRate = line[1]
self.Coord = coordinates #Only used for labels and printing
self.RefSource = RefSource
#end __init__
# Default object printing information
def __repr__(self):
return "The TPV3reference object was generated from: {} and the receiver is located at {}".format(self.RefSource, self.Coord)
#end __repr__
def __str__(self):
return "The TPV3reference object was generated from: {} and the receiver is located at {}".format(self.RefSource, self.Coord)
#end __str__
def PlotReference(self, ax, SlipSlipRate, filtering=True, **kwargs):
if SlipSlipRate=="Slip":
if(filtering):
ax.plot(self.Time, Butterworth(self.Slip, **kwargs), label = "", c = "k", ls = "--", zorder=1)
else:
ax.plot(self.Time, self.Slip, label = "", c = "k", ls = "--", zorder=1)
elif SlipSlipRate=="SlipRate":
if(filtering):
ax.plot(self.Time, Butterworth(self.SlipRate, **kwargs), label = "", c = "k", ls = "--", zorder=1)
else:
ax.plot(self.Time, self.SlipRate, label = "", c = "k", ls = "--", zorder=1)
return ax
def GenericFigAxis():
fig = plt.figure(figsize=[15,5])
gs = GridSpec(1, 2)
ax1 = fig.add_subplot(gs[0, 0])
ax2 = fig.add_subplot(gs[0, 1])
return fig, [ax1, ax2]
def format_axes(fig):
"""
    Format a figure and 4 equidistant receivers' lines from a single file. Receiver distance defines the color.
"""
for i, ax in enumerate(fig.axes):
ax.set_xlim(-0.5,4)
ax.set_ylim(-0.5,8)
ax.set_xlabel("time(s)")
Lines = fig.axes[-1].get_lines()
legend2 = fig.axes[-1].legend(Lines, ['2km','4km', '6km', '8km'], loc=1)
fig.axes[-1].add_artist(legend2)
fig.axes[-1].set_ylabel("Slip Rate (m/s)")
fig.axes[0].set_ylabel("Slip (m)")
def Multi_format_axes(fig,cmap, LabelsPerColor):
"""
Format a figure that contains different files with
information from several receivers for simulations under sets of blending parameters.
"""
ColorDict = dict(enumerate(LabelsPerColor))
for i, ax in enumerate(fig.axes):
ax.set_xlim(-0.5,4)
ax.set_ylim(-0.5,8)
ax.set_xlabel("time(s)")
Lines = []
for idx,colcol in enumerate(cmap.colors):
Lines.append(mlines.Line2D([], [], color = colcol,
linewidth = 3, label = ColorDict.get(idx)))
legend2 = fig.axes[-1].legend(Lines, LabelsPerColor, loc = 2)
fig.axes[-1].add_artist(legend2)
fig.axes[-1].set_ylabel("Slip Rate (m/s)")
fig.axes[0].set_ylabel("Slip (m)")
path = "/home/nico/Documents/TEAR/Codes_TEAR/ProfilePicking/Output/"
# Reference saved into a list of objects
RefList = [SSCreference(path + "Reference/sem2dpack/sem2d-{}-1.txt", "2km"),
SSCreference(path + "Reference/sem2dpack/sem2d-{}-2.txt", "4km"),
SSCreference(path + "Reference/sem2dpack/sem2d-{}-3.txt", "6km"),
SSCreference(path + "Reference/sem2dpack/sem2d-{}-4.txt", "8km"),
]
from matplotlib.colors import ListedColormap
import matplotlib.lines as mlines
from palettable.cartocolors.qualitative import Safe_6
cmap = ListedColormap(Safe_6.mpl_colors[:])
###Output
_____no_output_____
###Markdown
Sigmoid data
###Code
FolderSigmoidPath = "/home/nico/Documents/TEAR/Codes_TEAR/PythonCodes/[SSC]Sigmoid/ProcessedData/"
ListOfFileNames = ["20210622-P2_P1_050x050-50.05",
"20210622-P3_P1_025x025-25.025"]
Delta1 = ListOfFileNames[:]
SigmoidFiles1 = [LoadPickleFile(FolderSigmoidPath, fname) for fname in Delta1]
fig, axis = GenericFigAxis()
# Sigmoid case plotting
for iidx,SFile in enumerate(SigmoidFiles1):
print(SFile)
for Test1 in SFile:
axis[0].plot(Test1.Time, Test1.Slip, color= cmap.colors[iidx],linewidth=2,zorder=iidx)
axis[1].plot(Test1.Time, Test1.SlipRate, color= cmap.colors[iidx],linewidth=2,zorder=iidx)
LabelsPerColor= ["50x50-P1-$\delta$:50.05", "25x25-P1-$\delta$:25.025"]
Multi_format_axes(fig, cmap, LabelsPerColor)
fig.suptitle("P1 - No blending - $\delta_f: 1.001$")
[item.PlotReference(axis[0], "Slip", filtering=False) for item in RefList]
[item.PlotReference(axis[1], "SlipRate", filtering=False) for item in RefList]
plt.show()
###Output
[FaultData Object, distance 2000 - half thickness 50.05, FaultData Object, distance 4000 - half thickness 50.05, FaultData Object, distance 6000 - half thickness 50.05, FaultData Object, distance 8000 - half thickness 50.05]
[FaultData Object, distance 2000 - half thickness 25.025, FaultData Object, distance 4000 - half thickness 25.025, FaultData Object, distance 6000 - half thickness 25.025, FaultData Object, distance 8000 - half thickness 25.025]
###Markdown
Tilted case
###Code
FolderTiltedPath = "/media/nico/Elements/ToPlot/20210706-TiltingNew/"
TiltedFile1 = LoadPickleFile(Filename = "Tilted20degCont_P8_P1_050x050-Tilt20.0-P1-TPList_t1131_d50.05.pickle",
FolderPath = FolderTiltedPath)
TiltedFile2 = LoadPickleFile(Filename = "Tilted20DegCont_P9_P1_025x025-Tilt20.0-P1-TPList_t1130_d25.025.pickle",
FolderPath = FolderTiltedPath)
fig, axis = GenericFigAxis()
# Tilted case plotting
iidx = 0
for Test1 in TiltedFile1[:-1]:
axis[0].plot(Test1.Time, Test1.DispX, color= cmap.colors[iidx],linewidth=2,zorder=iidx)
axis[1].plot(Test1.Time, Test1.VelX, color= cmap.colors[iidx],linewidth=2,zorder=iidx)
iidx = 1
for Test1 in TiltedFile2[:-1]:
axis[0].plot(Test1.Time, Test1.DispX, color= cmap.colors[iidx],linewidth=2,zorder=iidx)
axis[1].plot(Test1.Time, Test1.VelX, color= cmap.colors[iidx],linewidth=2,zorder=iidx)
LabelsPerColor= ["50x50-P1-$\delta$:50.05", "25x25-P1-$\delta$:25.025"]
Multi_format_axes(fig, cmap, LabelsPerColor)
fig.suptitle("P1 - No blending - $\delta_f: 1.001$")
[item.PlotReference(axis[0], "Slip", filtering=False) for item in RefList]
[item.PlotReference(axis[1], "SlipRate", filtering=False) for item in RefList]
plt.show()
###Output
_____no_output_____ |
easyvqa.ipynb | ###Markdown
###Code
!pwd
!git clone https://github.com/vzhou842/easy-VQA-keras.git
cd easy-VQA-keras
pip install -r requirements.txt
!python train.py
#I have modified train.py by including code to find accuracy using test data.
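# (Hypothetical sketch of such an addition inside train.py; the actual change is not shown here,
#  and the variable names below are made up purely for illustration:)
# test_loss, test_acc = model.evaluate([test_X_imgs, test_X_seqs], test_Y)
# print('Test accuracy:', test_acc)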
!python train.py
###Output
--- Reading questions...
Read 38575 training questions and 9673 testing questions.
--- Reading answers...
Found 13 total answers:
['circle', 'green', 'red', 'gray', 'yes', 'teal', 'black', 'rectangle', 'yellow', 'triangle', 'brown', 'blue', 'no']
--- Reading/processing images...
Read 4000 training images and 1000 testing images.
Each image has shape (64, 64, 3).
--- Fitting question tokenizer...
Vocab Size: 27
{'is': 1, 'shape': 2, 'the': 3, 'a': 4, 'image': 5, 'there': 6, 'not': 7, 'what': 8, 'present': 9, 'does': 10, 'contain': 11, 'in': 12, 'color': 13, 'no': 14, 'circle': 15, 'rectangle': 16, 'triangle': 17, 'brown': 18, 'yellow': 19, 'gray': 20, 'teal': 21, 'black': 22, 'red': 23, 'green': 24, 'blue': 25, 'of': 26}
--- Converting questions to bags of words...
Example question bag of words: [0. 1. 1. 1. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 1. 0.]
--- Creating model input images...
tcmalloc: large alloc 1896038400 bytes == 0x55f576bba000 @ 0x7f73aa9521e7 0x7f7367b2f46e 0x7f7367b7fc7b 0x7f7367b82e83 0x7f7367b8307b 0x7f7367c24761 0x55f55f9344b0 0x55f55f934240 0x55f55f9a80f3 0x55f55f9a2ced 0x55f55f935bda 0x55f55f9a3915 0x55f55f9a29ee 0x55f55f9a26f3 0x55f55fa6c4c2 0x55f55fa6c83d 0x55f55fa6c6e6 0x55f55fa44163 0x55f55fa43e0c 0x7f73a973cbf7 0x55f55fa43cea
--- Creating model outputs...
Example model output: [0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
--- Building model...
2021-12-28 15:01:04.420436: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
super(Adam, self).__init__(name, **kwargs)
--- Training model...
2021-12-28 15:01:04.844254: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 1896038400 exceeds 10% of free system memory.
tcmalloc: large alloc 1896038400 bytes == 0x55f60b2e0000 @ 0x7f73aa934b6b 0x7f73aa954379 0x7f737b1f32d7 0x7f73698ad77f 0x7f7369943b28 0x7f737652f123 0x7f736fb33f0d 0x7f7376a67c2f 0x7f736fb36614 0x7f736fb3b5ff 0x7f736fb3c15d 0x7f736fb449fb 0x7f736fb45fc0 0x7f736f81f469 0x7f7376557181 0x7f736f2ee26d 0x7f736f26d1f3 0x7f73592f99d7 0x7f735931d3b8 0x55f55f9344b0 0x55f55f934240 0x55f55f9a80f3 0x55f55f9a29ee 0x55f55f874e2b 0x55f55f9a4fe4 0x55f55f9a2ced 0x55f55f93648c 0x55f55f977159 0x55f55f9740a4 0x55f55f934d49 0x55f55f9a894f
Epoch 1/8
1206/1206 [==============================] - ETA: 0s - loss: 0.8783 - accuracy: 0.64812021-12-28 15:01:17.063390: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 475447296 exceeds 10% of free system memory.
/usr/local/lib/python3.7/dist-packages/keras/engine/functional.py:1410: CustomMaskWarning: Custom mask layers require a config and must override get_config. When loading, the custom mask layer must be passed to the custom_objects argument.
layer_config = serialize_layer_fn(layer)
1206/1206 [==============================] - 14s 10ms/step - loss: 0.8783 - accuracy: 0.6481 - val_loss: 0.7291 - val_accuracy: 0.6833
Epoch 2/8
1206/1206 [==============================] - 11s 9ms/step - loss: 0.7137 - accuracy: 0.6913 - val_loss: 0.6779 - val_accuracy: 0.7090
Epoch 3/8
1206/1206 [==============================] - 11s 9ms/step - loss: 0.6420 - accuracy: 0.7246 - val_loss: 0.5959 - val_accuracy: 0.7428
Epoch 4/8
1206/1206 [==============================] - 11s 9ms/step - loss: 0.5712 - accuracy: 0.7453 - val_loss: 0.5406 - val_accuracy: 0.7575
Epoch 5/8
1206/1206 [==============================] - 12s 10ms/step - loss: 0.5137 - accuracy: 0.7657 - val_loss: 0.4962 - val_accuracy: 0.7681
Epoch 6/8
1206/1206 [==============================] - 11s 9ms/step - loss: 0.4619 - accuracy: 0.7806 - val_loss: 0.4601 - val_accuracy: 0.7764
Epoch 7/8
924/1206 [=====================>........] - ETA: 2s - loss: 0.4207 - accuracy: 0.7897
###Markdown
###Code
###Output
_____no_output_____ |
ๆทฑๅบฆๅญฆไน /d2l-zh-1.1/chapter_convolutional-neural-networks/padding-and-strides.ipynb | ###Markdown
Padding and Stride In the example in the previous section, we used an input with height and width of 3 and a convolution kernel with height and width of 2 to get an output with height and width of 2. In general, assuming the input shape is $n_h\times n_w$ and the convolution kernel window shape is $k_h\times k_w$, the output shape will be $$(n_h-k_h+1) \times (n_w-k_w+1).$$ So the output shape of the convolutional layer is determined by the input shape and the shape of the convolution kernel window. In this section we introduce two hyperparameters of the convolutional layer, namely padding and stride. For an input and a convolution kernel of given shapes, they can be used to change the output shape. Padding Padding refers to adding elements (usually 0 elements) on both sides of the input's height and width. In Figure 5.2 we added elements with value 0 on both sides of the original input's height and width, so that the input height and width grow from 3 to 5, which makes the output height and width increase from 2 to 4. The shaded portions in Figure 5.2 are the first output element and the input and kernel array elements used in its computation: $0\times0+0\times1+0\times2+0\times3=0$. In general, if a total of $p_h$ rows are padded on both sides of the height and a total of $p_w$ columns are padded on both sides of the width, the output shape will be $$(n_h-k_h+p_h+1)\times(n_w-k_w+p_w+1),$$ that is, the output height and width increase by $p_h$ and $p_w$, respectively. In many cases we set $p_h=k_h-1$ and $p_w=k_w-1$ so that the input and the output have the same height and width. This makes it easier to infer the output shape of every layer when constructing the network. Assuming here that $k_h$ is odd, we pad $p_h/2$ rows on both sides of the height. If $k_h$ is even, one possibility is to pad $\lceil p_h/2\rceil$ rows on the top side of the input and $\lfloor p_h/2\rfloor$ rows on the bottom side; padding on both sides of the width works in the same way. Convolutional neural networks often use convolution kernels with odd height and width, such as 1, 3, 5 and 7, so that the number of padded elements on the two ends is equal. For an arbitrary two-dimensional array `X`, let its element in row `i` and column `j` be `X[i,j]`. When the number of padded elements on the two ends is equal and the input and output have the same height and width, we know that the output `Y[i,j]` is computed by cross-correlating the window centered on the input `X[i,j]` with the convolution kernel. In the example below we create a two-dimensional convolutional layer with a height and width of 3, and then set the padding number on both sides of the input height and width to 1. Given an input with height and width 8, we find that the height and width of the output are also 8.
###Code
from mxnet import nd
from mxnet.gluon import nn
# Define a helper function to run the convolutional layer. It initializes the convolutional
# layer weights and expands/reduces the dimensions of the input and output accordingly
def comp_conv2d(conv2d, X):
    conv2d.initialize()
    # (1, 1) stands for the batch size and the number of channels
    # (see the "multiple input and output channels" section); both are 1 here
    X = X.reshape((1, 1) + X.shape)
    Y = conv2d(X)
    return Y.reshape(Y.shape[2:])  # drop the first two dimensions we do not care about: batch and channel

# Note that here 1 row or column is padded on each side, so a total of 2 rows or columns are added
conv2d = nn.Conv2D(1, kernel_size=3, padding=1)
X = nd.random.uniform(shape=(8, 8))
comp_conv2d(conv2d, X).shape
###Output
_____no_output_____
###Markdown
When the height and width of the convolution kernel differ, we can still make the output and the input have the same height and width by setting different padding numbers for the height and width.
###Code
# Use a convolution kernel with a height of 5 and a width of 3. The padding numbers on both
# sides of the height and width are 2 and 1, respectively
conv2d = nn.Conv2D(1, kernel_size=(5, 3), padding=(2, 1))
comp_conv2d(conv2d, X).shape
###Output
_____no_output_____
###Markdown
Stride In the previous section we introduced the two-dimensional cross-correlation operation. The convolution window starts at the top-left of the input array and slides over the input array from left to right and from top to bottom. We call the number of rows and columns slid per step the stride. In the examples we have seen so far, the stride is 1 in both the height and the width directions, but we can also use a larger stride. Figure 5.3 shows a two-dimensional cross-correlation operation with a stride of 3 on the height and 2 on the width. We can see that when the second element of the first column is output, the convolution window slides down 3 rows, while it slides 2 columns to the right when the second element of the first row is output. When the convolution window slides another 2 columns to the right on the input, there is no output because the input elements cannot fill the window. The shaded portions in Figure 5.3 are the output elements together with the input and kernel array elements used in their computation: $0\times0+0\times1+1\times2+2\times3=8$, $0\times0+6\times1+0\times2+0\times3=6$. In general, when the stride on the height is $s_h$ and the stride on the width is $s_w$, the output shape is $$\lfloor(n_h-k_h+p_h+s_h)/s_h\rfloor \times \lfloor(n_w-k_w+p_w+s_w)/s_w\rfloor.$$ If we set $p_h=k_h-1$ and $p_w=k_w-1$, the output shape simplifies to $\lfloor(n_h+s_h-1)/s_h\rfloor \times \lfloor(n_w+s_w-1)/s_w\rfloor$. Going a step further, if the input height and width are divisible by the strides on the height and width respectively, the output shape will be $(n_h/s_h) \times (n_w/s_w)$. Below we set the stride on both the height and width to 2, thus halving the input height and width.
###Code
conv2d = nn.Conv2D(1, kernel_size=3, padding=1, strides=2)
comp_conv2d(conv2d, X).shape
###Output
_____no_output_____
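###Markdown
As a quick check of the general formula above, the small helper below (added for illustration; it is not part of the original notebook) computes $\lfloor(n-k+p+s)/s\rfloor$ for a single dimension.
###Code
# Illustrative helper: output size along one dimension, where n = input size,
# k = kernel size, p = total padding and s = stride
def conv_out_dim(n, k, p, s):
    return (n - k + p + s) // s

# The cell above: 8x8 input, 3x3 kernel, total padding 2, stride 2 -> 4x4
print(conv_out_dim(8, 3, 2, 2), conv_out_dim(8, 3, 2, 2))
# The next cell: 3x5 kernel, padding (0, 1) i.e. totals (0, 2), strides (3, 4) -> 2x2
print(conv_out_dim(8, 3, 0, 3), conv_out_dim(8, 5, 2, 4))
###Output
_____no_output_____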
###Markdown
Next is a slightly more complicated example.
###Code
conv2d = nn.Conv2D(1, kernel_size=(3, 5), padding=(0, 1), strides=(3, 4))
comp_conv2d(conv2d, X).shape
###Output
_____no_output_____ |
files/google/Notebook.ipynb | ###Markdown
Google Play Apps Rating Predictions * **What kind of Android Apps are most likely to get high ratings?**  Load Data
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
raw = pd.read_csv("/content/drive/My Drive/Home/bibina Google play/googleplaystore.csv")
raw.head()
raw.info()
data = raw.dropna()
###Output
_____no_output_____
###Markdown
EDA Category * **First, let's take a look at the top 5 categories with the highest ratings**
###Code
data.groupby('Category').mean().sort_values(by='Rating',ascending=False).head(5)
plt.figure(figsize=(16,8))
fig = sns.boxplot(x="Category",y="Rating",data=data,palette = "Set1")
fig.set_xticklabels(fig.get_xticklabels(),rotation=90)
plt.title('Distribution of Ratings in Each Category',fontsize = 20)
###Output
_____no_output_____
###Markdown
* **From this box plot, we can see that the ratings of "Events" and "Education" categories are all pretty steady. Even the lowest rating is above 3.5.*** **Ratings of "Business", "Dating", "Finance", "Lifestyle" and "Tools" categories vary a lot.** Reviews
###Code
pd.to_numeric(data.Reviews)
plt.figure(figsize=(10,5))
sns.regplot(x=pd.to_numeric(data.Reviews),y=data.Rating,data=data)
plt.title('Rating vs Reviews',fontsize = 20)
###Output
_____no_output_____
###Markdown
* **Reviews seem to be slightly correlated to ratings. More reviews, higher ratings.** Size * **Take a rough look at the range of size.**
###Code
data.Size.unique()
###Output
_____no_output_____
###Markdown
 * **Replace 'k', 'M' and 'Varies with device' with numbers**
###Code
data.Size.replace({'Varies with device':np.nan},regex=True,inplace=True)
data.Size = (data.Size.replace(r'[kM]+$', '', regex=True).astype(float) * \
data.Size.str.extract(r'[\d\.]+([KM]+)', expand=False)
.fillna(1)
.replace(['k','M'], [1, 1000]).astype(int))
plt.figure(figsize=(10,5))
sns.regplot(x=pd.to_numeric(data.Size),y=data.Rating,data=data)
plt.title('Rating vs Size',fontsize = 20)
###Output
_____no_output_____
###Markdown
 * **Size seems to have little effect on ratings, but apps with larger size have higher ratings on average.** Installs
###Code
data.Installs.unique()
ins = data.groupby('Installs').mean()
plt.figure(figsize=(16,8))
fig = sns.barplot(x=ins.index,y=ins['Rating'],data=data,palette = "Set1")
fig.set_xticklabels(fig.get_xticklabels(),rotation=60)
plt.title('Ratings vs Install Times',fontsize = 20)
###Output
_____no_output_____
###Markdown
 * **Apps with very few installs, as well as those with a very large number of installs, have relatively high ratings.** Price
###Code
data.Price = data.Price.replace(r'[$]+', '', regex=True).astype(float)
plt.figure(figsize=(10,5))
sns.regplot(x=data.Price,y=data.Rating)
plt.title('Rating vs Price',fontsize = 20)
plt.figure(figsize=(10,5))
sns.regplot(x='Price',y='Rating',data=data[data.Price<=50])
plt.title('Rating vs Price(less than $50)',fontsize = 20)
###Output
_____no_output_____
###Markdown
* **Apps with higher price tend to have higher ratings. But there are also a lot of free apps getting full star ratings. Let's take a look at them.**
###Code
fivestar = data[(data.Price==0)&(data.Rating==5.0)]
fivestar.shape
plt.figure(figsize=(16,8))
fig = sns.countplot(x='Category', data=fivestar,palette='Set1')
fig.set_xticklabels(fig.get_xticklabels(),rotation=60)
plt.title('5 Star Free Apps in Each Category ',fontsize = 20)
###Output
_____no_output_____
###Markdown
 * **There are nearly 60 five star free apps in the "Family" category. The "Medical" and "Lifestyle" categories also did pretty well.** Predictive Analysis > **From the previous analysis, I found that around 1500 ratings were missing. So, next, I'm going to use the other, complete rows of information to predict those missing ratings.** Feature Engineering **Now, it's time to choose useful features for the predictive models. I will drop some columns, including:*** **App: ID-like, should be excluded from the data*** **Last Updated: intuitively, it seems to have nothing to do with Ratings*** **Current Ver: not consistent*** **Type: provides redundant information given Price*** **Genres: provides redundant information given Category**
###Code
raw.drop(['App','Last Updated','Current Ver','Type','Genres'],axis=1,inplace=True)
raw = raw[raw.Size!='1,000+']
raw=raw[raw.Size!='Varies with device']
raw.Size = (raw.Size.replace(r'[kM]+$', '', regex=True).astype(float) * \
raw.Size.str.extract(r'[\d\.]+([KM]+)', expand=False)
.fillna(1)
.replace(['k','M'], [1, 1000]).astype(int))
raw.Reviews = pd.to_numeric(raw.Reviews)
###Output
_____no_output_____
###Markdown
* **Divide features into numerical and categorical ones.**
###Code
num_col = [cname for cname in raw.columns if raw[cname].dtype in ['int64','float64']]
cat_col = [cname for cname in raw.columns if raw[cname].dtype=='object']
cat_col
num_col
###Output
_____no_output_____
###Markdown
* **Perform One Hot Encoding on categorical variables.**
###Code
raw.head()
new = pd.get_dummies(raw, prefix=cat_col, drop_first=True)
new
###Output
_____no_output_____
###Markdown
* **Set target variable: Rating**
###Code
#filter out the test set
test = new[new.Rating.isna()]
label = new.Rating.dropna()
label
data = new.dropna().drop('Rating',axis=1)
###Output
_____no_output_____
###Markdown
Modeling
###Code
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
###Output
_____no_output_____
###Markdown
* **Standard Scaler**
###Code
scaler = StandardScaler()
X = scaler.fit_transform(data)
###Output
_____no_output_____
###Markdown
* **Train Test Split**
###Code
X_train, X_test, y_train, y_test = train_test_split( X, label, test_size=0.2, random_state=101)
###Output
_____no_output_____
###Markdown
* **XGBoost Regressor**
###Code
from xgboost import XGBRegressor
from sklearn.metrics import mean_absolute_error
def getAccuracy_cv(model):
model.fit(X_train,y_train)
# get predictions
preds = model.predict(X_test)
# cross validation
from sklearn.model_selection import cross_val_score
scores = -1 * cross_val_score(model,X,label,cv=10,scoring = 'neg_mean_absolute_error')
print(scores.mean())
# Hyperparameter Tuning
paraList = [500, 1000, 1500]
for i in paraList:
model =XGBRegressor(
learning_rate =0.1,
n_estimators=i,
max_depth=5,
min_child_weight=1,
gamma=0,
subsample=0.8,
colsample_bytree=0.8,
nthread=4,
scale_pos_weight=1,
seed=27)
getAccuracy_cv(model)
final_model =XGBRegressor(
learning_rate =0.1,
n_estimators=500,
max_depth=5,
min_child_weight=1,
gamma=0,
subsample=0.8,
colsample_bytree=0.8,
nthread=4,
scale_pos_weight=1,
seed=27)
getAccuracy_cv(final_model)
###Output
[18:28:25] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[18:28:39] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[18:28:55] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[18:29:10] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[18:29:25] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[18:29:41] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[18:29:56] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[18:30:12] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[18:30:27] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[18:30:43] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[18:30:58] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
0.36100240595439537
###Markdown
Predict * **Repeat the preprocessing steps on test data**
###Code
test = test.drop('Rating',axis=1)
test.head()
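# A minimal sketch of the final prediction step (added for illustration; it was not run in
# this notebook). It assumes `test` keeps the same columns as `data`, which holds here since
# both frames come from the same get_dummies() call above.
X_missing = scaler.transform(test)
predicted_ratings = final_model.predict(X_missing)
predicted_ratings[:5]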
###Output
_____no_output_____ |
Aulas/Aula_02/Spark-I-Introduction.ipynb | ###Markdown
**2021/22**  Introduction to Apache SparkIn this lecture we will introduce the Spark framework. Right now, the goal is to explain how it works and to highlight its potential.**Disclaimer**: Some content presented in this notebook, e.g. images, is based on references mentioned at the end of the notebook. **Context** In the past, computers got faster mainly due to processor speed increases, and most applications were designed to run on a single-processor machine. But as more data needed to be processed and hardware limits were being tested, research efforts moved towards parallel processing and new programming models. **Apache Spark** - is an open-source distributed cluster-computing framework. It is designed for large-scale distributed data processing, with focus on speed and modularity;- provides in-memory storage for intermediate computations;- contains libraries with APIs for machine learning, SQL, stream processing and graph processing. Spark components and APIsSpark offers four components as libraries for diverse workloads in a unified stack.Code can be written in the languages Scala, SQL, Python, Java or R, which is then decomposed into bytecode to be executed in Java Virtual Machines (JVMs) across the cluster.  There are both low-level and high-level APIs related to (distributed) collections of data. We may have collections of:* **Resilient Distributed Dataset (RDD)** * they are now consigned to low-level APIs* **DataFrame** * the most common structured data - it simply represents a table of data with rows and columns* **Dataset** * collection of objects but only makes sense in the case of Scala and Java Further details are to be covered later on but we can highlight now that our focus will be on **DataFrames** Spark Core and Spark SQL EngineSpark Core contains basic functionalities for running jobs that are needed by other components. Spark SQL Engine provides additional help to do so.Computations will ultimately convert into low-level RDD-based bytecode (in Scala) to be distributed and run in executors across the cluster. Spark SQLSpark SQL provides functions for manipulating large sets of distributed structured data using an SQL subset. (ANSI SQL:2003-compliant)It can also be used for **reading** and **writing** data to and from various structured formats and data sources, such as JavaScript Object Notation (JSON) files, CSV files, Parquet files (an increasingly popular file format that allows for storing a schema alongside the data), relational databases, Hive, and others. There is also a query optimization framework called Catalyst. Spark Structured StreamingSpark Structured Streaming is a framework for ingesting real-time streaming data from various sources, such as HDFS-based, Kafka, Flume, Twitter, ZeroMQ, as well as customized ones. Developers are able to combine and react in real time to both static and streaming data. A stream is perceived as a continually growing structured table, against which queries are made as if it were a static table.Aspects of fault tolerance and late-data semantics are handled via the Spark SQL core engine. Hence, developers can focus on just writing streaming applications. Machine Learning MLlibSpark MLlib is a library of common machine-learning (ML) algorithms built on top of DataFrame-based APIs. 
Among other aspects, these APIs allow one to extract or transform features, build pipelines (for training and evaluating) and persist models during deployment (for saving/reloading)Available ML algorithms include logistic regression, naïve Bayes classification, support vector machines (SVMs), decision trees, random forests, linear regression, k-means clustering, among others. Graph Processing GraphXSpark GraphX is a library for manipulating graphs, that is, data structures comprising vertices and the edges connecting them. It provides algorithms for building, analysing, connecting and traversing graphs. Among others, there are implementations of important algorithms of graph theory, such as PageRank, connected components, shortest paths and singular value decomposition (SVD). Execution in a distributed architectureA **Spark Application** consists of a **driver** program responsible for orchestrating parallel operations on the Spark cluster. The driver accesses the distributed components in the cluster (**executors** and **manager**) via a **SparkSession**.  SparkSessionA SparkSession instance provides a single entry point to all functionalities.For a Spark application, one needs to create the SparkSession object if none is available, as described below. In that case, we can configure it according to our own needs.But first, we have to make sure we can access **pyspark** from this notebook. One way to do so is to run the notebook using a suitable kernel. That is why we have already set one: **PySpark**.For the time being there is no need to provide further details about this kernel - it is just a file named *kernel.json* placed in a proper location and with some settings.
###Code
from pyspark.sql import SparkSession
# build our own SparkSession
myspark = SparkSession\
.builder\
.appName("BigData")\
.config("spark.sql.shuffle.partitions",6)\
.config("spark.sql.repl.eagereval.enabled",True)\
.getOrCreate()
# check it, including the link
myspark
# print SparkSession object
print(myspark)
# Example of usage:
# creating a range of numbers, represented as a distributed collection
numbers_to_n = myspark.range(1000000).toDF("Number")
###Output
_____no_output_____
###Markdown
Cluster manager and executors* **Cluster manager** * responsible for managing the executors in the cluster of nodes on which the application runs, alongside allocating the requested resources * agnostic where ir runs as long as responsabilities above are met* **Spark executor** * runs on each worker node in the cluster * executors communicate with driver program and are responsible for executing tasks on the workers* **Deployment modes** * variety of configurations and environments available, as shown below: (just for reference) | Mode | Spark driver | Spark executor | Cluster manager ||:----------------|:----------------------------------------------------|:----------------|:-----------------|| Local | Runs on a single JVM, like a laptop or single node | Runs on the same JVM as the driver | Runs on the same host || Standalone | Can run on any node in the cluster | Each node in the cluster will launch its own executor | Can be allocated arbitrarily to any host in the cluster || YARN (client) | Runs on a client, not part of the cluster | YARN's NodeManager's container | YARN's Resource Manager works with YARN's Application Master to allocate the containers on NodeManagers for executors || YARN (cluster) | Runs with the YARN Application Master | Same as YARN client mode | Same as YARN client mode || Kubernetes | Runs in a Kubernetes pod | Each worker runs within its own pod | Kubernetes Master | Distributed data and partitions* Partitioning of data allows for efficient paralelism since every executor can perform work in parallel* Physical data is break up and distributed across storage as chunks called partitions, whether in HDFS or in cloud storage.* Each partition is treated as a dataframe in memory (logical data abstraction) * hence it is a collection of rows that sits on one physical machine of the cluster; * so if we have dataframes in our program we do not (for the most part) manipulate partitions individually - we simply specify high level transformations of data in the physical partitions, and Spark determines how this will play out across the cluster.* As much as possible, data locality is to be pursuit. It means that an executor is prefereably allocated a task that requires reading a partition closest to it in the network in order to minimize network bandwidth.**Question**: What happens if we have* multiple partitions but only one executor;* one partition but thousands of executors? Standalone application running in local modeConceptually, we prototype the application by running it locally with small datasets; then, for large datasets, we use more advanced deployment modes to take advantage of distributed and more powerful execution. Spark shells Spark provides four interpretative shells (windows) to carried out ad hoc data analysis:* pyspark* spark-shell* spark-sql* sparkRThey resemble their shell counterparts for the considered languages. 
The main difference now is that they have extra support for connecting to the cluster and for loading distributed data into the workers' memory. Notice that, if using shells:* the driver is part of the shell* the SparkSession mentioned above is automatically created, accessible via the variable `spark`* they are exited by pressing Ctrl-D Note: in accordance with the location of the Spark installation on our computer, we have set for the shell (terminal window) the following environment variables (in the file ~/.profile)    export SPARK_HOME=/opt/spark    export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin    export PYSPARK_PYTHON=/usr/bin/python3 By the way, in Linux the command `which` is useful for checking where programs are installed.
###Code
# run pyspark
#in terminal
#!pyspark
###Output
_____no_output_____
###Markdown
**Stop running the previous cell. Here we can't do that much!** Example running in a `pyspark` shellReading a file and then showing top 10 lines, as well as the number of lines. It runs locally, in a single JVM. First, open an autonomous shell (Terminal window) and run the following commands, one by one: which pyspark pyspark --help pyspark spark.version 2+3 And then execute the commands (the provided file is located in the current directory) lines = spark.read.text("pyspark-help.txt") lines.show(10, truncate=False) lines.count() quit() Spark operations and related computationOperations on distributed data are of two types: **transformations** and **actions** Basic concepts* **Job**: parallel computation created by the driver, consisting of multiple tasks that gets spawned in response to actions, e.g save()* **Stage**: each job gets divided into smaller sets of tasks called stages, that depend on each other* **Task**: single unit of work or execution to be sent to a Spark executor (a task per core)  Transformations In Spark core data structures are **Immutable**, that is, they cannot be changed after creation. If one wants to change a dataframe we need to instruct Spark how to do it. These are called transformations. Hence, transformations transform a DataFrame into a new one without altering the original data. So it returns a new one but transformed.Some examples are:|Transformation | Description||:-------|:-------||**orderBy()**|Returns a new DataFrame sorted by specific column(s)||**groupBy**|Groups the DataFrame using specified columns, so we can run aggregation on them||**filter()**|Filters rows using a given condition||**select()**|Returns a new DataFrame with select columns||**join()**|Joins with another DataFrame, using a given join expression| **Back to our myspark session...**Checking the content of a text file.
###Code
!ls -la
# strings =
strings = myspark.read.text("pyspark-help.txt")
strings.show(5,truncate= False)
n = strings.count()
n
# filtering lines with a particular word, say pyspark
# filtered =
filtered = strings.filter(strings.value.contains("pyspark"))
filtered.show(truncate=False)
filtered
###Output
+------------------------------+
|value |
+------------------------------+
|Usage: ./bin/pyspark [options]|
+------------------------------+
###Markdown
Types of transformationsTransformations can be:* Narrow * a single output partition can be computed from a single input partition (no exchange of data, all performed in memory) * examples are **filter()**, **contains()*** Wide * data from other partitions across the cluster is read in, combined, and written to disk * examples are **groupBy()**, **reduceBy()** Example**Reading structured data, filter some of them and then show the result but sorted**(The file is on the same folder as this notebook)
###Code
! ls -la
# Prior, just let us check the file we are about to use (with help of Linux commands)
! head flights-US-2015.csv
# Read the datafile into a DataFrame using the CSV format,
# by inferring the schema and specifying that the file contains a header,
# which provides column names for comma-separated fields
# info_flights =
info_flights = myspark.read.load("flights-US-2015.csv",
format = "csv",
sep = ",",
header = True,
inferSchema = True)
# info_flights # or print(info_flights)
info_flights
info_flights.head(20)
# check how many records we have in the DataFrame
info_flights.count()
# and showing some of them
info_flights.show()
# get routes from the United States
# and try other options ...
routes = info_flights.filter(info_flights.ORIGIN_COUNTRY_NAME == "United States")
# show the routes
routes.show()
# show the routes ordered by flights
routes_ordered = routes.orderBy("FLIGHTS", ascending = False)
routes_ordered.show()
###Output
+------------------+-------------------+-------+
| DEST_COUNTRY_NAME|ORIGIN_COUNTRY_NAME|FLIGHTS|
+------------------+-------------------+-------+
| United States| United States| 370002|
| Canada| United States| 8399|
| Mexico| United States| 7140|
| United Kingdom| United States| 2025|
| Japan| United States| 1548|
| Germany| United States| 1468|
|Dominican Republic| United States| 1353|
| South Korea| United States| 1048|
| The Bahamas| United States| 955|
| France| United States| 935|
| Colombia| United States| 873|
| Brazil| United States| 853|
| Netherlands| United States| 776|
| China| United States| 772|
| Jamaica| United States| 666|
| Costa Rica| United States| 588|
| El Salvador| United States| 561|
| Panama| United States| 510|
| Cuba| United States| 466|
| Spain| United States| 420|
+------------------+-------------------+-------+
only showing top 20 rows
###Markdown
Lazy evaluation and actions Spark uses lazy evaluation, that is, it waits until the very last moment to execute the graph of computational instructions established, that is, the plan of transformations that we would like to apply to the data.As results are not computed immediately, they are recorded as **lineage** (*trace of descendants*) and at later time in its execution plan, Spark may rearrange certain transformations, coalesce them, or optimize transformations into stages for more efficient execution of the entire flow. Only when an **action** is invoked or data is read/written to disk the lazy evaluation of all recorded transformations is triggered.An action is like a play button. We may have:* Actions to view data in the console.* Actions to collect data to native objects in the respective language.* Actions to write to output data sources.Some examples are:|Action | Description||:-------|:-------||**show()**|Prints the first rows to the console||**take(n)**|Returns the first rows as a list||**count()**|Returns the number of rows||**collect()**|Returns all the records as a list||**save()**|Saves the contents to a data source|
###Code
# Using the variable numbers_to_n (a DataFrame) set before...
even_numbers = numbers_to_n.where("number % 2 = 0") # why didn't return the output?
even_numbers.explain() # or even_numbers.explain(extended=True)
# count
even_numbers.count()
# get 5 of them
even_numbers.take(5)
# the 1st one
even_numbers.first()
# and show
even_numbers.show()
###Output
+------+
|Number|
+------+
| 0|
| 2|
| 4|
| 6|
| 8|
| 10|
| 12|
| 14|
| 16|
| 18|
| 20|
| 22|
| 24|
| 26|
| 28|
| 30|
| 32|
| 34|
| 36|
| 38|
+------+
only showing top 20 rows
###Markdown
Fault tolerance**Lineage** in the context of lazy evaluation and **data immutability** mentioned above gives resiliency in the event of failures as:* Spark records each transformation in its lineage;* DataFrames are immutable between transformations;then Spark can reproduce the original state by replaying the recorded lineage. Spark UISpark UI allow us to monitor the progress of a job. It displays information about the state of Spark jobs, its environment and the cluster state. So it is very useful for tuning and debugging. Usually Spark UI is available on port 4040 of the driver node. (If that port is occupied, another one is provided)In local mode: http://localhost:4040 in a web browser. PS: recall notebook cell above when myspark was checked.
###Code
myspark.sparkContext.uiWebUrl # check where spark ui is running
###Output
_____no_output_____
###Markdown
Check the link presented above after execution.
###Code
# Let us stop the SparkSession
myspark.stop()
###Output
_____no_output_____
###Markdown
ExerciseOur goal now is to write down a Spark program that (i) reads a file containing flight data to and from the United States and then (ii) provides answers to the following questions about the data that has just been read: 1. How many records exist in the dataset? 2. How many routes originate in countries with more than one route? 3. What is the number of flights on the busiest route? 4. Which countries are the top 5 destinations? (by number of flights) The Spark program
###Code
# Import the necessary libraries
import sys
from pyspark.sql import SparkSession
from pyspark.sql.functions import count, max, sum
# Build a SparkSession using the SparkSession APIs. If it does not exist one, then create an instance.
# Notice that we can only have one per JVM
myspark = SparkSession\
.builder\
.appName("Flights")\
.config("spark.sql.repl.eagereval.enabled",True)\
.getOrCreate()
# alternatively we could have written
# myspark = (SparkSession
# .builder
# .appName("Flights")
# .getOrCreate())
# or
# spark = SparkSession.builder.appName("Flights").getOrCreate())
myspark
###Output
_____no_output_____
###Markdown
As before, we are using DataFrame high-level APIs (Spark SQL could also have been used here but we leave it for the time being)
###Code
# read the dataset
# flight_data =
flight_data = myspark.read.load("flights-US-2015.csv",
format = "csv",
sep = ",",
header = True,
inferSchema = True)
flight_data.show()
# First, let us check the schema and the initial lines of the dataset.
# We should always take this step
flight_data.printSchema()
flight_data.show(5)
# Just a detail: to figure out how, for example, sorting by FLIGHTS would work
flight_data.sort("FLIGHTS").explain() # check the Spark physical plan
###Output
== Physical Plan ==
AdaptiveSparkPlan isFinalPlan=false
+- Sort [FLIGHTS#250 ASC NULLS FIRST], true, 0
+- Exchange rangepartitioning(FLIGHTS#250 ASC NULLS FIRST, 6), ENSURE_REQUIREMENTS, [id=#543]
+- FileScan csv [DEST_COUNTRY_NAME#248,ORIGIN_COUNTRY_NAME#249,FLIGHTS#250] Batched: false, DataFilters: [], Format: CSV, Location: InMemoryFileIndex(1 paths)[file:/home/joao/Uni/Mestrado/BigData/abd/Aulas/Aula_02/flights-US-2015..., PartitionFilters: [], PushedFilters: [], ReadSchema: struct<DEST_COUNTRY_NAME:string,ORIGIN_COUNTRY_NAME:string,FLIGHTS:int>
###Markdown
**Before moving on, a note about reading data from a csv file:**Above, we have inferred the schema from the csv file (the header line provides the column names). And by the way, reading is a transformation, not an action. But we could have set the schema programmatically and then read the data from the file accordingly. When the schema is inferred from a huge file this may take some time, so in those circumstances we may decide to set the schema programmatically instead (a small sketch follows below). Questions to be answered
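Before turning to the questions, here is a minimal sketch of what such a programmatic schema could look like for this dataset (this cell was added for illustration and is not part of the original notebook; the read itself is left commented out so the inferred-schema version above remains the one used):
###Code
# Illustrative sketch: declaring the schema explicitly instead of inferring it
from pyspark.sql.types import StructType, StructField, StringType, IntegerType
flights_schema = StructType([
    StructField("DEST_COUNTRY_NAME", StringType(), True),
    StructField("ORIGIN_COUNTRY_NAME", StringType(), True),
    StructField("FLIGHTS", IntegerType(), True),
])
# flight_data = myspark.read.csv("flights-US-2015.csv", header=True, schema=flights_schema)
###Output
_____no_output_____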
###Code
# 1. How many records exist in the dataset?
flight_data.count()
# 2. How many routes originate in countries with more than one?
n = flight_data.groupBy("ORIGIN_COUNTRY_NAME")\
.count()\
.orderBy("count",ascending=False)
n.show()
n2 = n.filter("count > 1")
n2.show()
# 3. Give the number of flights in the busiest route?
print("max = " + str(flight_data.orderBy("FLIGHTS",ascending=False).first().FLIGHTS))
#or
flight_data.agg({"FLIGHTS" : "max"}).show()
# 4. Which countries are the top 5 destinations? (by number of flights)
# top_dest_countries_df = flight_data\
# show the results. As it is an action, it triggers the above query to be executed
top_dest_countries = (flight_data.groupBy("DEST_COUNTRY_NAME")
.sum("FLIGHTS")
.withColumnRenamed("sum(FLIGHTS)", "total")
.orderBy("total",ascending=False)
)
# print("Total = %d" % (top_dest_countries_df.count()))
top_dest_countries.show(5)
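# Illustrative aside (added; not part of the original notebook): the same question can also be
# answered with the Spark SQL interface mentioned earlier, by registering the DataFrame as a view
flight_data.createOrReplaceTempView("flights")
myspark.sql("""
    SELECT DEST_COUNTRY_NAME, SUM(FLIGHTS) AS total
    FROM flights
    GROUP BY DEST_COUNTRY_NAME
    ORDER BY total DESC
    LIMIT 5
""").show()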
# Finally, stop the SparkSession
myspark.stop()
###Output
_____no_output_____ |
CharLevel_EncoderDecoder.ipynb | ###Markdown
###Code
import numpy as np
import tensorflow as tf
from tensorflow import keras
###Output
_____no_output_____
###Markdown
Data Download ( Considering Hindi - English Translation )
###Code
!!curl -O http://www.manythings.org/anki/hin-eng.zip
!unzip hin-eng.zip
###Output
_____no_output_____
###Markdown
Config
###Code
batch_size = 64 # Batch size for training.
epochs = 100 # Number of epochs to train for.
latent_dim = 256 # Latent dimensionality of the encoding space.
num_samples = 10000 # Number of samples to train on.
# Path to the data txt file on disk.
data_path = "hin.txt"
###Output
_____no_output_____
###Markdown
Data Preparation
###Code
# Vectorize the data.
input_texts = []
target_texts = []
input_characters = set()
target_characters = set()
with open(data_path, "r", encoding="utf-8") as f:
lines = f.read().split("\n")
len(lines)
###Output
_____no_output_____
###Markdown
Each line contains the translation of an English sentence to Hindi, together with the contributor. These fields are separated by \t
###Code
for line in lines[: min(num_samples, len(lines) - 1)]:
input_text, target_text, _ = line.split("\t")
# We use "tab" as the "start sequence" character
# for the targets, and "\n" as "end sequence" character.
target_text = "\t" + target_text + "\n"
input_texts.append(input_text)
target_texts.append(target_text)
for char in input_text:
if char not in input_characters:
input_characters.add(char)
for char in target_text:
if char not in target_characters:
target_characters.add(char)
###Output
_____no_output_____
###Markdown
We will get 2 sets of characters in input_characters and target_characters
###Code
target_characters
len(target_characters)
input_characters = sorted(list(input_characters))
target_characters = sorted(list(target_characters))
num_encoder_tokens = len(input_characters)
num_decoder_tokens = len(target_characters)
max_encoder_seq_length = max([len(txt) for txt in input_texts])
max_decoder_seq_length = max([len(txt) for txt in target_texts])
print("Number of samples:", len(input_texts))
print("Number of unique input tokens:", num_encoder_tokens)
print("Number of unique output tokens:", num_decoder_tokens)
print("Max sequence length for inputs:", max_encoder_seq_length)
print("Max sequence length for outputs:", max_decoder_seq_length)
input_token_index = dict([(char, i) for i, char in enumerate(input_characters)])
target_token_index = dict([(char, i) for i, char in enumerate(target_characters)])
encoder_input_data = np.zeros(
(len(input_texts), max_encoder_seq_length, num_encoder_tokens), dtype="float32"
)
decoder_input_data = np.zeros(
(len(input_texts), max_decoder_seq_length, num_decoder_tokens), dtype="float32"
)
decoder_target_data = np.zeros(
(len(input_texts), max_decoder_seq_length, num_decoder_tokens), dtype="float32"
)
###Output
_____no_output_____
###Markdown
Input format - (num_samples, max_seq_length, num_tokens): each character is one-hot encoded along the last axis
###Code
for i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)):
for t, char in enumerate(input_text):
encoder_input_data[i, t, input_token_index[char]] = 1.0
encoder_input_data[i, t + 1 :, input_token_index[" "]] = 1.0
for t, char in enumerate(target_text):
# decoder_target_data is ahead of decoder_input_data by one timestep
decoder_input_data[i, t, target_token_index[char]] = 1.0
if t > 0:
# decoder_target_data will be ahead by one timestep
# and will not include the start character.
decoder_target_data[i, t - 1, target_token_index[char]] = 1.0
decoder_input_data[i, t + 1 :, target_token_index[" "]] = 1.0
decoder_target_data[i, t:, target_token_index[" "]] = 1.0
###Output
_____no_output_____
###Markdown
Model
###Code
# Define an input sequence and process it.
encoder_inputs = keras.Input(shape=(None, num_encoder_tokens))
encoder = keras.layers.LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]
# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = keras.Input(shape=(None, num_decoder_tokens))
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = keras.layers.LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_dense = keras.layers.Dense(num_decoder_tokens, activation="softmax")
decoder_outputs = decoder_dense(decoder_outputs)
# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = keras.Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(
optimizer="rmsprop", loss="categorical_crossentropy", metrics=["accuracy"]
)
model.fit(
[encoder_input_data, decoder_input_data],
decoder_target_data,
batch_size=batch_size,
epochs=epochs,
validation_split=0.2,
)
# Save model
model.save("s2s")
###Output
Epoch 1/100
37/37 [==============================] - 47s 1s/step - loss: 1.7039 - accuracy: 0.7169 - val_loss: 1.4769 - val_accuracy: 0.6885
Epoch 2/100
37/37 [==============================] - 42s 1s/step - loss: 0.8647 - accuracy: 0.8089 - val_loss: 1.3695 - val_accuracy: 0.6906
Epoch 3/100
37/37 [==============================] - 42s 1s/step - loss: 0.8314 - accuracy: 0.8106 - val_loss: 1.3158 - val_accuracy: 0.6924
Epoch 4/100
37/37 [==============================] - 42s 1s/step - loss: 0.7713 - accuracy: 0.8136 - val_loss: 1.2996 - val_accuracy: 0.6922
Epoch 5/100
37/37 [==============================] - 42s 1s/step - loss: 0.7170 - accuracy: 0.8217 - val_loss: 1.1284 - val_accuracy: 0.7203
Epoch 6/100
37/37 [==============================] - 42s 1s/step - loss: 0.6806 - accuracy: 0.8362 - val_loss: 1.0768 - val_accuracy: 0.7224
Epoch 7/100
37/37 [==============================] - 42s 1s/step - loss: 0.6233 - accuracy: 0.8429 - val_loss: 1.0261 - val_accuracy: 0.7352
Epoch 8/100
37/37 [==============================] - 42s 1s/step - loss: 0.5993 - accuracy: 0.8474 - val_loss: 0.9941 - val_accuracy: 0.7391
Epoch 9/100
37/37 [==============================] - 42s 1s/step - loss: 0.5541 - accuracy: 0.8554 - val_loss: 0.9467 - val_accuracy: 0.7517
Epoch 10/100
37/37 [==============================] - 42s 1s/step - loss: 0.5399 - accuracy: 0.8579 - val_loss: 0.9253 - val_accuracy: 0.7582
Epoch 11/100
37/37 [==============================] - 42s 1s/step - loss: 0.5227 - accuracy: 0.8617 - val_loss: 0.9076 - val_accuracy: 0.7617
Epoch 12/100
37/37 [==============================] - 43s 1s/step - loss: 0.5031 - accuracy: 0.8678 - val_loss: 0.9050 - val_accuracy: 0.7623
Epoch 13/100
37/37 [==============================] - 42s 1s/step - loss: 0.4943 - accuracy: 0.8688 - val_loss: 0.8725 - val_accuracy: 0.7690
Epoch 14/100
37/37 [==============================] - 42s 1s/step - loss: 0.4870 - accuracy: 0.8704 - val_loss: 0.8736 - val_accuracy: 0.7684
Epoch 15/100
37/37 [==============================] - 42s 1s/step - loss: 0.4691 - accuracy: 0.8752 - val_loss: 0.8499 - val_accuracy: 0.7740
Epoch 16/100
37/37 [==============================] - 43s 1s/step - loss: 0.4681 - accuracy: 0.8752 - val_loss: 0.8403 - val_accuracy: 0.7741
Epoch 17/100
37/37 [==============================] - 43s 1s/step - loss: 0.4608 - accuracy: 0.8765 - val_loss: 0.8294 - val_accuracy: 0.7765
Epoch 18/100
37/37 [==============================] - 43s 1s/step - loss: 0.4494 - accuracy: 0.8778 - val_loss: 0.8237 - val_accuracy: 0.7781
Epoch 19/100
37/37 [==============================] - 42s 1s/step - loss: 0.4453 - accuracy: 0.8783 - val_loss: 0.8307 - val_accuracy: 0.7738
Epoch 20/100
37/37 [==============================] - 43s 1s/step - loss: 0.4404 - accuracy: 0.8793 - val_loss: 0.8087 - val_accuracy: 0.7804
Epoch 21/100
37/37 [==============================] - 42s 1s/step - loss: 0.4346 - accuracy: 0.8800 - val_loss: 0.8047 - val_accuracy: 0.7801
Epoch 22/100
37/37 [==============================] - 43s 1s/step - loss: 0.4275 - accuracy: 0.8818 - val_loss: 0.7882 - val_accuracy: 0.7826
Epoch 23/100
37/37 [==============================] - 43s 1s/step - loss: 0.4217 - accuracy: 0.8825 - val_loss: 0.7939 - val_accuracy: 0.7814
Epoch 24/100
37/37 [==============================] - 42s 1s/step - loss: 0.4137 - accuracy: 0.8856 - val_loss: 0.7881 - val_accuracy: 0.7847
Epoch 25/100
37/37 [==============================] - 43s 1s/step - loss: 0.4062 - accuracy: 0.8864 - val_loss: 0.7790 - val_accuracy: 0.7854
Epoch 26/100
37/37 [==============================] - 43s 1s/step - loss: 0.4017 - accuracy: 0.8873 - val_loss: 0.7847 - val_accuracy: 0.7867
Epoch 27/100
37/37 [==============================] - 43s 1s/step - loss: 0.4139 - accuracy: 0.8843 - val_loss: 0.7742 - val_accuracy: 0.7868
Epoch 28/100
37/37 [==============================] - 43s 1s/step - loss: 0.4029 - accuracy: 0.8871 - val_loss: 0.7634 - val_accuracy: 0.7902
Epoch 29/100
37/37 [==============================] - 42s 1s/step - loss: 0.3952 - accuracy: 0.8884 - val_loss: 0.7600 - val_accuracy: 0.7910
Epoch 30/100
37/37 [==============================] - 42s 1s/step - loss: 0.3902 - accuracy: 0.8903 - val_loss: 0.7613 - val_accuracy: 0.7908
Epoch 31/100
37/37 [==============================] - 42s 1s/step - loss: 0.3850 - accuracy: 0.8919 - val_loss: 0.7541 - val_accuracy: 0.7909
Epoch 32/100
37/37 [==============================] - 42s 1s/step - loss: 0.3818 - accuracy: 0.8923 - val_loss: 0.7651 - val_accuracy: 0.7912
Epoch 33/100
37/37 [==============================] - 42s 1s/step - loss: 0.3779 - accuracy: 0.8935 - val_loss: 0.7525 - val_accuracy: 0.7921
Epoch 34/100
37/37 [==============================] - 42s 1s/step - loss: 0.3756 - accuracy: 0.8943 - val_loss: 0.7640 - val_accuracy: 0.7913
Epoch 35/100
37/37 [==============================] - 42s 1s/step - loss: 0.3722 - accuracy: 0.8952 - val_loss: 0.7432 - val_accuracy: 0.7952
Epoch 36/100
37/37 [==============================] - 42s 1s/step - loss: 0.3617 - accuracy: 0.8975 - val_loss: 0.7527 - val_accuracy: 0.7934
Epoch 37/100
37/37 [==============================] - 43s 1s/step - loss: 0.3568 - accuracy: 0.8997 - val_loss: 0.7535 - val_accuracy: 0.7943
Epoch 38/100
37/37 [==============================] - 43s 1s/step - loss: 0.3537 - accuracy: 0.9000 - val_loss: 0.7526 - val_accuracy: 0.7945
Epoch 39/100
37/37 [==============================] - 43s 1s/step - loss: 0.3474 - accuracy: 0.9015 - val_loss: 0.7434 - val_accuracy: 0.7974
Epoch 40/100
37/37 [==============================] - 43s 1s/step - loss: 0.3446 - accuracy: 0.9028 - val_loss: 0.7604 - val_accuracy: 0.7932
Epoch 41/100
37/37 [==============================] - 43s 1s/step - loss: 0.3394 - accuracy: 0.9047 - val_loss: 0.7527 - val_accuracy: 0.7962
Epoch 42/100
37/37 [==============================] - 43s 1s/step - loss: 0.3339 - accuracy: 0.9058 - val_loss: 0.7584 - val_accuracy: 0.7943
Epoch 43/100
37/37 [==============================] - 43s 1s/step - loss: 0.3348 - accuracy: 0.9057 - val_loss: 0.7617 - val_accuracy: 0.7936
Epoch 44/100
37/37 [==============================] - 43s 1s/step - loss: 0.3215 - accuracy: 0.9095 - val_loss: 0.7567 - val_accuracy: 0.7958
Epoch 45/100
37/37 [==============================] - 43s 1s/step - loss: 0.3191 - accuracy: 0.9099 - val_loss: 0.7646 - val_accuracy: 0.7953
Epoch 46/100
37/37 [==============================] - 43s 1s/step - loss: 0.3167 - accuracy: 0.9110 - val_loss: 0.7582 - val_accuracy: 0.7963
Epoch 47/100
37/37 [==============================] - 42s 1s/step - loss: 0.3115 - accuracy: 0.9119 - val_loss: 0.7520 - val_accuracy: 0.7968
Epoch 48/100
37/37 [==============================] - 43s 1s/step - loss: 0.3072 - accuracy: 0.9132 - val_loss: 0.7617 - val_accuracy: 0.7970
Epoch 49/100
37/37 [==============================] - 42s 1s/step - loss: 0.3028 - accuracy: 0.9148 - val_loss: 0.7773 - val_accuracy: 0.7949
Epoch 50/100
37/37 [==============================] - 42s 1s/step - loss: 0.2994 - accuracy: 0.9153 - val_loss: 0.7886 - val_accuracy: 0.7931
Epoch 51/100
37/37 [==============================] - 43s 1s/step - loss: 0.2952 - accuracy: 0.9173 - val_loss: 0.7663 - val_accuracy: 0.7954
Epoch 52/100
37/37 [==============================] - 43s 1s/step - loss: 0.2905 - accuracy: 0.9187 - val_loss: 0.7997 - val_accuracy: 0.7915
Epoch 53/100
37/37 [==============================] - 42s 1s/step - loss: 0.2908 - accuracy: 0.9179 - val_loss: 0.7998 - val_accuracy: 0.7909
Epoch 54/100
37/37 [==============================] - 43s 1s/step - loss: 0.2815 - accuracy: 0.9210 - val_loss: 0.7918 - val_accuracy: 0.7921
Epoch 55/100
37/37 [==============================] - 42s 1s/step - loss: 0.2733 - accuracy: 0.9233 - val_loss: 0.7887 - val_accuracy: 0.7928
Epoch 56/100
37/37 [==============================] - 43s 1s/step - loss: 0.2735 - accuracy: 0.9235 - val_loss: 0.7992 - val_accuracy: 0.7915
Epoch 57/100
37/37 [==============================] - 43s 1s/step - loss: 0.2675 - accuracy: 0.9248 - val_loss: 0.8135 - val_accuracy: 0.7887
Epoch 58/100
37/37 [==============================] - 42s 1s/step - loss: 0.2676 - accuracy: 0.9246 - val_loss: 0.8103 - val_accuracy: 0.7906
Epoch 59/100
37/37 [==============================] - 43s 1s/step - loss: 0.2641 - accuracy: 0.9263 - val_loss: 0.8149 - val_accuracy: 0.7912
Epoch 60/100
37/37 [==============================] - 42s 1s/step - loss: 0.2589 - accuracy: 0.9272 - val_loss: 0.8321 - val_accuracy: 0.7889
Epoch 61/100
37/37 [==============================] - 42s 1s/step - loss: 0.2529 - accuracy: 0.9292 - val_loss: 0.8222 - val_accuracy: 0.7916
Epoch 62/100
37/37 [==============================] - 42s 1s/step - loss: 0.2494 - accuracy: 0.9307 - val_loss: 0.8395 - val_accuracy: 0.7879
Epoch 63/100
37/37 [==============================] - 42s 1s/step - loss: 0.2477 - accuracy: 0.9313 - val_loss: 0.8215 - val_accuracy: 0.7918
Epoch 64/100
37/37 [==============================] - 42s 1s/step - loss: 0.2426 - accuracy: 0.9322 - val_loss: 0.8484 - val_accuracy: 0.7882
Epoch 65/100
37/37 [==============================] - 42s 1s/step - loss: 0.2411 - accuracy: 0.9329 - val_loss: 0.8549 - val_accuracy: 0.7887
Epoch 66/100
37/37 [==============================] - 42s 1s/step - loss: 0.2335 - accuracy: 0.9356 - val_loss: 0.8636 - val_accuracy: 0.7893
Epoch 67/100
37/37 [==============================] - 42s 1s/step - loss: 0.2297 - accuracy: 0.9365 - val_loss: 0.8573 - val_accuracy: 0.7883
Epoch 68/100
37/37 [==============================] - 42s 1s/step - loss: 0.2281 - accuracy: 0.9368 - val_loss: 0.8810 - val_accuracy: 0.7870
Epoch 69/100
37/37 [==============================] - 42s 1s/step - loss: 0.2220 - accuracy: 0.9387 - val_loss: 0.8831 - val_accuracy: 0.7876
Epoch 70/100
37/37 [==============================] - 42s 1s/step - loss: 0.2205 - accuracy: 0.9391 - val_loss: 0.8928 - val_accuracy: 0.7859
Epoch 71/100
37/37 [==============================] - 42s 1s/step - loss: 0.2173 - accuracy: 0.9401 - val_loss: 0.8960 - val_accuracy: 0.7847
Epoch 72/100
37/37 [==============================] - 42s 1s/step - loss: 0.2148 - accuracy: 0.9407 - val_loss: 0.8990 - val_accuracy: 0.7867
Epoch 73/100
37/37 [==============================] - 42s 1s/step - loss: 0.2118 - accuracy: 0.9420 - val_loss: 0.9099 - val_accuracy: 0.7849
Epoch 74/100
37/37 [==============================] - 42s 1s/step - loss: 0.2055 - accuracy: 0.9434 - val_loss: 0.9197 - val_accuracy: 0.7859
Epoch 75/100
37/37 [==============================] - 42s 1s/step - loss: 0.2048 - accuracy: 0.9439 - val_loss: 0.9352 - val_accuracy: 0.7838
Epoch 76/100
37/37 [==============================] - 42s 1s/step - loss: 0.1998 - accuracy: 0.9452 - val_loss: 0.9143 - val_accuracy: 0.7848
Epoch 77/100
37/37 [==============================] - 42s 1s/step - loss: 0.1956 - accuracy: 0.9469 - val_loss: 0.9392 - val_accuracy: 0.7852
Epoch 78/100
37/37 [==============================] - 42s 1s/step - loss: 0.1921 - accuracy: 0.9478 - val_loss: 0.9250 - val_accuracy: 0.7853
Epoch 79/100
37/37 [==============================] - 43s 1s/step - loss: 0.1926 - accuracy: 0.9477 - val_loss: 0.9704 - val_accuracy: 0.7805
Epoch 80/100
37/37 [==============================] - 42s 1s/step - loss: 0.1891 - accuracy: 0.9482 - val_loss: 0.9675 - val_accuracy: 0.7825
Epoch 81/100
37/37 [==============================] - 42s 1s/step - loss: 0.1858 - accuracy: 0.9498 - val_loss: 0.9538 - val_accuracy: 0.7850
Epoch 82/100
37/37 [==============================] - 42s 1s/step - loss: 0.1845 - accuracy: 0.9504 - val_loss: 0.9627 - val_accuracy: 0.7835
Epoch 83/100
37/37 [==============================] - 42s 1s/step - loss: 0.1847 - accuracy: 0.9501 - val_loss: 0.9829 - val_accuracy: 0.7819
Epoch 84/100
37/37 [==============================] - 42s 1s/step - loss: 0.1789 - accuracy: 0.9515 - val_loss: 0.9872 - val_accuracy: 0.7824
Epoch 85/100
37/37 [==============================] - 42s 1s/step - loss: 0.1763 - accuracy: 0.9527 - val_loss: 0.9930 - val_accuracy: 0.7821
Epoch 86/100
37/37 [==============================] - 42s 1s/step - loss: 0.1714 - accuracy: 0.9540 - val_loss: 1.0018 - val_accuracy: 0.7820
Epoch 87/100
37/37 [==============================] - 43s 1s/step - loss: 0.1703 - accuracy: 0.9544 - val_loss: 1.0064 - val_accuracy: 0.7804
Epoch 88/100
37/37 [==============================] - 43s 1s/step - loss: 0.1687 - accuracy: 0.9546 - val_loss: 1.0126 - val_accuracy: 0.7811
Epoch 89/100
37/37 [==============================] - 43s 1s/step - loss: 0.1646 - accuracy: 0.9556 - val_loss: 1.0112 - val_accuracy: 0.7815
Epoch 90/100
37/37 [==============================] - 43s 1s/step - loss: 0.1635 - accuracy: 0.9564 - val_loss: 1.0311 - val_accuracy: 0.7808
Epoch 91/100
37/37 [==============================] - 43s 1s/step - loss: 0.1605 - accuracy: 0.9565 - val_loss: 1.0151 - val_accuracy: 0.7823
Epoch 92/100
37/37 [==============================] - 43s 1s/step - loss: 0.1606 - accuracy: 0.9570 - val_loss: 1.0306 - val_accuracy: 0.7800
Epoch 93/100
37/37 [==============================] - 43s 1s/step - loss: 0.1580 - accuracy: 0.9574 - val_loss: 1.0432 - val_accuracy: 0.7802
Epoch 94/100
37/37 [==============================] - 43s 1s/step - loss: 0.1558 - accuracy: 0.9574 - val_loss: 1.0608 - val_accuracy: 0.7789
Epoch 95/100
37/37 [==============================] - 43s 1s/step - loss: 0.1517 - accuracy: 0.9588 - val_loss: 1.0549 - val_accuracy: 0.7816
Epoch 96/100
37/37 [==============================] - 43s 1s/step - loss: 0.1502 - accuracy: 0.9595 - val_loss: 1.0714 - val_accuracy: 0.7794
Epoch 97/100
37/37 [==============================] - 43s 1s/step - loss: 0.1481 - accuracy: 0.9599 - val_loss: 1.0811 - val_accuracy: 0.7793
Epoch 98/100
37/37 [==============================] - 43s 1s/step - loss: 0.1460 - accuracy: 0.9612 - val_loss: 1.0863 - val_accuracy: 0.7788
Epoch 99/100
37/37 [==============================] - 42s 1s/step - loss: 0.1440 - accuracy: 0.9613 - val_loss: 1.0917 - val_accuracy: 0.7795
Epoch 100/100
37/37 [==============================] - 42s 1s/step - loss: 0.1413 - accuracy: 0.9614 - val_loss: 1.0925 - val_accuracy: 0.7812
###Markdown
Translation - Sampling: see what the sentences are translated to
###Code
# Define sampling models
# Restore the model and construct the encoder and decoder.
model = keras.models.load_model("s2s")
encoder_inputs = model.input[0] # input_1
encoder_outputs, state_h_enc, state_c_enc = model.layers[2].output # lstm_1
encoder_states = [state_h_enc, state_c_enc]
encoder_model = keras.Model(encoder_inputs, encoder_states)
decoder_inputs = model.input[1] # input_2
decoder_state_input_h = keras.Input(shape=(latent_dim,), name="input_3")
decoder_state_input_c = keras.Input(shape=(latent_dim,), name="input_4")
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_lstm = model.layers[3]
decoder_outputs, state_h_dec, state_c_dec = decoder_lstm(
decoder_inputs, initial_state=decoder_states_inputs
)
decoder_states = [state_h_dec, state_c_dec]
decoder_dense = model.layers[4]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = keras.Model(
[decoder_inputs] + decoder_states_inputs, [decoder_outputs] + decoder_states
)
# Reverse-lookup token index to decode sequences back to
# something readable.
reverse_input_char_index = dict((i, char) for char, i in input_token_index.items())
reverse_target_char_index = dict((i, char) for char, i in target_token_index.items())
def decode_sequence(input_seq):
# Encode the input as state vectors.
states_value = encoder_model.predict(input_seq)
# Generate empty target sequence of length 1.
target_seq = np.zeros((1, 1, num_decoder_tokens))
# Populate the first character of target sequence with the start character.
target_seq[0, 0, target_token_index["\t"]] = 1.0
# Sampling loop for a batch of sequences
# (to simplify, here we assume a batch of size 1).
stop_condition = False
decoded_sentence = ""
while not stop_condition:
output_tokens, h, c = decoder_model.predict([target_seq] + states_value)
# Sample a token
sampled_token_index = np.argmax(output_tokens[0, -1, :])
sampled_char = reverse_target_char_index[sampled_token_index]
decoded_sentence += sampled_char
# Exit condition: either hit max length
# or find stop character.
if sampled_char == "\n" or len(decoded_sentence) > max_decoder_seq_length:
stop_condition = True
# Update the target sequence (of length 1).
target_seq = np.zeros((1, 1, num_decoder_tokens))
target_seq[0, 0, sampled_token_index] = 1.0
# Update states
states_value = [h, c]
return decoded_sentence
for seq_index in range(20):
# Take one sequence (part of the training set)
# for trying out decoding.
input_seq = encoder_input_data[seq_index : seq_index + 1]
decoded_sentence = decode_sequence(input_seq)
print("-")
print("Input sentence:", input_texts[seq_index])
print("Decoded sentence:", decoded_sentence)
###Output
_____no_output_____ |
ch09_cnn/ch13-convolutional-NNs.ipynb | ###Markdown
Intro - Visual Cortex
* [LeNet-5 paper](http://goo.gl/A347S4) - introduced convolutional & pooling layers
###Code
%%html
<style>
table,td,tr,th {border:none!important}
</style>
# utilities
import matplotlib.pyplot as plt
def plot_image(image):
plt.imshow(image, cmap="gray", interpolation="nearest")
plt.axis("off")
def plot_color_image(image):
plt.imshow(image.astype(np.uint8),interpolation="nearest")
plt.axis("off")
###Output
_____no_output_____
###Markdown
Convolutional Layers
* [math detail](http://goo.gl/HAfxXd)
* neurons connected to receptor field in next layer. uses *zero padding* to force layers to have same height & width.
* also can connect large input layer to much smaller layer by spacing out receptor fields (distance between receptor fields = *stride*)

(figures omitted: layers, padding, and strides illustrations)

Filters
* neuron weights can look like small image (w/ size = receptor field)
* examples given: 1) vertical filter (single vertical bar, mid-image, all other cells zero) 2) horizontal filter (single horizontal bar, mid-image, all other cells zero)
* both return **feature maps** (highlights areas of image most similar to filter)
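Following up on padding and strides: a quick way to sanity-check output sizes (a small helper sketched here, not part of the original notes) is TensorFlow's output-length rule per spatial dimension:

```python
import math

def conv_output_length(n, k, s, padding):
    """Output length along one dimension (TF convention, dilation 1)."""
    if padding == "VALID":
        return math.floor((n - k) / s) + 1   # filter must fit fully inside the input
    else:  # "SAME"
        return math.ceil(n / s)              # zero padding keeps ceil(n / stride) positions

# e.g. a 13-wide input with a 6-wide filter and stride 5 (used further below):
print(conv_output_length(13, 6, 5, "VALID"))  # 2
print(conv_output_length(13, 6, 5, "SAME"))   # 3
```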
###Code
import numpy as np
fmap = np.zeros(shape=(7, 7, 1, 2), dtype=np.float32)
fmap[:, 3, 0, 0] = 1
fmap[3, :, 0, 1] = 1
print(fmap[:, :, 0, 0])
print(fmap[:, :, 0, 1])
plt.figure(figsize=(6,6))
plt.subplot(121)
plot_image(fmap[:, :, 0, 0])
plt.subplot(122)
plot_image(fmap[:, :, 0, 1])
plt.show()
from sklearn.datasets import load_sample_image
china = load_sample_image("china.jpg")
flower = load_sample_image("flower.jpg")
image = china[150:220, 130:250]
height, width, channels = image.shape
image_grayscale = image.mean(axis=2).astype(np.float32)
images = image_grayscale.reshape(1, height, width, 1)
import tensorflow as tf
tf.reset_default_graph()
# Define the model
X = tf.placeholder(
tf.float32,
shape=(None, height, width, 1))
feature_maps = tf.constant(fmap)
convolution = tf.nn.conv2d(
X,
feature_maps,
strides=[1,1,1,1],
padding="SAME",
use_cudnn_on_gpu=False)
# Run the model
with tf.Session() as sess:
output = convolution.eval(feed_dict={X: images})
plt.figure(figsize=(6,6))
#plt.subplot(121)
plot_image(images[0, :, :, 0])
#plt.subplot(122)
plot_image(output[0, :, :, 0])
#plt.subplot(123)
plot_image(output[0, :, :, 1])
plt.show()
%%html
<style>
img[alt=stacking] { width: 400px; }
</style>
###Output
_____no_output_____
###Markdown
Stacking Feature Maps
* images made of *sublayers* (one per color channel, typical red/green/blue, grayscale = one chan, others = many chans)
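For the cell below, the expected output shape can be worked out by hand (assuming the two scikit-learn sample images are 427x640 RGB, which is their usual size):

```python
import math

batch, h, w, c = 2, 427, 640, 3       # two sample images, 3 color channels
stride, n_filters = 2, 2              # strides=[1, 2, 2, 1] and 2 filters in the cell below
out_h, out_w = math.ceil(h / stride), math.ceil(w / stride)   # "SAME" padding
print((batch, out_h, out_w, n_filters))                       # (2, 214, 320, 2)
```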
###Code
import numpy as np
from sklearn.datasets import load_sample_images
# Load sample images
dataset = np.array(load_sample_images().images, dtype=np.float32)
batch_size, height, width, channels = dataset.shape
# Create 2 filters
filters = np.zeros(shape=(7, 7, channels, 2), dtype=np.float32)
filters[:, 3, :, 0] = 1 # vertical line
filters[3, :, :, 1] = 1 # horizontal line
# Create a graph with input X plus a convolutional layer applying the 2 filters
X = tf.placeholder(tf.float32,
shape=(None, height, width, channels))
convolution = tf.nn.conv2d(
X, filters, strides=[1,2,2,1], padding="SAME")
with tf.Session() as sess:
output = sess.run(convolution, feed_dict={X: dataset})
plt.imshow(output[0, :, :, 1])
plt.show()
%%html
<style>
img[alt=padding] { width: 400px; }
</style>
###Output
_____no_output_____
###Markdown
"Valid" v. "Same" Padding
###Code
import tensorflow as tf
import numpy as np
tf.reset_default_graph()
filter_primes = np.array(
[2., 3., 5., 7., 11., 13.],
dtype=np.float32)
x = tf.constant(
np.arange(1, 13+1, dtype=np.float32).reshape([1, 1, 13, 1]))
print ("x:\n",x)
filters = tf.constant(
filter_primes.reshape(1, 6, 1, 1))
# conv2d arguments:
# x = input minibatch = 4D tensor
# filters = 4D tensor
# strides = 1D array (1, vstride, hstride, 1)
# padding = VALID = no zero padding, may ignore edge rows/cols
# padding = SAME = zero padding used if needed
valid_conv = tf.nn.conv2d(x, filters, strides=[1, 1, 5, 1], padding='VALID')
same_conv = tf.nn.conv2d(x, filters, strides=[1, 1, 5, 1], padding='SAME')
with tf.Session() as sess:
print("VALID:\n", valid_conv.eval())
print("SAME:\n", same_conv.eval())
###Output
x:
Tensor("Const:0", shape=(1, 1, 13, 1), dtype=float32)
VALID:
[[[[ 184.]
[ 389.]]]]
SAME:
[[[[ 143.]
[ 348.]
[ 204.]]]]
###Markdown
Pooling Layers
* Goal: subsample (shrink) input image to reduce the computational load.
* Need to define pool size, stride & padding type.
* Result: aggregation function (max, mean)
* Below: max pool, 2x2, stride = 2, no padding.
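A toy illustration of what 2x2 max pooling with stride 2 does (not part of the original notes): each non-overlapping 2x2 block collapses to its maximum.

```python
import numpy as np

a = np.arange(16, dtype=np.float32).reshape(4, 4)
pooled = a.reshape(2, 2, 2, 2).max(axis=(1, 3))   # max over each 2x2 block
print(pooled)   # [[ 5.  7.]
                #  [13. 15.]]
```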
###Code
dataset = np.array([china, flower], dtype=np.float32)
batch_size, height, width, channels = dataset.shape
filters = np.zeros(shape=(7, 7, channels, 2), dtype=np.float32)
filters[:, 3, :, 0] = 1 # vertical line
filters[3, :, :, 1] = 1 # horizontal line
X = tf.placeholder(tf.float32,
shape=(None, height, width, channels))
# alternative: avg_pool()
max_pool = tf.nn.max_pool(
X,
ksize=[1, 2, 2, 1],
strides=[1,2,2,1],
padding="VALID")
with tf.Session() as sess:
output = sess.run(max_pool, feed_dict={X: dataset})
plt.figure(figsize=(12,12))
plt.subplot(121)
plot_color_image(dataset[0])
plt.subplot(122)
plot_color_image(output[0])
plt.show()
###Output
_____no_output_____ |
doc/Thermodynamics-LLE.ipynb | ###Markdown
.NET Initialization
###Code
import clr
clr.AddReference(r"..\bin\MiniSim.Core")
import MiniSim.Core.Expressions as expr
from MiniSim.Core.Flowsheeting import MaterialStream, Flowsheet
import MiniSim.Core.Numerics as num
from MiniSim.Core.UnitsOfMeasure import Unit, SI, METRIC, PhysicalDimension
from MiniSim.Core.ModelLibrary import Flash, Heater, Mixer, Splitter, EquilibriumStageSection
import MiniSim.Core.PropertyDatabase as chemsep
from MiniSim.Core.Reporting import Generator, StringBuilderLogger
from MiniSim.Core.Thermodynamics import ThermodynamicSystem, AllowedPhases
from ipywidgets import interact, interactive, fixed, interact_manual,FloatSlider
import ipywidgets as widgets
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('pdf', 'png')
plt.rcParams['savefig.dpi'] = 75
plt.rcParams['figure.autolayout'] = False
plt.rcParams['figure.figsize'] = 10, 6
plt.rcParams['axes.labelsize'] = 18
plt.rcParams['axes.titlesize'] = 20
plt.rcParams['font.size'] = 16
plt.rcParams['lines.linewidth'] = 2.0
plt.rcParams['lines.markersize'] = 8
plt.rcParams['legend.fontsize'] = 14
#plt.rcParams['text.usetex'] = True
#plt.rcParams['font.family'] = "serif"
#plt.rcParams['font.serif'] = "cm"
plt.rcParams['grid.color'] = 'k'
###Output
_____no_output_____
###Markdown
Set up Thermodynamics
###Code
Database = chemsep.ChemSepAdapter()
logger = StringBuilderLogger()
reporter = Generator(logger)
sys= ThermodynamicSystem("Test2","NRTL", "default")
sys.AddComponent(Database.FindComponent("Water").SetL2Split(0.05))
sys.AddComponent(Database.FindComponent("1-hexanol").SetL2Split(0.95))
hexanol= sys.Components[0]
water= sys.Components[1]
Database.FillBIPs(sys)
sys.EquilibriumMethod.AllowedPhases = AllowedPhases.VLLE
kmolh=Unit.Make([SI.kmol],[SI.h])
names=sys.GetComponentIds()
longnames=sys.GetComponentNames()
cas= sys.GetComponentCASNumbers()
mw= sys.GetComponentMolarWeights()
numComps=len(sys.Components)
df_data=pd.DataFrame(zip(longnames, cas, mw), index= names, columns= ["Name","CASNo","MolarWeight"])
df_data
###Output
_____no_output_____
###Markdown
Interactive Plots
###Code
solver= num.DecompositionSolver(logger)
mixture= MaterialStream("Mix", sys)
mixture.Specify("T",20, METRIC.C)
mixture.Specify("P",1000, METRIC.mbar)
for c in names:
mixture.Specify("n["+c+"]",1.0,kmolh)
mixture.InitializeFromMolarFlows()
mixture.FlashPT()
flowTx= Flowsheet("plots")
flowTx.AddMaterialStream(mixture);
solver.Solve(flowTx)
reporter.Report(flowTx)
print(logger.Flush())
numSteps=40
fig,ax=plt.subplots(1,1,figsize=(10,10))
xvec=[]
x2vec=[]
yvec=[]
i=water
j=hexanol
psys=3000
mixture.Specify("P",psys, METRIC.mbar)
for T in np.linspace(20,90,numSteps):
mixture.Specify("T",T, METRIC.C)
solver.Solve(flowTx)
xvec.append(mixture.GetVariable('xL['+i.ID+']').Val())
x2vec.append(mixture.GetVariable('xL2['+i.ID+']').Val())
yvec.append(mixture.GetVariable('T').Val()-273.15)
ax.plot(xvec, yvec, marker='o')
ax.plot(x2vec, yvec, marker='o')
ax.set_xlabel('$x_{'+i.ID+'}$')
ax.set_ylabel('T [°C]')
#ax.set_ylim([25,175])
ax.set_xlim([0,1])
ax.legend(["L1","L2"])
plt.tight_layout()
plt.title(f'(T,x1,x2)-Diagram of {i.ID}/ {j.ID} at {psys:.1f} mbar');
print(logger.Flush())
###Output
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
Decomposition Result: V=21, E=21, Blocks=12, Singletons=8
Block Statistics:
# Var # Blocks % Blocks
1 8 66,67 %
2 2 16,67 %
3 1 8,33 %
6 1 8,33 %
Problem NLAES was successfully solved (0,00 seconds)
|
ros/ch10/nes_linear.ipynb | ###Markdown
---
title: "Regression and Other Stories: National election study"
author: "Farhan Reynaldo"
date: "Nov 16th, 2021"
format:
  html:
    warning: false
    code-tools: true
    code-overflow: wrap
jupyter: python3
---
###Code
from pathlib import Path
import arviz as az
import bambi as bmb
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
DATA_PATH = Path("../../data")
###Output
_____no_output_____
###Markdown
Load packages
###Code
rng = np.random.default_rng(seed=12)
az.style.use("arviz-white")
###Output
_____no_output_____
###Markdown
Load data
###Code
nes = pd.read_csv(DATA_PATH / "nes.txt", delimiter=" ")
nes.head()
###Output
_____no_output_____
###Markdown
Partyid model to illustrate repeated model use (secret weapon)
###Code
def regress_year(year, data=nes):
this_year = data[data["year"] == year]
model = bmb.Model(
"partyid7 ~ real_ideo + race_adj + C(age_discrete) + educ1 + female + income",
data=this_year,
dropna=True,
)
idata = model.fit()
stats = az.summary(idata, kind="stats")
return (year, stats)
years = np.arange(1972, 2004, 4)
summary = [regress_year(year) for year in years]
###Output
Automatically removing 838/2168 rows from the dataset.
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [partyid7_sigma, Intercept, income, female, educ1, C(age_discrete), race_adj, real_ideo]
###Markdown
plot
###Code
coefs = [
"Intercept",
"real_ideo",
"race_adj",
"C(age_discrete)[2]",
"C(age_discrete)[3]",
"C(age_discrete)[4]",
"educ1",
"female",
"income",
]
coef_names = [
"Intercept",
"Ideology",
"Black",
"Age_30_44",
"Age_45_64",
"Age_65_up",
"Education",
"female",
"income",
]
fig, axes = plt.subplots(2, 5, figsize=(15, 9))
for ax, coef in zip(axes.flat, coefs):
coef_each_year = [summary[i][1].loc[coef, "mean"] for i in range(len(summary))]
coef_sd_each_year = [summary[i][1].loc[coef, "sd"] for i in range(len(summary))]
ax.scatter(years, coef_each_year, color="black")
ax.errorbar(
years, coef_each_year, yerr=coef_sd_each_year, fmt="none", color="black"
)
ax.axhline(0, linestyle="--", color="gray")
ax.set_title(coef_names[coefs.index(coef)])
axes.flat[-1].remove()
###Output
_____no_output_____ |
ArkoudaNotebooks/LANL_Demo.ipynb | ###Markdown
Exploratory Analysis of Netflow

Setup

Get Data
This notebook assumes that you have downloaded one or more netflow files from the [LANL dataset](https://csr.lanl.gov/data/2017.html) and converted them to HDF5 using something like `hdflow`. Example:
```bash
pip install hdflow
csv2hdf --format=lanl /path/to/lanl/netflow*
```

Chapel
[Download](https://chapel-lang.org/download.html) and [build](https://chapel-lang.org/docs/usingchapel/building.html) the Chapel [programming language](https://chapel-lang.org/). Be sure to build for a multi-locale system, if appropriate.

Arkouda
```bash
pip install arkouda
cd arkouda/install/dir
chpl --fast -senableParScan arkouda_server.chpl
./arkouda_server -nl
```
###Code
import arkouda as ak
from glob import glob
ak.connect()
ak.get_config()
ak.get_mem_used()
###Output
_____no_output_____
###Markdown
Load the Data
###Code
hdffiles = glob('/Volumes/Crucial X8/Data/lanl_netflow/hdf5/*.hdf')
fields = ['srcIP', 'dstIP', 'srcPort', 'dstPort', 'start']
%time data = {field: ak.read_hdf(field, hdffiles) for field in fields}
data
###Output
_____no_output_____
###Markdown
Are src and dst Meaningful?
Typically, src and dst are not meaningful labels, but the curators of this dataset may have used them to encode the identity of the client and server. If so, then the frequency of server ports should differ quite a bit between src and dst.
###Code
%time (data['srcPort'] == 80).sum(), (data['dstPort'] == 80).sum()
%time (data['srcPort'] == 443).sum(), (data['dstPort'] == 443).sum()
###Output
_____no_output_____
###Markdown
dst has lots of port 80 (HTTP) and 443 (HTTPS), while src has very little. Thus, unlike typical netflow, dst is probably the server side in this dataset, while src is the client side.
Confirm by looking at more of the port distributions: src port values and counts
###Code
%time sport, scount = ak.value_counts(data['srcPort'])
###Output
_____no_output_____
###Markdown
top 10 src port counts in numpy
###Code
from collections import Counter
sportCounts = Counter()
for i in range(sport.size):
sportCounts[sport[i]] = scount[i]
sportCounts.most_common(10)
len(sportCounts)
###Output
_____no_output_____
###Markdown
top 10 src port counts in arkouda
###Code
ix = ak.argmaxk(scount,10)
for i in ix.to_ndarray()[::-1]:
print((sport[i], scount[i]))
###Output
_____no_output_____
###Markdown
dest port values and counts
###Code
dport, dcount = ak.value_counts(data['dstPort'])
###Output
_____no_output_____
###Markdown
top 10 dest port counts in numpy
###Code
dportCounts = Counter()
for i in range(dport.size):
dportCounts[dport[i]] = dcount[i]
dportCounts.most_common(10)
len(dportCounts)
###Output
_____no_output_____
###Markdown
top 10 dest port counts in arkouda
###Code
ix = ak.argmaxk(dcount,10)
for i in ix.to_ndarray()[::-1]:
print((dport[i], dcount[i]))
###Output
_____no_output_____ |
Flower_Class_with_transfer_learning_2.ipynb | ###Markdown
Flower classification using transfer learning, saving the model so it can be reused on other devices.
###Code
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
import pathlib
from PIL import Image
import random
import time
!pip install -q -U tf-hub-nightly
!pip install -q tfds-nightly
#import the flower image dataset
data_dir = tf.keras.utils.get_file('flower_photos','https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True)
DIRETORIO = pathlib.Path(data_dir)
list_images = list(DIRETORIO.glob('*/*.jpg'))
print ('{} imagens no diretorio'.format(len(list_images)))
# get the class names from the directory
classes = list(DIRETORIO.glob('*'))
class_names = np.array([item.name for item in classes if item.name != 'LICENSE.txt'])
print (class_names)
BATCH_SIZE = 32
SIZE = 224
# create an image_data_generator for training and validation
image_data_generator = keras.preprocessing.image.ImageDataGenerator(
rotation_range = 20,
width_shift_range = 0.2,
height_shift_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True,
rescale = 1./255,
validation_split = 0.1
)
train_data_generator = image_data_generator.flow_from_directory(
directory = DIRETORIO,
target_size = (SIZE, SIZE),
classes = list(class_names),
batch_size = BATCH_SIZE,
shuffle = True,
subset = 'training'
)
validation_data_generator = image_data_generator.flow_from_directory(
directory = DIRETORIO,
target_size = (SIZE, SIZE),
classes = list(class_names),
batch_size = BATCH_SIZE,
subset = 'validation'
)
STEPS_TRAIN = train_data_generator.n // train_data_generator.batch_size
STEPS_VALIDATION = validation_data_generator.n // validation_data_generator.batch_size
print ('STEPS TRAIN: {} e STEPS_VALIDATION: {}'.format(STEPS_TRAIN, STEPS_VALIDATION))
# a simple test to check random photos from the train data generator and validation data generator
images, labels = next(validation_data_generator)
plt.figure(figsize=(10, 5))
for k in range(10):
plt.subplot(2, 5, k+1)
n = random.randint(0, len(images)-1)
plt.imshow(images[n])
plt.xticks([])
plt.yticks([])
plt.xlabel(class_names[np.argmax(labels[n])])
plt.show()
# import the neural network already trained on ImageNet
url_rede_treinada = 'https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/2'
rede_treinada = hub.KerasLayer(url_rede_treinada)
# specify that this imported layer will not be trained
rede_treinada.trainable = False
# build the model
model = keras.Sequential()
model.add(rede_treinada)
model.add(keras.layers.Dense(5, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
EPOCHS = 5
history = model.fit_generator(
generator = train_data_generator,
steps_per_epoch = STEPS_TRAIN,
epochs = EPOCHS,
verbose = 1,
validation_data = validation_data_generator,
validation_steps = STEPS_VALIDATION,
shuffle = True
)
train_accuracy = history.history['accuracy']
train_loss = history.history['loss']
validation_accuracy = history.history['val_accuracy']
validation_loss = history.history['val_loss']
plt.figure(figsize=(10, 5))
plt.subplot(2, 1, 1)
plt.plot(range(EPOCHS), train_accuracy, label='train')
plt.plot(range(EPOCHS), validation_accuracy, label='validation')
plt.legend()
plt.xlabel('Accuracy')
plt.show()
plt.subplot(2, 1, 2)
plt.plot(range(EPOCHS), train_loss, label='train')
plt.plot(range(EPOCHS), validation_loss, label='validation')
plt.legend()
plt.xlabel('Loss')
plt.show()
# SAVE THE TRAINED MODEL
t = time.time()
export_path = "/content/drive/My Drive/Colab Notebooks/Transfer_Learning_Test/modelo_TRANSFER_LEARNING"
model.save(export_path, save_format='tf')
export_path
modelo_salvo = keras.models.load_model(export_path)
modelo_salvo.summary()
# upload some images to Drive for a final test of the saved model
diretorio_teste = '/content/drive/My Drive/Colab Notebooks/Transfer_Learning_Test/figuras'
diretorio_teste = pathlib.Path(diretorio_teste)
diretorio_teste
# put the images into an array so their data can be processed later
images_test = list(diretorio_teste.glob('*'))
print ('{} imagens no diretorio para teste.'.format(len(images_test)))
# prepare the data as a matrix of shape [n, 224, 224, 3]
image_data = []
for imagem in images_test:
img = Image.open(imagem)
img = img.resize((SIZE, SIZE))
data_ = np.array(img.getdata()).reshape((SIZE, SIZE, 3))*(1./255)
image_data.append(data_)
image_data = np.array(image_data)
image_data.shape
predictions = modelo_salvo.predict(image_data)
label_predictions = []
for vetor in predictions:
label = class_names[np.argmax(vetor)]
percent = 100*np.max(vetor)
label_predictions.append('{:0.0f}% {}'.format(percent, label))
label_predictions = np.array(label_predictions)
print (label_predictions)
plt.figure(figsize=(10, 10))
for k in range(17):
plt.subplot(4, 5, k+1)
plt.imshow(image_data[k])
plt.xlabel(label_predictions[k])
plt.xticks([])
plt.yticks([])
plt.show()
predicao_1 = model.predict(image_data) # prediction made by the model that was trained initially
diferenca = predicao_1 - predictions # difference between the predictions of the trained model and the imported model
print (np.max(diferenca)) # as expected, there is no difference between the models.
###Output
_____no_output_____ |
notebooks/Fig 5C Results distributional approach.ipynb | ###Markdown
Figure 5C: results of tfidf approach
###Code
import os
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('styles.mplstyle')
from helpers import highlight_highest_scores
from helpers import score_heatmap
from helpers import cm2inch
###Output
_____no_output_____
###Markdown
Load all scores
###Code
scores = []
runs = []
for run in range(5):
scores_fn = f'../results/run-{run}/tfidf-run-{run}/tfidf-run-{run}-scores.csv'
if os.path.exists(scores_fn):
scores.append(pd.read_csv(scores_fn, index_col=0))
runs.append(run)
else:
print(f'Results for run {run} could not be found')
df = (pd.concat(scores, keys=runs, names=['run', 'index'])
.reset_index()
.drop('index', axis=1))
means = df.groupby(['subset', 'genre', 'representation', 'segmentation']).mean() * 100
stds = df.groupby(['subset', 'genre', 'representation', 'segmentation']).std() * 100
###Output
_____no_output_____
###Markdown
Plot heatmap
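The `reshape_df` helper below pivots the long score table into a segmentation-by-representation grid with `pd.pivot_table`. A toy illustration of that reshaping (made-up numbers, not real results):

```python
import pandas as pd

toy = pd.DataFrame({
    "segmentation":   ["neumes", "neumes", "syllables", "syllables"],
    "representation": ["pitch", "contour-dependent", "pitch", "contour-dependent"],
    "test_accuracy":  [0.70, 0.65, 0.80, 0.72],
})
print(pd.pivot_table(toy, index="segmentation", columns="representation", values="test_accuracy"))
```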
###Code
SEGMENTATIONS = [
'neumes',
'syllables',
'words',
'1-mer',
'2-mer',
'3-mer',
'4-mer',
'5-mer',
'6-mer',
# '7-mer',
'8-mer',
# '9-mer',
'10-mer',
# '11-mer',
'12-mer',
# '13-mer',
'14-mer',
# '15-mer',
'16-mer',
'poisson-3',
'poisson-5',
'poisson-7'
]
REPRESENTATIONS = [
'pitch',
'interval-dependent',
'interval-independent',
'contour-dependent',
'contour-independent'
]
def reshape_df(df, score, genre, subset, split,
index=SEGMENTATIONS, columns=REPRESENTATIONS):
"""Reshape a dataframe with all scores to the slice of interest:
with representations as columns and segmentations as index, and
containing only one type of score as values"""
return (
pd.pivot_table(
df.loc[subset, genre],
columns='representation',
index='segmentation',
values=f'{split}_{score}'
).loc[index, columns]
)
# reshape_df(means, 'weighted_recall', 'responsory', 'full', 'test')
def show_scores(means, stds, score, genre, subset, split,
line_length=.55, line_y=.3, lh=.4, tp=.15,
labels=True, title=True,
fmt='${mu:.1f}^{{\pm {sigma:.1f}}}$'):
"""Generate a heatmap with scores"""
subset_means = reshape_df(means, score, genre, subset, split)
subset_stds = reshape_df(stds, score, genre, subset, split)
# Heatmap
xticklabels_dict = {
'pitch': 'pitch',
'interval-dependent': 'dep.\ninterval',
'interval-independent': 'indep.\ninterval',
'contour-dependent': 'dep.\ncontour',
'contour-independent': 'indep.\ncontour',
}
xticklabels = [xticklabels_dict[rep] for rep in REPRESENTATIONS]
score_heatmap(subset_means, subset_stds,
vmin=0, vmax=100, cbar=False,
cmap='viridis', fmt=fmt,
xticklabels=xticklabels)
# Plot gridlines
pad = 0
lw = 1
plt.plot([1, 1], [0, 100], 'w-', lw=lw)
plt.plot([3, 3], [0, 100], 'w-', lw=lw)
plt.plot([0, 100], [3, 3], 'w-', lw=lw)
plt.plot([0, 100], [14, 14], 'w-', lw=lw)
# Highlight best scores
highlight_highest_scores(subset_means, axis=0, tol=1,
line_y=line_y, line_length=line_length)
# Axes
if labels:
plt.ylabel('segmentation')
plt.xlabel('representation')
else:
plt.ylabel(None)
plt.xlabel(None)
plt.yticks(rotation=0)
# Title
if title:
title = f'{genre.title()} {split.title()} {score.replace("weighted_", "").title()}'
plt.text(0, -tp-2*lh, title, va='bottom', fontweight='bold')
plt.text(0, -tp-lh, f'Distributional approach on {subset} data', va='bottom')
plt.text(0, -tp, (
'Showing $\mathrm{mean}^{\pm\mathrm{std.dev}}$ '
f'over {len(runs)} runs'
), va='bottom', size=5.5, alpha=.5)
plt.tight_layout()
plt.figure(figsize=cm2inch(10, 16), dpi=150)
show_scores(means, stds, 'accuracy', 'responsory', 'full', 'test')
###Output
_____no_output_____
###Markdown
Generate all plots
###Code
for genre in ['responsory', 'antiphon']:
for subset in ['full', 'subset']:
for split in ['train', 'test']:
for score in ['accuracy', 'weighted_f1', 'weighted_precision', 'weighted_recall']:
plt.ioff() # Turn off interactive plotting
plt.figure(figsize=cm2inch(10, 16))
show_scores(means, stds, score, genre, subset, split)
fig_fn = f'../figures/fig05c-tfidf/fig05c-{genre}-{subset}-{split}-{score}.pdf'
plt.savefig(fig_fn)
plt.close()
###Output
_____no_output_____
###Markdown
Generate plots for final figure
###Code
cols = 5
rows = 17
for genre in ['antiphon', 'responsory']:
plt.ioff() # Turn off interactive plotting
plt.figure(figsize=cm2inch(2*(cols+1), rows+1), dpi=150)
show_scores(means, stds, 'weighted_f1', genre, 'full', 'test',
fmt='${mu:.0f}$', line_length=.2, line_y=.25,
labels=False, title=False)
plt.savefig(f'../figures/fig05/fig05-tfidf-{genre}-test-f1.pdf')
plt.close()
###Output
_____no_output_____ |
RecommendSystem/.ipynb_checkpoints/6.spark_recommendation-checkpoint.ipynb | ###Markdown
Spark recommendation system, by [@han_xiaoyang](http://blog.csdn.net/han_xiaoyang). Spark ships with built-in algorithms for recommendation; let's look at how this algorithm completes the whole pipeline.
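A quick look at what `parseRating` (defined below) does to one line in the `userId::movieId::rating::timestamp` format. The sample line is only illustrative, and plain `int` is used here instead of the Python 2 `long` from the original code:

```python
line = "1::1193::5::978300760"
fields = line.strip().split("::")
key = int(fields[3]) % 10                                   # last digit of the timestamp -> 0
value = (int(fields[0]), int(fields[1]), float(fields[2]))  # (1, 1193, 5.0) = (userId, movieId, rating)
```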
###Code
#!/usr/bin/env python
# A recommender system based on Spark's ALS, making recommendations from the MovieLens movie-rating data
# Edit: han_xiaoyang ([email protected])
import sys
import itertools
from math import sqrt
from operator import add
from os.path import join, isfile, dirname
from pyspark import SparkConf, SparkContext
from pyspark.mllib.recommendation import ALS
def parseRating(line):
"""
    The MovieLens rating format is userId::movieId::rating::timestamp
    Parse a line in this format
"""
fields = line.strip().split("::")
return long(fields[3]) % 10, (int(fields[0]), int(fields[1]), float(fields[2]))
def parseMovie(line):
"""
ๅฏนๅบ็็ตๅฝฑๆไปถ็ๆ ผๅผไธบmovieId::movieTitle
่งฃๆๆint id, ๆๆฌ
"""
fields = line.strip().split("::")
return int(fields[0]), fields[1]
def loadRatings(ratingsFile):
"""
่ฝฝๅ
ฅๅพๅ
"""
if not isfile(ratingsFile):
print "File %s does not exist." % ratingsFile
sys.exit(1)
f = open(ratingsFile, 'r')
ratings = filter(lambda r: r[2] > 0, [parseRating(line)[1] for line in f])
f.close()
if not ratings:
print "No ratings provided."
sys.exit(1)
else:
return ratings
def computeRmse(model, data, n):
"""
่ฏไผฐ็ๆถๅ่ฆ็จ็๏ผ่ฎก็ฎๅๆนๆ น่ฏฏๅทฎ
"""
predictions = model.predictAll(data.map(lambda x: (x[0], x[1])))
predictionsAndRatings = predictions.map(lambda x: ((x[0], x[1]), x[2])) \
.join(data.map(lambda x: ((x[0], x[1]), x[2]))) \
.values()
return sqrt(predictionsAndRatings.map(lambda x: (x[0] - x[1]) ** 2).reduce(add) / float(n))
if __name__ == "__main__":
if (len(sys.argv) != 3):
print "Usage: /path/to/spark/bin/spark-submit --driver-memory 2g " + \
"MovieLensALS.py movieLensDataDir personalRatingsFile"
sys.exit(1)
    # Set up the environment
conf = SparkConf() \
.setAppName("MovieLensALS") \
.set("spark.executor.memory", "2g")
sc = SparkContext(conf=conf)
    # Load the rating data
myRatings = loadRatings(sys.argv[2])
myRatingsRDD = sc.parallelize(myRatings, 1)
movieLensHomeDir = sys.argv[1]
    # The resulting ratings is an RDD in the format (last digit of the timestamp, (userId, movieId, rating))
ratings = sc.textFile(join(movieLensHomeDir, "ratings.dat")).map(parseRating)
    # movies is built from an RDD of (movieId, movieTitle) pairs
movies = dict(sc.textFile(join(movieLensHomeDir, "movies.dat")).map(parseMovie).collect())
numRatings = ratings.count()
numUsers = ratings.values().map(lambda r: r[0]).distinct().count()
numMovies = ratings.values().map(lambda r: r[1]).distinct().count()
print "Got %d ratings from %d users on %d movies." % (numRatings, numUsers, numMovies)
    # Split the whole dataset by the last digit of the timestamp into a training set (60%), a cross-validation set (20%) and an evaluation set (20%)
    # The training, validation and test sets are all RDDs in (userId, movieId, rating) format
numPartitions = 4
training = ratings.filter(lambda x: x[0] < 6) \
.values() \
.union(myRatingsRDD) \
.repartition(numPartitions) \
.cache()
validation = ratings.filter(lambda x: x[0] >= 6 and x[0] < 8) \
.values() \
.repartition(numPartitions) \
.cache()
test = ratings.filter(lambda x: x[0] >= 8).values().cache()
numTraining = training.count()
numValidation = validation.count()
numTest = test.count()
print "Training: %d, validation: %d, test: %d" % (numTraining, numValidation, numTest)
    # Train the models and check their performance on the cross-validation set
ranks = [8, 12]
lambdas = [0.1, 10.0]
numIters = [10, 20]
bestModel = None
bestValidationRmse = float("inf")
bestRank = 0
bestLambda = -1.0
bestNumIter = -1
for rank, lmbda, numIter in itertools.product(ranks, lambdas, numIters):
model = ALS.train(training, rank, numIter, lmbda)
validationRmse = computeRmse(model, validation, numValidation)
print "RMSE (validation) = %f for the model trained with " % validationRmse + \
"rank = %d, lambda = %.1f, and numIter = %d." % (rank, lmbda, numIter)
if (validationRmse < bestValidationRmse):
bestModel = model
bestValidationRmse = validationRmse
bestRank = rank
bestLambda = lmbda
bestNumIter = numIter
testRmse = computeRmse(bestModel, test, numTest)
    # Evaluate, on the test set, the model that did best on the cross-validation set
print "The best model was trained with rank = %d and lambda = %.1f, " % (bestRank, bestLambda) \
+ "and numIter = %d, and its RMSE on the test set is %f." % (bestNumIter, testRmse)
    # Our baseline model is set to always return the mean rating
meanRating = training.union(validation).map(lambda x: x[2]).mean()
baselineRmse = sqrt(test.map(lambda x: (meanRating - x[2]) ** 2).reduce(add) / numTest)
improvement = (baselineRmse - testRmse) / baselineRmse * 100
print "The best model improves the baseline by %.2f" % (improvement) + "%."
    # Personalized recommendations (for a particular user)
myRatedMovieIds = set([x[1] for x in myRatings])
candidates = sc.parallelize([m for m in movies if m not in myRatedMovieIds])
predictions = bestModel.predictAll(candidates.map(lambda x: (0, x))).collect()
recommendations = sorted(predictions, key=lambda x: x[2], reverse=True)[:50]
print "Movies recommended for you:"
for i in xrange(len(recommendations)):
print ("%2d: %s" % (i + 1, movies[recommendations[i][1]])).encode('ascii', 'ignore')
# clean up
sc.stop()
###Output
_____no_output_____ |
Project_2_VQE_Molecules/LiH_Excited_State.ipynb | ###Markdown
Excited state calculation LiH

In order to obtain the excited state of the LiH molecule, we will use the VQE circuit to calculate the ground-state wavefunction. We can then define an effective Hamiltonian whose lowest eigenstate is the first excited state and whose lowest eigenvalue is the energy of this state; in other words, we shift the ground state energy out of the way so that the first excited state becomes the minimum. We will vary the parameters, and thus $E_0$, to hit the minimum of the excited state:

$H_{exc} = H - E_0 |\Psi_0\rangle\langle\Psi_0|$
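The idea can be checked on a toy matrix (a NumPy sketch, not part of the tequila workflow below): subtracting $E_0 |\Psi_0\rangle\langle\Psi_0|$ moves the ground state up to zero, so for a Hamiltonian with negative eigenvalues the lowest eigenvalue of $H_{exc}$ is the first excited energy.

```python
import numpy as np

H = np.array([[-2.0, 0.5],
              [ 0.5, -1.0]])              # toy 2x2 "Hamiltonian", both eigenvalues negative
E, V = np.linalg.eigh(H)                  # E[0] = ground energy, E[1] = first excited energy
v0 = V[:, 0]
H_exc = H - E[0] * np.outer(v0, v0)       # deflate the ground state
print(np.linalg.eigvalsh(H_exc)[0], E[1]) # both are approximately -0.79
```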
###Code
import matplotlib.pyplot as plt
import tequila as tq
###Output
_____no_output_____
###Markdown
As shown in LiH_molecule.ipynb, we can reduce the number of Hamiltonian terms from 631 to 135 by specifying an active space. This speeds up the calculation without any significant impact on the results.
###Code
ground_state_energy = []
excited_state_energy = []
basis = "sto-3g"
#define active space
active_orbitals = {"A1":[1,2], "B1":[0]}
P0 = tq.paulis.Projector("|00>")
# run VQE
def vqe(H):
results = []
for i in range(2):
U = tq.gates.Ry((i, "a"), 0)
U += tq.gates.CNOT(0, 1) + tq.gates.CNOT(0, 2)
U += tq.gates.CNOT(1, 3) + tq.gates.X([2, 3])
E = tq.ExpectationValue(U, H)
active_vars = E.extract_variables()
angles = {angle: 0.0 for angle in active_vars}
for data, U2 in results:
S2 = tq.ExpectationValue(H=P0, U=U2.dagger() + U)
E -= data.energy * S2
angles = {**angles, **data.angles}
result = tq.optimizer_scipy.minimize(E, method="bfgs", variables=active_vars, initial_values=angles)
results.append((result, U))
return results
for r in [0.5 + 0.1*i for i in range(25)]:
#define molecule
lih = tq.chemistry.Molecule(geometry = "H 0.0 0.0 0.0\nLi 0.0 0.0 {r}".format(r=r), basis_set=basis, active_orbitals=active_orbitals)
H = lih.make_hamiltonian()
value = vqe(H)
ground_state_energy.append(value[0][0].energy)
excited_state_energy.append(value[1][0].energy)
r = [0.5 + 0.1*i for i in range(len(ground_state_energy))]
plt.figure()
plt.xlabel('R')
plt.ylabel('E')
plt.title("LiH Ground State, LiH Excited State")
plt.plot(r, ground_state_energy, marker="o", label="VQE GroundState")
plt.plot(r, excited_state_energy, marker="o", label="VQE ExcitedState")
plt.legend()
plt.show()
###Output
_____no_output_____ |
notebooks/Data - t11c.ipynb | ###Markdown
Treatment T11C
###Code
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import display
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
import seaborn as sb
import imblearn
TREATMENT = "t11c"
export_folder = f"../data/output/diagrams/{TREATMENT}"
os.makedirs(export_folder, exist_ok=True)
REF = "t10a"
df_ref_resp = pd.read_csv(f"../data/{TREATMENT}/export/result__{TREATMENT}_prop.csv")[["min_offer"]]
# Read and sanitize the data
df = pd.read_csv(f"../data/{TREATMENT}/export/result__{TREATMENT}_prop.csv")
df_full = df.copy()
# drop_cols = ["worker_id", "resp_worker_id", "prop_worker_id", "updated", "status", "job_id", "status", "timestamp", "rowid", "offer_dss", "offer", "offer_final", "completion_code"]
drop_cols = ["worker_id", "resp_worker_id", "prop_worker_id", "updated", "status", "job_id", "status", "timestamp", "rowid", "offer_dss", "offer", "offer_final", "completion_code", "prop_time_spent"]
df = df[[col for col in df.columns if col not in drop_cols]]
cols = [col for col in df.columns if col != "min_offer"] + ["min_offer"]
df[df_ref_resp.columns] = df_ref_resp
df_full[["offer", "offer_final", "min_offer"]].describe()
###Output
_____no_output_____
###Markdown
**Correlation to the target value**
###Code
import seaborn as sns
from core.utils.preprocessing import df_to_xydf, df_to_xy
# Correlation Matrix Heatmap
CORRELATION_THRESHOLD = 0.20
corr_min_offer = df.corr()["min_offer"]
corr_columns = corr_min_offer[abs(corr_min_offer)>=CORRELATION_THRESHOLD].keys()
select_columns = [col for col in corr_columns if col != "min_offer"]
df_x, df_y = df_to_xydf(df=df, select_columns=select_columns)
df_corr = df_x.copy()
df_corr['min_offer'] = df_y['min_offer']
corr = df_corr.corr()
###Output
_____no_output_____
###Markdown
**Responder's min_offer / Proposer's offer and final_offer distribution**
###Code
bins = list(range(0, 105, 5))
f, axes = plt.subplots(1, 2, figsize=(8,4))
ax = sns.distplot(df["min_offer"], hist=True, kde=False, label="Responder", bins=bins, ax=axes[0])
_ = ax.set_ylabel("Frequency")
ax.legend(loc='best')
ax = sns.distplot(df_full["offer"], hist=True, kde=False, bins=bins, label="Proposer", ax=axes[1])
_ = ax.set_ylabel("Frequency")
ax = sns.distplot(df_full["offer_final"], hist=True, kde=False, bins=bins, label="Proposer + DSS", ax=axes[1])
_ = ax.set_ylabel("Frequency")
ax.legend(loc='center right')
plt.tight_layout()
ax.figure.savefig(os.path.join(export_folder, "min_offer_offer.pdf"))
bins = list(range(-100, 105, 5))
plt.figure(figsize=(8,4))
offer_min_offer_diff = df_full["offer_final"] - df_full["min_offer"]
ax = sns.distplot(offer_min_offer_diff, hist=True, kde=False, axlabel="Final offer - minimum offer", bins=bins)
_ = ax.set_ylabel("Frequency")
plt.tight_layout()
ax.figure.savefig(os.path.join(export_folder, "offer_final-min_offer.pdf"))
bins = list(range(-100, 105, 5))
plt.figure(figsize=(8,4))
offer_min_offer_diff = df_full["offer"] - df_full["min_offer"]
ax = sns.distplot(offer_min_offer_diff, hist=True, kde=False, axlabel="offer - minimum offer", bins=bins, label="Proposer")
_ = ax.set_ylabel("Frequency")
offer_min_offer_diff = df_full["offer_final"] - df_full["min_offer"]
ax = sns.distplot(offer_min_offer_diff, hist=True, kde=False, axlabel="offer - minimum offer", bins=bins, label="Proposer + DSS", ax=ax)
plt.legend()
plt.tight_layout()
ax.figure.savefig(os.path.join(export_folder, "offer-min_offer.pdf"))
from core.models.metrics import cross_compute, avg_gain_ratio, gain_mean, rejection_ratio, loss_sum, MAX_GAIN
def get_infos(min_offer, offer, metrics=None, do_cross_compute=False):
if metrics is None:
metrics = [avg_gain_ratio, gain_mean, rejection_ratio, loss_sum]
#df = pd.DataFrame()
infos = dict()
for idx, metric in enumerate(metrics):
if do_cross_compute:
infos[metric.__name__] = cross_compute(min_offer, offer, metric)
else:
infos[metric.__name__] = metric(min_offer, offer)
return infos
###Output
_____no_output_____
###Markdown
**Proposer's performance**
###Code
df_infos = pd.DataFrame()
#Human (fixed-matching) performance t00
df_infos = df_infos.append(get_infos(df_full['min_offer'], df_full['offer']), ignore_index=True)
#Human (cross-matched) average performance t00
df_infos = df_infos.append(get_infos(df_full['min_offer'], df_full['offer'], do_cross_compute=True), ignore_index=True)
#Human + DSS (fixed-matching) performance t00
df_infos = df_infos.append(get_infos(df_full['min_offer'], df_full['offer_final']), ignore_index=True)
#Human + DSS(cross-matched) average performance t00
df_infos = df_infos.append(get_infos(df_full['min_offer'], df_full['offer_final'], do_cross_compute=True), ignore_index=True)
#Top-model (fixed 50% prediction) average performance t00
fixed_offer = MAX_GAIN // 2
df_infos = df_infos.append(get_infos(df_full['min_offer'], [fixed_offer], do_cross_compute=True), ignore_index=True)
df_infos.index = ["Proposer", "Proposer (cross matched)", "Proposer + DSS", "Proposer + DSS (cross matched)", "AI-System"]
df_infos = df_infos.loc[["Proposer", "Proposer + DSS", "AI-System"]]
df_infos
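# The helpers below: woa() is the mean ratio |offer_final - offer| / |ai_offer - offer| in percent
# (a weight-of-advice-style measure of how far proposers moved toward the AI suggestion), and
# mard() is the mean absolute relative deviation of the final offer from the initial offer, in percent.
# Both drop undefined (NaN / inf) ratios before averaging.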
def woa(offer_final, offer, ai_offer):
res = (abs(offer_final - offer) ) / (abs(ai_offer - offer ))
res = res[np.invert(np.isnan(res) | np.isinf(res))]
# res = np.clip(res, 0, 1)
return 100 * abs(res).mean()
def mard(offer_final, offer):
res = abs(offer_final - offer) / offer
res = res[np.invert(np.isnan(res) | np.isinf(res))]
return 100 * abs(res).mean()
def get_rel_gain(df_infos):
acc = df_infos['avg_gain_ratio']['Proposer']
acc_dss = df_infos['avg_gain_ratio']['Proposer + DSS']
return 100 * abs(acc - acc_dss) / acc
def get_dss_usage(df_full):
return 100 * (df_full.ai_nb_calls > 0).mean()
print("rel_gain: ", round(get_rel_gain(df_infos), 2), "%")
print("dss_usage: ", round(get_dss_usage(df_full), 2), "%")
print("woa: ", round(woa(df_full['offer_final'], df_full['offer'], df_full['ai_offer']), 2), "%")
print("mard: ", round(mard(df_full['offer_final'], df_full['offer']),2), "%")
###Output
rel_gain: 7.61 %
dss_usage: 91.18 %
woa: 66.42 %
mard: 27.75 %
|
summraize_news_articles.ipynb | ###Markdown
Load libraries
###Code
from newspaper import Article
from newspaper import Config
import nltk
nltk.download('punkt')
from transformers import pipeline
import gradio as gr
from gradio.mix import Parallel, Series
USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Firefox/78.0'
config = Config()
config.browser_user_agent = USER_AGENT
config.request_timeout = 10
url = 'https://www.technologyreview.com/2021/07/09/1028140/ai-voice-actors-sound-human/'
article = Article(url, config=config)
###Output
_____no_output_____
###Markdown
Download the article
###Code
article.download()
article.html
###Output
_____no_output_____
###Markdown
Parse information from article
###Code
article.parse()
authors = ", ".join(author for author in article.authors)
title = article.title
date = article.publish_date
text = article.text
image = article.top_image
videos = article.movies
url = article.url
print("Information about the article")
print("=" * 30)
print(f"Title: {title}")
print(f"Author(s): {authors}")
print(f"Publish date: {date}")
print(f"Image: {image}")
print(f"Videos: {videos}")
print(f"Article link: {url}")
print(f"Content: {text[:100] + '...'}")
###Output
Information about the article
==============================
Title: AI voice actors sound more human than ever - and they're ready to hire
Author(s): Karen Hao
Publish date: 2021-07-09 00:00:00
Image: https://wp.technologyreview.com/wp-content/uploads/2021/07/AIAudioActor-2.jpg?resize=1200,600
Videos: []
Article link: https://www.technologyreview.com/2021/07/09/1028140/ai-voice-actors-sound-human/
Content: The company blog post drips with the enthusiasm of a '90s US infomercial. WellSaid Labs describes wh...
###Markdown
NLP from article
###Code
article.nlp()
keywords = article.keywords
keywords.sort()
print(keywords)
keywords = "\n".join(keyw for keyw in keywords)
print(f"Article Keywords: \n{keywords}")
###Output
Article Keywords:
actors
ai
audio
certainly
clients
companies
different
everand
hire
human
ready
sound
theyre
voice
voices
###Markdown
Newspaper library summary
###Code
print(f"Summary: \n{article.summary}")
text
###Output
_____no_output_____
###Markdown
Summarize with Hugging Face and Gradio
###Code
io1 = gr.Interface.load('huggingface/sshleifer/distilbart-cnn-12-6')
io2 = gr.Interface.load("huggingface/facebook/bart-large-cnn")
io3 = gr.Interface.load("huggingface/google/pegasus-xsum")
io4 = gr.Interface.load("huggingface/sshleifer/distilbart-cnn-6-6")
iface = Parallel(io1, io2, io3, io4,
theme='huggingface',
inputs = gr.inputs.Textbox(lines = 10, label="Text"))
iface.launch()
def extract_article_text(url):
USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Firefox/78.0'
config = Config()
config.browser_user_agent = USER_AGENT
config.request_timeout = 10
article = Article(url, config=config)
article.download()
article.parse()
text = article.text
return text
extractor = gr.Interface(extract_article_text, 'text', 'text')
summarizer = gr.Interface.load("huggingface/facebook/bart-large-cnn")
sample_url = [['https://www.technologyreview.com/2021/07/22/1029973/deepmind-alphafold-protein-folding-biology-disease-drugs-proteome/'],
['https://www.technologyreview.com/2021/07/21/1029860/disability-rights-employment-discrimination-ai-hiring/'],
['https://www.technologyreview.com/2021/07/09/1028140/ai-voice-actors-sound-human/']]
desc = '''
Let Hugging Face models summarize articles for you.
Note: Shorter articles generate faster summaries.
This summarizer uses bart-large-cnn model by Facebook
'''
iface = Series(extractor, summarizer,
inputs = gr.inputs.Textbox(
lines = 2,
label = 'URL'
),
outputs = 'text',
title = 'News Summarizer',
theme = 'huggingface',
description = desc,
examples=sample_url)
iface.launch()
###Output
Colab notebook detected. To show errors in colab notebook, set `debug=True` in `launch()`
This share link will expire in 24 hours. If you need a permanent link, visit: https://gradio.app/introducing-hosted (NEW!)
Running on External URL: https://47893.gradio.app
Interface loading below...
|
prinPy quickstart.ipynb | ###Markdown
prinPy Quickstart Guide prinPy has global and local algorithms. 1. Local Algorithms
###Code
from prinpy.local import CLPCG
# Some other modules
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import timeit
###Output
_____no_output_____
###Markdown
Generate Test Data
###Code
theta = np.linspace(0,np.pi*3, 1000)
r = np.linspace(0,1,1000) ** .5
x_data = r * np.cos(theta) + np.random.normal(scale = .02, size = 1000)
y_data = r * np.sin(theta) + np.random.normal(scale = .02, size = 1000)
###Output
_____no_output_____
###Markdown
Plot
###Code
plt.scatter(x_data, y_data, s = 1)
plt.show()
###Output
_____no_output_____
###Markdown
Fit Principal Curve with Local Algorithms
###Code
cl = CLPCG() # Create CLPCG object
# the fit() method calculates the principal curve
# e_max is determined through trial and error as of
# now, but aim for about 1/2 data error and adjust from
# there.
start = timeit.default_timer()
cl.fit(x_data, y_data, e_max = .03) # CLPCG.fit() to fit PC
stop = timeit.default_timer()
print("Took %f seconds" % (stop - start))
fig, ax = plt.subplots()
ax.scatter(x_data, y_data, s = 3, alpha = .7)
cl.plot(ax) # .plot will display the fit curve.
# you can optionally pass in a matplotlib ax
pts = cl.fit_points # fitted points with PC that spline is passed through
ax.scatter(pts[:,0], pts[:,1], s = 40, c = 'green')
# .project will return a projection index for each point
proj = cl.project(x_data, y_data)
print(proj[:5])
# additionally, you can get spline ticks or fit points:
tck = cl.spline_ticks
fit_pts = cl.fit_points
print(tck[0])
print(fit_pts[:5])
###Output
[0. 0. 0. 0. 0.02700805 0.04754636
0.09041594 0.12945787 0.18449046 0.22115265 0.2495283 0.30794099
0.36219516 0.4034585 0.45169821 0.50868772 0.57010873 0.63433081
0.69801242 0.75620144 0.80715538 0.89094181 1. 1.
1. 1. ]
[[ 0.01447173 0.01714681]
[ 0.06537318 -0.00911538]
[ 0.17175084 0.04210127]
[ 0.21407512 0.16854487]
[ 0.05238058 0.39507542]]
###Markdown
2. Global
###Code
# NLPCA is the global alg
from prinpy.glob import NLPCA
# Generate some test data
t = np.linspace(0, 1, 1000) + np.random.normal(scale = .1, size = 1000)
x = 5*np.cos(t) + np.random.normal(scale = .1, size = 1000)
y = np.sin(t) + np.random.normal(scale = .1, size = 1000)
plt.scatter(x, y, s = 1)
# create solver
pca = NLPCA()
# transform data for better training with the
# neural net using built in preprocessor
data_new = pca.preprocess( [x,y] )
# fit the data
pca.fit(data_new, epochs = 150, nodes = 15, lr = .01, verbose = 0)
# project the current data. This returns a projection
# index for each point and points to plot the curve
proj, curve_pts = pca.project(data_new)
plt.scatter(data_new[:,0],
data_new[:,1],
s = 5,
c = proj.reshape(-1),
cmap = 'viridis')
plt.plot(curve_pts[:,0],
curve_pts[:,1],
color = 'black',
linewidth = '2.5')
###Output
_____no_output_____ |
Pipeline/ML_Task_Classification_Pipeline.ipynb | ###Markdown
Evaluation:
###Code
performance.columns
# label encoding the data
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
for col in range(20):
    bottom_list_df[col] = le.fit_transform(bottom_list_df[col])
bottom_list_df.head()
for col in range(20):
    top_list_df[col] = le.fit_transform(top_list_df[col])
top_list_df.tail()
top_list_df.shape
bottom_list_df.shape
performance.shape
from sklearn.linear_model import LogisticRegression
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.pipeline import Pipeline
pipeline = Pipeline([
#('normalizer', StandardScaler()), #Step1 - normalize data
('clf', LogisticRegression()) #step2 - classifier
])
pipeline.steps
x=0
y=520
data=performance[x:y]
data.shape
###Output
_____no_output_____
###Markdown
Method-1 top_list_Data
###Code
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(top_list_df,data,test_size=0.20,random_state=2)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
from sklearn.model_selection import cross_validate
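# cross_validate defaults to 5-fold cross-validation (recent scikit-learn versions) and returns fit_time, score_time and test_score arrays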
scores = cross_validate(pipeline, X_train, y_train)
scores
scores['test_score'].mean()
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
clfs = []
clfs.append(LogisticRegression())
clfs.append(SVC())
clfs.append(KNeighborsClassifier(n_neighbors=3))
clfs.append(DecisionTreeClassifier())
clfs.append(RandomForestClassifier())
clfs.append(GradientBoostingClassifier())
for classifier in clfs:
pipeline.set_params(clf = classifier)
scores = cross_validate(pipeline, X_train, y_train)
print('---------------------------------')
print(str(classifier))
print('-----------------------------------')
for key, values in scores.items():
print(key,' mean ', values.mean())
print(key,' std ', values.std())
###Output
/home/hemanth/.local/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
return f(**kwargs)
/home/hemanth/.local/lib/python3.8/site-packages/sklearn/linear_model/_logistic.py:762: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
n_iter_i = _check_optimize_result(
/home/hemanth/.local/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
return f(**kwargs)
/home/hemanth/.local/lib/python3.8/site-packages/sklearn/linear_model/_logistic.py:762: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
n_iter_i = _check_optimize_result(
/home/hemanth/.local/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
return f(**kwargs)
/home/hemanth/.local/lib/python3.8/site-packages/sklearn/linear_model/_logistic.py:762: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
n_iter_i = _check_optimize_result(
/home/hemanth/.local/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
return f(**kwargs)
/home/hemanth/.local/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
return f(**kwargs)
/home/hemanth/.local/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
return f(**kwargs)
/home/hemanth/.local/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
return f(**kwargs)
/home/hemanth/.local/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
return f(**kwargs)
/home/hemanth/.local/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
return f(**kwargs)
/home/hemanth/.local/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
return f(**kwargs)
###Markdown
Bottom_list_data
###Code
pipeline1 = Pipeline([
('clf', LogisticRegression()) #step2 - classifier
])
pipeline1.steps
X_train,X_test,y_train,y_test=train_test_split(bottom_list_df,data,test_size=0.30,random_state=2)
scores = cross_validate(pipeline1, X_train, y_train)
scores
scores['test_score'].mean()
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
clfs = []
clfs.append(LogisticRegression())
clfs.append(SVC())
clfs.append(KNeighborsClassifier(n_neighbors=3))
clfs.append(DecisionTreeClassifier())
clfs.append(RandomForestClassifier())
clfs.append(GradientBoostingClassifier())
for classifier in clfs:
pipeline1.set_params(clf = classifier)
scores = cross_validate(pipeline1, X_train, y_train)
print('---------------------------------')
print(str(classifier))
print('-----------------------------------')
for key, values in scores.items():
print(key,' mean ', values.mean())
print(key,' std ', values.std())
###Output
/home/hemanth/.local/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
return f(**kwargs)
/home/hemanth/.local/lib/python3.8/site-packages/sklearn/linear_model/_logistic.py:762: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
n_iter_i = _check_optimize_result(
/home/hemanth/.local/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
return f(**kwargs)
/home/hemanth/.local/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
return f(**kwargs)
/home/hemanth/.local/lib/python3.8/site-packages/sklearn/linear_model/_logistic.py:762: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
n_iter_i = _check_optimize_result(
/home/hemanth/.local/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
return f(**kwargs)
/home/hemanth/.local/lib/python3.8/site-packages/sklearn/linear_model/_logistic.py:762: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
n_iter_i = _check_optimize_result(
/home/hemanth/.local/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
return f(**kwargs)
/home/hemanth/.local/lib/python3.8/site-packages/sklearn/linear_model/_logistic.py:762: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
n_iter_i = _check_optimize_result(
/home/hemanth/.local/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
return f(**kwargs)
/home/hemanth/.local/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
return f(**kwargs)
/home/hemanth/.local/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
return f(**kwargs)
/home/hemanth/.local/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
return f(**kwargs)
###Markdown
Method-2 top_list_df Logistic Regression
###Code
from sklearn.linear_model import LogisticRegression
X_train,X_test,y_train,y_test=train_test_split(top_list_df,data,test_size=0.3,random_state=2)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
logreg=LogisticRegression()
model_logreg=logreg.fit(X_train,y_train)
logreg_predict= model_logreg.predict(X_test)
from sklearn.metrics import accuracy_score,confusion_matrix,classification_report
logistic=accuracy_score(logreg_predict,y_test)
logistic
print(classification_report(logreg_predict,y_test))
cm = confusion_matrix(y_test, logreg_predict)
plt.figure(figsize=(5,5))
p=sns.heatmap(cm, annot=True)
###Output
_____no_output_____
###Markdown
KNN
###Code
from sklearn.neighbors import KNeighborsClassifier
X_train,X_test,y_train,y_test=train_test_split(top_list_df,data,test_size=0.25,random_state=2)
knn=KNeighborsClassifier(n_neighbors=3)
model_knn= knn.fit(X_train,y_train)
knn_predict=model_knn.predict(X_test)
knn=accuracy_score(knn_predict,y_test)
knn
print(classification_report(y_test,knn_predict))
cm = confusion_matrix(y_test, knn_predict)
plt.figure(figsize=(5,5))
p=sns.heatmap(cm, annot=True)
###Output
_____no_output_____
###Markdown
SVM
###Code
from sklearn.svm import SVC
svc=SVC()
model_svm=svc.fit(X_train,y_train)
svm_predict=model_svm.predict(X_test)
svm=accuracy_score(svm_predict,y_test)
svm
print(classification_report(svm_predict,y_test))
cm = confusion_matrix(y_test, svm_predict)
plt.figure(figsize=(5,5))
p=sns.heatmap(cm, annot=True)
###Output
_____no_output_____
###Markdown
DT
###Code
from sklearn.tree import DecisionTreeClassifier
dtree=DecisionTreeClassifier()
model_dt=dtree.fit(X_train,y_train)
dtree_predict=model_dt.predict(X_test)
dt=accuracy_score(dtree_predict,y_test)
dt
print(classification_report(dtree_predict,y_test))
cm = confusion_matrix(y_test, dtree_predict)
plt.figure(figsize=(5,5))
p=sns.heatmap(cm, annot=True)
###Output
_____no_output_____
###Markdown
RF
###Code
from sklearn.ensemble import RandomForestClassifier
rfc=RandomForestClassifier()
model_rf=rfc.fit(X_train,y_train)
rfc_predict=model_rf.predict(X_test)
rf=accuracy_score(rfc_predict,y_test)
rf
print(classification_report(rfc_predict,y_test))
cm = confusion_matrix(y_test, rfc_predict)
plt.figure(figsize=(5,5))
p=sns.heatmap(cm, annot=True)
###Output
_____no_output_____
###Markdown
AdaBoosting
###Code
from sklearn.ensemble import AdaBoostClassifier
adc=AdaBoostClassifier(n_estimators=5,learning_rate=1)
model_adaboosting=adc.fit(X_train,y_train)
adc_predict=model_adaboosting.predict(X_test)
adaboostin=accuracy_score(adc_predict,y_test)
adaboostin
print(classification_report(adc_predict,y_test))
cm = confusion_matrix(y_test, adc_predict)
plt.figure(figsize=(5,5))
p=sns.heatmap(cm, annot=True)
###Output
_____no_output_____
###Markdown
XGBoosting
###Code
from xgboost import XGBClassifier
xgb=XGBClassifier()
model_xgb=xgb.fit(X_train,y_train)
xgb_predict=model_xgb.predict(X_test)
xgb=accuracy_score(xgb_predict,y_test)
xgb
print(classification_report(xgb_predict,y_test))
cm = confusion_matrix(y_test, xgb_predict)
plt.figure(figsize=(5,5))
p=sns.heatmap(cm, annot=True)
pd.DataFrame({"Model Names":['Logistic Reg','SVM','KNN','DT','RF','ADABoostin','XGB'],
"Accuracy socre":[logistic,svm,knn,dt,rf,adaboostin,xgb]})
###Output
_____no_output_____
###Markdown
Bottom_list_df Log_Reg
###Code
from sklearn.linear_model import LogisticRegression
X_train,X_test,y_train,y_test=train_test_split(bottom_list_df,data,test_size=0.3,random_state=2)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
logreg_b=LogisticRegression()
model_logreg_b=logreg_b.fit(X_train,y_train)
logreg_predict_b= model_logreg_b.predict(X_test)
logistic_b=accuracy_score(logreg_predict_b,y_test)
logistic_b
print(classification_report(logreg_predict_b,y_test))
cm = confusion_matrix(y_test, logreg_predict_b)
plt.figure(figsize=(5,5))
p=sns.heatmap(cm, annot=True)
knn=KNeighborsClassifier(n_neighbors=3)
model_knn_b= knn.fit(X_train,y_train)
knn_predict_b=model_knn_b.predict(X_test)
knn_b=accuracy_score(knn_predict_b,y_test)
knn_b
print(classification_report(y_test,knn_predict_b))
cm = confusion_matrix(y_test, knn_predict_b)
plt.figure(figsize=(5,5))
p=sns.heatmap(cm, annot=True)
###Output
_____no_output_____
###Markdown
SVM
###Code
svc=SVC()
model_svm_b=svc.fit(X_train,y_train)
svm_predict_b=model_svm_b.predict(X_test)
svm_b=accuracy_score(svm_predict_b,y_test)
svm_b
print(classification_report(y_test,svm_predict_b))
cm = confusion_matrix(y_test, svm_predict_b)
plt.figure(figsize=(5,5))
p=sns.heatmap(cm, annot=True)
###Output
_____no_output_____
###Markdown
DT
###Code
dtree=DecisionTreeClassifier()
model_dt_b=dtree.fit(X_train,y_train)
dtree_predict_b=model_dt_b.predict(X_test)
dt_b=accuracy_score(dtree_predict_b,y_test)
dt_b
print(classification_report(dtree_predict_b,y_test))
cm = confusion_matrix(y_test, dtree_predict_b)
plt.figure(figsize=(5,5))
p=sns.heatmap(cm, annot=True)
###Output
_____no_output_____
###Markdown
RF
###Code
rfc=RandomForestClassifier()
model_rf_b=rfc.fit(X_train,y_train)
rfc_predict_b=model_rf_b.predict(X_test)
rf_b=accuracy_score(rfc_predict_b,y_test)
rf_b
print(classification_report(rfc_predict_b,y_test))
cm = confusion_matrix(y_test, rfc_predict_b)
plt.figure(figsize=(5,5))
p=sns.heatmap(cm, annot=True)
###Output
_____no_output_____
###Markdown
AdaBoosting
###Code
adc=AdaBoostClassifier(n_estimators=5,learning_rate=1)
model_adaboosting_b=adc.fit(X_train,y_train)
adc_predict_b=model_adaboosting_b.predict(X_test)
adb_b=accuracy_score(adc_predict_b,y_test)
adb_b
print(classification_report(adc_predict_b,y_test))
cm = confusion_matrix(y_test, adc_predict_b)
plt.figure(figsize=(5,5))
p=sns.heatmap(cm, annot=True)
###Output
_____no_output_____
###Markdown
XGBoosting
###Code
xgb=XGBClassifier()
model_xgb_b=xgb.fit(X_train,y_train)
xgb_predict_b=model_xgb_b.predict(X_test)
xgb_b=accuracy_score(xgb_predict_b,y_test)
xgb_b
print(classification_report(xgb_predict_b,y_test))
cm = confusion_matrix(y_test, xgb_predict_b)
plt.figure(figsize=(5,5))
p=sns.heatmap(cm, annot=True)
pd.DataFrame({"Model Names_Bottom_list_df":['Logistic Reg','SVM','KNN','DT','RF','ADABoostin','XGB'],
"Accuracy socre":[logistic_b,svm_b,knn_b,dt_b,rf_b,adb_b,xgb_b]})
###Output
_____no_output_____ |
Yolov4_Object_Detector.ipynb | ###Markdown
Yolo v4 - The Fastest and most Accurate Object DetectorHello everyone, my name is Aris. This is my code to build an object detector using Yolo v4, currently one of the fastest and most accurate object detectors available. This demo also involves a webcam so you can run detection with your own camera. Want to know more about Yolo? Check this link: Darknet Yolo
###Code
# import needed libraries
from IPython.display import display, Javascript, Image
from google.colab.output import eval_js
from google.colab.patches import cv2_imshow
from base64 import b64decode, b64encode
import cv2
import numpy as np
import PIL
import io
import html
import time
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Cloning and Setting Up Darknet for YOLOv4I will be using the famous AlexeyAB's darknet repository in this tutorial to perform YOLOv4 detections.
###Code
# clone darknet repository
!git clone https://github.com/AlexeyAB/darknet
# change makefile to have GPU, OPENCV and LIBSO enabled
%cd darknet
!sed -i 's/OPENCV=0/OPENCV=1/' Makefile
!sed -i 's/GPU=0/GPU=1/' Makefile
!sed -i 's/CUDNN=0/CUDNN=1/' Makefile
!sed -i 's/CUDNN_HALF=0/CUDNN_HALF=1/' Makefile
!sed -i 's/LIBSO=0/LIBSO=1/' Makefile
# make darknet (builds darknet in order to you can then use the darknet.py file and have its dependencies)
!make
# get the scaled yolov4 weights file that is pre-trained to detect 80 classes (objects) from shared google drive
!wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1V3vsIaxAlGWvK4Aar9bAiK5U0QFttKwq' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1V3vsIaxAlGWvK4Aar9bAiK5U0QFttKwq" -O yolov4-csp.weights && rm -rf /tmp/cookies.txt
###Output
_____no_output_____
###Markdown
Darknet for PythonIn order to utilize YOLOv4 with Python code, we will use some of the pre-built functions in darknet.py by importing them into our workstation. Feel free to check out the darknet.py file to see the function code in detail!
###Code
# import darknet functions to perform object detections
from darknet import *
# load in our YOLOv4 architecture network
network, class_names, class_colors = load_network("cfg/yolov4-csp.cfg", "cfg/coco.data", "yolov4-csp.weights")
width = network_width(network)
height = network_height(network)
# darknet helper function to run detection on image
def darknet_helper(img, width, height):
darknet_image = make_image(width, height, 3)
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img_resized = cv2.resize(img_rgb, (width, height),
interpolation=cv2.INTER_LINEAR)
# get image ratios to convert bounding boxes to proper size
img_height, img_width, _ = img.shape
width_ratio = img_width/width
height_ratio = img_height/height
# run model on darknet style image to get detections
copy_image_from_bytes(darknet_image, img_resized.tobytes())
detections = detect_image(network, class_names, darknet_image)
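  # detect_image returns a list of (label, confidence, bbox) tuples; bbox uses darknet's
  # (x_center, y_center, width, height) format relative to the resized network input,
  # which is why callers rescale the boxes with width_ratio/height_ratio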
free_image(darknet_image)
return detections, width_ratio, height_ratio
###Output
_____no_output_____
###Markdown
YOLOv4 Example on Test ImageLet's make sure our model has been loaded successfully and that we can make detections properly on a test image. You can try it with your own image by uploading it to this Google Colab and changing the path accordingly. Let's enjoy it!
###Code
# run test on giraffe.jpg image that comes with repository
image = cv2.imread("data/giraffe.jpg")
detections, width_ratio, height_ratio = darknet_helper(image, width, height)
for label, confidence, bbox in detections:
left, top, right, bottom = bbox2points(bbox)
left, top, right, bottom = int(left * width_ratio), int(top * height_ratio), int(right * width_ratio), int(bottom * height_ratio)
cv2.rectangle(image, (left, top), (right, bottom), class_colors[label], 2)
cv2.putText(image, "{} [{:.2f}]".format(label, float(confidence)),
(left, top - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5,
class_colors[label], 2)
cv2_imshow(image)
###Output
_____no_output_____
###Markdown
Helper FunctionsHere are a few helper functions that will be used to conveniently convert between different image types within our later steps.
###Code
# function to convert the JavaScript object into an OpenCV image
def js_to_image(js_reply):
"""
Params:
js_reply: JavaScript object containing image from webcam
Returns:
img: OpenCV BGR image
"""
# decode base64 image
image_bytes = b64decode(js_reply.split(',')[1])
# convert bytes to numpy array
jpg_as_np = np.frombuffer(image_bytes, dtype=np.uint8)
# decode numpy array into OpenCV BGR image
img = cv2.imdecode(jpg_as_np, flags=1)
return img
# function to convert OpenCV Rectangle bounding box image into base64 byte string to be overlayed on video stream
def bbox_to_bytes(bbox_array):
"""
Params:
bbox_array: Numpy array (pixels) containing rectangle to overlay on video stream.
Returns:
bytes: Base64 image byte string
"""
# convert array into PIL image
bbox_PIL = PIL.Image.fromarray(bbox_array, 'RGBA')
iobuf = io.BytesIO()
# format bbox into png for return
bbox_PIL.save(iobuf, format='png')
# format return string
bbox_bytes = 'data:image/png;base64,{}'.format((str(b64encode(iobuf.getvalue()), 'utf-8')))
return bbox_bytes
###Output
_____no_output_____
###Markdown
YOLOv4 on Webcam ImagesRunning YOLOv4 on images taken from the webcam is fairly straightforward. We will utilize code within Google Colab's **Code Snippets** that has a variety of useful code functions to perform various tasks. We will be using the code snippet for **Camera Capture** which runs JavaScript code to utilize your computer's webcam. The code snippet will take a webcam photo, which we will then pass into our YOLOv4 model for object detection. Below is a function to take the webcam picture using JavaScript and then run YOLOv4 on it.
###Code
def take_photo(filename='photo.jpg', quality=0.8):
js = Javascript('''
async function takePhoto(quality) {
const div = document.createElement('div');
const capture = document.createElement('button');
capture.textContent = 'Capture';
div.appendChild(capture);
const video = document.createElement('video');
video.style.display = 'block';
const stream = await navigator.mediaDevices.getUserMedia({video: true});
document.body.appendChild(div);
div.appendChild(video);
video.srcObject = stream;
await video.play();
// Resize the output to fit the video element.
google.colab.output.setIframeHeight(document.documentElement.scrollHeight, true);
// Wait for Capture to be clicked.
await new Promise((resolve) => capture.onclick = resolve);
const canvas = document.createElement('canvas');
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
canvas.getContext('2d').drawImage(video, 0, 0);
stream.getVideoTracks()[0].stop();
div.remove();
return canvas.toDataURL('image/jpeg', quality);
}
''')
display(js)
# get photo data
data = eval_js('takePhoto({})'.format(quality))
# get OpenCV format image
img = js_to_image(data)
# call our darknet helper on webcam image
detections, width_ratio, height_ratio = darknet_helper(img, width, height)
# loop through detections and draw them on webcam image
for label, confidence, bbox in detections:
left, top, right, bottom = bbox2points(bbox)
left, top, right, bottom = int(left * width_ratio), int(top * height_ratio), int(right * width_ratio), int(bottom * height_ratio)
cv2.rectangle(img, (left, top), (right, bottom), class_colors[label], 2)
cv2.putText(img, "{} [{:.2f}]".format(label, float(confidence)),
(left, top - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5,
class_colors[label], 2)
# save image
cv2.imwrite(filename, img)
return filename
try:
filename = take_photo('photo.jpg')
print('Saved to {}'.format(filename))
# Show the image which was just taken.
display(Image(filename))
except Exception as err:
# Errors will be thrown if the user does not have a webcam or if they do not
# grant the page permission to access it.
print(str(err))
###Output
_____no_output_____
###Markdown
YOLOv4 on Webcam VideosRunning YOLOv4 on webcam video is a little more complex than on images. We need to start a video stream using our webcam as input. Then we run each frame through our YOLOv4 model and create an overlay image that contains the bounding box(es) of the detection(s). We then overlay the bounding box image back onto the next frame of our video stream. YOLOv4 is so fast that it can run the detections in real-time! Yolo Webcam Workflow
###Code
# JavaScript to properly create our live video stream using our webcam as input
def video_stream():
js = Javascript('''
var video;
var div = null;
var stream;
var captureCanvas;
var imgElement;
var labelElement;
var pendingResolve = null;
var shutdown = false;
function removeDom() {
stream.getVideoTracks()[0].stop();
video.remove();
div.remove();
video = null;
div = null;
stream = null;
imgElement = null;
captureCanvas = null;
labelElement = null;
}
function onAnimationFrame() {
if (!shutdown) {
window.requestAnimationFrame(onAnimationFrame);
}
if (pendingResolve) {
var result = "";
if (!shutdown) {
captureCanvas.getContext('2d').drawImage(video, 0, 0, 640, 480);
result = captureCanvas.toDataURL('image/jpeg', 0.8)
}
var lp = pendingResolve;
pendingResolve = null;
lp(result);
}
}
async function createDom() {
if (div !== null) {
return stream;
}
div = document.createElement('div');
div.style.border = '2px solid black';
div.style.padding = '3px';
div.style.width = '100%';
div.style.maxWidth = '600px';
document.body.appendChild(div);
const modelOut = document.createElement('div');
modelOut.innerHTML = "<span>Status:</span>";
labelElement = document.createElement('span');
labelElement.innerText = 'No data';
labelElement.style.fontWeight = 'bold';
modelOut.appendChild(labelElement);
div.appendChild(modelOut);
video = document.createElement('video');
video.style.display = 'block';
video.width = div.clientWidth - 6;
video.setAttribute('playsinline', '');
video.onclick = () => { shutdown = true; };
stream = await navigator.mediaDevices.getUserMedia(
{video: { facingMode: "environment"}});
div.appendChild(video);
imgElement = document.createElement('img');
imgElement.style.position = 'absolute';
imgElement.style.zIndex = 1;
imgElement.onclick = () => { shutdown = true; };
div.appendChild(imgElement);
const instruction = document.createElement('div');
instruction.innerHTML =
'<span style="color: red; font-weight: bold;">' +
'When finished, click here or on the video to stop this demo</span>';
div.appendChild(instruction);
instruction.onclick = () => { shutdown = true; };
video.srcObject = stream;
await video.play();
captureCanvas = document.createElement('canvas');
captureCanvas.width = 640; //video.videoWidth;
captureCanvas.height = 480; //video.videoHeight;
window.requestAnimationFrame(onAnimationFrame);
return stream;
}
async function stream_frame(label, imgData) {
if (shutdown) {
removeDom();
shutdown = false;
return '';
}
var preCreate = Date.now();
stream = await createDom();
var preShow = Date.now();
if (label != "") {
labelElement.innerHTML = label;
}
if (imgData != "") {
var videoRect = video.getClientRects()[0];
imgElement.style.top = videoRect.top + "px";
imgElement.style.left = videoRect.left + "px";
imgElement.style.width = videoRect.width + "px";
imgElement.style.height = videoRect.height + "px";
imgElement.src = imgData;
}
var preCapture = Date.now();
var result = await new Promise(function(resolve, reject) {
pendingResolve = resolve;
});
shutdown = false;
return {'create': preShow - preCreate,
'show': preCapture - preShow,
'capture': Date.now() - preCapture,
'img': result};
}
''')
display(js)
def video_frame(label, bbox):
data = eval_js('stream_frame("{}", "{}")'.format(label, bbox))
return data
###Output
_____no_output_____
###Markdown
Running on Webcam Video
###Code
# start streaming video from webcam
video_stream()
# label for video
label_html = 'Capturing...'
# initialze bounding box to empty
bbox = ''
count = 0
while True:
js_reply = video_frame(label_html, bbox)
if not js_reply:
break
# convert JS response to OpenCV Image
frame = js_to_image(js_reply["img"])
# create transparent overlay for bounding box
bbox_array = np.zeros([480,640,4], dtype=np.uint8)
# call our darknet helper on video frame
detections, width_ratio, height_ratio = darknet_helper(frame, width, height)
# loop through detections and draw them on transparent overlay image
for label, confidence, bbox in detections:
left, top, right, bottom = bbox2points(bbox)
left, top, right, bottom = int(left * width_ratio), int(top * height_ratio), int(right * width_ratio), int(bottom * height_ratio)
bbox_array = cv2.rectangle(bbox_array, (left, top), (right, bottom), class_colors[label], 2)
bbox_array = cv2.putText(bbox_array, "{} [{:.2f}]".format(label, float(confidence)),
(left, top - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5,
class_colors[label], 2)
bbox_array[:,:,3] = (bbox_array.max(axis = 2) > 0 ).astype(int) * 255
# convert overlay of bbox into bytes
bbox_bytes = bbox_to_bytes(bbox_array)
# update bbox so next frame gets new overlay
bbox = bbox_bytes
###Output
_____no_output_____ |
Student-Admissions**Project**/StudentAdmissionsKeras.ipynb | ###Markdown
Predicting Student Admissions with Neural Networks in KerasIn this notebook, we predict student admissions to graduate school at UCLA based on three pieces of data:- GRE Scores (Test)- GPA Scores (Grades)- Class rank (1-4)The dataset originally came from here: http://www.ats.ucla.edu/ Loading the dataTo load the data and format it nicely, we will use two very useful packages called Pandas and Numpy. You can read on the documentation here:- https://pandas.pydata.org/pandas-docs/stable/- https://docs.scipy.org/
###Code
# Importing pandas and numpy
import pandas as pd
import numpy as np
# Reading the csv file into a pandas DataFrame
data = pd.read_csv('student_data.csv')
# Printing out the first 10 rows of our data
data[:10]
###Output
_____no_output_____
###Markdown
Plotting the dataFirst let's make a plot of our data to see how it looks. In order to have a 2D plot, let's ingore the rank.
###Code
# Importing matplotlib
import matplotlib.pyplot as plt
# Function to help us plot
def plot_points(data):
X = np.array(data[["gre","gpa"]])
y = np.array(data["admit"])
admitted = X[np.argwhere(y==1)]
rejected = X[np.argwhere(y==0)]
plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'red', edgecolor = 'k')
plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'cyan', edgecolor = 'k')
plt.xlabel('Test (GRE)')
plt.ylabel('Grades (GPA)')
# Plotting the points
plot_points(data)
plt.show()
###Output
_____no_output_____
###Markdown
Roughly, it looks like the students with high scores in the grades and test passed, while the ones with low scores didn't, but the data is not as nicely separable as we hoped it would be. Maybe it would help to take the rank into account? Let's make 4 plots, one for each rank.
###Code
# Separating the ranks
data_rank1 = data[data["rank"]==1]
data_rank2 = data[data["rank"]==2]
data_rank3 = data[data["rank"]==3]
data_rank4 = data[data["rank"]==4]
# Plotting the graphs
plot_points(data_rank1)
plt.title("Rank 1")
plt.show()
plot_points(data_rank2)
plt.title("Rank 2")
plt.show()
plot_points(data_rank3)
plt.title("Rank 3")
plt.show()
plot_points(data_rank4)
plt.title("Rank 4")
plt.show()
###Output
_____no_output_____
###Markdown
This looks more promising, as it seems that the lower the rank, the higher the acceptance rate. Let's use the rank as one of our inputs. In order to do this, we should one-hot encode it. One-hot encoding the rankFor this, we'll use the `get_dummies` function in pandas.
###Code
# Make dummy variables for rank
one_hot_data = pd.concat([data, pd.get_dummies(data['rank'], prefix='rank')], axis=1)
# Drop the previous rank column
one_hot_data = one_hot_data.drop('rank', axis=1)
# Print the first 10 rows of our data
one_hot_data[:10]
###Output
_____no_output_____
###Markdown
Scaling the dataThe next step is to scale the data. We notice that the range for grades is 1.0-4.0, whereas the range for test scores is roughly 200-800, which is much larger. This means our data is skewed, and that makes it hard for a neural network to handle. Let's fit our two features into a range of 0-1, by dividing the grades by 4.0, and the test score by 800.
###Code
# Copying our data
processed_data = one_hot_data[:]
# Scaling the columns
processed_data['gre'] = processed_data['gre']/800
processed_data['gpa'] = processed_data['gpa']/4.0
processed_data[:10]
###Output
_____no_output_____
###Markdown
Splitting the data into Training and Testing In order to test our algorithm, we'll split the data into a Training and a Testing set. The size of the testing set will be 10% of the total data.
###Code
sample = np.random.choice(processed_data.index, size=int(len(processed_data)*0.9), replace=False)
train_data, test_data = processed_data.iloc[sample], processed_data.drop(sample)
print("Number of training samples is", len(train_data))
print("Number of testing samples is", len(test_data))
print(train_data[:10])
print(test_data[:10])
###Output
Number of training samples is 360
Number of testing samples is 40
admit gre gpa rank_1 rank_2 rank_3 rank_4
289 0 0.525 0.5650 0 0 0 1
370 1 0.675 0.9425 0 1 0 0
146 0 0.600 0.8500 0 1 0 0
19 1 0.675 0.9525 1 0 0 0
337 0 0.775 0.7725 0 0 0 1
81 0 0.775 0.7675 0 1 0 0
16 0 0.975 0.9675 0 0 0 1
193 0 0.475 0.8975 0 0 0 1
80 0 0.875 0.7250 0 0 0 1
208 0 0.675 0.7900 0 0 1 0
admit gre gpa rank_1 rank_2 rank_3 rank_4
9 0 0.875 0.9800 0 1 0 0
10 0 1.000 1.0000 0 0 0 1
24 1 0.950 0.8375 0 1 0 0
26 1 0.775 0.9025 1 0 0 0
34 0 0.450 0.7850 1 0 0 0
54 0 0.825 0.8350 0 0 1 0
56 0 0.700 0.7975 0 0 1 0
61 0 0.700 0.8300 0 0 0 1
64 0 0.725 1.0000 0 0 1 0
66 0 0.925 0.9050 0 0 0 1
###Markdown
Splitting the data into features and targets (labels)Now, as a final step before the training, we'll split the data into features (X) and targets (y).Also, in Keras, we need to one-hot encode the output. We'll do this with the `to_categorical function`.
###Code
import keras
# Separate data and one-hot encode the output
# Note: We're also turning the data into numpy arrays, in order to train the model in Keras
features = np.array(train_data.drop('admit', axis=1))
targets = np.array(keras.utils.to_categorical(train_data['admit'], 2))
features_test = np.array(test_data.drop('admit', axis=1))
targets_test = np.array(keras.utils.to_categorical(test_data['admit'], 2))
print(features[:10])
print(targets[:10])
###Output
[[ 0.525 0.565 0. 0. 0. 1. ]
[ 0.675 0.9425 0. 1. 0. 0. ]
[ 0.6 0.85 0. 1. 0. 0. ]
[ 0.675 0.9525 1. 0. 0. 0. ]
[ 0.775 0.7725 0. 0. 0. 1. ]
[ 0.775 0.7675 0. 1. 0. 0. ]
[ 0.975 0.9675 0. 0. 0. 1. ]
[ 0.475 0.8975 0. 0. 0. 1. ]
[ 0.875 0.725 0. 0. 0. 1. ]
[ 0.675 0.79 0. 0. 1. 0. ]]
[[ 1. 0.]
[ 0. 1.]
[ 1. 0.]
[ 0. 1.]
[ 1. 0.]
[ 1. 0.]
[ 1. 0.]
[ 1. 0.]
[ 1. 0.]
[ 1. 0.]]
###Markdown
Defining the model architectureHere's where we use Keras to build our neural network.
###Code
# Imports
import numpy as np
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD
from keras.utils import np_utils
# Building the model
model = Sequential()
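# 6 input features: gre, gpa and the four one-hot encoded rank columns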
model.add(Dense(128, activation='tanh', input_shape=(6,)))
model.add(Dropout(.2))
model.add(Dense(32, activation='tanh'))
model.add(Dropout(.1))
model.add(Dense(2, activation='sigmoid'))
# Compiling the model
model.compile(loss = 'mean_squared_error', optimizer='rmsprop', metrics=['accuracy'])
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_46 (Dense) (None, 128) 896
_________________________________________________________________
dropout_31 (Dropout) (None, 128) 0
_________________________________________________________________
dense_47 (Dense) (None, 32) 4128
_________________________________________________________________
dropout_32 (Dropout) (None, 32) 0
_________________________________________________________________
dense_48 (Dense) (None, 2) 66
=================================================================
Total params: 5,090
Trainable params: 5,090
Non-trainable params: 0
_________________________________________________________________
###Markdown
Training the model
###Code
# Training the model
model.fit(features, targets, epochs=100, batch_size=64, verbose=0)
###Output
_____no_output_____
###Markdown
Scoring the model
###Code
# Evaluating the model on the training and testing set
score = model.evaluate(features, targets)
print("\n Training Accuracy:", score[1])
score = model.evaluate(features_test, targets_test)
print("\n Testing Accuracy:", score[1])
###Output
360/360 [==============================] - 0s 554us/step
Training Accuracy: 0.708333333333
40/40 [==============================] - 0s 132us/step
Testing Accuracy: 0.725
|
ds_challenge_03.ipynb | ###Markdown
Good morning! Here's your Wednesday warmup!(1) Log in to GitHub (2) Create a repo called `ds_challenge_03` (3) Clone the repo to your computer in an appropriate location (4) Download the `data.csv` file that I'll send in the next message (5) Use the `mv` command in the terminal to _move_ `data.csv` into your repo directory. (6) Create a new notebook. (7) Write Python code that opens the file and prints out _only_ the GitHub usernames. (8) Add, commit, and push your code to GitHub. (9) Paste the link to your repo here. (10) Take a look at other students' solutions!
###Code
import csv
with open('data.csv') as f:
reader = csv.reader(f)
data = list(reader)
type(data)
for row in data:
if row[1] == 'angrobanGit': #LOL
print('MangrobanGit')
elif row[1] != '':
print( row[1] )
###Output
User
AlecMorgan
as6140
ZMBailey
rokaandy
AnnaLara
ConnorAnderson29
UpwardTrajectory
AlludedCrabb
jnawjux
kayschulz
kevintheduu
Laura-ShummonMaass
worldyne
glmack
mandoiwanaga
MIAISEMAN
nkacoroski
Patrickbfuller
sherzyang
Teosoft7
TSGreenwood
MangrobanGit
|
DevelopmentNotebook.ipynb | ###Markdown
Time2Vec
###Code
class Time2Vector(Layer):
def __init__(self, seq_len, **kwargs):
super(Time2Vector, self).__init__()
self.seq_len = seq_len
def build(self, input_shape):
    ''' init weights and biases with shape (seq_len,) '''
self.weights_linear = self.add_weight(name="weight_linear",
shape=(int(self.seq_len),),
initializer="uniform",
trainable=True)
self.bias_linear = self.add_weight(name="bias_linear",
shape=(int(self.seq_len),),
initializer="uniform",
trainable=True)
self.weights_periodic = self.add_weight(name="weight_periodic",
shape=(int(self.seq_len),),
initializer="uniform",
trainable=True)
self.bias_periodic = self.add_weight(name="bias_periodic",
shape=(int(self.seq_len),),
initializer="uniform",
trainable=True)
def call(self, x):
''' calculate linear and periodic time features '''
x = tf.math.reduce_mean(x[:,:,:4], axis=-1)
    time_linear = self.weights_linear * x + self.bias_linear
time_linear = tf.expand_dims(time_linear, axis=-1)
time_periodic = tf.math.sin(tf.multiply(x, self.weights_periodic) + self.bias_periodic)
time_periodic = tf.expand_dims(time_periodic, axis=-1)
return tf.concat([time_linear, time_periodic], axis=-1)
def get_config(self):
config = super().get_config().copy()
config.update({'seq_len': self.seq_len})
return config
###Output
_____no_output_____
###Markdown
Transformer
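As a quick reference for the layers below (which implement standard scaled dot-product attention): $\mathrm{Attention}(Q,K,V) = \mathrm{softmax}\left(QK^T/\sqrt{d_k}\right)V$, with the multi-head variant concatenating several such heads and projecting back to the model dimension with a final dense layer.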
###Code
class SingleAttention(Layer):
def __init__(self, d_k, d_v):
super(SingleAttention, self).__init__()
self.d_k = d_k
self.d_v = d_v
def build(self, input_shape):
self.query = Dense(self.d_k,
input_shape=input_shape,
kernel_initializer="glorot_uniform",
bias_initializer="glorot_uniform")
self.key = Dense(self.d_k,
input_shape=input_shape,
kernel_initializer="glorot_uniform",
bias_initializer="glorot_uniform")
self.value = Dense(self.d_v,
input_shape=input_shape,
kernel_initializer="glorot_uniform",
bias_initializer="glorot_uniform")
def call(self, inputs):
q = self.query(inputs[0])
k = self.key(inputs[0])
attn_weights = tf.matmul(q, k, transpose_b=True)
attn_weights = tf.map_fn(lambda x: x/np.sqrt(self.d_k), attn_weights)
attn_weights = tf.nn.softmax(attn_weights, axis=-1)
v = self.value(inputs[2])
attn_out = tf.matmul(attn_weights, v)
return attn_out
class MultiAttention(Layer):
def __init__(self, d_k, d_v, n_heads):
super(MultiAttention, self).__init__()
self.d_k = d_k
self.d_v = d_v
self.n_heads = n_heads
self.attn_heads = list()
def build(self, input_shape):
for n in range(self.n_heads):
self.attn_heads.append(SingleAttention(self.d_k, self.d_v))
self.linear = Dense(input_shape[0][-1],
input_shape=input_shape,
kernel_initializer='glorot_uniform',
bias_initializer='glorot_uniform')
def call(self, inputs):
attn = [self.attn_heads[i](inputs) for i in range(self.n_heads)]
concat_attn = tf.concat(attn, axis=-1)
multi_linear = self.linear(concat_attn)
return multi_linear
class TransformerEncoder(Layer):
def __init__(self, d_k, d_v, n_heads, ff_dim, dropout=0.1, **kwargs):
super(TransformerEncoder, self).__init__()
self.d_k = d_k
self.d_v = d_v
self.n_heads = n_heads
self.attn_heads = list()
self.ff_dim = ff_dim
self.dropout_rate = dropout
def build(self, input_shape):
self.attn_multi = MultiAttention(self.d_k, self.d_v, self.n_heads)
self.attn_dropout = Dropout(self.dropout_rate)
self.attn_normalize = LayerNormalization(input_shape=input_shape, epsilon=1e-6)
self.ff_conv1D_1 = Conv1D(filters=self.ff_dim, kernel_size=1, activation="relu")
self.ff_conv1D_2 = Conv1D(filters=input_shape[0][-1], kernel_size=1)
self.ff_dropout = Dropout(self.dropout_rate)
self.ff_normalize = LayerNormalization(input_shape=input_shape, epsilon=1e-6)
def call(self, inputs):
attn_layer = self.attn_multi(inputs)
attn_layer = self.attn_dropout(attn_layer)
attn_layer = self.attn_normalize(inputs[0] + attn_layer)
ff_layer = self.ff_conv1D_1(attn_layer)
ff_layer = self.ff_conv1D_2(ff_layer)
ff_layer = self.ff_dropout(ff_layer)
ff_layer = self.ff_normalize(inputs[0] + ff_layer)
return ff_layer
def get_config(self):
config = super().get_config().copy()
config.update({'d_k': self.d_k,
'd_v': self.d_v,
'n_heads': self.n_heads,
'ff_dim': self.ff_dim,
'attn_heads': self.attn_heads,
'dropout_rate': self.dropout_rate})
return config
# Hyperparameters
seq_len = 128
batch_size = 32
d_k = 256
d_v = 256
n_heads = 12
ff_dim = 256
def create_model():
'''Initialize time and transformer layers'''
time_embedding = Time2Vector(seq_len)
attn_layer1 = TransformerEncoder(d_k, d_v, n_heads, ff_dim)
attn_layer2 = TransformerEncoder(d_k, d_v, n_heads, ff_dim)
attn_layer3 = TransformerEncoder(d_k, d_v, n_heads, ff_dim)
'''Construct model'''
in_seq = Input(shape=(seq_len, 5))
x = time_embedding(in_seq)
x = Concatenate(axis=-1)([in_seq, x])
x = attn_layer1((x, x, x))
x = attn_layer2((x, x, x))
x = attn_layer3((x, x, x))
x = GlobalAveragePooling1D(data_format='channels_first')(x)
x = Dropout(0.1)(x)
x = Dense(64, activation='relu')(x)
x = Dropout(0.1)(x)
out = Dense(1, activation='linear')(x)
model = Model(inputs=in_seq, outputs=out)
model.compile(loss='mse', optimizer='adam', metrics=['mae', 'mape'])
return model
model = create_model()
model.summary()
callback = tf.keras.callbacks.ModelCheckpoint('Transformer+TimeEmbedding.hdf5',
monitor='val_loss',
save_best_only=True, verbose=1)
history = model.fit(X_train, Y_train,
batch_size=batch_size,
epochs=35,
callbacks=[callback],
validation_data=(X_val, Y_val))
# model = tf.keras.models.load_model('/content/Transformer+TimeEmbedding.hdf5',
# custom_objects={'Time2Vector': Time2Vector,
# 'SingleAttention': SingleAttention,
# 'MultiAttention': MultiAttention,
# 'TransformerEncoder': TransformerEncoder})
###Output
_____no_output_____ |
notebooks/tests/coordinates.ipynb | ###Markdown
`coordinates.py` This notebook tests the `coordinates.py` module.This module contains the functions that transforms coordinates from cartesian to celestial and vice versa, along with the jacobian of this transformation.\We will test the coordinate change routines against Astropy
###Code
import numpy as np
from astropy.coordinates import SkyCoord
import astropy.units as u
###Output
_____no_output_____
###Markdown
`celestial_to_cartesian` This method takes as input an array with shape `(N,3)` containing the celestial coordinates and returns a `(N,3)`-shaped array with cartesian coordinates.
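As a reference for what is being tested, the transformation is the usual spherical-to-Cartesian one (up to the ordering of the Cartesian components, which is why the Astropy comparison below stacks $y$, $x$, $z$): $x = D\cos(\mathrm{dec})\cos(\mathrm{ra})$, $y = D\cos(\mathrm{dec})\sin(\mathrm{ra})$, $z = D\sin(\mathrm{dec})$.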
###Code
from figaro.coordinates import celestial_to_cartesian
ra = np.linspace(0,2*np.pi, 72, endpoint = False)
dec = np.linspace(-np.pi/2, np.pi/2., 36)
dist = np.linspace(1, 100, 10)
# For loops
grid = []
for ra_i in ra:
for dec_i in dec:
for d_i in dist:
grid.append(np.array([ra_i, dec_i, d_i]))
celestial_grid = np.array(grid)
sc = SkyCoord(ra = celestial_grid[:,0]*u.rad, dec = celestial_grid[:,1]*u.rad, distance = celestial_grid[:,2]*u.Mpc)
cartesian_astropy = np.array([sc.cartesian.y.value, sc.cartesian.x.value, sc.cartesian.z.value]).T
cartesian_figaro = celestial_to_cartesian(celestial_grid)
np.alltrue(cartesian_astropy == cartesian_figaro)
###Output
_____no_output_____
###Markdown
Since the element-wise comparison fails, let's look at the element-wise differences:
###Code
import matplotlib.pyplot as plt
diffs = (cartesian_astropy - cartesian_figaro).flatten()
a, b, c = plt.hist(diffs, bins = int(np.sqrt(len(diffs))), histtype = 'step', density = True)
plt.figure()
plt.plot(diffs)
np.allclose(cartesian_astropy, cartesian_figaro, atol = 2e-14, rtol = 0)
###Output
_____no_output_____
###Markdown
`cartesian_to_celestial` This method takes as input an array with shape `(N,3)` containing the cartesian coordinates and returns a `(N,3)`-shaped array with celestial coordinates.
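The inverse map (again up to component ordering) is $D = \sqrt{x^2+y^2+z^2}$, $\mathrm{dec} = \arcsin(z/D)$, $\mathrm{ra} = \mathrm{arctan2}(y, x)$ wrapped into $[0, 2\pi)$.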
###Code
from figaro.coordinates import cartesian_to_celestial
ra = np.linspace(0,2*np.pi, 73)[:-1]
dec = np.linspace(-np.pi/2, np.pi/2., 38)
dist = np.linspace(1, 100, 10)
# For loops
grid = []
for ra_i in ra:
for dec_i in dec:
for d_i in dist:
grid.append(np.array([ra_i, dec_i, d_i]))
celestial_grid = np.array(grid)
sc = SkyCoord(ra = celestial_grid[:,0]*u.rad, dec = celestial_grid[:,1]*u.rad, distance = celestial_grid[:,2]*u.Mpc)
cartesian_astropy = np.array([sc.cartesian.y.value, sc.cartesian.x.value, sc.cartesian.z.value]).T
celestial_figaro = cartesian_to_celestial(cartesian_astropy)
np.alltrue(celestial_grid == celestial_figaro)
###Output
_____no_output_____
###Markdown
As before, the element-wise comparison fails. Element-wise differences:
###Code
diffs = (celestial_grid - celestial_figaro).flatten()
a, b, c = plt.hist(diffs, bins = int(np.sqrt(len(diffs))), histtype = 'step', density = True)
plt.figure()
plt.plot(diffs)
np.allclose(celestial_grid, celestial_figaro, atol = 3e-14, rtol = 0)
###Output
_____no_output_____ |
notebooks/01.0-custom-parsing/17.0-cassins-vireo-custom-parsing.ipynb | ###Markdown
Cassin's vireo custom parsing- A labelled (but smaller) dataset of Cassin's vireo vocalizations - .WAV files - .TextGrid files with labels- This notebook creates a JSON corresponding to each WAV file (and Noise file where available).- Dataset origin: - https://figshare.com/articles/Data_used_in_PLoS_One_article_Complexity_Predictability_and_Time_Homogeneity_of_Syntax_in_the_Songs_of_Cassin_s_Vireo_Vireo_cassini_by_Hedley_2016_/3081814/1 - for consistency with the CATH dataset, we also download from figshare
###Code
from avgn.utils.general import prepare_env
prepare_env()
###Output
env: CUDA_VISIBLE_DEVICES=GPU
###Markdown
Import relevant packages
###Code
from joblib import Parallel, delayed
from tqdm.autonotebook import tqdm
import pandas as pd
pd.options.display.max_columns = None
import librosa
from datetime import datetime
import numpy as np
import xlrd
import avgn
from avgn.custom_parsing.bird_db import generate_json
from avgn.downloading.birdDB import openBirdDB_df
from avgn.utils.paths import DATA_DIR
###Output
_____no_output_____
###Markdown
Load data in original format
###Code
DATASET_ID = 'BIRD_DB_Vireo_cassinii'
# create a unique datetime identifier for the files output by this notebook
DT_ID = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
DT_ID
DSLOC = DATA_DIR/ 'raw' / 'bird-db' / 'CAVI'
DSLOC
WAVLIST = list((DSLOC).expanduser().glob('*/wavs/*.wav'))
len(WAVLIST), WAVLIST[0]
song_db = openBirdDB_df()
song_db[:3]
###Output
_____no_output_____
###Markdown
Create a JSON for each wav
###Code
for wavfile in tqdm(WAVLIST):
generate_json(
wavfile,
DT_ID,
song_db
)
###Output
_____no_output_____ |
notebooks/test_simulator_package.ipynb | ###Markdown
Cost SimulatorThis notebook estimates the costs of a JupyterHub on an autoscaling k8s cluster in the cloud. Let's get an idea of what JupyterHub and Kubernetes are.- [JupyterHub](https://github.com/jupyterhub/jupyterhub) is a multi-user Hub that spawns and manages multiple instances of the single-user Jupyter notebook server.- [Kubernetes](https://kubernetes.io/docs/concepts/) (commonly stylized as k8s) is a popular open source platform which allows users to build application services across multiple containers, schedule those containers across a cluster, scale those containers, and manage the health of those containers over time. JupyterHub Distribution1. If you need a simple case for a small number of users (0-100) and a single server, then use [The Littlest JupyterHub distribution](https://github.com/jupyterhub/the-littlest-jupyterhub).1. If you need to allow for even more users, a dynamic number of servers can be used on a cloud, with [The Zero to JupyterHub with Kubernetes](https://z2jh.jupyter.org/en/latest/). For documentation on deploying JupyterHub on Kubernetes refer to: [Deploy a JupyterHub on Kubernetes](https://github.com/jupyterhub/zero-to-jupyterhub-k8s)We are focused on estimating the cloud compute costs for a deployment of JupyterHub with Kubernetes, so let's get familiar with Kubernetes terminology for a better understanding of the simulation process:- Node: A node is a single machine with a set of CPU/RAM resources that can be utilized. A cluster in Kubernetes is a group of nodes. - Kubernetes will **automatically scale up** your cluster as soon as you need it, and **scale it back down** when you don't need it.- User pod: A Pod is the smallest and simplest unit in the Kubernetes object model that you create or deploy. It represents processes running on your Cluster. - JupyterHub will **automatically delete** any user pods that have no activity for a period of time. This helps free up computational resources and keeps costs down if you are using an autoscaling cluster. Input provided to the simulation- Interactive input form - The user of the simulator is provided with a form to draw a line showing the number of users estimated to use the JupyterHub hour by hour. The data captured by the line is used to generate the user activity.- Configurations about the Node (memory/CPU), user pod (memory/CPU) and the cost per month. Output of the simulation- The minute-by-minute utilization data of the cluster.- The information about the scale up/down of the nodes in the cluster.
###Code
# autoreload allows us to make changes to the z2jh_cost_simulator package
# and workaround a caching of the module, so our changes can be seen.
%load_ext autoreload
%autoreload 2
#Import the simulator.py and generate_user_activity.py modules from the z2jh_cost_simulator package
from z2jh_cost_simulator import simulator
from z2jh_cost_simulator import generate_user_activity
###Output
_____no_output_____
###Markdown
Set the configurations for the simulation.1. Info about a single node pool where users reside - CPU / Memory, for example 4 CPU cores / 26 GB memory per node - Autoscaling limits, for example 0-5 nodes - Cost, for example 120 USD / month per node - Cluster autoscaler details, i.e. how long a node must be inactive before it is scaled down (10 minutes) 1. Info about the resource requests (guaranteed resources) for the users1. How much time before a user pod is culled due to inactivity 1. The maximum lifetime of a user pod before it is culled.
###Code
import ipywidgets as widgets
from ipywidgets import Layout, VBox
style_for_widget = {"description_width": "250px"}
layout_for_widget = {"width": "500px"}
box_layout = Layout(border="solid", width="600px")
heading = widgets.HTML(value="<h3 style=text-align:center;>Select the simulation configuration.</h3>")
max_min_nodes = widgets.IntRangeSlider(min=0, max=10, step=1, description='Number of nodes', value=(1,3), style=style_for_widget, layout=layout_for_widget,)
node_cpu = widgets.IntSlider(min=1, max=16, description="Node CPU", value=4, style=style_for_widget, layout=layout_for_widget,)
user_pod_cpu = widgets.FloatSlider(min=0, max=16, step=.05, value=.05, description="User pod CPU", style=style_for_widget, layout=layout_for_widget,)
node_memory = widgets.FloatSlider(min=0, max=128.0, step=1, value=25.0, description="Node Memory(in GB)", style=style_for_widget, layout=layout_for_widget,)
user_pod_memory = widgets.FloatSlider(min=0, max=5.0, step=.064, value=1.024, description="User pod Memory(in GB)", style=style_for_widget, layout=layout_for_widget,)
cost_per_month = widgets.FloatSlider(min=0, max=500.0, step=1, value=120.0, description="Cost per month(in USD)", style=style_for_widget, layout=layout_for_widget,)
pod_culling_max_inactivity_time = widgets.IntSlider(min=0, max=500, step=5, value=60, description="User pod culling for inactivity(in min)", style=style_for_widget, layout=layout_for_widget,)
pod_culling_max_lifetime = widgets.IntSlider(min=0, max=1440, step=20, value=0, description="User pod culling for max lifetime(in min)", style=style_for_widget, layout=layout_for_widget,)
simulation_configurations = [
heading,
max_min_nodes,
node_cpu,
user_pod_cpu,
node_memory,
user_pod_memory,
cost_per_month,
pod_culling_max_inactivity_time,
pod_culling_max_lifetime,
]
VBox(simulation_configurations, layout=box_layout)
#The configurations which will be passed to the Simulation object.
simulation_configurations = {
'min_nodes' : max_min_nodes.lower,
'max_nodes' : max_min_nodes.upper,
'node_cpu' : node_cpu.value,
'node_memory' : node_memory.value,
'user_pod_cpu' : user_pod_cpu.value,
'user_pod_memory': user_pod_memory.value,
'cost_per_month' : cost_per_month.value,
'pod_inactivity_time': pod_culling_max_inactivity_time.value,
'pod_max_lifetime' : pod_culling_max_lifetime.value,
'node_stop_time' : 10,
}
###Output
_____no_output_____
###Markdown
The goal of the interactive input form is to assist the simulator: in a user-friendly way, it collects from the user of the `z2jh_cost_simulator` an estimate of, for example, the maximum number of users for every hour of a full day. If you change the usage pattern in the input form below, select Run -> Run Selected Cell and All Below.
###Code
from z2jh_cost_simulator.input_form import InteractiveInputForm
workweek_day = InteractiveInputForm()
weekend_day = InteractiveInputForm()
#Draw a line to display the number of users for one work week day.
workweek_day_fig = workweek_day.get_input_form("Maximum number of users per day on a weekday", no_users=50)
display(workweek_day_fig)
#Draw a line to display the number of users for one week end day.
weekend_day_fig = weekend_day.get_input_form("Maximum number of users per day on a weekend", no_users=50)
display(weekend_day_fig)
#Display an initial figure
workweek_day.set_default_figure()
weekend_day.set_default_figure()
#Maximum number of users for every given hour for one week day
no_users_on_weekday = workweek_day.get_data().tolist()
#Maximum number of users for every given hour for one week-end day
no_users_on_weekend = weekend_day.get_data().tolist()
hour_wise_users_full_week = []
total_users_weekday = []
total_users_weekend = []
total_users_weekday.extend(no_users_on_weekday * 5)
total_users_weekend.extend(no_users_on_weekend * 2)
hour_wise_users_full_week = total_users_weekday + total_users_weekend
#Generate the user activity based on the hour wise number of users for the full week.
user_activity = generate_user_activity.generate_user_activity(hour_wise_users_full_week)
#Create an object of the Simulation, and pass configurations and the user activity as parameters to the Simulation object.
sim = simulator.Simulation(simulation_configurations, user_activity)
#The run method runs the simulation for one week.
sim.run()
#The simulator returns the node wise utilization information for each minute.
cluster_utilization_data = sim.create_utilization_data()
#The cost of using the JupyterHub deployment for one week.
sim.calculate_cost()
###Output
_____no_output_____
###Markdown
Visualize the simulationUsing the information we get after running the simulator, a line chart is plotted.The chart helps in understanding the node utilization for the selected day of the week.A **Slider** widget has been used to select a particular day.
###Code
#Imports required for the visualization of the simulation
import plotly.plotly as py
import plotly.graph_objs as go
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
from ipywidgets import Layout, VBox
widget_style = {"description_width": "initial"}
selected_day = widgets.IntSlider(
min=1, max=7, description="Day of the week", style=widget_style
)
list_nodes = list(
node for node in cluster_utilization_data.columns if node.find("percent") != -1
)
data = []
for node in list_nodes:
data.append(
go.Scatter(
x=cluster_utilization_data["time"],
y=cluster_utilization_data[node][0:1440],
mode="lines",
name="Node" + str(list_nodes.index(node) + 1),
)
)
figure = go.FigureWidget(
data=data,
layout=go.Layout(
title=dict(text="Cluster utilization data"),
xaxis=dict(
title="Time in (hours)",
tickmode="array",
tickangle=45,
tickvals=list(range(0, 1440, 120)),
ticktext=[str(i) + " hours" for i in range(0, 24, 2)],
),
yaxis=dict(title="Utilization(%)", tickformat="0%"),
),
)
def response(change):
for node in list_nodes:
data_for_selected_day = cluster_utilization_data[node][
(selected_day.value - 1) * 1440 : selected_day.value * 1440
]
figure.data[list_nodes.index(node)].y = data_for_selected_day
selected_day.observe(response)
graph_data = widgets.HBox([selected_day])
widgets.VBox([graph_data, figure])
###Output
_____no_output_____ |
Implementation_of_SGD_Classifier_with_LogLoss.ipynb | ###Markdown
Implement SGD Classifier with Logloss and L2 regularization Using SGD without using sklearn Importing packages
###Code
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn import linear_model
import matplotlib.pyplot as plt
from tqdm import tqdm
###Output
_____no_output_____
###Markdown
Creating custom dataset
###Code
X, y = make_classification(n_samples=50000, n_features=15, n_informative=10, n_redundant=5,
n_classes=2, weights=[0.7], class_sep=0.7, random_state=15)
X.shape, y.shape
###Output
_____no_output_____
###Markdown
Splitting data into train and test
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=15)
# Standardizing the data.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
X_train.shape, y_train.shape, X_test.shape, y_test.shape
###Output
_____no_output_____
###Markdown
SGD classifier using SKLearn
###Code
clf = linear_model.SGDClassifier(eta0=0.0001, alpha=0.0001, loss='log', random_state=15, penalty='l2', tol=1e-3, verbose=2, learning_rate='constant')
clf
clf.fit(X=X_train, y=y_train) # fitting our model
clf.coef_, clf.coef_.shape, clf.intercept_
###Output
_____no_output_____
###Markdown
Implement Logistic Regression with L2 regularization Using SGD: without using sklearn Initialize weights
###Code
def initialize_weights(dim):
w= np.array([0]*(len(dim))) #Generating a Array of zeros of size equal to the number of attributes
b=np.array([0]) #Generating a co-efficient vector of size 1
return w,b
dim=X_train[0]
w,b = initialize_weights(dim)
print('w =',(w))
print('b =',str(b))
type(w)
###Output
w = [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
b = [0]
###Markdown
Compute sigmoid $sigmoid(z)= 1/(1+exp(-z))$
###Code
import math
def sigmoid(z):
a= 1/(1+math.exp(-z)) #Computing the value as per the formula mentioned
return a
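# Note (added): math.exp(-z) overflows for large negative z (roughly z < -709);
# if that happens in practice, clipping z to a safe range or using scipy.special.expit
# (a numerically stable sigmoid) are common workarounds.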
###Output
_____no_output_____
###Markdown
Compute loss $\text{log loss} = -\frac{1}{n}\sum_{i=1}^{n}\left(y_i\,\log_{10}(\hat{y}_i)+(1-y_i)\,\log_{10}(1-\hat{y}_i)\right)$
###Code
def logloss(y_true,y_pred):
loss=0 #Initializing a variable
for i in range(len(y_true)): #Looping through all the Y values
loss= loss+ (y_true[i])*(math.log10(y_pred[i]))+((1-y_true[i])*(math.log10(1-y_pred[i]))) #Computing the Logloss
loss= -1*(loss/len(y_true))
return loss
###Output
_____no_output_____
###Markdown
Compute gradient w.r.t. 'w' $dw^{(t)} = x_n\left(y_n - \sigma\left((w^{(t)})^{T} x_n+b^{(t)}\right)\right) - \frac{\lambda}{N}w^{(t)}$
###Code
def gradient_dw(x,y,w,b,alpha,N):
m= np.dot(np.transpose(w),x) #Computing the dot product of the weight vector and the x vector
s= sigmoid(m+b) #Taking the siigmoid of the result of the dot product and the co-efficient
dw= (x*(y-s))- ((alpha/N)*w) #Computing the Gradient
return dw
###Output
_____no_output_____
###Markdown
Compute gradient w.r.t. 'b' $db^{(t)} = y_n - \sigma\left((w^{(t)})^{T} x_n+b^{(t)}\right)$
###Code
def gradient_db(x,y,w,b):
m= np.dot(np.transpose(w),x) #Computing the dot product of the weight vector and the X vector
s= sigmoid(m+b) #Taking the sigmoid value of the co-efficient and the dot product
db= y-s #Calculating the gradient
return db
###Output
_____no_output_____
###Markdown
Implementing logistic regression
###Code
def train(X_train,y_train,X_test,y_test,epochs,alpha,eta0):
w,b= initialize_weights(X_train[0]) #Initializing the weights and the intercept values to zero
train_loss=[] #initializing lists
test_loss=[]
for i in tqdm(range(epochs)): #Looping through 50 epochs
w_g, w_b= initialize_weights(X_train[0]) #Initializing the weights, intercept and lists for predecting the train and test data
pred_tr=[]
pred_te=[]
for j in range(len(X_train)): #Looping through each row item of the Train Data
w_g= w_g + gradient_dw(X_train[j],y_train[j],w,b,alpha,len(X_train)) #Computing the gradient of the weights and the Gradient
w_b= w_b + gradient_db(X_train[j],y_train[j],w,b)
w= w + (eta0*w_g) #Computing the updated weights
b= b + (eta0*w_b)
for k in range(len(X_train)): #Predicting the probablity of y=1 for each of the rows or observations in X_train
t= sigmoid(b[0]+ np.dot(np.transpose(w),X_train[k])) #Using the formula for a generalized logistic plane y= sigmoid(b0+b1X1+b2X2+b3X3......bnXn)
pred_tr.append(t)
for l in range(len(X_test)): #Predicting the probablity of y=1 for each of the rows or observations in X_test
e= sigmoid(b[0]+ np.dot(np.transpose(w),X_test[l])) #Using the formula for a generalized logistic plane y= sigmoid(b0+b1X1+b2X2+b3X3......bnXn)
pred_te.append(e)
train_loss.append(logloss(y_train,pred_tr)) #leveraging the custom defined logloss function to predict the loss for train data
test_loss.append(logloss(y_test,pred_te)) #leveraging the custom defined logloss function to predict the loss for test data
return w,b,train_loss,test_loss
###Output
_____no_output_____
###Markdown
Experimentation with Learning Rate and the number of epochs
###Code
alpha=0.0001
eta0=0.0001
N=len(X_train)
epochs=100
w,b,trainloss,testloss=train(X_train,y_train,X_test,y_test,epochs,alpha,eta0)
print('Weights Vector:',w)
print('Intercept:',b)
clf.coef_, clf.coef_.shape, clf.intercept_
alpha=0.0001
eta0=0.00001
N=len(X_train)
epochs=100
w1,b1,trainloss1,testloss1=train(X_train,y_train,X_test,y_test,epochs,alpha,eta0)
alpha=0.0001
eta0=0.00005
N=len(X_train)
epochs=30
w2,b2,trainloss2,testlos2=train(X_train,y_train,X_test,y_test,epochs,alpha,eta0)
alpha=0.0001
eta0=0.00005
N=len(X_train)
epochs=25
w3,b3,trainloss3,testlos3=train(X_train,y_train,X_test,y_test,epochs,alpha,eta0)
###Output
100%|โโโโโโโโโโ| 25/25 [00:33<00:00, 1.33s/it]
###Markdown
Comparison between our implementation and the SGDClassifier for various values of epochs and the learning rate epoch= 100, learning rate= 0.0001
###Code
# these are the results we got after we implemented sgd and found the optimal weights and intercept
w-clf.coef_, b-clf.intercept_
###Output
_____no_output_____
###Markdown
epochs= 100, learning rate= 0.00001
###Code
w1-clf.coef_, b1-clf.intercept_
###Output
_____no_output_____
###Markdown
Observation: From the above two scenarios we see that the difference reduces as the learning rate decreases; hence we decrease the number of epochs and marginally increase the learning rate. epochs= 30, learning rate= 0.00005
###Code
w2-clf.coef_, b2-clf.intercept_
###Output
_____no_output_____
###Markdown
Observation: We see that the difference has reduced to a good extent, so we try to reduce it further. Epochs= 25, Learning Rate= 0.00005 **(FINAL Hyperparameter Set)**
###Code
w3-clf.coef_, b3-clf.intercept_
###Output
_____no_output_____
###Markdown
Observation: The difference is in the range of 10^-3. Hence the final set of hyperparameters is as below: Epochs= 25, Learning Rate= 0.00005 Plot epoch number vs train, test loss * epoch number on X-axis * loss on Y-axis
###Code
plt.figure(figsize= (10,5)) #Adjusting the size of the Plot
ep= list(range(1,101)) #epoch values
plt.plot(ep,trainloss,label= 'Train Loss') #Plotting the Training Loss
plt.plot(ep,testloss, label= "Test Loss") #Plotting the Test Loss
plt.legend() #Displaying the Legend
plt.xlabel("Epoch") #Setting the x,y labels and the Title of the Plot
plt.ylabel("Log-Loss")
plt.title("epoch Vs Train & Test Loss")
plt.show()
###Output
_____no_output_____
###Markdown
Plot of the Loss for final scenario (Epochs=25, Learning Rate= 0.00005)
###Code
plt.figure(figsize= (10,5)) #Adjusting the size of the Plot
ep= list(range(1,26)) #epoch values
plt.plot(ep,trainloss3,label= 'Train Loss') #Plotting the Training Loss
plt.plot(ep,testlos3, label= "Test Loss") #Plotting the Test Loss
plt.legend() #Displaying the Legend
plt.xlabel("Epoch") #Setting the x,y labels and the Title of the Plot
plt.ylabel("Log-Loss")
plt.title("epoch Vs Train & Test Loss")
plt.show()
def pred(w,b, X):
N = len(X)
predict = []
for i in range(N):
z=np.dot(w,X[i])+b
if sigmoid(z) >= 0.5: # sigmoid(w,x,b) returns 1/(1+exp(-(dot(x,w)+b)))
predict.append(1)
else:
predict.append(0)
return np.array(predict)
print(1-np.sum(np.abs(y_train - pred(w3,b3,X_train)))/len(X_train))   # train accuracy (abs keeps +/- errors from cancelling)
print(1-np.sum(np.abs(y_test - pred(w3,b3,X_test)))/len(X_test))      # test accuracy
###Output
_____no_output_____ |
Introduction/2.Comparisons.ipynb | ###Markdown
Comparison Operators We need to be able to compare different variables. We will be working on:* Are these things the same?* Are these things not the same?* How do these things compare?We can compare any data type, and our output will be a boolean (True or False). The other things we will cover are:* Comparing different data types* Making multiple comparisons at onceComparison operators are important on their own (how do these things compare?) and are also useful for sorting and switching (see the next notebook.) Are these things the same? We have already initiated variables by setting something equal to something else - let's do that here by setting an equal to 10 and then setting b equal to a.
###Code
a = 10
b = a
c = 11
print( "a=", a, "; b =", b, "; c = ", c )
###Output
a= 10 ; b = 10 ; c = 11
###Markdown
The first comparison operator is '==' which tests to see if two variables are equal.
###Code
print( "a=", a, "; b =", b, "; c = ", c )
# Is a equal to b?
print( "\n#-# Is a equal to b?")
print( a == b )
# Is a equal to c?
print( "\n#-# Is a equal to c?")
print( a == c )
###Output
a= 10 ; b = 10 ; c = 11
#-# Is a equal to b?
True
#-# Is a equal to c?
False
###Markdown
This tells us that a is equal to c, because it returns 'True' and a is not equal to c, as that returns 'False.' We can also do comparisons with other variable types. Here's an example with strings instead of integers
###Code
aStr = 'apple'
bStr = 'banana'
cStr = 'apple'
print( "aStr =", aStr,"; bStr =", bStr,"; cStr = ", cStr )
# Is aStr equal to bStr?
print( "\n#-# Is aStr equal to bStr?")
print( aStr == bStr )
# Is aStr equal to cStr?
print( "\n#-# Is aStr equal to cStr?")
print( aStr == cStr )
###Output
aStr = apple ; bStr = banana ; cStr = apple
#-# Is aStr equal to bStr?
False
#-# Is aStr equal to cStr?
True
###Markdown
Are these things different? We can also test to see if two values are not equal using the '!=' operator.
###Code
print( "a =", a, "; b =", b, "; c = ", c )
# Is a not equal to b?
print( "\n#-# Is a not equal to b?")
print( a != b )
# Is a not equal to c?
print( "\n#-# Is a not equal to c?")
print( a != c )
###Output
a = 10 ; b = 10 ; c = 11
#-# Is a not equal to b?
False
#-# Is a not equal to c?
True
###Markdown
This gives us the opposite of what we had before. It is false that a and b are not equal, meaning that they are equal. It is true that a and c are not equal. How do these things compare? We can also compare the magnitude of values using ''and '>=', which will return 'True' if the condition is being met.
###Code
print( "a =", a, "; b =", b )
# Is a less than b?
print( "\n#-# Is a less than b?")
print( a < b )
# Is a less than or equal to b?
print( "\n#-# Is a less than or equal to b?")
print( a <= b )
# Is aVar greater than or equal to bVar?
print( "\n#-# Is a greater than or equal to b?")
print( a >= b )
# Is a greater than b?
print( "\n#-# Is a greater than b?")
print( a > b )
###Output
a = 10 ; b = 10
#-# Is a less than b?
False
#-# Is a less than or equal to b?
True
#-# Is a greater than or equal to b?
True
#-# Is a greater than b?
False
###Markdown
Warnings for variable types We do have to watch out for our types. A string of a value is not the same as a value
###Code
aStr = '10'
aFlt = 10.0
print( "a=", a, "; aStr =", aStr, "; aFlt =", aFlt )
# Is a equal to aStr?
print( "\n#-# Is a equal to aStr?")
print( a == aStr )
print( "a type is ", type( a ), "; and aStr type is ", type( aStr ) )
# Is a equal to aFlt?
print( "\n#-# Is a equal to aFlt?")
print( a == aFlt)
print( "a type is ", type( a ), "; and aStr type is ", type( aFlt ) )
###Output
a= 10 ; aStr = 10 ; aFlt = 10.0
#-# Is a equal to aStr?
False
a type is <class 'int'> ; and aStr type is <class 'str'>
#-# Is a equal to aFlt?
True
a type is <class 'int'> ; and aStr type is <class 'float'>
###Markdown
We can compare integers and floats (!) but not other disparate data types.If you let python take care of your data-types, be warned that they could be different from what you think they are! Multiple Comparisons We can make multiple comparisons at once by stringing the statements* and* not* ortogether. The individual testable (true/false) components need to be broken apart. For example,* If the V CATA bus is coming around the corner, then I need to run towards the bus stoprequires several things for it to be true, and to require running. We can break these things out with:We will only run towards the bus stop if all of the statements are true AND The and operator will return True if all of the conditions are met
###Code
print( "a=", a, "; b =", b, "; c = ", c )
# Is a equal to 10?
print( "\n#-# Is a equal to 10?")
print( a == 10 )
# Is a equal to b?
print( "\n#-# Is a equal to b?" )
print( a == b )
# Is a equal to c?
print( "\n#-# Is a equal to c?" )
print( a == c )
# Is a equal to 10 AND a equal to b?
print ( "\n#-# Is a equal to 10 AND a equal to b?")
print( (a == 10) and (a == b) )
# Is a equal to 10 AND a equal to c?
print ( "\n#-# Is a equal to 10 AND a equal to c?")
print( (a == 10) and (a == c) )
###Output
a= 10 ; b = 10 ; c = 11
#-# Is a equal to 10?
True
#-# Is a equal to b?
True
#-# Is a equal to c?
False
#-# Is a equal to 10 AND a equal to b?
True
#-# Is a equal to 10 AND a equal to c?
False
###Markdown
We can also string as many comparisons together as we want
###Code
print( (1 < 2) and (1 < 3) and (1 < 4) and (1 < 5) and (1 < 6) and (1 < 7) and (1 < 8) )
###Output
True
###Markdown
Try it out Given the following scenario, write the appropriate 'and' conditions. It should produce, true.
###Code
year = 1967
song = "Penny Lane"
band = "Beatles"
# The Beatles song "Penny Lane" was released after 1960 and before 1970
###Output
_____no_output_____
###Markdown
OR If we want 'True' for either of the conditions to be met, we can use the 'or' operator.
###Code
print( "a=", a, "; b =", b, "; c = ", c )
# Is a equal to 10?
print( "\n#-# Is a equal to 10?")
print( a == 10 )
# Is a equal to b?
print( "\n#-# Is a equal to b?" )
print( a == b )
# Is a equal to c?
print( "\n#-# Is a equal to c?" )
print( a == c )
# Is a equal to 10?
print( "\n#-# Is a equal to 11?")
print( a == 11 )
# Is a equal to 10 OR a equal to b?
print ( "\n#-# Is a equal to 10 OR a equal to b?")
print( (a == 10) or (a == b) )
# Is a equal to 10 OR a equal to c?
print ( "\n#-# Is a equal to 10 or a equal to c?")
print( (a == 10) or (a == c) )
# Is a equal to 11 OR a equal to c?
print ( "\n#-# Is a equal to 11 or a equal to c?")
print( (a == 11) or (a == c) )
###Output
a= 10 ; b = 10 ; c = 11
#-# Is a equal to 10?
True
#-# Is a equal to b?
True
#-# Is a equal to c?
False
#-# Is a equal to 11?
False
#-# Is a equal to 10 OR a equal to b?
True
#-# Is a equal to 10 or a equal to c?
True
#-# Is a equal to 11 or a equal to c?
False
###Markdown
Try it out Given the following scenario, write the appropriate 'and' and 'or' conditions. It should produce True.
###Code
year = 1967
song = "Penny Lane"
band = "Beatles"
# The Beatles song released in 1967 was either "Penny Lane" or "Yellow Submarine"
###Output
_____no_output_____
###Markdown
Not We can add a not to change the meaning of the and/or operators
###Code
print( "a=", a, "; b =", b )
# Is aVar equal to 10?
print( "\n#-# Is a equal to 10?")
print( a == 10 )
# Is aVar equal to bVar?
print( "\n#-# Is a equal to b?" )
print( a == b )
# Is aVar equal to 10 AND aVar equal to bVar?
print ( "\n#-# Is a equal to 10 AND a equal to b?")
print( (a == 10) and (a == b) )
# Is aVar equal to 10 AND NOT a equal to b?
print ( "\n#-# Is a equal to 10 AND NOT a equal to b?")
print( (a == 10) and not (a == b) )
###Output
a= 10 ; b = 10
#-# Is a equal to 10?
True
#-# Is a equal to b?
True
#-# Is a equal to 10 AND a equal to b?
True
#-# Is a equal to 10 AND NOT a equal to b?
False
###Markdown
Try it out Given the following scenario, write the appropriate 'and', 'or', and 'not' conditions. It should produce, true.
###Code
year = 1967
song = "Penny Lane"
band = "Beatles"
# The Beatles song not named "Penny Lane" and released either in 1965 or 1967
###Output
_____no_output_____
###Markdown
Try to fill in code to fulfill the request! Here are some variables used in the excercise
###Code
dogA_color='brown'
dogA_mass=42
dogA_gender='male'
dogA_age=5
dogA_name='chip'
dogB_color='white'
dogB_mass=19
dogB_gender='female'
dogB_age=2
dogB_name='lady'
###Output
_____no_output_____
###Markdown
Is dogA the same color as dogB? (False)
###Code
# Example:
print( dogA_color == dogB_color )
###Output
False
###Markdown
Does dogA have the same name as dogB? (False)
###Code
# Try it out here:
###Output
_____no_output_____
###Markdown
Is dogA older than dogB? (True)
###Code
# Try it out here:
###Output
_____no_output_____
###Markdown
Is dogA the same gender as dogB? (False)
###Code
# Try it out here:
###Output
_____no_output_____
###Markdown
Is dogA heavier than dogB and have a different name than dogB? (True)
###Code
# Try it out here:
###Output
_____no_output_____
###Markdown
Does dogA have a different age than dogB and not a different gender than dogB? (False)
###Code
# Try it out here:
###Output
_____no_output_____ |
demo/visualisation.ipynb | ###Markdown
Load Data
###Code
DATA_DIR = '../data'
DATA_PATH = os.path.join(DATA_DIR, 'Location History.json')
def load_location_data_as_dataframe(data_path: str):
with open(data_path) as json_file:
json_data = json.load(json_file)
return pd.json_normalize(json_data, 'locations')
df = load_location_data_as_dataframe(DATA_PATH)
###Output
_____no_output_____
###Markdown
Helper Functions
###Code
def create_geodataframe(df: pd.DataFrame):
df['lat'] = df['latitudeE7'] / 1e7
df['lon'] = df['longitudeE7'] / 1e7
return gpd.GeoDataFrame(df, geometry=gpd.points_from_xy(df.lon, df.lat))
def clean_geodataframe(gdf: gpd.GeoDataFrame):
gdf['ts'] = pd.to_datetime(gdf['timestampMs'], unit='ms')
_gdf = gdf.drop(columns=['timestampMs', 'latitudeE7', 'longitudeE7', 'source', 'deviceTag', 'activity', 'verticalAccuracy', 'platform', 'platformType', 'locationMetadata', 'velocity', 'heading', 'lat', 'lon'])
return _gdf
def save_geodataframe_as_geojson(gdf: gpd.GeoDataFrame, out_path: str):
gdf.to_file(out_path, driver='GeoJSON')
def plot_locations(gdf: gpd.GeoDataFrame, country : str=None):
world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
if country:
ax = world[world.name == country].plot(
color='white', edgecolor='black')
else:
ax = world.plot(
color='white', edgecolor='black')
gdf.plot(ax=ax, color='red');
###Output
_____no_output_____
###Markdown
Process Data
###Code
df = load_location_data_as_dataframe(DATA_PATH)
gdf = create_geodataframe(df)
plot_locations(gdf)
_gdf = clean_geodataframe(gdf)
out_path = os.path.join(DATA_DIR, 'location_history.geojson')
save_geodataframe_as_geojson(_gdf, out_path)
###Output
_____no_output_____ |
04 - Data Analysis With Pandas/notebooks/05_WorkingWithDataFrame.ipynb | ###Markdown
Working with DataFrames
###Code
import pandas as pd
df = pd.read_excel("../data/LungCapData.xls")
df.head(10)
df.Gender.head(10)
df.Gender == "male"
df_male = df[df.Gender == "male"]
df_female = df[df.Gender == "female"]
df.shape
df_male.shape
df_female.shape
df.loc[df.Gender == "female", "Smoke"]
mask1 = df.Gender == "male"
mask1
df_male = df[mask1]
df.head()
df_male = df.loc[mask1]
df_male.head()
df.dtypes # == object
mask2 = df.dtypes == object
mask2
df.loc[:, mask2].head()
df.loc[:, ~mask2].head()
mask1
df.loc[mask1, ~mask2].head()
###Output
_____no_output_____
###Markdown
Filtering DataFrames with many Conditions (AND)
###Code
2 == 2 and 2 > 3
2 == 2 and 3 > 2
2 == 2 or 2 > 3
2 < 1 or 3 == 2
import pandas as pd
df = pd.read_excel("../data/LungCapData.xls")
df.head(10)
df.Gender == "male"
df.Smoke == "yes"
(df.Gender == "male") & (df.Smoke == "yes")
df[(df.Gender == "male") & (df.Smoke == "yes")]
df.head()
(df.Gender == "male") | (df.Smoke == "yes")
df[(df.Gender == "male") | (df.Smoke == "yes")]
mask1 = df.Gender == "male"
mask1.head()
df.Age > 14
mask2 = df.Age > 14
mask2.head()
(mask1 & mask2).head()
df.columns
male_adult = df.loc[mask1 & mask2, ["Smoke", "Caesarean"]]
male_adult.head(5)
male_adult.info()
male_adult.describe()
df.describe()
###Output
_____no_output_____
###Markdown
Filtering DataFrames with many Conditions (OR)
###Code
import pandas as pd
df = pd.read_excel("../data/LungCapData.xls")
df.head()
df.Gender == 'female'
df.Caesarean == 'yes'
(df.Gender == 'female') & (df.Caesarean == 'yes')
df[(df.Gender == 'female') | (df.Caesarean == 'yes')]
mask1 = df.Gender == "female"
mask1.head(5)
mask2 = df.Age < 14
mask2.head(5)
(mask1 | mask2).head(10)
df.loc[mask1 | mask2].head()
female_adult = df.loc[mask1 | mask2, ["Smoke", "Caesarean"]]
female_adult.head()
female_adult.info()
female_adult.describe()
df.describe()
###Output
_____no_output_____
###Markdown
Advanced Filtering with between(), isin() and ~
###Code
import pandas as pd
df = pd.read_csv("../data/gapminder.csv")
df.head()
year_1952 = df.loc[df.year == 1952]
year_1952.head()
year_1952.tail()
year_1952.info()
since1952 = df.loc[df.year >= 1952]
since1952.head()
since1952.tail()
df.year.between(1960, 1969).head()
df_60s = df.loc[df.year.between(1960, 1969, inclusive=True)]
df_60s.head()
df_60s.tail()
selected_year = [1972, 1996]
df.year.isin(selected_year).head()
L = [4, 5, 6]
60 not in L
year_df = df.loc[df.year.isin(selected_year)]
year_df.head()
year_df.tail()
og_not_72_96 = df.loc[~df.year.isin(selected_year)]
og_not_72_96.head()
og_not_72_96.year.unique()
###Output
_____no_output_____
###Markdown
any() and all()
###Code
import pandas as pd
df = pd.read_csv("../data/gapminder.csv")
df.head()
df.country == "Bangladesh"
(df.country == "Bangladesh").any()
(df.country == "Bangladesh").all()
(df.year == 2022).any()
pd.Series([-1, 0.5 , 1, -0.1, 0]).any()
(df.continent == "Asia").any()
###Output
_____no_output_____
###Markdown
Removing Columns
###Code
import pandas as pd
df = pd.read_csv("../data/gapminder.csv")
df.head()
df.drop(columns = "country")
df.head()
df_new = df.drop(columns = "country")
df_new.head()
df.drop(columns = ["country", "pop"], inplace=True)
df.head()
df.drop(labels = "continent", axis = "columns", inplace= True)
df.head()
df.head()
###Output
_____no_output_____
###Markdown
Removing Rows
###Code
import pandas as pd
df = pd.read_csv("../data/covid19.csv", index_col = "Country/Region")
df.head(10)
df.drop(index = "Mainland China")
df.head()
df.drop(index = ["Mainland China","Bangladesh"], inplace = True)
df.head()
###Output
_____no_output_____
###Markdown
Adding new Columns to a DataFrame
###Code
import pandas as pd
df = pd.read_csv("../data/500_Person_Gender_Height_Weight_Index.csv")
df.head()
df["Zeros"] = "Zero"
df.head()
###Output
_____no_output_____
###Markdown
Creating Columns based on other Columns
###Code
import pandas as pd
import numpy as np
df = pd.read_csv("../data/500_Person_Gender_Height_Weight_Index.csv")
df.head()
df['HeightLog10'] = np.log10(df['Height'])
df.head()
df['WeightLog10'] = np.log10(df['Weight'])
df.head()
df['HeightInM'] = df['Height']/100
df.head()
df['BMI'] = df['Weight'] / (df['HeightInM'] ** 2)   # BMI = weight (kg) divided by height (m) squared
df.head()
###Output
_____no_output_____
###Markdown
Sorting DataFrames
###Code
import pandas as pd
df = pd.read_csv("../data/gapminder.csv")
df.head()
df.year.sort_values()
df.sort_values(by = "year").head()
df.sort_values(by = "pop", ascending = False, inplace = True)
df.head()
df.sort_values(by = ["country", "continent"], ascending = [True, True], inplace= True)
df.sort_index(ascending = True, inplace = True)
df
df.sort_values(by = "pop").reset_index(drop = True)
df.sort_values(by = "pop", ignore_index = True)
###Output
_____no_output_____
###Markdown
Ranking DataFrames with rank()
###Code
import pandas as pd
ages = pd.Series([15, 32, 45, 21, 55, 15, 0], index = ["A", "B", "C", "D", "E", "F", "G"])
ages = pd.Series([15, 32, 45, 21, 55, 15, 0], index = ["A", "B", "C", "D", "E", "F", "G"])
ages
ages.sort_values(ascending = False)
ages.rank(ascending=False, method = "min").sort_values(ascending = True)
ages.rank(ascending=False, method = "min", pct=True).sort_values() * 100
df = pd.read_csv("../data/500_Person_Gender_Height_Weight_Index.csv")
df.Height.rank(ascending = False).head()
df["Height_Rank"] = df.Height.rank(ascending = False, method="min")
df.head()
df.sort_values("Height", ascending= False).head()
df.drop(columns = "Height_Rank", inplace= True)
###Output
_____no_output_____
###Markdown
nunique(), nlargest() and nsmallest() with DataFrames
###Code
import pandas as pd
df = pd.read_csv("../data/gapminder.csv")
df.head()
df.tail()
df.country.unique()
df.nunique(axis = 1, dropna=False)
df.nunique(dropna = False)
df.nlargest(n = 5, columns = "pop")
df.sort_values("country", ascending = False).head(5)
df.nsmallest(n = 5, columns = "pop")
p = pd.to_numeric(df['pop'])
p.idxmax()
p = pd.to_numeric(df['pop'])
p.idxmin()
###Output
_____no_output_____
###Markdown
Summary Statistics and Accumulations
###Code
import pandas as pd
df = pd.read_csv("../data/500_Person_Gender_Height_Weight_Index.csv")
df.head()
df.describe()
df.count(axis = "columns")
df.count(axis = 1)
df.mean(axis = 1)
df.sum(axis = 0)
df.head()
df.Height.cumsum(axis = 0)
df.corr()
df.cov()
###Output
_____no_output_____
###Markdown
The agg() method
###Code
import pandas as pd
df = pd.read_csv("../data/500_Person_Gender_Height_Weight_Index.csv")
df.head()
df.describe()
df.mean()
df.agg("mean")
df.agg(["mean", "std"])
df.agg(["mean", "std", "min", "max", "median"])
df.agg({"Weight": "mean", "Height":["min", "max"]})
###Output
_____no_output_____
###Markdown
apply()
###Code
import pandas as pd
import numpy as np
df = pd.read_csv("../data/500_Person_Gender_Height_Weight_Index.csv")
df.head()
df.info()
df.min(axis = 0)
df['HeightLog10'] = df['Height'].apply(lambda x : np.log10(x))
df.head()
df['WeightCat'] = df['Weight'].apply(lambda x : "High" if x > 65 else "Low")
df.head()
def heightcat(x):
if x == 199:
return "Tall"
elif x == 170:
return "Medium"
else:
return "Small"
df['HeightCat'] = df['Height'].apply(heightcat)
df.head()
###Output
_____no_output_____
###Markdown
String Operations Intro / Refresher
###Code
"Hello World"
type("Hello World")
hello = "Hello World"
hello
len(hello)
hello.lower()
hello.upper()
hello.title()
hello.split(" ")
hello.replace("Hello", "Hi")
###Output
_____no_output_____
###Markdown
String Operations in Pandas
###Code
import pandas as pd
df = pd.read_excel("../data/LungCapData.xls")
df.head()
df.columns
df.columns.str.lower()
df.columns.str.title()
df.columns.str.contains('s')
###Output
_____no_output_____ |
Pandas/PandasBasics.ipynb | ###Markdown
Pandas
###Code
!wget "https://s3.amazonaws.com/thinkific/file_uploads/118220/attachments/8de/eed/524/read_csv.zip"
!unzip read_csv.zip
import pandas as pd
import numpy as np
df = pd.read_csv('read_csv/data.csv')
df.head()
df.describe()
df1 = pd.read_excel('read_csv/data.xlsx')
df1.head()
df1.describe()
## Data frame operations
df.head(4)
df.shape
df.tail(4)
df.columns
df['Name']
###Output
_____no_output_____
###Markdown
More operations in DataFrame
###Code
!wget "https://s3.amazonaws.com/thinkific/file_uploads/118220/attachments/9c8/8bf/85e/indexing.zip"
!unzip indexing.zip
df = pd.read_csv('indexing/data.csv')
df.head()
df['Normal_Nucleoli']
df[['Normal_Nucleoli', 'Clump_Thickness']]
# selecting rows by position
df.iloc[0:5]
df.iloc[0:]
df.iloc[:10]
# Selecting columns by their position
df.iloc[:, 0: 2]
df[df['Cancer_Type'] == 2 ]
###Output
_____no_output_____
###Markdown
Data Manipulation and Visualization Sorting
###Code
!wget "https://s3.amazonaws.com/thinkific/file_uploads/118220/attachments/9a9/c1d/e3e/Pandas_Part_1.zip"
!unzip Pandas_Part_1.zip
data = pd.read_csv('Pandas Part 1/bigmart_data.csv')
data = data.dropna(how = 'any')
data.head()
# sorting in pandas
# sort_values and sort_index
# sorting by 'Outlet_Establishment_Year'
sorted = data.sort_values(by ='Outlet_Establishment_Year')
sorted[:5]
sorted = data.sort_values(by='Outlet_Establishment_Year', ascending=False)
sorted[:5]
sorted = data.sort_values(by=['Outlet_Establishment_Year', 'Item_Weight'], ascending=True)
sorted[:5]
sorted = data.sort_values(by=['Item_Weight', 'Outlet_Establishment_Year'], ascending=True)
sorted[:5]
sorted = data.sort_index()
sorted[:5]
###Output
_____no_output_____
###Markdown
Merging Data Frames
###Code
#concat - append
#merge - merge
# read documentation for it
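# A minimal sketch of both operations on small made-up frames
# (the column names below are illustrative, not from the bigmart data):
left = pd.DataFrame({'key': [1, 2, 3], 'value': ['a', 'b', 'c']})
right = pd.DataFrame({'key': [2, 3, 4], 'value': ['x', 'y', 'z']})
pd.concat([left, right], ignore_index=True)                           # append: stack rows of one frame below the other
pd.merge(left, right, on='key', how='inner', suffixes=('_l', '_r'))   # merge: SQL-style join on a shared column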
###Output
_____no_output_____
###Markdown
Apply Function
###Code
!wget "https://s3.amazonaws.com/thinkific/file_uploads/118220/attachments/81e/d1b/6d0/Apply_function.zip"
!unzip Apply_function.zip
data = pd.read_csv('Apply function/bigmart_data.csv')
df = data.apply(lambda x:x)
df.head()
#access first row
data.apply(lambda x : x[0])
#access first column
data.apply(lambda x : x[0], axis=1)
#access column by column name
data.apply(lambda x : x['Item_Fat_Content'], axis=1)
#actual filtering
data['Item_MRP'][0:5]
#actual filtering
def clip(x):
if(x > 200):
x = 200
return x
data['Item_MRP'].apply(lambda x : clip(x))[0:5]
data['Outlet_Location_Type'][0:5]
def encode(tier):
if tier == 'Tier 1':
tier = 0
elif tier == 'Tier 2':
tier = 1
else:
tier = 2
return tier
data['Outlet_Location_Type'].apply(lambda x : encode(x))[0:5]
###Output
_____no_output_____
###Markdown
Aggregation
###Code
# groupby
# crosstab
# pivottable
price_by_item = data.groupby(by = ['Item_Type'])
price_by_item.first()
price_by_item.Item_MRP.mean()
#crosstab
pd.crosstab(data['Outlet_Size'], data['Outlet_Location_Type'], margins = True)
#pivot table
pd.pivot_table(data, index=['Outlet_Establishment_Year'], values=['Item_Outlet_Sales'])
pd.pivot_table(data, index=['Outlet_Establishment_Year','Outlet_Location_Type','Outlet_Size'], values=['Item_Outlet_Sales'])
pd.pivot_table(data, index=['Outlet_Establishment_Year','Outlet_Location_Type','Outlet_Size'], values=['Item_Outlet_Sales'], aggfunc=[np.mean, np.median, np.min, np.max, np.std])
###Output
_____no_output_____ |
Odomero_WT_21_174/Data Analysis-Pandas-1/13.ipynb | ###Markdown
Pandaspandas is a fast, powerful, flexible and easy to use open source data analysis and manipulation tool, built on top of the Python programming language. You can use it to organise data into tables and do calculations on such tables; for that, you will use the pandas module. A module is a package of various pieces of code that can be used individually. The pandas module provides very extensive and advanced data analysis capabilities to complement Python. This course only scratches the surface of pandas. I have to tell the computer that I'm going to use a module.
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Importance of Pandas- Pandas provide essential data structures like series, dataframes, and panels which help in manipulating data sets and time series.- It is free to use and an open source library, making it one of the most widely used data science libraries in the world.- Pandas possess the power to perform various tasks. Whether it is computing tasks like finding the mean, median and mode of data, or a task of handling large CSV files and manipulating the contents according to our will, Pandas can do it all. In short, to master data science, you must be skillful in Pandas. `From now on, we will be using real life data to learn and practice as we progress through the course. the file/data can be found in the folder that contains this notebook.` `Data analysis often starts with a question or with some data.` - What is the total, smallest, largest, and average number of deaths due to TB?- What is the death rate (number of deaths divided by population) of each country?- Which countries have the smallest and largest number of deaths?- Which countries have the smallest and largest death rate? **Loading the data**Many applications can read files in Excel format, and pandas can too. Asking thecomputer to read the data looks like this:
###Code
data = pd.read_excel('WHO POP TB some.xls')
data
###Output
_____no_output_____
###Markdown
The variable name data is not descriptive, but as there is only one dataset in our analysis,there is no possible confusion with other data, and short names help to keep the lines ofcode short.The function read_excel() takes a file name as an argument and returns the table contained in the file. In pandas, tables are called dataframes . To load the data, I simply call the function and store the returned dataframe in a variable. A file name must be given as a string , a piece of text surrounded by quotes. The quote marks tell Python that this isnโt a variable, function or module name. Also, the quote marks state that this is a single name, even if it contains spaces, punctuation and other characters besides letters.Misspelling the file name, or not having the file in the same folder as the notebook containing the code, results in a file not found error. **Selecting a column**Now you have the data, let the analysis begin! Letโs tackle the first part of the first question: โWhat are the total, smallest, largest andaverage number of deaths due to TB?โ Obtaining the total number will be done in twosteps: first select the column with the TB deaths, then sum the values in that column.Selecting a single column of a dataframe is done with an expression in the format:`dataFrame['column name']`
###Code
data['TB deaths']
###Output
_____no_output_____
###Markdown
**Task-1**Select the population column and store it in a variable, so that you can use it in later exercises.
###Code
# Write your code here
population = data["Population (1000s)"]
###Output
_____no_output_____
###Markdown
**Calculations on a column**
###Code
tbColumn = data['TB deaths']
tbColumn.sum()
tbColumn.min()
tbColumn.max()
###Output
_____no_output_____
###Markdown
Like sum() , the column methods min() and max() donโt need arguments, whereas the Python functions min() and max() did need them, because there was no context (column) providing the values.The average number is computed as before, dividing the total by the number of countries.
###Code
tbColumn.sum() / 12
###Output
_____no_output_____
###Markdown
This kind of average is called the mean and thereโs a method for that.
###Code
tbColumn.mean()
tbColumn.median()
###Output
_____no_output_____
###Markdown
**Task-2**Practise the use of column methods by applying them to the population column youobtained in Task-1
###Code
#SUM
population.sum()
#MAXIMUM
population.max()
#MINIMUM
population.min()
#AVERAGE
population.mean()
#AVERAGE
population.sum() / len(population)
population.median()
###Output
_____no_output_____
###Markdown
**Sorting on a column**One of the research questions was: which countries have the smallest and largest number of deaths?Being a small table, it is not too difficult to scan the TB deaths column and find those countries. However, such a process is prone to errors and impractical for large tables. Itโs much better to sort the table by that column, and then look up the countries in the first and last rows.As youโve guessed by now, sorting a table is another single line of code.
###Code
data.sort_values('TB deaths')
###Output
_____no_output_____
###Markdown
The dataframe method sort_values() takes as argument a column name and returns a new dataframe where the rows are in ascending order of the values in that column. Note that sorting doesnโt modify the original dataframe.
###Code
data # rows still in original order
###Output
_____no_output_____
###Markdown
Itโs also possible to sort on a column that has text instead of numbers; the rows will besorted in alphabetical order.
###Code
data.sort_values('Country')
###Output
_____no_output_____
###Markdown
**Task-3**Sort the same table by population, to quickly see which are the least and the most populous countries.
###Code
# Write your code here
data.sort_values('Population (1000s)')
###Output
_____no_output_____
###Markdown
**Calculations over columns**The last remaining task is to calculate the death rate of each country. You may recall that with the simple approach Iโd have to write:`rateAngola = deathsInAngola * 100 / populationOfAngolarateBrazil = deathsInBrazil * 100 / populationOfBrazil`and so on, and so on. If youโve used spreadsheets, itโs the same process: create the formula for the first row and then copy it down for all the rows. This is laborious and errorprone, e.g. if rows are added later on. Given that data is organised by columns, wouldnโt it be nice to simply write the following?`rateColumn = deathsColumn * 100 / populationColumn`With pandas we can do this:
###Code
deathsColumn = data['TB deaths']
populationColumn = data['Population (1000s)']
rateColumn = deathsColumn * 100 / populationColumn
rateColumn
###Output
_____no_output_____
###Markdown
With pandas, the arithmetic operators become much smarter. When adding, subtracting, multiplying or dividing columns, the computer understands that the operation is to be done row by row and creates a new column. All well and nice, but how to put that new column into the dataframe, in order to have everything in a single table? In an assignment `variable = expression` , if the variable hasnโt been mentioned before, the computer creates the variable and stores in it the expressionโs value. Likewise, if I assign to a column that doesnโt exist in the dataframe, the computer will create it.
###Code
data['TB deaths (per 100,000)'] = rateColumn
data
###Output
_____no_output_____ |
SST_training-ONLY GPT.ipynb | ###Markdown
Training of SST Model
###Code
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchtext as tt
from tqdm import tqdm
from pytorch_extras import RAdam, SingleCycleScheduler
from transformers import AlbertModel, AlbertTokenizer
from pytorch_transformers import GPT2Model, GPT2Tokenizer
from deps.torch_train_test_loop.torch_train_test_loop import LoopComponent, TrainTestLoop
from models import SSTClassifier, GPT2Classifier
DEVICE = 'cuda:0'
###Output
_____no_output_____
###Markdown
Load pretrained transformer and tokenizer
###Code
tokenizer = GPT2Tokenizer.from_pretrained(
'gpt2-large', do_lower_case=False)
# lang_model = GPT2Model.from_pretrained(
# 'gpt2-large', output_hidden_states=True, output_attentions=False)
# lang_model.cuda(device=DEVICE)
# lang_model.eval()
print('Pretrained GPT-2 loaded.')
# tokenizer = AlbertTokenizer.from_pretrained(
# 'albert-large-v2', do_lower_case=False)
# lang_model = AlbertModel.from_pretrained(
# 'albert-large-v2', output_hidden_states=True, output_attentions=False)
# lang_model.cuda(device=DEVICE)
# lang_model.eval()
# print('Pretrained albert loaded.')
def tokenized_texts_to_embs(tokenized_texts, pad_token=tokenizer.eos_token):
tokenized_texts = [[*tok_seq, pad_token] for tok_seq in tokenized_texts]
lengths = [len(tok_seq) for tok_seq in tokenized_texts]
max_length = max(lengths)
input_toks = [t + [pad_token] * (max_length - l) for t, l in zip(tokenized_texts, lengths)]
input_ids = [tokenizer.convert_tokens_to_ids(tok_seq) for tok_seq in input_toks]
input_ids = torch.tensor(input_ids).to(device=DEVICE)
mask = [[1.0] * length + [0.0] * (max_length - length) for length in lengths]
mask = torch.tensor(mask).to(device=DEVICE) # [batch sz, num toks]
# with torch.no_grad():
# outputs = lang_model(input_ids=input_ids)
# embs = torch.stack(outputs[-1], -2) # [batch sz, n toks, n layers, d emb]
return mask, input_ids
###Output
_____no_output_____
###Markdown
Prepare datasets
###Code
fine_grained = True # set to False for binary classification
class SSTFilter():
def __init__(self, remove_neutral=False, remove_dupes=False):
self.remove_neutral, self.remove_dupes = (remove_neutral, remove_dupes)
self.prev_seen = {}
def __call__(self, sample):
if self.remove_neutral and (sample.label == 'neutral'):
return False
hashable = ''.join(sample.text)
if self.remove_dupes and (hashable in self.prev_seen):
return False
self.prev_seen[hashable] = True
return True
# tt.datasets.SST.download(root='.data') # download if necessary
_stoi = { s: i for i, s in enumerate(
['very negative', 'negative', 'neutral', 'positive', 'very positive'] \
if fine_grained else ['negative', 'positive']
) }
TEXT = tt.data.RawField(
preprocessing=tokenizer.tokenize,
postprocessing=tokenized_texts_to_embs,
is_target=False)
LABEL = tt.data.RawField(
postprocessing=lambda samples: torch.tensor([_stoi[s] for s in samples], device=DEVICE),
is_target=True)
trn_ds = tt.datasets.SST(
'.data/sst/trees/train.txt', TEXT, LABEL, fine_grained=fine_grained, subtrees=True,
filter_pred=SSTFilter(remove_neutral=(not fine_grained), remove_dupes=True))
val_ds = tt.datasets.SST(
'.data/sst/trees/dev.txt', TEXT, LABEL, fine_grained=fine_grained, subtrees=False,
filter_pred=SSTFilter(remove_neutral=(not fine_grained), remove_dupes=False))
tst_ds = tt.datasets.SST(
'.data/sst/trees/test.txt', TEXT, LABEL, fine_grained=fine_grained, subtrees=False,
filter_pred=SSTFilter(remove_neutral=(not fine_grained), remove_dupes=False))
print('Datasets ready.')
print('Number of samples: {:,} train phrases, {:,} valid sentences, {:,} test sentences.'\
.format(len(trn_ds), len(val_ds), len(tst_ds)))
# _stoi = { s: i for i, s in enumerate(
# ['very negative', 'negative', 'neutral', 'positive', 'very positive'] \
# if fine_grained else ['neg', 'pos']
# ) }
# TEXT = tt.data.RawField(
# preprocessing=tokenizer.tokenize,
# postprocessing=tokenized_texts_to_embs,
# is_target=False)
# LABEL = tt.data.RawField(
# postprocessing=lambda samples: torch.tensor([_stoi[s] for s in samples], device=DEVICE),
# is_target=True)
# trn_ds, tst_ds = tt.datasets.IMDB.splits(TEXT, LABEL)
# # trn_ds = tt.datasets.IMDB(
# # '.data/sst/trees/train.txt', TEXT, LABEL, fine_grained=fine_grained, subtrees=True,
# # filter_pred=SSTFilter(remove_neutral=(not fine_grained), remove_dupes=True))
# # tst_ds = tt.datasets.IMDB(
# # '.data/sst/trees/test.txt', TEXT, LABEL, fine_grained=fine_grained, subtrees=False,
# # filter_pred=SSTFilter(remove_neutral=(not fine_grained), remove_dupes=False))
# trn_ds, val_ds = trn_ds.split()
# print('Datasets ready.')
# print('Number of samples: {:,} train phrases, {:,} valid sentences, {:,} test sentences.'\
# .format(len(trn_ds), len(val_ds), len(tst_ds)))
print(val_ds[0])
###Output
<torchtext.data.example.Example object at 0x7f713103a518>
###Markdown
Training Loop
###Code
class LoopMain(LoopComponent):
def __init__(self, n_classes, device, pct_warmup=0.1, mixup=(0.2, 0.2)):
self.n_classes, self.device, self.pct_warmup = (n_classes, device, pct_warmup)
self.mixup_dist = torch.distributions.Beta(torch.tensor(mixup[0]), torch.tensor(mixup[1]))
self.onehot = torch.eye(self.n_classes, device=self.device)
self.saved_data = []
def on_train_begin(self, loop):
n_iters = len(loop.train_data) * loop.n_epochs
loop.optimizer = RAdam(loop.model.parameters(), lr=5e-4)
loop.scheduler = SingleCycleScheduler(
loop.optimizer, loop.n_optim_steps, frac=self.pct_warmup, min_lr=1e-5)
def on_grads_reset(self, loop):
loop.model.zero_grad()
def on_forward_pass(self, loop):
model, batch = (loop.model, loop.batch)
mask, embs = batch.text
target_probs = self.onehot[batch.label]
# if loop.is_training:
# r = self.mixup_dist.sample([len(mask)]).to(device=mask.device)
# idx = torch.randperm(len(mask))
# mask = mask.lerp(mask[idx], r[:, None])
# embs = embs.lerp(embs[idx], r[:, None, None, None])
# target_probs = target_probs.lerp(target_probs[idx], r[:, None])
pred_scores = model(mask, embs)
_, pred_ids = pred_scores.max(-1)
accuracy = (pred_ids == batch.label).float().mean()
loop.pred_scores, loop.target_probs, loop.accuracy = (pred_scores, target_probs, accuracy)
def on_loss_compute(self, loop):
losses = -loop.target_probs * F.log_softmax(loop.pred_scores, dim=-1) # CE
loop.loss = losses.sum(dim=-1).mean() # sum of classes, mean of batch
def on_backward_pass(self, loop):
loop.loss.backward()
def on_optim_step(self, loop):
loop.optimizer.step()
loop.scheduler.step()
def on_batch_end(self, loop):
self.saved_data.append({
'n_samples': len(loop.batch),
'epoch_desc': loop.epoch_desc,
'epoch_num': loop.epoch_num,
'epoch_frac': loop.epoch_num + loop.batch_num / loop.n_batches,
'batch_num' : loop.batch_num,
'accuracy': loop.accuracy.item(),
'loss': loop.loss.item(),
'lr': loop.optimizer.param_groups[0]['lr'],
'momentum': loop.optimizer.param_groups[0]['betas'][0],
})
class LoopProgressBar(LoopComponent):
def __init__(self, item_names=['loss', 'accuracy']):
self.item_names = item_names
def on_epoch_begin(self, loop):
self.total, self.count = ({ name: 0.0 for name in self.item_names }, 0)
self.pbar = tqdm(total=loop.n_batches, desc=f"{loop.epoch_desc} epoch {loop.epoch_num}")
def on_batch_end(self, loop):
n = len(loop.batch)
self.count += n
for name in self.item_names:
self.total[name] += getattr(loop, name).item() * n
self.pbar.update(1)
if (not loop.is_training):
means = { f'mean_{name}': self.total[name] / self.count for name in self.item_names }
self.pbar.set_postfix(means)
def on_epoch_end(self, loop):
self.pbar.close()
###Output
_____no_output_____
###Markdown
Initialize and train model
###Code
# Seed RNG for replicability. Run at least a few times without seeding to measure performance.
# torch.manual_seed(<type an int here>)
# Make iterators for each split.
trn_itr, val_itr, tst_itr = tt.data.Iterator.splits(
(trn_ds, val_ds, tst_ds),
shuffle=True,
batch_size=64,
device=DEVICE)
# Initialize model.
n_classes = len(_stoi)
model = GPT2Classifier(n_classes)
model = model.cuda(device=DEVICE)
print('Total number of parameters: {:,}'.format(sum(np.prod(p.shape) for p in model.parameters())))
# Train model
loop = TrainTestLoop(model, [LoopMain(n_classes, DEVICE), LoopProgressBar()], trn_itr, val_itr)
loop.train(n_epochs=5)
###Output
train epoch 0: 100%|โโโโโโโโโโ| 2489/2489 [24:02<00:00, 1.82it/s]
valid epoch 0: 100%|โโโโโโโโโโ| 18/18 [00:06<00:00, 2.25it/s, mean_loss=1.18, mean_accuracy=0.46]
train epoch 1: 100%|โโโโโโโโโโ| 2489/2489 [24:07<00:00, 1.94it/s]
valid epoch 1: 100%|โโโโโโโโโโ| 18/18 [00:06<00:00, 2.24it/s, mean_loss=1.1, mean_accuracy=0.524]
train epoch 2: 100%|โโโโโโโโโโ| 2489/2489 [24:12<00:00, 1.95it/s]
valid epoch 2: 100%|โโโโโโโโโโ| 18/18 [00:06<00:00, 2.24it/s, mean_loss=1.09, mean_accuracy=0.537]
train epoch 3: 100%|โโโโโโโโโโ| 2489/2489 [24:11<00:00, 1.80it/s]
valid epoch 3: 100%|โโโโโโโโโโ| 18/18 [00:06<00:00, 2.24it/s, mean_loss=1.09, mean_accuracy=0.555]
train epoch 4: 100%|โโโโโโโโโโ| 2489/2489 [24:12<00:00, 1.56it/s]
valid epoch 4: 100%|โโโโโโโโโโ| 18/18 [00:06<00:00, 2.25it/s, mean_loss=1.13, mean_accuracy=0.549]
###Markdown
Test
###Code
loop.test(tst_itr)
###Output
test epoch 5: 100%|โโโโโโโโโโ| 35/35 [00:12<00:00, 1.73it/s, mean_loss=1.07, mean_accuracy=0.538]
|