path | concatenated_notebook
---|---|
cirq_version/00_Course_Introduction.ipynb | ###Markdown
Course overviewThe lecture notes and coding examples are presented in Jupyter notebooks. This ensures that the physics of a topic is immediately made operational in code and that you can try making changes to verify that you understand the underlying concepts. These notebooks are meant to be complementary to the video lectures, video quizzes, and assignments. The goal of this course is to show what benefits quantum technologies can provide to machine learning. In particular, we split this goal into the following objectives:1. **Quantum systems**. Quantum states, evolution, measurements, closed and open quantum systems. Basics of quantum many-body physics.2. **Quantum computing**. Quantum computing paradigms and implementations. Know the limitations of current and near-future quantum technologies and the kinds of tasks where they outperform or are expected to outperform classical computers. Variational circuits. Uses of quantum annealing.3. **Classical-quantum hybrid learning algorithms**. Encoding classical information in quantum systems. Discrete optimization in machine learning. Variational models in unsupervised learning. Kernel methods. Sampling and probabilistic models. 4. **Coherent learning protocols**. Quantum Fourier transformation, quantum phase estimation, and quantum matrix inversion. Basic linear algebra subroutines by quantum algorithms. Gaussian processes on a quantum computer. Quantum computing has two main paradigms, the gate model and quantum annealing. As you will see, the two are quite different, but there are overlaps in what you can use them for. Both paradigms have a lot to offer to machine learning, and therefore we will study both. Each module in the course has several notebooks. To execute them, you have to install a handful of packages -- the details are in the subsequent sections of this notebook. The notebooks often have references to previous notebooks and therefore it is recommended to study them in order. The way quantum computing packages are structured, it is inevitable that some gate operations are used prior to their formal introduction. We kept these forward references to a minimum, but if you feel lost in the beginning, just refer to the notebook on circuits. EnvironmentWe recommend using the [Anaconda distribution](https://www.anaconda.com/download/), as it will simplify installing packages. The rest of this notebook assumes that you use Anaconda. We recommend creating a virtual environment for the course to avoid any interference with your usual Python environment. The course uses Python 3 and the code will not work under Python 2. The recommended version is >=3.5. Execute this command from the command line to create a new environment for the course: `conda create -n qmlmooc python=3.7`. Once it installs some basic packages, activate the environment with `conda activate qmlmooc`. PackagesAlmost all packages can be installed with conda: `conda install jupyter matplotlib networkx numpy scikit-learn scipy`. The only packages not available are the ones produced by quantum hardware vendors. We will use many of their packages. You can install these with pip: `pip install cirq dwave-networkx dimod minorminer`. As a quick sanity check, if you can execute the following cell without error messages, you should not face problems with the rest of the notebooks:
###Code
import matplotlib
import networkx
import numpy
import sklearn
import scipy
import cirq
import dwave_networkx
import dimod
import minorminer
###Output
_____no_output_____ |
0002/Problem_27.ipynb | ###Markdown
Quadratic primesEuler discovered the remarkable quadratic formula: $$ n^2+n+41 $$ It turns out that the formula will produce 40 primes for the consecutive integer values $$ 0≤n≤39 $$. However, when $$ n=40, 40^2+40+41=40(40+1)+41 $$ is divisible by 41, and certainly when $$ n=41, 41^2+41+41 $$ is clearly divisible by 41. The incredible formula $$ n^2−79n+1601 $$ was discovered, which produces 80 primes for the consecutive values $$ 0≤n≤79 $$. The product of the coefficients, −79 and 1601, is −126479. Considering quadratics of the form $$ n^2+an+b $$, where |a|<1000 and |b|≤1000, and where |n| is the modulus/absolute value of n, e.g. |11|=11 and |−4|=4, find the product of the coefficients, a and b, for the quadratic expression that produces the maximum number of primes for consecutive values of n, starting with n=0.
###Code
%%time
import numpy as np
import math
best_a = 0
best_b = 0
largest_n = 0
def get_primefactors(number):
# Find primes with sieve of eratosthenes
    primes = np.ones(int(number) + 1)
primes[0] = 0
primes[1] = 0
    for i in range(2, int(math.sqrt(number)) + 1):
newprime = i
notprime = i*i
while notprime <= number:
primes[notprime] = 0
notprime = notprime + newprime
return primes
primes = get_primefactors(1e5)
for a in range(-999,1000):
for b in range(-999,1000):
still_prime = True
n = 0
while still_prime == True:
            num = n*n + a*n + b
            # check if not prime (values below 2, possible when b <= 0, are never prime)
            if num < 2 or primes[num] == 0:
still_prime = False
if n > largest_n:
best_a = a
best_b = b
largest_n = n
else:
n = n+1
print(best_a)
print(best_b)
print(largest_n)
print(best_a*best_b)
###Output
-61
971
71
-59231
CPU times: user 3.46 s, sys: 29 µs, total: 3.46 s
Wall time: 3.46 s
|
Regression/ElasticNet_RobustScaler.ipynb | ###Markdown
ElasticNet with RobustScaler **This code template is for regression analysis using ElasticNet Regression and the feature rescaling technique RobustScaler in a pipeline** Required Packages
###Code
import warnings as wr
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import ElasticNet
from sklearn.metrics import mean_squared_error, r2_score,mean_absolute_error
wr.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
InitializationFilepath of CSV file
###Code
#filepath
file_path= ""
###Output
_____no_output_____
###Markdown
List of features which are required for model training .
###Code
#x_values
features=[]
###Output
_____no_output_____
###Markdown
Target feature for prediction.
###Code
#y_value
target=''
###Output
_____no_output_____
###Markdown
Data FetchingPandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use the pandas library to read the CSV file from its storage path, and we use the head function to display the initial rows.
###Code
df=pd.read_csv(file_path) #reading file
df.head()#displaying initial entries
print('Number of rows are :',df.shape[0], ',and number of columns are :',df.shape[1])
df.columns.tolist()
###Output
_____no_output_____
###Markdown
Data PreprocessingSince the majority of the machine learning models in the sklearn library don't handle string categorical data and null values, we have to explicitly remove or replace them. The snippet below defines functions that fill null values if any exist (using the mean for numeric columns and the mode otherwise) and convert the string categorical columns into numeric ones via one-hot encoding.
###Code
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
###Output
_____no_output_____
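###Markdown
A quick usage sketch of the two helpers above (hypothetical toy data; the `toy` DataFrame and its columns are made up for illustration): numeric gaps are filled with the column mean, categorical gaps with the mode, and the remaining string column is one-hot encoded by `EncodeX`.
###Code
# Toy illustration of NullClearner / EncodeX defined above (hypothetical data)
import numpy as np
import pandas as pd

toy = pd.DataFrame({
    "age":  [25.0, np.nan, 40.0],   # numeric column -> NaN replaced by the mean (32.5)
    "city": ["A", "B", None],       # categorical column -> NaN replaced by the mode ("A")
})

for column in toy.columns:
    toy[column] = NullClearner(toy[column])

toy = EncodeX(toy)                  # one-hot encodes the 'city' column
print(toy)
###Output
_____no_output_____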
###Markdown
Correlation MapIn order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
###Code
plt.figure(figsize = (15, 10))
corr = df.corr()
mask = np.triu(np.ones_like(corr, dtype = bool))
sns.heatmap(corr, mask = mask, linewidths = 1, annot = True, fmt = ".2f")
plt.show()
correlation = df[df.columns[1:]].corr()[target][:]
correlation
###Output
_____no_output_____
###Markdown
Feature SelectionsIt is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model.We will assign all the required input features to X and target/outcome to Y.
###Code
#spliting data into X(features) and Y(Target)
X=df[features]
Y=df[target]
###Output
_____no_output_____
###Markdown
Calling preprocessing functions on the feature and target set.
###Code
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
###Output
_____no_output_____
###Markdown
Data SplittingThe train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
###Code
#we can choose random_state and test_size as per our requirement
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 1) #performing datasplitting
###Output
_____no_output_____
###Markdown
Model Data Scaling**Used RobustScaler*** It scales features using statistics that are robust to outliers. * This method removes the median and scales the data in the range between the 1st quartile and the 3rd quartile, i.e., between the 25th and 75th percentiles. This range is also called the interquartile range. ElasticNetElastic Net first emerged as a result of criticism of Lasso, whose variable selection can be too dependent on data and thus unstable. The solution is to combine the penalties of Ridge regression and Lasso to get the best of both worlds.**Features of ElasticNet Regression-*** It combines the L1 and L2 approaches.* It performs a more efficient regularization process.* It has two parameters to be set, λ and α. Model Tuning Parametersalpha=1.0, copy_X=True, fit_intercept=True, l1_ratio=0.5, max_iter=1000, normalize=False, positive=False, precompute=False, random_state=50, selection='cyclic', tol=0.0001, warm_start=False 1. alpha : float, default=1.0 > Constant that multiplies the penalty terms. Defaults to 1.0. See the notes for the exact mathematical meaning of this parameter. alpha = 0 is equivalent to an ordinary least square, solved by the LinearRegression object. For numerical reasons, using alpha = 0 with the Lasso object is not advised. Given this, you should use the LinearRegression object. 2. l1_ratio : float, default=0.5> The ElasticNet mixing parameter, with 0 <= l1_ratio <= 1. For l1_ratio = 0 the penalty is an L2 penalty. For l1_ratio = 1 it is an L1 penalty. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2. 3. normalize : bool, default=False>This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False. 4. precompute : bool or array-like of shape (n_features, n_features), default=False>Whether to use a precomputed Gram matrix to speed up calculations. The Gram matrix can also be passed as argument. For sparse input this option is always False to preserve sparsity. 5. max_iter : int, default=1000>The maximum number of iterations. 6. copy_X : bool, default=True>If True, X will be copied; else, it may be overwritten. 7. tol : float, default=1e-4>The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol. 8. warm_start : bool, default=False>When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary. 9. positive : bool, default=False>When set to True, forces the coefficients to be positive. 10. random_state : int, RandomState instance, default=None>The seed of the pseudo random number generator that selects a random feature to update. Used when selection == ‘random’. Pass an int for reproducible output across multiple function calls. See Glossary. 11. selection : {‘cyclic’, ‘random’}, default=’cyclic’>If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to ‘random’) often leads to significantly faster convergence especially when tol is higher than 1e-4.
###Code
#training the ElasticNet
Input=[("scaler",RobustScaler()),("model",ElasticNet(random_state = 5))]
model = Pipeline(Input)
model.fit(X_train,y_train)
###Output
_____no_output_____
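###Markdown
To see what the scaler inside the pipeline does, here is a small standalone sketch (hypothetical numbers): RobustScaler subtracts the median and divides by the interquartile range, so a single extreme outlier barely affects the scaling of the remaining values.
###Code
# Standalone illustration of RobustScaler (hypothetical numbers)
import numpy as np
from sklearn.preprocessing import RobustScaler

values = np.array([[1.0], [2.0], [3.0], [4.0], [100.0]])  # 100 is an outlier

scaler = RobustScaler()
scaled = scaler.fit_transform(values)

print("median:", scaler.center_)   # median of the column (3.0)
print("IQR:", scaler.scale_)       # 75th percentile - 25th percentile (2.0)
print(scaled.ravel())              # (x - median) / IQR
###Output
_____no_output_____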
###Markdown
Model AccuracyFor a regression pipeline, the score() method returns the coefficient of determination (R²) of the prediction on the given test data and labels; the cell below reports it as a percentage.
###Code
print("Accuracy score {:.2f} %\n".format(model.score(X_test,y_test)*100))
#prediction on testing set
prediction=model.predict(X_test)
###Output
_____no_output_____
###Markdown
Model evaluation**r2_score:** The r2_score function computes the proportion of the variance in the target that is explained by our model.**MAE:** The mean absolute error function calculates the average absolute distance between the real data and the predicted data.**MSE:** The mean squared error function averages the squared errors (penalizing the model more heavily for large errors).
###Code
print('Mean Absolute Error:', mean_absolute_error(y_test, prediction))
print('Mean Squared Error:', mean_squared_error(y_test, prediction))
print('Root Mean Squared Error:', np.sqrt(mean_squared_error(y_test, prediction)))
print("R-squared score : ",r2_score(y_test,prediction))
#ploting actual and predicted
red = plt.scatter(np.arange(0,80,5),prediction[0:80:5],color = "red")
green = plt.scatter(np.arange(0,80,5),y_test[0:80:5],color = "green")
plt.title("Comparison of Regression Algorithms")
plt.xlabel("Index of Candidate")
plt.ylabel("target")
plt.legend((red,green),('ElasticNet', 'REAL'))
plt.show()
###Output
_____no_output_____
###Markdown
Prediction PlotFirst, we plot the actual test observations (y_test) against the record number. Then, on the same axes, we plot the model's predictions for the same records so the two curves can be compared.
###Code
plt.figure(figsize=(10,6))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(X_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
###Output
_____no_output_____ |
gram-schmidt (1).ipynb | ###Markdown
1.gram-schmidt Algorithm
###Code
import numpy as np
from numpy import linalg as LA # Euclidean norm
import random
def gram(a,b):
return ((np.dot(a,b)/np.dot(a,a))*a)
###Output
_____no_output_____
###Markdown
For the vectors $$[3,2,1]^{T},[2,1,3]^{T},[1,3,2]^{T}$$,
###Code
# orthogonal vectors
a=np.array([3,2,1])
b=np.array([2,1,3])
c=np.array([1,3,2])
###Output
_____no_output_____
###Markdown
$$u_{1}=v_{1},\\u_{2}=v_{2}-proj_{u_{1}}(v_{2}),\\u_{3}=v_{3}-proj_{u_{1}}(v_{3})-proj_{u_{2}}(v_{3}),\\\dots\\u_{k}=v_{k}-\sum_{j=1}^{k-1}proj_{u_{j}}(v_{k})$$
###Code
Va=a
Vb=b-gram(Va,b)
Vc=c-gram(Va,c)-gram(Vb,c)
print(Va,Vb,Vc)
print(np.dot(Va,Vb))
# orthogonal vectors
###Output
[3 2 1] [-0.35714286 -0.57142857 2.21428571] [-1.2 1.68 0.24]
0.0
###Markdown
$$e_{1}=\frac{u_{1}}{||u_{1}||},\\e_{2}=\frac{u_{2}}{||u_{2}||},\\......\\e_{k}=\frac{u_{k}}{||u_{k}||},$$
###Code
# absolute value (normalisation)
Ea=Va/LA.norm(Va,2)
Eb=Vb/LA.norm(Vb,2)
Ec=Vc/LA.norm(Vc,2)
print(Ea,Eb,Ec)
# absolute value (normalisation)
###Output
[0.80178373 0.53452248 0.26726124] [-0.15430335 -0.24688536 0.95668077] [-0.57735027 0.80829038 0.11547005]
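###Markdown
As a quick sanity check (assuming the orthonormal vectors `Ea`, `Eb`, `Ec` computed in the cells above), stacking them into a matrix `E` and multiplying by its transpose should give the identity matrix up to floating-point error:
###Code
# Verify orthonormality of the Gram-Schmidt result: E @ E.T should be the identity
E = np.vstack([Ea, Eb, Ec])
print(np.round(E @ E.T, 10))
print(np.allclose(E @ E.T, np.eye(3)))  # True if the vectors are orthonormal
###Output
_____no_output_____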
###Markdown
2.gram-schmidt Algorithm of random array An algorithm that applies Gram-Schmidt orthogonalisation to randomly generated vectors
###Code
import numpy as np
from numpy import linalg as LA
dimension=2 # specify the dimension
def randomnumber(dimension):
return np.random.random((dimension,dimension))
def gram(a,b):
return ((np.dot(a,b)/np.dot(a,a))*a)
v=randomnumber(dimension)
u=np.zeros((dimension,dimension),dtype='float64')
u[0]=v[0]
for a in range(1,dimension):
    # reset the projection sum for each new vector (works for any dimension)
    proj_sum = np.zeros(dimension, dtype='float64')
    for b in range(0,a):
        proj_sum += gram(u[b],v[a])
    u[a] = v[a] - proj_sum
print(v)
print(u)
e=np.zeros((dimension,dimension),dtype='float64')
for c in range(0,dimension):
e[c]=u[c]/LA.norm(u[c],2)
print(e)
print(LA.norm(e[0],2)) # norm
print(LA.norm(e[1],2))
###Output
1.0
|
lstm-neural-network-from-scratch.ipynb | ###Markdown
Implementing LSTM Neural Network from Scratch**Dataset = US Babies' First Names Link = https://www.kaggle.com/chr7stos/us-names-babies-and-presindent-names/data** Import Required Libraries
###Code
import numpy as np #for maths
import pandas as pd #for data manipulation
import matplotlib.pyplot as plt #for visualization
###Output
_____no_output_____
###Markdown
Load the data
###Code
!dir
#data
path = r'NationalNames.csv'
data = pd.read_csv(path)
#get names from the dataset
data['Name'] = data['Name']
#get first 10000 names
data = np.array(data['Name'][:10000]).reshape(-1,1)
#convert the names to lower case
data = [x.lower() for x in data[:,0]]
data = np.array(data).reshape(-1,1)
print("Data Shape = {}".format(data.shape))
print()
print("Lets see some names : ")
print(data[1:10])
###Output
Data Shape = (10000, 1)
Lets see some names :
[['anna']
['emma']
['elizabeth']
['minnie']
['margaret']
['ida']
['alice']
['bertha']
['sarah']]
###Markdown
Transform the names to equal length by adding -- > ('.') dots
###Code
#to store the transform data
transform_data = np.copy(data)
#find the max length name
max_length = 0
for index in range(len(data)):
max_length = max(max_length,len(data[index,0]))
#make every name of max length by adding '.'
for index in range(len(data)):
length = (max_length - len(data[index,0]))
string = '.'*length
transform_data[index,0] = ''.join([transform_data[index,0],string])
print("Transformed Data")
print(transform_data[1:10])
###Output
Transformed Data
[['anna........']
['emma........']
['elizabeth...']
['minnie......']
['margaret....']
['ida.........']
['alice.......']
['bertha......']
['sarah.......']]
###Markdown
Lets Make Vocabulary
###Code
#to store the vocabulary
vocab = list()
for name in transform_data[:,0]:
vocab.extend(list(name))
vocab = set(vocab)
vocab_size = len(vocab)
print("Vocab size = {}".format(len(vocab)))
print("Vocab = {}".format(vocab))
###Output
Vocab size = 27
Vocab = {'g', 'r', 'o', '.', 's', 'k', 'a', 'd', 'm', 'h', 't', 'v', 'q', 'z', 'e', 'f', 'u', 'n', 'l', 'i', 'p', 'x', 'c', 'w', 'y', 'b', 'j'}
###Markdown
Map characters to ids and ids to characters
###Code
#map char to id and id to chars
char_id = dict()
id_char = dict()
for i,char in enumerate(vocab):
char_id[char] = i
id_char[i] = char
print('a-{}, 22-{}'.format(char_id['a'],id_char[22]))
###Output
a-6, 22-c
###Markdown
Make the Train datasetExample - names - [['mary.'], ['anna.'] m - [0,0,0,1,0,0] a - [0,0,1,0,0,0] r - [0,1,0,0,0,0] y - [0,0,0,0,1,0] **.** - [1,0,0,0,0,0] 'mary.' = [[0,0,0,1,0,0], [0,0,1,0,0,0], [0,1,0,0,0,0], [0,0,0,0,1,0], [1,0,0,0,0,0]] 'anna.' = [[0,0,1,0,0,0], [0,0,0,0,0,1], [0,0,0,0,0,1], [0,0,1,0,0,0], [1,0,0,0,0,0]] batch_dataset = [ [[0,0,0,1,0,0],[0,0,1,0,0,0]] , [[0,0,1,0,0,0], [0,0,0,0,0,1]], [[0,1,0,0,0,0], [0,0,0,0,0,1]], [[0,0,0,0,1,0], [0,0,1,0,0,0]] , [ [1,0,0,0,0,0], [1,0,0,0,0,0]] ]
###Code
# list of batches of size = 20
train_dataset = []
batch_size = 20
#split the trasnform data into batches of 20
for i in range(len(transform_data)-batch_size+1):
start = i*batch_size
end = start+batch_size
#batch data
batch_data = transform_data[start:end]
if(len(batch_data)!=batch_size):
break
#convert each char of each name of batch data into one hot encoding
char_list = []
for k in range(len(batch_data[0][0])): #12
batch_dataset = np.zeros([batch_size,len(vocab)])
for j in range(batch_size): #20
name = batch_data[j][0] #mary........
char_index = char_id[name[k]] #[0,1,0,0,0,0]
batch_dataset[j,char_index] = 1.0
#store the ith char's one hot representation of each name in batch_data
char_list.append(batch_dataset)
#store each char's of every name in batch dataset into train_dataset
train_dataset.append(char_list)
dd = np.array(train_dataset)
dd.shape
###Output
_____no_output_____
###Markdown
500/82783 names; max 12/17 letters per name; 20 names per mini-batch; 27/1004 characters in the alphabet
###Code
transform_data[0]
batch_data = transform_data[0:20]
for i in dd[0,:,0,:]:
name = ""
ind = i.argmax()
print(ind,id_char[ind])
name += id_char[ind]
print(name)
###Output
8 m
6 a
1 r
24 y
3 .
3 .
3 .
3 .
3 .
3 .
3 .
3 .
.
###Markdown
Hyperparameters
###Code
#number of input units or embedding size
input_units = 100
#number of hidden neurons
hidden_units = 256
#number of output units i.e vocab size
output_units = vocab_size
#learning rate
learning_rate = 0.005
#beta1 for V parameters used in Adam Optimizer
beta1 = 0.90
#beta2 for S parameters used in Adam Optimizer
beta2 = 0.99
###Output
_____no_output_____
###Markdown
Activation Functions* **Sigmoid = 1/(1+exp(-X))** * **Tanh = (exp(X) - exp(-X)) / (exp(X) + exp(-X))** * **Softmax = exp(X)/(sum(exp(X),1))**
###Code
#Activation Functions
#sigmoid
def sigmoid(X):
return 1/(1+np.exp(-X))
#tanh activation
def tanh_activation(X):
return np.tanh(X)
#softmax activation
def softmax(X):
exp_X = np.exp(X)
exp_X_sum = np.sum(exp_X,axis=1).reshape(-1,1)
exp_X = exp_X/exp_X_sum
return exp_X
#derivative of tanh
def tanh_derivative(X):
return 1-(X**2)
###Output
_____no_output_____
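###Markdown
A small numeric check of the helpers just defined (hypothetical random input): every softmax row should sum to 1, and `tanh_derivative` expects the *already activated* value, so `tanh_derivative(tanh_activation(X))` equals 1 - tanh(X)^2.
###Code
# Quick check of the activation functions defined above (hypothetical input)
X_check = np.random.randn(3, 5)

probs = softmax(X_check)
print(np.sum(probs, axis=1))                                     # each row sums to 1.0

t = tanh_activation(X_check)
print(np.allclose(tanh_derivative(t), 1 - np.tanh(X_check)**2))  # True
###Output
_____no_output_____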
###Markdown
Initialize ParametersEmbeddings Size = 100 Hidden Units = 256 Total INPUT Weights = 100 + 256 = 356 * **LSTM CELL Weights ** * Forget Gate Weights = {356,256} * Input Gate Weights = {356,256} * Gate Gate Weights = {356,256} * Output Gate Weights = {356,256} * **Output CELL Weights ** * Output Weights = {256,27} Store these weights in parameters dictionary!
###Code
#initialize parameters
def initialize_parameters():
#initialize the parameters with 0 mean and 0.01 standard deviation
mean = 0
std = 0.01
#lstm cell weights
forget_gate_weights = np.random.normal(mean,std,(input_units+hidden_units,hidden_units))
input_gate_weights = np.random.normal(mean,std,(input_units+hidden_units,hidden_units))
output_gate_weights = np.random.normal(mean,std,(input_units+hidden_units,hidden_units))
gate_gate_weights = np.random.normal(mean,std,(input_units+hidden_units,hidden_units))
#hidden to output weights (output cell)
hidden_output_weights = np.random.normal(mean,std,(hidden_units,output_units))
parameters = dict()
parameters['fgw'] = forget_gate_weights
parameters['igw'] = input_gate_weights
parameters['ogw'] = output_gate_weights
parameters['ggw'] = gate_gate_weights
parameters['how'] = hidden_output_weights
return parameters
###Output
_____no_output_____
###Markdown
LSTM CELL **Equations*** fa = sigmoid(Wf x [xt,at-1]) * ia = sigmoid(Wi x [xt,at-1]) * ga = tanh(Wg x [xt,at-1]) * oa = sigmoid(Wo x [xt,at-1]) * ct = (fa x ct-1) + (ia x ga) * at = oa x tanh(ct)
###Code
#single lstm cell
def lstm_cell(batch_dataset, prev_activation_matrix, prev_cell_matrix, parameters):
#get parameters
fgw = parameters['fgw']
igw = parameters['igw']
ogw = parameters['ogw']
ggw = parameters['ggw']
#concat batch data and prev_activation matrix
concat_dataset = np.concatenate((batch_dataset,prev_activation_matrix),axis=1)
#forget gate activations
fa = np.matmul(concat_dataset,fgw)
fa = sigmoid(fa)
#input gate activations
ia = np.matmul(concat_dataset,igw)
ia = sigmoid(ia)
#output gate activations
oa = np.matmul(concat_dataset,ogw)
oa = sigmoid(oa)
#gate gate activations
ga = np.matmul(concat_dataset,ggw)
ga = tanh_activation(ga)
#new cell memory matrix
cell_memory_matrix = np.multiply(fa,prev_cell_matrix) + np.multiply(ia,ga)
#current activation matrix
activation_matrix = np.multiply(oa, tanh_activation(cell_memory_matrix))
#lets store the activations to be used in back prop
lstm_activations = dict()
lstm_activations['fa'] = fa
lstm_activations['ia'] = ia
lstm_activations['oa'] = oa
lstm_activations['ga'] = ga
return lstm_activations,cell_memory_matrix,activation_matrix
###Output
_____no_output_____
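###Markdown
A minimal usage sketch of a single LSTM cell (assuming the hyperparameters and the `initialize_parameters` and `lstm_cell` functions defined above; the random batch here is hypothetical): feed one embedded batch through the cell and check the output shapes.
###Code
# Shape check for a single LSTM cell on a random (hypothetical) batch
params_check = initialize_parameters()

batch_check = np.random.randn(20, input_units)        # 20 names, embedding size 100
a_prev = np.zeros((20, hidden_units))
c_prev = np.zeros((20, hidden_units))

gates, c_next, a_next = lstm_cell(batch_check, a_prev, c_prev, params_check)
print(a_next.shape, c_next.shape)                     # (20, 256) (20, 256)
print(sorted(gates.keys()))                           # ['fa', 'ga', 'ia', 'oa']
###Output
_____no_output_____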
###Markdown
Output CellEquations * ot = W x at* ot = softmax(ot)
###Code
def output_cell(activation_matrix,parameters):
#get hidden to output parameters
how = parameters['how']
#get outputs
output_matrix = np.matmul(activation_matrix,how)
output_matrix = softmax(output_matrix)
return output_matrix
###Output
_____no_output_____
###Markdown
Get corresponding embeddings for the batch dataset
###Code
def get_embeddings(batch_dataset,embeddings):
embedding_dataset = np.matmul(batch_dataset,embeddings)
return embedding_dataset
###Output
_____no_output_____
###Markdown
Forward PropagationFunction returns the intermediate ativations in the respective caches:* LSTM Cache :- All lstm cell activation in every cell (fa,ia,ga,oa)* Activation Cache : All activation (a0,a1,a2..)* Cell Cache : All cell activations (c0,c1,c2..* Embedding cache : Embeddings of each batch (e0,e1,e2..)* Output Cache : All output (o1,o2,o3... )
###Code
#forward propagation
def forward_propagation(batches,parameters,embeddings):
#get batch size
batch_size = batches[0].shape[0]
#to store the activations of all the unrollings.
lstm_cache = dict() #lstm cache
activation_cache = dict() #activation cache
cell_cache = dict() #cell cache
output_cache = dict() #output cache
embedding_cache = dict() #embedding cache
#initial activation_matrix(a0) and cell_matrix(c0)
a0 = np.zeros([batch_size,hidden_units],dtype=np.float32)
c0 = np.zeros([batch_size,hidden_units],dtype=np.float32)
#store the initial activations in cache
activation_cache['a0'] = a0
cell_cache['c0'] = c0
#unroll the names
for i in range(len(batches)-1):
#get first first character batch
batch_dataset = batches[i]
#get embeddings
batch_dataset = get_embeddings(batch_dataset,embeddings)
embedding_cache['emb'+str(i)] = batch_dataset
#lstm cell
lstm_activations,ct,at = lstm_cell(batch_dataset,a0,c0,parameters)
#output cell
ot = output_cell(at,parameters)
#store the time 't' activations in caches
lstm_cache['lstm' + str(i+1)] = lstm_activations
activation_cache['a'+str(i+1)] = at
cell_cache['c' + str(i+1)] = ct
output_cache['o'+str(i+1)] = ot
#update a0 and c0 to new 'at' and 'ct' for next lstm cell
a0 = at
c0 = ct
return embedding_cache,lstm_cache,activation_cache,cell_cache,output_cache
###Output
_____no_output_____
###Markdown
Calculate the Loss, Perplexity, and Accuracy**Loss*** Loss at time t = -sum(Y x log(pred) + (1-Y) x log(1-pred))/m* Overall Loss = **∑**(Loss(t)) sum of all losses at each time step 't'**Perplexity *** Probability Product = **∏**(prob(pred_char)) for each char in name* Perplexity = (1/probability_product) ^ (1/n) where n is the number of chars in the name**Accuracy*** Accuracy(t) = sum(argmax(Y) == argmax(pred)) over the batch at each time step* Accuracy = ((**∑**Acc(t))/batch_size)/n for all time steps, n is the number of chars in the name
###Code
#calculate loss, perplexity and accuracy
def cal_loss_accuracy(batch_labels,output_cache):
loss = 0 #to sum loss for each time step
acc = 0 #to sum acc for each time step
prob = 1 #probability product of each time step predicted char
#batch size
batch_size = batch_labels[0].shape[0]
#loop through each time step
for i in range(1,len(output_cache)+1):
#get true labels and predictions
labels = batch_labels[i]
pred = output_cache['o'+str(i)]
prob = np.multiply(prob,np.sum(np.multiply(labels,pred),axis=1).reshape(-1,1))
loss += np.sum((np.multiply(labels,np.log(pred)) + np.multiply(1-labels,np.log(1-pred))),axis=1).reshape(-1,1)
acc += np.array(np.argmax(labels,1)==np.argmax(pred,1),dtype=np.float32).reshape(-1,1)
#calculate perplexity loss and accuracy
perplexity = np.sum((1/prob)**(1/len(output_cache)))/batch_size
loss = np.sum(loss)*(-1/batch_size)
acc = np.sum(acc)/(batch_size)
acc = acc/len(output_cache)
return perplexity,loss,acc
###Output
_____no_output_____
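###Markdown
A toy illustration of the perplexity formula used above (hypothetical numbers): if the model assigns probability 0.5 to every correct character of one name and 0.25 to every correct character of another, over 4 time steps, the per-name perplexities are 2 and 4 and the batch perplexity is their mean, 3.
###Code
# Toy perplexity computation (hypothetical probabilities, batch of 2 names, 4 time steps)
prob_product = np.array([[0.5**4], [0.25**4]])   # product of per-step probabilities per name
time_steps = 4

perplexity = np.sum((1 / prob_product) ** (1 / time_steps)) / len(prob_product)
print(perplexity)                                # (2 + 4) / 2 = 3.0
###Output
_____no_output_____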
###Markdown
Calculate Output Cell Errors for each time step* Output Error Cache :- to store output error for each time step* Activation Error Cache : to store activation error for each time step
###Code
#calculate output cell errors
def calculate_output_cell_error(batch_labels,output_cache,parameters):
#to store the output errors for each time step
output_error_cache = dict()
activation_error_cache = dict()
how = parameters['how']
#loop through each time step
for i in range(1,len(output_cache)+1):
#get true and predicted labels
labels = batch_labels[i]
pred = output_cache['o'+str(i)]
#calculate the output_error for time step 't'
error_output = pred - labels
#calculate the activation error for time step 't'
error_activation = np.matmul(error_output,how.T)
#store the output and activation error in dict
output_error_cache['eo'+str(i)] = error_output
activation_error_cache['ea'+str(i)] = error_activation
return output_error_cache,activation_error_cache
###Output
_____no_output_____
###Markdown
Calculate Single LSTM CELL Error
###Code
#calculate error for single lstm cell
def calculate_single_lstm_cell_error(activation_output_error,next_activation_error,next_cell_error,parameters,lstm_activation,cell_activation,prev_cell_activation):
#activation error = error coming from output cell and error coming from the next lstm cell
activation_error = activation_output_error + next_activation_error
#output gate error
oa = lstm_activation['oa']
eo = np.multiply(activation_error,tanh_activation(cell_activation))
eo = np.multiply(np.multiply(eo,oa),1-oa)
#cell activation error
cell_error = np.multiply(activation_error,oa)
cell_error = np.multiply(cell_error,tanh_derivative(tanh_activation(cell_activation)))
#error also coming from next lstm cell
cell_error += next_cell_error
#input gate error
ia = lstm_activation['ia']
ga = lstm_activation['ga']
ei = np.multiply(cell_error,ga)
ei = np.multiply(np.multiply(ei,ia),1-ia)
#gate gate error
eg = np.multiply(cell_error,ia)
eg = np.multiply(eg,tanh_derivative(ga))
#forget gate error
fa = lstm_activation['fa']
ef = np.multiply(cell_error,prev_cell_activation)
ef = np.multiply(np.multiply(ef,fa),1-fa)
#prev cell error
prev_cell_error = np.multiply(cell_error,fa)
#get parameters
fgw = parameters['fgw']
igw = parameters['igw']
ggw = parameters['ggw']
ogw = parameters['ogw']
#embedding + hidden activation error
embed_activation_error = np.matmul(ef,fgw.T)
embed_activation_error += np.matmul(ei,igw.T)
embed_activation_error += np.matmul(eo,ogw.T)
embed_activation_error += np.matmul(eg,ggw.T)
input_hidden_units = fgw.shape[0]
hidden_units = fgw.shape[1]
input_units = input_hidden_units - hidden_units
#prev activation error
prev_activation_error = embed_activation_error[:,input_units:]
#input error (embedding error)
embed_error = embed_activation_error[:,:input_units]
#store lstm error
lstm_error = dict()
lstm_error['ef'] = ef
lstm_error['ei'] = ei
lstm_error['eo'] = eo
lstm_error['eg'] = eg
return prev_activation_error,prev_cell_error,embed_error,lstm_error
###Output
_____no_output_____
###Markdown
Calculate Output Cell Derivatives for each time step
###Code
#calculate output cell derivatives
def calculate_output_cell_derivatives(output_error_cache,activation_cache,parameters):
#to store the sum of derivatives from each time step
dhow = np.zeros(parameters['how'].shape)
batch_size = activation_cache['a1'].shape[0]
#loop through the time steps
for i in range(1,len(output_error_cache)+1):
#get output error
output_error = output_error_cache['eo' + str(i)]
#get input activation
activation = activation_cache['a'+str(i)]
#cal derivative and summing up!
dhow += np.matmul(activation.T,output_error)/batch_size
return dhow
###Output
_____no_output_____
###Markdown
Calculate LSTM CELL Derivatives for each time step
###Code
#calculate derivatives for single lstm cell
def calculate_single_lstm_cell_derivatives(lstm_error,embedding_matrix,activation_matrix):
#get error for single time step
ef = lstm_error['ef']
ei = lstm_error['ei']
eo = lstm_error['eo']
eg = lstm_error['eg']
#get input activations for this time step
concat_matrix = np.concatenate((embedding_matrix,activation_matrix),axis=1)
batch_size = embedding_matrix.shape[0]
#cal derivatives for this time step
dfgw = np.matmul(concat_matrix.T,ef)/batch_size
digw = np.matmul(concat_matrix.T,ei)/batch_size
dogw = np.matmul(concat_matrix.T,eo)/batch_size
dggw = np.matmul(concat_matrix.T,eg)/batch_size
#store the derivatives for this time step in dict
derivatives = dict()
derivatives['dfgw'] = dfgw
derivatives['digw'] = digw
derivatives['dogw'] = dogw
derivatives['dggw'] = dggw
return derivatives
###Output
_____no_output_____
###Markdown
Backward Propagation* Apply chain rule and calculate the errors for each time step* Store the deivatives in **derivatives** dict
###Code
#backpropagation
def backward_propagation(batch_labels,embedding_cache,lstm_cache,activation_cache,cell_cache,output_cache,parameters):
#calculate output errors
output_error_cache,activation_error_cache = calculate_output_cell_error(batch_labels,output_cache,parameters)
#to store lstm error for each time step
lstm_error_cache = dict()
#to store embeding errors for each time step
embedding_error_cache = dict()
# next activation error
# next cell error
#for last cell will be zero
eat = np.zeros(activation_error_cache['ea1'].shape)
ect = np.zeros(activation_error_cache['ea1'].shape)
#calculate all lstm cell errors (going from last time-step to the first time step)
for i in range(len(lstm_cache),0,-1):
#calculate the lstm errors for this time step 't'
pae,pce,ee,le = calculate_single_lstm_cell_error(activation_error_cache['ea'+str(i)],eat,ect,parameters,lstm_cache['lstm'+str(i)],cell_cache['c'+str(i)],cell_cache['c'+str(i-1)])
#store the lstm error in dict
lstm_error_cache['elstm'+str(i)] = le
#store the embedding error in dict
embedding_error_cache['eemb'+str(i-1)] = ee
#update the next activation error and next cell error for previous cell
eat = pae
ect = pce
#calculate output cell derivatives
derivatives = dict()
derivatives['dhow'] = calculate_output_cell_derivatives(output_error_cache,activation_cache,parameters)
#calculate lstm cell derivatives for each time step and store in lstm_derivatives dict
lstm_derivatives = dict()
for i in range(1,len(lstm_error_cache)+1):
lstm_derivatives['dlstm'+str(i)] = calculate_single_lstm_cell_derivatives(lstm_error_cache['elstm'+str(i)],embedding_cache['emb'+str(i-1)],activation_cache['a'+str(i-1)])
#initialize the derivatives to zeros
derivatives['dfgw'] = np.zeros(parameters['fgw'].shape)
derivatives['digw'] = np.zeros(parameters['igw'].shape)
derivatives['dogw'] = np.zeros(parameters['ogw'].shape)
derivatives['dggw'] = np.zeros(parameters['ggw'].shape)
#sum up the derivatives for each time step
for i in range(1,len(lstm_error_cache)+1):
derivatives['dfgw'] += lstm_derivatives['dlstm'+str(i)]['dfgw']
derivatives['digw'] += lstm_derivatives['dlstm'+str(i)]['digw']
derivatives['dogw'] += lstm_derivatives['dlstm'+str(i)]['dogw']
derivatives['dggw'] += lstm_derivatives['dlstm'+str(i)]['dggw']
return derivatives,embedding_error_cache
###Output
_____no_output_____
###Markdown
Adam OptimizerUsing Exponentially Weighted Averages * Vdw = beta1 x Vdw + (1-beta1) x dw * Sdw = beta2 x Sdw + (1-beta2) x dw^2 * W = W - learning_rate x ( Vdw / (sqrt(Sdw) + 1e-6) )
###Code
#update the parameters using adam optimizer
#adam optimization
def update_parameters(parameters,derivatives,V,S,t):
#get derivatives
dfgw = derivatives['dfgw']
digw = derivatives['digw']
dogw = derivatives['dogw']
dggw = derivatives['dggw']
dhow = derivatives['dhow']
#get parameters
fgw = parameters['fgw']
igw = parameters['igw']
ogw = parameters['ogw']
ggw = parameters['ggw']
how = parameters['how']
#get V parameters
vfgw = V['vfgw']
vigw = V['vigw']
vogw = V['vogw']
vggw = V['vggw']
vhow = V['vhow']
#get S parameters
sfgw = S['sfgw']
sigw = S['sigw']
sogw = S['sogw']
sggw = S['sggw']
show = S['show']
#calculate the V parameters from V and current derivatives
vfgw = (beta1*vfgw + (1-beta1)*dfgw)
vigw = (beta1*vigw + (1-beta1)*digw)
vogw = (beta1*vogw + (1-beta1)*dogw)
vggw = (beta1*vggw + (1-beta1)*dggw)
vhow = (beta1*vhow + (1-beta1)*dhow)
#calculate the S parameters from S and current derivatives
sfgw = (beta2*sfgw + (1-beta2)*(dfgw**2))
sigw = (beta2*sigw + (1-beta2)*(digw**2))
sogw = (beta2*sogw + (1-beta2)*(dogw**2))
sggw = (beta2*sggw + (1-beta2)*(dggw**2))
show = (beta2*show + (1-beta2)*(dhow**2))
#update the parameters
fgw = fgw - learning_rate*((vfgw)/(np.sqrt(sfgw) + 1e-6))
igw = igw - learning_rate*((vigw)/(np.sqrt(sigw) + 1e-6))
ogw = ogw - learning_rate*((vogw)/(np.sqrt(sogw) + 1e-6))
ggw = ggw - learning_rate*((vggw)/(np.sqrt(sggw) + 1e-6))
how = how - learning_rate*((vhow)/(np.sqrt(show) + 1e-6))
#store the new weights
parameters['fgw'] = fgw
parameters['igw'] = igw
parameters['ogw'] = ogw
parameters['ggw'] = ggw
parameters['how'] = how
#store the new V parameters
V['vfgw'] = vfgw
V['vigw'] = vigw
V['vogw'] = vogw
V['vggw'] = vggw
V['vhow'] = vhow
#store the s parameters
S['sfgw'] = sfgw
S['sigw'] = sigw
S['sogw'] = sogw
S['sggw'] = sggw
S['show'] = show
return parameters,V,S
###Output
_____no_output_____
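###Markdown
A stripped-down sketch of the same Adam-style update on a single scalar weight (hypothetical gradient values), mirroring the V and S equations above; like the implementation, it skips bias correction.
###Code
# Minimal Adam-style update on one scalar weight (hypothetical gradients)
w, v, s = 0.0, 0.0, 0.0
lr, b1, b2, eps = 0.005, 0.90, 0.99, 1e-6

for grad in [0.4, 0.3, 0.5]:                 # pretend gradients from three batches
    v = b1 * v + (1 - b1) * grad             # exponentially weighted average of dw
    s = b2 * s + (1 - b2) * grad ** 2        # exponentially weighted average of dw^2
    w = w - lr * v / (np.sqrt(s) + eps)      # parameter update

print(w)
###Output
_____no_output_____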
###Markdown
Update the embeddings
###Code
#update the Embeddings
def update_embeddings(embeddings,embedding_error_cache,batch_labels):
#to store the embeddings derivatives
embedding_derivatives = np.zeros(embeddings.shape)
batch_size = batch_labels[0].shape[0]
#sum the embedding derivatives for each time step
for i in range(len(embedding_error_cache)):
embedding_derivatives += np.matmul(batch_labels[i].T,embedding_error_cache['eemb'+str(i)])/batch_size
#update the embeddings
embeddings = embeddings - learning_rate*embedding_derivatives
return embeddings
###Output
_____no_output_____
###Markdown
Functions to Initialize the V and S parameters for Adam Optimizer
###Code
def initialize_V(parameters):
Vfgw = np.zeros(parameters['fgw'].shape)
Vigw = np.zeros(parameters['igw'].shape)
Vogw = np.zeros(parameters['ogw'].shape)
Vggw = np.zeros(parameters['ggw'].shape)
Vhow = np.zeros(parameters['how'].shape)
V = dict()
V['vfgw'] = Vfgw
V['vigw'] = Vigw
V['vogw'] = Vogw
V['vggw'] = Vggw
V['vhow'] = Vhow
return V
def initialize_S(parameters):
Sfgw = np.zeros(parameters['fgw'].shape)
Sigw = np.zeros(parameters['igw'].shape)
Sogw = np.zeros(parameters['ogw'].shape)
Sggw = np.zeros(parameters['ggw'].shape)
Show = np.zeros(parameters['how'].shape)
S = dict()
S['sfgw'] = Sfgw
S['sigw'] = Sigw
S['sogw'] = Sogw
S['sggw'] = Sggw
S['show'] = Show
return S
###Output
_____no_output_____
###Markdown
Train Function1. Initialize Parameters2. Forward Propagation3. Calculate Loss, Perplexity and Accuracy4. Backward Propagation5. Update the Parameters and EmbeddingsBatch Size = 20Repeat the steps 2-5 for each batch!
###Code
#train function
def train(train_dataset,iters=1000,batch_size=20):
#initalize the parameters
parameters = initialize_parameters()
#initialize the V and S parameters for Adam
V = initialize_V(parameters)
S = initialize_S(parameters)
#generate the random embeddings
embeddings = np.random.normal(0,0.01,(len(vocab),input_units))
#to store the Loss, Perplexity and Accuracy for each batch
J = []
P = []
A = []
for step in range(iters):
#get batch dataset
index = step%len(train_dataset)
batches = train_dataset[index]
#forward propagation
embedding_cache,lstm_cache,activation_cache,cell_cache,output_cache = forward_propagation(batches,parameters,embeddings)
#calculate the loss, perplexity and accuracy
perplexity,loss,acc = cal_loss_accuracy(batches,output_cache)
#backward propagation
derivatives,embedding_error_cache = backward_propagation(batches,embedding_cache,lstm_cache,activation_cache,cell_cache,output_cache,parameters)
#update the parameters
parameters,V,S = update_parameters(parameters,derivatives,V,S,step)
#update the embeddings
embeddings = update_embeddings(embeddings,embedding_error_cache,batches)
J.append(loss)
P.append(perplexity)
A.append(acc)
#print loss, accuracy and perplexity
if(step%1000==0):
print("For Single Batch :")
print('Step = {}'.format(step))
print('Loss = {}'.format(round(loss,2)))
print('Perplexity = {}'.format(round(perplexity,2)))
print('Accuracy = {}'.format(round(acc*100,2)))
print()
return embeddings, parameters,J,P,A
###Output
_____no_output_____
###Markdown
Let's Train* Will take around 5-10 mins on CPU
###Code
embeddings,parameters,J,P,A = train(train_dataset,iters=8001)
###Output
For Single Batch :
Step = 0
Loss = 47.05
Perplexity = 27.0
Accuracy = 2.73
For Single Batch :
Step = 1000
Loss = 12.16
Perplexity = 2.11
Accuracy = 75.91
For Single Batch :
Step = 2000
Loss = 9.07
Perplexity = 1.7
Accuracy = 81.82
For Single Batch :
Step = 3000
Loss = 8.01
Perplexity = 1.59
Accuracy = 84.55
For Single Batch :
Step = 4000
Loss = 7.89
Perplexity = 1.57
Accuracy = 83.18
For Single Batch :
Step = 5000
Loss = 7.84
Perplexity = 1.57
Accuracy = 84.55
For Single Batch :
Step = 6000
Loss = 7.99
Perplexity = 1.57
Accuracy = 83.18
For Single Batch :
Step = 7000
Loss = 7.97
Perplexity = 1.57
Accuracy = 82.73
For Single Batch :
Step = 8000
Loss = 7.88
Perplexity = 1.56
Accuracy = 84.09
###Markdown
Let's Plot some graphs* Plotted average loss of 30 batches, average perplexity of 30 batches, and average accuracy of 30 batches.
###Code
avg_loss = list()
avg_acc = list()
avg_perp = list()
i = 0
while(i<len(J)):
avg_loss.append(np.mean(J[i:i+30]))
avg_acc.append(np.mean(A[i:i+30]))
avg_perp.append(np.mean(P[i:i+30]))
i += 30
plt.plot(list(range(len(avg_loss))),avg_loss)
plt.xlabel("x")
plt.ylabel("Loss (Avg of 30 batches)")
plt.title("Loss Graph")
plt.show()
plt.plot(list(range(len(avg_perp))),avg_perp)
plt.xlabel("x")
plt.ylabel("Perplexity (Avg of 30 batches)")
plt.title("Perplexity Graph")
plt.show()
plt.plot(list(range(len(avg_acc))),avg_acc)
plt.xlabel("x")
plt.ylabel("Accuracy (Avg of 30 batches)")
plt.title("Accuracy Graph")
plt.show()
###Output
_____no_output_____
###Markdown
Let's make some predictions
###Code
#predict
def predict(parameters,embeddings,id_char,vocab_size):
#to store some predicted names
names = []
#predict 20 names
for i in range(20):
#initial activation_matrix(a0) and cell_matrix(c0)
a0 = np.zeros([1,hidden_units],dtype=np.float32)
c0 = np.zeros([1,hidden_units],dtype=np.float32)
#initalize blank name
name = ''
#make a batch dataset of single char
batch_dataset = np.zeros([1,vocab_size])
#get random start character
index = np.random.randint(0,27,1)[0]
#make that index 1.0
batch_dataset[0,index] = 1.0
#add first char to name
name += id_char[index]
#get char from id_char dict
char = id_char[index]
#loop until algo predicts '.'
while(char!='.'):
#get embeddings
batch_dataset = get_embeddings(batch_dataset,embeddings)
#lstm cell
lstm_activations,ct,at = lstm_cell(batch_dataset,a0,c0,parameters)
#output cell
ot = output_cell(at,parameters)
#either select random.choice ot np.argmax
pred = np.random.choice(27,1,p=ot[0])[0]
#get predicted char index
#pred = np.argmax(ot)
#add char to name
name += id_char[pred]
char = id_char[pred]
#change the batch_dataset to this new predicted char
batch_dataset = np.zeros([1,vocab_size])
batch_dataset[0,pred] = 1.0
#update a0 and c0 to new 'at' and 'ct' for next lstm cell
a0 = at
c0 = ct
#append the predicted name to names list
names.append(name)
return names
###Output
_____no_output_____
###Markdown
Let's Predict Names using Argmax
###Code
predict(parameters,embeddings,id_char,vocab_size)
###Output
_____no_output_____
###Markdown
Let's predict using random.choice
###Code
predict(parameters,embeddings,id_char,vocab_size)
###Output
_____no_output_____ |
docs/getting-started/quick-start-guide.ipynb | ###Markdown
Quick Start Guide The [NeXus Data Format](https://www.nexusformat.org/) is typically used to structure HDF5 files.An HDF5 file is a container for *datasets* and *groups*.Groups are folder-like and work like Python dictionaries.Datasets work like NumPy arrays.In addition, groups and datasets have a dictionary of *attributes*.NeXus extends this with the following:- Definitions for attributes for datasets, in particular a `units` attribute. In NeXus, datasets are referred to as *field*.- Definitions for attributes and structure of groups. This includes: - An `NX_class` attribute, identifying a group as an instance of a particular NeXus class such as [NXdata](https://manual.nexusformat.org/classes/base_classes/NXdata.html) or [NXlog](https://manual.nexusformat.org/classes/base_classes/NXlog.html). - Attributes that identify which fields contained in the group hold signal values, and which hold axis labels. In the following we use a file from the [POWGEN](https://neutrons.ornl.gov/powgen) instrument at SNS.It is bundled with scippnexus and will be downloaded automatically using [pooch](https://pypi.org/project/pooch/) if it is not cached already:
###Code
from scippnexus import data
filename = data.get_path('PG3_4844_event.nxs')
###Output
_____no_output_____
###Markdown
Given such a NeXus file, we first need to open it.Wherever possible this should be done using a context manager as follows:
###Code
import scippnexus as snx
with snx.File(filename) as f:
print(list(f.keys()))
###Output
_____no_output_____
###Markdown
Unfortunately working with a context manager in a Jupyter Notebook is cumbersome, so for the following we open the file directly instead:
###Code
f = snx.File(filename)
###Output
_____no_output_____
###Markdown
Above we saw that the file contains a single key, `'entry'`.When we access it we can see that it belongs to the class [NXentry](https://manual.nexusformat.org/classes/base_classes/NXentry.html) which is found on the top level in any NeXus file:
###Code
entry = f['entry']
entry
###Output
_____no_output_____
###Markdown
We could continue inspecting keys, until we find a group we are interested in.For this example we use the `'proton_charge'` log found within `'DASlogs'`:
###Code
proton_charge = entry['DASlogs']['proton_charge']
proton_charge
###Output
_____no_output_____
###Markdown
This group is an [NXlog](https://manual.nexusformat.org/classes/base_classes/NXlog.html), which typically contains 1-D data with a time axis.Since scippnexus knows about NXlog, it knows how to identify its shape:
###Code
proton_charge.shape
###Output
_____no_output_____
###Markdown
Note: This is in contrast to plain HDF5, where groups do *not* have a shape. Note that not all NeXus classes have a defined shape. We read the NXlog from the file using the slicing notation. To read the entire group, use ellipses (or an empty tuple):
###Code
proton_charge[...]
###Output
_____no_output_____
###Markdown
Above, scippnexus automatically dealt with:- Loading the data field (signal value dataset and its `'units'` attribute).- Identifying the dimension labels (here: `'time'`).- Other fields in the group were loaded as coordinates, including: - Units of the fields. - Uncertainties of the fields (here for `'average_value'`). This structure is compatible with a `scipp.DataArray` and is returned as such.We may also load an individual field instead of an entire group.A field corresponds to a `scipp.Variable`, i.e., similar to how h5py represents datasets as NumPy arrays but with an added unit and dimension labels (if applicable).For example, we may load only the `'value'` dataset:
###Code
proton_charge['value'][...]
###Output
_____no_output_____
###Markdown
Attributes of datasets or groups are accessed just like in h5py:
###Code
proton_charge['value'].attrs['units']
###Output
_____no_output_____
###Markdown
A subset of the group (and its datasets) can be loaded by selecting only a slice.We can also plot this directly using the `plot` method of `scipp.DataArray`:
###Code
proton_charge['time', 193000:197000].plot()
###Output
_____no_output_____
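###Markdown
For completeness, the same sliced log can be read inside a context manager (recommended outside of notebooks), using only the API shown above:
###Code
# Reading the proton-charge log slice with a context manager (same API as above)
with snx.File(filename) as f:
    log = f['entry/DASlogs/proton_charge']['time', 193000:197000]
print(log)
###Output
_____no_output_____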
###Markdown
As another example, consider the following [NXdata](https://manual.nexusformat.org/classes/base_classes/NXdata.html) group:
###Code
bank = f['entry/bank103']
print(bank.shape, bank.dims)
###Output
_____no_output_____
###Markdown
This can be loaded and plotted as above.In this case the resulting data array is 2-D:
###Code
da = bank[...]
da
da.plot()
###Output
_____no_output_____ |
MsBIP68/Load addtive data.ipynb | ###Markdown
Load additive data from Data LakeGet raw data from Data Lake
###Code
#objectName = dbutils.widgets.get("Tablename")
objectName = 'Booking'
df = (spark
.read
.format('csv')
.option("delimiter", ";")
.option("multiline", True)
.option("quote", "\"")
.option("escape", "\"")
.option("header",True)
.option('path',f'abfss://<your own filesystem>@<your own filesystem>.dfs.core.windows.net/{objectName}')
.load()
#.limit(1)
)
###Output
_____no_output_____
###Markdown
Handle schemaRemove the trailing header descriptions (everything after the first space) from the dataframe column names
###Code
df = df.select([col(c).alias(c[:c.index(' ')]) for c in df.columns])
###Output
_____no_output_____
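###Markdown
A small illustration of the renaming trick above on hypothetical column names (it assumes an active SparkSession named `spark`, as on Databricks): everything after the first space in a header is dropped.
###Code
# Hypothetical example: keep only the part of each header before the first space
from pyspark.sql.functions import col

demo = spark.createDataFrame(
    [(1, "2021-01-01")],
    ["booking_id int_key", "date booking_date"]
)
demo = demo.select([col(c).alias(c[:c.index(' ')]) for c in demo.columns])
print(demo.columns)   # ['booking_id', 'date']
###Output
_____no_output_____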
###Markdown
Remove columns that are not used - if they exist
###Code
ColumnnamesToDelete = spark.createDataFrame(
[
(1, "booking_description"),
(2, "last_change_date"),
(3, "color_code_fk")
]
,["number","columnname"]
)
for d in ColumnnamesToDelete.collect():
if d["columnname"] in df.columns:
df = df.drop(d["columnname"])
###Output
_____no_output_____
###Markdown
Change column datatypes if they exits
###Code
from pyspark.sql.types import StringType,BooleanType,DateType
if 'date' in df.columns:
df = df.withColumn("date", df["date"].cast(DateType()))
###Output
_____no_output_____
###Markdown
Remove characters not used in data
###Code
from pyspark.sql.functions import regexp_replace
SearchInColumns = spark.createDataFrame(
[
(1, "start_time"),
(2, "end_time")
]
,["number","columnname"]
)
CharacterChanges = spark.createDataFrame(
[
(1, "\\+", "")
]
,["number","RegExString","NewString"]
)
for s in SearchInColumns.collect():
if s["columnname"] in df.columns:
for r in CharacterChanges.collect():
df = df.withColumn(s["columnname"], regexp_replace(col(s["columnname"]), r["RegExString"], r["NewString"]))
#display(df.limit(10))
#display(df.limit(10))
df.printSchema()
###Output
_____no_output_____
###Markdown
Write data to Bronze areaWrite new data to the delta lake in the Bronze version. Notice the APPEND mode in the write call below - this appends all new data to the existing data (delta lake)
###Code
bronze_loc = f'abfss://<your own filesystem>@<your own filesystem>.dfs.core.windows.net/Bronze/{objectName}'
df.write.mode("append").format("delta").save(bronze_loc)
###Output
_____no_output_____
###Markdown
Write data to Silver areaLoad to Silver data lake with correct partition approach
###Code
silver_loc = f'abfss://<your own filesystem>@<your own filesystem>.dfs.core.windows.net/Silver/{objectName}'
parquetFile = spark.read.parquet(bronze_loc)
parquetFile.repartition(1).write.parquet(silver_loc)
###Output
_____no_output_____
###Markdown
Things and notes to keep for later
###Code
## managed table - hard link between objects
## df2.write.mode("overwrite").format("delta").saveAsTable("Bookings")
## unmanaged table - soft link between objects
## df2.write.mode("overwrite").format("delta").option("path",save_loc).saveAsTable("Bookings")
## statement = 'select * from Bookings'
## spark.sql(statement).createOrReplaceTempView("Temp_Bookings")
###Output
_____no_output_____ |
recsys/baseline.ipynb | ###Markdown
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import sparse
from tqdm.notebook import tqdm
%matplotlib inline
###Output
_____no_output_____
###Markdown
Raw data Read the data from .csv. Some data (such as rubrics and features) are represented as strings of values. We convert them into lists of numbers.
###Code
to_list = lambda rubrics: [int(rubric) for rubric in str(rubrics).split(' ')]
def apply_to_columns(df, columns, func=to_list):
for column in columns:
df.loc[~df[column].isnull(), column] = df.loc[~df[column].isnull(), column].apply(func)
###Output
_____no_output_____
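###Markdown
A small usage sketch of the helpers above (hypothetical data): a space-separated string of ids becomes a Python list of ints, while NaN cells are left untouched.
###Code
# Toy example for to_list / apply_to_columns (hypothetical data)
toy = pd.DataFrame({'rubrics_id': ['1 2 3', np.nan, '7']})
apply_to_columns(toy, ['rubrics_id'])
print(toy)   # rows become [1, 2, 3], NaN, [7]
###Output
_____no_output_____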
###Markdown
First of all, we will need the data on __users__, __organisations__, and the __reviews__ themselves.
###Code
users = pd.read_csv('/content/drive/MyDrive/YCup2021[recsys]/users.csv')
users.head()
orgs = pd.read_csv('/content/drive/MyDrive/YCup2021[recsys]/organisations.csv')
# create lists
columns = ['rubrics_id', 'features_id']
apply_to_columns(orgs, columns)
orgs.head()
###Output
_____no_output_____
###Markdown
To avoid doing a __join__ every time we need to know which city an organisation or a user is from, we add this information to the reviews right away.
###Code
reviews = pd.read_csv('/content/drive/MyDrive/YCup2021[recsys]/reviews.csv', low_memory=False)
# encode users ids as numeric
reviews = reviews.merge(users, on='user_id')
reviews = reviews.rename({'city': 'user_city'}, axis=1)
# # encode orgs ids as numeric
reviews = reviews.merge(orgs[['org_id', 'city']], on='org_id')
reviews = reviews.rename({'city': 'org_city'}, axis=1)
# # create lists
columns = ['aspects']
apply_to_columns(reviews, columns)
reviews.head()
###Output
_____no_output_____
###Markdown
Great, now the reviews are convenient to work with. Let's look at the distribution of new reviews by day to understand how best to organise validation.
###Code
sns.displot(data=reviews, x='ts', height=8)
plt.title('Distribution of reviews by day')
plt.show()
###Output
_____no_output_____
###Markdown
Train-test split
###Code
def clear_df(df, suffixes=['_x', '_y'], inplace=True):
'''
clear_df(df, suffixes=['_x', '_y'], inplace=True)
    Removes from the input df all columns that end with the given suffixes.
    Parameters
    ----------
    df : pandas.DataFrame
    suffixes : Iterable, default=['_x', '_y']
        Suffixes of the columns to drop
    inplace : bool, default=True
        Whether to drop the columns in place or to create a copy of the DataFrame.
    Returns
    -------
    pandas.DataFrame (optional)
        df with the columns removed
'''
def bad_suffix(column):
nonlocal suffixes
return any(column.endswith(suffix) for suffix in suffixes)
columns_to_drop = [col for col in df.columns if bad_suffix(col)]
return df.drop(columns_to_drop, axis=1, inplace=inplace)
def extract_unique(reviews, column):
'''
extract_unique(reviews, column)
    Extracts the unique values from a DataFrame column.
    Parameters
    ----------
    reviews : pandas.DataFrame
        pandas.DataFrame from which the values will be extracted.
    column : str
        Name of the column in <reviews>.
    Returns
    -------
    pandas.DataFrame
        Contains a single named column with the unique values.
'''
unique = reviews[column].unique()
return pd.DataFrame({column: unique})
def count_unique(reviews, column):
'''
count_unique(reviews, column)
    Extracts and counts the unique values from a DataFrame column.
    Parameters
    ----------
    reviews : pandas.DataFrame
        pandas.DataFrame from which the values will be extracted.
    column : str
        Name of the column in <reviews>.
    Returns
    -------
    pandas.DataFrame
        Contains two columns: the unique values and their counts.
'''
return reviews[column].value_counts().reset_index(name='count').rename({'index': column}, axis=1)
def filter_reviews(reviews, users=None, orgs=None):
'''
filter_reviews(reviews, users=None, orgs=None)
    Keeps only the reviews left by the given users on the given organisations.
    Parameters
    ----------
    users: pandas.DataFrame, default=None
        DataFrame containing a <user_id> column.
        If None, no filtering is applied.
    orgs: pandas.DataFrame, default=None
        DataFrame containing an <org_id> column.
        If None, no filtering is applied.
    Returns
    -------
    pandas.DataFrame
        The filtered set of reviews.
'''
if users is not None:
reviews = reviews.merge(users, on='user_id', how='inner')
clear_df(reviews)
if orgs is not None:
reviews = reviews.merge(orgs, on='org_id', how='inner')
clear_df(reviews)
return reviews
def train_test_split(reviews, ts_start, ts_end=None):
'''
train_test_split(reviews, ts_start, ts_end=None)
    Splits the set of reviews into two parts: train and test.
    Only reviews whose user_id and org_id appear in the train set end up in the test set.
    Parameters
    ----------
    reviews : pandas.DataFrame
        Reviews from reviews.csv with the mandatory fields:
        <rating>, <ts>, <user_id>, <user_city>, <org_id>, <org_city>.
    ts_start : int
        First day of the reviews in the test set (inclusive).
    ts_end : int, default=None
        Last day of the reviews in the test set (inclusive).
        If the parameter is None, then ts_end == reviews['ts'].max().
    Returns
    -------
    splitting : tuple
        A tuple of two pandas.DataFrame with the same structure as reviews:
        the first contains the reviews in the train set, the second those in the test set.
'''
if not ts_end:
ts_end = reviews['ts'].max()
reviews_train = reviews[(reviews['ts'] < ts_start) | (reviews['ts'] > ts_end)]
reviews_test = reviews[(ts_start <= reviews['ts']) & (reviews['ts'] <= ts_end)]
# 1. Выбираем только отзывы на понравившиеся места у путешественников
reviews_test = reviews_test[reviews_test['rating'] >= 4.0]
reviews_test = reviews_test[reviews_test['user_city'] != reviews_test['org_city']]
# 2. Оставляем в тесте только тех пользователей и организации, которые встречались в трейне
train_orgs = extract_unique(reviews_train, 'org_id')
train_users = extract_unique(reviews_train, 'user_id')
reviews_test = filter_reviews(reviews_test, orgs=train_orgs)
return reviews_train, reviews_test
def process_reviews(reviews):
'''
process_reviews(reviews)
Извлекает из набора отзывов тестовых пользователей и таргет.
Parameters
----------
reviews : pandas.DataFrame
DataFrame с отзывами, содержащий колонки <user_id> и <org_id>
Returns
-------
X : pandas.DataFrame
DataFrame такой же структуры, как и в test_users.csv
y : pandas.DataFrame
DataFrame с колонками <user_id> и <target>.
В <target> содержится список org_id, посещённых пользователем.
'''
y = reviews.groupby('user_id')['org_id'].apply(list).reset_index(name='target')
X = pd.DataFrame(y['user_id'])
return X, y
reviews['ts'].max()
###Output
_____no_output_____
###Markdown
In total, the sample contains reviews spanning **1216** days. We set aside the reviews from the last **100** days as the test set.
###Code
train_reviews, test_reviews = train_test_split(reviews, 1116)
X_test, y_test = process_reviews(test_reviews)
###Output
_____no_output_____
###Markdown
Let's see how many unique users ended up in this test set:
###Code
len(X_test)
###Output
_____no_output_____
###Markdown
Metric. The metric takes as input two DataFrames with the same structure as **y_test**. `print_score` multiplies the raw metric value by 100, just as in the contest. The same implementation is used to score the **submission**.
###Code
def MNAP(size=20):
'''
MNAP(size=20)
Создаёт метрику под <size> сделанных предсказаний.
Parameters
----------
size : int, default=20
Размер рекомендованной выборки для каждого пользователя
Returns
-------
func(pd.DataFrame, pd.DataFrame) -> float
Функция, вычисляющая MNAP.
'''
assert size >= 1, "Size must be greater than zero!"
def metric(y_true, predictions, size=size):
'''
metric(y_true, predictions, size=size)
Метрика MNAP для двух перемешанных наборов <y_true> и <y_pred>.
Parameters
----------
y_true : pd.DataFrame
DataFrame с колонками <user_id> и <target>.
В <target> содержится список настоящих org_id, посещённых пользователем.
predictions : pd.DataFrame
DataFrame с колонками <user_id> и <target>.
В <target> содержится список рекомендованных для пользователя org_id.
Returns
-------
float
Значение метрики.
'''
y_true = y_true.rename({'target': 'y_true'}, axis='columns')
predictions = predictions.rename({'target': 'predictions'}, axis='columns')
merged = y_true.merge(predictions, left_on='user_id', right_on='user_id')
def score(x, size=size):
'''
Вспомогательная функция.
'''
y_true = x[1][1]
predictions = x[1][2][:size]
weight = 0
inner_weights = [0]
for n, item in enumerate(predictions):
inner_weight = inner_weights[-1] + (1 if item in y_true else 0)
inner_weights.append(inner_weight)
for n, item in enumerate(predictions):
if item in y_true:
weight += inner_weights[n + 1] / (n + 1)
return weight / min(len(y_true), size)
return np.mean([score(row) for row in merged.iterrows()])
return metric
def print_score(score):
print(f"Score: {score*100.0:.2f}")
N = 20
MNAP_N = MNAP(N)
###Output
_____no_output_____
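###Markdown
A quick sanity check of the metric on a toy example (the user id and org ids below are made up purely for illustration): one user visited two organisations and we recommend three, two of which are correct. Worked out by hand, the expected MNAP is (1/1 + 2/3) / 2 ≈ 0.833, so `print_score` should report roughly 83.33.
###Code
toy_true = pd.DataFrame({'user_id': [1], 'target': [[10, 20]]})
toy_pred = pd.DataFrame({'user_id': [1], 'target': [[10, 30, 20]]})
print_score(MNAP_N(toy_true, toy_pred))
###Output
_____no_output_____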
###Markdown
Approaches without machine learning. Random N places. Let's try recommending users random places from the other city.
###Code
spb_orgs = orgs[orgs['city'] == 'spb']['org_id']
msk_orgs = orgs[orgs['city'] == 'msk']['org_id']
test_users_with_locations = X_test.merge(users, on='user_id')
%%time
np.random.seed(1337)
choose = lambda x: np.random.choice(spb_orgs, N) if x['city'] == 'msk' else np.random.choice(msk_orgs, N)
target = test_users_with_locations.apply(choose, axis=1)
predictions = X_test.copy()
predictions['target'] = target
print_score(MNAP_N(y_test, predictions))
###Output
Score: 0.02
CPU times: user 4.23 s, sys: 145 ms, total: 4.38 s
Wall time: 4.18 s
###Markdown
N most popular places. The previous approach obviously does a poor job of predicting which places a user will visit. Let's improve the strategy: we will recommend the most popular places, i.e. the ones with the most reviews.
###Code
msk_orgs = train_reviews[(train_reviews['rating'] >= 4) & (train_reviews['org_city'] == 'msk')]['org_id']
msk_orgs = msk_orgs.value_counts().index[:N].to_list()
spb_orgs = train_reviews[(train_reviews['rating'] >= 4) & (train_reviews['org_city'] == 'spb')]['org_id']
spb_orgs = spb_orgs.value_counts().index[:N].to_list()
%%time
choose = lambda x: spb_orgs if x['city'] == 'msk' else msk_orgs
target = test_users_with_locations.apply(choose, axis=1)
predictions = X_test.copy()
predictions['target'] = target
print_score(MNAP_N(y_test, predictions))
###Output
Score: 4.21
CPU times: user 1.52 s, sys: 4.13 ms, total: 1.52 s
Wall time: 1.52 s
###Markdown
Great, the metric improved a little, but this tactic is worth refining further. N most popular places among tourists
###Code
#info = reduce_reviews(reviews, 1, 12) #5, 12
#(inner_reviews, inner_orgs), (outer_reviews, outer_orgs), train_users = info
#temp = pd.concat([inner_reviews, outer_reviews])
#tourist_reviews = temp[temp['rating'] >= 4.0]
tourist_reviews = train_reviews[train_reviews['rating'] == 4.0]
# набор отзывов только от туристов
tourist_reviews = tourist_reviews[tourist_reviews['user_city'] != tourist_reviews['org_city']]
# выбираем самые популярные места среди туристов из Москвы и Питера
msk_orgs = tourist_reviews[tourist_reviews['org_city'] == 'msk']['org_id']
temp = train_reviews[train_reviews['rating'] == 5.0]
temp = temp[temp['user_city'] != temp['org_city']]
temp = temp[temp['org_city'] == 'msk']['org_id'].value_counts()*5
msk_orgs = msk_orgs.value_counts()*4
for index, value in temp.items():
if index in msk_orgs:
msk_orgs.loc[index] += value
#print(index, value)
#print(msk_orgs)
msk_orgs = msk_orgs.index[:N].to_list()
spb_orgs = tourist_reviews[tourist_reviews['org_city'] == 'spb']['org_id']
temp = train_reviews[train_reviews['rating'] == 5.0]
temp = temp[temp['user_city'] != temp['org_city']]
temp = temp[temp['org_city'] == 'spb']['org_id'].value_counts()*5
spb_orgs = spb_orgs.value_counts()*4
for index, value in temp.items():
if index in spb_orgs:
spb_orgs.loc[index] += value
spb_orgs = spb_orgs.index[:N].to_list()
%%time
choose = lambda x: spb_orgs if x['city'] == 'msk' else msk_orgs
target = test_users_with_locations.apply(choose, axis=1)
predictions = X_test.copy()
predictions['target'] = target
print_score(MNAP_N(y_test, predictions))
###Output
Score: 5.89
CPU times: user 1.51 s, sys: 0 ns, total: 1.51 s
Wall time: 1.51 s
###Markdown
The metric improved a bit more. N / rubrics_count most popular places from each rubric
###Code
def extract_top_by_rubrics(reviews, N):
'''
extract_top_by_rubrics(reviews, N)
Набирает самые популярные организации по рубрикам, сохраняя распределение.
Parameters
----------
reviews : pd.DataFrame
Отзывы пользователей для рекомендации.
N : int
Число рекомендаций.
Returns
-------
orgs_list : list
Список отобранных организаций.
'''
# извлечение популярных рубрик
reviews = reviews.merge(orgs, on='org_id')[['org_id', 'rubrics_id']]
rubrics = reviews.explode('rubrics_id').groupby('rubrics_id').size()
rubrics = (rubrics / rubrics.sum() * N).apply(round).sort_values(ascending=False)
# вывод списка рубрик по убыванию популярности
# print(
# pd.read_csv('data/rubrics.csv')
# .merge(rubrics.reset_index(), left_index=True, right_on='rubrics_id')
# .sort_values(by=0, ascending=False)[['rubric_id', 0]]
# )
# извлечение популярных организаций
train_orgs = reviews.groupby('org_id').size().reset_index(name='count').merge(orgs, on='org_id')
train_orgs = train_orgs[['org_id', 'count', 'rubrics_id']]
most_popular_rubric = lambda rubrics_id: max(rubrics_id, key=lambda rubric_id: rubrics[rubric_id])
train_orgs['rubrics_id'] = train_orgs['rubrics_id'].apply(most_popular_rubric)
orgs_by_rubrics = train_orgs.sort_values(by='count', ascending=False).groupby('rubrics_id')['org_id'].apply(list)
# соберём самые популярные организации в рубриках в один список
orgs_list = []
for rubric_id, count in zip(rubrics.index, rubrics):
if rubric_id not in orgs_by_rubrics:
continue
orgs_list.extend(orgs_by_rubrics[rubric_id][:count])
return orgs_list
msk_orgs = extract_top_by_rubrics(tourist_reviews[tourist_reviews['org_city'] == 'msk'], N)
spb_orgs = extract_top_by_rubrics(tourist_reviews[tourist_reviews['org_city'] == 'spb'], N)
%%time
choose = lambda x: spb_orgs if x['city'] == 'msk' else msk_orgs
target = test_users_with_locations.apply(choose, axis=1)
predictions = X_test.copy()
predictions['target'] = target
print_score(MNAP_N(y_test, predictions))
###Output
_____no_output_____
###Markdown
Time for ML! Collaborative filtering, memory-based. This group of methods requires explicitly building a __user-organization__ matrix (the __interaction matrix__), where the cell at the intersection of the $i$-th row and the $j$-th column holds the rating that the $i$-th user gave to the $j$-th organization, or a gap if no rating was given.
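Before building the full matrix, here is a toy sketch of what such a sparse matrix looks like (hypothetical ratings for 3 users and 3 organisations; it assumes `scipy.sparse` is already imported as `sparse`, as in the code below):
###Code
# Toy interaction matrix: rows are users, columns are organisations,
# values are ratings, absent entries mean "no rating".
toy_interactions = sparse.coo_matrix(
    ([5.0, 4.0, 3.0],          # ratings
     ([0, 0, 2], [0, 2, 1])),  # (user index, org index) pairs
    shape=(3, 3)
).tocsr()
print(toy_interactions.toarray())
###Output
_____no_output_____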
###Code
def reduce_reviews(reviews, min_user_reviews=5, min_org_reviews=13): #5, 13
'''
reduce_reviews(reviews, min_user_reviews=5, min_org_reviews=13)
Убирает из выборки пользователей и организации, у которых менее <min_reviews> отзывов в родном городе.
Оставляет только отзывы туристов.
Parameters
----------
reviews : pandas.DataFrame
Выборка отзывов с обязательными полями:
<user_id>, <user_city>.
min_user_reviews : int, default=5
Минимальное количество отзывов у пользователя, необходимое для включения в выборку.
min_org_reviews : int, default=13
Минимальное количество отзывов у организации, необходимое для включения в выборку.
Returns
-------
splitting : tuple
Кортеж из двух наборов.
Каждый набор содержит 2 pandas.DataFrame:
1. Урезанная выборка отзывов
2. Набор уникальных организаций
Первый набор содержит DataFrame-ы, относящиеся к отзывам, оставленным в родном городе, а второй -
к отзывам, оставленным в чужом городе.
users : pd.DataFrame
Набор уникальных пользователей в выборке
'''
inner_reviews = reviews[reviews['user_city'] == reviews['org_city']]
outer_reviews = reviews[reviews['user_city'] != reviews['org_city']]
# оставляем только отзывы туристов на родной город
tourist_users = extract_unique(outer_reviews, 'user_id')
inner_reviews = filter_reviews(inner_reviews, users=tourist_users)
# выбираем только тех пользователей и организации, у которых есть <min_reviews> отзывов
top_users = count_unique(inner_reviews, 'user_id')
top_users = top_users[top_users['count'] >= min_user_reviews]
top_orgs = count_unique(inner_reviews, 'org_id')
top_orgs = top_orgs[top_orgs['count'] >= min_org_reviews]
inner_reviews = filter_reviews(inner_reviews, users=top_users, orgs=top_orgs)
outer_reviews = filter_reviews(outer_reviews, users=top_users)
# combine reviews
reviews = pd.concat([inner_reviews, outer_reviews])
users = extract_unique(reviews, 'user_id')
orgs = extract_unique(reviews, 'org_id')
return (
(
inner_reviews,
extract_unique(inner_reviews, 'org_id')
),
(
outer_reviews,
extract_unique(outer_reviews, 'org_id')
),
extract_unique(inner_reviews, 'user_id')
)
def create_mappings(df, column):
'''
create_mappings(df, column)
Создаёт маппинг между оригинальными ключами словаря и новыми порядковыми.
Parameters
----------
df : pandas.DataFrame
DataFrame с данными.
column : str
Название колонки, содержащей нужные ключи.
Returns
-------
code_to_idx : dict
Словарь с маппингом: "оригинальный ключ" -> "новый ключ".
idx_to_code : dict
Словарь с маппингом: "новый ключ" -> "оригинальный ключ".
'''
code_to_idx = {}
idx_to_code = {}
for idx, code in enumerate(df[column].to_list()):
code_to_idx[code] = idx
idx_to_code[idx] = code
return code_to_idx, idx_to_code
def map_ids(row, mapping):
'''
Вспомогательная функция
'''
return mapping[row]
def interaction_matrix(reviews, test_users, min_user_reviews=5, min_org_reviews=12):
'''
interaction_matrix(reviews, test_users, min_user_reviews=5, min_org_reviews=12)
Создаёт блочную матрицу взаимодействий (вид матрицы описан в Returns)
Parameters
----------
reviews : pd.DataFrame
Отзывы пользователей для матрицы взаимодействий.
test_users : pd.DataFrame
Пользователи, для которых будет выполняться предсказание.
min_user_reviews : int, default=5
Минимальное число отзывов от пользователя, необходимое для включения его в матрицу.
min_org_reviews : int, default=12
Минимальное число отзывов на организацию, необходимое для включения её в матрицу.
Returns
-------
InteractionMatrix : scipy.sparse.csr_matrix
Матрица, содержащая рейтинги, выставленные пользователями.
Она блочная и имеет такой вид:
---------------------------------------------------
| TRAIN USERS, INNER ORGS | TRAIN USERS, OUTER ORGS |
| | |
---------------------------------------------------
| TEST USERS, INNER ORGS | TEST USERS, OUTER ORGS |
| | |
---------------------------------------------------
splitting : tuple
Кортеж, содержащий два целых числа:
1. Число пользователей в обучающей выборке
2. Число организаций в домашнем регионе
splitting: tuple
Кортеж, содержащий два кортежа из двух словарей:
1. (idx_to_uid, uid_to_idx) - содержит маппинг индекса к user_id
2. (idx_to_oid, oid_to_idx) - содержит маппинг индекса к org_id
'''
info = reduce_reviews(train_reviews, min_user_reviews, min_org_reviews)
(inner_reviews, inner_orgs), (outer_reviews, outer_orgs), train_users = info
# удалим из обучающей выборки пользователей, которые есть в тестовой
test_users = test_users[['user_id']]
train_users = (
pd.merge(train_users, test_users, indicator=True, how='outer')
.query('_merge=="left_only"')
.drop('_merge', axis=1)
)
inner_reviews = filter_reviews(inner_reviews, train_users)
outer_reviews = filter_reviews(outer_reviews, train_users)
# оставляем отзывы, оставленные тестовыми пользователями
test_reviews = filter_reviews(reviews, test_users, pd.concat([inner_orgs, outer_orgs]))
# получаем полный набор маппингов
all_users = pd.concat([train_users, test_users])
all_orgs = pd.concat([inner_orgs, outer_orgs])
uid_to_idx, idx_to_uid = create_mappings(all_users, 'user_id')
oid_to_idx, idx_to_oid = create_mappings(all_orgs, 'org_id')
# собираем матрицу взаимодействий
reviews = pd.concat([inner_reviews, outer_reviews, test_reviews])
I = reviews['user_id'].apply(map_ids, args=[uid_to_idx]).values
J = reviews['org_id'].apply(map_ids, args=[oid_to_idx]).values
values = reviews['rating']
interactions = sparse.coo_matrix(
(values, (I, J)),
shape=(len(all_users), len(all_orgs)),
dtype=np.float64
).tocsr()
return (
interactions,
(len(train_users), len(inner_orgs)),
(
(idx_to_uid, uid_to_idx),
(idx_to_oid, oid_to_idx)
)
)
###Output
_____no_output_____
###Markdown
ALS
###Code
#!pip install implicit==0.4.8
#0.3.9
!pip install implicit==0.4.3
%%time
import implicit
def make_predictions(interactions, X_test, N):
'''
make_predictions(interactions, X_test, N)
Делает рекомендации для пользователей из <X_test> на основе матрицы взаимодействий.
Parameters
----------
interactions : scipy.sparse.csr_matrix
Разреженная матрица взаимодействий.
X_test : pd.DataFrame
Набор тестовых пользователей, для которых нужно сделать рекомендации.
N : int
Число рекомендаций для каждого пользователя.
Returns
-------
predictions : pd.DataFrame
DataFrame с колонками <user_id> и <target>.
В <target> содержится список рекомендованных для пользователя org_id.
'''
predictions = X_test[['user_id']].copy()
predictions['target'] = pd.Series(dtype=object)
predictions = predictions.set_index('user_id')
interactions, (train_users_len, inner_orgs_len), mappings = interactions
(idx_to_uid, uid_to_idx), (idx_to_oid, oid_to_idx) = mappings
base_model = implicit.als.AlternatingLeastSquares(
factors=32, #factors=5,
iterations= 100, #iterations=75,
regularization=0.05,
random_state=42
#use_gpu = True
)
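# NOTE: with implicit 0.4.x, fit() expects an item-user matrix (hence the .T below),
# while recommend_all() takes the user-item matrix itself.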
base_model.fit(interactions.T)
orgs_to_filter = list(np.arange(inner_orgs_len))
recommendations = base_model.recommend_all(
interactions,
N=N,
filter_already_liked_items=True,
filter_items=orgs_to_filter,
show_progress=True
)
for user_id in tqdm(X_test['user_id'].values, leave=False):
predictions.loc[user_id, 'target'] = list(
map(
lambda org_idx: idx_to_oid[org_idx],
recommendations[uid_to_idx[user_id]]
)
)
return predictions.reset_index()
msk_interactions = interaction_matrix(
train_reviews[train_reviews['user_city'] == 'msk'],
test_users_with_locations[test_users_with_locations['city'] == 'msk'],
)
spb_interactions = interaction_matrix(
train_reviews[train_reviews['user_city'] == 'spb'],
test_users_with_locations[test_users_with_locations['city'] == 'spb'],
)
test_msk_users = test_users_with_locations[test_users_with_locations['city'] == 'msk']
test_spb_users = test_users_with_locations[test_users_with_locations['city'] == 'spb']
msk_predictions = make_predictions(msk_interactions, test_msk_users, N)
spb_predictions = make_predictions(spb_interactions, test_spb_users, N)
predictions = pd.concat([msk_predictions, spb_predictions])
%%time
print_score(MNAP_N(y_test, predictions))
###Output
Score: 0.02
CPU times: user 1.46 s, sys: 3.26 ms, total: 1.46 s
Wall time: 1.46 s
###Markdown
Submission. We pick the best method on validation, retrain it on the full dataset, and make predictions for the test set. Without ML
###Code
#reviews.shape[0]
#reviews[reviews['rating'] >= 4.0]
#reviews['aspects'].notna()
#reviews[reviews['aspects'].notna()]
#temp = reviews[reviews['org_city'] == 'msk']
#temp[temp['user_city'] == 'spb']
#info = reduce_reviews(reviews, 5, 12)
#(inner_reviews, inner_orgs), (outer_reviews, outer_orgs), train_users = info
#temp = pd.concat([inner_reviews, outer_reviews])
#temp = temp[temp['rating'] >= 4.0]
#temp.shape[0]
#info = reduce_reviews(reviews, 1, 12) #5, 12
#(inner_reviews, inner_orgs), (outer_reviews, outer_orgs), train_users = info
#temp = pd.concat([inner_reviews, outer_reviews])
#tourist_reviews = temp[temp['rating'] >= 3.0]
qq = train_reviews[train_reviews['rating'] > 4 ]
#qq = qq[qq['rating'] < 5]
qq.shape[0]
tourist_reviews = train_reviews[train_reviews['rating'] == 4.0]
# набор отзывов только от туристов
tourist_reviews = tourist_reviews[tourist_reviews['user_city'] != tourist_reviews['org_city']]
# выбираем самые популярные места среди туристов из Москвы и Питера
msk_orgs = tourist_reviews[tourist_reviews['org_city'] == 'msk']['org_id']
temp = train_reviews[train_reviews['rating'] == 5.0]
temp = temp[temp['user_city'] != temp['org_city']]
temp = temp[temp['org_city'] == 'msk']['org_id'].value_counts()*5
msk_orgs = msk_orgs.value_counts()*4
for index, value in temp.items():
if index in msk_orgs:
msk_orgs.loc[index] += value
#print(index, value)
#print(msk_orgs)
msk_orgs = msk_orgs.index[:N].to_list()
spb_orgs = tourist_reviews[tourist_reviews['org_city'] == 'spb']['org_id']
temp = train_reviews[train_reviews['rating'] == 5.0]
temp = temp[temp['user_city'] != temp['org_city']]
temp = temp[temp['org_city'] == 'spb']['org_id'].value_counts()*5
spb_orgs = spb_orgs.value_counts()*4
for index, value in temp.items():
if index in spb_orgs:
spb_orgs.loc[index] += value
spb_orgs = spb_orgs.index[:N].to_list()
# набор отзывов только от туристов
tourist_reviews = reviews[reviews['rating'] >= 3.0]
tourist_reviews = tourist_reviews[tourist_reviews['user_city'] != tourist_reviews['org_city']]
# выбираем самые популярные места среди туристов из Москвы и Питера
msk_orgs = tourist_reviews[tourist_reviews['org_city'] == 'msk']['org_id']
msk_orgs = msk_orgs.value_counts().index[:N].to_list()
spb_orgs = tourist_reviews[tourist_reviews['org_city'] == 'spb']['org_id']
spb_orgs = spb_orgs.value_counts().index[:N].to_list()
msk_orgs = str(' '.join(map(str, msk_orgs)))
spb_orgs = str(' '.join(map(str, spb_orgs)))
test_users = pd.read_csv('/content/drive/MyDrive/YCup2021[recsys]/test_users.csv')
test_users['city'] = test_users.merge(users, on='user_id')['city']
choose = lambda x: spb_orgs if x['city'] == 'msk' else msk_orgs
target = test_users.apply(choose, axis=1)
predictions = test_users[['user_id']]
predictions['target'] = target
predictions.head()
predictions.to_csv('answers.csv', index=None)
###Output
_____no_output_____
###Markdown
With ML
###Code
test_users = pd.read_csv('/content/drive/MyDrive/YCup2021[recsys]/test_users.csv')
test_users = test_users.merge(users, on='user_id')
test_msk_users = test_users[test_users['city'] == 'msk'][['user_id', 'city']]
test_spb_users = test_users[test_users['city'] == 'spb'][['user_id', 'city']]
msk_interactions = interaction_matrix(
reviews[reviews['user_city'] == 'msk'],
test_msk_users
)
spb_interactions = interaction_matrix(
reviews[reviews['user_city'] == 'spb'],
test_spb_users
)
#msk_interactions, test_msk_users
#spb_interactions, test_spb_users
msk_predictions = make_predictions(msk_interactions, test_msk_users, N)
spb_predictions = make_predictions(spb_interactions, test_spb_users, N)
predictions = pd.concat([msk_predictions, spb_predictions])
predictions['target'] = predictions['target'].apply(lambda orgs: ' '.join(map(str, orgs)))
predictions.head()
predictions.to_csv('answers_ml.csv', index=None)
###Output
_____no_output_____
###Markdown
Raw data. Reading the data from .csv. Some fields (such as rubrics and features) are represented as strings of values; we convert them into lists of numbers.
###Code
to_list = lambda rubrics: [int(rubric) for rubric in str(rubrics).split(' ')]
def apply_to_columns(df, columns, func=to_list):
for column in columns:
df.loc[~df[column].isnull(), column] = df.loc[~df[column].isnull(), column].apply(func)
###Output
_____no_output_____
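###Markdown
For example, a rubric string like "101 205" becomes the list [101, 205]. A minimal sketch with a made-up value, just to illustrate the helper above:
###Code
demo_df = pd.DataFrame({'rubrics_id': ['101 205', None]})
apply_to_columns(demo_df, ['rubrics_id'])
demo_df
###Output
_____no_output_____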
###Markdown
First of all, we will need the data on __users__, __organisations__, and the __reviews__ themselves.
###Code
users = pd.read_csv('data/users.csv')
users.head()
orgs = pd.read_csv('data/organisations.csv')
# create lists
columns = ['rubrics_id', 'features_id']
apply_to_columns(orgs, columns)
orgs.head()
###Output
_____no_output_____
###Markdown
To avoid doing a __join__ every time we need to know which city an organization or a user comes from, we add this information to the reviews right away.
###Code
reviews = pd.read_csv('data/reviews.csv', low_memory=False)
# encode users ids as numeric
reviews = reviews.merge(users, on='user_id')
reviews = reviews.rename({'city': 'user_city'}, axis=1)
# # encode orgs ids as numeric
reviews = reviews.merge(orgs[['org_id', 'city']], on='org_id')
reviews = reviews.rename({'city': 'org_city'}, axis=1)
# # create lists
columns = ['aspects']
apply_to_columns(reviews, columns)
reviews.head()
###Output
_____no_output_____
###Markdown
Great, the reviews are now convenient to work with. Let's look at the distribution of new reviews by day to understand how best to set up validation.
###Code
sns.displot(data=reviews, x='ts', height=8)
plt.title('Распределение отзывов по дням')
plt.show()
###Output
_____no_output_____
###Markdown
Train-test split
###Code
def clear_df(df, suffixes=['_x', '_y'], inplace=True):
'''
clear_df(df, suffixes=['_x', '_y'], inplace=True)
Удаляет из входного df все колонки, оканчивающиеся на заданные суффиксы.
Parameters
----------
df : pandas.DataFrame
suffixes : Iterable, default=['_x', '_y']
Суффиксы колонок, подлежащих удалению
inplace : bool, default=True
Нужно ли удалить колонки "на месте" или же создать копию DataFrame.
Returns
-------
pandas.DataFrame (optional)
df с удалёнными колонками
'''
def bad_suffix(column):
nonlocal suffixes
return any(column.endswith(suffix) for suffix in suffixes)
columns_to_drop = [col for col in df.columns if bad_suffix(col)]
return df.drop(columns_to_drop, axis=1, inplace=inplace)
def extract_unique(reviews, column):
'''
extract_unique(reviews, column)
Извлекает уникальные значения из колонки в DataFrame.
Parameters
----------
reviews : pandas.DataFrame
pandas.DataFrame, из которого будут извлечены значения.
column : str
Имя колонки в <reviews>.
Returns
-------
pandas.DataFrame
Содержит одну именованную колонку с уникальными значениями.
'''
unique = reviews[column].unique()
return pd.DataFrame({column: unique})
def count_unique(reviews, column):
'''
count_unique(reviews, column)
Извлекает и подсчитывает уникальные значения из колонки в DataFrame.
Parameters
----------
reviews : pandas.DataFrame
pandas.DataFrame, из которого будут извлечены значения.
column : str
Имя колонки в <reviews>.
Returns
-------
pandas.DataFrame
Содержит две колонки: с уникальными значениями и счётчиком встреченных.
'''
return reviews[column].value_counts().reset_index(name='count').rename({'index': column}, axis=1)
def filter_reviews(reviews, users=None, orgs=None):
'''
filter_reviews(reviews, users=None, orgs=None)
Оставляет в выборке только отзывы, оставленные заданными пользователями на заданные организации.
Parameters
----------
users: pandas.DataFrame, default=None
DataFrame, содержащий колонку <user_id>.
Если None, то фильтрация не происходит.
orgs: pandas.DataFrame, default=None
DataFrame, содержащий колонку <org_id>.
Если None, то фильтрация не происходит.
Returns
-------
pandas.DataFrame
Отфильтрованная выборка отзывов.
'''
if users is not None:
reviews = reviews.merge(users, on='user_id', how='inner')
clear_df(reviews)
if orgs is not None:
reviews = reviews.merge(orgs, on='org_id', how='inner')
clear_df(reviews)
return reviews
def train_test_split(reviews, ts_start, ts_end=None):
'''
train_test_split(reviews, ts_start, ts_end=None)
Разделяет выборку отзывов на две части: обучающую и тестовую.
В тестовую выборку попадают только отзывы с user_id и org_id, встречающимися в обучающей выборке.
Parameters
----------
reviews : pandas.DataFrame
Отзывы из reviews.csv с обязательными полями:
<rating>, <ts>, <user_id>, <user_city>, <org_id>, <org_city>.
ts_start : int
Первый день отзывов из тестовой выборки (включительно).
ts_end : int, default=None
Последний день отзывов из тестовой выборки (включительно)
Если параметр равен None, то ts_end == reviews['ts'].max().
Returns
-------
splitting : tuple
Кортеж из двух pandas.DataFrame такой же структуры, как и reviews:
в первом отзывы, попавшие в обучающую выборку, во втором - в тестовую.
'''
if not ts_end:
ts_end = reviews['ts'].max()
reviews_train = reviews[(reviews['ts'] < ts_start) | (reviews['ts'] > ts_end)]
reviews_test = reviews[(ts_start <= reviews['ts']) & (reviews['ts'] <= ts_end)]
# 1. Выбираем только отзывы на понравившиеся места у путешественников
reviews_test = reviews_test[reviews_test['rating'] >= 4.0]
reviews_test = reviews_test[reviews_test['user_city'] != reviews_test['org_city']]
# 2. Оставляем в тесте только тех пользователей и организации, которые встречались в трейне
train_orgs = extract_unique(reviews_train, 'org_id')
train_users = extract_unique(reviews_train, 'user_id')
reviews_test = filter_reviews(reviews_test, orgs=train_orgs)
return reviews_train, reviews_test
def process_reviews(reviews):
'''
process_reviews(reviews)
Извлекает из набора отзывов тестовых пользователей и таргет.
Parameters
----------
reviews : pandas.DataFrame
DataFrame с отзывами, содержащий колонки <user_id> и <org_id>
Returns
-------
X : pandas.DataFrame
DataFrame такой же структуры, как и в test_users.csv
y : pandas.DataFrame
DataFrame с колонками <user_id> и <target>.
В <target> содержится список org_id, посещённых пользователем.
'''
y = reviews.groupby('user_id')['org_id'].apply(list).reset_index(name='target')
X = pd.DataFrame(y['user_id'])
return X, y
reviews['ts'].max()
###Output
_____no_output_____
###Markdown
In total, the sample contains reviews spanning **1216** days. We set aside the reviews from the last **100** days as the test set.
###Code
train_reviews, test_reviews = train_test_split(reviews, 1116)
X_test, y_test = process_reviews(test_reviews)
###Output
_____no_output_____
###Markdown
Let's see how many unique users ended up in this test set:
###Code
len(X_test)
###Output
_____no_output_____
###Markdown
Metric. The metric takes as input two DataFrames with the same structure as **y_test**. `print_score` multiplies the raw metric value by 100, just as in the contest. The same implementation is used to score the **submission**.
###Code
def MNAP(size=20):
'''
MNAP(size=20)
Создаёт метрику под <size> сделанных предсказаний.
Parameters
----------
size : int, default=20
Размер рекомендованной выборки для каждого пользователя
Returns
-------
func(pd.DataFrame, pd.DataFrame) -> float
Функция, вычисляющая MNAP.
'''
assert size >= 1, "Size must be greater than zero!"
def metric(y_true, predictions, size=size):
'''
metric(y_true, predictions, size=size)
Метрика MNAP для двух перемешанных наборов <y_true> и <y_pred>.
Parameters
----------
y_true : pd.DataFrame
DataFrame с колонками <user_id> и <target>.
В <target> содержится список настоящих org_id, посещённых пользователем.
predictions : pd.DataFrame
DataFrame с колонками <user_id> и <target>.
В <target> содержится список рекомендованных для пользователя org_id.
Returns
-------
float
Значение метрики.
'''
y_true = y_true.rename({'target': 'y_true'}, axis='columns')
predictions = predictions.rename({'target': 'predictions'}, axis='columns')
merged = y_true.merge(predictions, left_on='user_id', right_on='user_id')
def score(x, size=size):
'''
Вспомогательная функция.
'''
y_true = x[1][1]
predictions = x[1][2][:size]
weight = 0
inner_weights = [0]
for n, item in enumerate(predictions):
inner_weight = inner_weights[-1] + (1 if item in y_true else 0)
inner_weights.append(inner_weight)
for n, item in enumerate(predictions):
if item in y_true:
weight += inner_weights[n + 1] / (n + 1)
return weight / min(len(y_true), size)
return np.mean([score(row) for row in merged.iterrows()])
return metric
def print_score(score):
print(f"Score: {score*100.0:.2f}")
N = 20
MNAP_N = MNAP(N)
###Output
_____no_output_____
###Markdown
Approaches without machine learning. Random N places. Let's try recommending users random places from the other city.
###Code
spb_orgs = orgs[orgs['city'] == 'spb']['org_id']
msk_orgs = orgs[orgs['city'] == 'msk']['org_id']
test_users_with_locations = X_test.merge(users, on='user_id')
%%time
np.random.seed(1337)
choose = lambda x: np.random.choice(spb_orgs, N) if x['city'] == 'msk' else np.random.choice(msk_orgs, N)
target = test_users_with_locations.apply(choose, axis=1)
predictions = X_test.copy()
predictions['target'] = target
print_score(MNAP_N(y_test, predictions))
###Output
Score: 0.02
CPU times: user 2.2 s, sys: 59.9 ms, total: 2.26 s
Wall time: 2.22 s
###Markdown
N most popular places. The previous approach obviously does a poor job of predicting which places a user will visit. Let's improve the strategy: we will recommend the most popular places, i.e. the ones with the most reviews.
###Code
msk_orgs = train_reviews[(train_reviews['rating'] >= 4) & (train_reviews['org_city'] == 'msk')]['org_id']
msk_orgs = msk_orgs.value_counts().index[:N].to_list()
spb_orgs = train_reviews[(train_reviews['rating'] >= 4) & (train_reviews['org_city'] == 'spb')]['org_id']
spb_orgs = spb_orgs.value_counts().index[:N].to_list()
%%time
choose = lambda x: spb_orgs if x['city'] == 'msk' else msk_orgs
target = test_users_with_locations.apply(choose, axis=1)
predictions = X_test.copy()
predictions['target'] = target
print_score(MNAP_N(y_test, predictions))
###Output
Score: 4.21
CPU times: user 637 ms, sys: 9.89 ms, total: 647 ms
Wall time: 647 ms
###Markdown
Great, the metric improved a little, but this tactic is worth refining further. N most popular places among tourists
###Code
tourist_reviews = train_reviews[train_reviews['rating'] >= 4.0]
# набор отзывов только от туристов
tourist_reviews = tourist_reviews[tourist_reviews['user_city'] != tourist_reviews['org_city']]
# выбираем самые популярные места среди туристов из Москвы и Питера
msk_orgs = tourist_reviews[tourist_reviews['org_city'] == 'msk']['org_id']
msk_orgs = msk_orgs.value_counts().index[:N].to_list()
spb_orgs = tourist_reviews[tourist_reviews['org_city'] == 'spb']['org_id']
spb_orgs = spb_orgs.value_counts().index[:N].to_list()
%%time
choose = lambda x: spb_orgs if x['city'] == 'msk' else msk_orgs
target = test_users_with_locations.apply(choose, axis=1)
predictions = X_test.copy()
predictions['target'] = target
print_score(MNAP_N(y_test, predictions))
###Output
Score: 6.40
CPU times: user 652 ms, sys: 5.35 ms, total: 657 ms
Wall time: 657 ms
###Markdown
The metric improved a bit more. N / rubrics_count most popular places from each rubric
###Code
def extract_top_by_rubrics(reviews, N):
'''
extract_top_by_rubrics(reviews, N)
Набирает самые популярные организации по рубрикам, сохраняя распределение.
Parameters
----------
reviews : pd.DataFrame
Отзывы пользователей для рекомендации.
N : int
Число рекомендаций.
Returns
-------
orgs_list : list
Список отобранных организаций.
'''
# извлечение популярных рубрик
reviews = reviews.merge(orgs, on='org_id')[['org_id', 'rubrics_id']]
rubrics = reviews.explode('rubrics_id').groupby('rubrics_id').size()
rubrics = (rubrics / rubrics.sum() * N).apply(round).sort_values(ascending=False)
# вывод списка рубрик по убыванию популярности
# print(
# pd.read_csv('data/rubrics.csv')
# .merge(rubrics.reset_index(), left_index=True, right_on='rubrics_id')
# .sort_values(by=0, ascending=False)[['rubric_id', 0]]
# )
# извлечение популярных организаций
train_orgs = reviews.groupby('org_id').size().reset_index(name='count').merge(orgs, on='org_id')
train_orgs = train_orgs[['org_id', 'count', 'rubrics_id']]
most_popular_rubric = lambda rubrics_id: max(rubrics_id, key=lambda rubric_id: rubrics[rubric_id])
train_orgs['rubrics_id'] = train_orgs['rubrics_id'].apply(most_popular_rubric)
orgs_by_rubrics = train_orgs.sort_values(by='count', ascending=False).groupby('rubrics_id')['org_id'].apply(list)
# соберём самые популярные организации в рубриках в один список
orgs_list = []
for rubric_id, count in zip(rubrics.index, rubrics):
if rubric_id not in orgs_by_rubrics:
continue
orgs_list.extend(orgs_by_rubrics[rubric_id][:count])
return orgs_list
msk_orgs = extract_top_by_rubrics(tourist_reviews[tourist_reviews['org_city'] == 'msk'], N)
spb_orgs = extract_top_by_rubrics(tourist_reviews[tourist_reviews['org_city'] == 'spb'], N)
%%time
choose = lambda x: spb_orgs if x['city'] == 'msk' else msk_orgs
target = test_users_with_locations.apply(choose, axis=1)
predictions = X_test.copy()
predictions['target'] = target
print_score(MNAP_N(y_test, predictions))
###Output
Score: 5.77
CPU times: user 642 ms, sys: 5 ms, total: 647 ms
Wall time: 647 ms
###Markdown
Time for ML! Collaborative filtering, memory-based. This group of methods requires explicitly building a __user-organization__ matrix (the __interaction matrix__), where the cell at the intersection of the $i$-th row and the $j$-th column holds the rating that the $i$-th user gave to the $j$-th organization, or a gap if no rating was given.
###Code
def reduce_reviews(reviews, min_user_reviews=5, min_org_reviews=13):
'''
reduce_reviews(reviews, min_user_reviews=5, min_org_reviews=13)
Убирает из выборки пользователей и организации, у которых менее <min_reviews> отзывов в родном городе.
Оставляет только отзывы туристов.
Parameters
----------
reviews : pandas.DataFrame
Выборка отзывов с обязательными полями:
<user_id>, <user_city>.
min_user_reviews : int, default=5
Минимальное количество отзывов у пользователя, необходимое для включения в выборку.
min_org_reviews : int, default=13
Минимальное количество отзывов у организации, необходимое для включения в выборку.
Returns
-------
splitting : tuple
Кортеж из двух наборов.
Каждый набор содержит 2 pandas.DataFrame:
1. Урезанная выборка отзывов
2. Набор уникальных организаций
Первый набор содержит DataFrame-ы, относящиеся к отзывам, оставленным в родном городе, а второй -
к отзывам, оставленным в чужом городе.
users : pd.DataFrame
Набор уникальных пользователей в выборке
'''
inner_reviews = reviews[reviews['user_city'] == reviews['org_city']]
outer_reviews = reviews[reviews['user_city'] != reviews['org_city']]
# оставляем только отзывы туристов на родной город
tourist_users = extract_unique(outer_reviews, 'user_id')
inner_reviews = filter_reviews(inner_reviews, users=tourist_users)
# выбираем только тех пользователей и организации, у которых есть <min_reviews> отзывов
top_users = count_unique(inner_reviews, 'user_id')
top_users = top_users[top_users['count'] >= min_user_reviews]
top_orgs = count_unique(inner_reviews, 'org_id')
top_orgs = top_orgs[top_orgs['count'] >= min_org_reviews]
inner_reviews = filter_reviews(inner_reviews, users=top_users, orgs=top_orgs)
outer_reviews = filter_reviews(outer_reviews, users=top_users)
# combine reviews
reviews = pd.concat([inner_reviews, outer_reviews])
users = extract_unique(reviews, 'user_id')
orgs = extract_unique(reviews, 'org_id')
return (
(
inner_reviews,
extract_unique(inner_reviews, 'org_id')
),
(
outer_reviews,
extract_unique(outer_reviews, 'org_id')
),
extract_unique(inner_reviews, 'user_id')
)
def create_mappings(df, column):
'''
create_mappings(df, column)
Создаёт маппинг между оригинальными ключами словаря и новыми порядковыми.
Parameters
----------
df : pandas.DataFrame
DataFrame с данными.
column : str
Название колонки, содержащей нужные ключи.
Returns
-------
code_to_idx : dict
Словарь с маппингом: "оригинальный ключ" -> "новый ключ".
idx_to_code : dict
Словарь с маппингом: "новый ключ" -> "оригинальный ключ".
'''
code_to_idx = {}
idx_to_code = {}
for idx, code in enumerate(df[column].to_list()):
code_to_idx[code] = idx
idx_to_code[idx] = code
return code_to_idx, idx_to_code
def map_ids(row, mapping):
'''
Вспомогательная функция
'''
return mapping[row]
def interaction_matrix(reviews, test_users, min_user_reviews=5, min_org_reviews=12):
'''
interaction_matrix(reviews, test_users, min_user_reviews=5, min_org_reviews=12)
Создаёт блочную матрицу взаимодействий (вид матрицы описан в Returns)
Parameters
----------
reviews : pd.DataFrame
Отзывы пользователей для матрицы взаимодействий.
test_users : pd.DataFrame
Пользователи, для которых будет выполняться предсказание.
min_user_reviews : int, default=5
Минимальное число отзывов от пользователя, необходимое для включения его в матрицу.
min_org_reviews : int, default=12
Минимальное число отзывов на организацию, необходимое для включения её в матрицу.
Returns
-------
InteractionMatrix : scipy.sparse.csr_matrix
Матрица, содержащая рейтинги, выставленные пользователями.
Она блочная и имеет такой вид:
---------------------------------------------------
| TRAIN USERS, INNER ORGS | TRAIN USERS, OUTER ORGS |
| | |
---------------------------------------------------
| TEST USERS, INNER ORGS | TEST USERS, OUTER ORGS |
| | |
---------------------------------------------------
splitting : tuple
Кортеж, содержащий два целых числа:
1. Число пользователей в обучающей выборке
2. Число организаций в домашнем регионе
splitting: tuple
Кортеж, содержащий два кортежа из двух словарей:
1. (idx_to_uid, uid_to_idx) - содержит маппинг индекса к user_id
2. (idx_to_oid, oid_to_idx) - содержит маппинг индекса к org_id
'''
info = reduce_reviews(train_reviews, min_user_reviews, min_org_reviews)
(inner_reviews, inner_orgs), (outer_reviews, outer_orgs), train_users = info
# удалим из обучающей выборки пользователей, которые есть в тестовой
test_users = test_users[['user_id']]
train_users = (
pd.merge(train_users, test_users, indicator=True, how='outer')
.query('_merge=="left_only"')
.drop('_merge', axis=1)
)
inner_reviews = filter_reviews(inner_reviews, train_users)
outer_reviews = filter_reviews(outer_reviews, train_users)
# оставляем отзывы, оставленные тестовыми пользователями
test_reviews = filter_reviews(reviews, test_users, pd.concat([inner_orgs, outer_orgs]))
# получаем полный набор маппингов
all_users = pd.concat([train_users, test_users])
all_orgs = pd.concat([inner_orgs, outer_orgs])
uid_to_idx, idx_to_uid = create_mappings(all_users, 'user_id')
oid_to_idx, idx_to_oid = create_mappings(all_orgs, 'org_id')
# собираем матрицу взаимодействий
reviews = pd.concat([inner_reviews, outer_reviews, test_reviews])
I = reviews['user_id'].apply(map_ids, args=[uid_to_idx]).values
J = reviews['org_id'].apply(map_ids, args=[oid_to_idx]).values
values = reviews['rating']
interactions = sparse.coo_matrix(
(values, (I, J)),
shape=(len(all_users), len(all_orgs)),
dtype=np.float64
).tocsr()
return (
interactions,
(len(train_users), len(inner_orgs)),
(
(idx_to_uid, uid_to_idx),
(idx_to_oid, oid_to_idx)
)
)
###Output
_____no_output_____
###Markdown
ALS
###Code
%%time
import implicit
def make_predictions(interactions, X_test, N):
'''
make_predictions(interactions, X_test, N)
Делает рекомендации для пользователей из <X_test> на основе матрицы взаимодействий.
Parameters
----------
interactions : scipy.sparse.csr_matrix
Разреженная матрица взаимодействий.
X_test : pd.DataFrame
Набор тестовых пользователей, для которых нужно сделать рекомендации.
N : int
Число рекомендаций для каждого пользователя.
Returns
-------
predictions : pd.DataFrame
DataFrame с колонками <user_id> и <target>.
В <target> содержится список рекомендованных для пользователя org_id.
'''
predictions = X_test[['user_id']].copy()
predictions['target'] = pd.Series(dtype=object)
predictions = predictions.set_index('user_id')
interactions, (train_users_len, inner_orgs_len), mappings = interactions
(idx_to_uid, uid_to_idx), (idx_to_oid, oid_to_idx) = mappings
base_model = implicit.als.AlternatingLeastSquares(
factors=5,
iterations=75,
regularization=0.05,
random_state=42
)
base_model.fit(interactions.T)
orgs_to_filter = list(np.arange(inner_orgs_len))
recommendations = base_model.recommend_all(
interactions,
N=N,
filter_already_liked_items=True,
filter_items=orgs_to_filter,
show_progress=True
)
for user_id in tqdm(X_test['user_id'].values, leave=False):
predictions.loc[user_id, 'target'] = list(
map(
lambda org_idx: idx_to_oid[org_idx],
recommendations[uid_to_idx[user_id]]
)
)
return predictions.reset_index()
msk_interactions = interaction_matrix(
train_reviews[train_reviews['user_city'] == 'msk'],
test_users_with_locations[test_users_with_locations['city'] == 'msk'],
)
spb_interactions = interaction_matrix(
train_reviews[train_reviews['user_city'] == 'spb'],
test_users_with_locations[test_users_with_locations['city'] == 'spb'],
)
test_msk_users = test_users_with_locations[test_users_with_locations['city'] == 'msk']
test_spb_users = test_users_with_locations[test_users_with_locations['city'] == 'spb']
msk_predictions = make_predictions(msk_interactions, test_msk_users, N)
spb_predictions = make_predictions(spb_interactions, test_spb_users, N)
predictions = pd.concat([msk_predictions, spb_predictions])
%%time
print_score(MNAP_N(y_test, predictions))
###Output
Score: 0.85
CPU times: user 592 ms, sys: 12.3 ms, total: 604 ms
Wall time: 607 ms
###Markdown
Submission. We pick the best method on validation, retrain it on the full dataset, and make predictions for the test set. Without ML
###Code
# набор отзывов только от туристов
tourist_reviews = reviews[reviews['rating'] >= 4.0]
tourist_reviews = tourist_reviews[tourist_reviews['user_city'] != tourist_reviews['org_city']]
# выбираем самые популярные места среди туристов из Москвы и Питера
msk_orgs = tourist_reviews[tourist_reviews['org_city'] == 'msk']['org_id']
msk_orgs = msk_orgs.value_counts().index[:N].to_list()
spb_orgs = tourist_reviews[tourist_reviews['org_city'] == 'spb']['org_id']
spb_orgs = spb_orgs.value_counts().index[:N].to_list()
msk_orgs = str(' '.join(map(str, msk_orgs)))
spb_orgs = str(' '.join(map(str, spb_orgs)))
test_users = pd.read_csv('data/test_users.csv')
test_users['city'] = test_users.merge(users, on='user_id')['city']
choose = lambda x: spb_orgs if x['city'] == 'msk' else msk_orgs
target = test_users.apply(choose, axis=1)
predictions = test_users[['user_id']]
predictions['target'] = target
predictions.head()
predictions.to_csv('answers.csv', index=None)
###Output
_____no_output_____
###Markdown
With ML
###Code
test_users = pd.read_csv('data/test_users.csv')
test_users = test_users.merge(users, on='user_id')
test_msk_users = test_users[test_users['city'] == 'msk'][['user_id', 'city']]
test_spb_users = test_users[test_users['city'] == 'spb'][['user_id', 'city']]
msk_interactions = interaction_matrix(
reviews[reviews['user_city'] == 'msk'],
test_msk_users
)
spb_interactions = interaction_matrix(
reviews[reviews['user_city'] == 'spb'],
test_spb_users
)
msk_predictions = make_predictions(msk_interactions, test_msk_users, N)
spb_predictions = make_predictions(spb_interactions, test_spb_users, N)
predictions = pd.concat([msk_predictions, spb_predictions])
predictions['target'] = predictions['target'].apply(lambda orgs: ' '.join(map(str, orgs)))
predictions.head()
predictions.to_csv('answers_ml.csv', index=None)
###Output
_____no_output_____ |
Data-Science-HYD-2k19/Topic-Wise/NUMPY/numpy/np.multi index.ipynb | ###Markdown
np.round(matrix_name/any_number,round_till_this_digit) method:
###Code
#rn is the alias for the random given at the starting of this class (check Day 20)
from numpy.random import randn as rn
np.random.seed(101)
df1 = pd.DataFrame(data = np.round(rn(6,3),2),index = higher_index,columns = ['A','B','C'])
df1
df1.to_csv('new.csv')
df1.index.names = ["outside","inner"]
df1.xs('G1')
df1.xs(2,level='inner')
###Output
_____no_output_____ |
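###Markdown
As a small illustration of the rounding step used above (a sketch, independent of df1):
###Code
arr = rn(2, 2)        # random 2x2 matrix
np.round(arr / 3, 2)  # divide by a number and keep 2 decimal places
###Output
_____no_output_____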
old/Models-a2a.ipynb | ###Markdown
Baseline. Testing Task (Round-2): 3 {'score_acc': 0.5167118337850045, 'score_secondary_spearman': 0.3149635036496349, 'meta': {'MRR': 0.895, 'Precision': 0.5167118337850045}} Models: BioBERT
###Code
import torch
from transformers import BertTokenizer, BertModel
# OPTIONAL: if you want to have more information on what's happening, activate the logger as follows
import logging
#logging.basicConfig(level=logging.INFO)
import matplotlib.pyplot as plt
# % matplotlib inline
# Load pre-trained model tokenizer (vocabulary)
# tokenizer = BertTokenizer.from_pretrained("dmis-lab/biobert-large-cased-v1.1")
# tokenizer = BertTokenizer.from_pretrained("dmis-lab/biobert-v1.1")
tokenizer = BertTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
# model = BertModel.from_pretrained('dmis-lab/biobert-large-cased-v1.1',
# output_hidden_states = True, # Whether the model returns all hidden-states.
# )
###Output
_____no_output_____
###Markdown
Train and test Datasets
###Code
sentences = [[q.question, a] for q in QA for a in q.answers]
flatten = lambda t: [item for sublist in t for item in sublist]
labels = flatten([q.labels for q in QA])
sentences_test = [[q.question, a] for q in QA_test for a in q.answers]
labels_test = flatten([q.labels for q in QA_test])
sum(labels)/len(labels)
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
BATCH_SIZE = 8
BATCH_SIZE_TEST = 64
max_len_seq = 512
class MEDIQA_Dataset(Dataset):
def __init__(self, X, y, transform=None):
self.X = []
self.y = np.array(y)
for q, a in X:
_q = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("[CLS] " + q + " [SEP]"))[:max_len_seq]
_q += [0]*(max_len_seq-len(_q))
_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("[CLS] " + a + " [SEP]"))[:max_len_seq]
_a += [0]*(max_len_seq-len(_a))
self.X.append([_q, _a])
self.X = np.array(self.X)
def __len__(self):
return self.X.shape[0]
def __getitem__(self, index):
score = torch.FloatTensor([self.y[index]])
q = torch.LongTensor(self.X[index][0])
a = torch.LongTensor(self.X[index][1])
return score, q, a
# Create train dataset
train_dataset = MEDIQA_Dataset(X=sentences, y=labels)
train_loader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)
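# Quick sanity check (sketch): each item should be a triple of
# (label: FloatTensor[1], question ids: LongTensor[512], answer ids: LongTensor[512]).
sample_score, sample_q, sample_a = train_dataset[0]
print(sample_score.shape, sample_q.shape, sample_a.shape)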
# Create test dataset
test_dataset = MEDIQA_Dataset(X=sentences_test, y=labels_test)
test_loader = DataLoader(test_dataset, batch_size=BATCH_SIZE_TEST, shuffle=False)
len(sentences)
max_len_seq = 0
lenghts = []
for q,a in sentences:
lenghts.append(len(a))
if len(a)>max_len_seq:
max_len_seq = len(a)
max_len_seq
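# NOTE: the loop above reuses the name max_len_seq for the longest answer measured in
# characters (not tokens); the tokenizer truncation inside MEDIQA_Dataset already used
# the original value of 512, so this reassignment only affects the exploration here.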
sorted(lenghts, reverse=True)
# len(bert_clf.bert.encoder.layer)
# bert_clf.bert.config.hidden_size
import torch.nn as nn
from torch.nn import functional as F
class MEDIQA_Model(nn.Module):
def __init__(self):
super(MEDIQA_Model, self).__init__()
# self.bert = BertModel.from_pretrained('dmis-lab/biobert-v1.1')
self.bert = BertModel.from_pretrained('emilyalsentzer/Bio_ClinicalBERT')
modules = [self.bert.embeddings, *self.bert.encoder.layer[:-1]] #Replace 5 by what you want
for module in modules:
for param in module.parameters():
param.requires_grad = False
self.linear1 = nn.Linear(2*self.bert.config.hidden_size, 512)
self.linear2 = nn.Linear(512, 128)
self.linear3 = nn.Linear(128, 1)
self.sigmoid = nn.Sigmoid()
def forward(self, q, a):
# _, pooled_output = self.bert(tokens, output_all=False)
# print(q.shape, a.shape)
CLS1 = self.get_CLS(q)
CLS2 = self.get_CLS(a)
# print('CLS:', CLS1.shape, CLS2.shape)
x = torch.cat([CLS1, CLS2], dim=1)
# print('concat:', x.shape)
x = F.dropout(x, 0.2, training=self.training)
x = self.linear1(x)
x = nn.LeakyReLU(0.1)(x)
x = F.dropout(x, 0.2, training=self.training)
x = self.linear2(x)
x = nn.LeakyReLU(0.1)(x)
x = F.dropout(x, 0.1, training=self.training)
x = self.linear3(x)
prob = self.sigmoid(x)
return prob, CLS1, CLS2
def get_CLS(self, indexed_tokens):
# Map the token strings to their vocabulary indeces.
# indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
# segments_ids = [1] * len(indexed_tokens)
tokens_tensor = indexed_tokens
# segments_tensors = torch.tensor([segments_ids])
outputs = self.bert(tokens_tensor)
CLS = outputs[0][:,0,:]
return CLS
def get_full_sentence_embedding(self, sentence):
embeddings = []
e = 0
max_size = 1024#512
for i in range(int(len(sentence)/max_size)+1):
# print(i, max_size*(i+1), len(sentence)/max_size)
# e = get_bert_sentence_embedding(sentence[i*max_size:max_size*(i+1)])
e = self.get_CLS(sentence[i*max_size:max_size*(i+1)])
# print(e)
embeddings.append(e)
embedding = torch.mean(torch.stack(embeddings), dim=0)
print(embedding)
return embedding
cpu = torch.device('cpu')
cuda = torch.device('cuda')
device = cuda if torch.cuda.is_available() else cpu
# del bert_clf
# torch.cuda.empty_cache()
import gc
gc.collect()
print(torch.cuda.memory_allocated())
print(torch.cuda.max_memory_allocated())
bert_clf = MEDIQA_Model()
bert_clf = bert_clf.to(device)
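# Shape sanity check (sketch): push a tiny batch of random token ids through the
# classifier just to confirm tensor shapes; the ids are meaningless, so the resulting
# probabilities are too. Expected: probs [2, 1] and CLS embeddings [2, hidden_size]
# (768 for this base-sized model).
with torch.no_grad():
    dummy_q = torch.randint(0, tokenizer.vocab_size, (2, 512), device=device)
    dummy_a = torch.randint(0, tokenizer.vocab_size, (2, 512), device=device)
    dummy_prob, dummy_cls_q, dummy_cls_a = bert_clf(dummy_q, dummy_a)
    print(dummy_prob.shape, dummy_cls_q.shape, dummy_cls_a.shape)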
from sklearn.metrics import accuracy_score
def get_test_acc(model, return_probs_and_labels=False):
model.eval()
pred_probs = []
with torch.no_grad():
for s,q,a in tqdm.tqdm(test_loader):
logits, _, _ = model(q.to(device),a.to(device))
pred_probs.extend(logits.to('cpu'))
pred_probs = np.array([x.item() for x in pred_probs])
pred_labels = (pred_probs > 0.5).astype(np.int16)
acc = accuracy_score(labels_test, pred_labels)
if return_probs_and_labels:
return acc, pred_probs, pred_labels
else:
return acc
a = [1,2,3,4]
a[-3:]
import tqdm
# bert_clf = MEDIQA_Model()
# bert_clf = bert_clf.to(device)
optimizer = torch.optim.Adam(bert_clf.parameters(), lr=1e-4)
bert_clf.train()
EPOCHS = 200
EARLY_STOPPING = 5
loss_func = nn.BCELoss()
def ranking_loss(x1, x2, y):
def dist(a,b):
cos = nn.CosineSimilarity(dim=0)
return 1-cos(a,b)
margin = 0.5
loss = y * dist(x1, x2) + (1 - y) * torch.clamp(margin - dist(x1, x2), min=0)
return loss
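# Illustrative check of the (otherwise unused) ranking loss on toy embeddings:
# a pair labelled similar (y=1) with identical vectors should give ~0 loss, while a
# very similar pair labelled dissimilar (y=0) should be penalised by roughly the margin.
emb_a = torch.ones(4)
emb_b = torch.tensor([1.0, 1.0, 1.0, 0.9])
print(ranking_loss(emb_a, emb_a.clone(), 1))  # ~0
print(ranking_loss(emb_a, emb_b, 0))          # close to the 0.5 margin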
train_losses, test_losses, test_acc = [], [], []
for epoch_num in range(EPOCHS):
losses = []
for step_num, batch_data in enumerate(train_loader):
y_true, questions, answers = batch_data #tuple(t.to(device) for t in batch_data)
if questions.shape != answers.shape: continue
logits, _, _ = bert_clf(questions.to(device), answers.to(device))
loss = loss_func(logits, y_true.to(device))
bert_clf.zero_grad()
loss.backward()
print('step', loss.item(), end="\r")
losses.append(loss.item())
optimizer.step()
del y_true
del questions
del answers
torch.cuda.empty_cache()
print()
print(f'Epoch {epoch_num+1} mean loss:', np.mean(losses))
train_losses.append(np.mean(losses))
acc, probs_labels, _ = get_test_acc(bert_clf, return_probs_and_labels=True)
test_acc.append(acc)
test_loss = loss_func(torch.from_numpy(probs_labels), torch.from_numpy(np.array(labels_test, dtype=np.double))).item()
test_losses.append(test_loss)
print(f'Test acc:', acc, ' Test loss:', test_loss)
print()
if len(test_acc) <= 1 or acc > max(test_acc[:-1]):
torch.save(bert_clf.state_dict(), 'checkpoints/model')
if len(test_acc) > EARLY_STOPPING and test_acc[-(EARLY_STOPPING+1)] > max(test_acc[-EARLY_STOPPING:]):
print('Early stopping')
# recover best execution
model = MEDIQA_Model()
model.load_state_dict(torch.load('checkpoints/model'))
break
import matplotlib.pyplot as plt
plt.plot(train_losses, label='train loss')
plt.plot(test_losses, label='test loss')
plt.plot(test_acc, label='test acc')
plt.legend()
plt.show()
# Save model
torch.save(bert_clf.state_dict(), 'models/mediqa_model_biobert_2')
# Load model
# model = MEDIQA_Model()
# model.load_state_dict(torch.load(PATH))
# model.eval()
acc, probs, y_pred = get_test_acc(model.to(device), return_probs_and_labels=True)
def get_ranking_predictions(probs, y):
rankings = []
entailed = []
i_start = 0
for i, q in enumerate(QA_test):
rankings.append(1- np.array(probs[i_start:i_start+len(q.answers)]))
entailed.append(y[i_start:i_start+len(q.answers)])
i_start += len(q.answers)
assert len(rankings[i]) == len(QA_test[i].answer_ids)
assert len(entailed[i]) == len(QA_test[i].answers)
return rankings, entailed
ranking_pred, labels_pred = get_ranking_predictions(probs, y_pred)
QA_test.output_predictions(ranking_pred, labels_pred, file='test_biobert2')
evaluate('test_biobert2')
bert_clf.eval()
pred_labels = []
bert_clf.to(device)
with torch.no_grad():
for s,q,a in tqdm.tqdm(test_loader):
logits = bert_clf(q.to(device),a.to(device))
pred_labels.extend(logits.to('cpu'))
pred_labels = np.array([x.item() for x in pred_labels])
pred = (pred_labels > 0.5).astype(np.int16)
pred
from sklearn.metrics import accuracy_score
accuracy_score(labels_test, pred)
torch.cat([torch.tensor([1,2,3]), torch.tensor([1,2,3])], dim=0)
torch.tensor([1,2,3]).shape
for b in train_loader:
print(b)
from sentence_transformers import SentenceTransformer
from scipy.spatial.distance import cosine
from scipy.stats import pearsonr  # used for the correlation score below
cpu = torch.device('cpu')
cuda = torch.device('cuda')
device = cuda if torch.cuda.is_available() else cpu
model_names = [
'bert-base-nli-stsb-mean-tokens',
'bert-large-nli-stsb-mean-tokens',
'roberta-base-nli-stsb-mean-tokens',
'roberta-large-nli-stsb-mean-tokens',
'distilbert-base-nli-stsb-mean-tokens'
]
for name in model_names:
model = SentenceTransformer(name)
model = model.to(cuda)
representations_a = []
representations_b = []
with torch.no_grad():
for (sent_a, sent_b) in tqdm.tqdm(test_sentences, desc='Embedding Sentences', ncols=800):
sentences_embeddings = model.encode([sent_a, sent_b])
representations_a.append(sentences_embeddings[0])
representations_b.append(sentences_embeddings[1])
obtained_scores = []
for idx, (repr_a, repr_b) in enumerate(zip(representations_a, representations_b)):
score = 1 - cosine(repr_a, repr_b)
obtained_scores.append(score)
corr_score = pearsonr(test_scores[:len(obtained_scores)], obtained_scores)[0]
print(f'{name}: {corr_score}')
###Output
100%|██████████| 405M/405M [00:07<00:00, 51.2MB/s]
###Markdown
BioELMo: https://docs.allennlp.org/v1.0.0rc5/tutorials/how_to/elmo/ and https://github.com/allenai/allennlp/blob/main/allennlp/modules/elmo.py
###Code
# ! pip install allennlp
import torch
from allennlp.modules.elmo import Elmo, batch_to_ids
# ! pip install allennlp
# ! pip install allennlp-models
cpu = torch.device('cpu')
cuda = torch.device('cuda')
device = cuda if torch.cuda.is_available() else cpu
from allennlp.modules.elmo import Elmo
# elmo = Elmo(
# options_file='https://s3-us-west-2.amazonaws.com/allennlp/models/elmo/2x4096_512_2048cnn_2xhighway_5.5B/elmo_2x4096_512_2048cnn_2xhighway_5.5B_options.json',
# weight_file='https://s3-us-west-2.amazonaws.com/allennlp/models/elmo/2x4096_512_2048cnn_2xhighway_5.5B/elmo_2x4096_512_2048cnn_2xhighway_5.5B_weights.hdf5',
# num_output_representations=3,
# dropout=0
# )
bioelmo = Elmo(
options_file='bioelmo/biomed_elmo_options.json',
weight_file='bioelmo/biomed_elmo_weights.hdf5',
num_output_representations=3,
dropout=0
)
bioelmo = bioelmo.to(device)
sentences = [['First', 'sentence', '.'], ['Another', '.'], ["I", "ate", "a", "carrot", "for", "breakfast"]]
character_ids = batch_to_ids(sentences).to(device)
embeddings = bioelmo(character_ids)
character_ids
def get_elmo_embedding(sentence):
tokens = nltk.word_tokenize(sentence)
# print(tokens)
sentences = [tokens]
character_ids = batch_to_ids(sentences).to(device)
return bioelmo(character_ids)['elmo_representations'][2].mean(dim=0).mean(dim=0)
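# Quick check (sketch): the pooled BioELMo embedding should be a single 1024-dim vector.
demo_emb = get_elmo_embedding("Aspirin reduces fever .")
print(demo_emb.shape)  # expected: torch.Size([1024])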
embeddings['elmo_representations'][0].mean(dim=0).mean(dim=0)
K = 0
q = get_elmo_embedding(QA[K].question)
ans = [get_elmo_embedding(a) for a in QA[K].answers]
from scipy.spatial.distance import cosine
print('Label,Rank,Similarity')
for i,a in enumerate(ans):
sim = 1-cosine(q.detach().cpu(), a.detach().cpu())
print(QA.labels[K][i], QA.references[K][i], sim)
QA[0].labels
sentences = [[q.question, a] for q in QA for a in q.answers]
flatten = lambda t: [item for sublist in t for item in sublist]
labels = flatten([q.labels for q in QA])
sentences_test = [[q.question, a] for q in QA_test for a in q.answers]
labels_test = flatten([q.labels for q in QA_test])
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
BATCH_SIZE = 8
BATCH_SIZE_TEST = 64
max_len_seq = 512
class MEDIQA_Dataset2(Dataset):
def __init__(self, X, y, transform=None):
self.X = []
self.y = np.array(y)
for q, a in X:
_q = batch_to_ids([nltk.word_tokenize(q)])
_a = batch_to_ids([nltk.word_tokenize(a)])
self.X.append([_q, _a])
# self.X = np.array(self.X)
def __len__(self):
# return self.X.shape[0]
return len(self.X)
def __getitem__(self, index):
score = torch.FloatTensor([self.y[index]])
q = torch.LongTensor(self.X[index][0])
a = torch.LongTensor(self.X[index][1])
return score, q, a
# Create train dataset
train_dataset = MEDIQA_Dataset2(X=sentences, y=labels)
train_loader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)
# Create test dataset
test_dataset = MEDIQA_Dataset2(X=sentences_test, y=labels_test)
test_loader = DataLoader(test_dataset, batch_size=BATCH_SIZE_TEST, shuffle=False)
import torch.nn as nn
from torch.nn import functional as F
class MEDIQA_Model_bioELMo(nn.Module):
def __init__(self):
super(MEDIQA_Model_bioELMo, self).__init__()
self.bioelmo = Elmo(
options_file='bioelmo/biomed_elmo_options.json',
weight_file='bioelmo/biomed_elmo_weights.hdf5',
num_output_representations=3,
dropout=0
)
for param in self.bioelmo.parameters():
param.requires_grad = False
# self.bioelmo = bioelmo.to(device)
# modules = [self.bert.embeddings, *self.bert.encoder.layer[:-1]] #Replace 5 by what you want
# for module in modules:
# for param in module.parameters():
# param.requires_grad = False
self.linear1 = nn.Linear(2*1024, 128)
self.linear2 = nn.Linear(128, 1)
self.sigmoid = nn.Sigmoid()
def forward(self, q, a):
# _, pooled_output = self.bert(tokens, output_all=False)
# print(q.shape, a.shape)
q_emb = self.get_elmo_embedding(q)
a_emb = self.get_elmo_embedding(a)
# print('CLS:', CLS1.shape, CLS2.shape)
x = torch.cat([q_emb, a_emb], dim=0)
# print('concat:', x.shape)
x = F.dropout(x, 0.2)
x = self.linear1(x)
x = nn.LeakyReLU(0.1)(x)
x = F.dropout(x, 0.1)
x = self.linear2(x)
prob = self.sigmoid(x)
return prob, q_emb, a_emb
def get_elmo_embedding(self, sentence):
# tokens = nltk.word_tokenize(sentence)
# print(tokens)
# sentences = [tokens]
# character_ids = batch_to_ids(sentence).to(device)
return self.bioelmo(sentence)['elmo_representations'][2].mean(dim=0).mean(dim=0)
bioelmo_clf = MEDIQA_Model_bioELMo()
bioelmo_clf = bioelmo_clf.to(device)
from sklearn.metrics import accuracy_score
def get_test_acc(model, return_probs_and_labels=False):
model.eval()
pred_probs = []
with torch.no_grad():
for s,(q,a) in tqdm.tqdm(zip(test_dataset.y, test_dataset.X)):
logits, _, _ = model(q.to(device),a.to(device))
pred_probs.extend(logits.to('cpu'))
pred_probs = np.array([x.item() for x in pred_probs])
pred_labels = (pred_probs > 0.5).astype(np.int16)
acc = accuracy_score(labels_test, pred_labels)
if return_probs_and_labels:
return acc, pred_probs, pred_labels
else:
return acc
# Train
def batch(iterable, n=1):
l = len(iterable)
for ndx in range(0, l, n):
yield iterable[ndx:min(ndx + n, l)]
import tqdm
import random
# bert_clf = MEDIQA_Model()
# bert_clf = bert_clf.to(device)
optimizer = torch.optim.Adam(bioelmo_clf.parameters(), lr=1e-3)
bioelmo_clf.train()
EPOCHS = 100
loss_func = nn.BCELoss()
train_losses, test_losses, test_acc = [], [], []
N = len(train_dataset.y)
for epoch_num in range(EPOCHS):
losses = []
for step_num, batch_data in enumerate(list(zip(train_dataset.y, train_dataset.X))):
y_true, (questions, answers) = batch_data #tuple(t.to(device) for t in batch_data)
# q_e = bioelmo_clf.get_elmo_embedding(questions.to(device))
# a_e = bioelmo_clf.get_elmo_embedding(answers.to(device))
# print(q_e)
# print(a_e)
# x = torch.cat([q_e, a_e], dim=0)
# print(y_true)
logits, _, _ = bioelmo_clf(questions.to(device), answers.to(device))
y_true = torch.from_numpy(np.array([y_true], dtype=np.float32))
loss = loss_func(logits, y_true.to(device))
bioelmo_clf.zero_grad()
loss.backward()
print(f'step {step_num}/{N}', loss.item(), end="\r")
losses.append(loss.item())
optimizer.step()
del y_true
del questions
del answers
torch.cuda.empty_cache()
print()
print(f'Epoch {epoch_num+1}:', np.mean(losses))
train_losses.append(np.mean(losses))
# acc = get_test_acc(bioelmo_clf)
acc, probs_labels, _ = get_test_acc(bioelmo_clf, return_probs_and_labels=True)
test_acc.append(acc)
test_loss = loss_func(torch.from_numpy(probs_labels), torch.from_numpy(np.array(labels_test, dtype=np.double))).item()
test_losses.append(test_loss)
print(f'Test acc:', acc, ' Test loss:', test_loss)
print()
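    # early stopping: quit if the test accuracy has not improved over the last 5 epochs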
if len(test_acc) > 5 and test_acc[-6] > max(test_acc[-5:]):
print('Early stopping')
break
###Output
step 1699/1701 0.6242034435272217
|
Scala Programming for Data Science/Data Science with Scala/Module 2: Preparing Data/3.2.2.ipynb | ###Markdown
" Module 2: Preparing Data Handling Missing Data and Imputing Values Lesson Objectives - After completing this lesson, you should be able to: - Drop records according to different criteria- Fill missing data according to different criteria- Drop duplicate records DataFrame NA Functions - The `na` method of DataFrames provides functionality for working with missing data - Returns an instance of `DataFrameNAFunctions`- The following methods are available: - `drop`, for dropping rows containing NaN or null values - `fill`, for replacing NaN or null values - `replace`, for replacing values matching specified keys
###Code
import org.apache.spark.sql.SparkSession
val spark = SparkSession.builder().getOrCreate()
import spark.implicits._
import org.apache.spark.sql.functions._
val df = spark.range(0, 10).select("id").
withColumn("uniform", rand(10L)).withColumn("normal", randn(10L))
val halfTonNaN = udf[Double, Double] (x => if (x > 0.5) Double.NaN else x)
val oneToNaN = udf[Double, Double] (x => if (x > 1.0) Double.NaN else x)
val dfnan = df.withColumn("nanUniform", halfTonNaN(df("uniform"))).
withColumn("nanNormal", oneToNaN(df("normal"))).drop("uniform").
withColumnRenamed("nanUniform", "uniform").drop("normal").
withColumnRenamed("nanNoemal", "normal")
dfnan.show()
###Output
+---+-------------------+--------------------+
| id| uniform| nanNormal|
+---+-------------------+--------------------+
| 0|0.41371264720975787| -0.5877482396744728|
| 1| NaN| NaN|
| 2| 0.1982919638208397| -0.256535324205377|
| 3|0.12714181165849525|-0.31703264334668824|
| 4| NaN| 0.4977629425313746|
| 5|0.12030715258495939| -0.506853671746243|
| 6|0.12131363910425985| NaN|
| 7|0.44292918521277047| -0.1413699193557902|
| 8| NaN| 0.9657665088756656|
| 9|0.03650707717266999| -0.5021009082343131|
+---+-------------------+--------------------+
###Markdown
DataFrame NA Functions - drop - `drop` is used for dropping rows containing `NaN` or `null` values according to a criterion - Several implementations are available: - `drop(minNonNulls, cols)` - `drop(minNonNulls)` - `drop(how,cols)` - `drop(cols)` - `drop(how)` - `drop()` - `cols` is an `Array` or `Seq` of column names - `how` should be either `any` or `all`
###Code
// Dropping Rows With minNonNulls Argument
dfnan.na.drop(minNonNulls = 3).show()
// Dropping Rows With How Argument
dfnan.na.drop("all", Array("uniform", "nanNormal")).show()
// Dropping Rows With How Argument
dfnan.na.drop("any", Array("uniform", "nanNormal")).show()
###Output
+---+-------------------+--------------------+
| id| uniform| nanNormal|
+---+-------------------+--------------------+
| 0|0.41371264720975787| -0.5877482396744728|
| 2| 0.1982919638208397| -0.256535324205377|
| 3|0.12714181165849525|-0.31703264334668824|
| 5|0.12030715258495939| -0.506853671746243|
| 7|0.44292918521277047| -0.1413699193557902|
| 9|0.03650707717266999| -0.5021009082343131|
+---+-------------------+--------------------+
###Markdown
DataFrame NA Functions - fill - `fill` is used for replacing NaN or null values according to a criterion - Several implementations are available: - `fill(valueMap)` - `fill(value,cols)` - `fill(value)`
###Code
// Filling Missing Data By Column Type
dfnan.na.fill(0.0).show()
// Filling Missing Data With Column Defaults
val uniformMean = dfnan.filter("uniform <> 'NaN'").groupBy().agg(mean("uniform")).first()(0)
dfnan.na.fill(Map("uniform" -> uniformMean)).show(5)
// Filling Missing Data With Column Defaults
val dfCols = dfnan.columns.drop(1)
val dfMeans = dfnan.na.drop().groupBy().
agg(mean("uniform"), mean("nanNormal")).first().toSeq
val meansMap = (dfCols.zip(dfMeans)).toMap
dfnan.na.fill(meansMap).show(5)
###Output
+---+-------------------+--------------------+
| id| uniform| nanNormal|
+---+-------------------+--------------------+
| 0|0.41371264720975787| -0.5877482396744728|
| 1| 0.0| 0.0|
| 2| 0.1982919638208397| -0.256535324205377|
| 3|0.12714181165849525|-0.31703264334668824|
| 4| 0.0| 0.4977629425313746|
| 5|0.12030715258495939| -0.506853671746243|
| 6|0.12131363910425985| 0.0|
| 7|0.44292918521277047| -0.1413699193557902|
| 8| 0.0| 0.9657665088756656|
| 9|0.03650707717266999| -0.5021009082343131|
+---+-------------------+--------------------+
+---+-------------------+--------------------+
| id| uniform| nanNormal|
+---+-------------------+--------------------+
| 0|0.41371264720975787| -0.5877482396744728|
| 1|0.20860049668053607| NaN|
| 2| 0.1982919638208397| -0.256535324205377|
| 3|0.12714181165849525|-0.31703264334668824|
| 4|0.20860049668053607| 0.4977629425313746|
+---+-------------------+--------------------+
only showing top 5 rows
+---+-------------------+--------------------+
| id| uniform| nanNormal|
+---+-------------------+--------------------+
| 0|0.41371264720975787| -0.5877482396744728|
| 1| 0.2231483062765821|-0.38527345109381406|
| 2| 0.1982919638208397| -0.256535324205377|
| 3|0.12714181165849525|-0.31703264334668824|
| 4| 0.2231483062765821| 0.4977629425313746|
+---+-------------------+--------------------+
only showing top 5 rows
###Markdown
DataFrame NA Functions - replace - `replace` is used for replacing values matching specified keys - the `cols` argument may be a single column name or an array - the replacement argument is a map: - `key` is the value to be matched - `value` is the replacement value itself
###Code
//Replacing Values in a DataFrame
dfnan.na.replace("uniform", Map(Double.NaN -> 0.0)).show()
###Output
_____no_output_____
###Markdown
Duplicates - `dropDuplicates` is a `DataFrame` method - Used to remove duplicate rows - May specify a subset of columns to check for duplicates
###Code
// Dropping Duplicate Rows
val dfDuplicates = df.unionAll(sc.parallelize(Seq((10,1,1),(11,1,1))).toDF())
// Dropping Duplicate Rows
val dfCols = dfnan.withColumnRenamed("nanNormal", "normal").columns
dfDuplicates.dropDuplicates(dfCols).show()
###Output
_____no_output_____ |
SuportVectorMachines.ipynb | ###Markdown
Support Vector Machines in Python for Engineers and Geoscientists Michael Pyrcz, Associate Professor, University of Texas at Austin Contacts: [Twitter/@GeostatsGuy](https://twitter.com/geostatsguy) | [GitHub/GeostatsGuy](https://github.com/GeostatsGuy) | [www.michaelpyrcz.com](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) This is a tutorial for / demonstration of **support vector machine modeling in Python**. We have included in our workflow some simple wrappers and reimplementations of GSLIB: Geostatistical Library methods (Deutsch and Journel, 1997). Support vector machines are a powerful method for machine learning classification. The support vector machine is a generalization of the maximal margin classifier that deals with categories that cannot be separated linearly. This exercise demonstrates the support vector machine approach in Python with wrappers and reimplementations of GSLIB methods. The steps include: 1. generate a 2D sequential Gaussian simulation using a wrapper of GSLIB's sgsim method 2. add a trend (to simplify the segmentation problem) and truncate to build a categorical, exhaustive truth model 3. extract random samples from the truth model 4. separate into training and testing (20%) datasets 5. build support vector machine classifiers with simple linear and polynomial kernels 6. tune the "C" coefficient for the polynomial model with k-fold cross validation 7. compare the tuned, polynomial model with the simple linear kernel model with confusion matrices To accomplish this I have provided wrappers or reimplementations in Python for the following GSLIB methods. Only sgsim and locpix are used for this workflow: 1. sgsim - sequential Gaussian simulation limited to 2D and unconditional 2. hist - histogram plots reimplemented with GSLIB parameters using Python methods 3. locmap - location maps reimplemented with GSLIB parameters using Python methods 4. pixelplt - pixel plots reimplemented with GSLIB parameters using Python methods 5. locpix - my modification of GSLIB to superimpose a location map on a pixel plot reimplemented with GSLIB parameters using Python methods 6. affine - affine correction to adjust the mean and standard deviation of a feature reimplemented with GSLIB parameters using Python methods 7. varmap - variogram map 8. gam - regularly sampled data variograms 9. gamv - irregularly sampled data variograms 10. nscore - normal score transform (data transformation to Gaussian with a mean of zero and a standard deviation of one) These methods are all in the functions declared upfront. To run this demo all one has to do is download and place in your working directory the following executables from the GSLIB/bin directory: 1. sgsim.exe The GSLIB source and executables are available at http://www.statios.com/Quick/gslib.html. For reference on using GSLIB check out the User Guide, GSLIB: Geostatistical Software Library and User's Guide by Clayton V. Deutsch and Andre G. Journel. I used this tutorial in my Introduction to Geostatistics undergraduate class (PGE337 at UT Austin) as part of a first introduction to geostatistics and Python for the engineering undergraduate students. It is assumed that students have no previous Python, geostatistics nor machine learning experience; therefore, all steps of the code and workflow are explored and described. This tutorial is augmented with course notes in my class. 
The Python code and markdown were developed and tested in Jupyter. Project Goal - Prediction of categories, low or high porosity, away from sample data values over the area of interest. Load the required libraries - The following code loads the required libraries.
###Code
import os # to set current working directory
import numpy as np # arrays and matrix math
import pandas as pd # DataFrames
import matplotlib.pyplot as plt # plotting
from sklearn.model_selection import train_test_split # training and testing datasets
from sklearn.metrics import confusion_matrix # for sumarizing model performance
import itertools # assist with iteration used in plot_confusion_matrix
###Output
_____no_output_____
###Markdown
If you get a package import error, you may have to first install some of these packages. This can usually be accomplished by opening up a command window on Windows and then typing 'python -m pip install [package-name]'. More assistance is available with the respective package docs. Declare functionsHere are the wrappers and reimplementations of GSLIB method along with 4 utilities to move between GSLIB's Geo-EAS data sets and DataFrames, and grids and 2D Numpy arrays respectively and 2 utilities to resample from regular datasets. Available GSLIB functions include:1. hist2. pixelplt3. locmap4. locpix5. vargplt6. affine7. nscore8. declus9. gam10. gamv11. vmodel12. sgsim13. visualize_model - to visualize the machine learning solution space14. plot_svc_decision_function - visualize the model with margins included15. plot_confusion_matrix - plot confusion matrixFor now we embed the functions in the workflow below. In the future this will be turned into a proper Python package. Warning, there has been no attempt to make these functions robust in the precense of bad inputs. If you get a crazy error check the inputs. Are the arrays empty and are they the same size when they should be? Are the arrays the correct dimension? Is the parameter order mixed up? Make sure the inputs are consistent with the descriptions in this document.
###Code
# utility to convert GSLIB Geo-EAS files to a pandas DataFrame for use with Python methods
def GSLIB2Dataframe(data_file):
import os
import numpy as np
import pandas as pd
colArray = []
with open(data_file) as myfile: # read first two lines
head = [next(myfile) for x in range(2)]
line2 = head[1].split()
ncol = int(line2[0])
for icol in range(0, ncol):
head = [next(myfile) for x in range(1)]
colArray.append(head[0].split()[0])
data = np.loadtxt(myfile, skiprows = 0)
df = pd.DataFrame(data)
df.columns = colArray
return df
# utility to convert pandas DataFrame to a GSLIB Geo-EAS file for use with GSLIB methods
def Dataframe2GSLIB(data_file,df):
colArray = []
colArray = df.columns
ncol = len(df.columns)
nrow = len(df.index)
file_out = open(data_file, "w")
file_out.write(data_file + '\n')
file_out.write(str(ncol) + '\n')
for icol in range(0, ncol):
file_out.write(df.columns[icol] + '\n')
for irow in range(0, nrow):
for icol in range(0, ncol):
file_out.write(str(df.iloc[irow,icol])+ ' ')
file_out.write('\n')
file_out.close()
# utility to convert GSLIB Geo-EAS files to a numpy ndarray for use with Python methods
def GSLIB2ndarray(data_file,kcol,nx,ny):
import os
import numpy as np
colArray = []
if ny > 1:
array = np.ndarray(shape=(nx,ny),dtype=float,order='F')
else:
array = np.zeros(nx)
with open(data_file) as myfile: # read first two lines
head = [next(myfile) for x in range(2)]
line2 = head[1].split()
ncol = int(line2[0]) # get the number of columns
for icol in range(0, ncol): # read over the column names
head = [next(myfile) for x in range(1)]
if icol == kcol:
col_name = head[0].split()[0]
for iy in range(0,ny):
for ix in range(0,nx):
head = [next(myfile) for x in range(1)]
array[ny-1-iy][ix] = head[0].split()[kcol]
return array,col_name
# utility to convert numpy ndarray to a GSLIB Geo-EAS file for use with GSLIB methods
def ndarray2GSLIB(array,data_file,col_name):
file_out = open(data_file, "w")
file_out.write(data_file + '\n')
file_out.write('1 \n')
file_out.write(col_name + '\n')
if array.ndim == 2:
nx = array.shape[0]
ny = array.shape[1]
ncol = 1
for iy in range(0, ny):
for ix in range(0, nx):
file_out.write(str(array[ny-1-iy,ix])+ '\n')
elif array.ndim == 1:
nx = len(array)
for ix in range(0, nx):
file_out.write(str(array[ix])+ '\n')
else:
Print("Error: must use a 2D array")
return
file_out.close()
# histogram, reimplemented in Python of GSLIB hist with MatPlotLib methods
def hist(array,xmin,xmax,log,cumul,bins,weights,xlabel,title):
plt.figure(figsize=(8,6))
cs = plt.hist(array, alpha = 0.2, color = 'red', edgecolor = 'black', bins=bins, range = [xmin,xmax], weights = weights, log = log, cumulative = cumul)
plt.title(title)
plt.xlabel(xlabel); plt.ylabel('Frequency')
plt.show()
return
# pixel plot, reimplemention in Python of GSLIB pixelplt with MatPlotLib methods
def pixelplt(array,xmin,xmax,ymin,ymax,step,vmin,vmax,title,xlabel,ylabel,vlabel,cmap):
xx, yy = np.meshgrid(np.arange(xmin, xmax, step),np.arange(ymax, ymin, -1*step))
plt.figure(figsize=(8,6))
im = plt.contourf(xx,yy,array,cmap=cmap,vmin=vmin,vmax=vmax,levels=np.linspace(vmin,vmax,100))
plt.title(title)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
cbar = plt.colorbar(im,orientation = 'vertical',ticks=np.linspace(vmin,vmax,10))
cbar.set_label(vlabel, rotation=270, labelpad=20)
plt.show()
return im
# location map, reimplemention in Python of GSLIB locmap with MatPlotLib methods
def locmap(df,xcol,ycol,vcol,xmin,xmax,ymin,ymax,vmin,vmax,title,xlabel,ylabel,vlabel,cmap):
ixy = 0
plt.figure(figsize=(8,6))
im = plt.scatter(df[xcol],df[ycol],s=None, c=df[vcol], marker=None, cmap=cmap, norm=None, vmin=vmin, vmax=vmax, alpha=0.8, linewidths=0.8, verts=None, edgecolors="black")
plt.title(title)
plt.xlim(xmin,xmax)
plt.ylim(ymin,ymax)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
cbar = plt.colorbar(im, orientation = 'vertical',ticks=np.linspace(vmin,vmax,10))
cbar.set_label(vlabel, rotation=270, labelpad=20)
plt.show()
return im
def vargplt(lag,gamma,npair,vtype,name,xmin,xmax,ymin,ymax,sill,title,cmap):
plt.figure(figsize=(8,6))
marker = ["o","v","s","h","^",">","<"]
im = 0
if type(lag)==type(list()):
if vtype==0:
im = plt.scatter(lag,gamma,s=None, c=npair, marker=None, label = name,cmap=cmap, norm=None, alpha=0.8, linewidths=0.8, verts=None, edgecolors="black")
else:
plt.plot(lag,gamma,'C3',lw=3,c='black')
else:
        nvar = lag.shape[0]
for ivar in range(0, nvar):
if vtype[ivar]==0:
im = plt.scatter(lag[ivar],gamma[ivar],s=None,label = name[ivar],c=npair[ivar], marker=marker[ivar], cmap=cmap, norm=None, alpha=0.8, linewidths=0.8, verts=None, edgecolors="black")
else:
plt.plot(lag[ivar],gamma[ivar], 'C3', lw=3,c='black')
ixy = 0
plt.title(title)
plt.xlim(xmin,xmax)
plt.ylim(ymin,ymax)
plt.xlabel('Lag Distance (m)')
plt.ylabel('Variogram')
plt.arrow(0,sill,xmax,0,width=0.002,color='red',head_length=0.0,head_width=0.0)
plt.legend(loc = 'lower right')
if im != 0:
cbar = plt.colorbar(im, orientation = 'vertical')
cbar.set_label('Number of Pairs', rotation=270, labelpad=20)
plt.show()
return im
# pixelplt with location map superimposed, reimplementation in Python of a MOD from GSLIB with MatPlotLib methods
def locpix(array,xmin,xmax,ymin,ymax,step,vmin,vmax,df,xcol,ycol,vcol,title,xlabel,ylabel,vlabel,cmap):
xx, yy = np.meshgrid(np.arange(xmin, xmax, step),np.arange(ymax, ymin, -1*step))
plt.figure(figsize=(8,6))
cs = plt.contourf(xx, yy, array, cmap=cmap,vmin=vmin, vmax=vmax, levels=np.linspace(vmin,vmax,100))
im = plt.scatter(df[xcol],df[ycol],s=None, c=df[vcol], marker=None, cmap=cmap, norm=None, vmin=vmin, vmax=vmax, alpha=0.8, linewidths=0.8, verts=None, edgecolors="black")
plt.xlim(xmin,xmax-step)
plt.ylim(ymin+step,ymax)
plt.title(title)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
cbar = plt.colorbar(orientation = 'vertical',ticks=np.linspace(vmin,vmax,10))
cbar.set_label(vlabel, rotation=270, labelpad=20)
plt.show()
return cs
# affine distribution correction reimplemented in Python with numpy methods
def affine(array,tmean,tstdev):
if array.ndim != 2:
Print("Error: must use a 2D array")
return
nx = array.shape[0]
ny = array.shape[1]
mean = np.average(array)
stdev = np.std(array)
for iy in range(0,ny):
for ix in range(0,nx):
array[ix,iy]= (tstdev/stdev)*(array[ix,iy] - mean) + tmean
return(array)
# normal score transform, wrapper for nscore from GSLIB (.exe must be in working directory)(not used in this demo)
def nscore(x):
import os
import numpy as np
file = 'nscore_out.dat'
ndarray2GSLIB(x,"nscore.dat","value")
file = open("nscore.par", "w")
file.write(" Parameters for NSCORE \n")
file.write(" ********************* \n")
file.write(" \n")
file.write("START OF PARAMETERS: \n")
file.write("nscore.dat -file with data \n")
file.write("1 0 - columns for variable and weight \n")
file.write("-1.0e21 1.0e21 - trimming limits \n")
file.write("0 -1=transform according to specified ref. dist. \n")
file.write("../histsmth/histsmth.out - file with reference dist. \n")
file.write("1 2 - columns for variable and weight \n")
file.write("nscore.out -file for output \n")
file.write("nscore.trn -file for output transformation table \n")
file.close()
os.system('nscore.exe nscore.par')
file_in = 'nscore.out'
y,name = GSLIB2ndarray('nscore.out',1,nx,ny)
return(y)
# cell-based declustering, 2D wrapper for declus from GSLIB (.exe must be in working directory)
def declus(df,xcol,ycol,vcol,cmin,cmax,cnum,bmin):
import os
import numpy as np
nrow = len(df)
weights = []
file = 'declus_out.dat'
file_out = open(file, "w")
file_out.write('declus_out.dat' + '\n')
file_out.write('3' + '\n')
file_out.write('x' + '\n')
file_out.write('y' + '\n')
file_out.write('value' + '\n')
for irow in range(0, nrow):
file_out.write(str(df.iloc[irow][xcol])+' '+str(df.iloc[irow][ycol])+' '+str(df.iloc[irow][vcol])+' \n')
file_out.close()
file = open("declus.par", "w")
file.write(" Parameters for DECLUS \n")
file.write(" ********************* \n")
file.write(" \n")
file.write("START OF PARAMETERS: \n")
file.write("declus_out.dat -file with data \n")
file.write("1 2 0 3 - columns for X, Y, Z, and variable \n")
file.write("-1.0e21 1.0e21 - trimming limits \n")
file.write("declus.sum -file for summary output \n")
file.write("declus.out -file for output with data & weights \n")
file.write("1.0 1.0 -Y and Z cell anisotropy (Ysize=size*Yanis) \n")
file.write(str(bmin) + " -0=look for minimum declustered mean (1=max) \n")
file.write(str(cnum) + " " + str(cmin) + " " + str(cmax) + " -number of cell sizes, min size, max size \n")
file.write("5 -number of origin offsets \n")
file.close()
os.system('declus.exe declus.par')
df = GSLIB2Dataframe("declus.out")
for irow in range(0, nrow):
weights.append(df.iloc[irow,3])
return(weights)
# regular grid variogram, 2D wrapper for gam from GSLIB (.exe must be in working directory)
def gam_2d(array,nx,ny,hsiz,nlag,xlag,ylag,bstand):
import os
import numpy as np
lag = []; gamma = []; npair = []
ndarray2GSLIB(array,"gam_out.dat","gam.dat")
file = open("gam.par", "w")
file.write(" Parameters for GAM \n")
file.write(" ****************** \n")
file.write(" \n")
file.write("START OF PARAMETERS: \n")
file.write("gam_out.dat -file with data \n")
file.write("1 1 0 - number of variables, column numbers \n")
file.write("-1.0e21 1.0e21 - trimming limits \n")
file.write("gam.out -file for variogram output \n")
file.write("1 -grid or realization number \n")
file.write(str(nx) + " 0.0 " + str(hsiz) + " -nx, xmn, xsiz \n")
file.write(str(ny) + " 0.0 " + str(hsiz) + " -ny, ymn, ysiz \n")
file.write(" 1 0.5 1.0 -nz, zmn, zsiz \n")
file.write("1 " + str(nlag) + " -number of directions, number of lags \n")
file.write(str(xlag) + " " + str(ylag) + " 0 -ixd(1),iyd(1),izd(1) \n")
file.write("1 -standardize sill? (0=no, 1=yes) \n")
file.write("1 -number of variograms \n")
file.write("1 1 1 -tail variable, head variable, variogram type \n")
file.close()
os.system('gam.exe gam.par')
reading = True
with open("gam.out") as myfile:
head = [next(myfile) for x in range(1)] # skip the first line
iline = 0
while reading:
try:
head = [next(myfile) for x in range(1)]
lag.append(head[0].split()[1])
gamma.append(head[0].split()[2])
npair.append(head[0].split()[3])
iline = iline + 1
except StopIteration:
reading = False
return(lag,gamma,npair)
# regular grid variogram, 2D wrapper for gam from GSLIB (.exe must be in working directory)
def gamv_2d(df,xcol,ycol,vcol,nlag,lagdist,azi,atol,bstand):
import os
import numpy as np
lag = []; gamma = []; npair = []
    df_ext = pd.DataFrame({'X':df[xcol],'Y':df[ycol],'Z':df[vcol]})
Dataframe2GSLIB("gamv_out.dat",df_ext)
file = open("gamv.par", "w")
file.write(" Parameters for GAMV \n")
file.write(" ******************* \n")
file.write(" \n")
file.write("START OF PARAMETERS: \n")
file.write("gamv_out.dat -file with data \n")
file.write("1 2 0 - columns for X, Y, Z coordinates \n")
file.write("1 3 0 - number of variables,col numbers \n")
file.write("-1.0e21 1.0e21 - trimming limits \n")
file.write("gamv.out -file for variogram output \n")
file.write(str(nlag) + " -number of lags \n")
file.write(str(lagdist) + " -lag separation distance \n")
file.write(str(lagdist*0.5) + " -lag tolerance \n")
file.write("1 -number of directions \n")
file.write(str(azi) + " " + str(atol) + " 99999.9 0.0 90.0 50.0 -azm,atol,bandh,dip,dtol,bandv \n")
file.write(str(bstand) + " -standardize sills? (0=no, 1=yes) \n")
file.write("1 -number of variograms \n")
file.write("1 1 1 -tail var., head var., variogram type \n")
file.close()
os.system('gamv.exe gamv.par')
reading = True
with open("gamv.out") as myfile:
head = [next(myfile) for x in range(1)] # skip the first line
iline = 0
while reading:
try:
head = [next(myfile) for x in range(1)]
lag.append(head[0].split()[1])
gamma.append(head[0].split()[2])
npair.append(head[0].split()[3])
iline = iline + 1
except StopIteration:
reading = False
return(lag,gamma,npair)
# irregular spaced data, 2D wrapper for varmap from GSLIB (.exe must be in working directory)
def varmapv_2d(df,xcol,ycol,vcol,nx,ny,lagdist,minpairs,vmax,bstand,title,vlabel):
import os
import numpy as np
lag = []; gamma = []; npair = []
    df_ext = pd.DataFrame({'X':df[xcol],'Y':df[ycol],'Z':df[vcol]})
Dataframe2GSLIB("varmap_out.dat",df_ext)
file = open("varmap.par", "w")
file.write(" Parameters for VARMAP \n")
file.write(" ********************* \n")
file.write(" \n")
file.write("START OF PARAMETERS: \n")
file.write("varmap_out.dat -file with data \n")
file.write("1 3 - number of variables: column numbers \n")
file.write("-1.0e21 1.0e21 - trimming limits \n")
file.write("0 -1=regular grid, 0=scattered values \n")
file.write(" 50 50 1 -if =1: nx, ny, nz \n")
file.write("1.0 1.0 1.0 - xsiz, ysiz, zsiz \n")
file.write("1 2 0 -if =0: columns for x,y, z coordinates \n")
file.write("varmap.out -file for variogram output \n")
file.write(str(nx) + " " + str(ny) + " 0 " + "-nxlag, nylag, nzlag \n")
file.write(str(lagdist) + " " + str(lagdist) + " 1.0 -dxlag, dylag, dzlag \n")
file.write(str(minpairs) + " -minimum number of pairs \n")
file.write(str(bstand) + " -standardize sill? (0=no, 1=yes) \n")
file.write("1 -number of variograms \n")
file.write("1 1 1 -tail, head, variogram type \n")
file.close()
os.system('varmap.exe varmap.par')
nnx = nx*2+1; nny = ny*2+1
varmap, name = GSLIB2ndarray("varmap.out",0,nnx,nny)
xmax = ((float(nx)+0.5)*lagdist); xmin = -1*xmax;
ymax = ((float(ny)+0.5)*lagdist); ymin = -1*ymax;
pixelplt(varmap,xmin,xmax,ymin,ymax,lagdist,0,vmax,title,'X','Y',vlabel,cmap)
return(varmap)
# regular spaced data, 2D wrapper for varmap from GSLIB (.exe must be in working directory)
def varmap_2d(array,nx,ny,hsiz,nlagx,nlagy,minpairs,vmax,bstand,title,vlabel):
import os
import numpy as np
ndarray2GSLIB(array,"varmap_out.dat","gam.dat")
file = open("varmap.par", "w")
file.write(" Parameters for VARMAP \n")
file.write(" ********************* \n")
file.write(" \n")
file.write("START OF PARAMETERS: \n")
file.write("varmap_out.dat -file with data \n")
file.write("1 1 - number of variables: column numbers \n")
file.write("-1.0e21 1.0e21 - trimming limits \n")
file.write("1 -1=regular grid, 0=scattered values \n")
file.write(str(nx) + " " + str(ny) + " 1 -if =1: nx, ny, nz \n")
file.write(str(hsiz) + " " + str(hsiz) + " 1.0 - xsiz, ysiz, zsiz \n")
file.write("1 2 0 -if =0: columns for x,y, z coordinates \n")
file.write("varmap.out -file for variogram output \n")
file.write(str(nlagx) + " " + str(nlagy) + " 0 " + "-nxlag, nylag, nzlag \n")
file.write(str(hsiz) + " " + str(hsiz) + " 1.0 -dxlag, dylag, dzlag \n")
file.write(str(minpairs) + " -minimum number of pairs \n")
file.write(str(bstand) + " -standardize sill? (0=no, 1=yes) \n")
file.write("1 -number of variograms \n")
file.write("1 1 1 -tail, head, variogram type \n")
file.close()
os.system('varmap.exe varmap.par')
nnx = nlagx*2+1; nny = nlagy*2+1
varmap, name = GSLIB2ndarray("varmap.out",0,nnx,nny)
xmax = ((float(nlagx)+0.5)*hsiz); xmin = -1*xmax;
ymax = ((float(nlagy)+0.5)*hsiz); ymin = -1*ymax;
pixelplt(varmap,xmin,xmax,ymin,ymax,hsiz,0,vmax,title,'X','Y',vlabel,cmap)
return(varmap)
# variogram model, 2D wrapper for vmodel from GSLIB (.exe must be in working directory)
def vmodel_2d(nlag,step,azi,nug,nst,tstr1,c1,azi1,rmaj1,rmin1,tstr2=1,c2=0,azi2=0,rmaj2=0,rmin2=0):
import os
import numpy as np
lag = []; gamma = []
file = open("vmodel.par", "w")
file.write(" \n")
file.write(" Parameters for VMODEL \n")
file.write(" ********************* \n")
file.write(" \n")
file.write("START OF PARAMETERS: \n")
file.write("vmodel.var -file for variogram output \n")
file.write("1 " + str(nlag) + " -number of directions and lags \n")
file.write(str(azi) + " 0.0 " + str(step) + " -azm, dip, lag distance \n")
file.write(str(nst) + " " + str(nug) + " -nst, nugget effect \n")
file.write(str(tstr1) + " " + str(c1) + " " + str(azi1) + " 0.0 0.0 0.0 -it,cc,ang1,ang2,ang3 \n")
file.write(str(rmaj1) + " " + str(rmin1) + " 0.0 -a_hmax, a_hmin, a_vert \n")
file.write(str(tstr2) + " " + str(c2) + " " + str(azi2) + " 0.0 0.0 0.0 -it,cc,ang1,ang2,ang3 \n")
file.write(str(rmaj2) + " " + str(rmin2) + " 0.0 -a_hmax, a_hmin, a_vert \n")
file.close()
os.system('vmodel.exe vmodel.par')
reading = True
with open("vmodel.var") as myfile:
head = [next(myfile) for x in range(1)] # skip the first line
iline = 0
while reading:
try:
head = [next(myfile) for x in range(1)]
lag.append(head[0].split()[1])
gamma.append(head[0].split()[2])
iline = iline + 1
except StopIteration:
reading = False
return(lag,gamma)
# sequential Gaussian simulation, 2D unconditional wrapper for sgsim from GSLIB (.exe must be in working directory)
def GSLIB_sgsim_2d_uncond(nreal,nx,ny,hsiz,seed,hrange1,hrange2,azi,output_file):
import os
import numpy as np
hmn = hsiz * 0.5
hctab = int(hrange1/hsiz)*2 + 1
sim_array = np.random.rand(nx,ny)
file = open("sgsim.par", "w")
file.write(" Parameters for SGSIM \n")
file.write(" ******************** \n")
file.write(" \n")
file.write("START OF PARAMETER: \n")
file.write("none -file with data \n")
file.write("1 2 0 3 5 0 - columns for X,Y,Z,vr,wt,sec.var. \n")
file.write("-1.0e21 1.0e21 - trimming limits \n")
file.write("0 -transform the data (0=no, 1=yes) \n")
file.write("none.trn - file for output trans table \n")
file.write("1 - consider ref. dist (0=no, 1=yes) \n")
file.write("none.dat - file with ref. dist distribution \n")
file.write("1 0 - columns for vr and wt \n")
file.write("-4.0 4.0 - zmin,zmax(tail extrapolation) \n")
file.write("1 -4.0 - lower tail option, parameter \n")
file.write("1 4.0 - upper tail option, parameter \n")
file.write("0 -debugging level: 0,1,2,3 \n")
file.write("nonw.dbg -file for debugging output \n")
file.write(str(output_file) + " -file for simulation output \n")
file.write(str(nreal) + " -number of realizations to generate \n")
file.write(str(nx) + " " + str(hmn) + " " + str(hsiz) + " \n")
file.write(str(ny) + " " + str(hmn) + " " + str(hsiz) + " \n")
file.write("1 0.0 1.0 - nz zmn zsiz \n")
file.write(str(seed) + " -random number seed \n")
file.write("0 8 -min and max original data for sim \n")
file.write("12 -number of simulated nodes to use \n")
file.write("0 -assign data to nodes (0=no, 1=yes) \n")
file.write("1 3 -multiple grid search (0=no, 1=yes),num \n")
file.write("0 -maximum data per octant (0=not used) \n")
file.write(str(hrange1) + " " + str(hrange2) + " 1.0 -maximum search (hmax,hmin,vert) \n")
file.write(str(azi) + " 0.0 0.0 -angles for search ellipsoid \n")
file.write(str(hctab) + " " + str(hctab) + " 1 -size of covariance lookup table \n")
file.write("0 0.60 1.0 -ktype: 0=SK,1=OK,2=LVM,3=EXDR,4=COLC \n")
file.write("none.dat - file with LVM, EXDR, or COLC variable \n")
file.write("4 - column for secondary variable \n")
file.write("1 0.0 -nst, nugget effect \n")
file.write("1 1.0 " + str(azi) + " 0.0 0.0 -it,cc,ang1,ang2,ang3 \n")
file.write(" " + str(hrange1) + " " + str(hrange2) + " 1.0 -a_hmax, a_hmin, a_vert \n")
file.close()
os.system('"sgsim.exe sgsim.par"')
sim_array = GSLIB2ndarray(output_file,0,nx,ny)
return(sim_array)
# extract regular spaced samples from a model
def regular_sample(array,xmin,xmax,ymin,ymax,step,mx,my,name):
x = []; y = []; v = []; iix = 0; iiy = 0;
xx, yy = np.meshgrid(np.arange(xmin, xmax, step),np.arange(ymax, ymin, -1*step))
iiy = 0
for iy in range(0,ny):
if iiy >= my:
iix = 0
for ix in range(0,nx):
if iix >= mx:
x.append(xx[ix,iy]);y.append(yy[ix,iy]); v.append(array[ix,iy])
iix = 0; iiy = 0
iix = iix + 1
iiy = iiy + 1
df = pd.DataFrame(np.c_[x,y,v],columns=['X', 'Y', name])
return(df)
# extract random set of samples from a model
def random_sample(array,xmin,xmax,ymin,ymax,step,nsamp,name):
import random as rand
x = []; y = []; v = []; iix = 0; iiy = 0;
xx, yy = np.meshgrid(np.arange(xmin, xmax, step),np.arange(ymax-1, ymin-1, -1*step))
nx = xx.shape[0]
ny = xx.shape[1]
sample_index = rand.sample(range((nx)*(ny)), nsamp)
for isamp in range(0,nsamp):
iy = int(sample_index[isamp]/ny)
ix = sample_index[isamp] - iy*nx
x.append(xx[ix,iy])
y.append(yy[ix,iy])
v.append(array[ix,iy])
df = pd.DataFrame(np.c_[x,y,v],columns=['X', 'Y', name])
return(df)
def visualize_model(model,xfeature,yfeature,response,title,):# plots the data points and the decision tree prediction
n_classes = 10
cmap = plt.cm.RdYlBu
plot_step = 10.0
plt.figure(figsize=(8,6))
x_min, x_max = min(xfeature) - 1, max(xfeature) + 1
y_min, y_max = min(yfeature) - 1, max(yfeature) + 1
resp_min = round(min(response)); resp_max = round(max(response));
xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
np.arange(y_min, y_max, plot_step))
z_min = round(min(response)); z_max = round(max(response))
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=cmap,vmin=z_min, vmax=z_max)
im = plt.scatter(xfeature,yfeature,s=None, c=response, marker=None, cmap=cmap, norm=None, vmin=z_min, vmax=z_max, alpha=0.8, linewidths=0.3, verts=None, edgecolors="black")
plt.title(title)
plt.xlabel(xfeature.name)
plt.ylabel(yfeature.name)
cbar = plt.colorbar(im, orientation = 'vertical')
cbar.set_label(response.name, rotation=270, labelpad=20)
#plt.show()
return(plt)
def plot_svc_decision_function(model,plt,xmin,xmax,ymin,ymax, plot_support=True): # modified from Jake VanderPlas's Python Data Science Handbook
"""Plot the decision function for a 2D SVC"""
xlim = [xmin,xmax]
ylim = [ymin,ymax]
# create grid to evaluate model
x = np.linspace(xlim[0], xlim[1], 30)
y = np.linspace(ylim[0], ylim[1], 30)
Y, X = np.meshgrid(y, x)
xy = np.vstack([X.ravel(), Y.ravel()]).T
P = model.decision_function(xy).reshape(X.shape)
# plot decision boundary and margins
plt.contour(X, Y, P, colors='k',
levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
# plot support vectors
if plot_support:
plt.scatter(model.support_vectors_[:, 0],
model.support_vectors_[:, 1],
s=3, linewidth=15, facecolors='black');
plt.xlim(xlim)
plt.ylim(ylim)
plt.show()
def plot_confusion_matrix(cm, classes, # from scikit-learn docs on confusion matrix
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
###Output
_____no_output_____
###Markdown
Set the working directory - I always like to do this so I don't lose files and to simplify subsequent reads and writes (avoid including the full address each time). Also, in this case make sure to place the required (see above) GSLIB executables in this directory or a location identified in the environmental variable *Path*.
###Code
os.chdir("c:/PGE337/Variogram") # set the working directory
###Output
_____no_output_____
###Markdown
You will have to update the part in quotes with your own working directory and the format is different on a Mac (e.g. "~/PGE"). Make a 2D spatial model - The following are the basic parameters for the demonstration. This includes the number of cells in the 2D regular grid, the cell size (step) and the x and y min and max along with the color scheme. Then we make a single realization of a Gaussian distributed feature over the specified 2D grid and then apply affine correction to ensure we have a reasonable mean and spread for our feature's distribution, assumed to be Porosity (e.g. no negative values) while retaining the Gaussian distribution. We then add a y-coordinate trend and then truncate the continuous simulation into 1 - high porosity and -1 - low porosity. We are keeping this workflow simple. *This is our exhaustive, truth model that we will sample*. We added the trend to create a region of low at the top of the model and a region of high at the base of the model. The resulting truth data set can be segmented with linear and low order polynomial kernels. The parameters of *GSLIB_sgsim_2d_uncond* are (nreal,nx,ny,hsiz,seed,hrange1,hrange2,azi,output_file). nreal is the number of realizations, nx and ny are the number of cells in x and y, hsiz is the cell size, seed is the random number seed, hrange1 and hrange2 are the variogram ranges in major and minor directions respectively, azi is the azimuth of the primary direction of continuity (0 is aligned with Y axis) and output_file is a Geo-EAS file with the simulated realization. The output is the 2D numpy array of the simulation along with the name of the property.
###Code
nx = 100; ny = 100; cell_size = 10 # grid number of cells and cell size
xmin = 0.0; ymin = 0.0; # grid origin
xmax = xmin + nx * cell_size; ymax = ymin + ny * cell_size # calculate the extent of model
seed = 75075 # random number seed for stochastic simulation
range_max = 1000; range_min = 800; azimuth = 90 # Porosity variogram ranges and azimuth
mean = 10.0; stdev = 2.0 # Porosity mean and standard deviation
#cmap = plt.cm.RdYlBu
vmin = 4; vmax = 16; cmap = plt.cm.plasma # color min and max and using the plasma color map
# calculate a stochastic realization with standard normal distribution
sim,value = GSLIB_sgsim_2d_uncond(1,nx,ny,cell_size,seed,range_max,range_min,azimuth,"simulation")
sim = affine(sim,mean,stdev) # correct the distribution to a target mean and standard deviation.
###Output
_____no_output_____
###Markdown
Now let's add a y-direction trend and truncate into 2 categories, -1 and 1, where 1 is high porosity and -1 is low porosity.
###Code
csim = np.zeros((nx,ny)) # declare a new 2D array
threshold = 10.0; trend_range = 10 # set a threshold and trend magnitude
for iy in range(0, ny): # add trend and stochastic residual and truncate
for ix in range(0, nx):
sim[ix,iy] = sim[ix,iy] + (ix-50)/100 * trend_range
if(sim[ix,iy]<threshold):
csim[ix,iy] = -1
else:
csim[ix,iy] = 1
pixelplt(csim,xmin,xmax,ymin,ymax,cell_size,-1,1,"Truncated Porosity Realization","X(m)","Y(m)","Categories",cmap)
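# (optional check, not in the original workflow) proportion of high-porosity cells in the truth model
print('Proportion of high porosity cells:', (csim == 1).mean())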
###Output
_____no_output_____
###Markdown
Now let's extract some data samples from our truth model to train and test our support vector machine classification model and separate into training (80%) and testing (20%) subsets.
###Code
rand_sample = random_sample(csim,xmin,xmax,ymin,ymax,cell_size,300,"TPorosity")
train, test = train_test_split(rand_sample, test_size=0.2)
locpix(csim,xmin,xmax,ymin,ymax,cell_size,-1,1,train,'X','Y','TPorosity','Training Data Truncated Porosity Realization and Random Samples','X(m)','Y(m)','Truncated Porosity',cmap)
locmap(train,'X','Y','TPorosity',xmin,xmax,ymin,ymax,-1,1,'Training Dataset Truncated Porosity Samples','X(m)','Y(m)','Truncated Porosity',cmap)
###Output
_____no_output_____
###Markdown
Here's the testing data that we will withhold.
###Code
locpix(csim,xmin,xmax,ymin,ymax,cell_size,-1,1,test,'X','Y','TPorosity','Testing Dataset Truncated Porosity Realization and Random Samples','X(m)','Y(m)','Truncated Porosity',cmap)
locmap(test,'X','Y','TPorosity',xmin,xmax,ymin,ymax,-1,1,'Testing Dataset Truncated Porosity Samples','X(m)','Y(m)','Truncated Porosity',cmap)
###Output
_____no_output_____
###Markdown
It's a good idea to look at the summary statistics of our training data.
###Code
train.describe().transpose() # calculate summary statistics for the data
###Output
_____no_output_____
###Markdown
We observed that the sampled data exist between 0 and 1,000 in both X and Y coordinates and that the truncated Porosity (TPorosity) variable is categorical -1 or 1. It would be trivial to calculate the proportion of high ($prop_{high}$) and low ($prop_{low}$) porosity. $m = prop_{high} \times 1 + prop_{low} \times -1$ $prop_{high} = \frac{m + 1}{2} = \frac{-0.04 + 1}{2} = 0.48$ $prop_{low} = 1 - prop_{high} = 1 - 0.48 = 0.52$ We need to reformat our data into the inputs used to train the support vector machine. Now we will extract the features (X and Y) into an array,
###Code
train_X = train.iloc[:,0:2]
train_X[:7]
###Output
_____no_output_____
###Markdown
and the categorical response (truncated porosity) into a separate array.
###Code
train_Y = train.iloc[:,2:3]
train_Y[:7]
###Output
_____no_output_____
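###Markdown
(Optional aside, not part of the original workflow.) The hand calculation of the category proportions above can be checked directly from the training samples; the exact values depend on the random sample drawn.
###Code
# optional check of the proportion calculation above
prop_high = (train_Y["TPorosity"] == 1).mean()
prop_low = 1.0 - prop_high
print('prop_high =', prop_high, ' prop_low =', prop_low)
###Output
_____no_output_____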
###Markdown
Let's train and plot linear and polynomial kernel support vector machine models over our solution space. This will provide a spatial classification model for low or high porosity as a function of X and Y location. We will start with a linear kernel.
###Code
from sklearn import svm
C = 1.0
svm_linear = svm.SVC(kernel = 'linear',C = C, verbose = True)
svm_linear.fit(train_X,train_Y["TPorosity"])
plt = visualize_model(svm_linear,train_X["X"],train_X["Y"],train_Y["TPorosity"],'Training Data and Support Vector Machine Model')
plot_svc_decision_function(svm_linear,plt,xmin,xmax,ymin,ymax, plot_support=True)
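# (optional check, not in the original workflow) training-set accuracy of the linear-kernel model
print('Linear kernel training accuracy:', svm_linear.score(train_X, train_Y["TPorosity"]))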
###Output
[LibSVM]
###Markdown
The above plot shows the linear kernel support vector machine classification model (low porosity is red and high porosity is blue), the training dataset, the margin outlined with dashed lines, and the resulting support vectors circled. Note the support vectors impact the decision boundary as they are either in the margin, between the dashed lines, or misclassified outside the margin. It looks like we have quite a bit of misclassification. Let's try a more complicated, flexible classifier with a polynomial kernel. Recall, there is a trade off as more complexity reduces model bias (error due to a simple model not fitting the actual data), but increases model variance (sensitivity to training data).
###Code
from sklearn import svm
C = 0.000001
svm_poly = svm.SVC(kernel = 'poly', degree = 2, max_iter = 100000000, C = C)
svm_poly.fit(train_X,train_Y["TPorosity"])
plt = visualize_model(svm_poly,train_X["X"],train_X["Y"],train_Y["TPorosity"],'Training Data and Support Vector Machine Model')
plot_svc_decision_function(svm_poly,plt,xmin,xmax,ymin,ymax, plot_support=True)
###Output
_____no_output_____
###Markdown
This model seems to work well for this problem. The $2^{nd}$ order polynomial should have pretty low model variance. Recall the "C" coefficient tunes the width of the margin; therefore, it controls the number of support vectors / sample data that impact the model. The dashed lines above show a wide margin with a lot of points in the margin included as support vectors. Let's try a dramatically higher "C" coefficient value.
###Code
C = 1.0
svm_poly = svm.SVC(kernel = 'poly', degree = 2, max_iter = 100000000, C = C)
svm_poly.fit(train_X,train_Y["TPorosity"])
plt = visualize_model(svm_poly,train_X["X"],train_X["Y"],train_Y["TPorosity"],'Training Data and Support Vector Machine Model')
plot_svc_decision_function(svm_poly,plt,xmin,xmax,ymin,ymax, plot_support=True)
###Output
C:\Users\pm27995\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\svm\base.py:220: ConvergenceWarning: Solver terminated early (max_iter=100000000). Consider pre-processing your data with StandardScaler or MinMaxScaler.
% self.max_iter, ConvergenceWarning)
###Markdown
With a "C" coefficient of 1.0 we now have much fewer support vectors, but the model is very similar. Let's apply k-fold cross validation to tune the "C" coefficient. We use the default of 3-fold cross validation.
###Code
from sklearn.cross_validation import cross_val_score
from sklearn.learning_curve import validation_curve
all_X = rand_sample.iloc[:,0:2].as_matrix()
all_Y = rand_sample.iloc[:,2].as_matrix()
C_range = np.transpose(np.c_[0.000001,0.00001,0.0001,0.001,0.01,0.1,1.0,10.0]) # the attempted "C" coefficients.
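# validation_curve refits the classifier for each candidate "C" using k-fold cross validation
# (3 folds by default in this sklearn version) and returns train/validation scores of shape (len(C_range), n_folds)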
train_score, val_score = validation_curve(svm_poly, all_X, all_Y,'C', C_range)
plt.semilogx(C_range, np.median(train_score, 1), color='blue', label='training score')
plt.semilogx(C_range, np.median(val_score, 1), color='red', label='validation score')
plt.legend(loc='best')
plt.ylim(0, 1)
plt.xlabel('C')
plt.ylabel('score');
plt.show()
###Output
C:\Users\pm27995\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\svm\base.py:220: ConvergenceWarning: Solver terminated early (max_iter=100000000). Consider pre-processing your data with StandardScaler or MinMaxScaler.
% self.max_iter, ConvergenceWarning)
C:\Users\pm27995\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\svm\base.py:220: ConvergenceWarning: Solver terminated early (max_iter=100000000). Consider pre-processing your data with StandardScaler or MinMaxScaler.
% self.max_iter, ConvergenceWarning)
C:\Users\pm27995\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\svm\base.py:220: ConvergenceWarning: Solver terminated early (max_iter=100000000). Consider pre-processing your data with StandardScaler or MinMaxScaler.
% self.max_iter, ConvergenceWarning)
C:\Users\pm27995\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\svm\base.py:220: ConvergenceWarning: Solver terminated early (max_iter=100000000). Consider pre-processing your data with StandardScaler or MinMaxScaler.
% self.max_iter, ConvergenceWarning)
C:\Users\pm27995\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\svm\base.py:220: ConvergenceWarning: Solver terminated early (max_iter=100000000). Consider pre-processing your data with StandardScaler or MinMaxScaler.
% self.max_iter, ConvergenceWarning)
C:\Users\pm27995\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\svm\base.py:220: ConvergenceWarning: Solver terminated early (max_iter=100000000). Consider pre-processing your data with StandardScaler or MinMaxScaler.
% self.max_iter, ConvergenceWarning)
C:\Users\pm27995\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\svm\base.py:220: ConvergenceWarning: Solver terminated early (max_iter=100000000). Consider pre-processing your data with StandardScaler or MinMaxScaler.
% self.max_iter, ConvergenceWarning)
C:\Users\pm27995\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\svm\base.py:220: ConvergenceWarning: Solver terminated early (max_iter=100000000). Consider pre-processing your data with StandardScaler or MinMaxScaler.
% self.max_iter, ConvergenceWarning)
C:\Users\pm27995\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\svm\base.py:220: ConvergenceWarning: Solver terminated early (max_iter=100000000). Consider pre-processing your data with StandardScaler or MinMaxScaler.
% self.max_iter, ConvergenceWarning)
C:\Users\pm27995\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\svm\base.py:220: ConvergenceWarning: Solver terminated early (max_iter=100000000). Consider pre-processing your data with StandardScaler or MinMaxScaler.
% self.max_iter, ConvergenceWarning)
C:\Users\pm27995\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\svm\base.py:220: ConvergenceWarning: Solver terminated early (max_iter=100000000). Consider pre-processing your data with StandardScaler or MinMaxScaler.
% self.max_iter, ConvergenceWarning)
C:\Users\pm27995\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\svm\base.py:220: ConvergenceWarning: Solver terminated early (max_iter=100000000). Consider pre-processing your data with StandardScaler or MinMaxScaler.
% self.max_iter, ConvergenceWarning)
C:\Users\pm27995\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\svm\base.py:220: ConvergenceWarning: Solver terminated early (max_iter=100000000). Consider pre-processing your data with StandardScaler or MinMaxScaler.
% self.max_iter, ConvergenceWarning)
C:\Users\pm27995\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\svm\base.py:220: ConvergenceWarning: Solver terminated early (max_iter=100000000). Consider pre-processing your data with StandardScaler or MinMaxScaler.
% self.max_iter, ConvergenceWarning)
C:\Users\pm27995\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\svm\base.py:220: ConvergenceWarning: Solver terminated early (max_iter=100000000). Consider pre-processing your data with StandardScaler or MinMaxScaler.
% self.max_iter, ConvergenceWarning)
C:\Users\pm27995\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\svm\base.py:220: ConvergenceWarning: Solver terminated early (max_iter=100000000). Consider pre-processing your data with StandardScaler or MinMaxScaler.
% self.max_iter, ConvergenceWarning)
C:\Users\pm27995\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\svm\base.py:220: ConvergenceWarning: Solver terminated early (max_iter=100000000). Consider pre-processing your data with StandardScaler or MinMaxScaler.
% self.max_iter, ConvergenceWarning)
###Markdown
The validation score with the testing data and the training score are maximum with "C" coefficients around $10^{-6}$ to $10^{-5}$. We will retain that model. We run it again below.
###Code
from sklearn import svm
C = 0.000001
svm_poly = svm.SVC(kernel = 'poly', degree = 2, max_iter = 100000000, C = C)
svm_poly.fit(train_X,train_Y["TPorosity"])
plt = visualize_model(svm_poly,train_X["X"],train_X["Y"],train_Y["TPorosity"],'Training Data and Support Vector Machine Model')
plot_svc_decision_function(svm_poly,plt,xmin,xmax,ymin,ymax, plot_support=True)
###Output
_____no_output_____
###Markdown
Now let's summarize the performance of our classification models: we calculate and visualize confusion matrices for both the linear and tuned, polynomial kernel models for the training and the testing datasets. Here's the result for the linear kernel for the training and testing datasets.
###Code
test_X = test.iloc[:,0:2]
test_Y = test.iloc[:,2:3]
train_linear_predict = svm_linear.predict(train_X) # get the classifications for the training dataset
cnf_train_linear_matrix = confusion_matrix(train_Y, train_linear_predict) # build the confusion matrix
test_linear_predict = svm_linear.predict(test_X) # get the classifications for the testing dataset
cnf_test_linear_matrix = confusion_matrix(test_Y, test_linear_predict) # build the confusion matrix
np.set_printoptions(precision=2)
plt.figure()
plot_confusion_matrix(cnf_train_linear_matrix, classes=[-1,1],
title='Confusion Matrix Train Dataset with Linear Kernel, without normalization')
plt.show()
plt.figure()
plot_confusion_matrix(cnf_test_linear_matrix , classes=[-1,1],
title='Confusion Matrix Test Dataset with Linear Kernel, without normalization')
plt.show()
###Output
Confusion matrix, without normalization
[[115 10]
[ 8 107]]
###Markdown
Here's the result for our best model, the tuned, polynomial kernel for the training and testing datasets.
###Code
test_X = test.iloc[:,0:2]
test_Y = test.iloc[:,2:3]
train_poly_predict = svm_poly.predict(train_X) # get the classifications for the training dataset
cnf_train_poly_matrix = confusion_matrix(train_Y, train_poly_predict) # build the confusion matrix
test_poly_predict = svm_poly.predict(test_X) # get the classifications for the testing dataset
cnf_test_poly_matrix = confusion_matrix(test_Y, test_poly_predict) # build the confusion matrix
np.set_printoptions(precision=2)
plt.figure()
plot_confusion_matrix(cnf_train_poly_matrix, classes=[-1,1],
title='Confusion Matrix Train Dataset with Polynomial Kernel, without normalization')
plt.show()
plt.figure()
plot_confusion_matrix(cnf_test_poly_matrix , classes=[-1,1],
title='Confusion Matrix Test Dataset with Polynomial Kernel, without normalization')
plt.show()
###Output
Confusion matrix, without normalization
[[119 6]
[ 9 106]]
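###Markdown
As a quick follow-up (an illustrative sketch that has not been executed, reusing only the confusion matrices computed above), each matrix can be condensed into a single accuracy number for easier comparison of the two kernels.
###Code
# Sketch: summarize each confusion matrix above as an accuracy score (rows = true labels, columns = predictions).
import numpy as np
matrices = [("linear / train", cnf_train_linear_matrix), ("linear / test", cnf_test_linear_matrix),
            ("poly / train", cnf_train_poly_matrix), ("poly / test", cnf_test_poly_matrix)]
for name, cm in matrices:
    cm = np.asarray(cm)
    print(name, "accuracy =", cm.trace() / cm.sum())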
|
P1/p1_test_perceptual_phenomenon.ipynb | ###Markdown
Statistics: The Science of Decisions Project Instructions**Data Analyst Nanodegree P1: Test a Perceptual Phenomenon****Chana Greene** Background InformationIn a Stroop task, participants are presented with a list of words, with each word displayed in a color of ink. The participant’s task is to say out loud the color of the ink in which the word is printed. The task has two conditions: a congruent words condition, and an incongruent words condition. In the congruent words condition, the words being displayed are color words whose names match the colors in which they are printed: for example RED, BLUE. In the incongruent words condition, the words displayed are color words whose names do not match the colors in which they are printed: for example PURPLE, ORANGE. In each case, we measure the time it takes to name the ink colors in equally-sized lists. Each participant will go through and record a time from each condition. Questions For InvestigationAs a general note, be sure to keep a record of any resources that you use or refer to in the creation of your project. You will need to report your sources as part of the project submission.[Question 1](Q1)[Question 2](Q2)[Question 3](Q3)[Question 4](Q4)[Question 5](Q5)[Question 6](Q6)[List of Resources](Resources) Answers**1. What is our independent variable? What is our dependent variable? **>Independent variable - congruent vs incongruent words>Dependent variable - time to name the ink color**2. What is an appropriate set of hypotheses for this task? What kind of statistical test do you expect to perform? Justify your choices.**> $H_0$ (null) - The population means for the congruent ($\mu_c$) and incongruent ($\mu_i$) word sets are not different: $\mu_c = \mu_i$> $H_A$ (alt) - The population means for the congruent and incongruent word sets are different: $\mu_c \neq \mu_i$.>_Note_: these could also be stated in terms of the the difference $D = \mu_c - \mu_i$ between the two population means; $H_0$: $D = 0$, $H_A$: $D \neq 0$> Statistical Test - We should perform a dependent t-test for paired samples to determine whether or not to reject the null hypothesis. Reasons for selecting this statistical test are (1) our samples only have 24 observations, (2) we cannot assume the samples are normally distributed, (3) we do not know the population standard deviation and (4) the samples are dependent since each subject repeated the task using each treatment (within subject design with two conditions). Points 1-3 rule out a Z-test and point 4 means we must use a t-test for paired samples. We will use the value p = 0.05 to determine significance.Now it’s your chance to try out the Stroop task for yourself. Go to [this link](https://www.google.com/url?q=https://faculty.washington.edu/chudler/java/ready.html&sa=D&usg=AFQjCNFRXmkTGaTjMtk1Xh0SPh-RiaZerA), which has a Java-based applet for performing the Stroop task. Record the times that you received on the task (you do not need to submit your times to the site.) Now, download [this dataset](https://www.google.com/url?q=https://drive.google.com/file/d/0B9Yf01UaIbUgQXpYb2NhZ29yX1U/view?usp%3Dsharing&sa=D&usg=AFQjCNGAjbK9VYD5GsQ8c_iRT9zH9QdOVg) which contains results from a number of participants in the task. Each row of the dataset contains the performance for one participant, with the first number their results on the congruent task and the second number their performance on the incongruent task.**3. Report some descriptive statistics regarding this dataset. 
Include at least one measure of central tendency and at least one measure of variability.**
###Code
import pandas as pd
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import matplotlib
%pylab inline
matplotlib.style.use('ggplot')
data_stroop = pd.read_csv('stroopdata.csv')
data_stroop['Diff'] = data_stroop.Congruent - data_stroop.Incongruent
print 'Number of observations:\n', data_stroop.count(),'\n'
print 'Series Mean:\n', data_stroop.mean().round(3),'\n'
print 'Series Median:\n',data_stroop.median().round(3),'\n'
##print 'Standard Deviation:\n',data_stroop.std(ddof=0).round(3),'\n' # was just experimenting with the std() function
print 'Sample Standard Deviation:\n',data_stroop.std().round(3),'\n'
###Output
Populating the interactive namespace from numpy and matplotlib
Number of observations:
Congruent 24
Incongruent 24
Diff 24
dtype: int64
Series Mean:
Congruent 14.051
Incongruent 22.016
Diff -7.965
dtype: float64
Series Median:
Congruent 14.356
Incongruent 21.018
Diff -7.666
dtype: float64
Sample Standard Deviation:
Congruent 3.559
Incongruent 4.797
Diff 4.865
dtype: float64
###Markdown
The mean, median, mode(s) and standard deviation for the Congruent set, Incongruent set and their difference are in the table below:| Series | Mean | Median | Mode | Standard Deviation ||--------|------|--------|------|--------------------||Congruent| 14.051|14.356|14-16 | 3.559||Incongruent|22.016|21.018|20-23|4.797||Difference|-7.965|-7.666|-2-(-4) & -10-(-12)|4.865|_Note_: We use the sample standard deviation because this data set represents a sampling of the population. **4. Provide one or two visualizations that show the distribution of the sample data. Write one or two sentences noting what you observe about the plot or plots.**Histograms of the Congruent & Incongruent sets as well as their difference:
###Code
data_stroop.hist(bins=8, normed=True, layout=(1,3), figsize=(16,4))
data_stroop[['Congruent','Incongruent']].plot(kind='hist',bins=16,normed=True,alpha=0.5,figsize=(8,6))
data_stroop_norm = (data_stroop - data_stroop.mean())/data_stroop.std()
data_stroop_norm[['Congruent','Incongruent']].plot(kind='hist',bins=10,alpha=0.5,figsize=(8,6))
#plt.scatter(x=data_stroop['Congruent'],y=data_stroop['Incongruent'])
data_stroop[['Congruent','Incongruent']].plot(kind='box',vert=False,sym='.',figsize=(8,6),
xlim=(min(data_stroop['Congruent'])-1,max(data_stroop['Incongruent'])+1))
###Output
_____no_output_____
###Markdown
The distributions of the Congruent and Incongruent sets are roughly Gaussian with a little overlap in their right and left tails respectively, and the Congruent set is just slightly positively skewed. This can also be seen in the box-and-whisker plots, where the interquartile ranges of the two distributions do not overlap but the median of the Incongruent set is just within the maximum of the Congruent set. There are also two outliers in the Incongruent set.**5. Now, perform the statistical test and report your results. What is your confidence level and your critical statistic value? Do you reject the null hypothesis or fail to reject it? Come to a conclusion in terms of the experiment task. Did the results match up with your expectations?**Since we are working with paired samples (each subject performed the task with both the congruent and incongruent sets) we will perform the dependent t-test for paired samples.$$t = \frac{\mu_c-\mu_i}{\frac{S_D}{\sqrt{n}}}$$where,$S_D$ = the sample standard deviation of the pairwise differences$\mu_c$ = the mean of the congruent set sample$\mu_i$ = the mean of the incongruent set sample$n$ = the number of observations in each sample = 24For a confidence level of 95% (or p = 0.025 in each tail for a two-tailed test) and degrees of freedom = $n-1$ = 23, our critical statistic is $t_c = \pm 2.069$
###Code
# Manually calculating the t value
#t = data_stroop.mean()['Diff']/(data_stroop.std()['Diff']/np.sqrt(data_stroop.count()['Diff']))
# Alternative method for calculating the t value with the scipy pkg
# Also returns the p-value
t = stats.ttest_rel(data_stroop.Congruent,data_stroop.Incongruent)
print 't = ' + str(t[0].round(2)) + '\n'
print 'p = ' + str(t[1])
###Output
t = -8.02
p = 4.10300058571e-08
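###Markdown
As a short closing check (a sketch added for clarity, not part of the original analysis): since |t| = 8.02 exceeds the critical value of 2.069 and p << 0.05, we reject the null hypothesis and conclude that the incongruent condition takes significantly longer.
###Code
# Sketch: compare the observed t statistic with the two-tailed critical value for df = 23.
t_crit = stats.t.ppf(0.975, 23)  # ~2.069, matching the value quoted above
print(t_crit)
print(abs(t[0]) > t_crit)  # True -> reject the null hypothesis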
|
materials/initial/jupyter-notebooks/quantum-chemistry-tutorials/programmatic-approach.ipynb | ###Markdown
_*Qiskit Chemistry, Programmatic Approach*_ The latest version of this notebook is available on https://github.com/Qiskit/qiskit-tutorial.*** ContributorsRichard Chen[1], Antonio Mezzacapo[1], Marco Pistoia[1], Stephen Wood[1] Affiliation- [1]IBMQ IntroductionThis notebook illustrates how to use Qiskit Chemistry's programmatic APIs.In this notebook, we decompose the computation of the ground state energy of a molecule into 4 steps: 1. Define a molecule and get integrals from a computational chemistry driver (PySCF in this case) 2. Construct a Fermionic Hamiltonian and map it onto a qubit Hamiltonian 3. Instantiate and initialize dynamically-loaded algorithmic components, such as the quantum algorithm VQE, the optimizer and variational form it will use, and the initial_state to initialize the variational form 4. Run the algorithm on a quantum backend and retrieve the results
###Code
# import common packages
import numpy as np
from qiskit import Aer
# lib from Qiskit Aqua
from qiskit.aqua import QuantumInstance
from qiskit.aqua.algorithms import VQE, NumPyMinimumEigensolver
from qiskit.aqua.operators import Z2Symmetries
from qiskit.aqua.components.optimizers import COBYLA
# lib from Qiskit Aqua Chemistry
from qiskit.chemistry import FermionicOperator
from qiskit.chemistry.drivers import PySCFDriver, UnitsType
from qiskit.chemistry.components.variational_forms import UCCSD
from qiskit.chemistry.components.initial_states import HartreeFock
###Output
_____no_output_____
###Markdown
Step 1: Define a moleculeHere, we use LiH in the sto3g basis with the PySCF driver as an example.The `molecule` object records the information from the PySCF driver.
###Code
# using driver to get fermionic Hamiltonian
# PySCF example
driver = PySCFDriver(atom='Li .0 .0 .0; H .0 .0 1.6', unit=UnitsType.ANGSTROM,
charge=0, spin=0, basis='sto3g')
molecule = driver.run()
###Output
_____no_output_____
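###Markdown
As an optional aside (an unexecuted sketch), a few attributes of the `molecule` object can be printed to see what the driver returned; every attribute shown here is used later in this notebook.
###Code
# Sketch: inspect some of the quantities recorded by the PySCF driver.
print("number of orbitals: {}".format(molecule.num_orbitals))
print("alpha / beta electrons: {} / {}".format(molecule.num_alpha, molecule.num_beta))
print("HF energy: {}".format(molecule.hf_energy))
print("nuclear repulsion energy: {}".format(molecule.nuclear_repulsion_energy))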
###Markdown
Step 2: Prepare the qubit HamiltonianHere, we set up the **to-be-frozen** and **to-be-removed** orbitals to reduce the problem size when we map to the qubit Hamiltonian. Furthermore, we define the **mapping type** for the qubit Hamiltonian. For the particular `parity` mapping, we can further reduce the problem size by two qubits.
###Code
# note that the indices here are given with respect to the original orbital ordering
freeze_list = [0]
remove_list = [-3, -2] # negative number denotes the reverse order
map_type = 'parity'
h1 = molecule.one_body_integrals
h2 = molecule.two_body_integrals
nuclear_repulsion_energy = molecule.nuclear_repulsion_energy
num_particles = molecule.num_alpha + molecule.num_beta
num_spin_orbitals = molecule.num_orbitals * 2
print("HF energy: {}".format(molecule.hf_energy - molecule.nuclear_repulsion_energy))
print("# of electrons: {}".format(num_particles))
print("# of spin orbitals: {}".format(num_spin_orbitals))
# prepare full idx of freeze_list and remove_list
# convert all negative idx to positive
remove_list = [x % molecule.num_orbitals for x in remove_list]
freeze_list = [x % molecule.num_orbitals for x in freeze_list]
# update the idx in remove_list of the idx after frozen, since the idx of orbitals are changed after freezing
remove_list = [x - len(freeze_list) for x in remove_list]
remove_list += [x + molecule.num_orbitals - len(freeze_list) for x in remove_list]
freeze_list += [x + molecule.num_orbitals for x in freeze_list]
# prepare fermionic hamiltonian with orbital freezing and eliminating, and then map to qubit hamiltonian
# and, if parity mapping is selected, reduce the qubit count by two
energy_shift = 0.0
qubit_reduction = True if map_type == 'parity' else False
ferOp = FermionicOperator(h1=h1, h2=h2)
if len(freeze_list) > 0:
ferOp, energy_shift = ferOp.fermion_mode_freezing(freeze_list)
num_spin_orbitals -= len(freeze_list)
num_particles -= len(freeze_list)
if len(remove_list) > 0:
ferOp = ferOp.fermion_mode_elimination(remove_list)
num_spin_orbitals -= len(remove_list)
qubitOp = ferOp.mapping(map_type=map_type, threshold=0.00000001)
qubitOp = Z2Symmetries.two_qubit_reduction(qubitOp, num_particles) if qubit_reduction else qubitOp
qubitOp.chop(10**-10)
print(qubitOp.print_details())
print(qubitOp)
###Output
IIII (-0.20765933501970762+0j)
IIIZ (-0.09376337484626396+0j)
IIZX (-0.0031775814548701616+0j)
IIIX (0.0031775814548701616+0j)
IIXX (-0.0012513965999571266+0j)
IIYY (0.0012513965999571266+0j)
IIZZ (-0.2116250951510974+0j)
IIXZ (0.019200533863103476+0j)
IIXI (0.019200533863103476+0j)
IIZI (0.3581026994577039+0j)
IZII (0.09376337484626406+0j)
ZXII (0.003177581454870162+0j)
IXII (0.003177581454870162+0j)
XXII (-0.001251396599957117+0j)
YYII (0.001251396599957117+0j)
ZZII (-0.2116250951510974+0j)
XZII (-0.019200533863103483+0j)
XIII (0.019200533863103483+0j)
ZIII (-0.3581026994577039+0j)
IZIZ (-0.121827742158206+0j)
IZZX (0.012144897228081718+0j)
IZIX (-0.012144897228081718+0j)
IZXX (0.03169874598733776+0j)
IZYY (-0.03169874598733776+0j)
IXIZ (0.012144897228081717+0j)
ZXIZ (0.012144897228081717+0j)
IXZX (-0.0032659954996661924+0j)
ZXZX (-0.0032659954996661924+0j)
IXIX (0.0032659954996661924+0j)
ZXIX (0.0032659954996661924+0j)
IXXX (-0.008650156860619578+0j)
ZXXX (-0.008650156860619578+0j)
IXYY (0.008650156860619578+0j)
ZXYY (0.008650156860619578+0j)
YYIZ (0.031698745987337754+0j)
XXIZ (-0.031698745987337754+0j)
YYZX (-0.008650156860619578+0j)
XXZX (0.008650156860619578+0j)
YYIX (0.008650156860619578+0j)
XXIX (-0.008650156860619578+0j)
YYXX (-0.030981613344624754+0j)
XXXX (0.030981613344624754+0j)
YYYY (0.030981613344624754+0j)
XXYY (-0.030981613344624754+0j)
ZZIZ (0.05590251078516701+0j)
ZZZX (0.0018710427514219098+0j)
ZZIX (-0.0018710427514219098+0j)
ZZXX (0.00310400411606565+0j)
ZZYY (-0.00310400411606565+0j)
XIIZ (0.012841723180766517+0j)
XZIZ (-0.012841723180766517+0j)
XIZX (-0.0023521521732532856+0j)
XZZX (0.0023521521732532856+0j)
XIIX (0.0023521521732532856+0j)
XZIX (-0.0023521521732532856+0j)
XIXX (-0.007975908750571819+0j)
XZXX (0.007975908750571819+0j)
XIYY (0.007975908750571819+0j)
XZYY (-0.007975908750571819+0j)
ZIIZ (0.11346110712684766+0j)
ZIZX (-0.01083836382875494+0j)
ZIIX (0.01083836382875494+0j)
ZIXX (-0.03355135311123255+0j)
ZIYY (0.03355135311123255+0j)
IZZZ (-0.05590251078516701+0j)
IZXZ (-0.012841723180766517+0j)
IZXI (-0.012841723180766517+0j)
IXZZ (-0.0018710427514219096+0j)
ZXZZ (-0.0018710427514219096+0j)
IXXZ (0.0023521521732532856+0j)
ZXXZ (0.0023521521732532856+0j)
IXXI (0.0023521521732532856+0j)
ZXXI (0.0023521521732532856+0j)
YYZZ (-0.00310400411606565+0j)
XXZZ (0.00310400411606565+0j)
YYXZ (0.007975908750571819+0j)
XXXZ (-0.007975908750571819+0j)
YYXI (0.007975908750571819+0j)
XXXI (-0.007975908750571819+0j)
ZZZZ (0.08447056807294229+0j)
ZZXZ (-0.008994911953942242+0j)
ZZXI (-0.008994911953942242+0j)
XIZZ (-0.008994911953942242+0j)
XZZZ (0.008994911953942242+0j)
XIXZ (0.0066120470661577375+0j)
XZXZ (-0.0066120470661577375+0j)
XIXI (0.0066120470661577375+0j)
XZXI (-0.0066120470661577375+0j)
ZIZZ (0.06035891281078855+0j)
ZIXZ (0.011019231644721898+0j)
ZIXI (0.011019231644721898+0j)
IZZI (0.11346110712684766+0j)
IXZI (-0.01083836382875494+0j)
ZXZI (-0.01083836382875494+0j)
YYZI (-0.03355135311123255+0j)
XXZI (0.03355135311123255+0j)
ZZZI (-0.06035891281078855+0j)
XIZI (-0.0110192316447219+0j)
XZZI (0.0110192316447219+0j)
ZIZI (-0.11344680300366612+0j)
Representation: paulis, qubits: 4, size: 100
###Markdown
We use the classical eigen decomposition to get the smallest eigenvalue as a reference.
###Code
# Using exact eigensolver to get the smallest eigenvalue
exact_eigensolver = NumPyMinimumEigensolver(qubitOp)
ret = exact_eigensolver.run()
print('The computed energy is: {:.12f}'.format(ret.eigenvalue.real))
print('The total ground state energy is: {:.12f}'.format(ret.eigenvalue.real + energy_shift + nuclear_repulsion_energy))
###Output
The computed energy is: -1.077059745735
The total ground state energy is: -7.881072044031
###Markdown
Step 3: Instantiate and configure dynamically-loaded instancesTo run VQE with the UCCSD variational form, we require- the VQE algorithm- a classical optimizer- the UCCSD variational form- an initial state prepared as the Hartree-Fock state [Optional] Set up a token to run the experiment on a real deviceIf you would like to run the experiment on a real device, you need to set up your account first.Note: If you did not store your token yet, use `IBMQ.save_account('MY_API_TOKEN')` to store it first.
###Code
# from qiskit import IBMQ
# provider = IBMQ.load_account()
backend = Aer.get_backend('statevector_simulator')
# setup COBYLA optimizer
max_eval = 200
cobyla = COBYLA(maxiter=max_eval)
# setup HartreeFock state
HF_state = HartreeFock(num_spin_orbitals, num_particles, map_type,
qubit_reduction)
# setup UCCSD variational form
var_form = UCCSD(num_orbitals=num_spin_orbitals, num_particles=num_particles,
active_occupied=[0], active_unoccupied=[0, 1],
initial_state=HF_state, qubit_mapping=map_type,
two_qubit_reduction=qubit_reduction, num_time_slices=1)
# setup VQE
vqe = VQE(qubitOp, var_form, cobyla)
quantum_instance = QuantumInstance(backend=backend)
###Output
_____no_output_____
###Markdown
Step 4: Run algorithm and retrieve the results
###Code
results = vqe.run(quantum_instance)
print('The computed ground state energy is: {:.12f}'.format(results.eigenvalue.real))
print('The total ground state energy is: {:.12f}'.format(results.eigenvalue.real + energy_shift + nuclear_repulsion_energy))
print("Parameters: {}".format(results.optimal_point))
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
_____no_output_____ |
Google Colab notebooks/practice_04/practice_qp.ipynb | ###Markdown
###Code
!pip install quadprog
from quadprog import solve_qp
from numpy import vstack, hstack
def get_qp(P, q, G=None, h=None, A=None, b=None):
""" minimize: x.T*H*x + q.T*x
s.t: G*x <= h
A*x == b """
qp_G = .5 * (P + P.T) # make sure P is symmetric
qp_a = -q
if A is not None: # Both equality and inequality constraints
qp_C = - vstack([A, G]).T
qp_b = - hstack([b, h])
meq = A.shape[0]
else: # no equality constraint
qp_C = -G.T
qp_b = -h
meq = 0
return solve_qp(qp_G, qp_a, qp_C, qp_b, meq)[0]
from numpy import array, dot
M = array([[1., 2., 0.], [-8., 3., 2.], [0., 1., 1.]])
P = dot(M.T, M)
q = -dot(M.T, array([3., 2., 3.]))
G = array([[1., 2., 1.], [2., 0., 1.], [-1., 2., -1.]])
h = array([3., 2., -2.]).reshape((3,))
get_qp(P, q, G, h)
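# Hedged aside (added for illustration, not in the original notebook): sanity-check the solution
# returned above by verifying that the inequality constraints G*x <= h hold within tolerance.
x_opt = get_qp(P, q, G, h)
print(G.dot(x_opt) <= h + 1e-9)  # expect [ True  True  True]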
from numpy import eye,zeros, kron, ones, diag, sum, cumsum
N = 8
x0, x1, xf, yf = 0.5, 0.7, 1.5, -0.2
""" minimize: dr.T*H*dr + q.T*dr
s.t: G*dr <= h
A*dr == b """
G, h = zeros((2*N,2*N)), zeros((2*N))
G[:N, :] = -kron(eye(N), [1,0])  # -dx_k <= 0: every x-increment is non-negative
G[N:N + int(N/2), :] = - kron(eye(int(N/2)), [0,1,0,0])  # -dy of even-numbered steps...
h[N:N + int(N/2)] = -0.1*ones(4)  # ...must be at least 0.1
G[N + int(N/2):2*N, :] = kron(eye(int(N/2)), [0,0,0,1])  # dy of odd-numbered steps...
h[N + int(N/2):2*N] = 0.1*ones(4)  # ...must be at most 0.1
A , b = zeros((3,2*N)), zeros(3)  # equality constraints A*dr == b
A[:2,:] = kron(ones(N),eye(2))  # increments must sum to the target displacement
b[:2] = [xf, yf]
A[2,:4*2] = kron(ones(4),[1,0])  # x-travel of the first 4 steps...
b[2] = (x1+x0)/2  # ...must reach the midpoint (x0+x1)/2
R = diag([1,2])
H = kron(eye(N),R)
q = zeros(2*N)
dr = get_qp(H,q,G,h,A,b).reshape((N,2))
r = cumsum(dr,axis=0)
import matplotlib.pyplot as plt
plt.scatter(r[:,0],r[:,1])
plt.vlines((x0+x1)/2, min(r[:,1]),max(r[:,1]))
plt.scatter(xf,yf, color = 'red')
plt.show()
###Output
_____no_output_____ |
Python_CoderDojo05.ipynb | ###Markdown
Practicing Integer division and remainder* // Integer (floor) division* % Remainder (modulo)
###Code
print(17/5) # normal division
print(17//5) # integer (floor) division
print(17%5) # remainder, modulo or residue
###Output
3.4
3
2
###Markdown
Challenge 5.1. Sharing chocolatesA box of chocolates has 50 pieces. They are going to be shared equally among 8 friends. * How many chocolates does each friend get?* How many are left over? Challenge 5.2. Converting secondsAsk the user for a number of seconds and convert it into its equivalent in days, hours, minutes and seconds. Check that 319,000 seconds is:* 3 days* 16 hours* 36 minutes* 40 seconds (A possible solution sketch for Challenges 5.1 and 5.2 is shown after the next cell.) Default value in an inputWe can set a default value when asking the user to enter a piece of data. If the user enters nothing and presses ENTER, the default value is applied. This is achieved with an ```or```.
###Code
ciudad = input("Enter your city of birth: ") or "Madrid" # if ENTER is pressed, the default value is Madrid
print("You were born in", ciudad)
edad = int(input("Enter your age: ") or 16) # if ENTER is pressed, the default value is 16
print(f"You are {edad} years old.")
###Output
Enter your age: 
You are 16 years old.
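###Markdown
Going back to Challenges 5.1 and 5.2, here is one possible solution sketch (added for illustration; it only uses the integer-division and remainder operators introduced at the top of this notebook).
###Code
# One possible solution to Challenge 5.1: 50 chocolates shared equally among 8 friends
chocolates, friends = 50, 8
print(f"Each friend gets {chocolates // friends} chocolates and {chocolates % friends} are left over.")
# One possible solution to Challenge 5.2: convert 319000 seconds into days, hours, minutes and seconds
total = 319000
days = total // 86400
hours = (total % 86400) // 3600
minutes = (total % 3600) // 60
seconds = total % 60
print(f"{total} seconds = {days} days, {hours} hours, {minutes} minutes and {seconds} seconds.")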
###Markdown
Determining whether a number is even or oddAn even number is one that is exactly divisible by 2. That is, its remainder or modulo is zero. Zero is considered even.
###Code
n = 148 # try the code with both even and odd numbers
if n%2==0:
print(f"The number {n} is even.")
else:
print(f"The number {n} is odd.")
###Output
The number 148 is even.
###Markdown
The previous condition ```n%2==0``` can be replaced simply by ```n%2```, since it already evaluates to True or False. * If n is odd, the remainder when dividing by 2 is 1, and in computing one is equivalent to True.* If n is even, the remainder when dividing by 2 is 0, and in computing zero is equivalent to False.Let's see this small improvement to the code.
###Code
n = 147
if n%2: # n%2 is not compared to anything, since by itself it is already 1 (True) or 0 (False)
print(f"The number {n} is odd.")
else:
print(f"The number {n} is even.")
###Output
The number 147 is odd.
###Markdown
Challenge 5.3. Checking even and odd numbersAsk the user for a number and check whether its triple is even or odd. Challenge 5.4. MultiplesWrite a program that checks whether a number is a multiple of 3, of 5, of both, or of neither. Example:* 6 is a multiple of 3* 10 is a multiple of 5* 15 is a multiple of 3 and of 5* 26 is not a multiple of 3 nor of 5 Challenge 5.5. Given three numbers, identify the largestAsk the user to enter three numbers from the keyboard. Print a sentence that identifies the largest of the three. Note: solve the challenge without using the ```max()``` function, then check that the ```max()``` function gives the same result. (A sketch of one possible solution to Challenge 5.5 is shown after the next cell.) Multiple comparisonsWe can chain multiple comparisons, whose result will be True or False.
###Code
1 < 5 < 9
1 < 2 < 3 < 4 > 9
4 == 2*2 == 2+2 == 8/2
###Output
_____no_output_____
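###Markdown
Here is a sketch of one possible solution to Challenge 5.5 (added for illustration; fixed example values are used instead of `input()` so the cell can be read on its own).
###Code
# One possible solution to Challenge 5.5: find the largest of three numbers without max()
a, b, c = 7, 12, 5  # example values; in the challenge these would come from input()
if a >= b and a >= c:
    largest = a
elif b >= c:
    largest = b
else:
    largest = c
print(f"The largest of the three numbers is {largest}.")
print(largest == max(a, b, c))  # cross-check with max(), as the challenge suggests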
###Markdown
Checking whether a number is entered within a rangeWe are asked to have the user enter a number between 10 and 50, both included. This will be the valid range. Otherwise, report that the number is not valid. Solution 1Using ```and```
###Code
n = int(input("Enter an integer between 10 and 50: "))
if n>=10 and n<=50:
print(f"The number {n} was entered, which IS a valid number.")
else:
print(f"The number {n} was entered, which is NOT a valid number.")
###Output
Enter an integer between 10 and 50: 222
The number 222 was entered, which is NOT a valid number.
###Markdown
Solution 2Using chained comparisons.
###Code
n = int(input("Enter an integer between 10 and 50: "))
if 10 <= n <= 50:
print(f"The number {n} was entered, which IS a valid number.")
else:
print(f"The number {n} was entered, which is NOT a valid number.")
###Output
Enter an integer between 10 and 50: 15
The number 15 was entered, which IS a valid number.
###Markdown
Entering a vowelAsk the user to enter a lowercase, unaccented vowel. Otherwise, report that the character does not meet the requirements.
###Code
vocal = input("Enter a lowercase, unaccented vowel: ")
if vocal=='a' or vocal=='e' or vocal=='i' or vocal=='o' or vocal=='u':
print(f"The character {vocal} is indeed a lowercase, unaccented vowel.")
else:
print(f"The character {vocal} is not a lowercase, unaccented vowel.")
###Output
Enter a lowercase, unaccented vowel: e
The character e is indeed a lowercase, unaccented vowel.
|
code/codegraf/003/example.ipynb | ###Markdown
Simple script to input, calculate, and outputNote: in Jupyter notebooks, we can make comments in Markdown cells. So making inline comments (lines that start with the hash character and are ignored by Python) is less important. Here we set up the data needed to run the calculation
###Code
# This is an inline comment that will be ignored by Python
pi = 3.14159
print(pi)
diameter = float(input('Enter the diameter, then press the Enter/return key: ')) # when this line runs you need to enter a number
print(diameter)
###Output
_____no_output_____
###Markdown
The data need to be transformed into a usable form
###Code
radius = diameter/2
print(radius)
###Output
_____no_output_____
###Markdown
We have all of the information necessary to complete the calculation
###Code
print('The area of the circle is ',pi*radius**2)
###Output
_____no_output_____
###Markdown
Using an API to plot the current location of the International Space Station
###Code
import requests
import webbrowser
url = 'http://api.open-notify.org/iss-now.json'
response = requests.get(url)
print('JSON text: ', response.text)
print()
data = response.json()
print('Python data structure: ', data)
print()
latitude = data['iss_position']['latitude']
longitude = data['iss_position']['longitude']
zoom = '4'
googleMapUrl = 'http://www.google.com/maps/place/'+latitude+','+longitude+'/@'+latitude+','+longitude+','+zoom+'z'
print(googleMapUrl)
# The following line will open a tab on your browser with the map if you run it in a local Jupiter notebook (but not in Colab)
success = webbrowser.open_new_tab(googleMapUrl)
###Output
_____no_output_____
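###Markdown
As a hedged aside (an unexecuted sketch, not part of the original lesson), it is worth checking that the request succeeded before indexing into the JSON.
###Code
# Sketch: basic error handling around the same API call used above.
response = requests.get(url)
if response.status_code == 200:
    position = response.json()['iss_position']
    print('latitude:', position['latitude'], 'longitude:', position['longitude'])
else:
    print('Request failed with status code', response.status_code)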
###Markdown
Named Entity Recognition (NER)There are a number of ways to do NER. This script uses the powerful Natural Language Toolkit (nltk).The first step is to load several libraries and download the data needed to apply the model.
###Code
import nltk
from nltk.tokenize import word_tokenize
from nltk.tag import pos_tag
import json
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk.download('maxent_ne_chunker')
nltk.download('words')
print('done')
###Output
_____no_output_____
###Markdown
The next step is to run the example text through a pipeline that breaks the text into words, flags the words by part of speech, then tags chunks of words that are thought to be named entities.
###Code
text = '''The House committee investigating the Jan. 6 attack on the U.S. Capitol on Tuesday evening unanimously approved a criminal contempt report against Steve Bannon, an ally of former President Donald Trump's, for defying a subpoena from the panel.'''
tokens = nltk.word_tokenize(text)
print('tokens:', tokens)
tagged_tokens = nltk.pos_tag(tokens)
print('tagged:', tagged_tokens)
named_entity_chunks = nltk.ne_chunk(tagged_tokens)
print('Named Entity chunks:', named_entity_chunks)
print()
###Output
_____no_output_____
###Markdown
In this last step, we extract data from the pipeline output for human-friendly display and collect it in a Python data structure for further use.
###Code
ne_list = []
for chunk in named_entity_chunks:
if hasattr(chunk, 'label'):
ne_dict = {'ne_label': chunk.label()}
# A chunk is some kind of iterable of tuples
# Each tuple contains (word, noun_descriptor)
ne_string = chunk[0][0] # 0th tuple, word
# Iterate through the rest of the tuples in the chunk
for additional_tuple in chunk[1:len(chunk)]:
ne_string += ' ' + additional_tuple[0]
ne_dict['ne_string'] = ne_string
ne_list.append(ne_dict)
print(chunk.label(), ' '.join(c[0] for c in chunk))
print()
print('NE list:', json.dumps(ne_list, indent = 2))
###Output
_____no_output_____
###Markdown
Face detection using pretrained machine learning model using OpenCVThe cv2 module is an open source computer vision library.We will use this image for a test:EMI., Public domain, via Wikimedia Commons**NOTE: After running this code, it's best not to save the notebook without first clearing the image output (`Edit` menu, then select `Clear all outputs`). Without clearing the output, the image data gets saved with the notebook and the time to save may be long.**In the first section of code, we load a number of modules and define two functions that we will use later on.
###Code
# Import libraries and define functions
import numpy as np
import urllib.request as urllib
import cv2
import json
import matplotlib.pyplot as plt
import requests
%matplotlib inline
def convertToRGB(image):
return cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
def detect_faces(cascade, test_image, scaleFactor = 1.1):
# create a copy of the image to prevent any changes to the original one.
image_copy = test_image.copy()
#convert the test image to gray scale as opencv face detector expects gray images
gray_image = cv2.cvtColor(image_copy, cv2.COLOR_BGR2GRAY)
# Applying the haar classifier to detect faces
faces_rect = cascade.detectMultiScale(gray_image, scaleFactor=scaleFactor, minNeighbors=5)
for (x, y, w, h) in faces_rect:
cv2.rectangle(image_copy, (x, y), (x+w, y+h), (0, 255, 0), 15)
image_dimensions = test_image.shape # .shape gives height, width, and layers
# Create a Python data structure for the face rectangles discovered
faces_dict = {'image_width': str(image_dimensions[1]), 'image_height': str(image_dimensions[0])}
faces_list = []
for i in faces_rect:
faces_list.append({'x': str(i[0]), 'y': str(i[1]), 'width': str(i[2]), 'height': str(i[3])})
faces_dict['faces'] = faces_list
return image_copy, faces_dict
###Output
_____no_output_____
###Markdown
The training data is available on the web, but cv2 expects that the training data are in a local file. So we grab the data from GitHub, save the file in the current working directory on the cloud server, then load the file to create a classifier object.There are 4 pretrained models that we can try by uncommenting them in the first section of code (delete the `#` sign from the one you want to use and insert `#` in front of the one you are finished with).
###Code
# Uncomment to try a different training model
training_file = 'haarcascade_frontalface_default.xml'
#training_file = 'haarcascade_frontalface_alt.xml'
#training_file = 'haarcascade_frontalface_alt2.xml'
#training_file = 'haarcascade_frontalface_alt_tree.xml'
# Load training model from GitHub
training_data_url = 'https://github.com/parulnith/Face-Detection-in-Python-using-OpenCV/raw/master/data/haarcascades/' + training_file
training_data = requests.get(training_data_url).text
# Save file on cloud server
with open(training_file, 'wt', encoding='utf-8') as file_object:
file_object.write(training_data)
haar_cascade_face = cv2.CascadeClassifier(training_file)
###Output
_____no_output_____
###Markdown
The final section of code retrieves the image from Wikimedia Commons via a URL, then loads it as a cv2 image. (The commented out code can be used if you are running the notebook locally and want to load the file from a path in your filesystem.)The `detect_faces()` function runs the machine learning model on the picture. The last line displays the image with rectangles superimposed to show any faces that were detected. The raw pixel rectangle data are available in the `faces_dict`, which you can view by uncommenting the print statement.
###Code
# URL of test image
image_url = 'https://upload.wikimedia.org/wikipedia/commons/9/9f/Beatles_ad_1965_just_the_beatles_crop.jpg'
#get image by url and load into cv2
resp = urllib.urlopen(image_url)
image = np.asarray(bytearray(resp.read()), dtype="uint8")
image = cv2.imdecode(image, cv2.IMREAD_COLOR)
# alternate code to load image from a local file
# image = cv2.imread(path)
#call the function to detect faces
faces, faces_dict = detect_faces(haar_cascade_face, image)
# uncomment to see the faces rectangles metadata
#print(json.dumps(faces_dict, indent = 2))
#convert to RGB and display image
plt.imshow(convertToRGB(faces))
###Output
_____no_output_____ |
first_last/Part5-Moonshot-Backtest-With-VIX-Filter.ipynb | ###Markdown
Disclaimer Moonshot Backtest With VIX FilterThe trading strategy implementation in `first_last.py` includes a `MIN_VIX` parameter, which, if set, applies the VIX filter from the previous notebook:```pythonif self.MIN_VIX: Query VIX at 15:30 NY time (= close of 14:00:00 bar because VIX is Chicago time) vix = get_prices("vix-30min", fields="Close", start_date=signals.index.min(), end_date=signals.index.max(), times="14:00:00") extract VIX and squeeze single-column DataFrame to Series vix = vix.loc["Close"].xs("14:00:00", level="Time").squeeze() reshape VIX like signals vix = signals.apply(lambda x: vix) signals = signals.where(vix >= self.MIN_VIX, 0)``` Edit the parameter in the strategy file and re-run the backtest, or set it on-the-fly as shown below:
###Code
from quantrocket.moonshot import backtest
backtest("first-last", params={"MIN_VIX": 20}, filepath_or_buffer="first_last_vix_filter.csv")
###Output
_____no_output_____
###Markdown
And view the performance:
###Code
from moonchart import Tearsheet
Tearsheet.from_moonshot_csv("first_last_vix_filter.csv")
###Output
_____no_output_____ |
notebooks/ESALB53/.ipynb_checkpoints/Compare_distributions-checkpoint.ipynb | ###Markdown
Figures and data analysis done for the ESLAB proceedings. https://www.cosmos.esa.int/web/53rd-eslab-symposium
###Code
import os
import numpy as np
import pandas as pd
import pystan
from astropy.table import Table
import matplotlib.pyplot as plt
%matplotlib inline
import corner
import random
import seaborn as sns
###Output
_____no_output_____
###Markdown
load csv files from ESA vo space
###Code
lensedQSO = pd.read_csv("http://vospace.esac.esa.int/vospace/sh/baf64b11fe35d35f18879b1d292b0c4b02286a?dl=1")
(lensedQSO.pmra/lensedQSO.pmra_error).hist(bins=20,range=(-10,10))
(lensedQSO.pmdec/lensedQSO.pmdec_error).hist(bins=20,range=(-10,10))
len(lensedQSO)
allwiseQSO = pd.read_csv("http://vospace.esac.esa.int/vospace/sh/d18d69255b40f4178ec5155a679a33e1dbddd37?dl=1")
len(allwiseQSO)
(allwiseQSO.pmra/allwiseQSO.pmra_error).hist(bins=20,range=(-10,10))
(allwiseQSO.pmdec/allwiseQSO.pmdec_error).hist(bins=20,range=(-10,10))
lensedQSO.head()
allwiseQSO.head()
###Output
_____no_output_____
###Markdown
Here we restrict to a random sample of the allwise QSOs to speed up the computation. The results might change slightly according to the selected sample.
###Code
lqsoNew = lensedQSO[np.isfinite(lensedQSO['pmra'])].copy()
qsoNew = allwiseQSO[np.isfinite(allwiseQSO['pmra'])].sample(n=10*len(lqsoNew)).copy()
def sigma2(ea,ed,c) :
""" the largest eigen value of the covariance matrix defined by
ea : right ascension error
ed : declination error
c : correlation
"""
res = np.power(ea,2) + np.power(ed,2)
res = res + np.sqrt(np.power(ea-ed,2) + np.power(2*ea*ed*c,2))
return res/2
def setMu(d):
"""
set mu, mu_error and mu_over_error, taking into account the correlation
"""
d['mu'] = np.sqrt(np.power(d.pmra,2)+np.power(d.pmdec,2))
d['mu_error'] = np.sqrt(sigma2(d.pmra_error,d.pmdec_error,d.pmra_pmdec_corr))
d['mu_over_error'] = d.mu/d.mu_error
setMu(lqsoNew)
setMu(qsoNew)
###Output
_____no_output_____
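###Markdown
As a quick numerical aside (a sketch added for illustration, not executed in the original run): if the closed form in `sigma2` implements the largest eigenvalue described in its docstring, it should agree with numpy's eigenvalue routine applied to the corresponding 2×2 covariance matrix.
###Code
# Sketch: compare sigma2 with numpy's eigenvalues for one example covariance matrix (example values).
ea, ed, c = 0.3, 0.5, 0.2
cov = np.array([[ea**2, c*ea*ed], [c*ea*ed, ed**2]])
print(sigma2(ea, ed, c), np.linalg.eigvalsh(cov).max())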
###Markdown
some plot
###Code
sns.kdeplot(qsoNew.mu_over_error,label='QSO')
sns.kdeplot(lqsoNew.mu_over_error,linestyle='--',label='LQSO')
plt.xlabel("$\mu \, / \, \sigma_\mu$")
plt.grid()
sns.kdeplot(np.log(qsoNew.mu),label='QSO')
sns.kdeplot(np.log(lqsoNew.mu),linestyle='--',label='LQSO')
plt.xlabel("$\log \,\mu $")
plt.ylabel("density")
plt.grid()
plt.savefig('log_mu_dist.png')
s = lqsoNew
sns.kdeplot(s.phot_g_mean_mag,s.mu_over_error,cmap="Oranges",
n_levels=6,cbar_kws={'label': 'LQSO'},cbar=True,linewidths=2)
s = qsoNew
sns.kdeplot(s.phot_g_mean_mag,s.mu_over_error,cmap="Blues",
n_levels=6,cbar_kws={'label': 'QSO'},cbar=True,linewidths=2)
plt.xlabel("g [mag]")
plt.ylim(-1,4)
plt.xlim(16,21)
plt.grid()
plt.ylabel("$\mu \, / \, \sigma_\mu$")
plt.savefig('lQSOvsQSO.png')
###Output
_____no_output_____
###Markdown
Model 1 A model to compare the distribution of the proper motion assuming a log normal distribution.
###Code
Nl = len(lqsoNew)
Nq = len(qsoNew)
bayesmod1 = """
data{
int<lower=0> Nq; //number of quasars
int<lower=0> Nl; //number of lens
vector[Nq] muqhat; //propermotion of qso
vector[Nl] mulhat; //propermotion of lens
vector<lower=0>[Nq] sigq; //error on pm of qso
vector<lower=0>[Nl] sigl; //error on pm of lens
}
parameters{
//population parameters
real mu1;
real mu2;
real<lower=0> sigma1;
real<lower=0> sigma2;
vector<lower=0>[Nq] muq; //propermotion of qso
vector<lower=0>[Nl] mul; //propermotion of lens
}
model{
// prior
mu1 ~ normal(0,1);
mu2 ~ normal(0,1);
sigma1 ~ normal(0,1);
sigma2 ~ normal(0,1);
//likelihood
muqhat ~ normal(muq, sigq);
mulhat ~ normal(mul, sigl);
muq ~ lognormal(mu1, sigma1);
mul ~ lognormal(mu2, sigma2);
}
"""
mixedData = {
'Nq': Nq,
'Nl': Nl,
'muqhat': qsoNew.mu,
'mulhat': lqsoNew.mu,
'sigq': qsoNew.mu_error,
'sigl': lqsoNew.mu_error
}
sm1 = pystan.StanModel(model_code=bayesmod1)
###Output
INFO:pystan:COMPILING THE C++ CODE FOR MODEL anon_model_34e24fecc14af65df3539389154d0073 NOW.
###Markdown
This could be run for longer, but 1 chain of 1000 samples already takes ~4 hrs with the full QSO sample.
###Code
fit1 = sm1.sampling(data=mixedData, iter=1000, chains=1)
params=fit1.extract()
res = pd.DataFrame()
for c in ['mu1','mu2','sigma1','sigma2']:
res[c] = params[c]
res.head()
fig = corner.corner(res,
labels=[r"$\mu_1$", r"$\mu_2$", r"$\sigma_1$", r"$\sigma_2$"],
quantiles=[0.16, 0.5, 0.84],
plot_contours=False, smooth=True)
plt.savefig('model1.png')
def summary(x):
names = {
'Q16' : x.quantile(0.16),
'Q50': x.quantile(0.5),  # median
'Q84': x.quantile(0.84),
'std': x.std()}
return pd.Series(names)
res.apply(summary).round(2)
print(res.apply(summary).round(2).to_latex())
###Output
\begin{tabular}{lrrrr}
\toprule
{} & mu1 & mu2 & sigma1 & sigma2 \\
\midrule
Q16 & -0.74 & -0.46 & 0.35 & 1.08 \\
Q50 & -0.70 & -0.33 & 0.38 & 1.16 \\
Q84 & -0.65 & -0.12 & 0.44 & 1.34 \\
std & 0.04 & 0.17 & 0.04 & 0.14 \\
\bottomrule
\end{tabular}
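###Markdown
As a short interpretation aid (a sketch, not executed in the original run): for a log-normal distribution the median is $e^{\mu}$, so the fitted location parameters in the Q50 row above translate into median proper-motion moduli in the same units as the catalogue proper motions.
###Code
# Sketch: convert the fitted log-normal location parameters into median proper motions (values from the table above).
print("median mu, QSO population :", np.exp(-0.70))
print("median mu, LQSO population:", np.exp(-0.33))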
###Markdown
Model 2 A model to compare the distributions of the proper-motion vectors, using a normal prior and a multivariate normal likelihood so that the full Gaia covariance information is used. We want to construct the posterior distribution of the population parameters $y=(\mu,\sigma)$ together with the true proper motions $\mu_i$, given the observed proper motions $\hat{\mu}_i$ and their covariance matrices $\Sigma_i$: $$\hat{\mu}_i \sim \mathcal{N}(\mu_i, \Sigma_i), \qquad \mu_i \sim \mathcal{N}(\mu, \sigma).$$
###Code
bayesmod2 = """
data{
int<lower=0> N; //number of objects
row_vector[2] pmhat[N]; //propermotion observed
cov_matrix[2] Sig[N]; //error on propermotion
}
parameters{
//population parameters
row_vector[2] mu;
row_vector<lower=0>[2] sigma;
row_vector[2] pm[N]; //true propermotion
}
model{
//priors on hyper params
mu ~ normal(0,1);
sigma ~ normal(0,1);
//observed proper motions
for(n in 1:N){
pm[n] ~ normal(mu, sigma);
pmhat[n] ~ multi_normal(pm[n], Sig[n]);
}
}
"""
sm2 = pystan.StanModel(model_code=bayesmod2)
###Output
INFO:pystan:COMPILING THE C++ CODE FOR MODEL anon_model_df0098f6db9784cae5cf50f026da1b15 NOW.
###Markdown
Test with toy data
###Code
#make some toy data
Nq = 100 #number of quasar
Nl = 100 #number of lens
muq = [-0.1,-0.1]#population parameters
mul = [0.3,0.4]
sigq = np.reshape([0.02**2,0,0,0.01**2],[2,2])
sigl = np.reshape([0.06**2,0,0,0.05**2],[2,2])
Sigmaq = np.reshape([0.01**2,0,0,0.01**2],[2,2])#observational uncertainty covariance matrix
Sigmal = np.reshape([0.03**2,0,0,0.04**2],[2,2])
# observed proper motions
pmq = np.empty([Nq, 2])
pmqhat = np.empty([Nq, 2])
for iq in np.arange(Nq):
pmq[iq, :] = np.random.multivariate_normal(muq, sigq)
pmqhat[iq,:] = np.random.multivariate_normal(pmq[iq], Sigmaq)
pml = np.empty([Nl, 2])
pmlhat = np.empty([Nl, 2])
for il in np.arange(Nl):
pml[il, :] = np.random.multivariate_normal(mul, sigl)
pmlhat[il,:] = np.random.multivariate_normal(pml[il], Sigmal)
qsodata={
'N': Nq,
'pmhat': pmqhat,
'Sig': np.dstack([[Sigmaq]*Nq]),
}
fitqso = sm2.sampling(data=qsodata, init='random', iter=2000, chains=1)
lqsodata={
'N': Nl,
'pmhat': pmlhat,
'Sig': np.dstack([[Sigmal]*Nl]),
}
fitlqso = sm2.sampling(data=lqsodata, init='0', iter=2000, chains=1)
def topd(params):
res = pd.DataFrame()
res['mu_r'] = params['mu'][:,0]
res['mu_d'] = params['mu'][:,1]
res['sigma_r'] = params['sigma'][:,0]
res['sigma_d'] = params['sigma'][:,1]
return res
resqso = topd(fitqso.extract())
reslqso = topd(fitlqso.extract())
resqso.head()
fig = corner.corner(resqso,
labels=[r"$\mu_q^r$",r"$\mu_q^d$",
r"$\sigma_q^r$",r"$\sigma_q^d$",
],
quantiles=[0.16, 0.5, 0.84],
plot_contours=False, smooth=True)
fig = corner.corner(reslqso,
labels=[r"$\mu_l^a$",r"$\mu_l^d$",
r"$\sigma_l^r$",r"$\sigma_l^d$",
],
quantiles=[0.16, 0.5, 0.84],
plot_contours=False, smooth=True)
resqso.apply(summary).round(2)
reslqso.apply(summary).round(2)
###Output
_____no_output_____
###Markdown
Now on real data
###Code
def is_pos_def(x): #check covariance matrices are positive definite
return np.all(np.linalg.eigvals(x)>0)
###Output
_____no_output_____
###Markdown
Lensed QSOs
###Code
Nl = len(lqsoNew)
dpmra2 = lqsoNew.pmra_error**2
dpmdec2 = lqsoNew.pmdec_error**2
dpmrapmdec = lqsoNew.pmra_pmdec_corr*lqsoNew.pmra_error*lqsoNew.pmdec_error
lqsodata={
'N': Nl,
'pmhat': np.dstack([lqsoNew.pmra, lqsoNew.pmdec])[0],
'Sig': np.reshape(np.dstack([dpmra2,dpmrapmdec, dpmrapmdec, dpmdec2]), [Nl,2,2])
}
fitlqso = sm2.sampling(data=lqsodata, iter=2000, chains=4)
###Output
/Users/abombrun/anaconda3/lib/python3.6/site-packages/pystan/misc.py:399: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
elif np.issubdtype(np.asarray(v).dtype, float):
###Markdown
On a QSO's sample
###Code
allwisenew2 = allwiseQSO.sample(n=Nl)
Nq = len(allwisenew2)
dpmra2 = allwisenew2.pmra_error**2
dpmdec2 = allwisenew2.pmdec_error**2
dpmrapmdec = allwisenew2.pmra_pmdec_corr*allwisenew2.pmra_error*allwisenew2.pmdec_error
qsodata={
'N': Nq,
'pmhat': np.dstack([allwisenew2.pmra, allwisenew2.pmdec])[0],
'Sig': np.reshape(np.dstack([dpmra2,dpmrapmdec, dpmrapmdec, dpmdec2]), [Nq,2,2])
}
fitqso = sm2.sampling(data=qsodata, iter=2000, chains=4)
###Output
/Users/abombrun/anaconda3/lib/python3.6/site-packages/pystan/misc.py:399: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
elif np.issubdtype(np.asarray(v).dtype, float):
###Markdown
plot summary
###Code
resqso = topd(fitqso.extract())
reslqso = topd(fitlqso.extract())
fig = corner.corner(resqso,
labels=[r"$\mu_q^r$",r"$\mu_q^d$",
r"$\sigma_q^r$",r"$\sigma_q^d$",
],
quantiles=[0.16, 0.5, 0.84],
plot_contours=False, smooth=True)
plt.savefig('model2_qso.png')
fig = corner.corner(reslqso,
labels=[r"$\mu_l^r$",r"$\mu_l^d$",
r"$\sigma_l^r$",r"$\sigma_l^d$",
],
quantiles=[0.16, 0.5, 0.84],
plot_contours=False, smooth=True)
plt.savefig('model2_lqso.png')
resqso.apply(summary).round(2)
reslqso.apply(summary).round(2)
print(resqso.apply(summary).round(2).to_latex())
print(reslqso.apply(summary).round(2).to_latex())
###Output
\begin{tabular}{lrrrr}
\toprule
{} & mu\_r & mu\_d & sigma\_r & sigma\_d \\
\midrule
Q16 & -0.32 & -0.59 & 0.85 & 1.66 \\
Q50 & -0.23 & -0.45 & 0.94 & 1.79 \\
Q84 & -0.08 & -0.19 & 1.09 & 2.02 \\
std & 0.12 & 0.21 & 0.12 & 0.18 \\
\bottomrule
\end{tabular}
|
notebooks/2_feature_extraction.ipynb | ###Markdown
2. Feature Extraction In this notebook we will extract the features needed for model training, using the event-labeled dataset produced by the data wrangling notebook. Setup Prerequisite
###Code
!pip install pyspark
from google.colab import drive
drive.mount('/content/drive')
###Output
Mounted at /content/drive
###Markdown
Import PySpark, Initialize Spark and Load the Dataframe
###Code
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import pyspark.sql.types as T
from functools import reduce
spark = SparkSession.builder.appName('sparkify') \
.config('spark.driver.maxResultSize', '3g') \
.getOrCreate()
df = spark.read.parquet("/content/drive/MyDrive/datasets/dsnd-sparkify/event_labeled.parquet")
df = df.drop('userIdTemp')
df.printSchema()
###Output
root
|-- userId: string (nullable = true)
|-- up_ts: timestamp (nullable = true)
|-- down_ts: string (nullable = true)
|-- isChurn: boolean (nullable = true)
|-- artist: string (nullable = true)
|-- auth: string (nullable = true)
|-- firstName: string (nullable = true)
|-- gender: string (nullable = true)
|-- itemInSession: long (nullable = true)
|-- lastName: string (nullable = true)
|-- length: double (nullable = true)
|-- level: string (nullable = true)
|-- location: string (nullable = true)
|-- method: string (nullable = true)
|-- page: string (nullable = true)
|-- registration: long (nullable = true)
|-- sessionId: long (nullable = true)
|-- song: string (nullable = true)
|-- status: long (nullable = true)
|-- userAgent: string (nullable = true)
|-- ts: timestamp (nullable = true)
###Markdown
Create Functions to Join Multiple Dataframes
###Code
def change_colname_join_df(join_df, suffix='_temp'):
'''
INPUT:
join_df - dataframe on the right side of join
suffix - added string on each column name
OUTPUT:
res_df - dataframe with renamed column
| col_1 | col_ 2 | --> | col_1_temp | col_ 2_temp |
'''
res_df = join_df
for col_name in join_df.columns:
res_df = res_df.withColumnRenamed(col_name, col_name + suffix)
return res_df
def remove_cols_suffix(df1, suffix="_temp"):
'''
INPUT:
df1 - dataframe with suffix column
OUTPUT:
result - dataframe without suffix column
| col_1_temp | col_ 2_temp | --> | col_1 | col_ 2 |
'''
for col in df1.columns:
if suffix in col:
df1 = df1.withColumnRenamed(col, col[:len(col) - len(suffix)])
return df1
def chain_and(df1, df2, key_cols, suffix="_temp"):
'''
DESCRIPTION:
create chaining and condition for joining
(df1.col1 == df2.col1_suffix) & (df1.col2 == df2.col2_suffix)
INPUT:
df1 - left dataframe
df2 - right dataframe
key_cols - columns name for join conditions
suffix - columns name suffix
OUTPUT:
res - chain and for join
'''
for i, col in enumerate(key_cols):
if i == 0:
res = df1[col] == df2[col + suffix]
else:
res = res & (df1[col] == df2[col + suffix])
return res
def chain_left_join(df1, dfs, key_cols, suffix="_temp",
path="/content/drive/MyDrive/datasets/dsnd-sparkify/join_temp"):
'''
INPUT:
df1 - left dataframe
dfs - list of right dataframes
key_cols - columns name for join conditions
suffix - columns name suffix
OUTPUT:
result - dataframe after chain join
'''
for i, df_temp in enumerate(dfs):
# get left dataframe
if i == 0:
res_df = df1
else:
res_df = spark.read.parquet(path + "_" + str(i - 1) + ".parquet")
# change right dataframe columns name
df_temp = change_colname_join_df(df_temp)
# join and save
res_df = res_df.join(df_temp, chain_and(res_df, df_temp, key_cols), how="left") \
.drop(key_cols[0] + suffix).drop(key_cols[1] + suffix)
res_df = remove_cols_suffix(res_df)
res_df.coalesce(1).write.mode("overwrite").parquet(path + "_" + str(i) + ".parquet")
return spark.read.parquet(path + "_" + str(len(dfs) - 1) + ".parquet")
###Output
_____no_output_____
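###Markdown
As a small illustration (an unexecuted sketch with made-up values), `chain_and` simply builds the composite join condition that `chain_left_join` later passes to `DataFrame.join`.
###Code
# Sketch: build and inspect the join condition for two tiny, hypothetical dataframes.
left = spark.createDataFrame([("u1", "2018-10-01", 10)], ["userId", "up_ts", "n_songs"])
right = change_colname_join_df(spark.createDataFrame([("u1", "2018-10-01", 3)], ["userId", "up_ts", "n_playlist"]))
cond = chain_and(left, right, ["userId", "up_ts"])
print(cond)  # a Column expression combining userId == userId_temp and up_ts == up_ts_temp
left.join(right, cond, how="left").show()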
###Markdown
Produce Features
###Code
# dataframe with the key columns ("userId" and "up_ts") and churn status
key_df = df.select(["userId", "up_ts", "isChurn"]).groupBy(["userId", "up_ts"]) \
.agg(F.max("isChurn").alias("is_churn"))
key_df.show(5)
# number of songs heard in one subscription
n_songs_play_df = df.filter(df.page == "NextSong").groupBy(["userId", "up_ts"]).count().withColumnRenamed("count", "n_songs")
n_songs_play_df.show(5)
# number of days in subscription
maxdate_df = df.select("ts").agg(F.max(df.ts))
datediff_df = df.select(["userId", "up_ts", "down_ts", "isChurn"]).dropDuplicates() \
.join(maxdate_df, ~df.userId.isNull(), how='left') \
.withColumn("datediff",
F.datediff(F.when(F.col("isChurn"), F.col("down_ts")).otherwise(F.col("max(ts)").cast(T.TimestampType())), df.up_ts)) \
.drop("max(ts)")
datediff_df.show(5)
# number of song played per day
jdf = change_colname_join_df(datediff_df)
songs_rate_df = df.filter(df.page == 'NextSong').groupBy(["userId", "up_ts"]).count() \
.withColumnRenamed("count", "song_count") \
.join(jdf, (df.up_ts == jdf.up_ts_temp) & (df.userId == jdf.userId_temp)) \
.drop("userId_temp", "up_ts_temp", "down_ts_temp", "isChurn_temp") \
.withColumn("song_rate", F.col("song_count") / F.when(F.col("datediff_temp") == 0, 1).otherwise(F.col("datediff_temp"))) \
.withColumnRenamed("datediff_temp", "subs_duration")
songs_rate_df.show(5)
# number of songs added to playlist
n_playlist_df = df.select(["userId", "up_ts", "page"]).filter(df.page =="Add to Playlist") \
.groupBy(["userId", "up_ts"]) \
.agg(F.count(F.col("page")).alias("n_playlist"))
n_playlist_df.show(5)
#number of thumbs up and down
tup_tdown_df = df.select(["userId", "up_ts", "down_ts", "isChurn", "page"]).filter(df.page.isin(["Thumbs Up", "Thumbs Down"])) \
.groupby(["userId", "up_ts", "page"]) \
.agg(F.count(F.col("page"))) \
.groupby(["userId", "up_ts"]) \
.pivot("page") \
.agg(F.first("count(page)")) \
.withColumnRenamed("Thumbs Down", "thumbs_down") \
.withColumnRenamed("Thumbs Up", "thumbs_up")
tup_tdown_df.show(5)
# average session length and number of session
session_df = df.groupBy(["userId", "up_ts", "sessionId"]) \
.agg(
F.min(df.ts).cast(T.LongType()).alias("min"),
F.max(df.ts).cast(T.LongType()).alias("max")
) \
.withColumn("diff", (F.col("max") - F.col("min"))) \
.groupBy(["userId", "up_ts"]) \
.agg(F.avg(F.col("diff")).alias("avg_sess_len"), F.count(F.col("sessionId")).alias("sess_count"))
session_df.show(5)
# platform for each user
device_df = df.select(["userId", "up_ts", "userAgent"]).withColumn("platform",
F.when(df.userAgent.contains("Macintosh"), "macos") \
.when(df.userAgent.contains("Windows"), "windows") \
.when(df.userAgent.contains("iPad"), "ipad") \
.when(df.userAgent.contains("iPhone"), "iphone") \
.when(df.userAgent.contains("Linux"), "linux")) \
.groupBy(["userId", "up_ts", "platform"]) \
.agg(F.count("platform")) \
.withColumn("isplatform", ~F.isnull(F.col("count(platform)"))) \
.groupby(["userId", "up_ts"]) \
.pivot("platform") \
.agg(F.max("isplatform")) \
.fillna(False)
device_df.show(5)
#state
state_df = df.select(["userId","up_ts","location"]) \
.withColumn("state", F.split(df.location, ', ')[1]) \
.groupBy(["userId", "up_ts", "state"]) \
.agg(~F.isnull(F.count(F.col("state")))) \
.withColumnRenamed("(NOT (count(state) IS NULL))", "isstate") \
.groupBy(["userId", "up_ts"]) \
.pivot("state") \
.agg(F.max("isstate")) \
.fillna(False)
state_df.show(5)
# number of error, number of friend added, number of cancellation confirmation page seen
pages_df = df.select(["userId", "up_ts", "page"]) \
.filter(F.col("page").isin(["Error", "Add Friend", "Cancellation Confirmation"])) \
.groupby(["userId", "up_ts"]) \
.agg(
F.count(F.when(F.col("page") == "Error", 1)).alias("n_error"),
F.count(F.when(F.col("page") == "Add Friend", 1)).alias("n_friend_add"),
F.count(F.when(F.col("page") == "Cancellation Confirmation", 1)).alias("n_cancel_page")
)
pages_df.show(5)
# number of unique song
n_unq_song_df = df.select(["userId", "up_ts", "song"]).distinct() \
.groupby(["userId", "up_ts"]) \
.agg(F.count("song").alias("n_unq_song"))
n_unq_song_df.show(5)
# number of unique artist
n_unq_artist_df = df.select(["userId", "up_ts", "artist"]).distinct() \
.groupby(["userId", "up_ts"]) \
.agg(F.count("artist").alias("n_unq_artist"))
n_unq_artist_df.show(5)
###Output
+-------+-------------------+------------+
| userId| up_ts|n_unq_artist|
+-------+-------------------+------------+
|1322258|2018-10-08 23:28:14| 741|
|1367536|2018-10-08 14:58:27| 1176|
|1421594|2018-10-09 18:22:30| 972|
|1431971|2018-11-17 16:39:25| 390|
|1468354|2018-10-24 02:01:02| 223|
+-------+-------------------+------------+
only showing top 5 rows
###Markdown
Chain-Join the Features and Save the Result
###Code
# join all features into one table
ml_df = chain_left_join(key_df,
[songs_rate_df, n_playlist_df, tup_tdown_df, session_df, device_df,
pages_df, n_unq_song_df, n_unq_artist_df],
["userId", "up_ts"]) \
.fillna(0)
ml_df.show(5)
ml_df.write.parquet("/content/drive/MyDrive/datasets/dsnd-sparkify/ml_df.parquet")
###Output
_____no_output_____ |
summer-2018-model/notebooks-sam/MergedChoiceTable-work.ipynb | ###Markdown
MergedChoiceTable feature testingSam Maurer, August 2018
###Code
import sys
print(sys.version)
import numpy as np
import pandas as pd
import random
import choicemodels
###Output
/Users/maurer/anaconda3/lib/python3.6/site-packages/statsmodels/compat/pandas.py:56: FutureWarning: The pandas.core.datetools module is deprecated and will be removed in a future version. Please use the pandas.tseries module instead.
from pandas.core import datetools
###Markdown
Performance comparison`random.choices`: replacement, optional weights `random.sample`: no replacement `np.random.choice`: optional replacement, optional weightsFor each one, draw 100 samples of 10 alternatives from a universe of 100,000
###Code
n = int(1e5)
vals = np.random.rand(n)
weights = np.random.rand(n)
scaled_weights = weights/weights.sum(0) # probs that sum to 1
%%timeit 3
for i in range(100):
random.choices(vals, k=10)
%%timeit 3
for i in range(100):
random.choices(vals, weights, k=10)
%%timeit 3
for i in range(100):
random.sample(vals.tolist(), k=10)
%%timeit 3
for i in range(100):
np.random.choice(vals, replace=True, size=10)
%%timeit 3
for i in range(100):
np.random.choice(vals, replace=False, size=10)
%%timeit 3
for i in range(100):
np.random.choice(vals, replace=True, p=scaled_weights, size=10)
%%timeit 3
for i in range(100):
np.random.choice(vals, replace=False, p=scaled_weights, size=10)
###Output
10 loops, best of 3: 78.8 ms per loop
###Markdown
Here are the winners, with times scaled to be relative:```1 ms replacement, core python 200 ms replacement with weights, numpy400 ms no replacement, numpy240 ms no replacement with weights, numpy```
###Code
# What's the real-world hit?
n = int(5e6)
vals = np.random.rand(n)
weights = np.random.rand(n)
scaled_weights = weights/weights.sum(0) # probs that sum to 1
%%timeit 3
for i in range(100):
np.random.choice(vals, replace=False, p=scaled_weights, size=100)
###Output
1 loop, best of 3: 5.39 s per loop
###Markdown
So drawing 100k samples of 100 without replacement from a universe of 5 million, with weights, would take about 90 minutes on a fast iMac (5.39 s per 100 draws × 1,000 ≈ 5,390 s). Integrating MCT with estimation
###Code
alts = pd.DataFrame(np.random.rand(10,2), columns=['b','c'])
print(len(alts))
alts.head(3)
n = 100
w = alts.c/alts.c.sum()
obs = pd.DataFrame({'a': np.random.rand(n),
'chosen': np.random.choice(range(len(alts)), n, p=w)})
print(len(obs))
obs.head(3)
mct = choicemodels.tools.MergedChoiceTable(obs, alts, 'chosen', sample_size=5, replace=False)
print(len(mct.to_frame()))
mct.to_frame().reset_index().head()
m = choicemodels.MultinomialLogit(mct.to_frame(),
observation_id_col = mct.observation_id_col,
choice_col = mct.choice_col,
model_expression = 'a + b + c')
m.fit()
###Output
_____no_output_____
###Markdown
ChoiceModels testing
###Code
obs = pd.DataFrame(np.random.rand(10,1), columns=['a'])
obs.head(3)
alts = pd.DataFrame(np.random.rand(5,2), columns=['b','weight'])
alts.head(3)
choicemodels.tools.MCT(obs, alts, sample_size=3).to_frame()
df = choicemodels.tools.MCT(obs, alts, sample_size=3, weights='weight').to_frame()
df
choicemodels.tools.MCT(obs, alts, sample_size=6, replace=False).to_frame()
isinstance("hello", str)
df[df.index.get_level_values('obs_id').isin([0])]
df[df.index.get_level_values('obs_id').isin([0])].weight
df.weight/df.weight.sum()
np.repeat([1,2,3], 4).tolist()
if True:
a = "yes"
print(a)
np.tile(np.append([1], np.repeat(0, 2)), 3)
[1] + [2]
print(np.append([1], np.repeat(0, 2)))
df = pd.DataFrame(np.random.randn(10000,4), columns=list('ABCD'))
df.head(3)
len(df.A.sample(50000, replace=True))
def a():
return
type(a)
callable(a)
w = pd.DataFrame({'a': [1,1,1], 'b': [1,2,3], 'c': [5,5,5]}).set_index(['a','b'])
w
w = w.loc[1]
w
w.loc[~w.index.isin([2])]
np.repeat(1, 3) * [True, False, True]
w.reset_index().set_index('b')
w = {'w': [1,1,100,25,25,25],
'oid': [0,0,0,1,1,1],
'aid': [0,1,2,0,1,2]}
wgt = pd.DataFrame(w).set_index(['oid','aid']).w
wgt
###Output
_____no_output_____ |
notebooks/Vaporization of water using microwave.ipynb | ###Markdown
In this notebook, we explore how much energy is needed to vaporize (using microwaves) a certain quantity of water. Raising the water temperature up to 100°C: the [specific heat](https://en.wikipedia.org/wiki/Heat_capacity) of water is $C_P$ = 4185.5 J/(kg⋅K), so the heating energy is $Q = m\,C_P\,\Delta T$. Given an initial temperature of 15°C
###Code
T_init = 15 # °C
T_vapo = 100 # °C
C_P = 4185.5 # J/(kg.K)
m_water = 1 # kg
nrj_100 = m_water * C_P * (T_vapo - T_init)
print('Energy [J] required to heat the water up to 100°C : {} J'.format(nrj_100))
###Output
Energy [J] required to heat the water up to 100°C : 355767.5 J
###Markdown
Vaporization of the water [Enthalpy of Vaporization](https://en.wikipedia.org/wiki/Enthalpy_of_vaporization) is the enthalpy change required to transform a given quantity of a substance from a liquid into a gas at a given pressure. For water at its normal boiling point of 100 ºC, the heat of vaporization is 2257 kJ/kg: this is the amount of energy required to convert 1 kg of water into 1 kg of vapor without a change in temperature.
###Code
C_v = 2257e3 # J/kg
nrj_vapor = m_water * C_v
print('Energy [J] required to convert {} kg of liquid water into vapor : {} J'.format(m_water, nrj_vapor))
print('Total energy required to vaporize {} kg of water at {}°C into vapor : {} J'.format(m_water, T_init, nrj_100+nrj_vapor))
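# Added sketch (not in the original notebook): a rough heating-time estimate.
# The 800 W magnetron power and the perfect-absorption assumption are hypothetical.
power_W = 800  # assumed microwave output power [W]
time_s = (nrj_100 + nrj_vapor) / power_W
print('At {} W it would take roughly {:.0f} s (~{:.0f} min) to vaporize the water'.format(power_W, time_s, time_s / 60))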
###Output
Total energy required to vaporize 1 kg of water at 15°C into vapor : 2612767.5 J
|
samples/6.PortalManagement.ipynb | ###Markdown
Batch processing of portal administration tasks * Environment * ArcGIS API for Python 1.5.3 * Python 3.6.8 * Jupyter Notebook Creating new users in bulk Import the modules and log in as an administrator
###Code
# Import the required modules
from arcgis.gis import *
# Log in as an administrator
user_name = "administrator user name"
my_gis = GIS("portal URL", username = user_name)
print(str(my_gis.properties.user.username) + " is now logged in.")
###Output
Enter password: ········
nakamura_dev_org is now logged in.
###Markdown
Create the new user accounts
###Code
# List of usernames for the accounts to be created
new_viewers = ["esrij_dev1", "esrij_dev2", "esrij_dev3"]
new_viewers
# Create the users by calling the UserManager.create() method in a for loop
# The UserManager class is a helper class of arcgis.gis and is available via arcgis.gis.users
for viewer in new_viewers:
my_gis.users.create(
username = viewer, password = viewer + '_test_pass',
firstname = viewer.split("_")[0], lastname = viewer.split("_")[1],
email = viewer + "@esrij.com", role = "viewer", user_type = "viewer"
)
###Output
_____no_output_____
###Markdown
--- Changing the owner of items
###Code
# Search for the current owner of the items
current_owner = my_gis.users.search("ads_enterprise_dev1")[0]
current_owner
# Check the items in the root folder
current_owner.items()
# Search for the user who will become the new owner of the items
target = my_gis.users.search("nakamura_dev_org")[0]
target
# Transfer ownership of all items and groups held by the current owner to the new owner, and delete the current owner's account at the same time
current_owner.delete(reassign_to = target)
# There is also a method that changes only the item owner without deleting the account
# current_owner.reassign_to(target)
# Confirm that ownership of the items has been transferred
target.items("ads_enterprise_dev1_root")
###Output
_____no_output_____ |
prediction/multitask/pre-training/function documentation generation/python/base_model.ipynb | ###Markdown
**Predict the documentation for python code using the codeTrans multitask training model**You can make free predictions online through this Link (when using the online prediction, you need to parse and tokenize the code first). **1. Load necessary libraries including huggingface transformers**
###Code
!pip install -q transformers sentencepiece
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
###Output
_____no_output_____
###Markdown
**2. Load the summarization pipeline and load it into the GPU if available**
###Code
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_python_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_python_multitask", skip_special_tokens=True),
device=0
)
###Output
/usr/local/lib/python3.6/dist-packages/transformers/models/auto/modeling_auto.py:852: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models.
FutureWarning,
###Markdown
**3. Give the code for summarization, parse and tokenize it**
###Code
code = "def e(message, exit_code=None):\n print_log(message, YELLOW, BOLD)\n if exit_code is not None:\n sys.exit(exit_code)" #@param {type:"raw"}
!pip install tree_sitter
!git clone https://github.com/tree-sitter/tree-sitter-python
from tree_sitter import Language, Parser
Language.build_library(
'build/my-languages.so',
['tree-sitter-python']
)
PYTHON_LANGUAGE = Language('build/my-languages.so', 'python')
parser = Parser()
parser.set_language(PYTHON_LANGUAGE)
def get_string_from_code(node, lines):
line_start = node.start_point[0]
line_end = node.end_point[0]
char_start = node.start_point[1]
char_end = node.end_point[1]
if line_start != line_end:
code_list.append(' '.join([lines[line_start][char_start:]] + lines[line_start+1:line_end] + [lines[line_end][:char_end]]))
else:
code_list.append(lines[line_start][char_start:char_end])
def my_traverse(node, code_list):
lines = code.split('\n')
if node.child_count == 0:
get_string_from_code(node, lines)
elif node.type == 'string':
get_string_from_code(node, lines)
else:
for n in node.children:
my_traverse(n, code_list)
return ' '.join(code_list)
tree = parser.parse(bytes(code, "utf8"))
code_list=[]
tokenized_code = my_traverse(tree.root_node, code_list)
print("Output after tokenization: " + tokenized_code)
###Output
Output after tokenization: def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )
###Markdown
**4. Make Prediction**
###Code
pipeline([tokenized_code])
###Output
Your max_length is set to 512, but you input_length is only 47. You might consider decreasing max_length manually, e.g. summarizer('...', max_length=50)
|
notebooks/devise/notebooks/05 - sentence embeddings with infersent.ipynb | ###Markdown
sentence embeddings with infersent[InferSent](https://github.com/facebookresearch/InferSent) is a sentence embedding model created by Facebook Research using the [SNLI](https://nlp.stanford.edu/projects/snli/) dataset. The whole thing has been released under a [non-commercial license](https://github.com/facebookresearch/InferSent/blob/master/LICENSE) and is starting to gain some traction as it's used in more and more interesting contexts. Unsurprisingly, sentence embeddings are word embeddings for sentences. When a sentence is passed through the network, it is assigned a position in sentence space in which other sentences with similar semantic meanings also sit. The 4096 dimensional feature vector which is produced can therefore be compared against other sentence vectors (for example with cosine distance, as we do below) to find semantically similar sentences.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("whitegrid")
plt.rcParams["figure.figsize"] = (20, 20)
import os
import json
import nltk
import numpy as np
import pandas as pd
from PIL import Image
from scipy.spatial.distance import cdist
from tqdm import tqdm_notebook as tqdm
import torch
from torch import nn, optim
from torch.utils.data import Dataset, DataLoader
from torchvision import models, transforms
nltk.download("punkt")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
###Output
_____no_output_____
###Markdown
load InferSent modelWe've stored the relevant infersent code locally in `InferSent.py` so that it can be intuitively imported (as below), but the original can be found as `models.py` in the source repo. We also need to load the model weights in `infersent2.pkl` and the word vectors on which the model was trained from `crawl-300d-2M.vec`. The InferSent API is simple enough to use, and in only a few lines of code we have a working sentence embedding model. Note that this _is_ a model - we're not loading a dictionary and just looking up known keys here as we do with most word vectors. Each time we call `infersent_model.encode()`, the text is passed through a neural network to produce a new, unique embedding which the model had not necessarily seen as part of its training.
###Code
from InferSent import InferSent
MODEL_PATH = "/mnt/efs/models/infersent2.pkl"
params_model = {
"bsize": 1024,
"word_emb_dim": 300,
"enc_lstm_dim": 2048,
"pool_type": "max",
"dpout_model": 0.0,
"version": 2,
}
infersent_model = InferSent(params_model)
infersent_model.load_state_dict(torch.load(MODEL_PATH))
W2V_PATH = "/mnt/efs/nlp/word_vectors/fasttext/crawl-300d-2M.vec"
infersent_model.set_w2v_path(W2V_PATH)
infersent_model.build_vocab_k_words(K=100000)
infersent_model = infersent_model.to(device)
###Output
_____no_output_____
###Markdown
load coco captionsWe'll use the captions from the well known [COCO dataset](http://cocodataset.org/) to demonstrate InferSent's effectiveness.
###Code
with open("/mnt/efs/images/coco/annotations/captions_val2014.json") as f:
meta = json.load(f)
captions = pd.DataFrame(meta["annotations"]).set_index("image_id")["caption"].values
###Output
_____no_output_____
###Markdown
embed captions with infersent
###Code
embeddings = infersent_model.encode(captions, tokenize=True)
index = np.random.choice(len(captions))
embedding = embeddings[index].reshape(1, -1)
query_caption = captions[index]
query_caption
distances = cdist(embedding, embeddings, "cosine").squeeze()
closest_captions = captions[np.argsort(distances)]
closest_captions[:10]
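# Optional aside (not in the original notebook): a rough TF-IDF retrieval baseline to
# contrast with the InferSent ranking above. Assumes scikit-learn is installed, which
# this notebook does not otherwise use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances

vectorizer = TfidfVectorizer()
caption_tfidf = vectorizer.fit_transform(captions)
query_tfidf = vectorizer.transform([query_caption])
tfidf_distances = cosine_distances(query_tfidf, caption_tfidf).squeeze()
captions[np.argsort(tfidf_distances)][:10]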
###Output
_____no_output_____
###Markdown
The example above shows the advantage of modern sentence embedding models, which integrate the semantic meaning encoded in word vectors, over traditional retrieval methods like TF-IDF or BM25.A great example is the query `'a rainbow is in the sky over an empty stretch of road'`. The fourth result (following a few about rainbows) is `'there is a green street light hanging over this empty intersection'`.Very few of the most significant words in those sentences are exact matches, but the scenes they describe are extremely similar. where infersent breaksWhile infersent is capable of encoding an incredible amount of subtlety in medium length sequences, it really struggles to encode that same level of meaning in short sequences.
###Code
single_word_embedding = infersent_model.encode(["doctor"])
distances = cdist(single_word_embedding, embeddings, "cosine").squeeze()
closest_captions = captions[np.argsort(distances)]
closest_captions[:10]
###Output
_____no_output_____ |
notebooks/Coursework_4_part7_rnn_batch_norm.ipynb | ###Markdown
2017Machine Learning PracticalUniversity of EdinburghGeorgios Pligoropoulos - s1687568Coursework 4 (part 7) Imports, Inits, and helper functions
###Code
jupyterNotebookEnabled = True
plotting = True
coursework, part = 4, 7
saving = True
if jupyterNotebookEnabled:
#%load_ext autoreload
%reload_ext autoreload
%autoreload 2
import sys, os
mlpdir = os.path.expanduser(
'~/[email protected]/msc_Artificial_Intelligence/mlp_Machine_Learning_Practical/mlpractical'
)
sys.path.append(mlpdir)
from collections import OrderedDict
from __future__ import division
import skopt
from mylibs.jupyter_notebook_helper import show_graph
import datetime
import os
import time
import tensorflow as tf
import numpy as np
from mlp.data_providers import MSD10GenreDataProvider, MSD25GenreDataProvider,\
MSD10Genre_Autoencoder_DataProvider, MSD10Genre_StackedAutoEncoderDataProvider
import matplotlib.pyplot as plt
%matplotlib inline
from mylibs.batch_norm import fully_connected_layer_with_batch_norm_and_l2
from mylibs.stacked_autoencoder_pretrainer import \
constructModelFromPretrainedByAutoEncoderStack,\
buildGraphOfStackedAutoencoder, executeNonLinearAutoencoder
from mylibs.jupyter_notebook_helper import getRunTime, getTrainWriter, getValidWriter,\
plotStats, initStats, gatherStats
from mylibs.tf_helper import tfRMSE, tfMSE, fully_connected_layer
#trainEpoch, validateEpoch
from mylibs.py_helper import merge_dicts
from mylibs.dropout_helper import constructProbs
from mylibs.batch_norm import batchNormWrapper_byExponentialMovingAvg,\
fully_connected_layer_with_batch_norm
import pickle
from skopt.plots import plot_convergence
from mylibs.jupyter_notebook_helper import DynStats
import operator
from skopt.space.space import Integer, Categorical
from skopt import gp_minimize
from rnn.rnn_batch_norm import RNNBatchNorm
seed = 16011984
rng = np.random.RandomState(seed=seed)
config = tf.ConfigProto(log_device_placement=True, allow_soft_placement=True)
config.gpu_options.allow_growth = True
figcount = 0
tensorboardLogdir = 'tf_cw%d_%d' % (coursework, part)
curDtype = tf.float32
reluBias = 0.1
batch_size = 50
num_steps = 6 # number of truncated backprop steps ('n' in the discussion above)
#num_classes = 2
state_size = 10 #each state is represented with a certain width, a vector
learningRate = 1e-4 #default of Adam is 1e-3
#momentum = 0.5
#lamda2 = 1e-2
best_params_filename = 'best_params_rnn.npy'
###Output
_____no_output_____
###Markdown
Here the state size is equal to the number of classes because we have given all the responsibility to the last output.We are going to follow a repetitive process: for example, if num_steps=6, then we break the 120 segments into 20 parts.The output of each part will be the genre, and we compare against the genre at every part. MSD 10 genre task
###Code
segmentCount = 120
segmentLen = 25
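# e.g. with num_steps = 6 the 120 segments are consumed as 120 // 6 = 20 truncated-backprop
# chunks, and the genre target is scored at the end of every chunk (as described above)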
from rnn.msd10_data_providers import MSD10Genre_120_rnn_DataProvider
###Output
_____no_output_____
###Markdown
Experiment with Best Parameters
###Code
best_params = np.load(best_params_filename)
best_params
(state_size, num_steps) = best_params
(state_size, num_steps)
rnnModel = RNNBatchNorm(batch_size=batch_size, rng=rng, dtype = curDtype, config=config,
segment_count=segmentCount, segment_len= segmentLen)
show_graph(rnnModel.getGraph(num_steps=num_steps, state_size=state_size))
%%time
epochs = 100
stats, keys = rnnModel.run_rnn(state_size = state_size, num_steps=num_steps,
epochs = epochs)
if plotting:
fig_1, ax_1, fig_2, ax_2 = plotStats(stats, keys)
plt.show()
if saving:
figcount += 1
fig_1.savefig('cw%d_part%d_%02d_fig_error.svg' % (coursework, part, figcount))
fig_2.savefig('cw%d_part%d_%02d_fig_valid.svg' % (coursework, part, figcount))
print max(stats[:, -1]) #maximum validation accuracy
###Output
epochs: 100
rnn steps: 4
state size: 341
End epoch 01 (139.139 secs): err(train)=1.52, acc(train)=0.47, err(valid)=1.86, acc(valid)=0.33,
End epoch 02 (136.111 secs): err(train)=1.47, acc(train)=0.49, err(valid)=1.74, acc(valid)=0.39,
End epoch 03 (136.024 secs): err(train)=1.44, acc(train)=0.50, err(valid)=1.76, acc(valid)=0.39,
End epoch 04 (135.718 secs): err(train)=1.42, acc(train)=0.51, err(valid)=1.71, acc(valid)=0.40,
End epoch 05 (136.560 secs): err(train)=1.39, acc(train)=0.51, err(valid)=1.59, acc(valid)=0.44,
End epoch 06 (134.916 secs): err(train)=1.36, acc(train)=0.53, err(valid)=1.61, acc(valid)=0.43,
End epoch 07 (134.824 secs): err(train)=1.33, acc(train)=0.54, err(valid)=1.62, acc(valid)=0.44,
End epoch 08 (134.745 secs): err(train)=1.31, acc(train)=0.54, err(valid)=1.53, acc(valid)=0.46,
End epoch 09 (134.994 secs): err(train)=1.30, acc(train)=0.55, err(valid)=1.51, acc(valid)=0.46,
End epoch 10 (135.173 secs): err(train)=1.28, acc(train)=0.55, err(valid)=1.47, acc(valid)=0.49,
End epoch 11 (135.006 secs): err(train)=1.26, acc(train)=0.56, err(valid)=1.50, acc(valid)=0.48,
End epoch 12 (134.699 secs): err(train)=1.24, acc(train)=0.57, err(valid)=1.50, acc(valid)=0.49,
End epoch 13 (135.139 secs): err(train)=1.22, acc(train)=0.57, err(valid)=1.48, acc(valid)=0.48,
End epoch 14 (134.665 secs): err(train)=1.21, acc(train)=0.58, err(valid)=1.43, acc(valid)=0.50,
End epoch 15 (135.003 secs): err(train)=1.19, acc(train)=0.59, err(valid)=1.46, acc(valid)=0.49,
End epoch 16 (134.754 secs): err(train)=1.18, acc(train)=0.59, err(valid)=1.42, acc(valid)=0.50,
End epoch 17 (135.116 secs): err(train)=1.17, acc(train)=0.59, err(valid)=1.38, acc(valid)=0.52,
End epoch 18 (135.073 secs): err(train)=1.16, acc(train)=0.60, err(valid)=1.42, acc(valid)=0.51,
End epoch 19 (135.044 secs): err(train)=1.15, acc(train)=0.60, err(valid)=1.41, acc(valid)=0.51,
End epoch 20 (134.449 secs): err(train)=1.14, acc(train)=0.60, err(valid)=1.36, acc(valid)=0.53,
End epoch 21 (133.914 secs): err(train)=1.14, acc(train)=0.61, err(valid)=1.32, acc(valid)=0.55,
End epoch 22 (132.886 secs): err(train)=1.12, acc(train)=0.61, err(valid)=1.39, acc(valid)=0.52,
End epoch 23 (132.495 secs): err(train)=1.12, acc(train)=0.61, err(valid)=1.37, acc(valid)=0.52,
End epoch 24 (132.171 secs): err(train)=1.11, acc(train)=0.61, err(valid)=1.36, acc(valid)=0.53,
End epoch 25 (132.283 secs): err(train)=1.10, acc(train)=0.62, err(valid)=1.33, acc(valid)=0.54,
End epoch 26 (132.222 secs): err(train)=1.10, acc(train)=0.62, err(valid)=1.31, acc(valid)=0.55,
End epoch 27 (132.315 secs): err(train)=1.10, acc(train)=0.62, err(valid)=1.34, acc(valid)=0.54,
End epoch 28 (132.404 secs): err(train)=1.09, acc(train)=0.62, err(valid)=1.36, acc(valid)=0.53,
End epoch 29 (132.017 secs): err(train)=1.08, acc(train)=0.63, err(valid)=1.27, acc(valid)=0.56,
End epoch 30 (132.469 secs): err(train)=1.08, acc(train)=0.63, err(valid)=1.29, acc(valid)=0.56,
End epoch 31 (132.075 secs): err(train)=1.08, acc(train)=0.63, err(valid)=1.31, acc(valid)=0.55,
End epoch 32 (132.539 secs): err(train)=1.07, acc(train)=0.63, err(valid)=1.28, acc(valid)=0.55,
End epoch 33 (132.164 secs): err(train)=1.06, acc(train)=0.63, err(valid)=1.30, acc(valid)=0.55,
End epoch 34 (132.666 secs): err(train)=1.06, acc(train)=0.63, err(valid)=1.27, acc(valid)=0.56,
End epoch 35 (132.103 secs): err(train)=1.06, acc(train)=0.63, err(valid)=1.33, acc(valid)=0.54,
End epoch 36 (132.601 secs): err(train)=1.05, acc(train)=0.63, err(valid)=1.29, acc(valid)=0.55,
End epoch 37 (132.110 secs): err(train)=1.05, acc(train)=0.63, err(valid)=1.26, acc(valid)=0.56,
End epoch 38 (132.868 secs): err(train)=1.05, acc(train)=0.64, err(valid)=1.30, acc(valid)=0.55,
End epoch 39 (132.071 secs): err(train)=1.04, acc(train)=0.64, err(valid)=1.28, acc(valid)=0.56,
End epoch 40 (132.669 secs): err(train)=1.04, acc(train)=0.64, err(valid)=1.25, acc(valid)=0.57,
End epoch 41 (132.112 secs): err(train)=1.03, acc(train)=0.64, err(valid)=1.25, acc(valid)=0.57,
End epoch 42 (132.737 secs): err(train)=1.03, acc(train)=0.65, err(valid)=1.26, acc(valid)=0.56,
End epoch 43 (132.173 secs): err(train)=1.03, acc(train)=0.64, err(valid)=1.30, acc(valid)=0.55,
End epoch 44 (132.676 secs): err(train)=1.03, acc(train)=0.64, err(valid)=1.26, acc(valid)=0.56,
End epoch 45 (132.121 secs): err(train)=1.02, acc(train)=0.65, err(valid)=1.22, acc(valid)=0.58,
End epoch 46 (133.281 secs): err(train)=1.01, acc(train)=0.65, err(valid)=1.24, acc(valid)=0.57,
End epoch 47 (132.024 secs): err(train)=1.01, acc(train)=0.65, err(valid)=1.22, acc(valid)=0.58,
End epoch 48 (132.662 secs): err(train)=1.01, acc(train)=0.65, err(valid)=1.28, acc(valid)=0.56,
End epoch 49 (132.112 secs): err(train)=1.01, acc(train)=0.65, err(valid)=1.23, acc(valid)=0.58,
End epoch 50 (132.779 secs): err(train)=1.00, acc(train)=0.66, err(valid)=1.29, acc(valid)=0.56,
End epoch 51 (132.191 secs): err(train)=1.00, acc(train)=0.66, err(valid)=1.26, acc(valid)=0.56,
End epoch 52 (132.774 secs): err(train)=1.00, acc(train)=0.65, err(valid)=1.27, acc(valid)=0.57,
End epoch 53 (132.140 secs): err(train)=1.00, acc(train)=0.65, err(valid)=1.29, acc(valid)=0.55,
End epoch 54 (132.859 secs): err(train)=1.00, acc(train)=0.66, err(valid)=1.25, acc(valid)=0.57,
End epoch 55 (132.248 secs): err(train)=0.99, acc(train)=0.66, err(valid)=1.22, acc(valid)=0.58,
End epoch 56 (132.784 secs): err(train)=0.99, acc(train)=0.66, err(valid)=1.28, acc(valid)=0.56,
End epoch 57 (132.140 secs): err(train)=0.99, acc(train)=0.66, err(valid)=1.24, acc(valid)=0.58,
End epoch 58 (132.740 secs): err(train)=0.98, acc(train)=0.66, err(valid)=1.19, acc(valid)=0.59,
End epoch 59 (132.080 secs): err(train)=0.98, acc(train)=0.66, err(valid)=1.25, acc(valid)=0.58,
End epoch 60 (132.877 secs): err(train)=0.98, acc(train)=0.66, err(valid)=1.21, acc(valid)=0.58,
End epoch 61 (132.083 secs): err(train)=0.98, acc(train)=0.66, err(valid)=1.25, acc(valid)=0.57,
End epoch 62 (132.842 secs): err(train)=0.97, acc(train)=0.66, err(valid)=1.20, acc(valid)=0.59,
End epoch 63 (132.143 secs): err(train)=0.97, acc(train)=0.66, err(valid)=1.22, acc(valid)=0.59,
End epoch 64 (132.388 secs): err(train)=0.97, acc(train)=0.66, err(valid)=1.24, acc(valid)=0.58,
End epoch 65 (132.199 secs): err(train)=0.97, acc(train)=0.67, err(valid)=1.26, acc(valid)=0.57,
End epoch 66 (132.392 secs): err(train)=0.96, acc(train)=0.67, err(valid)=1.22, acc(valid)=0.59,
End epoch 67 (132.141 secs): err(train)=0.96, acc(train)=0.67, err(valid)=1.24, acc(valid)=0.58,
End epoch 68 (132.522 secs): err(train)=0.96, acc(train)=0.67, err(valid)=1.22, acc(valid)=0.59,
End epoch 69 (132.546 secs): err(train)=0.96, acc(train)=0.67, err(valid)=1.22, acc(valid)=0.58,
End epoch 70 (132.253 secs): err(train)=0.96, acc(train)=0.67, err(valid)=1.20, acc(valid)=0.59,
End epoch 71 (132.534 secs): err(train)=0.96, acc(train)=0.67, err(valid)=1.18, acc(valid)=0.60,
End epoch 72 (132.317 secs): err(train)=0.96, acc(train)=0.67, err(valid)=1.19, acc(valid)=0.60,
End epoch 73 (132.451 secs): err(train)=0.95, acc(train)=0.67, err(valid)=1.19, acc(valid)=0.60,
End epoch 74 (132.243 secs): err(train)=0.95, acc(train)=0.67, err(valid)=1.19, acc(valid)=0.59,
End epoch 75 (132.654 secs): err(train)=0.95, acc(train)=0.67, err(valid)=1.23, acc(valid)=0.59,
End epoch 76 (132.255 secs): err(train)=0.95, acc(train)=0.67, err(valid)=1.17, acc(valid)=0.60,
End epoch 77 (132.654 secs): err(train)=0.95, acc(train)=0.68, err(valid)=1.25, acc(valid)=0.59,
End epoch 78 (132.187 secs): err(train)=0.94, acc(train)=0.67, err(valid)=1.16, acc(valid)=0.60,
End epoch 79 (132.895 secs): err(train)=0.94, acc(train)=0.67, err(valid)=1.23, acc(valid)=0.58,
End epoch 80 (132.320 secs): err(train)=0.94, acc(train)=0.68, err(valid)=1.24, acc(valid)=0.58,
End epoch 81 (132.546 secs): err(train)=0.94, acc(train)=0.67, err(valid)=1.20, acc(valid)=0.59,
End epoch 82 (132.374 secs): err(train)=0.94, acc(train)=0.68, err(valid)=1.19, acc(valid)=0.60,
End epoch 83 (132.774 secs): err(train)=0.93, acc(train)=0.68, err(valid)=1.23, acc(valid)=0.58,
End epoch 84 (132.262 secs): err(train)=0.94, acc(train)=0.67, err(valid)=1.20, acc(valid)=0.60,
End epoch 85 (132.672 secs): err(train)=0.93, acc(train)=0.68, err(valid)=1.19, acc(valid)=0.60,
End epoch 86 (132.240 secs): err(train)=0.93, acc(train)=0.68, err(valid)=1.18, acc(valid)=0.60,
End epoch 87 (132.643 secs): err(train)=0.93, acc(train)=0.68, err(valid)=1.20, acc(valid)=0.59,
End epoch 88 (132.271 secs): err(train)=0.93, acc(train)=0.68, err(valid)=1.18, acc(valid)=0.61,
End epoch 89 (132.771 secs): err(train)=0.93, acc(train)=0.68, err(valid)=1.21, acc(valid)=0.59,
End epoch 90 (132.536 secs): err(train)=0.92, acc(train)=0.68, err(valid)=1.22, acc(valid)=0.59,
End epoch 91 (136.182 secs): err(train)=0.92, acc(train)=0.68, err(valid)=1.19, acc(valid)=0.61,
End epoch 92 (134.153 secs): err(train)=0.92, acc(train)=0.68, err(valid)=1.20, acc(valid)=0.60,
End epoch 93 (134.401 secs): err(train)=0.92, acc(train)=0.68, err(valid)=1.19, acc(valid)=0.60,
End epoch 94 (132.846 secs): err(train)=0.92, acc(train)=0.68, err(valid)=1.21, acc(valid)=0.59,
End epoch 95 (133.465 secs): err(train)=0.92, acc(train)=0.68, err(valid)=1.14, acc(valid)=0.61,
End epoch 96 (133.097 secs): err(train)=0.92, acc(train)=0.68, err(valid)=1.18, acc(valid)=0.60,
End epoch 97 (141.926 secs): err(train)=0.91, acc(train)=0.68, err(valid)=1.19, acc(valid)=0.60,
End epoch 98 (139.937 secs): err(train)=0.91, acc(train)=0.68, err(valid)=1.18, acc(valid)=0.61,
End epoch 99 (135.033 secs): err(train)=0.91, acc(train)=0.68, err(valid)=1.21, acc(valid)=0.60,
End epoch 100 (132.227 secs): err(train)=0.91, acc(train)=0.69, err(valid)=1.16, acc(valid)=0.61,
|
examples/02_X_gate.ipynb | ###Markdown
QISkit Example 02: Apply X-gate, Plot histogram* Create a quantum circuit * Apply x-gate* Visualize the circuit* Run the program* Get the results* Plot the results
###Code
# Creating quantum circuits
from qiskit import QuantumRegister, QuantumCircuit, ClassicalRegister
from qiskit import execute, Aer
from qiskit.tools.visualization import circuit_drawer, plot_histogram
qr = QuantumRegister(2)
cr = ClassicalRegister(2)
qc = QuantumCircuit(qr, cr)
qc.x(qr[0])
qc.measure(qr, cr)
circuit_drawer(qc)
# Run the circuit
job = execute(qc, backend=Aer.get_backend('qasm_simulator'), shots=1024)
result = job.result()
counts = result.get_counts()
plot_histogram(counts)
print("Counts dictionary:", counts )
print("Probability = ", counts['01']/1024)
###Output
_____no_output_____ |
classification/Programming-And-Assignments/module-9-precision-recall-assignment-blank.ipynb | ###Markdown
Exploring precision and recallThe goal of this second notebook is to understand precision-recall in the context of classifiers. * Use Amazon review data in its entirety. * Train a logistic regression model. * Explore various evaluation metrics: accuracy, confusion matrix, precision, recall. * Explore how various metrics can be combined to produce a cost of making an error. * Explore precision and recall curves. Because we are using the full Amazon review dataset (not a subset of words or reviews), in this assignment we return to using GraphLab Create for its efficiency. As usual, let's start by **firing up GraphLab Create**.Make sure you have the latest version of GraphLab Create (1.8.3 or later). If you don't find the decision tree module, then you would need to upgrade graphlab-create using``` pip install graphlab-create --upgrade```See [this page](https://dato.com/download/) for detailed instructions on upgrading.
###Code
import graphlab
from __future__ import division
import numpy as np
graphlab.canvas.set_target('ipynb')
###Output
_____no_output_____
###Markdown
Load amazon review dataset
###Code
products = graphlab.SFrame('amazon_baby.gl/')
###Output
2016-04-15 11:07:05,529 [INFO] graphlab.cython.cy_server, 176: GraphLab Create v1.8.5 started. Logging: C:\Users\ADMINI~1\AppData\Local\Temp\graphlab_server_1460689621.log.0
###Markdown
Extract word counts and sentiments As in the first assignment of this course, we compute the word counts for individual words and extract positive and negative sentiments from ratings. To summarize, we perform the following:1. Remove punctuation.2. Remove reviews with "neutral" sentiment (rating 3).3. Set reviews with rating 4 or more to be positive and those with 2 or less to be negative.
###Code
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
# Remove punctuation.
review_clean = products['review'].apply(remove_punctuation)
# Count words
products['word_count'] = graphlab.text_analytics.count_words(review_clean)
# Drop neutral sentiment reviews.
products = products[products['rating'] != 3]
# Positive sentiment to +1 and negative sentiment to -1
products['sentiment'] = products['rating'].apply(lambda rating : +1 if rating > 3 else -1)
###Output
_____no_output_____
###Markdown
Now, let's remember what the dataset looks like by taking a quick peek:
###Code
products
###Output
_____no_output_____
###Markdown
Split data into training and test setsWe split the data into a 80-20 split where 80% is in the training set and 20% is in the test set.
###Code
train_data, test_data = products.random_split(.8, seed=1)
###Output
_____no_output_____
###Markdown
Train a logistic regression classifierWe will now train a logistic regression classifier with **sentiment** as the target and **word_count** as the features. We will set `validation_set=None` to make sure everyone gets exactly the same results. Remember, even though we now know how to implement logistic regression, we will use GraphLab Create for its efficiency at processing this Amazon dataset in its entirety. The focus of this assignment is instead on the topic of precision and recall.
###Code
model = graphlab.logistic_classifier.create(train_data, target='sentiment',
features=['word_count'],
validation_set=None)
###Output
_____no_output_____
###Markdown
Model Evaluation We will explore the advanced model evaluation concepts that were discussed in the lectures. AccuracyOne performance metric we will use for our more advanced exploration is accuracy, which we have seen many times in past assignments. Recall that the accuracy is given by$$\mbox{accuracy} = \frac{\mbox{ correctly classified data points}}{\mbox{ total data points}}$$To obtain the accuracy of our trained models using GraphLab Create, simply pass the option `metric='accuracy'` to the `evaluate` function. We compute the **accuracy** of our logistic regression model on the **test_data** as follows:
###Code
accuracy= model.evaluate(test_data, metric='accuracy')['accuracy']
print "Test Accuracy: %s" % accuracy
###Output
Test Accuracy: 0.914536837053
###Markdown
Baseline: Majority class predictionRecall from an earlier assignment that we used the **majority class classifier** as a baseline (i.e reference) model for a point of comparison with a more sophisticated classifier. The majority classifier model predicts the majority class for all data points. Typically, a good model should beat the majority class classifier. Since the majority class in this dataset is the positive class (i.e., there are more positive than negative reviews), the accuracy of the majority class classifier can be computed as follows:
###Code
baseline = len(test_data[test_data['sentiment'] == 1])/len(test_data)
print "Baseline accuracy (majority class classifier): %s" % baseline
###Output
Baseline accuracy (majority class classifier): 0.842782577394
###Markdown
**Quiz Question:** Using accuracy as the evaluation metric, was our **logistic regression model** better than the baseline (majority class classifier)? Confusion MatrixThe accuracy, while convenient, does not tell the whole story. For a fuller picture, we turn to the **confusion matrix**. In the case of binary classification, the confusion matrix is a 2-by-2 matrix laying out correct and incorrect predictions made in each label as follows:
```
              +---------------------------------------------+
              |                Predicted label               |
              +----------------------+----------------------+
              |        (+1)          |        (-1)          |
+-------+-----+----------------------+----------------------+
| True  |(+1) | # of true positives  | # of false negatives |
| label +-----+----------------------+----------------------+
|       |(-1) | # of false positives | # of true negatives  |
+-------+-----+----------------------+----------------------+
```
To print out the confusion matrix for a classifier, use `metric='confusion_matrix'`:
###Code
confusion_matrix = model.evaluate(test_data, metric='confusion_matrix')['confusion_matrix']
confusion_matrix
###Output
_____no_output_____
###Markdown
**Quiz Question**: How many predicted values in the **test set** are **false positives**?
###Code
fa_pos = confusion_matrix[2]['count']
fa_neg = confusion_matrix[3]['count']
print fa_pos
print fa_neg
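# A more robust way to read the cells (sketch, not part of the assignment): select by
# label instead of by row position, since the row order of the confusion-matrix SFrame
# returned by evaluate() is not guaranteed. Column names assumed as produced by GraphLab Create.
cm = confusion_matrix
false_positives = cm[(cm['target_label'] == -1) & (cm['predicted_label'] == +1)]['count'][0]
false_negatives = cm[(cm['target_label'] == +1) & (cm['predicted_label'] == -1)]['count'][0]
print false_positives, false_negatives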
###Output
1406
26689
###Markdown
Computing the cost of mistakesPut yourself in the shoes of a manufacturer that sells a baby product on Amazon.com and you want to monitor your product's reviews in order to respond to complaints. Even a few negative reviews may generate a lot of bad publicity about the product. So you don't want to miss any reviews with negative sentiments --- you'd rather put up with false alarms about potentially negative reviews instead of missing negative reviews entirely. In other words, **false positives cost more than false negatives**. (It may be the other way around for other scenarios, but let's stick with the manufacturer's scenario for now.)Suppose you know the costs involved in each kind of mistake: 1. \$100 for each false positive.2. \$1 for each false negative.3. Correctly classified reviews incur no cost.**Quiz Question**: Given the stipulation, what is the cost associated with the logistic regression classifier's performance on the **test set**?
###Code
cost = fa_pos * 100 + fa_neg * 1
print cost
###Output
167289
###Markdown
Precision and Recall You may not have exact dollar amounts for each kind of mistake. Instead, you may simply prefer to reduce the percentage of false positives to be less than, say, 3.5% of all positive predictions. This is where **precision** comes in:$$[\text{precision}] = \frac{[\text{\# positive data points with positive predictions}]}{[\text{\# all data points with positive predictions}]} = \frac{[\text{\# true positives}]}{[\text{\# true positives}] + [\text{\# false positives}]}$$ So to keep the percentage of false positives below 3.5% of positive predictions, we must raise the precision to 96.5% or higher. **First**, let us compute the precision of the logistic regression classifier on the **test_data**.
###Code
precision = model.evaluate(test_data, metric='precision')['precision']
print "Precision on test data: %s" % precision
###Output
Precision on test data: 0.948706099815
###Markdown
**Quiz Question**: Out of all reviews in the **test set** that are predicted to be positive, what fraction of them are **false positives**? (Round to the second decimal place e.g. 0.25)
###Code
model.evaluate(test_data)
print (26689 / precision - 26689)/(26689+1443)
###Output
0.0512939001848
###Markdown
**Quiz Question:** Based on what we learned in lecture, if we wanted to reduce this fraction of false positives to be below 3.5%, we would: (see the quiz) A complementary metric is **recall**, which measures the ratio between the number of true positives and that of (ground-truth) positive reviews:$$[\text{recall}] = \frac{[\text{\# positive data points with positive predictions}]}{[\text{\# all positive data points}]} = \frac{[\text{\# true positives}]}{[\text{\# true positives}] + [\text{\# false negatives}]}$$Let us compute the recall on the **test_data**.
###Code
recall = model.evaluate(test_data, metric='recall')['recall']
print "Recall on test data: %s" % recall
###Output
Recall on test data: 0.949955508098
###Markdown
**Quiz Question**: What fraction of the positive reviews in the **test_set** were correctly predicted as positive by the classifier?**Quiz Question**: What is the recall value for a classifier that predicts **+1** for all data points in the **test_data**?
###Code
model.evaluate(test_data)
26689 / len(test_data)
model.evaluate(test_data)
(26689)/(26689+1406)
###Output
_____no_output_____
###Markdown
Precision-recall tradeoffIn this part, we will explore the trade-off between precision and recall discussed in the lecture. We first examine what happens when we use a different threshold value for making class predictions. We then explore a range of threshold values and plot the associated precision-recall curve. Varying the thresholdFalse positives are costly in our example, so we may want to be more conservative about making positive predictions. To achieve this, instead of thresholding class probabilities at 0.5, we can choose a higher threshold. Write a function called `apply_threshold` that accepts two things* `probabilities` (an SArray of probability values)* `threshold` (a float between 0 and 1).The function should return an array, where each element is set to +1 or -1 depending whether the corresponding probability exceeds `threshold`.
###Code
def apply_threshold(probabilities, threshold):
### YOUR CODE GOES HERE
# +1 if >= threshold and -1 otherwise.
threshold_arr = []
for probabilitie in probabilities:
if probabilitie >= threshold:
threshold_arr.append(+1)
else:
threshold_arr.append(-1)
return graphlab.SArray(threshold_arr)
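# An equivalent vectorized version (sketch) would use SArray.apply instead of the loop:
# return probabilities.apply(lambda p: +1 if p >= threshold else -1)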
###Output
_____no_output_____
###Markdown
Run prediction with `output_type='probability'` to get the list of probability values. Then use thresholds set at 0.5 (default) and 0.9 to make predictions from these probability values.
###Code
probabilities = model.predict(test_data, output_type='probability')
predictions_with_default_threshold = apply_threshold(probabilities, 0.5)
print predictions_with_default_threshold
predictions_with_high_threshold = apply_threshold(probabilities, 0.9)
print "Number of positive predicted reviews (threshold = 0.5): %s" % (predictions_with_default_threshold == 1).sum()
print "Number of positive predicted reviews (threshold = 0.9): %s" % (predictions_with_high_threshold == 1).sum()
###Output
Number of positive predicted reviews (threshold = 0.9): 25630
###Markdown
**Quiz Question**: What happens to the number of positive predicted reviews as the threshold increased from 0.5 to 0.9? Exploring the associated precision and recall as the threshold varies By changing the probability threshold, it is possible to influence precision and recall. We can explore this as follows:
###Code
# Threshold = 0.5
precision_with_default_threshold = graphlab.evaluation.precision(test_data['sentiment'],
predictions_with_default_threshold)
recall_with_default_threshold = graphlab.evaluation.recall(test_data['sentiment'],
predictions_with_default_threshold)
# Threshold = 0.9
precision_with_high_threshold = graphlab.evaluation.precision(test_data['sentiment'],
predictions_with_high_threshold)
recall_with_high_threshold = graphlab.evaluation.recall(test_data['sentiment'],
predictions_with_high_threshold)
print "Precision (threshold = 0.5): %s" % precision_with_default_threshold
print "Recall (threshold = 0.5) : %s" % recall_with_default_threshold
print "Precision (threshold = 0.9): %s" % precision_with_high_threshold
print "Recall (threshold = 0.9) : %s" % recall_with_high_threshold
###Output
Precision (threshold = 0.9): 0.969527896996
Recall (threshold = 0.9) : 0.884463427656
###Markdown
**Quiz Question (variant 1)**: Does the **precision** increase with a higher threshold?**Quiz Question (variant 2)**: Does the **recall** increase with a higher threshold? Precision-recall curveNow, we will explore various different values of thresholds, compute the precision and recall scores, and then plot the precision-recall curve.
###Code
threshold_values = np.linspace(0.5, 1, num=100)
print threshold_values
###Output
[ 0.5 0.50505051 0.51010101 0.51515152 0.52020202 0.52525253
0.53030303 0.53535354 0.54040404 0.54545455 0.55050505 0.55555556
0.56060606 0.56565657 0.57070707 0.57575758 0.58080808 0.58585859
0.59090909 0.5959596 0.6010101 0.60606061 0.61111111 0.61616162
0.62121212 0.62626263 0.63131313 0.63636364 0.64141414 0.64646465
0.65151515 0.65656566 0.66161616 0.66666667 0.67171717 0.67676768
0.68181818 0.68686869 0.69191919 0.6969697 0.7020202 0.70707071
0.71212121 0.71717172 0.72222222 0.72727273 0.73232323 0.73737374
0.74242424 0.74747475 0.75252525 0.75757576 0.76262626 0.76767677
0.77272727 0.77777778 0.78282828 0.78787879 0.79292929 0.7979798
0.8030303 0.80808081 0.81313131 0.81818182 0.82323232 0.82828283
0.83333333 0.83838384 0.84343434 0.84848485 0.85353535 0.85858586
0.86363636 0.86868687 0.87373737 0.87878788 0.88383838 0.88888889
0.89393939 0.8989899 0.9040404 0.90909091 0.91414141 0.91919192
0.92424242 0.92929293 0.93434343 0.93939394 0.94444444 0.94949495
0.95454545 0.95959596 0.96464646 0.96969697 0.97474747 0.97979798
0.98484848 0.98989899 0.99494949 1. ]
###Markdown
For each of the values of threshold, we compute the precision and recall scores.
###Code
precision_all = []
recall_all = []
probabilities = model.predict(test_data, output_type='probability')
for threshold in threshold_values:
predictions = apply_threshold(probabilities, threshold)
precision = graphlab.evaluation.precision(test_data['sentiment'], predictions)
recall = graphlab.evaluation.recall(test_data['sentiment'], predictions)
precision_all.append(precision)
recall_all.append(recall)
print (threshold,precision)
###Output
(0.5, 0.9487060998151571)
(0.50505050505050508, 0.9490590871900679)
(0.51010101010101006, 0.949288256227758)
(0.51515151515151514, 0.9495068190720365)
(0.52020202020202022, 0.9496241405108838)
(0.5252525252525253, 0.9498057110263449)
(0.53030303030303028, 0.9502033245344939)
(0.53535353535353536, 0.9504176483186978)
(0.54040404040404044, 0.9506966773847803)
(0.54545454545454541, 0.9508776947552823)
(0.5505050505050505, 0.9510624597553123)
(0.55555555555555558, 0.9514246849942726)
(0.56060606060606055, 0.9515349070458861)
(0.56565656565656564, 0.9517614593412894)
(0.57070707070707072, 0.952177656597546)
(0.5757575757575758, 0.9525416427340608)
(0.58080808080808077, 0.9528257823446987)
(0.58585858585858586, 0.9529509021637553)
(0.59090909090909094, 0.9530334088538858)
(0.59595959595959602, 0.9530817112222502)
(0.60101010101010099, 0.9532313231323132)
(0.60606060606060608, 0.9535252368771842)
(0.61111111111111116, 0.9536803402782784)
(0.61616161616161613, 0.9536913477837486)
(0.62121212121212122, 0.9540122008446739)
(0.6262626262626263, 0.9541595925297114)
(0.63131313131313127, 0.9544813623052171)
(0.63636363636363635, 0.954630969609262)
(0.64141414141414144, 0.954956912158737)
(0.64646464646464641, 0.9552173913043478)
(0.65151515151515149, 0.9554257942840563)
(0.65656565656565657, 0.9556031509783279)
(0.66161616161616166, 0.9557162059069277)
(0.66666666666666674, 0.955933682373473)
(0.67171717171717171, 0.9560075685903501)
(0.6767676767676768, 0.9561623884944475)
(0.68181818181818188, 0.9564536112528241)
(0.68686868686868685, 0.9566708002042454)
(0.69191919191919193, 0.9569519497590185)
(0.69696969696969702, 0.9572002923976608)
(0.70202020202020199, 0.9573090430201932)
(0.70707070707070707, 0.9575582246960598)
(0.71212121212121215, 0.9577408004691395)
(0.71717171717171713, 0.9581728123280132)
(0.72222222222222221, 0.9584343100536095)
(0.72727272727272729, 0.9587621287856513)
(0.73232323232323238, 0.9591521307131817)
(0.73737373737373746, 0.9592663523865645)
(0.74242424242424243, 0.959585530439913)
(0.74747474747474751, 0.9599069664414663)
(0.7525252525252526, 0.9599571497174098)
(0.75757575757575757, 0.9601701183431952)
(0.76262626262626265, 0.9603465511496168)
(0.76767676767676774, 0.9607160063743839)
(0.77272727272727271, 0.9608708552777984)
(0.77777777777777779, 0.9610871825337888)
(0.78282828282828287, 0.9613668476240054)
(0.78787878787878785, 0.9622026596740962)
(0.79292929292929293, 0.9624156039009752)
(0.79797979797979801, 0.9624873267995945)
(0.80303030303030298, 0.9627275462614714)
(0.80808080808080818, 0.9632042783971075)
(0.81313131313131315, 0.9634923628135018)
(0.81818181818181823, 0.963922783423369)
(0.82323232323232332, 0.9642181708147364)
(0.82828282828282829, 0.9645819917421115)
(0.83333333333333337, 0.9649455593914792)
(0.83838383838383845, 0.9653115501519757)
(0.84343434343434343, 0.9656629487228292)
(0.84848484848484851, 0.9659827625657844)
(0.85353535353535359, 0.9663814180929096)
(0.85858585858585856, 0.9667802059014887)
(0.86363636363636365, 0.9669963201471942)
(0.86868686868686873, 0.9673762680602521)
(0.8737373737373737, 0.9676599676599676)
(0.8787878787878789, 0.9679783950617284)
(0.88383838383838387, 0.968586792525823)
(0.88888888888888884, 0.9689609684177853)
(0.89393939393939403, 0.9693139390168015)
(0.89898989898989901, 0.9694689230289324)
(0.90404040404040409, 0.969731336279379)
(0.90909090909090917, 0.9699262860727729)
(0.91414141414141414, 0.9702966401762531)
(0.91919191919191923, 0.9708135860979463)
(0.92424242424242431, 0.9714047751249306)
(0.92929292929292928, 0.9720318725099601)
(0.93434343434343436, 0.9728831210446207)
(0.93939393939393945, 0.9734256724110163)
(0.94444444444444442, 0.9740412262584538)
(0.94949494949494961, 0.9744635718365016)
(0.95454545454545459, 0.9747663936113283)
(0.95959595959595956, 0.9754932502596054)
(0.96464646464646475, 0.9761974728181017)
(0.96969696969696972, 0.9768717316440627)
(0.97474747474747481, 0.9773373545885394)
(0.97979797979797989, 0.9785300316122234)
(0.98484848484848486, 0.9801318522532285)
(0.98989898989898994, 0.9813079711850841)
(0.99494949494949503, 0.9842386281959133)
(1.0, 0.9916666666666667)
###Markdown
Now, let's plot the precision-recall curve to visualize the precision-recall tradeoff as we vary the threshold.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
def plot_pr_curve(precision, recall, title):
plt.rcParams['figure.figsize'] = 7, 5
plt.locator_params(axis = 'x', nbins = 5)
plt.plot(precision, recall, 'b-', linewidth=4.0, color = '#B0017F')
plt.title(title)
plt.xlabel('Precision')
plt.ylabel('Recall')
plt.rcParams.update({'font.size': 16})
plot_pr_curve(precision_all, recall_all, 'Precision recall curve (all)')
###Output
_____no_output_____
###Markdown
**Quiz Question**: Among all the threshold values tried, what is the **smallest** threshold value that achieves a precision of 96.5% or better? Round your answer to 3 decimal places.
###Code
0.98
###Output
_____no_output_____
###Markdown
**Quiz Question**: Using `threshold` = 0.98, how many **false negatives** do we get on the **test_data**? (**Hint**: You may use the `graphlab.evaluation.confusion_matrix` function implemented in GraphLab Create.)
###Code
# Apply threshold = 0.98 to the test-set probabilities and read off the false negatives
predictions_098 = apply_threshold(probabilities, 0.98)
graphlab.evaluation.confusion_matrix(test_data['sentiment'], predictions_098)
###Output
_____no_output_____
###Markdown
This is the number of false negatives (i.e the number of reviews to look at when not needed) that we have to deal with using this classifier. Evaluating specific search terms So far, we looked at the number of false positives for the **entire test set**. In this section, let's select reviews using a specific search term and optimize the precision on these reviews only. After all, a manufacturer would be interested in tuning the false positive rate just for their products (the reviews they want to read) rather than that of the entire set of products on Amazon. Precision-Recall on all baby related itemsFrom the **test set**, select all the reviews for all products with the word 'baby' in them.
###Code
baby_reviews = test_data[test_data['name'].apply(lambda x: 'baby' in x.lower())]
###Output
_____no_output_____
###Markdown
Now, let's predict the probability of classifying these reviews as positive:
###Code
probabilities = model.predict(baby_reviews, output_type='probability')
###Output
_____no_output_____
###Markdown
Let's plot the precision-recall curve for the **baby_reviews** dataset.**First**, let's consider the following `threshold_values` ranging from 0.5 to 1:
###Code
threshold_values = np.linspace(0.5, 1, num=100)
###Output
_____no_output_____
###Markdown
**Second**, as we did above, let's compute precision and recall for each value in `threshold_values` on the **baby_reviews** dataset. Complete the code block below.
###Code
precision_all = []
recall_all = []
for threshold in threshold_values:
# Make predictions. Use the `apply_threshold` function
## YOUR CODE HERE
predictions = apply_threshold(probabilities, threshold)
# Calculate the precision.
# YOUR CODE HERE
precision = graphlab.evaluation.precision(baby_reviews['sentiment'], predictions)
# YOUR CODE HERE
recall = graphlab.evaluation.recall(baby_reviews['sentiment'], predictions)
# Append the precision and recall scores.
precision_all.append(precision)
recall_all.append(recall)
print (threshold,precision)
###Output
(0.5, 0.9476563924858654)
(0.50505050505050508, 0.948165723672203)
(0.51010101010101006, 0.9483199415631848)
(0.51515151515151514, 0.9484743285218344)
(0.52020202020202022, 0.9486382745384756)
(0.5252525252525253, 0.9487929773226043)
(0.53030303030303028, 0.9494875549048316)
(0.53535353535353536, 0.9494598058963559)
(0.54040404040404044, 0.9499816782704287)
(0.54545454545454541, 0.9499541704857929)
(0.5505050505050505, 0.9501192004401247)
(0.55555555555555558, 0.9508166636080014)
(0.56060606060606055, 0.9508076358296622)
(0.56565656565656564, 0.9509641873278237)
(0.57070707070707072, 0.9517939282428702)
(0.5757575757575758, 0.9519513991163475)
(0.58080808080808077, 0.952082565425728)
(0.58585858585858586, 0.9524073049252906)
(0.59090909090909094, 0.9523633677991138)
(0.59595959595959602, 0.9523457702253417)
(0.60101010101010099, 0.9523369665619804)
(0.60606060606060608, 0.9528563505268997)
(0.61111111111111116, 0.9528214616096207)
(0.61616161616161613, 0.9527952610144391)
(0.62121212121212122, 0.9529019098831819)
(0.6262626262626263, 0.9530350844625951)
(0.63131313131313127, 0.9532120311919792)
(0.63636363636363635, 0.9533543950938487)
(0.64141414141414144, 0.9536830357142857)
(0.64646464646464641, 0.9540208488458675)
(0.65151515151515149, 0.9541728763040238)
(0.65656565656565657, 0.9541643376187814)
(0.66161616161616166, 0.9542910447761194)
(0.66666666666666674, 0.954248366013072)
(0.67171717171717171, 0.9542141655765277)
(0.6767676767676768, 0.954366934729755)
(0.68181818181818188, 0.9547155688622755)
(0.68686868686868685, 0.9548520044960659)
(0.69191919191919193, 0.9549971873242078)
(0.69696969696969702, 0.9553554680172576)
(0.70202020202020199, 0.9554929577464789)
(0.70707070707070707, 0.9556224144415194)
(0.71212121212121215, 0.9557688688123471)
(0.71717171717171713, 0.9562759140595553)
(0.72222222222222221, 0.9562594268476622)
(0.72727272727272729, 0.9565791957711912)
(0.73232323232323238, 0.9569242395616853)
(0.73737373737373746, 0.9568835098335855)
(0.74242424242424243, 0.9571888615268043)
(0.74747474747474751, 0.9573216995447648)
(0.7525252525252526, 0.9574710461363205)
(0.75757575757575757, 0.9575965012359764)
(0.76262626262626265, 0.9579127785183774)
(0.76767676767676774, 0.9582459485224023)
(0.77272727272727271, 0.9587786259541985)
(0.77777777777777779, 0.9591056755207338)
(0.78282828282828287, 0.9595862861520782)
(0.78787878787878785, 0.9606557377049181)
(0.79292929292929293, 0.9609812632798919)
(0.79797979797979801, 0.9611218568665377)
(0.80303030303030298, 0.9610842207163601)
(0.80808080808080818, 0.9614191547111284)
(0.81313131313131315, 0.9615459312487862)
(0.81818181818181823, 0.9620622568093385)
(0.82323232323232332, 0.9623781676413256)
(0.82828282828282829, 0.9627244340359095)
(0.83333333333333337, 0.9630642954856361)
(0.83838383838383845, 0.9631949882537196)
(0.84343434343434343, 0.9637112593173793)
(0.84848484848484851, 0.9641943734015346)
(0.85353535353535359, 0.9647221127315727)
(0.85858585858585856, 0.9648637978681406)
(0.86363636363636365, 0.9650197628458498)
(0.86868686868686873, 0.9655104063429137)
(0.8737373737373737, 0.9660377358490566)
(0.8787878787878789, 0.9661556838542703)
(0.88383838383838387, 0.9668397922493008)
(0.88888888888888884, 0.9679358717434869)
(0.89393939393939403, 0.9680722891566265)
(0.89898989898989901, 0.968014484007242)
(0.90404040404040409, 0.9679112008072653)
(0.90909090909090917, 0.9678593086719224)
(0.91414141414141414, 0.9681541582150102)
(0.91919191919191923, 0.9684703010577705)
(0.92424242424242431, 0.9691395871653382)
(0.92929292929292928, 0.9697717458359038)
(0.93434343434343436, 0.9710504549214226)
(0.93939393939393945, 0.9712260216847373)
(0.94444444444444442, 0.9714765100671141)
(0.94949494949494961, 0.9722457627118644)
(0.95454545454545459, 0.9724182168056447)
(0.95959595959595956, 0.9734111543450065)
(0.96464646464646475, 0.9742020113686052)
(0.96969696969696972, 0.9750055297500553)
(0.97474747474747481, 0.9751009421265141)
(0.97979797979797989, 0.9766590389016019)
(0.98484848484848486, 0.9790489642184558)
(0.98989898989898994, 0.9801031687546058)
(0.99494949494949503, 0.9844253490870032)
(1.0, 1.0)
###Markdown
**Quiz Question**: Among all the threshold values tried, what is the **smallest** threshold value that achieves a precision of 96.5% or better for the reviews of data in **baby_reviews**? Round your answer to 3 decimal places.
###Code
sorted(precision_all)  # inspect a sorted copy; keep precision_all itself in threshold order for the plot below
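# One way to read off the answer programmatically (sketch): the first threshold, in
# increasing order, whose precision reaches 0.965.
[t for t, p in zip(threshold_values, precision_all) if p >= 0.965][:1]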
###Output
_____no_output_____
###Markdown
**Quiz Question:** Is this threshold value smaller or larger than the threshold used for the entire dataset to achieve the same specified precision of 96.5%?**Finally**, let's plot the precision recall curve.
###Code
plot_pr_curve(precision_all, recall_all, "Precision-Recall (Baby)")
###Output
_____no_output_____ |
.ipynb_checkpoints/Clase17_CAPMII-Copy1-checkpoint.ipynb | ###Markdown
Estimating the $\beta$ of assets Conclusions from the reading. In the previous class we learned - what the CAPM is; - what assumptions the CAPM is founded on;- how the CAPM formula is derived; and- how to obtain the $\beta$ of a portfolio from the $\beta$ of the individual assets.In today's class we will study a way to estimate the $\beta$ of individual assets.**Objectives:**- Revisit systematic and unsystematic risk.- Study a method to estimate the $\beta$ of assets.*References:*- Lecture notes from the course "Portfolio Selection and Risk Management", Rice University, available on Coursera.- [Notes from the course "Financial Engineering", Columbia University](http://www.columbia.edu/~ks20/FE-Notes/FE-Notes-Sigman.html)___ 1. Systematic and unsystematic risk. Recall the CAPM formula:$$E[r_i]-r_f=\beta_i(E[r_M]-r_f),$$where $\beta_i=\frac{\sigma_{M,i}}{\sigma_M^2}$ and $\sigma_{M,i}$ is the covariance of the market portfolio with the individual asset $i$. All of the above are deterministic variables.- What happens if we use the CAPM as a model of returns? That is,$$r_i=r_f+\beta_i(r_M-r_f)+\epsilon_i,$$where $\epsilon_i$ is an error term. Solving for $\epsilon_i$, we have that: - $E[\epsilon_i]=0$, and- $cov(\epsilon_i,r_M)=0$. See the board. Then, the variance of asset $i$ is:$$\sigma_i^2=\beta_i^2\sigma_M^2+var(\epsilon_i),$$where the first term corresponds to systematic (market) risk and the second to idiosyncratic risk.___ 2. Estimating $\beta$ for an asset. - In the real market, the number of assets is ENORMOUS, and trying to build the market portfolio would be a grand but rather unrealistic task for a financial analyst. - Therefore, market indices have been created to try to approximate the market portfolio.- Such an index is a portfolio smaller than the market portfolio, built from what are considered the most dominant assets, which capture the essence of the market portfolio. - The best-known market index is the Standard & Poor's 500-stock index (S&P), composed of 500 assets.- A $\beta$ for a given asset can be estimated by using the S&P in place of M and using historical data for both returns (the asset's and the S&P500's).- For example, consider an asset $i$ whose $\beta_i$ we want to estimate.- This estimate is built from sample means, variances, and covariances as follows: - We choose $N$ historical returns, such as those reported monthly over the last three years. - For $k=1,2,\dots,N$, $r_{ik}$ and $r_{S\&Pk}$ denote the $k$-th sample value of the returns.Then$$\hat{E[r_i]}=\frac{1}{N}\sum_{k=1}^{N}r_{ik}, \text{ and}$$$$\hat{E[r_{S\&P}]}=\frac{1}{N}\sum_{k=1}^{N}r_{S\&Pk}.$$ Moreover, the variance $\sigma_{S\&P}^2$ is estimated as$$\hat{\sigma_{S\&P}^2}=\frac{1}{N-1}\sum_{k=1}^{N}(r_{S\&Pk}-\hat{E[r_{S\&P}]})^2,$$and the covariance $\sigma_{S\&P,i}$ as$$\hat{\sigma_{S\&P,i}}=\frac{1}{N-1}\sum_{k=1}^{N}(r_{S\&Pk}-\hat{E[r_{S\&P}]})(r_{ik}-\hat{E[r_i]}).$$ Finally, $$\hat{\beta_i}=\frac{\hat{\sigma_{S\&P,i}}}{\hat{\sigma_{S\&P}^2}}.$$ Example...Go to Yahoo Finance and look up the information for MSFT, AAPL, GCARSOA1.MX, and ^GSPC.
###Code
# Import packages
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
pd.core.common.is_list_like = pd.api.types.is_list_like
import pandas_datareader.data as web
# Function to download adjusted closing prices of several assets at once:
def get_closes(tickers, start_date=None, end_date=None, freq=None):
# Default start date (start_date='2010-01-01') and default end date (end_date=today)
# Default sampling frequency (freq='d')
# Import the required packages
import pandas as pd
pd.core.common.is_list_like = pd.api.types.is_list_like
import pandas_datareader.data as web
# Create an empty DataFrame of prices, indexed by the dates
closes = pd.DataFrame(columns = tickers, index=web.YahooDailyReader(symbols=tickers[0], start=start_date, end=end_date, interval=freq).read().index)
# Add each of the price series with YahooDailyReader
for ticker in tickers:
df = web.YahooDailyReader(symbols=ticker, start=start_date, end=end_date, interval=freq).read()
closes[ticker]=df['Adj Close']
closes.index_name = 'Date'
closes = closes.sort_index()
return closes
# Import data for AAPL, MSFT, GCARSOA1.MX, and ^GSPC
# Monthly prices
# Monthly returns
# Covariance matrix
# Beta of Microsoft
# Beta of Apple
# Beta of Grupo Carso
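# --- A possible completion of the steps listed above (sketch, not the official class solution).
# --- The start date and the monthly frequency 'm' are assumptions; get_closes is the helper defined above.
names = ['AAPL', 'MSFT', 'GCARSOA1.MX', '^GSPC']
closes_m = get_closes(tickers=names, start_date='2016-01-01', end_date=None, freq='m')
# Monthly returns from the monthly closing prices
ret = closes_m.pct_change().dropna()
# Sample covariance matrix of the returns
cov = ret.cov()
var_sp = cov.loc['^GSPC', '^GSPC']
# Beta of each asset: cov(asset, S&P500) / var(S&P500)
beta_msft = cov.loc['MSFT', '^GSPC'] / var_sp
beta_aapl = cov.loc['AAPL', '^GSPC'] / var_sp
beta_carso = cov.loc['GCARSOA1.MX', '^GSPC'] / var_sp
beta_msft, beta_aapl, beta_carso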
###Output
_____no_output_____ |
Code/early_stop_book.ipynb | ###Markdown
Early Stop and Empty Bins This notebook shows how to create the early stop and empty bin plots Load Data
###Code
set_name = "tishby"
#set_name = "mnist"
nrs = [3,8,1]
samples = 1000
seed(1337)
set_random_seed(1337)
X_train, X_test, y_train, y_test = data_selection.select_data(set_name, shuffle=True,
samples_per_class = samples,
list_of_nrs=nrs
)
###Output
Loading tishby Data...
###Markdown
Create Classes, Define model and train it Set parameters like batch size, learning rate, ...
###Code
# object to record parameters
outputs = classes.Outputs()
nr_of_tot_epochs = 8000
nr_of_epochs = 8000
# TanH 1752, 702
# ReLU 911, 151
batch = 256
learning_r = [0.0004]
###Output
_____no_output_____
###Markdown
Set flags
###Code
# record all epochs or reduced amount
record_all_flag = False
# record intermediate test scores
rec_test_flag = True
# save data and plots and show plots
save_data_flag = False
save_MI_and_plots = True
show_flag=True
stop_early = True
perf_stop = False
# model 1 = model with leading ReLU
# model 2 = model with leading TanH
# model 3 = full ReLU
# model 4 = full TanH
# ...
seed(1337)
set_random_seed(1337)
model_nr = 3
model, architecture = model_selection.select_model(model_nr, nr_of_epochs,
set_name, X_train.shape, y_train)
###Output
amount of classes 2
Input shape: (3276, 12) length: 2
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
###Markdown
define callback function
###Code
# call callback functions
output_recording = LambdaCallback(on_epoch_end=lambda epoch,
logs: Callbacks.record_activations(outputs, model, epoch,
X_train, X_test, y_test, batch,
record_all_flag, rec_test_flag,
specific_records=[nr_of_epochs-1]))
early_stopp = EarlyStopping(monitor='val_loss', patience=0, verbose=0, mode='auto')
###Output
_____no_output_____
###Markdown
The callback returns a matrix with n arrays, where n is the number of features; each array has m elements, where m is the number of neurons. Train model
###Code
adam = optimizers.Adam(lr=learning_r)
model.compile(loss="categorical_crossentropy",
optimizer=adam,
metrics=["accuracy"])
seed(1337)
set_random_seed(1337)
if stop_early == True:
history = model.fit(X_train, y_train, epochs=nr_of_epochs, batch_size=batch,
validation_split=0.2, callbacks = [output_recording, early_stopp])
else:
history = model.fit(X_train, y_train, epochs=nr_of_epochs, batch_size=batch,
validation_split=0.2, callbacks = [output_recording])
###Output
Train on 2620 samples, validate on 656 samples
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Epoch 1/8000
2620/2620 [==============================] - 0s 123us/sample - loss: 0.6841 - acc: 0.5275 - val_loss: 0.6830 - val_acc: 0.5381
Epoch 2/8000
2620/2620 [==============================] - 0s 40us/sample - loss: 0.6825 - acc: 0.5286 - val_loss: 0.6814 - val_acc: 0.5427
Epoch 3/8000
2620/2620 [==============================] - 0s 37us/sample - loss: 0.6809 - acc: 0.5294 - val_loss: 0.6797 - val_acc: 0.5427
Epoch 4/8000
2620/2620 [==============================] - 0s 42us/sample - loss: 0.6791 - acc: 0.5309 - val_loss: 0.6779 - val_acc: 0.5473
Epoch 5/8000
2620/2620 [==============================] - 0s 35us/sample - loss: 0.6771 - acc: 0.5366 - val_loss: 0.6758 - val_acc: 0.5473
Epoch 6/8000
2620/2620 [==============================] - 0s 33us/sample - loss: 0.6749 - acc: 0.5408 - val_loss: 0.6734 - val_acc: 0.5534
Epoch 7/8000
2620/2620 [==============================] - 0s 38us/sample - loss: 0.6724 - acc: 0.5462 - val_loss: 0.6709 - val_acc: 0.5610
Epoch 8/8000
2620/2620 [==============================] - 0s 34us/sample - loss: 0.6697 - acc: 0.5485 - val_loss: 0.6681 - val_acc: 0.5625
Epoch 9/8000
2620/2620 [==============================] - 0s 36us/sample - loss: 0.6668 - acc: 0.5508 - val_loss: 0.6650 - val_acc: 0.5716
Epoch 10/8000
2620/2620 [==============================] - 0s 35us/sample - loss: 0.6636 - acc: 0.5569 - val_loss: 0.6617 - val_acc: 0.5747
Epoch 11/8000
2620/2620 [==============================] - 0s 37us/sample - loss: 0.6604 - acc: 0.5588 - val_loss: 0.6581 - val_acc: 0.5808
Epoch 12/8000
2620/2620 [==============================] - 0s 37us/sample - loss: 0.6568 - acc: 0.5710 - val_loss: 0.6545 - val_acc: 0.5823
Epoch 13/8000
2620/2620 [==============================] - 0s 33us/sample - loss: 0.6529 - acc: 0.5767 - val_loss: 0.6503 - val_acc: 0.5915
Epoch 14/8000
2620/2620 [==============================] - 0s 39us/sample - loss: 0.6486 - acc: 0.5840 - val_loss: 0.6457 - val_acc: 0.5960
Epoch 15/8000
2620/2620 [==============================] - 0s 43us/sample - loss: 0.6437 - acc: 0.5950 - val_loss: 0.6406 - val_acc: 0.6052
Epoch 16/8000
2620/2620 [==============================] - 0s 39us/sample - loss: 0.6384 - acc: 0.6088 - val_loss: 0.6349 - val_acc: 0.6189
Epoch 17/8000
2620/2620 [==============================] - 0s 37us/sample - loss: 0.6324 - acc: 0.6263 - val_loss: 0.6290 - val_acc: 0.6448
Epoch 18/8000
2620/2620 [==============================] - 0s 40us/sample - loss: 0.6260 - acc: 0.6469 - val_loss: 0.6225 - val_acc: 0.6585
Epoch 19/8000
2620/2620 [==============================] - 0s 35us/sample - loss: 0.6189 - acc: 0.6630 - val_loss: 0.6152 - val_acc: 0.6707
Epoch 20/8000
2620/2620 [==============================] - 0s 37us/sample - loss: 0.6114 - acc: 0.6695 - val_loss: 0.6076 - val_acc: 0.6768
Epoch 21/8000
2620/2620 [==============================] - 0s 38us/sample - loss: 0.6036 - acc: 0.6794 - val_loss: 0.5998 - val_acc: 0.6845
Epoch 22/8000
2620/2620 [==============================] - 0s 40us/sample - loss: 0.5959 - acc: 0.6950 - val_loss: 0.5920 - val_acc: 0.6966
Epoch 23/8000
2620/2620 [==============================] - 0s 39us/sample - loss: 0.5880 - acc: 0.7050 - val_loss: 0.5835 - val_acc: 0.7073
Epoch 24/8000
2620/2620 [==============================] - 0s 40us/sample - loss: 0.5799 - acc: 0.7107 - val_loss: 0.5750 - val_acc: 0.7134
Epoch 25/8000
2620/2620 [==============================] - 0s 41us/sample - loss: 0.5718 - acc: 0.7115 - val_loss: 0.5665 - val_acc: 0.7119
Epoch 26/8000
2620/2620 [==============================] - 0s 43us/sample - loss: 0.5632 - acc: 0.7248 - val_loss: 0.5580 - val_acc: 0.7393
Epoch 27/8000
2620/2620 [==============================] - 0s 36us/sample - loss: 0.5547 - acc: 0.7427 - val_loss: 0.5493 - val_acc: 0.7576
Epoch 28/8000
2620/2620 [==============================] - 0s 36us/sample - loss: 0.5461 - acc: 0.7450 - val_loss: 0.5398 - val_acc: 0.7515
Epoch 29/8000
2620/2620 [==============================] - 0s 38us/sample - loss: 0.5369 - acc: 0.7454 - val_loss: 0.5311 - val_acc: 0.7637
Epoch 30/8000
2620/2620 [==============================] - 0s 37us/sample - loss: 0.5278 - acc: 0.7630 - val_loss: 0.5225 - val_acc: 0.7652
Epoch 31/8000
2620/2620 [==============================] - 0s 40us/sample - loss: 0.5185 - acc: 0.7706 - val_loss: 0.5131 - val_acc: 0.7698
Epoch 32/8000
2620/2620 [==============================] - 0s 10us/sample - loss: 0.5093 - acc: 0.7782 - val_loss: 0.5040 - val_acc: 0.7820
Epoch 33/8000
2620/2620 [==============================] - 0s 37us/sample - loss: 0.5001 - acc: 0.7924 - val_loss: 0.4950 - val_acc: 0.7957
Epoch 34/8000
2620/2620 [==============================] - 0s 10us/sample - loss: 0.4905 - acc: 0.8008 - val_loss: 0.4853 - val_acc: 0.8018
Epoch 35/8000
2620/2620 [==============================] - 0s 37us/sample - loss: 0.4812 - acc: 0.8099 - val_loss: 0.4765 - val_acc: 0.8110
Epoch 36/8000
2620/2620 [==============================] - 0s 13us/sample - loss: 0.4721 - acc: 0.8160 - val_loss: 0.4670 - val_acc: 0.8140
Epoch 37/8000
2620/2620 [==============================] - 0s 38us/sample - loss: 0.4631 - acc: 0.8225 - val_loss: 0.4580 - val_acc: 0.8247
Epoch 38/8000
2620/2620 [==============================] - 0s 11us/sample - loss: 0.4542 - acc: 0.8286 - val_loss: 0.4490 - val_acc: 0.8308
Epoch 39/8000
2620/2620 [==============================] - 0s 39us/sample - loss: 0.4456 - acc: 0.8359 - val_loss: 0.4405 - val_acc: 0.8384
Epoch 40/8000
2620/2620 [==============================] - 0s 12us/sample - loss: 0.4369 - acc: 0.8427 - val_loss: 0.4327 - val_acc: 0.8399
Epoch 41/8000
2620/2620 [==============================] - 0s 37us/sample - loss: 0.4288 - acc: 0.8553 - val_loss: 0.4249 - val_acc: 0.8460
Epoch 42/8000
2620/2620 [==============================] - 0s 11us/sample - loss: 0.4210 - acc: 0.8565 - val_loss: 0.4163 - val_acc: 0.8399
Epoch 43/8000
2620/2620 [==============================] - 0s 37us/sample - loss: 0.4133 - acc: 0.8626 - val_loss: 0.4098 - val_acc: 0.8506
Epoch 44/8000
2620/2620 [==============================] - 0s 13us/sample - loss: 0.4061 - acc: 0.8672 - val_loss: 0.4034 - val_acc: 0.8567
Epoch 45/8000
2620/2620 [==============================] - 0s 37us/sample - loss: 0.3992 - acc: 0.8702 - val_loss: 0.3979 - val_acc: 0.8613
Epoch 46/8000
2620/2620 [==============================] - 0s 10us/sample - loss: 0.3922 - acc: 0.8718 - val_loss: 0.3903 - val_acc: 0.8643
Epoch 47/8000
2620/2620 [==============================] - 0s 36us/sample - loss: 0.3858 - acc: 0.8756 - val_loss: 0.3844 - val_acc: 0.8628
Epoch 48/8000
2620/2620 [==============================] - 0s 10us/sample - loss: 0.3796 - acc: 0.8733 - val_loss: 0.3792 - val_acc: 0.8643
Epoch 49/8000
2620/2620 [==============================] - 0s 37us/sample - loss: 0.3738 - acc: 0.8740 - val_loss: 0.3730 - val_acc: 0.8643
Epoch 50/8000
2620/2620 [==============================] - 0s 10us/sample - loss: 0.3677 - acc: 0.8767 - val_loss: 0.3697 - val_acc: 0.8674
Epoch 51/8000
2620/2620 [==============================] - 0s 37us/sample - loss: 0.3625 - acc: 0.8779 - val_loss: 0.3627 - val_acc: 0.8704
Epoch 52/8000
2620/2620 [==============================] - 0s 11us/sample - loss: 0.3570 - acc: 0.8802 - val_loss: 0.3595 - val_acc: 0.8720
Epoch 53/8000
2620/2620 [==============================] - 0s 37us/sample - loss: 0.3519 - acc: 0.8794 - val_loss: 0.3532 - val_acc: 0.8735
Epoch 54/8000
2620/2620 [==============================] - 0s 12us/sample - loss: 0.3465 - acc: 0.8798 - val_loss: 0.3495 - val_acc: 0.8720
Epoch 55/8000
2620/2620 [==============================] - 0s 35us/sample - loss: 0.3417 - acc: 0.8817 - val_loss: 0.3441 - val_acc: 0.8765
Epoch 56/8000
2620/2620 [==============================] - 0s 10us/sample - loss: 0.3363 - acc: 0.8802 - val_loss: 0.3408 - val_acc: 0.8720
Epoch 57/8000
2620/2620 [==============================] - 0s 40us/sample - loss: 0.3318 - acc: 0.8821 - val_loss: 0.3355 - val_acc: 0.8735
Epoch 58/8000
2620/2620 [==============================] - 0s 12us/sample - loss: 0.3269 - acc: 0.8824 - val_loss: 0.3288 - val_acc: 0.8811
Epoch 59/8000
2620/2620 [==============================] - 0s 39us/sample - loss: 0.3218 - acc: 0.8836 - val_loss: 0.3265 - val_acc: 0.8689
Epoch 60/8000
2620/2620 [==============================] - 0s 12us/sample - loss: 0.3169 - acc: 0.8859 - val_loss: 0.3203 - val_acc: 0.8841
Epoch 61/8000
2620/2620 [==============================] - 0s 41us/sample - loss: 0.3114 - acc: 0.8866 - val_loss: 0.3176 - val_acc: 0.8704
Epoch 62/8000
2620/2620 [==============================] - 0s 13us/sample - loss: 0.3074 - acc: 0.8847 - val_loss: 0.3130 - val_acc: 0.8704
Epoch 63/8000
2620/2620 [==============================] - 0s 42us/sample - loss: 0.3024 - acc: 0.8863 - val_loss: 0.3093 - val_acc: 0.8704
Epoch 64/8000
2620/2620 [==============================] - 0s 11us/sample - loss: 0.2984 - acc: 0.8893 - val_loss: 0.3051 - val_acc: 0.8704
Epoch 65/8000
2620/2620 [==============================] - 0s 38us/sample - loss: 0.2944 - acc: 0.8885 - val_loss: 0.3009 - val_acc: 0.8735
Epoch 66/8000
2620/2620 [==============================] - 0s 14us/sample - loss: 0.2906 - acc: 0.8889 - val_loss: 0.2994 - val_acc: 0.8720
Epoch 67/8000
2620/2620 [==============================] - 0s 39us/sample - loss: 0.2871 - acc: 0.8908 - val_loss: 0.2962 - val_acc: 0.8720
Epoch 68/8000
2620/2620 [==============================] - 0s 12us/sample - loss: 0.2840 - acc: 0.8916 - val_loss: 0.2936 - val_acc: 0.8780
Epoch 69/8000
2620/2620 [==============================] - 0s 37us/sample - loss: 0.2808 - acc: 0.8927 - val_loss: 0.2896 - val_acc: 0.8780
Epoch 70/8000
2620/2620 [==============================] - 0s 14us/sample - loss: 0.2776 - acc: 0.8947 - val_loss: 0.2862 - val_acc: 0.8780
Epoch 71/8000
2620/2620 [==============================] - 0s 37us/sample - loss: 0.2750 - acc: 0.8947 - val_loss: 0.2856 - val_acc: 0.8811
Epoch 72/8000
2620/2620 [==============================] - 0s 12us/sample - loss: 0.2750 - acc: 0.8931 - val_loss: 0.2811 - val_acc: 0.8765
Epoch 73/8000
2620/2620 [==============================] - 0s 39us/sample - loss: 0.2690 - acc: 0.8954 - val_loss: 0.2804 - val_acc: 0.8811
Epoch 74/8000
2620/2620 [==============================] - 0s 11us/sample - loss: 0.2670 - acc: 0.8962 - val_loss: 0.2759 - val_acc: 0.8796
Epoch 75/8000
2620/2620 [==============================] - 0s 38us/sample - loss: 0.2645 - acc: 0.8992 - val_loss: 0.2761 - val_acc: 0.8796
###Markdown
Information Plane creation helper functions
###Code
def name_creation(architecture, learning_r, batch, stop_early, perf_stop):
"""
creates architecture name
"""
if stop_early == True:
common_name = "early_stop_" + architecture + "_lr_" + str(learning_r) + "_batchsize_" + str(batch)
elif perf_stop == True:
common_name = "perfect_stop_" + architecture + "_lr_" + str(learning_r) + "_batchsize_" + str(batch)
else:
common_name = "no_early_stop_" + architecture + "_lr_" + str(learning_r) + "_batchsize_" + str(batch)
return common_name
def extract_max_key(MI_dic):
"""
find keys of max mi
"""
max_key = max(MI_dic, key=MI_dic.get)
return max_key
def extract_max(MI_object):
"""
find maximum MI values
"""
max_x = MI_object.mi_x[extract_max_key(MI_object.mi_x)]
max_y = MI_object.mi_y[extract_max_key(MI_object.mi_y)]
return max_x, max_y
###Output
_____no_output_____
###Markdown
create model name and define colours for plots
###Code
common_name = name_creation(architecture, learning_r, batch, stop_early, perf_stop)
if "mnist" in set_name:
common_name = str(samples) + str(nrs) + common_name
color_list = ["red", "blue", "green", "orange", "purple",
"brown", "pink", "teal", "goldenrod", "gray", "limegreen",
"cornflowerblue", "black"]
###Output
_____no_output_____
###Markdown
Find model score and potentially save data
###Code
score = model.evaluate(X_test, y_test, verbose=0)[1]
aname = common_name + "_activations"
outputs.model_score = score
if save_data_flag == True:
util.save(outputs, aname)
hname = common_name + "_history"
h_obj = history.history
h_obj["model_score"] = score
if save_data_flag == True:
util.save(h_obj, hname)
###Output
_____no_output_____
###Markdown
loss and test score plots
###Code
plotting.plot_history(h_obj, common_name, show_flag, save_MI_and_plots)
plotting.plot_test_development(outputs.int_model_score, common_name, show_flag,
save_MI_and_plots)
###Output
creating testscore devolopment plot
###Markdown
Find the epoch with minimum validation loss for perfect stop; note perf_nr_of_epochs down to use as the number of epochs for the rerun
###Code
# extract index of minimum loss
less_min_epoch = h_obj["val_loss"].index(np.amin(h_obj["val_loss"]))
# + 1 because list index starts at 0 but epochs not
perf_nr_of_epochs = less_min_epoch + 1
print(perf_nr_of_epochs)
# TanH 1752
# ReLU 911
###Output
74
###Markdown
Mutual information calculation In this section the maximum values of the mutual information have to be noted down after a run with the maximum number of epochs, so that early stop and perfect stop share the same maximum plot values. This has to be done for every estimator that is used, and the noted values are saved in the cells before the estimators. Binning
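The cell below is a minimal, generic sketch of what such a fixed-width binning estimator computes (entropies in bits, one discrete symbol per binned activation vector). It is an illustration only: the plots in this notebook are produced by the info_plane / classes modules, which may differ in details, and the helper names here (`_entropy`, `binned_mi`) are made up for this sketch.
###Code
from collections import Counter
import numpy as np

def _entropy(symbols):
    # empirical Shannon entropy (in bits) of a list of hashable symbols
    counts = np.array(list(Counter(symbols).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def binned_mi(layer_activations, labels, bin_size=0.07):
    # discretise every activation into fixed-width bins, treat the binned
    # activation vector of each sample as one discrete symbol T, then use
    # I(T;Y) = H(T) + H(Y) - H(T,Y) with empirical frequencies
    binned = np.floor(np.asarray(layer_activations) / bin_size).astype(int)
    t_syms = [tuple(row) for row in binned]
    y_syms = [tuple(np.atleast_1d(y)) for y in np.asarray(labels)]
    return _entropy(t_syms) + _entropy(y_syms) - _entropy(list(zip(t_syms, y_syms)))

# repeating binned_mi per layer and per recorded epoch yields the points of the information plane
###Output
_____no_output_____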
###Code
#bin_max_mi = [11.677719641641012, 0.9976734295143714] # TanH
bin_max_mi = [11.677719641641012, 0.9976734295143714]
import importlib
importlib.reload(info_plane)
est_type_flag = 1
bin_amount = 0.07
bin_size_or_nr=True
separate_flag = False
seed(1337)
set_random_seed(1337)
BMI_object = classes.Binning_MI()
if perf_stop == False and stop_early == False:
full_flag = True
BMI_object = info_plane.create_infoplane(common_name, X_train, y_train, outputs,
est_type_flag, color_list, bin_amount,
bin_size_or_nr, show_flag, separate_flag,
save_MI_and_plots, False)
# note these values for next execution and fill into bin_max_mi line 7 of this cell
bin_max_vals = extract_max(BMI_object)
print("maximum values for MI with X and Y", bin_max_vals)
else:
full_flag = False
BMI_object = info_plane.ranged_create_infoplane(bin_max_mi[0], bin_max_mi[1], common_name, X_train,
y_train, outputs, est_type_flag, color_list,
bin_amount, bin_size_or_nr, show_flag, separate_flag,
save_MI_and_plots, False)
###Output
o max: {0: 3.3018062, 1: 5.0639796, 2: 5.4271274, 3: 6.5869904, 4: 6.69625, 5: 0.99999464}
o min: {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0, 5: 5.381255e-06}
X and Y MI: 0.9976734295143714 , X Entropy: 11.677719641641012
MI for epoch 0 is being calculated for 0.07 bins
MI for epoch 1 is being calculated for 0.07 bins
MI for epoch 2 is being calculated for 0.07 bins
MI for epoch 3 is being calculated for 0.07 bins
MI for epoch 4 is being calculated for 0.07 bins
MI for epoch 5 is being calculated for 0.07 bins
MI for epoch 6 is being calculated for 0.07 bins
MI for epoch 7 is being calculated for 0.07 bins
MI for epoch 8 is being calculated for 0.07 bins
MI for epoch 9 is being calculated for 0.07 bins
MI for epoch 10 is being calculated for 0.07 bins
MI for epoch 11 is being calculated for 0.07 bins
MI for epoch 12 is being calculated for 0.07 bins
MI for epoch 13 is being calculated for 0.07 bins
MI for epoch 14 is being calculated for 0.07 bins
MI for epoch 15 is being calculated for 0.07 bins
MI for epoch 16 is being calculated for 0.07 bins
MI for epoch 17 is being calculated for 0.07 bins
MI for epoch 18 is being calculated for 0.07 bins
MI for epoch 19 is being calculated for 0.07 bins
MI for epoch 20 is being calculated for 0.07 bins
MI for epoch 21 is being calculated for 0.07 bins
MI for epoch 22 is being calculated for 0.07 bins
MI for epoch 23 is being calculated for 0.07 bins
MI for epoch 24 is being calculated for 0.07 bins
MI for epoch 25 is being calculated for 0.07 bins
MI for epoch 26 is being calculated for 0.07 bins
MI for epoch 27 is being calculated for 0.07 bins
MI for epoch 28 is being calculated for 0.07 bins
MI for epoch 29 is being calculated for 0.07 bins
MI for epoch 30 is being calculated for 0.07 bins
Creating epochview plot
|
ANN_11_May_2018.ipynb | ###Markdown
Artificial Neural Network Four companies listed on the JSE : Netcare Group Limited : Santam Limited : Nedbank Group : Sanlam Limited
###Code
#importing the modules in python
from pandas import Series
import matplotlib.pylab as plt
import numpy as np
import pandas as pd
from pandas import DataFrame, Series
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
import statsmodels.tsa.api as smt
import matplotlib.dates as dates
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 15, 6
pd.options.display.width = 600
import pandas_datareader.data as web
###Output
_____no_output_____
###Markdown
Netcare
###Code
#load the data
filename = 'ntc.csv'
netcareTS = pd.read_csv(filename,na_filter=True,index_col="Date").dropna()
netcareTS.head(5)
plt.figure(figsize=(20,10))
netcareTS["Close"].plot(label="netcare prices",color="red")
plt.grid(True)
plt.title("Netcare group limited (closing prices)",color="black")
plt.xlabel("Index")
plt.ylabel("Prices(in Rands)")
plt.legend(loc=1)
###Output
_____no_output_____
###Markdown
$\textbf{Log-returns}$ To analyze the stock price, we usually calculate the logged return of the stock to make the data stationary: $r_{t}=\log\Big(\frac{p_{t}}{p_{t-1}}\Big)$, where $p_{t}$ is the stock price at time $t$.
###Code
#The log-returns
returns=np.log((netcareTS["Close"])/(netcareTS["Close"].shift()))
returns.head(5)
returns=returns.dropna()
plt.figure(figsize=(15,9))
returns.plot(label="Log-returns",color="red")
plt.grid(True)
plt.title("Log returns of Netcare group limited",color="black")
plt.xlabel("Index")
plt.ylabel("log returns")
plt.legend(loc=0)
plt.show()
###Output
_____no_output_____
###Markdown
$\textbf{Split the data into multiple training and testing sets}$ split1: Train 500 Test:200 split2: Train 700 Test:200 Multilayer perceptron split1: Train 500 Test:200
###Code
x_train=returns[0:500]
x_test=returns[500:700]
y_train=returns[0:500]
y_test=returns[500:700]
x_train=np.reshape(x_train,[len(y_train),1])
x_train.shape
x_test=np.reshape(x_test,[len(y_test),1])
x_test.shape
#Standardization refers to shifting the distribution of each attribute to have a mean of zero
#and a standard deviation of one (unit variance). It is useful to standardize attributes
#for a model that relies on the distribution of attributes such as Gaussian processes.
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(x_train)
x_train=scaler.transform(x_train)
x_test=scaler.transform(x_test)
import math
import numpy as np
import sys
class NeuralNetMLP(object):
""" Feedforward neural network / Multi-layer perceptron classifier.
Parameters
------------
n_hidden : int (default: 30)
l2 : float (default: 0.) Lambda value for L2-regularization.
No regularization if l2=0. (default)
epochs : int (default: 100) Number of passes over the training set.
eta : float (default: 0.001) Learning rate.
shuffle : bool (default: True) Shuffles training data every epoch
if True to prevent circles.
minibatch_size : int (default: 1) Number of training samples per minibatch.
seed : int (default: None) Random seed for initializing weights and shuffling.
Attributes
-----------
eval_ : dict
Dictionary collecting the cost, training accuracy,
and validation accuracy for each epoch during training.
"""
def __init__(self, n_hidden=2,
l2=0.0006, epochs=3000, eta=0.0006,
shuffle=True, minibatch_size=1, seed=None):
self.random = np.random.RandomState(seed)
self.n_hidden = n_hidden
self.l2 = l2
self.epochs = epochs
self.eta = eta
self.shuffle = shuffle
self.minibatch_size = minibatch_size
def _onehot(self, y):
"""Encode labels into one-hot representation
Parameters
------------
y : array, shape = [n_samples]
Target values.
Returns
-----------
onehot : array, shape = (n_samples, n_labels)
"""
onehot = np.zeros((1, y.shape[0]))
for idx, val in enumerate(y.astype(float)):
onehot[-1, idx] = val
return onehot.T
def _tangent(self,z):
"""
Compute the tangent function
"""
#return (np.sinh(z)/np.cosh(z))
return np.tanh(z)
def _forward(self, X):
"""Compute forward propagation step"""
# step 1: net input of hidden layer
# [n_samples, n_features] dot [n_features, n_hidden]
# -> [n_samples, n_hidden]
z_h = np.dot(X, self.w_h) + self.b_h
# step 2: activation of hidden layer
a_h = self._tangent(z_h)
# step 3: net input of output layer
# [n_samples, n_hidden] dot [n_hidden, n_classlabels]
# -> [n_samples, n_classlabels]
z_out = np.dot(a_h, self.w_out) + self.b_out
# step 4: activation output layer
a_out=self._tangent(z_out)
return z_h, a_h, z_out, a_out
def _mse(self, y_predict, y_target):
"""Compute cost function.
Parameters
----------
y_enc : array, shape = (n_samples, n_labels) one-hot encoded class labels.
output : array, shape = [n_samples, n_output_units] Activation of the output layer (forward propagation)
Returns
---------
cost : float Regularized cost
"""
L2_term = (self.l2 *(np.sum(self.w_h ** 2.) + np.sum(self.w_out ** 2.)))
cost= np.mean((y_predict-y_target)**2)+L2_term
return cost
def predict(self, X):
"""Predict class labels
Parameters
-----------
X : array, shape = [n_samples, n_features]
Input layer with original features.
Returns:
----------
y_pred : array, shape = [n_samples]
Predicted class labels.
"""
z_h, a_h, z_out, a_out = self._forward(X)
print(np.shape(z_out))
y_pred = np.max(z_out, axis=1)
return y_pred
def fit(self, x_train, y_train):
""" Learn weights from training data.
Parameters
-----------
X_train : array, shape = [n_samples, n_features]
Input layer with original features.
y_train : array, shape = [n_samples]
Target class labels.
X_valid : array, shape = [n_samples, n_features]
Sample features for validation during training
y_valid : array, shape = [n_samples]
Sample labels for validation during training
Returns:
----------
self
"""
n_output = y_train.shape[0] # no. of class
#labels
n_features = x_train.shape[1]
########################
# Weight initialization
########################
# weights for input -> hidden
self.b_h = np.zeros(self.n_hidden)
self.w_h = self.random.normal(loc=0.0, scale=0.1,
size=(n_features,
self.n_hidden))
# weights for hidden -> output
self.b_out = np.zeros(1)
self.w_out = self.random.normal(loc=0.0, scale=0.1,
size=(self.n_hidden,
1))
epoch_strlen = len(str(self.epochs)) # for progr. format.
self.eval_ = {'cost': [], 'train_acc': [], 'valid_acc': []}
y_train_enc = self._onehot(y_train)
print(y_train_enc.shape)
# iterate over training epochs
for i in range(self.epochs):
# iterate over minibatches
indices = np.arange(x_train.shape[0])
if self.shuffle:
self.random.shuffle(indices)
for start_idx in range(0, indices.shape[0]-self.minibatch_size +1, self.minibatch_size):
batch_idx = indices[start_idx:start_idx + self.minibatch_size]
# forward propagation
z_h, a_h, z_out, a_out = self._forward(x_train[batch_idx])
##################
# Backpropagation
##################
# [n_samples, n_classlabels]
sigma_out = a_out -y_train_enc[batch_idx]
# [n_samples, n_hidden]
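# (derivative of tanh written out with exponentials; algebraically equal to 1 - np.tanh(z_h)**2, i.e. 1. - a_h**2)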
tangent_derivative=1-((np.exp(z_h) -np.exp(-z_h))**2/(np.exp(z_h)+np.exp(-z_h))**2)
# [n_samples, n_classlabels] dot [n_classlabels,
# n_hidden]
# -> [n_samples, n_hidden]
sigma_h = (np.dot(sigma_out, self.w_out.T) *
tangent_derivative)
#sigma_h = (np.dot(sigma_out, self.w_out.T))
# [n_features, n_samples] dot [n_samples,
# n_hidden]
# -> [n_features, n_hidden]
grad_w_h = np.dot(x_train[batch_idx].T, sigma_h)
grad_b_h = np.sum(sigma_h, axis=0)
# [n_hidden, n_samples] dot [n_samples,
# n_classlabels]
# -> [n_hidden, n_classlabels]
grad_w_out = np.dot(a_h.T, sigma_out)
grad_b_out = np.sum(sigma_out, axis=0)
# Regularization and weight updates
delta_w_h = (grad_w_h + self.l2*self.w_h)
delta_b_h = grad_b_h # bias is not regularized
self.w_h -= self.eta * delta_w_h
self.b_h -= self.eta * delta_b_h
delta_w_out = (grad_w_out + self.l2*self.w_out)
delta_b_out = grad_b_out # bias is not regularized
self.w_out -= self.eta * delta_w_out
self.b_out -= self.eta * delta_b_out
#############
# Evaluation
#############
# Evaluation after each epoch during training
z_h, a_h, z_out, a_out = self._forward(x_train)
cost = self._mse(y_train_enc,
a_out)
y_train_pred = self.predict(x_train)
#y_valid_pred = self.predict(X_valid)
train_acc = ((np.sum(y_train ==
y_train_pred)).astype(np.float) /
x_train.shape[0])
sys.stderr.write('\r%0*d/%d | Cost: %.2f '
'| Train Acc.: %.2f '
%
(epoch_strlen, i+1, self.epochs,
cost,
train_acc*100))
sys.stderr.flush()
self.eval_['cost'].append(cost)
self.eval_['train_acc'].append(train_acc)
return self
nn = NeuralNetMLP(n_hidden=2,
l2=0.0006,
epochs=4800,
eta=0.0006,
minibatch_size=1,
shuffle=True,
seed=1)
nn.fit(x_train, y_train)
y_test_pred = nn.predict(x_test)
y_test=np.array(y_test)
test_predicted=np.array(y_test_pred)
import matplotlib.pyplot as plt
plt.figure(figsize=(7,4))
plt.plot(y_test,label="Actual returns")
plt.plot(test_predicted,label="predicted returns")
plt.grid(True)
plt.title("Actual returns values vs Predicted returns values",color="blue")
plt.xlabel("Index")
plt.ylabel("Closing Prices")
plt.legend(loc=0)
plt.show()
binary_predicted_test=[]
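# convert predictions to direction labels (1 = up versus the previous prediction, 0 = not up);
# note: at t = 0, test_predicted[t-1] is test_predicted[-1] (the last element) because of Python's negative indexing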
for t in range(0,len(test_predicted)):
if test_predicted[t-1]>=test_predicted[t]:
binary_predicted_test.append(0)
else:
binary_predicted_test.append(1)
print(binary_predicted_test,len(binary_predicted_test))
print("Predicted data:")
#converting data to binary 0 or 1
binary_test=[]
for t in range(0,len(y_test)):
if y_test[t-1]>=y_test[t]:
binary_test.append(0)
else:
binary_test.append(1)
print(binary_test,len(binary_test))
len(binary_test),len(binary_predicted_test),type(binary_test)
#we count the number of correct predictions, if i-j==0 then we predicted the direction correctly
def counter(x,y):
count=0
for (i,j) in zip(x,y):
if i-j==0:
count=count+1
print ("The number of correct direction predictions is:",count,"out of:",len(x),"data points")
counter(binary_test,binary_predicted_test)
#accuracy describes: overall, how often the classifier is correct
from sklearn.metrics import accuracy_score,confusion_matrix,classification_report
print("Accuracy for testing data:")
def Confusion_matrix(n):
print(accuracy_score(binary_test[0:n],binary_predicted_test[0:n])*100)
confusion=confusion_matrix(binary_test[0:n],binary_predicted_test[0:n])
print(confusion)
#show confusion matrix
plt.matshow(confusion)
plt.title('Confusion matrix')
plt.colorbar()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
Confusion_matrix(len(binary_test))
###Output
_____no_output_____
###Markdown
split2: Train 700 Test:200
###Code
returns=returns.dropna()
x_train=returns[0:700]
x_test=returns[700:900]
y_train=returns[0:700]
y_test=returns[700:900]
x_train=np.reshape(x_train,[len(y_train),1])
x_train.shape
x_test=np.reshape(x_test,[len(y_test),1])
x_test.shape
#Standardization refers to shifting the distribution of each attribute to have a mean of zero
#and a standard deviation of one (unit variance). It is useful to standardize attributes
#for a model that relies on the distribution of attributes such as Gaussian processes.
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(x_train)
x_train=scaler.transform(x_train)
x_test=scaler.transform(x_test)
import math
import numpy as np
import sys
class NeuralNetMLP(object):
""" Feedforward neural network / Multi-layer perceptron classifier.
Parameters
------------
n_hidden : int (default: 30)
l2 : float (default: 0.) Lambda value for L2-regularization.
No regularization if l2=0. (default)
epochs : int (default: 100) Number of passes over the training set.
eta : float (default: 0.001) Learning rate.
shuffle : bool (default: True) Shuffles training data every epoch
if True to prevent circles.
minibatch_size : int (default: 1) Number of training samples per minibatch.
seed : int (default: None) Random seed for initializing weights and shuffling.
Attributes
-----------
eval_ : dict
Dictionary collecting the cost, training accuracy,
and validation accuracy for each epoch during training.
"""
def __init__(self, n_hidden=2,
l2=0.0006, epochs=3000, eta=0.0006,
shuffle=True, minibatch_size=1, seed=None):
self.random = np.random.RandomState(seed)
self.n_hidden = n_hidden
self.l2 = l2
self.epochs = epochs
self.eta = eta
self.shuffle = shuffle
self.minibatch_size = minibatch_size
def _onehot(self, y):
"""Encode labels into one-hot representation
Parameters
------------
y : array, shape = [n_samples]
Target values.
Returns
-----------
onehot : array, shape = (n_samples, n_labels)
"""
onehot = np.zeros((1, y.shape[0]))
for idx, val in enumerate(y.astype(float)):
onehot[-1, idx] = val
return onehot.T
def _tangent(self,z):
"""
Compute the tangent function
"""
#return (np.sinh(z)/np.cosh(z))
return np.tanh(z)
def _forward(self, X):
"""Compute forward propagation step"""
# step 1: net input of hidden layer
# [n_samples, n_features] dot [n_features, n_hidden]
# -> [n_samples, n_hidden]
z_h = np.dot(X, self.w_h) + self.b_h
# step 2: activation of hidden layer
a_h = self._tangent(z_h)
# step 3: net input of output layer
# [n_samples, n_hidden] dot [n_hidden, n_classlabels]
# -> [n_samples, n_classlabels]
z_out = np.dot(a_h, self.w_out) + self.b_out
# step 4: activation output layer
a_out=self._tangent(z_out)
return z_h, a_h, z_out, a_out
def _mse(self, y_predict, y_target):
"""Compute cost function.
Parameters
----------
y_enc : array, shape = (n_samples, n_labels) one-hot encoded class labels.
output : array, shape = [n_samples, n_output_units] Activation of the output layer (forward propagation)
Returns
---------
cost : float Regularized cost
"""
L2_term = (self.l2 *(np.sum(self.w_h ** 2.) + np.sum(self.w_out ** 2.)))
cost= np.mean((y_predict-y_target)**2)+L2_term
return cost
def predict(self, X):
"""Predict class labels
Parameters
-----------
X : array, shape = [n_samples, n_features]
Input layer with original features.
Returns:
----------
y_pred : array, shape = [n_samples]
Predicted class labels.
"""
z_h, a_h, z_out, a_out = self._forward(X)
print(np.shape(z_out))
y_pred = np.max(z_out, axis=1)
return y_pred
def fit(self, x_train, y_train):
""" Learn weights from training data.
Parameters
-----------
X_train : array, shape = [n_samples, n_features]
Input layer with original features.
y_train : array, shape = [n_samples]
Target class labels.
X_valid : array, shape = [n_samples, n_features]
Sample features for validation during training
y_valid : array, shape = [n_samples]
Sample labels for validation during training
Returns:
----------
self
"""
n_output = y_train.shape[0] # no. of class
#labels
n_features = x_train.shape[1]
########################
# Weight initialization
########################
# weights for input -> hidden
self.b_h = np.zeros(self.n_hidden)
self.w_h = self.random.normal(loc=0.0, scale=0.1,
size=(n_features,
self.n_hidden))
# weights for hidden -> output
self.b_out = np.zeros(1)
self.w_out = self.random.normal(loc=0.0, scale=0.1,
size=(self.n_hidden,
1))
epoch_strlen = len(str(self.epochs)) # for progr. format.
self.eval_ = {'cost': [], 'train_acc': [], 'valid_acc': []}
y_train_enc = self._onehot(y_train)
print(y_train_enc.shape)
# iterate over training epochs
for i in range(self.epochs):
# iterate over minibatches
indices = np.arange(x_train.shape[0])
if self.shuffle:
self.random.shuffle(indices)
for start_idx in range(0, indices.shape[0]-self.minibatch_size +1, self.minibatch_size):
batch_idx = indices[start_idx:start_idx + self.minibatch_size]
# forward propagation
z_h, a_h, z_out, a_out = self._forward(x_train[batch_idx])
##################
# Backpropagation
##################
# [n_samples, n_classlabels]
sigma_out = a_out -y_train_enc[batch_idx]
# [n_samples, n_hidden]
#sigmoid_derivative_h = a_h * (1. - a_h)
#tangent_derivative= (1-(np.sinh(z_h)/np.cosh(z_h))**2)
tangent_derivative=1-((np.exp(z_h) -np.exp(-z_h))**2/(np.exp(z_h)+np.exp(-z_h))**2)
#tangent_derivative=1-np.tanh(a_h)*np.tanh(a_h)
# [n_samples, n_classlabels] dot [n_classlabels,
# n_hidden]
# -> [n_samples, n_hidden]
#sigma_h = (np.dot(sigma_out, self.w_out.T) *
#sigmoid_derivative_h)
sigma_h = (np.dot(sigma_out, self.w_out.T) *
tangent_derivative)
#sigma_h = (np.dot(sigma_out, self.w_out.T))
# [n_features, n_samples] dot [n_samples,
# n_hidden]
# -> [n_features, n_hidden]
grad_w_h = np.dot(x_train[batch_idx].T, sigma_h)
grad_b_h = np.sum(sigma_h, axis=0)
# [n_hidden, n_samples] dot [n_samples,
# n_classlabels]
# -> [n_hidden, n_classlabels]
grad_w_out = np.dot(a_h.T, sigma_out)
grad_b_out = np.sum(sigma_out, axis=0)
# Regularization and weight updates
delta_w_h = (grad_w_h + self.l2*self.w_h)
delta_b_h = grad_b_h # bias is not regularized
self.w_h -= self.eta * delta_w_h
self.b_h -= self.eta * delta_b_h
delta_w_out = (grad_w_out + self.l2*self.w_out)
delta_b_out = grad_b_out # bias is not regularized
self.w_out -= self.eta * delta_w_out
self.b_out -= self.eta * delta_b_out
#############
# Evaluation
#############
# Evaluation after each epoch during training
z_h, a_h, z_out, a_out = self._forward(x_train)
cost = self._mse(y_train_enc,
a_out)
y_train_pred = self.predict(x_train)
#y_valid_pred = self.predict(X_valid)
train_acc = ((np.sum(y_train ==
y_train_pred)).astype(np.float) /
x_train.shape[0])
sys.stderr.write('\r%0*d/%d | Cost: %.2f '
'| Train Acc.: %.2f '
%
(epoch_strlen, i+1, self.epochs,
cost,
train_acc*100))
sys.stderr.flush()
self.eval_['cost'].append(cost)
self.eval_['train_acc'].append(train_acc)
return self
nn = NeuralNetMLP(n_hidden=2,
l2=0.0006,
epochs=4600,
eta=0.0006,
minibatch_size=1,
shuffle=True,
seed=1)
nn.fit(x_train, y_train)
y_test_pred = nn.predict(x_test)
y_test=np.array(y_test)
test_predicted=np.array(y_test_pred)
import matplotlib.pyplot as plt
plt.figure(figsize=(7,4))
plt.plot(y_test,label="Actual returns")
plt.plot(test_predicted,label="predicted returns")
plt.grid(True)
plt.title("Actual returns values vs Predicted returns values",color="blue")
plt.xlabel("Index")
plt.ylabel("Closing Prices")
plt.legend(loc=0)
plt.show()
binary_predicted_test=[]
for t in range(0,len(test_predicted)):
if test_predicted[t-1]>=test_predicted[t]:
binary_predicted_test.append(0)
else:
binary_predicted_test.append(1)
print(binary_predicted_test,len(binary_predicted_test))
print("Predicted data:")
#converting data to binary 0 or 1
binary_test=[]
for t in range(0,len(y_test)):
if y_test[t-1]>=y_test[t]:
binary_test.append(0)
else:
binary_test.append(1)
print(binary_test,len(binary_test))
len(binary_test),len(binary_predicted_test),type(binary_test)
#we count the number of correct predictions, if i-j==0 then we predicted the direction correctly
def counter(x,y):
count=0
for (i,j) in zip(x,y):
if i-j==0:
count=count+1
print ("The number of correct direction predictions is:",count,"out of:",len(x),"data points")
counter(binary_test,binary_predicted_test)
#accuracy describes: overall, how often the classifier is correct
from sklearn.metrics import accuracy_score,confusion_matrix,classification_report
print("Accuracy for testing data:")
def Confusion_matrix(n):
print(accuracy_score(binary_test[0:n],binary_predicted_test[0:n])*100)
confusion=confusion_matrix(binary_test[0:n],binary_predicted_test[0:n])
print(confusion)
#show confusion matrix
plt.matshow(confusion)
plt.title('Confusion matrix')
plt.colorbar()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
Confusion_matrix(len(binary_test))
###Output
_____no_output_____
###Markdown
Santam Group Limited
###Code
#load the data
filename = 'snt.csv'
santamTS = pd.read_csv(filename,na_filter=True,index_col="Date").dropna()
santamTS.head(5)
#The log-returns
returns=np.log((santamTS["Close"])/(santamTS["Close"].shift()))
returns.head(5)
returns=returns.dropna()
plt.figure(figsize=(15,9))
returns.plot(label="Log-returns",color="red")
plt.grid(True)
plt.title("Log returns of Netcare group limited",color="black")
plt.xlabel("Index")
plt.ylabel("log returns")
plt.legend(loc=0)
plt.show()
###Output
_____no_output_____
###Markdown
split1 Train:500 Test:200
###Code
returns=returns.dropna()
x_train=returns[0:500]
x_test=returns[500:700]
y_train=returns[0:500]
y_test=returns[500:700]
x_train=np.reshape(x_train,[len(y_train),1])
x_train.shape
x_test=np.reshape(x_test,[len(y_test),1])
x_test.shape
scaler = StandardScaler()
scaler.fit(x_train)
x_train=scaler.transform(x_train)
x_test=scaler.transform(x_test)
import math
import numpy as np
import sys
class NeuralNetMLP(object):
""" Feedforward neural network / Multi-layer perceptron classifier.
Parameters
------------
n_hidden : int (default: 30)
l2 : float (default: 0.) Lambda value for L2-regularization.
No regularization if l2=0. (default)
epochs : int (default: 100) Number of passes over the training set.
eta : float (default: 0.001) Learning rate.
shuffle : bool (default: True) Shuffles training data every epoch
if True to prevent circles.
minibatch_size : int (default: 1) Number of training samples per minibatch.
seed : int (default: None) Random seed for initializing weights and shuffling.
Attributes
-----------
eval_ : dict
Dictionary collecting the cost, training accuracy,
and validation accuracy for each epoch during training.
"""
def __init__(self, n_hidden=2,
l2=0.0006, epochs=3000, eta=0.0006,
shuffle=True, minibatch_size=1, seed=None):
self.random = np.random.RandomState(seed)
self.n_hidden = n_hidden
self.l2 = l2
self.epochs = epochs
self.eta = eta
self.shuffle = shuffle
self.minibatch_size = minibatch_size
def _onehot(self, y):
"""Encode labels into one-hot representation
Parameters
------------
y : array, shape = [n_samples]
Target values.
Returns
-----------
onehot : array, shape = (n_samples, n_labels)
"""
onehot = np.zeros((1, y.shape[0]))
for idx, val in enumerate(y.astype(float)):
onehot[-1, idx] = val
return onehot.T
def _tangent(self,z):
"""
Compute the tangent function
"""
#return (np.sinh(z)/np.cosh(z))
return np.tanh(z)
def _forward(self, X):
"""Compute forward propagation step"""
# step 1: net input of hidden layer
# [n_samples, n_features] dot [n_features, n_hidden]
# -> [n_samples, n_hidden]
z_h = np.dot(X, self.w_h) + self.b_h
# step 2: activation of hidden layer
a_h = self._tangent(z_h)
# step 3: net input of output layer
# [n_samples, n_hidden] dot [n_hidden, n_classlabels]
# -> [n_samples, n_classlabels]
z_out = np.dot(a_h, self.w_out) + self.b_out
# step 4: activation output layer
a_out=self._tangent(z_out) # tanh activation at the output layer
return z_h, a_h, z_out, a_out
def _mse(self, y_predict, y_target):
"""Compute cost function.
Parameters
----------
y_enc : array, shape = (n_samples, n_labels) one-hot encoded class labels.
output : array, shape = [n_samples, n_output_units] Activation of the output layer (forward propagation)
Returns
---------
cost : float Regularized cost
"""
L2_term = (self.l2 *(np.sum(self.w_h ** 2.) + np.sum(self.w_out ** 2.)))
cost= np.mean((y_predict-y_target)**2)+L2_term
return cost
def predict(self, X):
"""Predict class labels
Parameters
-----------
X : array, shape = [n_samples, n_features]
Input layer with original features.
Returns:
----------
y_pred : array, shape = [n_samples]
Predicted class labels.
"""
z_h, a_h, z_out, a_out = self._forward(X)
print(np.shape(z_out))
y_pred = np.max(z_out, axis=1)
return y_pred
def fit(self, x_train, y_train):
""" Learn weights from training data.
Parameters
-----------
X_train : array, shape = [n_samples, n_features]
Input layer with original features.
y_train : array, shape = [n_samples]
Target class labels.
X_valid : array, shape = [n_samples, n_features]
Sample features for validation during training
y_valid : array, shape = [n_samples]
Sample labels for validation during training
Returns:
----------
self
"""
n_output = y_train.shape[0] # no. of class
#labels
n_features = x_train.shape[1]
########################
# Weight initialization
########################
# weights for input -> hidden
self.b_h = np.zeros(self.n_hidden)
self.w_h = self.random.normal(loc=0.0, scale=0.1,
size=(n_features,
self.n_hidden))
# weights for hidden -> output
self.b_out = np.zeros(1)
self.w_out = self.random.normal(loc=0.0, scale=0.1,
size=(self.n_hidden,
1))
epoch_strlen = len(str(self.epochs)) # for progr. format.
self.eval_ = {'cost': [], 'train_acc': [], 'valid_acc': []}
y_train_enc = self._onehot(y_train)
print(y_train_enc.shape)
# iterate over training epochs
for i in range(self.epochs):
# iterate over minibatches
indices = np.arange(x_train.shape[0])
if self.shuffle:
self.random.shuffle(indices)
for start_idx in range(0, indices.shape[0]-self.minibatch_size +1, self.minibatch_size):
batch_idx = indices[start_idx:start_idx + self.minibatch_size]
# forward propagation
z_h, a_h, z_out, a_out = self._forward(x_train[batch_idx])
##################
# Backpropagation
##################
# [n_samples, n_classlabels]
sigma_out = a_out -y_train_enc[batch_idx]
# [n_samples, n_hidden]
#sigmoid_derivative_h = a_h * (1. - a_h)
#tangent_derivative= (1-(np.sinh(z_h)/np.cosh(z_h))**2)
tangent_derivative=1-((np.exp(z_h) -np.exp(-z_h))**2/(np.exp(z_h)+np.exp(-z_h))**2)
#tangent_derivative=1-np.tanh(a_h)*np.tanh(a_h)
# [n_samples, n_classlabels] dot [n_classlabels,
# n_hidden]
# -> [n_samples, n_hidden]
#sigma_h = (np.dot(sigma_out, self.w_out.T) *
#sigmoid_derivative_h)
sigma_h = (np.dot(sigma_out, self.w_out.T) *
tangent_derivative)
#sigma_h = (np.dot(sigma_out, self.w_out.T))
# [n_features, n_samples] dot [n_samples,
# n_hidden]
# -> [n_features, n_hidden]
grad_w_h = np.dot(x_train[batch_idx].T, sigma_h)
grad_b_h = np.sum(sigma_h, axis=0)
# [n_hidden, n_samples] dot [n_samples,
# n_classlabels]
# -> [n_hidden, n_classlabels]
grad_w_out = np.dot(a_h.T, sigma_out)
grad_b_out = np.sum(sigma_out, axis=0)
# Regularization and weight updates
delta_w_h = (grad_w_h + self.l2*self.w_h)
delta_b_h = grad_b_h # bias is not regularized
self.w_h -= self.eta * delta_w_h
self.b_h -= self.eta * delta_b_h
delta_w_out = (grad_w_out + self.l2*self.w_out)
delta_b_out = grad_b_out # bias is not regularized
self.w_out -= self.eta * delta_w_out
self.b_out -= self.eta * delta_b_out
#############
# Evaluation
#############
# Evaluation after each epoch during training
z_h, a_h, z_out, a_out = self._forward(x_train)
cost = self._mse(y_train_enc,
a_out)
y_train_pred = self.predict(x_train)
#y_valid_pred = self.predict(X_valid)
train_acc = ((np.sum(y_train ==
y_train_pred)).astype(np.float) /
x_train.shape[0])
#valid_acc = ((np.sum(y_valid ==
# y_valid_pred)).astype(np.float) /
# X_valid.shape[0])
sys.stderr.write('\r%0*d/%d | Cost: %.2f '
'| Train Acc.: %.2f '
%
(epoch_strlen, i+1, self.epochs,
cost,
train_acc*100))
sys.stderr.flush()
self.eval_['cost'].append(cost)
self.eval_['train_acc'].append(train_acc)
return self
nn = NeuralNetMLP(n_hidden=2,
l2=0.0006,
epochs=4800,
eta=0.0006,
minibatch_size=1,
shuffle=True,
seed=1)
nn.fit(x_train, y_train)
y_test_pred = nn.predict(x_test)
y_test=np.array(y_test)
test_predicted=np.array(y_test_pred)
import matplotlib.pyplot as plt
plt.figure(figsize=(7,4))
plt.plot(y_test,label="Actual returns")
plt.plot(test_predicted,label="predicted returns")
plt.grid(True)
plt.title("Actual returns values vs Predicted returns values",color="blue")
plt.xlabel("Index")
plt.ylabel("Closing Prices")
plt.legend(loc=0)
plt.show()
binary_predicted_test=[]
for t in range(0,len(test_predicted)):
if test_predicted[t-1]>=test_predicted[t]:
binary_predicted_test.append(0)
else:
binary_predicted_test.append(1)
print(binary_predicted_test,len(binary_predicted_test))
print("Predicted data:")
#converting data to binary 0 or 1
binary_test=[]
for t in range(0,len(y_test)):
if y_test[t-1]>=y_test[t]:
binary_test.append(0)
else:
binary_test.append(1)
print(binary_test,len(binary_test))
len(binary_test),len(binary_predicted_test),type(binary_test)
#we count the number of correct predictions, if i-j==0 then we predicted the direction correctly
def counter(x,y):
count=0
for (i,j) in zip(x,y):
if i-j==0:
count=count+1
print ("The number of correct direction predictions is:",count,"out of:",len(x),"data points")
counter(binary_test,binary_predicted_test)
#accuracy describes: overall, how often the classifier is correct
from sklearn.metrics import accuracy_score,confusion_matrix,classification_report
print("Accuracy for testing data:")
def Confusion_matrix(n):
print(accuracy_score(binary_test[0:n],binary_predicted_test[0:n])*100)
confusion=confusion_matrix(binary_test[0:n],binary_predicted_test[0:n])
print(confusion)
#show confusion matrix
plt.matshow(confusion)
plt.title('Confusion matrix')
plt.colorbar()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
Confusion_matrix(len(binary_test))
###Output
_____no_output_____
###Markdown
split 2 Train:700 Test: 200
###Code
returns=returns.dropna()
x_train=returns[0:700]
x_test=returns[700:900]
y_train=returns[0:700]
y_test=returns[700:900]
x_train=np.reshape(x_train,[len(y_train),1])
x_train.shape
x_test=np.reshape(x_test,[len(y_test),1])
x_test.shape
scaler = StandardScaler()
scaler.fit(x_train)
x_train=scaler.transform(x_train)
x_test=scaler.transform(x_test)
import math
import numpy as np
import sys
class NeuralNetMLP(object):
""" Feedforward neural network / Multi-layer perceptron classifier.
Parameters
------------
n_hidden : int (default: 30)
l2 : float (default: 0.) Lambda value for L2-regularization.
No regularization if l2=0. (default)
epochs : int (default: 100) Number of passes over the training set.
eta : float (default: 0.001) Learning rate.
shuffle : bool (default: True) Shuffles training data every epoch
if True to prevent circles.
minibatch_size : int (default: 1) Number of training samples per minibatch.
seed : int (default: None) Random seed for initializing weights and shuffling.
Attributes
-----------
eval_ : dict
Dictionary collecting the cost, training accuracy,
and validation accuracy for each epoch during training.
"""
def __init__(self, n_hidden=2,
l2=0.0006, epochs=3000, eta=0.0006,
shuffle=True, minibatch_size=1, seed=None):
self.random = np.random.RandomState(seed)
self.n_hidden = n_hidden
self.l2 = l2
self.epochs = epochs
self.eta = eta
self.shuffle = shuffle
self.minibatch_size = minibatch_size
def _onehot(self, y):
"""Encode labels into one-hot representation
Parameters
------------
y : array, shape = [n_samples]
Target values.
Returns
-----------
onehot : array, shape = (n_samples, n_labels)
"""
onehot = np.zeros((1, y.shape[0]))
for idx, val in enumerate(y.astype(float)):
onehot[-1, idx] = val
# -1 indexes the (only) last row; idx is the sample index and val the target value of y
#print(onehot,y_train)
return onehot.T
def _tangent(self,z):
"""
Compute the tangent function
"""
#return (np.sinh(z)/np.cosh(z))
return np.tanh(z)
def _forward(self, X):
"""Compute forward propagation step"""
# step 1: net input of hidden layer
# [n_samples, n_features] dot [n_features, n_hidden]
# -> [n_samples, n_hidden]
z_h = np.dot(X, self.w_h) + self.b_h
# step 2: activation of hidden layer
a_h = self._tangent(z_h)
# step 3: net input of output layer
# [n_samples, n_hidden] dot [n_hidden, n_classlabels]
# -> [n_samples, n_classlabels]
z_out = np.dot(a_h, self.w_out) + self.b_out
# step 4: activation output layer
a_out=self._tangent(z_out) # tanh activation at the output layer
return z_h, a_h, z_out, a_out
def _mse(self, y_predict, y_target):
"""Compute cost function.
Parameters
----------
y_enc : array, shape = (n_samples, n_labels) one-hot encoded class labels.
output : array, shape = [n_samples, n_output_units] Activation of the output layer (forward propagation)
Returns
---------
cost : float Regularized cost
"""
L2_term = (self.l2 *(np.sum(self.w_h ** 2.) + np.sum(self.w_out ** 2.)))
cost= np.mean((y_predict-y_target)**2)+L2_term
return cost
def predict(self, X):
"""Predict class labels
Parameters
-----------
X : array, shape = [n_samples, n_features]
Input layer with original features.
Returns:
----------
y_pred : array, shape = [n_samples]
Predicted class labels.
"""
z_h, a_h, z_out, a_out = self._forward(X)
print(np.shape(z_out))
y_pred = np.max(z_out, axis=1)
return y_pred
def fit(self, x_train, y_train):
""" Learn weights from training data.
Parameters
-----------
X_train : array, shape = [n_samples, n_features]
Input layer with original features.
y_train : array, shape = [n_samples]
Target class labels.
X_valid : array, shape = [n_samples, n_features]
Sample features for validation during training
y_valid : array, shape = [n_samples]
Sample labels for validation during training
Returns:
----------
self
"""
n_output = y_train.shape[0] # no. of class
#labels
n_features = x_train.shape[1]
########################
# Weight initialization
########################
# weights for input -> hidden
self.b_h = np.zeros(self.n_hidden)
self.w_h = self.random.normal(loc=0.0, scale=0.1,
size=(n_features,
self.n_hidden))
# weights for hidden -> output
self.b_out = np.zeros(1)
self.w_out = self.random.normal(loc=0.0, scale=0.1,
size=(self.n_hidden,
1))
epoch_strlen = len(str(self.epochs)) # for progr. format.
self.eval_ = {'cost': [], 'train_acc': [], 'valid_acc': []}
y_train_enc = self._onehot(y_train)
print(y_train_enc.shape)
# iterate over training epochs
for i in range(self.epochs):
# iterate over minibatches
indices = np.arange(x_train.shape[0])
if self.shuffle:
self.random.shuffle(indices)
for start_idx in range(0, indices.shape[0]-self.minibatch_size +1, self.minibatch_size):
batch_idx = indices[start_idx:start_idx + self.minibatch_size]
# forward propagation
z_h, a_h, z_out, a_out = self._forward(x_train[batch_idx])
##################
# Backpropagation
##################
# [n_samples, n_classlabels]
sigma_out = a_out -y_train_enc[batch_idx]
# [n_samples, n_hidden]
#sigmoid_derivative_h = a_h * (1. - a_h)
#tangent_derivative= (1-(np.sinh(z_h)/np.cosh(z_h))**2)
tangent_derivative=1-((np.exp(z_h) -np.exp(-z_h))**2/(np.exp(z_h)+np.exp(-z_h))**2)
#tangent_derivative=1-np.tanh(a_h)*np.tanh(a_h)
# [n_samples, n_classlabels] dot [n_classlabels,
# n_hidden]
# -> [n_samples, n_hidden]
#sigma_h = (np.dot(sigma_out, self.w_out.T) *
#sigmoid_derivative_h)
sigma_h = (np.dot(sigma_out, self.w_out.T) *
tangent_derivative)
#sigma_h = (np.dot(sigma_out, self.w_out.T))
# [n_features, n_samples] dot [n_samples,
# n_hidden]
# -> [n_features, n_hidden]
grad_w_h = np.dot(x_train[batch_idx].T, sigma_h)
grad_b_h = np.sum(sigma_h, axis=0)
# [n_hidden, n_samples] dot [n_samples,
# n_classlabels]
# -> [n_hidden, n_classlabels]
grad_w_out = np.dot(a_h.T, sigma_out)
grad_b_out = np.sum(sigma_out, axis=0)
# Regularization and weight updates
delta_w_h = (grad_w_h + self.l2*self.w_h)
delta_b_h = grad_b_h # bias is not regularized
self.w_h -= self.eta * delta_w_h
self.b_h -= self.eta * delta_b_h
delta_w_out = (grad_w_out + self.l2*self.w_out)
delta_b_out = grad_b_out # bias is not regularized
self.w_out -= self.eta * delta_w_out
self.b_out -= self.eta * delta_b_out
#############
# Evaluation
#############
# Evaluation after each epoch during training
z_h, a_h, z_out, a_out = self._forward(x_train)
cost = self._mse(y_train_enc,
a_out)
y_train_pred = self.predict(x_train)
train_acc = ((np.sum(y_train ==
y_train_pred)).astype(np.float) /
x_train.shape[0])
sys.stderr.write('\r%0*d/%d | Cost: %.2f '
'| Train Acc.: %.2f '
%
(epoch_strlen, i+1, self.epochs,
cost,
train_acc*100))
sys.stderr.flush()
self.eval_['cost'].append(cost)
self.eval_['train_acc'].append(train_acc)
return self
nn = NeuralNetMLP(n_hidden=2,
l2=0.0006,
epochs=9800,
eta=0.0003,
minibatch_size=1,
shuffle=True,
seed=1)
nn.fit(x_train, y_train)
y_test_pred = nn.predict(x_test)
y_test=np.array(y_test)
test_predicted=np.array(y_test_pred)
import matplotlib.pyplot as plt
plt.figure(figsize=(7,4))
plt.plot(y_test,label="Actual returns")
plt.plot(test_predicted,label="predicted returns")
plt.grid(True)
plt.title("Actual returns values vs Predicted returns values",color="blue")
plt.xlabel("Index")
plt.ylabel("Closing Prices")
plt.legend(loc=0)
plt.show()
binary_predicted_test=[]
for t in range(0,len(test_predicted)):
if test_predicted[t-1]>=test_predicted[t]:
binary_predicted_test.append(0)
else:
binary_predicted_test.append(1)
print(binary_predicted_test,len(binary_predicted_test))
print("Predicted data:")
#converting data to binary 0 or 1
binary_test=[]
for t in range(0,len(y_test)):
if y_test[t-1]>=y_test[t]:
binary_test.append(0)
else:
binary_test.append(1)
print(binary_test,len(binary_test))
len(binary_test),len(binary_predicted_test),type(binary_test)
#we count the number of correct predictions, if i-j==0 then we predicted the direction correctly
def counter(x,y):
count=0
for (i,j) in zip(x,y):
if i-j==0:
count=count+1
print ("The number of correct direction predictions is:",count,"out of:",len(x),"data points")
counter(binary_test,binary_predicted_test)
#accuracy describes: overall, how often the classifier is correct
from sklearn.metrics import accuracy_score,confusion_matrix,classification_report
print("Accuracy for testing data:")
def Confusion_matrix(n):
print(accuracy_score(binary_test[0:n],binary_predicted_test[0:n])*100)
confusion=confusion_matrix(binary_test[0:n],binary_predicted_test[0:n])
print(confusion)
#show confusion matrix
plt.matshow(confusion)
plt.title('Confusion matrix')
plt.colorbar()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
Confusion_matrix(len(binary_test))
###Output
_____no_output_____
###Markdown
Nedbank group
###Code
#load the data
filename = 'ned.csv'
nedbankTS = pd.read_csv(filename,na_filter=True,index_col="Date").dropna()
nedbankTS.head(5)
#The log-returns
returns=np.log((nedbankTS["Close"])/(nedbankTS["Close"].shift()))
returns.head(5)
returns=returns.dropna()
plt.figure(figsize=(15,9))
returns.plot(label="Log-returns",color="red")
plt.grid(True)
plt.title("Log returns of Netcare group limited",color="black")
plt.xlabel("Index")
plt.ylabel("log returns")
plt.legend(loc=0)
plt.show()
###Output
_____no_output_____
###Markdown
split 1 Train:500 Test: 200
###Code
returns=returns.dropna()
x_train=returns[0:500]
x_test=returns[500:700]
y_train=returns[0:500]
y_test=returns[500:700]
x_train=np.reshape(x_train,[len(y_train),1])
x_train.shape
x_test=np.reshape(x_test,[len(y_test),1])
x_test.shape
scaler = StandardScaler()
scaler.fit(x_train)
x_train=scaler.transform(x_train)
x_test=scaler.transform(x_test)
import math
import numpy as np
import sys
class NeuralNetMLP(object):
""" Feedforward neural network / Multi-layer perceptron classifier.
Parameters
------------
n_hidden : int (default: 30)
l2 : float (default: 0.) Lambda value for L2-regularization.
No regularization if l2=0. (default)
epochs : int (default: 100) Number of passes over the training set.
eta : float (default: 0.001) Learning rate.
shuffle : bool (default: True) Shuffles training data every epoch
        if True to prevent cycles.
minibatch_size : int (default: 1) Number of training samples per minibatch.
seed : int (default: None) Random seed for initializing weights and shuffling.
Attributes
-----------
eval_ : dict
Dictionary collecting the cost, training accuracy,
and validation accuracy for each epoch during training.
"""
def __init__(self, n_hidden=2,
l2=0.0006, epochs=3000, eta=0.0006,
shuffle=True, minibatch_size=1, seed=None):
self.random = np.random.RandomState(seed)
self.n_hidden = n_hidden
self.l2 = l2
self.epochs = epochs
self.eta = eta
self.shuffle = shuffle
self.minibatch_size = minibatch_size
def _onehot(self, y):
"""Encode labels into one-hot representation
Parameters
------------
y : array, shape = [n_samples]
Target values.
Returns
-----------
onehot : array, shape = (n_samples, n_labels)
"""
onehot = np.zeros((1, y.shape[0]))
for idx, val in enumerate(y.astype(float)):
onehot[-1, idx] = val
#-1 the last row and idx(index) and val the value of the y
#print(onehot,y_train)
return onehot.T
def _tangent(self,z):
"""
Compute the tangent function
"""
#return (np.sinh(z)/np.cosh(z))
return np.tanh(z)
def _forward(self, X):
"""Compute forward propagation step"""
# step 1: net input of hidden layer
# [n_samples, n_features] dot [n_features, n_hidden]
# -> [n_samples, n_hidden]
z_h = np.dot(X, self.w_h) + self.b_h
# step 2: activation of hidden layer
a_h = self._tangent(z_h)
# step 3: net input of output layer
# [n_samples, n_hidden] dot [n_hidden, n_classlabels]
# -> [n_samples, n_classlabels]
z_out = np.dot(a_h, self.w_out) + self.b_out
# step 4: activation output layer
        a_out=self._tangent(z_out) # tanh activation at the output layer
return z_h, a_h, z_out, a_out
def _mse(self, y_predict, y_target):
"""Compute cost function.
Parameters
----------
y_enc : array, shape = (n_samples, n_labels) one-hot encoded class labels.
output : array, shape = [n_samples, n_output_units] Activation of the output layer (forward propagation)
Returns
---------
cost : float Regularized cost
"""
L2_term = (self.l2 *(np.sum(self.w_h ** 2.) + np.sum(self.w_out ** 2.)))
cost= np.mean((y_predict-y_target)**2)+L2_term
#cost=((((y_predict-y_target)**2).sum())/2.0)+L2_term
return cost
def predict(self, X):
"""Predict class labels
Parameters
-----------
X : array, shape = [n_samples, n_features]
Input layer with original features.
Returns:
----------
y_pred : array, shape = [n_samples]
Predicted class labels.
"""
z_h, a_h, z_out, a_out = self._forward(X)
print(np.shape(z_out))
y_pred = np.max(z_out, axis=1)
return y_pred
def fit(self, x_train, y_train):
""" Learn weights from training data.
Parameters
-----------
X_train : array, shape = [n_samples, n_features]
Input layer with original features.
y_train : array, shape = [n_samples]
Target class labels.
X_valid : array, shape = [n_samples, n_features]
Sample features for validation during training
y_valid : array, shape = [n_samples]
Sample labels for validation during training
Returns:
----------
self
"""
n_output = y_train.shape[0] # no. of class
#labels
n_features = x_train.shape[1]
########################
# Weight initialization
########################
# weights for input -> hidden
self.b_h = np.zeros(self.n_hidden)
self.w_h = self.random.normal(loc=0.0, scale=0.1,
size=(n_features,
self.n_hidden))
# weights for hidden -> output
self.b_out = np.zeros(1)
self.w_out = self.random.normal(loc=0.0, scale=0.1,
size=(self.n_hidden,
1))
epoch_strlen = len(str(self.epochs)) # for progr. format.
self.eval_ = {'cost': [], 'train_acc': [], 'valid_acc': []}
y_train_enc = self._onehot(y_train)
print(y_train_enc.shape)
# iterate over training epochs
for i in range(self.epochs):
# iterate over minibatches
indices = np.arange(x_train.shape[0])
if self.shuffle:
self.random.shuffle(indices)
for start_idx in range(0, indices.shape[0]-self.minibatch_size +1, self.minibatch_size):
batch_idx = indices[start_idx:start_idx + self.minibatch_size]
# forward propagation
z_h, a_h, z_out, a_out = self._forward(x_train[batch_idx])
##################
# Backpropagation
##################
# [n_samples, n_classlabels]
sigma_out = a_out -y_train_enc[batch_idx]
# [n_samples, n_hidden]
#sigmoid_derivative_h = a_h * (1. - a_h)
#tangent_derivative= (1-(np.sinh(z_h)/np.cosh(z_h))**2)
tangent_derivative=1-((np.exp(z_h) -np.exp(-z_h))**2/(np.exp(z_h)+np.exp(-z_h))**2)
#tangent_derivative=1-np.tanh(a_h)*np.tanh(a_h)
# [n_samples, n_classlabels] dot [n_classlabels,
# n_hidden]
# -> [n_samples, n_hidden]
#sigma_h = (np.dot(sigma_out, self.w_out.T) *
#sigmoid_derivative_h)
sigma_h = (np.dot(sigma_out, self.w_out.T) *
tangent_derivative)
#sigma_h = (np.dot(sigma_out, self.w_out.T))
# [n_features, n_samples] dot [n_samples,
# n_hidden]
# -> [n_features, n_hidden]
grad_w_h = np.dot(x_train[batch_idx].T, sigma_h)
grad_b_h = np.sum(sigma_h, axis=0)
# [n_hidden, n_samples] dot [n_samples,
# n_classlabels]
# -> [n_hidden, n_classlabels]
grad_w_out = np.dot(a_h.T, sigma_out)
grad_b_out = np.sum(sigma_out, axis=0)
# Regularization and weight updates
delta_w_h = (grad_w_h + self.l2*self.w_h)
delta_b_h = grad_b_h # bias is not regularized
self.w_h -= self.eta * delta_w_h
self.b_h -= self.eta * delta_b_h
delta_w_out = (grad_w_out + self.l2*self.w_out)
delta_b_out = grad_b_out # bias is not regularized
self.w_out -= self.eta * delta_w_out
self.b_out -= self.eta * delta_b_out
#############
# Evaluation
#############
# Evaluation after each epoch during training
z_h, a_h, z_out, a_out = self._forward(x_train)
cost = self._mse(y_train_enc,
a_out)
y_train_pred = self.predict(x_train)
train_acc = ((np.sum(y_train ==
y_train_pred)).astype(np.float) /
x_train.shape[0])
sys.stderr.write('\r%0*d/%d | Cost: %.2f '
'| Train Acc.: %.2f '
%
(epoch_strlen, i+1, self.epochs,
cost,
train_acc*100))
sys.stderr.flush()
self.eval_['cost'].append(cost)
self.eval_['train_acc'].append(train_acc)
return self
nn = NeuralNetMLP(n_hidden=2,
l2=0.0006,
epochs=5400,
eta=0.0006,
minibatch_size=1,
shuffle=True,
seed=1)
nn.fit(x_train, y_train)
y_test_pred = nn.predict(x_test)
y_test=np.array(y_test)
test_predicted=np.array(y_test_pred)
import matplotlib.pyplot as plt
plt.figure(figsize=(7,4))
plt.plot(y_test,label="Actual returns")
plt.plot(test_predicted,label="predicted returns")
plt.grid(True)
plt.title("Actual returns values vs Predicted returns values",color="blue")
plt.xlabel("Index")
plt.ylabel("Closing Prices")
plt.legend(loc=0)
plt.show()
binary_predicted_test=[]
for t in range(0,len(test_predicted)):
if test_predicted[t-1]>=test_predicted[t]:
binary_predicted_test.append(0)
else:
binary_predicted_test.append(1)
print(binary_predicted_test,len(binary_predicted_test))
print("Predicted data:")
#converting data to binary 0 or 1
binary_test=[]
for t in range(0,len(y_test)):
if y_test[t-1]>=y_test[t]:
binary_test.append(0)
else:
binary_test.append(1)
print(binary_test,len(binary_test))
len(binary_test),len(binary_predicted_test),type(binary_test)
#we count the number of correct predictions, if i-j==0 then we predicted the direction correctly
def counter(x,y):
count=0
for (i,j) in zip(x,y):
if i-j==0:
count=count+1
print ("The number of correct direction predictions is:",count,"out of:",len(x),"data points")
counter(binary_test,binary_predicted_test)
#accuracy describes: overall, how often the classifier is correct
from sklearn.metrics import accuracy_score,confusion_matrix,classification_report
print("Accuracy for testing data:")
def Confusion_matrix(n):
print(accuracy_score(binary_test[0:n],binary_predicted_test[0:n])*100)
confusion=confusion_matrix(binary_test[0:n],binary_predicted_test[0:n])
print(confusion)
#show confusion matrix
plt.matshow(confusion)
plt.title('Confusion matrix')
plt.colorbar()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
Confusion_matrix(len(binary_test))
###Output
_____no_output_____
###Markdown
Split 2 Train: 700 Test: 200
###Code
returns=returns.dropna()
x_train=returns[0:700]
x_test=returns[700:900]
y_train=returns[0:700]
y_test=returns[700:900]
x_train=np.reshape(x_train,[len(y_train),1])
x_train.shape
x_test=np.reshape(x_test,[len(y_test),1])
x_test.shape
scaler = StandardScaler()
scaler.fit(x_train)
x_train=scaler.transform(x_train)
x_test=scaler.transform(x_test)
import math
import numpy as np
import sys
class NeuralNetMLP(object):
""" Feedforward neural network / Multi-layer perceptron classifier.
Parameters
------------
n_hidden : int (default: 30)
l2 : float (default: 0.) Lambda value for L2-regularization.
No regularization if l2=0. (default)
epochs : int (default: 100) Number of passes over the training set.
eta : float (default: 0.001) Learning rate.
shuffle : bool (default: True) Shuffles training data every epoch
        if True to prevent cycles.
minibatch_size : int (default: 1) Number of training samples per minibatch.
seed : int (default: None) Random seed for initializing weights and shuffling.
Attributes
-----------
eval_ : dict
Dictionary collecting the cost, training accuracy,
and validation accuracy for each epoch during training.
"""
def __init__(self, n_hidden=30,
l2=0., epochs=100, eta=0.001,
shuffle=True, minibatch_size=1, seed=None):
self.random = np.random.RandomState(seed)
self.n_hidden = n_hidden
self.l2 = l2
self.epochs = epochs
self.eta = eta
self.shuffle = shuffle
self.minibatch_size = minibatch_size
def _onehot(self, y):
"""Encode labels into one-hot representation
Parameters
------------
y : array, shape = [n_samples]
Target values.
Returns
-----------
onehot : array, shape = (n_samples, n_labels)
"""
onehot = np.zeros((1, y.shape[0]))
for idx, val in enumerate(y.astype(float)):
onehot[-1, idx] = val
return onehot.T
def _tangent(self,z):
"""
Compute the tangent function
"""
#return (np.sinh(z)/np.cosh(z))
return np.tanh(z)
def _forward(self, X):
"""Compute forward propagation step"""
# step 1: net input of hidden layer
# [n_samples, n_features] dot [n_features, n_hidden]
# -> [n_samples, n_hidden]
z_h = np.dot(X, self.w_h) + self.b_h
# step 2: activation of hidden layer
a_h = self._tangent(z_h)
# step 3: net input of output layer
# [n_samples, n_hidden] dot [n_hidden, n_classlabels]
# -> [n_samples, n_classlabels]
z_out = np.dot(a_h, self.w_out) + self.b_out
# step 4: activation output layer
        a_out=self._tangent(z_out) # tanh activation at the output layer
return z_h, a_h, z_out, a_out
def _mse(self, y_predict, y_target):
"""Compute cost function.
Parameters
----------
y_enc : array, shape = (n_samples, n_labels) one-hot encoded class labels.
output : array, shape = [n_samples, n_output_units] Activation of the output layer (forward propagation)
Returns
---------
cost : float Regularized cost
"""
L2_term = (self.l2 *(np.sum(self.w_h ** 2.) + np.sum(self.w_out ** 2.)))
cost= np.mean((y_predict-y_target)**2)+L2_term
return cost
def predict(self, X):
"""Predict class labels
Parameters
-----------
X : array, shape = [n_samples, n_features]
Input layer with original features.
Returns:
----------
y_pred : array, shape = [n_samples]
Predicted class labels.
"""
z_h, a_h, z_out, a_out = self._forward(X)
print(np.shape(z_out))
y_pred = np.max(z_out, axis=1)
return y_pred
def fit(self, x_train, y_train):
""" Learn weights from training data.
Parameters
-----------
X_train : array, shape = [n_samples, n_features]
Input layer with original features.
y_train : array, shape = [n_samples]
Target class labels.
X_valid : array, shape = [n_samples, n_features]
Sample features for validation during training
y_valid : array, shape = [n_samples]
Sample labels for validation during training
Returns:
----------
self
"""
n_output = y_train.shape[0] # no. of class
#labels
n_features = x_train.shape[1]
########################
# Weight initialization
########################
# weights for input -> hidden
self.b_h = np.zeros(self.n_hidden)
self.w_h = self.random.normal(loc=0.0, scale=0.1,
size=(n_features,
self.n_hidden))
# weights for hidden -> output
self.b_out = np.zeros(1)
self.w_out = self.random.normal(loc=0.0, scale=0.1,
size=(self.n_hidden,
1))
epoch_strlen = len(str(self.epochs)) # for progr. format.
self.eval_ = {'cost': [], 'train_acc': [], 'valid_acc': []}
y_train_enc = self._onehot(y_train)
print(y_train_enc.shape)
# iterate over training epochs
for i in range(self.epochs):
# iterate over minibatches
indices = np.arange(x_train.shape[0])
if self.shuffle:
self.random.shuffle(indices)
for start_idx in range(0, indices.shape[0]-self.minibatch_size +1, self.minibatch_size):
batch_idx = indices[start_idx:start_idx + self.minibatch_size]
# forward propagation
z_h, a_h, z_out, a_out = self._forward(x_train[batch_idx])
##################
# Backpropagation
##################
# [n_samples, n_classlabels]
sigma_out = a_out -y_train_enc[batch_idx]
# [n_samples, n_hidden]
#sigmoid_derivative_h = a_h * (1. - a_h)
#tangent_derivative= (1-(np.sinh(z_h)/np.cosh(z_h))**2)
tangent_derivative=1-((np.exp(z_h) -np.exp(-z_h))**2/(np.exp(z_h)+np.exp(-z_h))**2)
#tangent_derivative=1-np.tanh(a_h)*np.tanh(a_h)
# [n_samples, n_classlabels] dot [n_classlabels,
# n_hidden]
# -> [n_samples, n_hidden]
#sigma_h = (np.dot(sigma_out, self.w_out.T) *
#sigmoid_derivative_h)
sigma_h = (np.dot(sigma_out, self.w_out.T) *
tangent_derivative)
#sigma_h = (np.dot(sigma_out, self.w_out.T))
# [n_features, n_samples] dot [n_samples,
# n_hidden]
# -> [n_features, n_hidden]
grad_w_h = np.dot(x_train[batch_idx].T, sigma_h)
grad_b_h = np.sum(sigma_h, axis=0)
# [n_hidden, n_samples] dot [n_samples,
# n_classlabels]
# -> [n_hidden, n_classlabels]
grad_w_out = np.dot(a_h.T, sigma_out)
grad_b_out = np.sum(sigma_out, axis=0)
# Regularization and weight updates
delta_w_h = (grad_w_h + self.l2*self.w_h)
delta_b_h = grad_b_h # bias is not regularized
self.w_h -= self.eta * delta_w_h
self.b_h -= self.eta * delta_b_h
delta_w_out = (grad_w_out + self.l2*self.w_out)
delta_b_out = grad_b_out # bias is not regularized
self.w_out -= self.eta * delta_w_out
self.b_out -= self.eta * delta_b_out
#############
# Evaluation
#############
# Evaluation after each epoch during training
z_h, a_h, z_out, a_out = self._forward(x_train)
cost = self._mse(y_train_enc,
a_out)
y_train_pred = self.predict(x_train)
#y_valid_pred = self.predict(X_valid)
train_acc = ((np.sum(y_train ==
y_train_pred)).astype(np.float) /
x_train.shape[0])
#valid_acc = ((np.sum(y_valid ==
# y_valid_pred)).astype(np.float) /
# X_valid.shape[0])
sys.stderr.write('\r%0*d/%d | Cost: %.2f '
'| Train Acc.: %.2f '
%
(epoch_strlen, i+1, self.epochs,
cost,
train_acc*100))
sys.stderr.flush()
self.eval_['cost'].append(cost)
self.eval_['train_acc'].append(train_acc)
#self.eval_['valid_acc'].append(valid_acc)
return self
nn = NeuralNetMLP(n_hidden=2,
l2=0.0006,
epochs=4900,
eta=0.0006,
minibatch_size=1,
shuffle=True,
seed=1)
nn.fit(x_train, y_train)
y_test_pred = nn.predict(x_test)
y_test=np.array(y_test)
test_predicted=np.array(y_test_pred)
import matplotlib.pyplot as plt
plt.figure(figsize=(7,4))
plt.plot(y_test,label="Actual returns")
plt.plot(test_predicted,label="predicted returns")
plt.grid(True)
plt.title("Actual returns values vs Predicted returns values",color="blue")
plt.xlabel("Index")
plt.ylabel("Closing Prices")
plt.legend(loc=0)
plt.show()
binary_predicted_test=[]
for t in range(0,len(test_predicted)):
if test_predicted[t-1]>=test_predicted[t]:
binary_predicted_test.append(0)
else:
binary_predicted_test.append(1)
print(binary_predicted_test,len(binary_predicted_test))
print("Predicted data:")
#converting data to binary 0 or 1
binary_test=[]
for t in range(0,len(y_test)):
if y_test[t-1]>=y_test[t]:
binary_test.append(0)
else:
binary_test.append(1)
print(binary_test,len(binary_test))
len(binary_test),len(binary_predicted_test),type(binary_test)
#we count the number of correct predictions, if i-j==0 then we predicted the direction correctly
def counter(x,y):
count=0
for (i,j) in zip(x,y):
if i-j==0:
count=count+1
print ("The number of correct direction predictions is:",count,"out of:",len(x),"data points")
counter(binary_test,binary_predicted_test)
#accuracy describes: overall, how often the classifier is correct
from sklearn.metrics import accuracy_score,confusion_matrix,classification_report
print("Accuracy for testing data:")
def Confusion_matrix(n):
print(accuracy_score(binary_test[0:n],binary_predicted_test[0:n])*100)
confusion=confusion_matrix(binary_test[0:n],binary_predicted_test[0:n])
print(confusion)
#show confusion matrix
plt.matshow(confusion)
plt.title('Confusion matrix')
plt.colorbar()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
Confusion_matrix(len(binary_test))
###Output
_____no_output_____
###Markdown
Sanlam Group Ltd
###Code
#load the data
filename = 'slm.csv'
sanlamTS = pd.read_csv(filename,na_filter=True,index_col="Date").dropna()
sanlamTS.head(5)
#The log-returns
returns=np.log((sanlamTS["Close"])/(sanlamTS["Close"].shift()))
returns.head(5)
returns=returns.dropna()
x_train=returns[0:500]
x_test=returns[500:700]
y_train=returns[0:500]
y_test=returns[500:700]
x_train=np.reshape(x_train,[len(y_train),1])
x_train.shape
x_test=np.reshape(x_test,[len(y_test),1])
x_test.shape
scaler = StandardScaler()
scaler.fit(x_train)
x_train=scaler.transform(x_train)
x_test=scaler.transform(x_test)
import math
import numpy as np
import sys
class NeuralNetMLP(object):
""" Feedforward neural network / Multi-layer perceptron classifier.
Parameters
------------
n_hidden : int (default: 30)
l2 : float (default: 0.) Lambda value for L2-regularization.
No regularization if l2=0. (default)
epochs : int (default: 100) Number of passes over the training set.
eta : float (default: 0.001) Learning rate.
shuffle : bool (default: True) Shuffles training data every epoch
        if True to prevent cycles.
minibatch_size : int (default: 1) Number of training samples per minibatch.
seed : int (default: None) Random seed for initializing weights and shuffling.
Attributes
-----------
eval_ : dict
Dictionary collecting the cost, training accuracy,
and validation accuracy for each epoch during training.
"""
def __init__(self, n_hidden=2,
l2=0.0006, epochs=3000, eta=0.0006,
shuffle=True, minibatch_size=1, seed=None):
self.random = np.random.RandomState(seed)
self.n_hidden = n_hidden
self.l2 = l2
self.epochs = epochs
self.eta = eta
self.shuffle = shuffle
self.minibatch_size = minibatch_size
def _onehot(self, y):
"""Encode labels into one-hot representation
Parameters
------------
y : array, shape = [n_samples]
Target values.
Returns
-----------
onehot : array, shape = (n_samples, n_labels)
"""
onehot = np.zeros((1, y.shape[0]))
for idx, val in enumerate(y.astype(float)):
onehot[-1, idx] = val
#-1 the last row and idx(index) and val the value of the y
#print(onehot,y_train)
return onehot.T
def _sigmoid(self, z):
"""Compute logistic function (sigmoid)"""
return 1. / (1. + np.exp(-(z)))
def _tangent(self,z):
"""
Compute the tangent function
"""
#return (np.sinh(z)/np.cosh(z))
return np.tanh(z)
def _forward(self, X):
"""Compute forward propagation step"""
# step 1: net input of hidden layer
# [n_samples, n_features] dot [n_features, n_hidden]
# -> [n_samples, n_hidden]
z_h = np.dot(X, self.w_h) + self.b_h
# step 2: activation of hidden layer
a_h = self._tangent(z_h)
# step 3: net input of output layer
# [n_samples, n_hidden] dot [n_hidden, n_classlabels]
# -> [n_samples, n_classlabels]
z_out = np.dot(a_h, self.w_out) + self.b_out
# step 4: activation output layer
        a_out=self._tangent(z_out) # tanh activation at the output layer
return z_h, a_h, z_out, a_out
def _mse(self, y_predict, y_target):
"""Compute cost function.
Parameters
----------
y_enc : array, shape = (n_samples, n_labels) one-hot encoded class labels.
output : array, shape = [n_samples, n_output_units] Activation of the output layer (forward propagation)
Returns
---------
cost : float Regularized cost
"""
L2_term = (self.l2 *(np.sum(self.w_h ** 2.) + np.sum(self.w_out ** 2.)))
cost= np.mean((y_predict-y_target)**2)+L2_term
return cost
def predict(self, X):
"""Predict class labels
Parameters
-----------
X : array, shape = [n_samples, n_features]
Input layer with original features.
Returns:
----------
y_pred : array, shape = [n_samples]
Predicted class labels.
"""
z_h, a_h, z_out, a_out = self._forward(X)
print(np.shape(z_out))
y_pred = np.max(z_out, axis=1)
return y_pred
def fit(self, x_train, y_train):
""" Learn weights from training data.
Parameters
-----------
X_train : array, shape = [n_samples, n_features]
Input layer with original features.
y_train : array, shape = [n_samples]
Target class labels.
X_valid : array, shape = [n_samples, n_features]
Sample features for validation during training
y_valid : array, shape = [n_samples]
Sample labels for validation during training
Returns:
----------
self
"""
n_output = y_train.shape[0] # no. of class
#labels
n_features = x_train.shape[1]
########################
# Weight initialization
########################
# weights for input -> hidden
self.b_h = np.zeros(self.n_hidden)
self.w_h = self.random.normal(loc=0.0, scale=0.1,
size=(n_features,
self.n_hidden))
# weights for hidden -> output
self.b_out = np.zeros(1)
self.w_out = self.random.normal(loc=0.0, scale=0.1,
size=(self.n_hidden,
1))
epoch_strlen = len(str(self.epochs)) # for progr. format.
self.eval_ = {'cost': [], 'train_acc': [], 'valid_acc': []}
y_train_enc = self._onehot(y_train)
print(y_train_enc.shape)
# iterate over training epochs
for i in range(self.epochs):
# iterate over minibatches
indices = np.arange(x_train.shape[0])
if self.shuffle:
self.random.shuffle(indices)
for start_idx in range(0, indices.shape[0]-self.minibatch_size +1, self.minibatch_size):
batch_idx = indices[start_idx:start_idx + self.minibatch_size]
# forward propagation
z_h, a_h, z_out, a_out = self._forward(x_train[batch_idx])
##################
# Backpropagation
##################
# [n_samples, n_classlabels]
sigma_out = a_out -y_train_enc[batch_idx]
# [n_samples, n_hidden]
#sigmoid_derivative_h = a_h * (1. - a_h)
#tangent_derivative= (1-(np.sinh(z_h)/np.cosh(z_h))**2)
tangent_derivative=1-((np.exp(z_h) -np.exp(-z_h))**2/(np.exp(z_h)+np.exp(-z_h))**2)
#tangent_derivative=1-np.tanh(a_h)*np.tanh(a_h)
# [n_samples, n_classlabels] dot [n_classlabels,
# n_hidden]
# -> [n_samples, n_hidden]
#sigma_h = (np.dot(sigma_out, self.w_out.T) *
#sigmoid_derivative_h)
sigma_h = (np.dot(sigma_out, self.w_out.T) *
tangent_derivative)
#sigma_h = (np.dot(sigma_out, self.w_out.T))
# [n_features, n_samples] dot [n_samples,
# n_hidden]
# -> [n_features, n_hidden]
grad_w_h = np.dot(x_train[batch_idx].T, sigma_h)
grad_b_h = np.sum(sigma_h, axis=0)
# [n_hidden, n_samples] dot [n_samples,
# n_classlabels]
# -> [n_hidden, n_classlabels]
grad_w_out = np.dot(a_h.T, sigma_out)
grad_b_out = np.sum(sigma_out, axis=0)
# Regularization and weight updates
delta_w_h = (grad_w_h + self.l2*self.w_h)
delta_b_h = grad_b_h # bias is not regularized
self.w_h -= self.eta * delta_w_h
self.b_h -= self.eta * delta_b_h
delta_w_out = (grad_w_out + self.l2*self.w_out)
delta_b_out = grad_b_out # bias is not regularized
self.w_out -= self.eta * delta_w_out
self.b_out -= self.eta * delta_b_out
#############
# Evaluation
#############
# Evaluation after each epoch during training
z_h, a_h, z_out, a_out = self._forward(x_train)
cost = self._mse(y_train_enc,
a_out)
y_train_pred = self.predict(x_train)
train_acc = ((np.sum(y_train ==
y_train_pred)).astype(np.float) /
x_train.shape[0])
sys.stderr.write('\r%0*d/%d | Cost: %.2f '
'| Train Acc.: %.2f '
%
(epoch_strlen, i+1, self.epochs,
cost,
train_acc*100))
sys.stderr.flush()
self.eval_['cost'].append(cost)
self.eval_['train_acc'].append(train_acc)
return self
nn = NeuralNetMLP(n_hidden=2,
l2=0.00054,
epochs=3930,
eta=0.0007,
minibatch_size=1,
shuffle=True,
seed=1)
nn.fit(x_train, y_train)
y_test_pred = nn.predict(x_test)
y_test=np.array(y_test)
test_predicted=np.array(y_test_pred)
import matplotlib.pyplot as plt
plt.figure(figsize=(7,4))
plt.plot(y_test,label="Actual returns")
plt.plot(test_predicted,label="predicted returns")
plt.grid(True)
plt.title("Actual returns values vs Predicted returns values",color="blue")
plt.xlabel("Index")
plt.ylabel("Closing Prices")
plt.legend(loc=0)
plt.show()
binary_predicted_test=[]
for t in range(0,len(test_predicted)):
if test_predicted[t-1]>=test_predicted[t]:
binary_predicted_test.append(0)
else:
binary_predicted_test.append(1)
print(binary_predicted_test,len(binary_predicted_test))
print("Predicted data:")
#converting data to binary 0 or 1
binary_test=[]
for t in range(0,len(y_test)):
if y_test[t-1]>=y_test[t]:
binary_test.append(0)
else:
binary_test.append(1)
print(binary_test,len(binary_test))
len(binary_test),len(binary_predicted_test),type(binary_test)
#we count the number of correct predictions, if i-j==0 then we predicted the direction correctly
def counter(x,y):
count=0
for (i,j) in zip(x,y):
if i-j==0:
count=count+1
print ("The number of correct direction predictions is:",count,"out of:",len(x),"data points")
counter(binary_test,binary_predicted_test)
#accuracy describes: overall, how often the classifier is correct
from sklearn.metrics import accuracy_score,confusion_matrix,classification_report
print("Accuracy for testing data:")
def Confusion_matrix(n):
print(accuracy_score(binary_test[0:n],binary_predicted_test[0:n])*100)
confusion=confusion_matrix(binary_test[0:n],binary_predicted_test[0:n])
print(confusion)
#show confusion matrix
plt.matshow(confusion)
plt.title('Confusion matrix')
plt.colorbar()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
Confusion_matrix(len(binary_test))
###Output
_____no_output_____
###Markdown
Split 2 Train:700 Test:200
###Code
returns=returns.dropna()
x_train=returns[0:700]
x_test=returns[700:900]
y_train=returns[0:700]
y_test=returns[700:900]
x_train=np.reshape(x_train,[len(y_train),1])
x_train.shape
x_test=np.reshape(x_test,[len(y_test),1])
x_test.shape
scaler = StandardScaler()
scaler.fit(x_train)
x_train=scaler.transform(x_train)
x_test=scaler.transform(x_test)
import math
import numpy as np
import sys
class NeuralNetMLP(object):
""" Feedforward neural network / Multi-layer perceptron classifier.
Parameters
------------
n_hidden : int (default: 30)
l2 : float (default: 0.) Lambda value for L2-regularization.
No regularization if l2=0. (default)
epochs : int (default: 100) Number of passes over the training set.
eta : float (default: 0.001) Learning rate.
shuffle : bool (default: True) Shuffles training data every epoch
        if True to prevent cycles.
minibatch_size : int (default: 1) Number of training samples per minibatch.
seed : int (default: None) Random seed for initializing weights and shuffling.
Attributes
-----------
eval_ : dict
Dictionary collecting the cost, training accuracy,
and validation accuracy for each epoch during training.
"""
#def __init__(self, n_hidden=30,
# l2=0., epochs=100, eta=0.001,
# shuffle=True, minibatch_size=1, seed=None):
def __init__(self, n_hidden=2,
l2=0.0006, epochs=3000, eta=0.0006,
shuffle=True, minibatch_size=1, seed=None):
self.random = np.random.RandomState(seed)
self.n_hidden = n_hidden
self.l2 = l2
self.epochs = epochs
self.eta = eta
self.shuffle = shuffle
self.minibatch_size = minibatch_size
def _onehot(self, y):
"""Encode labels into one-hot representation
Parameters
------------
y : array, shape = [n_samples]
Target values.
Returns
-----------
onehot : array, shape = (n_samples, n_labels)
"""
onehot = np.zeros((1, y.shape[0]))
for idx, val in enumerate(y.astype(float)):
onehot[-1, idx] = val
#-1 the last row and idx(index) and val the value of the y
#print(onehot,y_train)
return onehot.T
def _sigmoid(self, z):
"""Compute logistic function (sigmoid)"""
return 1. / (1. + np.exp(-(z)))
def _tangent(self,z):
"""
Compute the tangent function
"""
#return (np.sinh(z)/np.cosh(z))
return np.tanh(z)
def _forward(self, X):
"""Compute forward propagation step"""
# step 1: net input of hidden layer
# [n_samples, n_features] dot [n_features, n_hidden]
# -> [n_samples, n_hidden]
z_h = np.dot(X, self.w_h) + self.b_h
# step 2: activation of hidden layer
a_h = self._tangent(z_h)
# step 3: net input of output layer
# [n_samples, n_hidden] dot [n_hidden, n_classlabels]
# -> [n_samples, n_classlabels]
z_out = np.dot(a_h, self.w_out) + self.b_out
# step 4: activation output layer
        a_out=self._tangent(z_out) # tanh activation at the output layer
return z_h, a_h, z_out, a_out
def _mse(self, y_predict, y_target):
"""Compute cost function.
Parameters
----------
y_enc : array, shape = (n_samples, n_labels) one-hot encoded class labels.
output : array, shape = [n_samples, n_output_units] Activation of the output layer (forward propagation)
Returns
---------
cost : float Regularized cost
"""
L2_term = (self.l2 *(np.sum(self.w_h ** 2.) + np.sum(self.w_out ** 2.)))
cost= np.mean((y_predict-y_target)**2)+L2_term
#cost=((((y_predict-y_target)**2).sum())/2.0)+L2_term
return cost
def predict(self, X):
"""Predict class labels
Parameters
-----------
X : array, shape = [n_samples, n_features]
Input layer with original features.
Returns:
----------
y_pred : array, shape = [n_samples]
Predicted class labels.
"""
z_h, a_h, z_out, a_out = self._forward(X)
print(np.shape(z_out))
y_pred = np.max(z_out, axis=1)
return y_pred
def fit(self, x_train, y_train):
""" Learn weights from training data.
Parameters
-----------
X_train : array, shape = [n_samples, n_features]
Input layer with original features.
y_train : array, shape = [n_samples]
Target class labels.
X_valid : array, shape = [n_samples, n_features]
Sample features for validation during training
y_valid : array, shape = [n_samples]
Sample labels for validation during training
Returns:
----------
self
"""
n_output = y_train.shape[0] # no. of class
#labels
n_features = x_train.shape[1]
########################
# Weight initialization
########################
# weights for input -> hidden
self.b_h = np.zeros(self.n_hidden)
self.w_h = self.random.normal(loc=0.0, scale=0.1,
size=(n_features,
self.n_hidden))
# weights for hidden -> output
self.b_out = np.zeros(1)
self.w_out = self.random.normal(loc=0.0, scale=0.1,
size=(self.n_hidden,
1))
epoch_strlen = len(str(self.epochs)) # for progr. format.
self.eval_ = {'cost': [], 'train_acc': [], 'valid_acc': []}
y_train_enc = self._onehot(y_train)
print(y_train_enc.shape)
# iterate over training epochs
for i in range(self.epochs):
# iterate over minibatches
indices = np.arange(x_train.shape[0])
if self.shuffle:
self.random.shuffle(indices)
for start_idx in range(0, indices.shape[0]-self.minibatch_size +1, self.minibatch_size):
batch_idx = indices[start_idx:start_idx + self.minibatch_size]
# forward propagation
z_h, a_h, z_out, a_out = self._forward(x_train[batch_idx])
##################
# Backpropagation
##################
# [n_samples, n_classlabels]
sigma_out = a_out -y_train_enc[batch_idx]
# [n_samples, n_hidden]
#sigmoid_derivative_h = a_h * (1. - a_h)
#tangent_derivative= (1-(np.sinh(z_h)/np.cosh(z_h))**2)
tangent_derivative=1-((np.exp(z_h) -np.exp(-z_h))**2/(np.exp(z_h)+np.exp(-z_h))**2)
#tangent_derivative=1-np.tanh(a_h)*np.tanh(a_h)
# [n_samples, n_classlabels] dot [n_classlabels,
# n_hidden]
# -> [n_samples, n_hidden]
#sigma_h = (np.dot(sigma_out, self.w_out.T) *
#sigmoid_derivative_h)
sigma_h = (np.dot(sigma_out, self.w_out.T) *
tangent_derivative)
#sigma_h = (np.dot(sigma_out, self.w_out.T))
# [n_features, n_samples] dot [n_samples,
# n_hidden]
# -> [n_features, n_hidden]
grad_w_h = np.dot(x_train[batch_idx].T, sigma_h)
grad_b_h = np.sum(sigma_h, axis=0)
# [n_hidden, n_samples] dot [n_samples,
# n_classlabels]
# -> [n_hidden, n_classlabels]
grad_w_out = np.dot(a_h.T, sigma_out)
grad_b_out = np.sum(sigma_out, axis=0)
# Regularization and weight updates
delta_w_h = (grad_w_h + self.l2*self.w_h)
delta_b_h = grad_b_h # bias is not regularized
self.w_h -= self.eta * delta_w_h
self.b_h -= self.eta * delta_b_h
delta_w_out = (grad_w_out + self.l2*self.w_out)
delta_b_out = grad_b_out # bias is not regularized
self.w_out -= self.eta * delta_w_out
self.b_out -= self.eta * delta_b_out
#############
# Evaluation
#############
# Evaluation after each epoch during training
z_h, a_h, z_out, a_out = self._forward(x_train)
cost = self._mse(y_train_enc,
a_out)
y_train_pred = self.predict(x_train)
train_acc = ((np.sum(y_train ==
y_train_pred)).astype(np.float) /
x_train.shape[0])
sys.stderr.write('\r%0*d/%d | Cost: %.2f '
'| Train Acc.: %.2f '
%
(epoch_strlen, i+1, self.epochs,
cost,
train_acc*100))
sys.stderr.flush()
self.eval_['cost'].append(cost)
self.eval_['train_acc'].append(train_acc)
return self
nn = NeuralNetMLP(n_hidden=2,
l2=0.00054,
epochs=4600,
eta=0.00056,
minibatch_size=1,
shuffle=True,
seed=1)
nn.fit(x_train, y_train)
y_test_pred = nn.predict(x_test)
y_test=np.array(y_test)
test_predicted=np.array(y_test_pred)
import matplotlib.pyplot as plt
plt.figure(figsize=(7,4))
plt.plot(y_test,label="Actual returns")
plt.plot(test_predicted,label="predicted returns")
plt.grid(True)
plt.title("Actual returns values vs Predicted returns values",color="blue")
plt.xlabel("Index")
plt.ylabel("Closing Prices")
plt.legend(loc=0)
plt.show()
binary_predicted_test=[]
for t in range(0,len(test_predicted)):
if test_predicted[t-1]>=test_predicted[t]:
binary_predicted_test.append(0)
else:
binary_predicted_test.append(1)
print(binary_predicted_test,len(binary_predicted_test))
print("Predicted data:")
#converting data to binary 0 or 1
binary_test=[]
for t in range(0,len(y_test)):
if y_test[t-1]>=y_test[t]:
binary_test.append(0)
else:
binary_test.append(1)
print(binary_test,len(binary_test))
len(binary_test),len(binary_predicted_test),type(binary_test)
#we count the number of correct predictions, if i-j==0 then we predicted the direction correctly
def counter(x,y):
count=0
for (i,j) in zip(x,y):
if i-j==0:
count=count+1
print ("The number of correct direction predictions is:",count,"out of:",len(x),"data points")
counter(binary_test,binary_predicted_test)
#accuracy describes: overall, how often the classifier is correct
from sklearn.metrics import accuracy_score,confusion_matrix,classification_report
print("Accuracy for testing data:")
def Confusion_matrix(n):
print(accuracy_score(binary_test[0:n],binary_predicted_test[0:n])*100)
confusion=confusion_matrix(binary_test[0:n],binary_predicted_test[0:n])
print(confusion)
#show confusion matrix
plt.matshow(confusion)
plt.title('Confusion matrix')
plt.colorbar()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
Confusion_matrix(len(binary_test))
###Output
_____no_output_____
###Markdown
Accuracy for ANN: Train:500 Test:200, Train:700 Test:200
###Code
import pandas as pd
steps=[[76,73],[71,77.5],[89.5,78.5],[83.5,74.5]]
data=pd.DataFrame(steps,index=['Netcare','Santam','Sanlam','Nedbank'],columns=["Train:500 Test:200","Train:700 Test:200"])
data
###Output
_____no_output_____ |
12. Sentiment RNN/Sentiment_RNN.ipynb | ###Markdown
Sentiment Analysis with an RNNIn this notebook, you'll implement a recurrent neural network that performs sentiment analysis. >Using an RNN rather than a strictly feedforward network is more accurate since we can include information about the *sequence* of words. Here we'll use a dataset of movie reviews, accompanied by sentiment labels: positive or negative. Network ArchitectureThe architecture for this network is shown below.>**First, we'll pass in words to an embedding layer.** We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the Word2Vec lesson. You can actually train an embedding with the Skip-gram Word2Vec model and use those embeddings as input, here. However, it's good enough to just have an embedding layer and let the network learn a different embedding table on its own. *In this case, the embedding layer is for dimensionality reduction, rather than for learning semantic representations.*>**After input words are passed to an embedding layer, the new embeddings will be passed to LSTM cells.** The LSTM cells will add *recurrent* connections to the network and give us the ability to include information about the *sequence* of words in the movie review data. >**Finally, the LSTM outputs will go to a sigmoid output layer.** We're using a sigmoid function because positive and negative = 1 and 0, respectively, and a sigmoid will output predicted, sentiment values between 0-1. We don't care about the sigmoid outputs except for the **very last one**; we can ignore the rest. We'll calculate the loss by comparing the output at the last time step and the training label (pos or neg). --- Load in and visualize the data
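To make the shape bookkeeping concrete, here is a minimal, self-contained sketch of that flow with toy sizes (the numbers are illustrative only; the real vocabulary size and hyperparameters are set later in this notebook):
```
import torch
import torch.nn as nn

# toy sizes for illustration only
batch_size, seq_length, vocab_size, embed_dim, hidden_dim = 4, 10, 100, 8, 16

tokens = torch.randint(0, vocab_size, (batch_size, seq_length))            # integer word tokens
embedded = nn.Embedding(vocab_size, embed_dim)(tokens)                     # -> (4, 10, 8)
lstm_out, _ = nn.LSTM(embed_dim, hidden_dim, batch_first=True)(embedded)   # -> (4, 10, 16)
scores = torch.sigmoid(nn.Linear(hidden_dim, 1)(lstm_out))                 # -> (4, 10, 1), values in 0-1
last_scores = scores[:, -1, 0]                                             # keep only the last time step -> (4,)
```
Only `last_scores` (one value per review) is compared against the sentiment labels.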
###Code
import numpy as np
# read data from text files
with open('data/reviews.txt', 'r') as f:
reviews = f.read()
with open('data/labels.txt', 'r') as f:
labels = f.read()
print(reviews[:2000])
print()
print(labels[:20])
###Output
bromwell high is a cartoon comedy . it ran at the same time as some other programs about school life such as teachers . my years in the teaching profession lead me to believe that bromwell high s satire is much closer to reality than is teachers . the scramble to survive financially the insightful students who can see right through their pathetic teachers pomp the pettiness of the whole situation all remind me of the schools i knew and their students . when i saw the episode in which a student repeatedly tried to burn down the school i immediately recalled . . . . . . . . . at . . . . . . . . . . high . a classic line inspector i m here to sack one of your teachers . student welcome to bromwell high . i expect that many adults of my age think that bromwell high is far fetched . what a pity that it isn t
story of a man who has unnatural feelings for a pig . starts out with a opening scene that is a terrific example of absurd comedy . a formal orchestra audience is turned into an insane violent mob by the crazy chantings of it s singers . unfortunately it stays absurd the whole time with no general narrative eventually making it just too off putting . even those from the era should be turned off . the cryptic dialogue would make shakespeare seem easy to a third grader . on a technical level it s better than you might think with some good cinematography by future great vilmos zsigmond . future stars sally kirkland and frederic forrest can be seen briefly .
homelessness or houselessness as george carlin stated has been an issue for years but never a plan to help those on the street that were once considered human who did everything from going to school work or vote for the matter . most people think of the homeless as just a lost cause while worrying about things such as racism the war on iraq pressuring kids to succeed technology the elections inflation or worrying if they ll be next to end up on the streets . br br but what if y
positive
negative
po
###Markdown
 Data pre-processingThe first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.You can see an example of the reviews data above. Here are the processing steps we'll want to take:>* We'll want to get rid of periods and extraneous punctuation.* Also, you might notice that the reviews are delimited with newline characters `\n`. To deal with those, I'm going to split the text into each review using `\n` as the delimiter. * Then I can combine all the reviews back together into one big string.First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
###Code
from string import punctuation
print(punctuation)
# get rid of punctuation
reviews = reviews.lower() # lowercase, standardize
all_text = ''.join([c for c in reviews if c not in punctuation])
# split by new lines and spaces
reviews_split = all_text.split('\n')
all_text = ' '.join(reviews_split)
# create a list of words
words = all_text.split()
words[:30]
###Output
_____no_output_____
###Markdown
Encoding the wordsThe embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.> **Exercise:** Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers **start at 1, not 0**.> Also, convert the reviews to integers and store the reviews in a new list called `reviews_ints`.
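One common way to build this dictionary is to sort the vocabulary by frequency so that the most frequent word maps to 1; the sketch below is just one possibility, and the solution cell that follows uses a plain `set` instead, which also satisfies the exercise:
```
from collections import Counter

# sketch: map the most frequent word to 1, the next to 2, and so on
counts = Counter(words)
sorted_vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: i for i, word in enumerate(sorted_vocab, 1)}  # start at 1; 0 stays free for padding
```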
###Code
# feel free to use this import
from collections import Counter
## Build a dictionary that maps words to integers
words_ = set(words)
vocab_to_int = { word: idx for idx, word in enumerate(words_, start=1)}
## use the dict to tokenize each review in reviews_split
## store the tokenized reviews in reviews_ints
reviews_ints = []
for review_ in reviews_split:
reviews_ints.append([vocab_to_int[r] for r in review_.split()])
###Output
_____no_output_____
###Markdown
 **Test your code**As a test that you've implemented the dictionary correctly, print out the number of unique words in your vocabulary and the contents of the first tokenized review.
###Code
# stats about vocabulary
print('Unique words: ', len((vocab_to_int))) # should ~ 74000+
print()
# print tokens in first review
print('Tokenized review: \n', reviews_ints[:1])
###Output
Unique words: 74072
Tokenized review:
[[43635, 16089, 6945, 5805, 48740, 32231, 60729, 43579, 46980, 50019, 66624, 40955, 34966, 21813, 40855, 73178, 53782, 48166, 50517, 35027, 34966, 52914, 71189, 29785, 41100, 50019, 20009, 23211, 39157, 52671, 41421, 51300, 56511, 43635, 16089, 39573, 22551, 6945, 67783, 73845, 41421, 52729, 52113, 6945, 52914, 50019, 20188, 41421, 14136, 2441, 50019, 70724, 61127, 59344, 41728, 51197, 1645, 39751, 70872, 31211, 52914, 36481, 50019, 45607, 34701, 50019, 70197, 73978, 73710, 19174, 52671, 34701, 50019, 41942, 7498, 73780, 18371, 70872, 61127, 60490, 7498, 57998, 50019, 54630, 41100, 52594, 5805, 71801, 42774, 28945, 41421, 46471, 70755, 50019, 48166, 7498, 43240, 37281, 46980, 16089, 5805, 52149, 10338, 6660, 7498, 52004, 39379, 41421, 72715, 57775, 34701, 55997, 52914, 71801, 44267, 41421, 43635, 16089, 7498, 53126, 56511, 43176, 14057, 34701, 71189, 49355, 14491, 56511, 43635, 16089, 6945, 23364, 21802, 46223, 5805, 15571, 56511, 60729, 8290, 20596]]
###Markdown
Encoding the labelsOur labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.> **Exercise:** Convert labels from `positive` and `negative` to 1 and 0, respectively, and place those in a new list, `encoded_labels`.
###Code
# 1=positive, 0=negative label conversion
labels_split = labels.split('\n')
encoded_labels = np.array([1 if label == 'positive' else 0 for label in labels_split])
###Output
_____no_output_____
###Markdown
Removing OutliersAs an additional pre-processing step, we want to make sure that our reviews are in good shape for standard processing. That is, our network will expect a standard input text size, and so, we'll want to shape our reviews into a specific length. We'll approach this task in two main steps:1. Getting rid of extremely long or short reviews; the outliers2. Padding/truncating the remaining data so that we have reviews of the same length.Before we pad our review text, we should check for reviews of extremely short or long lengths; outliers that may mess with our training.
###Code
# outlier review stats
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
###Output
Zero-length reviews: 1
Maximum review length: 2514
###Markdown
Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. We'll have to remove any super short reviews and truncate super long reviews. This removes outliers and should allow our model to train more efficiently.> **Exercise:** First, remove *any* reviews with zero length from the `reviews_ints` list and their corresponding label in `encoded_labels`.
###Code
print('Number of reviews before removing outliers: ', len(reviews_ints))
## remove any reviews/labels with zero length from the reviews_ints list.
non_zero_idx = [i for i, review in enumerate(reviews_ints) if len(review) > 0]
reviews_ints = [reviews_ints[i] for i in non_zero_idx]
encoded_labels = np.array([encoded_labels[i] for i in non_zero_idx])
print('Number of reviews after removing outliers: ', len(reviews_ints))
###Output
Number of reviews before removing outliers: 25001
Number of reviews after removing outliers: 25000
###Markdown
--- Padding sequencesTo deal with both short and very long reviews, we'll pad or truncate all our reviews to a specific length. For reviews shorter than some `seq_length`, we'll pad with 0s. For reviews longer than `seq_length`, we can truncate them to the first `seq_length` words. A good `seq_length`, in this case, is 200.> **Exercise:** Define a function that returns an array `features` that contains the padded data, of a standard size, that we'll pass to the network. * The data should come from `review_ints`, since we want to feed integers to the network. * Each row should be `seq_length` elements long. * For reviews shorter than `seq_length` words, **left pad** with 0s. That is, if the review is `['best', 'movie', 'ever']`, `[117, 18, 128]` as integers, the row will look like `[0, 0, 0, ..., 0, 117, 18, 128]`. * For reviews longer than `seq_length`, use only the first `seq_length` words as the feature vector.As a small example, if the `seq_length=10` and an input review is: ```[117, 18, 128]```The resultant, padded sequence should be: ```[0, 0, 0, 0, 0, 0, 0, 117, 18, 128]```**Your final `features` array should be a 2D array, with as many rows as there are reviews, and as many columns as the specified `seq_length`.**This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
###Code
def pad_features(reviews_ints, seq_length):
''' Return features of review_ints, where each review is padded with 0's
or truncated to the input seq_length.
'''
## implement function
    # rows start as all zeros, which provides the left-padding for short reviews
    features = np.zeros((len(reviews_ints), seq_length), dtype=int)
    for i, review in enumerate(reviews_ints):
        # right-align each review; reviews longer than seq_length are truncated to their first seq_length words
        features[i, -len(review):] = np.array(review)[:seq_length]
return features
# Test your implementation!
seq_length = 200
features = pad_features(reviews_ints, seq_length=seq_length)
## test statements - do not change - ##
assert len(features)==len(reviews_ints), "Your features should have as many rows as reviews."
assert len(features[0])==seq_length, "Each feature row should contain seq_length values."
# print first 10 values of the first 30 batches
print(features[:30,:10])
###Output
[[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[ 79 62781 53788 34966 39675 54723 415 65316 55920 55872]
[50869 55875 34966 5805 52629 1647 17817 67527 6945 27852]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[60490 7498 50083 8369 71189 35780 20247 52671 52712 41421]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[50019 69489 13449 7353 5805 1647 69356 7994 41100 50730]
[41100 26222 39074 18824 39151 66663 24422 7079 24027 21975]
[ 0 0 0 0 0 0 0 0 0 0]
[50019 22728 2676 60608 9680 31225 46922 13068 29197 30724]
[ 0 0 0 0 0 0 0 0 0 0]
[73021 17130 60608 9680 44809 31640 3163 6945 48794 818]
[50019 38877 6945 53369 9454 6945 30476 40855 52011 41421]
[60490 7498 13573 58406 43242 14235 19760 7498 50083 55061]
[26222 38877 6945 57775 25305 5702 55734 55323 38527 6945]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[ 9454 1920 43176 32142 58713 41100 50019 38557 34701 33160]
[ 0 0 0 0 0 0 0 0 0 0]
[ 7498 5229 50019 22728 2676 67314 67783 60729 39573 57775]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]]
###Markdown
Training, Validation, TestWith our data in nice shape, we'll split it into training, validation, and test sets.> **Exercise:** Create the training, validation, and test sets. * You'll need to create sets for the features and the labels, `train_x` and `train_y`, for example. * Define a split fraction, `split_frac` as the fraction of data to **keep** in the training set. Usually this is set to 0.8 or 0.9. * Whatever data is left will be split in half to create the validation and *testing* data.
###Code
split_frac = 0.8
## split data into training, validation, and test data (features and labels, x and y)
total_features = len(features)
train_len = int(total_features * split_frac)
train_x = features[:train_len]
train_y = encoded_labels[:train_len]
test_x = features[train_len:]
test_y = encoded_labels[train_len:]
val_x, test_x = np.split(test_x, [int(len(test_x)/2)])
val_y, test_y = np.split(test_y, [int(len(test_y)/2)])
## print out the shapes of your resultant feature data
print(f"Train set: ({train_x.shape})")
print(f"Validation set: ({val_x.shape})")
print(f"Test set: ({test_x.shape})")
###Output
Train set: ((20000, 200))
Validation set: ((2500, 200))
Test set: ((2500, 200))
###Markdown
**Check your work**With train, validation, and test fractions equal to 0.8, 0.1, 0.1, respectively, the final, feature data shapes should look like:``` Feature Shapes:Train set: (20000, 200) Validation set: (2500, 200) Test set: (2500, 200)``` --- DataLoaders and BatchingAfter creating training, test, and validation data, we can create DataLoaders for this data by following two steps:1. Create a known format for accessing our data, using [TensorDataset](https://pytorch.org/docs/stable/data.html) which takes in an input set of data and a target set of data with the same first dimension, and creates a dataset.2. Create DataLoaders and batch our training, validation, and test Tensor datasets.```train_data = TensorDataset(torch.from_numpy(train_x), torch.from_numpy(train_y))train_loader = DataLoader(train_data, batch_size=batch_size)```This is an alternative to creating a generator function for batching our data into full batches.
###Code
import torch
from torch.utils.data import TensorDataset, DataLoader
# create Tensor datasets
train_data = TensorDataset(torch.from_numpy(train_x), torch.from_numpy(train_y))
valid_data = TensorDataset(torch.from_numpy(val_x), torch.from_numpy(val_y))
test_data = TensorDataset(torch.from_numpy(test_x), torch.from_numpy(test_y))
# dataloaders
batch_size = 50
# make sure to SHUFFLE your data
train_loader = DataLoader(train_data, shuffle=True, batch_size=batch_size)
valid_loader = DataLoader(valid_data, shuffle=True, batch_size=batch_size)
test_loader = DataLoader(test_data, shuffle=True, batch_size=batch_size)
# obtain one batch of training data
dataiter = iter(train_loader)
sample_x, sample_y = dataiter.next()
print('Sample input size: ', sample_x.size()) # batch_size, seq_length
print('Sample input: \n', sample_x)
print()
print('Sample label size: ', sample_y.size()) # batch_size
print('Sample label: \n', sample_y)
###Output
Sample input size: torch.Size([50, 200])
Sample input:
tensor([[ 26353, 7498, 56645, ..., 46137, 73016, 42006],
[ 0, 0, 0, ..., 56511, 43176, 10619],
[ 0, 0, 0, ..., 60729, 39573, 66982],
...,
[ 0, 0, 0, ..., 41421, 60662, 73021],
[ 0, 0, 0, ..., 73931, 5805, 67817],
[ 7498, 70583, 41421, ..., 73021, 50019, 8934]])
Sample label size: torch.Size([50])
Sample label:
tensor([ 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1,
1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0,
1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0,
1, 1, 1, 1, 0, 1, 0, 1])
###Markdown
 --- Sentiment Network with PyTorchBelow is where you'll define the network.The layers are as follows:1. An [embedding layer](https://pytorch.org/docs/stable/nn.html#embedding) that converts our word tokens (integers) into embeddings of a specific size.2. An [LSTM layer](https://pytorch.org/docs/stable/nn.html#lstm) defined by a hidden_state size and number of layers3. A fully-connected output layer that maps the LSTM layer outputs to a desired output_size4. A sigmoid activation layer which turns all outputs into a value 0-1; return **only the last sigmoid output** as the output of this network. The Embedding LayerWe need to add an [embedding layer](https://pytorch.org/docs/stable/nn.html#embedding) because there are 74000+ words in our vocabulary. It is massively inefficient to one-hot encode that many classes. So, instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using Word2Vec, then load it here. But, it's fine to just make a new layer, using it for only dimensionality reduction, and let the network learn the weights. The LSTM Layer(s)We'll create an [LSTM](https://pytorch.org/docs/stable/nn.html#lstm) to use in our recurrent network, which takes in an input_size, a hidden_dim, a number of layers, a dropout probability (for dropout between multiple layers), and a batch_first parameter.Most of the time, your network will have better performance with more layers; between 2-3. Adding more layers allows the network to learn really complex relationships. > **Exercise:** Complete the `__init__`, `forward`, and `init_hidden` functions for the SentimentRNN model class.Note: `init_hidden` should initialize the hidden and cell state of an lstm layer to all zeros, and move those states to the GPU, if available.
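For orientation before the exercise: an LSTM's initial state is a tuple `(h_0, c_0)` of tensors, each shaped `(n_layers, batch_size, hidden_dim)`, and "all zeros" simply means zero tensors of that shape. A minimal sketch is below; the solution derives the same tensors from an existing weight tensor (via `weight.new`) so they match its dtype, and moves them to the GPU when one is available.
```
# sketch only; n_layers, batch_size, and hidden_dim are the model hyperparameters
h0 = torch.zeros(n_layers, batch_size, hidden_dim)
c0 = torch.zeros(n_layers, batch_size, hidden_dim)
hidden = (h0, c0)   # what an nn.LSTM expects as its initial (hidden, cell) state
```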
###Code
# First checking if GPU is available
train_on_gpu=torch.cuda.is_available()
if(train_on_gpu):
print('Training on GPU.')
else:
print('No GPU available, training on CPU.')
import torch.nn as nn
class SentimentRNN(nn.Module):
"""
The RNN model that will be used to perform Sentiment analysis.
"""
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, drop_prob=0.5):
"""
Initialize the model by setting up the layers.
"""
super(SentimentRNN, self).__init__()
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define all layers
self.embed = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(input_size=embedding_dim, hidden_size=self.hidden_dim,
num_layers=n_layers, dropout=drop_prob, batch_first=True)
self.drop = nn.Dropout(p=drop_prob)
self.fc = nn.Linear(self.hidden_dim, output_size)
self.sig = nn.Sigmoid()
def forward(self, x, hidden):
"""
Perform a forward pass of our model on some input and hidden state.
"""
batch_size = x.size(0)
embed = self.embed(x)
lstm_out, hidden = self.lstm(embed, hidden)
out = self.drop(lstm_out)
out = self.fc(out)
sig_out = self.sig(out)
# print(sig_out.shape)
sig_out = sig_out.view(batch_size, -1)
# print(sig_out.shape)
sig_out = sig_out[:, -1]
# print(sig_out.shape)
# return last sigmoid output and hidden state
return sig_out, hidden
def init_hidden(self, batch_size):
''' Initializes hidden state '''
# Create two new tensors with sizes n_layers x batch_size x hidden_dim,
# initialized to zero, for hidden state and cell state of LSTM
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
###Output
_____no_output_____
###Markdown
Instantiate the networkHere, we'll instantiate the network. First up, defining the hyperparameters.* `vocab_size`: Size of our vocabulary or the range of values for our input, word tokens.* `output_size`: Size of our desired output; the number of class scores we want to output (pos/neg).* `embedding_dim`: Number of columns in the embedding lookup table; size of our embeddings.* `hidden_dim`: Number of units in the hidden layers of our LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.* `n_layers`: Number of LSTM layers in the network. Typically between 1-3> **Exercise:** Define the model hyperparameters.
###Code
# Instantiate the model w/ hyperparams
vocab_size = len(vocab_to_int)+1
output_size = 1
embedding_dim = 400
hidden_dim = 256
n_layers = 2
print(vocab_size)
net = SentimentRNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers)
print(net)
###Output
74073
SentimentRNN(
(embed): Embedding(74073, 400)
(lstm): LSTM(400, 256, num_layers=2, batch_first=True, dropout=0.5)
(drop): Dropout(p=0.5)
(fc): Linear(in_features=256, out_features=1, bias=True)
(sig): Sigmoid()
)
###Markdown
 --- TrainingBelow is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. You can also add code to save a model by name.>We'll also be using a new kind of cross entropy loss, which is designed to work with a single Sigmoid output. [BCELoss](https://pytorch.org/docs/stable/nn.html#bceloss), or **Binary Cross Entropy Loss**, applies cross entropy loss to a single value between 0 and 1.We also have some data and training hyperparameters:* `lr`: Learning rate for our optimizer.* `epochs`: Number of times to iterate through the training dataset.* `clip`: The maximum gradient value to clip at (to prevent exploding gradients).
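For reference, for a single sigmoid output $p$ and a target label $y \in \{0, 1\}$, binary cross entropy is $\mathrm{BCE}(p, y) = -\big[\, y \log p + (1 - y)\log(1 - p) \,\big]$, and `nn.BCELoss` averages this quantity over the batch by default.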
###Code
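# A quick sanity check of what BCELoss computes on a single sigmoid output
# (a minimal sketch; for prediction p and target y the loss is -[y*log(p) + (1-y)*log(1-p)]):
# nn.BCELoss()(torch.tensor([0.9]), torch.tensor([1.]))  # ~0.1054, i.e. -log(0.9)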
# loss and optimization functions
lr=0.001
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(net.parameters(), lr=lr)
# training params
epochs = 4 # 3-4 is approx where I noticed the validation loss stop decreasing
counter = 0
print_every = 100
clip=5 # gradient clipping
# move model to GPU, if available
if(train_on_gpu):
net.cuda()
net.train()
# train for some number of epochs
for e in range(epochs):
# initialize hidden state
h = net.init_hidden(batch_size)
# batch loop
for inputs, labels in train_loader:
counter += 1
if(train_on_gpu):
inputs, labels = inputs.cuda(), labels.cuda()
# print(inputs.shape)
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
h = tuple([each.data for each in h])
# zero accumulated gradients
net.zero_grad()
# get the output from the model
output, h = net(inputs, h)
# calculate the loss and perform backprop
loss = criterion(output.squeeze(), labels.float())
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(net.parameters(), clip)
optimizer.step()
# loss stats
if counter % print_every == 0:
# Get validation loss
val_h = net.init_hidden(batch_size)
val_losses = []
net.eval()
for inputs, labels in valid_loader:
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
val_h = tuple([each.data for each in val_h])
if(train_on_gpu):
inputs, labels = inputs.cuda(), labels.cuda()
output, val_h = net(inputs, val_h)
val_loss = criterion(output.squeeze(), labels.float())
val_losses.append(val_loss.item())
net.train()
print("Epoch: {}/{}...".format(e+1, epochs),
"Step: {}...".format(counter),
"Loss: {:.6f}...".format(loss.item()),
"Val Loss: {:.6f}".format(np.mean(val_losses)))
###Output
Epoch: 1/4... Step: 100... Loss: 0.632054... Val Loss: 0.656811
Epoch: 1/4... Step: 200... Loss: 0.602866... Val Loss: 0.600420
Epoch: 1/4... Step: 300... Loss: 0.588005... Val Loss: 0.583246
Epoch: 1/4... Step: 400... Loss: 0.752640... Val Loss: 0.730387
Epoch: 2/4... Step: 500... Loss: 0.719230... Val Loss: 0.622351
Epoch: 2/4... Step: 600... Loss: 0.585640... Val Loss: 0.557006
Epoch: 2/4... Step: 700... Loss: 0.547344... Val Loss: 0.579473
Epoch: 2/4... Step: 800... Loss: 0.497805... Val Loss: 0.474605
Epoch: 3/4... Step: 900... Loss: 0.392079... Val Loss: 0.476275
Epoch: 3/4... Step: 1000... Loss: 0.348153... Val Loss: 0.488079
Epoch: 3/4... Step: 1100... Loss: 0.172879... Val Loss: 0.447911
Epoch: 3/4... Step: 1200... Loss: 0.461436... Val Loss: 0.475061
Epoch: 4/4... Step: 1300... Loss: 0.201953... Val Loss: 0.486062
Epoch: 4/4... Step: 1400... Loss: 0.310843... Val Loss: 0.520719
Epoch: 4/4... Step: 1500... Loss: 0.117070... Val Loss: 0.570242
Epoch: 4/4... Step: 1600... Loss: 0.201815... Val Loss: 0.520287
###Markdown
--- TestingThere are a few ways to test your network.* **Test data performance:** First, we'll see how our trained model performs on all of our defined test_data, above. We'll calculate the average loss and accuracy over the test data.* **Inference on user-generated data:** Second, we'll see if we can input just one example review at a time (without a label), and see what the trained model predicts. Looking at new, user input data like this, and predicting an output label, is called **inference**.
###Code
# Get test data loss and accuracy
test_losses = [] # track loss
num_correct = 0
# init hidden state
h = net.init_hidden(batch_size)
net.eval()
# iterate over test data
for inputs, labels in test_loader:
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
h = tuple([each.data for each in h])
if(train_on_gpu):
inputs, labels = inputs.cuda(), labels.cuda()
# get predicted outputs
output, h = net(inputs, h)
# calculate loss
test_loss = criterion(output.squeeze(), labels.float())
test_losses.append(test_loss.item())
# convert output probabilities to predicted class (0 or 1)
pred = torch.round(output.squeeze()) # rounds to the nearest integer
# compare predictions to true label
correct_tensor = pred.eq(labels.float().view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
num_correct += np.sum(correct)
# -- stats! -- ##
# avg test loss
print("Test loss: {:.3f}".format(np.mean(test_losses)))
# accuracy over all test data
test_acc = num_correct/len(test_loader.dataset)
print("Test accuracy: {:.3f}".format(test_acc))
###Output
Test loss: 0.513
Test accuracy: 0.794
###Markdown
Inference on a test reviewYou can change this test_review to any text that you want. Read it and think: is it pos or neg? Then see if your model predicts correctly! > **Exercise:** Write a `predict` function that takes in a trained net, a plain text_review, and a sequence length, and prints out a custom statement for a positive or negative review!* You can use any functions that you've already defined or define any helper functions you want to complete `predict`, but it should just take in a trained net, a text review, and a sequence length.
###Code
# negative test review
test_review_neg = 'The worst movie I have seen; acting was terrible and I want my money back. This movie had bad acting and the dialogue was slow.'
from string import punctuation
def predict(net, test_review, sequence_length=200):
    ''' Prints out whether a given review is predicted to be
positive or negative in sentiment, using a trained model.
params:
net - A trained net
test_review - a review made of normal text and punctuation
sequence_length - the padded length of a review
'''
review = ''.join([c for c in test_review.lower() if c not in punctuation])
review_int = [vocab_to_int[word] for word in review.split()]
f = np.zeros((1, sequence_length), dtype=int)
f[0, -len(review_int):] = np.array(review_int)[:sequence_length]
net.eval()
feature_tensor = torch.from_numpy(f)
batch_size = feature_tensor.size(0)
h = net.init_hidden(batch_size)
if(train_on_gpu):
feature_tensor = feature_tensor.cuda()
out, h = net(feature_tensor, h)
pred = torch.round(out.squeeze())
# print custom response based on whether test_review is pos/neg
print('Prediction value, pre-rounding: {:.6f}'.format(out.item()))
# print custom response
if(pred.item()==1):
print("Positive review detected!")
else:
print("Negative review detected.")
# positive test review
test_review_pos = 'This movie had the best acting and the dialogue was so good. I loved it.'
# call function
# try negative and positive reviews!
seq_length=200
predict(net, test_review_pos, seq_length)
###Output
Prediction value, pre-rounding: 0.978208
Positive review detected!
###Markdown
Try out test_reviews of your own!Now that you have a trained model and a predict function, you can pass in _any_ kind of text and this model will predict whether the text has a positive or negative sentiment. Push this model to its limits and try to find what words it associates with positive or negative.Later, you'll learn how to deploy a model like this to a production environment so that it can respond to any kind of user data put into a web app!
###Code
torch.save(net.state_dict(), 'model.pt')
torch.save(optimizer.state_dict(), 'optim.pt')
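# To restore the saved weights later (a minimal sketch; assumes the network is
# re-instantiated with the same hyperparameters as above):
# net = SentimentRNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers)
# net.load_state_dict(torch.load('model.pt'))
# net.eval()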
###Output
_____no_output_____ |
2019/ml/notebook/1. linear regression/00_linear_regression_basic.ipynb | ###Markdown
Drawing Scatter Plot
###Code
fig = sns.scatterplot(x=X_raw, y=y)
fig2 = sns.scatterplot(x=X_raw, y=y_true)
print(type(fig2))
###Output
<class 'matplotlib.axes._subplots.AxesSubplot'>
###Markdown
Numpy TipsArrays such as [0, 1, 2] and [3, 4] can be concatenated into the form [0, 1, 2, 3, 4]
###Code
n_data = X_raw.shape[0]
dataset_x = np.concatenate([X_raw, X_raw])
dataset_y = np.concatenate([y, y_true])
dataset_category = ['sample'] * n_data + ['true'] * n_data
print(dataset_x.shape, dataset_y.shape)
###Output
(200,) (200,)
###Markdown
Converting a Python dict to a Pandas DataFrame
###Code
dataset = {
'x': dataset_x,
'y': dataset_y,
'category': dataset_category
}
source = pd.DataFrame(dataset)
source.head(3)
###Output
_____no_output_____
###Markdown
Differences between scatterplot and relplot in seaborn
###Code
fig3 = sns.relplot(x='x', y='y', hue='category', kind='scatter', data=source)
print(type(fig3))
from sklearn.linear_model import LinearRegression
linear_regression = LinearRegression(fit_intercept=True)
###Output
_____no_output_____
###Markdown
**LinearRegression** can be trained on feature vectors of two or more dimensions, so the input data is expected as a matrix rather than a column vector. A column vector can be converted with `reshape` into shape = (n, 1)
###Code
X = X_raw.reshape(-1, 1) # column vector to matrix
linear_regression.fit(X, y)
###Output
_____no_output_____
###Markdown
The predict() function performs prediction with the trained model. Many scikit-learn estimators also provide a `fit_predict()` convenience method.
###Code
y_pred = linear_regression.predict(X)
dataset = {
'x': np.concatenate([dataset_x, X_raw]),
'y': np.concatenate([dataset_y, y_pred]),
'category': dataset_category + ['prediction'] * n_data
}
source = pd.DataFrame(dataset)
source.tail()
source['category'].unique()
fig4 = sns.relplot(x='x', y='y', hue='category', kind='scatter', data=source)
###Output
_____no_output_____
###Markdown
The model parameters a and b are stored in `coef_` and `intercept_`, respectively.
###Code
a_est = linear_regression.coef_
b_est = linear_regression.intercept_
print(f'a_est = {a_est}, type = {type(a_est)}, shape = {a_est.shape}')
print(f'b_est = {b_est:.3}, type = {type(b_est)}, shape = {b_est.shape}')
print(y)
print(y_pred)
# define the residuals (observed minus predicted values)
residual = y - y_pred
print(residual)
residual[:5]
print(np.array([1, 2, 3]) / np.array([3, 4, 5]))
# [0.33333333 0.5 0.6 ]
print(np.array([1, 2, 3]) ** 2)
# [1 4 9]
print(np.array([1, 2, 3]) - 0.5)
# [0.5 1.5 2.5]
# MAE(Mean Absolute Error)
print(np.abs(residual).mean()) # residual = y - y_pred
# MAPE(Mean Absolute Percentage Error)
print(np.abs(residual / np.abs(y)).mean())
# MSE(Mean Squared Error)
print((residual ** 2).mean())
# RMSE(Root Mean Squared Error)
print(np.sqrt((residual ** 2).mean()))
# R-square
'''
R-squared: one minus the ratio of the residual sum of squares to the total sum of squares,
i.e. 1 - [sigma((y - y_pred)^2) / sigma((y - y_mean)^2)]
'''
print(1 - (residual ** 2).sum() / ((y - y.mean()) **2).sum())
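# The same metrics are also available in scikit-learn (a minimal sketch, not run here):
# from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
# print(mean_absolute_error(y, y_pred), mean_squared_error(y, y_pred), r2_score(y, y_pred))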
###Output
0.8034942875609504
|
notebooks/10-xg-boost.ipynb | ###Markdown
10-xg-boost
###Code
# Imports used throughout this notebook
import time

import pandas as pd
import matplotlib.pyplot as plt
from joblib import dump
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import KFold, GridSearchCV
from sklearn.metrics import mean_squared_error
from xgboost import XGBRegressor, plot_tree

X_train = pd.read_csv('../data/processed/train_features.csv')
y_train = pd.read_csv('../data/processed/train_target.csv')
X_test = pd.read_csv('../data/processed/test_features.csv')
y_test = pd.read_csv('../data/processed/test_target.csv')
X_train = X_train.select_dtypes('number')
X_test = X_test.select_dtypes('number')
cols = X_train.columns.tolist()
assert list(X_train.columns) == list(X_test.columns)
X_cols, y_cols = X_train.columns, y_train.columns
feature_scaler = StandardScaler()
target_scaler = StandardScaler()
X_train = feature_scaler.fit_transform(X_train)
X_test = feature_scaler.transform(X_test)
y_train = target_scaler.fit_transform(y_train)
y_test = target_scaler.transform(y_test)
X_train = pd.DataFrame(X_train, columns=X_cols)
X_test = pd.DataFrame(X_test, columns=X_cols)
y_train = pd.DataFrame(y_train, columns=y_cols)
y_test = pd.DataFrame(y_test, columns=y_cols)
cv = KFold(n_splits=10, shuffle=True, random_state=42)
###Output
_____no_output_____
###Markdown
XGBoost
###Code
# param_grid = {
# 'n_estimators': [50, 100, 150, 200],
# 'max_depth': [3, 4, 5],
# 'learning_rate': [0.01, 0.1, 1]
# }
param_grid = {
'n_estimators': [200],
'max_depth': [3],
'learning_rate': [1]
}
mdl = XGBRegressor(random_state=42)
t1 = time.time()
gscv = GridSearchCV(mdl, n_jobs=-1, cv=cv, param_grid=param_grid, verbose=1)
gscv.fit(X_train, y_train)
t2 = time.time()
print(t2 - t1)
mdl = gscv.best_estimator_
mdl
dump(mdl, '../models/xgb_regressor.joblib')
mean_squared_error(
target_scaler.inverse_transform(y_train),
target_scaler.inverse_transform(mdl.predict(X_train).reshape(-1,1)),
squared=False
)
mean_squared_error(
target_scaler.inverse_transform(y_test),
target_scaler.inverse_transform(mdl.predict(X_test).reshape(-1,1)),
squared=False
)
plt.figure(figsize=(5,5), dpi=300)
ax = plt.gca()
plot_tree(mdl, ax=ax)
plt.title('XG Boost')
plt.tight_layout()
plt.savefig(f'../reports/figures/xgb_tree.png')
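# The fitted booster's feature importances could also be inspected (a minimal sketch):
# importances = pd.Series(mdl.feature_importances_, index=cols).sort_values(ascending=False)
# print(importances.head(10))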
###Output
_____no_output_____ |
final-Midterm.ipynb | ###Markdown
Delaware Public School Enrollment Trends BackgroundThe State of Delaware has a long and checkered history of systematic racism within its public school system. Although the State has recently taken steps to address its inherent biases, such as a [lawsuit settlement](https://www.aclu-de.org/en/news/press-release-agreement-reached-county-track-public-schools-litigation) to reassess property values in New Castle County with the goal of equalizing funding across school districts, underlying distrust and discriminatory practices still exist within the public school system. This is most prevalent in New Castle County, which is home to the largest percentage of Non-white population in Delaware ([see Chart 41 "Non-White Population by County"](https://statisticalatlas.com/county/Delaware/New-Castle-County/Race-and-Ethnicity#top) for a detailed breakdown). One often cited cause of Delaware's school segregation problem is the practice of desegregation busing that occurred within New Castle County between 1974 and 1994. With this policy enacted, under-served, primarily African American students living within the City of Wilmington were sent to schools in the more affluent and primarily White surrounding suburbs. Conversely, the students living in the suburbs of Wilmington were bused to historically majority African American schools within the city. This practice angered a large number of parents in the suburbs who did not want their children attending school within the city, leading to a concerning distrust in the public school system within the suburban White communities. Subsequently, this sense of distrust led to a mass exodus of White students from the public school system, spurring the overwhelming prevalence of private schools in Delaware. Research ObjectivesThrough my analysis I intend to demonstrate that the distrust and marginalization of the Delaware Public School System by the White community is not a relic of the past and still persists to this day. I believe this will demonstrate itself through a significant drop in White student enrollment within the public school system. I would also expect the majority of the changes in enrollment to be occurring in New Castle County, due to it having both the largest overall population in Delaware and the largest Non-white population, as well as the most significant history of racial discrimination in schools. In order to achieve this goal I aim to analyze the recent enrollment trends in the school system and explain the implications of these trends for the school funding system, as well as how the current funding system contributes to the aforementioned trends in enrollment. The research questions I asked are as follows:* What are the enrollment trends of Delaware Public Students broken down by Race?* How significant are the current enrollment trends?* How and which districts are most affected by the changes in student enrollment? Data SourceThe following analysis was done using a publicly available data set provided by the Delaware State Government, which is used to allocate yearly funding and resources to schools within the state. [Delaware Enrollment Data Set](https://data.delaware.gov/Education/Student-Enrollment/6i7v-xnmf) Ethical ConsiderationsOne of the most important considerations to make while working on this type of analysis is ensuring that the students involved are never degraded to only a set of numbers. 
While converting attributes into data allows us to gather significant trends, it is important to remember that numbers cannot completely replicate the uniqueness of human beings. A second consideration that must be made is the sheer number of stakeholders, both direct and indirect, in the education system. While it is obvious to point out the direct stakeholders such as the students, teachers, and government officials, we must also remember the wider societal stakeholders in the form of taxpayers, families, and local communities. Because of all those involved, it is critical that the analysis to follow is not misconstrued or misinterpreted in ways that can be manipulated to advance false narratives. Data Set Overview
###Code
import pandas as pd
from pingouin import ancova
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
df = pd.read_csv("data/Student_Enrollment.csv")
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 738465 entries, 0 to 738464
Data columns (total 16 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 School Year 738465 non-null int64
1 District Code 738465 non-null int64
2 District 738465 non-null object
3 School Code 738465 non-null int64
4 Organization 738465 non-null object
5 Race 738465 non-null object
6 Gender 738465 non-null object
7 Grade 738465 non-null object
8 SpecialDemo 738465 non-null object
9 Geography 738465 non-null object
10 SubGroup 738465 non-null object
11 RowStatus 738465 non-null object
12 Students 401049 non-null float64
13 EOYEnrollment 738157 non-null float64
14 PctOfEOYEnrollment 401049 non-null float64
15 FallEnrollment 660484 non-null float64
dtypes: float64(4), int64(3), object(9)
memory usage: 90.1+ MB
###Markdown
Column Descriptions. ***School Year***: School Year from which the data was collected. ***District Code***: Number unique to each district. ***District***: Name of each district. ***School Code***: Number representing a school within a district. ***Organization***: Full name of the Organization. ***Race***: Represents the race/ethnicity of the unique group of students within the school/district. ***Gender***: Represents the gender of students within the school/district. ***Grade***: Grade level of the unique group of students. ***Special Demo***: Represents the special population status of the unique group of students. ***Geography***: Represents the geography of the unique group of students. ***SubGroup***: Represents the unique group of students within a school/district described by the combination of Race, Gender, Grade, SpecialDemo and Geography. ***RowStatus***: Indicates whether the aggregate data in the row has been Redacted or Reported. If redacted, certain data has been hidden to comply with state and federal privacy laws. ***Students***: Number of students enrolled at the end of the school year. ***PctOfEOYEnrollment***: The percentage of students enrolled for the specified subgroup divided by the number of students enrolled at the end of the school year. ***FallEnrollment***: The number of students enrolled on September 30th of the specified school year.
###Code
#Isolating State Wide Data
all = "All Students"
all_years = df.loc[(df["District"] == "State of Delaware") & (df["Gender"] == all) & (df["Grade"] == all) & (df["SpecialDemo"] == all) & (df["Geography"] == all)]
indexs = all_years[all_years["Race"] == "All Students"].index
all_years = all_years.drop(indexs)
y2020 = all_years[all_years["School Year"] == 2020]
plt.pie(y2020["Students"], labels=y2020["Race"])
plt.title("2020 Student Enrollment by Race")
plt.show()
###Output
_____no_output_____
###Markdown
Pie Chart snapshot of Delaware Public School Demographics
###Code
fig = plt.figure(figsize = (10,10))
ax1 = fig.add_subplot(221)
ax2 = fig.add_subplot(222)
ax3 = fig.add_subplot(223)
ax4 = fig.add_subplot(224)
ax1.plot(all_years[all_years["Race"] == "White"]["School Year"].values, all_years[all_years["Race"] == "White"]["Students"].values)
ax1.title.set_text("White Student Enrollment")
ax2.plot(all_years[all_years["Race"] == "African American"]["School Year"].values, all_years[all_years["Race"] == "African American"]["Students"].values)
ax2.title.set_text("African American Student Enrollment")
ax3.plot(all_years[all_years["Race"] == "Asian American"]["School Year"].values, all_years[all_years["Race"] == "Asian American"]["Students"].values)
ax3.title.set_text("Asian American Student Enrollment")
ax4.plot(all_years[all_years["Race"] == "Hispanic/Latino"]["School Year"].values, all_years[all_years["Race"] == "Hispanic/Latino"]["Students"].values)
ax4.title.set_text("Hispanic Student Enrollment")
ax1.set_xlabel("School Year")
ax2.set_xlabel("School Year")
ax3.set_xlabel("School Year")
ax4.set_xlabel("School Year")
ax1.set_ylabel("Number of Students")
ax2.set_ylabel("Number of Students")
ax3.set_ylabel("Number of Students")
ax4.set_ylabel("Number of Students")
plt.show()
###Output
_____no_output_____
###Markdown
From an initial look at the above line graphs of changes in Student Enrollment over time, broken down by demographic, we can see that the number of White students enrolling in public schools seems to be decreasing while every other demographic is experiencing an uptick in enrollment. Calculating Pearson's r For Each Group
###Code
years = [2015,2016,2017,2018,2019,2020]
White_enrollment_r = stats.pearsonr(all_years["Students"][all_years["Race"] == "White"],years)
African_American_r = stats.pearsonr(all_years["Students"][all_years["Race"] == "African American"], years)
Asian_American_r = stats.pearsonr(all_years["Students"][all_years["Race"] == "Asian American"], years)
Hispanic_r = stats.pearsonr(all_years["Students"][all_years["Race"] == "Hispanic/Latino"], years)
print("White Student Enrollment r: ", White_enrollment_r[0])
print("African American Student Enrollment r: ", African_American_r[0])
print("Asian American r: ", Asian_American_r[0])
print("Hispanic/Latino r: ", Hispanic_r[0])
###Output
White Student Enrollment r: -0.9974150969419623
African American Student Enrollment r: 0.8939061809517638
Asian American r: 0.9892662569850617
Hispanic/Latino r: 0.9851881711114994
###Markdown
From looking at Pearson's coefficient for each group, we can see that enrollment for every group follows a strong linear trend over time. ANCOVA Test for Significant Variation
###Code
ancova(data = all_years, dv="Students", covar="School Year", between = "Race")
###Output
_____no_output_____
###Markdown
Since the above p-values are well below the conventional .05 threshold, it is safe to conclude that the trends shown in the above graphs differ significantly. This is further illustrated by the large Sum of Squares values, demonstrating a large amount of variance between them.
###Code
African_American_pct = all_years["Students"][all_years["Race"] == "African American"].pct_change().dropna()
White_pct = all_years["Students"][all_years["Race"] == "White"].pct_change().dropna()
Asian_American_pct = all_years["Students"][all_years["Race"] == "Asian American"].pct_change().dropna()
Hispanic_pct = all_years["Students"][all_years["Race"] == "Hispanic/Latino"].pct_change().dropna()
plt.figure(figsize = (10,10))
plt.plot(years[1:],African_American_pct.values,label = "African American")
plt.plot(years[1:],White_pct.values, label = "White")
plt.plot(years[1:],Asian_American_pct.values, label = "Asian American")
plt.plot(years[1:],Hispanic_pct.values, label = "Hispanic/Latino")
plt.legend(["African American", "White", "Asian American", "Hispanic/Latino"])
plt.xticks(years[1:])
plt.xlabel("School Year")
plt.ylabel("Percent Change")
plt.show()
###Output
_____no_output_____
###Markdown
Variance in the Rate of Change
###Code
columns = ["White", "African American", "Asian American", "Hispanic/Latino"]
White_max = "{:,.5f}".format(White_pct.max())
African_American_max = "{:,.5f}".format(African_American_pct.max())
Asian_American_max = "{:,.5f}".format(Asian_American_pct.max())
Hispanic_max = "{:,.5f}".format(Hispanic_pct.max())
White_min = "{:,.5f}".format(White_pct.min())
African_American_min = "{:,.5f}".format(African_American_pct.min())
Asian_American_min = "{:,.5f}".format(Asian_American_pct.min())
Hispanic_min = "{:,.5f}".format(Hispanic_pct.min())
row = ["Max", "Min"]
data = [[White_max,African_American_max, Asian_American_max,Hispanic_max], [White_min, African_American_min, Asian_American_min, Hispanic_min]]
plt.figure(figsize=(10,1))
min_max_table = plt.table(cellText=data, colLabels = columns, rowLabels = row)
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
Identifying which Districts had the Largest Decrease in Population
###Code
#Isolating District Information from larger data set
dist_breakdown = df[(df["School Code"] == 0) & (df["District"] != "State of Delaware")]
dist_breakdown.head()
dist_breakdown = dist_breakdown[(dist_breakdown["Race"] == all) & (dist_breakdown["Gender"] == all) & (dist_breakdown["Grade"] == all) & (dist_breakdown["Geography"] == all) & (dist_breakdown["SubGroup"] == all)]
dist_breakdown = dist_breakdown.sort_values(by = ["District Code","School Year"], ascending = [True,True] )
#Iterates through the isolated dataset specifically pulling information from each district in the years 2015 and 2020
combined_years = pd.DataFrame(columns = ["District", "District Code", "Enroll2015", "Enroll2020"])
for index, row in dist_breakdown.iterrows():
if row["School Year"] == 2015:
combined_years = combined_years.append({"District": row["District"], "District Code": row["District Code"], "Enroll2015": row["EOYEnrollment"], "Enroll2020" : 0}, ignore_index = True)
elif row["School Year"] == 2020:
combined_years["Enroll2020"].loc[combined_years["District Code"] == row["District Code"]] = row["EOYEnrollment"]
#Calculates the percent change of each district, cleans, and sorts the final data frame
combined_years["PercentChange"] = (combined_years["Enroll2020"] - combined_years["Enroll2015"])/combined_years["Enroll2015"]
combined_years.replace(0,np.nan, inplace= True)
combined_years.isnull().sum()
clean_combined = combined_years.dropna()
clean_combined = clean_combined.sort_values(by="PercentChange",ascending = True)
clean_combined.head(10)
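# The same table could be built without the explicit loop (a sketch using a pivot table):
# wide = dist_breakdown.pivot_table(index=["District", "District Code"], columns="School Year", values="EOYEnrollment")
# wide["PercentChange"] = (wide[2020] - wide[2015]) / wide[2015]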
###Output
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pandas/core/indexing.py:1637: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self._setitem_single_block(indexer, value, name)
|
docs/contribute/benchmarks_latest_results/Prod3b/CTAN_Zd20_AzSouth_NSB1x_baseline_pointsource/TRAINING/result_benchmarks_DL1_DirectionLUT.ipynb | ###Markdown
Direction Look-Up-Tables (LUTs) **Recommended datasample(s):** ``gamma-1`` (dataset used to build the energy model)**Data level(s):** DL1b (telescope-wise image parameters)**Description:**To obtain an estimate for an image, given its intensity, width and length, how reliable its axis is as a measure of the shower axis' orientation. The values from the LUTs can be used to set relative weights for the different telescopes in the stereoscopic reconstruction of events with three or more valid images.The approach used here is the following:- calculate for each image the miss parameter, aka the distance from the image axis to the point on the camera which corresponds to the true gamma-ray direction- build a LUT per telescope type, containing in bins of image intensity and width/length, the square of \<miss\>.**Requirements and steps to reproduce:**This notebook requires a TRAINING file generated using ``protopipe-TRAINING``. The data format required to run the notebook is the current one used by _protopipe_ .To get a filled notebook and reproduce these results,- get the necessary input files using ``protopipe-TRAINING`` (see documentation)- execute the notebook with ``protopipe-BENCHMARK``,``protopipe-BENCHMARK launch --config_file configs/benchmarks.yaml -n TRAINING/benchmarks_DL1_DirectionLUT``To obtain the list of all available parameters add ``--help-notebook``.**Comparison against CTAMARS:**- the input file needs to be a merged TRAINING file from the ``gamma-1`` sample- reference simtel-files, plots, values and settings can be found [here (please, always refer to the latest version)](https://forge.in2p3.fr/projects/benchmarks-reference-analysis/wiki/Comparisons_between_pipelines).**Development and testing:** As with any other part of _protopipe_ and being part of the official repository, this notebook can be further developed by any interested contributor. The execution of this notebook is not currently automatic, it must be done locally by the user _before_ pushing a pull-request. Please, strip the output before pushing. Table of contents- [Counts](Counts)- [Counts ratio between protopipe and CTAMARS](Count-ratio-between-protopipe-and-CTAMARS)- [Direction LUT](Direction-LUT)- [Direction LUT comparisons between protopipe and CTAMARS](Direction-LUT-ratio-between-protopipe-and-CTAMARS) - [Profile along Y-axis (width/length)](Profile-along-Y-axis-(width/length)) - [Ratio between the LUTs](Ratio-between-the-LUTs) Imports
###Code
from pathlib import Path
import warnings
# dummy helper that raises a RuntimeWarning; used below inside warnings.catch_warnings() blocks
def fxn():
    warnings.warn("runtime", RuntimeWarning)
import numpy as np
from scipy.stats import binned_statistic_2d
import pandas
import tables
import uproot
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
from matplotlib.pyplot import rc
import matplotlib.style as style
from cycler import cycler
from ctapipe.image import camera_to_shower_coordinates
from protopipe.pipeline.io import get_camera_names, read_protopipe_TRAINING_per_tel_type
# TODO: move to protopipe.benchmarks.utils
def raise_(ex):
"""Raise an exception as a statement.
This is a general purpose raiser for cases such as a lambda function.
Parameters
----------
ex: exception
Python built-in exception to raise.
"""
raise ex
# TODO: move to protopipe.benchmarks.utils
def string_to_boolean(variables):
"""Convert True/False strings to booleans.
Useful in case a specific use of the CLI doesn't allow to read booleans as booleans.
Parameters
----------
variables: list of str
Variables to check.
"""
def check_str(x): return x if type(x) == bool \
else True if x == "True" \
else False if x == "False" \
else raise_(ValueError(f"{x} is not a valid boolean."))
return list(map(check_str, variables))
###Output
_____no_output_____
###Markdown
Input data
###Code
# Parametrized cell
# Modify these variables according to your local setup outside of the container
analyses_directory = "/Users/michele/Applications/ctasoft/dirac/shared_folder/analyses" # path to all analyses
output_directory = Path.cwd() # default output directory for plots
analysis_name = "test"
load_CTAMARS = True
CTAMARS_input_directory = None # Path to DL1 CTAMARS data (if load_CTAMARS is True)
# Parameters
analyses_directory = "/Users/michele/Applications/ctasoft/dirac/shared_folder/analyses"
analysis_name = "v0.5.0a1"
load_protopipe_previous = False
analysis_name_2 = "v0.4.0_dev1"
use_seaborn = True
matplotlib_settings = {
"cmap": "cividis",
"style": "seaborn-colorblind",
"axes.prop_cycle": [
"#0072B2",
"#D55E00",
"#F0E442",
"#009E73",
"#CC79A7",
"#56B4E9",
],
}
seaborn_settings = {
"style": "whitegrid",
"context": "talk",
"rc": {"xtick.bottom": True, "ytick.left": True},
}
load_requirements = True
requirements_input_directory = "/Volumes/DataCEA_PERESANO/Data/CTA/requirements/"
load_CTAMARS = True
input_data_CTAMARS = {
"parent_directory": "/Users/michele/Applications/ctasoft/tests/CTAMARS_reference_data",
"TRAINING/DL1": "TRAINING/DL1",
"TRAINING/DL2": "TRAINING/DL2",
"DL2": "",
"DL3": {
"input_directory": "DL3",
"input_file": "SubarrayLaPalma_4L15M_south_IFAE_50hours_20190630.root",
},
"label": "CTAMARS (2019)",
}
load_EventDisplay = True
input_data_EventDisplay = {
"input_directory": "/Volumes/DataCEA_PERESANO/Data/CTA/ASWG/Prod3b/Release_2018-12-03/ROOT/North/CTA-Performance-North-20deg_20181203",
"input_file": "CTA-Performance-North-20deg-S-50h_20181203.root",
"label": "EventDisplay (2018)",
}
input_filenames = {
"simtel": "/Users/michele/Applications/ctasoft/tests/data/simtel/gamma_20deg_180deg_run100___cta-prod3-demo-2147m-LaPalma-baseline.simtel.gz",
"TRAINING_energy_gamma": "TRAINING_energy_tail_gamma_merged.h5",
"TRAINING_classification_gamma": "TRAINING_classification_tail_gamma_merged.h5",
"DL2_gamma": "DL2_tail_gamma_merged.h5",
"DL2_proton": "DL2_energy_tail_gamma_merged.h5",
"DL2_electron": "DL2_energy_tail_gamma_merged.h5",
"DL3": "performance_protopipe_Prod3b_CTANorth_baseline_full_array_Zd20deg_180deg_Time50.00h.fits.gz",
}
model_configuration_filenames = {
"energy": "RandomForestRegressor.yaml",
"classification": "RandomForestClassifier.yaml",
}
input_filenames_ctapipe = {
"DL1a_gamma": "events_protopipe_CTAMARS_calibration_1stPass.dl1.h5",
"DL1a_gamma_2ndPass": "events_protopipe_CTAMARS_calibration_2ndPass.dl1.h5",
}
output_directory = "/Users/michele/Applications/ctasoft/dirac/shared_folder/analyses/v0.5.0a1/benchmarks_results/TRAINING"
# Handle boolean variables (papermill reads them as strings)
[load_CTAMARS,
use_seaborn] = string_to_boolean([load_CTAMARS,
use_seaborn])
# First we check if a _plots_ folder exists already.
# If not, we create it.
plots_folder = Path(output_directory) / "plots"
plots_folder.mkdir(parents=True, exist_ok=True)
# Plot aesthetics settings
style.use(matplotlib_settings["style"])
cmap = matplotlib_settings["cmap"]
rc('axes', prop_cycle=cycler(color=matplotlib_settings["axes.prop_cycle"]))
if use_seaborn:
import seaborn as sns
sns.set_style(seaborn_settings["style"], seaborn_settings["rc"])
sns.set_context(seaborn_settings["context"])
###Output
_____no_output_____
###Markdown
CTAMARS
###Code
if load_CTAMARS:
# Get input file path
if not CTAMARS_input_directory:
try:
indir_CTAMARS = Path(input_data_CTAMARS["parent_directory"]) / Path(input_data_CTAMARS["TRAINING/DL1"])
except (NameError, KeyError):
print("WARNING: CTAMARS data undefined! Please, check the documentation of protopipe-BENCHMARKS.")
filename_CTAMARS = "DirLUT.root"
filepath_CTAMARS = Path(indir_CTAMARS) / filename_CTAMARS
CTAMARS_cameras = ["LSTCam", "NectarCam"]
CTAMARS_histograms = ["DirLookupTable", "DirLookupTable_degrees", "DirEventStatistics"]
CTAMARS = dict.fromkeys(CTAMARS_cameras)
with uproot.open(filepath_CTAMARS) as infile_CTAMARS:
for camera_index in range(len(CTAMARS_cameras)):
CTAMARS[CTAMARS_cameras[camera_index]] = dict.fromkeys(CTAMARS_histograms)
CTAMARS[CTAMARS_cameras[camera_index]][f"DirLookupTable"] = infile_CTAMARS[f"DirLookupTable_type{camera_index}"]
CTAMARS[CTAMARS_cameras[camera_index]][f"DirLookupTable_degrees"] = infile_CTAMARS[f"DirLookupTable_degrees_type{camera_index}"]
CTAMARS[CTAMARS_cameras[camera_index]][f"DirEventStatistics"] = infile_CTAMARS[f"DirEventStatistics_type{camera_index}"]
CTAMARS_X_edges = CTAMARS["LSTCam"]["DirLookupTable"].axes[0].edges()
CTAMARS_Y_edges = CTAMARS["LSTCam"]["DirLookupTable"].axes[1].edges()
###Output
_____no_output_____
###Markdown
protopipe
###Code
input_directory = Path(analyses_directory) / analysis_name / Path("data/TRAINING/for_energy_estimation/gamma")
try:
input_filename = input_filenames["TRAINING_energy_gamma"]
except (NameError, KeyError):
input_filename = "TRAINING_energy_tail_gamma_merged.h5"
cameras = get_camera_names(input_directory = input_directory, file_name = input_filename)
data = read_protopipe_TRAINING_per_tel_type(input_directory = input_directory, file_name = input_filename, camera_names=cameras)
PROTOPIPE = {}
if load_CTAMARS:
for camera in cameras:
PROTOPIPE[camera] = data[camera].query("image_extraction == 1").copy()
else:
for camera in cameras:
PROTOPIPE[camera] = data[camera]
###Output
_____no_output_____
###Markdown
- ``miss`` is here defined as the absolute value of the component transverse to the main shower axis of the distance between the true source position (0,0 in case of on-axis simulation) and the COG of the cleaned image,- it is calculated for ALL images of the gamma1 sample and added to the tables for each camera,- then we select only images for which miss < 1.0 deg in each camera
###Code
PROTOPIPE_selected = {}
for camera in cameras:
hillas_x = PROTOPIPE[camera]["hillas_x"]
hillas_y = PROTOPIPE[camera]["hillas_y"]
hillas_psi = PROTOPIPE[camera]["hillas_psi"]
# Components of the distance between center of the camera (for on-axis simulations) and reconstructed position of the image
longitudinal, transverse = camera_to_shower_coordinates(x = 0.,
y = 0.,
cog_x = hillas_x,
cog_y = hillas_y,
psi = np.deg2rad(hillas_psi))
# Take the absolute value of the transverse component
# Add miss to the dataframe
PROTOPIPE[camera]["miss"] = np.abs(transverse)
# miss < 1 deg
mask = PROTOPIPE[camera]["miss"] < 1.0
# Make a smaller dataframe with just what we actually need and select for miss < 1 deg
PROTOPIPE_selected[camera] = PROTOPIPE[camera][['hillas_intensity', 'hillas_width', 'hillas_length', 'miss']].copy()
PROTOPIPE_selected[camera] = PROTOPIPE_selected[camera][mask]
###Output
_____no_output_____
###Markdown
Counts[back to top](Table-of-contents) This is just the 2D grid that will host the LUT, showing how many events fall in each bin.In CTAMARS an additional image quality cut for direction reconstruction selects for images that fall in a bin which contains >10 images
###Code
fig = plt.figure(figsize=(12, 5))
plt.subplots_adjust(wspace = 0.25)
PROTOPIPE_COUNTS = {}
for i, camera in enumerate(cameras):
plt.subplot(1, 2, i+1)
intensity = PROTOPIPE_selected[camera]["hillas_intensity"]
width = PROTOPIPE_selected[camera]["hillas_width"]
length = PROTOPIPE_selected[camera]["hillas_length"]
PROTOPIPE_COUNTS[camera], _, _, _ = plt.hist2d(x = np.log10(intensity),
y = width / length,
bins = [CTAMARS_X_edges, CTAMARS_Y_edges],
norm = LogNorm(vmin=1.0, vmax=1.e6),
cmap = "rainbow")
plt.title(camera)
cb = plt.colorbar()
cb.set_label("Number of images")
plt.xlabel("log10(intensity) [phe]")
plt.ylabel("width / length")
plt.savefig(plots_folder / f"DirectionLUT_counts_{camera}_protopipe_{analysis_name}.png")
plt.show()
###Output
_____no_output_____
###Markdown
Counts ratio between protopipe and CTAMARS[back to top](Table-of-contents)
###Code
if load_CTAMARS:
fig = plt.figure(figsize=(15, 7))
plt.subplots_adjust(wspace = 0.4)
font_size = 20
for i, camera in enumerate(cameras):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
fxn()
RATIO = PROTOPIPE_COUNTS[camera]/CTAMARS[camera]["DirEventStatistics"].values()
plt.subplot(1, 2, i+1)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
fxn()
plt.pcolormesh(CTAMARS_X_edges,
CTAMARS_Y_edges,
np.transpose(PROTOPIPE_COUNTS[camera]/CTAMARS[camera]["DirEventStatistics"].values()),
#norm = LogNorm(vmin=1.e-1, vmax=3)
vmin=0, vmax=3
)
# add value labels for better visualization
for i, x in enumerate(CTAMARS[camera]["DirLookupTable_degrees"].axes[0].centers()):
for j, y in enumerate(CTAMARS[camera]["DirLookupTable_degrees"].axes[1].centers()):
plt.text(x,
y,
np.round(RATIO[i][j], 1),
ha='center',va='center',
size=10,color='b')
plt.title(camera, fontsize=font_size)
ax = plt.gca()
cb = plt.colorbar()
cb.set_label("Counts ratio protopipe/CTAMARS", fontsize=font_size)
ax.tick_params(axis='both', which='major', labelsize=font_size)
ax.tick_params(axis='both', which='minor', labelsize=font_size)
plt.xlabel("log10(intensity) [phe]", fontsize=font_size)
plt.ylabel("width / length", fontsize=font_size)
plt.savefig(plots_folder / f"DirectionLUT_counts_ratio_CTAMARS_{camera}_protopipe_{analysis_name}.png")
plt.show()
else:
print("CTAMARS reference data not provided.")
###Output
_____no_output_____
###Markdown
Direction LUT[back to top](Table-of-contents)
###Code
# Build the LUT by using,
# - ``np.log10(intensity)`` as ``x`` axis,
# - ``width/length`` as ``y``axis,
# For each 2D bin we calculate the ``mean of miss`` for the images which fall into that bin.
mean_miss = {}
for camera in cameras:
intensity = PROTOPIPE_selected[camera]["hillas_intensity"]
width = PROTOPIPE_selected[camera]["hillas_width"]
length = PROTOPIPE_selected[camera]["hillas_length"]
miss = PROTOPIPE_selected[camera]["miss"]
mean_miss[camera], _, _, _ = binned_statistic_2d(x = np.log10(intensity),
y = width/length,
values = miss,
statistic='mean',
bins=[CTAMARS_X_edges, CTAMARS_Y_edges]
)
# After obtaining such a 2D binned statistic we square the value of each bin.
# That is the final LUT
LUT = {}
for camera in cameras:
LUT[camera] = np.square(mean_miss[camera])
fig = plt.figure(figsize=(12, 5))
plt.subplots_adjust(wspace = 0.4)
for i, camera in enumerate(cameras):
plt.subplot(1, 2, i+1)
plt.pcolormesh(CTAMARS_X_edges,
CTAMARS_Y_edges,
np.transpose( LUT[camera] ),
norm = LogNorm(vmin = 1.e-4, vmax = 2.e-1),
cmap = "rainbow"
)
plt.title(camera)
cb = plt.colorbar()
cb.set_label("<miss>**2")
plt.xlabel("log10(intensity [phe])")
plt.ylabel("width / length")
plt.xlim(CTAMARS_X_edges[1], CTAMARS_X_edges[-2])
plt.savefig(plots_folder / f"DirectionLUT_{camera}_protopipe_{analysis_name}.png")
plt.show()
###Output
_____no_output_____
###Markdown
Direction LUT comparisons between protopipe and CTAMARS[back to top](Table-of-contents) Profile along Y-axis (width/length)[back to top](Table-of-contents) Here we select as an example the bin 9, containing images with 0.45 < width / length < 0.55
###Code
if load_CTAMARS:
plt.figure(figsize=(15,10))
h_space = 0.4 if use_seaborn else 0.2
plt.subplots_adjust(hspace=h_space, wspace=0.2)
for i, camera in enumerate(cameras):
plt.subplot(2, 2, i*2+1)
H = np.transpose(CTAMARS[camera]["DirLookupTable_degrees"].values())
plt.errorbar(x = CTAMARS[camera]["DirLookupTable_degrees"].axes[0].centers(),
y = H[9],
xerr = np.diff(CTAMARS_X_edges)/2,
yerr = None,
fmt="o",
label="CTAMARS")
plt.errorbar(x = CTAMARS[camera]["DirLookupTable_degrees"].axes[0].centers(),
y = np.transpose(LUT[camera])[9],
xerr = np.diff(CTAMARS_X_edges)/2,
yerr = None,
fmt="o",
label="protopipe")
plt.xlabel("log10(intensity) [phe]")
plt.ylabel("<miss>**2 [deg**2]")
plt.grid()
plt.yscale("log")
plt.title(camera)
plt.legend()
plt.xlim(CTAMARS_X_edges[1], CTAMARS_X_edges[-1])
plt.ylim(1.e-4, 2.e-1)
plt.subplot(2, 2, i*2+2)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
fxn()
ratio = np.transpose(LUT[camera])[9] / H[9]
plt.errorbar(x = CTAMARS[camera]["DirLookupTable_degrees"].axes[0].centers()[1:-1],
y = np.log10(ratio[1:-1]),
xerr = np.diff(CTAMARS_X_edges[1:-1])/2,
yerr = None,
ls = "-",
fmt="o",)
plt.hlines(0., plt.gca().get_xlim()[0], plt.gca().get_xlim()[1], colors="red", linestyles='solid')
plt.xlabel("log10(intensity) [phe]")
plt.ylabel("log10(protopipe / CTAMARS)")
plt.grid()
plt.title(camera)
plt.xlim(CTAMARS_X_edges[1], CTAMARS_X_edges[-1])
plt.ylim(-1,1.)
plt.savefig(plots_folder / f"DirectionLUT_yProfile_CTAMARS_{camera}_protopipe_{analysis_name}.png")
plt.show()
else:
print("CTAMARS reference data not provided.")
###Output
_____no_output_____
###Markdown
Ratio between the LUTs[back to top](Table-of-contents)
###Code
if load_CTAMARS:
# we use the same bin edges of CTAMARS reference data
fig = plt.figure(figsize=(12, 5))
plt.subplots_adjust(wspace = 0.25)
for i, camera in enumerate(cameras):
plt.subplot(1, 2, i+1)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
fxn()
ratio = LUT[camera] / CTAMARS[camera]["DirLookupTable_degrees"].values()
plt.pcolormesh(CTAMARS_X_edges,
CTAMARS_Y_edges,
np.transpose(ratio),
norm=LogNorm(vmin=1.e-1, vmax=1.e1),
cmap = "viridis"
)
plt.title(camera)
cb = plt.colorbar()
cb.set_label("<miss>**2 ratio protopipe/CTAMARS")
plt.xlabel("log10(intensity) [phe]")
plt.ylabel("width / length")
plt.xlim(CTAMARS_X_edges[1], CTAMARS_X_edges[-2])
plt.savefig(plots_folder / f"DirectionLUT_ratio_CTAMARS_{camera}_protopipe_{analysis_name}.png")
plt.show()
else:
print("CTAMARS reference data not provided.")
###Output
_____no_output_____
###Markdown
Same, but zooming in on the regime of current image quality cuts- 0.1 < width/length < 0.6- intensity > 50 phe
###Code
if load_CTAMARS:
fig = plt.figure(figsize=(12, 5))
plt.subplots_adjust(wspace = 0.25)
for i, camera in enumerate(cameras):
plt.subplot(1, 2, i+1)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
fxn()
ratio = LUT[camera] / CTAMARS[camera]["DirLookupTable_degrees"].values()
plt.pcolormesh(CTAMARS_X_edges[2:-2],
CTAMARS_Y_edges[2:13],
np.transpose(ratio)[2:12,2:-2],
norm=LogNorm(vmin=1.e-1, vmax=1.e1),
cmap = "viridis"
)
plt.title(camera)
cb = plt.colorbar()
cb.set_label("<miss>**2 ratio protopipe/CTAMARS")
plt.xlabel("log10(intensity) [phe]")
plt.ylabel("width / length")
plt.savefig(plots_folder / f"DirectionLUT_counts_ratio_zoomed_CTAMARS_{camera}_protopipe_{analysis_name}.png")
plt.show()
else:
print("CTAMARS reference data not provided.")
###Output
_____no_output_____ |
covid_stats/eurostat.ipynb | ###Markdown
eurostat death data weekly * https://appsso.eurostat.ec.europa.eu/nui/show.do?dataset=demo_r_mweek3&lang=en* https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Weekly_death_statisticssee alsohttps://www.euromomo.eu/graphs-and-maps
###Code
import pandas as pd
import pylab as plt
import numpy as np
import seaborn as sns
###Output
_____no_output_____
###Markdown
read raw data
###Code
raw = pd.read_csv('demo_r_mweek3_1_Data_byage.csv', thousands=',', parse_dates=['TIME'])
raw['YEAR'] = raw.TIME.str[:4].astype(int)
raw['WEEK'] = raw.TIME.str[5:].astype(int)
raw['Value'] = raw.Value.astype('float')
oldest = raw.AGE.unique()[14:19]
old = raw.AGE.unique()[12:14]
middle = raw.AGE.unique()[10:12]
young = raw.AGE.unique()[:10]
def agegroup(x):
if x in oldest:
return '>70'
elif x in old:
return '60-69'
elif x in middle:
return '50-59'
else:
return '<50'
raw['GROUP'] = raw.AGE.apply(agegroup)
raw.dtypes
###Output
_____no_output_____
###Markdown
aggregate the raw data by age and sex to get totalshttps://stackoverflow.com/questions/45436873/pandas-how-to-create-a-datetime-object-from-week-and-year
###Code
df=raw.groupby(['TIME','YEAR','WEEK','GEO']).agg({'Value':np.sum}).reset_index().replace(0,np.nan)
#create the date column
df['DATE'] = pd.to_datetime(df.YEAR.astype(str), format='%Y') + \
pd.to_timedelta(df.WEEK.mul(7).astype(str) + ' days')
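# e.g. YEAR=2020, WEEK=10 -> 2020-01-01 + 70 days = 2020-03-11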
# df.to_csv('')  # supply an output path here to save the aggregated table
###Output
_____no_output_____
###Markdown
how far does our 2020 data go in the year
###Code
df[df.YEAR==2020].WEEK.max()
p = pd.pivot_table(df,index='DATE',columns='GEO',values='Value')
#x.columns = x.columns.get_level_values(1)
print (p.columns)
p[:3]
countries1 = ['Belgium','Switzerland','Sweden','Denmark']
countries2 = ['France','Italy']
all = ['Belgium','Switzerland','Sweden','Spain','Austria','Luxembourg',
       'Finland','Portugal','Slovenia','Slovakia','Norway','Lithuania','Estonia','Czechia','Latvia']
###Output
_____no_output_____
###Markdown
seaborn plots
###Code
x = df[df.GEO.isin(countries1)]
g=sns.relplot(x='DATE',y='Value',data=x,kind='line',aspect=3,height=4,hue='GEO',estimator=np.sum,ci=None)
g.savefig('eurostat_flu_cycle.png')
###Output
_____no_output_____
###Markdown
Plot totals up to week 26 per year for a subset of countries using catplot
###Code
sub = df[df.WEEK<=50]
x = sub[sub.GEO.isin(countries1)]
g=sns.catplot(x='YEAR',y='Value',data=x,kind='bar',aspect=3,hue='GEO',estimator=np.sum,ci=None)
g.fig.suptitle('total deaths up to June by year')
g.fig.savefig('eurostat_4countries_totaldeaths.png')
sns.set_context("talk")
x = sub[sub.GEO.isin(all)]
g=sns.catplot(x='YEAR',y='Value',data=x,kind='bar',aspect=1.5,col='GEO',col_wrap=4,height=4,sharey=False,estimator=np.sum,ci=None,color='lightblue')
for axes in g.axes.flat:
axes.set_xticklabels(axes.get_xticklabels(), rotation=65, horizontalalignment='center')
g.fig.savefig('eurostat_totaldeaths_bycountry.png')
x = raw[raw.GEO=='Sweden']
g=sns.catplot(x='GEO',y='Value',data=x,kind='bar',aspect=2,hue='AGE',col='YEAR',col_wrap=2,
sharey=False,estimator=np.sum,ci=None,palette='Set2')
###Output
_____no_output_____
###Markdown
compare given period by age group
###Code
x = raw[raw.GEO.isin(all)]
#x = x[x.YEAR>2006]
x = x[(x.WEEK>40) & (x.WEEK<50)]
#remove france and italy which are incomplete for older years
x = x[~x.GEO.isin(countries2)]
g=sns.catplot(x='YEAR',y='Value',data=x,kind='bar',sharey=False,aspect=5,height=2.5,row='GROUP',estimator=np.sum,ci=None,color='darkblue')
g.fig.suptitle('Eurostat Total deaths by age group, weeks 10-20 (selected countries)')
plt.subplots_adjust(top=0.9)
g.fig.savefig('eurostat_fluseason_deaths.png')
###Output
_____no_output_____
###Markdown
covid peak shown with mean across years
###Code
f,ax=plt.subplots(4,2,figsize=(15,9))
axs=ax.flat
def plot_trend(x,ax):
mx = x[x.YEAR!=2020]
sns.lineplot(x="WEEK", y="Value", data=mx,ax=ax)
s = x[x.YEAR==2020]
sns.lineplot(x='WEEK',y='Value',data=s, color='red',ax=ax)
s = x[x.YEAR==2018]
sns.lineplot(x='WEEK',y='Value',data=s, color='orange',ax=ax)
ax.set_xlabel('')
return
i=0
for c in all[:8]:
x = df[df.GEO==c]
g=plot_trend(x,ax=axs[i])
axs[i].set_title(c)
i+=1
plt.tight_layout()
f.savefig('eurostat_2020peak_trend.png')
###Output
_____no_output_____
###Markdown
age breakdown
###Code
#print (oldest,old,middle,young)
cats = {'>70':oldest,'60-69':old,'50-59':middle,'<50':young}
f,ax=plt.subplots(4,1,figsize=(10,9))
axs=ax.flat
i=0
country='Sweden'
for c in cats:
x = raw[(raw.GEO==country)]
x = x[x.AGE.isin(cats[c])]
x = x.groupby(['TIME','YEAR','WEEK','GEO']).agg({'Value':np.sum}).reset_index().replace(0,np.nan)
g=plot_trend(x,ax=axs[i])
axs[i].set_title(c)
i+=1
plt.tight_layout()
f.suptitle('Deaths trend 2020 (red) - Sweden')
plt.subplots_adjust(top=0.9)
f.savefig('eurostat_2020peak_trend_byage_sweden.jpg')
###Output
_____no_output_____ |
Handwritten_to_Text.ipynb | ###Markdown
**Installing the Dataset and Dependencies**
###Code
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import tensorflow as tf
import datetime, os
import cv2
import json
from tensorflow import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import Dropout
from keras.callbacks import TensorBoard
from keras import backend
from difflib import get_close_matches
import imutils
%matplotlib inline
!pip install emnist
from emnist import list_datasets
!pip install sklearn
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
print(list_datasets())
###Output
Requirement already satisfied: emnist in /usr/local/lib/python3.7/dist-packages (0.0)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from emnist) (4.62.3)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from emnist) (1.19.5)
Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from emnist) (2.23.0)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->emnist) (1.24.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->emnist) (2021.10.8)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->emnist) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->emnist) (2.10)
Requirement already satisfied: sklearn in /usr/local/lib/python3.7/dist-packages (0.0)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.7/dist-packages (from sklearn) (1.0.1)
Requirement already satisfied: scipy>=1.1.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->sklearn) (1.4.1)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->sklearn) (3.0.0)
Requirement already satisfied: numpy>=1.14.6 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->sklearn) (1.19.5)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->sklearn) (1.1.0)
['balanced', 'byclass', 'bymerge', 'digits', 'letters', 'mnist']
###Markdown
**Pre-Processing the Data**
###Code
from emnist import extract_training_samples
(train_images, train_labels) = extract_training_samples('balanced')
print(train_images.shape)
from emnist import extract_test_samples
(test_images, test_labels) = extract_test_samples('balanced')
print(test_images.shape)
# Normalizing the values to 0-1
train_images = train_images.astype('float32')
test_images = test_images.astype('float32')
train_images /= 255
test_images /= 255
#train_images = keras.utils.to_categorical(train_images)
train_labels = keras.utils.to_categorical(train_labels)
test_labels = keras.utils.to_categorical(test_labels)
class_names = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9',
'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J',
'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T',
'U', 'V', 'W', 'X', 'Y', 'Z', 'a', 'b', 'd', 'e',
'f', 'g', 'h', 'n', 'q', 'r', 't']
X_train = train_images
X_train=X_train.reshape(X_train.shape[0],28,28,1)
X_test=test_images
X_test=X_test.reshape(X_test.shape[0],28,28,1)
def segmented_image(image_path):
image = cv2.imread(image_path)
image_gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
image_blurred = cv2.GaussianBlur(image_gray, (5, 5), 0)
#plt.imshow(image_blurred)
#plt.show()
# perform edge detection, find contours in the edge map using kernel size of 5,5
edged_image = cv2.Canny(image_blurred, 30, 150)
# detecting edges
conts = cv2.findContours(edged_image.copy(), cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
conts = imutils.grab_contours(conts)
# extracting contours of the image
# now sorting contours from left to right
def sort_contours(conts, method="left-to-right"):
# initialize the reverse flag and sort index
reverse = False
i = 0
# decision variable if we need to sort in reverse
if method == "right-to-left" or method == "bottom-to-top":
reverse = True
# sorting based on x and y coordinates
if method == "top-to-bottom" or method == "bottom-to-top":
i = 1
# construct the list of bounding boxes and sort them from top to
# bottom
boundingBoxes = [cv2.boundingRect(c) for c in conts]
(conts, boundingBoxes) = zip(*sorted(zip(conts, boundingBoxes),
key=lambda b:b[1][i], reverse=reverse))
# return the list of sorted contours and bounding boxes
return (conts, boundingBoxes)
conts = sort_contours(conts, method="left-to-right")[0]
# characters that we will be storing
characters = []
for i in conts:
# compute the bounding box of the contour
(x, y, w, h) = cv2.boundingRect(i)
# filter bounding boxes, ensuring they not very big and small
if (w >= 5 and w <= 150) and (h >= 15 and h <= 120):
# extract the character and threshold it to make the character
# it will appear as white character on a black background
# access the width and height of the thresholded image
roi = image_gray[y:y + h, x:x + w]
thresh = cv2.threshold(roi, 0, 255,cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
(tH, tW) = thresh.shape
# if the width is greater than the height, resize along the
# width dimension
if tW > tH:
thresh = imutils.resize(thresh, width=28)
# otherwise, resize along the height
else:
thresh = imutils.resize(thresh, height=28)
(tH, tW) = thresh.shape
dX = int(max(0, 28 - tW) / 2.0)
dY = int(max(0, 28 - tH) / 2.0)
# padding for resizing if dimensions are low
padded = cv2.copyMakeBorder(thresh, top=dY, bottom=dY,
left=dX, right=dX, borderType=cv2.BORDER_CONSTANT,
value=(0, 0, 0))
padded = cv2.resize(padded, (28, 28))
# prepare the padded image for classification via our
# handwriting OCR model
padded = padded.astype("float32") / 255.0
padded = np.expand_dims(padded, axis=-1)
characters.append((padded, (x, y, w, h)))
boxes = [b[1] for b in characters]
characters = np.array([c[0] for c in characters], dtype="float32")
return characters
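# A possible end-to-end usage sketch (hypothetical image path; 'model' is the network trained below):
# chars = segmented_image('sample.png')
# preds = model.predict(chars)
# print(''.join(class_names[np.argmax(p)] for p in preds))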
###Output
_____no_output_____
###Markdown
**Training The model**
###Code
image_shape=(28,28,1)
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=image_shape),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(200, activation='relu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(47, activation='softmax')])
model.compile(optimizer='adam',loss='categorical_crossentropy', metrics=['accuracy'])
logdir = os.path.join("logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
model.summary()
history=model.fit(X_train, train_labels,validation_data=(X_test,test_labels), epochs=40, batch_size=500,shuffle=True)
###Output
Epoch 1/40
226/226 [==============================] - 114s 502ms/step - loss: 1.8485 - accuracy: 0.4778 - val_loss: 0.7117 - val_accuracy: 0.7827
Epoch 2/40
226/226 [==============================] - 115s 511ms/step - loss: 0.9307 - accuracy: 0.7050 - val_loss: 0.5504 - val_accuracy: 0.8252
Epoch 3/40
226/226 [==============================] - 114s 506ms/step - loss: 0.7711 - accuracy: 0.7501 - val_loss: 0.4970 - val_accuracy: 0.8410
Epoch 4/40
226/226 [==============================] - 115s 509ms/step - loss: 0.7012 - accuracy: 0.7705 - val_loss: 0.4635 - val_accuracy: 0.8454
Epoch 5/40
226/226 [==============================] - 117s 517ms/step - loss: 0.6570 - accuracy: 0.7836 - val_loss: 0.4444 - val_accuracy: 0.8503
Epoch 6/40
226/226 [==============================] - 116s 512ms/step - loss: 0.6234 - accuracy: 0.7927 - val_loss: 0.4256 - val_accuracy: 0.8557
Epoch 7/40
226/226 [==============================] - 114s 504ms/step - loss: 0.6011 - accuracy: 0.7994 - val_loss: 0.4129 - val_accuracy: 0.8620
Epoch 8/40
226/226 [==============================] - 114s 503ms/step - loss: 0.5809 - accuracy: 0.8062 - val_loss: 0.4039 - val_accuracy: 0.8632
Epoch 9/40
226/226 [==============================] - 113s 501ms/step - loss: 0.5647 - accuracy: 0.8115 - val_loss: 0.3951 - val_accuracy: 0.8637
Epoch 10/40
226/226 [==============================] - 115s 508ms/step - loss: 0.5537 - accuracy: 0.8153 - val_loss: 0.3899 - val_accuracy: 0.8649
Epoch 11/40
226/226 [==============================] - 114s 507ms/step - loss: 0.5395 - accuracy: 0.8176 - val_loss: 0.3836 - val_accuracy: 0.8657
Epoch 12/40
226/226 [==============================] - 113s 499ms/step - loss: 0.5293 - accuracy: 0.8209 - val_loss: 0.3769 - val_accuracy: 0.8697
Epoch 13/40
226/226 [==============================] - 112s 497ms/step - loss: 0.5226 - accuracy: 0.8234 - val_loss: 0.3743 - val_accuracy: 0.8694
Epoch 14/40
226/226 [==============================] - 112s 497ms/step - loss: 0.5147 - accuracy: 0.8249 - val_loss: 0.3728 - val_accuracy: 0.8701
Epoch 15/40
226/226 [==============================] - 113s 499ms/step - loss: 0.5068 - accuracy: 0.8274 - val_loss: 0.3678 - val_accuracy: 0.8720
Epoch 16/40
226/226 [==============================] - 114s 503ms/step - loss: 0.5025 - accuracy: 0.8291 - val_loss: 0.3653 - val_accuracy: 0.8711
Epoch 17/40
226/226 [==============================] - 113s 498ms/step - loss: 0.4948 - accuracy: 0.8306 - val_loss: 0.3593 - val_accuracy: 0.8748
Epoch 18/40
226/226 [==============================] - 116s 515ms/step - loss: 0.4908 - accuracy: 0.8329 - val_loss: 0.3593 - val_accuracy: 0.8753
Epoch 19/40
226/226 [==============================] - 114s 506ms/step - loss: 0.4863 - accuracy: 0.8334 - val_loss: 0.3527 - val_accuracy: 0.8776
Epoch 20/40
226/226 [==============================] - 113s 500ms/step - loss: 0.4829 - accuracy: 0.8343 - val_loss: 0.3509 - val_accuracy: 0.8767
Epoch 21/40
226/226 [==============================] - 113s 498ms/step - loss: 0.4766 - accuracy: 0.8368 - val_loss: 0.3518 - val_accuracy: 0.8761
Epoch 22/40
226/226 [==============================] - 111s 493ms/step - loss: 0.4762 - accuracy: 0.8368 - val_loss: 0.3474 - val_accuracy: 0.8765
Epoch 23/40
226/226 [==============================] - 111s 493ms/step - loss: 0.4714 - accuracy: 0.8379 - val_loss: 0.3498 - val_accuracy: 0.8776
Epoch 24/40
226/226 [==============================] - 111s 493ms/step - loss: 0.4669 - accuracy: 0.8384 - val_loss: 0.3482 - val_accuracy: 0.8756
Epoch 25/40
226/226 [==============================] - 112s 494ms/step - loss: 0.4619 - accuracy: 0.8409 - val_loss: 0.3447 - val_accuracy: 0.8790
Epoch 26/40
226/226 [==============================] - 113s 499ms/step - loss: 0.4573 - accuracy: 0.8419 - val_loss: 0.3432 - val_accuracy: 0.8777
Epoch 27/40
226/226 [==============================] - 112s 495ms/step - loss: 0.4600 - accuracy: 0.8410 - val_loss: 0.3419 - val_accuracy: 0.8798
Epoch 28/40
226/226 [==============================] - 111s 492ms/step - loss: 0.4545 - accuracy: 0.8421 - val_loss: 0.3395 - val_accuracy: 0.8811
Epoch 29/40
226/226 [==============================] - 112s 494ms/step - loss: 0.4519 - accuracy: 0.8440 - val_loss: 0.3410 - val_accuracy: 0.8785
Epoch 30/40
226/226 [==============================] - 111s 492ms/step - loss: 0.4525 - accuracy: 0.8434 - val_loss: 0.3378 - val_accuracy: 0.8791
Epoch 31/40
226/226 [==============================] - 111s 491ms/step - loss: 0.4457 - accuracy: 0.8448 - val_loss: 0.3365 - val_accuracy: 0.8792
Epoch 32/40
226/226 [==============================] - 111s 493ms/step - loss: 0.4469 - accuracy: 0.8446 - val_loss: 0.3386 - val_accuracy: 0.8808
Epoch 33/40
226/226 [==============================] - 111s 492ms/step - loss: 0.4415 - accuracy: 0.8471 - val_loss: 0.3351 - val_accuracy: 0.8816
Epoch 34/40
226/226 [==============================] - 111s 493ms/step - loss: 0.4412 - accuracy: 0.8454 - val_loss: 0.3361 - val_accuracy: 0.8814
Epoch 35/40
226/226 [==============================] - 113s 501ms/step - loss: 0.4385 - accuracy: 0.8468 - val_loss: 0.3328 - val_accuracy: 0.8818
Epoch 36/40
226/226 [==============================] - 114s 503ms/step - loss: 0.4382 - accuracy: 0.8466 - val_loss: 0.3369 - val_accuracy: 0.8813
Epoch 37/40
226/226 [==============================] - 113s 499ms/step - loss: 0.4389 - accuracy: 0.8470 - val_loss: 0.3329 - val_accuracy: 0.8820
Epoch 38/40
226/226 [==============================] - 112s 496ms/step - loss: 0.4326 - accuracy: 0.8485 - val_loss: 0.3299 - val_accuracy: 0.8836
Epoch 39/40
226/226 [==============================] - 111s 493ms/step - loss: 0.4347 - accuracy: 0.8488 - val_loss: 0.3330 - val_accuracy: 0.8817
Epoch 40/40
226/226 [==============================] - 112s 494ms/step - loss: 0.4314 - accuracy: 0.8493 - val_loss: 0.3298 - val_accuracy: 0.8832
###Markdown
###Code
# while True: pass  # leftover blocking loop; skipped so the evaluation below can run
results = model.evaluate(X_test,test_labels)
print(results)
#%tensorboard --logdir logs
predictions = model.predict(X_test[:])
###Output
588/588 [==============================] - 7s 12ms/step - loss: 0.3298 - accuracy: 0.8832
[0.3298182785511017, 0.8832446932792664]
###Markdown
**Graphs**
###Code
acc_tr = history.history['accuracy']
loss_tr = history.history['loss']
val_acc = history.history['val_accuracy']
val_loss = history.history['val_loss']
epochs = history.epoch
plt.figure(figsize=(8,8))
plt.title('Accuracy Trends in Model Training Vs Validation')
plt.plot(epochs,acc_tr,'b->',label='Training Accuracy')
plt.plot(epochs,val_acc,'y--o',label='Validation Accuracy')
plt.legend()
plt.xlabel('Epochs')
plt.ylabel('Values')
plt.show()
plt.figure(figsize=(8,8))
plt.title('Loss Trends in Model Training Vs Validation')
plt.plot(epochs,loss_tr,'g->',label='Training Loss')
plt.plot(epochs,val_loss,'r--o',label='Validation Loss')
plt.legend()
plt.xlabel('Epochs')
plt.ylabel('Values')
plt.show()
#Needs changes in latest Tensorflow Version
'''
from sklearn.metrics import classification_report
predict_x = model.predict(X_test)
classes_x = np.argmax(predict_x, axis=1)       # predicted class indices
true_classes = np.argmax(test_labels, axis=1)  # decode the one-hot ground truth
print(classification_report(true_classes, classes_x))
'''
###Output
_____no_output_____
###Markdown
**Confusion Matrix**
###Code
import seaborn as sn
from sklearn.metrics import confusion_matrix
# rounded_labels converts the one-hot encoded test labels back to integer class indices
rounded_labels = np.argmax(test_labels, axis=1)
matrix = confusion_matrix(rounded_labels, np.argmax(model.predict(X_test[:]), axis=1))
df_cm = pd.DataFrame(matrix, index = [i for i in class_names],
                  columns = [i for i in class_names],
                  dtype=int)
plt.figure(figsize = (47,47))
sn.heatmap(df_cm, annot=True,fmt='g')
###Output
_____no_output_____
###Markdown
**Testing our own Images**
###Code
def predict(image):
  img = resize_image(image)
  img = img[:, :, 0]
  # match the training preprocessing: scale to [0, 1], then add batch and channel axes
  img = img.astype("float32") / 255.0
  img = img.reshape((1, 28, 28, 1))
  prediction = model.predict(img)
return class_names[np.argmax(prediction)]
# Resize an image without distorting the character:
# pad the character with black pixels around its borders so we end up with a 28 x 28 image
def resize_image(img, size=(28,28)):
h, w = img.shape[:2]
c = img.shape[2] if len(img.shape)>2 else 1
if h == w:
return cv2.resize(img, size, cv2.INTER_AREA)
dif = h if h > w else w
interpolation = cv2.INTER_AREA if dif > (size[0]+size[1])//2 else cv2.INTER_CUBIC
x_pos = (dif - w)//2
y_pos = (dif - h)//2
if len(img.shape) == 2:
mask = np.zeros((dif, dif), dtype=img.dtype)
mask[y_pos:y_pos+h, x_pos:x_pos+w] = img[:h, :w]
else:
mask = np.zeros((dif, dif, c), dtype=img.dtype)
mask[y_pos:y_pos+h, x_pos:x_pos+w, :] = img[:h, :w, :]
return cv2.resize(mask, size, interpolation)
def printImage(image):
print("The text inserted is: ")
img = cv2.imread(image)
plt.imshow(img)
plt.show()
def printPossibleWord(prediction):
#import dictionary (JSON file) as a list
with open('words_dictionary.json') as f:
words_dict = json.load(f)
# find the closest match word with our input
matches = get_close_matches(prediction, words_dict, n=3, cutoff=0.6)
 #find the match(es) that share the most characters with the input
max_value = 0
similar_character_counter = zerolistmaker(len(matches))
for i in range(len(matches)):
if len(matches[i]) != len(prediction):
continue
for j in range(len(prediction)):
if matches[i][j] == prediction[j]:
similar_character_counter[i] += 1
 max_value_list = []
 if similar_character_counter:
  max_value = max(similar_character_counter)
  max_value_list = [i for i, j in enumerate(similar_character_counter) if j == max_value]
# Print the possible word from the dictionary
print("Closest matches from the dictionary:")
for i in max_value_list:
print(matches[i].capitalize())
print("\n")
similar_character_counter = []
def zerolistmaker(n):
listofzeros = [0] * n
return listofzeros
###Output
_____no_output_____
###Markdown
**Input custom words**
###Code
def make_predictions(test_image_path):
chars = segmented_image(image_path=test_image_path)
preds=model.predict(chars)
preds=np.argmax(preds,axis=1)
predicted_phrase= str()
 for i in preds:
  # map each predicted class index back to its character and build up the phrase
  predicted_phrase = predicted_phrase + class_names[i]
print("Model's Prediction:", predicted_phrase.capitalize())
printPossibleWord(predicted_phrase.capitalize())
test_1 = '/content/1.png'
printImage(test_1)
make_predictions(test_1)
test_2 = '/content/2.png'
printImage(test_2)
make_predictions(test_2)
test_3 = '/content/3.png'
printImage(test_3)
make_predictions(test_3)
test_4 = '/content/4.png'
printImage(test_4)
make_predictions(test_4)
###Output
The text inserted is:
|
medulla/Medulla7column_Neuprint_to_NeuroArch.ipynb | ###Markdown
Loading NeuroArch Database with Medulla 7 Column Dataset This tutorial provides code to load NeuroArch database with Medulla 7 Column Dataset. Requirement before running the notebook:- Installed [NeuroArch](https://github.com/fruitflybrain/neuroarch), [OrientDB Community Version](https://www.orientdb.org/download), and [pyorient](https://github.com/fruitflybrain/pyorient). The [NeuroNLP Docker image](https://hub.docker.com/r/fruitflybrain/neuronlp) and [FlyBrainLab Docker image](https://hub.docker.com/r/fruitflybrain/fbl) all have a copy of the software requirement ready.- Download the [Neuprint database dump for the Medulla 7 Column dataset](https://github.com/connectome-neuprint/neuPrint/raw/master/fib25_neo4j_inputs.zip).- Download the neuron skeletons from [ConnectomeHackathon2015 repository](https://github.com/janelia-flyem/ConnectomeHackathon2015) and rename the `skeletons` folder to `swc` and move it under the same directory as this notebook.- Download the [GSE116969 transcriptome cell expression data](https://www.ncbi.nlm.nih.gov/geo/download/?acc=GSE116969&format=file&file=GSE116969%5FdataTable7b%2Egenes%5Fx%5Fcells%5Fp%5Fexpression%2Emodeled%5Fgenes%2Etxt%2Egz) and uncompress it to the same folder.- Have 1GB free disk space (for Neuprint dump and NeuroArch database).A backup of the database created by this notebook can be downloaded [here](https://drive.google.com/file/d/1XrQWCMB6Y3ADLfWBVF8kA_44KxTVnIq7/view?usp=sharing). To restore it in OrientDB, run```/path/to/orientdb/bin/console.sh "create database plocal:../databases/medulla admin admin; restore database /path/to/medulla7column_fib25_na_v1.0_backup.zip"```
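A minimal sketch of the folder-preparation step, assuming the skeleton repository was unzipped next to this notebook (the `ConnectomeHackathon2015/skeletons` path below is an assumption about where the download landed, not something fixed by this notebook):

```python
# rename the downloaded skeleton folder so this notebook can find it under ./swc
from pathlib import Path
import shutil

skeleton_dir = Path('ConnectomeHackathon2015/skeletons')  # hypothetical download location
if skeleton_dir.exists() and not Path('swc').exists():
    shutil.move(str(skeleton_dir), 'swc')
```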
###Code
import glob
import os
import subprocess
import csv
from collections import Counter
import numpy as np
import pandas as pd
from tqdm import tqdm
import h5py
import neuroarch.models as models
import neuroarch.na as na
###Output
_____no_output_____
###Markdown
Extract Neuron and Synapse Attributes
###Code
def process(chunk):
status = np.nonzero(np.array([i == 'Traced' for i in chunk['status:string'].values]))[0]
used = chunk.iloc[status]
neurons = []
for i, row in used.iterrows():
li = [row['bodyId:long'], row['pre:int'], row['post:int'], row['status:string'],\
row['statusLabel:string'], int(row['cropped:boolean']) if not np.isnan(row['cropped:boolean']) else row['cropped:boolean'], row['instance:string'], \
row['type:string']]
neurons.append(li)
return neurons
chunksize = 100000
with open('neurons.csv', 'w') as f:
writer = csv.writer(f)
writer.writerow(['bodyID','pre','post','status','statusLabel','cropped','instance','type'])
df = pd.read_csv('fib25/Neuprint_Neurons_fib25.csv')
neurons = process(df)
writer.writerows(neurons)
neurons = pd.read_csv('neurons.csv')
used = []
swc_dir = 'swc'
for i, row in neurons.iterrows():
if os.path.exists('{}/{}.swc'.format(swc_dir, row['bodyID'])):
if isinstance(row['instance'], str):
if row['instance'] in ['glia', 'cell body']:
continue
used.append(i)
elif isinstance(row['type'], str):
used.append(i)
traced_neuron_id = neurons.iloc[used]['bodyID'].to_numpy()
chunksize = 1000000
pre_syn = np.empty((int(1e7),3), np.int64)
post_syn = np.empty((int(1e7),3), np.int64)
pre_count = 0
post_count = 0
count = 0
for chunk in pd.read_csv('fib25/Neuprint_SynapseSet_to_Synapses_fib25.csv', chunksize=chunksize):
ids = chunk[':START_ID']
pre_site = np.array([[n, int(i.split('_')[0]), int(i.split('_')[1])] \
for n,i in enumerate(ids) if i.split('_')[2] == 'pre'])
post_site = np.array([[n, int(i.split('_')[0]), int(i.split('_')[1])] \
for n,i in enumerate(ids) if i.split('_')[2] == 'post'])
pre_site_known = pre_site[np.logical_and(
np.isin(pre_site[:,1], traced_neuron_id),
np.isin(pre_site[:,2], traced_neuron_id)),0]
post_site_known = post_site[np.logical_and(
np.isin(post_site[:,1], traced_neuron_id),
np.isin(post_site[:,2], traced_neuron_id)),0]
retrieved_pre_site = chunk.iloc[pre_site_known]
pre_site = np.array([[row[':END_ID(Syn-ID)'], int(row[':START_ID'].split('_')[0]), int(row[':START_ID'].split('_')[1])] \
for i, row in retrieved_pre_site.iterrows()])
retrieved_post_site = chunk.iloc[post_site_known]
post_site = np.array([[row[':END_ID(Syn-ID)'], int(row[':START_ID'].split('_')[0]), int(row[':START_ID'].split('_')[1])] \
for i, row in retrieved_post_site.iterrows()])
if pre_site.size:
pre_syn[pre_count:pre_count+pre_site.shape[0], :] = pre_site
pre_count += pre_site.shape[0]
if post_site.size:
post_syn[post_count:post_count+post_site.shape[0], :] = post_site
post_count += post_site.shape[0]
count += chunksize
print(count, pre_count, post_count)
pre_syn = pre_syn[:pre_count,:]
post_syn = post_syn[:post_count,:]
ind = np.argsort(pre_syn[:,0])
pre_syn_sorted = pre_syn[ind, :]
ind = np.argsort(post_syn[:,0])
post_syn_sorted = post_syn[ind, :]
# extract synapse (pre-site) to synapse (post-site) connection
# use only the post synaptic site to get all the synapses because one presynaptic site can have multiple postsynaptic sites
post_syn_index = post_syn_sorted[:,0].copy()
df = pd.read_csv('fib25/Neuprint_Synapse_Connections_fib25.csv')
post_ids = df[':END_ID(Syn-ID)']
used = np.where(post_ids.isin(post_syn_index).to_numpy())[0]
connections = df.iloc[used].to_numpy()
ind = np.argsort(connections[:,1])
connections = connections[ind, :]
# extract synapse details
# with h5py.File('syn_pre_post_sorted_by_synapse_id.h5', 'r') as f:
# pre_syn_sorted = f['pre_syn_sorted'][:]
# post_syn_sorted = f['post_syn_sorted'][:]
chunksize = 100000
pre_syn_index = list(set(pre_syn_sorted[:,0].copy()))
pre_syn_index.extend(list(post_syn_sorted[:,0].copy()))
syn_index = np.array(sorted(pre_syn_index))
del pre_syn_index#, pre_syn_sorted, post_syn_sorted
synapse_array = np.empty((len(syn_index), 6), np.int64)
synapse_count = 0
count = 0
for chunk in pd.read_csv('fib25/Neuprint_Synapses_fib25.csv', chunksize=chunksize):
ids = chunk[':ID(Syn-ID)']
start_id = ids.iloc[0]
stop_id = ids.iloc[-1]
pre_start = np.searchsorted(syn_index, start_id, side='left')
pre_end = np.searchsorted(syn_index, stop_id, side='right')
 # numpy slicing clamps out-of-range bounds, so this single slice covers every case,
 # yielding an empty (zero-size) array when the chunk holds no relevant synapses
 pre_index = syn_index[pre_start:pre_end]
pre_used_synapse = chunk.loc[ids.isin(pre_index)]
li = np.empty((pre_index.size, 6), np.int64)
i = 0
for _, row in pre_used_synapse.iterrows():
location = eval(row['location:point{srid:9157}'].replace('x', "'x'").replace('y', "'y'").replace('z', "'z'"))
li[i,:] = [row[':ID(Syn-ID)'], # synpase id
0 if row['type:string'] == 'pre' else 1, #synapse type
int(row['confidence:float']*1000000), #confidence
location['x'], location['y'], location['z']]
i += 1
synapse_array[synapse_count:synapse_count+pre_index.shape[0],:] = li
synapse_count += pre_index.shape[0]
count += chunksize
print(count, len(pre_used_synapse))
synapse_array = synapse_array[:synapse_count,:]
# reorder synapses
synapse_connections = connections
ids = synapse_array[:,0]
syn_id_dict = {j: i for i, j in enumerate(ids)}
# ids = pre_syn_sorted[:,0]
# pre_syn_id_dict = {j: i for i, j in enumerate(ids)} # map syn id to pre_syn_sorted
ids = post_syn_sorted[:,0]
post_syn_id_dict = {j: i for i, j in enumerate(ids)} # map syn id to post_syn_sorted
synapse_dict = {}
wrong_synapse = 0
for i, pair in tqdm(enumerate(synapse_connections)):
pre_syn_id = pair[0]
post_syn_id = pair[1]
post_id = post_syn_id_dict[post_syn_id]
post_info = synapse_array[syn_id_dict[post_syn_id]]
post_neuron_id, pre_neuron_id = post_syn_sorted[post_id, 1:]
#if len(np.where((pre_syn_sorted == (pre_syn_id, pre_neuron_id, post_neuron_id)).all(axis=1))[0]) != 1:
# print(pre_syn_id, post_syn_id)
# pre_id = pre_syn_id_dict[pre_syn_id]
pre_info = synapse_array[syn_id_dict[pre_syn_id]]
if pre_neuron_id not in synapse_dict:
synapse_dict[pre_neuron_id] = {}
pre_dict = synapse_dict[pre_neuron_id]
if post_neuron_id not in synapse_dict[pre_neuron_id]:
pre_dict[post_neuron_id] = {'pre_synapse_ids': [],
'post_synapse_ids': [],
'pre_confidence': [],
'post_confidence': [],
'pre_x': [],
'pre_y': [],
'pre_z': [],
'post_x': [],
'post_y': [],
'post_z': [],
}
info_dict = pre_dict[post_neuron_id]
info_dict['pre_synapse_ids'].append(pre_syn_id)
info_dict['post_synapse_ids'].append(post_syn_id)
info_dict['pre_confidence'].append(pre_info[2])
info_dict['post_confidence'].append(post_info[2])
info_dict['pre_x'].append(pre_info[3])
info_dict['pre_y'].append(pre_info[4])
info_dict['pre_z'].append(pre_info[5])
info_dict['post_x'].append(post_info[3])
info_dict['post_y'].append(post_info[4])
info_dict['post_z'].append(post_info[5])
with open('synapses.csv', 'w') as f:
writer = csv.writer(f)
writer.writerow(['pre_id','post_id','N','pre_confidence','post_confidence',\
'pre_x','pre_y','pre_z','post_x','post_y','post_z'])
for pre, k in tqdm(synapse_dict.items()):
for post, v in k.items():
writer.writerow([pre, post, len(v['pre_x']), str(v['pre_confidence']), \
str(v['post_confidence']), str(v['pre_x']), str(v['pre_y']), str(v['pre_z']), \
str(v['post_x']), str(v['post_y']), str(v['post_z'])])
###Output
_____no_output_____
###Markdown
Load Data to NeuroArch
###Code
medulla = na.NeuroArch('medulla', mode = 'o')
species = medulla.add_Species('Drosophila melanogaster', stage = 'adult',
sex = 'female',
synonyms = ['fruit fly', 'common fruit fly', 'vinegar fly'])
version = 'fib25'
datasource = medulla.add_DataSource('Medulla7column', version = version,
url = 'https://www.janelia.org/project-team/flyem/research/previous-connectomes-analyzed/seven-column-connectome-fib-sem',
description = 'data obtained from https://github.com/connectome-neuprint/neuPrint/blob/922a107df827a2fedd671438595603c4d15eafa7/fib25_neo4j_inputs.zip; neuron skeleton from https://github.com/janelia-flyem/ConnectomeHackathon2015',
species = species)
medulla.default_DataSource = datasource
transcriptome_datasource = medulla.add_DataSource('GSE116969', version = '1.0',
url = 'https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE116969',
description = 'Fred P Davis, Aljoscha Nern, Serge Picard, Michael B Reiser, Gerald M Rubin, Sean R Eddy, Gilbert L Henry, A genetic, genomic, and computational resource for exploring neural circuit function. eLife 2020;9:e50901. DOI: 10.7554/eLife.50901',
species = species)
medulla.add_Neuropil('MED(L)', synonyms = ['left medulla'])
for i in range(1, 11):
medulla.add_Subregion('MED-M{}(L)'.format(i),\
synonyms = ['left medulla M{} stratum'.format(i), 'left medulla stratum M{}'.format(i)],
neuropil = 'MED(L)')
nt_df = pd.read_csv('GSE116969_dataTable7b.genes_x_cells_p_expression.modeled_genes.txt', sep = '\t', index_col = 0)
neuron_list = pd.read_csv('neurons.csv')
swc_dir = 'swc'
uname_dict = {}
columns = {}
for i, row in tqdm(neuron_list.iterrows()):
if isinstance(row['instance'], str):
if row['instance'] in ['glia', 'cell body']:
continue
elif isinstance(row['type'], str):
pass
else:
continue
bodyID = row['bodyID']
cell_type = row['type']
name = row['instance']
if not os.path.exists('{}/{}.swc'.format(swc_dir, bodyID)):
continue
if not isinstance(name, str):
if isinstance(cell_type, str):
name = '{}-{}'.format(cell_type, bodyID)
else:
cell_type = 'unknown'
name = 'unknown-{}'.format(bodyID)
else:
if not isinstance(cell_type, str):
if name.split('-')[0] == 'tan':
cell_type = 'tangential'
if name not in uname_dict:
uname_dict[name] = 0
uname_dict[name] += 1
name = '{}-{}'.format(name, uname_dict[name])
elif name.split('-')[0] == 'out':
cell_type = 'output'
elif name in ['Tm23/24', 'Tm23/24-F', 'Dm8-out', 'TmY16?']:
cell_type = name
if name not in uname_dict:
uname_dict[name] = 0
uname_dict[name] += 1
name = '{}-{}'.format(name, uname_dict[name])
else:
cell_type = name.split('-')[0]
else:
if name in ['Tm23/24', 'Tm23/24-F', 'Dm8-out', 'TmY16?']:
if name not in uname_dict:
uname_dict[name] = 0
uname_dict[name] += 1
name = '{}-{}'.format(name, uname_dict[name])
if ' home' in name:
name = name.replace(' home', '-home')
info = {}
column = name.split('-')[-1]
circuit = None
if (len(column) == 1 and column.isalpha()) or column == 'home':
circuit = columns.get(column, None)
if circuit is None:
circuit = medulla.add_Circuit('Column {}'.format(column), 'Column', neuropil = 'MED(L)')
columns[column] = circuit
neurotransmitter = []
if cell_type in nt_df.columns or cell_type == 'R8':
if cell_type in ['R{}'.format(i) for i in range(1,7)]:
gene_expression_type = 'R1_6'
elif cell_type == 'R8':
gene_expression_type = 'R8_Rh5'
else:
gene_expression_type = cell_type
if nt_df[gene_expression_type]['Hdc'] > 0.9:
neurotransmitter.append('histamine')
if nt_df[gene_expression_type]['Gad1'] > 0.9:
neurotransmitter.append('GABA')
if nt_df[gene_expression_type]['VAChT'] > 0.9:
neurotransmitter.append('acetylcholine')
if nt_df[gene_expression_type]['VGlut'] > 0.9:
neurotransmitter.append('glutamate')
if nt_df[gene_expression_type]['ple'] > 0.9 \
and nt_df[gene_expression_type]['Ddc'] > 0.9 \
and nt_df[gene_expression_type]['Vmat'] > 0.9 \
and nt_df[gene_expression_type]['DAT'] > 0.9:
neurotransmitter.append('dopamine')
medulla.add_Neuron(name, # uname
cell_type, # name
referenceId = str(bodyID), #referenceId
info = info if len(info) else None,
morphology = {'type': 'swc', 'filename': '{}/{}.swc'.format(swc_dir, bodyID), 'scale': 0.001*10},
neurotransmitters = neurotransmitter if len(neurotransmitter) else None,
neurotransmitters_datasources = [transcriptome_datasource]*len(neurotransmitter) if len(neurotransmitter) else None,
circuit = circuit)
neurons = medulla.sql_query('select from Neuron').nodes_as_objs
# set the cache so there is no need for database access.
for neuron in neurons:
medulla.set('Neuron', neuron.uname, neuron, medulla.default_DataSource)
neuron_ref_to_obj = {int(neuron.referenceId): neuron for neuron in neurons}
synapse_df = pd.read_csv('synapses.csv')
for i, row in tqdm(synapse_df.iterrows()):
pre_neuron = neuron_ref_to_obj[row['pre_id']]
post_neuron = neuron_ref_to_obj[row['post_id']]
pre_conf = np.array(eval(row['pre_confidence']))/1e6
post_conf = np.array(eval(row['post_confidence']))/1e6
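 # NHP counts the synapses whose pre- and post-site confidences are both at least 0.7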
NHP = np.sum(np.logical_and(post_conf>=0.7, pre_conf>=0.7))
content = {'type': 'swc'}
content['x'] = [round(i, 3) for i in (np.array(eval(row['pre_x'])+eval(row['post_x']))/1000.*10).tolist()]
content['y'] = [round(i, 3) for i in (np.array(eval(row['pre_y'])+eval(row['post_y']))/1000.*10).tolist()]
content['z'] = [round(i, 3) for i in (np.array(eval(row['pre_z'])+eval(row['post_z']))/1000.*10).tolist()]
content['r'] = [0]*len(content['x'])
content['parent'] = [-1]*(len(content['x'])//2) + [i+1 for i in range(len(content['x'])//2)]
content['identifier'] = [7]*(len(content['x'])//2) + [8]*(len(content['x'])//2)
content['sample'] = [i+1 for i in range(len(content['x']))]
content['confidence'] = [round(i, 3) for i in pre_conf.tolist()] + [round(i, 3) for i in post_conf.tolist()]
medulla.add_Synapse(pre_neuron, post_neuron, N = row['N'], NHP = NHP,
morphology = content)
#arborization = arborization)
###Output
_____no_output_____
###Markdown
Figure out Arborization Data from Loaded Synapse Positions
###Code
def strata_arborization(z):
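 # z-axis boundaries separating the ten medulla strata (M1-M10); np.digitize assigns each synapse depth to a stratum bin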
sep = np.array([20, 28, 35, 38, 42, 44, 55, 58, 70])
return Counter(np.digitize(z, sep))
neuron_dict = {}
for neuron in neurons:
neuron_dict[neuron.uname] = {'axons': Counter(), 'dendrites': Counter(), 'obj': neuron}
for neuron in tqdm(neurons):
outgoing_synapses = medulla.sql_query("""select expand(out('SendsTo')) from {}""".format(neuron._id)).nodes_as_objs
for synapse in outgoing_synapses:
morphology = [n for n in synapse.out('HasData') if isinstance(n, models.MorphologyData)][0]
arborization = []
arborization.append({'type': 'neuropil',
'synapses': {'MED(L)': len(morphology.x)}})
s = strata_arborization(morphology.z[:(len(morphology.z)//2)])
arborization.append({'type': 'subregion',
'synapses': {'MED-M{}(L)'.format(k+1): v for k, v in s.items()}})
neuron_dict[neuron.uname]['axons'] += s
neuron_dict[synapse.out('SendsTo')[0].uname]['dendrites'] += s
medulla.add_synapse_arborization(synapse, arborization)
for neuron in tqdm(neurons):
arborization = []
arborization.append({'type': 'neuropil',
'dendrites': {'MED(L)': sum(neuron_dict[neuron.uname]['dendrites'].values())},
'axons': {'MED(L)': sum(neuron_dict[neuron.uname]['axons'].values())}})
arborization.append({'type': 'subregion',
'dendrites': {'MED-M{}(L)'.format(k+1): v for k, v in neuron_dict[neuron.uname]['dendrites'].items()},
'axons': {'MED-M{}(L)'.format(k+1): v for k, v in neuron_dict[neuron.uname]['axons'].items()}})
medulla.add_neuron_arborization(neuron, arborization)
###Output
_____no_output_____ |
matplotlib3d-scatter-plots.ipynb | ###Markdown
3D scatter plot and related plotsIf this notebook is not in active (runnable) form, go to [here](https://github.com/fomightez/3Dscatter_plot-binder) and press `launch binder`.(This notebook also works in sessions launched from [here](https://github.com/fomightez/Python_basics_4nanocourse).)------If you haven't used one of these notebooks before, they're basically web pages in which you can write, edit, and run live code. They're meant to encourage experimentation, so don't feel nervous. Just try running a few cells and see what happens!. Some tips: Code cells have boxes around them. To run a code cell, click on the cell and either click the button on the toolbar above, or then hit Shift+Enter. The Shift+Enter combo will also move you to the next cell, so it's a quick way to work through the notebook. Selecting from the menu above the toolbar, Cell > Run All is a shortcut to trigger attempting to run all the cells in the notebook. While a cell is running a * appears in the square brackets next to the cell. Once the cell has finished running the asterisk will be replaced with a number. In most cases you'll want to start from the top of notebook and work your way down running each cell in turn. Later cells might depend on the results of earlier ones. To edit a code cell, just click on it and type stuff. Remember to run the cell once you've finished editing. ---- 3D scatter plot Matplotlib-basedBased on [3D Scatterplot page](https://python-graph-gallery.com/370-3d-scatterplot/) from Yan Holtz's [Python Graph Gallery](https://python-graph-gallery.com/).
###Code
%matplotlib inline
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# Dataset
df=pd.DataFrame({'X': range(1,101), 'Y': np.random.randn(100)*15+range(1,101), 'Z': (np.random.randn(100)*15+range(1,101))*2 })
# plot
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(df['X'], df['Y'], df['Z'], c='skyblue', s=60)
ax.view_init(50, 185)
plt.show()
###Output
_____no_output_____
###Markdown
Using `%matplotlib notebook` in the classic notebook results in an rotatable, 3D view. Once you have a good view you can stop the interactivity by pressing on the blue stop button in the upper right and then in the next cell run the following code to get the values to put in `ax.view_init()` to reproduce that view point later:```pythonprint('ax.azim {}'.format(ax.azim))print('ax.elev {}'.format(ax.elev))```Change `%matplotlib notebook` back to `%matplotlib inline` for static view. **At present, in JupyterLab 3.1.1, all the code on this page will also display plots if you use `%matplotlib inline`.** Otherwise, if trying `%matplotlib notebook` in JupyterLab, you'll see errors about `IPython not defined` when trying to run the cells on this page in JupyterLab.The same holds for other plots below on this page. A more thorough Matplotlib exampleNext example, based on [Part 3](https://jovianlin.io/data-visualization-seaborn-part-3/) of Jovian Lin's 3-part series [Data Visualization with Seaborn](https://jovianlin.io/data-visualization-seaborn-part-1/); however that section acknowledges the solution is based on Matplotlib:
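For example, a minimal sketch of re-applying a captured viewpoint to the existing axes (the `30`/`185` values below are placeholders for whatever your own `ax.elev`/`ax.azim` printout showed):

```python
# reproduce a previously captured viewpoint: view_init takes the elevation first, then the azimuth
ax.view_init(elev=30, azim=185)
plt.show()
```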
###Code
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from mpl_toolkits.mplot3d import Axes3D
!curl -OL https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv
!curl -OL https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-white.csv
red_wine = pd.read_csv('winequality-red.csv', sep=';')
white_wine = pd.read_csv('winequality-white.csv', sep=';')
wines = pd.concat([red_wine,white_wine], ignore_index=True)
print("red wines:",len(red_wine))
print("white wines:",len(white_wine))
print("wines:",len(wines))
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111, projection='3d')
xs = wines['residual sugar']
ys = wines['fixed acidity']
zs = wines['alcohol']
ax.scatter(xs, ys, zs, s=50, alpha=0.6, edgecolors='w')
ax.set_xlabel('Residual Sugar')
ax.set_ylabel('Fixed Acidity')
ax.set_zlabel('Alcohol')
plt.show()
###Output
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 84199 100 84199 0 0 207k 0 --:--:-- --:--:-- --:--:-- 206k
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 258k 100 258k 0 0 843k 0 --:--:-- --:--:-- --:--:-- 841k
red wines: 1599
white wines: 4898
wines: 6497
###Markdown
The earlier parts of the code in the cell above were built-based on the earlier parts of that series. 3D, maptlotlib-based examples with a legendBased on https://stackoverflow.com/a/60621783/8508004
###Code
%matplotlib inline
import seaborn as sns, numpy as np, pandas as pd, random
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
sns.set_style("whitegrid", {'axes.grid' : False})
fig = plt.figure(figsize=(6,6))
ax = Axes3D(fig)
x = np.random.uniform(1,20,size=20)
y = np.random.uniform(1,100,size=20)
z = np.random.uniform(1,100,size=20)
g = ax.scatter(x, y, z, c=x, marker='o', depthshade=False, cmap='Paired')
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.set_zlabel('Z Label')
# produce a legend with the unique colors from the scatter
legend = ax.legend(*g.legend_elements(), loc="lower center", title="X Values", borderaxespad=-10, ncol=4)
ax.add_artist(legend)
plt.show()
###Output
_____no_output_____
###Markdown
If you want to see the possibilities for `cmap`, enter some nonsensical text as `cmap` and the resulting error message will list the valid colormap names. `viridis` is one of them.
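If you would rather list them directly instead of reading an error message, a minimal sketch using Matplotlib's own colormap registry:

```python
import matplotlib.pyplot as plt

# print every registered colormap name; any of these strings can be passed as `cmap`
print(sorted(plt.colormaps()))
```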
###Code
%matplotlib inline
import seaborn as sns, numpy as np, pandas as pd, random
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
sns.set_style("whitegrid", {'axes.grid' : False})
fig = plt.figure(figsize=(6,6))
ax = Axes3D(fig)
x = np.random.uniform(1,20,size=20)
y = np.random.uniform(1,100,size=20)
z = np.random.uniform(1,100,size=20)
g = ax.scatter(x, y, z, c=x, marker='o', depthshade=False, cmap='viridis')
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.set_zlabel('Z Label')
# produce a legend with the unique colors from the scatter
legend = ax.legend(*g.legend_elements(), loc="lower center", title="X Values", borderaxespad=-10, ncol=4)
ax.add_artist(legend)
plt.show()
###Output
_____no_output_____
###Markdown
That last `cmap` is a gradient; if you'd prefer the legend to be a color bar showing that gradient, adjust the last code cell to read like this:
###Code
%matplotlib inline
import seaborn as sns, numpy as np, pandas as pd, random
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
sns.set_style("whitegrid", {'axes.grid' : False})
fig = plt.figure(figsize=(7.5,6))
ax = Axes3D(fig)
x = np.random.uniform(1,20,size=20)
y = np.random.uniform(1,100,size=20)
z = np.random.uniform(1,100,size=20)
g = ax.scatter(x, y, z, c=x, marker='o', depthshade=False, cmap='viridis')
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.set_zlabel('Z Label')
# produce a legend with a gradient colorbar on the right, based on https://stackoverflow.com/a/5495912/8508004
clb = fig.colorbar(g)
clb.ax.set_title('X') #adding title based on https://stackoverflow.com/a/33740567/8508004
plt.show()
###Output
_____no_output_____
###Markdown
Note the width is also adjusted up (`figsize=(6,6)` to `figsize=(7.5,6)`) to account for the additional colorbar legend on the right side of the resulting plot figure. ------ 2D, Seaborn-based approach, better? Based on [Part 3](https://jovianlin.io/data-visualization-seaborn-part-3/) of Jovian Lin's 3-part series [Data Visualization with Seaborn](https://jovianlin.io/data-visualization-seaborn-part-1/): >"The better alternative — using Seaborn + toggle the size via the s parameter:" The earlier parts of the code below are based on the earlier parts of the series.
###Code
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from mpl_toolkits.mplot3d import Axes3D
!curl -OL https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv
!curl -OL https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-white.csv
red_wine = pd.read_csv('winequality-red.csv', sep=';')
white_wine = pd.read_csv('winequality-white.csv', sep=';')
wines = pd.concat([red_wine,white_wine], ignore_index=True)
print("red wines:",len(red_wine))
print("white wines:",len(white_wine))
print("wines:",len(wines))
plt.scatter(x = wines['fixed acidity'],
y = wines['alcohol'],
s = wines['residual sugar']*25, # <== 😀 Look here!
alpha=0.4,
edgecolors='w')
plt.xlabel('Fixed Acidity')
plt.ylabel('Alcohol')
plt.title('Wine Alcohol Content - Fixed Acidity - Residual Sugar', y=1.05);
###Output
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 84199 100 84199 0 0 297k 0 --:--:-- --:--:-- --:--:-- 297k
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 258k 100 258k 0 0 838k 0 --:--:-- --:--:-- --:--:-- 835k
red wines: 1599
white wines: 4898
wines: 6497
###Markdown
2D Using Seaborn. However, the code isn't actually using Seaborn (or at least not current Seaborn code) despite what the source material says; it seems to use Matplotlib still. I have added use of Seaborn below.
###Code
%matplotlib inline
import pandas as pd
import seaborn as sns; sns.set()
import matplotlib.pyplot as plt
!curl -OL https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv
!curl -OL https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-white.csv
red_wine = pd.read_csv('winequality-red.csv', sep=';')
white_wine = pd.read_csv('winequality-white.csv', sep=';')
wines = pd.concat([red_wine,white_wine], ignore_index=True)
print("red wines:",len(red_wine))
print("white wines:",len(white_wine))
print("wines:",len(wines))
ax = sns.scatterplot(x=wines['fixed acidity'], y=wines['alcohol'], size=wines['residual sugar'],
sizes=(25, 1450), alpha=0.4, legend = False, data=wines)
plt.title('Wine Alcohol Content - Fixed Acidity - Residual Sugar', y=1.05);
###Output
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 84199 100 84199 0 0 319k 0 --:--:-- --:--:-- --:--:-- 319k
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 258k 100 258k 0 0 792k 0 --:--:-- --:--:-- --:--:-- 792k
red wines: 1599
white wines: 4898
wines: 6497
###Markdown
2D, seaborn-based with legend
###Code
%matplotlib inline
import pandas as pd
import seaborn as sns; sns.set()
import matplotlib.pyplot as plt
!curl -OL https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv
!curl -OL https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-white.csv
red_wine = pd.read_csv('winequality-red.csv', sep=';')
white_wine = pd.read_csv('winequality-white.csv', sep=';')
wines = pd.concat([red_wine,white_wine], ignore_index=True)
print("red wines:",len(red_wine))
print("white wines:",len(white_wine))
print("wines:",len(wines))
ax = sns.scatterplot(x=wines['fixed acidity'], y=wines['alcohol'], size=wines['residual sugar'],
sizes=(25, 1450), alpha=0.4, data=wines)
#plt.legend(loc='best')
ax.legend(loc='center left', bbox_to_anchor=(1.25, 0.5), ncol=1) # based on https://stackoverflow.com/a/53737271/8508004
plt.title('Wine Alcohol Content - Fixed Acidity - Residual Sugar', y=1.05);
###Output
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 84199 100 84199 0 0 248k 0 --:--:-- --:--:-- --:--:-- 247k
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 258k 100 258k 0 0 814k 0 --:--:-- --:--:-- --:--:-- 814k
red wines: 1599
white wines: 4898
wines: 6497
|
notebook/.ipynb_checkpoints/HW3-checkpoint.ipynb | ###Markdown
1. Correctly handle missing values in the dataset, with a paragraph explanation justifying your choice- Firstly set all missing values to -1 for distinguish the auto-filled value from the original value in the csv. - And these fields will be fixed in the later part if needed using the strategies from the table below.| column | fix || ------------------------------------------------------------ | ----------------------------------------- || Are you self-employed? | ok || How many employees does your company or organization have? | some missing, drop this feature for now. || Is your employer primarily a tech company/organization? | missing some, set default to 0 || Is your primary role within your company related to tech/IT? | missing a lot, drop this feature || Does your employer provide mental health benefits as part of healthcare coverage? | merge to access to mental health || Do you know the options for mental health care available under your employer-provided coverage? | merge to access to mental health || Has your employer ever formally discussed mental health (for example, as part of a wellness campaign or other official communication)? | merge to access to mental health || Does your employer offer resources to learn more about mental health concerns and options for seeking help? | merge to access to mental health || Is your anonymity protected if you choose to take advantage of mental health or substance abuse treatment resources provided by your employer? | merge to access to mental health || If a mental health issue prompted you to request a medical leave from work, asking for that leave would be:Do you think that discussing a mental health disorder with your employer would have negative consequences? | merge to access to mental health || Do you think that discussing a physical health issue with your employer would have negative consequences? | merge to access to mental health || Would you feel comfortable discussing a mental health disorder with your coworkers? | merge to access to mental health || Would you feel comfortable discussing a mental health disorder with your direct supervisor(s)? | merge to access to mental health || Do you feel that your employer takes mental health as seriously as physical health? | merge to access to mental health || Have you heard of or observed negative consequences for co-workers who have been open about mental health issues in your workplace? | merge to access to mental health || Do you have medical coverage (private insurance or state-provided) which includes treatment of  mental health issues? | drop ,too many missing || Do you know local or online resources to seek help for a mental health disorder? | drop ,too many missing || If you have been diagnosed or treated for a mental health disorder, do you ever reveal this to clients or business contacts? | drop ,too many missing || If you have revealed a mental health issue to a client or business contact, do you believe this has impacted you negatively? | drop ,too many missing || If you have been diagnosed or treated for a mental health disorder, do you ever reveal this to coworkers or employees? | drop ,too many missing || If you have revealed a mental health issue to a coworker or employee, do you believe this has impacted you negatively? | drop ,too many missing || Do you believe your productivity is ever affected by a mental health issue? 
| drop ,too many missing || If yes, what percentage of your work time (time performing primary or secondary job functions) is affected by a mental health issue? | drop ,too many missing || Do you have previous employers? | 0/1,keep || Have your previous employers provided mental health benefits? | merge to previous access to mental health || Were you aware of the options for mental health care provided by your previous employers? | merge to previous access to mental health || Did your previous employers ever formally discuss mental health (as part of a wellness campaign or other official communication)? | merge to previous access to mental health || Did your previous employers provide resources to learn more about mental health issues and how to seek help? | merge to previous access to mental health || Was your anonymity protected if you chose to take advantage of mental health or substance abuse treatment resources with previous employers? | merge to previous access to mental health || Do you think that discussing a mental health disorder with previous employers would have negative consequences? | merge to previous access to mental health || Do you think that discussing a physical health issue with previous employers would have negative consequences? | merge to previous access to mental health || Would you have been willing to discuss a mental health issue with your previous co-workers? | merge to previous access to mental health || Would you have been willing to discuss a mental health issue with your direct supervisor(s)? | merge to previous access to mental health || Did you feel that your previous employers took mental health as seriously as physical health? | merge to previous access to mental health || Did you hear of or observe negative consequences for co-workers with mental health issues in your previous workplaces? | merge to previous access to mental health || Would you be willing to bring up a physical health issue with a potential employer in an interview? | drop || Why or why not? | drop || Would you bring up a mental health issue with a potential employer in an interview? | merge to attitude toward mental illness || Why or why not? | drop || Do you feel that being identified as a person with a mental health issue would hurt your career? | merge to attitude toward mental illness || Do you think that team members/co-workers would view you more negatively if they knew you suffered from a mental health issue? | merge to attitude toward mental illness || How willing would you be to share with friends and family that you have a mental illness? | merge to attitude toward mental illness || Have you observed or experienced an unsupportive or badly handled response to a mental health issue in your current or previous workplace? | merge to attitude toward mental illness || Have your observations of how another individual who discussed a mental health disorder made you less likely to reveal a mental health issue yourself in your current workplace? | merge to attitude toward mental illness || Do you have a family history of mental illness? | merge to || Have you had a mental health disorder in the past? | Yes/no/maybe -> change to 1,0,0.5 || Do you currently have a mental health disorder? | Yes/no/maybe -> change to 1,0,0.5 || If yes, what condition(s) have you been diagnosed with? | drop || If maybe, what condition(s) do you believe you have? | drop || Have you been diagnosed with a mental health condition by a medical professional? | Yes/no || If so, what condition(s) were you diagnosed with? | drop, custom column || Have you ever sought treatment for a mental health issue from a mental health professional? | 1/0 || If you have a mental health issue, do you feel that it interferes with your work when being treated effectively? | drop || If you have a mental health issue, do you feel that it interferes with your work when NOT being treated effectively? | drop || What is your age? | no missing int || What is your gender? | map to is_male, is_female. || What country do you live in? | drop, high cardinality || What country do you work in? | drop, high cardinality || What US state or territory do you work in? | drop, high cardinality || Which of the following best describes your work position? | drop, high cardinality || Do you work remotely? | sometimes ->1, always->2, never->0 |
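A minimal sketch of that first step, assuming the raw survey responses have already been loaded into `dataframe` (as in the code cells below); the cleanup cells that follow work off the raw `dataframe`, so treat this purely as an illustration of the stated strategy:

```python
# flag every missing answer with a sentinel of -1 so auto-filled entries
# can be told apart from genuine responses during the later cleanup
dataframe = dataframe.fillna(-1)
```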
###Code
column_keep = ["Is your employer primarily a tech company/organization?","What is your age?","Have you ever sought treatment for a mental health issue from a mental health professional?","Have you been diagnosed with a mental health condition by a medical professional?","Do you have previous employers?","Are you self-employed?"]
clean_DF = dataframe[column_keep].copy()
clean_DF.rename(columns = {'What is your age?':'age',
'Have you ever sought treatment for a mental health issue from a mental health professional?':'sought',
'Have you been diagnosed with a mental health condition by a medical professional?':'diagnosed',
'Do you have previous employers?':'previous-employer',
'Are you self-employed?':'self-employeed',
'Is your employer primarily a tech company/organization?':'tech-company'}, inplace = True)
clean_DF['diagnosed'] = clean_DF['diagnosed'].map({'Yes':1,'No':0})
clean_DF['tech-company'] = clean_DF['tech-company'].map({1:1}).fillna(0)
clean_DF['have-disorder'] = dataframe['Do you currently have a mental health disorder?'].map({"Yes":1,"No":0,"Maybe":0.5}).fillna(0)
clean_DF['have-disorder-past'] = dataframe['Have you had a mental health disorder in the past?'].map({"Yes":1,"No":0,"Maybe":0.5}).fillna(0)
clean_DF['remote']=dataframe['Do you work remotely?'].map({'Always':1,'Sometimes':0.5,'Never':0})
# merge attitude toward mental illness (this and the previous-access merge below follow the same process as the access-to-mental-health merge)
clean_DF['attitude']=dataframe['Would you bring up a mental health issue with a potential employer in an interview?'].map({"Yes":1,"No":0,"Maybe":0.5}).fillna(0) + \
dataframe['Do you feel that being identified as a person with a mental health issue would hurt your career?'].map({"No, I don't think it would":1,"Yes, I think it would":0,"Maybe":0.5}).fillna(0) + \
dataframe['Do you think that team members/co-workers would view you more negatively if they knew you suffered from a mental health issue?'].map({"No, they do not":1,"Yes, I think they would":0,"Maybe":0.5}).fillna(0) + \
dataframe['How willing would you be to share with friends and family that you have a mental illness?'].map({"Yes":1,"No":0,"Maybe":0.5}).fillna(0) + \
dataframe['Have you observed or experienced an unsupportive or badly handled response to a mental health issue in your current or previous workplace?'].str.contains('Yes', regex=False).map({True:1}).fillna(0) + \
dataframe['Have your observations of how another individual who discussed a mental health disorder made you less likely to reveal a mental health issue yourself in your current workplace?'].str.contains('Yes', regex=False).map({True:1}).fillna(0)
# merge previous access to mental health (same process as the access-to-mental-health merge below)
clean_DF['previous_access']= dataframe['Was your anonymity protected if you chose to take advantage of mental health or substance abuse treatment resources with previous employers?'].str.contains('Yes', regex=False).map({True:1}).fillna(0)
# merge access to mental health
clean_DF['access']= dataframe['Does your employer provide mental health benefits as part of healthcare coverage?'].map({"Yes":1}).fillna(0) + \
dataframe['Do you know the options for mental health care available under your employer-provided coverage?'].map({"Yes":1}).fillna(0) + \
dataframe['Has your employer ever formally discussed mental health (for example, as part of a wellness campaign or other official communication)?'].map({"Yes":1}).fillna(0) + \
dataframe['Does your employer offer resources to learn more about mental health concerns and options for seeking help?'].map({"Yes":1}).fillna(0) + \
dataframe['Is your anonymity protected if you choose to take advantage of mental health or substance abuse treatment resources provided by your employer?'].map({"Yes":1}).fillna(0) + \
dataframe['If a mental health issue prompted you to request a medical leave from work, asking for that leave would be:'].map({"Very easy":5,"Somewhat easy":4,"Neither easy nor difficult":3,"Somewhat difficult":2,"Very difficult":1}).fillna(1) + \
dataframe['Do you think that discussing a mental health disorder with your employer would have negative consequences?'].map({"Yes":0}).fillna(1) + \
dataframe['Do you think that discussing a physical health issue with your employer would have negative consequences?'].map({"Yes":0}).fillna(1) + \
dataframe['Would you feel comfortable discussing a mental health disorder with your coworkers?'].map({"Yes":1}).fillna(0) + \
dataframe['Would you feel comfortable discussing a mental health disorder with your direct supervisor(s)?'].map({"Yes":1}).fillna(0) + \
dataframe['Do you feel that your employer takes mental health as seriously as physical health?'].map({"Yes":1}).fillna(0) + \
dataframe['Have you heard of or observed negative consequences for co-workers who have been open about mental health issues in your workplace?'].map({"Yes":0}).fillna(1)
print(clean_DF)
###Output
tech-company age sought diagnosed previous-employer self-employeed \
0 1.0 39 0 1 1 0
1 1.0 29 1 1 1 0
2 1.0 38 1 0 1 0
3 0.0 43 1 1 1 1
4 0.0 43 1 1 1 0
... ... ... ... ... ... ...
1428 0.0 34 1 0 1 1
1429 0.0 56 0 1 0 1
1430 1.0 52 1 1 1 0
1431 0.0 30 0 1 1 0
1432 1.0 25 0 0 0 0
have-disorder have-disorder-past remote attitude previous_access \
0 0.0 1.0 0.5 1.0 0.0
1 1.0 1.0 0.0 1.0 1.0
2 0.0 0.5 1.0 3.0 0.0
3 1.0 1.0 0.5 1.0 0.0
4 1.0 1.0 0.5 2.5 0.0
... ... ... ... ... ...
1428 0.0 0.0 0.5 1.0 0.0
1429 0.0 0.0 0.5 0.0 0.0
1430 0.5 1.0 0.5 2.0 0.0
1431 1.0 0.5 0.5 2.0 0.0
1432 1.0 1.0 0.5 0.0 0.0
access
0 9.0
1 13.0
2 6.0
3 4.0
4 7.0
... ...
1428 4.0
1429 4.0
1430 10.0
1431 7.0
1432 5.0
[1433 rows x 12 columns]
###Markdown
2. Correctly reducing the number of features you use, with a paragraph explanation justifying your choice. Keep some highly relevant features (e.g. `column_keep`), and merge the multiple features that are highly correlated with each other into three major features: attitude toward mental illness, access to mental health, and previous access to mental health. Features with a high missing-value rate are dropped (as can be seen in the strategy table above).
###Code
clean_DF['gender']=dataframe['What is your gender?'].str.lower().map({"male":1,"m":1,"f":2,"female":2}).fillna(0)
clean_DF['gender']
###Output
_____no_output_____
###Markdown
3. Correctly reduce the cardinality of one feature
###Code
clean_DF['company-scale'] = dataframe['How many employees does your company or organization have?'].map({"1-5":1,"6-25":2,"26-100":3,"100-500":4,"500-1000":5,"More than 1000":6}).fillna(0)
clean_DF['company-scale']
###Output
_____no_output_____
###Markdown
4. Correctly scale/normalize at least one feature
###Code
clean_DF['is_anxiety_disorder'] = dataframe['If yes, what condition(s) have you been diagnosed with?'].str.contains('Anxiety', regex=False).map({True:1}).fillna(0)
clean_DF['is_mood_disorder'] = dataframe['If yes, what condition(s) have you been diagnosed with?'].str.contains('Mood', regex=False).map({True:1}).fillna(0)
clean_DF['is_mood_disorder'],clean_DF['is_anxiety_disorder']
###Output
_____no_output_____ |
demo_linear.ipynb | ###Markdown
RoboGraph: This is the demo for the submission of the paper __Certified Robustness of Graph Convolution Networks for Graph Classification under Topological Attacks__. Before running the demo, please make sure all the required packages are installed. Detailed instructions are provided in [README.md](./README.md).
###Code
import torch
import numpy as np
import os.path as osp
import tempfile
from torch_geometric.datasets import TUDataset
from torch_geometric.data import DataLoader
from torch_geometric.data.makedirs import makedirs
from robograph.model.gnn import GC_NET, train, eval
from tqdm.notebook import tqdm
from robograph.utils import process_data, cal_logits
from robograph.attack.admm import admm_solver
from robograph.attack.cvx_env_solver import cvx_env_solver
from robograph.attack.dual import dual_solver
from robograph.attack.greedy_attack import Greedy_Attack
from robograph.attack.utils import calculate_Fc
###Output
_____no_output_____
###Markdown
Graph classification with linear activation function
###Code
torch.manual_seed(0)
np.random.seed(0)
# prepare dataset
ds_name = 'ENZYMES'
path = osp.join(tempfile.gettempdir(), 'data', ds_name)
save_path = osp.join(tempfile.gettempdir(), 'data', ds_name, 'saved')
if not osp.isdir(save_path):
makedirs(save_path)
dataset = TUDataset(path, name=ds_name, use_node_attr=True)
dataset = dataset.shuffle()
train_size = len(dataset) // 10 * 3
val_size = len(dataset) // 10 * 2
train_dataset = dataset[:train_size]
val_dataset = dataset[train_size: train_size + val_size]
test_dataset = dataset[train_size + val_size:]
# prepare dataloader
train_loader = DataLoader(train_dataset, batch_size=20)
val_loader = DataLoader(val_dataset, batch_size=20)
test_loader = DataLoader(test_dataset, batch_size=20)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# create model
model = GC_NET(hidden=64,
n_features=dataset.num_features,
n_classes=dataset.num_classes,
act='linear',
pool='avg',
dropout=0.).to(device)
###Output
_____no_output_____
###Markdown
Training a vanilla model
###Code
best=0
for epoch in tqdm(range(200)):
loss_all = train(model, train_loader)
train_acc = eval(model, train_loader)
val_acc = eval(model, val_loader)
if val_acc >= best:
best = val_acc
torch.save(model.state_dict(), osp.join(save_path, "result.pk"))
tqdm.write("epoch {:03d} ".format(epoch+1) +
"train_loss {:.4f} ".format(loss_all) +
"train_acc {:.4f} ".format(train_acc) +
"val_acc {:.4f} ".format(val_acc))
test_acc = eval(model, test_loader, testing=True, save_path=save_path)
print("test_acc {:.4f}".format(test_acc))
###Output
_____no_output_____
###Markdown
Robustness certificate
###Code
W = model.conv.weight.detach().cpu().numpy().astype(np.float64)
U = model.lin.weight.detach().cpu().numpy().astype(np.float64)
k = dataset.num_classes
# counter of certifiably robust and vulnerable
robust_dual = 0
robust_cvx = 0
vul_admm = 0
vul_admm_g = 0
vul_greedy = 0
# counter of correct classification
correct = 0
# attacker settings
strength = 3
delta_g = 10
# settings for the solvers
dual_params = dict(iter=200, nonsmooth_init='random')
cvx_params = dict(iter=400, lr=0.3, verbose=0, constr='1+2+3',
activation='linear', algo='swapping', nonsmooth_init='subgrad')
admm_params = dict(iter=200, mu=1)
for data in tqdm(test_dataset, desc='across graphs'):
A, X, y = process_data(data)
deg = A.sum(1)
n_nodes = A.shape[0]
n_edges = np.count_nonzero(A) // 2
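 # local (per-node) perturbation budget: only nodes within `strength` of the maximum degree get a positive budget, capped at n_nodes - 1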
delta_l = np.minimum(np.maximum(deg - np.max(deg) + strength, 0), n_nodes - 1).astype(int)
# delta_g
logits = cal_logits(A, X@W, U, act='linear')
c_pred = logits.argmax()
if c_pred != y:
continue
correct += 1
fc_vals_orig = [0] * k
fc_vals_dual = [0] * k
fc_vals_cvx = [0] * k
fc_vals_admm = [0] * k
fc_vals_admm_g = [0] * k
fc_vals_greedy = [0] * k
for c in tqdm(range(k), desc='across labels', leave=False):
if c == y:
continue
u = U[y] - U[c]
XW = X@W
# fc_val_orig
fc_vals_orig[c] = calculate_Fc(A, XW, u / n_nodes)
# fc_val_dual
dual_sol = dual_solver(A, XW, u / n_nodes, delta_l=delta_l, delta_g=delta_g, **dual_params)
fc_vals_dual[c] = dual_sol['opt_f']
# fc_val_cvx
        cvx_sol = cvx_env_solver(A, XW, u / n_nodes, delta_l=delta_l, delta_g=delta_g, **cvx_params)
fc_vals_cvx[c] = cvx_sol['opt_f']
# fc_val_admm
admm_params['init_B'] = dual_sol['opt_A']
admm_sol = admm_solver(A, XW, u / n_nodes, delta_l=delta_l, delta_g=delta_g, **admm_params)
fc_vals_admm[c] = admm_sol['opt_f']
# fc_val_admm_g: admm + greedy
attack = Greedy_Attack(A, XW, u / n_nodes, delta_l=delta_l, delta_g=delta_g)
if np.array_equal(admm_sol['opt_A'], admm_sol['opt_A'].T):
admm_A = admm_sol['opt_A']
else:
admm_A = np.minimum(admm_sol['opt_A'], admm_sol['opt_A'].T)
admm_g_sol = attack.attack(admm_A) # init from admm
fc_vals_admm_g[c] = admm_g_sol['opt_f']
# fc_val_greedy
attack = Greedy_Attack(A, XW, u / n_nodes, delta_l=delta_l, delta_g=delta_g)
greedy_sol = attack.attack(A) # init from A
fc_vals_greedy[c] = greedy_sol['opt_f']
if np.min(fc_vals_dual) >= 0:
robust_dual += 1
if np.min(fc_vals_cvx) >= 0:
robust_cvx += 1
if np.min(fc_vals_admm) < 0:
vul_admm += 1
if np.min(fc_vals_admm_g) < 0:
vul_admm_g += 1
if np.min(fc_vals_greedy) < 0:
vul_greedy += 1
print('dataset {}'.format(ds_name),
'strength {:02d}'.format(strength),
'delta_g {:02d}'.format(delta_g),
'dual {:.2f}'.format(robust_dual / correct),
'cvx {:.2f}'.format(robust_cvx / correct),
'admm rate {:.2f}'.format(vul_admm / correct),
'admm_g rate {:.2f}'.format(vul_admm_g / correct),
'greedy rate {:.2f}'.format(vul_greedy / correct),)
###Output
_____no_output_____
###Markdown
Warm start from adversarial sample by greedy method
###Code
strength = 3
for idx, data in tqdm(enumerate(train_dataset), desc='adversarial examples'):
A, X, y = process_data(data)
deg = A.sum(1)
n_nodes = A.shape[0]
delta_l = np.minimum(np.maximum(deg - np.max(deg) + strength, 0), n_nodes - 1).astype(int)
delta_g = n_nodes * np.max(delta_l)
logits = cal_logits(A, X@W, U, act='linear')
c_pred = logits.argmax()
fc_vals_greedy = [0] * k
fc_A_greedy = [A] * k
for c in range(k):
u = U[y] - U[c]
XW = X@W
''' greedy attack '''
attack = Greedy_Attack(A, XW, u / n_nodes, delta_l=delta_l, delta_g=delta_g)
greedy_sol = attack.attack(A) # init from A
fc_vals_greedy[c] = greedy_sol['opt_f']
fc_A_greedy[c] = greedy_sol['opt_A']
pick_idx = np.argmin(fc_vals_greedy)
train_dataset[idx].edge_index = torch.tensor(fc_A_greedy[pick_idx].nonzero())
torch.save(train_dataset, osp.join(save_path, 'adv_set.pk'))
###Output
_____no_output_____
###Markdown
Robust linear model
###Code
model = GC_NET(hidden=64,
n_features=dataset.num_features,
n_classes=dataset.num_classes,
act='linear',
pool='avg',
dropout=0.).to(device)
adv = torch.load(osp.join(save_path, 'adv_set.pk'))
adv_loader = DataLoader(adv + train_dataset, batch_size=20)
best = 0
for epoch in tqdm(range(200), desc='epoch'):
loss_all = train(model, train_loader, robust=True, adv_loader=adv_loader, lamb=0.5)
train_acc = eval(model, train_loader, robust=True)
val_acc = eval(model, val_loader, robust=True)
if val_acc >= best:
best = val_acc
torch.save(model.state_dict(), osp.join(save_path, 'result_robust.pk'))
# tqdm.write("epoch {:03d} ".format(epoch+1) +
# "train_loss {:.4f} ".format(loss_all) +
# "train_acc {:.4f} ".format(train_acc) +
# "val_acc {:.4f} ".format(val_acc))
test_acc = eval(model, test_loader, testing=True, save_path=save_path, robust=True)
print("test_acc {:.4f}".format(test_acc))
###Output
_____no_output_____
###Markdown
Robustness certificate with robust model
###Code
W = model.conv.weight.detach().cpu().numpy().astype(np.float64)
U = model.lin.weight.detach().cpu().numpy().astype(np.float64)
k = dataset.num_classes
# counter of certifiably robust and vulnerable
robust_dual = 0
robust_cvx = 0
vul_admm = 0
vul_admm_g = 0
vul_greedy = 0
# counter of correct classification
correct = 0
# attacker settings
strength = 3
delta_g = 10
# setting for solvers
dual_params = dict(iter=200, nonsmooth_init='random')
cvx_params = dict(iter=400, lr=0.3, verbose=0, constr='1+2+3',
activation='linear', algo='swapping', nonsmooth_init='subgrad')
admm_params = dict(iter=200, mu=1)
for data in tqdm(test_dataset, desc='across graphs'):
A, X, y = process_data(data)
deg = A.sum(1)
n_nodes = A.shape[0]
n_edges = np.count_nonzero(A) // 2
delta_l = np.minimum(np.maximum(deg - np.max(deg) + strength, 0), n_nodes - 1).astype(int)
# delta_g
logits = cal_logits(A, X@W, U, act='linear')
c_pred = logits.argmax()
if c_pred != y:
continue
correct += 1
fc_vals_orig = [0] * k
fc_vals_dual = [0] * k
fc_vals_cvx = [0] * k
fc_vals_admm = [0] * k
fc_vals_admm_g = [0] * k
fc_vals_greedy = [0] * k
for c in tqdm(range(k), desc='across labels', leave=False):
if c == y:
continue
u = U[y] - U[c]
XW = X@W
# fc_val_orig
fc_vals_orig[c] = calculate_Fc(A, XW, u / n_nodes)
# fc_val_dual
dual_sol = dual_solver(A, XW, u / n_nodes, delta_l=delta_l, delta_g=delta_g, **dual_params)
fc_vals_dual[c] = dual_sol['opt_f']
# fc_val_cvx
        cvx_sol = cvx_env_solver(A, XW, u / n_nodes, delta_l=delta_l, delta_g=delta_g, **cvx_params)
fc_vals_cvx[c] = cvx_sol['opt_f']
# fc_val_admm
admm_params['init_B'] = dual_sol['opt_A']
admm_sol = admm_solver(A, XW, u / n_nodes, delta_l=delta_l, delta_g=delta_g, **admm_params)
fc_vals_admm[c] = admm_sol['opt_f']
# fc_val_admm_g: admm + greedy
attack = Greedy_Attack(A, XW, u / n_nodes, delta_l=delta_l, delta_g=delta_g)
if np.array_equal(admm_sol['opt_A'], admm_sol['opt_A'].T):
admm_A = admm_sol['opt_A']
else:
admm_A = np.minimum(admm_sol['opt_A'], admm_sol['opt_A'].T)
admm_g_sol = attack.attack(admm_A) # init from admm
fc_vals_admm_g[c] = admm_g_sol['opt_f']
# fc_val_greedy
attack = Greedy_Attack(A, XW, u / n_nodes, delta_l=delta_l, delta_g=delta_g)
greedy_sol = attack.attack(A) # init from A
fc_vals_greedy[c] = greedy_sol['opt_f']
if np.min(fc_vals_dual) >= 0:
robust_dual += 1
if np.min(fc_vals_cvx) >= 0:
robust_cvx += 1
if np.min(fc_vals_admm) < 0:
vul_admm += 1
if np.min(fc_vals_admm_g) < 0:
vul_admm_g += 1
if np.min(fc_vals_greedy) < 0:
vul_greedy += 1
print('dataset {}'.format(ds_name),
'strength {:02d}'.format(strength),
'delta_g {:02d}'.format(delta_g),
'dual {:.2f}'.format(robust_dual / correct),
'cvx {:.2f}'.format(robust_cvx / correct),
'admm rate {:.2f}'.format(vul_admm / correct),
'admm_g rate {:.2f}'.format(vul_admm_g / correct),
'greedy rate {:.2f}'.format(vul_greedy / correct),)
###Output
_____no_output_____ |
_build/jupyter_execute/materials/04-Operators.ipynb | ###Markdown
**Course Announcements**Due Dates:- **CL2** due Wednesday (11:59 PM)- **A1** now available; due next Mon 1/17 (11:59 PM) - originally a typo in Q10, instructions referred to `student_b` twice, third one should be `student_c` (`assert`s would make it clear we wanted `student_c`)Notes:- Office Hours Start today!- Waitlist: I've reached out to staff- Anaconda demo- **CL1** has been (re-)graded- Pre-course survey EC has been posted; responses to survey sent **Pre-Course Survey Summary**- N=347 (338 unique)- 70.1% of the class has never programmed before- Common hobbies/activities: Surfing, Netflix/shows, cooking/baking, sports/working out, music/kpop, work - Common futures: Psychology, Neuroscience, Physician, Happy, Content, Employed- Common note: Nervous/anxious scared, but excited and/or willing to work hard **Q&A**Q: What's the difference between print() and just typing the variable name by itself to display the output? A: For our purposes they're very similar. One small difference is that `print()` will *always* print out whatever is in the parentheses; typing the variable by itself will only show what the variable stores if it is the last thing in the cell.Q: Do the numbers on the side (that appear once you run the code) show up for you (the grader)? Like if I make a mistake and have to run the code multiple times so now I have a large number or something, is that bad? A: We can see them when you submit. However, it doesn't matter to me how long it takes you to understand...just that you understand! So, definitely do not worry about/stress over this!Q: How are hidden tests created? What functions are employed? Does it require some new libraries? I have heard of hidden tests in coding assignments before but have never "seen" one and see how it is implemented to check a piece of code. A: Occasionally I'll import new libraries. But, the hidden tests are typically just additional `assert` statements, like the ones you can see in your assignments. That said, when assignments are graded, you'll see all assert statements, hidden ones included...so after A1 is graded, you'll have a better sense for the hidden tests!Q: Do the hidden tests affect anything more than just checking our work? Is the verify button basically the same thing? A: The validate button only checks that you passed the *visible* tests. And, nope, hidden tests are just there to check your answers.Q: How would you be able to fix it if you accidentally delete a cell? Is there a way to start over an assignment if needed? A: You can use the "Undo Delete Cell" from the Edit menu at top. To completely restart an assignment, rename the folder where the assignment is stored...and then you'll be able to fetch a fresh copy of the assignment from the assignments tab on datahub.Q: I want to know what each color means when we encode. Sometimes it’s red, green, and mostly black and I am sure other colors will come up. A: I'll point these out as we go! For now, variables you create are black. Functions that exist in python are green. Reserved keywords are bold and green. Red is typically used to point out an issue/something you want to fix.Q: If we want to write notes on jupyterhub, how do we change the color of the notes? A: https://campuswire.com/c/G9193CB28/feed/10 Q: What is the purpose of the raise NotImplemented(error)? A: To indicate that this is where you're supposed to put your code. You should delete this line and replace it with your code.Q: Do we need an underscore for a variable name if it is only one word.
Example "flowers" A: NoQ: Why does python not have variable types? A: It does, but it is a dynamically-typed programming language.Q: How can we use Anaconda to access the course content, since it's not necessarily connected to UCSD's datahub? Would we download each individual page from the course website and upload it to our own Jupyter notebook? A: Downloading from the website would be the best way unless you're familiar with git/GitHub. Operators- assignment- math- logic- comparison- identity- membership Operators are special symbols in Python that carry out arithmetic or logical computation. Assignment Operator Python uses = for assignment.
###Code
my_var = 1
###Output
_____no_output_____
###Markdown
Math Operators- `+`, `-`, `*`, `/` for addition, subtraction, multiplication, & division- `**` for exponentiation & `%` for modulus (remainder)- `//` for floor division (integer division) Python uses the mathematical operators +, -, *, / for 'add', 'subtract', 'multiply', and 'divide', respectively.
###Code
print(2 + 3)
div_result = 4 / 2
print(div_result)
type(div_result)
###Output
_____no_output_____
###Markdown
Order of OperationsMathematical operators follow the rules for order of operations. - follow the rules for order of operations.- parentheses specify which order you want to occur first
###Code
order_operations = 3 + 16 / 2
print(order_operations)
###Output
_____no_output_____
###Markdown
To specify that you want the addition to occur first, you would use parentheses.
###Code
specify_operations = (3 + 16) / 2
print(specify_operations)
###Output
_____no_output_____
###Markdown
Clicker Question 1What would be the value stored in `my_value`? Note: Best to think about it before running the code to ensure you understand.
###Code
my_value = (3 + 2) + 16 / (4 / 2)
my_value
###Output
_____no_output_____
###Markdown
- A) 7.0- B) 10.5- C) 13.0- D) 20.0- E) Produces an error More Math Python also has ** for exponentiation and % for remainder (called modulus). These also return numbers.
###Code
# 2 to the power 3
2 ** 3
# remainder of 17 divided by 7
17 % 7
###Output
_____no_output_____
###Markdown
Clicker Question 2What would be the value stored in `remainder`?
###Code
remainder = 16 % 5
remainder
###Output
_____no_output_____
###Markdown
- A) 0- B) 1- C) 3- D) 3.2- E) Produces an error Clicker Question 3What would be the value stored in `modulo_time`?
###Code
modulo_time = 4 * 2 % 5
modulo_time
###Output
_____no_output_____
###Markdown
- A) 0- B) 1- C) 3- D) 3.2- E) Produces an error RemainderHow to get Python to tell you the integer and the remainder when dividing?
###Code
a = 17 // 7
b = 17 % 7
print(a, 'remainder', b)
###Output
_____no_output_____
###Markdown
`//` is an operator for floor division (integer division) Logical (Boolean) operators- use Boolean logic- logical operators: `and`, `or`, and `not` Booleans are named after the British mathematician George Boole. He first formulated Boolean algebra, which is a set of rules for how to reason with and combine these values. This is the basis of all modern computer logic. Python has and, or and not for boolean logic. These operators often return booleans, but can return any Python value. - `and` : True if both are true- `or` : True if at least one is true- `not` : True only if false
###Code
True and True
True or True
True and not False
not False
# two nots cancel one another out
not (not True)
###Output
_____no_output_____
###Markdown
Capitalization matters
###Code
# this will give you an error
# 'TRUE' is not a boolean
# 'True' is
TRUE and TRUE
###Output
_____no_output_____
###Markdown
Clicker Question 4How will the following boolean expression evaluate:
###Code
(6 < 10) and (4 == 4)
###Output
_____no_output_____
###Markdown
- A) True- B) False- C) None- D) This code will fail Comparison Operators Python has comparison operators ==, !=, <, >, <=, and >= for value comparisons. These operators return booleans. - `==` : values are equal- `!=` : values are not equal- `<` : value on left is less than value on right- `>` : value on left is greater than value on right- `<=` : value on left is less than *or equal to* value on right- `>=` : value on left is greater than or equal to value on the right
###Code
a = 12
b = 13
a > b
True == True
True != False
'aa' == 'aa'
12 <= 13
###Output
_____no_output_____
###Markdown
Clicker Question 5Assume you're writing a videogame that will only slay the dragon only if our magic lightsabre sword is charged to 90 or higher and we have 100 or more energy units in our protective shield. Start with the code in the following cell. Replace `---` with operators or values that will evaluate to `True` when the cell is run (and slay the dragon!).- A) I did it!- B) I tried but am stuck.- C) I'm unsure where to start
###Code
## EDIT CODE HERE
sword_charge = ---
shield_energy = ---
(sword_charge ---) and (shield_energy ---)
###Output
_____no_output_____
###Markdown
Identity Operators Identity operators are used to check if two values (or variables) are located in the same part of memory. Python uses is and is not to compare identity. These operators return booleans. - `is` : True if both refer to the same object- `is not` : True if they do not refer to the same object
###Code
a = 927
b = a
c = 927
print(a is b)
print(c is a)
###Output
_____no_output_____
###Markdown
Two variables being equal does **not** imply that they are identical. If we wanted that second statement to evaluate to `True`, we could use `is not`...
###Code
# make a True statement
print(c is not a)
# testing for value equality
a == b == c
###Output
_____no_output_____
###Markdown
Clicker Question 6Using the variables provded below and identity operators replace `---` with code such that `true_variable` will return `True` and `false_variable` will return `False`. - A) I did it!- B) I tried but am stuck.- C) I'm unsure where to start
###Code
z = 5
x = '5'
c = 'Hello'
d = c
e = [1, 2, 3]
f = [1, 2, 3]
# EDIT CODE HERE
true_variable = ---
false_variable = ---
print(true_variable, false_variable)
###Output
_____no_output_____
###Markdown
Delving Deeper: Identity Operators A **new object** is created each time a variable makes reference to a value, but there are *a few notable exceptions*:- some simple strings- Integers between -5 and 256 (inclusive)- empty immutable containers (e.g. tuples) - we'll get to these laterWhile these may *seem* random, they exist for memory optimization in the Python implementation. Shorter and less complex strings are "interned" (share the same space in memory).The rules behind this are a bit fuzzy, so we'll just go through a few examples here. But, if you want to learn more about string interning and how Python handles this, you can read more [here](http://guilload.com/python-string-interning/).
###Code
simple_string = 'string'
simple_string2 = 'string'
simple_string is simple_string2
print(id(simple_string), id(simple_string2))
longer_string = 'really long string that just keeps going'
longer_string2 = 'really long string that just keeps going'
longer_string is longer_string2
print(id(longer_string), id(longer_string2))
d = 5
e = 5
print(id(d), id(e))
print(d is e)
###Output
_____no_output_____
###Markdown
The Python implementation front-loads an array of integers between -5 and 256, so these objects *already exist*.
###Code
# Python doesn't create a new object here
j = 5
k = 5
l = 'Hello'
m = 'Hello'
true_variable_integer = j is k
true_variable_string = l is m
print(true_variable_integer, true_variable_string)
# Python DOES create a new object here
n = 975 #greater than 256
o = 975
p = 'Hello!' #that exclamation point makes it more complex
q = 'Hello!'
false_variable_integer = n is o
false_variable_string = p is q
print(false_variable_integer, false_variable_string)
###Output
_____no_output_____
###Markdown
Clicker Question 7Using the variables provded below and identity operators replace `---` with code such that `true_variable` will return `True` and `false_variable` will return `False`. - A) I did it!- B) I tried but am stuck.- C) I'm unsure where to start
###Code
a = 5
b = 5
c = b
d = 'Hello!'
e = 'Hello!'
f = 567
g = 567
# EDIT CODE HERE
true_variable = ---
false_variable = ---
print(true_variable, false_variable)
###Output
_____no_output_____
###Markdown
Membership Operators Python uses in and not in to compare membership. These operators return booleans. Membership operators are used to check whether a value or variable is found in a sequence.Here, we'll just be checking for value membership in strings. But, we'll discuss lists, tuples, sets, and dictionaries soon. - `in` : True if value is found in the sequence- `not in` : True if value is not found in the sequence
###Code
x = 'I love COGS18!'
print('l' in x)
print('L' in x)
print('COGS' in x)
print('CSOG' in x)
print(' ' in x)
###Output
_____no_output_____
###Markdown
String Concatenation Operators sometimes do different things on different types of variables. For example, + on strings does concatenation.
###Code
'COGS' + ' 18'
'a' + 'b' + 'c'
###Output
_____no_output_____
###Markdown
Chaining Operators Operators and variables can also be chained together into arbitrarily complex expressions.
###Code
# Note that you can use parentheses to chunk sections
(13 % 7 >= 7) and ('COGS' + '18' == 'COGS18')
(13 % 7 >= 7)
('COGS' + '18' == 'COGS18')
###Output
_____no_output_____
###Markdown
Clicker Question 8How will the following expression evaluate:
###Code
2**2 >= 4 and 13%3 > 1
###Output
_____no_output_____ |
8-Labs/Lab17/src/Lab17.ipynb | ###Markdown
**Download** (right-click, save target as ...) this page as a jupyterlab notebook from: (LINK NEEDS FIXING!)[Lab17](https://atomickitty.ddns.net:8000/user/sensei/files/engr-1330-webroot/engr-1330-webbook/ctds-psuedocourse/docs/8-Labs/Lab8/Lab9_Dev.ipynb?_xsrf=2%7C1b4d47c3%7C0c3aca0c53606a3f4b71c448b09296ae%7C1623531240)___ Laboratory 17: "Towards Hypothesis Testing"
###Code
# Preamble script block to identify host, user, and kernel
import sys
! hostname
! whoami
print(sys.executable)
print(sys.version)
print(sys.version_info)
###Output
DESKTOP-EH6HD63
desktop-eh6hd63\farha
C:\Users\Farha\Anaconda3\python.exe
3.7.4 (default, Aug 9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)]
sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0)
###Markdown
Full name: R: HEX: Title of the notebook Date:  Important Terminology:__Plotting Position:__ An empirical distribution, based on a random sample from a (possibly unknown) probability distribution, obtained by plotting the exceedance (or cumulative) probability of the sample distribution against the sample value. The exceedance probability for a particular sample value is a function of sample size and the rank of the particular sample. For exceedance probabilities, the sample values are ranked from largest to smallest. The general expression in common use for plotting position is$$ P = \frac{m - b}{N + 1 -2b}\ $$where m is the ordered rank of a sample value, N is the sample size, and b is a constant between 0 and 1, depending on the plotting method.__*From:____*https://glossary.ametsoc.org/wiki/*__ __Let's work on an example. First, import the necessary packages:__
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
__Read the "lab13_data.csv" file as a dataset:__
###Code
data = pd.read_csv("lab13_data.csv")
data
###Output
_____no_output_____
###Markdown
__The dataset contains two sets of values: "Set1" and "Set2". Use descriptive functions to learn more about the sets.__
###Code
# Let's check out set1 and set2
set1 = data['Set1']
set2 = data['Set2']
print(set1)
print(set2)
set1.describe()
set2.describe()
###Output
_____no_output_____
###Markdown
__Remember the Weibull Plotting Position formula from last session. Use the Weibull Plotting Position formula to plot the set1 and set2 quantiles on the same graph.____Do they look different? How?__
###Code
def weibull_pp(sample): # Weibull plotting position function
# returns a list of plotting positions; sample must be a numeric list
weibull_pp = [] # null list to return after fill
sample.sort() # sort the sample list in place
for i in range(0,len(sample),1):
        weibull_pp.append((i+1)/(len(sample)+1)) #values from the weibull formula
return weibull_pp
#Convert to numpy arrays
set1 = np.array(set1)
set2 = np.array(set2)
#Apply the weibull pp function
set1_wei = weibull_pp(set1)
set2_wei = weibull_pp(set2)
import matplotlib.pyplot # only 'plt' was imported above, so bind the 'matplotlib' name for the matplotlib.pyplot.* calls used in these plotting cells
myfigure = matplotlib.pyplot.figure(figsize = (4,8)) # generate an object from the figure class, set aspect ratio
matplotlib.pyplot.scatter(set1_wei, set1 ,color ='blue')
matplotlib.pyplot.scatter(set2_wei, set2 ,color ='orange')
matplotlib.pyplot.xlabel("Density or Quantile Value")
matplotlib.pyplot.ylabel("Value")
matplotlib.pyplot.title("Quantile Plot for Set1 and Set2 based on Weibull Plotting Function")
matplotlib.pyplot.show()
###Output
_____no_output_____
###Markdown
__Do they look different? How?__ __Define functions for Gringorten, Cunnane, California, and Hazen Plotting Position Formulas. Overlay and Plot them all for set 1 and set2 on two different graphs.__
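Except for California (which uses $P = m/N$), these methods are all special cases of the general plotting-position formula given at the top of this lab, differing only in the constant $b$. A minimal sketch of that general form (not part of the lab solution; the per-method functions in the next cell are what the lab asks for):

```python
# general plotting-position formula P = (m - b)/(N + 1 - 2b), with a method-specific constant b:
# Weibull b = 0, Gringorten b = 0.44, Cunnane b = 0.40, Hazen b = 0.5
def general_pp(sample, b=0.0):
    ranked = sorted(sample)  # ascending ranks m = 1, ..., N
    N = len(ranked)
    return [(m - b) / (N + 1 - 2*b) for m in range(1, N + 1)]
```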
###Code
def gringorten_pp(sample): # plotting position function
# returns a list of plotting positions; sample must be a numeric list
gringorten_pp = [] # null list to return after fill
sample.sort() # sort the sample list in place
for i in range(0,len(sample),1):
gringorten_pp.append((i+1-0.44)/(len(sample)+0.12)) #values from the gringorten formula
return gringorten_pp
set1_grin = gringorten_pp(set1)
set2_grin = gringorten_pp(set2)
def cunnane_pp(sample): # plotting position function
# returns a list of plotting positions; sample must be a numeric list
cunnane_pp = [] # null list to return after fill
sample.sort() # sort the sample list in place
for i in range(0,len(sample),1):
cunnane_pp.append((i+1-0.40)/(len(sample)+0.2)) #values from the cunnane formula
return cunnane_pp
set1_cun = cunnane_pp(set1)
set2_cun = cunnane_pp(set2)
def california_pp(sample): # plotting position function
# returns a list of plotting positions; sample must be a numeric list
california_pp = [] # null list to return after fill
sample.sort() # sort the sample list in place
for i in range(0,len(sample),1):
        california_pp.append((i+1)/(len(sample))) #values from the california formula
return california_pp
set1_cal = california_pp(set1)
set2_cal = california_pp(set2)
def hazen_pp(sample): # plotting position function
# returns a list of plotting positions; sample must be a numeric list
hazen_pp = [] # null list to return after fill
sample.sort() # sort the sample list in place
for i in range(0,len(sample),1):
        hazen_pp.append((i+1-0.5)/(len(sample))) #values from the hazen formula
return hazen_pp
set1_haz = hazen_pp(set1)
set2_haz = hazen_pp(set2)
myfigure = matplotlib.pyplot.figure(figsize = (12,8)) # generate an object from the figure class, set aspect ratio
matplotlib.pyplot.scatter(set1_wei, set1 ,color ='blue',
marker ="^",
s = 50)
matplotlib.pyplot.scatter(set1_grin, set1 ,color ='red',
marker ="o",
s = 20)
matplotlib.pyplot.scatter(set1_cun, set1 ,color ='green',
marker ="s",
s = 20)
matplotlib.pyplot.scatter(set1_cal, set1 ,color ='yellow',
marker ="p",
s = 20)
matplotlib.pyplot.scatter(set1_haz, set1 ,color ='black',
marker ="*",
s = 20)
matplotlib.pyplot.xlabel("Density or Quantile Value")
matplotlib.pyplot.ylabel("Value")
matplotlib.pyplot.title("Quantile Plot for Set1 based on Weibull, Gringorton, Cunnane, California, and Hazen Plotting Functions")
matplotlib.pyplot.show()
###Output
_____no_output_____
###Markdown
__Plot a histogram of Set1 with 10 bins.__
###Code
import matplotlib.pyplot as plt
myfigure = matplotlib.pyplot.figure(figsize = (10,5)) # generate an object from the figure class, set aspect ratio
set1 = data['Set1']
set1.plot.hist(grid=False, bins=10, rwidth=1,
color='navy')
plt.title('Histogram of Set1')
plt.xlabel('Value')
plt.ylabel('Counts')
plt.grid(axis='y',color='yellow', alpha=1)
###Output
_____no_output_____
###Markdown
__Plot a histogram of Set2 with 10 bins.__
###Code
set2 = data['Set2']
set2.plot.hist(grid=False, bins=10, rwidth=1,
color='darkorange')
plt.title('Histogram of Set2')
plt.xlabel('Value')
plt.ylabel('Counts')
plt.grid(axis='y',color='yellow', alpha=1)
###Output
_____no_output_____
###Markdown
__Plot a histogram of both Set1 and Set2 and discuss the differences.__
###Code
fig, ax = plt.subplots()
data.plot.hist(density=False, ax=ax, title='Histogram: Set1 vs. Set2', bins=40)
ax.set_ylabel('Count')
ax.grid(axis='y')
###Output
_____no_output_____
###Markdown
__The cool 'seaborn' package: Another way for plotting histograms and more!__
###Code
import seaborn as sns
sns.distplot(set1,color='navy', rug=True)
sns.distplot(set2,color='darkorange', rug=True)
###Output
_____no_output_____
###Markdown
Important Terminology:__Kernel Density Estimation (KDE):__ a non-parametric way to estimate the probability density function of a random variable. Kernel density estimation is a fundamental data smoothing problem where inferences about the population are made, based on a finite data sample. This can be useful if you want to visualize just the “shape” of some data, as a kind of continuous replacement for the discrete histogram.__*From:____*https://en.wikipedia.org/wiki/Kernel_density_estimation*____*https://mathisonian.github.io/kde/* >> A SUPERCOOL Blog!____*https://www.youtube.com/watch?v=fJoR3QsfXa0* >> A Nice Intro to distplot in seaborn | Note that displot is pretty much the same thing!__
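To make the definition above concrete, a Gaussian KDE can also be computed directly with scipy; here is a minimal sketch (assuming `set1` still holds the Set1 values loaded earlier; it is separate from the seaborn plots below):

```python
from scipy.stats import gaussian_kde
import numpy as np

kde = gaussian_kde(set1)                       # bandwidth chosen automatically (Scott's rule)
grid = np.linspace(min(set1), max(set1), 200)  # points at which to evaluate the estimate
density = kde(grid)                            # smooth estimate of the probability density
```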
###Code
sns.distplot(set1,color='navy',kde=True,rug=True)
sns.distplot(set1,color='navy',kde=True)
sns.distplot(set2,color='orange',kde=True)
sns.distplot(set1,color='navy',kde=True)
###Output
_____no_output_____
###Markdown
Important Terminology:__Empirical Cumulative Distribution Function (ECDF):__ the distribution function associated with the empirical measure of a sample. This cumulative distribution function is a step function that jumps up by 1/n at each of the n data points. Its value at any specified value of the measured variable is the fraction of observations of the measured variable that are less than or equal to the specified value. __*From:____*https://en.wikipedia.org/wiki/Empirical_distribution_function*__ __Fit a Normal distribution data model to both Set1 and Set2. Plot them separately. Describe the fit.__
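Before fitting the normal model, note that the ECDF defined above takes only a couple of lines of numpy; a quick sketch (not required for the lab):

```python
import numpy as np

def ecdf(sample):
    x = np.sort(sample)                    # sorted sample values
    y = np.arange(1, len(x) + 1) / len(x)  # cumulative fraction <= each value: 1/n, 2/n, ..., 1
    return x, y
```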
###Code
set1 = data['Set1']
set2 = data['Set2']
set1 = np.array(set1)
set2 = np.array(set2)
set1_wei = weibull_pp(set1)
set2_wei = weibull_pp(set2)
# Normal Quantile Function
import math
def normdist(x,mu,sigma):
argument = (x - mu)/(math.sqrt(2.0)*sigma)
normdist = (1.0 + math.erf(argument))/2.0
return normdist
# For set1
mu = set1.mean() # Fitted Model
sigma = set1.std()
x = []; ycdf = []
xlow = 0; xhigh = 1.2*max(set1) ; howMany = 100
xstep = (xhigh - xlow)/howMany
for i in range(0,howMany+1,1):
x.append(xlow + i*xstep)
yvalue = normdist(xlow + i*xstep,mu,sigma)
ycdf.append(yvalue)
# Fitting Data to Normal Data Model
# Now plot the sample values and plotting position
myfigure = matplotlib.pyplot.figure(figsize = (7,9)) # generate an object from the figure class, set aspect ratio
matplotlib.pyplot.scatter(set1_wei, set1 ,color ='navy')
matplotlib.pyplot.plot(ycdf, x, color ='gold',linewidth=3)
matplotlib.pyplot.xlabel("Quantile Value")
matplotlib.pyplot.ylabel("Set1 Value")
mytitle = "Normal Distribution Data Model sample mean = : " + str(mu)+ " sample variance =:" + str(sigma**2)
matplotlib.pyplot.title(mytitle)
matplotlib.pyplot.show()
# For set2
mu = set2.mean() # Fitted Model
sigma = set2.std()
x = []; ycdf = []
xlow = 0; xhigh = 1.2*max(set2) ; howMany = 100
xstep = (xhigh - xlow)/howMany
for i in range(0,howMany+1,1):
x.append(xlow + i*xstep)
yvalue = normdist(xlow + i*xstep,mu,sigma)
ycdf.append(yvalue)
# Fitting Data to Normal Data Model
# Now plot the sample values and plotting position
myfigure = matplotlib.pyplot.figure(figsize = (7,9)) # generate an object from the figure class, set aspect ratio
matplotlib.pyplot.scatter(set2_wei, set2 ,color ='orange')
matplotlib.pyplot.plot(ycdf, x, color ='purple',linewidth=3)
matplotlib.pyplot.xlabel("Quantile Value")
matplotlib.pyplot.ylabel("Set2 Value")
mytitle = "Normal Distribution Data Model sample mean = : " + str(mu)+ " sample variance =:" + str(sigma**2)
matplotlib.pyplot.title(mytitle)
matplotlib.pyplot.show()
###Output
_____no_output_____
###Markdown
__Since it was an appropriate fit, we can use the normal distribution to generate another sample randomly from the same population. Use a histogram with the newly generated sets and compare them visually.__
###Code
mu1 = set1.mean()
sd1 = set1.std()
mu2 = set2.mean()
sd2 = set2.std()
set1_s = np.random.normal(mu1, sd1, 100)
set2_s = np.random.normal(mu2, sd2, 100)
data_d = pd.DataFrame({'Set1s':set1_s,'Set2s':set2_s})
fig, ax = plt.subplots()
data_d.plot.hist(density=False, ax=ax, title='Histogram: Set1 samples vs. Set2 samples', bins=40)
ax.set_ylabel('Count')
ax.grid(axis='y')
fig, ax = plt.subplots()
data_d.plot.hist(density=False, ax=ax, title='Histogram: Set1 and Set1 samples vs. Set2 and Set2 samples', bins=40)
data.plot.hist(density=False, ax=ax, bins=40)
ax.set_ylabel('Count')
ax.grid(axis='y')
###Output
_____no_output_____
###Markdown
__Use boxplots to compare the four sets. Discuss their differences.__
###Code
fig = plt.figure(figsize =(10, 7))
plt.boxplot ([set1, set1_s, set2, set2_s],1, '')
plt.show()
###Output
_____no_output_____ |
2. Improving-Deep-Neural-Networks/week3/TensorFlow_Tutorial_v3b.ipynb | ###Markdown
TensorFlow TutorialWelcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow: - Initialize variables- Start your own session- Train algorithms - Implement a Neural NetworkProgramming frameworks can not only shorten your coding time, but sometimes also perform optimizations that speed up your code. Updates If you were working on the notebook before this update...* The current notebook is version "v3b".* You can find your original work saved in the notebook with the previous version name (it may be either "TensorFlow Tutorial version 3" or "TensorFlow Tutorial version 3a".) * To view the file directory, click on the "Coursera" icon in the top left of this notebook. List of updates* forward_propagation instruction now says 'A1' instead of 'a1' in the formula for Z2; and is updated to say 'A2' instead of 'Z2' in the formula for Z3.* create_placeholders instruction refers to the data type "tf.float32" instead of float.* in the model function, the x axis of the plot now says "iterations (per fives)" instead of iterations(per tens)* In the linear_function, comments remind students to create the variables in the order suggested by the starter code. The comments are updated to reflect this order.* The test of the cost function now creates the logits without passing them through a sigmoid function (since the cost function will include the sigmoid in the built-in tensorflow function).* Updated print statements and 'expected output' that are used to check functions, for easier visual comparison. 1 - Exploring the Tensorflow LibraryTo start, you will import the library:
###Code
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict
%matplotlib inline
np.random.seed(1)
###Output
_____no_output_____
###Markdown
Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example. $$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$
###Code
y_hat = tf.constant(36, name='y_hat') # Define y_hat constant. Set to 36.
y = tf.constant(39, name='y') # Define y. Set to 39
loss = tf.Variable((y - y_hat)**2, name='loss') # Create a variable for the loss
init = tf.global_variables_initializer() # When init is run later (session.run(init)),
# the loss variable will be initialized and ready to be computed
with tf.Session() as session: # Create a session and print the output
session.run(init) # Initializes the variables
print(session.run(loss)) # Prints the loss
###Output
9
###Markdown
Writing and running programs in TensorFlow has the following steps:1. Create Tensors (variables) that are not yet executed/evaluated. 2. Write operations between those Tensors.3. Initialize your Tensors. 4. Create a Session. 5. Run the Session. This will run the operations you'd written above. Therefore, when we created a variable for the loss, we simply defined the loss as a function of other quantities, but did not evaluate its value. To evaluate it, we had to run `init=tf.global_variables_initializer()`. That initialized the loss variable, and in the last line we were finally able to evaluate the value of `loss` and print its value.Now let us look at an easy example. Run the cell below:
###Code
a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a,b)
print(c)
###Output
Tensor("Mul:0", shape=(), dtype=int32)
###Markdown
As expected, you will not see 20! You got a tensor saying that the result is a tensor that does not have the shape attribute, and is of type "int32". All you did was put it in the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it.
###Code
sess = tf.Session()
print(sess.run(c))
###Output
20
###Markdown
Great! To summarize, **remember to initialize your variables, create a session and run the operations inside the session**. Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later. To specify values for a placeholder, you can pass in values by using a "feed dictionary" (`feed_dict` variable). Below, we created a placeholder for x. This allows us to pass in a number later when we run the session.
###Code
# Change the value of x in the feed_dict
x = tf.placeholder(tf.int64, name = 'x')
print(sess.run(2 * x, feed_dict = {x: 3}))
sess.close()
###Output
6
###Markdown
When you first defined `x` you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you **feed data** to these placeholders when running the session. Here's what's happening: When you specify the operations needed for a computation, you are telling TensorFlow how to construct a computation graph. The computation graph can have some placeholders whose values you will specify only later. Finally, when you run the session, you are telling TensorFlow to execute the computation graph. 1.1 - Linear functionLet's start this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and $b$ is a random vector. **Exercise**: Compute $WX + b$ where $W, X$, and $b$ are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, here is how you would define a constant X that has shape (3,1):```pythonX = tf.constant(np.random.randn(3,1), name = "X")```You might find the following functions helpful: - tf.matmul(..., ...) to do a matrix multiplication- tf.add(..., ...) to do an addition- np.random.randn(...) to initialize randomly
###Code
# GRADED FUNCTION: linear_function
def linear_function():
"""
Implements a linear function:
Initializes X to be a random tensor of shape (3,1)
Initializes W to be a random tensor of shape (4,3)
Initializes b to be a random tensor of shape (4,1)
Returns:
result -- runs the session for Y = WX + b
"""
np.random.seed(1)
"""
Note, to ensure that the "random" numbers generated match the expected results,
please create the variables in the order given in the starting code below.
(Do not re-arrange the order).
"""
### START CODE HERE ### (4 lines of code)
X = tf.constant(np.random.randn(3,1), name = "X")
W = tf.constant(np.random.randn(4,3), name = "W")
b = tf.constant(np.random.randn(4,1), name = "b")
Y = tf.add(tf.matmul(W,X),b)
### END CODE HERE ###
# Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate
### START CODE HERE ###
sess = tf.Session()
result = sess.run(Y)
### END CODE HERE ###
# close the session
sess.close()
return result
print( "result = \n" + str(linear_function()))
###Output
result =
[[-2.15657382]
[ 2.95891446]
[-1.08926781]
[-0.84538042]]
###Markdown
*** Expected Output ***: ```result = [[-2.15657382] [ 2.95891446] [-1.08926781] [-0.84538042]]``` 1.2 - Computing the sigmoid Great! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions like `tf.sigmoid` and `tf.softmax`. For this exercise let's compute the sigmoid function of an input. You will do this exercise using a placeholder variable `x`. When running the session, you should use the feed dictionary to pass in the input `z`. In this exercise, you will have to (i) create a placeholder `x`, (ii) define the operations needed to compute the sigmoid using `tf.sigmoid`, and then (iii) run the session. ** Exercise **: Implement the sigmoid function below. You should use the following: - `tf.placeholder(tf.float32, name = "...")`- `tf.sigmoid(...)`- `sess.run(..., feed_dict = {x: z})`Note that there are two typical ways to create and use sessions in tensorflow: **Method 1:**```pythonsess = tf.Session() # Run the variables initialization (if needed), run the operationsresult = sess.run(..., feed_dict = {...})sess.close() # Close the session```**Method 2:**```pythonwith tf.Session() as sess: # run the variables initialization (if needed), run the operations result = sess.run(..., feed_dict = {...}) # This takes care of closing the session for you :)```
###Code
# GRADED FUNCTION: sigmoid
def sigmoid(z):
"""
Computes the sigmoid of z
Arguments:
z -- input value, scalar or vector
Returns:
results -- the sigmoid of z
"""
### START CODE HERE ### ( approx. 4 lines of code)
# Create a placeholder for x. Name it 'x'.
x = tf.placeholder(tf.float32, name='x')
# compute sigmoid(x)
sigmoid = tf.sigmoid(x)
# Create a session, and run it. Please use the method 2 explained above.
# You should use a feed_dict to pass z's value to x.
with tf.Session() as sess:
# Run session and call the output "result"
result = sess.run(sigmoid, feed_dict={x: z})
### END CODE HERE ###
return result
print ("sigmoid(0) = " + str(sigmoid(0)))
print ("sigmoid(12) = " + str(sigmoid(12)))
###Output
sigmoid(0) = 0.5
sigmoid(12) = 0.999994
###Markdown
*** Expected Output ***: **sigmoid(0)**0.5 **sigmoid(12)**0.999994 **To summarize, you now know how to**:1. Create placeholders2. Specify the computation graph corresponding to operations you want to compute3. Create the session4. Run the session, using a feed dictionary if necessary to specify placeholder variables' values. 1.3 - Computing the CostYou can also use a built-in function to compute the cost of your neural network. So instead of needing to write code to compute this as a function of $a^{[2](i)}$ and $y^{(i)}$ for i=1...m: $$ J = - \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log a^{ [2] (i)} + (1-y^{(i)})\log (1-a^{ [2] (i)} )\large )\small\tag{2}$$you can do it in one line of code in tensorflow!**Exercise**: Implement the cross entropy loss. The function you will use is: - `tf.nn.sigmoid_cross_entropy_with_logits(logits = ..., labels = ...)`Your code should input `z`, compute the sigmoid (to get `a`) and then compute the cross entropy cost $J$. All this can be done using one call to `tf.nn.sigmoid_cross_entropy_with_logits`, which computes$$- \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log \sigma(z^{[2](i)}) + (1-y^{(i)})\log (1-\sigma(z^{[2](i)}))\large )\small\tag{2}$$
###Code
# GRADED FUNCTION: cost
def cost(logits, labels):
"""
Computes the cost using the sigmoid cross entropy
Arguments:
logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)
labels -- vector of labels y (1 or 0)
Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels"
in the TensorFlow documentation. So logits will feed into z, and labels into y.
Returns:
cost -- runs the session of the cost (formula (2))
"""
### START CODE HERE ###
# Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines)
z = tf.placeholder(tf.float32, name='z')
y = tf.placeholder(tf.float32, name='y')
# Use the loss function (approx. 1 line)
cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=z, labels=y)
# Create a session (approx. 1 line). See method 1 above.
sess = tf.Session()
# Run the session (approx. 1 line).
cost = sess.run(cost, feed_dict={z:logits, y:labels})
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return cost
logits = np.array([0.2,0.4,0.7,0.9])
cost = cost(logits, np.array([0,0,1,1]))
print ("cost = " + str(cost))
###Output
cost = [ 0.79813886 0.91301525 0.40318605 0.34115386]
###Markdown
** Expected Output** : ```cost = [ 0.79813886 0.91301525 0.40318605 0.34115386]``` 1.4 - Using One Hot encodingsMany times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is for example 4, then you might have the following y vector which you will need to convert as follows:This is called a "one hot" encoding, because in the converted representation exactly one element of each column is "hot" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In tensorflow, you can use one line of code: - tf.one_hot(labels, depth, axis) **Exercise:** Implement the function below to take one vector of labels and the total number of classes $C$, and return the one hot encoding. Use `tf.one_hot()` to do this.
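For intuition, here is a rough numpy equivalent of what `tf.one_hot(labels, C, axis=0)` returns for the example below (just a sketch for comparison, not the graded solution):

```python
import numpy as np

labels = np.array([1, 2, 3, 0, 2, 1])
C = 4
one_hot_np = np.eye(C)[labels].T   # shape (C, m): one column per example, with a single 1 per column
```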
###Code
# GRADED FUNCTION: one_hot_matrix
def one_hot_matrix(labels, C):
"""
Creates a matrix where the i-th row corresponds to the ith class number and the jth column
corresponds to the jth training example. So if example j had a label i. Then entry (i,j)
will be 1.
Arguments:
labels -- vector containing the labels
C -- number of classes, the depth of the one hot dimension
Returns:
one_hot -- one hot matrix
"""
### START CODE HERE ###
# Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line)
C = tf.constant(C, name='C')
# Use tf.one_hot, be careful with the axis (approx. 1 line)
one_hot_matrix = tf.one_hot(labels, C, axis=0)
# Create the session (approx. 1 line)
sess = tf.Session()
# Run the session (approx. 1 line)
one_hot = sess.run(one_hot_matrix)
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return one_hot
labels = np.array([1,2,3,0,2,1])
one_hot = one_hot_matrix(labels, C = 4)
print ("one_hot = \n" + str(one_hot))
###Output
one_hot =
[[ 0. 0. 0. 1. 0. 0.]
[ 1. 0. 0. 0. 0. 1.]
[ 0. 1. 0. 0. 1. 0.]
[ 0. 0. 1. 0. 0. 0.]]
###Markdown
**Expected Output**: ```one_hot = [[ 0. 0. 0. 1. 0. 0.] [ 1. 0. 0. 0. 0. 1.] [ 0. 1. 0. 0. 1. 0.] [ 0. 0. 1. 0. 0. 0.]]``` 1.5 - Initialize with zeros and onesNow you will learn how to initialize a vector of zeros and ones. The function you will be calling is `tf.ones()`. To initialize with zeros you could use tf.zeros() instead. These functions take in a shape and return an array of that shape filled with zeros or ones, respectively. **Exercise:** Implement the function below to take in a shape and return an array of that shape filled with ones. - tf.ones(shape)
###Code
# GRADED FUNCTION: ones
def ones(shape):
"""
Creates an array of ones of dimension shape
Arguments:
shape -- shape of the array you want to create
Returns:
ones -- array containing only ones
"""
### START CODE HERE ###
# Create "ones" tensor using tf.ones(...). (approx. 1 line)
ones = tf.ones(shape)
# Create the session (approx. 1 line)
sess = tf.Session()
# Run the session to compute 'ones' (approx. 1 line)
ones = sess.run(ones)
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return ones
print ("ones = " + str(ones([3])))
###Output
ones = [ 1. 1. 1.]
###Markdown
**Expected Output:** **ones** [ 1. 1. 1.] 2 - Building your first neural network in tensorflowIn this part of the assignment you will build a neural network using tensorflow. Remember that there are two parts to implement a tensorflow model:- Create the computation graph- Run the graphLet's delve into the problem you'd like to solve! 2.0 - Problem statement: SIGNS DatasetOne afternoon, with some friends we decided to teach our computers to decipher sign language. We spent a few hours taking pictures in front of a white wall and came up with the following dataset. It's now your job to build an algorithm that would facilitate communications from a speech-impaired person to someone who doesn't understand sign language.- **Training set**: 1080 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (180 pictures per number).- **Test set**: 120 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (20 pictures per number).Note that this is a subset of the SIGNS dataset. The complete dataset contains many more signs.Here are examples for each number, along with an explanation of how we represent the labels. These are the original pictures, before we lowered the image resolution to 64 by 64 pixels. **Figure 1**: SIGNS dataset Run the following code to load the dataset.
###Code
# Loading the dataset
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
###Output
_____no_output_____
###Markdown
Change the index below and run the cell to visualize some examples in the dataset.
###Code
# Example of a picture
index = 0
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
###Output
y = 5
###Markdown
As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so.
###Code
# Flatten the training and test images
X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T
# Normalize image vectors
X_train = X_train_flatten/255.
X_test = X_test_flatten/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6)
Y_test = convert_to_one_hot(Y_test_orig, 6)
print ("number of training examples = " + str(X_train.shape[1]))
print ("number of test examples = " + str(X_test.shape[1]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
###Output
number of training examples = 1080
number of test examples = 120
X_train shape: (12288, 1080)
Y_train shape: (6, 1080)
X_test shape: (12288, 120)
Y_test shape: (6, 120)
###Markdown
**Note** that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing. **Your goal** is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one. **The model** is *LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX*. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes. 2.1 - Create placeholdersYour first task is to create placeholders for `X` and `Y`. This will allow you to later pass your training data in when you run your session. **Exercise:** Implement the function below to create the placeholders in tensorflow.
###Code
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_x, n_y):
"""
Creates the placeholders for the tensorflow session.
Arguments:
n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288)
n_y -- scalar, number of classes (from 0 to 5, so -> 6)
Returns:
X -- placeholder for the data input, of shape [n_x, None] and dtype "tf.float32"
Y -- placeholder for the input labels, of shape [n_y, None] and dtype "tf.float32"
Tips:
    - You will use None because it lets us be flexible about the number of examples used for the placeholders.
In fact, the number of examples during test/train is different.
"""
### START CODE HERE ### (approx. 2 lines)
X = tf.placeholder(tf.float32, shape=[n_x, None], name='X')
Y = tf.placeholder(tf.float32, shape=[n_y, None], name='Y')
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(12288, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))
###Output
X = Tensor("X_1:0", shape=(12288, ?), dtype=float32)
Y = Tensor("Y:0", shape=(6, ?), dtype=float32)
###Markdown
**Expected Output**: **X** Tensor("Placeholder_1:0", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1) **Y** Tensor("Placeholder_2:0", shape=(6, ?), dtype=float32) (not necessarily Placeholder_2) 2.2 - Initializing the parametersYour second task is to initialize the parameters in tensorflow.**Exercise:** Implement the function below to initialize the parameters in tensorflow. You are going to use Xavier Initialization for weights and Zero Initialization for biases. The shapes are given below. As an example, to help you, for W1 and b1 you could use: ```pythonW1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())```Please use `seed = 1` to make sure your results match ours.
###Code
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
"""
Initializes parameters to build a neural network with tensorflow. The shapes are:
W1 : [25, 12288]
b1 : [25, 1]
W2 : [12, 25]
b2 : [12, 1]
W3 : [6, 12]
b3 : [6, 1]
Returns:
parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3
"""
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 6 lines of code)
W1 = tf.get_variable("W1", [25,12288], initializer=tf.contrib.layers.xavier_initializer(seed=1))
b1 = tf.get_variable("b1", [25,1], initializer=tf.zeros_initializer())
W2 = tf.get_variable("W2", [12,25], initializer=tf.contrib.layers.xavier_initializer(seed=1))
b2 = tf.get_variable("b2", [12,1], initializer=tf.zeros_initializer())
W3 = tf.get_variable("W3", [6,12], initializer=tf.contrib.layers.xavier_initializer(seed=1))
b3 = tf.get_variable("b3", [6,1], initializer=tf.zeros_initializer())
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2,
"W3": W3,
"b3": b3}
return parameters
tf.reset_default_graph()
with tf.Session() as sess:
parameters = initialize_parameters()
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = <tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref>
b1 = <tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref>
W2 = <tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref>
b2 = <tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref>
###Markdown
**Expected Output**: **W1** **b1** **W2** **b2** As expected, the parameters haven't been evaluated yet. 2.3 - Forward propagation in tensorflow You will now implement the forward propagation module in tensorflow. The function will take in a dictionary of parameters and it will complete the forward pass. The functions you will be using are: - `tf.add(...,...)` to do an addition- `tf.matmul(...,...)` to do a matrix multiplication- `tf.nn.relu(...)` to apply the ReLU activation**Question:** Implement the forward pass of the neural network. We commented for you the numpy equivalents so that you can compare the tensorflow implementation to numpy. It is important to note that the forward propagation stops at `z3`. The reason is that in tensorflow the last linear layer output is given as input to the function computing the loss. Therefore, you don't need `a3`!
###Code
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
"""
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
W3 = parameters['W3']
b3 = parameters['b3']
### START CODE HERE ### (approx. 5 lines) # Numpy Equivalents:
Z1 = tf.add(tf.matmul(W1,X),b1) # Z1 = np.dot(W1, X) + b1
A1 = tf.nn.relu(Z1) # A1 = relu(Z1)
Z2 = tf.add(tf.matmul(W2,A1),b2) # Z2 = np.dot(W2, A1) + b2
A2 = tf.nn.relu(Z2) # A2 = relu(Z2)
Z3 = tf.add(tf.matmul(W3,A2),b3) # Z3 = np.dot(W3, A2) + b3
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
print("Z3 = " + str(Z3))
###Output
Z3 = Tensor("Add_2:0", shape=(6, ?), dtype=float32)
###Markdown
**Expected Output**: **Z3** Tensor("Add_2:0", shape=(6, ?), dtype=float32) You may have noticed that the forward propagation doesn't output any cache. You will understand why below, when we get to backpropagation. 2.4 Compute costAs seen before, it is very easy to compute the cost using:```pythontf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...))```**Question**: Implement the cost function below. - It is important to know that the "`logits`" and "`labels`" inputs of `tf.nn.softmax_cross_entropy_with_logits` are expected to be of shape (number of examples, num_classes). We have thus transposed Z3 and Y for you.- Besides, `tf.reduce_mean` takes the mean over the examples (the sum divided by the number of examples).
###Code
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
"""
Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
cost - Tensor of the cost function
"""
# to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)
logits = tf.transpose(Z3)
labels = tf.transpose(Y)
### START CODE HERE ### (1 line of code)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
print("cost = " + str(cost))
###Output
cost = Tensor("Mean:0", shape=(), dtype=float32)
###Markdown
**Expected Output**: **cost** Tensor("Mean:0", shape=(), dtype=float32) 2.5 - Backward propagation & parameter updatesThis is where you become grateful to programming frameworks. All of the backpropagation and the parameter updates are taken care of in 1 line of code, and it is very easy to incorporate this line in the model.After you compute the cost function, you will create an "`optimizer`" object. You have to call this object along with the cost when running the tf.session. When called, it will perform an optimization on the given cost with the chosen method and learning rate.For instance, for gradient descent the optimizer would be:```pythonoptimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)```To make the optimization you would do:```python_ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})```This computes the backpropagation by passing through the tensorflow graph in reverse order, from cost to inputs.**Note** When coding, we often use `_` as a "throwaway" variable to store values that we won't need to use later. Here, `_` takes on the evaluated value of `optimizer`, which we don't need (and `c` takes the value of the `cost` variable). 2.6 - Building the modelNow, you will bring it all together! **Exercise:** Implement the model. You will be calling the functions you had previously implemented.
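For the Adam optimizer that the exercise asks you to use inside the `model` function below, the pattern is identical; only the optimizer class changes (a minimal sketch):
```python
optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost)
```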
###Code
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,
num_epochs = 1500, minibatch_size = 32, print_cost = True):
"""
Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.
Arguments:
X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
Y_train -- training set labels, of shape (output size = 6, number of training examples = 1080)
X_test -- test set, of shape (input size = 12288, number of test examples = 120)
Y_test -- test set, of shape (output size = 6, number of test examples = 120)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep consistent results
seed = 3 # to keep consistent results
(n_x, m) = X_train.shape # (n_x: input size, m : number of examples in the train set)
n_y = Y_train.shape[0] # n_y : output size
costs = [] # To keep track of the cost
# Create Placeholders of shape (n_x, n_y)
### START CODE HERE ### (1 line)
X, Y = create_placeholders(n_x, n_y)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.
### START CODE HERE ### (1 line)
optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost)
### END CODE HERE ###
# Initialize all the variables
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
epoch_cost = 0. # Defines a cost related to an epoch
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the "optimizer" and the "cost"; the feed_dict should contain a minibatch for (X,Y).
### START CODE HERE ### (1 line)
_ , minibatch_cost = sess.run([optimizer,cost], feed_dict={X:minibatch_X,Y:minibatch_Y})
### END CODE HERE ###
epoch_cost += minibatch_cost / num_minibatches
# Print the cost every epoch
if print_cost == True and epoch % 100 == 0:
print ("Cost after epoch %i: %f" % (epoch, epoch_cost))
if print_cost == True and epoch % 5 == 0:
costs.append(epoch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per fives)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# lets save the parameters in a variable
parameters = sess.run(parameters)
print ("Parameters have been trained!")
# Calculate the correct predictions
correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))
return parameters
###Output
_____no_output_____
###Markdown
Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes!
###Code
parameters = model(X_train, Y_train, X_test, Y_test)
###Output
Cost after epoch 0: 1.855583
Cost after epoch 100: 1.646989
Cost after epoch 200: 1.527040
Cost after epoch 300: 1.437380
Cost after epoch 400: 1.355500
Cost after epoch 500: 1.280593
Cost after epoch 600: 1.213115
Cost after epoch 700: 1.152342
Cost after epoch 800: 1.094468
Cost after epoch 900: 1.044277
Cost after epoch 1000: 0.992679
Cost after epoch 1100: 0.942379
Cost after epoch 1200: 0.899436
Cost after epoch 1300: 0.855891
Cost after epoch 1400: 0.812784
###Markdown
**Expected Output**: **Train Accuracy** 0.999074 **Test Accuracy** 0.716667 Amazing, your algorithm can recognize a sign representing a figure between 0 and 5 with 71.7% accuracy.**Insights**:- Your model seems big enough to fit the training set well. However, given the difference between train and test accuracy, you could try to add L2 or dropout regularization to reduce overfitting. - Think about the session as a block of code to train the model. Each time you run the session on a minibatch, it trains the parameters. In total you have run the session a large number of times (1500 epochs) until you obtained well trained parameters. 2.7 - Test with your own image (optional / ungraded exercise)Congratulations on finishing this assignment. You can now take a picture of your hand and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Write your image's name in the following code 4. Run the code and check if the algorithm is right!
###Code
import scipy
from PIL import Image
from scipy import ndimage
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "thumbs_up.jpg"
## END CODE HERE ##
# We preprocess your image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
image = image/255.
my_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T
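# NOTE (environment assumption): scipy.ndimage.imread and scipy.misc.imresize were removed in newer SciPy
# releases; on such environments a roughly equivalent preprocessing step could look like this
# (kept commented out so the original cell behaviour is unchanged):
# image = np.array(Image.open(fname)) / 255.
# my_image = np.array(Image.fromarray((image * 255).astype(np.uint8)).resize((64, 64))).reshape((1, 64*64*3)).T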
my_image_prediction = predict(my_image, parameters)
plt.imshow(image)
print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction)))
###Output
Your algorithm predicts: y = 3
|
NLP/Yelp Reviews Classification.ipynb | ###Markdown
CLASSIFY YELP REVIEWS (NLP) PROBLEM STATEMENT - In this project, Natural Language Processing (NLP) strategies will be used to analyze Yelp reviews data- The number of 'stars' indicates the business rating given by a customer, ranging from 1 to 5- 'Cool', 'Useful' and 'Funny' indicate the number of cool, useful and funny votes given by other Yelp users. STEP 0: LIBRARIES IMPORT
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
STEP 1: IMPORT DATASET
###Code
yelp_df = pd.read_csv("yelp.csv")
yelp_df.head(10)
yelp_df.tail()
yelp_df.describe()
yelp_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10000 entries, 0 to 9999
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 business_id 10000 non-null object
1 date 10000 non-null object
2 review_id 10000 non-null object
3 stars 10000 non-null int64
4 text 10000 non-null object
5 type 10000 non-null object
6 user_id 10000 non-null object
7 cool 10000 non-null int64
8 useful 10000 non-null int64
9 funny 10000 non-null int64
dtypes: int64(4), object(6)
memory usage: 781.4+ KB
###Markdown
STEP 2: VISUALIZE DATASET
###Code
# Let's get the length of the messages
yelp_df['length'] = yelp_df['text'].apply(len)
yelp_df.head()
yelp_df['length'].plot(bins=100, kind='hist')
yelp_df.length.describe()
# Let's see the longest message (4997 characters)
yelp_df[yelp_df['length'] == 4997]['text'].iloc[0]
# Let's see the shortest message
yelp_df[yelp_df['length'] == 1]['text'].iloc[0]
# Let's see the message with mean length
yelp_df[yelp_df['length'] == 710]['text'].iloc[0]
sns.countplot(y = 'stars', data=yelp_df)
g = sns.FacetGrid(data=yelp_df, col='stars', col_wrap=3)
g = sns.FacetGrid(data=yelp_df, col='stars', col_wrap=5)
g.map(plt.hist, 'length', bins = 20, color = 'r')
# Let's divide the reviews into 1 and 5 stars
yelp_df_1 = yelp_df[yelp_df['stars']==1]
yelp_df_5 = yelp_df[yelp_df['stars']==5]
yelp_df_1
yelp_df_5
yelp_df_1_5 = pd.concat([yelp_df_1 , yelp_df_5])
yelp_df_1_5
yelp_df_1_5.info()
print( '1-Stars percentage =', (len(yelp_df_1) / len(yelp_df_1_5) )*100,"%")
print( '5-Stars percentage =', (len(yelp_df_5) / len(yelp_df_1_5) )*100,"%")
sns.countplot(yelp_df_1_5['stars'], label = "Count")
###Output
C:\Users\Sagar\anaconda3\envs\tensorflow-gpu\lib\site-packages\seaborn\_decorators.py:36: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
warnings.warn(
###Markdown
STEP 3: CREATE TESTING AND TRAINING DATASET/DATA CLEANING
###Code
import string
string.punctuation
Test = 'Hello Mr. Future, I am so happy to be learning AI now!!'
Test_punc_removed = [char for char in Test if char not in string.punctuation]
Test_punc_removed
# Join the characters again to form the string.
Test_punc_removed_join = ''.join(Test_punc_removed)
Test_punc_removed_join
###Output
_____no_output_____
###Markdown
STEP 3.2 REMOVE STOPWORDS
###Code
# You have to download stopwords Package to execute this command
from nltk.corpus import stopwords
stopwords.words('english')
Test_punc_removed_join
Test_punc_removed_join_clean = [word for word in Test_punc_removed_join.split() if word.lower() not in stopwords.words('english')]
Test_punc_removed_join_clean # Only important (no so common) words are left
mini_challenge = 'Here is a mini challenge, that will teach you how to remove stopwords and punctuations!'
challege = [ char for char in mini_challenge if char not in string.punctuation ]
challenge = ''.join(challege)
challenge = [ word for word in challenge.split() if word.lower() not in stopwords.words('english') ]
###Output
_____no_output_____
###Markdown
STEP 3.3 COUNT VECTORIZER EXAMPLE
###Code
from sklearn.feature_extraction.text import CountVectorizer
sample_data = ['This is the first document.','This document is the second document.','And this is the third one.','Is this the first document?']
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(sample_data)
print(vectorizer.get_feature_names())
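# note: newer scikit-learn versions rename this method to get_feature_names_out()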
print(X.toarray())
mini_challenge = ['Hello World','Hello Hello World','Hello World world world']
vectorizer_challenge = CountVectorizer()
X_challenge = vectorizer_challenge.fit_transform(mini_challenge)
print(X_challenge.toarray())
###Output
[[1 1]
[2 1]
[1 3]]
###Markdown
LET'S APPLY THE PREVIOUS THREE PROCESSES TO OUR YELP REVIEWS EXAMPLE
###Code
# Let's define a pipeline to clean up all the messages
# The pipeline performs the following: (1) remove punctuation, (2) remove stopwords
def message_cleaning(message):
Test_punc_removed = [char for char in message if char not in string.punctuation]
Test_punc_removed_join = ''.join(Test_punc_removed)
Test_punc_removed_join_clean = [word for word in Test_punc_removed_join.split() if word.lower() not in stopwords.words('english')]
return Test_punc_removed_join_clean
# Let's test the newly added function
yelp_df_clean = yelp_df_1_5['text'].apply(message_cleaning)
print(yelp_df_clean[0]) # show the cleaned up version
print(yelp_df_1_5['text'][0]) # show the original version
###Output
My wife took me here on my birthday for breakfast and it was excellent. The weather was perfect which made sitting outside overlooking their grounds an absolute pleasure. Our waitress was excellent and our food arrived quickly on the semi-busy Saturday morning. It looked like the place fills up pretty quickly so the earlier you get here the better.
Do yourself a favor and get their Bloody Mary. It was phenomenal and simply the best I've ever had. I'm pretty sure they only use ingredients from their garden and blend them fresh when you order it. It was amazing.
While EVERYTHING on the menu looks excellent, I had the white truffle scrambled eggs vegetable skillet and it was tasty and delicious. It came with 2 pieces of their griddled bread with was amazing and it absolutely made the meal complete. It was the best "toast" I've ever had.
Anyway, I can't wait to go back!
###Markdown
LET'S APPLY COUNT VECTORIZER TO OUR YELP REVIEWS EXAMPLE
###Code
from sklearn.feature_extraction.text import CountVectorizer
# Define the cleaning pipeline we defined earlier
vectorizer = CountVectorizer(analyzer = message_cleaning)
yelp_countvectorizer = vectorizer.fit_transform(yelp_df_1_5['text'])
print(vectorizer.get_feature_names())
print(yelp_countvectorizer.toarray())
yelp_countvectorizer.shape
###Output
_____no_output_____
###Markdown
STEP4: TRAINING THE MODEL WITH ALL DATASET
###Code
from sklearn.naive_bayes import MultinomialNB
NB_classifier = MultinomialNB()
label = yelp_df_1_5['stars'].values
label
NB_classifier.fit(yelp_countvectorizer, label)
testing_sample = ['amazing food! highly recommmended']
testing_sample_countvectorizer = vectorizer.transform(testing_sample)
test_predict = NB_classifier.predict(testing_sample_countvectorizer)
test_predict
testing_sample = ['shit food, made me sick']
testing_sample_countvectorizer = vectorizer.transform(testing_sample)
test_predict = NB_classifier.predict(testing_sample_countvectorizer)
test_predict
###Output
_____no_output_____
###Markdown
STEP4: DIVIDE THE DATA INTO TRAINING AND TESTING PRIOR TO TRAINING
###Code
X = yelp_countvectorizer
y = label
X.shape, y.shape
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
from sklearn.naive_bayes import MultinomialNB
NB_classifier = MultinomialNB()
NB_classifier.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
STEP5: EVALUATING THE MODEL
###Code
from sklearn.metrics import classification_report, confusion_matrix
y_predict_train = NB_classifier.predict(X_train)
y_predict_train
cm = confusion_matrix(y_train, y_predict_train)
sns.heatmap(cm, annot=True)
# Predicting the Test set results
y_predict_test = NB_classifier.predict(X_test)
cm = confusion_matrix(y_test, y_predict_test)
sns.heatmap(cm, annot=True)
print(classification_report(y_test, y_predict_test))
###Output
precision recall f1-score support
1 0.86 0.68 0.76 145
5 0.93 0.98 0.95 673
accuracy 0.92 818
macro avg 0.90 0.83 0.86 818
weighted avg 0.92 0.92 0.92 818
###Markdown
STEP 6: LET'S ADD ADDITIONAL FEATURE TF-IDF - Tf–idf stands for "Term Frequency–Inverse Document Frequency"; it is a numerical statistic used to reflect how important a word is to a document in a collection or corpus of documents. - TFIDF is used as a weighting factor during text search processes and text mining.- The intuition behind TFIDF is as follows: if a word appears several times in a given document, this word might be meaningful (more important) than other words that appeared fewer times in the same document. However, if a given word appears several times in a given document but also appears many times in other documents, it is probably just a common, frequent word such as 'I', 'am', etc. (not really important or meaningful!).- TF: Term Frequency is used to measure the frequency of term occurrence in a document: - TF(word) = Number of times the 'word' appears in a document / Total number of terms in the document- IDF: Inverse Document Frequency is used to measure how important a term is: - IDF(word) = log(Total number of documents / Number of documents with the term 'word' in it); here we use log base 10.- Example: Let's assume we have a document that contains 1000 words and the term "John" appears 20 times; the Term Frequency for the word 'John' can be calculated as follows: - TF|john = 20/1000 = 0.02- Let's calculate the IDF (inverse document frequency) of the word 'john', assuming that it appears in 50,000 documents out of a corpus of 1,000,000 documents. - IDF|john = log10(1,000,000/50,000) = log10(20) ≈ 1.3- Therefore the overall weight of the word 'john' is as follows - TF-IDF|john = 0.02 * 1.3 ≈ 0.026
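A quick back-of-the-envelope check of the 'john' example above (plain Python, using log base 10):
```python
import math

tf_john = 20 / 1000                      # term frequency: 0.02
idf_john = math.log10(1000000 / 50000)   # inverse document frequency: log10(20) ≈ 1.301
print(round(tf_john * idf_john, 3))      # TF-IDF weight ≈ 0.026
```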
###Code
yelp_countvectorizer
from sklearn.feature_extraction.text import TfidfTransformer
yelp_tfidf = TfidfTransformer().fit_transform(yelp_countvectorizer)
print(yelp_tfidf.shape)
yelp_tfidf
print(yelp_tfidf[:,:])
# Sparse matrix with all the values of TF-IDF
X = yelp_tfidf
y = label
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15)
from sklearn.naive_bayes import MultinomialNB
NB_classifier = MultinomialNB()
NB_classifier.fit(X_train, y_train)
from sklearn.metrics import classification_report, confusion_matrix
y_predict_train = NB_classifier.predict(X_train)
y_predict_train
cm = confusion_matrix(y_train, y_predict_train)
sns.heatmap(cm, annot=True)
###Output
_____no_output_____ |
oldnbs/pct-example.ipynb | ###Markdown
PCT Functions
###Code
from pct.functions import Integration
from pct.functions import IntegrationDual
from pct.functions import IndexedParameter
from pct.functions import Constant
from pct.functions import Sigmoid
from pct.functions import PassOn
from pct.putils import FunctionsList
cons = Constant(2)
integrator = Integration(gain=9, slow=10)
integrator.add_link(cons)
out = integrator()
print(out)
o=integrator.run(steps=10, verbose=True)
import numpy as np
input=np.ones((3, 3))*2
input
cons = Constant(input)
integrator = Integration(gain=9, slow=10)
integrator.add_link(cons)
out = integrator()
print(out)
from pct.functions import WeightedSum
ws = WeightedSum(weights=np.ones(3))
ws.add_link(Constant(10))
ws.add_link(Constant(5))
ws.add_link(Constant(20))
ws.summary()
print(ws.get_config())
ws.draw()
###Output
_____no_output_____
###Markdown
Nodes
###Code
from pct.nodes import PCTNode
node = PCTNode()
node.summary()
node.draw()
integ = Integration(10, 100, name="integrator", links=['subtract'], position=-1)
node.insert_function(collection = "output", function=integ)
node.draw()
out =node()
out
#FunctionsList.getInstance().report()
node.run(steps=10, verbose=True)
###Output
1.000 0.000 1.000 0.199
1.000 0.000 1.000 0.297
1.000 0.000 1.000 0.394
1.000 0.000 1.000 0.490
1.000 0.000 1.000 0.585
1.000 0.000 1.000 0.679
1.000 0.000 1.000 0.773
1.000 0.000 1.000 0.865
1.000 0.000 1.000 0.956
1.000 0.000 1.000 1.047
###Markdown
Hierarchies
###Code
from pct.hierarchy import PCTHierarchy
hpct = PCTHierarchy(2, 2, links="dense")
hpct.summary()
hpct.draw(figsize=(8, 10), node_size=1000)
cartpole_hierarchy = PCTHierarchy.load("cartpole.json")
cartpole = FunctionsList.getInstance().get_function("CartPole-v1")
cartpole.render=True
cartpole_hierarchy.set_order("Down")
cartpole_hierarchy.draw(font_size=10, figsize=(8,12), move={'CartPole-v1': [-0.075, 0]}, node_size=1000)
cartpole_hierarchy.run(200)
#cartpole_hierarchy.summary()
cartpole.close()
pole_position_node = PCTNode.from_config({ 'name': 'pole_position_node',
'refcoll': {'0': {'type': 'Step', 'name': 'pole_position_reference', 'value': 0, 'upper' : 2, 'lower' :-2, 'delay' :100, 'period' :500, 'links': {}}},
'percoll': {'0': {'type': 'IndexedParameter', 'name': 'pole_position', 'value': 0, 'index' :4, 'links': {0: 'CartPole-v1'}}},
'comcoll': {'0': {'type': 'Subtract', 'name': 'subtract4', 'value': 0, 'links': {0: 'pole_position_reference', 1: 'pole_position'}}},
'outcoll': {'0': {'type': 'Sigmoid', 'name': 'pole_position_output', 'value': 0, 'links': {0: 'subtract4'}, 'range': 0.45, 'scale': 2}}})
cartpole_hierarchy.add_node(pole_position_node, level=4)
# pole_angle
cartpole_hierarchy.replace_function(level=3, col=0, collection="reference", function=PassOn(name="pole_angle_reference", links=['pole_position_output']), position=0)
FunctionsList.getInstance().get_function("pole_angle_output").set_property('gain', 1.5)
cartpole_hierarchy.set_links('subtract3', 'pole_angle_reference', 'pole_angle')
# pole_velocity
cartpole_hierarchy.replace_function(level=2, col=0, collection="output", function=PassOn(name="pole_velocity_output", links=['subtract2']), position=0)
# cart_position
cartpole_hierarchy.replace_function(level=1, col=0, collection="reference", function=IntegrationDual(name="cart_position_reference", gain=90, slow=100, links=['pole_velocity_output']), position=0)
cartpole_hierarchy.add_links( 'cart_position_reference', 'cart_position')
cartpole_hierarchy.set_links('subtract1', 'cart_position_reference', 'cart_position')
# force
FunctionsList.getInstance().get_function("force").set_property('gain', -0.1)
#cartpole_hierarchy.summary()
cartpole_hierarchy.draw(font_size=10, figsize=(8,16), move={'CartPole-v1': [-0.075, 0] , 'pole_velocity': [-0.02, 0], 'pole_angle': [-0.025, 0]}, node_size=1000)
#cartpole_hierarchy.clear_values()
#cartpole_hierarchy.save("cartpole5-added.json")
cartpole_hierarchy.run(3000, verbose=False)
cartpole.close()
###Output
_____no_output_____ |
04 - Classification/Batch1/Number_Classification/04- Classifying number.ipynb | ###Markdown
Number classification Step 1: Import data science libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
**Step-2: Load Dataset**
###Code
from sklearn.datasets import load_digits
digits = load_digits()
digits.keys()
print(digits.DESCR)
X = digits.data # independent variable
y = digits.target # dependent variable
###Output
_____no_output_____
###Markdown
each row is an image
###Code
print(X.shape)
print(X)
# Converting into binary image --> Thresholding
X[X > 7] = X.max()
X[X<= 7] = X.min()
# Normalizing
X = X / X.max()
X.shape, y.shape
img = X[0:1]
print(y[0:1])
plt.imshow(img.reshape((8,8)),cmap = 'gray')
###Output
[0]
###Markdown
**Step 4: Splitting data into training and testing**
###Code
from sklearn.model_selection import train_test_split # model_selection replaces the old cross_validation module in current scikit-learn
x_train , x_test, y_train, y_test = train_test_split(X, y,
test_size = 0.2,
random_state = 0)
x_train.shape, x_test.shape, y_train.shape, y_test.shape
###Output
_____no_output_____
###Markdown
Step 5: Building Machine Learning classifier or model
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
model_log = LogisticRegression(C = 10.0)
model_knn = KNeighborsClassifier(n_neighbors=3)
model_svm = SVC(C = 10.0,probability=True)
model_dt = DecisionTreeClassifier()
model_rf = RandomForestClassifier(n_estimators=100)
###Output
_____no_output_____
###Markdown
Training the models
###Code
model_log.fit(x_train, y_train) # training model
model_knn.fit(x_train, y_train) # training model
model_svm.fit(x_train, y_train) # training model
model_dt.fit(x_train, y_train) # training model
model_rf.fit(x_train, y_train) # training model
###Output
_____no_output_____
###Markdown
**Step 6: Evaluating the model**
###Code
y_pred_log = model_log.predict(x_test) # we use this for evaluation
y_pred_knn = model_knn.predict(x_test) # we use this for evaluation
y_pred_svm = model_svm.predict(x_test) # we use this for evaluation
y_pred_dt = model_dt.predict(x_test) # we use this for evaluation
y_pred_rf = model_rf.predict(x_test) # we use this for evaluation
###Output
_____no_output_____
###Markdown
**Classification metrics**
###Code
from sklearn.metrics import confusion_matrix, classification_report
cm_log = confusion_matrix(y_test, y_pred_log) # confusion matrix
cm_knn = confusion_matrix(y_test, y_pred_knn) # confusion matrix
cm_svm = confusion_matrix(y_test, y_pred_svm) # confusion matrix
cm_dt = confusion_matrix(y_test, y_pred_dt) # confusion matrix
cm_rf = confusion_matrix(y_test, y_pred_rf) # confusion matrix
cr_log = classification_report(y_test, y_pred_log) # classification report
cr_knn = classification_report(y_test, y_pred_knn) # classification report
cr_svm = classification_report(y_test, y_pred_svm) # classification report
cr_dt = classification_report(y_test, y_pred_dt) # classification report
cr_rf = classification_report(y_test, y_pred_rf) # classification report
import seaborn as sns
sns.heatmap(cm_log,annot=True,cbar=None,cmap = 'summer')
plt.title('Logistic Regression')
plt.show()
sns.heatmap(cm_knn,annot=True,cbar=None,cmap = 'spring')
plt.title('K Nearest Neighbour')
plt.show()
sns.heatmap(cm_svm,annot=True,cbar=None,cmap = 'winter')
plt.title('Support Vector Machine')
plt.show()
sns.heatmap(cm_dt,annot=True,cbar=None,cmap = 'cool')
plt.title('Decision Tree')
plt.show()
sns.heatmap(cm_rf,annot=True,cbar=None,cmap = 'autumn')
plt.title('Random Forest')
plt.show()
print('='*20+'Logistic Regression'+'='*20)
print(cr_log)
print('='*20+'KNearest Neighbour'+'='*20)
print(cr_knn)
print('='*20+'Support Vector Machine'+'='*20)
print(cr_svm)
print('='*20+'Decision Tree'+'='*20)
print(cr_dt)
print('='*20+'Random Forest'+'='*20)
print(cr_rf)
###Output
====================Logistic Regression====================
precision recall f1-score support
0 1.00 1.00 1.00 27
1 0.86 0.86 0.86 35
2 0.94 0.94 0.94 36
3 0.93 0.90 0.91 29
4 0.82 0.93 0.87 30
5 0.92 0.90 0.91 40
6 0.96 0.98 0.97 44
7 0.97 0.95 0.96 39
8 0.91 0.77 0.83 39
9 0.84 0.93 0.88 41
avg / total 0.92 0.91 0.91 360
====================KNearest Neighbour====================
precision recall f1-score support
0 0.96 1.00 0.98 27
1 0.75 0.94 0.84 35
2 1.00 1.00 1.00 36
3 0.90 0.93 0.92 29
4 0.97 0.93 0.95 30
5 0.93 0.93 0.93 40
6 0.96 1.00 0.98 44
7 0.97 1.00 0.99 39
8 0.96 0.69 0.81 39
9 0.92 0.88 0.90 41
avg / total 0.93 0.93 0.93 360
====================Support Vector Machine====================
precision recall f1-score support
0 1.00 1.00 1.00 27
1 0.86 0.91 0.89 35
2 0.95 0.97 0.96 36
3 0.93 0.97 0.95 29
4 0.91 0.97 0.94 30
5 0.95 0.93 0.94 40
6 0.98 0.98 0.98 44
7 0.97 0.97 0.97 39
8 0.97 0.77 0.86 39
9 0.91 0.98 0.94 41
avg / total 0.94 0.94 0.94 360
====================Decision Tree====================
precision recall f1-score support
0 0.96 0.89 0.92 27
1 0.75 0.86 0.80 35
2 0.76 0.72 0.74 36
3 0.75 0.83 0.79 29
4 0.90 0.87 0.88 30
5 0.94 0.82 0.88 40
6 0.87 0.91 0.89 44
7 0.88 0.92 0.90 39
8 0.71 0.64 0.68 39
9 0.77 0.80 0.79 41
avg / total 0.83 0.82 0.82 360
====================Random Forest====================
precision recall f1-score support
0 1.00 0.96 0.98 27
1 0.89 0.97 0.93 35
2 1.00 0.97 0.99 36
3 1.00 0.93 0.96 29
4 0.97 0.97 0.97 30
5 0.95 0.93 0.94 40
6 0.98 1.00 0.99 44
7 0.95 1.00 0.97 39
8 1.00 0.87 0.93 39
9 0.89 0.98 0.93 41
avg / total 0.96 0.96 0.96 360
###Markdown
**Saving and loading model**
###Code
import joblib # note: sklearn.externals.joblib was removed in newer scikit-learn releases; the standalone joblib package offers the same dump/load API
joblib.dump(model_log,'number_rec_log.pkl')
joblib.dump(model_knn,'number_rec_knn.pkl')
joblib.dump(model_svm,'number_rec_svm.pkl')
joblib.dump(model_dt,'number_rec_dt.pkl')
joblib.dump(model_rf,'number_rec_rf.pkl')
classify = joblib.load('number_rec_rf.pkl')
###Output
_____no_output_____
###Markdown
**Testing with a new image**
###Code
import cv2
# Step 1 : Read image
img =cv2.imread('number2.jpg',0) # if you use zero it will convert into grayscale image
# step 2: Thresholding
ret, thresh = cv2.threshold(img,127,255,cv2.THRESH_BINARY_INV)
# step 3 : Resize image
img_re = cv2.resize(thresh,(8,8))
# Step 4: reshape it to row matrix
test = img_re.reshape((1,64))
# Step 5: Normalize
test = test/ test.max()
plt.imshow(test,cmap ='gray')
plt.show()
plt.imshow(img_re)
print('LogisticRegression',model_log.predict(test))
print('KNearest Neighbour', model_knn.predict(test))
print('Support Vector Machine', model_svm.predict(test))
print('Desicion Tree', model_dt.predict(test))
print('Random Forest',model_rf.predict(test))
###Output
LogisticRegression [2]
KNearest Neighbour [2]
Support Vector Machine [2]
Desicion Tree [2]
Random Forest [2]
###Markdown
Real Time Number Detection
###Code
cap = cv2.VideoCapture(0)
while True:
_,img = cap.read()
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (7,7),3)
_,th3 = cv2.threshold(gray,100,255,cv2.THRESH_BINARY_INV)
#th3 = cv2.adaptiveThreshold(blur,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C,cv2.THRESH_BINARY_INV,21,7)
im2, contours, hierarchy = cv2.findContours(th3,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
areas = [cv2.contourArea(c) for c in contours]
ix = np.where(np.array(areas) > 300)[0]
result = np.array([1,0,0,0,0,0,0,0,0,0])
for i in ix:
cnt = contours[i]
xr,yr,wr,hr = cv2.boundingRect(cnt)
if xr< 20 :
xr = 25
if yr < 20:
yr = 25
cv2.rectangle(img,(xr-10,yr-10),(xr+wr+10,yr+hr+10), (0,255,0),2)
roi = th3[yr-20:yr+hr+20, xr-20:xr+wr+20]
roi_re=cv2.resize(roi,(8,8))
g = roi_re.reshape(1,64).astype('float32')
g = g/255.0
result= model_rf.predict(g)
#print(result)
font = cv2.FONT_HERSHEY_SIMPLEX
cv2.putText(img,'Number: '+str(result),(xr-10,yr-10), font, 0.4, (255,0,0), 1, cv2.LINE_AA)
cv2.imshow('Threshold',th3)
cv2.imshow('orginal',img)
if cv2.waitKey(41) & 0xff == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
###Output
_____no_output_____ |
3d/pinn.ipynb | ###Markdown
MIT License Copyright (c) 2021 alxyok Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. Solving the 3D Burger's equation **FORGET ABOUT THIS CODE, IT'S UNTRACTABLE. FOR EDUCATIONAL PURPOSES ONLY.** **Let's have some fun! Untested fun even 😛 (It works, but what it really does no one knows for certain!)** **Most of the content below is largely inspired from the work of [Raissi et al.](https://maziarraissi.github.io/PINNs/) as well as [Liu et al.](https://www.sciencedirect.com/science/article/abs/pii/S0142727X21000527); please refer to those papers for a comprehensive theoretical understanding.** The Burger's equation is one of the well-studied fundamental PDEs that exhibit shocks, and for which a non-trivial analytical solution exists in the Physics literature. A conjunction of factors (profusion of data, capable cheap hardware, and backprop) has led to the resurrection of Deep Learning (DL), which has in turn paved the way for the development of scientific machine learning libraries such as TensorFlow and PyTorch. Those frameworks come with free auto-differentiation, a key tool for this lab which will enable the development of a self-supervised neural model based on residuals. We'll use PyTorch, but TensorFlow + Keras could do just as fine. Be sure to check out [PyTorch Tutorials](https://pytorch.org/tutorials/) and [PyTorch API](https://pytorch.org/docs/1.9.0/), which are a great source of information. Also, [Stackoverflow](https://stackoverflow.com/questions/tagged/pytorch).
###Code
import this
import matplotlib.pyplot as plt
import numpy as np
# we'll use PyTorch, but TensorFlow + Keras could do just as fine
import torch
from torch import nn
###Output
_____no_output_____
###Markdown
1. Problem statementNote: we do not use the Hopf-Cole transformation that would allow for a simplified formula but instead use the raw explicit formulation of the problem. We propose to solve the 3D nonlinear Burger's problem defined by the following set of equations:$u_t + u * u_x + v * u_y + w * u_z - \frac{1}{R_e} (u_{xx} + u_{yy} + u_{zz}) = 0$$v_t + u * v_x + v * v_y + w * v_z - \frac{1}{R_e} (v_{xx} + v_{yy} + v_{zz}) = 0$$w_t + u * w_x + v * w_y + w * w_z - \frac{1}{R_e} (w_{xx} + w_{yy} + w_{zz}) = 0$in which $Re$ is the Reynolds number, which characterizes the fluid flow behavior in various situations, and under the initial condition and boundary conditions defined below. The space domain is $0 < x, y, z < 1$ and the time domain is $t > 0$.$u(x, y, z, 0) = u_0(x, y, z) = sin(\pi x) * cos(\pi y) * cos(\pi z)$$v(x, y, z, 0) = v_0(x, y, z) = sin(\pi y) * cos(\pi x) * cos(\pi z)$$w(x, y, z, 0) = w_0(x, y, z) = sin(\pi z) * cos(\pi x) * cos(\pi y)$as well as:$u(0, y, z, t) = u(1, y, z, t) = 0$$v(x, 0, z, t) = v(x, 1, z, t) = 0$$w(x, y, 0, t) = w(x, y, 1, t) = 0$ 2. The resolution methodWe will build an estimator and have it gradually converge to the 3-tuple solution $U = (u, v, w)$ thanks to a handcrafted loss function based on residuals, computed from original inputs $X = (x, y, z, t)$.We define:* A neural model $pinn := U(x, y, z, t)$* An IC residual function $U0_{residual} := pinn(X, 0) - U0(X)$* A BC residuals function $Ulim_{residual} := U(0, t) = U(1, t) = 0$* A PDE residual function $f := U_t + U * U_{.} - \frac{1}{R_e} * U_{..}$The Physics constraint is a soft-constraint (based on the loss) built by summing the loss of all residuals $L = loss(U0) + loss(Ulim) + loss(f)$. A few of the model's HParams
###Code
# number of samples in every dimension
n = 4
grid_shape = (n, n, n, n)
dtype = torch.float
# reynolds number, try for a range of 10^p where p is an integer
re: float = 100.
# learning rate, classic
lr = 1e-3
###Output
_____no_output_____
###Markdown
Helpers
###Code
def tuplify(X: torch.Tensor) -> tuple:
x = X[:, 0:1]
y = X[:, 1:2]
z = X[:, 2:3]
t = X[:, 3:4]
return x, y, z, t
def meshify(X: torch.Tensor) -> torch.Tensor:
x, y, z, t = tuplify(X)
x, y, z, t = np.meshgrid(x, y, z, t)
x = torch.tensor(x.reshape((-1, 1)))
y = torch.tensor(y.reshape((-1, 1)))
z = torch.tensor(z.reshape((-1, 1)))
t = torch.tensor(t.reshape((-1, 1)))
X = torch.squeeze(torch.stack((x, y, z, t), axis=1))
return X
###Output
_____no_output_____
###Markdown
3. The actual implementation a) IC residuals functionFollowing the article specifications, we'll define the IC with a few cyclical functions.
###Code
def U0(X: torch.Tensor) -> torch.Tensor:
"""Computes the IC as stated previously."""
# X = meshify(X)
x, y, z, _ = tuplify(X)
u_xyz0 = torch.squeeze(torch.sin(np.pi * x) * torch.cos(np.pi * y) * torch.cos(np.pi * z))
v_xyz0 = torch.squeeze(torch.sin(np.pi * y) * torch.cos(np.pi * x) * torch.cos(np.pi * z))
w_xyz0 = torch.squeeze(torch.sin(np.pi * z) * torch.cos(np.pi * x) * torch.cos(np.pi * y))
U0_ = torch.stack((u_xyz0, v_xyz0, w_xyz0), axis=1)
return U0_
def U0_residuals(X: torch.Tensor) -> torch.Tensor:
"""Computes the residuals for the IC."""
return pinn(X) - U0(X)
###Output
_____no_output_____
###Markdown
b) BC residuals functionResiduals on the boundary are `0`.
###Code
def Ulim_residuals(X: torch.Tensor) -> torch.Tensor:
"""Computes the residuals at the Boundary."""
return pinn(X) - 0.
###Output
_____no_output_____
###Markdown
c) PDE residuals functionWe need to compute first-order and second-order derivatives of $U$ with respect to $X$. As of `torch.__version__ == 1.9.0`, this is a bit tricky, because we cannot filter out *a priori* the terms that will end up unused, so part of the computation is wasted; we can only filter *a posteriori*. There's probably some leverage at the DAG *(Directed Acyclic Graph)* level.PyTorch has a `torch.autograd.functional.hessian()` function, but it only supports scalar outputs, not vectors, so we can't use it here.
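A minimal sketch of that limitation (illustrative only, not used by the notebook): `hessian()` accepts a scalar-valued function, while `pinn` returns a vector, which is why `f()` below resorts to nested `jacobian()` calls.
```python
import torch

# works: the function below returns a scalar
f_scalar = lambda x: (x ** 2).sum()
print(torch.autograd.functional.hessian(f_scalar, torch.ones(3)))  # 2 * identity matrix

# pinn(X), by contrast, returns an (N, 3) tensor, so hessian(pinn, X) is not applicable
```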
###Code
def f(X: torch.Tensor) -> torch.Tensor:
"""Computes the residuals from the PDE on the rest of the Domain."""
def first_order(X, second_order=False):
U = pinn(X)
u = torch.squeeze(U[:, 0:1])
v = torch.squeeze(U[:, 1:2])
w = torch.squeeze(U[:, 2:3])
U_X = torch.autograd.functional.jacobian(pinn, X, create_graph=True)
u_x = torch.diagonal(torch.squeeze(U_X[:, 0:1, :, 0:1]))
u_y = torch.diagonal(torch.squeeze(U_X[:, 0:1, :, 1:2]))
u_z = torch.diagonal(torch.squeeze(U_X[:, 0:1, :, 2:3]))
u_t = torch.diagonal(torch.squeeze(U_X[:, 0:1, :, 3:4]))
v_x = torch.diagonal(torch.squeeze(U_X[:, 1:2, :, 0:1]))
v_y = torch.diagonal(torch.squeeze(U_X[:, 1:2, :, 1:2]))
v_z = torch.diagonal(torch.squeeze(U_X[:, 1:2, :, 2:3]))
v_t = torch.diagonal(torch.squeeze(U_X[:, 1:2, :, 3:4]))
w_x = torch.diagonal(torch.squeeze(U_X[:, 2:3, :, 0:1]))
w_y = torch.diagonal(torch.squeeze(U_X[:, 2:3, :, 1:2]))
w_z = torch.diagonal(torch.squeeze(U_X[:, 2:3, :, 2:3]))
w_t = torch.diagonal(torch.squeeze(U_X[:, 2:3, :, 3:4]))
if second_order:
return u, v, w, u_x, u_y, u_z, u_t, v_x, v_y, v_z, v_t, w_x, w_y, w_z, w_t
return u_x, v_y, w_z
# way sub-optimal, the first order jacobian should really be computed once
# maybe pytorch is doing this lazy, but still, sub-optimal
def second_order(X):
U_XX = torch.autograd.functional.jacobian(first_order, X)
u_xx = torch.diagonal(torch.squeeze(U_XX[0][:, :, 0:1]))
v_xx = torch.diagonal(torch.squeeze(U_XX[1][:, :, 0:1]))
w_xx = torch.diagonal(torch.squeeze(U_XX[2][:, :, 0:1]))
u_yy = torch.diagonal(torch.squeeze(U_XX[0][:, :, 1:2]))
v_yy = torch.diagonal(torch.squeeze(U_XX[1][:, :, 1:2]))
w_yy = torch.diagonal(torch.squeeze(U_XX[2][:, :, 1:2]))
u_zz = torch.diagonal(torch.squeeze(U_XX[0][:, :, 2:3]))
v_zz = torch.diagonal(torch.squeeze(U_XX[1][:, :, 2:3]))
w_zz = torch.diagonal(torch.squeeze(U_XX[2][:, :, 2:3]))
return u_xx, u_yy, u_zz, v_xx, v_yy, v_zz, w_xx, w_yy, w_zz
u, v, w, u_x, u_y, u_z, u_t, v_x, v_y, v_z, v_t, w_x, w_y, w_z, w_t = first_order(X, second_order=True)
u_xx, u_yy, u_zz, v_xx, v_yy, v_zz, w_xx, w_yy, w_zz = second_order(X)
u_ = u_t + u * u_x + v * u_y + w * u_z - (1.0 / re) * (u_xx + u_yy + u_zz)  # the PDE above puts a 1/Re coefficient on the diffusion term
v_ = v_t + u * v_x + v * v_y + w * v_z - (1.0 / re) * (v_xx + v_yy + v_zz)
w_ = w_t + u * w_x + v * w_y + w * w_z - (1.0 / re) * (w_xx + w_yy + w_zz)
U = torch.stack((u_, v_, w_), axis=1)
return U
###Output
_____no_output_____
###Markdown
d) The total loss functionSummed up from all previously defined residuals. Given how the input $X$ was produced, it contains both samples from the main domain and the boundary/initial points used to compute the BC and IC terms. We need to carefully route each subset of $X$ to the right residual function.
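A minimal sketch of the routing idea on a tiny, hypothetical batch `X_demo` (same masking pattern as in `loss()` below):
```python
import torch

X_demo = torch.tensor([[0.0, 0.5, 0.5, 0.2],   # lies on the x = 0 boundary
                       [0.3, 0.5, 0.5, 0.7]])  # interior point
on_x_boundary = (X_demo[:, 0:1] == 0.) | (X_demo[:, 0:1] == 1.)
mask = torch.cat((on_x_boundary,) * 4, dim=1)             # broadcast the row mask to all 4 columns
print(torch.masked_select(X_demo, mask).reshape(-1, 4))   # -> only the boundary row remains
```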
###Code
def loss(X: torch.Tensor) -> torch.Tensor:
"""Computes the loss based on all residual terms."""
x0 = X[:, 0:1] == 0.
x1 = X[:, 0:1] == 1.
xl_ = torch.logical_or(x0, x1)
xl_ = torch.cat((xl_,) * 4, axis=1)
xl = torch.masked_select(X, xl_).reshape(-1, 4)
xl_residuals = torch.mean(torch.square(Ulim_residuals(xl)))
y0 = X[:, 1:2] == 0.
y1 = X[:, 1:2] == 1.
yl_ = torch.logical_or(y0, y1)
yl_ = torch.cat((yl_,) * 4, axis=1)
yl = torch.masked_select(X, yl_).reshape(-1, 4)
yl_residuals = torch.mean(torch.square(Ulim_residuals(yl)))
z0 = X[:, 2:3] == 0.
z1 = X[:, 2:3] == 1.
zl_ = torch.logical_or(z0, z1)
zl_ = torch.cat((zl_,) * 4, axis=1)
zl = torch.masked_select(X, zl_).reshape(-1, 4)
zl_residuals = torch.mean(torch.square(Ulim_residuals(zl)))
t0_ = X[:, 3:4] == 0.
t0_ = torch.cat((t0_,) * 4, axis=1)
t0 = torch.masked_select(X, t0_).reshape(-1, 4)
t0_residuals = torch.mean(torch.square(U0_residuals(t0)))
or_ = torch.logical_or(t0_, torch.logical_or(zl_, torch.logical_or(xl_, yl_)))
X_not = torch.logical_not(or_)
X_ = torch.masked_select(X, X_not).reshape(-1, 4)
f_residuals = torch.mean(torch.square(f(X_)))
# final loss is simply the sum of residuals
return torch.mean(torch.stack((
xl_residuals,
yl_residuals,
zl_residuals,
t0_residuals,
f_residuals
)))
###Output
_____no_output_____
###Markdown
e) Defining the model... as a simple straight-forward feed-forward MLP `depth=4` by `width=20` + `activation=nn.Tanh()` defined with PyTorch's sequential API.
###Code
# inputs: X = (x, y, z, t)
# outputs: U = (u, v, w)
pinn = nn.Sequential(
nn.Linear(4, 20, dtype=dtype),
nn.Tanh(),
nn.Linear(20, 20, dtype=dtype),
nn.Tanh(),
nn.Linear(20, 20, dtype=dtype),
nn.Tanh(),
nn.Linear(20, 20, dtype=dtype),
nn.Tanh(),
nn.Linear(20, 3, dtype=dtype),
)
###Output
_____no_output_____
###Markdown
4. LET'S FIT Let's start by sampling in both space and time, and create a 4D-meshgrid (main reason why all this is intractable).
###Code
x = torch.linspace(0.0, 1.0, steps=n, dtype=dtype).T
y = torch.linspace(0.0, 1.0, steps=n, dtype=dtype).T
z = torch.linspace(0.0, 1.0, steps=n, dtype=dtype).T
t = torch.linspace(0.0, 1.0, steps=n, dtype=dtype).T
X = torch.stack((x, y, z, t), axis=1)
X = meshify(X)
u0 = U0(X)[:, 0:1]
v0 = U0(X)[:, 1:2]
w0 = U0(X)[:, 2:3]
###Output
_____no_output_____
###Markdown
...and loop over epochs... And we're done!
###Code
def fit(X: torch.Tensor,
epochs: int,
lr: float = 1e-2):
"""Implements the training loop."""
optimizer = torch.optim.SGD(pinn.parameters(), lr=lr)
for epoch in range(epochs):
optimizer.zero_grad()
loss_ = loss(X)
loss_.backward()
optimizer.step()
if epoch % 1000 == 0:
print(f"epoch: {epoch}, loss: {loss_}")
fit(X, epochs=10000)
###Output
epoch: 0, loss: 0.11308014392852783
epoch: 1000, loss: 0.029296875
epoch: 2000, loss: 0.029296875
epoch: 3000, loss: 0.029296875
|
ML_Models/Regression/Simple_Regression/simple_regression_2.ipynb | ###Markdown
Simple Regression 1. Importing Packages
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
2. Reading Data
###Code
df = pd.read_csv('/FuelConsumptionCo2.csv')
df.head()
###Output
_____no_output_____
###Markdown
3. Data Explortion
###Code
df.describe()
mdf = df[['ENGINESIZE' ,'CYLINDERS' , 'FUELCONSUMPTION_COMB', 'CO2EMISSIONS']]
mdf.head(6)
###Output
_____no_output_____
###Markdown
4. Data Visualizatoin
###Code
mdf.hist()
plt.show()
# ENGINESIZE vs CO2EMISSIONS
plt.scatter(mdf.ENGINESIZE , mdf.CO2EMISSIONS , color='green')
plt.title('ENGINESIZE vs CO2EMISSIONS')
plt.xlabel('ENGINESIZE')
plt.ylabel('CO2EMISSIONS ')
plt.show()
# FUELCONSUMPTION_COMB vs CO2EMISSIONS
plt.scatter(mdf.FUELCONSUMPTION_COMB , mdf.CO2EMISSIONS , color='red')
plt.title('FUELCONSUMPTION_COMB vs CO2EMISSIONS')
plt.xlabel('FUELCONSUMPTION_COMB')
plt.ylabel('CO2EMISSIONS ')
plt.show()
# CYLINDERS vs CO2EMISSIONS
plt.scatter(mdf.CYLINDERS , mdf.CO2EMISSIONS , color='blue')
plt.title('CYLINDERS vs CO2EMISSIONS')
plt.xlabel('CYLINDERS')
plt.ylabel('CO2EMISSIONS ')
plt.show()
###Output
_____no_output_____
###Markdown
5. Train/Test Split
###Code
msk = np.random.rand(len(df)) < 0.8
train = mdf[msk]
test = mdf[~msk] # hold out the rows that were not selected for training
###Output
_____no_output_____
###Markdown
6. Simple Regression Model
###Code
plt.scatter(train.ENGINESIZE , train.CO2EMISSIONS , color='green')
plt.title('ENGINESIZE vs CO2EMISSIONS')
plt.xlabel('ENGINESIZE')
plt.ylabel('CO2EMISSIONS ')
plt.show()
from sklearn import linear_model
regr = linear_model.LinearRegression()
train_x = np.asanyarray(train[['ENGINESIZE']])
train_y = np.asanyarray(train[['CO2EMISSIONS']])
regr.fit(train_x , train_y)
print('Slope: {}'.format(regr.coef_[0][0]))
print('Intercept {}'.format(regr.intercept_[0]))
plt.scatter(train.ENGINESIZE , train.CO2EMISSIONS , color='green')
plt.plot(train_x , regr.coef_[0][0]*train_x + regr.intercept_[0] , '-r')
plt.title('ENGINESIZE vs CO2EMISSIONS')
plt.xlabel('ENGINESIZE')
plt.ylabel('CO2EMISSIONS ')
plt.show()
###Output
_____no_output_____
###Markdown
7. EvaluationWe compare the actual values and predicted values to calculate the accuracy of a regression model. Evaluation metrics provide a key role in the development of a model, as it provides insight to areas that require improvement.There are different model evaluation metrics, lets use MSE here to calculate the accuracy of our model based on the test set:- Mean absolute error: It is the mean of the absolute value of the errors. This is the easiest of the metrics to understand since it’s just average error.- Mean Squared Error (MSE): Mean Squared Error (MSE) is the mean of the squared error. It’s more popular than Mean absolute error because the focus is geared more towards large errors. This is due to the squared term exponentially increasing larger errors in comparison to smaller ones.- Root Mean Squared Error (RMSE).- R-squared is not error, but is a popular metric for accuracy of your model. It represents how close the data are to the fitted regression line. The higher the R-squared, the better the model fits your data. Best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse).
###Code
from sklearn.metrics import r2_score
test_x = np.asanyarray(test[['ENGINESIZE']])
test_y = np.asanyarray(test[['CO2EMISSIONS']])
test_y_ = regr.predict(test_x)
# ERRORS
print('Mean Absolute Error: {}'.format(np.mean(np.absolute(test_y_ - test_y))))
print('Mean Square Error: {}'.format(np.mean((test_y_ - test_y)**2)))
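# Root Mean Squared Error (RMSE), listed above: simply the square root of the MSE
print('Root Mean Square Error: {}'.format(np.sqrt(np.mean((test_y_ - test_y)**2))))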
print('R2 Score: {}'.format(r2_score(test_y , test_y_)))
###Output
_____no_output_____ |
notebooks/03-ensemble.ipynb | ###Markdown
Ensemble
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import sys
sys.path.insert(0, "../src")
import gc
import pathlib
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
from sklearn import metrics
from sklearn import model_selection
from scipy.special import softmax
import torch
import torchcontrib
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn
import callbacks
import config
import dataset
import engine
import models
import utils
import warnings
warnings.filterwarnings("ignore")
# df = pd.read_csv(config.DATA_PATH / "train.csv")
df = pd.read_csv(config.DATA_PATH / "pl-spinal-ensemble10.csv")
df.shape
nets = 10
device = torch.device(config.DEVICE)
EPOCHS = 200
SEED = 42
utils.seed_everything(SEED)
cnns = [None] * nets
valid_scores = []
for i in range(nets):
print("#" * 30)
# DATA
train_indices, valid_indices = model_selection.train_test_split(np.arange(len(df)), test_size=0.1, shuffle=True, stratify=df.digit)
train_dataset = dataset.EMNISTDataset(df, train_indices)
valid_dataset = dataset.EMNISTDataset(df, valid_indices)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=config.TRAIN_BATCH_SIZE, shuffle=True)
valid_loader = torch.utils.data.DataLoader(valid_dataset, batch_size=config.TEST_BATCH_SIZE)
# MODEL
model = models.SpinalVGG().to(device)
# OPTIMIZER
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
# SCHEDULER
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
optimizer, mode='max', verbose=True, patience=10, factor=0.5,
)
# scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=EPOCHS)
# STOCHASTIC WEIGHT AVERAGING
swa_start = int(EPOCHS * 0.75)
swa_scheduler = SWALR(
optimizer, anneal_strategy="cos", anneal_epochs=swa_start, swa_lr=5e-4
)
swa_model = AveragedModel(model)
swa_model.to(device)
# AMP
scaler = torch.cuda.amp.GradScaler()
# Loop
for epoch in range(EPOCHS):
# TRAIN ONE EPOCH
engine.train(train_loader, model, optimizer, device, scaler)
# VALIDATION
predictions, targets = engine.evaluate(valid_loader, model, device)
predictions = np.argmax(predictions, axis=1)
accuracy = metrics.accuracy_score(targets, predictions)
if epoch % 10 == 0:
print(f"Epoch={epoch}, Accuracy={accuracy:.5f}")
if epoch > swa_start:
swa_model.update_parameters(model)
swa_scheduler.step()
else:
scheduler.step(accuracy)
# Warmup BN-layers
swa_model = swa_model.cpu()
update_bn(train_loader, swa_model)
swa_model.to(device)
# CV Score for SWA model
valid_preds, valid_targs = engine.evaluate(valid_loader, swa_model, device)
valid_predsb = np.argmax(valid_preds, axis=1)
valid_accuracy = metrics.accuracy_score(valid_targs, valid_predsb)
valid_scores.append(valid_accuracy)
print(f"CNN {i}, Validation accuracy of SWA model={valid_accuracy}")
cnns[i] = swa_model
# CLEAN-UP
del model, swa_model
torch.cuda.empty_cache()
gc.collect()
break
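# NOTE: this break exits after the first iteration, so only 1 of the `nets` models is actually trained in this run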
print(f"Average CV score={np.mean(valid_scores)}")
###Output
##############################
Epoch=0, Accuracy=0.69143
Epoch=10, Accuracy=0.93071
Epoch=20, Accuracy=0.94417
Epoch=30, Accuracy=0.93868
Epoch 37: reducing learning rate of group 0 to 5.0000e-03.
Epoch=40, Accuracy=0.95414
Epoch=50, Accuracy=0.94467
Epoch=60, Accuracy=0.94716
Epoch 62: reducing learning rate of group 0 to 2.5000e-03.
Epoch=70, Accuracy=0.94865
Epoch 78: reducing learning rate of group 0 to 1.2500e-03.
Epoch=80, Accuracy=0.95065
Epoch=90, Accuracy=0.94965
Epoch 94: reducing learning rate of group 0 to 6.2500e-04.
Epoch=100, Accuracy=0.95065
Epoch 110: reducing learning rate of group 0 to 3.1250e-04.
Epoch=110, Accuracy=0.95065
Epoch=120, Accuracy=0.94915
Epoch 126: reducing learning rate of group 0 to 1.5625e-04.
Epoch=130, Accuracy=0.95015
Epoch=140, Accuracy=0.94865
Epoch 142: reducing learning rate of group 0 to 7.8125e-05.
Epoch=150, Accuracy=0.94965
Epoch=160, Accuracy=0.95065
Epoch=170, Accuracy=0.94915
Epoch=180, Accuracy=0.95015
Epoch=190, Accuracy=0.95115
CNN 0, Validation accuracy of SWA model=0.9481555333998006
Average CV score=0.9481555333998006
###Markdown
PL (Pseudo-Labelling)
###Code
df_test = pd.read_csv(config.TEST_CSV)
test_dataset = dataset.EMNISTDataset(df_test, np.arange(len(df_test)), label=False)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=config.TEST_BATCH_SIZE)
# BLENDED SOFTMAX OF MODELS
preds = np.zeros((len(df_test), 10))
for model in cnns:
preds += engine.evaluate(test_loader, model, device, target=False)
probs = softmax(preds, axis=1)
probs = np.max(probs, axis=1)
digits = np.argmax(preds, axis=1)
pl = df_test.copy()
pl["digit"] = digits
pl["prob"] = probs
pl.head()
pl = pl[pl.prob > 0.995]
print(f"{len(pl)}/{len(df_test)}")
pl.to_csv(config.DATA_PATH / f"pl-spinal-ensemble{nets}.csv", index=False)
###Output
_____no_output_____
###Markdown
Inference
###Code
submission = pd.DataFrame({"id": df_test.id, "digit": digits})
submission.to_csv(f"../output/spinal-ensemble{nets}.csv", index=False)
submission.head()
sns.countplot(submission.digit);
###Output
_____no_output_____ |
bits_wilp/Ex2_Numpy_Q1.ipynb | ###Markdown
Size of Numpy array in bytes
###Code
import numpy as np  # needed for np.array below

try:
# feed integer array from user
arr = list(map(int, input("Enter integers seperated by space. Press ENTER to end...").split()))
print("Given sequence is")
print(arr)
# convert Python array to Numpy array
np_array = np.array(arr, dtype=int)
print(f"Number of elements in the numpy array: {np_array.size}")
print(f"Total bytes consumed by the numpy array: {np_array.nbytes}")
print(f"Size in bytes of each element in the numpy array: {(np_array.nbytes)//(np_array.size)}")
except ValueError as e:
print("ERROR: Please enter only integers !!!")
print(e)
###Output
Enter integers seperated by space. Press ENTER to end... 34 23 67 89
|
Day_2/Simple Linear Regression.ipynb | ###Markdown
Step 1 : Preprocessing Data
###Code
import pandas as pd
import numpy as np
# Import pyplot for visualisation of data
from matplotlib import pyplot as plt
# Read dataset from csv
dataset = pd.read_csv('studentscores.csv')
# To see the first few elements of the dataset as well as the number of columns
dataset.head()
# Check for any missing values in the dataframe
dataset.isnull().values.any()
# Check for total number of Missing values
dataset.isnull().sum()
# Assign X and Y values from Dataset
X = dataset.iloc[:,:1].values # Only one column so index upto 1, also it gives a 2D array
Y = dataset.iloc[:,-1].values # Only last column so index -1
# Importing train test split function
from sklearn.model_selection import train_test_split
# splitting the data into train set and test set
# random state to fix a random seed for shuffling
# test_size will determine the ratio of split
X_train, X_test , Y_train, Y_test = train_test_split(X, Y, random_state=42, test_size=0.2)
###Output
_____no_output_____
###Markdown
Step 2 : Fitting Simple Linear Regression Model to the training data
###Code
# Import built in function for LinearRegression from sklearn library's linear_model module
from sklearn.linear_model import LinearRegression
# Create an object of class LinearRegression
lr = LinearRegression()
# Fit the training data X_train
lr.fit(X_train,Y_train)
###Output
_____no_output_____
###Markdown
Step 3: Predicting the Result
###Code
# Predicting the result with test data
Y_pred = lr.predict(X_test)
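# The fitted slope and intercept can be inspected directly if needed:
# print(lr.coef_, lr.intercept_)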
###Output
_____no_output_____
###Markdown
Step 4: Visualization Visualising the Training results
###Code
# Creating a Scatter plot
plt.scatter(X_train , Y_train, color = 'red')
# NOTE: very important to write plt.show() to print graph on screen
plt.show()
# creating the above scatter plot again and adding the trained model line in it.
plt.scatter(X_train, Y_train, color='red')
# Creating a line plot between actual and predicted training data
plt.plot(X_train , lr.predict(X_train), color ='blue')
# show the graph
plt.show()
###Output
_____no_output_____
###Markdown
Visualizing the test results
###Code
# Create a scatter plot with test datasets
plt.scatter(X_test, Y_test, color='red')
# create a prediction model line
plt.plot(X_test,Y_pred, color='blue')
# Show the graph
plt.show()
###Output
_____no_output_____ |
Docking_0608.ipynb | ###Markdown
###Code
!nvidia-smi
#!pip install pymatgen==2020.12.31
!pip install pymatgen==2019.11.11
!pip install --pre graphdot
!pip install gdown
%matplotlib inline
import io
import sys
sys.path.append('/usr/local/lib/python3.6/site-packages/')
import os
import urllib
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import graphdot
from graphdot import Graph
from graphdot.graph.adjacency import AtomicAdjacency
from graphdot.graph.reorder import rcm
from graphdot.kernel.marginalized import MarginalizedGraphKernel # https://graphdot.readthedocs.io/en/latest/apidoc/graphdot.kernel.marginalized.html
from graphdot.kernel.marginalized.starting_probability import Uniform
from graphdot.model.gaussian_process import (
GaussianProcessRegressor,
LowRankApproximateGPR
)
from graphdot.kernel.fix import Normalization
import graphdot.microkernel as uX
import ase.io
# for getting all file names into a list under a directory
from os import listdir
# for getting file names that match certain pattern
import glob
import time
from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
#cd gdrive/MyDrive/Google\ Colab/Covid-Data
%cd gdrive/MyDrive/Covid-Data/
!pwd
#!ls
#files = ['uncharged_NSP15_6W01_A_3_H.Orderable_zinc_db_enaHLL.2col.csv.1.xz']
#files = [f for f in listdir('/content/gdrive/.shortcut-targets-by-id/1wtzMcocuK8kPsz8K0ktjCZPkv567W6M2/Covid-Data')]
files = glob.glob("uncharged_NSP15_6W01_A_3_H.Orderable_zinc_db_enaHLL.2col.csv.*.xz")
#print(files[0:5])
# concatenate all files into 1 dataset
dataset = pd.DataFrame()
for i in range(len(files[0:5])):
data = pd.read_pickle(files[i])
dataset = pd.concat([dataset, data])
#frames = [pd.read_pickle(f) for f in files]
#dataset = pd.concat(frames)
len(dataset)
target = 'energy'
N_train = len(dataset)//2
N_test = len(dataset)//2
np.random.seed(0)
# select train and test data
train_sel = np.random.choice(len(dataset), N_train, replace=False)
test_sel = np.random.choice(np.setxor1d(np.arange(len(dataset)), train_sel), N_test, replace=False)
train = dataset.iloc[train_sel]
test = dataset.iloc[test_sel]
#uX.SquareExponential?
gpr = GaussianProcessRegressor(
# kernel is the covariance function of the gaussian process (GP)
kernel=Normalization( # Normalization wraps the kernel and normalizes it with the cosine-of-angle formula, k_normalized(x,y) = k(x,y)/sqrt(k(x,x)*k(y,y))
# graphdot.kernel.fix.Normalization(kernel), set kernel as marginalized graph kernel, which is used to calculate the similarity between 2 graphs
# implement the random walk-based graph similarity kernel as Kashima, H., Tsuda, K., & Inokuchi, A. (2003). Marginalized kernels between labeled graphs. ICML
MarginalizedGraphKernel(
# node_kernel - a kernel that computes the similarity between individual nodes
# uX - graphdot.microkernel - microkernels are positive-semidefinite functions between individual nodes and edges of graphs
node_kernel=uX.Additive( # additive kernel: the sum of k_a(X_a, Y_a) over the features a
# uX.Constant - a kernel that returns a constant value, always multiplied with other microkernels as an adjustable weight
# c, the first input arg., is 0.5; (0.01, 10) are the lower and upper bounds of c allowed during hyperparameter optimization
# uX.KroneckerDelta - a Kronecker delta returns 1 when two features are equal and returns h (the first input arg. here, 0.5 in this case) otherwise
# (0.1, 0.9) are the lower and upper bounds that h is allowed to vary within during hyperparameter optimization
aromatic=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)), # the 2nd element of graphdot.graph.Graph.nodes
atomic_number=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.8,(0.1, 0.9)), # the 3rd element of graphdot.graph.Graph.nodes
# uX.SquareExponential - Equ. 26 in the paper
# the input arg. length_scale is a float32, set to 1 in this case, which corresponds to a kernel value of approx. 1
# This is used to determine how quickly the kernel should decay to zero.
charge=uX.Constant(0.5, (0.01, 10.0)) * uX.SquareExponential(1.0), # the 4th element of graphdot.graph.Graph.nodes
chiral=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)), # the 5th element of graphdot.graph.Graph.nodes
hcount=uX.Constant(0.5, (0.01, 10.0)) * uX.SquareExponential(1.0), # the 6th element of graphdot.graph.Graph.nodes
hybridization=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)), # the 7th element of graphdot.graph.Graph.nodes
# uX.Convolution - a convolutional microkernel which averages evaluations of a base microkernel between pairs of elements of two variable-length feature sequences
# uX.KroneckerDelta as the base kernel
ring_list=uX.Constant(0.5, (0.01, 100.0)) * uX.Convolution(uX.KroneckerDelta(0.5,(0.1, 0.9))) # the 8th element of graphdot.graph.Graph.nodes
).normalized,
# edge_kernel - a kernel that computes the similarity between individual edges
edge_kernel=uX.Additive(
aromatic=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)), # the 3rd element of graphdot.graph.Graph.nodes
conjugated=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)), # the 4th element of graphdot.graph.Graph.nodes
order=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.8,(0.1, 0.9)), # the 5th element of graphdot.graph.Graph.nodes
ring_stereo=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.8,(0.1, 0.9)), # the 6th element of graphdot.graph.Graph.nodes
stereo=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.8,(0.1, 0.9)) # the 7th element of graphdot.graph.Graph.nodes
).normalized,
p=Uniform(1.0, p_bounds='fixed'), # the starting probability of the random walk on each node
q=0.05 # the probability for the random walk to stop during each step
)
),
alpha=1e-4, # value added to the diagonal of the kernel matrix during fitting
optimizer=True, # default optimizer of L-BFGS-B based on scipy.optimize.minimize
normalize_y=True, # normalize the y values so that the mean and variance are 0 and 1, respectively; the scaling is reversed when predictions are returned
regularization='+', # alpha (1e-4 in this case) is added to the diagonal of the kernel matrix
)
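# (Added note, paraphrasing the comments above) With the Normalization wrapper, the effective
# kernel between two graphs G and G' is K(G, G') / sqrt(K(G, G) * K(G', G')), where K is the
# marginalized random-walk kernel; the .normalized suffix presumably applies the same cosine
# rescaling to the additive node/edge microkernels.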
#gpr.fit(train.graphs, train[target], repeat=3, verbose=True)
# using the molecular graph to predict the energy
start_time = time.time()
gpr.fit(train.graphs, train[target], repeat=5, verbose=True)
end_time = time.time()
print("the total time consumption is " + str(end_time - start_time) + ".")
gpr.kernel.hyperparameters
mu = gpr.predict(train.graphs)
plt.scatter(train[target], mu)
plt.show()
print('Training set')
print('MAE:', np.mean(np.abs(train[target] - mu)))
print('RMSE:', np.std(train[target] - mu))
mu_test = gpr.predict(test.graphs)
plt.scatter(test[target], mu_test)
plt.show()
print('Test set')
print('MAE:', np.mean(np.abs(test[target] - mu_test)))
print('RMSE:', np.std(test[target] - mu_test))
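# (Added note) np.std of the residuals matches the usual RMSE only when the mean residual is
# close to zero; a direct computation would be, as a sketch:
# rmse_test = np.sqrt(np.mean((test[target] - mu_test)**2))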
###Output
Test set
MAE: 1.2461703788060545
RMSE: 1.6040210737240486
###Markdown
Work on the kernel. Find a kernel that trains and predicts well.
###Code
gpr2 = GaussianProcessRegressor(
kernel=Normalization(
MarginalizedGraphKernel(
node_kernel=uX.Additive(
aromatic=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)),
atomic_number=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.8,(0.1, 0.9)),
charge=uX.Constant(0.5, (0.01, 10.0)) * uX.SquareExponential(1.0),
chiral=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)),
hcount=uX.Constant(0.5, (0.01, 10.0)) * uX.SquareExponential(1.0),
hybridization=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)),
ring_list=uX.Constant(0.5, (0.01, 100.0)) * uX.Convolution(uX.KroneckerDelta(0.5,(0.1, 0.9)))
).normalized,
edge_kernel=uX.Additive(
aromatic=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)),
conjugated=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)),
order=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.8,(0.1, 0.9)),
ring_stereo=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.8,(0.1, 0.9)),
stereo=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.8,(0.1, 0.9))
).normalized,
p=Uniform(1.0, p_bounds='fixed'),
q=0.05
)
),
alpha=1e-2, #different from gpr in alpha where gpr's alpha is 1e-4
optimizer=True,
normalize_y=True,
regularization='+',
)
#gpr2.fit(train.graphs, train[target], repeat=3, verbose=True)
gpr2.fit(train.graphs, train[target], repeat=1, verbose=True)
mu = gpr2.predict(train.graphs)
plt.scatter(train[target], mu)
plt.show()
print('Training set')
print('MAE:', np.mean(np.abs(train[target] - mu)))
print('RMSE:', np.std(train[target] - mu))
mu_test = gpr2.predict(test.graphs)
plt.scatter(test[target], mu_test)
plt.show()
print('Test set')
print('MAE:', np.mean(np.abs(test[target] - mu_test)))
print('RMSE:', np.std(test[target] - mu_test))
###Output
Test set
MAE: 0.9561539409612109
RMSE: 1.2284268143181998
|
notebooks/S15E_Efficiency_In_Spark.ipynb | ###Markdown
Using Spark Efficiently The focus in this lecture is on Spark constructs that can make your programs more efficient. In general, this means minimizing the amount of data transfer across nodes, since this is usually the bottleneck for big data analysis problems.- Shared variables - Accumulators - Broadcast variables- DataFrames- Partitioning and the Spark shuffle Spark tuning and optimization is complicated - this tutorial only touches on some of the basic concepts. Don't forget the other areas of optimization shown in previous notebooks:- Use DataFrames rather than RDDs- Use pyspark.sql.functions rather than a Python UDF- If you use a UDF, see if you can use a vectorized UDF
###Code
%%spark
import numpy as np
import string
###Output
_____no_output_____
###Markdown
Resources----[The Spark Programming Guide](http://spark.apache.org/docs/latest/programming-guide.html) Shared variables The second abstraction in Spark is shared variables, consisting of accumulators and broadcast variables. Source: https://jaceklaskowski.gitbooks.io/mastering-apache-spark/content/images/sparkcontext-broadcast-executors.png Accumulators Spark functions such as `map` can use variables defined in the driver program, but they make local copies of the variable that are not passed back to the driver program. Accumulators are *shared variables* that allow the aggregation of results from workers back to the driver program, for example, as an event counter. Suppose we want to count the number of rows of data with missing information. The most efficient way is to use an **accumulator**.
###Code
ulysses = sc.textFile('/data/texts/Ulysses.txt')
ulysses.take(10)
###Output
[u'', u'The Project Gutenberg EBook of Ulysses, by James Joyce', u'', u'This eBook is for the use of anyone anywhere at no cost and with almost', u'no restrictions whatsoever. You may copy it, give it away or re-use', u'it under the terms of the Project Gutenberg License included with this', u'eBook or online at www.gutenberg.org', u'', u'', u'Title: Ulysses']
###Markdown
Event countingNotice that we have some empty lines. We want to count the number of non-empty lines.
###Code
num_lines = sc.accumulator(0)
def tokenize(line):
table = dict.fromkeys(map(ord, string.punctuation))
return line.translate(table).lower().strip().split()
def tokenize_count(line):
global num_lines
if line:
num_lines += 1
return tokenize(line)
counter = ulysses.flatMap(lambda line: tokenize_count(line)).countByValue()
counter['circle']
num_lines.value
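# (Added note) Accumulator updates made inside a transformation such as flatMap() are only
# guaranteed to be applied exactly once when performed inside actions; if a task is
# re-executed, updates made in a transformation can be applied more than once, so treat
# num_lines as an approximate counter here.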
###Output
25510
###Markdown
Broadcast VariablesSometimes we need to send a large read only variable to all workers. For example, we might want to share a large feature matrix to all workers as a part of a machine learning application. This same variable will be sent separately for each parallel operation unless you use a **broadcast variable**. Also, the default variable passing mechanism is optimized for small variables and can be slow when the variable is large.
###Code
from itertools import count
table = dict(zip(string.ascii_letters, count()))
def weight_first(line, table):
words = tokenize(line)
return sum(table.get(word[0], 0) for word in words if word.isalpha())
def weight_last(line, table):
words = tokenize(line)
return sum(table.get(word[-1], 0) for word in words if word.isalpha())
###Output
_____no_output_____
###Markdown
The dictionary `table` is sent out twice to worker nodes, one for each call
###Code
ulysses.map(lambda line: weight_first(line, table)).sum()
ulysses.map(lambda line: weight_last(line, table)).sum()
###Output
2895879
###Markdown
Converting to use broadcast variables is simple and more efficient- Use SparkContext.broadcast() to create a broadcast variable- Where you would use var, use var.value- The broadcast variable is sent once to each node and can be re-used
###Code
table_bc = sc.broadcast(table)
def weight_first_bc(line, table):
words = tokenize(line)
return sum(table.value.get(word[0], 0) for word in words if word.isalpha())
def weight_last_bc(line, table):
words = tokenize(line)
return sum(table.value.get(word[-1], 0) for word in words if word.isalpha())
###Output
_____no_output_____
###Markdown
table_bc is sent to nodes only once. Although it looks like table_bc is being passed to each function, all that is passed is a path to the table. The worker checks if the path has been cached and uses the cache instead of loading from the path.
###Code
ulysses.map(lambda line: weight_first_bc(line, table_bc)).sum()
ulysses.map(lambda line: weight_last_bc(line, table_bc)).sum()
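# (Added sketch) Once a broadcast variable is no longer needed it can be released from the
# executors with unpersist(), or removed everywhere with destroy(), e.g.:
# table_bc.unpersist()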
###Output
2895879
###Markdown
The Spark Shuffle and Partitioning----Some events trigger the redistribution of data across partitions, and involves the (expensive) copying of data across executors and machines. This is known as the **shuffle**. For example, if we do a `reduceByKey` operation on key-value pair RDD, Spark needs to collect all pairs with the same key in the same partition to do the reduction. For key-value RDDs, you have some control over the partitioning of the RDDs. In particular, you can ask Spark to partition a set of keys so that they are guaranteed to appear together on some node. This can minimize a lot of data transfer. For example, suppose you have a large key-value RDD consisting of user_name: comments from a web user community. Every night, you want to update with new user comments with a join operation
###Code
def fake_data(n, val):
users = list(map(''.join, np.random.choice(list(string.ascii_lowercase), (n,2))))
comments = [val]*n
return tuple(zip(users, comments))
data = fake_data(10000, 'a')
list(data)[:10]
rdd = sc.parallelize(data).reduceByKey(lambda x, y: x+y)
new_data = fake_data(1000, 'b')
list(new_data)[:10]
rdd_new = sc.parallelize(new_data).reduceByKey(lambda x, y: x+y).cache()
rdd_updated = rdd.join(rdd_new)
rdd_updated.take(10)
###Output
[('gs', ('aaaaaaaaaaaaa', 'bbbbb')), ('gg', ('aaaaaaaaaaaaaaa', 'bb')), ('yq', ('aaaaaaaa', 'bb')), ('gc', ('aaaaaaaaaaaaaaaaaa', 'b')), ('go', ('aaaaaaaaaaaaaaa', 'b')), ('gk', ('aaaaaaaaaaaaa', 'b')), ('lf', ('aaaaaaaaaaaaaaaa', 'bb')), ('iq', ('aaaaaaaaaaaaaaaaa', 'bbb')), ('ln', ('aaaaaaaaaaaaaaaaa', 'bb')), ('dr', ('aaaaaaaaaaaaa', 'b'))]
###Markdown
Using `partitionBy` The `join` operation will hash all the keys of both `rdd` and `rdd_new`, sending keys with the same hashes to the same node for the actual join operation. There is a lot of unnecessary data transfer. Since `rdd` is a much larger data set than `rdd_new`, we can instead fix the partitioning of `rdd` and just transfer the keys of `rdd_new`. This is done by `rdd.partitionBy(numPartitions)` where `numPartitions` should be at least twice the number of cores. From the R docs for `partitionBy`: ```This function operates on RDDs where every element is of the form list(K, V) or c(K, V). For each element of this RDD, the partitioner is used to compute a hash function and the RDD is partitioned using this hash value.``` In other words, which partition a data element is sent to depends on the key value.
###Code
rdd_A = sc.parallelize([1, 2, 3, 4, 2, 4, 1]).map(lambda x: (x, x))
for item in rdd_A.partitionBy(4).glom().collect():
print(item)
rdd_B = sc.parallelize([(4,'a'), (1,'b'), (2, 'c'), (3, 'd'), (4,'e'), (1, 'f')])
for item in rdd_B.glom().collect():
print(item)
rdd_comb = rdd_A.join(rdd_B).glom()
###Output
_____no_output_____
###Markdown
**Note**: See how all the items from `rdd_B` have been transferred to the partitions created by `rdd_A`, but the items from `rdd_A` have not moved. If `rdd_A` is much larger than `rdd_B` then this minimizes the amount of data transfer.
###Code
for item in rdd_comb.collect():
print(item)
###Output
[]
[(1, (1, 'f')), (1, (1, 'b')), (1, (1, 'f')), (1, (1, 'b'))]
[(2, (2, 'c')), (2, (2, 'c'))]
[(3, (3, 'd'))]
[(4, (4, 'a')), (4, (4, 'e')), (4, (4, 'a')), (4, (4, 'e'))]
[]
[]
[]
###Markdown
Applying this to our word counts
###Code
rdd2 = sc.parallelize(data).reduceByKey(lambda x, y: x+y)
rdd2 = rdd2.partitionBy(10).cache()
rdd2_updated = rdd2.join(rdd_new)
rdd2_updated.take(10)
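# (Added note) Because rdd2 was repartitioned with partitionBy() and cached, its partitioner
# is preserved, so the join above only needs to shuffle the (much smaller) keys of rdd_new.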
spark.stop()
###Output
_____no_output_____ |
src/reddit/LDA News Articles - Data Set Compilation.ipynb | ###Markdown
Aggregated in Time Series
###Code
### get date dataframes
df_date = pd.read_pickle(DATA_DIR + 'reddit_2019jun16tojul1_dates.pkl')
df_date = df_date.append(pd.read_pickle(DATA_DIR + 'reddit_2019_dates.pkl'))
df_date = df_date.append(pd.read_csv(DATA_DIR + 'dates_missing_ids.csv', index_col=0))
df2 = df.merge(df_date, on='id', how='left') #df.join(df_date, on='id', how='left')
df2['day'] = pd.to_datetime(df2['created_utc'], yearfirst=True).dt.round('d')
df2['week']= df2['day'] - pd.to_timedelta(df2['day'].dt.dayofweek, unit='d')
df2['candidate_text'] = df2.candidate_text.map(lambda x: list(x))
df2['counter'] = 1
df2.counter = df2.counter.astype('object')
time = 'week' # day
def vector_concatenator(row):
vector = row.topic_titles
v_final = dict()
for t in vector:
if t[0] in v_final.keys():
v_final[t[0]] += t[1]
else:
v_final[t[0]] = t[1]
return list(zip(v_final.keys(), map(lambda x: x/row.counter, v_final.values())))
def vector_counter(row):
vector = row.candidate_text
v_final = dict()
for t in vector:
if t in v_final.keys():
v_final[t] += 1
else:
v_final[t] = 1
return list(zip(v_final.keys(), map(lambda x: x/row.counter, v_final.values())))
def normalize_topics(row):
tt = row.topic_titles.copy()
s = 0
for x in tt:
s += x[1]
for i in range(len(tt)):
tt[i] = (tt[i][0], tt[i][1]*(1.0/s))
return tt
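# (Added example) normalize_topics rescales a row's topic weights so they sum to 1, e.g.
# [('economy', 2.0), ('health', 6.0)] -> [('economy', 0.25), ('health', 0.75)].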
candidate_dfs = dict()
for candidate in set(candidate_dict.values()):
candidate_dfs[candidate] = df2[df2.candidate_text.map(lambda x: candidate in x)]\
[['counter', 'score', 'candidate_text', 'topic_titles', time]]\
.groupby(time).agg(sum)
candidate_dfs[candidate].topic_titles = candidate_dfs[candidate].apply(vector_concatenator, axis=1)
candidate_dfs[candidate].candidate_text = candidate_dfs[candidate].apply(vector_counter, axis=1)
candidate_dfs[candidate].topic_titles = candidate_dfs[candidate].apply(normalize_topics, axis=1)
df2_agg = df2[['counter', 'score', 'candidate_text', 'topic_titles', time]].groupby(time).agg(sum)
df2_agg.topic_titles = df2_agg.apply(vector_concatenator, axis=1)
df2_agg.candidate_text = df2_agg.apply(vector_counter, axis=1)
df2_agg.topic_titles = df2_agg.apply(normalize_topics, axis=1)
candidate_dfs['none'] = df2_agg
!mkdir {DATA_DIR + 'candidate_aggregation/'}
for candidate in set(candidate_dict.values()):
candidate_dfs[candidate].to_csv(DATA_DIR + 'candidate_aggregation/' + candidate + '.csv')
candidate_dfs = dict()
for candidate in set(candidate_dict.values()):
candidate_dfs[candidate] = pd.read_csv(DATA_DIR + 'candidate_aggregation/' + candidate + '.csv')
candidate_dfs['harris']
idxs = [2,4,6,7,8,9,12,13,14,17,18,19,20,21,22,23,24,26,28,29,30,31,32,33,34,35,38,42,44,45,46,47,48,50,51,
52,53,55,58,61,62,63,64,66,67,68,69,70,71,75,77,78,79,80,85,86,87,88,89,91,92,93,96,99]
{topic_titles[i] for i in useful_topics}
def plot_topic_df(dataframe, topic):
def extract_topic(row):
topic_list = list(filter(lambda y: y[0] == topic, row.topic_titles))
if len(topic_list) == 0:
return 0
else:
return topic_list[0][1]
dataframe[dataframe.counter > 10]\
.apply(extract_topic, axis =1)\
.plot(title='{} Topic Frequency'.format(topic).title())
#df2_agg
c = 'none'
plot_topic_df(candidate_dfs[c][candidate_dfs[c].counter > 10], 'mueller report')
def topic_df(dataframe, topic):
def extract_topic(row):
topic_list = list(filter(lambda y: y[0] == topic, row.topic_titles))
if len(topic_list) == 0:
return 0
else:
return topic_list[0][1]
return dataframe[dataframe.counter > 10]\
.apply(extract_topic, axis =1)
topic_df(candidate_dfs[c][candidate_dfs[c].counter > 10], 'israel')
df2[(df2.candidate_text.map(lambda x: c in x)) & (df2.day == '2019-01-25')].loc[8293]
###Output
_____no_output_____ |
Restricted_Boltzmann_Machine.ipynb | ###Markdown
Import Libraries
###Code
import tensorflow as tf #Deep learning library
import numpy as np #Matrix algebra library
from tensorflow.examples.tutorials.mnist import input_data #Import training data
import pandas as pd #Database management library
import matplotlib.pyplot as plt #Visualization library
%matplotlib inline
from sklearn.svm import LinearSVC #For linear image classification
from sklearn.model_selection import GridSearchCV #For hyperparameter optimization
###Output
_____no_output_____
###Markdown
Load MNIST Dataset & Prepare Data for RBM We will be building this RBM in Tensorflow and testing it on the MNIST dataset. We will not need one_hot encoding as we will be using our RBM to extract important features from our image data to be used in a linear classifier.
###Code
mnist = input_data.read_data_sets("MNIST_data/", one_hot=False)
trX, trY, teX, teY = mnist.train.images, mnist.train.labels, mnist.test.images, mnist.test.labels
###Output
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
###Markdown
Restricted Boltzmann Machine Who is this explanation for? At the risk of sacrificing some important information, I will be discussing this with you as I would a close friend who knew a little about deep learning. Sometimes, I think it's nice to get a general overview of how something works before diving into the specifics. If you are someone like me who was taking Geoffrey Hinton's Coursera course and had no idea what he was talking about when he got into Restricted Boltzmann Machines...keep reading! How will we be utilizing this RBM? When my research team began developing a new method of feature engineering for image classification, they were very interested to see how it compared to the classic RBM. Naturally, I was given the tedious task of building one and producing benchmarks for a variety of datasets. The goal of this tutorial is to utilize the RBM to extract important features from an image, and then use those features in a simple linear classifier. We want to take image data that is NOT linearly separable and attempt to make it... more linearly separable by using the RBM as a feature engineering tool. This is only one of many use cases for the RBM. How do we do that? The RBM is an unsupervised model, meaning there is no known output we are mapping our data to. We have our input layer (which is just the pixels of our image) and a hidden layer which will, over time, be learning what the most important features are. Now I know I said I am assuming you know a little about deep learning, but before we get into how RBMs work we should probably set a few definitions. Back propagation: This is a tool that updates the weights of our neural network in a direction that minimizes our error. Over time, these weights will allow our neural network (or RBM in this case) to learn whatever it is we are asking it to learn. Dense layer: In our case, this simply means that every neuron in the input layer is connected to every neuron of the hidden layer. This does not mean that neurons in the input layer are connected to other neurons in the input layer. Input layer neurons are only connected to neurons in the hidden layer. Input Layer: The layer of the RBM that contains the original image data. Hidden Layer: The layer of the RBM that is extracting the key features from the image. The goal of the hidden layer is to eventually reconstruct what it sees in the input layer without using all of the original pixels. It only uses the features it deems most important. Let's talk about how this happens. Inside of the RBM So this is what's going down. Let's take one image from the MNIST dataset and discuss what happens as it goes through a restricted Boltzmann machine. MNIST is full of 28x28 images of hand-written digits. First, we take our 28x28 image and flatten it to a 1D array of length 784 (because 28*28 = 784). Now, if you are picturing an artificial neural network in your head, these are all of the neurons of the input layer. Now we need to decide how many neurons we wish to have in the hidden layer. The hidden layer is a magical place. A place where you can reconstruct the original image with fewer neurons than you started with. What?!??!? That's right! The goal of the hidden layer is to extract the most important features from the original image and then use those features to reconstruct the original image! So earlier I asked the question: How many neurons should we have in the hidden layer? The answer: I don't know.
As you may know by now, machine learning is an iterative process and finding the magic number will take some time. Often, people will reduce the number of neurons in the hidden layer, for example, going from 784 neurons to 650 neurons. Thus, the RBM now has to work hard to capture the essence of the original image in only 650 neurons. How does it do this? Well, let's stop and recap what we are currently looking at. 1) The image gets flattened into a 784-length array and prepared for the input layer of the RBM. 2) We decided that the hidden layer size is 650. So now we will randomly initialize the weights that connect each neuron of the input layer to each neuron of the hidden layer. These weights connecting the two layers are what is going to allow the RBM to model the original image with a smaller number of neurons. Using back propagation, the weights will, over time, learn how to help the hidden layer represent the original image with a smaller number of pixels. How do we know if it's working? So we have inputted our image, created our hidden layer, and our weights are randomly initialized and ready to go. Now what? Well, the RBM is going to attempt to reconstruct the original 28*28 image with only 650 neurons. On its first go around, it will probably do pretty badly considering we gave the model random weights to work with. We can check how badly it did by measuring the difference between the hidden layer reconstruction and the original image. After that, the RBM will go through another iteration, this time updating the weights to help the hidden layer reconstruction better represent the original image, and again, we check the difference between the original image and the reconstruction. This process goes on and on until we find that our error is low and the reconstructed image closely resembles the original image. Linear classification time! Now that our hidden layer is able to reconstruct the original image extremely well with only 650 neurons, let's see how these 650 neurons perform in a linear classifier. With no feature engineering, meaning just using our 784 original neurons as our training data, I was able to get 91% accuracy using a LinearSVC. After running the RBM, extracting the 650 neurons containing our high quality features from the hidden layer, and using those features as training data, I got 97% accuracy! I could probably get better results if I experimented with different numbers of neurons in my hidden layer. Let me know if you beat 97%! Now it's your turn! Follow along with the code below. I have heavily commented it for you and if you have any questions feel free to email me. Contrastive Divergence If you are going through Geoffrey Hinton's course and learned about Contrastive Divergence, I have a TensorFlow implementation for you. Feel free to try it out. I personally have never come across a time where CD-n for n > 1 performed better than n = 1, but after hearing so much about it I had to try. For those wondering, all Contrastive Divergence means is, instead of the RBM iterating over and over again going from original input to reconstruction > original input to better reconstruction and so on... now you are using the reconstruction as the input layer in the second iteration... so original input to reconstruction to reconstruction of the reconstruction and so on. It's not very popular but can be fun to play with!
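As a compact summary (added here; the notation follows the `train()` code below): with $v_0$ the input batch, $h_0$ its sampled hidden layer, and $v_1, h_1$ the reconstruction and its hidden probabilities, the CD-1 weight update used below is $\Delta W = \eta\,(v_0^\top h_0 - v_1^\top h_1)/N$, where $\eta$ is the learning rate and $N$ is the batch size, with analogous mean-difference updates for the visible and hidden biases.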
###Code
class RBM(object):
def __init__(self, input_size, output_size, learning_rate, batch_size):
self.input_size = input_size #Size of the input layer
self.output_size = output_size #Size of the hidden layer
self.epochs = 2 #How many times we will update the weights
self.learning_rate = learning_rate #How big of a weight update we will perform
self.batch_size = batch_size #How many images will we "feature engineer" at at time
self.new_input_layer = None #Initalize new input layer variable for k-step contrastive divergence
self.new_hidden_layer = None
self.new_test_hidden_layer = None
#Here we initialize the weights and biases of our RBM
#If you are wondering, the 0 is the mean of the distribution we are getting our random weights from.
#The .01 is the standard deviation.
self.w = np.random.normal(0,.01,[input_size,output_size]) #weights
self.hb = np.random.normal(0,.01,[output_size]) #hidden layer bias
self.vb = np.random.normal(0,.01,[input_size]) #input layer bias (sometimes called visible layer)
#Calculates the sigmoid probabilities of input * weights + bias
#Here we multiply the input layer by the weights and add the bias
#This is the phase that creates the hidden layer
def prob_h_given_v(self, visible, w, hb):
return tf.nn.sigmoid(tf.matmul(visible, w) + hb)
#Calculates the sigmoid probabilities of input * weights + bias
#Here we multiply the hidden layer by the weights and add the input layer bias
#This is the reconstruction phase that recreates the original image from the hidden layer
def prob_v_given_h(self, hidden, w, vb):
return tf.nn.sigmoid(tf.matmul(hidden, tf.transpose(w)) + vb)
#Returns new layer binary values
#This function returns a 0 or 1 based on the sign of the probabilities passed to it
#Our RBM will be utilizing binary features to represent the images
#This function just converts the features we have learned into a binary representation
def sample_prob(self, probs):
return tf.nn.relu(tf.sign(probs - tf.random_uniform(tf.shape(probs))))
def train(self, X, teX):
#Initalize placeholder values for graph
#If this looks strange to you, then you have not used Tensorflow before
_w = tf.placeholder(tf.float32, shape = [self.input_size, self.output_size])
_vb = tf.placeholder(tf.float32, shape = [self.input_size])
_hb = tf.placeholder(tf.float32, shape = [self.output_size])
#initialize previous variables
#we will be saving the weights of the previous and current iterations
pre_w = np.random.normal(0,.01, size = [self.input_size,self.output_size])
pre_vb = np.random.normal(0, .01, size = [self.input_size])
pre_hb = np.random.normal(0, .01, size = [self.output_size])
#initialize current variables
#we will be saving the weights of the previous and current iterations
cur_w = np.random.normal(0, .01, size = [self.input_size,self.output_size])
cur_vb = np.random.normal(0, .01, size = [self.input_size])
cur_hb = np.random.normal(0, .01, size = [self.output_size])
#Placeholder variable for the input layer
v0 = tf.placeholder(tf.float32, shape = [None, self.input_size])
#pass probabilities of input * w + b into sample prob to get binary values of hidden layer
h0 = self.sample_prob(self.prob_h_given_v(v0, _w, _hb ))
#pass probabilities of new hidden unit * w + b into sample prob to get new reconstruction
v1 = self.sample_prob(self.prob_v_given_h(h0, _w, _vb))
#Just get the probabilities of the next hidden layer. We won't need the binary values.
#The probabilities here help calculate the gradients during back prop
h1 = self.prob_h_given_v(v1, _w, _hb)
#Contrastive Divergence
positive_grad = tf.matmul(tf.transpose(v0), h0) #input' * hidden0
negative_grad = tf.matmul(tf.transpose(v1), h1) #reconstruction' * hidden1
#(pos_grad - neg_grad) / total number of input samples
CD = (positive_grad - negative_grad) / tf.to_float(tf.shape(v0)[0])
#This is just the definition of contrastive divergence
update_w = _w + self.learning_rate * CD
update_vb = _vb + tf.reduce_mean(v0 - v1, 0)
update_hb = _hb + tf.reduce_mean(h0 - h1, 0)
#MSE - This is our error function
err = tf.reduce_mean(tf.square(v0 - v1))
#Will hold new visible layer.
errors = []
hidden_units = []
reconstruction = []
test_hidden_units = []
test_reconstruction=[]
#The next four lines of code initialize our Tensorflow graph and create mini batches
#The mini batch code is from cognitive class. I love the way they did this. Just giving credit!
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch in range(self.epochs):
for start, end in zip(range(0, len(X), self.batch_size), range(self.batch_size, len(X), self.batch_size)):
batch = X[start:end] #Mini batch of images taken from training data
#Feed in batch, previous weights/bias, update weights and store them in current weights
cur_w = sess.run(update_w, feed_dict = {v0:batch, _w:pre_w , _vb:pre_vb, _hb:pre_hb})
cur_hb = sess.run(update_hb, feed_dict = {v0:batch, _w:pre_w , _vb:pre_vb, _hb:pre_hb})
cur_vb = sess.run(update_vb, feed_dict = {v0:batch, _w:pre_w , _vb:pre_vb, _hb:pre_hb})
#Save weights
pre_w = cur_w
pre_hb = cur_hb
pre_vb = cur_vb
#At the end of each iteration, the reconstructed images are stored and the error is outputted
reconstruction.append(sess.run(v1, feed_dict={v0: X, _w: cur_w, _vb: cur_vb, _hb: cur_hb}))
print('Learning Rate: {}: Batch Size: {}: Hidden Layers: {}: Epoch: {}: Error: {}:'.format(self.learning_rate, self.batch_size,
self.output_size, (epoch+1),
sess.run(err, feed_dict={v0: X, _w: cur_w, _vb: cur_vb, _hb: cur_hb})))
#Store final reconstruction in RBM object
self.new_input_layer = reconstruction[-1]
#Store weights in RBM object
self.w = pre_w
self.hb = pre_hb
self.vb = pre_vb
#This is used for Contrastive Divergence.
#This function makes the reconstruction your new input layer.
def rbm_output(self, X):
input_x = tf.constant(X)
_w = tf.constant(self.w)
_hb = tf.constant(self.hb)
_vb = tf.constant(self.vb)
out = tf.nn.sigmoid(tf.matmul(input_x, _w) + _hb)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
return sess.run(out)
#A function for training your RBM
#Keep k at 1 for a traditional RBM.
def train_cd_k(k, input_size, output_size, trX, teX, learning_rate, batch_size):
cdk_train_input = trX #Training data
cdk_test_input = teX #Testing data
cdk_train_hidden = None #Variable to store hidden layer for training data
cdk_test_hidden = None #Variable to store hidden layer for testing data
rbm = RBM(input_size, output_size, learning_rate, batch_size)
#Loop for contrastive divergence.
for i in range(k):
print('CD: {}'.format(int(i+1)))
rbm.train(cdk_train_input, cdk_test_input) #Using reconstruction as input layer for CD
cdk_train_input = rbm.new_input_layer
cdk_train_hidden = rbm.rbm_output(cdk_train_input)
cdk_test_hidden = rbm.rbm_output(cdk_test_input)
return [cdk_train_hidden, cdk_test_hidden, cdk_train_input]
with tf.Graph().as_default():
temp = train_cd_k(1,784,650,trX,teX,.001,32)
lsvc_RBM = LinearSVC()
lsvc_RBM.fit(temp[0], trY) # temp[0] holds the RBM hidden-layer features for the training set
lsvc_RBM.score(temp[1], teY) # temp[1] holds the RBM hidden-layer features for the test set
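# (Added sketch, not in the original notebook) The 91% baseline mentioned above: a LinearSVC
# trained directly on the raw 784-pixel inputs. This may take a while to fit.
lsvc_raw = LinearSVC()
lsvc_raw.fit(trX, trY)
lsvc_raw.score(teX, teY)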
###Output
_____no_output_____ |
deep-learning/Faster R-CNN/preprocess_data.ipynb | ###Markdown
This module preprocesses the data to create the regression and classification labels used by the Region Proposing Network
###Code
import numpy as np
SCALE = 600/18
OUT_LEN = 17
def get_box_extra(y1, x1, y2, x2):
center_x = (x1 + x2) // 2
center_y = (y1 + y2) // 2
width = x2 - x1
height = y2 - y1
return center_x, center_y, width, height
def create_anchors_map():
anchors_map = np.zeros((17, 17, 3, 3), dtype=[('y1', 'i4'),('x1', 'i4'), ('y2', 'i4'), ('x2', 'i4')])
for i in range(17):
for j in range(17):
for r, ratio in enumerate(((1, 1), (0.75, 1.5), (1.5, 0.75))):
for s, size in enumerate((128, 256, 512)):
anchor_x_center = i * SCALE
anchor_x1 = anchor_x_center - ratio[1] * size / 2
anchor_x2 = anchor_x_center + ratio[1] * size / 2
anchor_y_center = j * SCALE
anchor_y1 = anchor_y_center - ratio[0] * size / 2
anchor_y2 = anchor_y_center + ratio[0] * size / 2
anchors_map[i][j][r][s] = (anchor_y1, anchor_x1, anchor_y2, anchor_x2)
return anchors_map
anchors_map = create_anchors_map()
def prepare_output_values(row_dict):
# output of last regression layer per image: (17, 17, 36)
# 17 anchors and 4 (dimensions) * 9 (scales & sizes)
y_regr = np.zeros((17,17,3,3,4)) + 100
y_class = np.zeros((17,17,3,3))
for obj in row_dict['objects']['bbox']:
groundtruth_y1, groundtruth_x1, groundtruth_y2, groundtruth_x2 = bbox_perc_to_pixels(obj)
groundtruth_center_x, groundtruth_center_y, groundtruth_width, groundtruth_height = get_box_extra(
groundtruth_y1, groundtruth_x1, groundtruth_y2, groundtruth_x2)
###################
## REGRESSION
anchor_center_x = (anchors_map['x1'] + anchors_map['x2']) // 2
anchor_center_y = (anchors_map['y1'] + anchors_map['y2']) // 2
anchor_width = anchors_map['x2'] - anchors_map['x1']
anchor_height = anchors_map['y2'] - anchors_map['y1']
current_r = np.zeros(y_regr.shape)
current_r[:,:,:,:,0] = (groundtruth_center_x - anchor_center_x) / anchor_width # t_x
current_r[:,:,:,:,1] = (groundtruth_center_y - anchor_center_y) / anchor_height # t_y
current_r[:,:,:,:,2] = np.log(groundtruth_width / anchor_width) # t_w
current_r[:,:,:,:,3] = np.log(groundtruth_height / anchor_height) # t_h
# Overwrite anchor distances with those of the closer ground-truth object.
# closer = minimum of |t_x| + |t_y| + |t_w| + |t_h|
current_r_sum = np.sum(np.abs(current_r), axis = -1)
y_regr_sum = np.sum(np.abs(y_regr), axis = -1)
# The boolean mask spans the first four axes, so complete (t_x, t_y, t_w, t_h) vectors are replaced, not just the last axis.
y_regr[current_r_sum < y_regr_sum] = current_r[current_r_sum < y_regr_sum]
###################
## CLASSIFICATION
x1 = np.maximum(groundtruth_x1, anchors_map['x1'])
y1 = np.maximum(groundtruth_y1, anchors_map['y1'])
x2 = np.minimum(groundtruth_x2, anchors_map['x2'])
y2 = np.minimum(groundtruth_y2, anchors_map['y2'])
intersection_area = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
# Intersection over Union
groundtruth_area = (groundtruth_x2 - groundtruth_x1) * (groundtruth_y2 - groundtruth_y1)
anchor_area = anchor_width * anchor_height
current_iou = intersection_area / (groundtruth_area + anchor_area - intersection_area)
# Overwrite the IOU if ground-truth objects with higher iou were found
y_class = np.maximum(y_class, current_iou)
return y_regr, y_class
def anchor_and_distance_to_groundtruth(anchor_y1, anchor_x1, anchor_y2, anchor_x2, distance):
t_x, t_y, t_w, t_h = distance
anchor_center_x, anchor_center_y, anchor_width, anchor_height = get_box_extra(
anchor_y1, anchor_x1, anchor_y2, anchor_x2)
groundtruth_center_x = anchor_center_x + t_x * anchor_width
groundtruth_center_y = anchor_center_y + t_y * anchor_height
groundtruth_width = anchor_width * np.e ** t_w
groundtruth_height = anchor_height * np.e ** t_h
return groundtruth_center_x - groundtruth_width / 2, \
groundtruth_center_y - groundtruth_height / 2, groundtruth_width, groundtruth_height
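# (Added example) A quick round trip for a single anchor, assuming the parameterization above:
# a 128x128 anchor at the origin shifted by (t_x, t_y) = (0.1, 0.1) with unchanged scale.
x1, y1, w, h = anchor_and_distance_to_groundtruth(0, 0, 128, 128, (0.1, 0.1, 0.0, 0.0))
print(x1, y1, w, h) # approximately 12.8, 12.8, 128.0, 128.0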
###Output
_____no_output_____ |
1. FUNDAMENTOS/3. PROGRAMACION ESTADISTICA CON PYTHON/4. work in groups/spotify.ipynb | ###Markdown
Dataset description We chose this dataset because we found it very interesting to be able to analyse the reasons why a song becomes popular on Spotify, a music streaming platform used all over the world. We have the following variables:* **Popularity (popu)**: measures the popularity of a song on a scale from 0 to 100. The closer to 100, the more popular the song.* **Musical genre (genre)**: has been split into **pop** and **no pop** to simplify the analysis, since the **no pop** genres were many and very scattered.* **Year (year)**: the years run from 2010 to 2019, both inclusive, and have been grouped into two groups: **2010 to 2014** and **2015 to 2019**. This way we split the data into "more recent" and "older" songs and can clearly observe the differences, if there are any.* **Beats per minute (bpm)**: the tempo of the song, measured on a scale from 0 to 210. The higher the value, the more beats per minute.* **Energy (nrgy)**: measures how energetic the song is on a scale from 0 to 100. The higher the value, the more energy the song has.* **Danceability (dnce)**: measures how danceable a song is on a scale from 0 to 100. The higher the score, the easier it is to dance to the song.* **Valence (val)**: measures the "mood" of the song on a scale from 0 to 100. The closer to 100, the more positive the song. This variable has been categorised to obtain one more qualitative variable. * **Duration (dur)**: the length of the song in seconds. This variable has been categorised to group songs by longer or shorter duration, since we considered the exact number of seconds not to be relevant.* **Lyrics (lyrics)**: an index from 0 to 100 of the sung portion of a song. The higher the score, the larger the sung portion of the song.
###Code
import os
import numpy as np
import pandas as pd
from pandas.api.types import CategoricalDtype
import matplotlib.pyplot as plt
import seaborn as sns
import scipy.stats as stats
from scipy.stats.stats import pearsonr
from statsmodels.formula.api import ols
from google.colab import drive
drive.mount('mydrive')
spoti = pd.read_csv ("/content/mydrive/MyDrive/EDEM/PEP/spotify.csv", sep= ";")
spoti
###Output
_____no_output_____
###Markdown
Pre-analysis hypotheses H1. If a song is pop, it will be more popular. H2. The more recent the year, the higher the popularity index. H3. If a song has a faster tempo, it will be more popular. H4. If a song is more energetic, it will be more popular. H5. The more danceable the song, the more popular it is. H6. The less sad the song, the more popular it is. H7. If a song is long, it will be more popular. H8. If a song has more sung content (lyrics), it will be more popular. 1. Popularity (target variable) Description
###Code
popu = spoti['popu'].describe()
popu
n = popu[0]
m_popu = popu[1]
sd_popu = popu[2]
###Output
_____no_output_____
###Markdown
Plot
###Code
popularidad = spoti['popu']
plt.hist(popularidad, bins=15, edgecolor='black')
plt.xlabel('Popularidad')
plt.xticks(np.arange(0, 100, step= 10))
plt.ylabel('Frequencia')
props = dict(boxstyle= 'round', facecolor='white', lw=0.5)
plt.text(2,100,'Media:66.62''\n''N:603' '\n' 'SD: 14.26', bbox=props)
plt.title('Figura 1: Número de canciones por popularidad ''\n')
plt.axvline(66.52, linewidth=1, linestyle='solid', color = 'red')
plt.axvline(52, linewidth=1, linestyle= 'dashed', color= 'green')
plt.axvline(81.04, linewidth=1, linestyle= 'dashed', color= 'green')
plt.legend(labels=['Media', 'SD ± Media'])
plt.show()
###Output
_____no_output_____
###Markdown
2. Genre Description
###Code
gen = spoti.genre.describe()
gen
gen_table = spoti.groupby(['genre']).size()
print(gen_table)
n=gen_table.sum()
gen_table2 = (gen_table/n)*100
print(gen_table2)
n=gen_table.sum()
###Output
genre
no pop 135
pop 468
dtype: int64
genre
no pop 22.38806
pop 77.61194
dtype: float64
###Markdown
Plot
###Code
bar_list = ['no pop', 'pop']
plt.bar(bar_list, gen_table2, edgecolor='black')
plt.title("Figura 2.1. Porcentaje canciones no pop y pop")
plt.ylabel('Porcentaje')
plt.xlabel('Género')
plt.text(0,50,'n: 603')
props = dict(boxstyle='round', facecolor='white',lw=0.5)
textstr = '$\mathrm{n}=%.0f$'%(n)
plt.text (0,50, textstr , bbox=props)
plt.show()
###Output
_____no_output_____
###Markdown
Comparison with popu
###Code
pop_no_popu=spoti.loc[spoti.genre=='no pop', "popu"]
pop_popu=spoti.loc[spoti.genre=='pop', "popu"]
res = stats.f_oneway(pop_no_popu,pop_popu)
print(res)
plt.figure(figsize=(4,4))
ax = sns.pointplot(x="genre", y="popu", data=spoti,capsize=0.05, ci=95, join=0, order=['no pop', 'pop'])
ax.set_ylabel('Popularidad')
plt.yticks(np.arange(60, 80, step=5))
plt.axhline(y=spoti['popu'].mean(),linewidth=1,linestyle= 'dashed',color="green")
props = dict(boxstyle='round', facecolor='white', lw=0.5)
plt.text(0.9,72,'Mean:66.52''\n''n:603''\n' 'Pval.:0.84', bbox=props)
plt.xlabel('Género')
plt.title('Figura 2.2. Media de popularidad según el género.''\n')
plt.show()
###Output
_____no_output_____
###Markdown
Mini-conclusion Since the p-value is greater than 0.05, we accept the null hypothesis. Popularity does not change significantly depending on whether a song is pop or non-pop. 3. Year Description
###Code
spoti.year.describe()
mytable = spoti.groupby(['year']).size()
print(mytable)
n = mytable.sum()
mytable2 = (mytable/n)*100
print(mytable2)
spoti.loc[(spoti['year']==2010),"year2"] = "2010-2014"
spoti.loc[(spoti['year']==2011),"year2"] = "2010-2014"
spoti.loc[(spoti['year']==2012),"year2"] = "2010-2014"
spoti.loc[(spoti['year']==2013),"year2"] = "2010-2014"
spoti.loc[(spoti['year']==2014),"year2"] = "2010-2014"
spoti.loc[(spoti['year']==2015),"year2"] = "2015-2019"
spoti.loc[(spoti['year']==2016),"year2"] = "2015-2019"
spoti.loc[(spoti['year']==2017),"year2"] = "2015-2019"
spoti.loc[(spoti['year']==2018),"year2"] = "2015-2019"
spoti.loc[(spoti['year']==2019),"year2"] = "2015-2019"
pd.crosstab(spoti.year, spoti.year2)
spoti.year2.describe()
mytable3 = spoti.groupby(['year2']).size()
n = mytable.sum()
mytable4 = (mytable3/n)*100
###Output
_____no_output_____
###Markdown
Plot
###Code
barlist = ['2010', '2011', '2012', '2013', '2014', '2015', '2016', '2017', '2018', '2019']
plt.bar(barlist, mytable2)
plt.ylabel('Percentage')
plt.xlabel('Year')
plt.title('Figura 3.1. Porcentaje de canciones cada año')
plt.text(0.5,15,'n: 603')
props = dict(boxstyle='round', facecolor='white',lw=0.5)
textstr = '$\mathrm{n}=%.0f$'%(n)
plt.text (0.5,15, textstr , bbox=props)
plt.show()
barlist2 = ['1: 2010-2014', '2: 2015-2019']
plt.bar(barlist2, mytable4)
plt.ylabel('Percentage')
plt.xlabel('Years')
plt.title('Figura 3.2. Porcentaje de canciones por años')
plt.text(0,50,'n: 603')
props = dict(boxstyle='round', facecolor='white',lw=0.5)
textstr = '$\mathrm{n}=%.0f$'%(n)
plt.text (0,50, textstr , bbox=props)
plt.show()
###Output
_____no_output_____
###Markdown
Comparison with popu
###Code
spoti.groupby('year2').popu.mean()
popu10_14 = spoti.loc[spoti.year2=='2010-2014', 'popu']
popu15_19 = spoti.loc[spoti.year2=='2015-2019', 'popu']
res= stats.stats.ttest_ind(popu10_14, popu15_19, equal_var=False)
print(round(res[1],3), round(res[0],3))
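# (Added sketch) A simple effect-size check to accompany the t-test: Cohen's d from the two
# group means and a pooled standard deviation (this simple pooling assumes similar group sizes).
pooled_sd = np.sqrt((popu10_14.var(ddof=1) + popu15_19.var(ddof=1)) / 2)
print("Cohen's d:", round((popu15_19.mean() - popu10_14.mean()) / pooled_sd, 3))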
plt.figure(figsize=(5,5))
ax = sns.pointplot(x= 'year2', y= 'popu',
data= spoti, ci= 99, join=0)
plt.yticks(np.arange(60,80, step=5))
plt.axhline(y=spoti.popu.mean(), linewidth=1, linestyle = 'dashed', color='green')
ax.set_ylabel('Popularidad')
props= dict(boxstyle='round', facecolor='white', lw=0.5)
plt.text(0.05,70,'Media: 66.5' '\n' 'n: 603' '\n' 't: -4.11' '\n' 'Pval: 0.00', bbox=props)
plt.xlabel('Años')
plt.title('Figura 3.3. Popularidad media por años''\n')
plt.show()
###Output
_____no_output_____
###Markdown
Mini-conclusion We conclude that the year and popularity variables are related and that there are statistically significant differences between the groups. We can state this based on the p-value below 0.05 returned by the Student's t-test, which rejects the H0 that the groups are equal. 4. Beats per minute (bpm) Description
###Code
res = spoti.bpm.describe().round(3)
res
m = res[1]
sd = res[2]
n = res[0]
print("Mean:",m,"\n","Standard Deviation:",sd,"\n","N:",n)
###Output
Mean: 118.736
Standard Deviation: 24.32
N: 603.0
###Markdown
Plot
###Code
plt.figure(figsize=(5,5))
x=spoti.bpm
plt.hist (x, bins=10,edgecolor="black")
plt.title("Figura 4.1, Ritmo (bpm) de las canciones de Spotify.")
plt.xlabel("Beats per minute (bpm)")
plt.ylabel("Canciones")
props = dict (boxstyle="round", facecolor ="white", lw =1)
plt.xticks(np.arange(50, 225, step=25))
plt.yticks(np.arange(0, 225, step=25))
plt.text(45, 170, "n: 603" "\n" "Mean: 118.74" "\n" "std: 24.32", bbox=props)
plt.axvline(x=m, linewidth=1, linestyle= 'solid',color="red", label='Mean')
plt.axvline(x=(m+sd) , linewidth=1, linestyle= 'dashed',color="darkgreen", label='m + sd')
plt.axvline(x=(m-sd), linewidth=1, linestyle= 'dashed',color="darkgreen", label='m - sd')
plt.legend(labels=['Media', 'SD ± Media'])
plt.show()
###Output
_____no_output_____
###Markdown
Comparison with popu
###Code
x= spoti.bpm
y=spoti["popu"]
pearsonr (x,y)
r, p_val = pearsonr(x,y)
n = len(spoti["popu"])
print('r:', round(r,3), 'P.Val:', round(p_val,3), 'n:', n)
plt.figure(figsize=(5,5))
x= spoti.bpm
y=spoti["popu"]
plt.scatter (x, y, s=20, facecolors="none", edgecolors="C0")
plt.xticks(np.arange(0,225,step=25))
plt.yticks(np.arange(0,110,step=10))
plt.title("Figura 4.2, Popularidad de las canciones sobre su ritmo (bpm)")
plt.xlabel("Beats per minute (bpm)")
plt.ylabel("Popularidad")
props =dict(boxstyle ="round", facecolor ="white", lw=0.5)
textstr = '$\mathrm{r}=%.2f$\n$\mathrm{P.Val:}=%.3f$\n$\mathrm{n}=%.0f$'%(r, p_val, n)
plt.text (10,10, textstr , bbox=props)
plt.show()
###Output
_____no_output_____
###Markdown
Mini-conclusion Since the p-value is greater than 0.05, the null hypothesis is not rejected. The tempo of a song (bpm) is not related to its popularity. 5. Energy Description
###Code
res= spoti.nrgy.describe().round(3)
res
m = res[1]
sd = res[2]
n = res[0]
print("Mean:",m,"\n","Standard Deviation:",sd,"\n","N:",n)
###Output
Mean: 70.62
Standard Deviation: 16.055
N: 603.0
###Markdown
Plot
###Code
plt.figure(figsize=(5,5))
x=spoti.nrgy
plt.hist (x, bins=10,edgecolor="black")
plt.title("Figura 5.1, Energía de las canciones de Spotify.")
plt.xlabel("Energía")
plt.ylabel("Canciones")
props = dict (boxstyle="round", facecolor ="white", lw =1)
plt.xticks(np.arange(0, 101, step=10))
plt.yticks(np.arange(0, 175, step=10))
plt.text(10, 110, "n: 603" "\n" "Mean: 70.602" "\n" "std: 16.055", bbox=props)
plt.axvline(x=m, linewidth=1, linestyle= 'solid',color="red", label='Mean')
plt.axvline(x=(m+sd) , linewidth=1, linestyle= 'dashed',color="darkgreen", label='m + sd')
plt.axvline(x=(m-sd), linewidth=1, linestyle= 'dashed',color="darkgreen", label='m - sd')
plt.legend(labels=['Media', 'SD ± Media'])
plt.show()
###Output
_____no_output_____
###Markdown
Comparison with popu
###Code
x= spoti.nrgy
y=spoti["popu"]
pearsonr (x,y)
r, p_val = pearsonr(x,y)
n = len(spoti["popu"])
print('r:', round(r,3), 'P.Val:', round(p_val,3), 'n:', n)
plt.figure(figsize=(5,5))
x= spoti.nrgy
y=spoti["popu"]
plt.scatter (x, y, s=20, facecolors="none", edgecolors="C0")
plt.xticks(np.arange(0,110,step=10))
plt.yticks(np.arange(0,110,step=10))
plt.title("Figura 5.2, Popularidad de las canciones sobre su energía")
plt.xlabel("Energía")
plt.ylabel("Popularidad")
props =dict(boxstyle ="round", facecolor ="white", lw=0.5)
textstr = '$\mathrm{r}=%.2f$\n$\mathrm{P.Val:}=%.3f$\n$\mathrm{n}=%.0f$'%(r, p_val, n)
plt.text (10,10, textstr , bbox=props)
plt.show()
###Output
_____no_output_____
###Markdown
Mini-conclusion Since the p-value is less than 0.05, the null hypothesis is rejected. The energy of a song is related to its popularity. 6. Danceability Description
###Code
dance = spoti['dnce'].describe()
dance
n = dance[0]
m_dance = dance[1]
sd_dance = dance[2]
###Output
_____no_output_____
###Markdown
Plot
###Code
x=spoti['dnce']
plt.hist(x,edgecolor='black',bins=20)
plt.xticks(np.arange(0,100, step=10))
plt.title("Figura 6.1. Danzabilidad")
plt.ylabel('Frecuencia')
plt.xlabel('Nivel de danzabilidad')
props=dict(boxstyle='round', facecolor='white', lw=0.5)
textstr= '$\mathrm{Media}=%.2f$\n$\mathrm{SD}=%.3f$\n$\mathrm{N}=%.0f$'%(m_dance, sd_dance, n)
plt.text (3,55, textstr , bbox=props)
plt.axvline(x=m_popu, linewidth=1, linestyle= 'solid', color="red", label='Mean')
plt.axvline(x=m_popu-sd_popu, linewidth=1, linestyle= 'dashed', color="green", label='- 1 S.D.')
plt.axvline(x=m_popu + sd_popu, linewidth=1, linestyle= 'dashed', color="green", label='+ 1 S.D.')
plt.legend(labels=['Media', 'SD ± Media'])
plt.show()
###Output
_____no_output_____
###Markdown
Comparison with popu
###Code
x=spoti['dnce']
y=spoti['popu']
pearsonr(x,y)
r, p_val=pearsonr(x,y)
print(r,p_val)
print ('r:', round(r,3), 'P.Val:', round(p_val,3), 'n:', n)
plt.figure(figsize=(5,5))
plt.scatter(spoti['dnce'], spoti['popu'], s=20, facecolors='none', edgecolors='C0')
plt.xticks(np.arange(0, 110, step=10))
plt.yticks(np.arange(0, 110, step=10))
plt.title("Figura 6.2. Popularidad según la danzabilidad")
plt.ylabel('Popularidad')
plt.xlabel('Danzabilidad')
props=dict(boxstyle='round', facecolor='white', lw=0.5)
textstr= '$\mathrm{r}=%.2f$\n$\mathrm{P.Val:}=%.3f$\n$\mathrm{n}=%.0f$'%(r, p_val, n)
plt.text (10,10, textstr , bbox=props)
plt.show()
###Output
_____no_output_____
###Markdown
Mini-conclusion Since the p-value is less than 0.05, we reject the null hypothesis and conclude that the variables show statistically significant differences. 7. Valence Description
###Code
res= spoti.val.describe()
print(res)
m = res[1]
sd = res[2]
n = res[0]
###Output
count 603.000000
mean 52.308458
std 22.412200
min 4.000000
25% 35.000000
50% 52.000000
75% 69.000000
max 98.000000
Name: val, dtype: float64
###Markdown
Recoding
###Code
spoti.loc[(spoti['val']<(m-sd)),"val2"] = "Puntuación Baja"
spoti.loc[(spoti['val']>=(m-sd)) & (spoti['val']<(m+sd)),"val2"] = "Puntuación Media"
spoti.loc[(spoti['val']>(m+sd)),"val2"] = "Puntuación Alta"
my_categories=["Puntuación Baja", "Puntuación Media", "Puntuación Alta"]
val_type = CategoricalDtype(categories=my_categories, ordered=True)
spoti["val2"]=spoti.val2.astype(val_type)
spoti.info()
plt.scatter(spoti.val, spoti.val2)
plt.show()
spoti.val2.describe()
mytable = spoti.groupby(['val2']).size()
n = mytable.sum()
mytable2 = (mytable/n)*100
print(mytable2)
###Output
val2
Puntuación Baja 18.242123
Puntuación Media 63.018242
Puntuación Alta 18.739635
dtype: float64
###Markdown
Plot
###Code
barlist = ["Puntuación Baja", "Puntuación Media", "Puntuación Alta"]
plt.bar(barlist, mytable2)
plt.ylabel('Percentage')
plt.xlabel('Valence')
plt.yticks(np.arange(0, 100, step= 10))
props = dict(boxstyle='round', facecolor='white',lw=0.5)
textstr = '$\mathrm{Mean}=%.1f$\n$\mathrm{S.D.}=%.1f$\n$\mathrm{n}=%.0f$'%(m, sd, n)
plt.text (0.2,70, textstr , bbox=props)
plt.title('Figura 7.1: Porcentaje de canciones por índice de valencia''\n')
plt.show()
###Output
_____no_output_____
###Markdown
Comparison with popu
###Code
spoti.groupby('val2').popu.mean()
popu_high = spoti.loc[spoti.val2=='Puntuación Alta', 'popu']
popu_avg = spoti.loc[spoti.val2=='Puntuación Media', 'popu']
popu_low = spoti.loc[spoti.val2=='Puntuación Baja', 'popu']
res= stats.f_oneway(popu_low,popu_avg,popu_high)
print(round(res[0],3),'\n',round(res[1],3))
plt.figure(figsize=(5,5))
ax = sns.pointplot(x= 'val2', y= 'popu',
data= spoti, ci= 99, join=0)
plt.yticks(np.arange(50,90, step=5))
plt.axhline(y=spoti.popu.mean(), linewidth=1, linestyle = 'dashed', color='green')
ax.set_ylabel('Popularidad')
props= dict(boxstyle='round', facecolor='white', lw=0.5)
plt.text(0.01,75,'Mean: 66.5' '\n' 'n: 603' '\n' 't: 1.766' '\n' 'Pval: 0.172', bbox=props)
plt.xlabel('Año')
plt.title('Figura 7.2. Popularidad media por valencia''\n')
plt.show()
###Output
_____no_output_____
###Markdown
Mini-conclusion Given the p-value above 0.05, we accept H0 and conclude that the variables are not related and that there are no statistically significant differences. 8. Duration Description
###Code
print(spoti.dur.describe())
plt.figure(figsize=(15, 5))
sns.set_theme()
ax = plt.hist(x=spoti['dur'], density=True, bins=range(130,430,10), align="left")
ax = sns.kdeplot(data=spoti, x='dur', bw_adjust = 1.3, cut = 0, palette="crest")
ax.axvline(x=spoti.dur.mean(), color='black', linestyle="--")
ax.axvline(x=spoti.dur.median(), color='red', linestyle="--")
ax.legend(labels=['Distribución', 'Media', 'Mediana'])
plt.title('Figura 8.1. Popularidad por durabilidad de la canción''\n')
plt.show()
###Output
_____no_output_____
###Markdown
Recoding To categorise the quantitative variable "duration", the median was chosen as the recoding criterion because duration does not follow a normal distribution, as can be seen in the previous figure. Likewise, the quartiles are taken into account when subdividing the dataset.
###Code
spoti.loc[(spoti.dur<=(spoti.dur.describe()[5])), "dur_cat_d"]= "Short"
spoti.loc[(spoti.dur>=(spoti.dur.describe()[5])), "dur_cat_d"]= "Long"
spoti.loc[spoti.dur_cat_d=='Short', "dur_cat_n"]= 0
spoti.loc[spoti.dur_cat_d=='Long', "dur_cat_n"]= 1
spoti.loc[(spoti.dur<=(spoti.dur.describe()[4])), "dur_cat"]= "Short"
spoti.loc[((spoti.dur>(spoti.dur.describe()[4])) & (spoti.dur<(spoti.dur.describe()[6]))), "dur_cat"]= "Average"
spoti.loc[(spoti.dur>=(spoti.dur.describe()[6])), "dur_cat"]= "Long"
###Output
_____no_output_____
###Markdown
Comparison with popu
###Code
t, p = stats.ttest_ind(spoti.loc[spoti.dur_cat_d == 'Short']['popu'], spoti.loc[spoti.dur_cat_d == 'Long']['popu'])
n = len(spoti.dur)
props = dict(boxstyle='round', facecolor='white', lw=0.5)
textstr = '$\mathrm{r}=%.2f$\n$\mathrm{P.Val:}=%.3f$\n$\mathrm{n}=%.0f$'%(t, p, n)
ax.text(0.05, 0.95, textstr, transform=ax.transAxes, fontsize=14, verticalalignment='top', bbox=props)
plt.figure(figsize=(15,7))
sns.set_theme()
plt.suptitle('Figura 8.2. Popularidad por durabilidad de la canción''\n')
plt.subplot(1,2,1)
ax = sns.pointplot(x='dur_cat', y='popu', data=spoti, join=0, order=['Short', 'Average', 'Long'])
ax.axhline(y=spoti['popu'].mean(), linewidth=1, linestyle='dashed', color='green')
t, p = stats.f_oneway(spoti.loc[spoti.dur_cat == 'Short']['popu'], spoti.loc[spoti.dur_cat == 'Average']['popu'], spoti.loc[spoti.dur_cat == 'Long']['popu'])
n = len(spoti.dur)
props = dict(boxstyle='round', facecolor='white', lw=0.5)
textstr = '$\mathrm{Estadístico}=%.2f$\n$\mathrm{P.Val:}=%.4f$\n$\mathrm{n}=%.0f$'%(t, p, n)
ax.text(0.05, 0.05, textstr, transform=ax.transAxes, fontsize=14, bbox=props)
ax.set_xlabel('Durabilidad')
ax.set_ylabel('Popularidad')
ax.set_title('Tres categorías')
plt.subplot(1,2,2)
ax = sns.pointplot(x='dur_cat_d', y='popu', data=spoti, join=0, order=['Short', 'Long'])
ax.axhline(y=spoti['popu'].mean(), linewidth=1, linestyle='dashed', color='green')
t, p = stats.ttest_ind(spoti.loc[spoti.dur_cat_d == 'Short']['popu'], spoti.loc[spoti.dur_cat_d == 'Long']['popu'])
props = dict(boxstyle='round', facecolor='white', lw=0.5)
textstr = '$\mathrm{Estadístico}=%.2f$\n$\mathrm{P.Val:}=%.4f$\n$\mathrm{n}=%.0f$'%(t, p, n)
ax.text(0.05, 0.05, textstr, transform=ax.transAxes, fontsize=14, bbox=props)
ax.set_xlabel('Durabilidad')
ax.set_ylabel('Popularidad')
ax.set_title('Dos categorías')
plt.show()
sns.reset_orig()
###Output
_____no_output_____
###Markdown
Mini-conclusion With the results obtained from the statistical analyses, rejecting the null hypothesis of the second t-test confirms the existence of a highly significant, although fairly weak, relationship (as will be seen in the linear regression) between a song's duration and its popularity. Specifically, on average, shorter songs are more popular than longer ones. 9.Lyrics Description
###Code
print(spoti.lyrics.describe())
###Output
count 603.000000
mean 8.374793
std 7.475686
min 3.000000
25% 4.000000
50% 5.000000
75% 9.000000
max 48.000000
Name: lyrics, dtype: float64
###Markdown
Plot
###Code
lyr = spoti['lyrics']
plt.hist(lyr, bins=15, edgecolor='black')
plt.xlabel('Lyrics')
plt.xticks(np.arange(0, 48, step=5))
plt.ylabel('Frecuencia')
props = dict(boxstyle= 'round', facecolor='white', lw=0.5)
plt.text(40.5,200,'Media:8.37''\n''N:603' '\n' 'SD: 7.48', bbox=props)
plt.title('Figura 9.1: Número de canciones por speechness score''\n')
plt.axvline(8.37, linewidth=1, linestyle='solid', color = 'red')
plt.axvline(0.88, linewidth=1, linestyle= 'dashed', color= 'green')
plt.axvline(15.84, linewidth=1, linestyle= 'dashed', color= 'green')
plt.legend(labels=['Media', 'SD ± Media'])
plt.show()
###Output
_____no_output_____
###Markdown
Comparison with popu
###Code
lyr = spoti["lyrics"]
popularidad = spoti["popu"]
r, p_val = pearsonr(lyr, popularidad)
n = len(spoti["popu"])
print('r:', round(r,3), 'P.Val:', round(p_val,3), 'n:', n)
plt.figure(figsize=(5,5))
plt.scatter(lyr,popularidad, s=20, facecolors='none', edgecolors='C0')
plt.title('Figura 9.2: Popularidad de las canciones por sus lyrics')
plt.ylabel('Popularidad')
plt.xlabel('Lyrics')
props = dict(boxstyle='round', facecolor='white', lw=0.5)
textstr = '$\mathrm{r}=%.2f$\n$\mathrm{P.Val:}=%.3f$\n$\mathrm{n}=%.0f$'%(r, p_val, n)
plt.text (14,6, textstr , bbox=props)
plt.show()
###Output
_____no_output_____
###Markdown
Mini-conclusion Since the p-value is greater than our significance level (0.05), we fail to reject the null hypothesis. Therefore, the amount of lyrics in a song does not influence its popularity. Linear Regression
###Code
from statsmodels.formula.api import ols
model1 = ols('popu ~ year2 + dnce + dur + nrgy + bpm + val + genre + lyrics', data=spoti).fit()
print(model1.summary2())
spoti['nrgy_sqr'] = spoti.nrgy**2
model2 = ols('popu ~ year2 + dnce + nrgy_sqr', data=spoti).fit()
print(model2.summary2())
spoti['dnce_sqr'] = spoti.dnce**2
spoti['val_sqr'] = spoti.val**2
spoti['dur_sqr'] = spoti.dur**2
model3 = ols('popu ~ year2 + dnce + nrgy_sqr', data=spoti).fit()
print(model3.summary2())
spoti['bpm_sqr'] = spoti.bpm**2
model4 = ols('popu ~ year2 + nrgy_sqr + dur_sqr', data=spoti).fit()
print(model4.summary2())
spoti['nrgy_sqr3'] = spoti.nrgy**3
spoti['dnce_sqr3'] = spoti.dnce**3
spoti['val_sqr3'] = spoti.val**3
spoti['dur_sqr3'] = spoti.dur**3
spoti['bpm_sqr3'] = spoti.bpm**3
model5 = ols('popu ~ year2 + nrgy_sqr3 + dur_sqr3', data=spoti).fit()
print(model5.summary2())
model6 = ols('popu ~ year2 + nrgy_sqr3 + dur_sqr', data=spoti).fit()
print(model6.summary2())
from numpy import log
spoti['nrgy_log'] = log(spoti.nrgy)
spoti['dnce_log'] = log(spoti.dnce)
spoti['val_log'] = log(spoti.val)
spoti['dur_log'] = log(spoti.dur)
spoti['bpm_log'] = log(spoti.bpm)
model7 = ols('popu ~ year2 + nrgy_sqr3 + dur_sqr + dnce_log', data=spoti).fit()
print(model7.summary2())
###Output
Results: Ordinary least squares
==================================================================
Model: OLS Adj. R-squared: 0.048
Dependent Variable: popu AIC: 4891.6660
Date: 2021-12-24 12:31 BIC: 4913.6755
No. Observations: 603 Log-Likelihood: -2440.8
Df Model: 4 F-statistic: 8.631
Df Residuals: 598 Prob (F-statistic): 8.91e-07
R-squared: 0.055 Scale: 193.66
------------------------------------------------------------------
Coef. Std.Err. t P>|t| [0.025 0.975]
------------------------------------------------------------------
Intercept 51.1532 10.7944 4.7389 0.0000 29.9537 72.3528
year2[T.2015-2019] 3.7263 1.1940 3.1209 0.0019 1.3814 6.0713
nrgy_sqr3 -0.0000 0.0000 -2.9439 0.0034 -0.0000 -0.0000
dur_sqr -0.0001 0.0000 -2.0800 0.0379 -0.0001 -0.0000
dnce_log 4.9291 2.4944 1.9761 0.0486 0.0303 9.8279
------------------------------------------------------------------
Omnibus: 181.549 Durbin-Watson: 0.468
Prob(Omnibus): 0.000 Jarque-Bera (JB): 523.536
Skew: -1.469 Prob(JB): 0.000
Kurtosis: 6.494 Condition No.: 8958633
==================================================================
* The condition number is large (9e+06). This might indicate
strong multicollinearity or other numerical problems.
###Markdown
Model 8
###Code
model8 = ols('popu ~ year2 + nrgy_sqr3 + dur_sqr + dnce_log', data=spoti).fit()
print(model8.summary2())
spoti['val_inv'] = 1/spoti.val
spoti['bpm_inv'] = 1/spoti.bpm
spoti['dur_inv'] = 1/spoti.dur
model9 = ols('popu ~ year2 + nrgy_sqr3 + dur_sqr + dnce_log +val_inv', data=spoti).fit()
print(model9.summary2())
###Output
Results: Ordinary least squares
====================================================================
Model: OLS Adj. R-squared: 0.050
Dependent Variable: popu AIC: 4891.5436
Date: 2021-12-24 12:31 BIC: 4917.9551
No. Observations: 603 Log-Likelihood: -2439.8
Df Model: 5 F-statistic: 7.339
Df Residuals: 597 Prob (F-statistic): 1.09e-06
R-squared: 0.058 Scale: 193.30
--------------------------------------------------------------------
Coef. Std.Err. t P>|t| [0.025 0.975]
--------------------------------------------------------------------
Intercept 60.3798 12.5199 4.8227 0.0000 35.7915 84.9682
year2[T.2015-2019] 3.8223 1.1947 3.1993 0.0015 1.4759 6.1687
nrgy_sqr3 -0.0000 0.0000 -3.2296 0.0013 -0.0000 -0.0000
dur_sqr -0.0001 0.0000 -1.9135 0.0562 -0.0001 0.0000
dnce_log 2.9916 2.8273 1.0581 0.2904 -2.5611 8.5443
val_inv -40.6742 28.0350 -1.4508 0.1474 -95.7334 14.3849
--------------------------------------------------------------------
Omnibus: 183.606 Durbin-Watson: 0.474
Prob(Omnibus): 0.000 Jarque-Bera (JB): 538.838
Skew: -1.478 Prob(JB): 0.000
Kurtosis: 6.564 Condition No.: 23426082
====================================================================
* The condition number is large (2e+07). This might indicate
strong multicollinearity or other numerical problems.
|
implementations/pix2pix/pix2pix.ipynb | ###Markdown
Install
###Code
!git clone https://github.com/derwind/PyTorch-GAN.git
import os
os.chdir("PyTorch-GAN")
!pip install -r requirements.txt
###Output
_____no_output_____
###Markdown
Training
###Code
os.chdir("implementations/pix2pix")
os.makedirs("../../data/facades", exist_ok=True)
!curl http://efrosgans.eecs.berkeley.edu/pix2pix/datasets/facades.tar.gz -o ../../data/facades.tar.gz
!tar xf ../../data/facades.tar.gz -C ../../data/
!rm ../../data/facades.tar.gz
!python pix2pix.py
###Output
_____no_output_____ |
Gravedad.ipynb | ###Markdown
GRAVITY ASSIGNMENT
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
m = 1
x_0 = .5
x_0_dot = .1
t = np.linspace(0, 50, 300)
gravedad=np.array([9.81,2.78,8.87,3.72,22.88])
gravedad
plt.figure(figsize = (7, 4))
for indx, g in enumerate (gravedad):
omega_0 = np.sqrt(g/m)
x_t = x_0 *np.cos(omega_0 *t) + (x_0_dot/omega_0) * np.sin(omega_0 *t)
x_t_dot = -omega_0 * x_0 * np.sin(omega_0 * t) + x_0_dot * np.cos(omega_0 * t)
plt.plot(x_t, x_t_dot/omega_0, 'ro', ms = 2)
plt.legend(loc='best', bbox_to_anchor=(1.01, 0.5), prop={'size': 14})
plt.scatter (x_t , (x_t_dot/omega_0), cmap = "viridis", label = g)
plt.show()
###Output
C:\Users\MaríaEsther\Anaconda3\lib\site-packages\matplotlib\axes\_axes.py:545: UserWarning: No labelled objects found. Use label='...' kwarg on individual plots.
warnings.warn("No labelled objects found. "
|
notebooks/digit-classification-test.ipynb | ###Markdown
The Performance Of Models Trained On The MNIST Dataset On Custom-Drawn Images
###Code
import numpy as np
import tensorflow as tf
import sklearn, sklearn.linear_model, sklearn.multiclass, sklearn.naive_bayes
import matplotlib.pyplot as plt
import pandas as pd
plt.rcParams["figure.figsize"] = (10, 10)
plt.rcParams.update({'font.size': 12})
###Output
_____no_output_____
###Markdown
Defining the data
###Code
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
###Output
_____no_output_____
###Markdown
Making 1D versions of the MNIST images for the one-vs-rest classifier
###Code
train_images_flat = train_images.reshape((train_images.shape[0], train_images.shape[1] * train_images.shape[2])) / 255.0
test_images_flat = test_images.reshape((test_images.shape[0], test_images.shape[1] * test_images.shape[2])) / 255.0
###Output
_____no_output_____
###Markdown
Making a 4D dataset and categorical labels for the neural net
###Code
train_images = np.expand_dims(train_images, axis=-1) / 255.0
test_images = np.expand_dims(test_images, axis=-1) / 255.0
#train_images = train_images.reshape(60000, 28, 28, 1)
#test_images = test_images.reshape(10000, 28, 28, 1)
train_labels_cat = tf.keras.utils.to_categorical(train_labels)
test_labels_cat = tf.keras.utils.to_categorical(test_labels)
def plot_images(images, labels, rows=5, cols=5, label='Label'):
fig, axes = plt.subplots(rows, cols)
fig.figsize=(15, 15)
indices = np.random.choice(len(images), rows * cols)
counter = 0
for i in range(rows):
for j in range(cols):
axes[i, j].imshow(images[indices[counter]])
axes[i, j].set_title(f"{label}: {labels[indices[counter]]}")
axes[i, j].set_xticks([])
axes[i, j].set_yticks([])
counter += 1
plt.tight_layout()
plt.show()
plot_images(train_images, train_labels)
###Output
_____no_output_____
###Markdown
Training Defining and training the one-vs-rest classifier
###Code
log_reg = sklearn.linear_model.SGDClassifier(loss='log', max_iter=1000, penalty='l2')
classifier = sklearn.multiclass.OneVsRestClassifier(log_reg)
classifier.fit(train_images_flat, train_labels)
###Output
_____no_output_____
###Markdown
Defining and training the neural net
###Code
from tensorflow.keras import Sequential
from tensorflow.keras import layers
def create_model():
model = Sequential([
layers.Conv2D(64, 5, activation='relu', input_shape=(28, 28, 1)),
layers.MaxPool2D(2),
layers.Conv2D(128, 5, activation='relu'),
layers.MaxPool2D(2),
layers.GlobalAveragePooling2D(),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['accuracy'])
return model
model = create_model()
model.summary()
train_gen = tf.keras.preprocessing.image.ImageDataGenerator(zoom_range=0.3,
height_shift_range=0.10,
width_shift_range=0.10,
rotation_range=10)
train_datagen = train_gen.flow(train_images, train_labels_cat, batch_size=256)
'''def scheduler(epoch):
initial_lr = 0.001
lr = initial_lr * np.exp(-0.1 * epoch)
return lr
from tensorflow.keras.callbacks import LearningRateScheduler
lr_scheduler = LearningRateScheduler(scheduler, verbose=1)'''
history = model.fit(train_datagen, initial_epoch=0, epochs=30, batch_size=256,
validation_data=(test_images, test_labels_cat))
model.save('cnn-64-128-5-aug')
#model.load_weights('cnn-64-128-5-aug')
###Output
INFO:tensorflow:Assets written to: cnn-64-128-5-aug\assets
###Markdown
Assessing model performance Loading drawn images
###Code
def read_images(filepaths, reverse=False):
images = []
images_flat = []
for filepath in filepaths:
image = tf.io.read_file(filepath)
image = tf.image.decode_image(image, channels=1)
image = tf.image.resize(image, (28, 28))
if reverse:
image = np.where(image == 255, 0, 255)
else:
image = image.numpy()
image = image / 255.0
images.append(image)
images_flat.append(image.reshape(28 * 28))
return np.array(images), np.array(images_flat)
filepaths = tf.io.gfile.glob('images/*.png')
filepaths.sort(key=lambda x: int(x[12:-4]))
images, images_flat = read_images(filepaths, True)
images.shape
###Output
_____no_output_____
###Markdown
Creating labels for the one-vs-rest classifier and the neural net
###Code
labels = 100 * [0] + 98 * [1] + 100 * [2] + 101 * [3] + 99 * [4] + 111 * [5] + 89 * [6] + 110 * [7] + 93 * [8] + 112 * [9]
labels = np.array(labels)
labels.shape
labels_cat = tf.keras.utils.to_categorical(labels)
labels_cat.shape
labels_cat[0]
###Output
_____no_output_____
###Markdown
Plotting the drawn images and their corresponding labels
###Code
plot_images(images, labels)
###Output
_____no_output_____
###Markdown
Evaluating model performance Neural net on MNIST test dataset
###Code
model.evaluate(test_images, test_labels_cat)
from sklearn.metrics import classification_report, confusion_matrix
predictions = np.argmax(model.predict(test_images), axis=-1)
conf_mat = confusion_matrix(test_labels, predictions)
conf_mat
class_report = classification_report(test_labels, predictions, output_dict=True)
class_report
###Output
_____no_output_____
###Markdown
Neural net on drawn images
###Code
model.evaluate(images, labels_cat)
predictions = np.argmax(model.predict(images), axis=-1)
rows, cols = 5, 5
fig, axes = plt.subplots(rows, cols)
fig.figsize=(15, 15)
indices = np.random.choice(len(images), rows * cols)
counter = 0
for i in range(rows):
for j in range(cols):
axes[i, j].imshow(images[indices[counter]])
axes[i, j].set_title(f"Prediction: {predictions[indices[counter]]}\n"
f"True label: {labels[indices[counter]]}")
axes[i, j].set_xticks([])
axes[i, j].set_yticks([])
counter += 1
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Plotting wrong predictions
###Code
wrong_predictions = list(filter(lambda x: x[1][0] != x[1][1], list(enumerate(zip(predictions, labels)))))
len(wrong_predictions)
cols, rows = 5, 5
fig, axes = plt.subplots(rows, cols)
fig.figsize=(15, 15)
counter = 0
for i in range(rows):
for j in range(cols):
axes[i, j].imshow(images[wrong_predictions[counter][0]])
axes[i, j].set_title(f"Prediction: {wrong_predictions[counter][1][0]}\n"
f"True label: {wrong_predictions[counter][1][1]}")
axes[i, j].set_xticks([])
axes[i, j].set_yticks([])
counter += 1
plt.tight_layout()
plt.show()
from sklearn.metrics import classification_report, confusion_matrix
conf_mat = confusion_matrix(labels, predictions)
conf_mat
class_report = classification_report(labels, predictions, output_dict=True)
class_report
###Output
_____no_output_____
###Markdown
One-vs-rest classifier on MNIST test dataset
###Code
classifier.score(test_images_flat, test_labels)
predictions = classifier.predict(test_images_flat)
conf_mat = confusion_matrix(test_labels, predictions)
conf_mat
class_report = classification_report(test_labels, predictions, output_dict=True)
class_report
###Output
_____no_output_____
###Markdown
One-vs-rest classifier on drawn images
###Code
classifier.score(images_flat, labels)
predictions = classifier.predict(images_flat)
conf_mat = confusion_matrix(labels, predictions)
conf_mat
class_report = classification_report(labels, predictions, output_dict=True)
class_report
###Output
_____no_output_____ |
notebooks/10b-anlyz_run02-synthetic_lethal_classes-feat1.ipynb | ###Markdown
Breakdown of lethality
###Code
import pickle
import pandas as pd
import os
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
from ast import literal_eval
###Output
_____no_output_____
###Markdown
Read in files
###Code
dir_in_res = '../out/20.0216 feat/reg_rf_boruta'
dir_in_anlyz = os.path.join(dir_in_res, 'anlyz_filtered')
df_featSummary = pd.read_csv(os.path.join(dir_in_anlyz, 'feat_summary.csv')) #feature summary
df_featSummary['feat_sources'] = df_featSummary['feat_sources'].apply(literal_eval)
df_featSummary['feat_genes'] = df_featSummary['feat_genes'].apply(literal_eval)
feat_summary_annot_gene = pd.read_csv(os.path.join(dir_in_anlyz, 'onsamegene', 'feat_summary_annot.csv'), header=0, index_col=0)
gs_name = 'paralog'
feat_summary_annot_paralog = pd.read_csv(os.path.join(dir_in_anlyz, f'insame{gs_name}', 'feat_summary_annot.csv'), header=0, index_col=0)
gs_name = 'Panther'
feat_summary_annot_panther = pd.read_csv(os.path.join(dir_in_anlyz, f'insamegeneset{gs_name}', 'feat_summary_annot.csv'), header=0, index_col=0)
###Output
_____no_output_____
###Markdown
Breakdown - basic - top most important feature
###Code
df_counts = df_featSummary.groupby('feat_source1')['feat_source1'].count()
df_counts = df_counts.to_dict()
df_sl = pd.DataFrame([{'new_syn_lethal':df_counts['CERES'],
'classic_syn_lethal': sum([df_counts[k] for k in ['CN','Mut','RNA-seq']]) }])
df_sl = df_sl.T.squeeze()
df_sl
###Output
_____no_output_____
###Markdown
Breakdown of lethality, top most important feature
###Code
df_src1 = df_featSummary[['target','feat_source1']].set_index('target')
df = pd.DataFrame({'isNotCERES': df_src1.feat_source1.isin(['RNA-seq', 'CN', 'Mut']),
'sameGene': feat_summary_annot_gene.inSame_1,
'sameParalog': feat_summary_annot_paralog.inSame_1,
'sameGS': feat_summary_annot_panther.inSame_1,
'isCERES': df_src1.feat_source1 == 'CERES'
})
lethal_dict = {'sameGene': 'Same gene',
'sameParalog': 'Paralog',
'sameGS': 'Gene set',
'isCERES': 'Functional',
'isNotCERES': 'Classic synthetic'}
df_counts = pd.DataFrame({'sum':df.sum(axis=0)})
df_counts['lethality'] = [lethal_dict[n] for n in df_counts.index]
df_counts
plt.figure()
ax = sns.barplot(df_counts['lethality'], df_counts['sum'], color='steelblue')
ax.set(xlabel='Lethality types', ylabel='Number of genes')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Breakdown of lethality, top 10 most important feature
###Code
df_src = df_featSummary.set_index('target').feat_sources
df = pd.DataFrame({'hasNoCERES': df_src.apply(lambda x: any([n in x for n in ['CN','Mut','RNA-seq','Lineage']])),
'sameGene': feat_summary_annot_gene.inSame_top10,
'sameParalog': feat_summary_annot_paralog.inSame_top10,
'sameGS': feat_summary_annot_panther.inSame_top10,
'hasCERES': df_src.apply(lambda x: 'CERES' in x)
})
lethal_dict = {'sameGene': 'Same gene',
'sameParalog': 'Paralog',
'sameGS': 'Gene set',
'hasCERES': 'Functional',
'hasNoCERES': 'Classic synthetic'}
df_counts = pd.DataFrame({'sum':df.sum(axis=0)})
df_counts['lethality'] = [lethal_dict[n] for n in df_counts.index]
df_counts
plt.figure()
ax = sns.barplot(df_counts['lethality'], df_counts['sum'], color='steelblue')
ax.set(xlabel='Lethality types', ylabel='Number of genes', ylim=[0,500])
plt.tight_layout()
###Output
_____no_output_____ |
02_Traffic_info_test_2_hidden_12_pool_multi_location.ipynb | ###Markdown
Speed Test
###Code
times = []
valid_mask_t = torch.from_numpy(np.ones([1,80,80,1]).astype(np.float32)).to(DEVICE)
for d_i in range(10):
_target = torch.from_numpy(d_trains[d_i].astype(np.float32)).to(DEVICE)
calibration_map = make_circle_masks(_target.size(0), map_size[0], map_size[1],
rmin=0.5, rmax=0.5)[..., None]
calibration_map = torch.from_numpy(calibration_map.astype(np.float32)).to(DEVICE)
x0 = np.repeat(seed[None, ...], _target.size(0), 0)*0
x0 = torch.from_numpy(x0.astype(np.float32)).to(DEVICE)
start_time = time.time()
x, history = test(x0, _target, valid_mask_t, calibration_map, N_STEPS)
times.append((time.time()-start_time)/_target.size(0))
print(times[-1])
print("---------")
print(np.mean(times))
###Output
0.03891327977180481
0.04005876183509827
0.04132132604718208
0.04402513429522514
0.04346586391329765
0.04147135838866234
0.03921307995915413
0.038483794778585434
0.04098214581608772
0.044003926217556
---------
0.04119386710226536
|
PyCitySchools/PyCitySchools_1.ipynb | ###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
school_data_to_load = "Resources/schools_complete.csv"
student_data_to_load = "Resources/students_complete.csv"
# Read School and Student Data File and store into Pandas DataFrames
school_data = pd.read_csv(school_data_to_load)
student_data = pd.read_csv(student_data_to_load)
# Combine the data into a single dataset.
school_data_complete = pd.merge(student_data, school_data, how="left", on="school_name")
school_data_complete.head()
###Output
_____no_output_____
###Markdown
District Summary* Calculate the total number of schools* Calculate the total number of students* Calculate the total budget* Calculate the average math score * Calculate the average reading score* Calculate the percentage of students with a passing math score (70 or greater)* Calculate the percentage of students with a passing reading score (70 or greater)* Calculate the percentage of students who passed math **and** reading (% Overall Passing)* Create a dataframe to hold the above results* Optional: give the displayed data cleaner formatting
###Code
number_of_schools = len(school_data["School ID"].unique())
number_of_schools
number_of_students = len(student_data["Student ID"].unique())
number_of_students
total_budget = school_data["budget"].sum()
total_budget
avg_math_score = student_data["math_score"].mean()
avg_math_score
avg_read_score = student_data["reading_score"].mean()
avg_read_score
passing_math = (student_data["math_score"] >= 70)
math_passers = student_data.loc[passing_math]
number_math_passers = len(math_passers)
pct_pass_math = number_math_passers * 100 / number_of_students
pct_pass_math
passing_read = (student_data["reading_score"] >= 70)
read_passers = student_data.loc[passing_read]
number_read_passers = len(read_passers)
pct_pass_read = number_read_passers * 100 / number_of_students
pct_pass_read
pass_math_read = passing_math & passing_read
math_read_passers = student_data.loc[pass_math_read]
number_math_read_passers = len(math_read_passers)
pct_pass_read_math = number_math_read_passers * 100 / number_of_students
pct_pass_read_math
district_summary_df = pd.DataFrame(
[
{"number of schools": number_of_schools,
"number of students": number_of_students,
"total budget": total_budget,
"average math score": avg_math_score,
"average reading score": avg_read_score,
"% passing math score": pct_pass_math,
"% passing reading score": pct_pass_read,
"% passing math and reading score": pct_pass_read_math
}
]
)
district_summary_df
# Format final district summary
district_summary_df["number of students"] = district_summary_df["number of students"].map("{:,}".format)
district_summary_df["total budget"] = district_summary_df["total budget"].map("${:,}".format)
district_summary_df["average math score"] = district_summary_df["average math score"].map("{:.1f}".format)
district_summary_df["average reading score"] = district_summary_df["average reading score"].map("{:.1f}".format)
district_summary_df["% passing math score"] = district_summary_df["% passing math score"].map("{:.1f}%".format)
district_summary_df["% passing reading score"] = district_summary_df["% passing reading score"].map("{:.1f}%".format)
district_summary_df["% passing math and reading score"] = district_summary_df["% passing math and reading score"].map("{:.1f}%".format)
district_summary_df
###Output
_____no_output_____
###Markdown
School Summary * Create an overview table that summarizes key metrics about each school, including: * School Name * School Type * Total Students * Total School Budget * Per Student Budget * Average Math Score * Average Reading Score * % Passing Math * % Passing Reading * % Overall Passing (The percentage of students that passed math **and** reading.) * Create a dataframe to hold the above results
###Code
# Strategy: school_data already has the first few columns. Format school_data, then calculate
# additional series columns separately, then add each series column to the formatted dataframe
# start with formatting school_data. Important to set index for future merges
school_summary = (school_data.set_index("school_name")
.sort_values("school_name")
.rename(columns = {
"type": "School Type",
"size": "Total Students",
"budget": "Total School Budget"
}
)
)
# Calculate Per Student Budget series, append to school_summary
school_summary["Per Student Budget"] = school_summary["Total School Budget"] / school_summary["Total Students"]
school_summary.head(5)
# Group and compute average math and reading scores from student_data
school_score_mean = (student_data.groupby(by="school_name")
.mean()
)
school_score_mean.head(5)
# Append average math score and average reading score to school_summary
school_summary["Average Math Score"] = school_score_mean["math_score"]
school_summary["Average Reading Score"] = school_score_mean["reading_score"]
school_summary.head(5)
# Get number of students passing math by school. Set index.
math_pass_by_school = (math_passers.set_index("school_name")
.rename(columns={"Student ID": "Number Students Pass Math"})
.groupby(by="school_name")
.count()
)
math_pass_by_school.head(5)
# Get number of students passing reading by school. Set index.
read_pass_by_school = (read_passers.set_index("school_name")
.rename(columns={"Student ID": "Number Students Pass Read"})
.groupby(by="school_name")
.count()
)
read_pass_by_school.head(5)
# Get number of students passing math and reading by school. Set index.
math_read_pass_by_school = (math_read_passers.set_index("school_name")
.rename(columns={"Student ID": "Number Students Pass Math and Read"})
.groupby(by="school_name")
.count()
)
math_read_pass_by_school.head(5)
# Divide number of students passing by number of students per school, then append columns
# to school_summary dataframe
school_summary["% Passing Math"] = math_pass_by_school["Number Students Pass Math"] / school_summary["Total Students"] * 100
school_summary["% Passing Reading"] = read_pass_by_school["Number Students Pass Read"] / school_summary["Total Students"] * 100
school_summary["% Overall Passing"] = math_read_pass_by_school["Number Students Pass Math and Read"] / school_summary["Total Students"] * 100
school_summary.head()
# Make an unformatted copy for to use in 'Scores by School Spending' later on
school_summary_unformatted = school_summary.copy()
# Add formatting to school_summary. This turns some float columns into strings
school_summary["Total School Budget"] = school_summary["Total School Budget"].map("${:,.2f}".format)
school_summary["Per Student Budget"] = school_summary["Per Student Budget"].map("${:,.2f}".format)
school_summary["Average Math Score"] = school_summary["Average Math Score"].map("{:.2f}".format)
school_summary["Average Reading Score"] = school_summary["Average Reading Score"].map("{:.2f}".format)
school_summary["% Passing Math"] = school_summary["% Passing Math"].map("{:.2f}%".format)
school_summary["% Passing Reading"] = school_summary["% Passing Reading"].map("{:.2f}%".format)
school_summary["% Overall Passing"] = school_summary["% Overall Passing"].map("{:.2f}%".format)
school_summary
###Output
_____no_output_____
###Markdown
Top Performing Schools (By % Overall Passing) * Sort and display the top five performing schools by % overall passing.
###Code
(school_summary.sort_values("% Overall Passing", ascending=False)
.head(5)
)
###Output
_____no_output_____
###Markdown
Bottom Performing Schools (By % Overall Passing) * Sort and display the five worst-performing schools by % overall passing.
###Code
(school_summary.sort_values("% Overall Passing", ascending=True)
.head(5)
)
###Output
_____no_output_____
###Markdown
Math Scores by Grade * Create a table that lists the average Reading Score for students of each grade level (9th, 10th, 11th, 12th) at each school. * Create a pandas series for each grade. Hint: use a conditional statement. * Group each series by school * Combine the series into a dataframe * Optional: give the displayed data cleaner formatting
###Code
# Index student_data and get only relevant columns
score_by_grade = student_data[["school_name", "grade", "math_score"]].set_index("school_name")
# Create initial math_by_school dataframe, then create additional series and append them to
# the dataframe
math_by_school = (score_by_grade.loc[score_by_grade["grade"] == "9th"]
.groupby(by="school_name")
.mean()
.rename(columns={"math_score": "9th"})
)
math_by_school["10th"] = (score_by_grade.loc[score_by_grade["grade"] == "10th"]
.groupby(by="school_name")
.mean()
)
math_by_school["11th"] = (score_by_grade.loc[score_by_grade["grade"] == "11th"]
.groupby(by="school_name")
.mean()
)
math_by_school["12th"] = (score_by_grade.loc[score_by_grade["grade"] == "12th"]
.groupby(by="school_name")
.mean()
)
math_by_school
###Output
_____no_output_____
###Markdown
Reading Score by Grade * Perform the same operations as above for reading scores
###Code
score_by_grade = student_data[["school_name", "grade", "reading_score"]].set_index("school_name")
# Create initial read_by_school dataframe, then create additional series and append them to
# the dataframe
read_by_school = (score_by_grade.loc[score_by_grade["grade"] == "9th"]
.groupby(by="school_name")
.mean()
.rename(columns={"reading_score": "9th"})
)
read_by_school["10th"] = (score_by_grade.loc[score_by_grade["grade"] == "10th"]
.groupby(by="school_name")
.mean()
)
read_by_school["11th"] = (score_by_grade.loc[score_by_grade["grade"] == "11th"]
.groupby(by="school_name")
.mean()
)
read_by_school["12th"] = (score_by_grade.loc[score_by_grade["grade"] == "12th"]
.groupby(by="school_name")
.mean()
)
read_by_school
###Output
_____no_output_____
###Markdown
Scores by School Spending * Create a table that breaks down school performances based on average Spending Ranges (Per Student). Use 4 reasonable bins to group school spending. Include in the table each of the following: * Average Math Score * Average Reading Score * % Passing Math * % Passing Reading * Overall Passing Rate (Average of the above two)
###Code
# Use school_summary_unformatted dataframe that still has numeric columns as float
# Define the cut parameters
series_to_cut = school_summary_unformatted["Per Student Budget"]
bins_to_fill = [0, 584.9, 629.9, 644.9, 675.9]
bin_labels = ["<$584", "$585-629", "$630-644", "$645-675"]
# New column with the bin definition into school_summary_unformatted
school_summary_unformatted["Spending Ranges (per student)"] = pd.cut(x=series_to_cut, bins=bins_to_fill, labels=bin_labels)
# Exclude unneeded columns, group by the bin series and take the average of the scores
scores_by_spending = (school_summary_unformatted.groupby(by="Spending Ranges (per student)")
.mean()
)
scores_by_spending_final = scores_by_spending[["Average Math Score",
"Average Reading Score",
"% Passing Math",
"% Passing Reading",
"% Overall Passing"]]
scores_by_spending_final
###Output
_____no_output_____
###Markdown
Scores by School Size * Perform the same operations as above, based on school size.
###Code
# Use school_summary_unformatted dataframe that still has numeric columns as float
# Define the cut parameters
series_to_cut = school_summary_unformatted["Total Students"]
bins_to_fill = [0, 1799.9, 2999.9, 4999.9]
bin_labels = ["Small (< 1800)", "Medium (1800-2999)", "Large (3000-5000)"]
# New column with the bin definition into school_summary_unformatted
school_summary_unformatted["School Size"] = pd.cut(x=series_to_cut, bins=bins_to_fill, labels=bin_labels)
# Exclude unneeded columns, group by the bin series and take the average of the scores
scores_by_school_size = (school_summary_unformatted.groupby(by="School Size")
.mean()
)
scores_by_school_size_final = scores_by_school_size[["Average Math Score",
"Average Reading Score",
"% Passing Math",
"% Passing Reading",
"% Overall Passing"]]
scores_by_school_size_final
###Output
_____no_output_____
###Markdown
Scores by School Type * Perform the same operations as above, based on school type
###Code
# No cut action needed since 'School Type' is not numeric. Can be grouped as is.
# Exclude unneeded columns, group by School Type and take the average of the scores
scores_by_school_type = (school_summary_unformatted.groupby(by="School Type")
.mean()
)
scores_by_school_type_final = scores_by_school_type[["Average Math Score",
"Average Reading Score",
"% Passing Math",
"% Passing Reading",
"% Overall Passing"]]
scores_by_school_type_final
###Output
_____no_output_____ |
.ipynb_checkpoints/IPL 2008 - 2018 Analysis-checkpoint.ipynb | ###Markdown
Plotting the results of Man Of the Match Award in IPL 2008 - 2018
###Code
player_names = list(player_of_match.keys())
number_of_times = list(player_of_match.values())
# Plotting the Graph
plt.bar(range(len(player_of_match)), number_of_times)
plt.title('Man Of the Match Award')
plt.show()
###Output
_____no_output_____
###Markdown
Number Of Wins Of Each Team
###Code
teamWinCounts = dict()
for team in matches_dataset['winner']:
    if team == None:  # note: NaN winners slip past this check, hence the 'nan' entry in the output below
continue
else:
teamWinCounts[team] = teamWinCounts.get(team,0) + 1
for teamName, Count in teamWinCounts.items():
print(teamName,':',Count)
###Output
Sunrisers Hyderabad : 52
Rising Pune Supergiant : 10
Kolkata Knight Riders : 86
Kings XI Punjab : 76
Royal Challengers Bangalore : 79
Mumbai Indians : 98
Delhi Daredevils : 67
Gujarat Lions : 13
Chennai Super Kings : 90
Rajasthan Royals : 70
Deccan Chargers : 29
Pune Warriors : 12
Kochi Tuskers Kerala : 6
nan : 3
Rising Pune Supergiants : 5
###Markdown
Plotting the Results Of Team Winning
###Code
numberOfWins = teamWinCounts.values()
teamName = teamWinCounts.keys()
plt.bar(range(len(teamWinCounts)), numberOfWins)
plt.xticks(range(len(teamWinCounts)), list(teamWinCounts.keys()), rotation='vertical')
plt.xlabel('Team Names')
plt.ylabel('Number Of Win Matches')
plt.title('Analysis Of Number Of Matches win by Each Team From 2008 - 2018', color="Orange")
plt.show()
###Output
_____no_output_____
###Markdown
Total Matches Played by Each team From 2008 - 2018
###Code
totalMatchesCount = dict()
# For Team1
for team in matches_dataset['team1']:
totalMatchesCount[team] = totalMatchesCount.get(team, 0) + 1
# For Team2
for team in matches_dataset['team2']:
totalMatchesCount[team] = totalMatchesCount.get(team, 0) + 1
# Printing the total matches played by each team
for teamName, count in totalMatchesCount.items():
print('{} : {}'.format(teamName,count))
###Output
Sunrisers Hyderabad : 93
Mumbai Indians : 171
Gujarat Lions : 30
Rising Pune Supergiant : 16
Royal Challengers Bangalore : 166
Kolkata Knight Riders : 164
Delhi Daredevils : 161
Kings XI Punjab : 162
Chennai Super Kings : 147
Rajasthan Royals : 133
Deccan Chargers : 75
Kochi Tuskers Kerala : 14
Pune Warriors : 46
Rising Pune Supergiants : 14
###Markdown
Plotting the Total Matches Played by Each Team
###Code
teamNames = totalMatchesCount.keys()
teamCount = totalMatchesCount.values()
plt.bar(range(len(totalMatchesCount)), teamCount)
plt.xticks(range(len(totalMatchesCount)), list(teamNames), rotation='vertical')
plt.xlabel('Team Names')
plt.ylabel('Number Of Played Matches')
plt.title('Total Number Of Matches Played By Each Team From 2008 - 2018')
plt.show()
###Output
_____no_output_____ |
Wi20_content/SEDS/L5.Procedural_Python.ipynb | ###Markdown
Procedural programming in python Topics* Tuples, lists and dictionaries* Flow control, part 1 * If * For * range() function* Some hacky hack time* Flow control, part 2 * Functions TuplesLet's begin by creating a tuple called `my_tuple` that contains three elements.
###Code
my_tuple = ('I', 'like', 'cake')
my_tuple
###Output
_____no_output_____
###Markdown
Tuples are simple containers for data. They are ordered, meaining the order the elements are in when the tuple is created are preserved. We can get values from our tuple by using array indexing, similar to what we were doing with pandas.
###Code
my_tuple[0]
###Output
_____no_output_____
###Markdown
Recall that Python indexes start at 0. So the first element in a tuple is 0 and the last is array length - 1. You can also address from the `end` to the `front` by using negative (`-`) indexes, e.g.
###Code
my_tuple[-1]
###Output
_____no_output_____
###Markdown
You can also access a range of elements, e.g. the first two, the first three, by using the `:` to expand a range. This is called ``slicing``.
###Code
my_tuple[0:2]
my_tuple[0:3]
###Output
_____no_output_____
###Markdown
What do you notice about how the upper bound is referenced? Without either end, the ``:`` expands to the entire list.
###Code
my_tuple[1:]
my_tuple[:-1]
my_tuple[:]
###Output
_____no_output_____
###Markdown
Tuples have a key feature that distinguishes them from other types of object containers in Python. They are _immutable_. This means that once the values are set, they cannot change.
###Code
my_tuple[2]
###Output
_____no_output_____
###Markdown
So what happens if I decide that I really prefer pie over cake?
###Code
#my_tuple[2] = 'pie'
###Output
_____no_output_____
###Markdown
Facts about tuples:* You can't add elements to a tuple. Tuples have no append or extend method.* You can't remove elements from a tuple. Tuples have no remove or pop method.* You can also use the in operator to check if an element exists in the tuple.So then, what are the use cases of tuples? * Speed* `Write-protects` data that other pieces of code should not alter You can alter the value of a tuple variable, e.g. change the tuple it holds, but you can't modify it.
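A quick sketch of that immutability in action (wrapped in `try`/`except` so the cell still runs cleanly):
```
try:
    my_tuple.append('and pie')   # tuples have no append method
except AttributeError as err:
    print('As expected:', err)
```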
###Code
my_tuple
my_tuple = ('I', 'love', 'pie')
my_tuple
###Output
_____no_output_____
###Markdown
There is a really handy operator ``in`` that can be used with tuples that will return `True` if an element is present in a tuple and `False` otherwise.
###Code
'love' in my_tuple
###Output
_____no_output_____
###Markdown
Finally, tuples can contain different types of data, not just strings.
###Code
import math
my_second_tuple = (42, 'Elephants', 'ate', math.pi)
my_second_tuple
###Output
_____no_output_____
###Markdown
Numerical operators work... Sort of. What happens when you add? ``my_second_tuple + 'plus'`` Not what you expected? What about adding two tuples?
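Here is a minimal sketch of the string case first; it raises a `TypeError`, because tuples only concatenate with other tuples:
```
try:
    my_second_tuple + 'plus'
except TypeError as err:
    print('Nope:', err)
```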
###Code
my_second_tuple + my_tuple
###Output
_____no_output_____
###Markdown
Other operators behave differently: `*` repeats a tuple, while `-` and `/` raise a `TypeError`. Questions about tuples before we move on? ListsLet's begin by creating a list called `my_list` that contains three elements.
###Code
my_list = ['I', 'like', 'cake']
my_list
###Output
_____no_output_____
###Markdown
At first glance, tuples and lists look pretty similar. Notice the lists use '[' and ']' instead of '(' and ')'. But indexing and referring to the first entry as 0 and the last as -1 still works the same.
###Code
my_list[0]
my_list[-1]
my_list[0:3]
###Output
_____no_output_____
###Markdown
Lists, however, unlike tuples, are mutable.
###Code
my_list[2] = 'pie'
my_list
###Output
_____no_output_____
###Markdown
Multiple elements in the list can even be changed at once!
###Code
my_list[1:] = ['love', 'puppies']
my_list
###Output
_____no_output_____
###Markdown
You can still use the `in` operator.
###Code
'puppies' in my_list
'kittens' in my_list
###Output
_____no_output_____
###Markdown
So when to use a tuple and when to use a list?* Use a list when you will modify it after it is created?Ways to modify a list? You have already seen by index. Let's start with an empty list.
###Code
my_new_list = []
my_new_list
###Output
_____no_output_____
###Markdown
We can add to the list using the append method on it.
###Code
my_new_list.append('Now')
my_new_list
###Output
_____no_output_____
###Markdown
We can use the `+` operator to create a longer list by adding the contents of two lists together.
###Code
my_new_list + my_list
###Output
_____no_output_____
###Markdown
One of the useful things to know about a list is how many elements are in it. This can be found with the `len` function.
###Code
len(my_list)
###Output
_____no_output_____
###Markdown
Some other handy functions with lists:* max* min* len (Python 2's `cmp` was removed in Python 3; use comparison operators or `sorted` instead.) Sometimes you have a tuple and you need to make it a list. You can `cast` the tuple to a list with ``list(my_tuple)``
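For example, a small sketch combining the cast with a couple of these helpers (a throwaway numeric list is used so `max`/`min` have comparable items):
```
as_list = list(my_second_tuple)
print(len(as_list))                   # 4
print(max([3, 1, 2]), min([3, 1, 2])) # 3 1
```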
###Code
list(my_tuple)
###Output
_____no_output_____
###Markdown
What in the above told us it was a list? You can also use the ``type`` function to figure out the type.
###Code
type(tuple)
type(list(my_tuple))
###Output
_____no_output_____
###Markdown
There are other useful methods on lists, including:| methods | description ||---|---|| list.append(obj) | Appends object obj to list || list.count(obj)| Returns count of how many times obj occurs in list || list.extend(seq) | Appends the contents of seq to list || list.index(obj) | Returns the lowest index in list that obj appears || list.insert(index, obj) | Inserts object obj into list at offset index || list.pop(obj=list[-1]) | Removes and returns last object or obj from list || list.remove(obj) | Removes object obj from list || list.reverse() | Reverses objects of list in place || list.sort([func]) | Sort objects of list, use compare func, if given |Try some of them now.```my_list.count('I')my_listmy_list.append('I')my_listmy_list.count('I')my_listmy_list.index(42)my_list.index('puppies')my_listmy_list.insert(my_list.index('puppies'), 'furry')my_list```
###Code
my_list.count('I')
my_list
my_list.append('I')
my_list
my_list.count('I')
my_list
#my_list.index(42)
my_list.index('puppies')
my_list
my_list.insert(my_list.index('puppies'), 'furry')
my_list
my_list.pop()
my_list
my_list.remove('puppies')
my_list
my_list.append('cabbages')
my_list
###Output
_____no_output_____
###Markdown
Any questions about lists before we move on? DictionariesDictionaries are similar to tuples and lists in that they hold a collection of objects. Dictionaries, however, allow an additional indexing mode: keys. Think of a real dictionary where the elements in it are the definitions of the words and the keys to retrieve the entries are the words themselves.| word | definition ||------|------------|| tuple | An immutable collection of ordered objects || list | A mutable collection of ordered objects || dictionary | A mutable collection of named objects |Let's create this data structure now. Dictionaries, like tuples and elements use a unique referencing method, '{' and its evil twin '}'.
###Code
my_dict = { 'tuple' : 'An immutable collection of ordered objects',
'list' : 'A mutable collection of ordered objects',
'dictionary' : 'A mutable collection of objects' }
my_dict
###Output
_____no_output_____
###Markdown
We access items in the dictionary by name, e.g.
###Code
my_dict['dictionary']
###Output
_____no_output_____
###Markdown
Since the dictionary is mutable, you can change the entries.
###Code
my_dict['dictionary'] = 'A mutable collection of named objects'
my_dict
###Output
_____no_output_____
###Markdown
Notice that the ordering you see is insertion order, not alphabetical or sorted! As of Python 3.7 this insertion ordering is guaranteed, but that does not mean alphabetical or otherwise sorted.And we can add new items to the dictionary.
###Code
my_dict['cabbage'] = 'Green leafy plant in the Brassica family'
my_dict
###Output
_____no_output_____
###Markdown
To delete an entry, we can't just set it to ``None``
###Code
my_dict['cabbage'] = None
my_dict
###Output
_____no_output_____
###Markdown
To delete it properly, we need to pop that specific entry.
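As an aside, the `del` statement removes a key as well; a minimal sketch on a throwaway copy so the `pop` demo below is unaffected:
```
tmp = dict(my_dict)
del tmp['cabbage']        # raises KeyError if the key is missing
print('cabbage' in tmp)   # False
```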
###Code
my_dict.pop('cabbage', None)
my_dict
###Output
_____no_output_____
###Markdown
You can use other objects as names, but that is a topic for another time. You can mix and match key types, e.g.
###Code
my_new_dict = {}
my_new_dict[1] = 'One'
my_new_dict['42'] = 42
my_new_dict
###Output
_____no_output_____
###Markdown
You can get a list of keys in the dictionary by using the ``keys`` method.
###Code
my_dict.keys()
###Output
_____no_output_____
###Markdown
Similarly the contents of the dictionary with the ``items`` method.
###Code
my_dict.items()
###Output
_____no_output_____
###Markdown
We can use the keys list for fun stuff, e.g. with the ``in`` operator.
###Code
'dictionary' in my_dict.keys()
###Output
_____no_output_____
###Markdown
This is a synonym for `in my_dict`
###Code
'dictionary' in my_dict
###Output
_____no_output_____
###Markdown
Notice, it doesn't work for values (the `in` check on a dictionary looks at keys, not values).
###Code
'A mutable collection of ordered objects' in my_dict
###Output
_____no_output_____
###Markdown
Other dictionary methods:| methods | description ||---|---|| dict.clear() | Removes all elements from dict || dict.get(key, default=None) | For ``key`` key, returns value or ``default`` if key doesn't exist in dict | | dict.items() | Returns a list of dicts (key, value) tuple pairs | | dict.keys() | Returns a list of dictionary keys || dict.setdefault(key, default=None) | Similar to get, but set the value of key if it doesn't exist in dict || dict.update(dict2) | Add the key / value pairs in dict2 to dict || dict.values | Returns a list of dictionary values|Feel free to experiment... Flow controlFlow control figureFlow control refers how to programs do loops, conditional execution, and order of functional operations. Let's start with conditionals, or the venerable ``if`` statement.Let's start with a simple list of instructors for these classes.
###Code
instructors = ['Dave', 'Jim', 'Dorkus the Clown']
instructors
###Output
_____no_output_____
###Markdown
IfIf statements can be used to execute some lines or block of code if a particular condition is satisfied. E.g. Let's print something based on the entries in the list.
###Code
if 'Dorkus the Clown' in instructors:
print('#fakeinstructor')
###Output
_____no_output_____
###Markdown
Usually we want conditional logic on both sides of a binary condition, e.g. some action when ``True`` and some when ``False``
###Code
if 'Dorkus the Clown' in instructors:
print('There are fake names for class instructors in your list!')
else:
print("Nothing to see here")
###Output
_____no_output_____
###Markdown
There is a special do nothing word: `pass` that skips over some arm of a conditional, e.g.
###Code
if 'Jim' in instructors:
print("Congratulations! Jim is teaching, your class won't stink!")
else:
pass
###Output
_____no_output_____
###Markdown
_Note_: what have you noticed in this session about quotes? What is the difference between ``'`` and ``"``?Another simple example:
###Code
if True is False:
print("I'm so confused")
else:
print("Everything is right with the world")
###Output
_____no_output_____
###Markdown
It is always good practice to handle all cases explicitly. `Conditional fall through` is a common source of bugs.Sometimes we wish to test multiple conditions. Use `if`, `elif`, and `else`.
###Code
my_favorite = 'pie'
if my_favorite == 'cake':  # use == (value equality) rather than `is` (identity) when comparing strings
print("He likes cake! I'll start making a double chocolate velvet cake right now!")
elif my_favorite == 'pie':
print("He likes pie! I'll start making a cherry pie right now!")
else:
print("He likes " + my_favorite + ". I don't know how to make that.")
###Output
_____no_output_____
###Markdown
Conditionals can take ``and`` and ``or`` and ``not``. E.g.
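A small sketch of `and` and `not` in the same style:
```
my_favorite = 'pie'
if my_favorite == 'pie' and not my_favorite == 'cake':
    print("Definitely pie, and definitely not cake.")
```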
###Code
my_favorite = 'pie'
if my_favorite == 'cake' or my_favorite == 'pie':
print(my_favorite + " : I have a recipe for that!")
else:
print("Ew! Who eats that?")
###Output
_____no_output_____
###Markdown
ForFor loops are the standard loop, though `while` is also common. For has the general form:```for items in list: do stuff```For loops and collections like tuples, lists and dictionaries are natural friends.
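Since `while` is mentioned but not demonstrated elsewhere in this notebook, here is a minimal sketch of the same counting idea written with `while`:
```
i = 0
while i < 3:
    print(i)
    i += 1
```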
###Code
for instructor in instructors:
print(instructor)
###Output
_____no_output_____
###Markdown
You can combine loops and conditionals:
###Code
for instructor in instructors:
if instructor.endswith('Clown'):
print(instructor + " doesn't sound like a real instructor name!")
else:
print(instructor + " is so smart... all those gooey brains!")
###Output
_____no_output_____
###Markdown
Dictionaries can use the `keys` method for iterating.
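A closely related sketch: `items` yields key/value pairs directly, which often reads more cleanly than indexing by key:
```
for key, value in my_dict.items():
    print(key, '->', value)
```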
###Code
for key in my_dict.keys():
if len(key) > 5:
print(my_dict[key])
###Output
_____no_output_____
###Markdown
range()Since for operates over lists, it is common to want to do something like:```NOTE: C-likefor (i = 0; i < 3; ++i) { print(i);}```The Python equivalent is:```for i in [0, 1, 2]: do something with i```What happens when the range you want to sample is big, e.g.```NOTE: C-likefor (i = 0; i < 1000000000; ++i) { print(i);}```That would be a real pain in the rear to have to write out the entire list from 1 to 1000000000.Enter, the `range()` function. E.g. ```range(3) is [0, 1, 2]```
###Code
range(3)
###Output
_____no_output_____
###Markdown
Notice that Python (in the newest versions, e.g. 3+) has an object type that is a range. This saves memory and speeds up calculations vs. an explicit representation of a range as a list - but it can be automagically converted to a list on the fly by Python. To show the contents as a `list` we can use the type cast like with the tuple above.Sometimes, in older Python docs, you will see `xrange`. That was the lazy range object in Python 2, where `range` itself returned an actual list. Beware of this!
###Code
list(range(3))
###Output
_____no_output_____
###Markdown
Remember earlier with slicing, the syntax `:3` meant `[0, 1, 2]`? Well, the same upper bound philosophy applies here.
###Code
for index in range(3):
instructor = instructors[index]
if instructor.endswith('Clown'):
print(instructor + " doesn't sound like a real instructor name!")
else:
print(instructor + " is so smart... all those gooey brains!")
###Output
_____no_output_____
###Markdown
This would probably be better written as
###Code
for index in range(len(instructors)):
instructor = instructors[index]
if instructor.endswith('Clown'):
print(instructor + " doesn't sound like a real instructor name!")
else:
print(instructor + " is so smart... all those gooey brains!")
###Output
_____no_output_____
###Markdown
But in all, it isn't very Pythonesque to use indexes like that (unless you have another reason in the loop) and you would opt instead for the `instructor in instructors` form. More often, you are doing something with the numbers that requires them to be integers, e.g. math.
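As an aside to the first point: when you genuinely need both the index and the item, `enumerate` is the idiomatic middle ground; a quick sketch:
```
for index, instructor in enumerate(instructors):
    print(index, instructor)
```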
###Code
sum = 0
for i in range(10):
sum += i
print(sum)
###Output
_____no_output_____
###Markdown
For loops can be nested_Note_: for more on formatting strings, see: [https://pyformat.info](https://pyformat.info)
###Code
for i in range(1, 4):
for j in range(1, 4):
print('%d * %d = %d' % (i, j, i*j)) # Note string formatting here, %d means an integer
###Output
_____no_output_____
###Markdown
You can exit loops early if a condition is met:
###Code
for i in range(10):
if i == 4:
break
i
###Output
_____no_output_____
###Markdown
You can skip stuff in a loop with `continue`
###Code
sum = 0
for i in range(10):
if (i == 5):
continue
else:
sum += i
print(sum)
###Output
_____no_output_____
###Markdown
There is a unique language feature called ``for...else``
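The `else` arm runs only when the loop finishes without hitting `break`; a minimal sketch showing both outcomes:
```
for i in range(3):
    if i == 99:
        break
else:
    print('no break, so the else arm runs')

for i in range(3):
    if i == 1:
        break
else:
    print('never printed: the loop broke early')
```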
###Code
sum = 0
for i in range(10):
sum += i
else:
print('final i = %d, and sum = %d' % (i, sum))
###Output
_____no_output_____
###Markdown
You can iterate over letters in a string
###Code
my_string = "DIRECT"
for c in my_string:
print(c)
###Output
_____no_output_____
###Markdown
Hacky Hack Time with Ifs, Fors, Lists, and imports!Objective: Replace the `bash magic` bits for downloading the HCEPDB data and uncompressing it with Python code. Since the download is big, check if the zip file exists first before downloading it again. Then load it into a pandas dataframe.Notes:* The `os` package has tools for checking if a file exists: ``os.path.exists`````import osfilename = 'HCEPDB_moldata.zip'if os.path.exists(filename): print("wahoo!")```* Use the `requests` package to get the file given a url (got this from the requests docs)```import requestsurl = 'http://faculty.washington.edu/dacb/HCEPDB_moldata.zip'req = requests.get(url)assert req.status_code == 200 if the download failed, this line will generate an errorwith open(filename, 'wb') as f: f.write(req.content)```* Use the `zipfile` package to decompress the file while reading it into `pandas````import pandas as pdimport zipfilecsv_filename = 'HCEPDB_moldata.csv'zf = zipfile.ZipFile(filename)data = pd.read_csv(zf.open(csv_filename))``` Now, use your code from above for the following URLs and filenames| URL | filename | csv_filename ||-----|----------|--------------|| http://faculty.washington.edu/dacb/HCEPDB_moldata_set1.zip | HCEPDB_moldata_set1.zip | HCEPDB_moldata_set1.csv || http://faculty.washington.edu/dacb/HCEPDB_moldata_set2.zip | HCEPDB_moldata_set2.zip | HCEPDB_moldata_set2.csv || http://faculty.washington.edu/dacb/HCEPDB_moldata_set3.zip | HCEPDB_moldata_set3.zip | HCEPDB_moldata_set3.csv |What pieces of the data structures and flow control that we talked about earlier can you use? How did you solve this problem? FunctionsFor loops let you repeat some code for every item in a list. Functions are similar in that they run the same lines of code for new values of some variable. They are different in that functions are not limited to looping over items.Functions are a critical part of writing easy to read, reusable code.Create a function like:```def function_name (parameters): """ optional docstring """ function expressions return [variable]```_Note:_ Sometimes I use the word argument in place of parameter.Here is a simple example. It prints a string that was passed in and returns nothing.
###Code
def print_string(str):
"""This prints out a string passed as the parameter."""
print(str)
return
###Output
_____no_output_____
###Markdown
To call the function, use:```print_string("Dave is awesome!")```_Note:_ The function has to be defined before you can call it!
###Code
print_string("Dave is awesome!")
###Output
_____no_output_____
###Markdown
If you don't provide an argument, or you provide too many, you get an error. Parameters (or arguments) in Python are passed by object reference (sometimes described as "pass by assignment"). This means that if you mutate a mutable parameter inside the function, the change is visible outside of the function; rebinding the parameter name to a new object, however, does not affect the caller.See the following example:```def change_list(my_list): """This changes a passed list into this function""" my_list.append('four'); print('list inside the function: ', my_list) returnmy_list = [1, 2, 3];print('list before the function: ', my_list)change_list(my_list);print('list after the function: ', my_list)```
###Code
def change_list(my_list):
"""This changes a passed list into this function"""
my_list.append('four');
print('list inside the function: ', my_list)
return
my_list = [1, 2, 3];
print('list before the function: ', my_list)
change_list(my_list);
print('list after the function: ', my_list)
###Output
list before the function: [1, 2, 3]
list inside the function: [1, 2, 3, 'four']
list after the function: [1, 2, 3, 'four']
###Markdown
Variables have scope: `global` and `local`In a function, new variables that you create are not saved when the function returns - these are `local` variables. Variables defined outside of the function can be accessed but not changed - these are `global` variables, _Note_ there is a way to do this with the `global` keyword. Generally, the use of `global` variables is not encouraged, instead use parameters.```my_global_1 = 'bad idea'my_global_2 = 'another bad one'my_global_3 = 'better idea'def my_function(): print(my_global_1) my_global_2 = 'broke your global, man!' global my_global_3 my_global_3 = 'still a better idea' return my_function()print(my_global_2)print(my_global_3)``` In general, you want to use parameters to provide data to a function and return a result with the `return`. E.g.```def sum(x, y): my_sum = x + y return my_sum```If you are going to return multiple objects, what data structure that we talked about can be used? Give an example below. Parameters have four different types:| type | behavior ||------|----------|| required | positional, must be present or error, e.g. `my_func(first_name, last_name)` || keyword | position independent, e.g. `my_func(first_name, last_name)` can be called `my_func(first_name='Dave', last_name='Beck')` or `my_func(last_name='Beck', first_name='Dave')` || default | keyword params that default to a value if not provided |
###Code
def print_name(first, last='the Clown'):
print('Your name is %s %s' % (first, last))
return
###Output
_____no_output_____
###Markdown
Play around with the above function. Functions can contain any code that you put anywhere else including:* if...elif...else* for...else* while* other function calls
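For instance, a quick sketch exercising the default and keyword forms of `print_name` from above:
```
print_name('Dave')                       # last defaults to 'the Clown'
print_name('Dave', 'Beck')               # positional
print_name(last='Beck', first='Dave')    # keyword arguments, order independent
```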
###Code
def print_name_age(first, last, age):
print_name(first, last)
print('Your age is %d' % (age))
if age > 35:
print('You are really old.')
return
print_name_age(age=40, last='Beck', first='Dave')
###Output
_____no_output_____ |
ipynb/bac_genome/priming_exp/priming_exp_info.ipynb | ###Markdown
Getting info on Priming experiment dataset that's needed for modeling Info:* __Which gradient(s) to simulate?__* For each gradient to simulate: * Infer total richness of starting community * Get distribution of total OTU abundances per fraction * Number of sequences per sample * Infer total abundance of each target taxon User variables
###Code
import os

baseDir = '/home/nick/notebook/SIPSim/dev/priming_exp/'
workDir = os.path.join(baseDir, 'exp_info')
otuTableFile = '/var/seq_data/priming_exp/data/otu_table.txt'
otuTableSumFile = '/var/seq_data/priming_exp/data/otu_table_summary.txt'
metaDataFile = '/var/seq_data/priming_exp/data/allsample_metadata_nomock.txt'
#otuRepFile = '/var/seq_data/priming_exp/otusn.pick.fasta'
#otuTaxFile = '/var/seq_data/priming_exp/otusn_tax/otusn_tax_assignments.txt'
#genomeDir = '/home/nick/notebook/SIPSim/dev/bac_genome1210/genomes/'
###Output
_____no_output_____
###Markdown
Init
###Code
import glob
%load_ext rpy2.ipython
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(gridExtra)
library(fitdistrplus)
if not os.path.isdir(workDir):
os.makedirs(workDir)
###Output
_____no_output_____
###Markdown
Loading OTU table (filter to just bulk samples)
###Code
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(ends_with('.NA'))
tbl %>% ncol %>% print
tbl[1:4,1:4]
%%R
tbl.h = tbl %>%
gather('sample', 'count', 1:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head
###Output
_____no_output_____
###Markdown
Which gradient(s) to simulate?
###Code
%%R -w 900 -h 400
tbl.h.s = tbl.h %>%
group_by(sample) %>%
summarize(total_count = sum(count)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
ggplot(tbl.h.s, aes(day, total_count, color=rep %>% as.character)) +
geom_point() +
facet_grid(isotope ~ treatment) +
theme(
text = element_text(size=16)
)
%%R
tbl.h.s$sample[grepl('700', tbl.h.s$sample)] %>% as.vector %>% sort
###Output
_____no_output_____
###Markdown
NotesSamples to simulate* Isotope: * 12C vs 13C* Treatment: * 700* Days: * 14 * 28 * 45
###Code
%%R
# bulk soil samples for gradients to simulate
samples.to.use = c(
"X12C.700.14.05.NA",
"X12C.700.28.03.NA",
"X12C.700.45.01.NA",
"X13C.700.14.08.NA",
"X13C.700.28.06.NA",
"X13C.700.45.01.NA"
)
###Output
_____no_output_____
###Markdown
Total richness of starting (bulk-soil) communityMethod:* Total number of OTUs in OTU table (i.e., gamma richness)* Just looking at bulk soil samples Loading just bulk soil
###Code
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(ends_with('.NA'))
tbl$OTUId = rownames(tbl)
tbl %>% ncol %>% print
tbl[1:4,1:4]
%%R
tbl.h = tbl %>%
gather('sample', 'count', 1:(ncol(tbl)-1)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head
%%R -w 800
tbl.s = tbl.h %>%
filter(count > 0) %>%
group_by(sample, isotope, treatment, day, rep, fraction) %>%
summarize(n_taxa = n())
ggplot(tbl.s, aes(day, n_taxa, color=rep %>% as.character)) +
geom_point() +
facet_grid(isotope ~ treatment) +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_blank()
)
%%R -w 800 -h 350
# filter to just target samples
tbl.s.f = tbl.s %>% filter(sample %in% samples.to.use)
ggplot(tbl.s.f, aes(day, n_taxa, fill=rep %>% as.character)) +
geom_bar(stat='identity') +
facet_grid(. ~ isotope) +
labs(y = 'Number of taxa') +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_blank()
)
%%R
message('Bulk soil total observed richness: ')
tbl.s.f %>% select(-fraction) %>% as.data.frame %>% print
###Output
_____no_output_____
###Markdown
Number of taxa in all fractions corresponding to each bulk soil sample* Trying to see the difference between richness of bulk vs gradients (veil line effect)
###Code
%%R -i otuTableFile
# loading OTU table
tbl = read.delim(otuTableFile, sep='\t') %>%
select(-ends_with('.NA'))
tbl.h = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head
%%R
# basename of fractions
samples.to.use.base = gsub('\\.[0-9]+\\.NA', '', samples.to.use)
samps = tbl.h$sample %>% unique
fracs = sapply(samples.to.use.base, function(x) grep(x, samps, value=TRUE))
for (n in names(fracs)){
n.frac = length(fracs[[n]])
cat(n, '-->', 'Number of fraction samples: ', n.frac, '\n')
}
%%R
# function for getting all OTUs in a sample
n.OTUs = function(samples, otu.long){
otu.long.f = otu.long %>%
filter(sample %in% samples,
count > 0)
n.OTUs = otu.long.f$OTUId %>% unique %>% length
return(n.OTUs)
}
num.OTUs = lapply(fracs, n.OTUs, otu.long=tbl.h)
num.OTUs = do.call(rbind, num.OTUs) %>% as.data.frame
colnames(num.OTUs) = c('n_taxa')
num.OTUs$sample = rownames(num.OTUs)
num.OTUs
%%R
tbl.s.f %>% as.data.frame
%%R
# joining with bulk soil sample summary table
num.OTUs$data = 'fractions'
tbl.s.f$data = 'bulk_soil'
tbl.j = rbind(num.OTUs,
tbl.s.f %>% ungroup %>% select(sample, n_taxa, data)) %>%
mutate(isotope = gsub('X|\\..+', '', sample),
sample = gsub('\\.[0-9]+\\.NA', '', sample))
tbl.j
%%R -h 300 -w 800
ggplot(tbl.j, aes(sample, n_taxa, fill=data)) +
geom_bar(stat='identity', position='dodge') +
facet_grid(. ~ isotope, scales='free_x') +
labs(y = 'Number of OTUs') +
theme(
text = element_text(size=16)
# axis.text.x = element_text(angle=90)
)
###Output
_____no_output_____
###Markdown
Distribution of total sequences per fraction * Number of sequences per sample* Using all samples to assess this one* Just fraction samples__Method:__* Total number of sequences (total abundance) per sample Loading OTU table
###Code
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(-ends_with('.NA'))
tbl %>% ncol %>% print
tbl[1:4,1:4]
%%R
tbl.h = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head
%%R -h 400
tbl.h.s = tbl.h %>%
group_by(sample) %>%
summarize(total_seqs = sum(count))
p = ggplot(tbl.h.s, aes(total_seqs)) +
theme_bw() +
theme(
text = element_text(size=16)
)
p1 = p + geom_histogram(binwidth=200)
p2 = p + geom_density()
grid.arrange(p1,p2,ncol=1)
###Output
_____no_output_____
###Markdown
Distribution fitting
###Code
%%R -w 700 -h 350
plotdist(tbl.h.s$total_seqs)
%%R -w 450 -h 400
descdist(tbl.h.s$total_seqs, boot=1000)
%%R
f.n = fitdist(tbl.h.s$total_seqs, 'norm')
f.ln = fitdist(tbl.h.s$total_seqs, 'lnorm')
f.ll = fitdist(tbl.h.s$total_seqs, 'logis')
#f.c = fitdist(tbl.s$count, 'cauchy')
f.list = list(f.n, f.ln, f.ll)
plot.legend = c('normal', 'log-normal', 'logistic')
par(mfrow = c(2,1))
denscomp(f.list, legendtext=plot.legend)
qqcomp(f.list, legendtext=plot.legend)
%%R
gofstat(list(f.n, f.ln, f.ll), fitnames=plot.legend)
%%R
summary(f.ln)
###Output
_____no_output_____
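###Markdown
The fitted lognormal summarized above (see the notes in the next cell: meanlog ≈ 10.113, sdlog ≈ 1.192) could be used to draw per-sample sequencing depths when simulating gradients. A minimal Python sketch, added here for illustration only (the sample size of 24 is arbitrary):

```
import numpy as np

rng = np.random.default_rng(42)
# mean/sigma are the log-scale parameters reported by fitdist(..., 'lnorm')
depths = rng.lognormal(mean=10.113, sigma=1.192, size=24)
print(depths.astype(int))
```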
###Markdown
Notes:* best fit: * lognormal * meanlog = 10.113 * sdlog = 1.192 (log-scale parameters of the fitted distribution) Does sample size correlate to buoyant density? Loading OTU table
###Code
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(-ends_with('.NA')) %>%
select(-starts_with('X0MC'))
tbl = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
mutate(sample = gsub('^X', '', sample))
tbl %>% head
%%R
# summarize
tbl.s = tbl %>%
group_by(sample) %>%
summarize(total_count = sum(count))
tbl.s %>% head(n=3)
###Output
_____no_output_____
###Markdown
Loading metadata
###Code
%%R -i metaDataFile
tbl.meta = read.delim(metaDataFile, sep='\t')
tbl.meta %>% head(n=3)
###Output
_____no_output_____
###Markdown
Determining association
###Code
%%R -w 700
tbl.j = inner_join(tbl.s, tbl.meta, c('sample' = 'Sample'))
ggplot(tbl.j, aes(Density, total_count, color=rep)) +
geom_point() +
facet_grid(Treatment ~ Day)
%%R -w 600 -h 350
ggplot(tbl.j, aes(Density, total_count)) +
geom_point(aes(color=Treatment)) +
geom_smooth(method='lm') +
labs(x='Buoyant density', y='Total sequences') +
theme_bw() +
theme(
text = element_text(size=16)
)
###Output
_____no_output_____
###Markdown
Number of taxa along the gradient
###Code
%%R
tbl.s = tbl %>%
filter(count > 0) %>%
group_by(sample) %>%
summarize(n_taxa = sum(count > 0))
tbl.j = inner_join(tbl.s, tbl.meta, c('sample' = 'Sample'))
tbl.j %>% head(n=3)
%%R -w 900 -h 600
ggplot(tbl.j, aes(Density, n_taxa, fill=rep, color=rep)) +
#geom_area(stat='identity', alpha=0.5, position='dodge') +
geom_point() +
geom_line() +
labs(x='Buoyant density', y='Number of taxa') +
facet_grid(Treatment ~ Day) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
###Output
_____no_output_____
###Markdown
Notes:* Many taxa out to the tails of the gradient.* It seems that the DNA fragments were quite diffuse in the gradients. Total abundance of each target taxon: bulk soil approach* Getting relative abundances from bulk soil samples * This has the caveat of likely undersampling richness vs using all gradient fraction samples. * i.e., veil line effect
###Code
%%R -i otuTableFile
# loading OTU table
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(matches('OTUId'), ends_with('.NA'))
tbl %>% ncol %>% print
tbl[1:4,1:4]
%%R
# long table format w/ selecting samples of interest
tbl.h = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F) %>%
filter(sample %in% samples.to.use,
count > 0)
tbl.h %>% head
%%R
message('Number of samples: ', tbl.h$sample %>% unique %>% length)
message('Number of OTUs: ', tbl.h$OTUId %>% unique %>% length)
%%R
tbl.hs = tbl.h %>%
group_by(OTUId) %>%
summarize(
total_count = sum(count),
mean_count = mean(count),
median_count = median(count),
sd_count = sd(count)
) %>%
filter(total_count > 0)
tbl.hs %>% head
###Output
_____no_output_____
###Markdown
For each sample, writing a table of OTU_ID and count
###Code
%%R -i workDir
setwd(workDir)
samps = tbl.h$sample %>% unique %>% as.vector
for(samp in samps){
outFile = paste(c(samp, 'OTU.txt'), collapse='_')
tbl.p = tbl.h %>%
filter(sample == samp, count > 0)
write.table(tbl.p, outFile, sep='\t', quote=F, row.names=F)
message('Table written: ', outFile)
message(' Number of OTUs: ', tbl.p %>% nrow)
}
###Output
_____no_output_____
###Markdown
Making directories for simulations
###Code
p = os.path.join(workDir, '*_OTU.txt')
files = glob.glob(p)
baseDir = os.path.split(workDir)[0]
newDirs = [os.path.split(x)[1].rstrip('.NA_OTU.txt') for x in files]
newDirs = [os.path.join(baseDir, x) for x in newDirs]
for newDir,f in zip(newDirs, files):
if not os.path.isdir(newDir):
print('Making new directory: {}'.format(newDir))
os.makedirs(newDir)
else:
print('Directory exists: {}'.format(newDir))
# symlinking file
linkPath = os.path.join(newDir, os.path.split(f)[1])
if not os.path.islink(linkPath):
os.symlink(f, linkPath)
###Output
Directory exists: /home/nick/notebook/SIPSim/dev/priming_exp/X13C.700.28.06
Directory exists: /home/nick/notebook/SIPSim/dev/priming_exp/X12C.700.28.03
Directory exists: /home/nick/notebook/SIPSim/dev/priming_exp/X13C.700.14.08
Directory exists: /home/nick/notebook/SIPSim/dev/priming_exp/X13C.700.45.01
Directory exists: /home/nick/notebook/SIPSim/dev/priming_exp/X12C.700.45.01
Directory exists: /home/nick/notebook/SIPSim/dev/priming_exp/X12C.700.14.05
###Markdown
Rank-abundance distribution for each sample
###Code
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(matches('OTUId'), ends_with('.NA'))
tbl %>% ncol %>% print
tbl[1:4,1:4]
%%R
# long table format w/ selecting samples of interest
tbl.h = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F) %>%
filter(sample %in% samples.to.use,
count > 0)
tbl.h %>% head
%%R
# ranks of relative abundances
tbl.r = tbl.h %>%
group_by(sample) %>%
mutate(perc_rel_abund = count / sum(count) * 100,
rank = row_number(-perc_rel_abund)) %>%
unite(day_rep, day, rep, sep='-')
tbl.r %>% as.data.frame %>% head(n=3)
%%R -w 900 -h 350
ggplot(tbl.r, aes(rank, perc_rel_abund)) +
geom_point() +
# labs(x='Buoyant density', y='Number of taxa') +
facet_wrap(~ day_rep) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
###Output
_____no_output_____
###Markdown
Taxon abundance range for each sample-fraction
###Code
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(-ends_with('.NA')) %>%
select(-starts_with('X0MC'))
tbl = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
mutate(sample = gsub('^X', '', sample))
tbl %>% head
%%R
tbl.ar = tbl %>%
#mutate(fraction = gsub('.+\\.', '', sample) %>% as.numeric) %>%
#mutate(treatment = gsub('(.+)\\..+', '\\1', sample)) %>%
group_by(sample) %>%
mutate(rel_abund = count / sum(count)) %>%
summarize(abund_range = max(rel_abund) - min(rel_abund)) %>%
ungroup() %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.ar %>% head(n=3)
%%R -w 800
tbl.ar = tbl.ar %>%
mutate(fraction = as.numeric(fraction))
ggplot(tbl.ar, aes(fraction, abund_range, fill=rep, color=rep)) +
geom_point() +
geom_line() +
labs(x='Buoyant density', y='Range of relative abundance values') +
facet_grid(treatment ~ day) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
###Output
_____no_output_____
###Markdown
Total abundance of each target taxon: all fraction samples approach* Getting relative abundances from all fraction samples for the gradient * I will need to calculate (mean|max?) relative abundances for each taxon and then re-scale so that cumsum = 1
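The re-scaling step itself is straightforward. A minimal pandas sketch of the idea, using made-up values and the `mean_perc_abund` column name produced by the R code below (illustration only):

```
import pandas as pd

# hypothetical per-OTU summary with mean percent abundances across fractions
otu = pd.DataFrame({'OTUId': ['OTU.1', 'OTU.2', 'OTU.3'],
                    'mean_perc_abund': [5.0, 1.0, 0.5]})

# re-scale so the relative abundances sum to 1
otu['rel_abund'] = otu['mean_perc_abund'] / otu['mean_perc_abund'].sum()
assert abs(otu['rel_abund'].sum() - 1.0) < 1e-9
```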
###Code
%%R -i otuTableFile
# loading OTU table
tbl = read.delim(otuTableFile, sep='\t') %>%
select(-ends_with('.NA'))
tbl.h = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head
%%R
# basename of fractions
samples.to.use.base = gsub('\\.[0-9]+\\.NA', '', samples.to.use)
samps = tbl.h$sample %>% unique
fracs = sapply(samples.to.use.base, function(x) grep(x, samps, value=TRUE))
for (n in names(fracs)){
n.frac = length(fracs[[n]])
cat(n, '-->', 'Number of fraction samples: ', n.frac, '\n')
}
%%R
# function for getting mean OTU abundance from all fractions
OTU.abund = function(samples, otu.long){
otu.rel.abund = otu.long %>%
filter(sample %in% samples,
count > 0) %>%
ungroup() %>%
group_by(sample) %>%
mutate(total_count = sum(count)) %>%
ungroup() %>%
mutate(perc_abund = count / total_count * 100) %>%
group_by(OTUId) %>%
summarize(mean_perc_abund = mean(perc_abund),
median_perc_abund = median(perc_abund),
max_perc_abund = max(perc_abund))
return(otu.rel.abund)
}
## calling function
otu.rel.abund = lapply(fracs, OTU.abund, otu.long=tbl.h)
otu.rel.abund = do.call(rbind, otu.rel.abund) %>% as.data.frame
otu.rel.abund$sample = gsub('\\.[0-9]+$', '', rownames(otu.rel.abund))
otu.rel.abund %>% head
%%R -h 600 -w 900
# plotting
otu.rel.abund.l = otu.rel.abund %>%
gather('abund_stat', 'value', mean_perc_abund, median_perc_abund, max_perc_abund)
otu.rel.abund.l$OTUId = reorder(otu.rel.abund.l$OTUId, -otu.rel.abund.l$value)
ggplot(otu.rel.abund.l, aes(OTUId, value, color=abund_stat)) +
geom_point(shape='O', alpha=0.7) +
scale_y_log10() +
facet_grid(abund_stat ~ sample) +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_blank(),
legend.position = 'none'
)
###Output
_____no_output_____
###Markdown
For each sample, writing a table of OTU_ID and count
###Code
%%R -i workDir
setwd(workDir)
# each sample is a file
samps = otu.rel.abund.l$sample %>% unique %>% as.vector
for(samp in samps){
outFile = paste(c(samp, 'frac_OTU.txt'), collapse='_')
tbl.p = otu.rel.abund %>%
filter(sample == samp, mean_perc_abund > 0)
write.table(tbl.p, outFile, sep='\t', quote=F, row.names=F)
cat('Table written: ', outFile, '\n')
cat(' Number of OTUs: ', tbl.p %>% nrow, '\n')
}
###Output
_____no_output_____ |
docs/source/recipes/convert_datasets.ipynb | ###Markdown
Convert Dataset FormatsThis recipe demonstrates how to use FiftyOne to convert datasets on disk between common formats. Setup If you haven't already, install FiftyOne:
###Code
!pip install fiftyone
import fiftyone as fo
###Output
_____no_output_____
###Markdown
If the above import fails due to a `cv2` error, it is an issue with OpenCV in Colab environments. [Follow these instructions to resolve it.](https://github.com/voxel51/fiftyone/issues/1494#issuecomment-1003148448). This notebook contains bash commands. To run it as a notebook, you must install the [Jupyter bash kernel](https://github.com/takluyver/bash_kernel) via the command below.Alternatively, you can just copy + paste the code blocks into your shell.
###Code
pip install bash_kernel
python -m bash_kernel.install
###Output
_____no_output_____
###Markdown
In this recipe we'll use the [FiftyOne Dataset Zoo](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/zoo_datasets.html) to download some open source datasets to work with.Specifically, we'll need [TensorFlow](https://www.tensorflow.org/) and [TensorFlow Datasets](https://www.tensorflow.org/datasets) installed to [access the datasets](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/zoo_datasets.htmlcustomizing-your-ml-backend):
###Code
pip install tensorflow tensorflow-datasets
###Output
_____no_output_____
###Markdown
Download datasets Download the test split of the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html) from the [FiftyOne Dataset Zoo](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/zoo_datasets.html) using the command below:
###Code
# Download the test split of CIFAR-10
fiftyone zoo datasets download cifar10 --split test
###Output
Downloading split 'test' to '~/fiftyone/cifar10/test'
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ~/fiftyone/cifar10/tmp-download/cifar-10-python.tar.gz
170500096it [00:04, 35887670.65it/s]
Extracting ~/fiftyone/cifar10/tmp-download/cifar-10-python.tar.gz to ~/fiftyone/cifar10/tmp-download
100% |███| 10000/10000 [5.2s elapsed, 0s remaining, 1.8K samples/s]
Dataset info written to '~/fiftyone/cifar10/info.json'
###Markdown
Download the validation split of the [KITTI dataset]( http://www.cvlibs.net/datasets/kitti) from the [FiftyOne Dataset Zoo](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/zoo_datasets.html) using the command below:
###Code
# Download the validation split of KITTI
fiftyone zoo datasets download kitti --split validation
###Output
Split 'validation' already downloaded
###Markdown
The fiftyone convert command The [FiftyOne CLI](https://voxel51.com/docs/fiftyone/cli/index.html) provides a number of utilities for importing and exporting datasets in a variety of common (or custom) formats.Specifically, the `fiftyone convert` command provides a convenient way to convert datasets on disk between formats by specifying the [fiftyone.types.Dataset](https://voxel51.com/docs/fiftyone/api/fiftyone.types.htmlfiftyone.types.dataset_types.Dataset) type of the input and desired output.FiftyOne provides a collection of [builtin types](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlsupported-formats) that you can use to read/write datasets in common formats out-of-the-box: | Dataset format | Import Supported? | Export Supported? | Conversion Supported? || ---------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------- | ----------------- | --------------------- || [ImageDirectory](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlimagedirectory) | ✓ | ✓ | ✓ || [VideoDirectory](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlvideodirectory) | ✓ | ✓ | ✓ || [FiftyOneImageClassificationDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlfiftyoneimageclassificationdataset) | ✓ | ✓ | ✓ || [ImageClassificationDirectoryTree](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlimageclassificationdirectorytree) | ✓ | ✓ | ✓ || [TFImageClassificationDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmltfimageclassificationdataset) | ✓ | ✓ | ✓ || [FiftyOneImageDetectionDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlfiftyoneimagedetectiondataset) | ✓ | ✓ | ✓ || [COCODetectionDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlcocodetectiondataset) | ✓ | ✓ | ✓ || [VOCDetectionDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlvocdetectiondataset) | ✓ | ✓ | ✓ || [KITTIDetectionDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlkittidetectiondataset) | ✓ | ✓ | ✓ || [YOLODataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlyolodataset) | ✓ | ✓ | ✓ || [TFObjectDetectionDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmltfobjectdetectiondataset) | ✓ | ✓ | ✓ || [CVATImageDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlcvatimagedataset) | ✓ | ✓ | ✓ || [CVATVideoDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlcvatvideodataset) | ✓ | ✓ | ✓ || [FiftyOneImageLabelsDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlfiftyoneimagelabelsdataset) | ✓ | ✓ | ✓ || [FiftyOneVideoLabelsDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlfiftyonevideolabelsdataset) | ✓ | ✓ | ✓ || [BDDDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlbdddataset) | ✓ | ✓ | ✓ | In addition, you can define your own [custom dataset types](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlcustom-formats) to read/write datasets in your own formats.The usage of the `fiftyone convert` command is as follows:
###Code
fiftyone convert -h
###Output
usage: fiftyone convert [-h] [--input-dir INPUT_DIR] [--input-type INPUT_TYPE]
[--output-dir OUTPUT_DIR] [--output-type OUTPUT_TYPE]
Convert datasets on disk between supported formats.
Examples::
# Convert an image classification directory tree to TFRecords format
fiftyone convert \
--input-dir /path/to/image-classification-directory-tree \
--input-type fiftyone.types.ImageClassificationDirectoryTree \
--output-dir /path/for/tf-image-classification-dataset \
--output-type fiftyone.types.TFImageClassificationDataset
# Convert a COCO detection dataset to CVAT image format
fiftyone convert \
--input-dir /path/to/coco-detection-dataset \
--input-type fiftyone.types.COCODetectionDataset \
--output-dir /path/for/cvat-image-dataset \
--output-type fiftyone.types.CVATImageDataset
optional arguments:
-h, --help show this help message and exit
--input-dir INPUT_DIR
the directory containing the dataset
--input-type INPUT_TYPE
the fiftyone.types.Dataset type of the input dataset
--output-dir OUTPUT_DIR
the directory to which to write the output dataset
--output-type OUTPUT_TYPE
the fiftyone.types.Dataset type to output
###Markdown
Convert CIFAR-10 dataset When you downloaded the test split of the CIFAR-10 dataset above, it was written to disk as a dataset in [fiftyone.types.FiftyOneImageClassificationDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlfiftyoneimageclassificationdataset) format.You can verify this by printing information about the downloaded dataset:
###Code
fiftyone zoo datasets info cifar10
###Output
***** Dataset description *****
The CIFAR-10 dataset consists of 60000 32 x 32 color images in 10
classes, with 6000 images per class. There are 50000 training images and
10000 test images.
Dataset size:
132.40 MiB
Source:
https://www.cs.toronto.edu/~kriz/cifar.html
***** Supported splits *****
test, train
***** Dataset location *****
~/fiftyone/cifar10
***** Dataset info *****
{
"name": "cifar10",
"zoo_dataset": "fiftyone.zoo.datasets.torch.CIFAR10Dataset",
"dataset_type": "fiftyone.types.dataset_types.FiftyOneImageClassificationDataset",
"num_samples": 10000,
"downloaded_splits": {
"test": {
"split": "test",
"num_samples": 10000
}
},
"classes": [
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck"
]
}
###Markdown
The snippet below uses `fiftyone convert` to convert the test split of the CIFAR-10 dataset to [fiftyone.types.ImageClassificationDirectoryTree](https://voxel51.com/docs/fiftyone/user_guide/export_datasets.html#imageclassificationdirectorytree) format, which stores classification datasets on disk in a directory tree structure with images organized per-class:```├── <classA>/│ ├── <image1>.<ext>│ ├── <image2>.<ext>│ └── ...├── <classB>/│ ├── <image1>.<ext>│ ├── <image2>.<ext>│ └── ...└── ...```
###Code
INPUT_DIR=$(fiftyone zoo datasets find cifar10 --split test)
OUTPUT_DIR=/tmp/fiftyone/cifar10-dir-tree
fiftyone convert \
--input-dir ${INPUT_DIR} --input-type fiftyone.types.FiftyOneImageClassificationDataset \
--output-dir ${OUTPUT_DIR} --output-type fiftyone.types.ImageClassificationDirectoryTree
###Output
Loading dataset from '~/fiftyone/cifar10/test'
Input format 'fiftyone.types.dataset_types.FiftyOneImageClassificationDataset'
100% |███| 10000/10000 [4.2s elapsed, 0s remaining, 2.4K samples/s]
Import complete
Exporting dataset to '/tmp/fiftyone/cifar10-dir-tree'
Export format 'fiftyone.types.dataset_types.ImageClassificationDirectoryTree'
100% |███| 10000/10000 [6.2s elapsed, 0s remaining, 1.7K samples/s]
Export complete
###Markdown
Let's verify that the conversion happened as expected:
###Code
ls -lah /tmp/fiftyone/cifar10-dir-tree/
ls -lah /tmp/fiftyone/cifar10-dir-tree/airplane/ | head
###Output
total 8000
drwxr-xr-x 1002 voxel51 wheel 31K Jul 14 11:08 .
drwxr-xr-x 12 voxel51 wheel 384B Jul 14 11:08 ..
-rw-r--r-- 1 voxel51 wheel 1.2K Jul 14 11:23 000004.jpg
-rw-r--r-- 1 voxel51 wheel 1.1K Jul 14 11:23 000011.jpg
-rw-r--r-- 1 voxel51 wheel 1.1K Jul 14 11:23 000022.jpg
-rw-r--r-- 1 voxel51 wheel 1.3K Jul 14 11:23 000028.jpg
-rw-r--r-- 1 voxel51 wheel 1.2K Jul 14 11:23 000045.jpg
-rw-r--r-- 1 voxel51 wheel 1.2K Jul 14 11:23 000053.jpg
-rw-r--r-- 1 voxel51 wheel 1.3K Jul 14 11:23 000075.jpg
###Markdown
Now let's convert the classification directory tree to [TFRecords](https://voxel51.com/docs/fiftyone/user_guide/export_datasets.html#tfimageclassificationdataset) format!
###Code
INPUT_DIR=/tmp/fiftyone/cifar10-dir-tree
OUTPUT_DIR=/tmp/fiftyone/cifar10-tfrecords
fiftyone convert \
--input-dir ${INPUT_DIR} --input-type fiftyone.types.ImageClassificationDirectoryTree \
--output-dir ${OUTPUT_DIR} --output-type fiftyone.types.TFImageClassificationDataset
###Output
Loading dataset from '/tmp/fiftyone/cifar10-dir-tree'
Input format 'fiftyone.types.dataset_types.ImageClassificationDirectoryTree'
100% |███| 10000/10000 [4.0s elapsed, 0s remaining, 2.5K samples/s]
Import complete
Exporting dataset to '/tmp/fiftyone/cifar10-tfrecords'
Export format 'fiftyone.types.dataset_types.TFImageClassificationDataset'
0% ||--| 1/10000 [23.2ms elapsed, 3.9m remaining, 43.2 samples/s] 2020-07-14 11:24:15.187387: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-07-14 11:24:15.201384: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f83df428f60 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-07-14 11:24:15.201405: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
100% |███| 10000/10000 [8.2s elapsed, 0s remaining, 1.3K samples/s]
Export complete
###Markdown
Let's verify that the conversion happened as expected:
###Code
ls -lah /tmp/fiftyone/cifar10-tfrecords
###Output
total 29696
drwxr-xr-x 3 voxel51 wheel 96B Jul 14 11:24 .
drwxr-xr-x 4 voxel51 wheel 128B Jul 14 11:24 ..
-rw-r--r-- 1 voxel51 wheel 14M Jul 14 11:24 tf.records
###Markdown
Convert KITTI dataset When you downloaded the validation split of the KITTI dataset above, it was written to disk as a dataset in [fiftyone.types.FiftyOneImageDetectionDataset](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/datasets.htmlfiftyoneimagedetectiondataset) format.You can verify this by printing information about the downloaded dataset:
###Code
fiftyone zoo datasets info kitti
###Output
***** Dataset description *****
KITTI contains a suite of vision tasks built using an autonomous
driving platform.
The full benchmark contains many tasks such as stereo, optical flow, visual
odometry, etc. This dataset contains the object detection dataset,
including the monocular images and bounding boxes. The dataset contains
7481 training images annotated with 3D bounding boxes. A full description
of the annotations can be found in the README of the object development kit
on the KITTI homepage.
Dataset size:
5.27 GiB
Source:
http://www.cvlibs.net/datasets/kitti
***** Supported splits *****
test, train, validation
***** Dataset location *****
~/fiftyone/kitti
***** Dataset info *****
{
"name": "kitti",
"zoo_dataset": "fiftyone.zoo.datasets.tf.KITTIDataset",
"dataset_type": "fiftyone.types.dataset_types.FiftyOneImageDetectionDataset",
"num_samples": 423,
"downloaded_splits": {
"validation": {
"split": "validation",
"num_samples": 423
}
},
"classes": [
"Car",
"Van",
"Truck",
"Pedestrian",
"Person_sitting",
"Cyclist",
"Tram",
"Misc"
]
}
###Markdown
The snippet below uses `fiftyone convert` to convert the validation split of the KITTI dataset to [fiftyone.types.COCODetectionDataset](https://voxel51.com/docs/fiftyone/user_guide/export_datasets.html#cocodetectiondataset) format, which writes the dataset to disk with annotations in [COCO format](https://cocodataset.org/#format-data).
###Code
INPUT_DIR=$(fiftyone zoo datasets find kitti --split validation)
OUTPUT_DIR=/tmp/fiftyone/kitti-coco
fiftyone convert \
--input-dir ${INPUT_DIR} --input-type fiftyone.types.FiftyOneImageDetectionDataset \
--output-dir ${OUTPUT_DIR} --output-type fiftyone.types.COCODetectionDataset
###Output
Loading dataset from '~/fiftyone/kitti/validation'
Input format 'fiftyone.types.dataset_types.FiftyOneImageDetectionDataset'
100% |███████| 423/423 [1.2s elapsed, 0s remaining, 351.0 samples/s]
Import complete
Exporting dataset to '/tmp/fiftyone/kitti-coco'
Export format 'fiftyone.types.dataset_types.COCODetectionDataset'
100% |███████| 423/423 [4.4s elapsed, 0s remaining, 96.1 samples/s]
Export complete
###Markdown
Let's verify that the conversion happened as expected:
###Code
ls -lah /tmp/fiftyone/kitti-coco/
ls -lah /tmp/fiftyone/kitti-coco/data | head
cat /tmp/fiftyone/kitti-coco/labels.json | python -m json.tool 2> /dev/null | head -20
echo "..."
cat /tmp/fiftyone/kitti-coco/labels.json | python -m json.tool 2> /dev/null | tail -20
###Output
{
"info": {
"year": "",
"version": "",
"description": "Exported from FiftyOne",
"contributor": "",
"url": "https://voxel51.com/fiftyone",
"date_created": "2020-07-14T11:24:40"
},
"licenses": [],
"categories": [
{
"id": 0,
"name": "Car",
"supercategory": "none"
},
{
"id": 1,
"name": "Cyclist",
"supercategory": "none"
...
"area": 4545.8,
"segmentation": null,
"iscrowd": 0
},
{
"id": 3196,
"image_id": 422,
"category_id": 3,
"bbox": [
367.2,
107.3,
36.2,
105.2
],
"area": 3808.2,
"segmentation": null,
"iscrowd": 0
}
]
}
###Markdown
Now let's convert from COCO format to [CVAT Image format](https://voxel51.com/docs/fiftyone/user_guide/export_datasets.html#cvatimageformat)!
###Code
INPUT_DIR=/tmp/fiftyone/kitti-coco
OUTPUT_DIR=/tmp/fiftyone/kitti-cvat
fiftyone convert \
--input-dir ${INPUT_DIR} --input-type fiftyone.types.COCODetectionDataset \
--output-dir ${OUTPUT_DIR} --output-type fiftyone.types.CVATImageDataset
###Output
Loading dataset from '/tmp/fiftyone/kitti-coco'
Input format 'fiftyone.types.dataset_types.COCODetectionDataset'
100% |███████| 423/423 [2.0s elapsed, 0s remaining, 206.4 samples/s]
Import complete
Exporting dataset to '/tmp/fiftyone/kitti-cvat'
Export format 'fiftyone.types.dataset_types.CVATImageDataset'
100% |███████| 423/423 [1.3s elapsed, 0s remaining, 323.7 samples/s]
Export complete
###Markdown
Let's verify that the conversion happened as expected:
###Code
ls -lah /tmp/fiftyone/kitti-cvat
cat /tmp/fiftyone/kitti-cvat/labels.xml | head -20
echo "..."
cat /tmp/fiftyone/kitti-cvat/labels.xml | tail -20
###Output
<?xml version="1.0" encoding="utf-8"?>
<annotations>
<version>1.1</version>
<meta>
<task>
<size>423</size>
<mode>annotation</mode>
<labels>
<label>
<name>Car</name>
<attributes>
</attributes>
</label>
<label>
<name>Cyclist</name>
<attributes>
</attributes>
</label>
<label>
<name>Misc</name>
...
<box label="Pedestrian" xtl="360" ytl="116" xbr="402" ybr="212">
</box>
<box label="Pedestrian" xtl="396" ytl="120" xbr="430" ybr="212">
</box>
<box label="Pedestrian" xtl="413" ytl="112" xbr="483" ybr="212">
</box>
<box label="Pedestrian" xtl="585" ytl="80" xbr="646" ybr="215">
</box>
<box label="Pedestrian" xtl="635" ytl="94" xbr="688" ybr="212">
</box>
<box label="Pedestrian" xtl="422" ytl="85" xbr="469" ybr="210">
</box>
<box label="Pedestrian" xtl="457" ytl="93" xbr="520" ybr="213">
</box>
<box label="Pedestrian" xtl="505" ytl="101" xbr="548" ybr="206">
</box>
<box label="Pedestrian" xtl="367" ytl="107" xbr="403" ybr="212">
</box>
</image>
</annotations>
###Markdown
CleanupYou can cleanup the files generated by this recipe by running the command below:
###Code
rm -rf /tmp/fiftyone
###Output
_____no_output_____ |
notebooks/Morglorb-Peanut-Butter-SLSQP.ipynb | ###Markdown
IntroductionThis Morglorb recipe uses groupings of ingredients to try to cover nutritional requirements with enough overlap that a single ingredient with quality issues does not cause a failure for the whole recipe. An optimizer is used to find the right amount of each ingredient to fulfill the nutritional and practical requirements. To Do* Nutrients without an upper limit should have the upper limit constraint removed* Add constraints for the NIH essential protein combinations as a limit* Add a radar graph for vitamins showing the boundary between RDI and UL* Add a radar graph for vitamins without an upper limit but showing the RDI* Add a radar graph for essential proteins showing the range between RDI and UL* Add a radar graph for essential proteins without an upper limit, but showing the RDI as the lower limit* Add a radar graph pair for non-essential proteins with the above UL and no UL pairing* Add equality constraints for at least energy, and macro nutrients if possible
###Code
# Import all of the helper libraries
from scipy.optimize import minimize
from scipy.optimize import Bounds
from scipy.optimize import least_squares, lsq_linear, dual_annealing, minimize
import pandas as pd
import numpy as np
import os
import json
from math import e, log, log10
import matplotlib.pyplot as plt
import seaborn as sns
from ipysheet import from_dataframe, to_dataframe
#!pip install seaborn
#!pip install ipysheet
#!pip install ipywidgets
# Setup the notebook context
data_dir = '../data'
pd.set_option('max_columns', 70)
###Output
_____no_output_____
###Markdown
Our DataThe [tables](https://docs.google.com/spreadsheets/d/104Y7kH4OzmfsM-v2MSEoc7cIgv0aAMT2sQLAmgkx8R8/edit#gid=442191411) containing our ingredients' nutrition profiles are held in Google Sheets.The sheet names are "Ingredients" and "Nutrition Profile"
###Code
# Download our nutrition profile data from Google Sheets
google_spreadsheet_url = 'https://docs.google.com/spreadsheets/d/104Y7kH4OzmfsM-v2MSEoc7cIgv0aAMT2sQLAmgkx8R8/export?format=csv&id=104Y7kH4OzmfsM-v2MSEoc7cIgv0aAMT2sQLAmgkx8R8'
nutrition_tab = '624419712'
ingredient_tab = '1812860789'
nutrition_tab_url = f'{google_spreadsheet_url}&gid={nutrition_tab}'
ingredient_tab_url = f'{google_spreadsheet_url}&gid={ingredient_tab}'
nutrition_profile_df = pd.read_csv(nutrition_tab_url, index_col=0, verbose=True)
for col in ['RDI', 'UL', 'Target Scale', 'Target', 'Weight']:
nutrition_profile_df[col] = nutrition_profile_df[col].astype(float)
nutrition_profile_df = nutrition_profile_df.transpose()
ingredients_df = pd.read_csv(ingredient_tab_url, index_col=0, verbose=True).transpose()
# convert all values to float
for col in ingredients_df.columns:
ingredients_df[col] = ingredients_df[col].astype(float)
###Output
Tokenization took: 0.04 ms
Type conversion took: 1.01 ms
Parser memory cleanup took: 0.00 ms
Tokenization took: 0.06 ms
Type conversion took: 1.29 ms
Parser memory cleanup took: 0.01 ms
###Markdown
Problem SetupLet's cast our data into the form $\vec{y} = A \vec{x}$, where $A$ is our ingredient nutrient matrix, $\vec{x}$ is the quantity of each ingredient in our recipe, and $\vec{y}$ is the resulting nutrient totals; the target nutrition profile is $\vec{b}$.The problem to be solved is to find the quantity of each ingredient which will optimally satisfy the nutrition profile, or in our model, to minimize $|A \vec{x} - \vec{b}|$.There are some nutrients we only want to track, but not optimize. For example, we want to know how much cholesterol is contained in our recipe, but we don't want to constrain our result to obtain a specific amount of cholesterol as a goal. The full set of nutrients (including the track-only ones) is held in A_full and b_full; the values to be optimized are named A and b
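Before the exponential-penalty objective used below, it is worth noting what the plain bounded linear least-squares version of this problem looks like. A minimal sketch with SciPy (illustration only; it assumes `A`, `b`, `lower`, and `upper` as constructed in the following cells and ignores the per-nutrient weights):

```
from scipy.optimize import lsq_linear

# minimize ||A x - b|| subject to lower <= x <= upper
res = lsq_linear(A.values, b.values, bounds=(lower, upper))
x_baseline = res.x  # ingredient quantities, in units of 100 g/day
```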
###Code
b_full = nutrition_profile_df
A_full = ingredients_df.transpose()
A = ingredients_df.transpose()[nutrition_profile_df.loc['Report Only'] == False].astype(float)
b_full = nutrition_profile_df.loc['Target']
b = nutrition_profile_df.loc['Target'][nutrition_profile_df.loc['Report Only'] == False].astype(float)
ul = nutrition_profile_df.loc['UL'][nutrition_profile_df.loc['Report Only'] == False].astype(float)
rdi = nutrition_profile_df.loc['RDI'][nutrition_profile_df.loc['Report Only'] == False].astype(float)
weight = nutrition_profile_df.loc['Weight'][nutrition_profile_df.loc['Report Only'] == False]
ul_full = nutrition_profile_df.loc['UL']
rdi_full = nutrition_profile_df.loc['RDI']
# Constrain ingredients before the optimization process. Many of the ingredients are required for non-nutritional purposes
# or are being limited to enhance flavor
#
# The bounds units are in fractions of 100g / day, i.e.: 0.5 represents 50g / day, of the ingredient
#bounds_df = pd.DataFrame(index=ingredients_df.index, data={'lower': 0.0, 'upper': np.inf})
bounds_df = pd.DataFrame(index=ingredients_df.index, data={'lower': 0.0, 'upper': 1.0e6})
bounds_df.loc['Guar gum'] = [1.5 * .01, 1.5 * .01 + .0001]
bounds_df.loc['Xanthan Gum'] = [1.5 * .01, 1.5 * .01 + .0001]
bounds_df.loc['Alpha-galactosidase enzyme (Beano)'] = [1.0, 1.0 + .0001]
bounds_df.loc['Multivitamin'] = [1.0, 1.0 + .0001]
bounds_df.loc['Corn flour, nixtamalized'] = [0, 1.0]
bounds_df.loc['Whey protein'] = [0.0,0.15]
bounds_df.loc['Ascorbic acid'] = [0.01, 0.01 + .0001]
bounds_df.loc['Peanut butter'] = [0.70, 5.0]
bounds_df.loc['Wheat bran, crude'] = [0.5, 5.0]
bounds_df.loc['Flaxseed, fresh ground'] = [0.25, 5.0]
bounds_df.loc['Choline Bitartrate'] = [0.0, 0.05]
bounds_df.loc['Potassium chloride'] = [0.0, 0.15]
lower = bounds_df.lower.values
upper = bounds_df.upper.values
lower.shape, upper.shape
x0 = np.array(lower)
bounds = pd.DataFrame( data = {'lower': lower, 'upper': upper}, dtype=float)
# Objective-shaping constants: only the last assignment below takes effect (the earlier lines are alternative settings).
# Note that these scalar names shadow the nutrition target Series b defined above.
a = 100.; b = 2.; c = a; k = 10
a = 20.; b = 2.; c = a; k = 10
a = 10.; b = 0.1 ; c = a; k = 5
#u0 = (rdi + np.log(rdi)); u0.name = 'u0'
#u0 = rdi * (1 + log(a))
u0 = rdi / (1 - log(k) / a)
u1 = ul / (log(k) / c + 1)
#u1 = ul - np.log(ul); u1.name = 'u1'
#u = pd.concat([limits, pd.Series(y0,scale_limits.index, name='y0')], axis=1)
def obj(x):
    y0 = A.dot(x.transpose())  # nutrient totals produced by ingredient quantities x
    # Soft penalties: the first term grows when y0 falls below the lower target u0, the third when y0
    # exceeds the upper target u1 (np.nan_to_num zeroes it for nutrients with no UL), and the middle
    # term rises slowly above u0; each nutrient is scaled by its weight.
    obj_vec = (np.exp(a * (u0 - y0)/u0) + np.exp(b * (y0 - u0)/u0) + np.nan_to_num(np.exp(c * (y0 - u1)/u1))) * weight
    #print(f'obj_vec: {obj_vec[0]}, y0: {y0[0]}, u0: {u0[0]}')
    return np.sum(obj_vec)
#rdi[26], u0[26], u1[26], ul[26]
#rdi[0:5], u0[0:5], u1[0:5], ul[0:5]
#np.log(rdi)[26]
#u1
solution = minimize(obj, x0, method='SLSQP', bounds=list(zip(lower, upper)), options = {'maxiter': 1000})
solution.success
A_full.dot(solution.x).astype(int)
# Scale the ingredient nutrient amounts for the given quantity of each ingredient given by the optimizer
solution_df = A_full.transpose().mul(solution.x, axis=0) # Scale each nutrient vector per ingredient by the amount of the ingredient
solution_df.insert(0, 'Quantity (g)', solution.x * 100) # Scale to 100 g since that is basis for the nutrient quantities
# Add a row showing the sum of the scaled amount of each nutrient
total = solution_df.sum()
total.name = 'Total'
solution_df = solution_df.append(total)
# Plot the macro nutrient profile
# The ratio of Calories for protein:carbohydrates:fat is 4:4:9 kcal/g
pc = solution_df['Protein (g)']['Total'] * 4.0
cc = solution_df['Carbohydrates (g)']['Total'] * 4.0
fc = solution_df['Total Fat (g)']['Total'] * 9.0
tc = pc + cc + fc
p_pct = int(round(pc / tc * 100))
c_pct = int(round(cc / tc * 100))
f_pct = int(round(fc / tc * 100))
(p_pct, c_pct, f_pct)
# create data
names=f'Protein {p_pct}%', f'Carbohydrates {c_pct}%', f'Fat {f_pct}%',
size=[p_pct, c_pct, f_pct]
fig = plt.figure(figsize=(10, 5))
fig.add_subplot(1,2,1)
# Create a circle for the center of the plot
my_circle=plt.Circle( (0,0), 0.5, color='white')
# Give color names
cmap = plt.get_cmap('Spectral')
sm = plt.cm.ScalarMappable(cmap=cmap)
colors = ['yellow','orange','red']
plt.pie(size, labels=names, colors=colors)
#p=plt.gcf()
#p.gca().add_artist(my_circle)
fig.gca().add_artist(my_circle)
#plt.show()
fig.add_subplot(1,2,2)
barWidth = 1
fs = [solution_df['Soluble Fiber (g)']['Total']]
fi = [solution_df['Insoluble Fiber (g)']['Total']]
plt.bar([0], fs, color='red', edgecolor='white', width=barWidth, label=['Soluble Fiber (g)'])
plt.bar([0], fi, bottom=fs, color='yellow', edgecolor='white', width=barWidth, label=['Insoluble Fiber (g)'])
plt.show()
# Also show the Omega-3, Omega-6 ratio
# Saturated:Monounsaturated:Polyunsaturated ratios
# Prepare data as a whole for plotting by normalizing and scaling
amounts = solution_df
total = A_full.dot(solution.x) #solution_df.loc['Total']
# Normalize as a ratio beyond RDI
norm = (total) / rdi_full
norm_ul = (ul_full) / rdi_full
nuts = pd.concat([pd.Series(norm.values, name='value'), pd.Series(norm.index, name='name')], axis=1)
# Setup categories of nutrients and a common plotting function
vitamins = ['Vitamin A (IU)','Vitamin B6 (mg)','Vitamin B12 (ug)','Vitamin C (mg)','Vitamin D (IU)',
'Vitamin E (IU)','Vitamin K (ug)','Thiamin (mg)','Riboflavin (mg)','Niacin (mg)','Folate (ug)','Pantothenic Acid (mg)','Biotin (ug)','Choline (mg)']
minerals = ['Calcium (g)','Chloride (g)','Chromium (ug)','Copper (mg)','Iodine (ug)','Iron (mg)',
'Magnesium (mg)','Manganese (mg)','Molybdenum (ug)','Phosphorus (g)','Potassium (g)','Selenium (ug)','Sodium (g)','Sulfur (g)','Zinc (mg)']
essential_aminoacids = ['Cystine (mg)','Histidine (mg)','Isoleucine (mg)','Leucine (mg)','Lysine (mg)',
'Methionine (mg)','Phenylalanine (mg)','Threonine (mg)','Tryptophan (mg)','Valine (mg)']
other_aminoacids = ['Tyrosine (mg)','Arginine (mg)','Alanine (mg)','Aspartic acid (mg)','Glutamic acid (mg)','Glycine (mg)','Proline (mg)','Serine (mg)','Hydroxyproline (mg)']
def plot_group(nut_names, title):
nut_names_short = [s.split(' (')[0] for s in nut_names] # Snip off the units from the nutrient names
# Create a bar to indicate an upper limit
ul_bar = (norm_ul * 1.04)[nut_names]
ul_bar[ul_full[nut_names].isnull() == True] = 0
# Create a bar to mask the UL bar so just the end is exposed
ul_mask = norm_ul[nut_names]
ul_mask[ul_full[nut_names].isnull() == True] = 0
n = [] # normalized values for each bar
for x, mx in zip(norm[nut_names], ul_mask.values):
if mx == 0: # no upper limit
if x < 1.0:
n.append(1.0 - (x / 2.0))
else:
n.append(0.50)
else:
n.append(1.0 - (log10(x) / log10(mx)))
clrs = sm.to_rgba(n, norm=False)
g = sns.barplot(x=ul_bar.values, y=nut_names_short, color='red')
g.set_xscale('log')
sns.barplot(x=ul_mask.values, y=nut_names_short, color='white')
bax = sns.barplot(x=norm[nut_names], y=nut_names_short, label="Total", palette=clrs)
# Add a legend and informative axis label
g.set( ylabel="",xlabel="Nutrient Mass / RDI (Red Band is UL)", title=title)
#sns.despine(left=True, bottom=True)
# Construct a group of bar charts for each nutrient group
# Setup the colormap for each bar
cmap = plt.get_cmap('Spectral')
sm = plt.cm.ScalarMappable(cmap=cmap)
#fig = plt.figure(figsize=plt.figaspect(3.))
fig = plt.figure(figsize=(20, 20))
fig.add_subplot(4, 1, 1)
plot_group(vitamins,'Vitamin amounts relative to RDI')
fig.add_subplot(4, 1, 2)
plot_group(minerals,'Mineral amounts relative to RDI')
fig.add_subplot(4, 1, 3)
plot_group(essential_aminoacids,'Essential amino acid amounts relative to RDI')
fig.add_subplot(4, 1, 4)
plot_group(other_aminoacids,'Other amino acid amounts relative to RDI')
#fig.show()
fig.tight_layout()
#solu_amount = (solution_df['Quantity (g)'] * 14).astype(int)
pd.options.display.float_format = "{:,.2f}".format
solu_amount = solution_df['Quantity (g)']
solu_amount.index.name = 'Ingredient'
solu_amount.reset_index()
###Output
_____no_output_____ |
doc2vec/Word2Vec to cyber security data.ipynb | ###Markdown
- Combine all data
###Code
import pandas as pd
from os import listdir
path = '../data/'
files = listdir('../data/')
df = pd.DataFrame(columns=["url", "query", "text"])
for f in files:
temp = pd.read_csv(path + f)
if 'article-name' in temp.columns:
temp.rename(columns={'article-name':'name','article-url':'url','content':'text','keyword':'query'}, inplace=True)
if len(temp) < 1:
continue
df = df.append(temp)
df.drop(['Unnamed: 0', 'name'], inplace=True, axis=1)
###Output
_____no_output_____
###Markdown
- data preprocessing 1. stop word removal 2. lower case letters 3. non ascii character removal
###Code
from nltk.corpus import stopwords
import re
stop = stopwords.words('english')
def normalize_text(text):
norm_text = text.lower()
# Replace breaks with spaces
norm_text = norm_text.replace('<br />', ' ')
# Pad punctuation with spaces on both sides
norm_text = re.sub(r"([\.\",\(\)!\?;:])", " \\1 ", norm_text)
return norm_text
def remove_stop_words(text):
return " ".join([item.lower() for item in text.split() if item not in stop])
def remove_non_ascii(text):
return ''.join(["" if ord(i) < 32 or ord(i) > 126 else i for i in text])
df['text'] = df['text'].apply(remove_non_ascii)
df['text'] = df['text'].apply(normalize_text)
df['text'] = df['text'].apply(remove_stop_words)
df["text"] = df['text'].str.replace('[^\w\s]','')
###Output
_____no_output_____
###Markdown
- a simple word2vec model In this section we apply a simple word2vec model to the tokenized data.
###Code
from gensim.models import Word2Vec
from nltk import word_tokenize
df['tokenized_text'] = df.apply(lambda row: word_tokenize(row['text']), axis=1)
model = Word2Vec(df['tokenized_text'], size=100)
for num in [1, 3, 5, 10, 12, 16, 17, 18, 19, 28, 29, 30, 32, 33, 34, 37, 38]:
term = "apt%s"%str(num)
if term in model.wv.vocab:
print("Most similar words for %s"%term)
for t in model.most_similar(term): print(t)
print('\n')
###Output
Most similar words for apt1
('mandiant', 0.9992831349372864)
('according', 0.9988211989402771)
('china', 0.9986724257469177)
('defense', 0.9986507892608643)
('kaspersky', 0.9986412525177002)
('iranian', 0.9985784888267517)
('military', 0.9983772039413452)
('lab', 0.9978839159011841)
('detected', 0.997614860534668)
('published', 0.997364342212677)
Most similar words for apt3
('strontium', 0.9977763891220093)
('cozy', 0.9963721036911011)
('tracked', 0.9958826899528503)
('team', 0.994817852973938)
('also', 0.9941498041152954)
('menupass', 0.9935141205787659)
('linked', 0.9934953451156616)
('axiom', 0.9930843114852905)
('chinalinked', 0.9929003715515137)
('behind', 0.9923593997955322)
Most similar words for apt10
('apt37', 0.9996817111968994)
('sophisticated', 0.9994451403617859)
('naikon', 0.9994421601295471)
('overlap', 0.999294638633728)
('entities', 0.9992740154266357)
('micro', 0.9989956021308899)
('noticed', 0.9988883137702942)
('tracks', 0.9988324642181396)
('primarily', 0.9988023042678833)
('associated', 0.9987926483154297)
Most similar words for apt17
('vietnamese', 0.9984132051467896)
('hellsing', 0.9982680082321167)
('netherlands', 0.9982122182846069)
('turla', 0.9981800317764282)
('aligns', 0.99793940782547)
('region', 0.997829258441925)
('continues', 0.9977688193321228)
('operating', 0.9977645874023438)
('variety', 0.9977619647979736)
('aware', 0.9976860284805298)
Most similar words for apt28
('sofacy', 0.9984127283096313)
('bear', 0.9978348612785339)
('known', 0.9976195096969604)
('fancy', 0.9963506460189819)
('storm', 0.9960793256759644)
('apt', 0.995140790939331)
('pawn', 0.9940293431282043)
('sednit', 0.9939311742782593)
('tsar', 0.9931427240371704)
('actor', 0.9903273582458496)
Most similar words for apt29
('sandworm', 0.9979566335678101)
('2010', 0.9978185892105103)
('including', 0.9976153373718262)
('observed', 0.9976032972335815)
('overview', 0.9973697662353516)
('spotted', 0.9972324371337891)
('aimed', 0.9965631365776062)
('2007', 0.9963749647140503)
('buckeye', 0.9962424039840698)
('aka', 0.9962256550788879)
Most similar words for apt30
('companies', 0.998908281326294)
('prolific', 0.9988271594047546)
('variety', 0.9987081289291382)
('expanded', 0.9986468553543091)
('focuses', 0.9986134767532349)
('continues', 0.998511552810669)
('connected', 0.9984531402587891)
('detailed', 0.9984067678451538)
('interests', 0.9984041452407837)
('actively', 0.9984041452407837)
Most similar words for apt32
('continues', 0.9995431900024414)
('region', 0.9994964003562927)
('ties', 0.9994940757751465)
('destructive', 0.999233067035675)
('interests', 0.9991957545280457)
('europe', 0.9991946220397949)
('dukes', 0.9991874098777771)
('mainly', 0.9991647005081177)
('countries', 0.9991510510444641)
('apt38', 0.9991440176963806)
Most similar words for apt33
('multiple', 0.9996379613876343)
('japanese', 0.9994475841522217)
('revealed', 0.9994279146194458)
('involved', 0.9992635250091553)
('south', 0.9992367029190063)
('2009', 0.998937726020813)
('responsible', 0.9989287257194519)
('evidence', 0.9987417459487915)
('associated', 0.9987338781356812)
('determined', 0.9987262487411499)
Most similar words for apt34
('shift', 0.9994713068008423)
('particularly', 0.9993870258331299)
('continue', 0.9993187785148621)
('indicate', 0.9992826581001282)
('crew', 0.9991933703422546)
('consistent', 0.999139666557312)
('palo', 0.999091625213623)
('august', 0.9990721344947815)
('added', 0.9990265369415283)
('provided', 0.9990137815475464)
Most similar words for apt37
('apt10', 0.9996817111968994)
('sophisticated', 0.9993605017662048)
('entities', 0.9991942048072815)
('overlap', 0.9991032481193542)
('naikon', 0.9991011619567871)
('micro', 0.9990009069442749)
('primarily', 0.9989291429519653)
('associated', 0.9988642930984497)
('highly', 0.9987080097198486)
('noticed', 0.9986851811408997)
Most similar words for apt38
('continues', 0.9994156956672668)
('individuals', 0.9993045330047607)
('early', 0.9992733001708984)
('turla', 0.9992636442184448)
('stone', 0.9992102980613708)
('experts', 0.9991610050201416)
('europe', 0.9991508722305298)
('apt32', 0.9991441965103149)
('kitten', 0.9991305470466614)
('region', 0.9991227388381958)
###Markdown
Here we got one interesting result for apt17 and apt28, but for all the other word2vec results we observe generic terms like malware, attackers, groups, and backdoor among the most similar items. It might be the case that the names of attacker groups are omitted because they are phrases rather than single words. - word2vec with bigram phrases Here we try to find bigram phrases in the dataset and apply the word2vec model to them
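Before rerunning word2vec, here is a minimal, self-contained sketch of what the Phrases transformer does, using made-up sentences (the `min_count` and `threshold` values are lowered only so this tiny corpus forms phrases): frequent adjacent word pairs such as "fancy bear" get merged into single tokens like `fancy_bear`, which can then receive their own vectors.

```python
from gensim.models import Phrases

toy_sentences = [
    ['fancy', 'bear', 'targeted', 'the', 'network'],
    ['fancy', 'bear', 'used', 'spear', 'phishing'],
    ['researchers', 'tracked', 'fancy', 'bear', 'activity'],
]
toy_bigram = Phrases(toy_sentences, min_count=1, threshold=1)
print(toy_bigram[toy_sentences[0]])  # expected to contain 'fancy_bear' as a single token
```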
###Code
from gensim.models import Phrases
from collections import Counter
bigram = Phrases()
bigram.add_vocab(df['tokenized_text'])
bigram_counter = Counter()
for key in bigram.vocab.keys():
if len(key.split("_")) > 1:
bigram_counter[key] += bigram.vocab[key]
for key, counts in bigram_counter.most_common(20):
    print('{0: <20} {1}'.format(key, counts))
bigram_model = Word2Vec(bigram[df['tokenized_text']], size=100)
for num in [1, 3, 5, 10, 12, 16, 17, 18, 19, 28, 29, 30, 32, 33, 34, 37, 38]:
term = "apt%s"%str(num)
if term in bigram_model.wv.vocab:
print("Most similar words for %s"%term)
for t in bigram_model.most_similar(term): print(t)
print('\n')
###Output
Most similar words for apt1
(u'different', 0.99991774559021)
(u'likely', 0.9999154806137085)
(u'well', 0.9999152421951294)
(u'says', 0.9999047517776489)
(u'multiple', 0.9999043941497803)
(u'threat_actors', 0.9998949766159058)
(u'network', 0.9998934268951416)
(u'according', 0.9998912811279297)
(u'compromised', 0.9998894929885864)
(u'related', 0.999876856803894)
Most similar words for apt3
(u'actor', 0.9998462796211243)
(u'described', 0.9998243451118469)
(u'also_known', 0.9998069405555725)
(u'actors', 0.9997928738594055)
(u'recently', 0.9997922778129578)
(u'experts', 0.999782919883728)
(u'apt29', 0.9997620582580566)
(u'identified', 0.9997564554214478)
(u'two', 0.9997557401657104)
(u'domains', 0.9997459650039673)
Most similar words for apt10
(u'time', 0.999898374080658)
(u'analysis', 0.9998810291290283)
(u'u', 0.9998781681060791)
(u'version', 0.9998765587806702)
(u'based', 0.9998717308044434)
(u'provided', 0.9998701810836792)
(u'least', 0.9998694658279419)
(u'mandiant', 0.9998666644096375)
(u'governments', 0.9998637437820435)
(u'apt32', 0.9998601675033569)
Most similar words for apt17
(u'connections', 0.9996646642684937)
(u'email', 0.9996588230133057)
(u'find', 0.9996576905250549)
(u'across', 0.9996559023857117)
(u'order', 0.9996424913406372)
(u'web', 0.9996327757835388)
(u'user', 0.9996271133422852)
(u'connection', 0.9996263980865479)
(u'key', 0.9996225833892822)
(u'shows', 0.9996156096458435)
Most similar words for apt28
(u'fireeye', 0.9996447563171387)
(u'using', 0.999575138092041)
(u'targeted', 0.9995599985122681)
(u'sofacy', 0.9995203614234924)
(u'known', 0.9995172619819641)
(u'tools', 0.9993760585784912)
(u'spotted', 0.9993688464164734)
(u'researchers', 0.9991514086723328)
(u'report', 0.9991289973258972)
(u'also', 0.9991098046302795)
Most similar words for apt29
(u'recently', 0.9998775720596313)
(u'however', 0.9998724460601807)
(u'actors', 0.9998624920845032)
(u'two', 0.999857485294342)
(u'vulnerabilities', 0.9998537302017212)
(u'identified', 0.9998456835746765)
(u'first', 0.9998396635055542)
(u'described', 0.9998297691345215)
(u'leveraged', 0.999822735786438)
(u'seen', 0.9998195767402649)
Most similar words for apt30
(u'research', 0.999484658241272)
(u'published', 0.9994805455207825)
(u'noted', 0.9994770288467407)
(u'fireeye_said', 0.9994675517082214)
(u'account', 0.9994667768478394)
(u'provide', 0.9994657039642334)
(u'command_control', 0.9994556903839111)
(u'splm', 0.9994515776634216)
(u'c2', 0.9994462728500366)
(u'2013', 0.9994445443153381)
Most similar words for apt32
(u'techniques', 0.9999111890792847)
(u'additional', 0.9999087452888489)
(u'analysis', 0.9999069571495056)
(u'many', 0.9999059438705444)
(u'companies', 0.9998983144760132)
(u'based', 0.9998965263366699)
(u'part', 0.9998964071273804)
(u'backdoors', 0.999894380569458)
(u'mandiant', 0.9998939037322998)
(u'another', 0.9998925924301147)
Most similar words for apt33
(u'mandiant', 0.9999130368232727)
(u'year', 0.9999092221260071)
(u'techniques', 0.9998992681503296)
(u'tracked', 0.999896764755249)
(u'team', 0.9998966455459595)
(u'last_year', 0.9998915195465088)
(u'part', 0.9998914003372192)
(u'military', 0.9998868703842163)
(u'chinese', 0.9998816251754761)
(u'threat', 0.9998784065246582)
Most similar words for apt34
(u'services', 0.9997851848602295)
(u'targeted_attacks', 0.9997463226318359)
(u'example', 0.9997448325157166)
(u'called', 0.999743640422821)
(u'available', 0.9997414946556091)
(u'able', 0.9997405409812927)
(u'activities', 0.999738335609436)
(u'2018', 0.9997329711914062)
(u'make', 0.9997280836105347)
(u'details', 0.9997265934944153)
Most similar words for apt37
(u'flaw', 0.999801754951477)
(u'2014', 0.9997944831848145)
(u'2013', 0.9997936487197876)
(u'efforts', 0.999792754650116)
(u'made', 0.9997915625572205)
(u'designed', 0.9997785091400146)
(u'list', 0.9997777938842773)
(u'media', 0.9997776746749878)
(u'make', 0.9997761845588684)
(u'attribution', 0.9997747540473938)
Most similar words for apt38
(u'command_control', 0.99981290102005)
(u'attribution', 0.9997984170913696)
(u'media', 0.9997962117195129)
(u'activities', 0.9997954368591309)
(u'2014', 0.9997861385345459)
(u'software', 0.9997845888137817)
(u'see', 0.9997791051864624)
(u'research', 0.999776303768158)
(u'designed', 0.9997758865356445)
(u'even', 0.9997751712799072)
###Markdown
Even after applying bigram phrases we still cannot see the desired results. Word2Vec model topic by topic using bigram phrases
###Code
df_doc = df[['query', 'text']]
df_doc
df_doc = df_doc.groupby(['query'],as_index=False).first()
df_doc
from nltk.corpus import stopwords
import re
stop = stopwords.words('english') + ['fireeye', 'crowdstrike', 'symantec', 'rapid7', 'securityweek', 'kaspersky']
def normalize_text(text):
norm_text = text.lower()
# Replace breaks with spaces
norm_text = norm_text.replace('<br />', ' ')
# Pad punctuation with spaces on both sides
norm_text = re.sub(r"([\.\",\(\)!\?;:])", " \\1 ", norm_text)
return norm_text
def remove_stop_words(text):
return " ".join([item.lower() for item in text.split() if item not in stop])
def remove_non_ascii(text):
return ''.join(["" if ord(i) < 32 or ord(i) > 126 else i for i in text])
df_doc['text'] = df_doc['text'].apply(remove_non_ascii)
df_doc['text'] = df_doc['text'].apply(normalize_text)
df_doc['text'] = df_doc['text'].apply(remove_stop_words)
df_doc["text"] = df_doc['text'].str.replace('[^\w\s]','')
df_doc
df_doc['tokenized_text'] = df_doc.apply(lambda row: word_tokenize(row['text']), axis=1)
df_doc
from gensim.models import Phrases
from collections import Counter
for num in ['APT1', 'APT10', 'APT12', 'APT15', 'APT16', 'APT17', 'APT18', 'APT27', 'APT28', 'APT29', 'APT3', 'APT30', 'APT32', 'APT33', 'APT34', 'APT35', 'APT37', 'APT38']:
temp = df_doc[df_doc['query'] == num]
print(temp.shape)
if temp.shape[0] == 0:
continue
bigram = Phrases()
bigram.add_vocab(temp['tokenized_text'])
bigram_model = Word2Vec(bigram[temp['tokenized_text']], size=100)
term = num.lower()
if term in bigram_model.wv.vocab:
print("Most similar words for %s"%term)
for t in bigram_model.most_similar(term, topn=20): print(t)
print('\n')
num = 38
temp = df_doc[df_doc['query'] == 'APT%s'%num]
bigram = Phrases()
bigram.add_vocab(temp['tokenized_text'])
bigram_model = Word2Vec(bigram[temp['tokenized_text']], size=100)
term = 'apt%s'%num
if term in bigram_model.wv.vocab:
print("Most similar words for %s"%term)
for t in bigram_model.most_similar(term, topn=20): print(t)
print('\n')
temp.shape
###Output
_____no_output_____ |
figure_04.ipynb | ###Markdown
Figure 4 - Effective variablesCreate the figure panels describing the dependencies of the model's effective variables on US frequency, US amplitude and sonophore radius. Imports
###Code
import os
import matplotlib.pyplot as plt
from PySONIC.plt import plotEffectiveVariables
from PySONIC.utils import logger
from PySONIC.neurons import getPointNeuron
from utils import saveFigsAsPDF
###Output
_____no_output_____
###Markdown
Plot parameters
###Code
figindex = 4
fs = 12
lw = 2
ps = 15
figs = {}
###Output
_____no_output_____
###Markdown
Simulation parameters
###Code
pneuron = getPointNeuron('RS')
a = 32e-9 # m
Fdrive = 500e3 # Hz
Adrive = 50e3 # Pa
###Output
_____no_output_____
###Markdown
Panel A: dependence on acoustic amplitude
###Code
fig = plotEffectiveVariables(pneuron, a=a, f=Fdrive, cmap='Oranges', zscale='log')
figs['a'] = fig
###Output
_____no_output_____
###Markdown
Panel B: dependence on US frequency
###Code
fig = plotEffectiveVariables(pneuron, a=a, A=Adrive, cmap='Greens', zscale='log')
figs['b'] = fig
###Output
28/04/2020 22:17:14: Rounding f value (4000000.000000001) to interval upper bound (4000000.0)
###Markdown
Panel C: dependence on sonophore radius
###Code
fig = plotEffectiveVariables(pneuron, f=Fdrive, A=Adrive, cmap='Blues', zscale='log')
figs['c'] = fig
###Output
_____no_output_____
###Markdown
Save figure panelsSave figure panels as **pdf** in the *figs* sub-folder:
###Code
saveFigsAsPDF(figs, figindex)
###Output
_____no_output_____ |
python/.ipynb_checkpoints/0107_search_sort-checkpoint.ipynb | ###Markdown
SearchLinear search using a while loop
###Code
from typing import Any,List
def linear_search_while(lst:List, value:Any) -> int:
i = 0
while i != len(lst) and lst[i] != value:
i += 1
if i == len(lst):
return -1
else:
return 1
l = [1,2,3,4,5,6,7,8,9]
linear_search_while(l,9)
def linear_search_for(lst:List, value:Any) -> int:
    for item in lst:
        if item == value:
            return 1
    return -1
l = [1,2,3,4,5,6,7,8,9]
linear_search_for(l,9)
def linear_search_sentinal(lst:List, value:Any) -> int:
lst.append(value)
i=0
while lst[i] != value:
i += 1
lst.pop()
if i == len(lst):
return -1
else:
return 1
l = [1,2,3,4,5,6,7,8,9]
linear_search_sentinal(l,9)
import time
from typing import Callable, Any
def time_it(search: Callable[[list,Any],Any],L:list,v:Any):
t1 = time.perf_counter()
search(L,v)
t2 = time.perf_counter()
return (t2-t1) *1000.0
l = [1,2,3,4,5,6,7,8,9]
time_it(linear_search_while,l,5)
###Output
_____no_output_____
###Markdown
Binary searchA method that narrows the search range by half at each step
###Code
def binary_search(lst:list, value:Any) -> int:
    i = 0
    j = len(lst)-1
    while i != j+1:
        m = (i+j)//2
        if lst[m] < value:
            i = m+1
        else:
            j = m-1
    if 0 <= i < len(lst) and lst[i] == value:
        return i
    else:
        return -1
if __name__ == '__main__':
import doctest
doctest.testmod()
###Output
_____no_output_____
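A quick usage sketch of the binary_search function above (assuming the corrected version; note that binary search requires the input list to already be sorted):

```python
sorted_l = [1, 3, 5, 7, 9, 11]
print(binary_search(sorted_l, 7))   # 3, the index of 7
print(binary_search(sorted_l, 4))   # -1, not present
```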
###Markdown
Selection sortTraverse the entire unsorted part, find the smallest value, and place it at the right end of the sorted part. Repeat this until every value is sorted. Because we scan a linear structure of length n up to n times, the complexity is O(n^2).
###Code
def selection_sort(l:list):
for i in range(len(l)):
idx = l.index(min(l[i:]),i)
dummy = l[i]
l[i] = l[idx]
l[idx] = dummy
return l
l = [7,16,3,25,2,6,1,7,3]
print(selection_sort(l))
###Output
[1, 2, 3, 3, 6, 7, 7, 16, 25]
###Markdown
Insertion sortTraverse the list and insert the current value into its correct position within the already-sorted part.
###Code
# 기 정렬된 영역에 L[:b+1] 내 올바른 위치에 L[b]를 삽입
def insert(L: list, b: int) -> None:
i = b
while i != 0 and L[i - 1] >= L[b]:
i = i - 1
value = L[b]
del L[b]
L.insert(i, value)
def insertion_sort(L: list) -> None:
i = 0
while i != len(L):
insert(L, i)
i = i + 1
L = [ 3, 4, 6, -1, 2, 5 ]
print(L)
insertion_sort(L)
print(L)
###Output
[3, 4, 6, -1, 2, 5]
[-1, 2, 3, 4, 5, 6]
###Markdown
Merge sort
###Code
# Return the two lists merged into a single sorted list
def merge(L1: list, L2: list) -> list:
newL = []
i1 = 0
i2 = 0
# [ 1, 1, 2, 3, 4, 5, 6, 7 ]
# [ 1, 3, 4, 6 ] [ 1, 2, 5, 7 ]
# i1
# i2
while i1 != len(L1) and i2 != len(L2):
if L1[i1] <= L2[i2]:
newL.append(L1[i1])
i1 += 1
else:
newL.append(L2[i2])
i2 += 1
newL.extend(L1[i1:])
newL.extend(L2[i2:])
return newL
def merge_sort(L: list) -> None: # [ 1, 3, 4, 6, 1, 2, 5, 7 ]
workspace = []
for i in range(len(L)):
workspace.append([L[i]]) # [ [1], [3], [4], [6], [1], [2], [5], [7] ]
i = 0
while i < len(workspace) - 1:
L1 = workspace[i] # [ [1], [3], [4], [6], [1], [2], [5], [7], [1,3],[4,6],[1,2],[5,7], [1,3,4,6],[1,2,5,7],[1,1,2,3,4,5,6,7] ]
L2 = workspace[i + 1]
newL = merge(L1, L2)
workspace.append(newL)
i += 2
if len(workspace) != 0:
L[:] = workspace[-1][:]
import time, random
def built_in(L: list) -> None:
L.sort()
def print_times(L: list) -> None:
print(len(L), end='\t')
for func in (selection_sort, insertion_sort, merge_sort, built_in):
if func in (selection_sort, insertion_sort, merge_sort) and len(L) > 10000:
continue
L_copy = L[:]
t1 = time.perf_counter()
func(L_copy)
t2 = time.perf_counter()
print("{0:7.1f}".format((t2 - t1) * 1000.0), end="\t")
print()
for list_size in [ 10, 1000, 2000, 3000, 4000, 5000, 10000 ]:
L = list(range(list_size))
random.shuffle(L)
print_times(L)
###Output
10 0.0 0.0 0.0 0.0
1000 16.5 37.0 4.1 0.1
2000 54.9 141.1 12.2 0.2
3000 130.2 321.6 15.0 0.4
4000 217.9 592.7 20.7 0.5
5000 357.6 871.0 26.0 0.7
10000 1450.2 3544.8 55.7 1.5
###Markdown
Object-oriented programming```isinstance(object,class)``` returns whether or not the given object is an instance of the class.
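A short, generic illustration of isinstance (throwaway values, unrelated to the Book class below):

```python
print(isinstance(3, int))                  # True
print(isinstance(3, float))                # False
print(isinstance('abc', str))              # True
print(isinstance([1, 2], (list, tuple)))   # True: a tuple of classes is also accepted
```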
###Code
from typing import List,Any
class Book:
def num_authors(self) -> int:
return len(self.authors)
    def __init__(self,title:str,authors:List[str],publisher:str,isbn:str,price:float) : # constructor
        self.title = title
        self.authors = authors[:] # copy with [:]: passing the list itself would store a reference, so changes made outside the class would also change this attribute
self.publisher = publisher
self.isbn = isbn
self.price = price
def print_authors(self) -> None:
for authors in self.authors:
print(authors)
def __str__(self) -> str:
return 'Title : {}\nAuthors : {}'.format(self.title,self.authors)
def __eq__(self,other:Any) -> bool:
if isinstance(other,Book):
return True if self.isbn == other.isbn else False
return False
book = Book('My book',['aaa','bbb','ccc'],'한빛출판사','123-456-789','300000.0')
book.print_authors()
print(book.num_authors())
print(book)
newBook = Book('My book',['aaa','bbb','ccc'],'한빛출판사','123-456-789','300000.0')
print(book==newBook)
###Output
aaa
bbb
ccc
3
Title : My book
Authors : ['aaa', 'bbb', 'ccc']
True
###Markdown
When passing a reference type, take a copy of the value rather than a reference to it. Encapsulation: putting data and the code that uses that data in one place, and hiding the details of exactly how it works. Polymorphism: having more than one form; an expression containing a variable does different things depending on the type of the object the variable refers to. Inheritance: a new class inherits attributes from a parent class (the object class or a user-defined class).
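A minimal sketch of polymorphism (the two classes here are made up for illustration and are separate from the Member/Faculty example below): the same print(shape) call behaves differently because each class supplies its own __str__.

```python
class Circle:
    def __init__(self, r): self.r = r
    def __str__(self): return f'Circle(r={self.r})'

class Square:
    def __init__(self, s): self.s = s
    def __str__(self): return f'Square(s={self.s})'

for shape in [Circle(1.0), Square(2.0)]:
    print(shape)  # dispatches to each class's own __str__
```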
###Code
class Member:
def __init__(self,name:str,address:str,email:str):
self.name = name
self.address = address
self.email = email
class Faculty(Member):
def __init__(self,name:str,address:str,email:str,faculty_num:str):
super().__init__(name,address,email)
self.faculty_number = faculty_num
self.courses_teaching = []
class Atom:
    '''An atom with a number, a symbol, and (X, Y, Z) coordinates'''
def __init__(self, num: int, sym: str, x: float, y: float, z: float) -> None:
self.num = num
self.sym = sym
self.center = (x, y, z)
def __str__(self) -> str:
        '''Return a string of the form (SYMBOL, X, Y, Z)'''
        return '({}, {}, {}, {})'.format(self.sym, self.center[0], self.center[1], self.center[2])
def translate(self, x: float, y: float, z: float) -> None:
self.center = (self.center[0] + x, self.center[1] + y, self.center[2] + z)
class Molecule:
    ''' A molecule with a name and a list of atoms '''
def __init__(self, name: str) -> None:
self.name = name
self.atoms = []
def add(self, a: Atom) -> None:
self.atoms.append(a)
def __str__(self) -> str:
        '''Return a string of the form (NAME, (ATOM1, ATOM2, ...))'''
atom_list = ''
for a in self.atoms:
atom_list = atom_list + str(a) + ', '
        atom_list = atom_list[:-2] # remove the trailing ', ' that was appended last
return '({}, ({}))'.format(self.name, atom_list)
def translate(self, x: float, y: float, z: float) -> None:
for a in self.atoms:
a.translate(x, y, z)
ammonia = Molecule("AMMONIA")
ammonia.add(Atom(1, "N", 0.257, -0.363, 0.0))
ammonia.add(Atom(2, "H", 0.257, 0.727, 0.0))
ammonia.add(Atom(3, "H", 0.771, -0.727, 0.890))
ammonia.add(Atom(4, "H", 0.771, -0.727, -0.890))
ammonia.translate(0, 0, 0.2)
#assert ammonia.atoms[0].center[0] == 0.257
#assert ammonia.atoms[0].center[1] == -0.363
assert ammonia.atoms[0].center[2] == 0.2
print(ammonia)
###Output
(AMMONIA, ((N, 0.257, -0.363, 0.2, (H, 0.257, 0.727, 0.2, (H, 0.771, -0.727, 1.09, (H, 0.771, -0.727, -0.69))
|
lectures/Week 05 - Data Processing and Visualization Part 2/02 - NumPy Data analysis.ipynb | ###Markdown
NumPy Tutorial: Data analysis with Python[Source](https://www.dataquest.io/blog/numpy-tutorial-python/)NumPy is a commonly used Python data analysis package. By using NumPy, you can speed up your workflow, and interface with other packages in the Python ecosystem, like scikit-learn, that use NumPy under the hood. NumPy was originally developed in the mid 2000s, and arose from an even older package called Numeric. This longevity means that almost every data analysis or machine learning package for Python leverages NumPy in some way.In this tutorial, we'll walk through using NumPy to analyze data on wine quality. The data contains information on various attributes of wines, such as pH and fixed acidity, along with a quality score between 0 and 10 for each wine. The quality score is the average of at least 3 human taste testers. As we learn how to work with NumPy, we'll try to figure out more about the perceived quality of wine.The wines we'll be analyzing are from the Minho region of Portugal. The data was downloaded from the UCI Machine Learning Repository, and is available [here](https://archive.ics.uci.edu/ml/datasets/Wine+Quality). Here are the first few rows of the winequality-red.csv file, which we'll be using throughout this tutorial:``` text"fixed acidity";"volatile acidity";"citric acid";"residual sugar";"chlorides";"free sulfur dioxide";"total sulfur dioxide";"density";"pH";"sulphates";"alcohol";"quality"7.4;0.7;0;1.9;0.076;11;34;0.9978;3.51;0.56;9.4;57.8;0.88;0;2.6;0.098;25;67;0.9968;3.2;0.68;9.8;5```The data is in what I'm going to call ssv (semicolon separated values) format -- each record is separated by a semicolon (;), and rows are separated by a new line. There are 1600 rows in the file, including a header row, and 12 columns.Before we get started, a quick version note -- we'll be using Python 3.5. Our code examples will be done using Jupyter notebook.If you want to jump right into a specific area, here are the topics:* Creating an Array* Reading Text Files* Array Indexing* N-Dimensional Arrays* Data Types* Array Math* Array Methods* Array Comparison and Filtering* Reshaping and Combining Arrays Lists Of Lists for CSV DataBefore using NumPy, we'll first try to work with the data using Python and the csv package. We can read in the file using the csv.reader object, which will allow us to read in and split up all the content from the ssv file.In the below code, we:* Import the csv library.* Open the winequality-red.csv file. * With the file open, create a new csv.reader object. * Pass in the keyword argument delimiter=";" to make sure that the records are split up on the semicolon character instead of the default comma character. * Call the list type to get all the rows from the file. * Assign the result to wines.
###Code
import csv
with open("winequality-red.csv", 'r') as f:
wines = list(csv.reader(f, delimiter=";"))
# print(wines[:3])
headers = wines[0]
wines_only = wines[1:]
# print the headers
print(headers)
# print the 1st row of data
print(wines_only[0])
# print the 1st three rows of data
print(wines_only[:3])
###Output
[['7.4', '0.7', '0', '1.9', '0.076', '11', '34', '0.9978', '3.51', '0.56', '9.4', '5'], ['7.8', '0.88', '0', '2.6', '0.098', '25', '67', '0.9968', '3.2', '0.68', '9.8', '5'], ['7.8', '0.76', '0.04', '2.3', '0.092', '15', '54', '0.997', '3.26', '0.65', '9.8', '5']]
###Markdown
The data has been read into a list of lists. Each inner list is a row from the ssv file. As you may have noticed, each item in the entire list of lists is represented as a string, which will make it harder to do computations.As you can see from the table above, we've read in three rows, the first of which contains column headers. Each row after the header row represents a wine. The first element of each row is the fixed acidity, the second is the volatile acidity, and so on. Calculate Average Wine QualityWe can find the average quality of the wines. The below code will:* Extract the last element from each row after the header row.* Convert each extracted element to a float.* Assign all the extracted elements to the list qualities.* Divide the sum of all the elements in qualities by the total number of elements in qualities to the get the mean.
###Code
# calculate average wine quality with a loop
qualities = []
for row in wines[1:]:
qualities.append(float(row[-1]))
sum(qualities) / len(wines[1:])
# calculate average wine quality with a list comprehension
qualities = [float(row[-1]) for row in wines[1:]]
sum(qualities) / len(wines[1:])
###Output
_____no_output_____
###Markdown
Although we were able to do the calculation we wanted, the code is fairly complex, and it won't be fun to have to do something similar every time we want to compute a quantity. Luckily, we can use NumPy to make it easier to work with our data. Numpy 2-Dimensional ArraysWith NumPy, we work with multidimensional arrays. We'll dive into all of the possible types of multidimensional arrays later on, but for now, we'll focus on 2-dimensional arrays. A 2-dimensional array is also known as a matrix, and is something you should be familiar with. In fact, it's just a different way of thinking about a list of lists. A matrix has rows and columns. By specifying a row number and a column number, we're able to extract an element from a matrix.If we picked the element at the first row and the second column, we'd get volatile acidity. If we picked the element in the third row and the second column, we'd get 0.88.In a NumPy array, the number of dimensions is called the **rank**, and each dimension is called an **axis**. So * the rows are the first axis* the columns are the second axisNow that you understand the basics of matrices, let's see how we can get from our list of lists to a NumPy array. Creating A NumPy ArrayWe can create a NumPy array using the numpy.array function. If we pass in a list of lists, it will automatically create a NumPy array with the same number of rows and columns. Because we want all of the elements in the array to be float elements for easy computation, we'll leave off the header row, which contains strings. One of the limitations of NumPy is that all the elements in an array have to be of the same type, so if we include the header row, all the elements in the array will be read in as strings. Because we want to be able to do computations like find the average quality of the wines, we need the elements to all be floats.In the below code, we:* Import the ```numpy``` package.* Pass the ```list``` of lists wines into the array function, which converts it into a NumPy array. * Exclude the header row with list slicing. * Specify the keyword argument ```dtype``` to make sure each element is converted to a ```float```. We'll dive more into what the ```dtype``` is later on.
###Code
import numpy as np
np.set_printoptions(precision=2) # set the output print precision for readability
# create the numpy array skipping the headers
wines = np.array(wines[1:], dtype=float)
# If we display wines, we'll now get a NumPy array:
print(type(wines), wines)
# We can check the number of rows and columns in our data using the shape property of NumPy arrays:
wines.shape
###Output
_____no_output_____
###Markdown
Alternative NumPy Array Creation MethodsThere are a variety of methods that you can use to create NumPy arrays. It's useful to create an array with all zero elements in cases when you need an array of fixed size, but don't have any values for it yet. To start with, you can create an array where every element is zero. The below code will create an array with 3 rows and 4 columns, where every element is 0, using ```numpy.zeros```:
###Code
empty_array = np.zeros((3, 4))
empty_array
###Output
_____no_output_____
###Markdown
Creating arrays full of random numbers can be useful when you want to quickly test your code with sample arrays. You can also create an array where each element is a random number using ```numpy.random.rand```.
###Code
np.random.rand(2, 3)
###Output
_____no_output_____
###Markdown
Using NumPy To Read In FilesIt's possible to use NumPy to directly read ```csv``` or other files into arrays. We can do this using the ```numpy.genfromtxt``` function. We can use it to read in our initial data on red wines.In the below code, we:* Use the ``` genfromtxt ``` function to read in the ``` winequality-red.csv ``` file.* Specify the keyword argument ``` delimiter=";" ``` so that the fields are parsed properly.* Specify the keyword argument ``` skip_header=1 ``` so that the header row is skipped.
###Code
wines = np.genfromtxt("winequality-red.csv", delimiter=";", skip_header=1)
wines
###Output
_____no_output_____
###Markdown
Wines will end up looking the same as if we read it into a list then converted it to an array of ```floats```. NumPy will automatically pick a data type for the elements in an array based on their format. Indexing NumPy ArraysWe now know how to create arrays, but unless we can retrieve results from them, there isn't a lot we can do with NumPy. We can use array indexing to select individual elements, groups of elements, or entire rows and columns. One important thing to keep in mind is that just like Python lists, NumPy is **zero-indexed**, meaning that:* The index of the first row is 0* The index of the first column is 0 * If we want to work with the fourth row, we'd use index 3* If we want to work with the second row, we'd use index 1, and so on. We'll again work with the wines array:||||||||||||||-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:||7.4 |0.70 |0.00 |1.9 |0.076 |11 |34 |0.9978 |3.51 |0.56 |9.4 |5||7.8 |0.88 |0.00 |2.6 |0.098 |25 |67 |0.9968 |3.20 |0.68 |9.8 |5||7.8 |0.76 |0.04 |2.3 |0.092 |15 |54 |0.9970 |3.26 |0.65 |9.8 |5||11.2|0.28 |0.56 |1.9 |0.075 |17 |60 |0.9980 |3.16 |0.58 |9.8 |6||7.4 |0.70 |0.00 |1.9 |0.076 |11 |34 |0.9978 |3.51 |0.56 |9.4 |5| Let's select the element at **row 3** and **column 4**.We pass:* 2 as the row index* 3 as the column index. This retrieves the value from the **third row** and **fourth column**
###Code
wines[2, 3]
wines[2][3]
###Output
_____no_output_____
###Markdown
Since we're working with a 2-dimensional array in NumPy we specify 2 indexes to retrieve an element. * The first index is the row, or **axis 1**, index* The second index is the column, or **axis 2**, index Any element in wines can be retrieved using 2 indexes.
###Code
# rows 1, 2, 3 and column 4
wines[0:3, 3]
# all rows and column 3
wines[:, 2]
###Output
_____no_output_____
###Markdown
Just like with ```list``` slicing, it's possible to omit the 0 to just retrieve all the elements from the beginning up to element 3:
###Code
# rows 1, 2, 3 and column 4
wines[:3, 3]
###Output
_____no_output_____
###Markdown
We can select an entire column by specifying that we want all the elements, from the first to the last. We specify this by just using the colon ```:```, with no starting or ending indices. The below code will select the entire fourth column:
###Code
# all rows and column 4
wines[:, 3]
###Output
_____no_output_____
###Markdown
We selected an entire column above, but we can also extract an entire row:
###Code
# row 4 and all columns
wines[3, :]
###Output
_____no_output_____
###Markdown
If we take our indexing to the extreme, we can select the entire array using two colons to select all the rows and columns in wines. This is a great party trick, but doesn't have a lot of good applications:
###Code
wines[:, :]
###Output
_____no_output_____
###Markdown
Assigning Values To NumPy ArraysWe can also use indexing to assign values to certain elements in arrays. We can do this by assigning directly to the indexed value:
###Code
# assign the value of 10 to the 2nd row and 6th column
print('Before', wines[1, 4:7])
wines[1, 5] = 10
print('After', wines[1, 4:7])
###Output
Before [ 0.1 25. 67. ]
After [ 0.1 10. 67. ]
###Markdown
We can do the same for slices. To overwrite an entire column, we can do this:
###Code
# Overwrites all the values in the eleventh column with 50.
print('Before', wines[:, 9:12])
wines[:, 10] = 50
print('After', wines[:, 9:12])
###Output
Before [[ 0.56 9.4 5. ]
[ 0.68 9.8 5. ]
[ 0.65 9.8 5. ]
...
[ 0.75 11. 6. ]
[ 0.71 10.2 5. ]
[ 0.66 11. 6. ]]
After [[ 0.56 50. 5. ]
[ 0.68 50. 5. ]
[ 0.65 50. 5. ]
...
[ 0.75 50. 6. ]
[ 0.71 50. 5. ]
[ 0.66 50. 6. ]]
###Markdown
1-Dimensional NumPy ArraysSo far, we've worked with 2-dimensional arrays, such as wines. However, NumPy is a package for working with multidimensional arrays. One of the most common types of multidimensional arrays is the **1-dimensional array**, or **vector**. As you may have noticed above, when we sliced wines, we retrieved a 1-dimensional array. * A 1-dimensional array only needs a single index to retrieve an element. * Each row and column in a 2-dimensional array is a 1-dimensional array. Just like a list of lists is analogous to a 2-dimensional array, a single list is analogous to a 1-dimensional array. If we slice wines and only retrieve the third row, we get a 1-dimensional array:
###Code
third_wine = wines[3,:]
third_wine
###Output
_____no_output_____
###Markdown
We can retrieve individual elements from ```third_wine``` using a single index.
###Code
# display the second item in third_wine
third_wine[1]
###Output
_____no_output_____
###Markdown
Most NumPy functions that we've worked with, such as ```numpy.random.rand```, can be used with multidimensional arrays. Here's how we'd use ```numpy.random.rand``` to generate a random vector:
###Code
np.random.rand(3)
###Output
_____no_output_____
###Markdown
Previously, when we called ```np.random.rand```, we passed in a shape for a 2-dimensional array, so the result was a 2-dimensional array. This time, we passed in a shape for a single dimensional array. The shape specifies the number of dimensions, and the size of the array in each dimension. A shape of ```(10,10)``` will be a 2-dimensional array with **10 rows** and **10 columns**. A shape of ```(10,)``` will be a **1-dimensional** array with **10 elements**.Where NumPy gets more complex is when we start to deal with arrays that have more than 2 dimensions. N-Dimensional NumPy ArraysThis doesn't happen extremely often, but there are cases when you'll want to deal with arrays that have greater than 3 dimensions. One way to think of this is as a list of lists of lists. Let's say we want to store the monthly earnings of a store, but we want to be able to quickly lookup the results for a quarter, and for a year. The earnings for one year might look like this:``` python[500, 505, 490, 810, 450, 678, 234, 897, 430, 560, 1023, 640]```The store earned \$500 in January, \$505 in February, and so on. We can split up these earnings by quarter into a list of lists:
###Code
year_one = [
[500,505,490], # 1st quarter
[810,450,678], # 2nd quarter
[234,897,430], # 3rd quarter
[560,1023,640] # 4th quarter
]
###Output
_____no_output_____
###Markdown
We can retrieve the earnings from January by calling ``` year_one[0][0] ```. If we want the results for a whole quarter, we can call ``` year_one[0] ``` or ``` year_one[1] ```. We now have a 2-dimensional array, or matrix. But what if we now want to add the results from another year? We have to add a third dimension:
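As a quick check of the indexing just described, using the year_one list defined above:

```python
print(year_one[0][0])  # 500 -- January, the first month of the first quarter
print(year_one[0])     # [500, 505, 490] -- the whole first quarter
```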
###Code
earnings = [
[ # year 1
[500,505,490], # year 1, 1st quarter
[810,450,678], # year 1, 2nd quarter
[234,897,430], # year 1, 3rd quarter
[560,1023,640] # year 1, 4th quarter
],
    [ # year 2
[600,605,490], # year 2, 1st quarter
[345,900,1000],# year 2, 2nd quarter
[780,730,710], # year 2, 3rd quarter
[670,540,324] # year 2, 4th quarter
]
]
###Output
_____no_output_____
###Markdown
We can retrieve the earnings from January of the first year by calling ``` earnings[0][0][0] ```. We now need three indexes to retrieve a single element. A three-dimensional array in NumPy is much the same. In fact, we can convert earnings to an array and then get the earnings for January of the first year:
###Code
earnings = np.array(earnings)
# year 1, 1st quarter, 1st month (January)
earnings[0,0,0]
# year 2, 3rd quarter, 1st month (July)
earnings[1,2,0]
# we can also find the shape of the array
earnings.shape
###Output
_____no_output_____
###Markdown
Indexing and slicing work the exact same way with a 3-dimensional array, but now we have an extra axis to pass in. If we wanted to get the earnings for **January of all years**, we could do this:
###Code
# all years, 1st quarter, 1st month (January)
earnings[:,0,0]
###Output
_____no_output_____
###Markdown
If we wanted to get first quarter earnings from both years, we could do this:
###Code
# all years, 1st quarter, all months (January, February, March)
earnings[:,0,:]
###Output
_____no_output_____
###Markdown
Adding more dimensions can make it much easier to query your data if it's organized in a certain way. As we go from 3-dimensional arrays to 4-dimensional and larger arrays, the same properties apply, and they can be indexed and sliced in the same ways. NumPy Data TypesAs we mentioned earlier, each NumPy array can store elements of a single data type. For example, wines contains only float values. NumPy stores values using its own data types, **which are distinct from Python types** like ```float``` and ```str```. This is because the core of NumPy is written in a programming language called ```C```, **which stores data differently than the Python data types**. NumPy data types map between Python and C, allowing us to use NumPy arrays without any conversion hitches.You can find the data type of a NumPy array by accessing the dtype property:
###Code
wines.dtype
###Output
_____no_output_____
###Markdown
NumPy has several different data types, which mostly map to Python data types, like ```float```, and ```str```. You can find a full listing of NumPy data types [here](https://www.dataquest.io/blog/numpy-tutorial-python/), but here are a few important ones:* ```float``` -- numeric floating point data.* ```int``` -- integer data.* ```string``` -- character data.* ```object``` -- Python objects.Data types additionally end with a suffix that indicates how many bits of memory they take up. So ```int32``` is a **32 bit integer data type**, and ```float64``` is a **64 bit float data type**. Converting Data TypesYou can use the numpy.ndarray.astype method to convert an array to a different type. The method will actually **copy the array**, and **return a new array with the specified data type**. For instance, we can convert wines to the ```int``` data type:
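As a small illustration of the bit-size suffixes (standard NumPy behaviour, shown here with throwaway values):

```python
print(np.dtype('int32').itemsize)    # 4 bytes = 32 bits per element
print(np.dtype('float64').itemsize)  # 8 bytes = 64 bits per element

# The same three values stored at different precisions take different amounts of memory:
print(np.array([1, 2, 3], dtype=np.int8).nbytes)   # 3 bytes
print(np.array([1, 2, 3], dtype=np.int64).nbytes)  # 24 bytes
```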
###Code
# convert wines to the int data type
wines.astype(int)
###Output
_____no_output_____
###Markdown
As you can see above, all of the items in the resulting array are integers. Note that we used the Python ```int``` type instead of a NumPy data type when converting wines. This is because several Python data types, including ```float```, ```int```, and ```string```, can be used with NumPy, and are automatically converted to NumPy data types.We can check the name property of the ```dtype``` of the resulting array to see what data type NumPy mapped the resulting array to:
###Code
# convert to int
int_wines = wines.astype(int)
# check the data type
int_wines.dtype.name
###Output
_____no_output_____
###Markdown
The array has been converted to a **64-bit integer** data type. This allows for very long integer values, **but takes up more space in memory** than storing the values as 32-bit integers.If you want more control over how the array is stored in memory, you can directly create NumPy dtype objects like ```numpy.int32```
###Code
np.int32
###Output
_____no_output_____
###Markdown
You can use these directly to convert between types:
###Code
# convert to a 64-bit integer
wines.astype(np.int64)
# convert to a 32-bit integer
wines.astype(np.int32)
# convert to a 16-bit integer
wines.astype(np.int16)
# convert to a 8-bit integer
wines.astype(np.int8)
###Output
_____no_output_____
###Markdown
NumPy Array OperationsNumPy makes it simple to perform mathematical operations on arrays. This is one of the primary advantages of NumPy, and makes it quite easy to do computations. Single Array MathIf you do any of the basic mathematical operations ```/```, ```*```, ```-```, ```+```, ```**``` with an array and a value, it will apply the operation to each of the elements in the array.Let's say we want to add 10 points to each quality score because we're feeling generous. Here's how we'd do that:
###Code
# add 10 points to the quality score
wines[:,-1] + 10
###Output
_____no_output_____
###Markdown
*Note: that the above operation won't change the wines array -- it will return a new 1-dimensional array where 10 has been added to each element in the quality column of wines.*If we instead did ```+=```, we'd modify the array in place:
###Code
print('Before', wines[:,11])
# modify the data in place
wines[:,11] += 10
print('After', wines[:,11])
###Output
Before [5. 5. 5. ... 6. 5. 6.]
After [15. 15. 15. ... 16. 15. 16.]
###Markdown
All the other operations work the same way. For example, if we want to multiply each of the quality score by 2, we could do it like this:
###Code
# multiply the quality score by 2
wines[:,11] * 2
###Output
_____no_output_____
###Markdown
Multiple Array MathIt's also possible to do mathematical operations between arrays. This will apply the operation to pairs of elements. For example, if we add the quality column to itself, here's what we get:
###Code
# add the quality column to itself
wines[:,11] + wines[:,11]
###Output
_____no_output_____
###Markdown
Note that this is equivalent to ```wines[:,11] * 2``` -- this is because NumPy adds each pair of elements. The first element in the first array is added to the first element in the second array, the second to the second, and so on.
###Code
# add the quality column to itself
wines[:,11] * 2
###Output
_____no_output_____
###Markdown
We can also use this to multiply arrays. Let's say we want to pick a wine that maximizes alcohol content and quality. We'd multiply alcohol by quality, and select the wine with the highest score:
###Code
# multiply alcohol content by quality
alcohol_by_quality = wines[:,10] * wines[:,11]
print(alcohol_by_quality)
alcohol_by_quality.sort()
print(alcohol_by_quality, alcohol_by_quality[-1])
###Output
[650. 650. 650. ... 900. 900. 900.] 900.0
###Markdown
All of the common operations ```/```, ```*```, ```-```, ```+```, ```**``` will work between arrays. NumPy Array MethodsIn addition to the common mathematical operations, NumPy also has several methods that you can use for more complex calculations on arrays. An example of this is the ```numpy.ndarray.sum``` method. This finds the sum of all the elements in an array by default:
###Code
# find the sum of all rows and the quality column
total = 0
for row in wines:
total += row[11]
print(total)
# find the sum of all rows and the quality column
wines[:,11].sum(axis=0)
# find the sum of the rows 1, 2, and 3 across all columns
totals = []
for i in range(3):
total = 0
for col in wines[i,:]:
total += col
totals.append(total)
print(totals)
# find the sum of the rows 1, 2, and 3 across all columns
wines[0:3,:].sum(axis=1)
###Output
_____no_output_____
###Markdown
We can pass the ```axis``` keyword argument into the sum method to find sums over an axis. If we call sum across the wines matrix, and pass in ```axis=0```, we'll find the sums over the first axis of the array. This will give us the **sum of all the values in every column**. This may seem backwards that the sums over the first axis would give us the sum of each column, but one way to think about this is that **the specified axis is the one "going away"**. So if we specify ```axis=0```, we want the **rows to go away**, and we want to find **the sums for each of the remaining axes across each row**:
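A tiny, self-contained example (toy numbers, unrelated to the wine data) may make the axis rule clearer:

```python
toy = np.array([[1, 2, 3],
                [4, 5, 6]])
print(toy.sum(axis=0))  # [5 7 9]  -- the rows "go away": one sum per column
print(toy.sum(axis=1))  # [ 6 15]  -- the columns "go away": one sum per row
```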
###Code
# sum each column for all rows
totals = [0] * len(wines[0])
for i, total in enumerate(totals):
for row_val in wines[:,i]:
total += row_val
totals[i] = total
print(totals)
# sum each column for all rows
wines.sum(axis=0)
###Output
_____no_output_____
###Markdown
We can verify that we did the sum correctly by checking the shape. The shape should be 12, corresponding to the number of columns:
###Code
wines.sum(axis=0).shape
###Output
_____no_output_____
###Markdown
If we pass in axis=1, we'll find the sums over the second axis of the array. This will give us the sum of each row:
###Code
# sum each row for all columns
totals = [0] * len(wines)
for i, total in enumerate(totals):
for col_val in wines[i,:]:
total += col_val
totals[i] = total
print(totals[0:3], '...', totals[-3:])
# sum each row for all columns
wines.sum(axis=1)
wines.sum(axis=1).shape
###Output
_____no_output_____
###Markdown
There are several other methods that behave like the sum method, including:* ```numpy.ndarray.mean``` — finds the mean of an array.* ```numpy.ndarray.std``` — finds the standard deviation of an array.* ```numpy.ndarray.min``` — finds the minimum value in an array.* ```numpy.ndarray.max``` — finds the maximum value in an array.You can find a full list of array methods [here](http://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html). NumPy Array ComparisonsNumPy makes it possible to test to see if rows match certain values using mathematical comparison operations like ```<```, ```>```, ```>=```, ```<=```, and ```==```. For example, if we want to see which wines have a quality rating higher than 5, we can do this:
###Code
# return True for all rows in the Quality column that are greater than 5
wines[:,11] > 5
###Output
_____no_output_____
###Markdown
We get a Boolean array that tells us which of the wines have a quality rating greater than 5. We can do something similar with the other operators. For instance, we can see if any wines have a quality rating equal to 10:
###Code
# return True for all rows that have a Quality rating of 10
wines[:,11] == 10
###Output
_____no_output_____
###Markdown
SubsettingOne of the powerful things we can do with a Boolean array and a NumPy array is select only certain rows or columns in the NumPy array. For example, the below code will only select rows in wines where the quality is over 7:
###Code
# create a boolean array for wines with quality greater than 15
high_quality = wines[:,11] > 15
print(len(high_quality), high_quality)
# use boolean indexing to find high quality wines
high_quality_wines = wines[high_quality,:]
print(len(high_quality_wines), high_quality_wines)
###Output
855 [[1.12e+01 2.80e-01 5.60e-01 ... 5.80e-01 5.00e+01 1.60e+01]
[7.30e+00 6.50e-01 0.00e+00 ... 4.70e-01 5.00e+01 1.70e+01]
[7.80e+00 5.80e-01 2.00e-02 ... 5.70e-01 5.00e+01 1.70e+01]
...
[5.90e+00 5.50e-01 1.00e-01 ... 7.60e-01 5.00e+01 1.60e+01]
[6.30e+00 5.10e-01 1.30e-01 ... 7.50e-01 5.00e+01 1.60e+01]
[6.00e+00 3.10e-01 4.70e-01 ... 6.60e-01 5.00e+01 1.60e+01]]
###Markdown
We select only the rows where ```high_quality``` contains a ```True``` value, and all of the columns. This subsetting makes it simple to filter arrays for certain criteria. For example, we can look for wines with a lot of alcohol and high quality. In order to specify multiple conditions, we have to place each condition in **parentheses** ```(...)```, and separate conditions with an **ampersand** ```&```:
###Code
# create a boolean array for high alcohol content and high quality
high_alcohol_and_quality = (wines[:,11] > 7) & (wines[:,10] > 10)
print(high_alcohol_and_quality)
# use boolean indexing to select out the wines
wines[high_alcohol_and_quality,:]
###Output
[ True True True ... True True True]
###Markdown
We can combine subsetting and assignment to overwrite certain values in an array:
###Code
high_alcohol_and_quality = (wines[:,10] > 10) & (wines[:,11] > 7)
wines[high_alcohol_and_quality,10:] = 20
###Output
_____no_output_____
###Markdown
Reshaping NumPy ArraysWe can change the shape of arrays while still preserving all of their elements. This often can make it easier to access array elements. The simplest reshaping is to flip the axes, so rows become columns, and vice versa. We can accomplish this with the ```numpy.transpose``` function:
###Code
np.transpose(wines).shape
###Output
_____no_output_____
###Markdown
We can use the ```numpy.ravel``` function to turn an array into a one-dimensional representation. It will essentially flatten an array into a long sequence of values:
###Code
wines.ravel()
###Output
_____no_output_____
###Markdown
Here's an example where we can see the ordering of ```numpy.ravel```:
###Code
array_one = np.array(
[
[1, 2, 3, 4],
[5, 6, 7, 8]
]
)
array_one.ravel()
###Output
_____no_output_____
###Markdown
Finally, we can use the numpy.reshape function to reshape an array to a certain shape we specify. The below code will turn the second row of wines into a 2-dimensional array with 2 rows and 6 columns:
###Code
# print the current shape of the 2nd row and all columns
wines[1,:].shape
# reshape the 2nd row to a 2 by 6 matrix
wines[1,:].reshape((2,6))
###Output
_____no_output_____
###Markdown
Combining NumPy ArraysWith NumPy, it's very common to combine multiple arrays into a single unified array. We can use ```numpy.vstack``` to vertically stack multiple arrays. Think of it like the second array's items being added as new rows to the first array. We can read in the ```winequality-white.csv``` dataset that contains information on the quality of white wines, then combine it with our existing dataset, wines, which contains information on red wines.In the below code, we:* Read in ```winequality-white.csv```.* Display the shape of white_wines.
###Code
white_wines = np.genfromtxt("winequality-white.csv", delimiter=";", skip_header=1)
white_wines.shape
###Output
_____no_output_____
###Markdown
As you can see, we have attributes for 4898 wines. Now that we have the white wines data, we can combine all the wine data.In the below code, we:* Use the ```vstack``` function to combine wines and white_wines.* Display the shape of the result.
###Code
all_wines = np.vstack((wines, white_wines))
all_wines.shape
###Output
_____no_output_____
###Markdown
As you can see, the result has 6497 rows, which is the sum of the number of rows in wines and the number of rows in white_wines.If we want to combine arrays horizontally, where the number of rows stays constant but the columns are joined, then we can use the ```numpy.hstack``` function. The arrays we combine need to have the same number of rows for this to work.Finally, we can use ```numpy.concatenate``` as a general purpose version of ```hstack``` and ```vstack```. If we want to concatenate two arrays, we pass them into concatenate, then specify the axis keyword argument that we want to concatenate along. * Concatenating along the first axis is similar to ```vstack```* Concatenating along the second axis is similar to ```hstack```:
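For instance, a sketch of horizontal combination (here we append a hypothetical column of ones to ```wines```; both calls below give the same result):

```python
ones_column = np.ones((wines.shape[0], 1))
np.hstack((wines, ones_column)).shape               # adds a 13th column
np.concatenate((wines, ones_column), axis=1).shape  # equivalent via concatenate
```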
###Code
x = np.concatenate((wines, white_wines), axis=0)
print(x.shape, x)
###Output
(6497, 12) [[7.40e+00 7.00e-01 0.00e+00 ... 5.60e-01 5.00e+01 1.50e+01]
[7.80e+00 8.80e-01 0.00e+00 ... 6.80e-01 5.00e+01 1.50e+01]
[7.80e+00 7.60e-01 4.00e-02 ... 6.50e-01 5.00e+01 1.50e+01]
...
[6.50e+00 2.40e-01 1.90e-01 ... 4.60e-01 9.40e+00 6.00e+00]
[5.50e+00 2.90e-01 3.00e-01 ... 3.80e-01 1.28e+01 7.00e+00]
[6.00e+00 2.10e-01 3.80e-01 ... 3.20e-01 1.18e+01 6.00e+00]]
###Markdown
BroadcastingUnless the arrays that you're operating on are the exact same size, it's not possible to do elementwise operations. In cases like this, NumPy performs broadcasting to try to match up elements. Essentially, broadcasting involves a few steps:* The last dimension of each array is compared. * If the dimension lengths are equal, or one of the dimensions is of length 1, then we keep going. * If the dimension lengths aren't equal, and none of the dimensions have length 1, then there's an error.* Continue checking dimensions until the shortest array is out of dimensions.For example, the following two shapes are compatible:``` pythonA: (50,3)B (3,)```This is because the length of the trailing dimension of array A is 3, and the length of the trailing dimension of array B is 3. They're equal, so that dimension is okay. Array B is then out of elements, so we're okay, and the arrays are compatible for mathematical operations.The following two shapes are also compatible:``` pythonA: (1,2)B (50,2)```The last dimension matches, and A is of length 1 in the first dimension.These two arrays don't match:``` pythonA: (50,50)B: (49,49)```The lengths of the dimensions aren't equal, and neither array has either dimension length equal to 1.There's a detailed explanation of broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html), but we'll go through a few examples to illustrate the principle:
###Code
wines * np.array([1,2])
###Output
_____no_output_____
###Markdown
The above example didn't work because the two arrays don't have a matching trailing dimension. Here's an example where the last dimension does match:
###Code
array_one = np.array(
[
[1,2],
[3,4]
]
)
array_two = np.array([4,5])
array_one + array_two
###Output
_____no_output_____
###Markdown
As you can see, array_two has been broadcasted across each row of array_one. Here's an example with our wines data:
###Code
rand_array = np.random.rand(12)
wines + rand_array
###Output
_____no_output_____ |
experiments/tl_1v2/oracle.run1.framed-oracle.run2.framed/trials/27/trial.ipynb | ###Markdown
Transfer Learning Template
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from torch.utils.data import DataLoader
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform
###Output
_____no_output_____
###Markdown
Allowed ParametersThese are allowed parameters, not defaults. Each of these values needs to be present in the injected parameters (the notebook will raise an exception if they are not present). Papermill uses the cell tag "parameters" to inject the real parameters below this cell. Enable tags to see what I mean
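For reference, a minimal sketch of how papermill might inject these values from outside (the notebook filenames are hypothetical, and only a couple of the required parameters are shown):

```python
import papermill as pm

pm.execute_notebook(
    "trial.ipynb",       # hypothetical input notebook containing the tagged "parameters" cell
    "trial_out.ipynb",   # hypothetical executed copy written by papermill
    parameters={"lr": 0.0001, "seed": 500},  # merged into the injected parameters cell
)
```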
###Code
required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"n_shot",
"n_query",
"n_way",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_net",
"datasets",
"torch_default_dtype",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"x_shape",
}
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
from steves_utils.ORACLE.utils_v2 import (
ALL_DISTANCES_FEET_NARROWED,
ALL_RUNS,
ALL_SERIAL_NUMBERS,
)
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["n_way"] = 8
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 50
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "source_loss"
standalone_parameters["datasets"] = [
{
"labels": ALL_SERIAL_NUMBERS,
"domains": ALL_DISTANCES_FEET_NARROWED,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"),
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "minus_two"],
"episode_transforms": [],
"domain_prefix": "ORACLE_"
},
{
"labels": ALL_NODES,
"domains": ALL_DAYS,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
"source_or_target_dataset": "target",
"x_transforms": ["unit_power", "times_zero"],
"episode_transforms": [],
"domain_prefix": "CORES_"
}
]
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# Parameters
parameters = {
"experiment_name": "tl_1v2:oracle.run1.framed-oracle.run2.framed",
"device": "cuda",
"lr": 0.0001,
"n_shot": 3,
"n_query": 2,
"train_k_factor": 3,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_accuracy",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"n_way": 16,
"datasets": [
{
"labels": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"domains": [32, 38, 8, 44, 14, 50, 20, 26],
"num_examples_per_domain_per_label": 2000,
"pickle_path": "/root/csc500-main/datasets/oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl",
"source_or_target_dataset": "target",
"x_transforms": ["unit_power"],
"episode_transforms": [],
"domain_prefix": "ORACLE.run1_",
},
{
"labels": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"domains": [32, 38, 8, 44, 14, 50, 20, 26],
"num_examples_per_domain_per_label": 2000,
"pickle_path": "/root/csc500-main/datasets/oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl",
"source_or_target_dataset": "source",
"x_transforms": ["unit_power"],
"episode_transforms": [],
"domain_prefix": "ORACLE.run2_",
},
],
"dataset_seed": 500,
"seed": 500,
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if not 'parameters' in locals() and not 'parameters' in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
if "x_shape" not in p:
p.x_shape = [2,256] # Default to this if we dont supply x_shape
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
p.domains_source = []
p.domains_target = []
train_original_source = []
val_original_source = []
test_original_source = []
train_original_target = []
val_original_target = []
test_original_target = []
# global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), "unit_power") # unit_power, unit_mag
# global_x_transform_func = lambda x: normalize(x, "unit_power") # unit_power, unit_mag
def add_dataset(
labels,
domains,
pickle_path,
x_transforms,
episode_transforms,
domain_prefix,
num_examples_per_domain_per_label,
source_or_target_dataset:str,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
):
if x_transforms == []: x_transform = None
else: x_transform = get_chained_transform(x_transforms)
if episode_transforms == []: episode_transform = None
else: raise Exception("episode_transforms not implemented")
episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1])
eaf = Episodic_Accessor_Factory(
labels=labels,
domains=domains,
num_examples_per_domain_per_label=num_examples_per_domain_per_label,
iterator_seed=iterator_seed,
dataset_seed=dataset_seed,
n_shot=n_shot,
n_way=n_way,
n_query=n_query,
train_val_test_k_factors=train_val_test_k_factors,
pickle_path=pickle_path,
x_transform_func=x_transform,
)
train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test()
train = Lazy_Iterable_Wrapper(train, episode_transform)
val = Lazy_Iterable_Wrapper(val, episode_transform)
test = Lazy_Iterable_Wrapper(test, episode_transform)
if source_or_target_dataset=="source":
train_original_source.append(train)
val_original_source.append(val)
test_original_source.append(test)
p.domains_source.extend(
[domain_prefix + str(u) for u in domains]
)
elif source_or_target_dataset=="target":
train_original_target.append(train)
val_original_target.append(val)
test_original_target.append(test)
p.domains_target.extend(
[domain_prefix + str(u) for u in domains]
)
else:
raise Exception(f"invalid source_or_target_dataset: {source_or_target_dataset}")
for ds in p.datasets:
add_dataset(**ds)
# from steves_utils.CORES.utils import (
# ALL_NODES,
# ALL_NODES_MINIMUM_1000_EXAMPLES,
# ALL_DAYS
# )
# add_dataset(
# labels=ALL_NODES,
# domains = ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"cores_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle1_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle2_{u}"
# )
# add_dataset(
# labels=list(range(19)),
# domains = [0,1,2],
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "metehan.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"met_{u}"
# )
# # from steves_utils.wisig.utils import (
# # ALL_NODES_MINIMUM_100_EXAMPLES,
# # ALL_NODES_MINIMUM_500_EXAMPLES,
# # ALL_NODES_MINIMUM_1000_EXAMPLES,
# # ALL_DAYS
# # )
# import steves_utils.wisig.utils as wisig
# add_dataset(
# labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES,
# domains = wisig.ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "wisig.node3-19.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"wisig_{u}"
# )
###################################
# Build the dataset
###################################
train_original_source = Iterable_Aggregator(train_original_source, p.seed)
val_original_source = Iterable_Aggregator(val_original_source, p.seed)
test_original_source = Iterable_Aggregator(test_original_source, p.seed)
train_original_target = Iterable_Aggregator(train_original_target, p.seed)
val_original_target = Iterable_Aggregator(val_original_target, p.seed)
test_original_target = Iterable_Aggregator(test_original_target, p.seed)
# For CNN We only use X and Y. And we only train on the source.
# Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. Finally wrap them in a dataloader
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
from steves_utils.transforms import get_average_magnitude, get_average_power
print(set([u for u,_ in val_original_source]))
print(set([u for u,_ in val_original_target]))
s_x, s_y, q_x, q_y, _ = next(iter(train_processed_source))
print(s_x)
# for ds in [
# train_processed_source,
# val_processed_source,
# test_processed_source,
# train_processed_target,
# val_processed_target,
# test_processed_target
# ]:
# for s_x, s_y, q_x, q_y, _ in ds:
# for X in (s_x, q_x):
# for x in X:
# assert np.isclose(get_average_magnitude(x.numpy()), 1.0)
# assert np.isclose(get_average_power(x.numpy()), 1.0)
###################################
# Build the model
###################################
# easfsl only wants a tuple for the shape
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=tuple(p.x_shape))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy for if it was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assesment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
###Output
_____no_output_____ |
tutorials/detect.ipynb | ###Markdown
Source detection with Gammapy ContextThe first task in a source catalogue production is to identify significant excesses in the data that can be associated with unknown sources and provide a preliminary parametrization in terms of position, extent, and flux. In this notebook we will use Fermi-LAT data to illustrate how to detect candidate sources in counts images with known background.**Objective: build a list of significant excesses in a Fermi-LAT map** Proposed approach This notebook shows how to do source detection with Gammapy using the methods available in `~gammapy.detect`.We will use images from a Fermi-LAT 3FHL high-energy Galactic center dataset to do this:* perform adaptive smoothing on counts image* produce 2-dimensional test-statistics (TS)* run a peak finder to detect point-source candidates* compute Li & Ma significance images* estimate source candidates' radii and excess countsNote that what we do here is a quick-look analysis; the production of real source catalogs uses more elaborate procedures.We will work with the following functions and classes:* `~gammapy.maps.WcsNDMap`* `~gammapy.detect.ASmooth`* `~gammapy.detect.TSMapEstimator`* `~gammapy.detect.find_peaks`* `~gammapy.detect.compute_lima_image` SetupAs always, let's get started with some setup ...
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from gammapy.maps import Map
from gammapy.detect import (
ASmooth,
TSMapEstimator,
find_peaks,
compute_lima_image,
)
from gammapy.catalog import SOURCE_CATALOGS
from gammapy.cube import PSFKernel
from gammapy.stats import significance
from astropy.coordinates import SkyCoord
from astropy.convolution import Tophat2DKernel
import astropy.units as u
import numpy as np
# default matplotlib colors without grey
colors = [
u"#1f77b4",
u"#ff7f0e",
u"#2ca02c",
u"#d62728",
u"#9467bd",
u"#8c564b",
u"#e377c2",
u"#bcbd22",
u"#17becf",
]
###Output
_____no_output_____
###Markdown
Read in input imagesWe first read in the counts, background, and exposure maps, together with the PSF kernel:
###Code
counts = Map.read("$GAMMAPY_DATA/fermi-3fhl-gc/fermi-3fhl-gc-counts.fits.gz")
background = Map.read(
"$GAMMAPY_DATA/fermi-3fhl-gc/fermi-3fhl-gc-background.fits.gz"
)
exposure = Map.read(
"$GAMMAPY_DATA/fermi-3fhl-gc/fermi-3fhl-gc-exposure.fits.gz"
)
maps = {"counts": counts, "background": background, "exposure": exposure}
kernel = PSFKernel.read(
"$GAMMAPY_DATA/fermi-3fhl-gc/fermi-3fhl-gc-psf.fits.gz"
)
###Output
_____no_output_____
###Markdown
Adaptive smoothingFor visualisation purpose it can be nice to look at a smoothed counts image. This can be performed using the adaptive smoothing algorithm from [Ebeling et al. (2006)](https://ui.adsabs.harvard.edu/abs/2006MNRAS.368...65E/abstract).In the following example the `threshold` argument gives the minimum significance expected, values below are clipped.
###Code
%%time
scales = u.Quantity(np.arange(0.05, 1, 0.05), unit="deg")
smooth = ASmooth(threshold=3, scales=scales)
images = smooth.run(**maps)
plt.figure(figsize=(15, 5))
images["counts"].plot(add_cbar=True, vmax=10);
###Output
_____no_output_____
###Markdown
TS map estimationThe Test Statistic, TS = 2 ∆ log L ([Mattox et al. 1996](https://ui.adsabs.harvard.edu/abs/1996ApJ...461..396M/abstract)), compares the likelihood function L optimized with and without a given source.The TS map is computed by fitting a single amplitude parameter on each pixel as described in Appendix A of [Stewart (2009)](https://ui.adsabs.harvard.edu/abs/2009A%26A...495..989S/abstract). The fit is simplified by finding roots of the derivative of the fit statistics (default settings use [Brent's method](https://en.wikipedia.org/wiki/Brent%27s_method)).
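Written out with an explicit amplitude parameter $\mu$ (our notation; this just restates the definition above):

$$\mathrm{TS} = 2\,\big[\log \mathcal{L}(\hat{\mu}) - \log \mathcal{L}(\mu = 0)\big]$$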
###Code
%%time
estimator = TSMapEstimator()
images = estimator.run(maps, kernel.data)
###Output
_____no_output_____
###Markdown
Plot resulting images
###Code
plt.figure(figsize=(15, 5))
images["sqrt_ts"].plot(add_cbar=True);
plt.figure(figsize=(15, 5))
images["flux"].plot(add_cbar=True, stretch="sqrt", vmin=0);
plt.figure(figsize=(15, 5))
images["niter"].plot(add_cbar=True);
###Output
_____no_output_____
###Markdown
Source candidatesLet's run a peak finder on the `sqrt_ts` image to get a list of point-source candidates (positions and peak `sqrt_ts` values).The `find_peaks` function performs a local maximum search in a sliding window, the argument `min_distance` is the minimum pixel distance between peaks (smallest possible value and default is 1 pixel).
###Code
sources = find_peaks(images["sqrt_ts"], threshold=8, min_distance=1)
nsou = len(sources)
sources
# Plot sources on top of significance sky image
plt.figure(figsize=(15, 5))
_, ax, _ = images["sqrt_ts"].plot(add_cbar=True)
ax.scatter(
sources["ra"],
sources["dec"],
transform=plt.gca().get_transform("icrs"),
color="none",
edgecolor="w",
marker="o",
s=600,
lw=1.5,
);
###Output
_____no_output_____
###Markdown
Note that we used the instrument point-spread-function (PSF) as kernel, so the hypothesis we test is the presence of a point source. In order to test for extended sources we would have to use as kernel an extended template convolved by the PSF. Alternatively, we can compute the significance of an extended excess using the Li & Ma formalism, which is faster as no fitting is involved. Li & Ma significance mapsWe can compute significance for an observed number of counts and known background using an extension of equation (17) from the [Li & Ma (1983)](https://ui.adsabs.harvard.edu/abs/1983ApJ...272..317L/abstract) (see `gammapy.stats.significance` for details). We can perform this calculation integrating the counts within different radii. To do so we use an astropy Tophat kernel with the `compute_lima_image` function.
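For reference, equation (17) of Li & Ma (1983) is commonly quoted in the form below, with $N_\mathrm{on}$ and $N_\mathrm{off}$ the on/off counts and $\alpha$ the on/off exposure ratio (please check the paper itself before relying on this transcription):

$$S = \sqrt{2}\left\{ N_\mathrm{on}\,\ln\!\left[\frac{1+\alpha}{\alpha}\,\frac{N_\mathrm{on}}{N_\mathrm{on}+N_\mathrm{off}}\right] + N_\mathrm{off}\,\ln\!\left[(1+\alpha)\,\frac{N_\mathrm{off}}{N_\mathrm{on}+N_\mathrm{off}}\right]\right\}^{1/2}$$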
###Code
%%time
radius = np.array([0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.4, 0.5])
pixsize = counts.geom.pixel_scales[0].value
nr = len(radius)
signi = np.zeros((nsou, nr))
excess = np.zeros((nsou, nr))
for kr in range(nr):
npixel = radius[kr] / pixsize
kernel = Tophat2DKernel(npixel)
result = compute_lima_image(counts, background, kernel)
signi[:, kr] = result["significance"].data[sources["y"], sources["x"]]
excess[:, kr] = result["excess"].data[sources["y"], sources["x"]]
###Output
_____no_output_____
###Markdown
For simplicity we saved the significance and excess at the position of the candidates found previously on the TS map, but we could also have applied the peak finder on these significance maps for each scale, or alternatively implemented a 3D peak detection (in longitude, latitude, radius). Now let's look at the significance versus integration radius:
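A sketch of the per-scale alternative mentioned above, reusing `find_peaks` and the same Tophat loop (the threshold value is arbitrary here):

```python
peaks_per_radius = {}
for kr in range(nr):
    kernel = Tophat2DKernel(radius[kr] / pixsize)
    result = compute_lima_image(counts, background, kernel)
    # peak positions in the Li & Ma significance image at this integration radius
    peaks_per_radius[radius[kr]] = find_peaks(result["significance"], threshold=8, min_distance=1)
```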
###Code
plt.figure()
for ks in range(nsou):
plt.plot(radius, signi[ks, :], color=colors[ks])
plt.xlabel("Radius")
plt.ylabel("Li & Ma Significance")
plt.title("Guessing optimal radius of each candidate");
###Output
_____no_output_____
###Markdown
We can add the guessed optimal radius and the corresponding excess to the source candidate properties table.
###Code
# rename the value key to sqrt(TS)_PS
sources.rename_column("value", "sqrt(TS)_PS")
index = np.argmax(signi, axis=1)
sources["significance"] = signi[range(nsou), index]
sources["radius"] = radius[index]
sources["excess"] = excess[range(nsou), index]
sources
# Plot candidates sources on top of significance sky image with radius guess
plt.figure(figsize=(15, 5))
_, ax, _ = images["sqrt_ts"].plot(add_cbar=True, cmap=cm.Greys_r)
phi = np.arange(0, 2 * np.pi, 0.01)
for ks in range(nsou):
x = sources["x"][ks] + sources["radius"][ks] / pixsize * np.cos(phi)
y = sources["y"][ks] + sources["radius"][ks] / pixsize * np.sin(phi)
ax.plot(x, y, "-", color=colors[ks], lw=1.5);
###Output
_____no_output_____ |
numpy-data-science-essential-training/Ex_Files_NumPy_Data_EssT/Exercise Files/Ch 3/03_02/Starting/.ipynb_checkpoints/Boolean Mask Arrays-checkpoint.ipynb | ###Markdown
Boolean Mask Arrays
###Code
import numpy as np
my_vector = np.array([-17, -4, 0, 2, 21, 37, 105])
my_vector
###Output
_____no_output_____ |
Models/random_forest.ipynb | ###Markdown
Machine Learning- Exoplanet Exploration Extensive Data Dictionary: https://exoplanetarchive.ipac.caltech.edu/docs/API_kepcandidate_columns.htmlHighlightable columns of note are:* kepoi_name: A KOI is a target identified by the Kepler Project that displays at least one transit-like sequence within Kepler time-series photometry that appears to be of astrophysical origin and initially consistent with a planetary transit hypothesis* kepler_name: [These names] are intended to clearly indicate a class of objects that have been confirmed or validated as planets—a step up from the planet candidate designation.* koi_disposition: The disposition in the literature towards this exoplanet candidate. One of CANDIDATE, FALSE POSITIVE, NOT DISPOSITIONED or CONFIRMED.* koi_pdisposition: The disposition Kepler data analysis has towards this exoplanet candidate. One of FALSE POSITIVE, NOT DISPOSITIONED, and CANDIDATE.* koi_score: A value between 0 and 1 that indicates the confidence in the KOI disposition. For CANDIDATEs, a higher value indicates more confidence in its disposition, while for FALSE POSITIVEs, a higher value indicates less confidence in that disposition.
###Code
# # Update sklearn to prevent version mismatches
# !pip install sklearn --upgrade
# # install joblib
# !pip install joblib
###Output
_____no_output_____
###Markdown
Import Dependencies
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# Hide warning messages in notebook
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Read the CSV and Perform Basic Data Cleaning
###Code
# Read/Load CSV file
df = pd.read_csv("exoplanet_data.csv")
# Drop the null columns where all values are null
df = df.dropna(axis='columns', how='all')
# Drop the null rows
df = df.dropna()
df.head()
###Output
_____no_output_____
###Markdown
Basic Statistic Details
###Code
df.describe()
###Output
_____no_output_____
###Markdown
Select Features (columns)* Feature Selection: Removing irrelevant features results in a better-performing model that is easier to understand and runs faster
###Code
target_names = df["koi_disposition"].unique()
#target_names
print(df["koi_disposition"].unique())
# Assign X (Independant data) and y (Dependant target)
# Set X equal to the entire data set, except for the first column
X = df.iloc[:, 1:]
# X.head()
# Set y equal to the first column
y = df.iloc[:,0].values.reshape(-1, 1)
# y.head()
from sklearn.ensemble import ExtraTreesClassifier
# Search for top 10 features according to feature importances
model = ExtraTreesClassifier()
model.fit(X,y)
model.feature_importances_
# sorted(zip(model.feature_importances_, X), reverse=True)
# Store the top (20) features as a series, using the column headers as the index
top_feat = pd.Series(model.feature_importances_, index=X.columns).nlargest(10)
top_feat
# Set features based on feature importances
X = df[top_feat.index]
# Use `koi_disposition` for the y values
y = df['koi_disposition']
# y = df['koi_disposition'].values.reshape(-1, 1)
###Output
_____no_output_____
###Markdown
Create a Train Test Split
###Code
from sklearn.model_selection import train_test_split
# Split the data into smaller buckets for training and testing
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
X_train.head()
# X and y train shapes have 5243 rows (75% of the data)
X_train.shape, y_train.shape
# X and y Test shape have 1748 rows (20% of data)
X_test.shape, y_test.shape
###Output
_____no_output_____
###Markdown
Pre-processingScale the data using the MinMaxScalerMinMaxScaler: * A way to normalize the input features/variables * Features are transformed into the range [0, 1] * Scales the range of each feature from 0 to 1
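Conceptually, the transform is equivalent to the manual computation below (a sketch; the training split defines the per-feature ranges, just as the fitted scaler does):

```python
# per-feature minimum and maximum learned from the training split
X_min = X_train.min(axis=0)
X_max = X_train.max(axis=0)
# rescale every feature to the [0, 1] range
X_train_manual = (X_train - X_min) / (X_max - X_min)
```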
###Code
from sklearn.preprocessing import MinMaxScaler
# Create a MinMaxScaler model and fit it to the training data
X_scaler = MinMaxScaler().fit(X_train)
# Transform the training and testing data using the X_scaler
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
#print(np.matrix(X_test_scaled))
###Output
_____no_output_____
###Markdown
Train the Model * Used Random Forest Model
###Code
from sklearn.ensemble import RandomForestClassifier
# Create a Random Forest model
model = RandomForestClassifier(n_estimators=200)
# Train (Fit) the model to the data
model.fit(X_train_scaled, y_train)
# Score/Validate the model using the test data
print(f"Training Data Score: {'%.3f' % model.score (X_train_scaled, y_train)}")
print(f"Testing Data Score: {'%.3f' % model.score(X_test_scaled, y_test) }")
# Print the accuracy for the train and test data; the test score is lower than the training score, which suggests we are not overfitting
###Output
Training Data Score: 1.000
Testing Data Score: 0.891
###Markdown
Model Accuracy
###Code
# Predicting the Test set results
y_predic = model.predict(X_test)
# Making the confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_predic)
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(y_test, y_predic)
print('Test Model Accuracy: %.3f' % (accuracy))
###Output
_____no_output_____
###Markdown
Prediction
###Code
predictions = model.predict(X_test_scaled)
# print(f"first 10 Predictions{predictions[:10].tolist()}")
# print(f"first 10 Actual{y_test[:10].tolist()}")
# Printing into a Dataframe (y_test can't be reshap on top)
df_pred = pd.DataFrame({"Actual":y_test, "Predicted":predictions})
df_pred.head()
###Output
_____no_output_____
###Markdown
Hyperparameter TuningUse `GridSearchCV` to tune the model's parameters
###Code
# Check Random Forest Model parameters that can be used for Tuning
model = RandomForestClassifier()
model
from sklearn.model_selection import GridSearchCV
param_grid = {'max_depth': [1, 5, 15, 25, 35],
'n_estimators': [100, 300, 500, 700, 1000]}
grid = GridSearchCV(model, param_grid, verbose=3)
# Train the model with GridSearch
grid.fit(X_train_scaled, y_train)
# List the best parameters for this dataset
print('Best Parameter: ',grid.best_params_)
# List the best score
print('Best Score: %.3f' % grid.best_score_)
# Score the model
print('Model Score: %.3f' % grid.score(X_test_scaled, y_test))
# Make predictions with the hypertuned model
predictions = grid.predict(X_test_scaled)
df_grid = pd.DataFrame({"Actual":y_test, "Predicted":predictions})
df_grid.head()
# Calculate classification report
# print(np.array(y_test))
from sklearn.metrics import classification_report
print(classification_report(y_test, predictions,
target_names=target_names))
###Output
precision recall f1-score support
CONFIRMED 0.82 0.72 0.77 404
FALSE POSITIVE 0.76 0.83 0.80 435
CANDIDATE 0.99 1.00 0.99 909
accuracy 0.89 1748
macro avg 0.86 0.85 0.85 1748
weighted avg 0.89 0.89 0.89 1748
###Markdown
Save the Model* Using joblib
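Loading the saved model back later might look like this (a sketch, assuming the dump cell below has been run and the filename is unchanged):

```python
import joblib

loaded_model = joblib.load('RandomForestClassifier.sav')
# reuse the fitted estimator, e.g. to score the held-out test split
loaded_model.score(X_test_scaled, y_test)
```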
###Code
import joblib
filename = 'RandomForestClassifier.sav'
joblib.dump(grid.best_estimator_, filename)  # persist the fitted (tuned) estimator rather than the class itself
###Output
_____no_output_____ |
CNN-Keras-Tensorflow/q1-ck2840.ipynb | ###Markdown
Importing required stuff
###Code
import time
import math
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from datetime import timedelta
import scipy.misc
import glob
import sys
%matplotlib inline
###Output
_____no_output_____
###Markdown
Helper files to load data
###Code
# Helper functions, DO NOT modify this
def get_img_array(path):
"""
Given path of image, returns it's numpy array
"""
return scipy.misc.imread(path)
def get_files(folder):
"""
Given path to folder, returns list of files in it
"""
filenames = [file for file in glob.glob(folder+'*/*')]
filenames.sort()
return filenames
def get_label(filepath, label2id):
"""
Files are assumed to be labeled as: /path/to/file/999_frog.png
Returns label for a filepath
"""
tokens = filepath.split('/')
label = tokens[-1].split('_')[1][:-4]
if label in label2id:
return label2id[label]
else:
sys.exit("Invalid label: " + label)
# Functions to load data, DO NOT change these
def get_labels(folder, label2id):
"""
Returns vector of labels extracted from filenames of all files in folder
:param folder: path to data folder
:param label2id: mapping of text labels to numeric ids. (Eg: automobile -> 0)
"""
files = get_files(folder)
y = []
for f in files:
y.append(get_label(f,label2id))
return np.array(y)
def one_hot(y, num_classes=10):
"""
Converts each label index in y to vector with one_hot encoding
"""
y_one_hot = np.zeros((num_classes, y.shape[0]))
y_one_hot[y, range(y.shape[0])] = 1
return y_one_hot
def get_label_mapping(label_file):
"""
Returns mappings of label to index and index to label
The input file has list of labels, each on a separate line.
"""
with open(label_file, 'r') as f:
id2label = f.readlines()
id2label = [l.strip() for l in id2label]
label2id = {}
count = 0
for label in id2label:
label2id[label] = count
count += 1
return id2label, label2id
def get_images(folder):
"""
returns numpy array of all samples in folder
    each column is a flattened sample scaled to the [0, 1] range
"""
files = get_files(folder)
images = []
count = 0
for f in files:
count += 1
if count % 10000 == 0:
print("Loaded {}/{}".format(count,len(files)))
img_arr = get_img_array(f)
img_arr = img_arr.flatten() / 255.0
images.append(img_arr)
X = np.column_stack(images)
return X
def get_train_data(data_root_path):
"""
Return X and y
"""
train_data_path = data_root_path + 'train'
id2label, label2id = get_label_mapping(data_root_path+'labels.txt')
print(label2id)
X = get_images(train_data_path)
y = get_labels(train_data_path, label2id)
return X, y
def save_predictions(filename, y):
"""
Dumps y into .npy file
"""
np.save(filename, y)
###Output
_____no_output_____
###Markdown
Load test data from using the helper code from HW1
###Code
# Load the data
data_root_path = 'cifar10-hw2/'
X_train, Y_train = get_train_data(data_root_path) # this may take a few minutes
X_test_format = get_images(data_root_path + 'test')
X_test_format = X_test_format.T
#print('Data loading done')
X_train = X_train.T
Y_train = Y_train.T
###Output
_____no_output_____
###Markdown
Load all the data
###Code
def unpickle(file):
import pickle
with open(file, 'rb') as fo:
data_dict = pickle.load(fo, encoding='bytes')
return data_dict
path = 'cifar-10-batches-py'
file = []
file.append('data_batch_1')
file.append('data_batch_2')
file.append('data_batch_3')
file.append('data_batch_4')
file.append('data_batch_5')
file.append('test_batch')
X_train = None
Y_train = None
X_test = None
Y_test = None
for i in range(6):
fname = path+'/'+file[i]
data_dict = unpickle(fname)
_X = np.array(data_dict[b'data'], dtype=float) / 255.0
_X = _X.reshape([-1, 3, 32, 32])
_X = _X.transpose([0, 2, 3, 1])
_X = _X.reshape(-1, 32*32*3)
_Y = data_dict[b'labels']
if X_train is None:
X_train = _X
Y_train = _Y
elif i != 5:
X_train = np.concatenate((X_train, _X), axis=0)
Y_train = np.concatenate((Y_train, _Y), axis=0)
else:
X_test = _X
Y_test = np.array(_Y)
print(data_dict[b'batch_label'])
# confirming the output
print(X_train.shape, Y_train.shape, X_test.shape, Y_test.shape)
###Output
b'training batch 1 of 5'
b'training batch 2 of 5'
b'training batch 3 of 5'
b'training batch 4 of 5'
b'training batch 5 of 5'
b'testing batch 1 of 1'
(50000, 3072) (50000,) (10000, 3072) (10000,)
###Markdown
Defining Hyperparameters
###Code
# Convolutional Layer 1.
filter_size1 = 3
num_filters1 = 64
# Convolutional Layer 2.
filter_size2 = 3
num_filters2 = 64
# Fully-connected layer.
fc_1 = 256 # Number of neurons in fully-connected layer.
fc_2 = 128 # Number of neurons in fc layer
# Number of color channels for the images: 1 channel for gray-scale.
num_channels = 3
# image dimensions (only squares for now)
img_size = 32
# Size of image when flattened to a single dimension
img_size_flat = img_size * img_size * num_channels
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# class info
classes = ['airplane','automobile','bird','cat','deer','dog','frog','horse','ship','truck']
num_classes = len(classes)
# batch size
batch_size = 64
# validation split
validation_size = .16
# learning rate
learning_rate = 0.001
# beta
beta = 0.01
# log directory
import os
log_dir = os.getcwd()
# how long to wait after validation loss stops improving before terminating training
early_stopping = None # use None if you don't want to implement early stoping
###Output
_____no_output_____
###Markdown
Helper-function for plotting imagesFunction used to plot 9 images in a 3x3 grid (or fewer, depending on how many images are passed), and writing the true and predicted classes below each image.
###Code
def plot_images(images, cls_true, cls_pred=None):
if len(images) == 0:
print("no images to show")
return
else:
random_indices = random.sample(range(len(images)), min(len(images), 9))
print(images.shape)
images, cls_true = zip(*[(images[i], cls_true[i]) for i in random_indices])
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_size, img_size, num_channels))
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(classes[cls_true[i]])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Plot a few images to see if data is correct
###Code
# Plot the images and labels using our helper-function above.
plot_images(X_train, Y_train)
###Output
(50000, 3072)
###Markdown
Normalize
###Code
mean = np.mean(X_train, axis = 0)
stdDev = np.std(X_train, axis = 0)
X_train -= mean
X_train /= stdDev
X_test -= mean
X_test /= stdDev
X_test_format -= mean
X_test_format /= stdDev
###Output
_____no_output_____
###Markdown
Tensorflow graph Regularizer
###Code
regularizer = tf.contrib.layers.l2_regularizer(scale=0.1)
###Output
_____no_output_____
###Markdown
Weights and Bias
###Code
def new_weights(shape):
return tf.get_variable(name='weights',shape=shape,regularizer=regularizer)
def new_biases(length):
return tf.Variable(tf.constant(0.05, shape=[length]))
###Output
_____no_output_____
###Markdown
Batch Norm
###Code
def batch_norm(x, n_out, phase_train):
"""
Batch normalization on convolutional maps.
Ref.: http://stackoverflow.com/questions/33949786/how-could-i-use-batch-normalization-in-tensorflow
Args:
x: Tensor, 4D BHWD input maps
n_out: integer, depth of input maps
        phase_train: boolean tf.Variable, true indicates training phase
scope: string, variable scope
Return:
normed: batch-normalized maps
"""
with tf.variable_scope('batch_norm'):
beta = tf.Variable(tf.constant(0.0, shape=[n_out]),
name='beta', trainable=True)
gamma = tf.Variable(tf.constant(1.0, shape=[n_out]),
name='gamma', trainable=True)
batch_mean, batch_var = tf.nn.moments(x, [0,1,2], name='moments')
ema = tf.train.ExponentialMovingAverage(decay=0.5)
def mean_var_with_update():
ema_apply_op = ema.apply([batch_mean, batch_var])
with tf.control_dependencies([ema_apply_op]):
return tf.identity(batch_mean), tf.identity(batch_var)
mean, var = tf.cond(tf.equal(phase_train,1),
mean_var_with_update,
lambda: (ema.average(batch_mean), ema.average(batch_var)))
normed = tf.nn.batch_normalization(x, mean, var, beta, gamma, 1e-3)
return normed
###Output
_____no_output_____
###Markdown
Helper function for summaries:
###Code
def variable_summaries(var):
"""Attach a lot of summaries to a Tensor (for TensorBoard visualization)."""
with tf.name_scope('summaries'):
mean = tf.reduce_mean(var)
tf.summary.scalar('mean', mean)
with tf.name_scope('stddev'):
stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
tf.summary.scalar('stddev', stddev)
tf.summary.scalar('max', tf.reduce_max(var))
tf.summary.scalar('min', tf.reduce_min(var))
tf.summary.histogram('histogram', var)
###Output
_____no_output_____
###Markdown
Convolutional Layer
###Code
def new_conv_layer(input, # The previous layer.
num_input_channels, # Num. channels in prev. layer.
filter_size, # Width and height of each filter.
num_filters, # Number of filters.
use_pooling=True, normalize=True, phase=1, batch_normalization =False): # Use 2x2 max-pooling.
# Shape of the filter-weights for the convolution.
# This format is determined by the TensorFlow API.
shape = [filter_size, filter_size, num_input_channels, num_filters]
# Create new weights aka. filters with the given shape.
with tf.variable_scope('weights'):
weights = new_weights(shape=shape)
#tf.add_to_collection(tf.GraphKeys.REGULARIZATION_LOSSES, weights)
variable_summaries(weights)
# Create new biases, one for each filter.
with tf.variable_scope('biases'):
biases = new_biases(length=num_filters)
variable_summaries(biases)
with tf.variable_scope('convolution_layer'):
layer = tf.nn.conv2d(input=input,
filter=weights,
strides=[1, 1, 1, 1],
padding='SAME')
# Add the biases to the results of the convolution.
# A bias-value is added to each filter-channel.
layer += biases
#layer = tf.layers.batch_normalization(layer,
# center=True, scale=True,
# training=phase)
#layer = tf.contrib.layers.batch_norm(layer,is_training=phase)
# Use pooling to down-sample the image resolution?
# Adding batch_norm
if batch_normalization == True:
layer = batch_norm(layer,num_filters, phase)
with tf.variable_scope('Max-Pooling'):
if use_pooling:
# This is 2x2 max-pooling, which means that we
# consider 2x2 windows and select the largest value
# in each window. Then we move 2 pixels to the next window.
layer = tf.nn.max_pool(value=layer,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
with tf.variable_scope('ReLU'):
# Rectified Linear Unit (ReLU).
# It calculates max(x, 0) for each input pixel x.
# This adds some non-linearity to the formula and allows us
# to learn more complicated functions.
layer = tf.nn.relu(layer)
tf.summary.histogram('activations', layer)
return layer, weights
###Output
_____no_output_____
###Markdown
Flatten Layer
###Code
def flatten_layer(layer):
# Get the shape of the input layer.
layer_shape = layer.get_shape()
# The shape of the input layer is assumed to be:
# layer_shape == [num_images, img_height, img_width, num_channels]
# The number of features is: img_height * img_width * num_channels
# We can use a function from TensorFlow to calculate this.
num_features = layer_shape[1:4].num_elements()
# Reshape the layer to [num_images, num_features].
# Note that we just set the size of the second dimension
# to num_features and the size of the first dimension to -1
# which means the size in that dimension is calculated
# so the total size of the tensor is unchanged from the reshaping.
layer_flat = tf.reshape(layer, [-1, num_features])
# The shape of the flattened layer is now:
# [num_images, img_height * img_width * num_channels]
# Return both the flattened layer and the number of features.
return layer_flat, num_features
###Output
_____no_output_____
###Markdown
FC Layer
###Code
def new_fc_layer(input, # The previous layer.
num_inputs, # Num. inputs from prev. layer.
num_outputs, # Num. outputs.
use_relu=True): # Use Rectified Linear Unit (ReLU)?
# Create new weights and biases.
with tf.variable_scope('weights'):
weights = new_weights(shape=[num_inputs, num_outputs])
with tf.variable_scope('biases'):
biases = new_biases(length=num_outputs)
# Calculate the layer as the matrix multiplication of
# the input and weights, and then add the bias-values.
with tf.variable_scope('matmul'):
layer = tf.matmul(input, weights) + biases
# Use ReLU?
if use_relu:
with tf.variable_scope('relu'):
layer = tf.nn.relu(layer)
return layer, weights
###Output
_____no_output_____
###Markdown
Placeholder variables
###Code
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
y_true_cls = tf.argmax(y_true, axis=1)
phase = tf.placeholder(tf.int32, name='phase')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
###Output
_____no_output_____
###Markdown
Convolutional Layers
###Code
with tf.variable_scope('Layer-1'):
layer_conv1, weights_conv1 = \
new_conv_layer(input=x_image,
num_input_channels=num_channels,
filter_size=filter_size1,
num_filters=num_filters1,
use_pooling=True, phase=phase, batch_normalization=True)
with tf.variable_scope('Layer-2'):
layer_conv2, weights_conv2 = \
new_conv_layer(input=layer_conv1,
num_input_channels=num_filters1,
filter_size=filter_size2,
num_filters=num_filters2,
use_pooling=True, phase=phase)
###Output
_____no_output_____
###Markdown
Flatten Layer
###Code
with tf.variable_scope('Flatten'):
layer_flat, num_features = flatten_layer(layer_conv2)
print(layer_flat,num_features)
###Output
Tensor("Flatten/Reshape:0", shape=(?, 16384), dtype=float32) 16384
###Markdown
FC Layers
###Code
with tf.variable_scope('Fully-Connected-1'):
layer_fc1, weights_fc1 = new_fc_layer(input=layer_flat,
num_inputs=num_features,
num_outputs=fc_1,
use_relu=True)
with tf.variable_scope('Fully-Connected-2'):
layer_fc2, weights_fc2 = new_fc_layer(input=layer_fc1,
num_inputs=fc_1,
num_outputs=fc_2,
use_relu=True)
with tf.variable_scope('Fully-connected-3'):
layer_fc3, weights_fc3 = new_fc_layer(input=layer_fc2,
num_inputs=fc_2,
num_outputs=num_classes,
use_relu=False)
#with tf.variable_scope('dropout'):
# layer = tf.nn.dropout(layer_fc2,keep_prob)
###Output
_____no_output_____
###Markdown
Softmax and argmax functions
###Code
with tf.variable_scope('Softmax'):
y_pred = tf.nn.softmax(layer_fc3)
y_pred_cls = tf.argmax(y_pred, axis=1)
###Output
_____no_output_____
###Markdown
Cost-Function:
###Code
with tf.variable_scope('cross_entropy_loss'):
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc3,
labels=y_true)
loss = tf.reduce_mean(cross_entropy)
tf.summary.scalar('cross_entropy', loss)
#with tf.variable_scope('Regularization'):
reg_variables = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
reg_term = tf.contrib.layers.apply_regularization(regularizer, reg_variables)
loss += reg_term
cost = loss
tf.summary.scalar('Total-Loss', cost)
###Output
_____no_output_____
###Markdown
Using a Gradient Descent Optimizer
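If Adam were preferred instead of plain gradient descent, the swap would be a one-liner with the TF1 API (a sketch, reusing the `learning_rate` defined in the hyperparameters cell):

```python
# drop-in replacement for the GradientDescentOptimizer line below
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
```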
###Code
#with tf.variable_scope('Optimize'):
optimizer = tf.train.GradientDescentOptimizer(learning_rate=1e-2).minimize(cost)
###Output
_____no_output_____
###Markdown
Metrics
###Code
with tf.variable_scope('Metrics'):
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
tf.summary.scalar('accuracy', accuracy)
###Output
_____no_output_____
###Markdown
Tensorflow Session
###Code
session = tf.Session()
session.run(tf.global_variables_initializer())
###Output
_____no_output_____
###Markdown
Summaries
###Code
merged = tf.summary.merge_all()
train_writer = tf.summary.FileWriter(log_dir + '/train', session.graph)
test_writer = tf.summary.FileWriter(log_dir + '/test')
print(X_train.shape)
def one_hot(y, num_classes=10):
"""
Converts each label index in y to vector with one_hot encoding
"""
y_one_hot = np.zeros((num_classes, y.shape[0]))
y_one_hot[y, range(y.shape[0])] = 1
return y_one_hot
Y_hot = one_hot(Y_train)
Y_hot = Y_hot.T
# split test and train:
x_dev_batch = X_train[0:5000,:]
y_dev_batch = Y_hot[0:5000,:]
X_train = X_train[5000:,:]
Y_hot = Y_hot[5000:,:]
###Output
_____no_output_____
###Markdown
Training
###Code
train_batch_size = batch_size
def print_status(epoch, feed_dict_train, feed_dict_validate, train_loss, val_loss, step):
# Calculate the accuracy on the training-set.
summary, acc = session.run([merged,accuracy], feed_dict=feed_dict_train)
train_writer.add_summary(summary, step)
summary, val_acc = session.run([merged,accuracy], feed_dict=feed_dict_validate)
test_writer.add_summary(summary, step)
msg = "Epoch {0} --- Training Accuracy: {1:>6.1%}, Validation Accuracy: {2:>6.1%}, Training Loss: {3:.3f}, Validation Loss: {4:.3f}"
print(msg.format(epoch + 1, acc, val_acc, train_loss, val_loss))
# Counter for total number of iterations performed so far.
total_iterations = 0
batch_id = 1
def get_batch(X, Y, batch_size):
"""
Return minibatch of samples and labels
:param X, y: samples and corresponding labels
:parma batch_size: minibatch size
:returns: (tuple) X_batch, y_batch
"""
global batch_id
if batch_id*batch_size >= X.shape[0]:
batch_id = 1
if batch_id == 1:
permutation = np.random.permutation(X.shape[0])
X = X[permutation,:]
Y = Y[permutation,:]
lb = batch_size*(batch_id-1)
ub = batch_size*(batch_id)
X = X[lb:ub,:]
Y = Y[lb:ub,:]
batch_id += 1
return X,Y
def optimize(num_iterations):
# Ensure we update the global variable rather than a local copy.
global total_iterations
# Start-time used for printing time-usage below.
start_time = time.time()
best_val_loss = float("inf")
patience = 0
for i in range(total_iterations,
total_iterations + num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images
x_batch, y_true_batch = get_batch(X_train,Y_hot, train_batch_size)
# getting one hot form:
#y_true_batch = one_hot(y_true_batch)
#y_dev_batch = one_hot(y_dev_batch)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch, phase: 1, keep_prob:0.5}
feed_dict_validate = {x: x_dev_batch,
y_true: y_dev_batch, phase: 0, keep_prob:1.0}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
#print(x_batch.shape,y_true_batch.shape)
acc = session.run(optimizer, feed_dict=feed_dict_train)
# Print status at end of each epoch (defined as full pass through training dataset).
        if i % int(X_train.shape[0]/batch_size) == 0:
train_loss = session.run(cost, feed_dict=feed_dict_train)
val_loss = session.run(cost, feed_dict=feed_dict_validate)
epoch = int(i / int(X_train.shape[0]/batch_size))
print('Iteration:',i)
print_status(epoch, feed_dict_train, feed_dict_validate, train_loss, val_loss, i)
if early_stopping:
if val_loss < best_val_loss:
best_val_loss = val_loss
patience = 0
else:
patience += 1
if patience == early_stopping:
break
# Update the total number of iterations performed.
total_iterations += num_iterations
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# close the writers
train_writer.close()
test_writer.close()
# Print the time-usage.
print("Time elapsed: " + str(timedelta(seconds=int(round(time_dif)))))
# Run the optimizer
optimize(num_iterations=16873)
Y_test_hot = one_hot(Y_test)
Y_test_hot = Y_test_hot.T
feed_dict_test= {x: X_test,y_true: Y_test_hot, phase: 0, keep_prob:1.0}
summary, acc = session.run([merged,accuracy], feed_dict=feed_dict_test)
print("Accuracy on test set is: %f%%"%(acc*100))
###Output
Accuracy on test set is: 71.399993%
###Markdown
Write out the results
###Code
feed_dict_test= {x: X_test_format,y_true: Y_test_hot, phase: 0, keep_prob:1.0}
y_pred_values = session.run(y_pred, feed_dict=feed_dict_test)
save_predictions('ans1-ck2840.npy', y_pred_values)
session.close()
###Output
_____no_output_____ |
COAD-DRD/scripts/full_pivotTable.ipynb | ###Markdown
Pivot table with dynamic plots for COAD
###Code
import pandas as pd
import numpy as np
from pivottablejs import pivot_ui
# get data
database = "../db/dbCOAD-DRD.csv"
df = pd.read_csv(database)
df.head()
df.columns
# use other order of columns
df_repurposing = df[['AE', 'HGNC_symbol', 'DrugName', 'ProteinID', 'DrugCID', 'Drug']]
df_repurposing
#df.to_json(r'all.json')
# export to HTML
pivot_ui(df, rows=["HGNC_symbol"], outfile_path="All_pivotTable.html")
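# A second pivot for the reordered dataframe can be written the same way; the column choices
# and output filename below are purely illustrative of passing extra pivotUI options through pivot_ui.
pivot_ui(df_repurposing, rows=["HGNC_symbol"], cols=["DrugName"],
         outfile_path="Repurposing_pivotTable.html")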
###Output
_____no_output_____
###Markdown
Manually edit the resulting HTML by replacing this part:
###Code
<body>
<script type="text/javascript">
$(function(){
var derivers = $.pivotUtilities.derivers;
var renderers = $.extend($.pivotUtilities.renderers,$.pivotUtilities.plotly_renderers);
if(window.location != window.parent.location)
$("<a>", {target:"_blank", href:""})
.text("[pop out]").prependTo($("body"));
$("#output").pivotUI(
$.csv.toArrays($("#output").text()),
$.extend({
renderers: $.extend(
$.pivotUtilities.renderers,
$.pivotUtilities.c3_renderers,
$.pivotUtilities.d3_renderers,
$.pivotUtilities.export_renderers,
),
hiddenAttributes: [""]
},
{rows : ["Target"],
filter : (function(r){ return r["Target"] != null }),
renderers: renderers,
rendererName : "Bar Chart",
rowOrder: "value_a_to_z"
})
).show();
});
</script>
<div id="output" style="display: none;">
###Output
_____no_output_____ |
Implementations/unsupervised/.ipynb_checkpoints/K-means - Empty-checkpoint.ipynb | ###Markdown
K-means clustering. When working with large datasets it can be helpful to group similar observations together. This process, known as clustering, is one of the most widely used in Machine Learning and is often used when our dataset comes without pre-existing labels. In this notebook we're going to implement the classic K-means algorithm, the simplest and most widely used clustering method. Once we've implemented it we'll use it to split a dataset into groups and see how our clustering compares to the 'true' labelling. Import Modules
###Code
import numpy as np
import random
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal
###Output
_____no_output_____
###Markdown
Generate Dataset
###Code
modelParameters = {'mu':[[-2,1], [0.5, -1], [0,1]],
'pi':[0.2, 0.35, 0.45],
'sigma':0.4,
'n':200}
#Check that pi sums to 1
if not np.isclose(np.sum(modelParameters['pi']), 1):  # use a tolerance rather than != to avoid floating point issues
print('Mixture weights must sum to 1!')
data = []
#determine which mixture each point belongs to
def generateLabels(n, pi):
#Generate n realisations of a categorical distribution given the parameters pi
unif = np.random.uniform(size = n) #Generate uniform random variables
labels = [(u < np.cumsum(pi)).argmax() for u in unif] #assign cluster
return labels
#Given the labels, generate from the corresponding normal distribution
def generateMixture(labels, params):
normalSamples = []
for label in labels:
#Select Parameters
mu = params['mu'][label]
Sigma = np.diag([params['sigma']**2]*len(mu))
#sample from multivariate normal
samp = np.random.multivariate_normal(mean = mu, cov = Sigma, size = 1)
normalSamples.append(samp)
normalSamples = np.reshape(normalSamples, (len(labels), len(params['mu'][0])))
return normalSamples
labels = generateLabels(100, modelParameters['pi']) #labels - (in practice we don't actually know what these are!)
X = generateMixture(labels, modelParameters) #features - (we do know what these are)
###Output
_____no_output_____
###Markdown
Quickly plot the data so we know what it looks like
###Code
plt.figure(figsize=(10,6))
plt.scatter(X[:,0], X[:,1],c = labels)
plt.show()
###Output
_____no_output_____
###Markdown
When doing K-means clustering, our goal is to sort the data into 3 clusters using the data $X$. When we're doing clustering we don't have access to the colour (label) of each point, so the data we're actually given would look like this:
###Code
plt.figure(figsize=(10,6))
plt.scatter(X[:,0], X[:,1])
plt.title('Example data - no labels')
plt.show()
###Output
_____no_output_____
###Markdown
If we inspect the data we can still see that the data are roughly made up of 3 groups: one in the top left corner, one in the top right corner and one in the bottom right corner. How does K-means work? The K in K-means represents the number of clusters, K, that we will sort the data into. Let's imagine we had already sorted the data into K clusters (like in the first plot above) and were trying to decide what the label of a new point should be. It would make sense to assign it to the cluster which it is closest to. But how do we define 'closest to'? One way would be to give it the same label as the point that is closest to it (a 'nearest neighbour' approach), but a more robust way would be to determine where the 'middle' of each cluster is and assign the new point to the cluster with the closest middle. We call this 'middle' the Cluster Centroid and we calculate it by taking the average of all the points in the cluster. That's all very well and good if we already have the clusters in place, but the whole point of the algorithm is to find out what the clusters are! To find the clusters, we do the following: 1. Randomly initialise K Cluster Centroids. 2. Assign each point to the Cluster Centroid that it is closest to. 3. Update each Cluster Centroid as the average of all points currently assigned to it. 4. Repeat steps 2-3 until convergence. Why does K-means work? Our aim is to find K Cluster Centroids such that the overall distance between each datapoint and its Cluster Centroid is minimised. That is, we want to choose cluster centroids $C = \{C_1,...,C_K\}$ such that the error function: $$E(C) = \sum_{i=1}^n ||x_i-C_{x_i}||^2$$ is minimised, where $C_{x_i}$ is the Cluster Centroid associated with the ith observation and $||x_i-C_{x_i}||$ is the Euclidean distance between the ith observation and its associated Cluster Centroid. Now assume that after $m$ iterations of the algorithm, the current value of $E(C)$ is $\alpha$. By carrying out step 2, we make sure that each point is assigned to the nearest cluster centroid - by doing this, either $\alpha$ stays the same (every point was already assigned to the closest centroid) or $\alpha$ gets smaller (one or more points is moved to a nearer centroid and hence the total distance is reduced). Similarly with step 3, by changing each centroid to be the average of all points in its cluster, we minimise the total distance associated with that cluster, meaning $\alpha$ can either stay the same or go down. In this way we see that as we run the algorithm $E(C)$ is non-increasing, so by continuing to run the algorithm our results can't get worse - hopefully if we run it for long enough then the results will be sensible!
###Code
class KMeans:
def __init__(self, data, K):
self.data = data #dataset with no labels
self.K = K #Number of clusters to sort the data into
#Randomly initialise Centroids
self.Centroids = np.random.normal(0,1,(self.K, self.data.shape[1])) #If the data has p features then should be a K x p array
def closestCentroid(self, x):
#Takes a single example and returns the index of the closest centroid
#Recall centroids are saved as self.Centroids
pass
def assignToCentroid(self):
#Want to assign each observation to a centroid by passing each observation to the function closestCentroid
pass
def updateCentroids(self):
#Now based on the current cluster assignments (stored in self.assignments) update the Centroids
pass
def runKMeans(self, tolerance = 0.00001):
#When the improvement between two successive evaluations of our error function is less than tolerance, we stop
change = 1000 #Initialise change to be a big number
numIterations = 0
self.CentroidStore = [np.copy(self.Centroids)] #We want to be able to keep track of how the centroids evolved over time
#while change > tolerance:
#Code goes here...
print(f'K-means Algorithm converged in {numIterations} steps')
myKM = KMeans(X,3)
myKM.runKMeans()
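# --- Reference sketch (not part of the exercise template above) ---
# One possible way to fill in the three empty methods, written here as standalone helpers
# so the template itself stays untouched. Assumes `data` is an (n, p) array and
# `centroids` a (K, p) array; empty clusters are not handled.
def closest_centroid_ref(x, centroids):
    # index of the centroid nearest (in Euclidean distance) to a single observation x
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

def assign_to_centroids_ref(data, centroids):
    # cluster index for every observation
    return np.array([closest_centroid_ref(x, centroids) for x in data])

def update_centroids_ref(data, assignments, K):
    # each centroid becomes the mean of the points currently assigned to it
    return np.array([data[assignments == k].mean(axis=0) for k in range(K)])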
###Output
K-means Algorithm converged in 4 steps
###Markdown
Let's plot the results
###Code
c = [0,1,2]*len(myKM.CentroidStore)
plt.figure(figsize=(10,6))
plt.scatter(np.array(myKM.CentroidStore).reshape(-1,2)[:,0], np.array(myKM.CentroidStore).reshape(-1,2)[:,1],c=np.array(c), s = 200, marker = '*')
plt.scatter(X[:,0], X[:,1], s = 12)
plt.title('Example data from a mixture of Gaussians - Cluster Centroid traces')
plt.show()
###Output
_____no_output_____
###Markdown
The stars of each colour above represent the trajectory of each cluster centroid as the algorithm progressed. Starting from a random initialisation, the centroids rapidly converged to a separate cluster, which is encouraging. Now let's plot the data with the associated labels that we've assigned to them.
###Code
plt.figure(figsize=(10,6))
plt.scatter(X[:,0], X[:,1], s = 20, c = myKM.assignments)
plt.scatter(np.array(myKM.Centroids).reshape(-1,2)[:,0], np.array(myKM.Centroids).reshape(-1,2)[:,1], s = 200, marker = '*', c = 'red')
plt.title('Example data from a mixture of Gaussians - Including Cluster Centroids')
plt.show()
###Output
_____no_output_____
###Markdown
The plot above shows the final clusters (with red Cluster Centroids) assigned by the model, which should be pretty close to the 'true' clusters at the top of the page. Note: It's possible that although the clusters are the same the labels might be different - remember that K-means isn't supposed to identify the correct label, it's supposed to group the data into clusters which in reality share the same labels. The data we've worked with in this notebook had an underlying structure that made it easy for K-means to identify distinct clusters. However, let's look at an example where K-means doesn't perform so well. The sting in the tail - A more complex data structure
###Code
theta = np.linspace(0, 2*np.pi, 100)
r = 15
x1 = r*np.cos(theta)
x2 = r*np.sin(theta)
#Perturb the values in the circle
x1 = x1 + np.random.normal(0,2,x1.shape[0])
x2 = x2 + np.random.normal(0,2,x2.shape[0])
z1 = np.random.normal(0,3,x1.shape[0])
z2 = np.random.normal(0,3,x2.shape[0])
x1 = np.array([x1,z1]).reshape(-1)
x2 = np.array([x2,z2]).reshape(-1)
plt.scatter(x1,x2)
plt.show()
###Output
_____no_output_____
###Markdown
It might be the case that the underlying generative structure that we want to capture is that the 'outer ring' in the plot corresponds to a certain kind of process and the 'inner circle' corresponds to another.
###Code
#Get data in the format we want
newX = []
for i in range(x1.shape[0]):
newX.append([x1[i], x2[i]])
newX = np.array(newX)
#Run KMeans
myNewKM = KMeans(newX,2)
myNewKM.runKMeans()
plt.figure(figsize=(10,6))
plt.scatter(newX[:,0], newX[:,1], s = 20, c = np.array(myNewKM.assignments))
plt.scatter(np.array(myNewKM.Centroids).reshape(-1,2)[:,0], np.array(myNewKM.Centroids).reshape(-1,2)[:,1], s = 200, marker = '*', c = 'red')
plt.title('Assigned K-Means labels for Ring data ')
plt.show()
###Output
_____no_output_____ |
urban-heat-islands.ipynb | ###Markdown
Visualize Urban Heat Islands (UHI) in Toulouse, France. Data from meteo stations can be downloaded from the French open data portal https://www.data.gouv.fr/
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import string
from glob import glob
from matplotlib.dates import DateFormatter
from ipyleaflet import Map, Marker, basemaps, basemap_to_tiles
from ipywidgets import Layout
met_files_folder = 'station-meteo' # Folder where to put meteo data files
legend_file = 'stations-meteo-en-place.csv' # File listing all meteo stations
start_date = '2019-06-27'
end_date = '2019-06-27'
toulouse_center = (43.60426, 1.44367)
default_zoom = 12
###Output
_____no_output_____
###Markdown
Parse file listing all met stations
###Code
leg = pd.read_csv(legend_file, sep=';')
def get_legend(id):
return leg.loc[leg['FID']==id]['Nom_Station'].values[0]
def get_lon(id):
return leg.loc[leg['FID']==id]['x'].values[0]
def get_lat(id):
return leg.loc[leg['FID']==id]['y'].values[0]
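# Quick sanity check of the helpers (illustrative, left commented out because the FID
# values depend on the contents of the stations CSV):
# fid = leg['FID'].iloc[0]
# get_legend(fid), get_lat(fid), get_lon(fid)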
###Output
_____no_output_____
###Markdown
Build a Pandas dataframe from a met file
###Code
def get_table(file):
df = pd.read_csv(file, sep=';')
df.columns = list(string.ascii_lowercase)[:17]
df['id'] = df['b']
df['annee'] = df['e'] + 2019
df['heure'] = (df['f'] - 1) * 15 // 60
df['minute'] = 1 + (df['f'] - 1) * 15 % 60
    df = df.loc[df['g'] > 0] # drop rows where the temperature field is null
df['temperature'] = df['g'] - 50 + df['h'] / 10
    df['pluie'] = df['j'] * 0.2 # tipping-bucket counts converted to mm
df['vent_dir'] = df['k'] * 2
    df['vent_force'] = df['l'] # below 80 unchanged, above 80 divided by 2?
df['pression'] = df['m'] + 900
df['vent_max_dir'] = df['n'] * 22.5
    df['vent_max_force'] = df['o'] # below 80 unchanged, above 80 divided by 2?
df['pluie_plus_intense'] = df['p'] * 0.2
df['date'] = df['annee'].map(str) + '-' + df['c'].map(str) + '-' + df['d'].map(str) \
+ ':' + df['heure'].map(str) + '-' + df['minute'].map(str)
df['date'] = pd.to_datetime(df['date'], format='%Y-%m-%d:%H-%M')
df = df[['date','id','temperature','pression','pluie','pluie_plus_intense','vent_dir', \
'vent_force','vent_max_dir','vent_max_force']]
df.set_index('date', inplace=True)
df = df.loc[start_date:end_date]
return df
###Output
_____no_output_____
###Markdown
Parse met files (in met_files_folder)
###Code
table_list = []
for file in glob(met_files_folder + '/*.csv'):
table_list.append(get_table(file))
tables = [table for table in table_list if not table.empty]
legs = [get_legend(table['id'].iloc[0]) for table in tables]
lats = [get_lat(table['id'].iloc[0]) for table in tables]
longs = [get_lon(table['id'].iloc[0]) for table in tables]
print('Number of meteo stations with available recordings for this time period: {}'.format(len(legs)))
print(legs)
###Output
Number of meteo stations with available recordings for this time period: 21
['Nakache', 'Toulouse_Cote_Pavee', 'Toulouse_Paul_Sabatier', 'Toulouse_Carmes', 'Toulouse_parc_japonais', 'Basso_Cambo', 'Toulouse_Lardenne', 'Marengo', 'Montaudran', 'Busca', 'La_Salade', 'Pech_David', 'Avenue_Grde_Bretagne', 'Soupetard', 'Toulouse_parc_Jardin_Plantes', 'Meteopole', 'Toulouse_Cyprien', 'Castelginest', 'Toulouse_Canceropole', 'Valade', 'Colomiers_ZA_Perget']
###Markdown
Plot all met stations around Toulouse
###Code
m = Map(center=toulouse_center, zoom=default_zoom, layout=Layout(width='100%', height='500px'))
for i in range(len(legs)):
m.add_layer(Marker(location=(lats[i], longs[i]), draggable=False, title=legs[i]))
m
###Output
_____no_output_____
###Markdown
Plot temperature chart for all met stations
###Code
ax = tables[0]['temperature'].plot(grid=True, figsize=[25,17])
for i in range(1, len(tables)):
tables[i]['temperature'].plot(grid=True, ax=ax)
ax.legend(legs)
ax.xaxis.set_major_formatter(DateFormatter('%H:%M'))
ax.set_xlabel('Temperatures of ' + start_date)
plt.savefig('temperatures.png')
###Output
_____no_output_____ |
05.01-KNN-imputation.ipynb | ###Markdown
KNN imputation. The missing values are estimated as the average value from the closest K neighbours. [KNNImputer from sklearn](https://scikit-learn.org/stable/modules/generated/sklearn.impute.KNNImputer.html#sklearn.impute.KNNImputer) - The same K will be used to impute all variables - We can't really optimise K to better predict the missing values - We could optimise K to better predict the target. **Note**: If what we want is to predict, as accurately as possible, the values of the missing data, then we would not use the KNN imputer; we would build individual KNN algorithms to predict one variable from the remaining ones. This is a common regression problem.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# to split the datasets
from sklearn.model_selection import train_test_split
# multivariate imputation
from sklearn.impute import KNNImputer
###Output
_____no_output_____
###Markdown
Load data
###Code
# list with numerical varables
cols_to_use = [
'MSSubClass', 'LotFrontage', 'LotArea', 'OverallQual',
'OverallCond', 'YearBuilt', 'YearRemodAdd', 'MasVnrArea',
'BsmtFinSF1', 'BsmtFinSF2', 'BsmtUnfSF', 'TotalBsmtSF',
'1stFlrSF', '2ndFlrSF', 'LowQualFinSF', 'GrLivArea',
'BsmtFullBath', 'BsmtHalfBath', 'FullBath', 'HalfBath',
'BedroomAbvGr', 'KitchenAbvGr', 'TotRmsAbvGrd',
'Fireplaces', 'GarageYrBlt', 'GarageCars', 'GarageArea',
'WoodDeckSF', 'OpenPorchSF', 'EnclosedPorch', '3SsnPorch',
'ScreenPorch', 'PoolArea', 'MiscVal', 'MoSold', 'YrSold',
'SalePrice'
]
# let's load the dataset with a selected variables
data = pd.read_csv('../houseprice.csv', usecols=cols_to_use)
# find variables with missing data
for var in data.columns:
if data[var].isnull().sum() > 1:
print(var, data[var].isnull().sum())
# let's separate into training and testing set
# first drop the target from the feature list
cols_to_use.remove('SalePrice')
X_train, X_test, y_train, y_test = train_test_split(
data[cols_to_use],
data['SalePrice'],
test_size=0.3,
random_state=0)
X_train.shape, X_test.shape
# reset index, so we can compare values later on
# in the demo
X_train.reset_index(inplace=True, drop=True)
X_test.reset_index(inplace=True, drop=True)
###Output
_____no_output_____
###Markdown
KNN imputation
###Code
imputer = KNNImputer(
n_neighbors=5, # the number of neighbours K
weights='distance', # the weighting factor
metric='nan_euclidean', # the metric to find the neighbours
add_indicator=False, # whether to add a missing indicator
)
imputer.fit(X_train)
train_t = imputer.transform(X_train)
test_t = imputer.transform(X_test)
# sklearn returns a Numpy array
# lets make a dataframe
train_t = pd.DataFrame(train_t, columns=X_train.columns)
test_t = pd.DataFrame(test_t, columns=X_test.columns)
train_t.head()
# variables without NA after the imputation
train_t[['LotFrontage', 'MasVnrArea', 'GarageYrBlt']].isnull().sum()
# the observations with NA in the original train set
X_train[X_train['MasVnrArea'].isnull()]['MasVnrArea']
# the replacement values in the transformed dataset
train_t[X_train['MasVnrArea'].isnull()]['MasVnrArea']
# the mean value of the variable (i.e., for mean imputation)
X_train['MasVnrArea'].mean()
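# A small side-by-side view (sketch) for the rows that were originally missing:
# the distance-weighted KNN estimates versus the single mean value shown above.
comparison = pd.DataFrame({
    'knn_imputed': train_t[X_train['MasVnrArea'].isnull()]['MasVnrArea'],
    'mean_value': X_train['MasVnrArea'].mean(),
})
# inspect comparison.head() to see how far apart the two strategies can be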
###Output
_____no_output_____
###Markdown
In some cases, the imputation values are very different from the mean value we would have used in MeanMedianImputation. Imputing a slice of the dataframe. We can use Feature-engine to apply the KNNImputer to a slice of the dataframe.
###Code
from feature_engine.wrappers import SklearnTransformerWrapper
data = pd.read_csv('../houseprice.csv')
X_train, X_test, y_train, y_test = train_test_split(
data.drop('SalePrice', axis=1),
data['SalePrice'],
test_size=0.3,
random_state=0)
X_train.shape, X_test.shape
# start the KNNimputer inside the SKlearnTransformerWrapper
imputer = SklearnTransformerWrapper(
transformer = KNNImputer(weights='distance'),
variables = cols_to_use,
)
# fit the wrapper + KNNImputer
imputer.fit(X_train)
# transform the data
train_t = imputer.transform(X_train)
test_t = imputer.transform(X_test)
# feature-engine returns a dataframe
train_t.head()
# no NA after the imputation
train_t['MasVnrArea'].isnull().sum()
# same imputation values as previously
train_t[X_train['MasVnrArea'].isnull()]['MasVnrArea']
###Output
_____no_output_____
###Markdown
Automatically find the best imputation parameters. We can optimise the parameters of the KNN imputation to better predict our outcome.
###Code
# import extra classes for modelling
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Lasso
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
# separate into train and test set
X_train, X_test, y_train, y_test = train_test_split(
data[cols_to_use], # just the features
data['SalePrice'], # the target
test_size=0.3, # the percentage of obs in the test set
random_state=0) # for reproducibility
X_train.shape, X_test.shape
pipe = Pipeline(steps=[
('imputer', KNNImputer(
n_neighbors=5,
weights='distance',
add_indicator=False)),
('scaler', StandardScaler()),
('regressor', Lasso(max_iter=2000)),
])
# now we create the grid with all the parameters that we would like to test
param_grid = {
'imputer__n_neighbors': [3,5,10],
'imputer__weights': ['uniform', 'distance'],
'imputer__add_indicator': [True, False],
'regressor__alpha': [10, 100, 200],
}
grid_search = GridSearchCV(pipe, param_grid, cv=5, n_jobs=-1, scoring='r2')
# cv=5 sets the number of cross-validation folds
# n_jobs=-1 indicates to use all available cpus
# scoring='r2' indicates to evaluate using the r squared
# for more details in the grid parameters visit:
#https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html
# and now we train over all the possible combinations
# of the parameters above
grid_search.fit(X_train, y_train)
# and we print the best score over the train set
print(("best linear regression from grid search: %.3f"
% grid_search.score(X_train, y_train)))
# let's check the performance over the test set
print(("best linear regression from grid search: %.3f"
% grid_search.score(X_test, y_test)))
# and find the best parameters
grid_search.best_params_
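# The whole grid can be inspected too (sketch): sorting by mean_test_score shows how much
# the neighbour count, the weighting scheme and the missing indicator actually mattered.
cv_results = pd.DataFrame(grid_search.cv_results_)
cv_results = cv_results[['params', 'mean_test_score']].sort_values(
    'mean_test_score', ascending=False)
# cv_results.head() lists the best performing parameter combinations first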
###Output
_____no_output_____
###Markdown
Compare with univariate imputation
###Code
from sklearn.impute import SimpleImputer
# separate into train and test set
X_train, X_test, y_train, y_test = train_test_split(
data[cols_to_use], # just the features
data['SalePrice'], # the target
test_size=0.3, # the percentage of obs in the test set
random_state=0) # for reproducibility
X_train.shape, X_test.shape
pipe = Pipeline(steps=[
('imputer', SimpleImputer(strategy='mean', fill_value=-1)),
('scaler', StandardScaler()),
('regressor', Lasso(max_iter=2000)),
])
param_grid = {
'imputer__strategy': ['mean', 'median', 'constant'],
'imputer__add_indicator': [True, False],
'regressor__alpha': [10, 100, 200],
}
grid_search = GridSearchCV(pipe, param_grid, cv=5, n_jobs=-1, scoring='r2')
# and now we train over all the possible combinations of the parameters above
grid_search.fit(X_train, y_train)
# and we print the best score over the train set
print(("best linear regression from grid search: %.3f"
% grid_search.score(X_train, y_train)))
# and finally let's check the performance over the test set
print(("best linear regression from grid search: %.3f"
% grid_search.score(X_test, y_test)))
# and find the best fit parameters like this
grid_search.best_params_
###Output
_____no_output_____ |
CS229_PS/.ipynb_checkpoints/PS4_Q6_ReinforcementLearning-checkpoint.ipynb | ###Markdown
Inverted Pendulum: Reinforcement learning. Meichen Lu ([email protected]), 26th April 2018. Source: CS229 PS4 Q6. Starting code: http://cs229.stanford.edu/ps/ps4/q6/ Reference: https://github.com/zyxue/stanford-cs229/blob/master/Problem-set-4/6-reinforcement-learning-the-inverted-pendulum/control.py
###Code
from cart_pole import CartPole, Physics
import numpy as np
from scipy.signal import lfilter
import matplotlib.pyplot as plt
%matplotlib inline
# Simulation parameters
pause_time = 0.0001
min_trial_length_to_start_display = 100
display_started = min_trial_length_to_start_display == 0
NUM_STATES = 163
NUM_ACTIONS = 2
GAMMA = 0.995
TOLERANCE = 0.01
NO_LEARNING_THRESHOLD = 20
# Time cycle of the simulation
time = 0
# These variables perform bookkeeping (how many cycles was the pole
# balanced for before it fell). Useful for plotting learning curves.
time_steps_to_failure = []
num_failures = 0
time_at_start_of_current_trial = 0
# You should reach convergence well before this
max_failures = 500
# Initialize a cart pole
cart_pole = CartPole(Physics())
# Starting `state_tuple` is (0, 0, 0, 0)
# x, x_dot, theta, theta_dot represents the actual continuous state vector
x, x_dot, theta, theta_dot = 0.0, 0.0, 0.0, 0.0
state_tuple = (x, x_dot, theta, theta_dot)
# `state` is the number given to this state, you only need to consider
# this representation of the state
state = cart_pole.get_state(state_tuple)
# if min_trial_length_to_start_display == 0 or display_started == 1:
# cart_pole.show_cart(state_tuple, pause_time)
# Perform all your initializations here:
# Assume no transitions or rewards have been observed.
# Initialize the value function array to small random values (0 to 0.10,
# say).
# Initialize the transition probabilities uniformly (ie, probability of
# transitioning for state x to state y using action a is exactly
# 1/NUM_STATES).
# Initialize all state rewards to zero.
###### BEGIN YOUR CODE ######
V_s = np.random.rand(NUM_STATES)
P_sa = np.ones((NUM_STATES,NUM_ACTIONS, NUM_STATES))/NUM_STATES
R_s = np.zeros((NUM_STATES))
# Initialise intermediate variables
state_transition_count = np.zeros((NUM_STATES,NUM_ACTIONS, NUM_STATES))
new_state_count = np.zeros(NUM_STATES)
R_new_state = np.zeros(NUM_STATES)
###### END YOUR CODE ######
# This is the criterion to end the simulation.
# You should change it to terminate when the previous
# 'NO_LEARNING_THRESHOLD' consecutive value function computations all
# converged within one value function iteration. Intuitively, it seems
# like there will be little learning after this, so end the simulation
# here, and say the overall algorithm has converged.
consecutive_no_learning_trials = 0
while consecutive_no_learning_trials < NO_LEARNING_THRESHOLD:
# Write code to choose action (0 or 1).
# This action choice algorithm is just for illustration. It may
# convince you that reinforcement learning is nice for control
# problems!Replace it with your code to choose an action that is
# optimal according to the current value function, and the current MDP
# model.
###### BEGIN YOUR CODE ######
# TODO:
action = np.argmax(np.sum(P_sa[state]*V_s, axis = 1))
###### END YOUR CODE ######
# Get the next state by simulating the dynamics
state_tuple = cart_pole.simulate(action, state_tuple)
# Increment simulation time
time = time + 1
# Get the state number corresponding to new state vector
new_state = cart_pole.get_state(state_tuple)
# if display_started == 1:
# cart_pole.show_cart(state_tuple, pause_time)
# reward function to use - do not change this!
if new_state == NUM_STATES - 1:
R = -1
else:
R = 0
# Perform model updates here.
# A transition from `state` to `new_state` has just been made using
# `action`. The reward observed in `new_state` (note) is `R`.
# Write code to update your statistics about the MDP i.e. the
# information you are storing on the transitions and on the rewards
# observed. Do not change the actual MDP parameters, except when the
# pole falls (the next if block)!
###### BEGIN YOUR CODE ######
# record the number of times `state, action, new_state` occurs
state_transition_count[state, action, new_state] += 1
# record the rewards for every `new_state`
R_new_state[new_state] += R
# record the number of time `new_state` was reached
new_state_count[new_state] += 1
###### END YOUR CODE ######
# Recompute MDP model whenever pole falls
# Compute the value function V for the new model
if new_state == NUM_STATES - 1:
# Update MDP model using the current accumulated statistics about the
# MDP - transitions and rewards.
# Make sure you account for the case when a state-action pair has never
# been tried before, or the state has never been visited before. In that
# case, you must not change that component (and thus keep it at the
# initialized uniform distribution).
###### BEGIN YOUR CODE ######
# TODO:
sum_state = np.sum(state_transition_count, axis = 2)
mask = sum_state > 0
P_sa[mask] = state_transition_count[mask]/sum_state[mask].reshape(-1, 1)
# Update reward function
mask = new_state_count>0
R_s[mask] = R_new_state[mask]/new_state_count[mask]
###### END YOUR CODE ######
# Perform value iteration using the new estimated model for the MDP.
# The convergence criterion should be based on `TOLERANCE` as described
# at the top of the file.
# If it converges within one iteration, you may want to update your
# variable that checks when the whole simulation must end.
###### BEGIN YOUR CODE ######
iter = 0
tol = 1
while tol > TOLERANCE:
V_old = V_s
V_s = R_s + GAMMA * np.max(np.sum(P_sa*V_s, axis = 2), axis = 1)
tol = np.max(np.abs(V_s - V_old))
iter = iter + 1
if iter == 1:
consecutive_no_learning_trials += 1
else:
# Reset
consecutive_no_learning_trials = 0
###### END YOUR CODE ######
# Do NOT change this code: Controls the simulation, and handles the case
# when the pole fell and the state must be reinitialized.
if new_state == NUM_STATES - 1:
num_failures += 1
if num_failures >= max_failures:
break
print('[INFO] Failure number {}'.format(num_failures))
time_steps_to_failure.append(time - time_at_start_of_current_trial)
# time_steps_to_failure[num_failures] = time - time_at_start_of_current_trial
time_at_start_of_current_trial = time
if time_steps_to_failure[num_failures - 1] > min_trial_length_to_start_display:
display_started = 1
# Reinitialize state
# x = 0.0
x = -1.1 + np.random.uniform() * 2.2
x_dot, theta, theta_dot = 0.0, 0.0, 0.0
state_tuple = (x, x_dot, theta, theta_dot)
state = cart_pole.get_state(state_tuple)
else:
state = new_state
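# Once training has converged, the greedy policy implied by the learned MDP can be read off
# directly (illustrative; not needed for the learning-curve plot below):
greedy_policy = np.argmax(np.sum(P_sa * V_s, axis=2), axis=1)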
# plot the learning curve (time balanced vs. trial)
log_tstf = np.log(np.array(time_steps_to_failure))
plt.plot(np.arange(len(time_steps_to_failure)), log_tstf, 'k')
window = 30
w = np.array([1/window for _ in range(window)])
weights = lfilter(w, 1, log_tstf)
x = np.arange(window//2, len(log_tstf) - window//2)
plt.plot(x, weights[window:len(log_tstf)], 'r--')
plt.xlabel('Num failures')
plt.ylabel('Num steps to failure')
plt.show()
###Output
_____no_output_____ |
notebooks/4-Views.ipynb | ###Markdown
Views - Views are themselves widgets, but with the capability to hold other widgets.
###Code
from webdriver_kaifuku import BrowserManager
from widgetastic.widget import Browser
command_executor = "http://localhost:4444/wd/hub"
config = {
"webdriver": "Remote",
"webdriver_options":
{"desired_capabilities": {"browserName": "firefox"},
"command_executor": command_executor,
}
}
mgr = BrowserManager.from_conf(config)
sel = mgr.ensure_open()
class MyBrowser(Browser):
pass
browser = MyBrowser(selenium=sel)
browser.url = "http://0.0.0.0:8000/test_page.html"
from widgetastic.widget import View, Text, TextInput, Checkbox, ColourInput, Select
# Example-1
class BasicWidgetView(View):
text_input = TextInput(id="text_input")
checkbox = Checkbox(id="checkbox_input")
button = Text(locator=".//button[@id='a_button']")
color_input = ColourInput(id="color_input")
view = BasicWidgetView(browser)
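# Widgets on a view are filled and read as a whole through dictionaries, e.g. (illustrative values):
view.fill({'text_input': 'hello', 'checkbox': True})
view.read()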
###Output
_____no_output_____
###Markdown
Nested Views
###Code
# Example-2
class MyNestedView(View):
@View.nested
class basic(View): #noqa
text_input = TextInput(id="text_input")
checkbox = Checkbox(id="checkbox_input")
@View.nested
class conditional(View):
select_input = Select(id="select_lang")
view = MyNestedView(browser)
view.fill({'basic': {'text_input': 'hi', 'checkbox': True},
'conditional': {'select_input': 'Go'}})
# Example-3
class Basic(View):
text_input = TextInput(id="text_input")
checkbox = Checkbox(id="checkbox_input")
class Conditional(View):
select_input = Select(id="select_lang")
class MyNestedView(View):
basic = View.nested(Basic)
conditional = View.nested(Conditional)
view = MyNestedView(browser)
view.read()
###Output
_____no_output_____
###Markdown
Switchable Conditional Views
###Code
from widgetastic.widget import ConditionalSwitchableView
# Example-4: Switchable widgets
class MyConditionalWidgetView(View):
select_input = Select(id="select_lang")
lang_label = ConditionalSwitchableView(reference="select_input")
lang_label.register("Python", default=True, widget=Text(locator=".//h3[@id='lang-1']"))
lang_label.register("Go", widget=Text(locator=".//h3[@id='lang-2']"))
view = MyConditionalWidgetView(browser)
# Example-5: Switchable Views
class MyConditionalView(View):
select_input = Select(id="select_lang")
lang = ConditionalSwitchableView(reference="select_input")
@lang.register("Python", default=True)
class PythonView(View):
# some more widgets
lang_label = Text(locator=".//h3[@id='lang-1']")
@lang.register("Go")
class GoView(View):
lang_label = Text(locator=".//h3[@id='lang-2']")
view = MyConditionalView(browser)
###Output
_____no_output_____
###Markdown
Parametrized Views
###Code
from widgetastic.widget import ParametrizedView
from widgetastic.utils import ParametrizedLocator
# Example-6
class MyParametrizedView(ParametrizedView):
PARAMETERS = ('name',)
ROOT = ParametrizedLocator(".//div[contains(label, {name|quote})]")
widget = Checkbox(locator=".//input")
view = MyParametrizedView(browser, additional_context={'name': 'widget 1'})
# Example-7: Nested Parametrized View
class MyNestedParametrizedView(View):
@View.nested
class widget_selector(ParametrizedView):
PARAMETERS = ('name',)
ROOT = ParametrizedLocator(".//div[contains(label, {name|quote})]")
widget = Checkbox(locator=".//input")
view = MyNestedParametrizedView(browser)
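# A parametrized nested view is instantiated by calling it with its parameters,
# e.g. (assuming the test page contains a label with the text 'widget 1'):
view.widget_selector('widget 1').widget.read()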
###Output
_____no_output_____ |
DeepLearning.AI Tensorflow Developer/Course 1 - Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning/Resources/Copy of Course 1 - Part 8 - Lesson 3 - Notebook.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip \
-O /tmp/horse-or-human.zip
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/validation-horse-or-human.zip \
-O /tmp/validation-horse-or-human.zip
###Output
_____no_output_____
###Markdown
The following Python code will use the os library to access the file system, and the zipfile library to unzip the data.
###Code
import os
import zipfile
local_zip = '/tmp/horse-or-human.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp/horse-or-human')
local_zip = '/tmp/validation-horse-or-human.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp/validation-horse-or-human')
zip_ref.close()
###Output
_____no_output_____
###Markdown
The contents of the .zip are extracted to the base directory `/tmp/horse-or-human`, which in turn contains `horses` and `humans` subdirectories. In short: The training set is the data that is used to tell the neural network model that 'this is what a horse looks like', 'this is what a human looks like' etc. One thing to pay attention to in this sample: We do not explicitly label the images as horses or humans. If you remember with the handwriting example earlier, we had labelled 'this is a 1', 'this is a 7' etc. Later you'll see something called an ImageGenerator being used -- and this is coded to read images from subdirectories, and automatically label them from the name of that subdirectory. So, for example, you will have a 'training' directory containing a 'horses' directory and a 'humans' one. ImageGenerator will label the images appropriately for you, reducing a coding step. Let's define each of these directories:
###Code
# Directory with our training horse pictures
train_horse_dir = os.path.join('/tmp/horse-or-human/horses')
# Directory with our training human pictures
train_human_dir = os.path.join('/tmp/horse-or-human/humans')
# Directory with our training horse pictures
validation_horse_dir = os.path.join('/tmp/validation-horse-or-human/horses')
# Directory with our training human pictures
validation_human_dir = os.path.join('/tmp/validation-horse-or-human/humans')
###Output
_____no_output_____
###Markdown
Now, let's see what the filenames look like in the `horses` and `humans` training directories:
###Code
train_horse_names = os.listdir(train_horse_dir)
print(train_horse_names[:10])
train_human_names = os.listdir(train_human_dir)
print(train_human_names[:10])
validation_horse_hames = os.listdir(validation_horse_dir)
print(validation_horse_hames[:10])
validation_human_names = os.listdir(validation_human_dir)
print(validation_human_names[:10])
###Output
_____no_output_____
###Markdown
Let's find out the total number of horse and human images in the directories:
###Code
print('total training horse images:', len(os.listdir(train_horse_dir)))
print('total training human images:', len(os.listdir(train_human_dir)))
print('total validation horse images:', len(os.listdir(validation_horse_dir)))
print('total validation human images:', len(os.listdir(validation_human_dir)))
###Output
_____no_output_____
###Markdown
Now let's take a look at a few pictures to get a better sense of what they look like. First, configure the matplotlib parameters:
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# Parameters for our graph; we'll output images in a 4x4 configuration
nrows = 4
ncols = 4
# Index for iterating over images
pic_index = 0
###Output
_____no_output_____
###Markdown
Now, display a batch of 8 horse and 8 human pictures. You can rerun the cell to see a fresh batch each time:
###Code
# Set up matplotlib fig, and size it to fit 4x4 pics
fig = plt.gcf()
fig.set_size_inches(ncols * 4, nrows * 4)
pic_index += 8
next_horse_pix = [os.path.join(train_horse_dir, fname)
for fname in train_horse_names[pic_index-8:pic_index]]
next_human_pix = [os.path.join(train_human_dir, fname)
for fname in train_human_names[pic_index-8:pic_index]]
for i, img_path in enumerate(next_horse_pix+next_human_pix):
# Set up subplot; subplot indices start at 1
sp = plt.subplot(nrows, ncols, i + 1)
sp.axis('Off') # Don't show axes (or gridlines)
img = mpimg.imread(img_path)
plt.imshow(img)
plt.show()
###Output
_____no_output_____
###Markdown
Building a Small Model from Scratch. But before we continue, let's start defining the model: Step 1 will be to import tensorflow.
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
We then add convolutional layers as in the previous example, and flatten the final result to feed into the densely connected layers. Finally we add the densely connected layers. Note that because we are facing a two-class classification problem, i.e. a *binary classification problem*, we will end our network with a [*sigmoid* activation](https://wikipedia.org/wiki/Sigmoid_function), so that the output of our network will be a single scalar between 0 and 1, encoding the probability that the current image is class 1 (as opposed to class 0).
###Code
model = tf.keras.models.Sequential([
# Note the input shape is the desired size of the image 300x300 with 3 bytes color
# This is the first convolution
tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
# The second convolution
tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The third convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fourth convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fifth convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# Flatten the results to feed into a DNN
tf.keras.layers.Flatten(),
# 512 neuron hidden layer
tf.keras.layers.Dense(512, activation='relu'),
# Only 1 output neuron. It will contain a value from 0-1 where 0 for 1 class ('horses') and 1 for the other ('humans')
tf.keras.layers.Dense(1, activation='sigmoid')
])
###Output
_____no_output_____
###Markdown
The model.summary() method call prints a summary of the NN
###Code
model.summary()
###Output
_____no_output_____
###Markdown
The "output shape" column shows how the size of your feature map evolves in each successive layer. The convolution layers reduce the size of the feature maps by a bit due to padding, and each pooling layer halves the dimensions. Next, we'll configure the specifications for model training. We will train our model with the `binary_crossentropy` loss, because it's a binary classification problem and our final activation is a sigmoid. (For a refresher on loss metrics, see the [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/descending-into-ml/video-lecture).) We will use the `rmsprop` optimizer with a learning rate of `0.001`. During training, we will want to monitor classification accuracy.**NOTE**: In this case, using the [RMSprop optimization algorithm](https://wikipedia.org/wiki/Stochastic_gradient_descentRMSProp) is preferable to [stochastic gradient descent](https://developers.google.com/machine-learning/glossary/SGD) (SGD), because RMSprop automates learning-rate tuning for us. (Other optimizers, such as [Adam](https://wikipedia.org/wiki/Stochastic_gradient_descentAdam) and [Adagrad](https://developers.google.com/machine-learning/glossary/AdaGrad), also automatically adapt the learning rate during training, and would work equally well here.)
###Code
from tensorflow.keras.optimizers import RMSprop
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(lr=0.001),
metrics=['accuracy'])
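# As noted above, Adam or Adagrad would work equally well here; an Adam variant
# (illustrative, same loss and metric) would be:
# model.compile(loss='binary_crossentropy',
#               optimizer=tf.keras.optimizers.Adam(lr=0.001),
#               metrics=['accuracy'])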
###Output
_____no_output_____
###Markdown
Data Preprocessing. Let's set up data generators that will read pictures in our source folders, convert them to `float32` tensors, and feed them (with their labels) to our network. We'll have one generator for the training images and one for the validation images. Our generators will yield batches of images of size 300x300 and their labels (binary). As you may already know, data that goes into neural networks should usually be normalized in some way to make it more amenable to processing by the network. (It is uncommon to feed raw pixels into a convnet.) In our case, we will preprocess our images by normalizing the pixel values to be in the `[0, 1]` range (originally all values are in the `[0, 255]` range). In Keras this can be done via the `keras.preprocessing.image.ImageDataGenerator` class using the `rescale` parameter. This `ImageDataGenerator` class allows you to instantiate generators of augmented image batches (and their labels) via `.flow(data, labels)` or `.flow_from_directory(directory)`. These generators can then be used with the Keras model methods that accept data generators as inputs: `fit`, `evaluate_generator`, and `predict_generator`.
###Code
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1/255)
validation_datagen = ImageDataGenerator(rescale=1/255)
# Flow training images in batches of 128 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
'/tmp/horse-or-human/', # This is the source directory for training images
target_size=(300, 300), # All images will be resized to 300x300
batch_size=128,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
# Flow training images in batches of 128 using train_datagen generator
validation_generator = validation_datagen.flow_from_directory(
'/tmp/validation-horse-or-human/', # This is the source directory for training images
target_size=(300, 300), # All images will be resized to 300x300
batch_size=32,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
###Output
_____no_output_____
###Markdown
Training. Let's train for 15 epochs -- this may take a few minutes to run. Do note the values per epoch. The Loss and Accuracy are a great indication of progress of training. It's making a guess as to the classification of the training data, and then measuring it against the known label, calculating the result. Accuracy is the portion of correct guesses.
###Code
history = model.fit(
train_generator,
steps_per_epoch=8,
epochs=15,
verbose=1,
validation_data = validation_generator,
validation_steps=8)
###Output
_____no_output_____
###Markdown
Running the Model. Let's now take a look at actually running a prediction using the model. This code will allow you to choose 1 or more files from your file system, it will then upload them, and run them through the model, giving an indication of whether the object is a horse or a human.
###Code
import numpy as np
from google.colab import files
from keras.preprocessing import image
uploaded = files.upload()
for fn in uploaded.keys():
# predicting images
path = '/content/' + fn
img = image.load_img(path, target_size=(300, 300))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
images = np.vstack([x])
classes = model.predict(images, batch_size=10)
print(classes[0])
if classes[0]>0.5:
print(fn + " is a human")
else:
print(fn + " is a horse")
###Output
_____no_output_____
###Markdown
Visualizing Intermediate Representations. To get a feel for what kind of features our convnet has learned, one fun thing to do is to visualize how an input gets transformed as it goes through the convnet. Let's pick a random image from the training set, and then generate a figure where each row is the output of a layer, and each image in the row is a specific filter in that output feature map. Rerun this cell to generate intermediate representations for a variety of training images.
###Code
import numpy as np
import random
from tensorflow.keras.preprocessing.image import img_to_array, load_img
# Let's define a new Model that will take an image as input, and will output
# intermediate representations for all layers in the previous model after
# the first.
successive_outputs = [layer.output for layer in model.layers[1:]]
#visualization_model = Model(img_input, successive_outputs)
visualization_model = tf.keras.models.Model(inputs = model.input, outputs = successive_outputs)
# Let's prepare a random input image from the training set.
horse_img_files = [os.path.join(train_horse_dir, f) for f in train_horse_names]
human_img_files = [os.path.join(train_human_dir, f) for f in train_human_names]
img_path = random.choice(horse_img_files + human_img_files)
img = load_img(img_path, target_size=(300, 300)) # this is a PIL image
x = img_to_array(img) # Numpy array with shape (150, 150, 3)
x = x.reshape((1,) + x.shape) # Numpy array with shape (1, 150, 150, 3)
# Rescale by 1/255
x /= 255
# Let's run our image through our network, thus obtaining all
# intermediate representations for this image.
successive_feature_maps = visualization_model.predict(x)
# These are the names of the layers, so can have them as part of our plot
layer_names = [layer.name for layer in model.layers[1:]]
# Now let's display our representations
for layer_name, feature_map in zip(layer_names, successive_feature_maps):
if len(feature_map.shape) == 4:
# Just do this for the conv / maxpool layers, not the fully-connected layers
n_features = feature_map.shape[-1] # number of features in feature map
# The feature map has shape (1, size, size, n_features)
size = feature_map.shape[1]
# We will tile our images in this matrix
display_grid = np.zeros((size, size * n_features))
for i in range(n_features):
# Postprocess the feature to make it visually palatable
x = feature_map[0, :, :, i]
x -= x.mean()
x /= x.std()
x *= 64
x += 128
x = np.clip(x, 0, 255).astype('uint8')
# We'll tile each filter into this big horizontal grid
display_grid[:, i * size : (i + 1) * size] = x
# Display the grid
scale = 20. / n_features
plt.figure(figsize=(scale * n_features, scale))
plt.title(layer_name)
plt.grid(False)
plt.imshow(display_grid, aspect='auto', cmap='viridis')
###Output
_____no_output_____
###Markdown
As you can see we go from the raw pixels of the images to increasingly abstract and compact representations. The representations downstream start highlighting what the network pays attention to, and they show fewer and fewer features being "activated"; most are set to zero. This is called "sparsity." Representation sparsity is a key feature of deep learning. These representations carry increasingly less information about the original pixels of the image, but increasingly refined information about the class of the image. You can think of a convnet (or a deep network in general) as an information distillation pipeline. Clean Up. Before running the next exercise, run the following cell to terminate the kernel and free memory resources:
###Code
import os, signal
os.kill(os.getpid(), signal.SIGKILL)
###Output
_____no_output_____ |
BestSpiderWeb.ipynb | ###Markdown
BestSpiderWeb Problem. A city without roads has a wheat producer, an egg producer and a hotel. The mayor also wants to build a pasta producer and a restaurant in the future. He also wants to build roads like in the picture, so that the pasta producer can easily take the wheat and eggs to make pasta, and the restaurant can easily buy pasta, welcome hotel guests, and buy eggs for other preparations. **Goal:** building roads costs money, so you have to make them as short as possible. --- **In other words:** in a Euclidean space there is a graph with constant edges and with 2 types of nodes, one with constant coordinates, the other with variable coordinates. **Goal:** find the positions of the variable nodes that give the smallest sum of the lengths of the edges. Solution: $$N_0[c] = \sum_{i \in N}\sum_{v \in P_{N_0 \longleftrightarrow i}}\frac{\sum O_i[c]}{v}$$ where * $$N_0$$ is the variable node whose coordinates are being computed; * $$c$$ is a coordinate; * $$N$$ is the set of nodes with variable coordinates reachable from $$N_0$$ passing only through nodes belonging to N; * $$O$$ is the set of nodes with constant coordinates; * $$O_i$$ is the set of nodes belonging to "O" adjacent to "i"; * $$P_{N_0 \rightarrow i}$$ is the set of all possible paths (infinite for a length of "N" greater than 1) between node $$N_0$$ and node "i", passing only through nodes belonging to N; * $$v$$, or path, is the multiplication of the number of edges of all the nodes it crosses, $$N_0$$ included, "i" included (e.g. if it starts from a node that has 7 adjacent edges, then goes through one that has 2, and ends up with one having 3, the calculation will be 7 * 2 * 3 = 42). Implementation
###Code
import numpy as np
class Node:
NoCoordinates = None
def __init__(self, coordinates: np.ndarray = None):
self.AdjacentNodes = []
if coordinates is None:
self.Constant = False
else:
if len(coordinates) != Node.NoCoordinates:
raise Exception('wrong number of coordinates')
self.Coordinates = coordinates
self.Constant = True
def AddAdjacentNode(self, item: 'Node'):
self.AdjacentNodes.append(item)
class _VirtualNode:
def __init__(self, nodeBase: 'Node' = None):
if nodeBase is not None:
self.ActualNode = nodeBase
self.SumConstantNodes = np.zeros(Node.NoCoordinates)
for item in nodeBase.AdjacentNodes:
if item.Constant:
self.SumConstantNodes += item.Coordinates
self.NumTmpPath = len(nodeBase.AdjacentNodes)
def Copy(self, actualNode: 'Node') -> '_VirtualNode':
item = Node._VirtualNode()
item.ActualNode = actualNode
item.SumConstantNodes = self.SumConstantNodes
item.NumTmpPath = self.NumTmpPath * len(actualNode.AdjacentNodes)
return item
def ComputeBestSpiderWeb(variablesNodes: list):
# initialize coordinates of variables nodes
for item in variablesNodes:
item.Coordinates = np.zeros(Node.NoCoordinates)
# initialize virtual nodes
_VirtualNodes = []
for item in variablesNodes:
_VirtualNodes.append(Node._VirtualNode(item))
# ALGORITHM
# more iterations means more accuracy (exponential)
for i in range(40):
next_VirtualNodes = []
# iterate through all variables virtual nodes
for item in _VirtualNodes:
# update the coordinates of the actual node
item.ActualNode.Coordinates += item.SumConstantNodes / item.NumTmpPath
# iterate through adjacent nodes of the actual node
for AdjacentItem in item.ActualNode.AdjacentNodes:
# if the adjacent node is variable add it in a new virtual node (like a tree)
if not AdjacentItem.Constant:
next_VirtualNodes.append(item.Copy(AdjacentItem))
_VirtualNodes = next_VirtualNodes
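# Small helper (sketch) to evaluate the objective being minimised, the total length of the
# edges, for any list of (node_a, node_b) pairs, each edge listed once:
def total_edge_length(edge_pairs):
    return float(sum(np.linalg.norm(a.Coordinates - b.Coordinates) for a, b in edge_pairs))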
def main():
Node.NoCoordinates = 2
# constant nodes
Wheat = Node(np.array([0, 0]))
eggs = Node(np.array([5, 40]))
hotel = Node(np.array([50, 10]))
# variables nodes
pastaProducer = Node()
restaurant = Node()
# define edges
pastaProducer.AddAdjacentNode(Wheat)
pastaProducer.AddAdjacentNode(eggs)
pastaProducer.AddAdjacentNode(restaurant)
restaurant.AddAdjacentNode(pastaProducer)
restaurant.AddAdjacentNode(eggs)
restaurant.AddAdjacentNode(hotel)
ComputeBestSpiderWeb([pastaProducer, restaurant])
print('pastaProducer: ' + str(pastaProducer.Coordinates))
print('restaurant: ' + str(restaurant.Coordinates))
if __name__ == '__main__':
main()
###Output
pastaProducer: [ 8.75 21.25]
restaurant: [21.25 23.75]
|
Pandas Data Series/main.ipynb | ###Markdown
Pandas Data Series [40 exercises]
###Code
import pandas as pd
import numpy as np
#1 Write a Pandas program to create and display a one-dimensional array-like object containing an array of data using Pandas module
data = pd.Series([9,8,7,6,5,4,3,2,1,])
print(data)
# 2. Write a Pandas program to convert a Panda module Series to Python list and it's type.
data = pd.Series([9,8,7,6,5,4,3,2,1,])
print(type(data))
lis = data.tolist()
print(type(lis))
# 3. Write a Pandas program to add, subtract, multiple and divide two Pandas Series
data1 = pd.Series([2, 4, 6, 8, 10])
data2 = pd.Series([1, 3, 5, 7, 9])
print(data1 + data2, data1 - data2, data1 * data2 ,data1 / data2)
#4 Write a Pandas program to compare the elements of the two Pandas Series.
data1 = pd.Series([2, 4, 6, 8, 10])
data2 = pd.Series([1, 3, 5, 7, 10])
print("Equal : ")
print(data1 == data2)
print("greater : ")
print(data1 > data2)
print("lesser : ")
print(data1 < data2)
#5 Write a Pandas program to convert a dictionary to a Pandas series
dic = {'a': 100, 'b': 200, 'c': 300, 'd': 400, 'e': 800}
ser = pd.Series(dic)
print(ser)
# 6. Write a Pandas program to convert a NumPy array to a Pandas series
np_arr = np.array([10, 20, 30, 40, 50])
ser = pd.Series(np_arr)
print(ser)
# 7. Write a Pandas program to change the data type of given a column or a Series.
data = pd.Series([100,200,'python',300.12,400])
data = pd.to_numeric(data,errors='coerce')
print(data)
# 8. Write a Pandas program to convert the first column of a DataFrame as a Series.
d = {'col1': [1, 2, 3, 4, 7, 11], 'col2': [4, 5, 6, 9, 5, 0], 'col3': [7, 5, 8, 12, 1,11]}
df = pd.DataFrame(data=d)
print(pd.Series(df['col1']))
# 9. Write a Pandas program to convert a given Series to an array.
data = pd.Series([100,200,'python',300.12,400])
print(np.array(data.tolist()))
# 10. Write a Pandas program to convert Series of lists to one Series
s = pd.Series([
['Red', 'Green', 'White'],
['Red', 'Black'],
['Yellow']])
print(s.apply(pd.Series).stack().reset_index(drop=True))
# 11. Write a Pandas program to sort a given Series.
data = pd.Series(['100', '200', 'python', '300.12', '400'])
data.sort_values()
# 12. Write a Pandas program to add some data to an existing Series.
data = pd.Series(['100', '200', 'python', '300.12', '400'])
data = data.append(pd.Series(['500','php']))
print(data)
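# note: Series.append was deprecated and later removed in newer pandas releases;
# the equivalent with the current API would be:
# data = pd.concat([data, pd.Series(['500', 'php'])])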
# 13. Write a Pandas program to create a subset of a given series based on value and condition.
data = pd.Series([0,1,2,3,4,5,6,7,8,9])
data = data[data < 6]
print(data)
###Output
0 0
1 1
2 2
3 3
4 4
5 5
dtype: int64
|
One-Shot Classification/One_Shot_Classification_V1.ipynb | ###Markdown
[View in Colaboratory](https://colab.research.google.com/github/Naren-Jegan/Deep-Learning-Keras/blob/master/One_Shot_Classification_V1.ipynb)

One Shot Learning on Omniglot Dataset

The [Omniglot](https://github.com/brendenlake/omniglot) dataset contains 1623 different handwritten characters from 50 different alphabets. Each of the 1623 characters was drawn online via Amazon's Mechanical Turk by 20 different people. This dataset has been a baseline for one-shot learning algorithms.

Some of the machine learning algorithms used for learning this dataset over the years are listed below in order of accuracy:

* Hierarchical Bayesian Program Learning - 95.2%
* Convolutional Siamese Net - 92.0%
* Affine model - 81.8%
* Hierarchical Deep - 65.2%
* Deep Boltzmann Machine - 62.0%
* Siamese Neural Net - 58.3%
* Simple Stroke - 35.2%
* 1-Nearest Neighbor - 21.7%

This notebook implements a [Convolutional Siamese Neural Network](https://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf) using a background set of 30 alphabets for training and evaluates on a set of 20 alphabets.
###Code
from google.colab import auth, drive
auth.authenticate_user()
drive.mount('/content/drive')
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
import math
import os
from PIL import Image, ImageFilter, ImageOps, ImageMath
import numpy.random as rnd
import pickle
from time import sleep
from copy import deepcopy
# from tf.keras.models import Sequential # This does not work!
from tensorflow.python.keras.models import Model, Sequential
from tensorflow.python.keras.layers import InputLayer, Input, Lambda
from tensorflow.python.keras.layers import Reshape, MaxPooling2D, Dropout, BatchNormalization
from tensorflow.python.keras.layers import Conv2D, Dense, Flatten
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
from tensorflow.python.keras.models import load_model
from tensorflow.python.keras import backend as K
from tensorflow.python.keras.regularizers import l2
from tensorflow.python.keras.initializers import RandomNormal
from tensorflow import test, logging
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV
logging.set_verbosity(tf.logging.ERROR)
test.gpu_device_name()
tf.__version__
one_shot_path = os.path.join("drive", "My Drive", "Colab Notebooks", "One-Shot Classification")
background_path = os.path.join(one_shot_path, "background")
evaluation_path = os.path.join(one_shot_path, "evaluation")
recognition_model_path = os.path.join(one_shot_path, "recognition_model.h5")
##creating training set
train_data = np.ndarray(shape=(964, 20, 105, 105))
train_alphabets = dict()
#for alphabet in os.listdir(background_path):
# alphabet_path = os.path.join(background_path, alphabet)
# for character in os.listdir(alphabet_path):
# character_path = os.path.join(alphabet_path, character)
# for image in os.listdir(character_path):
# index = int(image[0:4]) - 1
# writer = int(image[5:7]) - 1
# train_data[index][writer] = np.array(Image.open(os.path.join(character_path, image)))
# train_alphabets[alphabet] = index if alphabet not in train_alphabets or train_alphabets[alphabet] > index else train_alphabets[alphabet]
#with open(os.path.join("train.pickle"), 'wb') as f:
# pickle.dump([train_data, train_alphabets], f, protocol=2)
with open(os.path.join(one_shot_path, "train.pickle"), 'rb') as f:
train_data, train_alphabets = pickle.load(f, encoding='latin1')
#@title Inputs
conv_activation = 'relu' #@param ['relu', 'softplus', 'tanh', 'sigmoid'] {type:"string"}
dense_activation = 'sigmoid' #@param ['relu', 'softplus', 'tanh', 'sigmoid'] {type:"string"}
learning_rate = 1e-2 #@param {type:"number"}
conv_regularization_parameter = 1e-2 #@param {type:"number"}
dense_regularization_parameter = 1e-4 #@param {type:"number"}
batch_size = 128 #@param {type:"slider", min:0, max:1024, step:16}
batches_per_epoch = 75 #@param {type:"slider", min:0, max:100, step:5}
n_epochs = 200 #@param {type:"slider", min:25, max:500, step:25}
batch_size = 1 if batch_size == 0 else batch_size
batches_per_epoch = 1 if batches_per_epoch == 0 else batches_per_epoch
#@title Data Augmentation
image_size = 105 #@param {type:"slider", min:32, max:512, step:1}
rotation_range = 10 #@param {type:"slider", min:0, max:90, step:1}
width_shift_range = 2 #@param {type:"slider", min:0, max:10, step:0.1}
height_shift_range = 2 #@param {type:"slider", min:0, max:10, step:0.1}
shear_range = 0.3 #@param {type:"slider", min:0, max:1, step:0.1}
zoom_range = 0.2 #@param {type:"slider", min:0, max:1, step:0.01}
# this is the augmentation configuration we will use for training
datagen = ImageDataGenerator()
def transform_image(image):
return datagen.apply_transform(image.reshape((image_size, image_size, 1)),
transform_parameters =
{'theta': rnd.uniform(-rotation_range, rotation_range),
'tx' : rnd.uniform(-width_shift_range, width_shift_range),
'ty' : rnd.uniform(-height_shift_range, height_shift_range),
'shear': rnd.uniform(-shear_range, shear_range),
'zx' : rnd.uniform(-zoom_range, zoom_range),
'zy' : rnd.uniform(-zoom_range, zoom_range)
})
#generate image pairs [x1, x2] with target y = 1/0 representing same/different
def datagen_flow(datagen, val = False):
while True:
X1 = np.ndarray(shape=(batch_size, image_size, image_size, 1))
X2 = np.ndarray(shape=(batch_size, image_size, image_size, 1))
Y = np.ndarray(shape=(batch_size,))
s_alphabets = sorted(train_alphabets.values())
a_indices = list(range(len(s_alphabets)))
times = batch_size//(2*len(a_indices))
remainder = (batch_size//2)%len(a_indices)
aindices = a_indices*times + list(rnd.choice(a_indices, remainder))
rnd.shuffle(aindices)
w_range = list(range(12, 20) if val else range(12))
i = 0
for a in aindices:
end_index = (len(train_data) if a+1 == len(s_alphabets) else s_alphabets[a+1])
c_range = list(range(s_alphabets[a], end_index))
writers = rnd.choice(w_range, 2)
same = rnd.choice(c_range)
X1[2*i] = transform_image(train_data[same, writers[0]])
X2[2*i] = transform_image(train_data[same, writers[1]])
Y[2*i] = 1.0
writers = rnd.choice(w_range, 2)
diff = rnd.choice(c_range, 2)
X1[2*i + 1] = transform_image(train_data[diff[0], writers[0]])
X2[2*i + 1] = transform_image(train_data[diff[1], writers[1]])
Y[2*i + 1] = 0.0
i += 1
yield [X1, X2], Y
train_generator = datagen_flow(datagen)
# this is a similar generator, for validation data that takes only the remaining 8 writers
train_dev_generator = datagen_flow(datagen, val=True)
w_init = RandomNormal(mean=0.0, stddev=1e-2)
b_init = RandomNormal(mean=0.5, stddev=1e-2)
input_shape=(image_size, image_size, 1)
left_input = Input(input_shape)
right_input = Input(input_shape)
# Start construction of the Keras Sequential model.
convnet = Sequential()
# First convolutional layer with activation, batchnorm and max-pooling.
convnet.add(Conv2D(kernel_size=10, strides=1, filters=64, padding='valid',
input_shape=input_shape, bias_initializer=b_init,
activation=conv_activation,
name='layer_conv1', kernel_regularizer=l2(conv_regularization_parameter)))
convnet.add(BatchNormalization(axis = 3, momentum=0.5, name = 'bn1'))
convnet.add(MaxPooling2D(pool_size=2, strides=2, name="max_pooling1"))
# Second convolutional layer with activation, batchnorm and max-pooling.
convnet.add(Conv2D(kernel_size=7, strides=1, filters=128, padding='valid',
kernel_initializer=w_init, bias_initializer=b_init,
activation=conv_activation, name='layer_conv2', kernel_regularizer=l2(conv_regularization_parameter)))
convnet.add(BatchNormalization(axis = 3, name = 'bn2'))
convnet.add(MaxPooling2D(pool_size=2, strides=2, name="max_pooling2"))
# Third convolutional layer with activation, batchnorm and max-pooling.
convnet.add(Conv2D(kernel_size=4, strides=1, filters=128, padding='valid',
kernel_initializer=w_init, bias_initializer=b_init,
activation=conv_activation, name='layer_conv3', kernel_regularizer=l2(conv_regularization_parameter)))
convnet.add(BatchNormalization(axis = 3, name = 'bn3'))
convnet.add(MaxPooling2D(pool_size=2, strides=2, name="max_pooling3"))
# Fourth convolutional layer with activation, batchnorm and max-pooling.
convnet.add(Conv2D(kernel_size=4, strides=1, filters=256, padding='valid',
kernel_initializer=w_init, bias_initializer=b_init,
activation=conv_activation, name='layer_conv4', kernel_regularizer=l2(conv_regularization_parameter)))
convnet.add(BatchNormalization(axis = 3, name = 'bn4'))
convnet.add(MaxPooling2D(pool_size=2, strides=2, name="max_pooling4"))
# Flatten the 4-rank output of the convolutional layers
# to 2-rank that can be input to a fully-connected / dense layer.
convnet.add(Flatten())
# First fully-connected / dense layer with activation.
convnet.add(Dense(4096, activation=dense_activation,
kernel_initializer=w_init, bias_initializer=b_init,
name = "dense_1", kernel_regularizer=l2(dense_regularization_parameter)))
convnet.add(BatchNormalization(axis = 1, name = 'bn5'))
#call the convnet Sequential model on each of the input tensors so params will be shared
encoded_l = convnet(left_input)
encoded_r = convnet(right_input)
#layer to merge two encoded inputs with the l1 distance between them
L1_layer = Lambda(lambda tensors:K.abs(tensors[0] - tensors[1]))
#call this layer on list of two input tensors.
L1_distance = L1_layer([encoded_l, encoded_r])
prediction = Dense(1,activation='sigmoid',bias_initializer=b_init)(L1_distance)
model = Model(inputs=[left_input,right_input],outputs=prediction)
from tensorflow.python.keras.optimizers import SGD, Adam
#optimizer = SGD(lr=learning_rate, momentum=0.5)
optimizer = Adam(lr=learning_rate)
model.compile(optimizer=optimizer,
loss='binary_crossentropy',
metrics=['accuracy'])
steps_train = batches_per_epoch
steps_validation = batches_per_epoch
from tensorflow.python.keras.callbacks import ModelCheckpoint, Callback, LearningRateScheduler, ReduceLROnPlateau
model_checkpoint = ModelCheckpoint(recognition_model_path, monitor='val_loss',
save_best_only=True, period=10)
lr_scheduler = LearningRateScheduler(lambda epoch, lr: 0.99*lr)
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5,
patience=5, min_lr=1e-4)
class LearningRateFinder(Callback):
def __init__(self, steps=100, period=10):
super(LearningRateFinder, self).__init__()
self.steps = steps
self.batch_size=batch_size
self.period = period
self.best_lr = 1e-4
self.best_loss = 1000
self.find_lr = True
self.current_lr = None
self.training_path = os.path.join(one_shot_path, "training_model.h5")
self.model_weights = None
def reset_values(self):
K.set_value(self.model.optimizer.lr, self.best_lr)
self.best_lr = 1e-4
self.best_loss = 1000
self.model = load_model(self.training_path)
def on_train_begin(self, logs={}):
return
def on_train_end(self, logs={}):
return
def on_epoch_begin(self, epoch, logs={}):
self.find_lr = epoch % self.period == 0
if epoch % self.period == 1:
print("Learning Rate: " + "{0:.2g}".format(K.get_value(self.model.optimizer.lr)))
if(self.find_lr):
self.current_lr = K.get_value(self.model.optimizer.lr)
self.model.save(self.training_path)
self.model_weights = self.model.get_weights()
def on_epoch_end(self, epoch, logs={}):
if(self.find_lr):
self.reset_values()
return
def on_batch_begin(self, batch, logs={}):
if(self.find_lr):
K.set_value(self.model.optimizer.lr, 10**(2*batch/self.steps + np.log10(self.current_lr) - 1))
return
def on_batch_end(self, batch, logs={}):
if(self.find_lr):
loss = logs.get('loss')
if loss < self.best_loss:
self.best_loss = loss
self.best_lr = K.get_value(self.model.optimizer.lr)
elif loss >= 1.25*self.best_loss:
self.find_lr = False
self.reset_values()
self.model.set_weights(self.model_weights)
return
lr_finder = LearningRateFinder(steps=steps_train, period=n_epochs//4)
model.fit_generator(train_generator,
steps_per_epoch = steps_train,
epochs=n_epochs,
validation_data = train_dev_generator,
validation_steps = steps_validation,
callbacks = [model_checkpoint, lr_scheduler, reduce_lr]
)
model = load_model(recognition_model_path)
##creating test set
test_data = np.ndarray(shape=(659, 20, 105, 105))
test_alphabets = dict()
#for alphabet in os.listdir(evaluation_path):
# alphabet_path = os.path.join(evaluation_path, alphabet)
# for character in os.listdir(alphabet_path):
# character_path = os.path.join(alphabet_path, character)
# for image in os.listdir(character_path):
# index = int(image[0:4]) - 965
# writer = int(image[5:7]) - 1
# test_data[index][writer] = np.array(Image.open(os.path.join(character_path, image)))
# test_alphabets[alphabet] = index if alphabet not in test_alphabets or test_alphabets[alphabet] > index else test_alphabets[alphabet]
#with open(os.path.join("test.pickle"), 'wb') as f:
# pickle.dump([test_data, test_alphabets], f, protocol=2)
with open(os.path.join(one_shot_path, "test.pickle"), 'rb') as f:
test_data, test_alphabets = pickle.load(f, encoding='latin1')
N = 20
st_alphabets = sorted(test_alphabets.values())
correct = 0
show = True
for i in range(len(st_alphabets)):
end_index = len(test_data) if i+1 == len(st_alphabets) else st_alphabets[i+1]
c_range = list(range(st_alphabets[i],end_index))
for j in range(2):
c_list = rnd.choice(c_range, N)
w_list = rnd.choice(range(20), 2)
for c_i in range(N):
image = test_data[c_list[c_i]][w_list[0]]
X1 = np.array([image]*N).reshape((N, image_size, image_size, 1))
X2 = np.array(test_data[c_list][w_list[1]]).reshape((N, image_size, image_size, 1))
if show and c_i == 2 and i == 3:
plt.imshow(image)
plt.show()
for m in range(N):
plt.imshow(test_data[c_list[m]][w_list[1]])
plt.show()
targets = np.zeros((N,))
targets[c_i] = 1
predictions = model.predict([X1, X2])
if show and c_i == 2 and i == 3:
print(targets)
print(predictions)
show = False
if(np.argmax(predictions) == np.argmax(targets)):
correct += 1
print(str(N) + "-Way Classification Accuracy: " + "{0:.2f}".format(correct/(N*20*2)))
###Output
_____no_output_____ |
pandas/01_Getting_&_Knowing_Your_Data/World Food Facts/Exercises_with_solutions.ipynb | ###Markdown
Ex1 - Getting and knowing your Data Step 1. Go to https://www.kaggle.com/openfoodfacts/world-food-facts Step 2. Download the dataset to your computer and unzip it.
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 3. Use the csv file and assign it to a dataframe called food
###Code
food = pd.read_csv('/Users/guilhermeoliveira/Desktop/world-food-facts/FoodFacts.csv')
###Output
//anaconda/lib/python2.7/site-packages/IPython/core/interactiveshell.py:2723: DtypeWarning: Columns (0,3,5,27,36) have mixed types. Specify dtype option on import or set low_memory=False.
interactivity=interactivity, compiler=compiler, result=result)
###Markdown
Step 4. See the first 5 entries
###Code
food.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in the dataset?
###Code
food.shape #will give you both (observations/rows, columns)
food.shape[0] #will give you only the observations/rows number
###Output
_____no_output_____
###Markdown
Step 6. What is the number of columns in the dataset?
###Code
print food.shape #will give you both (observations/rows, columns)
print food.shape[1] #will give you only the columns number
#OR
food.info() #Columns: 159 entries
###Output
(65503, 159)
159
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 65503 entries, 0 to 65502
Columns: 159 entries, code to nutrition_score_uk_100g
dtypes: float64(103), object(56)
memory usage: 79.5+ MB
###Markdown
Step 7. Print the name of all the columns.
###Code
food.columns
###Output
_____no_output_____
###Markdown
Step 8. What is the name of 105th column?
###Code
food.columns[104]
###Output
_____no_output_____
###Markdown
Step 9. What is the type of the observations of the 105th column?
###Code
food.dtypes['glucose_100g']
###Output
_____no_output_____
###Markdown
Step 10. How is the dataset indexed?
###Code
food.index
###Output
_____no_output_____
###Markdown
Step 11. What is the product name of the 19th observation?
###Code
food.values[18][7]
###Output
_____no_output_____ |
machine_learning/logistic_regression/sklearn_lr.ipynb | ###Markdown
An overview of the logistic regression implementation in scikit-learn
==============================

Version information of the code used for this analysis:

```bash
~/W/g/scikit-learn ❯❯❯ git log -n 1
commit d161bfaa1a42da75f4940464f7f1c524ef53484f
Author: John B Nelson
Date: Thu May 26 18:36:37 2016 -0400
Add missing double quote (6831)
```

0. Outline

The diagram below shows how logistic regression is structured in sklearn:
###Code
from IPython.display import SVG
SVG("./res/sklearn_lr.svg")
###Output
_____no_output_____ |
Midterm_Exam_PYTHON_FUNDAMENTALS.ipynb | ###Markdown
**PROBLEM STATEMENT 1**
###Code
name = "Name: Buño, Fernando Jr, L."
studentnumber = "Student Number: 202013292"
age = "Age: 11"
birthday = "Birthday: April 11, 2001"
address = "Addres: B 8 L 1 Brazil Street Langkaan II Dasma Cavite"
course = "Course: Electrical Communication Engineering"
gwa = "GWA: 1.75"
print(name)
print(studentnumber)
print(age)
print(birthday)
print(address)
print(course)
print(gwa)
###Output
Name: Buño, Fernando Jr, L.
Student Number: 202013292
Age: 11
Birthday: April 11, 2001
Address: B 8 L 1 Brazil Street Langkaan II Dasma Cavite
Course: Electrical Communication Engineering
GWA: 1.75
###Markdown
**PROBLEM STATEMENT 2**
###Code
n = 4
answ = "Y"
print((2<n) and (n<6))
print((2<n) or (n==6))
print((not(2<n)) or (n==6))
print(not(n<6))
print((answ=="Y") or (answ=="y"))
print((answ=="Y") and (answ=="y"))
print(not(answ=="y"))
print(((2<n)and(n==5+1)) or (answ=="No"))
print((n==2)and(n==7) or (answ=="Y"))
print((n==2) and (n==7) or (answ=="Y"))
###Output
True
True
False
False
True
False
True
False
True
True
###Markdown
**PROBLEM STATEMENT 3**
###Code
x = 2
y = -3
w = 7
z = -10
print(x/y)
print(w/y/x)
print(z/y%x)
print(x%-y*w)
print(x%y)
print(z%w-y/x*5+5)
print(9-x%(2+y))
print(z//w)
print((2+y)**2)
print(w/x*2)
###Output
-0.6666666666666666
-1.1666666666666667
1.3333333333333335
14
-1
16.5
9
-2
1
7.0
|
Network.ipynb | ###Markdown
MNIST Digit Classification Neural Network

---

A neural network is a system of interconnected nodes, or artificial neurons, that performs some task by learning from a dataset and incrementally improving its own performance. These artificial neurons are organised into multiple layers, including an input layer, where data is fed forward through the network's successive layers until it produces some output in the final layer.

Networks "learn" by analyzing a dataset of training inputs, where each training example is classified by a label. Through a process called backpropagation, the network adjusts the "weights" connecting each neuron (which can be thought of as the synapses connecting neurons in a human brain) based on how close the output produced for each training example is to the actual classification of that example. Biases for each neuron are also updated accordingly.

The MNIST Dataset

This project produces a neural network that classifies images of handwritten digits ranging from 0-9. These images are gathered from the MNIST database - a large set of images of handwritten digits commonly used for training neural networks like this one. This is my first attempt at building a neural network from scratch and I plan to continually update this project as I improve my code.

Each image is input as a 784-dimensional vector, with each vector component representing the greyscale value of a pixel in the image. The network has one hidden layer composed of 25 neurons and a final output layer of 10 neurons. Output in the network can be viewed as the "activation" of these output neurons, or the degree to which a neuron is affected by the input of the system. For example, with an input representing the digit 0, the output neuron of index 0 (so, the first neuron) would have a higher value (or activation) associated with it, while other neurons would have comparably lower activations.

Here are some other important features about my network:

- It uses the sigmoid activation function
- The number of epochs (each epoch is a mini-batch of 100 training examples) and the learning rate can be customised. These values are set to 800 and 1 by default.
- Currently, my network has an average training accuracy of 85%.

---

The following code implements my neural network
###Code
import numpy as np
import math
# Sigmoid activation function returns a value between 0 and 1
# based on the degree to which the input varies from 0
def sigmoid(x):
if x.size == 1:
return 1 / (1 + math.exp(-x))
else:
return np.array([(1 / (1 + math.exp(-i))) for i in x])
def sigmoid_derivative(x):
if x.size == 1:
return math.exp(-x) / ((1 + math.exp(-x))**2)
else:
return np.array([((math.exp(-i))/(1 + math.exp(-i))**2) for i in x])
class NNetwork:
# The network is initialised with the training and testing sets as input
def __init__(self, X_train, Y_train, X_test, Y_test):
self.X_train = X_train
self.Y_train = Y_train
self.X_test = X_test
self.Y_test = Y_test
self.input = np.zeros(784)
self.output = np.zeros(10)
self.y = np.zeros(10)
# Weights and biases are initialised as random values between -1 and 1
self.weights2 = np.random.uniform(low=-1.0, high=1.0, size=(25,784))
self.weights3 = np.random.uniform(low=-1.0, high=1.0, size=(10,25))
self.bias2 = np.random.uniform(low=-1.0, high=1.0, size=25)
self.bias3 = np.random.uniform(low=-1.0, high=1.0, size=10)
def train(self, epochs, lr):
for i in range(epochs):
d_weights2 = np.zeros(self.weights2.shape)
d_weights3 = np.zeros(self.weights3.shape)
d_bias2 = np.zeros(self.bias2.shape)
d_bias3 = np.zeros(self.bias3.shape)
for j in range(100):
self.input = self.X_train[(i * 100) + j,:]
self.y[self.Y_train[(i * 100) + j]] = 1
self.feedforward()
updates = self.backprop() # The gradient of the cost function
d_weights2 += updates[0]
d_weights3 += updates[1]
d_bias2 += updates[2]
d_bias3 += updates[3]
self.y = np.zeros(10)
d_weights2 /= 100
d_weights3 /= 100
d_bias2 /= 100
d_bias3 /= 100
# The average negative value of the change in the cost with respect to the change
# in each weight & bias for 100 training examples is calculated and added to the
# current value of each weight and bias
self.weights2 += -1 * lr * d_weights2
self.weights3 += -1 * lr * d_weights3
self.bias2 += -1 * lr * d_bias2
self.bias3 += -1 * lr * d_bias3
print("Training complete!")
# This function classifies a single image
def classify(self, x):
self.input = x
self.feedforward()
return np.argmax(self.output)
def test(self):
acc = 0
for i in range(10000):
x = X_test[i,:]
y = Y_test[i]
yHAT = self.classify(x)
if y == yHAT:
acc += 1
print("Testing accuracy: " + str((acc / 10000) * 100) + "%")
# This function uses the sigmoid activation function to
# feed an input forward, producing the values of the neurons
# in the second layer and the final layer
def feedforward(self):
self.layer2 = sigmoid(np.dot(self.input, self.weights2.T) + self.bias2)
self.output = sigmoid(np.dot(self.layer2, self.weights3.T) + self.bias3)
# This function calculates the gradient of the cost function, where each
# component of the cost gradient is associated with a single weight or bias
def backprop(self):
d_weights2 = np.zeros(self.weights2.shape)
d_weights3 = np.zeros(self.weights3.shape)
d_bias2 = np.zeros(self.bias2.shape)
d_bias3 = np.zeros(self.bias3.shape)
d_weights2 = self.input * (sigmoid_derivative(np.dot(self.input, self.weights2.T) + self.bias2)[:, np.newaxis] * np.sum((self.weights3.T * (sigmoid_derivative(np.dot(self.layer2, self.weights3.T) + self.bias3)) * 2 * (self.output - self.y)), axis=1)[:, np.newaxis])
d_weights3 = np.tile(self.layer2,(10,1)) * sigmoid_derivative(np.dot(self.layer2, self.weights3.T) + self.bias3)[:, np.newaxis] * (2 * (self.output - self.y))[:, np.newaxis]
d_bias2 = sigmoid_derivative(np.dot(self.input, self.weights2.T) + self.bias2) * (d_bias2 + np.sum((self.weights3.T * (sigmoid_derivative(np.dot(self.layer2, self.weights3.T) + self.bias3)) * 2 * (self.output - self.y)), axis=1))
d_bias3 = sigmoid_derivative(np.dot(self.layer2, self.weights3.T) + self.bias3) * (d_bias3 + 2 * (self.output - self.y))
return d_weights2, d_weights3, d_bias2, d_bias3
###Output
_____no_output_____
###Markdown
The following code downloads the mnist dataset and converts it to input for the network. This code is based on hsjeong5's github project [MNIST-for-Numpy](https://github.com/hsjeong5/MNIST-for-Numpy).
###Code
import mnist
mnist.init()
X_train, Y_train, X_test, Y_test = mnist.load()
X_train = X_train / 255
X_test = X_test / 255
###Output
Downloading train-images-idx3-ubyte.gz...
Downloading t10k-images-idx3-ubyte.gz...
Downloading train-labels-idx1-ubyte.gz...
Downloading t10k-labels-idx1-ubyte.gz...
Download complete.
Save complete.
###Markdown
The following code uses the above input data to train & test the accuracy of a neural network
###Code
network = NNetwork(X_train, Y_train, X_test, Y_test)
network.train(600, 1)
network.test()
###Output
Training complete!
Testing accuracy: 81.8%
###Markdown
Run the code below to test my network on four random images
###Code
import matplotlib.pyplot as plt
imgIndex = np.random.randint(low=0, high=10000, size=4)
fig = plt.figure(figsize=(8,6))
fig.subplots_adjust(wspace=0.3, hspace=0.3)
ax1 = fig.add_subplot(221)
ax1.set_title("Testing image index " + str(imgIndex[0]))
plt.imshow(X_test[imgIndex[0]].reshape(28, 28), cmap='gray')
ax2 = fig.add_subplot(222)
ax2.set_title("Testing image index " + str(imgIndex[1]))
plt.imshow(X_test[imgIndex[1]].reshape(28, 28), cmap='gray')
ax3 = fig.add_subplot(223)
ax3.set_title("Testing image index " + str(imgIndex[2]))
plt.imshow(X_test[imgIndex[2]].reshape(28, 28), cmap='gray')
ax4 = fig.add_subplot(224)
ax4.set_title("Testing image index " + str(imgIndex[3]))
plt.imshow(X_test[imgIndex[3]].reshape(28, 28), cmap='gray')
print("Image " + str(imgIndex[0]) + " value: " + str(Y_test[imgIndex[0]]) + " Classified by network as: " + str(network.classify(X_test[imgIndex[0],:])))
print("Image " + str(imgIndex[1]) + " value: " + str(Y_test[imgIndex[1]]) + " Classified by network as: " + str(network.classify(X_test[imgIndex[1],:])))
print("Image " + str(imgIndex[2]) + " value: " + str(Y_test[imgIndex[2]]) + " Classified by network as: " + str(network.classify(X_test[imgIndex[2],:])))
print("Image " + str(imgIndex[3]) + " value: " + str(Y_test[imgIndex[3]]) + " Classified by network as: " + str(network.classify(X_test[imgIndex[3],:])))
print()
plt.show()
###Output
Image 5533 value: 7 Classified by network as: 7
Image 4278 value: 0 Classified by network as: 0
Image 5643 value: 2 Classified by network as: 2
Image 9791 value: 0 Classified by network as: 0
###Markdown
Load IMDB Data
###Code
actors = getFile("/Users/tgadfort/Downloads/Dropbox/ACTORS.p", version=2)
actresses = getFile("/Users/tgadfort/Downloads/Dropbox/ACTRESSES.p", version=2)
movieActors = {}
for actor, actormovies in actors.items():
for actormovie in actormovies:
movieyear = "{0} [{1}]".format(actormovie[1], actormovie[0])
if movieActors.get(movieyear) is None:
movieActors[movieyear] = {}
movieActors[movieyear][actor] = True
for actor, actormovies in actresses.items():
for actormovie in actormovies:
movieyear = "{0} [{1}]".format(actormovie[1], actormovie[0])
if movieActors.get(movieyear) is None:
movieActors[movieyear] = {}
movieActors[movieyear][actor] = True
for movieyear in movieActors.keys():
movieActors[movieyear] = list(movieActors[movieyear].keys())
actorDF = DataFrame(Series(movieActors))
actorDF.reset_index(inplace=True)
actorDF.columns = ["Movie", "Actors"]
actorDF["Year"] = actorDF["Movie"].apply(getYear)
actorDF.head()
matchedTest.head()
#matchedMovies = matchedTest[~matchedTest["Actors"].isna()]["Movie"]
movieTest = movieDF[movieDF["Year"].isin([2009, 2010, 2011])]
#actorTest = actorDF[actorDF["Year"].isin([2009, 2010, 2011])]
actorTest[actorTest["Movie"].str.contains("Precious")].loc[503153]["Actors"]
actorTest
movieTest = movieDF[movieDF["Year"].isin([2009, 2010, 2011])]
actorTest = actorDF[actorDF["Year"].isin([2009, 2010, 2011])]
matchedTest = movieTest.merge(actorTest, on=["Movie", "Year"], how='left').copy()
matchedMovies = matchedTest[~matchedTest["Actors"].isna()]["Movie"]
actorTestAvailable = actorTest[~actorTest["Movie"].isin(matchedMovies)].copy()
for cutoff in [0.90, 0.85, 0.8]:
print("Matching {0}".format(cutoff))
matches = matchedTest[matchedTest['Actors'].isna()]["Movie"].apply(lambda x: difflib.get_close_matches(x, actorTestAvailable['Movie'], n=1, cutoff=0.90))
matches = matches.apply(lambda x: None if len(x) == 0 else x[0])
dm = DataFrame(matches)
dm.columns = ["Match"]
dm = dm.join(DataFrame(matchedTest.iloc[dm.index]["Movie"]))
dm.reset_index(inplace=True)
dm = dm.merge(actorTestAvailable, left_on='Match', right_on='Movie')
dm.index = dm["index"]
matchedTest.loc[dm.index, "Actors"] = dm["Actors"]
matchedMovies = matchedTest[~matchedTest["Actors"].isna()]["Movie"]
print(matchedMovies.shape)
actorTestAvailable = actorTest[~actorTest["Movie"].isin(matchedMovies)].copy()
matches = matchedTest[matchedTest['Actors'].isna()]["Movie"].apply(lambda x: difflib.get_close_matches(x, actorTestAvailable['Movie'], n=1, cutoff=0.85))
matches = matches.apply(lambda x: None if len(x) == 0 else x[0])
dm = DataFrame(matches)
dm.columns = ["Match"]
dm = dm.join(DataFrame(matchedTest.iloc[dm.index]["Movie"]))
dm.reset_index(inplace=True)
dm = dm.merge(actorTestAvailable, left_on='Match', right_on='Movie')
dm.index = dm["index"]
matchedTest.loc[dm.index, "Actors"] = dm["Actors"]
matchedMovies = matchedTest[~matchedTest["Actors"].isna()]["Movie"]
print(matchedMovies.shape)
actorTestAvailable = actorTest[~actorTest["Movie"].isin(matchedMovies)].copy()
print(actorTestAvailable.shape)
actorTestAvailable.head()
matchedTest
tmpdm
matchedTest
#matchedTest[matchedTest["Actors"].isna()]["Actors"] = list(dm["Actors"])#.merge(dm, on=["Movie"], how='outer').copy()
#len(list(dm["Actors"]))
matchedTest[matchedTest["Actors"].isna()]["Actors"].shape
DataFrame(matches).merge(actorTestAvailable, on="Movie")
matches = [None if len(x) == 0 else x[0] for x in actorTest["Match"].tolist()]
actorTestAvailable["Movie"].isin(matches)
actorTest['Name'].describe()
actorTest[actorTest["Name"] == "Remember Me [2010]"]
lambda x: difflib.get_close_matches(x, df1['name'])[0])
#movieActors = {k: str(v) for k,v in movieActors.items()}
actorDF = DataFrame(Series(movieActors))
actorDF.columns = ["Actors"]
name = Series(actorDF.index)
actorDFName = name.apply(getName)
actorDFName.index = actorDF.index
actorDF["Name"] = actorDFName
df.head()
from searchUtils import findNearest
movieLists = {}
for x in list(df.index):
year = int(x[-5:-1])
name = x[:-7]
for y in range(year-2, year+3):
if movieLists.get(y) is None:
movieLists[y] = {}
movieLists[y][x] = True
for year in movieLists.keys():
movieLists[year] = list(movieLists[year].keys())
import numpy as np
import pandas as pd
from multiprocessing import cpu_count, Pool
cores = cpu_count() #Number of CPU cores on your system
partitions = cores #Define as many partitions as you want
def parallelize(data, func):
data_split = np.array_split(data, partitions)
pool = Pool(cores)
data = pd.concat(pool.map(func, data_split))
pool.close()
pool.join()
return data
def findit(movieyear):
try:
year = int(movieyear[-5:-1])
except:
return movieyear
if movieLists.get(year) is None:
return movieyear
nearest = findNearest(item=movieyear, ilist=movieLists[year], num=1, cutoff=0.9)
if len(nearest) == 0:
return movieyear
else:
return nearest[0]
#retvals = []
#for i,movieyear in enumerate(list(actorDF.index)):
#actorDF.index = retvals
test = Series(actorDF.index).apply(findit)
#test = parallelize(Series(actorDF.index), findit);
test
#test = df.join(actorDF, how='left')
#test.tail(1000)
###Output
_____no_output_____ |
Proyecto 3. Equipo.ipynb | ###Markdown
Project 3

- Carlos González Mendoza
- Raul Enrique González Paz
- Juan Andres Serrano Rivera

For this project we will examine the adjusted closing prices of **ADIDAS**, **NIKE** and **UNDER ARMOUR**, since they are among the largest sports companies in the world and a trio of companies with great impact worldwide.
###Code
# Primero importamos todas las librerias que ocuparemos.
import pandas as pd
pd.core.common.is_list_like = pd.api.types.is_list_like
import pandas_datareader.data as web
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Next we create the function we will use to download the data
def get_closes(tickers, start_date=None, end_date=None, freq='d'):
# By default the start date is January 1, 2010, and the end date defaults to today.
# The default sampling frequency is 'd' (daily).
# Create an empty DataFrame of prices, indexed by the dates we need.
closes = pd.DataFrame(columns = tickers, index=web.YahooDailyReader(symbols=tickers[0], start=start_date, end=end_date, interval=freq).read().index)
# Once the DataFrame is created, add each price series using YahooDailyReader
for ticker in tickers:
df = web.YahooDailyReader(symbols=ticker, start=start_date, end=end_date, interval=freq).read()
closes[ticker]=df['Adj Close']
closes.index_name = 'Date'
closes = closes.sort_index()
return closes
# Instruments to download (Nike, Adidas, Under Armour)
names = ['NKE', 'ADDYY', 'UAA']
# Dates: beginning of 2011 to end of 2017
start, end = '2011-01-01', '2017-12-31'
###Output
_____no_output_____
###Markdown
- Once the above is done, we can obtain the adjusted prices of **ADIDAS**, **NIKE** and **UNDER ARMOUR**
###Code
closes = get_closes(names, start, end)
closes
closes.plot(figsize=(15,10))
closes.describe()
###Output
_____no_output_____
###Markdown
- We will obtain the daily returns
###Code
closes.shift()
###Output
_____no_output_____
###Markdown
- The daily returns are then calculated as follows.
###Code
rend = ((closes-closes.shift())/closes.shift()).dropna()
rend
###Output
_____no_output_____
###Markdown
- And the returns are plotted as follows.
###Code
rend.plot(figsize=(15,10))
###Output
_____no_output_____
###Markdown
- We compute the mean and standard deviation for each of the companies.
###Code
mu_NKE, mu_ADDYY, mu_UAA = rend.mean().NKE, rend.mean().ADDYY, rend.mean().UAA
s_NKE, s_ADDYY, s_UAA = rend.std().NKE, rend.std().ADDYY, rend.std().UAA
###Output
_____no_output_____
###Markdown
- We simulate 10,000 scenarios of daily returns for 2018 for the 3 sports companies.
###Code
def rend_sim(mu, sigma, ndays, nscen, start_date):
dates = pd.date_range(start=start_date,periods=ndays)
return pd.DataFrame(data = sigma*np.random.randn(ndays, nscen)+mu, index = dates)
simrend_NKE = rend_sim(mu_NKE, s_NKE, 252, 10000, '2018-01-01')
simrend_ADDYY = rend_sim(mu_ADDYY, s_ADDYY, 252, 10000, '2018-01-01')
simrend_UAA = rend_sim(mu_UAA, s_UAA, 252, 10000, '2018-01-01')
simcloses_NKE = closes.iloc[-1].NKE*((1+simrend_NKE).cumprod())
simcloses_ADDYY = closes.iloc[-1].ADDYY*((1+simrend_ADDYY).cumprod())
simcloses_UAA = closes.iloc[-1].UAA*((1+simrend_UAA).cumprod())
###Output
_____no_output_____
###Markdown
- We calculate the probability that the price increases by 5% over the next year
###Code
K_NKE = (1+0.05)*closes.iloc[-1].NKE
prob_NKE = pd.DataFrame((simcloses_NKE>K_NKE).sum(axis=1)/10000)
prob_NKE.plot(figsize=(10,6), grid=True, color = 'r');
K_ADDYY = (1+0.05)*closes.iloc[-1].ADDYY
prob_ADDYY = pd.DataFrame((simcloses_ADDYY>K_ADDYY).sum(axis=1)/10000)
prob_ADDYY.plot(figsize=(10,6), grid=True, color = 'g');
K_UAA = (1+0.05)*closes.iloc[-1].UAA
prob_UAA = pd.DataFrame((simcloses_UAA>K_UAA).sum(axis=1)/10000)
prob_UAA.plot(figsize=(10,6), grid=True);
###Output
_____no_output_____ |
01-Lesson-Plans/23-Project-4/1/Activities/01-Evr_MNIST/Unsolved/MNIST.ipynb | ###Markdown
Dependencies
###Code
# Dependencies to Visualize the model
%matplotlib inline
from IPython.display import Image, SVG
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(0)
# Filepaths, numpy, and Tensorflow
import os
import numpy as np
import tensorflow as tf
# Sklearn scaling
from sklearn.preprocessing import MinMaxScaler
###Output
_____no_output_____
###Markdown
Keras Specific Dependencies
###Code
# Keras
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.layers import Dense
from tensorflow.keras.datasets import mnist
###Output
_____no_output_____
###Markdown
Loading and Preprocessing our Data Load the MNIST Handwriting Dataset from Keras
###Code
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print("Training Data Info")
print("Training Data Shape:", X_train.shape)
print("Training Data Labels Shape:", y_train.shape)
###Output
_____no_output_____
###Markdown
Plot the first digit
###Code
# Plot the first image from the dataset
plt.imshow(X_train[0,:,:], cmap=plt.cm.Greys)
###Output
_____no_output_____
###Markdown
Each Image is a 28x28 Pixel greyscale image with values from 0 to 255
###Code
# Our image is an array of pixels ranging from 0 to 255
X_train[0, :, :]
###Output
_____no_output_____
###Markdown
For training a model, we want to flatten our data into rows of 1D image arrays
###Code
# We want to flatten our image of 28x28 pixels to a 1D array of 784 pixels
ndims = X_train.shape[1] * X_train.shape[2]
X_train = X_train.reshape(X_train.shape[0], ndims)
X_test = X_test.reshape(X_test.shape[0], ndims)
print("Training Shape:", X_train.shape)
print("Testing Shape:", X_test.shape)
###Output
_____no_output_____
###Markdown
Scaling and NormalizationWe use Sklearn's MinMaxScaler to normalize our data between 0 and 1
###Code
# Next, we normalize our training data to be between 0 and 1
scaler = MinMaxScaler().fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
# Alternative way to normalize this dataset since we know that the max pixel value is 255
# X_train = X_train.astype("float32")
# X_test = X_test.astype("float32")
# X_train /= 255.0
# X_test /= 255.0
###Output
_____no_output_____
###Markdown
One-Hot EncodingWe need to one-hot encode our integer labels using the `to_categorical` helper function
###Code
# Our Training and Testing labels are integer encoded from 0 to 9
y_train[:20]
# We need to convert our target labels (expected values) to categorical data
num_classes = 10
y_train = to_categorical(y_train, num_classes)
y_test = to_categorical(y_test, num_classes)
# Original label of `5` is one-hot encoded as `0000010000`
y_train[0]
###Output
_____no_output_____
###Markdown
Building our ModelIn this example, we are going to build a Deep Multi-Layer Perceptron model with 2 hidden layers. Our first step is to create an empty sequential model
###Code
# Create an empty sequential model
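# One possible solution (sketch): instantiate the Sequential container imported above
model = Sequential()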
###Output
_____no_output_____
###Markdown
Next, we add our first hidden layerIn the first hidden layer, we must also specify the dimension of our input layer. This will simply be the number of elements (pixels) in each image.
###Code
# Add the first layer where the input dimensions are the 784 pixel values
# We can also choose our activation function. `relu` is a common choice.
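# One possible solution (sketch); the 100 units here are an assumption, only the 784-pixel input size is given above
model.add(Dense(units=100, activation='relu', input_dim=ndims))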
###Output
_____no_output_____
###Markdown
We then add a second hidden layer with 100 densely connected nodesA dense layer is when every node from the previous layer is connected to each node in the current layer.
###Code
# Add a second hidden layer
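# One possible solution (sketch), using the 100 densely connected nodes described above
model.add(Dense(units=100, activation='relu'))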
###Output
_____no_output_____
###Markdown
Our final output layer uses a `softmax` activation function for logistic regression.We also need to specify the number of output classes. In this case, the number of digits that we wish to classify.
###Code
# Add our final output layer where the number of nodes
# corresponds to the number of y labels
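# One possible solution (sketch): a softmax output layer over the 10 digit classes
model.add(Dense(units=num_classes, activation='softmax'))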
###Output
_____no_output_____
###Markdown
Model Summary
###Code
# We can summarize our model
model.summary()
###Output
_____no_output_____
###Markdown
Compile and Train our ModelNow that we have our model architecture defined, we must compile the model using a loss function and optimizer. We can also specify additional training metrics such as accuracy.
###Code
# Compile the model
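# One possible solution (sketch); the choice of the `adam` optimizer is an assumption, any suitable optimizer works
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])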
###Output
_____no_output_____
###Markdown
Finally, we train our model using our training data Training consists of updating our weights using our optimizer and loss function. In this example, we choose 10 iterations (loops) of training that are called epochs.We also choose to shuffle our training data and increase the detail printed out during each training cycle.
###Code
# Fit (train) the model
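# One possible solution (sketch); 10 epochs, shuffling, and verbose output follow the description above, but are adjustable
model.fit(X_train, y_train, epochs=10, shuffle=True, verbose=2)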
###Output
_____no_output_____
###Markdown
Saving and Loading modelsWe can save our trained models using the HDF5 binary format with the extension `.h5`
###Code
# Save the model
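# One possible solution (sketch); the filename is an arbitrary choice, not given in the activity
model.save("mnist_trained.h5")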
# Load the model
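# One possible solution (sketch), reloading from the same hypothetical file
from tensorflow.keras.models import load_model
model = load_model("mnist_trained.h5")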
###Output
_____no_output_____
###Markdown
Evaluating the ModelWe use our testing data to validate our model. This is how we determine the validity of our model (i.e. the ability to predict new and previously unseen data points)
###Code
# Evaluate the model using the training data
model_loss, model_accuracy = model.evaluate(X_test, y_test, verbose=2)
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
###Output
_____no_output_____
###Markdown
Making PredictionsWe can use our trained model to make predictions using `model.predict`
###Code
# Grab just one data point to test with
test = np.expand_dims(X_train[0], axis=0)
test.shape
plt.imshow(scaler.inverse_transform(test).reshape(28, 28), cmap=plt.cm.Greys)
# Make a prediction. The result should be 0000010000000 for a 5
model.predict(test).round()
# Grab just one data point to test with
test = np.expand_dims(X_train[2], axis=0)
test.shape
plt.imshow(scaler.inverse_transform(test).reshape(28, 28), cmap=plt.cm.Greys)
# Make a prediction. The resulting class should match the digit
print(f"One-Hot-Encoded Prediction: {model.predict(test).round()}")
print(f"Predicted class: {model.predict_classes(test)}")
###Output
_____no_output_____
###Markdown
Import a Custom Image
###Code
filepath = "../Images/test8.png"
# Import the image using the `load_img` function in keras preprocessing
# Convert the image to a numpy array
# Scale the image pixels by 255 (or use a scaler from sklearn here)
# Flatten into a 1x28*28 array
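# One possible solution (sketch); loading the file as a 28x28 grayscale image is an assumption about the test image
from tensorflow.keras.preprocessing.image import load_img, img_to_array
img = load_img(filepath, color_mode="grayscale", target_size=(28, 28))
img = img_to_array(img)        # numpy array with shape (28, 28, 1)
img = img / 255.0              # scale pixel values to the 0-1 range
img = img.reshape(1, 28 * 28)  # flatten to a single 784-pixel row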
plt.imshow(img.reshape(28, 28), cmap=plt.cm.Greys)
# Invert the pixel values to match the original data
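# One possible approach (sketch): flip the 0-1 pixel values so the digit's foreground/background match the MNIST convention
img = 1 - img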
# Make predictions
model.predict_classes(img)
###Output
_____no_output_____ |
PrimalCore/doc/homogeneous_table/mldataset_userguide.ipynb | ###Markdown
.. _MLDataSet_user_guide:
###Code
# Example of building a MLDataSet
###Output
_____no_output_____
###Markdown
We provide a simple workflow to build a features dataset using the class :class:`dataset.MLDataSet` (see the module :mod:`~PrimalCore.homogeneous_table.dataset.MLDataSet` for the full API).

.. currentmodule:: PrimalCore.homogeneous_table.dataset

.. contents:: :local:

.. toctree::
###Code
## Building a Features MLDataSet from a Table
###Output
_____no_output_____
###Markdown
we follow the approach in `Table_user_guide`_ to build a catalog using the :class:`PrimalCore.heterogeneous_table.table.Table`
###Code
from PrimalCore.heterogeneous_table.table import Table
from ElementsKernel.Path import getPathFromEnvVariable
ph_catalog=getPathFromEnvVariable('PrimalCore/test_table.fits','ELEMENTS_AUX_PATH')
catalog=Table.from_fits_file(ph_catalog,fits_ext=0)
catalog.keep_columns(['FLUX*','reliable_S15','STAR','AGN','MASKED','FLAG_PHOT'],regex=True)
###Output
_____no_output_____
###Markdown
We now build a Features dataset using the :class:`dataset.MLDataSet` class from the :mod:`PrimalCore.homogeneous_table.dataset` python module.
###Code
First we import the classes and the functions we need
###Output
_____no_output_____
###Markdown
from PrimalCore.homogeneous_table.dataset import MLDataSet
###Code
.. note::
It is worth noting that for the :class:`MLDataSet` class, most of the functions to modify the dataset
content have been implemented as functions in separate python modules. This is made on purpose, and shows a
different approach compared to that used for the :class:`PrimalCore.heterogeneous_table.table.Table` class, where
the functions are implemented as methods of the same class.
To build a MLDataSet directly from a Table we can use the classmethod :func:`MLDataSet.new_from_table`
###Output
_____no_output_____
###Markdown
dataset=MLDataSet.new_from_table(catalog)
print dataset.features_names
###Code
.. note::
as you can see, the **__original_entry_ID__** is not present among the features, indeed it is used only to track
the original catalog IDs, but it is present as separate member of the dataset (we print onlty the 10 first
elements)
###Output
_____no_output_____
###Markdown
print dataset.features_original_entry_ID[1:10]
###Code
and in this way it **safely** cannot be used as a feature.
## Building a Features MLDataSet from a FITS file
###Output
_____no_output_____
###Markdown
To build a MLDataSet directly from a FITS file we can use the classmethod :func:`MLDataSet.new_from_fits_file`
###Code
dataset_from_file=MLDataSet.new_from_fits_file(ph_catalog,fits_ext=0,\
use_col_names_list=['FLUX*','reliable_S15','STAR','AGN','MASKED','FLAG_PHOT'],\
regex=True)
print dataset_from_file.features_names
###Output
['FLUX_G_1', 'FLUX_G_2', 'FLUX_G_3', 'FLUX_R_1', 'FLUX_R_2', 'FLUX_R_3', 'FLUX_I_1', 'FLUX_I_2', 'FLUX_I_3', 'FLUX_VIS', 'FLUX_Z_1', 'FLUX_Z_2', 'FLUX_Z_3', 'FLUX_Y_1', 'FLUX_Y_2', 'FLUX_Y_3', 'FLUX_J_1', 'FLUX_J_2', 'FLUX_J_3', 'FLUX_H_1', 'FLUX_H_2', 'FLUX_H_3', 'FLUXERR_G_1', 'FLUXERR_G_2', 'FLUXERR_G_3', 'FLUXERR_R_1', 'FLUXERR_R_2', 'FLUXERR_R_3', 'FLUXERR_I_1', 'FLUXERR_I_2', 'FLUXERR_I_3', 'FLUXERR_VIS', 'FLUXERR_Z_1', 'FLUXERR_Z_2', 'FLUXERR_Z_3', 'FLUXERR_Y_1', 'FLUXERR_Y_2', 'FLUXERR_Y_3', 'FLUXERR_J_1', 'FLUXERR_J_2', 'FLUXERR_J_3', 'FLUXERR_H_1', 'FLUXERR_H_2', 'FLUXERR_H_3', 'FLUX_RADIUS_DETECT', 'reliable_S15', 'STAR', 'AGN', 'MASKED', 'FLAG_PHOT']
###Markdown
Column selection using `use_col_names_list` in the factories
###Code
Columns can be selected using the `use_col_names_list` parameter in the classmethod factories :func:`MLDataSet.new_from_table` and :func:`MLDataSet.new_from_fits_file`
###Output
_____no_output_____
###Markdown
dataset=MLDataSet.new_from_table(catalog,use_col_names_list=['FLUX*','reliable_S15','STAR','AGN','MASKED','FLAG_PHOT'],\
regex=True)
print dataset.features_names
###Code
### Using dataset_handler functions
###Output
_____no_output_____
###Markdown
Or, columns can be selected using specific selection functions, from the :mod:`~PrimalCore.homogeneous_table.dataset_handler` module
###Code
from PrimalCore.homogeneous_table.dataset_handler import drop_features
from PrimalCore.homogeneous_table.dataset_handler import keep_features
###Output
_____no_output_____
###Markdown
For example, we decide to drop columns with names matching the expression "FLUX\*1\*" by using :func:`~PrimalCore.homogeneous_table.dataset_handler.drop_features`
###Code
drop_features(dataset,['FLUX*1*'])
dataset.features_names
###Output
| features initial Rows,Cols= 1000 50
| removing features ['FLUX_G_1', 'FLUX_R_1', 'FLUX_I_1', 'FLUX_Z_1', 'FLUX_Y_1', 'FLUX_J_1', 'FLUX_H_1', 'FLUXERR_G_1', 'FLUXERR_R_1', 'FLUXERR_I_1', 'FLUXERR_Z_1', 'FLUXERR_Y_1', 'FLUXERR_J_1', 'FLUXERR_H_1']
| features final Rows,Cols= 1000 36
###Markdown
Further, we can decide to keep only columns with names matching the regular expression "FLUX\*2\*" by using the :func:`~PrimalCore.homogeneous_table.dataset_handler.keep_features` function from the :mod:`PrimalCore.homogeneous_table.dataset_handler` package
###Code
keep_features(dataset,['FLUX*2*'],regex=True)
print dataset.features_names
###Output
| features initial Rows,Cols= 1000 36
| removing features ['FLUX_G_3', 'FLUX_R_3', 'FLUX_I_3', 'FLUX_VIS', 'FLUX_Z_3', 'FLUX_Y_3', 'FLUX_J_3', 'FLUX_H_3', 'FLUXERR_G_3', 'FLUXERR_R_3', 'FLUXERR_I_3', 'FLUXERR_VIS', 'FLUXERR_Z_3', 'FLUXERR_Y_3', 'FLUXERR_J_3', 'FLUXERR_H_3', 'FLUX_RADIUS_DETECT', 'reliable_S15', 'STAR', 'AGN', 'MASKED', 'FLAG_PHOT']
| features final Rows,Cols= 1000 14
['FLUX_G_2', 'FLUX_R_2', 'FLUX_I_2', 'FLUX_Z_2', 'FLUX_Y_2', 'FLUX_J_2', 'FLUX_H_2', 'FLUXERR_G_2', 'FLUXERR_R_2', 'FLUXERR_I_2', 'FLUXERR_Z_2', 'FLUXERR_Y_2', 'FLUXERR_J_2', 'FLUXERR_H_2']
###Markdown
Adding features
###Code
And finally we can add a new feature with the :func:`~PrimalCore.homogeneous_table.dataset_handler.add_features` function
We can add a single feature:
###Output
_____no_output_____
###Markdown
from PrimalCore.homogeneous_table.dataset_handler import add_features
test_feature=dataset.get_feature_by_name('FLUXERR_H_2')**2
add_features(dataset,'test',test_feature)
dataset.features_names
###Code
Or we can add a 2dim array of features
###Output
_____no_output_____
###Markdown
test_feature_2dim=np.zeros((dataset.features_N_rows,5))
test_feature_2dim_names=['a','b','c','d','e']
add_features(dataset,test_feature_2dim_names,test_feature_2dim)
dataset.features_names
###Code
We can think of a more meaningful example, i.e. we want to add flux ratios. Let's start by defining the list of
contiguous bands for the flux evaluation
###Output
_____no_output_____
###Markdown
flux_bands_list_2=['FLUX_G_2','FLUX_R_2','FLUX_I_2','FLUX_Z_2','FLUX_Y_2','FLUX_J_2','FLUX_VIS','FLUX_VIS','FLUX_VIS']
flux_bands_list_1=['FLUX_R_2','FLUX_I_2','FLUX_Z_2','FLUX_Y_2','FLUX_J_2','FLUX_H_2','FLUX_Y_2','FLUX_J_2','FLUX_H_2']
###Code
we import the module where we have defined the FluxRatio class (:mod:`PrimalCore.phz_tools.photometry`)
###Output
_____no_output_____
###Markdown
from PrimalCore.phz_tools.photometry import FluxRatio
for f1,f2 in zip(flux_bands_list_1,flux_bands_list_2):
f1_name=f1.split('_')[1]
f2_name=f2.split('_')[1]
if f1 in dataset.features_names and f2 in dataset.features_names:
f=FluxRatio('F_%s'%(f2_name+'-'+f1_name),f1,f2,features=dataset)
add_features(dataset,f.name,f.values)
###Code
.. note::
Note that in this example we skipped the selection CLEAN=" (FLAG_PHOT == 0) & (MASKED == 0) & (STAR == 0) &
(AGN == 0) & (reliable_S15==1)", so we have entries with flux values that are zero, and this results in the
corresponding warning message due to zero division
###Output
_____no_output_____
###Markdown
dataset.features_names
###Code
## Operations on rows
### Filtering NaN/Inf with dataset_preprocessing functions
###Output
_____no_output_____
###Markdown
We can get rid of the NAN/INF rows using the :func:`~PrimalCore.preprocessing.dataset_preprocessing.drop_nan_inf` function
###Code
from PrimalCore.preprocessing.dataset_preprocessing import drop_nan_inf
drop_nan_inf(dataset)
###Output
| features cleaning for nan/inf
| features initial Rows,Cols= 1000 26
| features initial Rows,Cols= 1000 26
| removing features []
| features final Rows,Cols= 1000 26
|removed columns []
|removed rows 468
| features cleaned Rows,Cols= 532 26
|
Jupyter_HW2.ipynb | ###Markdown
Using surface roughness to date landslides

Overview

In March of 2014, unusually high rainfall totals over a period of several weeks triggered a deep-seated landslide that mobilized into a rapidly moving debris flow. The debris flow inundated the town of Oso, Washington, resulting in 43 fatalities and the destruction of 49 houses. Other landslide deposits are visible in the vicinity of the 2014 Oso landslide (see figure below). The goal of this assignment is to estimate the ages of the nearby landslide deposits so that we can say something about the recurrence interval of large, deep-seated landslides in this area. Do they happen roughly every 100 years, 5000 years, or do they only happen once every 100,000 years?

Our strategy will be to take advantage of the fact that recent landslides have “rougher” surfaces. Creep and bioturbation smooth the landslide deposits over time in a way that we can predict (using the diffusion equation!). We will use the standard linear diffusion model, shown below, to simulate how the surface of a landslide deposit will change with time:

$$ \frac{\partial z}{\partial t}=D\frac{\partial^2z}{\partial x^2} $$

Here, $z$ denotes elevation, $x$ is distance in the horizontal direction, and $D$ is the colluvial transport coefficient. Recall that in a previous exercise we estimated the value of $D$ within the San Francisco Volcanic Field (SFVF) in northern Arizona. We found that $D\approx5$ $\mathrm{m^2}$ $\mathrm{kyr}^{-1}$ in the SFVF. In this exercise, we will use a larger value of $D=10$ $\mathrm{m^2}$ $\mathrm{kyr}^{-1}$ since our study site near Oso, Washington, is in a wetter climate with more vegetation (and therefore greater rates of bioturbation). Once we have a model that lets us determine how the surface of a landslide deposit will change with time, we may be able to use it to describe how surface roughness varies with age.

Landslide Deposit Morphology

First, examine the map below showing the slope in the area of the Oso Landslide. Also pictured is the Rowan Landslide, which is older than the Oso Landslide. Notice how the Oso landslide, which is very recent, is characterized by a number of folds and a very “rough” surface. This type of hummocky topography is common in recent landslide deposits. The plot on the right shows a topographic transect that runs over the Oso Landslide deposit from north to south.

Quantifying Surface Roughness

If we are ultimately going to use surface roughness to date landslide deposits (i.e. older deposits are less rough, younger deposits are more rough), we first need a way to quantify what we mean by "roughness". One way to quantify surface roughness is to extract a transect from the slope data and compute the standard deviation of the slope along that transect. That is what we do here; we compute the standard deviation of the slope (SDS) over each 30-meter interval along a transect and then take the mean of all of these standard deviations to arrive at an estimate of roughness for each landslide deposit that we are interested in dating. The plots below show slope (deg) along transects that run over the 2014 Oso Landslide and the nearby Rowan Landslide (unknown age). Note that the Rowan landslide looks slightly less “rough” and therefore has a lower SDS value associated with it. Don't worry about understanding exactly how SDS is computed. The most important thing to note here is that SDS gives us a way to objectively define how "rough" a surface is.
Higher values of SDS correspond to rough surfaces whereas lower values correspond to smoother surfaces.

Estimating the Age of the Rowan Landslide

We will now estimate the age of the Rowan Landslide using the diffusion model. This is the same model we used to simulate how cinder cones evolve. Since the same processes (creep, bioturbation) are driving sediment transport on landslide deposits, we can apply the same model here. However, when we modeled cinder cones we knew what the initial condition looked like. All cinder cones start with a cone shape that is characterized by hillslope angles that are roughly equal to the angle of repose for granular material ($\approx 30^{\circ}$). We do not know what each of these landslide deposits looked like when they were first created. So, we will assume that all landslide deposits (including the Rowan Landslide) looked like the Oso Landslide immediately after they were emplaced. Of course, no two landslide deposits ever look exactly the same, but it is reasonable to assume that the statistical properties of the initial landslide deposits (i.e. roughness) are similar to each other. We will make this assumption and simulate how the roughness, as quantified using the SDS, of the Oso Landslide deposit will change over time. If we know the relationship between SDS and deposit age, then we can estimate the age of any landslide deposit in this region simply by computing its SDS.

Let's start by using the model to estimate how much the Oso Landslide deposit will change after it is subjected to erosion via diffusive processes (e.g. bioturbation, rain splash, freeze-thaw) for 100 years. The code below is set up to run the diffusion model using a topographic transect through the Oso Landslide as the initial topography. All you need to do is assign realistic values for the colluvial transport coefficient (landscape diffusivity) and choose an age. Use a value of $D=10$ $\mathrm{m}^2$ $\mathrm{kyr}^{-1}$ for the colluvial transport coefficient and an age of 0.1 kyr (since we want to know what the deposit will look like when the Oso Landslide is 100 years old). Then run the code block below.
###Code
D=10; # Colluvial transport coefficient [m^2/kyr] (i.e. landscape diffusivity)
age=0.1; # Age of the simulated landslide deposit [kyr]
# !! YOU DO NOT NEED TO MODIFY THE CODE BELOW THIS LINE !!
from diffusion1d import oso
[distance,elevation,SDS]=oso(D,age)
import matplotlib.pyplot as plt
plt.plot(distance,elevation,'b-')
plt.xlabel('Distance (m)', fontsize=14)
plt.ylabel('Elevation (m)', fontsize=14)
plt.title('SDS = '+str(round(SDS,1)), fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
You should see that the SDS value for the topography (shown on the plot) after 0.1 kyr of erosion is slightly smaller than the SDS value of the initial landslide surface (i.e. the SDS value of the Oso Landslide deposit). This is a result of the fact that diffusive processes smooth the surface over time, but 0.1 kyr is not a sufficient amount of time to substantially *smooth* the surface of the landslide deposit. Although the SDS value has decreased over a time period of 0.1 kyr, it is still larger than the SDS value that we have computed for the Rowan Landslide deposit. Therefore, the Rowan Landslide deposit must be older than 0.1 kyr. Continue to run the model using the code cell above with increasing values for the age until you find an age that gives you an SDS value that is close to the one computed for the Rowan Landslide ($SDS\approx5.2$). Based on this analysis, how old is the Rowan Landslide (you can round your answer to the nearest 0.1 kyr)? INSERT YOUR ANSWER HERE How does SDS vary with age?You have now successfully dated the Rowan Landslide! This process does not take too long, but it can be inefficient if we want to date a large number of landslide deposits. Later, I will give you SDS values for 12 different landslide deposits in this area. We want to date all of them so that we can have more data to accomplish our original goal of saying something about the recurrence interval of large, deep-seated landslides in this area. To do this, we will determine an equation that quantifies the relationship between SDS and age using our diffusion model. Then, we will use this equation to tell us the age of each landslide deposit based on its SDS. To get started on this process, let's use the model in the code cell below to determine how the SDS value changes as we change the age of the landslide deposit. Use the model (in the code cell below) to simulate the surface of the Oso Landslide after 1 kyr, 2 kyr, 5 kyr, 10 kyr, and 20 kyr. Continue to use a value of $D=10$ $\mathrm{m}^2$ $\mathrm{kyr}^{-1}$. Write down each of the SDS values that you get for these 5 different ages. You will need each of them to complete the next step. Note that it may take 5-10 seconds to compute the SDS when the ages are 10 kyr or 20 kyr since more computations need to be performed to complete these longer simulations.
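If you would rather not edit and re-run the cell by hand for every age, the short sketch below shows how the same `oso` helper used throughout this notebook could be called in a loop to print all five SDS values in one go. It is just a convenience and uses the same `D` and the same ages listed above.
###Code
from diffusion1d import oso

D = 10  # Colluvial transport coefficient [m^2/kyr]
for age in [1, 2, 5, 10, 20]:  # ages in kyr
    distance, elevation, SDS = oso(D, age)
    print(f"age = {age:5.1f} kyr  ->  SDS = {round(SDS, 1)}")
###Output
_____no_output_____
###Markdown
Either way, the cell below is the one set up for this part of the exercise.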
###Code
D=10; # Colluvial transport coefficient [m^2/kyr] (i.e. landscape diffusivity)
age=0.1; # Age of the simulated landslide deposit [kyr]
# !! YOU DO NOT NEED TO MODIFY THE CODE BELOW THIS LINE !!
from diffusion1d import oso
[distance,elevation,SDS]=oso(D,age)
import numpy as np
itopo=np.loadtxt('osotransect.txt')
import matplotlib.pyplot as plt
plt.plot(distance,elevation,'b-',label="Modeled Topography")
plt.plot(distance,itopo,'--',color='tab:gray',label="Initial Topography")  # initial Oso transect loaded above
plt.xlabel('Distance (m)', fontsize=14)
plt.ylabel('Elevation (m)', fontsize=14)
plt.title('SDS = '+str(round(SDS,1)), fontsize=14)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
A general method for estimating age based on SDSIn the code below, we are going to create several variables ("SDS_0kyr", "SDS_1kyr", etc) so that we can store the information that you obtained in the previous section. Each variable will hold the SDS value of our idealized landslide deposit for different ages. Notice that the variable called *SDS_0kyr* is equal to the SDS value of the Oso transect, which is the same as the SDS value at a time of 0 kyr (since the landslide occurred in 2014). The variables *SDS_1kyr*, *SDS_2kyr*,...,*SDS_20kyr* are all set equal to a value of 1. Change these values in the code block below to reflect the SDS values that you computed in the above exercise. For example, if you determined that the landslide deposit has an SDS value of $6.4$ after 5 kyr then set *SDS_5kyr* equal to $6.4$. When you are finished, run the code cell. The code should produce a plot of your data. Verify that the plot appears to be accurate.
###Code
SDS_0kyr=9.5 # This is the initial (i.e. t=0) SDS value of our landslide deposit.
SDS_1kyr=1 # Change this value from "1" to the SDS value after 1 kyr.
SDS_2kyr=1 # Change this value from "1" to the SDS value after 2 kyr.
SDS_5kyr=1 # Change this value from "1" to the SDS value after 5 kyr.
SDS_10kyr=1 # Change this value from "1" to the SDS value after 10 kyr.
SDS_20kyr=1 # Change this value from "1" to the SDS value after 20 kyr.
# You do not need to modify any code below this point
import numpy as np
age=np.array([0,1,2,5,10,20])
SDS=np.array([SDS_0kyr,SDS_1kyr,SDS_2kyr,SDS_5kyr,SDS_10kyr,SDS_20kyr])
import matplotlib.pyplot as plt
plt.scatter(SDS,age,s=60, c='b', marker='o') # Create the scatter plot, set marker size, set color, set marker type
plt.xlabel('SDS [-]', fontsize=14)
plt.ylabel('Landslide Age [kyr]', fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
Now, we need to find a way to use the information above to come up with a more general relationship between SDS and age. Right now we only have 6 points on a graph. We have no way to determine the age of a landslide if its SDS value falls in between any of the points on our plot. One way to proceed is to fit a curve to our 6 data points. Python has routines that can be used to fit a function to X and Y data points. You may have experience using similar techniques in programs like Excel or MATLAB. Before proceeding to work with our data, let's examine how this process of curve fitting works for a simple case. Suppose we are given three points, having X coordinates of 1, 2, and 3 and corresponding Y coordinates of 3, 5, and 7. Below is an example of how to fit a line to data using Python. **Do not worry about understanding how all of the code works. The aim of this part of the exercise is simply to introduce you to the types of tools that are available to you in programming languages like Python. That way, if you run into problems later in your professional or academic career, you will know whether or not using Python or a similar approach will be helpful.** Run the code block below. Then we will examine the output of the code.
###Code
# You do not need to modify any code in this cell
# First, define some X data
X=[1,2,3]
# Then define the corresponding Y data
Y=[3,5,7]
# Use polyfit to find the coefficients of the best fit line (i.e the slope and y-intercept of the line)
import numpy as np
pfit=np.polyfit(X,Y,1)
# Print the values contained in the variable "pfit"
print(pfit)
###Output
_____no_output_____
###Markdown
You should see two values printed at the bottom of the code block. Python has determined the line that best fits the X and Y data that we provided. As you know, a line is described by two numbers: a slope and a y-intercept. Not surprisingly, Python has given us two numbers. The first number, which is a 2, corresponds to the slope of the best fit line. The second number, which is a 1, corresponds to the y-intercept. Thus, we now know that the best fit line for this X and Y data is given by$$ Y=2X+1$$ Fitting a line to your dataNow that we know how to interpret the output from the *polyfit* function, we can see what information it gives us about the relationship between SDS and age. Look at the plot that you created earlier that shows age as a function of SDS. Age is the Y variable (i.e. the dependent variable) and SDS is the X variable (or independent variable). This is what we want because we ultimately want to be able to estimate the age of a landslide based on its SDS value. In the code below, we use polyfit to find the line that best describes the relationship between age and SDS. Notice that the code looks exactly like the code for the simple curve fitting example shown above except that *SDS* has been substituted for X and *age* has been substituted for Y. Run the code below (you don't need to make any changes to it) and then we will discuss the output.
###Code
# You do not need to modify any code in this cell
pfit=np.polyfit(SDS,age,1)
# Print the values contained in the variable "pfit"
print(pfit)
###Output
_____no_output_____
###Markdown
Python has determined the line that best describes the relationship between SDS and age. The first number in the output represents the slope of the line and the second number is the y-intercept. The first number (the slope) should be roughly $-2.65$. The second number (the y-intercept) should be roughly $21.5$. This means that $$\mathrm{AGE}=21.5-2.65 \cdot{} \mathrm{SDS}$$where AGE denotes the age of the landslide deposit in kyr. The code below will plot your best fit line on top of the actual data. Run the code below to see what your best fit line looks like.
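As a quick aside, you do not have to plug numbers into this equation by hand — NumPy's `polyval` function evaluates the fitted polynomial directly from `pfit`. The sketch below uses the Rowan Landslide's SDS value of roughly 5.2 quoted earlier; keep in mind that your fitted coefficients may differ slightly from the rounded values above.
###Code
# Evaluate the best fit line at the Rowan Landslide's SDS value (~5.2)
rowan_sds = 5.2
linear_age_estimate = np.polyval(pfit, rowan_sds)  # equivalent to pfit[0]*rowan_sds + pfit[1]
print(round(float(linear_age_estimate), 1), 'kyr')
###Output
_____no_output_____
###Markdown
Now run the code below to see what your best fit line looks like on top of the actual data.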
###Code
# You do not need to modify any code in this cell
pfit=np.polyfit(SDS,age,1);
import matplotlib.pyplot as plt
plt.scatter(SDS,age,s=60, c='b', marker='o',label="Original Data") # Create the scatter plot, set marker size, set color, set marker type
plt.plot(SDS,pfit[1]+pfit[0]*SDS,'k-',label="Best Fit Line")
plt.xlabel('SDS [-]', fontsize=14)
plt.ylabel('Landslide Age [kyr]', fontsize=14)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
You should see that a line does not fit the data very well. If you have correctly completed the assignment to this point, you will notice that the data points (blue circles) in the above plot show that age decreases rapidly with SDS at first and then decreases more slowly at higher SDS values. This pattern suggests that age varies nonlinearly with SDS. Motivated by this observation, let’s see if a 2nd order polynomial (i.e. a quadratic function) will provide a better fit to our data. Fitting a quadratic function to your SDS dataWe use *polyfit* in the code below in much the same way as before. The difference is that we now want Python to find the quadratic function that best describes our data. We still need to provide the X data (i.e. "SDS") and the Y data (i.e. age). The only change is the third input for the *polyfit* function: it goes from a 1 (indicating that you want your data fit to a 1st order polynomial, which is a line) to a 2 (which indicates that you want your data fit to a 2nd order polynomial, which is a quadratic). Run the code block below and Python will determine the quadratic function that best fits your data.
###Code
# You do not need to modify any code in this cell
pfit=np.polyfit(SDS,age,2)
# Print the values contained in the variable "pfit"
print(pfit)
###Output
_____no_output_____
###Markdown
Notice that Python returns three numbers. This is because three numbers are required to define a quadratic function, which looks like:$$ AGE=A\cdot (SDS)^2+B\cdot SDS+C $$The first number above is the coefficient $A$. The second number is equal to $B$ and the third is equal to $C$. In your notes, write down the equation of the best fit quadratic function. You will need to use this equation to finish the exercise. Let's see how well this quadratic function fits our data. Run the code below to plot the best fit quadratic function on the same plot as your data. Verify that the best fit quadratic looks reasonable in comparison to the actual data points. In other words, it should look like the curve fits the data reasonably well.
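Once you have the best fit quadratic, converting any measured SDS value into an age estimate is a single call to `np.polyval`, which evaluates the polynomial stored in `pfit`. The SDS values in the sketch below are placeholders for illustration only — they are not the real values for the 12 deposits mentioned earlier.
###Code
# Placeholder SDS values for illustration only -- substitute the measured values when you have them
sds_measured = np.array([9.0, 7.5, 6.0, 4.5])
estimated_ages = np.polyval(pfit, sds_measured)  # pfit holds [A, B, C] from the quadratic fit
for s, a in zip(sds_measured, estimated_ages):
    print('SDS =', round(float(s), 1), '-> estimated age =', round(float(a), 1), 'kyr')
###Output
_____no_output_____
###Markdown
Now run the code below to plot the best fit quadratic function on the same plot as your data.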
###Code
# You do not need to modify any code in this cell
pfit=np.polyfit(SDS,age,2);
import matplotlib.pyplot as plt
plt.scatter(SDS,age,s=60, c='b', marker='o',label="Original Data") # Create the scatter plot, set marker size, set color, set marker type
plt.plot(SDS,pfit[2]+pfit[1]*SDS+pfit[0]*SDS**2,'k-',label="Best Fit Quadratic")
plt.xlabel('SDS [-]', fontsize=14)
plt.ylabel('Landslide Age [kyr]', fontsize=14)
plt.legend()
plt.show()
###Output
_____no_output_____ |
code/lqr_policy_comparisons.ipynb | ###Markdown
Here is a link to [lqrpols.py](http://www.argmin.net/code/lqrpols.py)
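Before the full experiment, here is a minimal conceptual sketch of the quantity being compared throughout this notebook: the finite-horizon quadratic cost of running the linear system x(t+1) = A x(t) + B u(t) under a static feedback law u(t) = K x(t). The helper name `finite_horizon_cost` is made up for illustration — the actual cost and policy-search routines used below live in `lqrpols.py`, linked above.
###Code
import numpy as np

def finite_horizon_cost(A, B, Q, R, K, x0, T):
    # Roll the closed-loop system forward for T steps and accumulate x'Qx + u'Ru
    # (terminal cost omitted for simplicity).
    x, cost = x0, 0.0
    for _ in range(T):
        u = K @ x
        cost += (x.T @ Q @ x + u.T @ R @ u).item()
        x = A @ x + B @ u
    return cost
###Output
_____no_output_____
###Markdown
With that picture in mind, the cell below sets up a double-integrator system and compares nominal (model-based) control, policy gradient, random search, and uniform sampling under matched sample budgets.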
###Code
import numpy as np
import matplotlib.pyplot as plt
import lqrpols  # helper module linked above (lqrpols.py)

np.random.seed(1337)
# state transition matrices for linear system:
# x(t+1) = A x (t) + B u(t)
A = np.array([[1,1],[0,1]])
B = np.array([[0],[1]])
d,p = B.shape
# LQR quadratic cost per state
Q = np.array([[1,0],[0,0]])
# initial condition for system
z0 = -1 # initial position
v0 = 0 # initial velocity
x0 = np.vstack((z0,v0))
R = np.array([[1.0]])
# number of time steps to simulate
T = 10
# amount of Gaussian noise in dynamics
eq_err = 1e-2
# N_vals = np.floor(np.linspace(1,75,num=7)).astype(int)
N_vals = [1,2,5,7,12,25,50,75]
N_trials = 10
### Bunch of matrices for storing costs
J_finite_nom = np.zeros((N_trials,len(N_vals)))
J_finite_nomK = np.zeros((N_trials,len(N_vals)))
J_finite_rs = np.zeros((N_trials,len(N_vals)))
J_finite_ur = np.zeros((N_trials,len(N_vals)))
J_finite_pg = np.zeros((N_trials,len(N_vals)))
J_inf_nom = np.zeros((N_trials,len(N_vals)))
J_inf_rs = np.zeros((N_trials,len(N_vals)))
J_inf_ur = np.zeros((N_trials,len(N_vals)))
J_inf_pg = np.zeros((N_trials,len(N_vals)))
# cost for finite time horizon, true model
J_finite_opt = lqrpols.cost_finite_model(A,B,Q,R,x0,T,A,B)
### Solve for optimal infinite time horizon LQR controller
K_opt = -lqrpols.lqr_gain(A,B,Q,R)
# cost for infinite time horizon, true model
J_inf_opt = lqrpols.cost_inf_K(A,B,Q,R,K_opt)
# cost for zero control
baseline = lqrpols.cost_finite_K(A,B,Q,R,x0,T,np.zeros((p,d)))
# model for nominal control with 1 rollout
A_nom1,B_nom1 = lqrpols.lsqr_estimator(A,B,Q,R,x0,eq_err,1,T)
print(A_nom1)
print(B_nom1)
# cost for finite time horizon, one rollout, nominal control
one_rollout_cost = lqrpols.cost_finite_model(A,B,Q,R,x0,T,A_nom1,B_nom1)
K_nom1 = -lqrpols.lqr_gain(A_nom1,B_nom1,Q,R)
one_rollout_cost_inf = lqrpols.cost_inf_K(A,B,Q,R,K_nom1)
for N in range(len(N_vals)):
for trial in range(N_trials):
# nominal model, N x 40 to match sample budget of policy gradient
A_nom,B_nom = lqrpols.lsqr_estimator(A,B,Q,R,x0,eq_err,N_vals[N]*40,T);
# finite time horizon cost with nominal model
J_finite_nom[trial,N] = lqrpols.cost_finite_model(A,B,Q,R,x0,T,A_nom,B_nom)
# Solve for infinite time horizon nominal LQR controller
K_nom = -lqrpols.lqr_gain(A_nom,B_nom,Q,R)
# cost of using the infinite time horizon solution for finite time horizon
J_finite_nomK[trial,N] = lqrpols.cost_finite_K(A,B,Q,R,x0,T,K_nom)
# infinite time horizon cost of nominal model
J_inf_nom[trial,N] = lqrpols.cost_inf_K(A,B,Q,R,K_nom)
# policy gradient, batchsize 40 per iteration
K_pg = lqrpols.policy_gradient_adam_linear_policy(A,B,Q,R,x0,eq_err,N_vals[N]*5,T)
J_finite_pg[trial,N] = lqrpols.cost_finite_K(A,B,Q,R,x0,T,K_pg)
J_inf_pg[trial,N] = lqrpols.cost_inf_K(A,B,Q,R,K_pg)
# random search, batchsize 4, so uses 8 rollouts per iteration
K_rs = lqrpols.random_search_linear_policy(A,B,Q,R,x0,eq_err,N_vals[N]*5,T)
J_finite_rs[trial,N] = lqrpols.cost_finite_K(A,B,Q,R,x0,T,K_rs)
J_inf_rs[trial,N] = lqrpols.cost_inf_K(A,B,Q,R,K_rs)
# uniformly random sampling, N x 40 to match sample budget of policy gradient
K_ur = lqrpols.uniform_random_linear_policy(A,B,Q,R,x0,eq_err,N_vals[N]*40,T)
J_finite_ur[trial,N] = lqrpols.cost_finite_K(A,B,Q,R,x0,T,K_ur)
J_inf_ur[trial,N] = lqrpols.cost_inf_K(A,B,Q,R,K_ur)
colors = [ '#2D328F', '#F15C19',"#81b13c","#ca49ac"]
label_fontsize = 18
tick_fontsize = 14
linewidth = 3
markersize = 10
tot_samples = 40*np.array(N_vals)
plt.plot(tot_samples,np.amin(J_finite_pg,axis=0),'o-',color=colors[0],linewidth=linewidth,
markersize=markersize,label='policy gradient')
plt.plot(tot_samples,np.amin(J_finite_ur,axis=0),'>-',color=colors[1],linewidth=linewidth,
markersize=markersize,label='uniform sampling')
plt.plot(tot_samples,np.amin(J_finite_rs,axis=0),'s-',color=colors[2],linewidth=linewidth,
markersize=markersize,label='random search')
plt.plot([tot_samples[0],tot_samples[-1]],[baseline, baseline],color='#000000',linewidth=linewidth,
linestyle='--',label='zero control')
plt.plot([tot_samples[0],tot_samples[-1]],[J_finite_opt, J_finite_opt],color='#000000',linewidth=linewidth,
linestyle=':',label='optimal')
plt.axis([0,2000,0,12])
plt.xlabel('rollouts',fontsize=label_fontsize)
plt.ylabel('cost',fontsize=label_fontsize)
plt.legend(fontsize=18, bbox_to_anchor=(1.0, 0.54))
plt.xticks(fontsize=tick_fontsize)
plt.yticks(fontsize=tick_fontsize)
plt.grid(True)
fig = plt.gcf()
fig.set_size_inches(9, 6)
plt.show()
plt.plot(tot_samples,np.median(J_finite_pg,axis=0),'o-',color=colors[0],linewidth=linewidth,
markersize=markersize,label='policy gradient')
plt.fill_between(tot_samples, np.amin(J_finite_pg,axis=0), np.amax(J_finite_pg,axis=0), alpha=0.25)
plt.plot(tot_samples,np.median(J_finite_ur,axis=0),'>-',color=colors[1],linewidth=linewidth,
markersize=markersize,label='uniform sampling')
plt.fill_between(tot_samples, np.amin(J_finite_ur,axis=0), np.amax(J_finite_ur,axis=0), alpha=0.25)
plt.plot(tot_samples,np.median(J_finite_rs,axis=0),'s-',color=colors[2],linewidth=linewidth,
markersize=markersize,label='random search')
plt.fill_between(tot_samples, np.amin(J_finite_rs,axis=0), np.amax(J_finite_rs,axis=0), alpha=0.25)
plt.plot([tot_samples[0],tot_samples[-1]],[baseline, baseline],color='#000000',linewidth=linewidth,
linestyle='--',label='zero control')
plt.plot([tot_samples[0],tot_samples[-1]],[J_finite_opt, J_finite_opt],color='#000000',linewidth=linewidth,
linestyle=':',label='optimal')
plt.axis([0,2000,0,12])
plt.xlabel('rollouts',fontsize=label_fontsize)
plt.ylabel('cost',fontsize=label_fontsize)
plt.legend(fontsize=18, bbox_to_anchor=(1.0, 0.54))
plt.xticks(fontsize=tick_fontsize)
plt.yticks(fontsize=tick_fontsize)
plt.grid(True)
fig = plt.gcf()
fig.set_size_inches(9, 6)
plt.show()
plt.plot(tot_samples,np.median(J_inf_pg,axis=0),'o-',color=colors[0],linewidth=linewidth,
markersize=markersize,label='policy gradient')
plt.fill_between(tot_samples, np.amin(J_inf_pg,axis=0), np.minimum(np.amax(J_inf_pg,axis=0),15), alpha=0.25)
plt.plot(tot_samples,np.median(J_inf_ur,axis=0),'>-',color=colors[1],linewidth=linewidth,
markersize=markersize,label='uniform sampling')
plt.fill_between(tot_samples, np.amin(J_inf_ur,axis=0), np.minimum(np.amax(J_inf_ur,axis=0),15), alpha=0.25)
plt.plot(tot_samples,np.median(J_inf_rs,axis=0),'s-',color=colors[2],linewidth=linewidth,
markersize=markersize,label='random search')
plt.fill_between(tot_samples, np.amin(J_inf_rs,axis=0), np.minimum(np.amax(J_inf_rs,axis=0),15), alpha=0.25)
plt.plot([tot_samples[0],tot_samples[-1]],[J_inf_opt, J_inf_opt],color='#000000',linewidth=linewidth,
linestyle=':',label='optimal')
plt.axis([0,3000,5,10])
plt.xlabel('rollouts',fontsize=label_fontsize)
plt.ylabel('cost',fontsize=label_fontsize)
plt.legend(fontsize=18, bbox_to_anchor=(1.0, 0.54))
plt.xticks(fontsize=tick_fontsize)
plt.yticks(fontsize=tick_fontsize)
plt.grid(True)
fig = plt.gcf()
fig.set_size_inches(9, 6)
plt.show()
plt.plot(tot_samples,1-np.sum(np.isinf(J_inf_pg),axis=0)/10,'o-',color=colors[0],linewidth=linewidth,
markersize=markersize,label='policy gradient')
plt.plot(tot_samples,1-np.sum(np.isinf(J_inf_ur),axis=0)/10,'>-',color=colors[1],linewidth=linewidth,
markersize=markersize,label='uniform sampling')
plt.plot(tot_samples,1-np.sum(np.isinf(J_inf_rs),axis=0)/10,'s-',color=colors[2],linewidth=linewidth,
markersize=markersize,label='random search')
plt.axis([0,3000,0,1])
plt.xlabel('rollouts',fontsize=label_fontsize)
plt.ylabel('fraction stable',fontsize=label_fontsize)
plt.legend(fontsize=18, bbox_to_anchor=(1.0, 0.54))
plt.xticks(fontsize=tick_fontsize)
plt.yticks(fontsize=tick_fontsize)
plt.grid(True)
fig = plt.gcf()
fig.set_size_inches(9, 6)
plt.show()
one_rollout_cost-J_finite_opt
one_rollout_cost_inf-J_inf_opt
###Output
_____no_output_____ |
nbs/edu_nbs/load_model_from_wandb.ipynb | ###Markdown
Load model from Weights & Biases (wandb) This tutorial is for people who are using [Weights & Biases (wandb)](https://wandb.ai/site) `WandbCallback` in their training pipeline and are looking for a convenient way to use saved models on W&B cloud to make predictions, evaluate and submit in a few lines of code.Currently only Keras models (`.h5`) are supported for wandb loading in this framework. Future versions will include other formats like PyTorch support. --------------------------------------------------------------------- 0. AuthenticationTo authenticate your W&B account you are given several options:1. Run `wandb login` in terminal and follow instructions.2. Configure global environment variable `'WANDB_API_KEY'`.3. Run `wandb.init(project=PROJECT_NAME, entity=ENTITY_NAME)` and pass API key from [https://wandb.ai/authorize](https://wandb.ai/authorize) ----------------------------------------------------- 1. Download validation dataThe first thing we do is download the current validation data and example predictions to evaluate against. This can be done in a few lines of code with `NumeraiClassicDownloader`.
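If you go with option 2 or 3 above, the snippet below sketches what that could look like. The API key, project, and entity names are placeholders — substitute your own.
###Code
#other
import os
import wandb

# Option 2: set the API key as an environment variable (placeholder value).
os.environ["WANDB_API_KEY"] = "YOUR_API_KEY_FROM_https://wandb.ai/authorize"
# Then authenticate explicitly.
wandb.login()
# Option 3: alternatively, start a run directly (placeholder project/entity names).
# wandb.init(project="my-project", entity="my-entity")
###Output
_____no_output_____
###Markdown
With authentication in place, we can download the validation data.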
###Code
#other
import pandas as pd
from numerblox.download import NumeraiClassicDownloader
from numerblox.numerframe import create_numerframe
from numerblox.model import WandbKerasModel
from numerblox.evaluation import NumeraiClassicEvaluator
#other
downloader = NumeraiClassicDownloader("wandb_keras_test")
# Path variables
val_file = "numerai_validation_data.parquet"
val_save_path = f"{str(downloader.dir)}/{val_file}"
# Download only validation parquet file
downloader.download_single_dataset(val_file,
dest_path=val_save_path)
# Download example val preds
downloader.download_example_data()
# Initialize NumerFrame from parquet file path
dataf = create_numerframe(val_save_path)
# Add example preds to NumerFrame
example_preds = pd.read_parquet("wandb_keras_test/example_validation_predictions.parquet")
dataf['prediction_example'] = example_preds.values
###Output
_____no_output_____
###Markdown
-------------------------------------------------------------------- 2. Predict (WandbKerasModel)`WandbKerasModel` automatically downloads and loads in a `.h5` from a specified wandb run. The path for a run is specified in the ["Overview" tab](https://docs.wandb.ai/ref/app/pages/run-pageoverview-tab) of the run.- `file_name`: The default name for the best model in a run is `model-best.h5`. If you want to use a model you have saved under a different name specify `file_name` for `WandbKerasModel` initialization.- `replace`: The model will be downloaded to the directory you are working in. You will be warned if this directory contains models with the same filename. If these models can be overwritten specify `replace=True`.- `combine_preds`: Setting this to True will average all columns in case you have trained a multi-target model.- `autoencoder_mlp:` This argument is for the case where your [model architecture includes an autoencoder](https://forum.numer.ai/t/autoencoder-and-multitask-mlp-on-new-dataset-from-kaggle-jane-street/4338) and therefore the output is a tuple of 3 tensors. `WandbKerasModel` will in this case take the third output of the tuple (target predictions).
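For instance, if your run saved the model under a custom name rather than the default `model-best.h5`, the call could look like the commented sketch below — the run path and file name are hypothetical placeholders.
###Code
#other
# Hypothetical example of loading a custom-named model file from a wandb run:
# custom_model = WandbKerasModel(run_path="my-entity/my-project/my-run-id",
#                                file_name="my_custom_model.h5",
#                                replace=True, combine_preds=True, autoencoder_mlp=True)
###Output
_____no_output_____
###Markdown
In this tutorial we stick with the default file name, as in the cell below.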
###Code
#other
run_path = "crowdcent/cc-numerai-classic/h4pwuxwu"
model = WandbKerasModel(run_path=run_path,
replace=True, combine_preds=True, autoencoder_mlp=True)
###Output
_____no_output_____
###Markdown
After initialization you can generate predictions with one line. `.predict` takes a `NumerFrame` as input and outputs a `NumerFrame` with a new prediction column. The prediction column name will be of the format `prediction_{RUN_PATH}`.
###Code
#other
dataf = model.predict(dataf)
dataf.prediction_cols
#other
main_pred_col = f"prediction_{run_path}"
main_pred_col
###Output
_____no_output_____
###Markdown
---------------------------------------------------------------------- 3. EvaluateWe can now use the output of the model to evaluate in 2 lines of code. Additionally, we can directly submit predictions to Numerai with this `NumerFrame`. Check out the educational notebook `submitting.ipynb` for more information on this.
###Code
#other
evaluator = NumeraiClassicEvaluator()
val_stats = evaluator.full_evaluation(dataf=dataf,
target_col="target",
pred_cols=[main_pred_col,
"prediction_example"],
example_col="prediction_example"
)
###Output
_____no_output_____
###Markdown
The evaluator outputs a `pd.DataFrame` with most of the main validation metrics for Numerai. We welcome new ideas and metrics for Evaluators. See `nbs/07_evaluation.ipynb` in this repository for full Evaluator source code.
###Code
#other
val_stats
###Output
_____no_output_____
###Markdown
After we are done, downloaded files can be removed with one call on `NumeraiClassicDownloader` (optional).
###Code
#other
# Clean up environment
downloader.remove_base_directory()
###Output
_____no_output_____
###Markdown
------------------------------------------------------------------We hope this tutorial has clearly explained how to load and predict with Weights & Biases (wandb) models.Below you will find the full docs for `WandbKerasModel` and a link to the source code:
###Code
# other
# hide_input
from nbdev.showdoc import show_doc  # assumed import: show_doc comes from nbdev, which this repo's notebooks use
show_doc(WandbKerasModel)
###Output
_____no_output_____ |
ipython/XYZ + RDKitMol.ipynb | ###Markdown
A demo of XYZ and RDKitMol There is no easy way to convert an xyz string directly to an RDKit Mol/RWMol. Here, RDKitMol offers one possibility by using openbabel or the method of Jensen et al. [1] as the molecule perception backend. [1] https://github.com/jensengroup/xyz2mol.
###Code
import os
import sys
sys.path.append(os.path.dirname(os.path.abspath('')))
from rdmc.mol import RDKitMol
###Output
_____no_output_____
###Markdown
1. An example of xyz str block
###Code
######################################
# INPUT
xyz="""14
C -1.77596 0.55032 -0.86182
C -1.86964 0.09038 -2.31577
H -0.88733 1.17355 -0.71816
H -1.70996 -0.29898 -0.17103
O -2.90695 1.36613 -0.53334
C -0.58005 -0.57548 -2.76940
H -0.35617 -1.45641 -2.15753
H 0.26635 0.11565 -2.71288
H -0.67469 -0.92675 -3.80265
O -2.92111 -0.86791 -2.44871
H -2.10410 0.93662 -2.97107
O -3.87923 0.48257 0.09884
H -4.43402 0.34141 -0.69232
O -4.16782 -0.23433 -2.64382
"""
xyz_wo_header = """O 2.136128 0.058786 -0.999372
C -1.347448 0.039725 0.510465
C 0.116046 -0.220125 0.294405
C 0.810093 0.253091 -0.73937
H -1.530204 0.552623 1.461378
H -1.761309 0.662825 -0.286624
H -1.923334 -0.892154 0.536088
H 0.627132 -0.833978 1.035748
H 0.359144 0.869454 -1.510183
H 2.513751 -0.490247 -0.302535"""
######################################
###Output
_____no_output_____
###Markdown
2. Use pybel to generate an OBMol from xyz With the pybel (`openbabel`) backend, the `header` argument indicates whether the string includes the atom-count and title lines.
###Code
rdkitmol = RDKitMol.FromXYZ(xyz, backend='openbabel', header=True)
rdkitmol
###Output
_____no_output_____
###Markdown
Please use the `header` argument correctly; otherwise, molecule perception can be problematic.
###Code
rdkitmol = RDKitMol.FromXYZ(xyz_wo_header, backend='openbabel', header=False)
rdkitmol
###Output
_____no_output_____
###Markdown
Using the `jensen` backend. In most cases, Jensen's method returns the same molecule as the `pybel` backend.
###Code
rdkitmol = RDKitMol.FromXYZ(xyz, backend='jensen', header=True)
rdkitmol
###Output
_____no_output_____
###Markdown
Here, some options for the Jensen et al. method are listed. The nomenclature is kept as it is in the original API.
###Code
rdkitmol = RDKitMol.FromXYZ(xyz, backend='jensen',
header=True,
allow_charged_fragments=False, # radical => False
use_graph=False, # accelerate for larger molecule but needs networkx as backend
use_huckel=True,
embed_chiral=True)
rdkitmol
###Output
_____no_output_____
###Markdown
3. Check the xyz of rdkitmol conformer
###Code
rdkitmol.GetConformer().GetPositions()
###Output
_____no_output_____
###Markdown
4. Export xyz
###Code
print(rdkitmol.ToXYZ(header=False))
###Output
C -1.775960 0.550320 -0.861820
C -1.869640 0.090380 -2.315770
H -0.887330 1.173550 -0.718160
H -1.709960 -0.298980 -0.171030
O -2.906950 1.366130 -0.533340
C -0.580050 -0.575480 -2.769400
H -0.356170 -1.456410 -2.157530
H 0.266350 0.115650 -2.712880
H -0.674690 -0.926750 -3.802650
O -2.921110 -0.867910 -2.448710
H -2.104100 0.936620 -2.971070
O -3.879230 0.482570 0.098840
H -4.434020 0.341410 -0.692320
O -4.167820 -0.234330 -2.643820
|